Then again, what do we mean by "thinking"? A computer can add up a row of figures in an instant. Does that constitute thinking? After all, there's something disappointingly trivial about such a task. Thinking, to most people, means knowledge, understanding, intelligence, creativity -- none of which is needed just to add numbers. But trying to make computers exhibit those traits gave us a whole new discipline: Artificial Intelligence (AI) -- the science of making computers think, or behave in human ways.
But how can we tell that a computer is thinking? Here's one way: if it acts as a person acts when thinking, we might say the computer is itself thinking. Suppose you ask a computer the same questions you might ask a person, and it gives you answers indistinguishable from the person's answers. If it did this consistently, you would likely say the machine is thinking.
In 1950, Alan Turing made exactly this argument in a remarkable paper called Computing Machinery and Intelligence. Thus was born what we now know as the Turing Test.
In the Test, a questioner puts a series of questions to a human volunteer and to a computer. Questions and answers pass back and forth via a keyboard and a screen, so the only information available to the questioner is what shows up on her screen. The computer is programmed to give answers as human-like as possible; the volunteer, we presume, will be human-like anyway.
The questioner must now decide: which of the two is the human?
If over a series of these tests, the questioner is consistently unable to identify the human volunteer, the computer passes the Test.
Passing the Turing Test has become a kind of Holy Grail for AI. Its practitioners believe that it is a strong, even sufficient, indication of a computer's intelligence and ability to think. And it's hard to disagree. For this -- questioning, discussion, conversation -- is precisely how we decide that a fellow human is intelligent. Shouldn't we apply the same standard to a computer?
Pursuing their Holy Grail, AI researchers have produced a number of interesting systems. Two early ones simulated psychotherapists. This sample conversation (quoted by Roger Penrose in his book The Emperor's New Mind) illustrates their behaviour:
- Computer: How can I help you?
Patient: I don't know.
C: Try to tell me something about it.
P: No.
C: Can you elaborate?
P: I am sick and tired today.
C: Do you have any idea why?
P: Yes, I'm old.
C: Please go on.
P: I seem to have lost interest in everything.
C: Tell me more about such feelings.
P: I have some idea why I feel this way too.
C: You have.
P: It seems that I miss having a family.
C: What makes you feel you miss having a family?
What do you think: is there some definite, almost spooky understanding happening here? If you said yes, you're in good company. K. M. Colby and Joseph Weizenbaum, who separately wrote the two programs, reported that many people swore the computer really understood them. So well, in fact, that some preferred to unload all their problems on these electronic therapists rather than on human ones.
Yet there's no understanding going on at all. As any halfway decent programmer will tell you, these systems are simply following some ordinary rules about what to say.
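Those "ordinary rules" can be sketched in a few lines. Here is a toy Python illustration of the keyword-and-template principle such programs used; the patterns are invented for this sketch, not Weizenbaum's actual ELIZA script.

```python
import re

# Toy ELIZA-style rules: find a keyword pattern, splice part of the
# patient's own sentence into a canned template. These three rules
# are illustrative inventions; the real scripts were longer but
# worked on exactly this principle.
RULES = [
    (re.compile(r"\bI miss (.+)", re.IGNORECASE),
     "What makes you feel you miss {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "Do you have any idea why you are {0}?"),
    (re.compile(r"\bI (?:feel|seem to) (.+)", re.IGNORECASE),
     "Tell me more about such feelings."),
]
DEFAULT = "Please go on."  # stock reply when nothing matches

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            # Echo the matched fragment back, minus trailing punctuation.
            return template.format(m.group(1).rstrip(". "))
    return DEFAULT
```

Feed it "It seems that I miss having a family" and out comes "What makes you feel you miss having a family?" -- the very exchange in the transcript above, with no understanding anywhere in sight.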
In the '70s, Roger Schank produced a programme that could understand stories like this one:
- A man walked into a restaurant and ordered a dosa. When it came, it was burned to a crisp. He was so furious that he walked out without paying or leaving a tip.
Asked "Did the man eat the dosa?", Schank's system correctly inferred and answered: "No".
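Schank modelled such inferences with "scripts": stereotyped sequences of events, like what normally happens in a restaurant. A heavily simplified Python sketch of the idea follows; the event names and the single blocking rule are invented for illustration, nothing like the detail of the actual script-based programs built in Schank's lab.

```python
# Background knowledge: the stereotyped order of events a diner
# normally goes through. (Invented event names, illustration only.)
RESTAURANT_SCRIPT = ["enter", "order", "served", "eat", "pay", "leave"]

def did_eat(story_events: set) -> str:
    """Answer 'Did the man eat?' for a story told as a set of events."""
    # A stated event settles the question outright.
    if "eat" in story_events:
        return "Yes"
    # Blocking rule: inedible food plus an angry, unpaid exit means
    # the script's "eat" step never happened.
    if "served_burned_food" in story_events and "left_without_paying" in story_events:
        return "No"
    # No contrary evidence: fall back on the script, which says
    # diners normally eat what they order.
    return "Yes, presumably"

# The dosa story, reduced to events:
story = {"enter", "order", "served_burned_food", "left_without_paying"}
print(did_eat(story))  # prints "No"
```

The "correct" answer falls out of a lookup and two if-statements -- which is exactly the point Searle goes on to make.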
Impressive, you think? Many have argued that in the limited sense that such systems answered simple questions about simple contexts just as a human would, they had already passed the Turing Test. But can we truly say there's thinking going on? That the computers understand Schank's stories, or a patient's neurotic woes?
In a thought experiment that is still controversial among AI-ers, John Searle argued "no". He imagined himself simulating Schank's computer, like this.
Lock Searle in a room. Pass him Schank's story and the questions, now written in Chinese, a language Searle doesn't understand. Also give him instructions, in English, spelling out exactly how to process the story. He follows the instructions, and thus carries out the steps of Schank's programme. Eventually he hands back the answers to the questions, also in Chinese.
Fine? But -- this is a critical "but" -- in what sense has Searle, sitting in this room, understood the stories, especially since he understands no Chinese at all?
By identical reasoning, in what sense has Schank's computer understood the dosa story?
Some AI-ers would have it that intelligence is embodied in Schank's algorithm itself. They say that the mind works, in essence, like an enormously complicated algorithm. Others, like Searle, believe that intelligence cannot be simulated by computers carrying out the steps of an algorithm, however sophisticated. Intelligence, they say, means a certain consciousness that algorithms just don't have.
The debate between these two views rages on. But that's not such a bad thing. After all, AI's fondest hope is to produce a better understanding of intelligence itself. Sure, we might one day build a computer that passes the Turing Test, that thinks. But when that happens, the real victory will be what AI has taught us about our own minds.
Think of that, if you will.