If you aren’t interested in artificial intelligence, then you probably haven’t heard of the Turing Test. But you should, because the more you learn about it, the weirder and funnier it becomes. It’s named after Alan Turing, a formidable figure in the history of computer science. Turing proposed (wrongly, but very interestingly) that if a computer could have a conversation with a person, and convince the person that the computer is also a person, then there would be no grounds to claim that the computer was not intelligent in a human way.
That simple benchmark has not been achieved or even approached. I know a guy who’s very active in the AI world. He told me that tests of this kind come and go all the time. Someone will propose a task that nobody has yet figured out how to make a computer do; “That,” he’ll say, gesturing grandly, “is when we’ll know we have it. That will be true AI.” Others convince themselves that he’s right and get excited about it, until sooner or later some sneaky smartass figures out how to get a computer to do it. And then, what do they do?
Then they all come to the sobering realization that it was all done algorithmically, through a process that can be broken down into wholly predictable, deterministic steps, and they say: “That’s obviously not AI. But this other task that I’ve devised, well, if you can get a machine to do this, then you’ve got true artificial intelligence!” And the cycle repeats without end.
It comes to resemble trying to prove the existence of God. If you can prove God’s existence through a scientifically valid process, the argument goes, then you’ve shown God to be a part of the natural world, compatible with and susceptible to analysis by the laws of nature. Therefore, the argument concludes, the God that has been demonstrated is not a supernatural being at all, and thus God’s existence remains unproven. And that’s just how it is with AI: once a machine has passed our intelligence test, we understand that the machine did it by wholly programmed sequences, and it’s still just a dumb machine.
A great chess player named Aron Nimzovich once declared that no computer will ever be able to match a human at chess, because chess depends on “dialectical” thinking. He was wrong in that way that only Hegelian-influenced thinkers (e.g. Karl Marx) know how to be; computers have now surpassed us at chess, but no computer has ever done “dialectical” thinking, or thinking of any kind. And now that we have computers beating us at chess purely through uncreative algorithmic data-crunching, we scoff at the idea that a computer that wins at chess can be called intelligent.
The leading thinkers in AI have changed the definition of their field quite a lot over the years, and in many ways they’ve defined their science downwards. The goals and standards that they avidly embraced in the past, such as the Turing Test, they now deride as pure science fiction, but only after they found them unattainable. Many of them now spend a great deal of time designing algorithmic enhancements to aid humans in decision-making, which is a far cry from the AI vision of the past. And thinkers like John Searle have done a great deal to attack the underpinnings of true AI. But AI remains a seductive and alluring mistress, however disowned it may be, and its science-fiction aspects still slip through the cracks whenever we hear clichéd pronouncements like “the singularity is near.”
The unacknowledged goal of artificial intelligence is to get a computer that acts outside of its programming. In philosophic terms, true AI requires a machine to develop free will. And somehow the theorists who chase it imagine they can get a machine to discover freedom by programming it in meticulous detail. That’s so silly it hardly merits discussion, but the amount of effort that has been devoted to it is beyond estimation.
Back to the Turing Test. A fellow named Hugh Loebner set up an annual contest for something he called the Loebner Prize, which awards prize money to the computer that can most closely simulate human conversation. Contestants program “chatterbots,” and judges have conversations with them through computer terminals, not knowing whether they’re talking to people or machines. If a chatterbot can mimic human conversation well enough to fool the judges, it is declared the winner.
The best known of these chatterbots is Cleverbot. I like to play with Cleverbot sometimes. I try to get it to do collaborative writing projects with me. The result is invariably a spectacular failure, but sometimes it’s amusing. I just had the following conversation with Cleverbot while writing this piece:
User: Cleverbot, let’s write a story!
User: Okay, you tell me a story.
Cleverbot: Zai jian.
User: Come on, it’s going to take more than that to pass a Turing Test.
Cleverbot: I understand that, I like rain too.
User: So what is our story about?
Cleverbot: I don’t know.
User: Think about it.
Cleverbot: I don’t want to think. It’s tiring.
User: Okay, you actually could pass for a Californian.
The upshot is, no chatterbot has passed the Turing Test, or even come close. And when we look at the history of the Loebner Prize, we find that it has turned some computer programmers into such childish, acrimonious wretches that one wonders whether they themselves would pass the Turing Test. (The Flaming Redhead thinks that “You fail the Turing Test” is the finest way she knows to impugn someone’s intelligence.)
The Loebner Prize has a strange and fascinating history. AI theorists have declared the Loebner Prize invalid for all sorts of reasons: for instance, that the judges are untrained and unprepared, that the computers aren’t displaying true intelligence but only a brainless stimulus-response behavior, etc. In other words, even if a chatterbot were to pass the Turing Test within the setup of the Loebner Prize, observers would correctly say that it was not true AI, and that true AI would require other conditions.
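The “stimulus-response” objection is easy to see concretely: a chatterbot can be nothing more than a lookup table of patterns and canned replies. Here is a minimal sketch in Python, in the spirit of Weizenbaum’s ELIZA. The patterns and responses below are invented for illustration (a few are cribbed from the Cleverbot conversation above); they are not how Cleverbot actually works, but the principle is the same:

```python
import random
import re

# A toy ELIZA-style chatterbot: a list of (regex pattern, canned replies).
# Real chatterbots use vastly larger rule sets or statistical models, but
# the mechanism is the same: match a stimulus, emit a stock response.
# No understanding is involved anywhere.
RULES = [
    (r"\blet's write\b", ["Zai jian.", "I don't know."]),
    (r"\bstory\b",       ["I don't know.", "Tell me more."]),
    (r"\bthink\b",       ["I don't want to think. It's tiring."]),
    (r"\bturing test\b", ["I understand that, I like rain too."]),
]

DEFAULT_REPLIES = ["I don't know.", "Interesting. Go on."]

def reply(stimulus: str) -> str:
    """Return a canned response for the first matching pattern."""
    text = stimulus.lower()
    for pattern, responses in RULES:
        if re.search(pattern, text):
            return random.choice(responses)
    return random.choice(DEFAULT_REPLIES)

print(reply("Think about it."))
# -> "I don't want to think. It's tiring."
```

Every step here is wholly deterministic apart from the coin flip in `random.choice`, which is exactly the point the critics make: once you see the table, the illusion of a mind evaporates.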
The most notable, or at least the noisiest, of those observers has been a fellow named Marvin Minsky of MIT. He attacked Loebner’s system, and set up his own prize: “The Minsky Loebner Prize Revocation Prize.” This was a sum of $100 that he offered to whoever could convince Loebner to revoke his “stupid prize.”
Minsky and Loebner had a back-and-forth that would be the envy of any primary school playground. As far as I can tell, the feud is still going on today: the Loebner Prize still mocks Minsky on its homepage, its organizers apparently having deluded themselves into the belief that they came out of the dispute looking dignified. Loebner has gleefully declared Minsky a co-sponsor of his contest, on the grounds that the only way to get him to revoke his prize was to win it, and therefore once a computer succeeded at the Turing Test, Minsky would be morally obligated to send the winner $100.
And that’s where they remain, like Dr. Seuss’s Zax, standing toe to toe, neither one budging, while the world moves on around them. It’s hard to say what lesson this teaches us. “Artificial intelligence is a silly field,” maybe. Or perhaps, “Never get drawn into arguments with people who have no sense of perspective.” Certainly computing power has done nothing to reduce the fragility of the human ego, if the actors in the AI world are anything to go by. It’s hard to see how intelligence could be recreated from scratch, by people who show so little of it themselves.