Two very intelligent individuals, Mitch Kapor and Ray Kurzweil, have a long-running $10,000 bet about whether the Turing Test will be beaten by 2029.
However, both these individuals have missed something painfully obvious.
The Turing Test has already been beaten - and the world learned virtually nothing during the process.
It was beaten in the 70s.
In the 1960s there was a famous AI program called Eliza. Eliza simulated a Rogerian psychotherapist. Rogerian psychotherapy consisted entirely of repeating whatever the patient said, and/or asking them for more detail. During the 60s, this style of therapy was very popular among a certain class of people.
Eliza, of course, did exactly the same thing - it repeated back whatever you said to it, and occasionally asked you questions. Eliza is now famous as an AI program, but Eliza was not actually constructed to simulate human intelligence. Eliza was intended as satire. The researcher who built Eliza basically wanted to demonstrate that what Rogerian psychotherapists were doing was so brainless a machine could do it.
Because Eliza was satire, everybody was in on the gag, so nobody ever confused Eliza with a real human. But about ten years later, a less famous researcher - I've even forgotten his name, and the name of his program, although I remember where I read about it - created a version of Eliza with a twist. Instead of repeating whatever you said and peppering it with gentle questions, this version repeated whatever you said and peppered it with paranoid accusations. It also always phrased its questions aggressively, and with hostility and suspicion. This version of Eliza successfully deceived several psychologists, all of whom diagnosed it with various forms of acute paranoia.
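To see just how little machinery is involved, here's a toy sketch of the core trick both programs shared. This is not Weizenbaum's actual script system or the real paranoid program - the prompt lists and the pronoun table are my own invented illustrations - but it shows the whole technique: swap the pronouns in the user's own words, then hand the result back wrapped in either a gentle question or a hostile accusation.

```python
import random

# Hypothetical illustration of an Eliza-style "reflection" bot, not any
# historical implementation. The core trick: swap first- and second-person
# words so the user's own statement reads back at them, then wrap it in a
# canned prompt. The gentle prompts mimic the Rogerian original; the
# paranoid prompts mimic the hostile 1970s variant described above.

PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

GENTLE_PROMPTS = [
    "Why do you say {}?",
    "Tell me more about {}.",
]

PARANOID_PROMPTS = [
    "Why are you saying {}? Who sent you?",
    "{} - is that some kind of threat?",
]

def reflect(text: str) -> str:
    """Swap person-words so the input reads back at the speaker."""
    words = text.lower().strip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def reply(text: str, paranoid: bool = False) -> str:
    """Wrap the reflected statement in a randomly chosen canned prompt."""
    prompts = PARANOID_PROMPTS if paranoid else GENTLE_PROMPTS
    return random.choice(prompts).format(reflect(text))

print(reply("I am worried about my job"))
print(reply("I am worried about my job", paranoid=True))
```

A few dozen lines, no model of the world, no model of the speaker - and yet the paranoid flavor of exactly this trick was enough to fool trained psychologists.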
The problem with the Turing Test is that it is a lot easier to beat by exploiting incomplete knowledge of human psychology than it is by designing a mega-neutron brain. All the test really does is expose how little we know about what makes people people. The fact that it's easy to fool people into thinking a computer is human doesn't actually teach you anything about the difference between computers and humans; all it does is teach you that it's easy to fool people.
Technically, the Wikipedia entry on the Turing Test says that this type of thing is not the same as a Turing Test. Most obviously, the human party in the conversation has no reason to suspect they are talking to anything other than a human, whereas in a real Turing Test the questioner is actively trying to determine the nature of the entity they are chatting with.
Honestly, however, I think that's splitting hairs. Neither academic nor corporate research pursues the Turing Test any longer. It could be that the test is considered too vague and ambitious for serious researchers, but I think the real reason is that it's too easy. The core of the problem was solved only 10 or 20 years after the Test itself was suggested in 1950. It's basically a solved problem with a ton of little implementation details still dangling off of it.
If the Turing Test isn't officially beaten by 2029, it won't be because it wasn't beaten. It'll be because it wasn't officially beaten. Not because the test was so hard. Because officials were too wrapped-up in being official to acknowledge that a prankster beat the Test back in the 70s as a joke.
What's especially funny, and yet especially bittersweet, is that the joke started as an attack on a pretentious field of study with no real record of results. I mean, another pretentious field of study with no real record of results. Some of the best moments in AI have come from a similarly insouciant point of view.
By the way, if it seems I'm being unfair to psychotherapists here, I'm a certified hypnotherapist, and I know for a fact that hypnotherapists can very easily do things which the psychology establishment says are impossible. Many, many psychotherapists have gotten people to "just accept" a huge legion of problems which a hypnotherapist could have simply solved. I'm definitely kind of scornful of psychotherapists, but it's not because I don't understand the good they do - it's because I know how much more good they could do if they took their work seriously.
And if it seems I'm being unfair to AI researchers, well, come on. If there's anybody whose understanding of human consciousness needs a reality check - anybody whose sacred cows need a little cow-tipping - it's AI researchers. The current apex of achievement in artificial intelligence is a robot vacuum cleaner.