The Turing Test is not especially interesting for practical applications of artificial intelligence. The world already has billions of intelligent agents which are indistinguishable from humans because they are humans, and we will benefit the most from agents which are superhuman in some way. A calculator or a computer algebra system is useful not because it reproduces human skills but because it is superhuman in specific ways, and robot car drivers or more general intelligent agents will be useful on the same grounds.
Even so, the Turing Test is fun and has some practical value. Social interaction of machine intelligences with humans will be increasingly important. Passing the Turing Test is not necessary for social interaction, since humans will happily anthropomorphize agents that are quite alien and limited, but even so work toward passing the Turing Test adds to our knowledge of machine social interaction.
The Turing Test belongs on Machine Learning in Games because it is itself a game: Alan Turing based it on a parlor game. There are many Turing Test efforts, so I'll only cover the most game-oriented ones.
BotPrize
A competition to create an Unreal Tournament 2004 bot that at least 50% of judges decide is human. The game is a first-person shooter, so the task is easier than it would be in (say) a real-time strategy game, where players unfold deep plans that they must adapt to circumstances in real time. The competition ran each year from 2008 to 2012, and two teams finally won the prize in 2012. There are plans to continue the contest next year with a more difficult challenge. In 2012 the “most-human” human player was judged 53.3% human and the “most-human” bot was judged 52.2% human, and both humans and bots were on average judged less than 50% human, so the judges really couldn’t tell. The publications page has papers.
Human-like Bot Competition
This separate 2012 competition followed largely the same rules. It was funded by the IEEE Computational Intelligence Society and took place at the 2012 IEEE World Congress on Computational Intelligence. Despite having many of the same computer and human participants, the top bot was judged only 21% human. The top human was judged only 40% human, so judging was harsh.
Human-like Bots in Unreal Tournament
The website of one of the two winning teams, associated with the Neural Networks Research Group at the University of Texas at Austin. They evolved neural networks. One method they used to navigate the complex maps: when the bot got stuck, it selected and played back a trace of actual human behavior recorded from a past game, an example of case-based reasoning.
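The stuck-then-replay idea can be sketched in a few lines. This is a minimal illustration, not the team's actual implementation: the stuck heuristic (recent positions barely moving), the nearest-start trace selection, and all names here (`is_stuck`, `choose_trace`, `Bot`) are my own assumptions about how such a fallback might be wired up.

```python
import math

def is_stuck(positions, min_move=1.0):
    """Heuristic (assumed): the bot is stuck if its recent position
    samples all lie within min_move of the first sample."""
    if len(positions) < 2:
        return False
    x0, y0 = positions[0]
    return all(math.hypot(x - x0, y - y0) < min_move for x, y in positions[1:])

def choose_trace(traces, position):
    """Case retrieval (assumed): pick the recorded human trace whose
    starting point is nearest the bot's current position."""
    def start_dist(trace):
        x, y = trace[0]
        return math.hypot(x - position[0], y - position[1])
    return min(traces, key=start_dist)

class Bot:
    def __init__(self, traces):
        self.traces = traces  # recorded human movement traces, lists of (x, y)
        self.recent = []      # recent positions, for stuck detection
        self.replay = []      # queued trace waypoints being played back

    def next_move(self, position):
        self.recent.append(position)
        self.recent = self.recent[-10:]
        if self.replay:                    # still playing back a human trace
            return self.replay.pop(0)
        if is_stuck(self.recent):          # fall back to case-based playback
            self.replay = list(choose_trace(self.traces, position))
            return self.replay.pop(0)
        return position                    # placeholder for the evolved controller
```

The appeal of the fallback is that the replayed movement is human by construction, so the judges see human-like behavior precisely in the situations where a learned controller tends to look most robotic.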