If this year’s winner of the Loebner Prize is on the right track, call-center data could be what’s needed to achieve the ultimate goal of artificial intelligence (AI): creating a computer program smart enough to hold a natural conversation.
A self-trained enthusiast with no formal academic background in AI, Rollo Carpenter created the winning program, which learns by analyzing its conversations with people as they “chat” with it online. Regardless of the language, his program analyzes every utterance it witnesses, using what Carpenter calls contextual pattern-recognition techniques. Then, when a user asks the program a question, a database is combed for the best response, statistically speaking.
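The article gives only a high-level account of the technique, but the general idea of a retrieval-based bot — store every observed utterance with its context, then pick the stored reply whose context best matches the user's input — can be sketched as follows. This is a minimal, hypothetical illustration of that family of methods, not Carpenter's actual software; the class name, scoring rule, and sample utterances are all invented for the example.

```python
# Minimal sketch of a retrieval-based chatbot that "learns" from observed
# conversations. Hypothetical illustration only; Carpenter's real system
# is far larger and uses more sophisticated contextual matching.
from collections import Counter

class RetrievalBot:
    def __init__(self):
        # (context words, reply) pairs learned from observed conversations
        self.memory = []

    def learn(self, context, reply):
        """Record an observed utterance and the response that followed it."""
        self.memory.append((context.lower().split(), reply))

    def respond(self, utterance):
        """Return the stored reply whose context best overlaps the input
        (a crude word-overlap stand-in for 'statistically best response')."""
        words = Counter(utterance.lower().split())
        best_reply, best_score = None, -1
        for context, reply in self.memory:
            score = sum(words[w] for w in context)
            if score > best_score:
                best_reply, best_score = reply, score
        return best_reply

bot = RetrievalBot()
bot.learn("hello there", "Hi! How are you?")
bot.learn("what is your name", "My name is Joan.")
print(bot.respond("hello bot"))  # overlaps the "hello there" context
```

A scheme like this is language-agnostic, as the article notes: it matches surface patterns rather than parsing grammar, which is also why its quality depends so heavily on the volume of conversation data it has absorbed.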
This method may work for idle chit-chat. But if his bots (automated programs meant to perform specific tasks) are ever to be used in a serious commercial application, or to pass the famous Turing Test for artificial intelligence, they will need a vast number of conversations, and computing power to match. “I need more data,” Carpenter says.
Thousands of fans have already conversed with his programs online over nearly 10 years, and his software now contains several million utterances. But to pass itself off as “intelligent,” the software will require at least ten times that number, Carpenter says.
To give his bots an extra boost, he’s turning to call-center data. Carpenter has begun working with a firm in Japan, and if his plan succeeds, he says his “chat bots” may eventually be able to take over the roles of human operators.
This sort of statistical brute-force approach to artificial intelligence has a lot of promise, says John Barnden, an AI researcher at the University of Birmingham, U.K., and one of the judges at this year’s Loebner Prize, which was held in London. “There is enough evidence to suggest that it’s worth trying.” However, it won’t be easy, he says. While Barnden suspects that training a bot on call-center data will work for an automated program designed to handle customer calls, it will probably take a broader range of knowledge and data for a bot to pass the coveted Turing Test, or at least the Loebner Prize version of it.
During the contest, a human judge chats with two subjects, using a keyboard: one subject is a machine, the other human. According to Alan Turing, the British mathematician who conceived of the test, if a judge is unable to tell which subject is a machine and which a human, the machine can reasonably be said to possess human-like intelligence.
Carpenter’s program, Joan, followed the context of some of the contest conversations and begrudgingly told a joke, much like an unenthused human. But tests of Joan (see selected transcripts from the contest below) provide some insight into Barnden’s skepticism.
It will take time before anyone passes the Turing Test, he says. “Joan was certainly more coherent than the others,” he says, but it was very obviously a program.