This AI program could beat you in an argument—but it doesn’t know what it’s saying

The latest human-versus-machine matchup involves an argumentative AI system.
June 19, 2018

During a live debate in San Francisco this evening, an AI program made a surprisingly cogent argument that space exploration should be subsidized. When a human disagreed, the program offered a rebuttal.

The debate, between an IBM computer program called Project Debater and several human participants, is the latest sign that artificially intelligent machines are making progress at skills previously reserved for people: in this case, arguing.

During the event, the program and a human participant took turns delivering an opening argument on a given topic, offering a rebuttal, and making a closing statement. In a second debate, the program argued for the increased use of telemedicine, while its human opponent argued against it.

IBM has been working for several years on the AI software, which combs through reams of text before constructing an argument on a specific topic. The company held the debate this evening to promote the technology.

Project Debater doesn’t try to build an argument based on an understanding of the subject in question. Instead, it simply constructs one by combining elements of previous arguments, along with relevant points of information from Wikipedia.

Noam Slonim and Ranit Aharonov, the IBM researchers behind Project Debater.
IBM Research

Ranit Aharonov, a researcher behind the project who is based in Israel, acknowledges that the system is limited. “There is still a long way to go in mastering language,” she says. However, Aharonov believes the technology could have a range of practical uses. It could help someone make a critical decision, for example, by providing a range of “for” and “against” arguments.

An argumentative AI system could, of course, also have nefarious uses, including powering more pernicious bots on social media and beyond. Aharonov’s collaborator Noam Slonim plays down the danger. “There is always a risk, and I actually think it is more limited than with other technologies,” he says.

Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence in Seattle, says it’s difficult to judge the capabilities of the IBM system purely on the basis of this contest. “It’s easier to put together a canned demo than an open one where they let you interact with it in a natural way,” he says.

Kristian Hammond, a professor at Northwestern University and a cofounder of Narrative Science, a company that automatically generates news reports and other content, says the technology could prove useful. But Hammond stresses that the IBM software is simply parroting what it’s dug up. “There’s never a stage at which the system knows what it’s talking about,” he says. “In humans, we think of that as shitty reasoning.”

Hammond also says the contest in San Francisco hardly demonstrates the utility of the system. “It’s a bit of a distraction,” he says.

