This Inquisitive AI Will Kick Your Butt at Battleship

As AI gets smarter, it’ll learn to ask some damn good questions.
November 20, 2017
Brenden Lake and Anselm Rothe

A remarkably inquisitive artificial-intelligence system developed by a team of researchers at NYU has learned to play a game similar to Battleship with mind-blowing skill.

In the simple game the researchers created, players seek to find their opponent’s ships hidden on a small grid of squares by asking a series of questions that can be answered with a single number or word. Their program figures out how to ask these questions as efficiently as possible.

Taking inspiration from cognitive psychology, and using a fundamentally different approach from most of today’s AIs, the system shows how machines may learn how to ask useful questions about the world. The program treats questions as miniature programs, allowing it to learn from just a few examples and to construct its own questions on the basis of what it has learned.
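To make the "questions as miniature programs" idea concrete, here is a minimal sketch (not the authors' code, with a made-up board representation): each question is a small executable expression built from reusable primitives and evaluated against a candidate board.

```python
# A hypothetical board: ship color -> set of occupied (row, col) tiles.
board = {
    "blue": {(0, 0), (0, 1), (0, 2)},
    "red": {(2, 1), (3, 1)},
}

# Questions as tiny programs composed from reusable primitives.
size = lambda b, color: len(b[color])          # "How long is the blue ship?"
touching = lambda b, c1, c2: any(              # "Do the blue and red ships touch?"
    abs(r1 - r2) + abs(c1 - c2) == 1
    for (r1, c1) in b[c1] for (r2, c2) in b[c2]
)

print(size(board, "blue"))             # -> 3
print(touching(board, "blue", "red"))  # -> False
```

Because questions share primitives like `size` and `touching`, new questions can be composed from pieces of old ones rather than memorized wholesale.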

The game was developed by Brenden Lake, an assistant professor at NYU; Todd Gureckis, an associate professor; and Anselm Rothe, a graduate student. “There’s a tremendous gap between the human and the machine ability to ask questions when seeking information about the world,” Lake says. The researchers describe the work in a paper posted online.

The researchers had humans play their game and recorded the questions they asked. They then translated the questions into conceptual components. For example, the questions “How long is the blue ship?” and “Does the blue ship have four tiles?” concern the length of a target. The question “Do the blue and red ships touch?” concerns position. The researchers then encoded these questions using a simple programming language and built a probabilistic model to determine which questions should yield the most useful information. This methodology allowed the AI system to efficiently construct novel questions that helped it win the game.
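One way to picture the scoring step, as a simplified sketch rather than the authors' actual model: enumerate candidate boards still consistent with the answers so far, and rank each candidate question by the expected information gain of its answer. The hypothesis space and question functions below are made up for illustration.

```python
import math
from collections import Counter

def expected_information_gain(question, hypotheses):
    """Expected reduction in uncertainty (in bits) from asking `question`,
    assuming the remaining hypotheses are equally likely."""
    n = len(hypotheses)
    answer_counts = Counter(question(h) for h in hypotheses)
    prior_entropy = math.log2(n)
    # Expected entropy remaining after the answer is observed.
    expected_posterior = sum(
        (count / n) * math.log2(count) for count in answer_counts.values()
    )
    return prior_entropy - expected_posterior

# Toy hypothesis space: the blue ship's length could be 2, 3, or 4.
hypotheses = [{"blue_len": 2}, {"blue_len": 3}, {"blue_len": 4}]
ask_length = lambda h: h["blue_len"]          # "How long is the blue ship?"
ask_is_four = lambda h: h["blue_len"] == 4    # "Does the blue ship have four tiles?"

print(expected_information_gain(ask_length, hypotheses))   # ~1.58 bits
print(expected_information_gain(ask_is_four, hypotheses))  # ~0.92 bits
```

In this toy setup, asking for the length outright is worth more bits than the yes/no version, which matches the intuition that open-ended questions can be more informative than binary ones.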

Most AI approaches involve feeding a computer huge quantities of example data and letting it learn patterns from that data. While the NYU team’s method requires more hand-coding, it needs far fewer examples and is far more effective at discovering smart questions to pose. The system also builds questions in a more methodical way, and it can even produce questions that no human thought to ask.

The researchers are exploring how their technology might make chatbots and other dialogue systems more effective and less painful to use. With a little preprogramming, such a system might be able to help customers solve their problems more quickly by posing the right questions.

“Having dialogue systems that generate novel questions so that they can get more informative answers on the fly is going to make human-computer interaction more effortless and make these systems more useful and fun to use,” says Lake.

Remarkably, the game-playing program was able to construct “the ultimate question” for the battleship game. This consisted of asking an opponent to go through a series of mathematical steps, adding the length of one ship to 10 times the length of the next and so on. Such a question would be difficult for a person to follow or to answer correctly, but in theory the result could be used to back-calculate the entire board. “It was pretty interesting,” says Lake.
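To make the arithmetic concrete, here is a hypothetical illustration with made-up ship lengths (a simplified sketch that recovers only the lengths, not the full board): packing each length into a separate decimal digit lets a single answer be decoded back into its parts.

```python
# Hypothetical lengths for three ships.
lengths = [2, 3, 4]

# "Add the length of one ship to 10 times the length of the next, and so on."
answer = sum(length * 10**i for i, length in enumerate(lengths))
print(answer)  # -> 432

# Back-calculate every length from the single answer's digits.
decoded = [(answer // 10**i) % 10 for i in range(len(lengths))]
print(decoded)  # -> [2, 3, 4]
```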

Sam Gershman, an assistant professor at Harvard University who develops approaches to AI inspired by cognitive neuroscience, says the NYU research provides insights into how humans think up good questions. “First, you need some form of compositionality in order to capture the bewildering variety of questions,” Gershman says. “Second, you need a set of criteria that weigh the relative strengths and weaknesses of a question.”

Gershman adds that humans seem to follow a similar strategy to the more successful approach employed by the program, carefully assessing the complexity of their questions in order to use cognitive resources sparingly.

Ultimately, machines won’t become truly intelligent unless they begin to get curious about the world around them. That begins with asking probing questions.
