
Facebook’s AI tourist finds its way around New York City by asking for help from another algorithm

AI algorithms can learn to navigate in the real world using language—and that might help make chatbots and voice assistants smarter.

If you get lost in New York without a smartphone or a map, you’ll most likely ask a local for directions. Facebook’s researchers are training AI programs to do the same thing, and they’re hoping this could eventually make them far better at using language.

The Facebook Artificial Intelligence Research (FAIR) group in New York created two AI programs: a “tourist” effectively lost in the Big Apple, and a “guide” designed to help its fellow algorithm find its way around by offering natural-language instructions. The lost tourist sees photos of the real world, while the “guide” sees a 2-D map with landmarks. Together they are tasked with reaching a specific destination.
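To make the setup concrete, here is a minimal toy sketch of the tourist-guide loop described above. The class names, methods, and one-dimensional "street" are illustrative assumptions for this article, not Facebook's actual code: the tourist reports what it can see, the guide consults its map of landmarks and replies with an instruction, and the exchange repeats until the guide believes the tourist has arrived.

```python
import random

LANDMARKS = ["restaurant", "hotel", "bank", "subway", "theater"]

class Tourist:
    """Sees photos of its surroundings; describes them and follows directions."""
    def __init__(self, position):
        self.position = position

    def describe_view(self, city):
        # Stand-in for a vision model: report the landmark at the current corner.
        return f"I can see a {city[self.position]}."

    def follow(self, instruction, num_positions):
        # Stand-in for a language-conditioned policy: step left or right.
        step = 1 if "right" in instruction else -1
        self.position = (self.position + step) % num_positions

class Guide:
    """Sees only a map of landmarks plus the target; issues instructions."""
    def __init__(self, city_map, target):
        self.city_map = city_map
        self.target = target

    def localize(self, description):
        # Guess the tourist's location from its natural-language description.
        candidates = [i for i, lm in enumerate(self.city_map) if lm in description]
        return candidates[0] if candidates else None

    def instruct(self, description):
        guess = self.localize(description)
        if guess is None or guess == self.target:
            return "stop"
        return "turn right and walk one block" if guess < self.target else "turn left and walk one block"

# Toy episode: a one-dimensional "street" of unique landmarks.
city = random.sample(LANDMARKS, len(LANDMARKS))
target = random.randrange(len(city))
tourist, guide = Tourist(random.randrange(len(city))), Guide(city, target)

for _ in range(10):
    instruction = guide.instruct(tourist.describe_view(city))
    if instruction == "stop":
        break
    tourist.follow(instruction, len(city))

print("reached target" if tourist.position == target else "still lost")
```

In the real task, of course, the tourist's "view" is a 360° photo rather than a single word, and both agents are neural networks trained end to end; the sketch only shows the shape of the dialogue loop.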


The idea is that by learning how instructions relate to real objects like a “restaurant” or a “hotel,” just as a baby learns by associating words with real objects and actions, the tourist algorithm will start to figure out what these things actually are—or at least how they fit into a simple street view of the world. AI researchers hope that algorithms taught this way will be more sophisticated in their use of language.


Language remains a huge challenge for artificial intelligence. It’s easy to build algorithms capable of answering simple commands or even holding a rudimentary conversation, but sustained, complex dialogue is still beyond today’s machines. This is partly because decoding ambiguity in language requires some common-sense knowledge of the real world. Giving an algorithm simple rules or training it on large amounts of text often results in absurd misunderstandings (see “AI’s language problem”).

“One strategy for eventually building AI with human-level language understanding is to train those systems in a more natural way, by tying language to specific environments,” the researchers write in a related blog post. “Just as babies first learn to name what they can see and touch, this approach—sometimes referred to as ‘embodied AI’—favors learning in the context of a system’s surroundings, rather than training through large data sets of text.”

The Facebook research is an attempt to give AI algorithms some common sense by grounding their understanding of language in a simplified representation of the real world.

The idea of “embodied AI” has been around for some time, but most efforts to date have relied on simulated environments rather than actual images. Greater realism makes things more challenging, but it will be crucial if AI algorithms are to become more useful (see “Facebook helped create an AI scavenger hunt”).

The researchers used a 360° camera to capture New York City neighborhoods including Hell’s Kitchen, the Financial District, the Upper East Side, and Williamsburg.

They also ran experiments in which the two algorithms were free to invent their own communication protocol instead of using natural language. Interestingly, the researchers found that the system performed best when the algorithms were allowed to do this.
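What "inventing a protocol" can look like is illustrated by the toy signaling game below. This is a hedged sketch under simplifying assumptions, not Facebook's implementation: a guide must convey one of four directions using symbols that start out meaningless, and through trial-and-error reward the two agents converge on a shared, private code.

```python
import random
from collections import defaultdict

DIRECTIONS = ["north", "south", "east", "west"]
SYMBOLS = [0, 1, 2, 3]

# Tabular "policies": preference scores for symbol given direction (guide)
# and for action given symbol (tourist).
guide_prefs = defaultdict(lambda: defaultdict(float))
tourist_prefs = defaultdict(lambda: defaultdict(float))

def sample(prefs, key, options, epsilon=0.1):
    # Epsilon-greedy choice over the learned preferences.
    if random.random() < epsilon or not prefs[key]:
        return random.choice(options)
    return max(options, key=lambda o: prefs[key][o])

for episode in range(5000):
    target = random.choice(DIRECTIONS)                    # direction the guide wants taken
    symbol = sample(guide_prefs, target, SYMBOLS)         # guide sends an arbitrary symbol
    action = sample(tourist_prefs, symbol, DIRECTIONS)    # tourist interprets it
    reward = 1.0 if action == target else -0.1
    guide_prefs[target][symbol] += reward                 # both agents reinforce jointly
    tourist_prefs[symbol][action] += reward

# After training, the pair usually share a private code: one symbol per direction.
for d in DIRECTIONS:
    s = max(SYMBOLS, key=lambda o: guide_prefs[d][o])
    a = max(DIRECTIONS, key=lambda o: tourist_prefs[s][o])
    print(f"{d} -> symbol {s} -> {a}")
```

The symbols carry no meaning a human could read off, which is exactly the trade-off the researchers observed: an invented code can be more efficient for the machines, but natural language keeps the system interpretable to people.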

The Facebook researchers are releasing the code behind their project, called Talk the Walk, in hopes that other AI scientists will use it to further research on embodied AI and language algorithms.
