
Facebook’s AI tourist finds its way around New York City by asking for help from another algorithm

AI algorithms can learn to navigate in the real world using language—and that might help make chatbots and voice assistants smarter.
July 12, 2018

If you get lost in New York without a smartphone or a map, you’ll most likely ask a local for directions. Facebook’s researchers are training AI programs to do the same thing, and they’re hoping this could eventually make them far better at using language.

The Facebook Artificial Intelligence Research (FAIR) group in New York created two AI programs: a “tourist” effectively lost in the Big Apple, and a “guide” designed to help its fellow algorithm find its way around by offering natural-language instructions. The lost tourist sees photos of the real world, while the “guide” sees a 2-D map with landmarks. Together they are tasked with reaching a specific destination.
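
To make that setup concrete, here is a minimal, fully runnable sketch of the tourist-guide loop in Python. Everything in it (the toy grid of landmarks, the message format, the rule-based agents) is a hypothetical illustration of the design described above, not Facebook's actual code, which uses learned models and real 360° street imagery.

```python
# Toy illustration of the tourist/guide navigation loop. The grid, the
# landmark names, and the hand-written agents are assumptions for the sketch.

GRID = {  # landmark visible at each intersection of a 3x3 toy map
    (0, 0): "bank", (1, 0): "cafe",   (2, 0): "hotel",
    (0, 1): "park", (1, 1): "shop",   (2, 1): "restaurant",
    (0, 2): "gym",  (1, 2): "school", (2, 2): "theater",
}

def tourist_describe(pos):
    """The tourist sees only its local view and reports it in language."""
    return f"I can see a {GRID[pos]}."

def guide_instruct(description, target):
    """The guide sees only the map: it locates the tourist from the
    description, then replies with a direction toward the target."""
    landmark = description.removeprefix("I can see a ").rstrip(".")
    pos = next(p for p, name in GRID.items() if name == landmark)
    if pos == target:
        return "You have arrived."
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    if dx:
        return "Walk east." if dx > 0 else "Walk west."
    return "Walk south." if dy > 0 else "Walk north."

def tourist_act(pos, instruction):
    """The tourist turns the guide's words back into movement."""
    moves = {"east": (1, 0), "west": (-1, 0), "south": (0, 1), "north": (0, -1)}
    for word, (dx, dy) in moves.items():
        if word in instruction:
            return (pos[0] + dx, pos[1] + dy)
    return pos  # "You have arrived." -> stay put

pos, target = (0, 0), (2, 1)  # start at the bank, aim for the restaurant
for _ in range(10):
    msg = tourist_describe(pos)
    reply = guide_instruct(msg, target)
    print(f"Tourist: {msg}  Guide: {reply}")
    if reply == "You have arrived.":
        break
    pos = tourist_act(pos, reply)
```

In the research version, the interesting part is that neither side is hand-written: both agents must learn, from many such episodes, how words like "restaurant" relate to what the tourist actually sees.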

The idea is that by learning how instructions relate to real objects like a “restaurant” or a “hotel,” just as a baby learns by associating words with real objects and actions, the tourist algorithm will start to figure out what these things actually are—or at least how they fit into a simple street view of the world. AI researchers hope that algorithms taught this way will be more sophisticated in their use of language.

Language remains a huge challenge for artificial intelligence. It’s easy to build algorithms capable of answering simple commands or even holding a rudimentary conversation, but sustained, complex dialogue remains beyond today’s machines. This is partly because decoding ambiguity in language requires some common-sense knowledge of the real world. Giving an algorithm simple rules or training it on large amounts of text often results in absurd misunderstandings (see “AI’s language problem”).

“One strategy for eventually building AI with human-level language understanding is to train those systems in a more natural way, by tying language to specific environments,” the researchers write in a related blog post. “Just as babies first learn to name what they can see and touch, this approach—sometimes referred to as ‘embodied AI’—favors learning in the context of a system’s surroundings, rather than training through large data sets of text.”

The Facebook research is an attempt to give AI algorithms some common sense by grounding their understanding of language in a simplified representation of the real world.

The idea of “embodied AI” has been around for some time, but most efforts to date have relied on simulated environments rather than actual images. Greater realism makes things more challenging, but it will be crucial if AI algorithms are to become more useful (see “Facebook helped create an AI scavenger hunt”).

The researchers used a 360° camera to capture New York City neighborhoods including Hell’s Kitchen, the Financial District, the Upper East Side, and Williamsburg.

They also ran experiments in which the algorithms were free to invent their own communication protocol instead of using natural language. Interestingly, the researchers found that the pair navigated best when allowed to do so.
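
For intuition on how such a protocol can emerge, here is a minimal, hypothetical sketch of a signaling game: the "tourist" emits an arbitrary symbol for its location, the "guide" guesses the location, and a shared reward nudges both policies (a simple REINFORCE-style update). The vocabulary size, tabular policies, and learning rate are all illustrative assumptions, not Facebook's training code; toy runs like this usually settle on a dedicated symbol per location, though collisions can remain.

```python
# Hypothetical emergent-communication sketch: two agents invent a code.
import random

VOCAB, LOCS, LR = 8, 8, 0.1  # assumed sizes for the toy example

def uniform(n):
    return [1.0 / n] * n

# The "tourist" maps each location to a distribution over symbols;
# the "guide" maps each symbol to a distribution over locations.
speaker = [uniform(VOCAB) for _ in range(LOCS)]
listener = [uniform(LOCS) for _ in range(VOCAB)]

def sample(probs):
    return random.choices(range(len(probs)), weights=probs)[0]

def reinforce(probs, chosen, reward):
    # Push probability toward the chosen entry when rewarded, then renormalize.
    for i in range(len(probs)):
        probs[i] += LR * reward * ((1.0 if i == chosen else 0.0) - probs[i])
    total = sum(probs)
    for i in range(len(probs)):
        probs[i] /= total

for _ in range(20000):
    loc = random.randrange(LOCS)             # where the tourist happens to be
    sym = sample(speaker[loc])               # it emits an arbitrary symbol
    guess = sample(listener[sym])            # the guide guesses the location
    reward = 1.0 if guess == loc else 0.0
    reinforce(speaker[loc], sym, reward)     # the shared reward trains both
    reinforce(listener[sym], guess, reward)  # sides, so a joint code emerges

for loc in range(LOCS):
    print(loc, "->", max(range(VOCAB), key=lambda s: speaker[loc][s]))
```

Because no meaning is assigned to any symbol in advance, whatever code the pair converges on is purely a product of their shared objective, which is one reason invented protocols can outperform natural language on a narrow task.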

The Facebook researchers are releasing the code behind their project, called Talk the Walk, in hopes that other AI scientists will use it to further research on embodied AI and language algorithms.

