Robots won’t make it into our houses until they get common sense

AI’s big advances have so far relied on algorithms that train on huge piles of data. If robots are going to work in the real world, that will have to change.
March 25, 2019
Photograph: Jeremy Portje

Artificial intelligence has made tremendous progress in areas like image and speech recognition, largely by training machines on large sets of labeled data. But robots that have to navigate the physical world face a different challenge: no labeled data set can cover every situation they will encounter. As a result, robots are still largely confined to highly structured environments like factories, where they perform repetitive tasks.

Sergey Levine, an assistant professor of electrical engineering and computer sciences at UC Berkeley, says that if robots are ever going to find their way into our homes and broader daily lives, they will need to teach themselves the common sense required to navigate unknown, unstructured environments.

Speaking at MIT Technology Review’s EmTech Digital conference in San Francisco, Levine gave several examples of robots making remarkable progress in teaching themselves to navigate the world without labeled data or human supervision. In one recent example, a quadrupedal robot used an AI technique called deep reinforcement learning to teach itself to walk in only two hours.
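
Deep reinforcement learning pairs trial-and-error learning with a neural-network policy: the robot tries actions, receives rewards, and nudges its policy toward behaviors that earned more reward. The sketch below shows the core policy-gradient (REINFORCE) update on a toy "corridor" task; the environment, hyperparameters, and tabular policy are illustrative assumptions, not anything from the talk. A real system like the one Levine describes would swap the lookup table for a deep network and the corridor for a physical robot.

```python
# A minimal sketch of policy-gradient reinforcement learning (REINFORCE).
# Illustrative assumptions: a 10-state corridor, +1 reward at the right end,
# a tabular softmax policy, and hand-picked hyperparameters.
import numpy as np

N_STATES, N_ACTIONS = 10, 2   # corridor positions; actions: 0 = left, 1 = right
GAMMA, ALPHA = 0.99, 0.1      # discount factor and learning rate
rng = np.random.default_rng(0)
theta = np.zeros((N_STATES, N_ACTIONS))  # one softmax policy row per state

def policy(s):
    """Action probabilities for state s under a tabular softmax policy."""
    z = theta[s] - theta[s].max()
    p = np.exp(z)
    return p / p.sum()

def run_episode(max_steps=50):
    """Roll out one episode; reward +1 only for reaching the right end."""
    s, traj = 0, []
    for _ in range(max_steps):
        a = rng.choice(N_ACTIONS, p=policy(s))
        s_next = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        traj.append((s, a, r))
        s = s_next
        if r > 0:
            break
    return traj

for episode in range(500):
    traj = run_episode()
    G = 0.0
    # Work backwards, accumulating the discounted return G_t, and push up
    # the log-probability of each action in proportion to G_t.
    for s, a, r in reversed(traj):
        G = r + GAMMA * G
        grad = -policy(s)      # d log pi(a|s) / d theta[s] for a softmax...
        grad[a] += 1.0         # ...is one_hot(a) minus the action probabilities
        theta[s] += ALPHA * G * grad

print(np.round([policy(s) for s in range(N_STATES)], 2))
```

Run it and the action probabilities drift toward "right" in every state. The same learning signal, scaled up with deep networks and run on hardware, is what lets a quadruped discover a gait from scratch.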

How long before robots are capable enough to live in our homes? That’s hard to predict, says Levine. In the near term, however, he sees them increasingly used for deliveries and in settings such as hospitals, for chores like making beds.
