Researchers at Facebook have created a number of extremely realistic virtual homes and offices so that their AI algorithms can learn how the real world works.
Real deal: A team at Facebook Reality Labs created 18 “sample spaces” through a project known as Replica. The idea is for AI agents to learn about real-world objects through exploration and practice. In theory, this could make chatbots and robots smarter, and it could enable powerful new ways to manipulate virtual reality. But the virtual spaces need to be extremely lifelike for what the agents learn to transfer to the real world.
Mirror world: The environments were created by mapping real offices and homes with a high-definition 3D camera rig. The researchers also developed new software to handle reflections, which can easily confuse such scanning systems. And whereas other simulation engines run at around 50 to 100 frames per second, Facebook says its new engine, AI Habitat, runs at over 10,000 frames per second, making it possible to test AI agents far more rapidly.
Home alone: These virtual spaces can be loaded into a new environment called AI Habitat, inside which AI programs can explore and learn. The algorithms will first be trained to recognize objects in different settings. But over time they should build some common-sense understanding about the conventions of the physical world—like the fact that tables typically support other objects.
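The explore-and-learn loop described above can be illustrated with a toy sketch. To be clear, this is not the AI Habitat API: the grid world, the objects, and the random-walk agent are all illustrative assumptions standing in for a photorealistic 3D scene and a learned policy.

```python
import random

# Toy stand-in for a scanned space: a grid where some cells hold objects.
# NOT the AI Habitat API; it only illustrates an explore-and-observe loop.
WORLD = {
    (1, 1): "table",
    (1, 2): "cup",    # the cup sits near the table
    (3, 0): "chair",
}
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]
SIZE = 4  # 4x4 room

def explore(steps=2000, seed=0):
    """Random-walk agent that records which objects it encounters."""
    rng = random.Random(seed)
    pos = (0, 0)
    seen = set()
    for _ in range(steps):
        if pos in WORLD:
            seen.add(WORLD[pos])
        dx, dy = rng.choice(MOVES)
        nx, ny = pos[0] + dx, pos[1] + dy
        if 0 <= nx < SIZE and 0 <= ny < SIZE:  # stay inside the room
            pos = (nx, ny)
    return seen

print(sorted(explore()))
```

In a real Habitat-style setup, the agent would receive rendered RGB-D observations rather than object labels, and a learned policy would replace the random walk; the very high simulation frame rate is what makes running millions of such steps practical.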
Uncommon sense: A lack of common sense is a glaring problem for today’s AI systems. Unlike a person, a chatbot or robot cannot rely on an understanding of the world—things like physics, logic, and social norms—to figure out the intent of an ambiguous command. The complexity and ambiguity of language make this situation all too common.