A 3-D World for Smarter AI Agents
Google DeepMind, a subsidiary of Alphabet that’s focused on making fundamental progress toward general artificial intelligence, is releasing a new 3-D virtual world today, making it available for other researchers to experiment with and modify however they wish.
The new platform, called DeepMind Lab, resembles a blockish 3-D first-person shooter computer game. Inside the world, an AI agent takes the form of a floating orb that can perceive its surroundings, move around, and perform simple actions. Agents can be trained to perform various tasks through a form of machine learning that involves receiving positive rewards. Simple example tasks that will come bundled with the platform include navigating a maze, collecting fruit, and traversing narrow passages without falling off.
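The reward-driven training described above is reinforcement learning. DeepMind Lab exposes its own observation-and-action API, but the core idea can be illustrated with a toy stand-in: the sketch below (a hypothetical five-cell corridor "maze," not DeepMind Lab's actual interface) uses tabular Q-learning, where an agent that only ever sees states and rewards learns through trial and error to walk toward the goal.

```python
import random

# Toy corridor "maze": the agent starts in cell 0; reaching cell 4 yields
# reward 1 and ends the episode. Actions: 0 = left, 1 = right.
# This is a minimal illustration of reward-based learning, not DeepMind Lab's API.
N_STATES = 5
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action; return (next_state, reward, episode_done)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy policy."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 100:
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)  # explore
            else:
                best = max(q[s])         # exploit, breaking ties randomly
                a = rng.choice([x for x in ACTIONS if q[s][x] == best])
            s2, r, done = step(s, a)
            # Move the estimate toward reward plus discounted future value.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s, steps = s2, steps + 1
    return q

q = train()
# The learned greedy policy should choose "right" in every non-terminal cell.
policy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
```

The agent is never told where the goal is; the reward signal alone shapes the value table, and the discount factor `gamma` makes cells nearer the goal more valuable, so the greedy policy points rightward everywhere.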
“We’re trying to develop these artificial intelligence agents that can learn to perform well on a wide range of tasks from looking at the environment and from observing what happens,” says Shane Legg, chief scientist and cofounder of DeepMind.
The company has used versions of the environment, previously known as Labyrinth, internally for some time (see "How Google Plans to Solve Artificial Intelligence"). It made some of its first big headlines by creating AI agents capable of learning, through trial and error, how to play many Atari video games (see "Google's AI Masters Space Invaders").
An open and customizable 3-D world provides more complex and visually rich challenges for agents, but also means a much wider range of potential tasks. DeepMind Lab could lead to AI algorithms capable of transferring their learning from one task to the next.
Having AI agents work inside a 3-D environment could also have benefits for developing algorithms to control systems that work in the real world such as industrial robots, Legg says.
What’s more, the idea of creating agents that learn about a simulated world from basic principles taps into key ideas about how humans learn, something Legg explored in his academic career. “Just like you or I would learn about the world as a child, it’s a very fundamental approach to this learning and generality problem,” Legg says of DeepMind Lab.
Other AI experts welcomed the launch of DeepMind Lab. "It's very good that they're releasing more environments," says Ilya Sutskever, cofounder and research director at OpenAI, a nonprofit dedicated to doing basic research and releasing it publicly. "The more environments reinforcement learning agents have access to, the faster the field will move forward."
Zoubin Ghahramani, a professor at the University of Cambridge in the U.K., says DeepMind Lab and other platforms for reinforcement learning make progress more transparent by letting researchers test out each other's ideas.
However, Ghahramani also notes that existing approaches to reinforcement learning do not always measure up to human abilities. For instance, it usually takes a human far less playing time to master a particular video game or board game. "Reinforcement learning approaches are very data inefficient," he says. "How do we get systems to learn at a pace that's comparable to humans?"