
An AI that can play Goat Simulator is a step toward more useful machines

Google DeepMind’s new agent could tackle a variety of games it had never seen before by watching human players.


Fly, goat, fly! A new AI agent from Google DeepMind can play different games, including ones it has never seen before such as Goat Simulator 3, a fun action game with exaggerated physics. Researchers were able to get it to follow text commands to play seven different games and move around in three different 3D research environments. It’s a step toward more generalized AI that can transfer skills across multiple environments.  

Google DeepMind has had huge success developing game-playing AI systems. Its system AlphaGo, which beat top professional player Lee Sedol at the game Go in 2016, was a major milestone that showed the power of deep learning. But unlike earlier game-playing AI systems, which mastered only one game or could only follow single goals or commands, this new agent is able to play a variety of different games, including Valheim and No Man’s Sky. It’s called SIMA, an acronym for “scalable, instructable, multiworld agent.”

In training AI systems, games are a good proxy for real-world tasks. “A general game-playing agent could, in principle, learn a lot more about how to navigate our world than anything in a single environment ever could,” says Michael Bernstein, an associate professor of computer science at Stanford University, who was not part of the research. 

“One could imagine one day rather than having superhuman agents which you play against, we could have agents like SIMA playing alongside you in games with you and with your friends,” says Tim Harley, a research engineer at Google DeepMind who was part of the team that developed the agent. 

The team trained SIMA on lots of examples of humans playing video games, both individually and collaboratively, alongside keyboard and mouse input and annotations of what the players did in the game, says Frederic Besse, a research engineer at Google DeepMind.  

Then they used an AI technique called imitation learning to teach the agent to play games as humans would. SIMA can follow 600 basic instructions, such as “Turn left,” “Climb the ladder,” and “Open the map,” each of which can be completed in about 10 seconds or less.
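Imitation learning in its simplest form (behavioral cloning) treats the human demonstrations as supervised training data: given a context, predict the action humans most often took. The sketch below is purely illustrative; the observations, instructions, and actions are hypothetical symbols standing in for the screen pixels and keyboard/mouse inputs SIMA actually uses, and the "policy" is a toy majority-vote table rather than the neural network DeepMind trained.

```python
from collections import Counter, defaultdict

# Hypothetical toy demonstrations: each entry pairs a
# (observation, instruction) context with the action a human player took.
# In SIMA, observations are game frames and actions are keyboard/mouse
# inputs; here everything is symbolic for illustration.
demos = [
    (("ladder_ahead", "climb the ladder"), "press_forward"),
    (("ladder_ahead", "climb the ladder"), "press_forward"),
    (("ladder_ahead", "turn left"), "press_left"),
    (("open_field", "turn left"), "press_left"),
    (("open_field", "open the map"), "press_m"),
    (("open_field", "open the map"), "press_m"),
]

def train_policy(demonstrations):
    """Behavioral cloning, reduced to its essence: for each context,
    imitate the action humans chose most often in that context."""
    counts = defaultdict(Counter)
    for context, action in demonstrations:
        counts[context][action] += 1
    return {ctx: c.most_common(1)[0][0] for ctx, c in counts.items()}

policy = train_policy(demos)
print(policy[("open_field", "open the map")])  # the most common human action
```

A real agent generalizes to contexts it has never seen by learning shared features across games, which a lookup table cannot do; that generalization is exactly what the multi-game training described above provides.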

The team found that a SIMA agent that was trained on many games was better than an agent that learned how to play just one. This is because it was able to take advantage of concepts shared between games to learn better skills and get better at carrying out instructions, says Besse. 

“This is again a really exciting key property, as we have an agent that can play games it has never seen before, essentially,” he says. 

Seeing this sort of knowledge transfer between games is a significant milestone for AI research, says Paulo Rauber, a lecturer in artificial intelligence at Queen Mary University of London. 

The basic idea of learning to execute instructions on the basis of examples provided by humans could lead to more powerful systems in the future, especially with bigger data sets, Rauber says. SIMA’s relatively limited data set is what is holding back its performance, he says. 

Although the number of game environments it’s been trained on is still small, SIMA is on the right track for scaling up, says Jim Fan, a senior research scientist at Nvidia who runs its AI Agents Initiative. 

But the AI system is still not close to human level, says Harley. In the game No Man’s Sky, for example, the agent could complete just 60% of the tasks humans could. And when the researchers removed humans’ ability to give SIMA instructions, the agent performed much worse than before. 

Next, Besse says, the team is working on improving the agent’s performance. The researchers want to get it to work in as many environments as possible and learn new skills, and they want people to be able to chat with the agent and get a response. The team also wants SIMA to have more generalized skills, allowing it to quickly pick up games it has never seen before, much like a human. 

Humans “can generalize very well to unseen environments and unseen situations,” says Besse. “And we want our agents to be just the same.”  

SIMA inches us closer to a “ChatGPT moment” for autonomous agents, says Roy Fox, an assistant professor at the University of California, Irvine.  

But it is a long way away from actual autonomous AI. That would be “a whole different ball game,” he says. 

