
Virtual robots that teach themselves kung fu could revolutionize video games

Machine learning may make it much easier to build complex virtual characters.
April 10, 2018
Berkeley Artificial Intelligence Research

In the not-so-distant future, characters might practice kung-fu kicks in a digital dojo before bringing their moves into the latest video game.

AI researchers at UC Berkeley and the University of British Columbia have created virtual characters capable of imitating the way a person performs martial arts, parkour, and acrobatics, practicing moves relentlessly until they get them just right.

The work could transform the way video games and movies are made. Instead of planning a character’s actions in excruciating detail, animators might feed real footage of the desired moves into a program and let their characters master them through practice. Such a character could then be dropped into a scene and left to perform the actions on its own.

The same algorithm can be used to teach a wide range of challenging physical skills.
Berkeley Artificial Intelligence Research

“An artist can give just a few examples, and then the system can generalize to all different situations,” says Jason Peng, a first-year PhD student at UC Berkeley, who carried out the research.

The virtual characters developed by the researchers use an AI technique known as reinforcement learning, which is loosely modeled on the way animals learn (see “10 Breakthrough Technologies 2017: Reinforcement Learning”).

The researchers captured the actions of expert martial artists and acrobats. A virtual character experiments with its motion and receives positive reinforcement each time it gets a little closer to the motions of that expert. The approach requires a character to have a physically realistic body and to inhabit a world with accurate physical rules.
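The positive-reinforcement signal described above can be sketched as a pose-matching reward: at each timestep, the character is rewarded more the closer its joint angles are to the expert's motion-capture pose. This is a hypothetical illustration, not the researchers' exact formulation; the joint-angle representation, the `scale` parameter, and the exponential form are all assumptions.

```python
import numpy as np

def imitation_reward(char_pose, ref_pose, scale=2.0):
    """Reward the character for matching the reference pose.

    char_pose, ref_pose: sequences of joint angles (radians) at one timestep.
    Returns a value in (0, 1]: 1.0 for a perfect match, decaying
    toward 0 as the character's pose deviates from the expert's.
    """
    err = np.sum((np.asarray(char_pose) - np.asarray(ref_pose)) ** 2)
    return float(np.exp(-scale * err))

# A perfect match earns the maximum reward of 1.0;
# a deviating pose earns strictly less.
print(imitation_reward([0.1, 0.5], [0.1, 0.5]))  # 1.0
print(imitation_reward([0.1, 0.5], [0.4, 0.9]))  # less than 1.0
```

Summing this reward over a whole motion clip is what lets a standard reinforcement-learning algorithm nudge the character's controller, trial after trial, toward the expert's movements.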

It means the same algorithm can train a character to do a backflip or a moonwalk. “You can actually solve a large range of problems in animation,” says Sergey Levine, an assistant professor at UC Berkeley who’s involved with the project.

The computer-generated characters in high-budget video games and movies might look realistic, but they are little more than digital marionettes, following a painstakingly choreographed script.

The animation and computer games industries are already exploring software that automatically adds realistic physics to characters. James Jacobs, CEO of Ziva Dynamics, an animation company that specializes in building characters with realistic physical characteristics, says reinforcement learning offers a way to bring realism to behavior as well as appearance. “Up until this point people have been leaning on much simpler approaches,” Jacobs says. “In this case you are training a computation model to understand the way a human or a creature moves, and then you can just direct it, start applying external forces, and it will adapt to its environment.”

The reinforcement learning process involves making gradual progress—and the odd fall.
Berkeley Artificial Intelligence Research

The approach could have benefits that go beyond video games and special effects. Real robots may eventually learn to perform complex tasks with simulated practice. A bot might practice putting a table together in simulation, for instance, before trying it in the real world.

Levine says the robots could end up teaching us some new tricks. “If somebody wants to do some sort of gymnastics thing that nobody has ever tried before, in principle they could plug it into this and there’s a good chance something very reasonable would come out,” he says.

