Virtual robots that teach themselves kung fu could revolutionize video games
In the not-so-distant future, characters might practice kung-fu kicks in a digital dojo before bringing their moves into the latest video game.
AI researchers at UC Berkeley and the University of British Columbia have created virtual characters capable of imitating the way a person performs martial arts, parkour, and acrobatics, practicing moves relentlessly until they get them just right.
The work could transform the way video games and movies are made. Instead of planning a character’s actions in excruciating detail, animators might feed real footage into a program and have their characters master the moves through practice. Such a character could then be dropped into a scene and left to perform the actions on its own.
“An artist can give just a few examples, and then the system can generalize to all different situations,” says Jason Peng, a first-year PhD student at UC Berkeley, who carried out the research.
The virtual characters developed by the researchers use an AI technique known as reinforcement learning, which is loosely modeled on the way animals learn (see “10 Breakthrough Technologies 2017: Reinforcement Learning”).
The researchers captured the actions of expert martial artists and acrobats. A virtual character experiments with its motion and receives positive reinforcement each time it gets a little closer to the motions of that expert. The approach requires a character to have a physically realistic body and to inhabit a world with accurate physical rules.
It means the same algorithm can train a character to do a backflip or a moonwalk. “You can actually solve a large range of problems in animation,” says Sergey Levine, an assistant professor at UC Berkeley who’s involved with the project.
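The core idea described above can be sketched in code. The snippet below is a minimal, hypothetical illustration (not the researchers' actual implementation): the character earns a reward that grows as its simulated pose gets closer to the expert's motion-capture pose, which is the signal reinforcement learning uses to shape its behavior over many attempts. The function name, the squared-error distance, and the exponential scaling are all illustrative assumptions.

```python
import numpy as np

def imitation_reward(char_pose, ref_pose, scale=2.0):
    """Toy imitation reward: compare the character's joint angles
    to the expert's reference pose from motion capture.

    Returns a value near 1 when the poses match closely and
    decaying toward 0 as the character's pose drifts away.
    """
    err = np.sum((np.asarray(char_pose) - np.asarray(ref_pose)) ** 2)
    return np.exp(-scale * err)

# A pose close to the reference scores higher than one far from it,
# so trial-and-error practice is steadily pulled toward the expert's motion.
ref = np.array([0.1, -0.4, 0.9])          # reference joint angles (radians)
close = imitation_reward([0.12, -0.38, 0.88], ref)
far = imitation_reward([1.0, 0.5, -0.5], ref)
```

In a full system this reward would be computed at every physics step and fed to a reinforcement-learning algorithm that adjusts the character's control policy, but the comparison-to-reference idea is the same.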
The computer-generated characters in high-budget video games and movies might look realistic, but they are little more than digital marionettes, following a painstakingly choreographed script.
The animation and computer games industries are already exploring the use of software that automatically adds realistic physics to characters. James Jacobs, CEO of Ziva Dynamics, an animation company that specializes in building characters with realistic physical characteristics, says reinforcement learning offers a way to bring realism to behavior as well as appearance. “Up until this point people have been leaning on much simpler approaches,” Jacobs says. “In this case you are training a computational model to understand the way a human or a creature moves, and then you can just direct it, start applying external forces, and it will adapt to its environment.”
The approach could have benefits that go beyond video games and special effects. Real robots may eventually learn to perform complex tasks with simulated practice. A bot might practice putting a table together in simulation, for instance, before trying it in the real world.
Levine says the robots could end up teaching us some new tricks. “If somebody wants to do some sort of gymnastics thing that nobody has ever tried before, in principle they could plug it into this and there’s a good chance something very reasonable would come out,” he says.