
AI Can Re-create Video Games Just by Watching Them

September 11, 2017

Machines just took aim at video-game development, circa the '80s. For a while now, AIs have been able to learn to play games like Space Invaders just by watching them. But now, Georgia Tech researchers have written a paper describing how an AI can actually rebuild the underlying game engine of Super Mario Bros. just by spectating.

The approach, first reported by The Verge, works by analyzing thousands of frames of gameplay to see what happens as everyone's favorite mustachioed plumber moves through the game. The AI looks at what changes between one frame and the next and tries to link cause to effect: what happens when Mario, say, touches a coin, or lands on an evil sentient mushroom (oh, okay, then: a Goomba).

Over time, the researchers say, the AI can build those rules up into a rudimentary version of the game engine. The Verge's James Vincent calls the results "glitchy, but passable" and notes that the tool is currently limited to simple 2-D platform games like Super Mario Bros. and Mega Man.
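The article and paper describe this only at a high level, but the general idea is easy to picture: diff consecutive frames and keep the changes that keep recurring under the same conditions. The sketch below is a toy illustration of that idea, not the researchers' actual method (which learns much richer engine rules from pixel data); the (sprite, x, y) frame representation, the Rule structure, the induce_rules function, and the min_support threshold are all assumptions made for this example.

```python
from collections import Counter
from dataclasses import dataclass

# Toy representation: each frame is a frozenset of (sprite, x, y) facts,
# e.g. frozenset({("mario", 3, 1), ("coin", 4, 1)}).

@dataclass(frozen=True)
class Rule:
    before: tuple    # facts present in the earlier frame
    appeared: tuple  # facts that show up in the next frame
    vanished: tuple  # facts that disappear in the next frame

def induce_rules(frames, min_support=3):
    """Count recurring frame-to-frame changes and keep the frequent ones."""
    counts = Counter()
    for prev, curr in zip(frames, frames[1:]):
        appeared = curr - prev
        vanished = prev - curr
        if appeared or vanished:
            rule = Rule(tuple(sorted(prev)),
                        tuple(sorted(appeared)),
                        tuple(sorted(vanished)))
            counts[rule] += 1
    # A change that recurs under the same conditions is treated as a rule
    # of the inferred "engine".
    return [rule for rule, n in counts.items() if n >= min_support]

# Toy usage: Mario walks onto a coin tile and the coin disappears.
frames = [
    frozenset({("mario", 3, 1), ("coin", 4, 1)}),
    frozenset({("mario", 4, 1)}),
]
print(induce_rules(frames, min_support=1))
```

In this toy run the recovered rule is roughly "Mario moves onto the coin and the coin vanishes," which is the kind of cause-and-effect pairing the researchers' system accumulates, at far greater scale, into a playable approximation of the engine.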

Speaking to The Verge, one of the researchers says that "a future version of this could [analyze] limited domains of reality." That's a nice idea, but as we've explained before, making sense of the world is one of the biggest challenges facing AI right now, and re-creating Super Mario Bros. is only a very small jump toward cracking it.

 
