Artificial Intelligence Can Now Design Realistic Video and Game Imagery

A remarkable machine-learning trick that cleans up pixelated videos and photographs can automatically generate high-quality computer-game graphics.
April 14, 2016

If you close your eyes and imagine a brick wall, you can probably come up with a pretty good mental image. After seeing many such walls, your brain knows what one should look like.

A startup in the U.K. is using machine learning to enable computers and smartphones to model visual information in a similar way. A computer could use these visual models for various tasks, from improving video streaming to automatically generating elements of a realistic virtual world.

Magic Pony Technology, created by graduates of Imperial College London with expertise in statistics, computer vision, and neuroscience, trains large neural networks to process visual information.

Live video-game feed shows how the system can sharpen up blurred footage in real time.

The company has developed a way to create high-quality videos or images from low-resolution ones. It feeds example images to a computer, downsamples them, and trains a neural network on the difference between the two versions. Others have demonstrated the feat before, but Magic Pony can do it on an ordinary graphics processor, which could make the technique practical for a much wider range of applications. One demonstration uses it to improve a live gaming feed in real time.
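The training setup described here, downsampling a high-resolution image and then learning to recover the detail that was removed, can be sketched in a few lines. This is a minimal illustration assuming simple block-average downsampling; Magic Pony's actual pipeline and downscaling method are not public.

```python
import numpy as np

def make_training_pair(image, factor=4):
    """Create a (low-res, high-res) training pair by block-averaging
    a high-res image, mimicking the self-supervised setup described
    in the article. `factor` is the downscaling ratio (an assumption;
    the real system's ratio is not public)."""
    h, w = image.shape
    h, w = h - h % factor, w - w % factor   # crop so dimensions divide evenly
    hi = image[:h, :w]
    # Average each factor x factor block into one low-res pixel
    lo = hi.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return lo, hi

# Example: a synthetic 8x8 "image" yields a 2x2 low-res input
hi_res = np.arange(64, dtype=float).reshape(8, 8)
lo_res, target = make_training_pair(hi_res, factor=4)
print(lo_res.shape, target.shape)  # → (2, 2) (8, 8)
```

A network would then be trained to map `lo_res` back to `target`, so the data supplies its own supervision.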

Magic Pony’s algorithms can sharpen up a pixelated character.

Rob Bishop, a cofounder, says Magic Pony is currently in talks with several large companies interested in licensing the technology. “Online video-streaming businesses rely heavily on video compression,” Bishop says. “Our first product demonstrates that image quality can be greatly enhanced using deep learning, and fast mobile GPUs now allow us to deploy it anywhere.”

Bishop adds that the technology could improve the quality of images captured on smartphones with low-resolution cameras or in low light. The company is looking at other applications, including converting pixelated computer graphics into high-resolution ones or automatically generating miles of realistic-looking terrain and textures from earlier examples for games or virtual-reality environments.

What’s unusual about the company’s approach to processing video footage is that it does not need manually labeled examples. Instead, it recognizes statistical patterns in high-resolution and low-resolution examples and then teaches itself what edges, textures, straight lines, and other features should look like.
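Because the target is simply the original high-resolution frame, the training signal requires no manual annotation. A minimal sketch of such a self-supervised objective, assuming a pixel-wise mean-squared-error loss (Magic Pony's actual loss function is not public):

```python
import numpy as np

def reconstruction_loss(predicted, original):
    """Pixel-wise mean-squared error between an upscaled output and the
    original high-res frame. The 'label' is the original image itself,
    which is why no manually labeled examples are needed."""
    return float(np.mean((predicted - original) ** 2))

# A naive nearest-neighbour upscale of a 2x2 patch back to 4x4,
# scored against the original 4x4 patch it was derived from
original = np.array([[1., 1., 5., 5.],
                     [1., 1., 5., 5.],
                     [2., 2., 6., 6.],
                     [2., 2., 6., 6.]])
low = original.reshape(2, 2, 2, 2).mean(axis=(1, 3))  # 2x downsample
upscaled = np.kron(low, np.ones((2, 2)))              # nearest-neighbour upscale
print(reconstruction_loss(upscaled, original))  # → 0.0 (blocks are constant, so the upscale is exact)
```

A learned model replaces the nearest-neighbour step and is trained to drive this loss down on edges and textures, where naive upscaling fails.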

This type of learning could be important to the future of artificial intelligence (see “The Missing Link of Artificial Intelligence”). To date, deep learning has mostly been applied as a way of recognizing high-level objects such as particular faces in images and video, a feat accomplished by processing many labeled examples (see “10 Breakthrough Technologies 2013: Deep Learning”).

The system can automatically generate complex textures, such as a distressed brick wall.

Researchers from Magic Pony will present a paper at a computer vision conference later this year. But Bishop says that since the paper was written, his team has “significantly improved” the technology to make it even more efficient.

Bishop explains that Magic Pony’s name comes from a meeting in which the earliest investor described the technology as a “magic pony” because no one would believe it without seeing it.

