Artificial Intelligence Can Now Design Realistic Video and Game Imagery
If you close your eyes and imagine a brick wall, you can probably come up with a pretty good mental image. After seeing many such walls, your brain knows what one should look like.
A startup in the U.K. is using machine learning to enable computers and smartphones to model visual information in a similar way. A computer could use these visual models for various tasks, from improving video streaming to automatically generating elements of a realistic virtual world.
Magic Pony Technology, created by graduates of Imperial College London with expertise in statistics, computer vision, and neuroscience, trains large neural networks to process visual information.
The company has developed a way to reconstruct high-quality videos or images from low-resolution ones. It feeds high-quality example images to a computer, which converts them to a lower resolution and then learns the difference between the two versions. Others have demonstrated similar results before, but Magic Pony can do it on an ordinary graphics processor, which could open the technique up to widespread practical use. One example it has demonstrated uses the technique to improve a live gaming feed in real time.
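The article does not detail Magic Pony's training pipeline, but the core idea it describes, generating supervision by downscaling high-quality images and learning the difference, can be sketched in a few lines. The pooling scheme, the nearest-neighbour baseline, and the residual-learning target below are illustrative assumptions, not the company's actual method:

```python
import numpy as np

def downscale(img, factor=2):
    """Average-pool a grayscale image by `factor` to simulate a low-res version."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upscale_nearest(img, factor=2):
    """Naive nearest-neighbour upscaling: the crude baseline a model must improve on."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def make_training_pair(high_res, factor=2):
    """Build an (input, target) pair with no manual labels: the input is a crude
    upscale of the downscaled image; the target is the residual detail the
    model should learn to restore."""
    low_res = downscale(high_res, factor)
    crude = upscale_nearest(low_res, factor)
    residual = high_res - crude
    return crude, residual

rng = np.random.default_rng(0)
high = rng.random((8, 8))          # stand-in for a high-resolution patch
crude, residual = make_training_pair(high)
# Adding the learned residual back to the crude upscale recovers the original.
assert np.allclose(crude + residual, high)
```

Because the target comes from the original image itself, every high-quality image yields a free training example, which is why no manually labeled data is needed.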
Rob Bishop, a cofounder, says Magic Pony is currently in talks with several large companies interested in licensing the technology. “Online video-streaming businesses rely heavily on video compression,” Bishop says. “Our first product demonstrates that image quality can be greatly enhanced using deep learning, and fast mobile GPUs now allow us to deploy it anywhere.”
Bishop adds that the technology could improve the quality of images captured on smartphones with low-resolution cameras or in low light. The company is looking at other applications, including converting pixelated computer graphics into high-resolution ones or automatically generating miles of realistic-looking terrain and textures from earlier examples for games or virtual-reality environments.
What’s unusual about the company’s approach to processing video footage is that it does not need manually labeled examples. Instead, it recognizes statistical patterns in high-resolution and low-resolution examples and then teaches itself what edges, textures, straight lines, and other features should look like.
This type of learning could be important to the future of artificial intelligence (see “The Missing Link of Artificial Intelligence”). To date, deep learning has mostly been applied as a way of recognizing high-level objects such as particular faces in images and video, a feat accomplished by processing many labeled examples (see “10 Breakthrough Technologies 2013: Deep Learning”).
Researchers from Magic Pony will present a paper at a computer vision conference later this year. But Bishop says that since the paper was written, his team has “significantly improved” the technology to make it even more efficient.
Bishop explains that Magic Pony’s name comes from a meeting in which the earliest investor described the technology as a “magic pony” because no one would believe it without seeing it.