If you close your eyes and imagine a brick wall, you can probably come up with a pretty good mental image. After seeing many such walls, your brain knows what one should look like.
A startup in the U.K. is using machine learning to enable computers and smartphones to model visual information in a similar way. A computer could use these visual models for various tasks, from improving video streaming to automatically generating elements of a realistic virtual world.
Magic Pony Technology, created by graduates of Imperial College London with expertise in statistics, computer vision, and neuroscience, trains large neural networks to process visual information.
The company has developed a way to create high-quality videos or images from low-resolution ones. The system takes example images, converts them to a lower resolution, and then learns the difference between the low- and high-resolution versions. Others have demonstrated the feat before, but the company is able to do it on an ordinary graphics processor, which could open up new applications. One example it’s demonstrated uses the technique to improve a live gaming feed in real time.
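The core idea behind this kind of super-resolution training is that the labels come for free: downscaling a high-resolution image yields a matched low-resolution version, and the difference between the two becomes the training target. The sketch below illustrates that pair-construction step in plain Python with a toy grayscale image; the function names are illustrative, not Magic Pony's actual code, and a real system would use a neural network rather than this nearest-neighbor placeholder.

```python
def downscale(image, factor):
    """Block-average a 2-D grayscale image by `factor` in each dimension."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [image[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def upscale_nearest(image, factor):
    """Naive nearest-neighbor upscaling back to the original size
    (stands in for whatever model is being trained)."""
    out = []
    for row in image:
        wide = [px for px in row for _ in range(factor)]
        for _ in range(factor):
            out.append(list(wide))
    return out

def residual(high, low_upscaled):
    """Per-pixel difference a super-resolution model would learn to predict."""
    return [[h - l for h, l in zip(hr, lr)]
            for hr, lr in zip(high, low_upscaled)]

# A tiny 4x4 "image": downscaling it produces the (low-res, residual)
# training pair, with no manual labeling required.
high = [[10, 10, 20, 20],
        [10, 10, 20, 20],
        [30, 30, 40, 40],
        [30, 30, 40, 40]]
low = downscale(high, 2)                          # 2x2 block averages
target = residual(high, upscale_nearest(low, 2))  # what the model must recover
```

Because this toy image is constant within each 2×2 block, the residual here is all zeros; on real photographs the residual carries the fine detail (edges, textures) that the network learns to restore.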
Rob Bishop, a cofounder, says Magic Pony is currently in talks with several large companies interested in licensing the technology. “Online video-streaming businesses rely heavily on video compression,” Bishop says. “Our first product demonstrates that image quality can be greatly enhanced using deep learning, and fast mobile GPUs now allow us to deploy it anywhere.”
Bishop adds that the technology could improve the quality of images captured on smartphones with low-resolution cameras or in low light. The company is looking at other applications, including converting pixelated computer graphics into high-resolution ones or automatically generating miles of realistic-looking terrain and textures from earlier examples for games or virtual-reality environments.
What’s unusual about the company’s approach to processing video footage is that it does not need manually labeled examples. Instead, it recognizes statistical patterns in high-resolution and low-resolution examples and then teaches itself what edges, textures, straight lines, and other features should look like.
This type of learning could be important to the future of artificial intelligence (see “The Missing Link of Artificial Intelligence”). To date, deep learning has mostly been applied as a way of recognizing high-level objects such as particular faces in images and video, a feat accomplished by processing many labeled examples (see “10 Breakthrough Technologies 2013: Deep Learning”).
Researchers from Magic Pony will present a paper at a computer vision conference later this year. But Bishop says that since the paper was written, his team has “significantly improved” the technology to make it even more efficient.
Bishop explains that Magic Pony’s name comes from a meeting in which the earliest investor described the technology as a “magic pony” because no one would believe it without seeing it.