
Cloud Streaming

Bringing high-performance software to mobile devices
April 19, 2011
This computationally intensive 3-D animation software appears to be running on a tablet, but is actually running on OnLive’s remote servers.

In the Silicon Valley conference room of OnLive, Steve Perlman touches the lifelike 3-D face of a computer-generated woman displayed on his iPad. Swiping the screen with his fingers, Perlman rotates her head; her eyes move to compensate, so that she continues to stare at one spot. None of this computationally intensive animation and visualization is actually taking place on the iPad. The device isn’t powerful enough to run the program responsible—an expensive piece of software called Autodesk Maya. Rather, Perlman’s finger-swipe inputs are being sent to a data center running the software. The results are returned as a video stream that seems to respond instantaneously to his touch.

To make this work, Perlman has created a way of compressing a video stream that overcomes the problems marring previous attempts to use mobile devices as remote terminals for graphics-intensive applications. The technology could make applications such as sophisticated movie-editing or architectural-design tools accessible on hundreds of millions of Internet-connected tablets, smart phones, and the like. And not only professional animators and architects would benefit. For consumers, it will allow streaming movies to be fast-forwarded and rewound in real time, as with a DVD player, while schools anywhere could gain easy access to software. “The long-term vision is actually to move all computing out to the cloud,” says Perlman, OnLive’s CEO.

Perlman’s biggest innovation is dispensing with the buffers that are typically used to store a few seconds or minutes of streaming video. Though buffers allow time for any lost or delayed data to be re-sent before it’s needed, they create a lag that makes it impossible to do real-time work. Instead, Perlman uses various strategies to fill in or hide missing details—in extreme cases even filling in entire frames by extrapolating from frames received earlier—so that the eye does not detect a problem should some data get lost or delayed. The system also continually checks the network connection’s quality, increasing the amount of video compression and decreasing bandwidth requirements as needed. To save precious milliseconds, Perlman has even negotiated with Internet carriers to ensure that data from his servers is carried directly on high-speed, high-capacity Internet backbones.
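The logic described above—render every frame the moment it arrives, conceal rather than wait for lost data, and dial compression up or down with link quality—can be sketched in a few lines. This is an illustrative simulation, not OnLive's actual code: the function names, the linear-extrapolation concealment, and the 5 percent loss threshold are all assumptions made for the example.

```python
def conceal(prev_frame, before_prev):
    """Extrapolate a missing frame linearly from the two most recent
    good frames (a stand-in for real motion-based concealment)."""
    return [2 * a - b for a, b in zip(prev_frame, before_prev)]

def receive_stream(frames, losses, loss_threshold=0.05):
    """Bufferless receiver: render each frame as it arrives and never
    wait for a retransmission.

    frames: per-frame data (a list of pixel values), or None if lost
    losses: running packet-loss estimate at each frame (0.0-1.0)
    Returns (rendered frames, compression level used per frame).
    """
    rendered, compression = [], []
    level = 1  # current compression level (higher = more compression)
    for frame, loss in zip(frames, losses):
        # Adapt: raise compression (cutting bandwidth) when the
        # measured link quality degrades, relax it when it recovers.
        level = level + 1 if loss > loss_threshold else max(1, level - 1)
        if frame is None and len(rendered) >= 2:
            # Hide the gap: synthesize the frame instead of stalling.
            frame = conceal(rendered[-1], rendered[-2])
        elif frame is None:
            frame = rendered[-1] if rendered else [0]
        rendered.append(frame)
        compression.append(level)
    return rendered, compression
```

The key design point is what the code never does: it never blocks waiting for a lost packet, which is exactly the lag a conventional playback buffer would introduce.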

The goal is to respond to user inputs within 80 milliseconds, a key threshold for visual perception. Reaching that threshold is crucial for a broad range of applications, says Vivek Pai, a computer scientist at Princeton University: “If you see a delay between what you are doing and the result of what you are doing, your brain drifts off.”
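To see why Perlman shaves milliseconds wherever he can, it helps to tally a round trip against that 80-millisecond ceiling. The component figures below are illustrative assumptions for the sake of the arithmetic, not OnLive's published numbers; the point is that every stage must be squeezed for the sum to fit.

```python
# Hypothetical round-trip latency budget under the 80 ms target.
# Each value is an assumed, illustrative figure.
budget_ms = {
    "input capture":            5,
    "uplink to data center":   15,
    "server-side rendering":   16,   # one frame at 60 fps
    "video encode":             8,
    "downlink (backbone path)": 15,
    "client decode + display": 16,
}

total = sum(budget_ms.values())
print(f"round trip: {total} ms of 80 ms budget")
```

Under these assumptions the network legs alone consume 30 ms, which is why routing traffic directly onto high-speed backbones, as the article describes, buys back meaningful headroom.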

Perlman founded OnLive in 2007 to commercialize his streaming technology, and last year he launched a subscription service offering cloud-based versions of popular action games, a particularly demanding application in terms of computing power and responsiveness. But games are just a start—OnLive’s investors include movie studio Warner Brothers and Autodesk, which, besides Maya, also makes CAD software for engineers and designers. Perlman believes that eventually, “any mobile device will be able to bring a huge level of computing power to any person in the world with as little as a cellular connection.”
