How to Deliver Online Gaming, Minus the Lag

OnLive CEO Steve Perlman explains how his cloud videogame service deals with real network conditions.
September 23, 2009

This March, a company called OnLive promised a gaming technology that seemed almost too good to be true. The company said it could deliver graphics-heavy video games over the Internet to any computer or to a miniconsole hooked to a television. This includes games such as the first-person shooter Crysis, which is normally beyond the capabilities of anything short of a multi-thousand-dollar gaming machine.

Today at Technology Review’s EmTech@MIT conference, OnLive founder and CEO Steve Perlman presented a live demo of the system in action.

OnLive has met with skepticism from hardcore gamers. The big question is whether the system can transmit high-end games over the Internet without serious lag, and many have said it can’t be done. OnLive is currently in an open beta, which involves testing its technology on a variety of real networks and computers.

Though OnLive has developed its own compression technology, Perlman says that this is “just one piece of a complex problem.”

The main issue, he suggests, is dealing with real-world network conditions. The company has spent the last seven years in stealth mode learning to do just this. Years ago, Perlman says, OnLive’s technology worked perfectly under ideal network conditions. Since then, a lot of work has gone into addressing less-than-perfect conditions.

When streaming something like a video, a computer builds up a buffer to protect against network problems. The buffer buys time to check that the stream is flowing smoothly and to ask the server to resend any data that gets lost or corrupted along the way. For a video game, which is inherently unpredictable, Perlman says that such a technique is out of the question: every millisecond a frame sits in a buffer is added directly to the delay between a player’s input and its effect on screen.
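To see why, consider the numbers. The sketch below is not OnLive’s code, and the jitter figures are illustrative assumptions; it simulates a 60-frames-per-second stream over a link with variable delay. A large playout buffer absorbs nearly all of the jitter, but every millisecond of buffer sits between a button press and the picture that responds to it.

```python
import random

random.seed(0)

FRAME_INTERVAL_MS = 1000 / 60  # a 60 fps stream sends a frame every ~16.7 ms

def one_way_delay_ms():
    """Assumed jittery network delay for one frame (illustrative numbers)."""
    return max(5.0, random.gauss(40, 15))

def late_frames(buffer_ms, n_frames=600):
    """Count frames that arrive after their playout slot for a given buffer."""
    late = 0
    for i in range(n_frames):
        sent_at = i * FRAME_INTERVAL_MS
        arrives_at = sent_at + one_way_delay_ms()
        plays_at = sent_at + buffer_ms  # the playout clock trails the sender
        if arrives_at > plays_at:
            late += 1                   # missed its slot: a visible stutter
    return late

for buf_ms in (20, 60, 500):
    print(f"buffer={buf_ms:>3} ms  late frames={late_frames(buf_ms):>3}/600  "
          f"added input lag={buf_ms} ms")
```

With a half-second buffer almost nothing arrives late, which is why buffering works for movies; in a game, that half second would sit between every input and its on-screen result.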

Instead, OnLive’s system uses perceptual science to keep the gaming experience smooth. The company’s algorithms adapt what’s shown so that it reads as a complete image while the screen is in motion, even if it wouldn’t hold up as a still frame. That tolerance allows some leeway for network hiccups. “Each frame may not look good, but we always deliver the data,” Perlman says.
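In other words, when bandwidth dips, the system degrades the frame rather than delaying it. The sketch below illustrates that principle under assumed per-frame sizes and quality tiers; it is not OnLive’s algorithm, only the always-ship-on-time policy Perlman describes.

```python
FRAME_RATE = 60  # frames per second

# Assumed per-frame sizes in kilobytes at each quality tier -- illustrative only.
TIERS = [("high", 40), ("medium", 20), ("low", 6)]

def encode_frame(bandwidth_kbps):
    """Pick the richest quality tier whose frame fits this slot's byte budget.

    A sketch of the principle, not OnLive's algorithm: a frame always ships
    on schedule, coarser if it must be, instead of being delayed or resent.
    """
    budget_kb = (bandwidth_kbps / 8.0) / FRAME_RATE  # kilobytes per frame slot
    for name, size_kb in TIERS:
        if size_kb <= budget_kb:
            return name
    return "low"  # worst case: even a very coarse frame ships on time

# Example: a link that momentarily dips from 20 Mbps to 3 Mbps.
for kbps in (20_000, 12_000, 3_000):
    print(f"{kbps / 1000:>4.0f} Mbps -> send a '{encode_frame(kbps)}' frame")
```

The design choice is to spend scarce bits on motion and timing rather than per-frame fidelity, leaning on the eye’s reduced sensitivity to fine detail while the image is moving.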

The company plans to launch to the public this winter.
