Cloud Streaming

April 19, 2011
Grazed from MIT Technology Review.  Author: David Talbot.

In the Silicon Valley conference room of OnLive, Steve Perlman touches the lifelike 3-D face of a computer-generated woman displayed on his iPad. Swiping the screen with his fingers, Perlman rotates her head; her eyes move to compensate, so that she continues to stare at one spot. None of this computationally intensive animation and visualization is actually taking place on the iPad. The device isn’t powerful enough to run the program responsible—an expensive piece of software called Autodesk Maya. Rather, Perlman’s finger-swipe inputs are being sent to a data center running the software. The results are returned as a video stream that seems to respond instantaneously to his touch.

To make this work, Perlman has created a way of compressing a video stream that overcomes the problems marring previous attempts to use mobile devices as remote terminals for graphics-intensive applications. The technology could make applications such as sophisticated movie-editing or architectural-design tools accessible on hundreds of millions of Internet-connected tablets, smart phones, and the like. And not only professional animators and architects would benefit. For consumers, it would allow streaming movies to be fast-forwarded and rewound in real time, as with a DVD player, while schools anywhere could gain easy access to software. "The long-term vision is actually to move all computing out to the cloud," says Perlman, OnLive’s CEO.

Perlman’s biggest innovation is dispensing with the buffers that are typically used to store a few seconds or minutes of streaming video. Though buffers allow time for any lost or delayed data to be re-sent before it’s needed, they create a lag that makes it impossible to do real-time work. Instead, Perlman uses various strategies to fill in or hide missing details—in extreme cases even filling in entire frames by extrapolating from frames received earlier—so that the eye does not detect a problem should some data get lost or delayed. The system also continually checks the network connection’s quality, increasing the amount of video compression and decreasing bandwidth requirements as needed. To save precious milliseconds, Perlman has even negotiated with Internet carriers to ensure that data from his servers is carried directly on high-speed, high-capacity Internet backbones.
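The paragraph above describes three ideas: play frames as they arrive rather than buffering them, conceal lost or late data by extrapolating from earlier frames, and compress harder when the connection degrades. The Python below is a minimal sketch of that loop under assumed interfaces; the names (StreamClient, network.poll, frame.extrapolate, choose_bitrate, and so on) are hypothetical illustrations, not OnLive's actual software.

```python
# Sketch of a bufferless streaming loop with loss concealment and adaptive
# compression. All interfaces here are assumptions for illustration only.
import time

FRAME_INTERVAL = 1 / 60          # assumed 60 fps display rate
MIN_KBPS, MAX_KBPS = 500, 8000   # assumed bitrate bounds


def choose_bitrate(current_kbps, frame_was_lost):
    """Server-side sketch: raise compression (lower bitrate) when data is lost,
    and cautiously restore quality when the connection looks healthy."""
    if frame_was_lost:
        return max(current_kbps * 0.8, MIN_KBPS)
    return min(current_kbps * 1.05, MAX_KBPS)


class StreamClient:
    """Client-side sketch: show frames as they arrive, with no playback buffer."""

    def __init__(self, network, display):
        self.network = network    # assumed transport with poll() and report()
        self.display = display    # assumed sink with show()
        self.last_frame = None

    def run(self):
        while True:
            packet = self.network.poll(timeout=FRAME_INTERVAL)

            if packet is not None:
                frame = packet.decode()
                self.last_frame = frame
            elif self.last_frame is not None:
                # Lost or late data: rather than stalling while it is re-sent,
                # hide the gap by extrapolating from the last good frame.
                frame = self.last_frame.extrapolate()
            else:
                continue  # nothing received yet

            self.display.show(frame)

            # Feed connection quality back to the server so it can trade image
            # quality for bandwidth (see choose_bitrate above).
            self.network.report(frame_was_lost=(packet is None))
```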

The goal is to respond to user inputs within 80 milliseconds, a key threshold for visual perception. Reaching that threshold is crucial for a broad range of applications, says Vivek Pai, a computer scientist at Princeton University: "If you see a delay between what you are doing and the result of what you are doing, your brain drifts off."
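For a sense of scale, here is a back-of-the-envelope check of that 80-millisecond budget. The per-stage timings are assumptions chosen for illustration, not figures from the article; the point is only that every stage of the round trip must fit inside the threshold.

```python
BUDGET_MS = 80   # response threshold cited above

# Assumed, illustrative stage timings (not from the article).
stages_ms = {
    "input capture and uplink": 20,
    "server-side rendering": 15,
    "video compression": 10,
    "downlink over Internet backbone": 25,
    "decode and display": 8,
}

total = sum(stages_ms.values())
print(f"total {total} ms of {BUDGET_MS} ms budget "
      f"({'within' if total <= BUDGET_MS else 'over'} the threshold)")
```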

Perlman founded OnLive in 2007 to commercialize his streaming technology, and last year he launched a subscription service offering cloud-based versions of popular action games, a particularly demanding application in terms of computing power and responsiveness. But games are just a start—OnLive’s investors include movie studio Warner Brothers and Autodesk, which, besides Maya, also makes CAD software for engineers and designers. Perlman believes that eventually, "any mobile device will be able to bring a huge level of computing power to any person in the world with as little as a cellular connection."