Big Data Plumbing Problems Hinder Cloud Computing

March 8, 2013 | By David

Grazed from ElectronicDesign. Author: Al Wegener.

Let’s examine an impending problem looming at the intersection of big data and cloud computing. Big data is the vague, all-encompassing name given to immense datasets stored on enterprise servers like those at Google (which organizes 100 trillion Web pages), Facebook (1 million gigabytes of disk storage), and YouTube (20 petabytes of new video content per year).

Big data is also found in scientific applications such as weather forecasting, earthquake prediction, seismic processing, molecular modeling, and genetic sequencing. Many of these applications require servers with tens of petabytes of storage, such as the Sequoia (Lawrence Livermore) and Blue Waters (NCSA) supercomputers…

Cloud computing performs a desired computation (often on big data) on a remote server that the subscriber configures and controls, rather than on the subscriber’s local desktop PC or tablet. Amazon EC2, Microsoft Azure, and Google Compute Engine (still in beta) are leading commercial cloud computing providers. Cloud computing providers charge users as little as $0.10 per CPU-hour for renting MIPS, memory, and disk space…
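To make that pricing model concrete, here is a minimal back-of-the-envelope sketch. The $0.10-per-CPU-hour rate is the figure cited above; the job size and duration are hypothetical values chosen purely for illustration, not actual provider quotes.

```python
# Back-of-the-envelope cloud rental cost estimate.
# The $0.10/CPU-hour rate comes from the article; the job
# parameters below are hypothetical, for illustration only.

CPU_HOUR_RATE = 0.10  # dollars per CPU-hour, as cited above


def rental_cost(num_cpus, hours, rate=CPU_HOUR_RATE):
    """Total cost of running a job on num_cpus cores for a given number of hours."""
    return num_cpus * hours * rate


# Hypothetical example: a 64-core job running for 12 hours.
print(f"${rental_cost(64, 12):.2f}")  # $76.80
```

At these rates, even a sizable multi-core job costs tens of dollars, which is what makes renting compute attractive compared with owning and maintaining equivalent hardware.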

Read more from the source @ http://electronicdesign.com/communications/big-data-plumbing-problems-hinder-cloud-computing