“Data Gravity” and Cloud Computing

February 18, 2014 | By David

Grazed from VirtualQube. Author: Sid Herron.

Last July, Alistair Croll wrote an interesting post over at http://datagravity.org on the concept of “data gravity” – a term coined by Dave McCrory. The idea goes like this: the speed with which information can get from where it is stored to where it is acted upon (e.g., from the hard disk in your PC to the processor) is the real limiting factor in computing performance. This is why microprocessors have built-in cache memory – to minimize the number of times the processor has to go back to slower storage for the data. What’s interesting is that this has practical implications for cloud computing.
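As a rough illustration (not from the original post), the short Python sketch below tallies how much time a hypothetical workload of one million data fetches would spend waiting at different storage tiers. Every latency figure is an assumed, order-of-magnitude value chosen only to show how quickly the waiting grows as the data sits farther from the processor.

    # Back-of-envelope illustration: the farther data sits from the processor,
    # the more each access costs. All latency figures are rough assumptions.
    ACCESS_LATENCY_SECONDS = {
        "L1 cache": 1e-9,          # ~1 ns (assumed)
        "main memory": 1e-7,       # ~100 ns (assumed)
        "local SSD": 1e-4,         # ~100 microseconds (assumed)
        "cloud round trip": 5e-2,  # ~50 ms over a WAN (assumed)
    }
    ACCESSES = 1_000_000           # hypothetical workload: one million fetches
    for tier, latency in ACCESS_LATENCY_SECONDS.items():
        print(f"{tier:>16}: {ACCESSES * latency:12.4f} s spent waiting")

Under those assumptions, the same workload that waits about a millisecond on cached data spends hours waiting if every fetch has to cross a wide-area network.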

Microsoft researcher Jim Gray, who spent much of his career studying the economics of data, concluded that, compared to the cost of moving bytes around, everything else is effectively free! Getting information from your computer to the cloud (and vice versa) is time-consuming and potentially expensive, and the more data you’re trying to move, the bigger the problem becomes…
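To make Gray’s point concrete, here is a minimal back-of-envelope sketch, again built entirely on assumed numbers (a 100 Mbps link and a hypothetical $0.09-per-GB transfer price), that estimates how long and how costly it would be to move datasets of various sizes to or from a cloud provider.

    # Rough sketch of Gray's economics: how long and how much does it cost
    # just to move the data? Bandwidth and price are assumptions only.
    def transfer_estimate(dataset_gb, link_mbps=100.0, price_per_gb_usd=0.09):
        """Return (hours on the wire, transfer cost in USD)."""
        megabits = dataset_gb * 8 * 1000     # GB -> megabits
        hours = megabits / link_mbps / 3600  # divide by megabits per second
        return hours, dataset_gb * price_per_gb_usd

    for size_gb in (10, 1_000, 100_000):     # 10 GB, 1 TB, 100 TB
        hours, cost = transfer_estimate(size_gb)
        print(f"{size_gb:>7,} GB: {hours:8.1f} h on the wire, ~${cost:,.2f} in transfer fees")

At those assumed rates, a 100 TB dataset would spend roughly three months in transit, which is exactly the kind of pull that keeps large datasets, and the applications that use them, in one place.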

Just as a large physical mass exhibits inertia (i.e., it takes a lot of energy to get it moving) and gravity (i.e., it attracts other physical objects, and the larger the mass the stronger the attraction), a large chunk of data also exhibits a kind of inertia, and tends to attract other related data and the applications required to manipulate it. Hence the concept of data “gravity.”…

Read more from the source @ http://www.virtualqube.com/blog/data-gravity-and-cloud-computing/

Subscribe to the CloudCow bi-monthly newsletter @ http://eepurl.com/smZeb