Big Data is Driving HPC to the Cloud

April 21, 2015 | By David

Grazed from ScientificComputing. Author: Leo Reiter.

Once upon a time, high performance computing (HPC) was mainly about one thing: speed. It started with fast processors, memory, bus fabrics and optimized software algorithms to take advantage of them. We ran FORTRAN-based computational benchmarks such as LINPACK, which to this day still factors into the TOP500 list of supercomputers.
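As a rough illustration of what a LINPACK-style benchmark actually measures, the following minimal sketch times a dense solve of Ax = b and converts it to a floating-point rate. This is only an approximation in NumPy, not the tuned HPL code used for the TOP500, and the problem size n is an arbitrary placeholder.

```python
# Minimal LINPACK-style measurement: time a dense Ax = b solve,
# then report an approximate FLOP rate. Illustration only.
import time
import numpy as np

n = 2000                                  # placeholder problem size
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization + triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # standard LINPACK operation count
print(f"n={n}: {elapsed:.3f} s, ~{flops / elapsed / 1e9:.2f} GFLOP/s")
```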

We soon learned of limiting factors such as heat, power and the pesky speed of light. Seymour Cray realized that “anyone can build a fast CPU. The trick is to build a fast system.” We responded with massively parallel systems made up of lots and lots of very fast components. All was good in the world…

So, what happened to HPC? Well, for many computationally intensive applications, such as simulation, seismic processing and rendering, overall speed is still the name of the game. As long as the funding is there, the race to exascale is alive and well. With systems and algorithms becoming ever more parallel, HPC will continue marching forward the same way it has for the last several decades. Giant monolithic systems will get larger and faster, and, assuming the software architectures can take advantage of them, so will the processing applications…

Read more from the source @ http://www.scientificcomputing.com/blogs/2015/04/big-data-driving-hpc-cloud