18 hours, $33K, and 156,314 cores: Amazon cloud HPC hits a “petaflop”

November 12, 2013 | By David

Grazed from ArsTechnica. Author: Jon Brodkin.

What do you do if you need more than 150,000 CPU cores but don’t have millions of dollars to spend on a supercomputer? Go to the Amazon cloud, of course. For the past few years, HPC software company Cycle Computing has been helping researchers harness the power of Amazon Web Services when they need serious computing power for short bursts of time. The company has completed its biggest Amazon cloud run yet, creating a cluster that ran for 18 hours, peaking at 156,314 cores and reaching a theoretical top speed of 1.21 petaflops. (A petaflop is one quadrillion floating-point operations per second, or a million billion.)
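As a quick sanity check on those headline figures, 1.21 petaflops spread across 156,314 cores works out to a little under 8 gigaflops per core. The snippet below is just that back-of-envelope arithmetic, using only the numbers quoted above.

```python
# Back-of-envelope check of the headline figures quoted above.
peak_flops = 1.21e15   # 1.21 petaflops, theoretical peak
peak_cores = 156_314   # core count at the cluster's largest point

flops_per_core = peak_flops / peak_cores
print(f"Theoretical peak per core: {flops_per_core / 1e9:.1f} GFLOPS")
# -> roughly 7.7 GFLOPS per core
```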

To get all those cores, Cycle’s cluster ran simultaneously in Amazon data centers across the world, in Virginia, Oregon, Northern California, Ireland, Singapore, Tokyo, Sydney, and São Paulo. The bill from Amazon ended up being $33,000. USC chemistry professor Mark Thompson needed the cluster to design materials that might be well-suited to harvesting solar energy…
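For a sense of scale, spreading that $33,000 bill over the 18-hour run gives a lower bound on the per-core-hour price, since 156,314 cores was only the peak count rather than the sustained size of the cluster. The sketch below shows the arithmetic, again using only the figures reported above.

```python
# Rough cost arithmetic from the figures in the article.
total_cost_usd = 33_000
run_hours = 18
peak_cores = 156_314   # peak, not sustained, so this overstates core-hours

cluster_cost_per_hour = total_cost_usd / run_hours
max_core_hours = peak_cores * run_hours  # upper bound on core-hours consumed

print(f"Cluster cost per hour: ${cluster_cost_per_hour:,.0f}")
print(f"Cost per core-hour (lower bound): ${total_cost_usd / max_core_hours:.4f}")
# -> about $1,833 per hour for the whole cluster, and at least ~$0.012 per core-hour
```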

"For any possible material, just figuring out how to synthesize it, purify it, and then analyze it typically takes a year of grad student time and hundreds of thousands of dollars in equipment, chemicals, and labor for that one molecule," Cycle Computing CEO Jason Stowe wrote in a blog post today…

Read more from the source @ http://arstechnica.com/information-technology/2013/11/18-hours-33k-and-156314-cores-amazon-cloud-hpc-hits-a-petaflop/