Cloud Computing: Supercomputing At Not-So-Super Costs

April 22, 2012 · By David
Grazed from Wall Street Journal. Author: Chris Boulton.

The term “supercomputing” suggests computing that comes at great cost and is reserved for massive companies. But in this age of cloud computing, where vendors host software on their own servers and provision it to end-user customers, supercomputing need not always come with super costs.

That’s the lesson learned from Cycle Computing, which used a computing cluster comprising 50,000 computer chips to test drug compounds, at a cost of less than $5,000 an hour for a run lasting under three hours, reports the New York Times’ Steve Lohr…

Cycle’s computing cluster conducted 21 million chemical compound simulations using applications from Schrödinger and Nimbus Discovery atop Amazon Web Services (AWS) in under three hours, consuming 12.5 processor-years of computation for less than $15,000.
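For readers who want to check those figures, here is a quick back-of-envelope sketch in Python. The cluster size, processor-years, and $15,000 ceiling come straight from the article; the per-hour rates derived from them are simple arithmetic, not numbers Cycle or AWS reported.

```python
# Back-of-envelope check of the Cycle Computing figures cited above.
# Inputs are taken from the article; derived rates are my own arithmetic.

CHIPS = 50_000              # chips in the cluster
PROCESSOR_YEARS = 12.5      # total compute consumed
TOTAL_COST = 15_000         # upper bound on the bill, in dollars
HOURS_PER_YEAR = 365 * 24   # 8,760 hours

processor_hours = PROCESSOR_YEARS * HOURS_PER_YEAR      # 109,500 processor-hours
wall_clock_hours = processor_hours / CHIPS              # ~2.19, i.e. "under three hours"
cost_per_processor_hour = TOTAL_COST / processor_hours  # ~$0.14, as an upper bound

print(f"{processor_hours:,.0f} processor-hours consumed")
print(f"{wall_clock_hours:.2f} wall-clock hours on {CHIPS:,} chips")
print(f"${cost_per_processor_hour:.3f} per processor-hour (upper bound)")
```

The numbers hang together: 12.5 processor-years spread across 50,000 chips works out to roughly 2.2 hours of wall-clock time, consistent with the under-three-hours, under-$5,000-an-hour run described above.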

Such computing power at that price would have been unimaginable to CIOs a decade ago, when companies would have spent a lot more money on hardware and software to achieve the same results as the supercomputing cluster. But thanks to the falling costs of processors and storage, combined with the willingness of companies to rent out their commodity servers by the hour, the model is increasingly common.

At the low prices cloud vendors are currently hawking, the cloud is hard for CIOs to ignore after years of paying licensing and maintenance fees that inflate their capital expenditures. CIOs willing to accept higher operational expenditures in exchange may want to give it a look.

To be sure, most shops won’t move entirely into the cloud. Gartner said earlier this month that most shops will adopt “hybrid computing,” or a combination of public or private cloud computing and internal infrastructure or applications. Still, case studies such as those provided by Cycle Computing underscore how cloud computing can be adopted while keeping costs in check.