99.9% Cloud Computing Uptime? That 0.1% Costs More Than You Would Think
June 20, 2012
Reliability has long been touted as a stalwart benefit of cloud technology: the ability to offer always-on service for midsize and enterprise companies. But while 99.9 percent cloud computing uptime sounds like a great number, recent studies show that the costs associated with that 0.1 percent of downtime are far higher than many admins realize. Nonetheless, cloud adoption proceeds full steam ahead, with providers like Google making the case that shirking the cloud is bad for the environment. So what's an admin to do: fight for the right uptime, or be energy conscious?
Eco-Friendly and Cheap
A recent InformationWeek article reports on Google's latest whitepaper, which argues that both energy and money can be saved with a shift to cloud-hosted software. The company points to the US General Services Administration's (GSA) 2011 migration to Gmail, with the agency expecting operational cost savings of approximately $3 million, or 50 percent of its current expenditure on email. In addition, Google says that the GSA cut its energy costs by 93 percent and its carbon emissions by 85 percent, though the agency hasn't officially confirmed these figures. The tech giant's argument is that by deploying apps on cloud servers only as needed, rather than duplicating data on local machines to guard against failure, companies will not only save big money but earn a reputation for being environmentally conscious.
This is a tempting offer for any IT admin, promising lower costs and a feather in the cap for doing right by the planet. Still, it seems too good to be true, and according to a recent survey, it might just be.
Going Down!
The cleanest, cheapest servers in the world won't do a company any good if they are not up and running, and that's the problem currently facing cloud providers, according to a recent article at Computerworld. The article discusses a study by the International Working Group on Cloud Computing Resiliency (IWGCR), which reported that since 2007, 568 hours of downtime at 13 large cloud providers have cost more than $71.1 million.
According to the group's data, cloud services go dark an average of 7.5 hours per year, which amounts to an availability of roughly 99.9 percent. Neither of those numbers sounds so bad, until you consider that any mission-critical app should be at 99.999 percent uptime; that extra 0.099 percentage points is hard to come by, apparently, and the study notes that "as a comparison, the service average unavailability for electricity in a modern capital is less than 15 minutes per year." Clearly, it is possible to deliver this level of reliability, but cloud computing uptime just hasn't hit the mark. Estimates of costs for this downtime range from $225 per hour for a service like PayPal to $89,000 for travel providers like Amadeus. Google, Microsoft, and Amazon all come in at around $200 per hour that they are down.
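The gap between "three nines" and "five nines" is easier to feel when converted into hours and minutes. A quick back-of-envelope sketch (our own illustration, not from the IWGCR study; it assumes a non-leap 8,760-hour year, which is why the 99.9 percent figure lands a bit above the study's reported 7.5 hours):

```python
# Convert an availability percentage into expected annual downtime.
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def annual_downtime_hours(availability_pct: float) -> float:
    """Hours per year a service at this availability is expected to be dark."""
    return (100.0 - availability_pct) / 100.0 * HOURS_PER_YEAR

for pct in (99.9, 99.99, 99.999):
    hours = annual_downtime_hours(pct)
    print(f"{pct}% uptime -> {hours:.2f} h/year ({hours * 60:.1f} min)")
```

Three nines allows roughly 8.8 hours of darkness a year; five nines allows about 5.3 minutes, which is also in the ballpark of the study's under-15-minutes figure for electricity in a modern capital.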
What does it all mean for IT admins at midsize businesses? It means that the cloud is lean and green, but still a little bit mean when it comes to handling applications. The sub-par 99.9 percent uptime shouldn't be enough to put off would-be adopters; instead, admins are well-advised to start by moving less mission-critical applications to the cloud, those that a spell of downtime won't cripple. A slower migration can provide immediate cost reductions and eco benefits while limiting the effects of less-than-perfect cloud computing uptime.