Cloud Computing and Economics – The Agency CIO’s Dilemma
February 21, 2012

The simple economic model for cloud computing is relatively straightforward:
- The enabling technologies (virtualization, blade servers, SANs, etc.) drive data centers toward massive consolidation to achieve economies of scale.
- The required capital investment drives government budget wonks to favor a service delivery model of "cloud computing" that rolls these up-front costs into the cost of per-unit delivery.
Lobbyists have been successful in writing language into recent legislation (e.g., NDAA 2012) that establishes a bias toward cloud computing by stating that it is cheaper and more secure than existing capabilities. But do we really understand the true costs involved, what economists refer to as the marginal costs, since they are rarely included in a simplified cost analysis?
A long time ago, an agency CIO asked participants at an awards dinner to define what is meant by a "legacy system." The CIO then explained that, from the agency's perspective, it was one that works! An Information Week column by John Foley (http://www.informationweek.com/news/government/cloud-saas/232500025) discussed a DOE study that evaluated the efficacy of using Amazon's cloud for high-performance computing (HPC) and found that these applications did not scale well in a public cloud environment, and that the cost per CPU core-hour was considerably higher. Like it or not, evaluating the cost associated with I/O-intensive applications requires a comprehensive comparative analysis between the dedicated and virtualized environments.
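The kind of comparison the DOE study performed can be sketched in a few lines: amortize a dedicated cluster's capital and operating costs over its usable core-hours, and adjust the cloud's hourly rate for how well the workload actually scales. All figures below are illustrative assumptions for the sake of the arithmetic, not numbers from the study.

```python
# Back-of-envelope cost-per-core-hour comparison: owned HPC cluster vs.
# public cloud. All inputs are hypothetical, illustrative values.

def dedicated_cost_per_core_hour(capex, annual_opex, years, cores, utilization):
    """Amortized cost per core-hour for an agency-owned cluster."""
    total_cost = capex + annual_opex * years
    usable_core_hours = cores * 8760 * years * utilization  # 8760 h/year
    return total_cost / usable_core_hours

def cloud_cost_per_core_hour(hourly_rate, scaling_efficiency):
    """Effective cloud cost per core-hour: if a tightly coupled job scales
    poorly, more core-hours are burned for the same work, raising the
    effective rate."""
    return hourly_rate / scaling_efficiency

# Hypothetical inputs: $2M cluster, $400K/yr to run, 4-year life,
# 2,048 cores at 80% utilization, vs. a $0.10/core-hour cloud where the
# I/O-bound workload achieves only 50% scaling efficiency.
owned = dedicated_cost_per_core_hour(
    capex=2_000_000, annual_opex=400_000, years=4, cores=2048, utilization=0.8)
cloud = cloud_cost_per_core_hour(hourly_rate=0.10, scaling_efficiency=0.5)

print(f"dedicated: ${owned:.3f}/core-hour, cloud: ${cloud:.3f}/core-hour")
```

The point of the sketch is not the particular numbers but the structure: the scaling-efficiency divisor is exactly the marginal cost a simplified per-unit price comparison leaves out.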
Does a cloud cost-benefit analysis properly incorporate the marginal costs, whether financial or human, of not being able to accomplish the mission? The first half of 2011 was not a stellar period for cloud service providers. Andrew Hickey, in a CRN article (http://www.crn.com/231000954/printablearticle.htm), summarized the most significant outages during the first half of the year:
- Microsoft’s Windows Live Hotmail outage in January
- Jive Software’s outage in January
- Google’s Gmail outage in February
- Intuit’s hosted services in March
- Amazon’s Web Service outage in April
- VMware’s Cloud Foundry in April
- Yahoo Mail in April
- Microsoft’s Exchange Online cloud email services in May
- Microsoft’s Business Productivity Online Standard Suite (BPOS) in May
Data centers have outages, whether they are agency-owned or cloud-provided. Disaster recovery/business continuity planning and implementation costs money, whether through data center redundancy or cloud service provider diversity. Unfortunately, bigger clouds lead to bigger outages.
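The redundancy-versus-diversity trade-off above is, at bottom, availability arithmetic: a multi-provider system is down only when every provider is down at once. The sketch below shows that math under the textbook assumption of independent failures; the availability figures are illustrative, and the independence assumption is precisely what correlated, "bigger cloud, bigger outage" failures violate.

```python
# Illustrative availability math for provider diversity. Assumes provider
# outages are statistically independent, which correlated cloud failures
# can break in practice.

def combined_availability(availabilities):
    """Availability of a system that is up if ANY one provider is up."""
    p_all_down = 1.0
    for a in availabilities:
        p_all_down *= (1.0 - a)  # probability this provider is also down
    return 1.0 - p_all_down

single = 0.995                                 # one provider: ~44 h down/year
dual = combined_availability([0.995, 0.995])   # two independent providers

print(f"single provider: {single:.3%}, dual providers: {dual:.4%}")
```

Two independent 99.5% providers yield roughly 99.9975% combined availability on paper, which is why diversity costs money but buys resilience; if the providers fail together, the benefit largely evaporates.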
Security is another area where the potential cost savings from the FedRAMP program need to be proven before CIOs can accurately factor in those costs.
The CIO quickly discovers that the same types of planning and analysis that were required in fielding legacy systems are needed in planning for the cloud – and the marginal costs have to be fully understood. A comprehensive business case analysis needs to address the full range of mission requirements in pursuing cloud solutions – from government-owned private clouds through public offerings. Those analyses should not come solely from the technology or service providers.