10 questions to ask your cloud provider

November 24, 2010
Grazed from ZDNet.  Author: Colin Smith.

Earlier this month I attended a Red Hat-sponsored Cloud Camp. (Full disclosure: they gave me a hat; guess what colour it is?) One of the most interesting discussions for me centered on Red Hat's recently announced Cloud Engine.

I found the Deltacloud API concept quite compelling. Without getting into the technical details of an as-yet-unreleased product, let me quickly outline the parts I found most exciting by sketching a possible future scenario of how the Cloud Engine could work using the Deltacloud functionality:

Imagine that you have three workloads that have the following business requirements:

Workload   | Security Requirements | Description                                                                | Due Date
-----------|-----------------------|----------------------------------------------------------------------------|-------------
Workload 1 | High                  | Contains customer-identifiable data, credit card numbers, etc.             | End of month
Workload 2 | Medium                | Contains mostly public data                                                 | 4 days
Workload 3 | Low                   | Contains product images that need to be displayed on an eCommerce website  | ASAP

If you feed all of the workloads into the hopper of the Cloud Engine, it would apply policies to assign them to different cloud service queues, and the Deltacloud functionality would prepare each workload for its target cloud service. The output could look something like the following (a rough policy sketch appears after the list):

  • Schedule Workload 1 to be processed on the corporate private cloud during off-peak hours.
  • Encrypt the data and schedule Workload 2 to be processed on the EC2 spot market when prices are 20 percent below retail on day 1, 10 percent below on day 2, 5 percent below on day 3, and at retail on day 4.
  • Schedule Workload 3 to be processed immediately on reserved cloud instances.
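To make the policy idea concrete, here is a minimal sketch of how such rules might look. It is written in Python and is not tied to the actual Cloud Engine or Deltacloud APIs; the queue names, discount schedule and $0.40/hour retail price are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    security: str   # "high", "medium", or "low"
    due_days: int   # days until the work must finish (a fuller engine would weigh this too)

# Declining spot-market bid schedule from the scenario above: 20% below retail
# on day 1, 10% below on day 2, 5% below on day 3, retail on day 4.
SPOT_DISCOUNT_BY_DAY = {1: 0.20, 2: 0.10, 3: 0.05, 4: 0.00}

def max_spot_bid(retail_price: float, day: int) -> float:
    """Highest price we are willing to pay on a given day of the window."""
    return retail_price * (1 - SPOT_DISCOUNT_BY_DAY.get(day, 0.00))

def assign_queue(w: Workload) -> str:
    """Map a workload to a cloud service queue using simple security-based rules."""
    if w.security == "high":
        return "private-cloud-offpeak"          # keep sensitive data in-house
    if w.security == "medium":
        return "public-spot-market-encrypted"   # encrypt, then chase spot prices
    return "public-reserved-immediate"          # low sensitivity, run right away

if __name__ == "__main__":
    workloads = [
        Workload("Workload 1", "high", 30),
        Workload("Workload 2", "medium", 4),
        Workload("Workload 3", "low", 0),
    ]
    for w in workloads:
        print(w.name, "->", assign_queue(w))
    # Day-by-day bid ceiling for Workload 2, assuming a $0.40/hour retail rate.
    for day in range(1, 5):
        print("day", day, "max bid:", round(max_spot_bid(0.40, day), 3))
```

The point is not these particular rules, but that policy, rather than a human operator, decides where each workload lands.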

There will come a day, in the not too distant future, when we will be able to buy cloud service futures (just like pork bellies). As I mentioned earlier, Amazon already has a spot market for EC2 services. I expect the Cloud Engine (and other similar offerings) will be able to manage much more complex workloads and policies and be able to move workloads to many different types of clouds, as required.

The ability of systems like the Cloud Engine to make the best use of available cloud services relies on some level of standardization among cloud providers, so that real comparisons can inform rational choices.

Unfortunately, not all clouds are created equal. It is very difficult to compare cloud service offerings because much of the detail is simply not available. Take a look at the definitions of EC2 instance types and you will see terms like "virtual core", "EC2 Compute Unit" (one EC2 Compute Unit provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or Xeon processor, roughly the same as an early-2006 1.7 GHz Xeon processor), and high/moderate/low I/O performance. These are imprecise measures, which makes it challenging to know what you are purchasing and even more difficult to compare one provider with another.
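As a back-of-the-envelope illustration of why this matters, here is a small sketch that normalizes advertised compute to a rough GHz-equivalent using Amazon's own 1.0-1.2 GHz definition. The provider names and figures are made up for the example, and a provider that only advertises an unspecified "vCPU" simply cannot be compared at all.

```python
# Normalize advertised "compute" to a rough GHz-equivalent scale.
# Per Amazon's definition, 1 EC2 Compute Unit ~ a 1.0-1.2 GHz 2007-era
# Opteron/Xeon core, so we use the 1.1 GHz midpoint. All provider figures
# below are placeholders, not real published specifications.
ECU_GHZ_EQUIVALENT = 1.1

offerings = [
    # (provider, advertised unit, units per instance, GHz-equivalent per unit)
    ("Provider A", "EC2 Compute Unit", 4, ECU_GHZ_EQUIVALENT),
    ("Provider B", "virtual core @ 2.0 GHz", 2, 2.0),
    ("Provider C", "vCPU, clock unspecified", 4, None),
]

for provider, unit, count, ghz in offerings:
    if ghz is None:
        print(f"{provider}: {count} x {unit} -> not comparable without more detail")
    else:
        print(f"{provider}: {count} x {unit} -> ~{count * ghz:.1f} GHz-equivalent total")
```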

There are already several standards bodies involved in defining criteria and metrics for cloud computing. The wonderful thing about standards is that there are always plenty to choose from. Until they reach a consensus, the following list is an attempt to gather enough information to make a reasonable comparison between providers (a small checklist sketch follows the list):

  1. How many locations do you have and how are they connected?
  2. How do you define a processor / virtual core / Compute Unit?
  3. How many IOPS can I expect at each I/O performance level?
  4. How does your memory subsystem score on the STREAM benchmark?
  5. How does your virtualization system score on the SPECvirt benchmark?
  6. How many live copies of my data are there?
  7. How do you back up data?
  8. What is the retention period and recovery granularity?
  9. What happens to my data if I cancel my service?
  10. What is your SLA and how do you compensate when it is not met?
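If you do put these questions to your providers, it helps to track the answers in one place. The sketch below records a provider's responses against the ten questions and flags the gaps; the provider answers shown are placeholders, not real vendor data.

```python
# Track a provider's answers to the ten questions and flag what is missing.
QUESTIONS = [
    "Locations and how they are connected",
    "Definition of a processor / virtual core / Compute Unit",
    "Expected IOPS at each I/O performance level",
    "STREAM memory-bandwidth score",
    "SPECvirt virtualization score",
    "Number of live copies of my data",
    "Backup method",
    "Retention period and recovery granularity",
    "What happens to my data on cancellation",
    "SLA terms and compensation when missed",
]

def coverage(answers: dict) -> float:
    """Fraction of the ten questions the provider has actually answered."""
    return sum(1 for q in QUESTIONS if answers.get(q)) / len(QUESTIONS)

# Placeholder answers for one hypothetical provider.
provider_answers = {
    "Locations and how they are connected": "3 regions, private fibre between them",
    "SLA terms and compensation when missed": "99.95% uptime, service credits",
}

print(f"Questions answered: {coverage(provider_answers):.0%}")
for q in QUESTIONS:
    print(" -", q, ":", "answered" if provider_answers.get(q) else "MISSING")
```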