Where Is the Public Cloud 2.0?

May 2, 2011

Grazed from GigaOM. Author: Derrick Harris.

Cloud computing is no longer just the “next big thing.” It has arrived in the mainstream consciousness, with industry buzz, TV commercials showcasing its power, and the real promise of revolutionizing computing as we know it. But for those of us dancing on the ground trying to make clouds appear out of the clear blue sky, the next-generation public cloud is still just over the horizon.

The current public cloud providers have done an excellent job of bringing innovation and cloud computing technology to the masses. We have seen numerous examples where the public cloud has scaled exceptionally well and provided unparalleled compute power to customers. Cloud computing, however, is not yet a fully evolved technology, and it may take another decade to grow up and deliver on its full potential.

There are a number of features that the next generation of cloud computing should introduce, such as data portability, stricter security measures and more infrastructure transparency. However, I’d like to focus on the fundamental requirements we should aspire to if the public cloud is to transform the industry: higher availability, an order of magnitude more scalability and greater computing flexibility.

The next generation of public clouds will be more available. Infrastructure and services that tolerate hours of failure per year do not live up to the full potential of cloud computing. In an ideal scenario, every server, storage system and service in a public cloud needs to be available 99.999 percent of the time (roughly five and a quarter minutes of downtime per year). While this availability goal is lofty, and some say fundamentally unattainable at reasonable expense, if cloud computing is going to provide the basic infrastructure and services for all compute on the planet, the industry needs to start thinking about a transformational change.
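As a quick sanity check on that figure, here is a small Python sketch that converts an availability percentage into the downtime budget it implies per year:

```python
def downtime_per_year(availability_pct):
    """Return the allowed downtime, in minutes per year."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.9, 99.99, 99.999):
    print(f"{pct}% available -> {downtime_per_year(pct):.2f} minutes of downtime/year")
```

Three nines allows more than eight hours of downtime a year; five nines allows about 5.26 minutes, which is the gap the next-generation cloud has to close.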

Cloud providers should aim for this availability goal at all levels of the cloud: networking, compute, storage and applications. The cloud needs to be available as close to 100 percent of the time as possible, and if that means building N+3 redundant power systems, fault-tolerant networks with sub-millisecond convergence, servers deployed in high-availability clusters, local and network-attached storage with RAID 10, and applications deployed across multiple continents, then let’s embrace this goal as an industry.
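To see why layered redundancy like N+3 pays off, consider a back-of-the-envelope model. Assuming independent component failures (a generous assumption in practice) and purely illustrative numbers, the system stays up as long as no more than the spare count of components fail at once:

```python
from math import comb

def redundant_availability(component_avail, n, spares):
    """Probability that at least n of (n + spares) independent
    components are up, i.e. the system tolerates `spares` failures."""
    total = n + spares
    p, q = component_avail, 1 - component_avail
    return sum(comb(total, k) * p**k * q**(total - k)
               for k in range(n, total + 1))

# Hypothetical example: four power feeds where only one is required
# (N=1, plus 3 spares), each feed itself only 99 percent available.
print(f"{redundant_availability(0.99, 1, 3):.10f}")  # ~0.9999999900
```

Four unremarkable 99-percent components combine, under these assumptions, into roughly eight nines, which is how the industry can reach five-nines targets without five-nines parts.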

Beyond being nearly always on, the next generation of public clouds will need to scale compute capacity by at least one order of magnitude. As of today, there are few public clouds that can scale to handle the largest enterprise IT or web company workloads. Let’s scale public clouds from the purported hundreds of thousands of processors to millions or tens of millions.

This could be achieved by building larger-scale clouds with more processors under the providers’ own roofs, or by more innovative means. For example, much as grid computing did over the past decade, public clouds could harvest distributed compute power the way projects such as SETI@home do, tapping spare processing capacity connected to the Internet. Imagine leaving your computer on at night and getting paid by a public cloud provider for your extra processor cycles, as Plura Processing’s service already suggests. Regardless of the implementation details, public clouds should have grander scaling aspirations and never let compute capacity limit their adoption and use.
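To make the cycle-harvesting idea concrete, here is a minimal sketch of what a volunteer-compute client loop might look like. The broker endpoint, task format and crediting scheme are all hypothetical; real systems such as BOINC (which powers SETI@home) add validation, checkpointing and redundant execution on top of this basic fetch-compute-report cycle:

```python
import json
import time
import urllib.request

# Hypothetical endpoint of a provider that buys spare cycles.
BROKER_URL = "https://cycles.example-cloud.com/api"

def fetch_task():
    """Ask the broker for a unit of work."""
    with urllib.request.urlopen(f"{BROKER_URL}/task") as resp:
        return json.load(resp)

def report_result(task_id, result):
    """Send the computed result back; the broker credits our account."""
    body = json.dumps({"task_id": task_id, "result": result}).encode()
    req = urllib.request.Request(f"{BROKER_URL}/result", data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def run_task(task):
    """Placeholder for the computation shipped with the task."""
    return sum(task["numbers"])  # trivially summing a payload, for illustration

while True:
    try:
        task = fetch_task()
    except urllib.error.URLError:
        time.sleep(60)  # nothing available or broker unreachable; back off
        continue
    report_result(task["id"], run_task(task))
```

The economics, not the plumbing, are the hard part: the provider has to price untrusted, intermittently available cycles low enough to attract buyers and high enough to attract sellers.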

Lastly, the next generation of cloud computing should be flexible enough to run any operating system, application, database or storage system that users require. Today’s public cloud is flexible, but not as flexible as a compute server bought from a vendor such as Dell, IBM or HP. On your own server you can run nearly any program you want: MySQL, IBM DB2, Oracle, SAP, the Microsoft CIFS file system, raw partitions, NFS-mounted disks. On many public clouds, however, you are restricted to a specific set of operating systems, programming languages, databases, storage file systems and networking capabilities. For many public cloud users these environments work just fine, but they do place restrictions on the applications that can run in the public cloud. Greater flexibility is critical for cloud computing to reach its full potential as a transformational technology.

So, picture this: your public cloud provider almost never fails, scales processing power beyond anything you will ever need, and lets you run any service or application. Public cloud 2.0 might look like a bright sunny day after all.