Open Compute Plus Open Data Center = A Better Cloud?

September 29, 2011 | By David
Grazed from CTO Edge.  Author: Julius Neudorfer.

There was an interesting “mind meld” announced at the Intel Developer Forum 2011 conference a few weeks ago. During the event, held in conjunction with the Open Data Center Alliance (ODCA), Frank Frankovsky, director of technical operations at Facebook, came onto the stage and announced that Facebook’s Open Compute Project would be collaborating with the ODCA to accelerate the development and adoption of cloud computing standards.

This is a somewhat unexpected alliance of seemingly very different cultures. The ODCA was formed in 2010 as a unique consortium of leading end-user global IT organizations. It is led by a steering committee of senior IT executives from major, and yet diverse, organizations such as BMW, Deutsche Bank, JPMorgan Chase, Lockheed Martin, Marriott International and Disney Technology Solutions, with Intel serving as the organization’s technical advisor. The ODCA has grown rapidly, reaching more than 300 members in less than two years. In June of this year it published its Open Data Center Usage Models, essentially a set of goals for vendors to adhere to; these member-defined requirements document the most pressing challenges and needed solutions for cloud deployment.

Unlike the ODCA, the Open Compute Project was formed primarily as a result of Facebook’s quest for improved energy efficiency at its new Prineville, Ore., data center. The Facebook design team did not limit itself to standard IT equipment, nor, for that matter, to industry-standard racks. Moreover, the data center itself did not rely on standard power or even the usual computer room air conditioners (CRACs). While Facebook was not the first to use custom servers and proprietary racks (Google has done that for many years), it shared the details by creating the Open Compute Project.

Yet, for all the obvious differences in their cultures, the two organizations have come to some interesting, independently derived, parallel conclusions in their quests for more efficient open computing, and in particular for the physical data center itself. Facebook’s Prineville data center, commissioned earlier this year, was designed to avoid mechanical cooling and its attendant energy cost. It uses outside air and adiabatic cooling, which involves injecting a water mist directly into the airflow entering the data center. Certainly this is not your typical enterprise data center. Oddly enough, independently of the Facebook design, Deutsche Bank built a smaller prototype data center in the New York City area in March, using a somewhat similar outside-air and adiabatic cooling design. It seems somewhat fitting that, in the near future, data centers designed for cloud computing may be using clouds (water mist) as their preferred method of cooling.
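To make the idea concrete, here is a minimal sketch (not from the article) using the standard direct evaporative cooling effectiveness relation, which estimates how far a misting stage can pull incoming air down toward its wet-bulb temperature; the effectiveness figure and the weather conditions below are assumed purely for illustration, not Facebook's or Deutsche Bank's actual numbers.

# Illustrative sketch: direct evaporative ("adiabatic") cooling lowers the
# dry-bulb temperature of incoming air toward its wet-bulb temperature.
# Effectiveness and temperatures are assumed example values.

def supply_air_temp(t_dry_bulb_c: float, t_wet_bulb_c: float,
                    effectiveness: float = 0.9) -> float:
    """Approximate supply-air temperature after a direct evaporative stage:
    T_supply = T_db - effectiveness * (T_db - T_wb)
    """
    return t_dry_bulb_c - effectiveness * (t_dry_bulb_c - t_wet_bulb_c)

# Example: a warm, dry afternoon (assumed conditions, deg C).
t_db, t_wb = 32.0, 17.0
print(supply_air_temp(t_db, t_wb))  # ~18.5 C, with no mechanical chiller

On a dry day the spread between dry-bulb and wet-bulb temperature is large, which is why this approach works well in climates like central Oregon's without resorting to compressor-based cooling.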

As more organizations seem to want to compute without worrying about the computer, cloud computing has become the solution du jour. This has the feel of "Back to the Future" in its resemblance to the early days of mainframes, with timesharing and VT sessions on dumb terminals, where many users never knew or cared where the data center was or what the computing hardware looked like.

Certainly not all clouds look and feel alike, and the ODCA is trying to unify the cloud around a set of usable standards. This would allow applications and data to move seamlessly between private and public clouds, leading toward a true "utility" computing experience with a consistent and portable environment, even though workloads may no longer reside on any specific physical server or in any particular data center.

As it stands now, this is not really the case. In order to help achieve that, the stated goals of the collaboration are to: 

  • Accelerate efficient server, storage and data center infrastructure
  • Leverage complementary organization charters
  • Collaborate on technical specifications and usage model requirements

Of course, the hardware still actually needs to exist in a data center "somewhere," but many cloud customers no longer care what brand of hardware is used, what it looks like, or even whether it is standard IT equipment or custom-made servers such as the ones Facebook designed for its own data center. For that matter, some users may not know or even care where the data center is located (the cloud is everywhere, right?).

And speaking of hardware, Kirk Skaugen, Intel vice president and general manager of the Datacenter and Connected Systems Group, presided over the forum. He was clearly pleased by the alliance; in fact, he remarked that Facebook was one of Intel’s top 10 volume customers.

The Bottom Line

The potential impact of this new alliance should not be underestimated or overlooked. The fact that the conservative, belt-and-suspenders enterprise types of the ODCA are joining forces with the more free-thinking "social media" types behind Facebook’s Open Compute Project may bring about substantial changes. These could soon be seen in new designs for the data center and the computing hardware, as well as in the virtualization and cloud computing architecture itself.

Who knows, perhaps in the near future you may need to “friend” your banker to get a loan, but will the money be virtual currency in the cloud or actual coin of the realm?