Cloud Creates New Focus on Infrastructure, Operations Planning

July 20, 2011
Grazed from IT Business Edge. Author: Arthur Cole.

It’s been said many times that the cloud represents a fundamental shift in enterprise operations. Yet most of the conversation revolves around the myriad ways the cloud will broaden existing data center functionality.

The implication is that cloud providers will bend over backward to tailor their infrastructure to existing enterprise needs. But that’s true only up to a point. The other side of the coin is that today’s enterprises will have to make substantial changes, both technological and operational, to leverage the cloud to its full potential.

On the plus side, many of the changes the cloud requires are already under way, or at the very least are ripe for improvement anyway. Certainly, virtualization is a major enabler for the cloud — not an absolute necessity, but for all practical purposes the main engine that drives cloud functionality. High-speed networking is another, considering that the success or failure of any given application depends not on the accumulation of resources but on the efficient transfer of data to and from those resources.

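That dependence on data transfer is easy to quantify. As a rough illustration (the dataset size, link speeds and 70 percent effective-throughput figure below are assumptions, not benchmarks), moving even a modest dataset to or from a cloud provider is gated almost entirely by the network:

```python
# Back-of-envelope: why network speed, not resource capacity, often gates
# cloud workloads. All figures are illustrative assumptions.

def transfer_hours(dataset_gb, link_gbps, efficiency=0.7):
    """Hours to move a dataset over a link at a given effective throughput."""
    gigabits = dataset_gb * 8
    return gigabits / (link_gbps * efficiency) / 3600

for link in (1, 10, 40):  # common Ethernet speeds, in Gbps
    print("5 TB over %2d Gbps: %5.1f hours" % (link, transfer_hours(5000, link)))
```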

Are You Ready?

So how can you tell whether you are really ready for the cloud? According to Peggy Canale, government segment manager at Emerson Network Power’s Avocent division, the first step is knowing exactly what you’ve accumulated already.

"There are four areas we examine to ensure that the physical infrastructure is ready for the cloud," she says. "First, you need tight control and management over your current assets. If you don’t know what you have, you are starting in a cloud-risky posture. Second, performance data along with the ability to access all rack-level devices, no matter how remote, is necessary for a cloud-ready environment."

"Third," she added, "because a cloud’s virtualization technology means that a dynamic compute environment is built on a static foundation, you must be able to maintain ‘headroom’ to handle normal peaks in demand, but also the odd spikes from sudden and intense cloud utilization. Last, the cloud is all about flexibility and scalability. More than half of the organizations considering the cloud are worried about how it will affect availability and performance. You need to be able to fully visualize the effects of change on load, power and devices to plan confidently."

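Canale’s third point reduces to a simple capacity check once the asset inventory from her first point exists. Below is a minimal sketch, assuming a hypothetical inventory of rated versus observed rack power draw; the rack names and the 25 percent spike allowance are illustrative, not prescriptive:

```python
# Minimal headroom check: flag racks whose rated capacity cannot absorb a
# sudden spike on top of the normal observed peak. All data is invented.

SPIKE_ALLOWANCE = 0.25  # headroom reserved beyond the normal peak (assumed)

racks = [
    # (rack id, rated power in kW, observed peak draw in kW)
    ("rack-a1", 12.0, 8.1),
    ("rack-a2", 12.0, 10.6),
    ("rack-b1", 16.0, 9.4),
]

for rack_id, rated_kw, peak_kw in racks:
    needed_kw = peak_kw * (1 + SPIKE_ALLOWANCE)
    status = "ok" if needed_kw <= rated_kw else "AT RISK"
    print("%s: peak %.1f kW, needs %.1f of %.1f kW -> %s"
          % (rack_id, peak_kw, needed_kw, rated_kw, status))
```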

Because the cloud makes it possible to scale resources to meet fluctuating data needs, the days of having to over-build internal resources to meet peak demand are coming to an end. From an operational perspective, then, there will need to be greater integration between IT and facilities management to more accurately match available resources to daily data requirements.

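The economics behind that shift are easy to sketch. With an invented diurnal demand curve and an assumed cost per server-hour, the gap between provisioning for peak and scaling with demand looks like this:

```python
# Toy comparison of static peak provisioning vs. demand-matched scaling.
# The hourly demand curve and the cost figure are invented for illustration.

hourly_demand = [4, 3, 3, 2, 2, 3, 5, 8, 12, 14, 15, 16,
                 16, 15, 14, 14, 13, 12, 10, 8, 7, 6, 5, 4]  # servers needed
COST_PER_SERVER_HOUR = 0.50  # assumed, in dollars

static_cost = max(hourly_demand) * len(hourly_demand) * COST_PER_SERVER_HOUR
elastic_cost = sum(hourly_demand) * COST_PER_SERVER_HOUR
idle_share = 1 - sum(hourly_demand) / float(max(hourly_demand) * len(hourly_demand))

print("provisioned for peak: $%.2f/day" % static_cost)
print("scaled to demand:     $%.2f/day" % elastic_cost)
print("idle capacity paid for under static provisioning: %.0f%%" % (idle_share * 100))
```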

"Before data center managers can truly have a cloud-ready infrastructure, they must first establish a daily working relationship with the facility managers," Canale says. "Historically, the contact between these two teams has been contained to infrequent meetings on long-term requirements and planning. Emerson Network Power is about giving these two groups the common set of tools and a holistic view that bridges what IT needs and what facilities can deliver."

Letting Go

One of the primary changes the cloud will bring to the data center is the need for increased automation. That trend was already under way as enterprises sought to leverage their virtual infrastructures, and the cloud is expected to kick it into high gear. With the extremely high degree of data fluidity in the cloud, advanced techniques like real-time information gathering and network configuration management are becoming crucial — again, provided you intend to use the cloud as a dynamic resource rather than simply another storage tier.

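In practice, that automation takes the shape of a reconcile loop: poll device state, diff it against the declared configuration, and queue the fix. The sketch below is a minimal illustration with invented device names and a canned polling stub; a real deployment would sit on top of an actual monitoring and configuration-management system:

```python
# Minimal sketch of a configuration-reconciliation loop. Device names, the
# polled attribute, and the canned poll results are invented for illustration.

declared_config = {"switch-01": {"mtu": 9000}, "switch-02": {"mtu": 9000}}

def poll_device(name):
    """Stand-in for an SNMP or API poll of a real device."""
    return {"mtu": 1500 if name == "switch-02" else 9000}  # canned sample data

def reconcile():
    """Return a remediation action for every setting that drifted."""
    actions = []
    for name, wanted in declared_config.items():
        observed = poll_device(name)
        for key, value in wanted.items():
            if observed.get(key) != value:
                actions.append("reset %s on %s to %s" % (key, name, value))
    return actions

# A daemon would run this on a timer; one pass shows the shape.
for action in reconcile():
    print("queued:", action)  # a real system would push the change itself
```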

There’s been a tendency among CIOs of late to try to leverage existing management infrastructure for the virtual and cloud layers. Those who try are quickly finding that the two don’t match up very well. In the physical realm, managing individual systems and boxes was crucial. The cloud turns that notion on its head by shifting the focus onto data and load management: the goal is not so much to manage the resources at your immediate disposal as to ensure the most efficient and effective data environment.

It’s also true that the sheer complexity of the cloud is simply too much for human operators to handle. As unnerving as it is for CIOs to allow machines to control other machines, that’s pretty much where we’re headed — leaving humans to set the broad policies that oversee the automated environment.

"It’s too difficult to go to individual devices and templates," says Jeff Baher, senior director of product marketing at Force10 Networks. "A cookie-cutter approach is required to eliminate human error. It’s critical to make the complex simple, and it’s critical to enable self-service allowing you or me to go to the Web with a credit card and get compute cycles without an operator having to configure a box."

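Baher’s cookie-cutter approach is, at bottom, templating plus policy. Here is a minimal sketch, with an invented template, request format and policy cap (no vendor API implied), of how a self-service request becomes a configuration without an operator in the loop:

```python
# Minimal sketch of template-driven self-service provisioning: a standard
# template plus an automated policy check replaces per-box hand configuration.
# The template text, request fields and the vCPU cap are all invented.

TEMPLATE = """\
hostname {name}
vlan {vlan}
vcpus {vcpus}
"""

POLICY_MAX_VCPUS = 32  # the broad policy humans set; machines enforce it

def provision(request):
    if request["vcpus"] > POLICY_MAX_VCPUS:
        raise ValueError("request exceeds policy; no manual override here")
    return TEMPLATE.format(**request)

# A self-service portal would collect this from the user (credit card and all).
print(provision({"name": "web-42", "vlan": 210, "vcpus": 8}))
```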

The cloud is starting to have an impact on hardware designs as well. In many cases, the tag line "cloud-ready" is merely marketing fluff designed to tap into a growing movement. But increasingly, manufacturers are having to rethink long-standing architectural philosophies because the relationships among users, applications and data are changing.

A case in point is Dell’s new PowerEdge C6145, which company executives say was a response to the cloud’s growing demand for high-density, low-cost operation.

"Starting about four or five years ago, a group of our customers starting designing hyperscale (infrastructure)," says Barton George, cloud computing and scale-out evangelist at Dell. "We’d dealt with scale-out before, but this was scale-out on steroids — up to 10,000 servers at one time. The difference was that they weren’t just deploying more servers but a completely different architecture. In general-purpose computing, things like resiliency, availability and redundancy were built into hardware. But now it was being built into the software. So when we tried to sell into these hyperscale accounts, it wasn’t a good fit. We were burdened with resiliency and availability in hardware, and to them it was an added cost."

 

"So we realized that serving this new class of customer required a completely different approach to designing systems," he added. "We needed to strip out all resiliency and availability in hardware and then double down on other things like ToC, density, the amount of storage, number of nodes. We also had to be parsimonious in power and cooling — it was all about energy efficiency and how much revenue per square foot we could generate."

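George’s point about resiliency migrating into software has a simple shape in code. The toy sketch below (node names and the replication factor are invented) shows the kind of software-side redundancy hyperscale operators lean on instead of dual power supplies and RAID controllers: write every object to several cheap nodes and treat a node failure as routine.

```python
# Toy sketch of resiliency in software rather than hardware: replicate each
# object across cheap, non-redundant nodes and read any surviving copy.
# Node names and the replication factor are invented for illustration.

import hashlib

NODES = ["node-%02d" % i for i in range(8)]  # cheap boxes, no hardware RAID
REPLICAS = 3

def placement(key):
    """Deterministically pick REPLICAS distinct nodes for a key."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICAS)]

def read(key, down_nodes):
    """Serve from any surviving replica; node loss becomes a non-event."""
    for node in placement(key):
        if node not in down_nodes:
            return "%s served by %s" % (key, node)
    raise RuntimeError("all replicas lost")

replicas = placement("user/1234/avatar.png")
print("stored on:", replicas)
print(read("user/1234/avatar.png", down_nodes={replicas[0]}))
```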

Both virtual and cloud architectures are still evolving, so it is almost certain that traditional data center architectures will grow and change as well. In the end, they form a symbiotic relationship in which underlying hardware and infrastructure form the basis for new capabilities in the cloud, which in turn drives new hardware and infrastructure — sort of like the old hardware/software paradigm.

The challenge will be to manage this relationship in a way that maximizes each side’s primary advantages without over-extending your reliance on any one data architecture.