The bandwidth risk in cloud computing
January 5, 2012. Grazed from FierceCIO. Author: Caron Carlson.
Security concerns continue to dampen enterprise eagerness to move applications to the cloud, but there is another cloud computing risk that can cause serious trouble if not anticipated: bandwidth bottlenecks. For an in-depth look at this less-discussed risk, see an article by Sandra Gittlen at Computerworld.
InterContinental Hotels Group has moved storage and in-house mobile phone applications to cloud services in recent years. The move has saved money and improved customer service, leading to a current project to migrate the room reservation system to the cloud. Despite the positive experience, IHG CIO Tom Conophy is quite clear that a successful cloud implementation requires careful attention to bandwidth needs…
"If your employees and your users can’t access data fast enough, then the cloud will be nothing more than a pipe dream," Conophy said.
If an application requires communication among data centers, it is susceptible to slower performance or an outage. This consideration is lost on many businesses looking at cloud solutions, said Theresa Lanowitz, founder of analyst firm Voke, Inc. The key to ensuring applications, storage and backups perform sufficiently is to test the infrastructure in an integrated environment rather than in silos, she advises.
"It’s no longer about delivering an application that is great; it’s about whether that application can survive in the wild," she said. "You have to examine the maximum use the cloud-based application and network will sustain."
IHG’s cloud network relies on three main data centers to support users around the world. Additional data centers are located strategically to help maximize the user experience. Guests connect to the hotel using smartphones, tablets and other devices.
The challenge, according to Conophy, is keeping reservation data and guest profiles synchronized across data centers. IHG accomplishes this by synchronizing Java Virtual Machines with the Terracotta Enterprise Suite and by distributing caches throughout the data centers. "It’s basically a repository that lets us do data shifting from a primary database across multiple nodes," he said. As a result, users get data access that is 50 to 100 times faster.
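To see why node-local caches speed things up, here is a minimal toy sketch of the read-through caching pattern Conophy describes: each node keeps a local cache in front of a shared primary store, so repeat reads are served locally instead of going back to the primary database across data centers. The class names (`PrimaryStore`, `NodeCache`) are illustrative assumptions, not the actual Terracotta API.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheSketch {
    // Stand-in for the primary reservations database (an expensive,
    // possibly cross-data-center hop). Counts how often it is read.
    static class PrimaryStore {
        private final Map<String, String> data = new HashMap<>();
        int remoteReads = 0;
        void put(String key, String value) { data.put(key, value); }
        String get(String key) { remoteReads++; return data.get(key); }
    }

    // Stand-in for a node-local cache that reads through to the primary
    // on a miss and keeps the result for later lookups.
    static class NodeCache {
        private final Map<String, String> local = new HashMap<>();
        private final PrimaryStore primary;
        NodeCache(PrimaryStore primary) { this.primary = primary; }
        String get(String key) {
            return local.computeIfAbsent(key, primary::get);
        }
    }

    public static void main(String[] args) {
        PrimaryStore db = new PrimaryStore();
        db.put("guest:42", "profile-data");

        NodeCache nodeA = new NodeCache(db);
        // 100 reads of the same guest profile: only the first one
        // misses the local cache and reaches the primary store.
        for (int i = 0; i < 100; i++) {
            nodeA.get("guest:42");
        }

        System.out.println(db.remoteReads); // prints 1
    }
}
```

The 99 cache hits cost a local map lookup rather than a trip to the primary database, which is the kind of gap behind the "50 to 100 times faster" figure cited above.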