Ten things NOT to do in the cloud

July 26, 2011, by David
Grazed from Cloud Pro.  Author: Adrian Bridgwater.

There is a disproportionate amount of ‘industry chatter’ at the moment centred on what we should do with the new powers afforded to us by the cloud computing model of IT delivery. With vendors keen to talk up the public cloud’s suitability for what appears to be every possible deployment scenario, the reality of exactly what, where and when public cloud works well demands a somewhat more considered analysis…

A quick refresher course in high-school logic theory and advanced mathematics does not throw up a suitable theorem or formula for proving truths via the definition of untruths. So whether we call this ‘contrapositive theory’ or the simple act of dispelling myths, if we can agree on what NOT to do in the cloud, we stand a far better chance of working out what we should do.

1 – HEAVYWEIGHT INPUT/OUTPUT APPLICATIONS
Let’s start with a good solid truism. Software systems such as databases that require high physical I/O performance are, generally speaking, not best suited to shared infrastructure on which you will be contesting with other tenants for the same finite hardware performance.
 
2 – COMPLEX, SENSITIVE MISSION-CRITICAL DATA
Sensitive data is, in simple terms, not best suited to shared multi-tenancy cloud computing. Yes, the cloud is safe; there are security controls and firewalls that will provide customers with adequate levels of protection. But fundamentally, the risk is mitigated most simply by keeping the data in question on a private server.
 
3 – CONSISTENT 24×7 WORKLOAD
If an application workload is flat and unchanging, then public cloud computing is hardly a best-practice route towards maximised financial benefit.

According to Nigel Beighton, chief technology officer at Rackspace: "Public cloud computing, with utility PAYG (pay as you go) charging, offers immense flexibility; but if your workload is constant and unvarying then the ‘usage charge’ model is unlikely to be advantageous."

“A flat and unchanging IT requirement logically means that you can buy the appropriate amount of on-premise hardware to fit the bill. Using the cloud in this instance is not necessarily a problem, but it is really not prudent or efficient. For example, if you know you need a car seven days a week, you don’t go out and hire one every morning now do you? It is the same concept. The cloud is there to help cope with unpredictability and changeability, not computing scenarios characterised by stability,” added Beighton.
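Beighton’s car-hire analogy can be put into back-of-the-envelope numbers. The prices below are entirely made up for illustration; the point is only the shape of the comparison, not the figures themselves.

```python
# Hypothetical prices, purely illustrative: compare PAYG cloud billing with a
# fixed on-premise server for a flat 24x7 workload versus a bursty one.
HOURS_PER_MONTH = 730
CLOUD_RATE = 0.12          # assumed $ per instance-hour (PAYG)
ONPREM_MONTHLY = 60.0      # assumed amortised $ per month for one equivalent server

def cloud_cost(busy_hours, instances=1):
    """PAYG: you pay only for the hours the instances actually run."""
    return busy_hours * instances * CLOUD_RATE

def onprem_cost(instances=1):
    """Fixed capacity: you pay for the box whether it is busy or idle."""
    return instances * ONPREM_MONTHLY

# Flat 24x7 workload: the 'usage charge' model loses.
flat_cloud = cloud_cost(HOURS_PER_MONTH)      # 730 * 0.12 = 87.60
assert flat_cloud > onprem_cost()

# Bursty workload: roughly 8 busy hours a day, idle otherwise -- PAYG wins.
bursty_cloud = cloud_cost(8 * 30)             # 240 * 0.12 = 28.80
assert bursty_cloud < onprem_cost()
```

With different assumed rates the crossover point moves, but the conclusion Beighton draws survives: a workload that is busy round the clock is exactly the case where paying by the hour stops being a bargain.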

4 – STRINGENT AUDIT & COMPLIANCE
Some data regulations and compliance rulings require the physical auditing of where a customer’s data and processes sit. By “where” we really do mean where: street address, floor, server room number, blade rack and disk partition. But the public cloud is a virtual world, in which it is often not possible to identify (or the public cloud provider cannot disclose) which actual physical server or disk the processes and data reside on.

5 – A SOFTWARE LICENCE BRICK WALL
It’s a plain and simple fact: far too many software licences have still not evolved to accommodate cloud computing. “Firms need to go round, over or under the perceived wall by leveraging a technology and licensing model that already enables on-premise or online licensing solutions to provide flexibility,” says Angus Lyon, enterprise strategy architect, Microsoft UK.


Nathan Marke, CTO of 2e2, suggests that the problem is rather more in-your-face: existing maintenance, support and managed service contracts can act as a barrier to moving IT services into the cloud. Citing recent research from Vanson Bourne, in which 57 percent of organisations said such contracts would lead to delays in deploying some cloud services, Marke says this isn’t surprising considering that organisations often have three- to five-year fixed-term contracts. "Organisations do need to be wary of migrating to the cloud when they may still be committed to legacy contracts that they won’t now utilise,” he says.

6 – NO VERTICAL TAKE OFFS, CLOUD GROWTH IS HORIZONTAL
Applications that can only scale by increasing the performance of the server they are running on (i.e. vertical growth) are not really able to exploit the fundamental benefit of cloud; we need the right vehicle for the job in hand. “Don’t use a helicopter when you need a Hercules,” says Steve Eveleigh, product marketing manager at on-demand computing and communications company Star.

“Multi-tenanted/public cloud has its place, but will not meet all business requirements. Some applications need to be integrated and tailored to specific company needs; with dedicated/private cloud you can achieve this. The new breed of cloud-savvy CIOs understand business requirements and make business-focused decisions on whether public, private or hybrid are the right choice per application,” adds Eveleigh.

Michael Newberry, Windows Azure lead at Microsoft UK, points out that since multicore chips became mainstream, performance growth has generally come from the adoption of concurrency. Apps that cannot scale concurrently (those that are single-threaded, for example) will not run faster with more cores, regardless of whether those cores are on-premise or in the cloud, he says. “But scalability is not the only benefit of cloud. There are other good reasons, such as rapid provisioning or agility, for moving something into the cloud even if the application itself does not scale linearly,” says Newberry.
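The horizontal-versus-vertical distinction Newberry describes comes down to whether a job can be split into independent chunks. A minimal sketch (using a thread pool purely to keep the example self-contained; in a real cloud each chunk would run on its own instance):

```python
# Illustrative sketch: a horizontally scalable workload is one that can be
# split into independent chunks, fanned out to many small workers, and merged,
# as opposed to a single-threaded job that only goes faster on a bigger box.
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def count_primes(start, stop):
    """Count primes in [start, stop): one independent chunk of the work."""
    return sum(1 for n in range(start, stop) if is_prime(n))

def count_primes_scaled_out(limit, workers=4):
    """Split the range into chunks and fan them out across workers."""
    step = limit // workers
    ranges = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    starts, stops = zip(*ranges)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, starts, stops))

# The scaled-out answer matches the serial one; only the elapsed time changes.
assert count_primes_scaled_out(10_000) == count_primes(0, 10_000)
```

A genuinely single-threaded application has no equivalent of `count_primes_scaled_out`: adding more workers (or cloud instances) gains it nothing, which is exactly Newberry’s point.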

7 – AVOIDING OVERLY CLOSED SOURCE CLOUDS
Open source evangelists of course argue that the core issue of escalating expenses associated with any vendor is inherently caused by vendor lock-in itself. If we accept this as truth, then there are ramifications for the cloud, says Aram Kananov, product marketing manager EMEA, platform and cloud, at Red Hat.

“We see such tactics adopted by the majority of vendors who base their technology stacks on closed proprietary software and proprietary standards. This is why the major players in the emerging cloud space often base their stacks on open source software and open standards, to prevent any particular vendor dictating their future technology decisions. It is also the reason why the vast majority of public cloud service providers run their infrastructures on the Linux operating system. Indeed, the new emerging cloud applications are extensively using open source software,” he said.

 

8 – CRITICAL SEPARATION
Just as application performance needs to exploit the inherent cloud feature of ‘scale through replication’, so does application resiliency: a single instance on any public cloud is vulnerable to all the aspects of a shared public service. Turning once again to Star’s Steve Eveleigh for comment: “The cloud-savvy CIOs will work with cloud services providers who ‘get this’ and will work to mitigate this risk.”
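What ‘resiliency through replication’ means in practice is that no single instance is ever load-bearing on its own. A minimal, purely hypothetical sketch of the client side of that idea, failing over across a list of replicas:

```python
# Hypothetical sketch: resiliency through replication. Rather than depending
# on one instance, the caller tries each replica in turn, so any single
# instance going down on shared public infrastructure is survivable.
def call_with_failover(replicas, request):
    """Try each replica in turn; return the first successful response."""
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError as err:
            last_error = err          # this replica is down; try the next one
    raise RuntimeError("all replicas failed") from last_error

# Simulated replicas: the first is down, the second answers.
def dead_replica(req):
    raise ConnectionError("instance unavailable")

def live_replica(req):
    return f"ok: {req}"

assert call_with_failover([dead_replica, live_replica], "ping") == "ok: ping"
```

Real deployments push this below the application (load balancers, health checks, multi-zone placement), but the principle is the same: run more than one copy, and never assume any particular copy is up.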

 

9 – STANDARD BUILD GOOD, CUSTOM BUILD BAD
Cloud computing does not necessarily suit an environment where a customer has an extremely complex ‘custom-build’ job for every software application that is going to be put into live production and use. “Cloud computing environments have much lower running costs in part because they are standardised. So applications that require non-standard hardware will not necessarily benefit from the same economies of scale,” says Microsoft UK’s Michael Newberry.

“Software designers need to consider extensibility and configurability when they design applications so software can service individual customer requirements without necessarily needing to be a custom build. This enables them to take advantage of standardised infrastructures,” adds Newberry.
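Newberry’s ‘configurability, not custom build’ point can be sketched in a few lines. All names and settings below are hypothetical; the shape is what matters: one standard build, with per-customer behaviour supplied as data rather than as a forked codebase.

```python
# Hypothetical sketch: one standard application serving many tenants.
# Each tenant supplies only its deltas as configuration data; nobody
# gets a custom build of the code itself.
DEFAULTS = {"currency": "GBP", "page_size": 20, "features": frozenset()}

def make_tenant_config(overrides):
    """Merge a tenant's overrides onto the standard defaults."""
    cfg = dict(DEFAULTS)
    cfg.update(overrides)
    return cfg

# A tenant that wants euros and an extra feature, but standard paging.
acme = make_tenant_config({"currency": "EUR", "features": {"audit_log"}})
assert acme["currency"] == "EUR"      # tenant-specific override
assert acme["page_size"] == 20        # standard default preserved
```

Because every tenant runs the same binary on the same standardised infrastructure, the provider keeps the economies of scale Newberry describes, while each customer still gets behaviour tailored to its needs.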

 

10 – CRUMBLING FOUNDATIONS, PALTRY PORTABILITY
The cloud may be ‘up there’ and the new compute resources it offers may be strong, but we still need to have strong IT foundations on the ground if we are to look skywards. This means that it’s not a great idea to move to the cloud if your underlying network is a flaky archaic collection of resources built around legacy applications that are closely tied to obsolete equipment – disentangling them might be a tough ask. If you’ve got a Roman ruin here on Earth, then not even Jupiter, Zeus and Apollo can get you into the cloud.

Analogies aside, the small print governs the precision-engineered elements of whether a cloud migration will enjoy fluid portability; and at this time, we are still desperately in need of hard and fast standards in this space. From portability of cloud-to-cloud applications, to bringing cloud-based workloads back into the enterprise data centre if circumstances change, portability is paramount.

Now you know what not to do…
While this list may not form an exhaustive guide to how to formulate a complete cloud deployment/migration strategy, it should provide a few key pointers as to which cracks in the pavement not to step on — or which dark and stormy cloud to navigate away from, if you absolutely insist on a more virtualisation-compliant analogy.