Cloud Transactions: Timing as a Service (TaaS)

March 29, 2012
Grazed from Sys Con Media. Author: James Carlini.

Many organizations are starting to look at cloud computing as a universal solution, but many applications cannot be considered for the cloud unless its framework includes Timing as a Service (TaaS)© as part of its fabric. Mission-critical applications require security measures, including encryption, monitoring, and redundancy/resiliency, but that is not enough.

If cloud computing is going to spread to more mission-critical applications, it needs to become more exact and accurate in handling transaction-based applications. Keeping everything within a structured framework will require a more rigorous network infrastructure, with timing resolved down to milliseconds, if not nanoseconds.

Financial transactions, already generated by "robotic traders," are sent across networks at a very rapid rate. When thousands upon thousands of transactions are generated within a few seconds, you need to be able to sort them out if something goes wrong and you want to replicate (or reconstruct) the event.

A perfect example of this was the "Flash Crash" of May 6, 2010, when the Dow plunged roughly 1,000 points within minutes. There was never a good explanation at the time, because the sequence of events that triggered the drop could not be fully reconstructed from the trading records.

To reconstruct thousands of transactions fired off within microseconds of one another, the timing of the network would have to come from a single source and resolve down to nanoseconds, so that each transaction could be tied to a unique time element with a unique time stamp.
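To make this concrete, here is a minimal sketch, in Python with illustrative names, of what single-source stamping might look like within one process: every transaction is stamped from the same monotonic nanosecond clock and given a sequence number as a tie-breaker, so the stream can later be replayed in exact order. (Across machines, the single source would in practice be a GPS- or PTP-disciplined clock per IEEE 1588; distributing that clock is beyond this sketch.)

import time
import itertools
from dataclasses import dataclass, field

# Illustrative sketch: stamp each transaction from ONE clock source so the
# event stream can later be reconstructed in exact order. A monotonic
# nanosecond counter plus a sequence number keeps every stamp unique even
# if two transactions land on the same nanosecond tick.

_seq = itertools.count()

@dataclass(order=True)
class StampedTransaction:
    ts_ns: int                          # nanoseconds from the shared clock
    seq: int                            # tie-breaker for identical ticks
    payload: str = field(compare=False) # the transaction itself

def stamp(payload: str) -> StampedTransaction:
    return StampedTransaction(time.monotonic_ns(), next(_seq), payload)

def reconstruct(events):
    # Sorting by (ts_ns, seq) replays the stream in the order it occurred.
    return sorted(events)

if __name__ == "__main__":
    burst = [stamp(f"order-{i}") for i in range(5)]
    for tx in reconstruct(burst):
        print(tx.ts_ns, tx.seq, tx.payload)

The sequence number is not decoration: even a nanosecond clock can hand two transactions the same tick, and the uniqueness of the stamp is what makes the reconstruction unambiguous.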

Mission-Critical Applications? Timing Is a Must
Currently, many mission-critical applications are not viewed as candidates for cloud computing, but for them to have any chance of being implemented there, "timing as a service" must be part of the infrastructure on which they would reside.

With this type of capability, a whole new set of applications could be included in product offerings from cloud computing vendors. The trouble is that once a company offers this, it needs to ensure it has enough solid technology support people who understand the intricacies of mission-critical applications.

When you think of bulletproof networks, you think of the old Bell System, with its layers of technical support and seemingly unlimited money to build central-office mock-ups that replicated field problems, which well-trained experts would then diagnose. As they say, "those were the days."

Some have compared companies' adoption of electrical power in the industrial age to companies' use of computers today. At first, companies generated their own electrical power inside their buildings, but they eventually opted to buy power from outside sources and retire their internal generators.

Some refer to cloud computing as a new "utility" today, but that is a little premature. Before it can be considered a utility, it needs a much stronger, bulletproof platform.

One of the problems industrial-age companies initially had in converting from an internal power source to an outside one (the power company) was measuring the power they were actually getting. Were they being charged an appropriate fee, or were they being ripped off?

Accurate measurement and metering were a must for everyone who wanted to buy power from an outside source. The same issue applies to computing: if we put an application up into the cloud, we have to be confident in both its execution and the accuracy with which that execution is measured.
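Continuing the metering analogy, the sketch below (Python again, all names hypothetical) shows a customer-side usage meter: the customer time-stamps and records its own consumption so the provider's bill can be cross-checked, the cloud equivalent of reading your own power meter rather than trusting only the utility's.

import time
from collections import defaultdict

# Hypothetical customer-side meter: independently record what was consumed,
# time-stamped from the customer's own clock, so the provider's invoice can
# be verified against first-hand measurements.

class UsageMeter:
    def __init__(self):
        self.records = []  # (start_ns, duration_ns, operation)

    def record(self, operation, fn, *args, **kwargs):
        start = time.monotonic_ns()
        result = fn(*args, **kwargs)
        self.records.append((start, time.monotonic_ns() - start, operation))
        return result

    def totals_ns(self):
        # Aggregate measured time per operation for comparison with the bill.
        totals = defaultdict(int)
        for _, duration, op in self.records:
            totals[op] += duration
        return dict(totals)

meter = UsageMeter()
meter.record("query", sum, range(1_000_000))
print(meter.totals_ns())  # compare against the provider's invoice

A real deployment would record provider-reported metrics alongside its own, but the principle is the same one the power companies settled a century ago: independent, time-stamped measurement on both sides of the transaction.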

Timing from a single source is one of the missing elements cloud computing needs to incorporate into its fabric to provide a solid platform for enterprise mission-critical applications.