Microsoft Update Integrates Cloud, On-Premise Processing

November 20, 2010 | By David
Grazed from IT Business Edge. Author: Loraine Lawson.

I talk a lot about integration between the cloud and on-premise systems, but, by and large, that’s about integrating data. When it comes to processing power, as far as I can tell, you’ve pretty much had a binary choice: You’re on-premise, or you’re off-premise with something like Amazon EC2.

Microsoft, rather smartly, just added another option: Do both.

An upcoming service pack for Windows HPC Server 2008 will let customers connect their systems to Windows Azure, so they can distribute a workload between on-premise and cloud servers, Bill Hilf, general manager of Microsoft’s technical computing group, told Network World.
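
The article doesn’t say how HPC Server’s scheduler actually splits work between the two pools, so here’s a minimal, hypothetical sketch of the idea it describes: fill on-premise capacity first and spill the overflow to cloud nodes. Every name below is illustrative; none of this is the Windows HPC Server or Azure API.

```python
# Hypothetical sketch of the "burst to cloud" scheduling idea: fill local
# (on-premise) nodes first, then spill excess work to cloud nodes.
# All names are made up -- this is NOT the Windows HPC Server API.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    capacity: int   # concurrent tasks the node can run
    in_cloud: bool  # True for a cloud-hosted node
    running: int = 0

    def has_room(self) -> bool:
        return self.running < self.capacity


def dispatch(tasks: list[str], nodes: list[Node]) -> dict[str, str]:
    """Assign each task to a node, preferring on-premise capacity."""
    # Sort so on-premise nodes (in_cloud=False) are considered first.
    ordered = sorted(nodes, key=lambda n: n.in_cloud)
    placement: dict[str, str] = {}
    for task in tasks:
        for node in ordered:
            if node.has_room():
                node.running += 1
                placement[task] = node.name
                break
        else:
            placement[task] = "<queued>"  # no capacity anywhere yet
    return placement


if __name__ == "__main__":
    cluster = [
        Node("onprem-01", capacity=2, in_cloud=False),
        Node("onprem-02", capacity=2, in_cloud=False),
        Node("azure-01", capacity=4, in_cloud=True),
    ]
    jobs = [f"job-{i}" for i in range(7)]
    for job, node in dispatch(jobs, cluster).items():
        print(job, "->", node)
```

Run it and the first four jobs land on the on-premise nodes; only the overflow spills to the cloud node, which is the appeal for spiky workloads.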

That makes it a great option for companies with large but short-lived spikes in computing demand, a group that has traditionally been a major target for companies like Google and Amazon that offer cloud-based processing power. And, obviously, Microsoft thinks this integration will be a key differentiator. I suspect they’re right, particularly for companies that already run HPC Server 2008.

The Windows HPC service pack is scheduled to be released by the year’s end.

I have to give Microsoft props for significantly adding to the cool factor by coupling the news with the human genome. At the SC10 supercomputing conference in New Orleans, the company unveiled its own version of BLAST. For those of us not in the know, BLAST is “essentially a search engine for scanning through large databases of chemical compounds to find matches,” according to The Register.

Microsoft demoed its version running comparisons across 100 billion protein sequences in the NCBI database on Azure. The company is providing the Azure-hosted service as a resource for scientists, who don’t even have to install Windows HPC Server.
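
For the curious, here is a toy sketch of the matching idea in The Register’s description: scan a database of sequences and rank them by similarity to a query. Real BLAST relies on indexed heuristics to stay fast at genome scale; this naive version only shows the shape of the problem, and all names are made up.

```python
# Toy illustration of the "scan a database for matches" idea -- NOT BLAST.


def similarity(a: str, b: str) -> int:
    """Count positions where two sequences agree (naive scoring)."""
    return sum(x == y for x, y in zip(a, b))


def search(query: str, database: dict[str, str], top: int = 3):
    """Rank database entries by naive similarity to the query."""
    scored = [(similarity(query, seq), name) for name, seq in database.items()]
    return sorted(scored, reverse=True)[:top]


if __name__ == "__main__":
    db = {
        "seq-a": "MKTAYIAKQR",
        "seq-b": "MKTAYIVKQR",
        "seq-c": "GAVLIPFYWS",
    }
    for score, name in search("MKTAYIAKQR", db):
        print(name, score)
```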

In less impressive news, Microsoft also announced that its Windows HPC Server 2008 R2 was powering the Tsubame 2.0 supercomputer in Japan that broke the petaflop barrier, but then again, so was Linux, which ran Tsubame 2.0 even faster. IBM and Los Alamos first broke the barrier back in 2008 with a Linux-based machine.

So Microsoft was long overdue for this victory and wasn’t letting the Linux issue cloud the celebration. In a second article on the petaflop barrier, Hilf told Network World:

We’re not trying to beat Linux. We’re not trying to be a supercomputing company. … For us, it’s not about some exotic supercomputers that are available to a small amount of users. We’re really interested in the bottom 500,000 computing users.

It’s not fancy, but that “good enough” approach has always worked well for Microsoft and its shareholders.