Monday, February 08, 2010

Windows Azure and Cloud Computing Posts for 2/8/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article to which you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

On 9/29/2009 Wrox’s Web site manager posted a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage,” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts, Databases, and DataHubs*”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the November CTP in January 2010. 
* Content for managing DataHubs will be added as Microsoft releases more details on data synchronization services for SQL Azure and Windows Azure.

Off-Topic: OakLeaf Blog Joins Technorati’s “Top 100 InfoTech” List on 10/24/2009.

Azure Blob, Table and Queue Services

Cory Fowler’s Windows Azure Table Storage explains the importance of the System.Data.Services.Client namespace to Azure table storage in a 2/7/2010 post:

I am currently in the middle of creating a presentation on Windows Azure. Included in this presentation is how to interact with Windows Azure Storage Services using the Managed Storage API.

While creating a class to manage connections to my Storage Service, I was writing a method to add some data to Table Storage. I had imported Microsoft.WindowsAzure.StorageClient, the namespace in which the Managed Storage API lives, but it exposed only a few methods of the TableServiceContext class. After looking around MSDN, it seems that the TableServiceContext class is dependent on System.Data.Services.Client. The System.Data.Services.Client namespace is responsible for the majority of the functionality of the TableServiceContext class.

Add a reference to the System.Data.Services.Client assembly to your project when you are trying to use the table functionality of the Microsoft.WindowsAzure.StorageClient namespace. [Emphasis added.]

Neil MacKenzie shows you how to execute Queries in Azure Tables in this 2/6/2010 post:

This is a follow-up to a post on Azure Tables providing additional information on queries against the Azure Table Service.


There are several classes involved in querying Azure Tables using the Azure Storage Client library. However, there is a single method central to the querying process and that is CreateQuery<T>() in the DataServiceContext class. CreateQuery<T>() is declared:

public DataServiceQuery<T> CreateQuery<T>(String entitySetName);

This method is used implicitly or explicitly in every query against Azure Tables using the Storage Client library. The CreateQuery<T>() return type is DataServiceQuery<T>, which implements both the IQueryable<T> and IEnumerable<T> interfaces. IQueryable<T> provides the functionality to query data sources, while IEnumerable<T> provides the functionality to enumerate the results of those queries.

LINQ supports the decoration of a query with operators that filter the results of the query. Although a full LINQ implementation has many decoration operators, only the following are implemented for the Storage Client library:

These are implemented as extension methods on the DataServiceQuery<T> class. When a query is executed these decoration operators are translated into the $filter and $top operators used in the Azure Storage Services REST API query string submitted to the Azure Storage Service. The remaining LINQ query decoration operators are not implemented because the Azure Storage Services REST API does not provide an implementation for them.
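To make the translation concrete, here is a minimal sketch, in Python against the REST API that the Storage Client targets, of the query string a filtered and truncated query is translated into. The account, table, and filter values are illustrative, not taken from Neil’s post:

```python
def build_table_query(account, table, filter_expr=None, top=None):
    """Compose the REST URI that a decorated Storage Client query
    is translated into when it executes."""
    uri = "http://{0}.table.core.windows.net/{1}()".format(account, table)
    params = []
    if filter_expr is not None:
        # a LINQ where clause becomes a $filter expression
        params.append("$filter=" + filter_expr.replace(" ", "%20"))
    if top is not None:
        # a LINQ Take(n) becomes $top=n
        params.append("$top={0}".format(top))
    return uri + ("?" + "&".join(params) if params else "")

# Rough equivalent of:
#   context.CreateQuery<Song>("Songs")
#          .Where(s => s.PartitionKey == "rock").Take(10)
print(build_table_query("myaccount", "Songs",
                        filter_expr="PartitionKey eq 'rock'", top=10))
# -> http://myaccount.table.core.windows.net/Songs()?$filter=PartitionKey%20eq%20'rock'&$top=10
```

The remaining LINQ operators have no counterpart on the right-hand side of this translation, which is why the Storage Client cannot implement them.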

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

Simon Munro asks on 2/8/2010 Is spatial support needed in SQL Azure? and concludes (tentatively) “No”:

An investigation triggered by the lack of support of spatial data in SQL Azure has left me with the (unconfirmed) opinion that although requested by customers, the support of spatial data in SQL Azure may not be good enough to handle the requirements of a scalable solution that has mapping functionality as a primary feature. …

To support this impending demand [for scalable spatial data support by Windows/SQL Azure], Microsoft needs to make spatial data a first-class citizen of the .NET framework (system.spatial). It wouldn’t take much: just get some engineers from SQL and Bing Maps to talk to each other for a few weeks. Microsoft, if you need some help with that, let me know.

In the meantime I will walk down the road of open source spatial libraries and let you know where that road leads.

Simon supports his contention with a lengthy, detailed essay about issues relating to SQL Azure’s spatial indexes.

Siddharth Mehta explains How to read and write data from SQL Azure using SQL Server Integration Services 2008 R2 in this MSSQLTip of 2/8/2010:

SQL Azure and cloud computing bring a new paradigm of database development and implementation strategy. With these changes, all of the dependent technologies also have to adapt to support this new paradigm. SSIS is one of the technologies that would be used to move data in and out of SQL Azure in any Microsoft Business Intelligence (BI) solution built on SQL Azure. In this tip, we discuss how to use SQL Server Integration Services 2008 R2 to read and write data from SQL Azure.

SQL Azure can be seen as a light version of SQL Server on the Windows Azure cloud computing platform. To quickly come up to speed on how to create your account and database on SQL Azure, please read this tip before getting started. Please keep in mind that all the components and technologies discussed in this article use the SQL Server 2008 R2 (Nov CTP) version, so we assume that SQL Server 2008 R2 Nov CTP (the latest release as of the draft of this article) is already installed on the development machine.

SQL Server Management Studio (SSMS) 2008 R2 has a fair amount of support for SQL Azure. To learn how to get started with this topic, please read this tip. I am not repeating those steps here, as they are already available in the tips suggested above, and to keep the focus on the main subject of this tip.

Siddharth continues with detailed instructions for migrating the AdventureWorks database to SQL Azure with SQL Server Integration Services (SSIS). Using the SQL Azure Migration Wizard is far simpler.

RESguru’s RG021 – How to create a MS [SQL] Azure Database post of 2/8/2010 is a detailed tutorial for connecting RES PowerFuse 2010 RC1 to SQL Azure:

First you might be thinking: what the heck does this have to do with RES Software products? Well, as you will find out, with the currently available Release Candidate 1 RES PowerFuse 2010 supports a brand new database type, more specifically Microsoft [SQL] Azure, which is a cloud-based database. …

From the perspective of RES PowerFuse, it’s quite simply brilliant to be able to use a cloud-based database. Number one, there are the savings; number two, since PowerFuse (unlike some other products, which shall remain nameless) does not store User Settings in the datastore but in the user’s home directory, and the database is cached locally, the RES PowerFuse agent can run autonomously and performance is actually very good. This is despite your having just cut out a major part of the PowerFuse infrastructure and offloaded it externally.

RESguru doesn’t appear to be aware of SQL Azure’s real name. I added [SQL] above.

Anton Staykov’s MySQL hosted on Windows Azure post of 2/7/2010 recommends NOT running MySQL on Windows Azure:

People are often asking whether MySQL is supported on Windows Azure. The simple answer is YES, you can run MySQL on Windows Azure! Great!

But is it worth it? I would say NO! And here are my thoughts on that.

First, take a sneak peek at the presentation by Mohit Srivastava and Tushar Shanbhag from PDC’09: Developing PHP and MySQL Applications with Windows Azure. Or download the slides and take a quick look at “OK, you can run MySQL on Windows Azure.” After an hour of amazing talk you will be almost convinced that you definitely can run MySQL on Windows Azure.


I would question the value of bringing MySQL to Azure! …

Anton details the reason for his judgment and concludes:

For me, it is just not worth bothering with running MySQL on Windows Azure. You are going to lose all the strengths of Windows Azure and will use it merely as virtual server hosting. Just give up on MySQL if you want to go with Windows Azure and refactor your PHP code to use SQL Azure!

James Hamilton added on 2/7/2010 another scalability war-story, Scaling Second Life, to his collection:

As many of you know, I collect high-scale scaling war stories; I’ve appended many of them below. Last week Ars Technica published a detailed article on scaling Second Life: What Second Life can Teach your Datacenter About Scaling Web Apps. The article is by Ian Wilkes, who worked at Second Life from 2001 to 2009, where he was director of operations.

His notes include these observations about scaling relational databases in general and MySQL in particular:

  • Understand the resource impact of features. Be especially cautious around relational database systems and object relational mapping frameworks. If nobody knows the resource requirements, expect trouble in the near future.
  • Database pain: “Almost all online systems use an SQL-based RDBMS, or something very much like one, to store some or all of their data, and this is often the first and biggest bottleneck. Depending on your choice of vendor, scaling a single database to higher capacity can range from very expensive to outright impossible. Linden's experience with uber-popular MySQL is illustrative: we used it for storing all manner of structured data, ultimately totaling hundreds of tables and billions of rows, and we ran into a variety of limitations which were not expected.”

MySQL specific issues:

  1. Lacks an online ALTER TABLE statement
  2. Write-heavy workloads can cause heavy CPU spikes due to internal lock conflicts
  3. Lack of effective per-user governors means a single application can bring the system to its knees

… If you are interested in reading more from Ian at Second Life, see: Interview with Ian Wilkes From Linden Lab.

James adds these links to earlier members of the Scaling-X series:

<Return to section navigation list> 

AppFabric: Access Control, Service Bus and Workflow

Jeremy K. Johnson describes Our AppFabric Deployment Challenge in this detailed post of 2/7/2010:

Connectivity is the core principle in the “killer app” we are developing.  When we decided to go forward with this product idea we chose the new Microsoft Azure platform as the core technology.  It will help us realize the level of connectivity we want to achieve.  We are making use of all aspects of Microsoft Azure: Windows Azure, SQL Azure, and AppFabric (which itself is made up of .NET Services and Access Control).

One part of our overall solution is a piece that exposes a self-hosted service using the .NET ServiceBus.  If you have any experience using WCF you’ll find that creating a service that is hosted, or exposed, via the ServiceBus is very familiar territory.  There are many more similarities than differences.  You’ll create service contracts, data contracts, etc., just as you would with a WCF service.  The only real differences come into play when you actually create a host and expose the service.  Even then, the differences aren’t that great.  The underlying principles are still the same.

Up until today I had only run the hosted service on the same machine that was being used to develop it.  Since we were having a meeting this evening to go over our progress I thought I’d throw it up on my home theater PC to demonstrate that the core functionality was done.  Getting it to work on a “regular” PC wasn’t as easy as I had hoped. …

Jeremy continues with a description of three failed attempts and a fourth successful one. He concludes with: “Something That Still Needs Sorting Out:”

… Unfortunately there isn’t a lot of documentation on deploying a service that is exposed using the AppFabric.  There is plenty of information on developing an AppFabric hosted service as well as how to call a service over the .Net ServiceBus.  But there is very, very little on deploying a self-hosted service exposed with the .Net ServiceBus.

I’m open to suggestions on how to overcome this conundrum.  Again,  I realize that the vast majority of our future customers will not have the AppFabric SDK installed.  But I want to try and make sure that even those who do have the SDK experience the same easy, simple installation as those who do not.

<Return to section navigation list>

Live Windows Azure Apps, Tools and Test Harnesses

Jim Zimmerman explains how Thuzi uses Windows Azure and SQL Azure to scale Facebook apps in this 16:46 ARCast.TV - Scaling Facebook Applications with Windows Azure Channel9 video segment:

How do you meet the unknown scalability requirements for viral social networking Facebook applications? Thuzi is a Microsoft partner who has created a framework for delivering viral social networking campaigns via Facebook using Windows Azure, ASP.NET MVC, and the Facebook .NET SDK.
They launched on November 5th (before release at the PDC) and grew Outback’s fan base to over 450k in a matter of days via an innovative Facebook offer for a free “Blooming Onion” promotion.

In this session from PDC 09, Jim Zimmerman from Thuzi talks about how they leveraged the capabilities of Windows Azure to provide the scalable web, messaging, and storage infrastructure that are an absolute requirement to meet the harsh demands of today’s viral social network outbreaks.

The components of the solution include the ASP.NET MVC framework to render a “view” using Facebook Markup Language (“FBML”), the Facebook Developer Toolkit, Windows Azure, SQL Azure and Visual Studio 2010.

Eric Nelson provides a detailed, fully illustrated tutorial for the Windows Azure Platform TCO and ROI Calculator in his Q&A: How can I calculate the TCO and ROI when considering the Windows Azure Platform? post of 2/8/2010:

Awareness of the existence of the Windows Azure Platform TCO and ROI Calculator remains pretty low based on some conversations I have had lately at events.

It is available in both online and offline versions and aims to help measure the potential savings of product development or migration to the Windows Azure Platform.

Let’s take a quick example of a brand new car insurance site (based on my in-depth work with… car insurance sites!). In brief:

  • We will start small
  • Need to connect to plenty of existing systems to get quotations
  • Store a lot of documents
  • We hope to gradually grow
  • There will be spikes if we run successful advertising campaigns.

First I profile the application:


He continues with examples until he reaches the final ROI calculation:

Which left me with the following ROI:

Saved me £94,513 in 10 minutes. I need a promotion :-)


Eric Nelson recommends Worker/Web Role consolidation in his Q&A: Do I get charged compute hours for every role in my Windows Azure Hosted Service? post of 2/8/2010:

A common question I get is “Do I get charged compute hours for every role type in my service?”

The short answer is “Yes you do”. Now for the longer answer…

Every role type is created as at least one Virtual Machine instance on the Windows Azure Platform – more if you have instance count > 1 for a role.

In this hosted service example I have  2 WebRoles and 3 WorkerRoles with 1 instance of each.


Which means I will have 5 Virtual Machines reserved for this service the moment I deploy.


Therefore in a 24-hour period of being deployed I will get charged 24 x 5 Compute Hours = 120 hours. I left it like this for a little over a day and a half; hence I was charged 120 + 60 = 180 hours. (NB: Billing currently appears to update every 12 hours.)


Hopefully that should make things clearer. Role consolidation is your friend :-)
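Eric’s arithmetic generalizes to any mix of roles and instance counts. A quick sketch, in Python, of the billing rule his example illustrates (every deployed instance of every role accrues compute hours, whether or not it is doing work):

```python
def billable_compute_hours(instances_per_role, hours_deployed):
    """Total billable compute hours: each deployed instance of each
    role accrues one compute hour per hour deployed, running or not."""
    return sum(instances_per_role) * hours_deployed

# 2 WebRoles + 3 WorkerRoles, 1 instance each, deployed for 24 hours:
print(billable_compute_hours([1, 1, 1, 1, 1], 24))   # -> 120
# ...and for a day and a half (36 hours), matching Eric's bill:
print(billable_compute_hours([1, 1, 1, 1, 1], 36))   # -> 180
```

Consolidating two of those roles into one (at the same instance counts) drops the first factor from 5 to 4, which is exactly why role consolidation saves money.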

Liz McMillan asserts “Pervasive WebDI, a .Net application, also applied for and was selected for the Windows Azure platform Deep Dive program” in her Pervasive Software Selects Windows Azure report of 2/8/2010:

Pervasive Software … on Monday announced that it has selected the Microsoft Windows Azure platform as the future platform for Pervasive BusinessXchange WebDI, its hosted business-to-business data interchange service.

A Microsoft Gold Certified Partner and recent “Best of SaaS Showplace” award winner, Pervasive Software recently deepened its engagement with Microsoft through participation in the Front Runner for Windows Azure Platform program and the Microsoft Metro Early Adopter Program.

Gonzalo Ruiz explains automatic and manual in-place upgrades in his Upgrading an Azure service post of 2/7/2010:

… First of all, we can upgrade the service from two places:

  1. Using the Azure Developer Portal
  2. Using the Service Management API. The Service Management APIs are a set of REST services that can be consumed from an Azure role or from outside Azure. In order to consume the Service Management services you can develop your own .NET API or use the Azure Service Management CmdLets.

Additionally, Windows Azure provides two mechanisms for upgrading your service:

  • In-place Upgrade: Windows Azure will stop and upgrade the services contained in each upgrade domain, so if you have distributed your roles across several upgrade domains your service will remain responsive during the upgrade process. The following diagram illustrates the process:

You can do Automatic in-place upgrades to upgrade all domains automatically, or Manual upgrades to upgrade one domain at a time.

  • VIP Swap Upgrade: You can deploy a new version of your service to the staging slot, then swap that deployment with the deployment currently in production. This type of upgrade is referred to as a VIP (Virtual IP) swap because the address of the service running in the staging slot is swapped with the address of the service running in the production slot, and vice versa. See Performing Virtual IP Swap Upgrades for more information.

For more details about the upgrade points, see Upgrading a Service.

My Funding Details for the Third (Azure) Cloud-Computing Grant to the National Science Foundation (NSF) post of 2/7/2010 begins:

… [W]ill offer funding for researchers to explore the use of the Microsoft Windows Azure platform via three mechanisms: supplemental grants to existing awards, EAGER grants, and a forthcoming new solicitation. All of these mechanisms will be used to support any kind of computing research and software development for any type of application associated with the Windows Azure platform, perhaps in combination with the use of other platforms.

Researchers may immediately submit supplemental proposals to any existing NSF award to the CCF division or to OCI via the Grant Proposal Guide, prefixing the title with "CiC: Supplement: ". Supplemental proposals may request extension of an existing NSF award for an additional year. Supplemental proposals should be submitted no later than April 15, 2010 to ensure consideration in the current fiscal year. PIs are cautioned that the existing award must still be open at the time the supplement is awarded (not submitted); awards that have concluded before the supplement is awarded will not be reopened. …

And notes:

Microsoft’s cloud-computing grant, which has achieved widespread publicity, follows these earlier cloud-services gifts by competitors to the NSF: 

  1. “A set of cloud-based software services supported by Google and IBM” (National Science Foundation Awards Millions to Fourteen Universities for Cloud Computing Research, 4/23/2009)
  2. “Access to another cluster supported by HP, Intel, and Yahoo housed at the University of Illinois at Urbana-Champaign.” (HP, Intel and Yahoo! Create Global Cloud Computing Research Test Bed, 7/29/2009)

Josh Holmes shows you how to use the Windows Azure Command Line Tools to create a simple Azure project and deploy it to the development fabric in a 2/5/2010 post:

There are times when you just need to leverage the raw power you can get from the command line. For example, if you are trying to script something, or if you are on a machine that is not all tooled up with Visual Studio, Eclipse and the like; and, believe it or not, there are times when it’s just a lot easier to get stuff done without an IDE in the way. The great news is that we’ve got a couple of tools in the Windows Azure SDK, called CSPack and CSRun, that work wonders.

To that end, please enjoy this little tutorial on using the command line tools to create a very simple Azure package and deploy it to the development fabric. …
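As a hedged sketch of what scripting those tools might look like: the Python below only composes the command lines for the basic usage pattern (CSPack packages the service from its definition; CSRun starts the package plus its configuration in the development fabric). The file names are illustrative, and you should check the SDK documentation for the exact switches before relying on them:

```python
# Compose command lines for the two SDK tools Josh describes.
# File names here are hypothetical placeholders.

def cspack_cmd(definition="ServiceDefinition.csdef",
               package="MyService.cspkg"):
    """cspack packages a service definition into a .cspkg file."""
    return ["cspack", definition, "/out:" + package]

def csrun_cmd(package="MyService.cspkg",
              config="ServiceConfiguration.cscfg"):
    """csrun deploys a package + configuration to the dev fabric."""
    return ["csrun", package, config]

# On a machine with the Windows Azure SDK bin directory on PATH,
# these lists could be handed to subprocess.run(); here we just print.
print(" ".join(cspack_cmd()))  # -> cspack ServiceDefinition.csdef /out:MyService.cspkg
print(" ".join(csrun_cmd()))   # -> csrun MyService.cspkg ServiceConfiguration.cscfg
```

Wrapping the tools this way makes it easy to drop packaging and local deployment into a build script, which is exactly the scripting scenario Josh mentions.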

Josh is a Microsoft UX Architect Evangelist in the Central Region.

Dave Thompson reports Eye On Earth on BBC News in this 2/5/2010 post:

A couple of announcements to be made,

Firstly: BBC News has written an article on Eye On Earth, my previous project with Microsoft Consulting Services (MCS).

Also a proud post from Dom Green, who worked alongside me on this project and let everyone know about the article first.

EyeOnEarth is a rich Silverlight site built with Bing Maps on the Azure platform, utilizing the scalability of Windows Azure and the data availability of SQL Azure as its main data source.

Secondly: Microsoft Consultancy Services (MCS), specifically the Solution Development Team, in which I proudly sit, launched a community blog yesterday with two posts, one from Dom Green and the other from myself.  I am looking forward to seeing some great articles there, and to contributing some myself.

The latest post to the MCS UK Solutions Development Team blog is Running Memcached in Windows Azure of 2/4/2010.

Jim Nakashima joins the chorus that’s warning Windows Azure Compute Hours include the time your Deployment is Stopped on 2/5/2010:

One of my tenets on this blog is to not do posts that simply point to someone else’s post.  I’m breaking that tenet today because of something I just found out that I think is super important for all Windows Azure customers to be clear on, and I saw that Ryan Dunn had already posted about it.

Windows Azure compute time is calculated based on the time that you’ve deployed the app, not the time your app is in the running state.

For example, if you put your deployment in the stopped state, you will continue to be charged for Compute hours – the rate at which will correspond to the VM Size you have selected.


In order to not be charged Compute hours, you need to delete the deployment.  This will look as follows:


Ryan goes on to show you how to use the PowerShell cmdlets to automate deleting your deployments; please check out his post.

We are working on making this more obvious on the developer portal.

It’s a reasonably sure bet that all Azure developers are aware of this issue by now.

The Windows Azure Platform Team sent an IMPORTANT NOTICE: Corrective action taken to resolve potential issues with your Windows Azure platform usage statement message to Azure account managers on 2/5/2010:

We would like to bring to your attention two items that we have identified that may result in some usage being misreported on your daily usage statements. However, neither item will have an adverse impact on your bill.

The first item is that a temporary system issue resulted in minor errors to the data transfer charges associated with your Windows Azure account. This issue affected only those customers who were using our South Central US sub region datacenter. The impact of this issue is that a small subset of "free" usage was inadvertently sent to our billing system as "billable". From a billing perspective, this issue only impacts the period from 12:00 AM GMT on February 2nd, 2010 to 11:59 PM GMT on February 4th, 2010. We have fixed the root cause of the issue. We have also ensured that no customer is overcharged by taking corrective steps before sending out a bill. To prevent any customer from being overcharged, we decided to not charge for any Windows Azure data transfers associated with our South Central US sub region for the period in question. We did this by adjusting the consumed daily data transfer quantities for this period to 0.000001. This insignificant amount will prevent any charges. The adjustment has been applied to Windows Azure data transfers only. These are identifiable by the values "Compute" or "Storage" in the Service column of the usage summary.

The second item is that "on peak" data transfers associated with SQL Azure Database were charged at the "off peak" rate, causing data transfer charges to be lower than our normal bill rate. We are correcting this as part of the planned service update scheduled for the week of Feb 8th. Starting Feb 15th, on-peak data transfer usage will be billed at the normal rate for subsequent periods. We will not be adjusting previous billing for customers who benefitted from the "off peak" rate during the "on peak" usage period.

We regret any confusion this may have caused. If you have any questions in this regard, please contact support.

Thank you for your continued interest in the Windows Azure platform.

David Gristwood interviews SharpCloud’s Rusty Johnson and Andy Britcliffe on 2/4/2010 in Channel9’s Real World Azure Projects – Sharpcloud 00:08:50 video:

David Gristwood met up with the founders of Sharpcloud, a startup and member of the Microsoft BizSpark program, to find out how they built their newly launched Sharpcloud application, a visual, social and analytical environment that allows users to discuss and view information to help their strategic planning. The user interface is very rich and dynamic, and is based on Silverlight, whilst all the back-end processing and storage runs on Windows Azure. Find out why they chose Azure and Silverlight, and how they architected their solution. See it in action at

Tim Acheson reports that Bing Maps now runs on the Azure Services Platform in his Highlights from Silverlight User Group UK #11 post of 2/4/2010:

Impressive demo and interesting facts presented by Johannes Kebeck (Bing Maps TSp)

  • 30-50 TERABYTES of new data is uploaded to Bing Maps each month!
  • Bing maps runs on the Azure cloud platform (until recently was on its own infrastructure in USA) -- no wonder it's so fast now even from here in London! [Emphasis added.]
  • Silverlight delivers functionality that Ajax simply can't. It's also much faster than Ajax, and performance of different types of operation doesn't vary hugely between browsers as it does with Ajax. (Google Maps has taken a wrong turn, and they are left behind.)
  • Firefox is the slowest browser for more complex JavaScript operations (see graph in slides). But debates about browser performance are dominated by simplistic traditional benchmarks.
  • Bing Maps developer portal offers the awesome official SDK and APIs, all 100% FREE for regular use on the web!

<Return to section navigation list> 

Windows Azure Infrastructure

Joannes Vermorel’s Big Wish List for Windows Azure post of 2/8/2010 begins:

At Lokad, we have been working with Windows Azure for more than a year now. Although Microsoft is a late entrant in the cloud computing arena, I am so far extremely satisfied with this choice, as Microsoft is definitely moving forward in the right direction.

Here is my Big Wish List for Windows Azure: the features that would turn Azure into a killer product, deserving a lion-sized share of the cloud computing marketplace. [Emphasis Joannes’.]

My wishes are ordered by component:

  • Windows Azure
  • SQL Azure
  • Table Storage
  • Queue Storage
  • Blob Storage
  • Windows Azure Console
  • New services …

and continues with detailed lists of Top Priority and Nice to Have enhancements. Joannes’ wish list is important because his company, Lokad, is one of Microsoft’s Azure case studies and he is one of the most accomplished Azure developers.

451 CloudScape offers a free synopsis of The 451 Group's Cloud Computing Outlook for 2010 in this 2/7/2010 press release:

Flexibility over cost: Clouds have profoundly changed the cost equation, but TCO benefits require reinforcement, especially in private clouds. While cloud vendors will continue to tout ROI advantages in the current economic environment, we continue to find that among enterprise adopters, flexibility outweighs cost savings as the main driver for cloud adoption.

Synopsis of Report

  • End users report restrictive license terms continue to throttle back cloud adoption. They're seeking better approaches from software manufacturers towards licensing in the cloud. Users will increasingly seek open source alternatives.
  • While security is a growing concern, most organizations indicate they believe it is manageable. Users will take pragmatic approaches to security in the cloud.
  • Cloud has become the meeting point for development and operations. The 451 Group believes a new class of team, which we call DevOps, will emerge. Agile development and deployment will be the goal.
  • Integrated workspaces are emerging as a spear tip of cloud adoption.
  • It is clear that a first step in assessing the readiness of existing applications for running in the cloud is to make them available to users through an AppStore-like self-service catalog. Then, understanding the cost of these services and their SLAs will help determine what the strategic future of each should be: public, private, hybrid or none of these. …

If you don’t have a copy already, download the 34-page CloudScape: The Cloud Codex analysis of October 2009 by the 451 Group and Tier1Research. Although the report is a bit dated, it’s an authoritative source of cloud-computing taxonomy.

Dana Gardner reports BriefingsDirect analyst panelists peer into crystal balls for latest IT growth and impact trends in this 2/7/2010 post, transcript and podcast, which rivals the length of my recent OakLeaf posts.

The next BriefingsDirect Analyst Insights Edition, Volume 49, homes in on predictions for IT industry growth and impact, now that the recession appears to have bottomed out. We're going to ask our distinguished panel of analysts and experts for their top five predictions for IT growth through 2010 and beyond. …

[Contributors are] Jim Kobielus, senior analyst at Forrester Research; Joe McKendrick, independent analyst and prolific blogger; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Dave Linthicum, CEO of Blue Mountain Labs; Dave Lounsbury, vice-president of collaboration services at The Open Group; Jason Bloomberg, managing partner at ZapThink, and JP Morgenthal, independent analyst and IT consultant.

Dana includes lengthy excerpts from the Shimmin, Linthicum, Lounsbury, Kobielus, McKendrick, Morgenthal, Bloomberg, and Baer analyses. As you would expect, most are bullish on cloud computing.

Tony Bishop asserts “Leading organizations are driving technology delivery to a ubiquitous always-on model” in his Enterprise Cloud Computing as the Digital Nervous System of the Enterprise post of 2/7/2010:

Businesses today face a tsunami of challenges unlike any they have ever faced in history: globalization, geo-politics, the rise of the Internet consumer, customer mind-share dynamics, and the proliferation of information and content to manage and maintain (with regulatory & security concerns).

This requires new ways to do business:

  1. provide an enriched and consistent quality customer experience;
  2. conduct business over any form of electronic channel;
  3. rapidly adopt new business models;
  4. transact business in the most efficient & effective means possible;
  5. incorporate "turn-on-a-dime" transaction workflow, and rapidly make informed decisions.

So what are IT executives supposed to do with this? Most are faced with...

  1. budget cuts;
  2. skill-set shortages;
  3. the complexity of their infrastructure;
all while keeping the lights on and doing more with less!

The good news is...

Innovations in the "2.0" phase of everything IT (Web, Cloud, Client/Server, Grid, Datacenter, SOA, and Utility Computing) are creating a foundation of "interactive & real time" information, connectivity and processing capabilities.

These capabilities may just become the foundation building blocks for IT organizations to build and use to create an always connected, always available, always working type of platform. Perhaps the combination of these capabilities could become the foundation of building the "nervous system" of business. …

Tony is the Founder and CEO of Adaptivity.

tbtechnet reminds Azure users to Get the Inside Scoop: No-cost Tech Support for Windows Azure USA Developers in his 2/6/2010 post to the Windows Azure Platform, Web Hosting and Web Services blog:

Since we run the Front Runner* for Windows Azure for USA developers, we’re often asked for more details. So, on February 9th we’re presenting a Microsoft Academy Live webcast.

I’ve mentioned before what Front Runner is: for USA developers who have signed up for a Windows Azure account – they can get no-cost technical support via phone or email to fast track their cloud app development.

Title: For USA Developers: Microsoft Front Runner - Early Adopter Program and No-cost Technical Support For Windows Azure Platform

Date: 2-9-2010 9:30 AM (Pacific)

Please note that you must be a registered partner to access content within the Partner Learning Center.  If you do not yet have an account, please sign up here.

Also note: For the actual Front Runner for Windows Azure campaign, you do NOT need to be a registered Microsoft partner.

Azure account sign up here; Front Runner here

Dana Blankenhorn asks Can Microsoft sell you a Windows cloud? in this 2/5/2010 post to SmartPlanet:

The idea of a cloud is that you don’t know, specifically, where your stuff is inside a host data center, or what its operating system is.

Microsoft needs to change that perception if it is going to remain relevant. It needs to tie cloud computing, as a service, to its Windows operating system.

The sale of that pitch has now begun. It starts, as all things Microsoft do, with alliances.

Microsoft has signed a deal with the National Science Foundation to give selected scientists free access to its cloud, called Azure. This is sort of like Nike giving uniforms to football teams. It spreads both goodwill and the brand.

The second alliance, with HP, is more significant. The idea is to connect companies with HP gear directly into Azure, making the cloud a direct extension of their infrastructure.

This is especially cool for Chinese businesses with scaling problems, or with concerns over the security of their own computer rooms.

All of which explains why Microsoft has been tippy-toeing around China lately, taking the country’s side against Google, staying out of arguments over content. So what if the Windows on Chinese desktops may be pirated? Connect those companies to the cloud and it no longer matters.

Then come those developers, developers, like Zend — and the fact that Zend is open source is a feature not a bug — who are tweaking their frameworks so they will be optimized for the Microsoft cloud. …

Hans Vredevoort claims Azure V2V seems inevitable with hybrid clouds in this 2/5/2010 post:

In my November blog post I talked about moving virtual machines to the cloud. More evidence of an upcoming Infrastructure as a Service (IaaS) offering can be found on MSDN, which describes virtual machine sizing, ranging from 1-core (Small) to 8-core (ExtraLarge) virtual machines with up to 15GB of memory and 2TB of disk space.
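As a back-of-the-envelope illustration of what those sizes imply for monthly compute spend, the sketch below assumes that cost scales linearly with core count from the $0.12/hour Small-instance baseline quoted elsewhere in this post; that linear scaling, and the hard-coded size table, are my assumptions, not published Microsoft figures:

```python
# Hypothetical sketch of the Windows Azure VM sizes described on MSDN.
# The per-core rate and linear scaling are assumptions based on the
# $0.12/hour Small-instance baseline; check Microsoft's price list.
SIZES = {
    "Small":      {"cores": 1},
    "Medium":     {"cores": 2},
    "Large":      {"cores": 4},
    "ExtraLarge": {"cores": 8},
}
ASSUMED_RATE_PER_CORE_HOUR = 0.12  # USD, assumed baseline

def monthly_compute_cost(size, hours=24 * 30):
    """Estimate a 30-day month of compute for one instance of the given size."""
    return SIZES[size]["cores"] * ASSUMED_RATE_PER_CORE_HOUR * hours

for name in SIZES:
    print(f"{name}: ${monthly_compute_cost(name):,.2f}/month")
```

Under these assumptions a Small instance runs under $100 a month, while an always-on ExtraLarge instance approaches $700, which is why sizing decisions matter before hybrid migrations begin.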


Another service described is the ability to mount Windows Azure drives, each of which acts as a local NTFS drive mounted on the server’s file system, accessible to code running in a role. This was previously referred to as an X-drive. Mounting a so-called CloudDrive requires Windows Azure Guest OS 1.1 (release 201001-01).

Will we really be able to migrate a virtual machine from our private cloud to the Azure public cloud? I don’t think we have to wait much longer before hybrid clouds become a reality.

Will we be able to move a VMware VM to Azure? I think you can answer that for yourself.

Steven Bink claims Windows Azure may host virtual machines starting March in this 2/5/2010 post:

At the beginning of January Microsoft launched its Platform-as-a-Service (PaaS) cloud computing offering: Windows Azure.

Although the company’s Chief Software Architect, Ray Ozzie, said that Azure will be able to compete with Amazon EC2 and similar Infrastructure-as-a-Service (IaaS) clouds, this component is not yet accessible (or at least we couldn’t find it), and Microsoft hasn’t even officially confirmed that it exists.

A couple of months ago it was suggested that the IaaS component of Azure may appear in March, because Microsoft is going to release a cloud toolkit that month.

It seems that Azure will indeed start hosting virtual machines in March 2010 according to TechTarget:

…Microsoft has announced plans to add support for Remote Desktops and virtual machines (VMs) to Windows Azure, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items…

Continue: Windows Azure may host virtual machines starting March

Maureen O’Gara reports Dell’s Reportedly Working on Newfangled Cloudedge Servers and “Commoditizes species of the custom servers it makes for sites such as Facebook, Yahoo and Microsoft Azure” in this 2/5/2010 post:

Seems we are supposed to expect a line of servers called Cloudedge out of Dell this year that commoditizes species of the custom servers it makes for sites such as Facebook, Yahoo and Microsoft Azure. They'd apparently go to folks who don't buy in such bulk for private clouds and second-tier Internet companies, according to PC World. It's thinking very energy-efficient and small but lacking redundancy so the user's software will have to be able to work around failure. It's also thinking of bundling them with VMware and Microsoft. They'd compete with IBM's iDataPlex, HP's Extreme Scale-Out machines and SGI.

Jay Fry joins the rename-“cloud computing”-to-“over the Internet” kerfuffle with his Save 'Cloud Computing' post of 2/3/2010 to Forbes Magazine, which recommends: “But exchange jargon for what customers really want.”

"Abolish cloud computing!" said Forbes' Lee Gomes in his recent article. Or, at least the term "cloud computing." After all, he argued, if having IT systems "in the cloud" is the same thing as "over the Internet," what's the point of using all those extra, fluffy metaphors?

Now, I'm totally in favor of excising extraneous marketing lingo from IT conversations. It's unhelpful and distracting. And "cloud computing" is certainly one of the terms that the IT industry has fallen head-over-heels in love with recently. Which means it gets applied to everything, even when it shouldn't be.

But I think the aspects that make "cloud computing" something IT shops should consider in the first place make it worth having its own term.

If "cloud computing" can simply be replaced by "over the Internet," then, yes, it's not a worthwhile distinction. However, there are some additional capabilities that are part of the cloud computing concept that "over the Internet" doesn't cover.

Jay, author of the Data Center Dialog blog and vice president of business unit strategy at CA, where he works on the company's cloud computing efforts, continues with a list of the “additional capabilities.”

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie asks “We worry about VM sprawl but what about device sprawl?” as a preface to her VM Sprawl is Bad but Network Sprawl is Badder post of 2/6/2010:

We worry about VM sprawl but what about device sprawl? Management of a multitude of network-deployed solutions can be as operationally inefficient as managing hundreds of virtual machines, and far more detrimental to the health and performance of your applications. Turning them all into virtual network appliances that might need scaling themselves? That’s even badder.

But all you hardware fanbois best not smirk too much because the proliferation of hardware network devices is only slightly less badder than the potential problems arising from virtual network appliance sprawl. … 

Phil Cox takes on the Is PCI compliance attainable in a public cloud? issue in this 2/2/2010 post:

In this tip, the ninth in our series of technical tips on cloud security, we will focus specifically on the question of achieving Payment Card Industry Data Security Standard v1.2 (PCI-DSS) compliance using the public cloud. As a disclaimer, I should note that while I am a PCI QSA, this is my interpretation of the PCI-DSS requirements. I do not speak on behalf of the PCI Security Standards Council (PCI SSC), nor do I speak for any other assessor.

Can I be PCI compliant in a public cloud?
If you do not store or process cardholder data in a public cloud, then it is possible to reach compliance with PCI-DSS. If you do store or process cardholder data in a public cloud, however, then it is my opinion that achieving PCI-DSS compliance is not currently possible.

You can achieve compliance if all you are doing is securely transmitting cardholder data over a public cloud (similar to the Internet today).

PCI-DSS compliance issues with public clouds
PCI-DSS does not address the nuances involved with cloud providers. PCI-DSS does, however, directly address shared hosting providers, and there has been guidance on Internet Service Providers (ISPs). While it is reasonable for companies to view public cloud providers in the same light as shared hosting providers, the problem is with the requirements on those providers and how current cloud providers fall short. …

He offers these links to earlier cloud security posts:

Phil’s a principal consultant of SystemExperts Corporation, a consulting firm that specializes in system security and management.

<Return to section navigation list> 

Cloud Computing Events

The Code Project invites you to Try Windows® Azure in February and Win! an HP TouchSmart TX2-1370 12.1-inch multitouch laptop or one of 20 copies of Windows 7 Ultimate:

Follow the simple directions to create a trial Windows Azure account, then upload our CodeProject Sample App. Once you receive your confirmation email from us, you can remove your sample app – and you won’t be charged for Windows Azure usage. Hurry! You must enter by February 28, 2010.

As of 2/8/2010, there were only seven entries.

Curt Devlin of Microsoft will present Identity in the Cloud and Project Geneva to the Boston Azure User Group on 2/25/2010 at Microsoft NERD, One Memorial Drive, Cambridge, MA 02142, USA at 7:00 PM EST:

In this session, our featured speaker is Curt Devlin of Microsoft. Curt will talk about the concept of Identity, why it is so important and challenging in the cloud, and how Azure's Geneva project helps deal with the challenge.

Curt Devlin is an architect in Microsoft's Developer & Platform Evangelism (DPE) group, focusing on enterprises in the financial services and the health and life sciences industries throughout New England. Curt has more than 20 years of experience with the architecture and design of enterprise-class distributed systems on the Windows platform. Curt Devlin blogs as the philosophical architect.

Dave Evans (@DaveDev) announced that Eric Nelson and David Gristwood of Microsoft UK will keynote two Azure Open Space Coding Day sessions at Birmingham Science Park Aston – Faraday Wharf, Holt Street, B7 4BB Birmingham, UK on Saturday, February 27, 2010 from 9:00 AM - 5:00 PM (GMT):

We will have Eric Nelson and David Gristwood of Microsoft on the day to keynote two sessions. The first will be a morning session which is for folks brand new to Azure, to help them get software installed, up and running etc. 

The afternoon keynote session will be for more advanced topics as decided by the attendees.

In addition we’ll have up to 6 other sessions on Azure topics to be decided on the day. SQL Azure, AppFabric, Commercial considerations, PHP, Ruby... It’s up to you. And we may have some swag.

We are also in the process of organising a geek dinner from about 20:00 on the Friday night before the event, so please get a ticket if you want to attend the dinner too.

Registration start date and time to be advised.

SummitCloud claims “Giza Solution for Google Analytics Impresses Panel of Venture Capitalists at NYC Boot Camp” in its SummitCloud Takes Grand Prize at Microsoft’s BizSpark Camp in NYC press release of 2/6/2010:

SummitCloud’s Giza solution recently earned top honors and took the grand prize at Microsoft’s BizSpark Camp. Held to raise awareness of Microsoft’s Azure cloud computing platform, BizSpark Camp drew more than 20 software startup companies, which pitched their applications to a panel of judges consisting of Microsoft representatives, venture capitalists, analysts and angels.

“We used stringent criteria to determine the winners including capabilities of the founding team, market and competitive understanding, and degree of technical innovation,” said Larry Gregory, senior director of U.S. Partner Evangelism at Microsoft and coordinator of the event. “We were pleased to see the Giza application leveraging Windows Azure to address a compelling customer scenario. SummitCloud demonstrated superior awareness of how to generate revenue, including leveraging the Microsoft partner community.”

“We’re thrilled with the win and the opportunity to have one-on-one development experience with the Microsoft team. This validates the marketability of our solution on the Azure platform as the only bridge between Google Analytics and enterprise Microsoft technologies,” said David Leibowitz, SummitCloud President and CEO. Armed with the insight and direction from Microsoft, Leibowitz estimates Giza to be released to private beta in the coming weeks.

About Giza. Giza, a SummitCloud solution, is the only bridge between Google Analytics and enterprise relational databases, enabling mashups with on-site line-of-business applications such as CRM, POS, eCommerce and more. It enables enterprise customers with investments in Microsoft technologies to unleash trending clickstream metrics using familiar enterprise tools like SQL Server and SharePoint. Giza eliminates the pain of manual data downloads from the Google Analytics web reporting silo, the instability of single-user Excel add-ins, and the need to develop with Java-centric libraries; it lowers the barrier to incorporating Google Analytics with internal enterprise reporting.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Tim Heuer’s When Blobs attack – understanding cloud storage bursts and viewing logs post of 2/8/2010 explains the importance of logging cloud storage use to understand the cause of unusually high service charges:

Here’s how it started…

Lisa (my wife) [shouting from office into the kitchen]: Tim, what’s this Amazon charge for $193?
Me [thinking what I may have purchased and not remembered]: Um, don’t know…let me look.

I then logged into my Amazon account to see what order I may have forgotten. Surely I didn’t order $200 worth of MP3s…that’s ridiculous. Sure enough, nothing was there. Immediately I’m thinking fraud. I start freaking out, getting mad, figuring out my revenge scheme on the scammer, etc.

Then it hit me: Amazon Web Services account.

Tim continues with the details of logging Amazon S3 and CloudFront traffic. His experiences also apply to Azure storage and Azure Content Delivery Network (CDN) users.
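A surprise bill like Tim's is easy to reconstruct once access logs reveal the traffic. As a rough sketch (the per-GB egress rate and the download numbers below are illustrative assumptions, not Amazon's or Microsoft's actual price lists), even a modestly sized file served many times adds up fast:

```python
# Back-of-the-envelope estimate of a cloud data-transfer bill.
# The rate is an illustrative assumption; check your provider's price list.
ASSUMED_RATE_PER_GB = 0.17  # USD, hypothetical egress rate

def transfer_bill(requests, mb_per_request, rate=ASSUMED_RATE_PER_GB):
    """Cost of serving `requests` downloads of `mb_per_request` MB each."""
    gb = requests * mb_per_request / 1024
    return gb * rate

# A 10 MB file downloaded 115,000 times is over a terabyte of egress,
# roughly the scale of the bill described above.
print(f"${transfer_bill(115_000, 10):,.2f}")
```

The takeaway matches Tim's: turn on storage logging before a popular blob turns into a line item, so the logs can tell you which object, and which referrers, are driving the traffic.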

William Vambenepe reports Oracle acquires AmberPoint in this 2/8/2010 post:

Oracle just announced that it has purchased AmberPoint. If you have ever been interested in Web services management, then you surely know about AmberPoint. The company has long led the pack of best-of-breed vendors for Web services and SOA Management. My history with them goes back to the old days of the OASIS WSDM technical committee, where their engineers brought to the group a unique level of experience and practical-mindedness.

The official page has more details. In short, AmberPoint is going to reinforce Oracle Enterprise Manager, especially in these areas:

  • Business Transaction Management
  • SOA Management
  • Application Performance Management
  • SOA Governance (BTW, Oracle Enterprise Repository 11g was released just over a week ago)

I am looking forward to working with my new colleagues from AmberPoint.

Matt Asay reported on 2/7/2010 that Oracle loses some MySQL mojo with the resignation of Ken Jacobs:

Ever since Oracle closed its acquisition of Sun, and with it MySQL, the open-source world has been wondering where the code has gone. Many people searched, fruitlessly, for the formerly available MySQL source code.

They might have done better to search for Oracle's point person on MySQL, Ken Jacobs.

On Friday, Jacobs announced his resignation from Oracle to key members of the MySQL team via e-mail. Jacobs, a 28-year Oracle veteran and one of its first 20 hires, has been Oracle's liaison with the MySQL community for the past several years, ever since Oracle acquired the popular MySQL storage engine, InnoDB.

While Jacobs doesn't give an explicit reason for his departure, he does hint at disappointment that he was not selected to run MySQL's database business post-acquisition. "I imagine you all know that I will not be leading the MySQL GBU, as I had expected," he said.

I share that disappointment.

I, among others, worried that Oracle's acquisition of InnoDB effectively amounted to a hostile takeover of MySQL's financial fortunes, but such has not been the case. Arguably, Jacobs is a primary reason that Oracle's ownership of InnoDB has been peaceful, not a declaration of war.

I don't expect Jacobs' departure to significantly alter Oracle's plans for MySQL, which I believe to be good (the temporary absence of source code notwithstanding), but I do worry that his thoughtful interaction with the open-source community will be missed.

Photo credit: Oracle.

James Urquhart reports Oracle signals change of tone about cloud in this 2/5/2010 post to C|Net News’ The Wisdom of Clouds blog:

Software heavyweight Oracle's acquisition of Sun Microsystems has (and will have) a wide impact on the technology market.

Oracle's strategy of targeting an "all in one" relationship with its customers--providing hardware, software, and services--is something to which the rest of the high-technology industry will have to pay close attention. Modeling yourself after the "IBM of the 1960s" is not a bad target, especially when you consider market share.

However, when it comes to cloud computing, Oracle has taken a fairly "arm's length" position. CEO Larry Ellison's famous "cloud is fashion" rant sort of set the tone for the company's perceived skepticism toward the cloud model.

Apparently, that's all about to change. According to TechTarget, Oracle is preparing a public-relations onslaught, intended to change the perception of Oracle as cloud critic. According to the article, in the Webcast Oracle hosted last week to discuss its strategy for the Sun assets, Ellison explained:

"Everything's called cloud now. If you're in the data center, it's a private cloud. There's nothing left but cloud computing. People say I'm against cloud computing--how can I be against cloud computing when that's all there is?"

He also stressed what will doubtless become another key Oracle message, which is that Oracle software (and soon hardware) powers other people's clouds. …

Oracle CEO Larry Ellison photo credit: Dan Farber

IBM claims “Data Center Reduces Cost, Complexity, Speeds Delivery of Information Technology Services Using 50% Less Energy” in its New IBM Data Center in North Carolina Engineered to Support Cloud Computing press release of 2/4/2010:

PR Newswire photo: IBM's new, 60,000 square foot data center …

IBM today announced the opening of a new data center designed to support new compute models like cloud computing, in order to help clients from around the world operate smarter businesses, organizations and cities.

The new data center reduces technology infrastructure costs and complexity for clients while improving quality and speeding the deployment of services – using only half the energy required of a similar facility its size.  The data center will ultimately total 100,000 square feet at IBM's Research Triangle Park (RTP) campus and is part of a $362 million investment by the corporation to build the new data center in North Carolina.  IBM owns or operates more than 450 data centers worldwide.

<Return to section navigation list> 
