Thursday, July 15, 2010

Windows Azure and Cloud Computing Posts for 7/15/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Update 7/15/2010: Diagram above updated with Windows Azure Platform Appliance, announced 7/12/2010 at the Worldwide Partners Conference and Project “Sydney”

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the post as a single article, and then navigate to the section you want.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available via HTTP at no charge from the book's Code Download page.

Azure Blob, Drive, Table and Queue Services

Alex Handy claims NoSQL is an abbreviation for “Not only SQL” in his five-page Just say NoSQL article of 7/15/2010 for SDTimes:

New waves of application development technology are often incompatible with old ways of thinking. Typically, when a brave new world opens to programmers, a healthy portion of them will cast aside the old ways in favor of the new. But the NoSQL movement is not about throwing out your SQL databases to be replaced by key-value stores. NoSQL, ironically, has nothing to do with avoiding SQL, and everything to do with the judicious use of relational databases.

NoSQL databases encompass a large swath of new databases. They include the Apache Cassandra Project, an array of key-value stores such as Tokyo Cabinet, and even document databases like CouchDB and MongoDB. NoSQL is a broad term that has more to do with what a database isn't rather than what it is.

Ping Li, general partner at Accel Partners, a venture capital firm in Silicon Valley, said his firm is watching the NoSQL movement closely but does not yet see a clear leader in which to invest. “It will be some time before one of [the NoSQL databases] becomes truly horizontal,” he said.

"I think it's a fragmented market. I think if you believe there's a whole set of applications that are going to be cloud-like, they're going to be built on this type of database."

Why to use
Mike Gualtieri, senior analyst at Forrester Research, said that NoSQL doesn't have anything to do with throwing out a relational database. He said that NoSQL really stands for “Not Only SQL.” He added that a NoSQL database can make a great alternative to spending enterprise funds on a new Oracle rack of database servers.
Gualtieri said NoSQL is “not a substitute for a database; it can augment a database. For transaction types of processing, you still need a database. You need integrity for those transactions. For storing other data, we don't need that consistency. NoSQL is a great way to store all that extra data.”

He said that saving actual customer purchasing information is better suited to a relational database, while storing more ephemeral information, such as customer product ratings and comments, is more appropriate for a NoSQL database.


Alex Handy, Senior Editor of SD Times, is a veteran technology journalist.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry recommends Video: Building a Multi-Tenant SaaS Application with SQL Azure and AppFabric in a 7/15/2010 post to the SQL Azure team blog:

In this video Niraj Nagrani and Rick Negrin will teach you how SQL Azure, AppFabric, and the Windows Azure platform will enable you to grow your revenue and increase your market reach. Learn how to build elastic applications that will reduce costs and enable faster time to market using our highly available, self-service platform. These apps can easily span from the cloud to the enterprise. If you are either a traditional ISV looking to move to the cloud or a SaaS ISV who wants to get more capabilities and a larger geo-presence, this is the session that will show you how.

Video: View It

Doug at Learning Tree International recounts The Problem with Moving to the Cloud is Everything Works in a 7/15/2010 post to its Perspectives on Cloud Computing blog:

I’ve had this customer for about 15 years and they are the perfect candidate to move to Windows Azure.

They are a small business.  Most of the employees are either on the road or working remotely.  Other than the few office workers, everyone is already accessing the data online.

There’s no dedicated IT staff managing servers.  I’m pretty sure the backups would… well pretty sure.  Security is OK… I think.  I don’t think I’d call anything we do “patch management”, but somehow things are kept up to date.  Nothing we do is really “managed” (there’s a positive).

Our servers are old. When I look at all the dust I think, “Wow, those servers are old”.  So we have to spend some money soon.

We use a SQL Server for all of the data, and I know moving to SQL Azure will not be a big deal.  Compared to the cost of a new server it’s a bargain.  Heck, the data would even be replicated.

A lot of our development is done in .NET, so for those programs migration would be easy.

Of course, there is this thousand pound gorilla in the room – that’s the program the office staff uses.  It uses the SQL Server for the data, but the UI was originally written in Access version 2 back when I was a young programmer (let me pause to reminisce………..).  It has been migrated to every version of Access since.  Amazingly, it still works.  We know it has to be rewritten. It’s just one of those things. I’m sure you know what I mean.  The incredible thing is, even if we move to SQL Azure that old program will still work.  It’s just a different connection string.

Once the data is in SQL Azure, the changes to the .NET applications will be trivial.

I know in the long run moving to SQL Azure for the database and Windows Azure for application deployment is the right thing to do.

Which brings me to our biggest barrier to moving to the cloud.  Everything works now.  We could spend a few thousand dollars on new servers and software and installation.  Keep backing up the data like we’re doing (it might work you never know).  And we’re done.  The cloud would be better, but what we have is familiar.

We’re going to move to the cloud over time because the advantages vastly outweigh the costs.  But like most things that are worthwhile, it’s not a trivial process.  I’ll keep you posted on our progress.

If you’re also trying to figure out how to move to the cloud, come to Learning Tree course 2602: Windows® Azure™ Platform Introduction.  In the long run moving to the cloud offers simplified administration and cost savings.  But first you need a plan and you can’t create the plan if you don’t know how it works.

Marcelo Lopez Ruiz explains OData Decimal and JSON encoding in this 7/15/2010 post:

A few days ago I came upon a question about how decimal values are encoded in JSON payload in an OData response. The value was in quotes rather than as a numeric literal, so a script running in a browser found a string where the developer thought a number would be found.

I confirmed that this is indeed by design, and in fact you can verify this in the corresponding OData Protocol page. The reason for this is that JSON is used to interop across different kinds of systems, but while it describes what a number looks like, it doesn't require implementations to support any particular representations in terms of how large / small / precise the values can be.

To make sure that systems can round-trip data - read it and write it back without unintentional modifications - we were very careful about only using a numeric literal form for very simple common cases (16-bit and 32-bit signed values, to be precise).
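For example, a response might serialize an Int32 property as a bare numeric literal while quoting a Decimal property as a string. The entity and property names in this fragment are illustrative, not taken from the post:

```json
{
  "d": {
    "ProductID": 42,
    "UnitPrice": "19.9900"
  }
}
```

A browser script must therefore convert the string (for example, with parseFloat) before doing arithmetic on the decimal value.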

Wayne Walter Berry explains Creating Primary Keys Across Databases in this 7/15/2010 post to the SQL Azure Team blog:

When you are horizontally partitioning data across multiple SQL Azure databases or using Data Sync Server for SQL Azure, there might come a time when you need to write to a member database without causing primary key merge conflicts. In this case you need to be able to generate a primary key that is unique across all databases. In this article we will discuss different techniques to generate primary keys and their advantages and disadvantages.


One way to generate unique primary keys is to use the NEWID() function in Transact-SQL, which generates a GUID as a uniqueidentifier data type. The GUID is guaranteed to be unique across all databases.
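As a sketch (the table and column names here are illustrative), a table can default its primary key to NEWID() so that every database generates non-conflicting keys:

```sql
-- Illustrative table; each INSERT gets a GUID key unique across all databases
CREATE TABLE Customers
(
    CustomerId uniqueidentifier NOT NULL DEFAULT NEWID(),
    CustomerName nvarchar(100) NOT NULL,
    PRIMARY KEY (CustomerId)
);
```

Because every database can call NEWID() independently, no coordination between the member databases is needed at insert time.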

Advantages
  • It is a native type to SQL Azure.
  • The GUID space is effectively inexhaustible; you will never run out of GUIDs.
  • It works with both horizontal partitioning and Data Sync Services.

Disadvantages

  • Based on the GUID alone, there is no way to identify which database generated it. This can cause extra complications when doing horizontal partitioning.
  • The uniqueidentifier data type is large and will add to the size of your row.

Another option is to use a bigint data type in place of an int. In this technique, the primary key is still generated by an identity column; however, the identity in each database starts at a different offset. The different offsets create non-conflicting primary keys.

The first question most people ask is whether the bigint data type is big enough to represent all the primary keys needed. The bigint data type can be as large as 9,223,372,036,854,775,807 because it is stored in 8 bytes. This is 4,294,967,298 times bigger than the maximum size of an int data type: 2,147,483,647. This means that you could potentially have 4 billion SQL Azure databases horizontally partitioned with tables of around 2 billion rows. More information about data types and sizes can be found here.

On the first SQL Azure database you would create the table like this:
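A sketch of what such a statement might look like (the TEST table name echoes the later example in this post; the seed and increment values are illustrative):

```sql
-- First database: the identity seed starts at 1
CREATE TABLE TEST
(
    [id] bigint IDENTITY(1,1),
    PRIMARY KEY ([id])
);
```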


On the second SQL Azure database you would create the table like this:
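And for the second database, the same table with a widely spaced seed so the two ranges cannot overlap (the offset is illustrative; 2,147,483,648 is one more than the int maximum, which suits the legacy-table-as-first-partition scenario):

```sql
-- Second database: the identity seed starts beyond the first database's range
CREATE TABLE TEST
(
    [id] bigint IDENTITY(2147483648,1),
    PRIMARY KEY ([id])
);
```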


And continue incrementing the seed value for each database in the horizontal partitioning.

Advantages
  • It is easier to upgrade legacy tables that used an int data type as the primary key to a bigint data type (the legacy table would be the first partition).
  • You can repartition more easily than with some of the other techniques, since moving rows involves a straightforward CASE statement (not a recalculated hash).
  • The data tier code implementing the partitioning can figure out which partition a primary key is in, unlike with a uniqueidentifier primary key.
  • The bigint data type consumes 8 bytes of space, which is smaller than the uniqueidentifier data type’s 16 bytes.

Disadvantages
  • The database schema for each partition is different.
  • This technique works well for horizontal partitioning, but not for Data Sync Services.
Primary Key Pool

In this technique, a single identity database is built where all the primary keys are stored, but none of the data. This identity database just has a set of matching tables that contain a single column of integers (int data type) as an auto-incrementing identity. When an insert is needed on any of the tables across the whole partition, the data tier code inserts into the identity database and fetches the @@IDENTITY. This primary key from the identity database is used as the primary key to insert into the member database or the partition. Because the identity database is generating the keys, there is never a conflict.
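A sketch of the identity-database side, assuming a hypothetical Orders data table elsewhere in the partition:

```sql
-- In the identity database: a matching key table for the Orders data table
CREATE TABLE OrdersKeys
(
    [id] int IDENTITY,
    PRIMARY KEY ([id])
);

-- Allocate a key: insert an empty row and read back the generated identity
INSERT INTO OrdersKeys DEFAULT VALUES;
SELECT @@IDENTITY;
-- The returned value is then used as the [id] for the INSERT into the
-- Orders table in the member or partition database.
```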

So how many integers can a 50 Gigabyte SQL Azure database hold? This is a good question, because if you run out of space on the database acting as the primary key pool, then you can’t insert any more rows. If all the tables in the primary key database were single-column integer tables, you could have 25,000 tables with two million rows each (a table size of 2 Megabytes) in a 50 Gigabyte SQL Azure database. 50 Gigabytes is currently the largest SQL Azure database you could use for your primary key database. Or some combination of that, like 12,000 tables of 4 million rows, or 6,000 tables of 8 million rows.

Advantages
  • This is the easiest technique to implement with legacy tables; there are no data type changes. However, the IDENTITY attribute needs to be removed from the data tables.
  • Works with both horizontal partitioning and Data Sync Services.

Disadvantages
  • This technique works best in low-write, high-read scenarios; in write-heavy workloads, contention for the primary key database becomes an issue.
  • Every INSERT requires an extra query to the primary key database, which can cause performance issues, especially if the data database and the primary key database are not in the same data center.
  • There is no way for the data tier layer to identify which partition the data is located in from the primary key alone. For this reason it works best with Data Sync Services, where you have a known member database you are reading from and writing to.
  • There is a constraint on the number of primary keys (int data type) you can hold in the largest 50 Gigabyte SQL Azure database, which limits the number of rows in your overall partition.
Double Column Primary Key

Another technique is to use two columns to represent the primary key. The first column is an integer that specifies the partition or the member database. The second column is an int IDENTITY column that auto-increments. With multiple member or partition databases the second column alone would have conflicts; however, together the two columns create a unique primary key.

Here is an example of a CREATE TABLE statement with a double column primary key:

CREATE TABLE TEST ([partition] int, 
    [id] int IDENTITY,
        PRIMARY KEY([partition], [id]));

Remember you need to add a second column for all the primary keys, and a second column to all foreign key references.
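As an illustrative sketch, a child table referencing the TEST table above would carry the partition column in its foreign key (the TESTDETAIL table and its columns are hypothetical):

```sql
-- Child table: the foreign key must include the partition column
CREATE TABLE TESTDETAIL
(
    [partition] int,
    [detailId] int IDENTITY,
    [testId] int,
    PRIMARY KEY ([partition], [detailId]),
    FOREIGN KEY ([partition], [testId])
        REFERENCES TEST ([partition], [id])
);
```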

Advantages
  • It is easier to upgrade legacy tables by adding an additional column than it is to convert those tables to uniqueidentifier primary keys.
  • Two integers consume 8 bytes of space, which is smaller than the uniqueidentifier data type’s 16 bytes.
  • It works with both horizontal partitioning and Data Sync Services.
  • The data tier code implementing the partitioning can figure out which partition a primary key is in, unlike with a uniqueidentifier primary key.

Disadvantages
  • It feels unnatural and cumbersome to have two columns be your primary key.
  • The data tier code needs to keep track of two integers to access the primary key. This can be abstracted away by using language elements to create a new data type, like a custom struct in C#.

Having two columns in a composite primary key doesn’t feel “unnatural and cumbersome” to me. Bear in mind that the Northwind sample database’s Order Details table has a composite primary key (OrderID + ProductID), which also prevents duplicating line items in an order.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Emily Gibson posted MSDN Windows Azure [AppFabric W]ebcast series, 8/11-9/15 on 9/13/2010:

Attend this upcoming, free Windows Azure webcast series and learn how Windows Azure AppFabric fits into the Microsoft cloud strategy and see simple demonstrations of both Windows Azure AppFabric Access Control and Windows Azure AppFabric Service Bus.

Take a look at Windows Azure AppFabric pricing and how the pay-as-you-go model affects application usage. And find out about the Web Resource Authorization Protocol (WRAP) and Simple Web Token (SWT) protocols used by Access Control, and learn how to manage rule sets for claim processing.

It appears that you must register at Microsoft Events for individual Webcasts.

The “Geneva” Team announced existence of The Federated Identity Forum on 7/15/2010:

In order to consolidate our support for our Federated Identity platforms, we are removing the 'Email the Blog Author' functionality of this blog and recommending that anyone with questions related to AD FS, WIF, or CardSpace head over to our forum, located here.

This forum is actively monitored by members of the product group, as well as MVPs and the community.  We hope that we will be better able to provide support and answer your questions by directing them all through this single forum.

-The AD FS, WIF, and CardSpace teams

Mary Jo Foley quoted Amitabh Srivastava on Project Sydney in her Microsoft's upcoming cloud PDC [2010]: What's on tap? post to ZDNet’s All About Microsoft blog of 7/13/2010:

Srivastava also said Microsoft will share more about its Project Sydney — IPV6 technology designed to connect on-premises and cloud servers. (Last year, Microsoft officials said Sydney would be finalized before the end of 2010; I’m thinking a beta by the PDC timeframe is more likely.)

Here’s Mary Jo’s earlier commentary about Project Sydney from her Three new codenames and how they fit into Microsoft's cloud vision post of 11/17/2009:

Project Sydney [is t]echnology that enables customers to connect securely their on-premises and cloud servers. Some of the underlying technologies that are enabling it include IPSec, IPV6 and Microsoft’s Geneva federated-identity capability. It could be used for a variety of applications, such as allowing developers to fail over cloud apps to on-premises servers or to run an app that is structured to run on both on-premises and cloud servers, for example. Sydney is slated to go to beta early next year and go final in 2010.

Bob Muglia demonstrated Project Sydney during his Remarks by Ray Ozzie, chief software architect, and Bob Muglia, president of Server and Tools Business at the Professional Developers Conference (PDC) 2009 in Los Angeles on 11/17/2009:

[T]here are a lot of ways in which connectivity between on-premises datacenter and the centers in the cloud are very important.

The data service is one. Another is the application messages and connecting the message flow between applications. So, one of the things we've done, we showed this last year, we built a message bus, a service bus that is able to connect applications that are running in your datacenter together with Windows Azure cloud as well as with other trading partners that you might work with. So, that's an important level thing because it enables connectivity from point to point.

Now, all of this really requires identity, identity infrastructure and that concept of federation, that idea of a federated identity that can connect your on-site authentication system, typically Active Directory, together with Windows Azure as well as together with your trading partners and having a common access control service, which is a feature of the Windows Azure platform, was an important way to do this.

Now, these services are really critical and they're an important part of building the next generation of cloud applications. But sometimes it's important to be able to get low-level network access back onto an existing datacenter. And today what I'm pleased to announce is with Windows Azure, next year we will be entering into beta with a new project, something we call Project Sydney. And what Project Sydney does is it enables you to connect your existing servers inside your datacenter together with services that are running with Windows Azure. [Emphasis added.]

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team delivered another Real World Windows Azure: Interview with Steve Orenstein, CEO at Connect2Field story on 7/15/2010:

As part of the Real World Windows Azure series, we talked to Steve Orenstein, Chief Executive Officer at Connect2Field, about using the Windows Azure platform to deliver the company's software-as-a-service application. Here's what he had to say:

MSDN: Tell us about Connect2Field and the services you offer.

Orenstein: Our company's namesake, Connect2Field, is a web-based field service and job management software application. By using Connect2Field, businesses can manage customers, track jobs, dispatch and schedule work to employees in the field, manage inventory, provide quotes, and communicate easily with customers via email and short message service (SMS).

MSDN: What were the biggest challenges that Connect2Field faced prior to implementing the Windows Azure platform?

Orenstein: Maintaining server uptime was a challenge for us, and a critical one, because any downtime for our customers can have a negative impact on their daily operations. We also need to ensure replication of our Microsoft SQL Server databases to give customers peace of mind that their data is protected; however, managing that replication became a headache for us. In addition, maintaining server hardware was very expensive and time consuming.

MSDN: Can you describe the solution you built with Windows Azure to address your need for high performance?

Orenstein: We migrated our existing application, which was originally built with Microsoft ASP.NET, to the Windows Azure platform. We are taking advantage of Microsoft SQL Azure and its built-in replication for our relational database needs. We're also using the Windows Azure Content Delivery Network to cache content at global data center nodes; this helps us deliver the uptime and high levels of performance that we need for our global audience.

MSDN: What makes your solution unique?

Orenstein: Connect2Field is the first field service software application that runs in the cloud and is available to small and large customers. Any service business can get instant access to the software for as little as U.S.$65 a month. Connect2Field also has an application programming interface (API) that allows our application to connect with other cloud-based applications.

Zane Adams contributes to the Infor/Azure story with a Partners at WPC discuss benefits of SQL Azure and AppFabric post of the same date:

The Worldwide Partner Conference (WPC) is a great chance for us to sit down with industry partners and understand their latest plans for adopting cloud technologies and the opportunities they see with SQL Azure, Windows Azure AppFabric, and “Dallas” – as they adopt the Windows Azure Platform and deliver value to customers in new ways.  I’ll be sharing some of these stories and usage scenarios with you by linking to some very short partner interviews, presentations, and demos captured onsite here at the conference.

First up, here is a link to Brian Rose at Infor, discussing the benefits they see with SQL Azure and Windows Azure AppFabric.  Infor is a $2 billion per year company with more than 70,000 customers worldwide.  We are pleased to see Infor adopt the Windows Azure Platform!

Bruce McNee posted Infor and Windows Azure: The Big Boys Begin to Weigh In on 7/15/2010 as a Saugatuck Technologies Research Alert (Web site registration required):

What is Happening? Microsoft held its annual Worldwide Partner Conference (WPC) this week in Washington, D.C. As expected, Microsoft continued to emphasize that it was embracing the Cloud at the core of its current and future strategy.

While a more detailed look at the Microsoft WPC event will be pursued elsewhere in Saugatuck’s published research, this Research Alert focuses on the embrace of the Windows Azure Platform by important legacy ISVs such as Infor as a signpost of broader PaaS adoption by established ISVs, and the growing demand for cloud-based manufacturing / ERP solutions.

On Monday, July 12th, Infor made several key announcements related to its evolving business strategy. First and foremost, it announced the launch of Infor24, its core initiative to deliver Cloud versions of its business solution software. Key to this strategy is a new partnership with Microsoft (announced in late-June) – with Infor now leveraging a range of Microsoft solutions (Azure, SharePoint, Silverlight and others) as it builds out next-generation versions of its software. …

Bruce continues with the usual “Why is it Happening” and “Market Impact” analyses.

Wade Wegner explains Using the default SQL Server instance for Windows Azure development storage instead of the default localhost\SQLEXPRESS instance on 7/15/2010:

This tip isn’t new, but it’s still useful.  I found myself building a new development box this week, and I didn’t want to use SQLExpress for the Windows Azure development storage.  Instead, I wanted to use the default instance for SQL Server.

It’s pretty simple to do this – after you install the Windows Azure SDK and Tools, go to a command prompt and browse to the following folder: C:\Program Files\Windows Azure SDK\v1.2\bin\devstore (or wherever you installed the SDK).  From there, use the DSInit.exe tool:

DSInit.exe /sqlInstance:.

Remember that the . is a reference to the default instance.  If you want to target an instance name, you can use:

DSInit.exe /sqlInstance:YourInstanceName

Now you’ll see that the development storage databases are created on the SQL Server instance you specified.


Note: this tip is also helpful for when you get the error message “Failed to create database ‘DevelopmentStorageDb20090919’” during the automatic configuration of Windows Azure development storage.

David Pallman reported Azure ROI Calculator Updated for Azure 1.2 in this 7/15/2010 post:

Neudesic's Azure ROI Calculator has been updated. You can get to the calculator and other useful Azure resources from

The updated calculator reflects new features and pricing model changes added to the Windows Azure Platform in the 1.1 and 1.2 releases, including these:

  • Windows Azure Compute - VM Size
  • Windows Azure Content Delivery Service - added
  • SQL Azure Database - broader selection of databases sizes
  • AppFabric - Access Control Service revised pricing model
  • AppFabric - Service Bus revised pricing model / connection packs

Although there is a very comprehensive TCO tool available at, Azure ROI Calculator is useful when you need to compute cloud charges or the ROI of a migration quickly.

David Aiken demonstrates How to deploy an ASP.NET web site to Windows Azure! in this 7/14/2010 post:

Note I say web site – rather than web application.

If you have a web site, which is just a folder of files and stuff rather than a full blown VS project, you can easily deploy into Windows Azure without having to convert to a web app. (Of course you should really think about converting it because you get a better tool experience, debugging etc.)

I’m going to assume you have the Windows Azure SDK installed.

So the first thing is to get your files laid out on disk correctly. For our purposes I have a really simple web site containing a single default.aspx file. This is in a folder named ASPNetRole.


Next you need to create the cscfg and csdef files that Windows Azure requires to build a package. In the folder above I created the 2 files:


ServiceConfig.cscfg looks like this:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="myaspapp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="ASPNetRole">
    <ConfigurationSettings/>
    <Instances count="2" />
  </Role>
</ServiceConfiguration>

ServiceDefinition.csdef looks like this

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="myaspapp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="ASPNetRole" vmsize="Small">
    <InputEndpoints>
      <!-- Must use port 80 for http and port 443 for https when running in the cloud -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
    <ConfigurationSettings/>
  </WebRole>
</ServiceDefinition>

Make sure the role names match what you want to call your role.

Now you have these 2 files you are all ready to go!

There are 3 tasks you may want to do.

1. Package the web site to run in the local developer fabric.

"c:\Program Files\Windows Azure SDK\v1.2\bin\cspack.exe" 
"ServiceDefinition.csdef" /role:ASPNetRole;ASPNetRole; /copyOnly

2. Run the local developer fabric package:

"C:\Program Files\Windows Azure SDK\v1.2\bin\csrun.exe" 
"ServiceDefinition.csx" "ServiceConfig.cscfg" /launchBrowser

3. Package ready to deploy to Windows Azure.

"c:\Program Files\Windows Azure SDK\v1.2\bin\cspack.exe" 
"ServiceDefinition.csdef" /role:ASPNetRole;ASPNetRole;

When you run the last script it will generate the package you need to deploy to Windows Azure. I usually pop the above into 3 script files named prefabric.cmd, runfabric.cmd and buildpackage.cmd.

Have fun.


Geva Perry asks Oh, SaaS, Is There Anything You Can't Do? in this 7/15/2010 post to his Thinking Out Cloud blog:

image In APIs and the Growing Influence of Developers I wrote about three general eras in software business models: High-Touch, Open and Cloud.

In the so-called "open era" we saw many startups disrupt an existing market by simply taking advantage of the open model -- and specifically open source -- with either the (more common) "open core" approach or the "open crust" approach.

So the idea was (and to some extent, it still is) that you pick any established software product category and you come out with a similar product that freely provides the source code and almost always makes available a version of the product for free download.

Now, we are in the midst of the same trend but with cloud computing. Just pick any IT product category and provide it as a service in the cloud. As expected, when the trend started there were many skeptics who consistently said that for THIS product category, cloud computing won't work (whether software-as-a-service, platform-as-a-service or infrastructure-as-a-service, as the case may be).

So 10 years ago when Salesforce.com had just started with their SaaS CRM app, people said that enterprises would never put their most sensitive lead, opportunity and sales information "on some web site." I remember reading for the first time the news that Salesforce.com had landed Merrill Lynch as a customer. Sure it was a tiny deal made by some small division within the company, but I knew it was all over.

A similar thing happened not too long ago with Zoho landing a deal with GE, and since then both Zoho and Google Apps have won many deals with large enterprises replacing Microsoft Office with their SaaS offerings.

In the case of productivity apps, such as Office, Google Apps and Zoho, the naysayers said that the SaaS offerings are severely lacking in features. They neglected, however, to realize a few things:

  • With a SaaS product, rolling out features is an incremental low-effort process, so that new features will be rapidly added
  • For the vast majority of users, and the majority of uses, a relatively rudimentary set of features is good enough
  • The SaaS offerings had a new set of features that desktop-installed applications would find it extremely difficult to replicate, for example, collaboration and sharing, automatic backups in the cloud, accessibility from any computer, etc.

I am increasingly convinced that there isn't any category of software for which the cloud is not a good delivery method. Here's another example:

I recently spoke to Tom Greenhaw, founder of Cashier Live. Cashier Live provides a software-as-a-service offering for point-of-sale (PoS): in other words, cash registers. What I found interesting about Cashier Live is that it requires non-commodity hardware and the software needs to interact with the hardware: open the cash register, print receipts, accept credit card and bar code scans, etc. Something that you wouldn't think is an optimal fit for SaaS.

But Cashier Live is doing well with hundreds of customers. They have overcome the technical challenge of hardware integration (for now using proprietary browser capabilities of Internet Explorer, but they are working on solutions for other browsers). And they provide compelling benefits, particularly for small and medium retailers who do not have an in-house IT staff and find it difficult to make big upfront expenditures on software licenses.

Do you have an example of another SaaS offering that is seemingly a bad fit for its category but makes sense to you? Please share in the comments.

<Return to section navigation list> 

Windows Azure Infrastructure

Cloud Ventures reported “Using Cloud for new Microsoft SaaS delivery models” as a preface to their Microsoft Cloud Services post of 7/15/2010:

Of course Microsoft has its own Cloud hosting service, Azure, but there are still other scenarios where Microsoft software can be Cloud deployed, and these offer a fertile product development area for web hosting providers.

These 'Microsoft Cloud Services' (MCS) offer the ideal way to move into more of an MSP mode, providing a fuller range of IT outsourcing services and growing recurring revenues accordingly, but without having to stretch too far from their core web hosting product set.

Most organizations already have apps like Sharepoint and Exchange deployed internally, so hosted versions don't offer any pain-solving solutions. In contrast, new Cloud-based services that add value to these existing installations are very well targeted niche opportunities.

Collaboration Productivity: Microsoft is positioning 'Application Platforms' as the ideal way to package their technologies for rapid business solutions.

The underlying 'xRM' development toolset is specifically intended to engineer portable and modular software ideal for distributed Cloud environments, so catering for this, and providing database replication between data-centres, are the kinds of ways service providers can add value to these projects.

Furthermore, today's world demands a level of security and legal compliance capability that traditional IT methods can no longer cope with but Cloud Storage can, and via plug-ins to apps like Sharepoint and Exchange it can be applied in a practical and very productive manner.

Microsoft positions Sharepoint as a key enabler of staff productivity. In their white paper People working together (13-page PDF) they describe how it can help achieve numerous organizational benefits.

The essence of the modular Application Framework approach is that this collaboration capability can then be further embedded into other apps like Dynamics CRM, as described in the Relational Productivity Application white paper (19-page PDF), and also into solution accelerators. Their 'Innovation Process Management' solution is a combination of Sharepoint and PPM.

Compliant Cloud Storage: However, as the Aberdeen report Securing Unstructured Data (33-page PDF) highlights, these tools bring with them increased security risks.

Unstructured data refers to Word documents, Excel spreadsheets, multimedia and all the other raw files that proliferate across users' laptops. The IT organization might secure Sharepoint in terms of technical security, hosting a central server behind the corporate firewall and restricting VPN access to it, but this does nothing to secure the documents that are then downloaded and shared promiscuously via email, Instant Messenger or USB key.

Ultimately enterprise systems like HP-Trim are used for storing these records, but prior to this step there is considerable workflow and collaboration using tools like Sharepoint. If this 'work in progress' data is stored on user laptops and on single Sharepoint servers, without adequate security measures, then it's at risk of being lost or stolen, impacting both Business Continuity and Security and affecting IM compliance accordingly.

Since these files can also contain sensitive structured data, like customer records, they must be protected as if locked in the central database behind the corporate firewall. Obviously they're not; they're running around wild on users' laptops. With an EMC report predicting a 44x explosion in this type of data, the problem is only going to grow bigger and uglier.

The Aberdeen report's author offers a technology bundle to tackle this issue which, in a nutshell, can be "baked into" Cloud Storage. A Cloud platform will manage the intelligent automation of dynamically provisioning data-centre resources to meet user needs, uniting them into a single service fabric that can achieve very high availability and performance, and this can include software to encrypt and replicate data across multiple centres so that it's secured to the highest of standards.

By further linking in other Cloud services, like time-stamping of data for authentication purposes, data becomes 'Business Continuity approved' and also legally compliant via the same process. Information can live on unaffected in the event of a disaster in one data centre, and can simultaneously be certified in line with record-keeping legislation.
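
The time-stamping idea can be sketched in a few lines: hash the document and attach a UTC timestamp, so the record can later attest that a given document existed unchanged at that time. This is illustrative only, and the record shape is my own invention; a production service would have a trusted third party sign the record (RFC 3161-style trusted timestamping) rather than relying on the local clock.

```python
import hashlib
from datetime import datetime, timezone

def timestamp_record(document: bytes) -> dict:
    """Fingerprint a document and note when it was sealed."""
    return {
        "sha256": hashlib.sha256(document).hexdigest(),
        "timestamped_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(document: bytes, record: dict) -> bool:
    """True only if the document is byte-identical to what was sealed."""
    return hashlib.sha256(document).hexdigest() == record["sha256"]

record = timestamp_record(b"work-in-progress contract draft")
print(verify(b"work-in-progress contract draft", record))  # True
print(verify(b"tampered contract draft", record))          # False
```

Replicating that small record alongside the data across centres is what lets the same mechanism serve both continuity and compliance.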

David Linthicum asserts “The cloud computing market is overheated, and many cloud providers are making some very avoidable blunders” in a preface to his The top 5 mistakes cloud vendors make -- and you should watch for post of 7/15/2010 to InfoWorld’s Cloud Computing blog:

As the cloud computing market continues to heat up, I'm seeing some very profound mistakes made by both established and emerging cloud computing providers. Watch out for these blunders as you explore possible cloud providers.

Cloud computing mistake No. 1: Not focusing on the APIs
Whether the vendor is providing applications, infrastructure, or platforms, their clouds need to provide API access. APIs should be required for everything from accessing a credit report, such as for a CRM provider, to provisioning a virtual server, such as for an infrastructure provider. Even social networking providers, such as Twitter and Facebook, provide exceptional APIs -- and that's typically the way we interact with them.

Unfortunately, APIs are often an afterthought, and they exist as a subset of features the cloud provider offers -- or not at all. In the future, cloud providers will be defined by their APIs, so they'd better get good at them.
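
As a concrete sketch of what API-first access looks like, here is the common request-signing idiom: compute an HMAC-SHA256 signature over the request and send it in an auth header. The endpoint path and parameter layout below are hypothetical stand-ins, not any real provider's API; the signing pattern itself is the familiar one (AWS-style request signing works along these lines).

```python
import hashlib
import hmac
import time

def sign_request(secret_key: str, method: str, path: str, timestamp: int) -> str:
    """Build an HMAC-SHA256 signature over the canonical request,
    as many cloud provider APIs require."""
    message = f"{method}\n{path}\n{timestamp}".encode()
    return hmac.new(secret_key.encode(), message, hashlib.sha256).hexdigest()

# Hypothetical call to a provisioning endpoint -- the URL is
# illustrative only.
ts = int(time.time())
signature = sign_request("my-secret", "POST", "/v1/servers", ts)
print(signature)  # 64-character hex digest to place in an auth header
```

A provider that treats its API as the product makes this the front door for everything, with the web console just another client of the same endpoints.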

Cloud computing mistake No. 2: No integration strategy
The fact of the matter is that companies won't place their data in the cloud if there is no clear way to sync it back to on-premises systems. Cloud providers should not offer consulting engagements when you say the "bad" word "integration." Instead, they should offer you a predefined strategy and sets of technologies. That means having partnerships with the right technology vendors and a clear map for how to synchronize data from on-premises to cloud as well as from cloud to cloud.
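
At bottom, a predefined sync strategy comes down to mechanics like the following last-writer-wins merge keyed on update timestamps. This is a deliberately naive sketch, and the record shape is assumed for illustration; real integration tooling also handles deletes, conflict resolution and schema mapping.

```python
def sync_records(on_premises: dict, cloud: dict) -> dict:
    """Merge two copies of a dataset, letting the most recently
    updated version of each record win. Records look like
    {"value": ..., "updated": <sortable timestamp>} -- an assumed
    shape for this sketch."""
    merged = dict(cloud)
    for key, rec in on_premises.items():
        if key not in merged or rec["updated"] > merged[key]["updated"]:
            merged[key] = rec
    return merged

# A record edited on-premises after the cloud copy wins the merge
local = {"acct-1": {"value": "new address", "updated": 2}}
remote = {"acct-1": {"value": "old address", "updated": 1},
          "acct-2": {"value": "untouched", "updated": 1}}
print(sync_records(local, remote))
```

A provider with a real integration strategy ships this plumbing (or partners for it) rather than selling it as a consulting engagement.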

Cloud computing mistake No. 3: Outage defensiveness
IT systems go down from time to time, and cloud computing providers are no exception. However, there seems to be a quick circling of the wagons when an outage occurs and no admission of the facts behind the issue, nor approaches to avoid the problem in the future. Providers shouldn't spin their mistakes. Instead, they should admit to them and learn from them. We'll understand.

Cloud computing mistake No. 4: Confusing SLAs
I can always tell that lawyers wrote these things, and like the user license agreements that we never can read completely when installing software, these SLAs need to be much easier to understand. Why can't they be in English?

Cloud computing mistake No. 5: Spinning standards
Everyone wants standards, and cloud providers are certainly telling anyone who will listen that they are moving to standards. However, the action seems to be in writing the press releases, not actually adopting standards. They can't create standards by forming alliances and writing white papers; they actually need to figure out the details. And you need to keep holding their feet to the fire.

<Return to section navigation list> 

Windows Azure Platform Appliance 

My Windows Azure Platform Appliance (WAPA) Announced at Microsoft Worldwide Partner Conference 2010 post of 7/12/2010 (updated 7/13/2010) provides a one-stop shop for 80+ feet of mostly full-text reports about Microsoft's WAPA, as well as its implementations by Dell, Fujitsu, HP, and eBay.

Because of its length, here are links to the post's sections:

Ian Grant’s Azure: buy the product, sell the company article of 7/15/2010 estimates the effect of Windows Azure and WAPA on Microsoft future margins:

What is Microsoft doing, tying up with HP, Dell, eBay and Fujitsu to sell its Azure cloud computing appliance? At its annual Worldwide Partner Conference, Microsoft turned its cloud strategy on its head by allowing hardware partners and large enterprises to build private Azure clouds.

James Staten, vice-president and principal analyst with market researcher Forrester, believes Microsoft has two motives for offering the Azure appliance.

The first is to drag HP, Dell and Fujitsu, which are the three largest enterprise-focused outsourced IT firms, into the Microsoft cloud computing tent, he says. Microsoft will manage the cloud layer, but not own the infrastructure, the customer, or the customer interface.

The trio are free to package the service as they wish, including offering it as a hosted private cloud delinked from the Microsoft Windows Azure service. The point here is to offer customers a choice of public and private cloud environments, and a choice of hosts, or, as Microsoft's director of product management in the Windows server division, Mike Schutz, said, "Cloud any way you want it."

Staten points out that the current entry level is 1,000 servers. For the moment, that pretty much restricts the deal to the biggest public, hosted and private suppliers of infrastructure-, software- and service-as-a-service, he says.

"Microsoft is telegraphing that the cloud is a safe place for them to play," he says. In other words, Microsoft will not treat them purely as suppliers of commodity mips.

Optimising the software

In return, the hardware arms of the three firms will get Microsoft's help in optimising the software for their kit, and Redmond's rubber stamp on what emerges. They might even get to sell their hardware into Microsoft's own Azure datacentres, although Microsoft will not comment on that, or even on whose kit it uses now.

But there will surely be HP, Dell and Fujitsu kit in Microsoft datacentres. Microsoft's Schutz told Computer Weekly that the optimised Azure hardware would be "exactly the same as what's running in Microsoft".

The second reason, says Staten, is to preserve Microsoft's hegemony over the desktop. Microsoft's biggest threats here are VMware and Google, he says. VMware is already working with Salesforce to deliver cloud-based apps and a programming environment in which to develop them.

The day after Microsoft announced its Azure appliance, VMware launched vSphere 4.1, which offers faster performance, automatic configuration tools, and lower costs per application. It also launched a new licensing regime, which kicks off in September. VMware says it offers customers better alignment between software costs and benefits delivered.

Staten points out that Microsoft's announcement offers Windows and SQL Server as the key Azure components. This distinguishes it from VMware and Google. VMware, its association with Salesforce excepted, is more about managing datacentre resources, while Google is still mainly about apps. But both are bringing app development tools and application programme interfaces (APIs) to market, encroaching on Microsoft's turf.

How to handle licensing

According to Staten, Microsoft hasn't yet worked out how to handle licensing with its three partners. "I think they are going to let the market try a number of options and then go with what works, like they did with .Net," he says.

The biggest play in the enterprise market will be hosted private clouds, says Staten. This is where hosts can offer and capture extra value from users. Having access to the .Net and SQL Server installed base is clearly an opportunity for the Azure hosts, he adds.

But Oracle is the big fish in enterprise databases, and it has come out with a very small cloud product, despite a lot of talk. Staten thinks that is about to change. "But they haven't said how," he says.

Could Netsuite, in which Oracle boss Larry Ellison holds a two-thirds share, be a player here? After all, it has just called for partners to take on Netsuite as a white label platform and to develop apps for vertical markets.

Staten is cautious. "It's possible," he says, "but Netsuite's main competitor is really Salesforce. It's not really a database-driven offer."

Privacy regulations

Another factor that could slow acceptance of hosted services, particularly in Europe, is privacy regulations, which largely prevent data from crossing borders. Microsoft has an Ireland-based datacentre precisely for this reason. Allowing its three partners to host Azure where they like clearly helps to overcome the data location issue, making cloud a more attractive option for Europeans in particular.

One thing that is not yet clear is how much money Microsoft expects to make from the cloud. European chairman Jan Muhlfeit told Computer Weekly earlier that its average revenue per user would shrink, but that it would make that up from more users. Forrester's Staten thinks per-use charging could also help. "It's like Starbucks," he says. "You don't notice how much you spend on a coffee, but if you had to pay for your annual consumption up-front, you'd probably drink water."

That may be true, but if Redmond is sharing that income with others, that leaves even less to distribute to shareholders. Maybe it's time to buy the Microsoft product as you sell the company's shares.

Dmitri Sotkinov reported “Microsoft Together with their Hardware Partner Offers the Azure Containers” as a preface to his less-than-optimistic Azure Appliance, a Turn-key Private Cloud? post of 7/14/2010:

Maybe not just yet, unless you are an extremely large hosting company or an enterprise with big IT and research and development (R&D) budgets.

To recap: this week at its Worldwide Partner Conference (WPC), Microsoft announced that, together with their hardware partners, they will start offering (some time later this year, as a limited release for folks like Dell, HP, Fujitsu and eBay) Azure containers, basically giving others the ability to run pretty much what Microsoft is running in its own public Windows Azure cloud datacenters.

This is an important move from Microsoft, one they kind of hinted at in the past and something we expected them to do back in 2009. Microsoft is not the only hosting company in the world, and there are governments and enterprises who – for security and other reasons – are continuing to invest in their own datacenters. These are big markets that Microsoft wants to address rather than let go to VMware and other competitors.

However, the biggest drawback, which all observers seem to be missing, is that while the Azure technology stack is similar to the regular Microsoft Windows/IIS/SQL/.NET stack, it is not completely identical. You just cannot take an existing Windows Server application and point it at Azure. Even Microsoft's own flagship server applications such as Exchange, SharePoint and the Dynamics CRM and ERP systems do not run on Azure. Applications actually have to be ported to Azure, which is certainly doable but does require R&D effort on the side of application creators.

Today the set of applications available for Azure is so limited that I can probably count them on my fingers: Microsoft ported its SQL database, SugarCRM just released an Azure version of its tool, Quest Software has a set of cloud-based management services for administrators, and FullArmor has a beta of its endpoint management tool.

Maybe there are one or two other applications that I missed, but you get the story. As of today, even if you get an Azure container (and you actually have to buy one; you will not be able to re-purpose the servers you already have), there is not much you will be able to run on it.

For eBay this may be worth it: they have their own custom-developed application and big budgets for developing and improving it. For most other folks out there, applications need to come first to make private Azure valuable enough. I am not saying that this will not happen – folks in Redmond are doing their best to recruit their partners to form the Microsoft cloud ecosystem – but we are definitely not there yet.

It appears to me that Dmitri’s looking for a Windows Azure App Store to compete with Apple’s and Google’s app stores. All in due time.

Ian Carlson, a Microsoft Senior Product Manager in the Server and Virtualization Product Marketing Organization, appeared in Microsoft Showcase | Core Infrastructure: Private Cloud Initiatives on 7/9/2010 to tout the new System Center Virtual Machine Manager Self-Service Portal 2.0 in an 00:08:05 video segment: 


Mary Jo Foley quoted Amitabh Srivastava in her Microsoft's upcoming cloud PDC [2010]: What's on tap? post of 7/13/2010:

Microsoft is going to talk about what’s happening with its promised virtual machine role capability for Windows Azure, known as Windows Server Virtual Machine Roles on Azure, at the October PDC, said Amitabh Srivastava, Senior Vice President of Microsoft’s Server and Cloud Division.

“IT (professionals) want infrastructure as a service, platform as a service and software as a service,” Srivastava said. “Azure was our platform as a service play. The VM (roles) will be our infrastructure as a service play.”

The VM role feature will help users run their on-premises applications in the cloud, but applications that are hosted that way won’t be multitenant-capable.

Ian didn’t cover the Windows Azure Platform Appliance in the video, because it wasn’t unveiled until 7/12/2010 at the Worldwide Partners Conference in Washington, DC.

<Return to section navigation list> 

Cloud Security and Governance

Gregor Petri asks “Does lock-in simply come with the territory or can it be avoided?” as an introduction to his Vendor Lock-in and Cloud Computing post of 7/15/2010:

 IT vendor lock-in is as old as the IT industry itself. Some may even argue that lock-in is unavoidable when using any IT solution, regardless of whether we use it “on premise” or “as a service”. To determine whether this is the case, we examine traditional lock-in and the to-be-expected impact of cloud computing.

Vendor lock-in is seen as one of the potential drawbacks of cloud computing. One of Gartner's research analysts recently published a scenario where lock-in and standards even surpass security as the biggest objection to cloud computing. Despite efforts like Open Systems and Java, we have managed to get ourselves locked in with every technology generation so far. Will the cloud be different, or is lock-in just a fact of life we need to live with? Wikipedia defines vendor lock-in as:

“In economics, vendor lock-in, also known as proprietary lock-in, or customer lock-in, makes a customer dependent on a vendor for products and services, unable to use another vendor without substantial switching costs. Lock-in costs which create barriers to market entry may result in antitrust action against a monopoly.”

Let’s examine what lock-in means in practical terms when using IT solutions and how cloud computing would make this worse or better. For this we look at four dimensions of lock-in:

Horizontal lock-in: This restricts the ability to replace a product with a comparable or competitive product. If I choose solution A (let's for example take a CRM solution or a development platform), then I will need to migrate my data and/or code, retrain my users and rebuild the integrations to my other solutions if I want to move to solution B. This is a bit like when I buy a Prius, I cannot drive a Volt. But it would be nice if I could use the same garage, charging cable, GPS, etc. when I switch.

Vertical lock-in: This restricts choice in other levels of the stack and occurs if choosing solution A mandates use of database X, operating system Y, hardware vendor Z and/or implementation partner S. To prevent this type of lock-in the industry embraced the idea of open systems, where hardware, middleware and operating systems could be chosen more independently. Before this time hardware vendors often sold specific solutions (like CRM or banking) that ran only on their specific hardware/OS and could be obtained in their entirety only from them. So it's a bit like today's (early-market) SaaS offerings, where everything needs to be obtained from one vendor.

Diagonal (or inclined) Lock-in: This is the tendency of companies to buy as many applications as possible from one provider, even if its solutions in some of those areas are less desirable. Companies picked a single vendor to make management, training and especially integration easier, but also to be able to demand higher discounts. That trend led to large, powerful vendors, which in turn caused higher degrees of lock-in. For now we call this voluntary form of lock-in diagonal Lock-in (although "inclined", a synonym for diagonal, may describe it better).

Generational Lock-in: This last one is as inescapable as death and taxes and is an issue even if there is no desire to avoid horizontal, vertical or diagonal lock-in. No technology generation and thus no IT solution or IT platform lives forever (well, maybe with the exception of the mainframe). The first three types of lock-in are not too bad if you had a good crystal ball and picked the right platforms (e.g. Windows and not OS/2) and the right solution vendors (generally the ones that turned out to become the market leaders). But even such market leaders at some point reach end of life. Customers want to be able to replace them with the next generation of technology without it being prohibitively expensive or even impossible because of technical, contractual or practical lock-in.
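
One standard mitigation for horizontal lock-in is to keep vendor-specific code behind an abstraction layer, so switching providers means writing a new adapter rather than rewriting the application. Here is a minimal sketch; the interface and the in-memory stand-in are invented for illustration, and a real adapter would wrap a vendor SDK (the Azure blob or Amazon S3 client libraries, say).

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """Provider-neutral storage interface. Application code targets
    this, never a vendor SDK directly."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(BlobStore):
    """Stand-in adapter for this sketch; swapping vendors means
    writing another small class like this one."""
    def __init__(self) -> None:
        self._data: dict = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

store: BlobStore = InMemoryStore()
store.put("report.docx", b"draft contents")
print(store.get("report.docx"))
```

The abstraction doesn't eliminate switching costs (data still has to move), but it confines them to one layer instead of spreading them through the whole codebase.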

The impact of cloud computing on lock-in

How does cloud computing, with incarnations like SaaS (software as a service), PaaS (platform as a service) and IaaS (infrastructure as a service), impact the above? In the consumer market we see people using a variety of cloud services from different vendors, for example Flickr to share pictures, Gmail to read email, Microsoft to chat, Twitter to Tweet and Facebook to … (well, what do they do on Facebook?), all seemingly without any lock-in issues. Many of these consumer solutions now even offer integration amongst each other. Based on this one might expect that using IT solutions “as a service” in an enterprise context also leads to less lock-in. But is this the case?

<Return to section navigation list> 

Cloud Computing Events

The Worldwide Partners Conference (WPC) 2010 Team published on 7/15/2010 links to most WPC session videos in the right frame of its Videos page:


Select 2010, Session and the subject you want (see above right) from the dropdown lists to filter the videos to your taste.

John Seo announced that the Center for Technology Innovation at Brookings will host a Moving to the Cloud: How the Public Sector Can Leverage the Power of Cloud Computing event at the Falk Auditorium, The Brookings Institution, 1775 Massachusetts Ave., NW, Washington, DC on Wednesday, 7/21/2010, 10:00 AM to 12:00 PM:

The U.S. government spends billions of dollars each year on computer hardware, software and file servers that may no longer be necessary. Currently, the public sector makes relatively little use of cloud computing, even though studies suggest substantial government savings from a migration to more Internet-based computing with shared resources.

On July 21, the Center for Technology Innovation at Brookings will host a policy forum on steps to enhance public sector adoption of cloud computing innovations. Brookings Vice President Darrell West will moderate a panel of experts, including David McClure of the General Services Administration, Dawn Leaf of the National Institute of Standards and Technology, and Katie Ratte of the Federal Trade Commission. West will release a paper detailing the policy changes required to improve the efficiency and effectiveness of federal computing.

After the program, panelists will take audience questions.

Register here.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Amazon Web Services posted their 7/15/2010 Newsletter:

Dear AWS Community,

Over the past month, several AWS service teams have released exciting features and capabilities to help customers be successful. Amazon EC2 has released several features, including Cluster Compute Instances, designed for high-performance computing (HPC) applications and support for user managed kernels. Amazon S3 has released support for Bucket Policies and support for Reduced Redundancy Storage in the AWS Management Console. Amazon Simple Queue Service has introduced a free tier, giving customers the first 100,000 requests at no charge. We have also released several resources: a web hosting whitepaper, a new Mechanical Turk best practices guide, and developer resources for Windows and .NET.

  • News & Announcements
    • Announcing Cluster Compute Instances for Amazon EC2
    • Amazon EC2 Support for User Managed Kernels
    • Amazon VPC Adds IP Address Assignment Capability
    • Amazon CloudWatch Adds Monitoring for Amazon EBS Volumes
    • Amazon CloudFront Enhances Log Files
    • Amazon S3 Announces Support for Bucket Policies
    • Amazon S3 Enhances Support for Reduced Redundancy Storage
    • Amazon RDS Now Supports SSL Encrypted Connections
    • Amazon SQS Introduces Free Tier and Adds Support for Larger Messages and Longer Retention
    • New Amazon Mechanical Turk Best Practices Guide
    • AWS Authentication Survey
  • Developer Resources
  • AWS On The Road
  • Virtual Events

Matthew Weinberger reported Amazon S3 Gains Better Support For Less Redundancy in a 7/15/2010 post to the MSPMentor blog:

Back in May, I mentioned Amazon Web Services’ (AWS) decision to offer an option for less redundancy on their Amazon S3 storage cloud. I found the move perplexing. Well, Amazon S3 now gets enhanced Reduced Redundancy Storage (RRS) support, but it leaves me even more confused about who it’s for. Here’s the scoop.

Customers using the AWS Management Console, according to the company’s blog entry, can now select RRS as an option when putting new files in the cloud – or take a bunch of existing ones and downgrade them to the lower storage tier.

The other part of the Amazon S3 RRS update is the ability to configure your storage bucket to send you a message by way of the Amazon Simple Notification Service whenever object loss occurs.

When RRS was first announced, I postulated that it was an attempt to make Amazon S3 more attractive for consumer use – but these new features sound pretty enterprise-focused. Maybe Amazon Web Services will enhance their higher-redundancy option for the Amazon RDS database product to match.


Joseph Goedart clarified Verizon’s new entry into the Health Information Exchange business with his Verizon Targets HIE Business post to the Health Data Management blog of 7/15/2010:

Telecommunications giant Verizon soon will unveil a Web-based health information exchange service.

Scalable for entities of various sizes, the Verizon-hosted HIE will enable use of existing organizational information systems, processes and workflows without large capital expenditures, according to the Basking Ridge, N.J.-based vendor.

Verizon's partners in the venture are Richmond, Va.-based MedVirginia; Warwick, R.I.-based MEDfx Corp.; and Redwood Shores, Calif.-based Oracle Corp.

MedVirginia, also the first client, will replace its core HIE platform from Wellogic with the Verizon HIE. MedVirginia brings institutional knowledge to the venture, with five years' experience in developing approaches to governance and policy development, and perspectives from regional and state levels, says Michael Matthews, CEO. "We know the areas of policy that need to be developed."

MEDfx brings Web portal and systems integration technology to Verizon's HIE, and Oracle contributes enterprise master patient index software, a health transactions processing engine and database systems.

More information is available at

James Staten reported VMware Embraces Per-VM Pricing - About Time in this 7/13/2010 post to the Forrester Research blogs:

VMware today released an incremental upgrade to its core vSphere platform and took the opportunity to do some product repackaging and pricing actions - the latter being a big win for enterprise customers. The vSphere 4.1 enhancements focused on scalability to accommodate larger and larger virtual pools. The number of VMs per pool and number of hosts and VMs per instance of vCenter have been ratcheted up significantly, which will simplify large environments. The new network and storage I/O features and new memory compression and VMotion improvements will help customers pushing the upper limits of resource utilization. Storage vendors will laud the changes to vStorage too, which finally ends the conflict between what storage functions VMware performs versus what arrays do natively.

The company also telegraphed the end of life for ESX in favor of the more modern ESXi hypervisor architecture.

But for the majority of VMware shops the pricing changes are perhaps the most significant. It's been a longstanding pain that in order to use some of the key value add management features such as Site Recovery Manager and AppSpeed you had to license them across the full host even if you only wanted to apply that feature to a few VMs. This led to some unnatural behavior such as grouping business critical applications on the same host - cost optimization that trumps availability best practices. Thankfully that has now been corrected.

VMware has also taken the high road on how it will track the deployment of these licenses. The traditional approach would have been to force customers to plan for the use of these products upfront and buy the number of licenses they think they would need over the next year. A great idea if the typical enterprise virtual environment were static, but no one's is - nor should it be. As we strive to transition our virtual environments into clouds we have to accommodate lifecycle management of VMs and elasticity. This is the opposite of a static environment.

The VMware solution: audit the high-water mark of managed VMs and bill against the trailing 12 months. This gives customers the flexibility to apply these management tools where needed, optimize their virtual pools for max utilization and overall availability while not being tied down by license constraints. Sure, this approach could result in a big bill the following year if you go nuts with the use of AppSpeed, but no longer will unnatural acts be necessary.
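
The arithmetic of that model is simple enough to sketch: bill against the peak (high-water mark) of managed VMs over the trailing twelve months of audit data, so a temporary burst sets the bill but customers aren't forced to forecast usage upfront. The function below is my illustration of the model as described, not VMware's actual billing logic.

```python
def licenses_owed(monthly_vm_counts: list) -> int:
    """Peak number of managed VMs over the trailing 12 months
    of audit samples (most recent last)."""
    trailing = monthly_vm_counts[-12:]
    return max(trailing) if trailing else 0

# A one-month burst to 40 VMs sets the bill, even though
# steady-state usage is 25
print(licenses_owed([25, 25, 40, 25]))  # 40
```

This is exactly the "big bill if you go nuts with AppSpeed" effect: a single spike dominates the year's charge, but nothing constrains where the tools are applied.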

For some VMware management products this model makes less sense - such as CapacityIQ and Chargeback. Tools that help you track and forecast your resource consumption should be applied at the pool level. But the change in direction is welcome. Plus the new price points show a greater alignment to delivered value.

This pricing action also sets the stage for the introduction of vCloud later this quarter which will potentially open the door to a per-hour pricing need.

<Return to section navigation list> 
