Sunday, February 13, 2011

Windows Azure and Cloud Computing Posts for 2/11/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 2/13/2011 with new articles marked

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Subscribe to the OakLeaf Systems blog for your Amazon Kindle! Only $1.99 per month.


Help move the OakLeaf blog’s Bestsellers Rank out of the (very) long tail.

Notice the reference to SQL Data Services (SDS). I need to update the Product Description but so far haven’t found out how to do it.

Azure Blob, Drive, Table and Queue Services

• Joannes Vermorel announced the release of Lokad.Cloud v1.2, a .NET O/C (object-to-cloud) mapper for Windows Azure storage, in a 2/13/2011 post to Google Code:

O/C mapper (Object to Cloud). Leverage Windows Azure without getting dragged down by low-level technicalities.

NEW: Feb 13th, 2011, Lokad.Cloud 1.2 released. Lokad.Cloud.Storage.dll is now fully stand-alone.

June 23rd, 2010, Lokad is honored by the Windows Azure Partner Award of 2010 for Lokad.Cloud.

Strong typed put/get on the blob storage:

IBlobStorageProvider storage = ... ; // init snipped
CustomerBlobName name = ... ; // init snipped
var customer = storage.GetBlob(name); // strong type retrieved

Strong typed enumeration of the blob storage:

foreach(var name in storage.List(CustomerBlobName.Prefix(country)))
{
    var customer = storage.GetBlob(name); // strong type, no cast!
    // do something with 'customer', snipped
}

Scalable services with implicit queue processing:

[QueueServiceSettings(AutoStart = true, QueueName = "ping")]
public class PingPongService : QueueService<double>
{
    protected override void Start(double x)
    {
        var y = x * x; // 'x' has been retrieved from queue 'ping'
        Put(y, "pong"); // 'y' has been put in queue named 'pong'
    }
}

Key orientations

  • Strong typing.
  • Scalable by design.
  • Cloud storage abstraction.

Key features (check the FeatureMap)
  • Queue Services as a scalable equivalent of Windows Services.
  • Scheduled Services as a cloud equivalent of the task scheduler.
  • Strong-typed blob I/O, queue I/O, table I/O.
  • Autoscaling with VM provisioning.
  • Logs and monitoring.
  • Inversion of Control on the cloud.
  • Web administration console.

This is yet another open source project from Lokad (Forecasting API).

If you want something more powerful and flexible for your Windows Azure enterprise solution - check out our Lokad.CQRS.

Live apps using Lokad.Cloud:

Want your app listed here? Just contact us.

Wely Lau explained Applying Custom Domain Names to Windows Azure Blob Storage in a 2/11/2011 post:


If you register a Windows Azure storage service account, you will be prompted to enter a valid account name. By default, this account name is used to form your blob URI.

In many cases, we don’t want the default domain. Instead, we want to use our own domain (or sub-domain) name for the blob address.

I am happy to inform you that it’s possible to do this with Windows Azure Blob Storage.

How To

To set up the custom domain, I assume you have the following items:

  1. Your own Windows Azure Subscription (associated with your Windows Live Id)
  2. Your own domain, registered with a domain registrar

Creating a Windows Azure Storage Account

1. First, log in to the Windows Azure Developer Portal, select the “Hosted Service, Storage Account & CDN” tab at the right-hand side, and subsequently select “Storage Accounts”.


2. The list of subscriptions associated with your Windows Live ID will be shown. Click the New Storage Account button on the ribbon bar.


3. When the Create New Account dialog shows up, select the intended subscription and enter your account name. Note that the account name must be globally unique, because it forms your default blob URL.

Subsequently, select your preferred region or affinity group (if you have any) and click the Create button.


If everything goes well, you will see that the status of the newly created storage account becomes “Created”.

Adding Your Custom Domain

4. Click on the newly created storage account, and then click the Add Domain button on the ribbon bar.


5. Immediately, a dialog box shows up. You will need to enter your custom domain name there and click Configure.


Verifying Your Domain Name

6. You are not done yet: you will see instructions to create a CNAME record at your domain registrar’s portal, pointing to the verification address shown. The reason is that Windows Azure requires you to verify that you are the owner of the custom domain.


7. Since that is where my domain was registered, I’ll need to perform those actions at my registrar’s portal. I believe the steps are more or less the same with other domain registrars.


Do note that it may take a few minutes to propagate.

8. Go back to the Azure Developer Portal, click on the storage account’s custom domain, and click Validate Domain. After a few moments, you will see your storage custom domain status change to “Allowed”.


Are we done? No, not yet. What we’ve done so far is verify that the domain belongs to us and that we have access to it.

Mapping the Custom Domain

9. Now, go back to your domain registrar portal again.

Create another CNAME record with the sub-domain “blob” and point it to your storage account’s full default blob address.


As usual, it may take up to a few minutes for the domain to update, so be patient.
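The mapping above can be sketched in a few lines. This is an illustrative sketch only (plain Python, no Azure API calls; the account name, domain, container and blob names are all made-up examples) that derives the default blob host from a storage account name, the CNAME record to create at the registrar, and the resulting custom URL. It relies on the well-known "accountname.blob.core.windows.net" naming convention:

```python
# Illustrative sketch only -- no Azure SDK involved. The account name, domain,
# container, and blob names below are hypothetical examples.

def custom_domain_mapping(account_name, custom_domain, container, blob):
    default_host = account_name + ".blob.core.windows.net"
    custom_host = "blob." + custom_domain
    return {
        # CNAME record to create at the registrar: alias -> canonical host
        "cname_record": (custom_host, default_host),
        "default_url": "http://%s/%s/%s" % (default_host, container, blob),
        "custom_url": "http://%s/%s/%s" % (custom_host, container, blob),
    }

m = custom_domain_mapping("mystorage", "example.com", "images", "logo.png")
print(m["cname_record"])  # ('blob.example.com', 'mystorage.blob.core.windows.net')
print(m["custom_url"])    # http://blob.example.com/images/logo.png
```

Once the CNAME has propagated, both the default URL and the custom URL resolve to the same blob.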

Let’s Test It Out

10. Since this is a newly created Azure storage account, it obviously doesn’t have any blobs inside.

To test whether the custom domain works, try uploading a blob to the account. You can upload blobs in many ways, including tools such as Cerebrata Cloud Storage Studio, Azure Storage Explorer, and many more.


11. When I type my original blob address in the browser address bar:


How about using our custom domain name?


Here you go, it works!

Rob Gillen reviewed Moving Applications To The Cloud with Windows Azure in a 2/11/2011 post:

I just finished reading a book from the Microsoft Patterns & Practices group called Moving Applications to the Cloud on the Microsoft Windows Azure Platform. I’ve had the book for a few months, and when I first received it, I read the first chapter or two, decided it wasn’t worth the read, and set it aside.

Lately, however, I picked it up again – finished the book, and am glad I did. Don’t get me wrong, it didn’t magically morph into a superb spectacle of literary greatness, but I did find that as I read further, the authors moved further from the very basics of the Windows Azure platform and the content became increasingly interesting.

If you are new (or relatively so) to the Windows Azure platform and contemplating the moving of existing applications to the cloud, this is a worthwhile discussion of a fictitious scenario that did just that. The scenario is slightly on the cheesy side, but realistic enough to help you think through issues you may be facing in your business.

If you are well experienced with the platform, you will likely find this a bit dry – especially the first portions. You’ll also likely be distracted or bothered by the not-so-covert marketing that takes place. That said, the book covers some more complex topics such as multiple tasks/threads sharing the same physical worker role, various optimization topics, and more. In the end, I’m glad I read it and feel that I learned some things from the book.

My last thought has nothing to do specifically with the book, but rather a growing frustration of mine with the Windows Azure platform – the design of the table storage platform. Upon reading books such as this I’m reminded (they stress it *many* times) how important your partition key/row key strategy is, and how literally hosed you are if you get it wrong. This compares with my recent experiences with Amazon’s SimpleDB product, and the delta couldn’t be more striking. Both platforms solve essentially the same problem, but in the case of SDB, it is effortless (at least by comparison). I don’t have to think of partition keys, or be overly concerned with how the underlying storage platform works… I just put data in it. Additionally, *every* column is indexed and performs reasonably under queries. I can’t shake the feeling that the Azure team is missing it here – there has to be a way to get a well-designed, horizontally scaling table structure without placing such a design burden on the users. [Emphasis added.]
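Rob’s point about partition keys can be made concrete with a toy model. The sketch below is plain Python (not the Azure storage API; the entities are invented) and shows why the choice matters: a query that supplies the PartitionKey touches a single partition, while one that doesn’t must scan them all:

```python
# Toy model of Azure Table storage partitioning -- plain Python, not the
# Azure SDK. Entity values are invented for illustration.

from collections import defaultdict

class ToyTable:
    def __init__(self):
        # PartitionKey -> {RowKey: entity}
        self.partitions = defaultdict(dict)

    def insert(self, pk, rk, entity):
        self.partitions[pk][rk] = entity

    def point_query(self, pk, rk):
        # PartitionKey + RowKey: one partition, one row -- the fast path.
        return self.partitions[pk].get(rk)

    def scan_query(self, predicate):
        # No PartitionKey: every partition must be visited -- the slow path.
        return [e for rows in self.partitions.values()
                for e in rows.values() if predicate(e)]

table = ToyTable()
table.insert("US", "cust-001", {"name": "Ada", "country": "US"})
table.insert("NO", "cust-002", {"name": "Ole", "country": "NO"})

print(table.point_query("US", "cust-001")["name"])  # Ada
print(len(table.scan_query(lambda e: e["name"] == "Ole")))  # 1
```

Without secondary indexes, any lookup on a non-key column falls into the scan path, which is exactly the design burden Rob describes.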

I’ve been complaining about the lack of secondary indexes on Azure tables for more than a year. My last post on the topic was What Happened to Secondary Indexes for Azure Tables? of 11/5/2010.

TechRepublic offers Microsoft’s Windows Azure Jump Start (05): Windows Azure Storage, Part 1 podcast (site registration and enabling popups required):

Building Cloud Applications with the Windows Azure Platform. This podcast is Part 1 of a two-part section and covers the following options for storage as well as ways to access storage when leveraging the Windows Azure Platform: Non-Relational Storage, Relational SQL Azure Storage, Blobs, Drives, RESTful web services, and Content Delivery Networks (CDN). The Windows Azure Jump Start is for all architects and developers interested in designing, developing and delivering cloud-based applications leveraging the Windows Azure Platform. The overall target of this podcast is to help development teams make the right decisions with regard to cloud technology, the Azure environment and application lifecycle, Storage options, Diagnostics, Security and Scalability.

Download here.

<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi described ISV of the Month: Sitecore in a 2/11/2011 post to the SQL Azure team blog:

Sitecore has been chosen as our first ISV of the month.  Sitecore provides a content management solution for Enterprise websites and has strong momentum within the ISV community and with customers alike.  In the video below, Sitecore outlines why they are a leader in the Gartner magic quadrant.

Sitecore’s CMS Azure Edition leverages the considerable advantages of Azure for scalable, Enterprise-class deployment. Azure allows Sitecore to extend its solution to the cloud, allowing customers and partners to easily and quickly scale websites to new geographies and respond to surges in demand.

Microsoft Azure provides a global deployment platform for Sitecore public-facing web servers. Your Core and Master Sitecore servers are deployed locally at your facilities behind your firewall, but your public-facing Sitecore web servers are hosted at Microsoft facilities. Windows Azure’s compute, storage, networking, and management capabilities seamlessly and reliably link to your on-premise Sitecore application and servers.

The advantage of this hybrid approach is that users can manage their main Sitecore editing and database servers securely on-premise and push website content to their front-end Sitecore-based Windows Azure cloud servers in dispersed geographic locations.

Sitecore has an upcoming webcast, register here.

Find the Press Release here.

Sitecore CMS Azure Edition appears to be a classic hybrid cloud app.

Mark Kromer (@mssqldude) reported The Microsoft Cloud BI Story, Dateline February 2011 in a 2/10/2011 post to SQL Server Magazine’s SQL Server BI Blog:

Here we are in February 2011 and I thought it might make sense in the SQL Mag BI Blog to take stock of where Microsoft is at in terms of the “Cloud BI” story.

First, let’s catch you up on a little research and see where different industry leaders, including Microsoft, land in terms of their definition and products for “Cloud BI”. As with many things in the IT industry, “Cloud BI” will mean different things to different people. I’m not listing them all here, just a few that I’ve worked with/for in the past and that I see quite a bit day-in and day-out:

Gartner calls “BI in the Cloud” a combination of 6 elements: data sources, data models, processing applications, computing power, analytic models, and sharing or storing of results. This focuses more on the analytics side of the BI equation and does not address data warehouses or data integration. This makes sense from the perspective that most vendors are taking today toward Cloud BI with smaller data sets (data marts) in the Cloud, or on premises, but authoring publishing tools in the Cloud.

Oracle, on the other hand, continues to frame Cloud in terms of “private cloud”: creating on-premises infrastructures that are virtualized, flexible, and elastic, and include concepts such as chargeback. The key is that Oracle sees itself as the infrastructure provider for on-prem and Cloud-based providers. You won’t find a public Cloud version of Oracle database, Hyperion or OBIEE outside of hosting partner providers or the upcoming Amazon Oracle database offering.

In terms of offering a BI platform in the cloud, open-source BI vendors are turning to Amazon and RightScale (which runs on EC2) to make use of their virtualized, hosted infrastructure for MySQL, RDS and EC2. For example, Pentaho is offering “Cloud BI” versions of their product, also leveraging Amazon’s EC2 and MySQL. I also see BIRT On Demand quite a bit these days; they use Amazon RDS instead of MySQL. Jaspersoft is now offering their BI reporting platform in the Cloud through RightScale’s platform.

Thinking back to some of the very first offerings of BI in the Cloud marketed by Crystal Reports and others in the mid-2000s, there were offerings by those vendors that were based around reporting tools hosted on a public server with subscription-based pricing. You would typically point to a data source such as a spreadsheet on your laptop to port the data into those tools. In fact, that data replication, synchronization and integration from legacy data sources in large company databases into Cloud databases and Cloud-based BI metadata or canonical models is still something that is evolving. One of the most exciting things that I am watching for in the Microsoft Azure platform for Cloud BI & database around SQL Azure is Federated databases. These would allow us to create applications that can have a much easier time at database sharding and parallel processing. And on the BI front, the Data Sync tool could then be automated to move cleansed and transformed data into SQL Azure data marts for analysis.
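To make the sharding idea concrete, here is a toy sketch (plain Python, no SQL Azure involved; the order data is invented) of the pattern that federated databases would simplify: route each row to a shard by key, and fan a query out across all shards in parallel:

```python
# Toy sharding sketch -- the shards are plain dicts standing in for
# federated member databases; the rows are invented example data.

from concurrent.futures import ThreadPoolExecutor

SHARDS = [dict() for _ in range(4)]  # stand-ins for federation members

def shard_for(key):
    # Route each key to one shard (hash-based routing).
    return SHARDS[hash(key) % len(SHARDS)]

def insert(key, row):
    shard_for(key)[key] = row

def fan_out(predicate):
    # Query every shard concurrently and merge the results.
    with ThreadPoolExecutor() as pool:
        results = pool.map(
            lambda s: [r for r in s.values() if predicate(r)], SHARDS)
    return [r for part in results for r in part]

for i in range(100):
    insert("order-%d" % i, {"id": i, "amount": i * 10})

big = fan_out(lambda r: r["amount"] >= 900)
print(len(big))  # orders 90..99 -> 10
```

The routing and fan-out plumbing is exactly what a federation feature would push down into the platform instead of application code.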

So, back to Microsoft and their “Cloud BI” story. Today, what I can demonstrate from the Microsoft stack is a rich feature set of database functionality in the Cloud with SQL Azure. I can move that data around to different databases in different data centers in the Cloud and I can report on that data with analytics and dashboards from Excel 2010 with PowerPivot and Report Builder. Both of those reporting tools are sitting on my laptop; they are not in the Cloud. So it is not 100% Cloud-based. But I do not need a separate server infrastructure to do my Excel-based PowerPivot data integration and analysis. And I can always use the existing on-prem SharePoint or SSRS services to publish the reports that are built from the Cloud-based SQL Azure data.

Now that more & more Microsoft partners are providing Windows Phone 7 mobile dashboards (see Derek’s previous articles on this topic in SQL Magazine), you could build a complete Cloud-based Microsoft BI solution. Some of these tools are in preview CTP releases right now. The new Data Sync CTP 2 moves data between on-premises SQL Server and SQL Azure, and Reporting Services in Azure is also being made available as a CTP. PowerPivot is not yet available in Azure, but keep your eyes & ears tuned to the Microsoft sites for PowerPivot news.

Mark tweeted on 2/11/2011:

Oracle still doesn't do Cloud BI. Their "Cloud BI" whitepaper talks about OBI 11g as "Cloud Ready". That's it??

<Return to section navigation list> 

MarketPlace DataMarket and OData

• Jonathan Carter (@LostInTangent) updated CodePlex’s WCF Data Services Toolkit on 2/9/2011. From the Home page:

Elevator Pitch

The WCF Data Services Toolkit is a set of extensions to WCF Data Services (the .NET implementation of OData) that attempt to make it easier to create OData services on top of arbitrary data stores without having deep knowledge of LINQ.

It was born out of the needs of real-world services such as Netflix, eBay, Facebook, Twitpic, etc. and is being used to run all of those services today. We've proven that it can solve some interesting problems, and it's working great in production, but it's by no means a supported product or something that you should take a hard commitment on unless you know what you're doing.

In order to know whether you qualify for using the toolkit, you should be looking to expose a non-relational data store as an OData service (if your store is relational, EF + WCF Data Services solves that scenario beautifully). When we say "data store" we really do mean anything you can think of (please be responsible though):

  • An XML file (or files)
  • An existing web API (or APIs)
  • A legacy database that you want to re-shape the exposed schema dramatically without touching the database
  • A proprietary software system that provides its data in a funky one-off format
  • A cloud database (e.g. SQL Server) mashed up with a large schema-less storage repository (e.g. Windows Azure Table storage)
  • A CSV file zipped together with a MySQL database
  • A SOAP API combined with an in-memory cache
  • A parchment scroll infused with Egyptian hieroglyphics
That last one might be a bit tricky though...
Further Description

WCF Data Services provides the functionality for building OData services and clients on the .NET Framework. It makes adding OData on top of relational and in-memory data very trivial, and provides the extensibility for wrapping OData on top of any data source. While it's possible, it currently requires deep knowledge of LINQ (e.g. custom IQueryables and Expression trees), which makes the barrier of entry too high for developing services for many scenarios.

After working with many different developers in many different industries and domains, we realized that while lots of folks wanted to adopt OData, their data didn't fit into that "friendly path" that was easily achievable. Whether you want to wrap OData around an existing API (SOAP/REST/etc.), mash-up SQL Azure and Windows Azure Table storage, re-structure the shape of a legacy database, or expose any other data store you can come up with, the WCF Data Services Toolkit will help you out. That doesn't mean it will make every scenario trivial, but it will certainly help you out a lot.

In addition to this functionality, the toolkit also provides a lot of shortcuts and helpers for common tasks needed by every real-world OData service. You get JSONP support, output caching, URL sanitization, and more, all out of the box. As new scenarios are learned, and new features are derived, we'll add them to the toolkit. Make sure to let us know about any other pain points you're having, and we'll see how to solve it.
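For readers new to OData, the query side that WCF Data Services implements for you boils down to URL conventions. The small helper below is only a sketch (Python for neutrality; the service root and entity set are hypothetical) showing the shape of the $-prefixed system query options an OData service answers:

```python
# Illustrative sketch of OData system query options ($filter, $top, $orderby).
# The service root and entity set below are hypothetical examples.

from urllib.parse import quote

def odata_query(service_root, entity_set, **options):
    # OData system query options are prefixed with '$'; values are
    # percent-encoded (spaces in filter expressions become %20).
    parts = ["$%s=%s" % (k, quote(str(v))) for k, v in options.items()]
    return "%s/%s?%s" % (service_root, entity_set, "&".join(parts))

url = odata_query("http://example.org/service.svc", "Products",
                  filter="Price gt 10", top=5, orderby="Name")
print(url)
# http://example.org/service.svc/Products?$filter=Price%20gt%2010&$top=5&$orderby=Name
```

Hand-writing URLs like this is easy; the hard part the toolkit addresses is translating them into calls against an arbitrary backing store.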



See Beth Massi (@bethmassi) will present Creating and Consuming OData Services for Business Applications on 2/16/2011 at 6:30 to 8:30 PM in Microsoft’s San Francisco Office, 835 Market Street, Suite 700, San Francisco, CA 94103 in the Cloud Computing Events section below.

Klint Finley recommended Putting Information on the Balance Sheet in a 2/11/2011 post to the ReadWriteEnterprise blog:

RedMonk's James Governor and EMC VP of Global Marketing Chuck Hollis are calling for enterprises to put information on the balance sheet. In other words, start considering useful information as an asset and poorly managed information as a liability.

"If you've got an expensive manufacturing machine, you invest periodically to keep the asset running in top shape, otherwise its value falls sharply over time," writes Hollis. "Are information bases any different? How many databases in your organization are providing declining value simply because there isn't a regular program of data maintenance and enhancement?"

Not a bad idea. Taking it a step further, Gartner and Forrester have been agitating for IT to be a profit center, and putting information on the balance sheet would be a step towards quantifying the value IT provides an organization.

One way to make information even more clearly an asset is to sell information in a marketplace like Azure DataMarket. And, for organizations that simply can't do this, paying attention to these markets may help determine the value of information.

Even though the CIO role is increasingly under fire, it's also an increasingly important role. Intel CIO Diane Bryant said in a recent interview:

It's remarkable how dependent the business is on IT. Every business strategy we're trying to deploy, every growth opportunity we're pursuing and every cost reduction all funnels back to an IT solution. How do you go from hundreds of customers to thousands of customers? You do that through technology. You don't scale the sales force by 10 times. I look across Intel's corporate strategy and I can directly tie each one of our pillars to the IT solution that's going to enable it. It's a remarkable time to be in IT. All of a sudden the CIO had better be at the table in the business strategy discussions because they can't launch a strategy without you.
The new role for IT will be to make money for companies, not to just support operations. That's a big shift in thinking, and it's time to start making plans.

Photo by Philippe Put.

Webnodes AS reported Webnodes Announces Support for OData in Their Semantic CMS in a 2/12/2011 press release:

Webnodes AS, a company developing an ASP.NET based semantic content management system, today announced full support for OData in the newest release of their CMS.

In the latest version of Webnodes CMS, there’s built-in support for creating OData feeds. OData, also called the Open Data Protocol, is an open protocol for sharing content. Content is shared using tried-and-tested web standards, making it easy to access the information from a variety of applications, services, and stores.

“OData is a new technology that we believe strongly in”, said Ole Gulbrandsen, CTO of Webnodes. “It’s a big step forward for sharing of data on the web.”

Share content between web and mobile apps
One of the many uses for OData is integration of website content with mobile apps. OData exposes content in a standard format that can be easily used on popular mobile platforms like iOS (iPhone and iPad), Android and Windows Phone 7.

First step towards the semantic web
While OData is not seen as a semantic standard by most people, Webnodes sees it as the first big step towards the semantic web. The gap between the current status quo, where websites are mostly separate data silos, and the vision for the semantic web is huge. OData brings the data out of the data silos and onto the web in a standard format to be shared. This bridges the gap significantly, and brings the semantic web a lot closer to reality after many years as the next big technology.

About Webnodes CMS
Webnodes CMS is a unique ASP.NET based web content management system that is built on a flexible semantic content engine. The CMS is based on Webnodes’ 10 years of experience developing advanced web content management systems.

About Webnodes AS
Webnodes AS is the developer of the world-class semantic web content management system Webnodes CMS, which enables companies to develop and use innovative, class-leading websites. Webnodes is located in Oslo, Norway, and has implementation partners around the world.

Marcelo Lopez Ruiz posted OData, jQuery and datajs on 2/11/2011:

image Over the last couple of days, I've received a number of inquiries about the relationship between JSON, OData, jQuery and datajs and how to choose between them.

These aren't all the same kinds of things, so I'll take them one by one.

Talking the talk

JSON is a format to represent data, much like XML. It's the rules for reading and writing text and figuring out what pieces of data have what name and how they relate. JSON, however, doesn't tell you what this data means, or what you can do with it.

OData, on the other hand, is a protocol that uses JSON as well as ATOM and XML. If you're talking to an OData service and get a JSON response back, you know which pieces of information are property values, which are identifiers, which are used for concurrency control, etc. It also describes how you can interact with the service using the supported formats to do things with the data: create it, update it, delete, link it, etc.
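To make that distinction concrete, here is a hand-written fragment shaped like an OData JSON response (the product values are invented; only the "d" wrapper and "__metadata" conventions come from the protocol). Plain JSON parsing yields the values; OData's conventions tell you which parts identify the resource versus which parts are property values:

```python
# Illustrative only: a hand-written fragment shaped like an OData (v2) JSON
# response. The product values are made up for this example.

import json

payload = """
{
  "d": {
    "__metadata": {
      "uri": "http://services.odata.org/OData/OData.svc/Products(1)",
      "type": "ODataDemo.Product"
    },
    "Name": "Milk",
    "Price": 3.5
  }
}
"""

entry = json.loads(payload)["d"]      # plain JSON gets you this far
meta = entry.pop("__metadata")        # OData says what the pieces mean

print(meta["uri"])   # the identifier: which resource this entry is
print(meta["type"])  # the entry's type
print(entry)         # the remaining keys are the property values
```

A library like datajs layers exactly this kind of interpretation (plus ATOM, batching, and versioning) on top of raw parsing.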

So far, we've only been talking about specifications or agreements as to how things work, but we haven't discussed any actual implementations. jQuery and datajs are two specific JavaScript libraries that actually get things done.

Walking the walk

Now we get to the final question: how do you compare and choose between jQuery and datajs? The answer is actually quite simple, because they do different things, so you use one or the other or both, depending on what you're trying to do.

jQuery is great at removing differences between browser APIs, manipulating the page structure, doing animations, supporting richer controls, and simplifying network access (I'm not part of the jQuery development team, so my apologies if I'm mischaracterising something). It includes AJAX support that allows you to send text, JSON, HTML and form-style fields over the web.

datajs is focused on handling data (unsurprisingly). The first release will deliver great OData support, including both ATOM and JSON formats, the ability to parse or statically declare metadata and apply it to improve the results, the ability to read and write batches of requests/responses, smoothing format and versioning differences, and whatever else is needed to be a first-class OData citizen. We don't foresee getting into the business of writing a DOM query library or a control framework - there are many other libraries that are really good at this and we'd rather focus on enabling new functionality.

Hope this clarifies things, but if not, just drop a message and I'll be happy to discuss.

Greg Duncan updated the OData Primer’s Consuming OData Services laundry list on 2/11/2011:

Articles related to Querying OData services
Articles related to OData and Microsoft Office
Articles Related to Specific Client Libraries
Windows Phone 7
Articles Related to Specific Services
Nerd Dinner
Microsoft DataMart (fka Codename "Dallas")
Microsoft Conference Feeds (PDC, etc)
Windows Live
Articles Related to Specific Third Party Components

Frederick Harper reported Drupal 7: out of the box SQL Server support in a 2/9/2011 post to the Web Central Station blog:

We are more than happy to welcome the new version of Drupal. For those of you who don’t know Drupal, you should take a look at it. Drupal is a free open-source CMS that helps you easily publish and manage the content on a website. The new version is easier to use, more flexible and more scalable. This version is also the first one that comes out of the box with SQL Server support, which brings even greater interoperability with the Microsoft platform.

In order for a SQL Server database to work with Drupal 7, it needs a PDO driver and a Drupal abstraction layer. Microsoft is providing the PDO driver for SQL Server, and Commerce Guys is releasing the Drupal SQL Server module. You can download it or use the Microsoft Web PI to install it.

Did you say Azure? At this moment, we don’t support SQL Server with Drupal on Azure, but it’s a work in progress. For now, Drupal on Azure works with MySQL, but don’t be sad: we’ve got 4 new modules for you! [Emphasis added.]

  • Bing Maps Module: enable easy & flexible embedding of Bing Map in Drupal content types (like articles for example)
  • Silverlight Pivot viewer Module: enable easy & flexible embedding of Silverlight Pivot in Drupal content types, using a set of preconfigured data sources (OData, a, b, c).
  • Windows Live ID Module: allow Drupal user to associate their Drupal account to their Windows Live ID, and then to login on Drupal with their Windows Live ID
  • OData Module: allow data sources based on OData to be included in Drupal content types (such as articles). The generic module includes a basic OData query builder and renders data in a simple HTML table. The package includes a sample module based on an Open Government Data Initiative (OGDI) OData source, showing how to build advanced rendering (with Bing Maps).

If you want to learn more about these, you should read Craig Kitterman’s blog post here. It’s another way that we support interoperability, and we are proud to listen to our customers. But enough reading for now: let’s try the new version of Drupal!

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

• Vittorio Bertocci (@vibronet) posted a very detailed Fun with FabrikamShipping SaaS I: Creating a Small Business Edition Instance (several feet long) on 2/12/2011:

It’s been a few months since we pulled the wraps off FabrikamShipping SaaS, and the response (example here) has been just great: I am glad you guys are finding the sample useful!

In fact, FabrikamShipping SaaS really contains a lot of interesting stuff, and I am guilty of not having found the time to highlight the various scenarios, lessons learned, and reusable little code gems it contains. Right now, the exploratory material is limited to the intro video, the recording of my session at TechEd Europe, and the StartHere pages of the source code & enterprise companion packages.

We designed the online demo instance and the downloadable packages to be as user-friendly as we could, and in fact we have tens of people creating tenants every day, but it’s undeniable that some more documentation may help to zero in on the most interesting scenarios. Hence, I am going to start writing more about the demo. Sometimes we’ll dive very deep into code and architecture; other times we’ll stay at a higher level.

I’ll begin by walking you through the process of subscribing to a small business edition instance of FabrikamShipping: the beauty of this demo angle is that all you need to experience it is a browser, an internet connection, and one or more accounts at Live, Google or Facebook. Despite the nimble requirements, however, this demo path demonstrates many important concepts for SaaS and cloud-based solutions: in fact, I am told it is the demo my field colleagues use most often in their presentations, events and engagements.

Last thing before diving in: I am going to organize this as instructions you can follow for going through the demo, almost as a script, so that you can get the big picture reasonably fast; I will expand on the details in later posts.

Subscribing to a Small Business Edition instance of FabrikamShipping

Let’s say that today you are Adam Carter: you work for Contoso7, a fictional startup, and you are responsible for the logistic operations. Part of the Contoso7 business entails sending products to their customers, and you are tasked with finding a solution for handling Contoso7’s shipping needs. You have no resources (or desire) to maintain software in-house for a commodity function such as shipping, hence you are on the hunt for a SaaS solution that can give you what you need just by pointing your browser to the right place.

Contoso7 employees are mostly remote; furthermore, there is a seasonal component in the Contoso7 business which requires a lot of workers in the summer and significantly fewer in the winter. As a result, Contoso7 does not keep accounts for those workers in a directory, but asks them to use their email and accounts from web providers such as Google, Live ID, or even Facebook.

In your hunt for the right solution, you stumble on FabrikamShipping: it turns out they offer a great shipping solution, delivered as a monthly subscription service to a customized instance of their application. The small business edition is super-affordable, and it supports authentication from web providers. It’s a go!

You navigate to the application home page and sign up for one instance.


As mentioned, the Small Business Edition is the right fit for you; hence, you just click on the associated button.


Before everything else, FabrikamShipping establishes a secure session: in order to define your instance, you’ll have to input information you may not want to share too widely! FabrikamShipping also needs to establish a business relationship with you: if you successfully complete the onboarding process, the identity you use here will be the one associated with all the subscription administration activities.

You can choose to sign in with any of the identity providers offered above. FabrikamShipping trusts ACS to broker all its authentication needs: in fact, the list of supported IPs comes directly from the FabrikamShipping namespace in ACS. Pick any IP you like!


In this case, I have picked a Live ID. Note for the demoers: the identity you use at this point is associated with your subscription, and is also how FabrikamShipping determines which instance you should administer when you come back to the management console. You can have only one instance associated with one identity; hence, once you create a subscription with this identity you won’t be able to re-use the same identity for creating a NEW subscription until the tenant gets deleted (typically every 3 days).


Once you have authenticated, FabrikamShipping starts the secure session in which you’ll provide the details of your instance. The sequence of tabs you see on top of the page represents the sequence of steps you need to go through: the FabrikamShipping code contains a generic provisioning engine which can adapt to different provisioning processes to accommodate multiple editions, and it sports a generic UI engine which can adapt to it as well. The flow here is specific to the Small Business edition.


The first screen gathers basic information about your business: the name of the company, the email address at which you want to receive notifications, which Windows Azure data center you want your app to run in, and so on. Fill in the form and hit Next.


In this screen you can define the list of the users that will have access to your soon-to-appear application instance for Contoso7.

Users of a Small Business instance authenticate via web identity providers: this means that at authentication time you won’t receive a whole lot of information in the form of claims; sometimes you’ll just get an identifier. However, in order to operate the shipping application every user needs some profile information (name, phone, etc.) and a level of access granted to the application features (i.e., roles).

As a result, you as the subscription administrator need to enter that information about your users; furthermore, you need to specify for every user a valid email address so that FabrikamShipping can generate invitation emails with activation links in them (more details below).

In this case, I am adding myself (i.e., Adam Carter) as an application user (the subscription administrator is not added automatically) and using the same Hotmail account I used before. Make sure you use an email address you actually have access to, or you won’t receive the notifications you need for moving forward in the demo. Once you have filled in all the fields, you can click Add as New to add the entry to the users’ list.


For good measure I always add another user to the instance, typically with a Gmail or Facebook account. I like the idea of showing that the same instance of a SaaS app can be accessed by users coming from different IPs, something that before the rise of social networks would have been considered weird at best :)

Once you are satisfied with your list of users, you can click Next.


The last screen summarizes your main instance options: if you are satisfied, you can hit Subscribe and get FabrikamShipping to start the provisioning process which will create your instance.

Note: in a real-life solution this would be the moment to show the color of your money. FabrikamShipping is nicely integrated with the PayPal Adaptive Payments APIs and demonstrates both explicit payments and automated, preapproved charging from Windows Azure. I think it is really cool, and that it deserves a dedicated blog post; also, in order to work it requires you to have an account with the PayPal developer sandbox, which would add steps to the flow: more reasons to defer it to another post.

Alrighty, hit Subscribe!


FabrikamShipping thanks you for your business, and tells you that your instance will be ready within 48 hours. In reality that’s the SLA for the Enterprise edition, which I’ll describe in another post; for the Small Business one we are WAAAY faster. If you click on the link for verifying the provisioning status, you’ll have proof.


Here you have entered the Management Console: now you are officially a Fabrikam customer, and you get to manage your instance.

The workflow you see above is, once again, a customizable component of the sample: the Enterprise edition one would be muuuch longer. In fact, you can just hit F5 a few times and you’ll see that the entire thing turns green, typically in less than 30 seconds. That means that your Contoso7 instance of FabrikamShipping is ready!

Now: what happened in those few seconds between hitting Subscribe and the workflow turning green? Quite a lot. The provisioning engine:

  • creates a dedicated instance of the app database in SQL Azure
  • creates the database of the profiles and the various invitation tickets
  • adds the proper entry in the Windows Azure store which tracks tenants and options
  • creates dedicated certificates and uploads them to ACS
  • creates entries in ACS for the new relying party and issuer
  • sends email notifications to the subscriber and invitations to the users

…and does many other small things which are needed for presenting Contoso7 with a personalized instance of FabrikamShipping. There are so many interesting things taking place there that this too will need a specific post. The bottom line here is: the PaaS capabilities offered by the Windows Azure platform are what made it possible for us to put together something so sophisticated as a sample, instead of requiring the armies of developers you’d need for implementing features like the ones above from scratch. With the management APIs from Windows Azure, SQL Azure and ACS we could literally build the provisioning process as if we were playing with Lego blocks.

Activating One Account and Accessing the Instance

The instance is ready. Awesome! Now, how to start using it? The first thing Adam needs to do is check his email.


Above you can see that Adam received two mails from FabrikamShipping: let’s take a look at the first one.


The first mail informs Adam, in his capacity as subscription manager, that the instance he paid for is now ready to start producing return on investment. It provides the address of the instance, which in good SaaS tradition is of the form http://<applicationname>/<tenant>, and explains how the instance works: here’s the instance address, your users all received activation invitations, this is just a sample hence the instance will be gone in a few days, and so on. Great. If we want to start using the app, Adam needs to drop the subscription manager hat and pick up the application user one. For this, we need to open the next message.


This message is for Adam the user. It contains a link to an activation page (in fact we are using MVC) which will take care of associating the record in the profile with the token Adam will use for signing up. As you can imagine, the activation link is unique for every user and becomes useless once it’s been used. Let’s click on the activation link.


Here we are already on the Contoso7 instance, as you can see from the logo (here I uploaded a random image (not really random, it’s the logo of my WP7 free English-Chinese dictionary app (in fact, it’s my Chinese seal))). Once again, the list of identity providers is rendered from a list dynamically provided by ACS: although ACS provides a ready-to-use page for picking IPs, the approach shown here allows Fabrikam to maintain a consistent look and feel, give continuity of experience, customize the message to make the user aware of the significance of this specific step (sign-up), and so on. Take a peek at the source code to see how that’s done.
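For the curious: the dynamic IP list typically comes from the JSON metadata feed that every ACS v2 namespace exposes for its relying parties. A rough sketch of consuming it — the namespace and realm here are placeholders, not FabrikamShipping’s actual values, and the exact query string should be double-checked against the ACS documentation:

```csharp
using System.Net;
using System.Web;

// Placeholder namespace and realm; substitute your own ACS values.
string feedUrl = "https://YOUR-NAMESPACE.accesscontrol.windows.net" +
                 "/v2/metadata/IdentityProviders.js" +
                 "?protocol=wsfederation" +
                 "&realm=" + HttpUtility.UrlEncode("https://yourapp.example.com/") +
                 "&version=1.0";

// Returns a JSON array of identity provider entries (name, login URL, image
// URL...) — enough to render a custom IP picker with your own look and feel.
string json = new WebClient().DownloadString(feedUrl);
```

This is what lets the sign-up page above stay in sync with the namespace: add an IP in ACS, and the picker grows a button without any code changes.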

Let’s say that Adam picks Live ID: as he is already authenticated with it from the earlier steps, the association happens automatically.


The page confirms that the current account has been associated with the profile; to prove it, we can now finally access the Contoso7 instance. We can go back to the mail and follow the provided link, or directly use the link in the page here.


This is the page every Contoso7 user will see when landing on their instance: it may look very similar to the sign-up page above, but notice the different message clarifying that this is a sign-in screen.


As Adam is already authenticated with Live ID, as soon as he hits the link he gets redirected to ACS, gets a token and uses it to authenticate with the instance. Behind the scenes, Windows Identity Foundation uses a custom ClaimsAuthenticationManager to shred the incoming token: it verifies that the user is accessing the right tenant (tenant isolation is king), then retrieves from SQL Azure the profile data and adds it as claims in the current context (there are solid reasons for which we store those at the RP side; once again, stuff for another post). As a result, Adam gets all his attributes and roles rehydrated in the current context and the app can take advantage of claims-based identity for customizing the experience and restricting access as appropriate. In practical terms, that means that Adam’s sender data are pre-populated, and that Adam can do pretty much what he wants with the app, since he is in the Shipping Manager role that he self-awarded to his user at subscription time.
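A ClaimsAuthenticationManager of that kind generally has the shape sketched below. This is not FabrikamShipping’s actual code: ProfileStore, UserProfile and GetCurrentTenant are hypothetical stand-ins for the app-specific tenant and profile plumbing.

```csharp
using System;
using System.Linq;
using Microsoft.IdentityModel.Claims;   // WIF 1.0

// Sketch only: ProfileStore, UserProfile and GetCurrentTenant are
// hypothetical helpers, not part of the FabrikamShipping source.
public class TenantClaimsAuthenticationManager : ClaimsAuthenticationManager
{
    public override IClaimsPrincipal Authenticate(
        string resourceName, IClaimsPrincipal incomingPrincipal)
    {
        var identity = (IClaimsIdentity)incomingPrincipal.Identity;
        if (!identity.IsAuthenticated)
            return incomingPrincipal;

        string userId = identity.Claims
            .First(c => c.ClaimType == ClaimTypes.NameIdentifier).Value;

        // Tenant isolation first: reject users that don't belong here.
        UserProfile profile = ProfileStore.Lookup(userId);
        if (profile == null || profile.Tenant != GetCurrentTenant(resourceName))
            throw new UnauthorizedAccessException("User does not belong to this tenant.");

        // Rehydrate the profile data stored at the RP side as claims.
        identity.Claims.Add(new Claim(ClaimTypes.Role, profile.Role));
        identity.Claims.Add(new Claim(ClaimTypes.Name, profile.DisplayName));
        return incomingPrincipal;
    }
}
```

Registered in web.config under microsoft.identityModel, this runs on every request after token validation, which is why the role checks and pre-populated sender data just appear to “work” from the application’s point of view.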

In less than 5 minutes, if he is a fast typist, Adam got his company a shipping solution; all the users have already received instructions on how to get started, and Adam himself can already send packages around. Life is good!

Works with Google, too! And all the Others*

*in the Lost sense

Let’s leave Adam for a moment and follow a few of Joe’s clicks. If you recall the subscription process, you’ll remember that Adam defined two users: himself and Joe. Joe is on Gmail: let’s go take a look at what he got. If you are doing this from the same machine as before, remember to close all browsers or you risk carrying forward existing authentication sessions!


Joe is “just” a user, hence he received only the user activation email.


The mail is absolutely analogous to the activation mail received by Adam: the only differences are the activation link, specific to Joe’s profile, and how Gmail renders HTML mails.


Let’s follow the activation link.


Joe gets the same sign-up UI we observed with Adam, but this time Joe has a Gmail account, hence we’ll pick the Google option.


ACS connects with Google via the OpenID protocol: the UI above is what Google shows you when an application (in this case the ACS endpoint used by FabrikamShipping) requests an attribute exchange transaction, so that Joe can give or refuse his consent to the exchange. Of course Joe knows that the app is trusted, as he got a heads-up from Adam, and he gives his consent. This will cause a token to flow to ACS, which will transform it and make it available for the browser to authenticate with FabrikamShipping. From now on, we already know what will happen: the token will be matched with the profile connected to this activation page, a link will be established and the ticket will be voided. Joe just joined the Contoso7 FabrikamShipping instance family!


And now, same drill as before: in order to access the instance, all Joe needs to do is click on the link above or use the link in the notification (better to bookmark it).


Joe picks Google as his IP…


…and since he flagged “remember this approval” at sign-up time, he’ll just see the page above briefly flashing in the browser and will get authenticated without further clicks.


And here we are! Joe is logged into the Contoso7 instance of FabrikamShipping.

As you can see in the upper right corner, his role is Shipping Creator, as assigned by Adam at subscription time. That means that he can create new shipments, but he cannot modify existing ones. If you want to double-check that, just go through the shipment creation wizard, verify that it works and then try to modify the newly created shipment: you’ll see that the first operation will succeed and the second will fail. Close the browser, reopen the Contoso7 instance, sign in again as Adam and verify that you are instead able to do both creation and modification. Of course the main SaaS explanatory value of this demo is in the provisioning rather than the application itself, but it’s nice to know that the instances themselves actually use the claims as well.

Aaand that’s it for creating and consuming Small Business edition instances. Seems long? Well, it takes long to write it down: but with a good form filler, I can do the entire demo walkthrough above in well under 3 minutes. Also: this is just one of the possible paths, and you can add your own spins & variations (for example, I am sure that a lot of people will want to try using Facebook). The source code is fully available, hence if you want to add new identity providers (Yahoo, ADFS instances or arbitrary OpenID providers are all super-easy to add) you can definitely have fun with it.

Now that you’ve seen the flow from the customer perspective, in one of the next installments we’ll take a look at some of the inner workings of our implementation. But now… it’s Saturday night, and I’d better leave the PC alone before they come to grab my hair and drag me away from it.

Microsoft’s San Antonio data center reported [AppFabric Access Control] [South Central US] [Yellow] We are currently investigating a potential problem impacting Windows Azure AppFabric on 2/11/2011 CST:

Feb 11 2011 7:11PM We are currently investigating a potential problem impacting Windows Azure AppFabric.
Feb 13 2011 3:54AM Service is running normally.

Here’s a capture from the Azure Services Dashboard:


More than a day to correct a “potential problem” seems like a long time to me.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

The Windows Azure Service Dashboard reported [Windows Azure CDN] [Worldwide] [Green(Info)] Azure CDN Maintenance on 2/13/2011:

Feb 13 2011 8:02PM Customers may see delay in Azure CDN new provisioning requests until 1pm PST due to scheduled maintenance.

Here’s the report from the Windows Azure Service Dashboard:


See the David Makogon (@dmakogon) posted an Azure Tip: Overload your Web Role on 2/13/2011 article in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below re an earlier RDP tip.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Joe Brinkman promised a series of posts starting with DotNetNuke and Windows Azure: Understanding Azure on 2/9/2011:

For the last year or so there has been a lot of interest in the DotNetNuke community about how to run DotNetNuke on Windows Azure.  Many people have looked at the problem and could not find a viable solution that didn’t involve major changes to the core platform.  This past fall, DotNetNuke Corp. was asked by Microsoft to perform a feasibility study identifying any technical barriers that prevented DotNetNuke from running on Windows Azure.

I was pleasantly surprised by what I found and over the course of the next few weeks I’ll present my findings in a series of blog posts.  Yes Virginia, there is a Santa Claus, and he is running DotNetNuke on Windows Azure.

  • Understanding Azure
  • SQL Azure
  • Azure Drives
  • Web Roles and IIS
  • Putting it All together
Part 1: Understanding Azure

Prior to the official launch of Windows Azure, Charles Nurse had looked at running DotNetNuke on Windows Azure.  At the time it was concluded that we could not run without major architectural changes to DotNetNuke or to Windows Azure.  Since that time several other people in the community have also tried to get DotNetNuke running on Windows Azure and have arrived at the same conclusion.  David Rodriguez has actually made significant progress, but his solution required substantial changes to DotNetNuke and also requires modifying any module you wish to use.

DotNetNuke already runs on a number of different Cloud platforms and we really don’t want to re-architect DotNetNuke just to run on Azure.  That approach was rejected because ultimately Azure support is only needed by a small fraction of our overall community.  Re-architecting the platform would require significant development effort which could be better spent on features that serve a much larger segment of our community.  Also, re-architecting the platform would introduce a significant amount of risk since it would potentially impact every Module and Skin currently running on the platform.  The downsides of re-architecting DotNetNuke vastly outweigh the anticipated benefits to a small percentage of our user base.

The Magic of the Fabric Controller

To understand the major challenge with hosting on Windows Azure it is important to understand some of the basics of the platform.  Windows Azure was created on the premise that applications running in the cloud should be completely unaware of the specific hardware they were running on.  Microsoft is free to upgrade hardware at any time or even use different hardware in different data centers.  Any server in the Azure cloud (assuming it has the resources specified by the application service definition) is an equally viable spot on which to host a customer’s applications.

A key aspect of Windows Azure hosting is that the Azure Fabric Controller constantly monitors the state of the VMs and other hardware necessary to run your application (this includes load balancers, switches, routers etc.).  If a VM becomes unstable or unresponsive, the Fabric Controller can shutdown and restart the VM.  If the VM cannot be successfully brought back online, the Fabric Controller may move the application to a new VM.  All of this occurs seamlessly and without the intervention of the customer or even Azure technical staff.  Steve Nagy has a great post that explains the Fabric Controller in more detail.

The Fabric Controller is a complex piece of code that essentially replaces an entire IT department.  Adding new instances of your application can happen in a matter of minutes.  New instances of your application can be placed in data centers located around the world at a moment’s notice, and they can be taken down just as quickly.  The Fabric Controller provides an incredible amount of flexibility and redundancy without requiring you to hire an army of IT specialists or to buy expensive hardware that will be outdated in a matter of months.  The costs of running your application are more predictable and scale directly in proportion to the needs of your application.

Immutable Applications

All of this scalability and redundancy has a price.  In order to accomplish this seeming bit of magic, Windows Azure places one giant limitation on applications running in their infrastructure:  the application service package that is loaded into Windows Azure is immutable.  If you want to make changes to the application, you must submit a whole new service package to Windows Azure and this new package will be deployed to the appropriate VM(s) running in the appropriate data centers.

This limitation is in place for a number of reasons. Because Azure defines their roles as read-only, Microsoft doesn’t have to worry about making backups of your application.  As long as they have the original service package that was uploaded, then they have a valid backup of your entire application.  Any data that you need to store should be stored in Azure Storage or SQL Azure which have appropriate backup strategies in place to ensure your data is protected. 
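The “store your data in Azure Storage” pattern is worth seeing concretely. Below is a minimal sketch using the SDK 1.3-era StorageClient library; the setting name, container name and file paths are placeholders, not anything prescribed by Azure or DotNetNuke:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

// "DataConnectionString" is a placeholder service-configuration setting name.
CloudStorageAccount account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudBlobClient client = account.CreateCloudBlobClient();

CloudBlobContainer container = client.GetContainerReference("uploads");
container.CreateIfNotExist();   // no-op if the container already exists

// Persist the uploaded file in blob storage instead of the role's
// (non-durable) local file system.
container.GetBlobReference("user-files/logo.png")
         .UploadFile(@"C:\temp\logo.png");
```

Because the blob lives outside the role instance, it survives the Fabric Controller recycling or relocating the VM — exactly the durability the read-only application package cannot provide.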

Also, because every service package is signed by the user before it is uploaded to Azure, Microsoft can be reasonably certain that it has not been corrupted since they first received the package.  Any bugs or corruption issues exist in your application because that is what you submitted and not because of some failure on the part of the  Azure infrastructure.

Finally, because the role is tightly locked down, your application is much more secure.  It becomes extremely difficult for a hacker to upload malicious code to your site as it would require them to essentially break through the security protections that Microsoft’s engineers have put in place.  Not impossible, but certainly harder than breaking through the security implemented by most company IT departments, who don’t have nearly the security expertise of the people who wrote the operating system.

This “read-only” limitation makes it very easy for Windows Azure to manage your application.  It can always add a new application instance, or move your application to a new VM by just configuring a new VM as specified in your service description and then deploying your original service package to the appropriate VM instance.  If your application instance were to become corrupted, the Fabric Controller wouldn’t have to try and figure out what was the last valid backup of your application as it would always have the original service package safely tucked away. 

Because the service package is signed, the Fabric Controller can quickly verify that the service package is not corrupted and that you have fully approved that package for execution.  Moving VMs or adding new instances becomes a repeatable and secure process, which is very important when you are trying to scale to 100s of thousands of customers and applications.

Unfortunately, the assumption that applications are immutable is directly at odds with a fundamental tenet of ASP.NET applications, especially those running in Medium Trust and requiring the ability for users to upload content to the application.  Most ASP.NET applications that support file uploads store the content in some directory located within the application directory.  In fact, when running in Medium Trust, ASP.NET applications are prevented from accessing file locations outside of the application directory.  Likewise those applications are also prohibited from making web-service calls to store content on external sites like Flickr or Amazon S3.

With DotNetNuke, and other extensible web applications like it, the situation is even more dire.  One of the greatest features of most modern CMSs, and certainly of DotNetNuke, is the ability to install new modules or extensions at runtime.  As part of this module installation process, new code is uploaded to the site and stored in the appropriate folders.  This new code is then available to be executed in the context of the base application.  Given the immutable nature of Windows Azure this functionality just isn’t supported.

The VM Role Won’t Save Us

Last year at PDC Microsoft announced that they would be releasing a VM role that did away with the limitations imposed by the Web and Worker roles in Windows Azure.  Many people within the DotNetNuke community who have looked at running DotNetNuke on Windows Azure thought that this would remove the roadblocks for DotNetNuke.  With a VM role, instead of supplying a service package that is just your application, you actually provide a .vhd image of an entire Windows 2008 R2 server that is configured to run your application.  Any dependencies needed to run your application are already installed and ready to go. This is great as it gives you complete control over the VM and even allows you to configure your application to have read/write access to your application directories.  However, when you understand how the Azure Fabric Controller operates, it becomes clear that having control over the VM and read/write access to the application directories doesn’t really solve the problems preventing DotNetNuke from running. 

For example, if DotNetNuke was running in a VM role behind a specific network switch and that switch suffered a hardware failure, the Fabric Controller would want to start up a new VM for your application on another network segment that was unaffected by the outage.  Since Azure doesn’t have to worry about having up-to-the-minute backups of your application, it can just reload your original VM image on a new server and have your application back up and running very quickly.  If you had been writing data or installing code to the application directories, all of that new data and code uploaded at runtime by your website administrators and content editors would not be present on the original .vhd image you uploaded to Azure.

As you can see, the VM Role is not the answer.  Don’t despair, however.  A lot has changed with Windows Azure since Charles Nurse performed his first evaluation, and there is a solution.  Over the next 4 posts in the series I’ll show you how we solve the immutability problem and get DotNetNuke running in Azure without any architectural changes and with just a minimal amount of changes to our SQL scripts.

Andy Cross (@andybareweb) explained File Based Diagnostics Config with IntelliSense in Azure SDK 1.3 in a detailed 2/13/2011 post with a link to source code at the end:

Earlier in the week, the Windows Azure team posted about a way to use a configuration file to set up the runtime diagnostics in the Windows Azure SDK version 1.3. This is an alternative to the imperative programmatic approach and has the key benefit of executing the setup before the role itself starts and before any startup tasks execute. In this blog I will show how to use the diagnostics.wadcfg diagnostics configuration file, how to use IntelliSense with it and how it can be used to capture early Windows Azure lifecycle events. A source code example is also provided.

First of all we need to start with a vanilla Windows Azure Worker Role. I chose this for simplicity, but the approach works for other role types too. For adjustments you need to make for different role types, see the later section called Differences Between Role Types.

To the root of this Worker Role, we must add a file called diagnostics.wadcfg. You can choose to add it as a text file or an XML file; the latter will allow basic validation checking such as ensuring tags are closed. We will choose the XML file approach, as it also allows us to set up IntelliSense for a richer design-time experience. Right-click on your solution and add a new file.

Add a new file to the solution

Next choose the XML file option given to you and give it the correct filename.

Add an xml file with the correct filename

The new file should have the correct build properties set so that the file is packaged correctly.

Set the Build Action to Content and Copy to Output to a "Copy **" option

The basic XML file generated by Visual Studio is shown below. You can delete all the contents of this file:

Basic xml to be deleted

If you try to type into the newly created file you will see that you are not given any particularly useful options by IntelliSense. The next step is optional, so skip the next section if you are just going to paste in an existing file. I will now show how to validate any XML file against a known XSD schema in Visual Studio 2010.

How to Validate an XML file against a known XSD schema in Visual Studio 2010

Firstly, enter the XML | Schemas … dialog:

Enter the XML Schemas option

This is what the dialog looks like:

XML Schemas dialog

Click the Add button, and in the resulting dialog browse to the path of your XSD. Select the XSD and click OK. The path to the Windows Azure Diagnostic Configuration file XSD is located at %ProgramFiles%\Windows Azure SDK\v1.3\schemas\DiagnosticsConfig201010.xsd. MSDN has a more detailed document on the schema for this XML file located at Windows Azure Diagnostics Configuration Schema.

Browse and select the XSD

Visual Studio will then show the schema that you have added as the highlighted row in the next dialog box. It will also by default have a tick in the “Use” column, meaning it has been loaded and is associated with the current document. In future, all XML files will automatically be associated with the correct schema if there is a matching xmlns in the XML file.

Confirmation of schema loaded

Clicking OK on this final dialog completes the process. This allows IntelliSense to begin providing suggestions for new nodes as well as validating existing nodes:



XML value and how to test

Now we can start adding in a basic set of XML nodes that will allow us to trace Windows Event Log details. The basic code is shown below. The schema is very familiar should you be experienced with Windows Azure Diagnostics. If not, I suggest you may like to read this blog post about how those diagnostics work. The only slight complexity is that scheduledTransferPeriod is in an encoded form, using ISO-8601 to encode a TimeSpan of 1 minute as “PT1M”.

<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
                                configurationChangePollInterval="PT1M"
                                overallQuotaInMB="4096">
  <WindowsEventLog bufferQuotaInMB="4096" scheduledTransferLogLevelFilter="Verbose" scheduledTransferPeriod="PT1M">
    <DataSource name="Application!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>
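The duration syntax is the standard XML Schema one, and .NET’s System.Xml.XmlConvert can round-trip such values, which is handy for sanity-checking strings like “PT1M” before putting them in the wadcfg file:

```csharp
using System;
using System.Xml;

TimeSpan period = XmlConvert.ToTimeSpan("PT1M");               // one minute
Console.WriteLine(period.TotalSeconds);                        // 60
Console.WriteLine(XmlConvert.ToString(TimeSpan.FromHours(1))); // PT1H
```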

You can see how IntelliSense is really useful in this scenario by this screenshot:

IntelliSense again!

Once you have this value in your wadcfg file, you will have a role that copies any Windows Application event logs that occur – this happens very early in the lifecycle, and we can prove it by adding a very simple console application that writes to the Windows Application log.

I will not go into detail about how to create the console application and link it to the worker role – it is included in the source code provided, and if you want more details they’re in my blog post about Custom Performance Counters in Windows Azure, which uses a similar console application to install the counters.

The code within the console application is very straightforward:

using System;
using System.Diagnostics;

namespace EventLogWriter
{
    class Program
    {
        static void Main(string[] args)
        {
            string eventSourceName = "EventLogWriter";

            if (!EventLog.SourceExists(eventSourceName))
            {
                EventLog.CreateEventSource(eventSourceName, "Application");
            }

            EventLog.WriteEntry(eventSourceName, args[0], EventLogEntryType.Warning);
        }
    }
}

The program just writes whatever is provided as its first argument into the Application event log. The solution will look like this:

Solution structure

You must make sure you add the Startup task to your ServiceDefinition.csdef file:
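The XML for the startup task didn’t survive above; as a sketch only (the role name, executable name, and argument are assumptions based on the console application in this post, not the author’s exact values), the entry might look like:

```xml
<!-- Sketch: runs the EventLogWriter console app when the role starts.
     Role name, exe name, and argument are illustrative assumptions. -->
<WorkerRole name="WorkerRole1">
  <Startup>
    <Task commandLine="EventLogWriter.exe StartupTaskMessage"
          executionContext="elevated"
          taskType="simple" />
  </Startup>
</WorkerRole>
```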



Differences Between Role Types

The different role types in Windows Azure need a slightly different setup in order to use this approach. The differences relate simply to the location of the diagnostics.wadcfg file. From the new MSDN documentation:

The following list identifies the locations of the diagnostics configuration file for the different role types:

  • For worker roles, the configuration file is located in the root directory of the role.
  • For web roles, the configuration file is located in the bin directory under the root directory of the role.
  • For VM roles, the configuration file must be located in the %ProgramFiles%\Windows Azure Integration Components\v1.0\Diagnostics folder in the server image that you are uploading to the Windows Azure Management Portal. A default file is located in this folder that you can modify or you can overwrite this file with one of your own.
Conclusion

This screenshot shows the logs being created in the Windows Event Log:

    Event Log

    This is the associated row in Windows Azure Blob Storage, as copied by the Windows Azure Diagnostics library:

    Result in BlobStorage

    Potential Problems

    When I was creating this blog, I ran into a problem:

    If you get this error, make sure you have all the required attributes on each node!

    The message is “Windows Azure Diagnostics Agent has stopped working”. This was my fault, as I had hand-crafted the XML file and missed some important attributes on the <DiagnosticMonitorConfiguration/> root node. Make sure you have configurationChangePollInterval and overallQuotaInMB specified, or you will get this problem.

    <DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
        configurationChangePollInterval="PT1M" overallQuotaInMB="4096">

    Source code

    As promised, the source code can be downloaded here: FileBasedDiagnosticsConfig

    David Makogon (@dmakogon) posted an Azure Tip: Overload your Web Role on 2/13/2011:

    Recently, I blogged about endpoint usage when using Remote Desktop with Azure 1.3. The gist was that, even though Azure roles support up to five endpoints, Remote Desktop consumes one of those endpoints, and an additional endpoint is required for the Remote Desktop forwarder (this endpoint may be on any of your roles, so you can move it to any role definition).

    To create the demo for the RDP tip, I created a simple Web Role with a handful of endpoints defined, to demonstrate the error seen when going beyond 5 total endpoints. The key detail here is that my demo was based on a Web Role. Why is this significant?

    This brings me to today’s tip: Overload your Web Role.

    First, a quick bit of history is in order. Prior to Azure 1.3, there was an interesting limit related to role definitions. The Worker Role supported up to 5 endpoints, in any mix of input and internal endpoints. Input endpoints are public-facing, while internal endpoints are only accessible by role instances in your deployment. Both input and internal endpoints supported http, https, and tcp.

    However, the Web Role, while also supporting 5 total endpoints, only supported two input endpoints: one http and one https. Because of this limitation, if your Azure deployment required any additional externally-facing services (for example, a WCF endpoint), you’d need a Web Role for the customer-facing web application and a Worker Role for additional service hosting. When considering a live deployment taking advantage of Azure’s SLA (which requires 2 instances of a role), this equates to a minimum of 4 instances: 2 Web Role instances and 2 Worker Role instances (though if your worker role is processing lower-priority background tasks, it might be OK to maintain a single instance).

    With Azure 1.3, the Web Role endpoint restriction no longer exists. You may now define endpoints any way you see fit, just like with a Worker Role. This is a significant enhancement, especially when building low-volume web sites. Let’s say you had a hypothetical deployment scenario with the following moving parts:

    • Customer-facing website (http port)
    • Management website (https port)
    • ftp server for file uploads (tcp port)
    • MongoDB (or other) database server (tcp port)
    • WCF service stack (tcp port)
    • Some background processing tasks that work asynchronously off an Azure queue

    Let’s further assume that your application’s traffic is relatively light, and that combining all these services still provides an acceptable user experience. With Azure 1.3, you can now run all of these moving parts within a single Web Role. This is easily configurable in the role’s property page, on the Endpoints tab:
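    The Endpoints-tab screenshot isn’t reproduced here; the equivalent ServiceDefinition.csdef fragment might look like the following sketch (the role name, endpoint names, and port numbers are assumptions chosen to match the list above):

```xml
<!-- Sketch only: five endpoints on a single Web Role (Azure SDK 1.3+).
     Names and ports are illustrative assumptions. -->
<WebRole name="OverloadedWebRole" vmsize="Small">
  <Endpoints>
    <InputEndpoint name="HttpIn"  protocol="http"  port="80" />    <!-- customer site -->
    <InputEndpoint name="HttpsIn" protocol="https" port="443" />   <!-- management site -->
    <InputEndpoint name="FtpIn"   protocol="tcp"   port="21" />    <!-- ftp uploads -->
    <InputEndpoint name="MongoIn" protocol="tcp"   port="27017" /> <!-- MongoDB -->
    <InputEndpoint name="WcfIn"   protocol="tcp"   port="8000" />  <!-- WCF services -->
  </Endpoints>
</WebRole>
```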


    Your minimum usage footprint is now 2 instances! And if you felt like living on the wild side and forgoing SLA peace of mind, you could drop this to a single instance and accept the fact that your application will have periodic downtime (for OS updates, hardware failure/recovery, etc.).


    This example might seem a bit extreme, as I’m loading up quite a bit in a single VM. If traffic spikes, I’ll need to scale out to multiple instances, which scales all of these services together. This is probably not an ideal model for a high-volume site, as you’ll want the ability to scale different parts of your system independently (for instance, scaling up your customer-facing web while leaving your background processes scaled back).

    Don’t forget about Remote Desktop: If you plan on having an RDP connection to your overloaded Web Role, restrict your Web Role to only 3 or 4 endpoints (see my Remote Desktop tip for more information about this).

    Lastly: since you’re loading up a significant number of services on a single role, you’ll want to carefully monitor performance: CPU, web page connection latency, average page time, IIS request queue length, Azure queue length (assuming you’re using one to control a background worker process), etc. As traffic grows, you might want to consider separating processes into different roles.

    • Petri I. Salonen analyzed Nokia and Microsoft from a partner-to-partner (p-2-p) perspective to achieve a vibrant ecosystem: recommendations for Microsoft and Nokia partners with a Windows Azure twist on 2/12/2011:

    When I look at the Microsoft ecosystem and what is happening today from a technological perspective, the cloud and mobility are the two topics everybody seems to be talking about. Look at what is happening with our youth: they expect to be able to consume services from the cloud, and they were born with smartphones and know how to use them effectively, via messaging and SMS.

    What does this mean for Microsoft and Nokia? Microsoft has more than 30,000 Windows Azure clients and an ecosystem building cloud solutions. These cloud solutions need mobile applications, and this is where I see a huge opportunity for the thousands of current Symbian developers, if they choose to see it. It is a change they will have to make, and any change hurts, but without pain there is no gain. There are thousands of Microsoft partners more than willing to partner with mobility professionals; once the mourning is over, it is time to jump on the cloud-and-mobility bandwagon and start building new, innovative solutions.

    Petri’s disclosure:

    I am a former global Chairman of the International Association of Microsoft Channel Partners (IAMCP), which according to a 2009 IDC study had $10.1 billion in partner-to-partner business. My views might be biased for two reasons: I am a Finn living in the US, and I earn 100% of my revenue from the Microsoft ecosystem. This blog entry is about the possibilities for Microsoft and Nokia, not about past mistakes that might have been made.


    Matt Rosoff reported Nokia CEO Elop Denies Being "Trojan Horse" For Microsoft in a 2/13/2011 post from the Mobile World Congress in Barcelona to the Business Insider blog:


    Nokia finished up its press conference at Mobile World Congress a few minutes ago, and toward the end of the conference somebody in the audience yelled to CEO Stephen Elop: "Are you a Trojan Horse?"

    The questioner was referring to the fact that Elop left Microsoft in October to take the CEO job. After assessing the situation, Elop announced on Friday that Nokia is essentially abandoning its decade-old Symbian smartphone platform and adopting Microsoft's new Windows Phone 7 platform instead.

    Elop of course denied that he was a plant. His response, as reported on Engadget's live blog:

    The obvious answer is no. We made sure that the entire management team was involved in the process, and of course the board of directors of Nokia are the only ones that can make this significant of a decision about Nokia. They made that final decision on Thursday night.

    Elop also answered a question about his Microsoft shares, saying that he sold some as soon as he was allowed to but stopped selling prior to the announcement of the Nokia-Microsoft deal, as required by law. He also denied being the seventh-largest individual Microsoft shareholder, as some statistics show -- he was at the company for less than two years and holds about 130,000 shares worth about $3.18 million, according to Daily Finance. By way of comparison, top shareholder Bill Gates has 583 million shares worth $16.06 billion.

    On the same topic from Bloomberg BusinessWeek by Diana ben-Aaron: Nokia Falls Most Since July 2009 After Microsoft Deal:

    (Updates with closing Microsoft shares in fifth paragraph.)

    Feb. 11 (Bloomberg) -- Nokia Oyj, the world’s biggest maker of mobile phones, tumbled the most in almost 19 months on investor concern that a partnership with Microsoft Corp. won’t be enough in its battle with Google Inc. and Apple Inc.

    Nokia, led by Chief Executive Officer Stephen Elop, unveiled plans today to make Microsoft’s Windows its primary software in the competition for smartphone customers against Apple’s iOS and Google’s Android platforms. Nokia’s shares fell 14 percent, the steepest slide since July 16, 2009.

    “My first thought is to sell Nokia stock because Nokia has just given themselves away for free and Google and Apple are laughing all the way to a duopoly,” said Neil Campling, an analyst at Aviate Global LLP in London.

    The move may be the biggest strategy shift by Nokia since the one-time wood pulp company began making mobile phones in the 1980s. Elop, who was hired from Microsoft in September to lead the Espoo, Finland-based company, is struggling to revive Nokia after its piece of the fast-growing smartphone market plunged to 27.1 percent in the last quarter from 50.8 percent when Apple shipped its iPhone in June 2007, according to Gartner Inc. Nokia has lost more than 60 percent of its market value in that time.

    Nokia’s shares fell 1.16 euros, closing in Helsinki at 7 euros. Microsoft slipped 25 cents to $27.25 at 4 p.m. New York time in Nasdaq Stock Market trading.

    “When you are facing a fire, you need to move quickly because it expands fast,” said Pierre Ferragu, an analyst with Sanford C. Bernstein in London. “This partnership will take time to implement and deliver phones. This is what may kill Nokia.”

    Arun Gupta posted a Java wishlist for Windows Azure on 2/12/2011:

    TOTD [Tip of the Day] #155 explains how to run GlassFish in Windows Azure. It works, but as is evident from the blog, it’s not easy or intuitive. It uses a Worker Role to install the JDK and GlassFish, but the concepts used are nothing specific to Java. Microsoft has released the Azure SDK for Java and the AppFabric SDK for Java, which is a good start, but a few key elements are missing, IMHO. These may be known issues, but I thought of listing them here while my memory is fresh :-)

    Here is my wish list to make Java better on Windows Azure:

    1. Windows Azure Tools for Eclipse has "PHP Development Toolkit" and "Azure SDK for Java" but no tooling from the Java perspective. I cannot build a Java/Java EE project and say "Go Deploy it to Azure" and then Eclipse + Azure do the magic and provide me with a URL of the deployed project.
    2. Why do I need to configure IIS on my local Visual Studio development for deploying a Java project ?
    3. Why do I have to explicitly upload my JDK to Azure Storage ? I'd love to specify an element in the "ServiceConfiguration" or where ever appropriate which should take care of installing JDK for me in the provisioned instance. And also set JAVA_HOME for me.
    4. Allow to leverage clustering capabilities of application servers such as GlassFish. This will also provide session-failover capabilities on Azure :-)
    5. Sticky session load balancing.
    6. If Windows VM crashes for some reason then App Fabric restarts it which is good. But I'd like my Java processes to be monitored and restarted if they go kaput. And accordingly Load Balancer switches to the next available process in the cluster.
    7. Visual Studio tooling is nice but allow me to automate/script the deployment of project to Azure.
    8. Just like Web, Worker, and VM role - how about a Java role ?
    9. And since this is a wishlist, NetBeans is the best IDE for Java EE 6 development. Why not have a NetBeans plugin for Azure ?
    10. A better integration with Java EE APIs and there are several of them - JPA, Servlets, EJB, JAX-RS, JMS, etc.
    11. The "happy scenario," where everything works as expected, is fine, but that rarely happens in software development. The availability of debugging information is pretty minimal in the "not so happy scenario." Visual Studio should show more information if the processes started by "Launch.ps1" cannot start correctly for some reason.

    And I'm not even talking about management, monitoring, administration, logging, etc.

    Thank you, Microsoft, for a good start with Java on Azure, but it's pretty basic right now and needs work. I'll continue my exploration!

    Christmas is coming later this year ... and I'll be waiting :)

    Arun says he’s “a technology enthusiast, a passionate runner, and a community guy who works for Oracle Corp.”

    Barbara Duck (@MedicalQuack) posted Health IT Trends for 2011 With Pharma Adopting Azure & Other Cloud Technologies on 2/11/2011:

    Back a couple of years ago I had a discussion with Mike Naimoli, when Azure services had just been introduced, so by reading his update you can see how fast things are developing in the cloud. The Microsoft Azure cloud has moved along, and you can read his entire post at the link below. One highlight I included is the use of high-powered computing to reduce computing time from 7 hours to 7 minutes; that's big.

    A Deep Dive into Microsoft Life Sciences Today and in the Future – Interview with Michael Naimoli

    I remember reading a while back about one of the pharmaceutical companies whose research division would run a query and either have to wait to get on the network or, once the query was set in motion, wait a few hours for it to run. We are talking about massive amounts of data and analytics being processed, so a researcher would set a query in motion, go have lunch and whatever else was on the agenda, and come back later. He mentions how GlaxoSmithKline has reduced its cost by 30% by using the cloud, and I might guess some energy savings are included here, as it takes a lot of power to run. Just a quick note of my own: this is where we need to educate our folks in Congress, as last year they didn't look to fund cloud services, so again there's a need for additional information on the savings and other benefits to be realized.

    Below is an image of the case study pages with several names listed you might recognize, and you can link here to read more. Microsoft in Life Sciences “plays in the sandbox”; in other words, it works and integrates with other technologies to provide a clean and useful interface for the end user. BD


    When it comes to technology innovation, life sciences companies are typically early adopters. This is a research-intensive industry that generates massive amounts of data, and this data needs to be securely and seamlessly shared across businesses and borders—in real-time.

    At the same time, the industry is desperately searching for ways to boost innovation. There is a $25 billion tidal wave of patent expirations coming this year, and R&D costs are rising. So it shouldn't be a surprise to any of us that the most forward-thinking life sciences firms are ramping up the pace of innovation, thanks in part to technologies that fuel collaboration across borders, faster processing of massive amounts of data, and information sharing in real-time to ultimately enhance discovery and speed the time to market.

    HPC on Steroids – Cloud-based computing, like Microsoft Azure, gives life sciences greater access to compute-intensive applications such as 3D. By running high-performance computing on-demand on a cloud-based platform, we've already seen data processing time reduced from 7 hours to just 7 minutes. That's a HUGE improvement for companies racing to make the next big discovery.

    See also Nicole Hemsoth’s DRC Energizes Smith-Waterman, Opens Door to On-Demand Service post of 2/9/2011 to the HPC in the Cloud blog, covered in the Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds section of my Windows Azure and Cloud Computing Posts for 2/10/2011+ post.

    Emmanuel Huna described How to automate the creation of a Windows Azure 1.3 package in Team Foundation Server (TFS) for a web role hosted in ‘Full IIS’ with multiple sites in a 2/11/2011 post:

    For the last few months we have automated the deployment of our Windows Azure web and worker roles – with a couple of clicks we can compile, package, deploy and run our latest bits in the Azure cloud!

    A sequence of activities in our TFS Build XAML workflow –
    notice the first step is to create the Azure Package

    We use Team Foundation Server 2010 (TFS), and one of the steps is to create the .CSPKG file – the Windows Azure package that needs to be uploaded to blob storage before you can use the Azure Management APIs to deploy and run your service.

    Below I have some details on how to get that working with the latest Windows Azure SDK, version 1.3.  Note that you could use a different source-control system and something like CruiseControl.NET to automate the deployment of your web and worker roles – the principles are the same.

    How to create a Windows Azure package in Team Foundation Server (TFS)
    (when you have multiple sites hosted in ‘Full IIS’ in one Azure web role)

    To create the Azure package, with the Azure SDK 1.2 and below, I was using an invoke process workflow activity and calling CSPACK.  Unfortunately, this stopped working after I installed the Azure SDK 1.3 and I started using the “Full IIS” feature to host multiple sites in one web role.

    Here’s how you can create your Azure Package in TFS if you have a web role hosted in ‘Full IIS’ with multiple sites -

    Step 1 – Edit your Azure project (.csproj) and change the build target

    Open your windows azure project (the one that contains your web and/or worker roles) in Visual Studio and edit the project file (.csproj) in the XML editor.

    Change the project’s ‘DefaultTargets’ from ‘Build’ to -

    PrepareForPackaging;Build;CheckRoleInstanceCount;CopyServiceDefinitionAndConfiguration;ConfigureWebDeploy;IntelliTrace;CorePublish

    So your project XML will look something like this -

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" DefaultTargets="PrepareForPackaging;Build;CheckRoleInstanceCount;CopyServiceDefinitionAndConfiguration;ConfigureWebDeploy;IntelliTrace;CorePublish" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

    Step 2 – Create a pre-build event that compiles all of the sites included in your web role

    The only way I found to make sure all of the sites deployed in my web role included all of the needed assemblies was to have a pre-build event on my azure project that compiled each site.
    You can easily do that by right-clicking on your project > Properties > Build Events > and editing the ‘Pre-build event command line’ -
    You can also edit your project XML directly like we did in step 1-
            echo PREBUILDSTEP for $(ProjectName)

            %windir%\Microsoft.NET\Framework\v4.0.30319\msbuild $(ProjectDir)\..\..\
            if errorlevel 1 goto BuildEventFailed

            %windir%\Microsoft.NET\Framework\v4.0.30319\msbuild $(ProjectDir)\..\..\
            if errorlevel 1 goto BuildEventFailed

            REM Exit properly because the build will not fail
            REM unless the final step exits with an error code
            goto BuildEventOK
            :BuildEventFailed
            echo PREBUILDSTEP for $(ProjectName) FAILED
            exit 1
            :BuildEventOK
            echo PREBUILDSTEP for $(ProjectName) COMPLETED OK
    If I didn’t do this, I would find that after deploying, some assemblies were missing from the Azure package (even though I had set ‘Copy Local = true’ for the assemblies in the project, and my solution compiled in TFS did include the sites themselves) -


    Note: once you make these changes, your compilation time in Visual Studio may increase, since you will be building the Azure package every time you compile. 

    We tend to run and debug our web sites in IIS in local development and we end up unloading the azure projects during development – so this is not an issue for us.

    Step 3 – Edit your TFS Build XAML workflow and add a “MSBuild” activity

    1) In your XAML workflow, after your project/solution has been compiled, add an activity of type “Microsoft.TeamFoundation.Build.Workflow.Activities.MSBuild” -


    Make sure you choose your targets to be “CorePublish” (although since we edited the project directly, this matters less) -


    That’s it!  Now with one click you can compile, run your tests, and create the windows azure package to deploy your web role hosting multiple sites to the cloud!

    Inside Baseball - How I got this working

    After our automatic deployments to the Azure cloud stopped working, I had some talks with my co-worker M. and some discussions with @smarx and others in this thread -

    One web role with two sites:
    how do I package it in TFS or using msbuild tasks?

    which got me digging into the MSBUILD tasks that are installed by the Azure SDK here:

    C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\Microsoft.CloudService.targets

    Final note: Development, QA, and Production environments

    Just a quick note on our software development process – we have three different branches in source control, corresponding to our three environments. 

    Our TFS build XAML workflow can handle all three environments/branches and deploys our services (the web and worker roles) differently depending on the arguments – here’s an example of our schedule:


    Make sure your XAML workflow can handle different environments and different services – it will save you tons of time.

    Good times!

    The announced Level 5 Motorsports Goes Global with Windows Azure on 2/11/2011:

    Level 5 Motorsports is taking their championship-winning ways to the global stage. On February 9th, in Paris, the Automobile Club de l’Ouest (ACO) made a formal invitation to team owner Scott Tucker to compete in the 24 Hours of Le Mans and the Intercontinental Le Mans Cup (ILMC) in 2011. Level 5 will be the only American team competing in the prototype class in the seven-race ILMC schedule that coincides with the American Le Mans Series, in which they will campaign two LMP2 entries for the first time in 2011. [Link added.]

    "We've identified partners that share our goals," Tucker said. "Our decision to work with Lola, Honda and Michelin reflects our commitment to putting the best cars on the track every time we race. As this team goes racing across the globe, we need the best technical partners."

    Returning as Level 5’s primary sponsor and partner this year is Microsoft with a dramatic expansion of the Microsoft Office partnership of 2010. In addition to Microsoft Office, the Office 365 and Windows Azure brands will utilize the team as a test bed for the latest in cloud-based computing technologies. [Emphasis added.]

    "Microsoft Office is proud to partner again with Level 5 in their pursuit of a global championship” said Chris Barry, Director in Microsoft Office. “Showcasing the power of our software technology with a high performance organization like Level 5, competing on multiple continents will certainly be exciting."

    Prashant Ketkar, Director of Product Marketing with Microsoft Azure, seconds that enthusiasm.

    “For us, the bottom line is helping clients win. Windows Azure’s numerous data centers, located in North America, Europe and Asia, are perfectly positioned to help Level 5 gain a competitive advantage by leveraging cloud-based data and applications.”

    PR Web announced on 2/10/2011 ISC Introduces the Open Intel Project, Where Open Data Meets Business Intelligence running on Windows Azure:

    Open Intel Demo Site with San Francisco Data

    The MapDotNet Team at ISC is pleased to announce a new, open source project called Open Intel. Open Intel (OI) is a feature rich codebase built with ISC’s MapDotNet UX and is heavily focused on spatial data visualization and cartographic analysis with Microsoft’s Bing Maps for Enterprise. Designed for use with Microsoft Windows Azure and SQL Azure, OI can benefit both government and the private sector enterprise. [Emphasis added.]

    OI can fulfill data transparency requirements as an open government data portal for publishing data for internal or external consumption. Additionally, OI is the foundation for building advanced business intelligence applications for analyzing large amounts of spatial and non-spatial data. Out-of-the-box, OI allows users to preview datasets in a data catalog, download data in portable formats such as shapefiles or comma separated values, connect to data feeds for building apps or widgets, or produce a mash up for advanced data analysis. The codebase, which has been in development for the past three months, is being released on Microsoft’s CodePlex Open Source Community, at

    “We wanted to make building a mashup with spatial data as easy as creating a music playlist,” said Brian Hearn, the MapDotNet Lead Architect and a coordinator on the Open Intel project. “With inspiration from Microsoft’s Zune user interface and following Microsoft’s Metro styling guidelines, we designed the OI application for today’s consumer who expects an app to be responsive and easy to use,” adds Hearn. ISC is hosting a live demo of the OI application loaded with a large number of datasets from the City and County of San Francisco, at

    The application is written in C# .NET and leverages Microsoft’s Silverlight technology for the frontend user interface. OI can be easily deployed into Windows Azure and leverages SQL Azure for data storage. “Since Open Intel is designed for deployment in Microsoft’s cloud computing environment, Windows Azure, it makes setting up and getting started with the application very easy,” says Hearn. For customers that don’t have .NET developers on staff, ISC can provide support and software-as-a-service options for deploying and hosting the application.

    For more information on the Open Intel project, please visit

    Founded in 1989, ISC is a worldwide leader in software that combines today’s latest advances in web-based, consumer mapping with powerful enterprise GIS (geographic information systems). The company offers a wide range of products and services designed to empower people through great geospatial software.

    HeyGov! and MapDotNet are trademarks of ISC in the United States and other countries. Microsoft, Windows, Windows Azure, Bing Maps for Enterprise, SQL Server, Silverlight, WPF and Zune are either registered trademarks or trademarks of Microsoft Corp. in the United States.

    <Return to section navigation list> 

    Visual Studio LightSwitch

    • Dan Moyer (@danmoyer) concluded his Visual Studio LightSwitch series with Connecting to a Workflow Service from LightSwitch Part 3 of 2/13/2011:


    In the third and final post on this topic, I’ll show one way of using the results from a workflow service in a LightSwitch application.

    The content of this post draws heavily from two other blog posts:

    Beth Massi’s How Do I Video: Create a Screen that can Both Edit and Add Records in a LightSwitch Application and Robert Green’s Using Both Remote and Local Data in a LightSwitch Application.

    Take note of some problems I ran into in this demo implementation. Look for “Issue” near the end of this post for more discussion.

    As time permits, I plan to investigate further and will solicit feedback from the LightSwitch developers’ forum.

    If you know of a different implementation to avoid the problems I’ve encountered, please don’t hesitate to send me an email or comment here.


    I want to keep the example easy to demonstrate, so I’ll use only the OrderHeader table from the AdventureWorks sample database.

    The OrderHeader table contains fields such as ShipMethod, TotalDue, and Freight. For demo purposes I’m treating the data in Freight as Weight, instead of a freight charge.

    Based on its business rules, the workflow service returns a calculated shipping amount. The OrderHeader table does not contain a ShippingAmount field, so I thought this was a good opportunity to use what Beth Massi and Robert Green explained in their blogs: have LightSwitch create a table in its intrinsic database and connect that table’s data to the AdventureWorks OrderHeader table.

    In Part 2 of this topic, you created a LightSwitch application and added a Silverlight library project to the solution containing an implementation of a proxy that connects to the workflow service. In the LightSwitch project, you should already have a connection to the AdventureWorks database and the SalesOrderHeader table.

    Now add a new table to the project. Right click on Data Source node, and select Add a Table in the context menu. Call the table ShippingCost. This new table is created in the LightSwitch intrinsic database.

    Add a new field, ShipCost of type Decimal to the table.

    Add a relationship to the SalesOrderHeaders table, with a ‘Zero or One to One’ relationship.


    Change the Summary Property for ShippingCost from the default Id to ShipCost.


    Now add a search screen and a detail screen for the SalesOrderHeader entity. The default names of the screens are SearchSalesOrderHeader and SalesOrderHeaderDetail.

    I want to use the SalesOrderHeaderDetail as the default screen when a user clicks the Add… button on the search screen.

    How to do this is explained in detail in Beth Massi’s How Do I video and Robert Green’s blog.

    In the SearchSalesOrderHeader screen designer, add the Add… button. Right-click the Add… button and select Edit Execute Code from the context menu. LightSwitch opens the SearchSalesOrderHeader.cs file. Add the following code to the gridAddAndEditNew_Execute() method so LightSwitch displays the SalesOrderHeaderDetail screen.


    Now open the SalesOrderHeaderDetail screen in the designer. Rename the query to SalesOrderHeaderQuery. Add a SalesOrderHeader data item via the Add Data Item menu at the top of the designer. I’m being brief on the detailed steps here because they are very well explained in Beth Massi’s video and Robert Green’s blog.

    Your screen designer should appear similar to this:


    Click on the Write Code button in the upper left menu in the designer and select SalesOrderHeaderDetail_Loaded under General Methods.

With the SalesOrderHeaderDetail.cs file open, add a private method that the screen code can call to get the calculated shipping amount.


    This method creates an instance of the proxy to connect to the workflow service and passes the Freight (weight), SubTotal, and ShipMethod from the selected SalesOrderHeader row. (Lines 87-90)

    It returns the calculated value with a rounding to two decimal places. (Line 91)
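As a sketch, the helper might look like the following (this is an assumption-laden illustration, not code from the post: the OrderProcessingProxy type and its CalculateShipping method are hypothetical names standing in for the proxy described in Part 2):

```csharp
// Hypothetical sketch -- the proxy type and method names are assumptions
// based on the Part 2 proxy, not the actual generated code.
private decimal GetCalculatedShipping(SalesOrderHeader order)
{
    // Create the proxy that connects to the deployed workflow service.
    var proxy = new OrderProcessingProxy();

    // Pass the Freight (weight), SubTotal, and ShipMethod from the selected row.
    double shipping = proxy.CalculateShipping(
        (double)order.Freight,
        (double)order.SubTotal,
        order.ShipMethod);

    // Return the calculated value rounded to two decimal places.
    return Math.Round((decimal)shipping, 2);
}
```

Because the sketch depends on LightSwitch-generated entities and the Part 2 proxy, it is not runnable on its own; treat it as a reading aid for the numbered listing.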

    When the user selects a SalesOrderHeader item from the search screen, the code in SalesOrderHeaderDetail_Loaded() is called and the SalesOrderID.HasValue will be true. (Line 17)


    The _Loaded() method calls the workflow service to get the calculated shipping data. (line 21)

    It then checks if the selected row already contains a ShippingCost item. (Line 23).

    When you first run the application, the ShippingCost item will be null because no ShippingCost item was created and associated with the SalesOrderHeader item from the AdventureWorks data.

So the implementation creates a new ShippingCost entity, sets the calculated ship cost, associates the SalesOrderHeader entity with the new ShippingCost entity, and saves the changes to the database. (Lines 25-29)

    If the SalesOrderHeader does have a ShippingCost, the code gets the ShippingCost entity, updates the ShipCost property with the latest calculated shipping cost and saves the update. (Line 33 – 38)
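Pieced together from the description above, the create-or-update logic might look roughly like this sketch (the DataWorkspace calls and member names are assumptions about the generated LightSwitch API, not copied from the post’s listing):

```csharp
// Hypothetical sketch of the create-or-update logic in _Loaded();
// the DataWorkspace/AddNew() calls are assumed, not verified.
decimal shipCost = GetCalculatedShipping(this.SalesOrderHeader);

if (this.SalesOrderHeader.ShippingCost == null)
{
    // First run: no ShippingCost row is associated with this
    // AdventureWorks SalesOrderHeader yet, so create and link one.
    var cost = this.DataWorkspace.ApplicationData.ShippingCosts.AddNew();
    cost.ShipCost = shipCost;
    cost.SalesOrderHeader = this.SalesOrderHeader;
    this.DataWorkspace.ApplicationData.SaveChanges();
}
else
{
    // A ShippingCost row already exists: refresh it with the latest value.
    this.SalesOrderHeader.ShippingCost.ShipCost = shipCost;
    this.DataWorkspace.ApplicationData.SaveChanges();
}
```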

Lines 19 – 40 handle the case where the user selects an existing SalesOrderHeader item from the search screen. Next, the implementation handles the case where the user clicks the Add… button and SalesOrderID.HasValue (Line 17) is false.


For this condition, the implementation creates a new SalesOrderHeader item and a new ShippingCost item. The AdventureWorks database defaults some of the fields when a new row is added, and it requires some fields, such as the rowguid, to be initialized to a new unique value. (Line 53)

For this demo, to create the new SalesOrderHeader without raising an exception, I initialized the properties with the values shown in Lines 45 – 53. Similarly, I initialized the properties of the new ShippingCost item as shown in Lines 56 – 58.

Note the setting of ShipCost to -1 in Line 56. This is another ‘code smell.’ I set the property here and check for that value in the SalesOrderHeaderDetail_Saved() method. I found that when this new record is saved, it’s necessary in the _Saved() method to call the GetCalculatedShipping() method to overwrite the uninitialized (-1) value of the shipping cost.

Issue: There must be a better way of doing this implementation, and I would love to receive comments, in email or on this posting, with ways to improve this code. It works for demo purposes, but it doesn’t look right for an application I’d want to use beyond a demo.

    Finally, there is the case where the user has clicked the Add… button and saves the changes or where the user has made a change to an existing SalesOrderHeader item and clicks the Save button.


Here, I found you need to check whether SalesOrderHeader.ShippingCost is not defined. (Lines 69 – 76)

    If there is not an existing ShippingCost entity associated with the SalesOrderHeader item, a new one is created (line 71) and its properties are set (lines 73 – 75). LightSwitch will update the database appropriately.

And there is the special case mentioned above, where the SalesOrderHeader does contain a ShippingCost item, but one whose ShipCost still holds the uninitialized (-1) value. (Line 77) For this case, the ShipCost is updated.

Issue: I also discovered in this scenario that the user needs to click the Save button twice in order to have the data for both the SalesOrderHeader and the ShippingCost entities saved.

Perhaps I haven’t done the implementation as it should be done. Or perhaps I’ve come across a behavior that exists in the Beta 1 version when linking two entities and needing to update the data in both of them.

    Again, I solicit comments for a better implementation to eliminate the problems I tried to work around.

I think some of the problems in the demo arise from using the intrinsic ShippingCost table with the external AdventureWorks table, and from the way LightSwitch does validation when saving associated entities.

If I modified the schema of the AdventureWorks OrderHeader table to include a new column, say a CalculatedShippingCost column, this code would become less problematic, because it would be working with just the SalesOrderHeader entity rather than two related entities in two databases.

    • Dan Moyer (@danmoyer) continued his Visual Studio LightSwitch series with Connecting to a Workflow Service from LightSwitch Part 2 of 2/13/2011:


The second part of this topic discusses how to deploy the workflow service that was implemented in Part 1, how to create a proxy for use in a console application or test project, and how to create a proxy for use by a LightSwitch application.

    Deploy the Workflow service

For the following task, start Visual Studio 2010 and a Visual Studio command prompt window with administrator privileges. You need sufficient rights to publish to IIS as well as to run other tasks, such as iisreset, that you may need during debugging.

I am running on Windows 7 with IIS installed on the same machine. If you’re running on Windows Server 2008 or deploying to IIS on a different machine, your steps may be slightly different, but similar enough to follow along here.

    In the VS 2010 command prompt window, navigate to the inetpub\wwwroot directory and create a folder to publish the workflow service. I named my folder WFServiceOrderProc.

Next, go to the VS 2010 solution and right-click the workflow service project, ServiceLibrary. Select Publish… in the context menu to display the Publish Web dialog:

    Select File System as the publish method in the Publish Web dialog. Specify the target folder you created in the VS 2010 command prompt window.


    Click Publish to deploy the workflow service to IIS.

Next, start IIS Manager. One way to start IIS Manager quickly is to enter INETMGR in the Search programs and files box on the Start menu.

In IIS Manager, convert the WFServiceOrderProc folder to a web application.


Next, in the IIS category, double-click the Directory Browsing icon to display the Directory Browsing properties. In the right panel, click Enable to enable directory browsing.


Next, right-click WFServiceOrderProc -> Manage Application -> Browse.


    A browser similar to this should start:


Clicking the OrderProcessing.xamlx link should display this:


Copy and paste the line starting with SVCUTIL.EXE into a temporary file. Change the machine name LAPTOP1 (my machine) to localhost:

    svcutil.exe http://localhost/WFServiceOrderProc/OrderProcessing.xamlx?wsdl

With these steps completed, you have deployed the workflow service and verified that IIS can run it. The next step is creating a client project to run and test the workflow service.

    Create a Test Project and Proxy

    Return to Visual Studio and create a new project using the Test template.


    In the Visual Studio command prompt, navigate to the Test project folder and run the SVCUTIL command.


    The SVCUTIL command will connect to the workflow service, read the metadata, and create a proxy called OrderProcessing.cs.

    Add the OrderProcessing.cs file to the Test project.

    Rename the file UnitTest1.cs to TestCalcShippingWorkFlow.cs

    Add the following assembly references to the test project:

    • System.Runtime.Serialization
    • System.ServiceModel

    Next implement a constructor to create the proxy and implement workflow service tests.

    Below are the constructor and one of the test implementations. You can find all the tests in the zip file that’s part of this posting.
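For reference, a test along these lines might look like the following sketch (the OrderProcessingClient type and ProcessOrder operation names are assumptions about what svcutil generated, and the parameter values are made up for illustration; check OrderProcessing.cs for the actual names):

```csharp
// Hypothetical sketch of TestCalcShippingWorkFlow.cs; the generated
// client and operation names are assumptions, not verified output.
[TestClass]
public class TestCalcShippingWorkFlow
{
    private readonly OrderProcessingClient _client;

    public TestCalcShippingWorkFlow()
    {
        // Uses the endpoint configuration svcutil added to app.config.
        _client = new OrderProcessingClient();
    }

    [TestMethod]
    public void CalculatedShippingIsPositive()
    {
        // Weight, subtotal, and ship method for a representative order.
        double result = _client.ProcessOrder(10.0, 150.0, "CARGO TRANSPORT 5");
        Assert.IsTrue(result > 0.0);
    }
}
```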


    Create a Proxy for LightSwitch

Because LightSwitch is a Silverlight application, the proxy created above will not work in a LightSwitch solution. You need to implement the proxy using async methods and use different assembly references for a LightSwitch solution.

    Creating this workflow service proxy is very similar to creating the proxy to a WCF service I described in an earlier blog post. To ensure we’re on the same page, I’ll explain the steps again here.

Open a LightSwitch solution. I called mine OrderHeaderTest. At this point, select Attach to External Database and connect to the AdventureWorks database. Use the wizard to connect the SalesOrderHeader table. It’s not important at this point to do more in the LightSwitch project; I’ll describe the screens and code-behind in the next post.

    Add a new project to the OrderHeaderTest solution using the Silverlight Class library template. Rename the Class1.cs file to OrderProcessing.cs.

In the VS 2010 command prompt, navigate to the folder containing the OrderProcessLib solution and use SLsvcutil.exe to create a Silverlight proxy. On my machine, the SLsvcutil utility is at this path:

    “c:\Program files (x86)\Microsoft SDKs\Silverlight\v4.0\Tools\SLsvcutil.exe” http://localhost/WFServiceOrderProc/OrderProcessing.xamlx?wsdl


    Add the generated proxy, OrderProcessing.cs, to your OrderProcessLib project.

    Implement the LightSwitch Proxy

The class constructor creates the proxy and the OrderProcessingRequest and OrderProcessingResponse objects. (Lines 20 – 29)

The constructor initializes an AutoResetEvent object used to signal completion of the async callback method, _proxy_ProcessOrderCompleted(). (Line 24)

    Finally the constructor wires up the async callback method. (Line 31)


The proxy’s method to invoke the workflow is straightforward. It takes three parameters, which it copies to the OrderProcessingRequest object. (Lines 42 – 44) It then invokes an async call to the workflow service (Line 46) and waits for completion of the callback (Line 48). On completion, the proxy returns the value of the calculated shipping charge (Line 50).
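Putting the two pieces together, the wrapper might look like this sketch (the generated ProcessOrderClient type, its ProcessOrderAsync method, and the event and args names are assumptions based on the description, not the actual generated code):

```csharp
// Hypothetical sketch of the Silverlight-side proxy wrapper; the
// generated client/event names are assumptions.
public class OrderProcessingProxy
{
    private readonly ProcessOrderClient _proxy = new ProcessOrderClient();
    private readonly AutoResetEvent _done = new AutoResetEvent(false);
    private double _calculatedShipping;

    public OrderProcessingProxy()
    {
        // Wire up the async callback once, in the constructor.
        _proxy.ProcessOrderCompleted += (sender, e) =>
        {
            _calculatedShipping = e.Result;
            _done.Set();   // signal that the callback has completed
        };
    }

    public double CalculateShipping(double weight, double subTotal,
                                    string shipMethod)
    {
        // Start the async call, then block until the callback signals.
        _proxy.ProcessOrderAsync(weight, subTotal, shipMethod);
        _done.WaitOne();
        return _calculatedShipping;
    }
}
```

Blocking on WaitOne() keeps the public method synchronous, which is what the LightSwitch screen code described in the next post expects.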


    That completes the deployment of the workflow service, testing the service, and implementing the proxy for the LightSwitch application.

    The next blog post will describe using the workflow service from the LightSwitch project.

    Windows Azure and Cloud Computing Posts for 2/10/2011+ contains Dan’s first post on this topic.

    Thom Chichester will conduct a two-day Exploring Visual Studio LightSwitch training session on 6/28 and 6/29/2011 onsite at CODE Training Center (6605 Cypresswood Dr. Suite 300, Spring, TX 77379) or remotely via GoToMeeting:

CODE Training and EPS Software will be holding an intensive two-day, lecture-style boot camp on LightSwitch, specifically designed for developers of business applications who wish to learn the latest Microsoft .NET technologies. This particular class shows how to create modern but standard data/business applications as quickly as you could in the long-gone days of monolithic, single-workstation apps. The training class will be held at our offices in Houston, Texas, as well as online via GoToMeeting. After the class, attendees will receive the PowerPoint slides, the code samples and videos of each class as reference materials.

This comprehensive two-day class starts at a beginner level, but quickly moves beyond theory to enable attendees to learn how to write real-world LightSwitch applications. Attendees will also get the opportunity to discuss their projects and have their questions personally answered by LightSwitch experts.

    Visual Studio LightSwitch


    LightSwitch Extension Points


    Discounts may be available for companies who have previously attended our classes. Call Patrick for details. Please note that instructors & exact curriculum may change. All sales are final.

<Return to section navigation list> 

    Windows Azure Infrastructure

Microsoft TechNet recently added a new Cloud Scenario Hub as a jumping-off point for IT-oriented information about the Windows Azure Platform, Office 365 and other Microsoft cloud initiatives:


    • Steven S. Warren (@stevenswarren) posted Making sense of Windows Azure's VM role to TechTarget’s SearchWindowsServer blog on 2/8/2011 (missed when published):

    Windows Azure now includes a Virtual Machine role that allows organizations to host their own virtual machines in the cloud.

Microsoft introduced this role as a way to ease the migration of applications to cloud computing. So instead of waiting until your code is “cloud-ready”, you can use this role to move applications to the cloud while refactoring old code.


    Where the VM role fits within Azure
    Windows Azure currently has three roles: Web, Worker and Virtual Machine (VM). The Web role is used for web application programming on Internet Information Services (IIS) 7.0, while the Worker role is basically for any type of process that runs in the background with a front-end interface.

The VM role is the newbie, and it uses a virtual hard disk (VHD) image of a Windows 2008 R2 server. The image is created internally on your network using Hyper-V technology and then uploaded to Windows Azure. This image can be customized and configured to run whatever software you would like to run in the cloud.

Before pushing virtual machines out to the cloud, however, it’s important to understand the pricing, licensing and prerequisites involved. Any instance of a VM role is priced by the compute hour, and licensing of the role is included in the cost (see the table below).


Note: This same table can be found on Microsoft’s Azure Compute page.

All virtual machines are created using Hyper-V Manager on a Windows Server 2008 operating system, where R2 is recommended. You’ll also find that Hyper-V, IIS 7.0, the Windows Azure SDK, and ASP.NET are all required, with an optional install of Visual Studio 2010 also available. (More requirements for the Azure VM role are listed on MSDN.)

    Where the VM role makes sense
    So why would you want to implement the VM role? Well, let’s say you’ve done your due diligence and decided on Windows Azure as your cloud platform of choice. You are ready to move forward but have a lot of existing legacy applications that are written differently and may not work on the Azure platform. A rewrite of this code could have a lengthy roadmap even if you are utilizing agile programming. In my opinion, this is where the VM role should be used.

    More on Windows Azure

    The VM role gives you complete control over the system where your code runs, so while you are rewriting code to work in Azure, you could also create and deploy customized VHD images to the cloud immediately. In other words, the VM role can be used to migrate an on-premise application to run as a service in the Windows Azure cloud.

Another ideal time to implement the VM role is when you aren’t sure if you want to stay with Windows Azure for the long term. What if you decide to change vendors? Windows Azure is a Platform as a Service (PaaS), which is simply a framework for developers to create applications and store data.

    Basically, once you develop your product for Windows Azure, it runs on Windows Azure. But if your company takes a new direction and wants to leverage a different cloud platform from Amazon or VMware, guess what? You’ll have to recode because you won’t be able to move that application. The VM role acts as a bridge that connects PaaS with Infrastructure as a Service (IaaS); it gives Microsoft an IaaS platform and provides you with the portability to move vendors if a change of direction is needed.

    When not to use the Azure VM role
    While the use cases above make sense, to me a VM role in the cloud doesn’t seem like the best option for the long-term. For starters, if you push virtual machines to the cloud, you need good speeds to upload. So the bigger the VM, the longer that upload process will take. Secondly, Microsoft doesn’t maintain your virtual machines for you; you are responsible for patching and uploading the changes as a differencing disk.

When you look at it that way, maintaining a VM role for an extended period of time seems like a nightmare. Not only could the upkeep be tremendous, but differencing disks are not my favorite virtual machine technology anyway, as they are prone to corruption. Snapshot technology is much easier to deal with.

    So while the Windows Azure VM role is good to have in the Azure platform, in my opinion it’s not a great long-term PaaS solution. What it can do is help bridge the gap while you are busy coding for a true Platform as a Service.

    You can follow on Twitter @WindowsTT.

    Steve is the author of The VMware Workstation 5 Handbook and has held the Microsoft MVP award for 8 consecutive years.

Full disclosure: I’m a paid contributor to TechTarget’s blog.

    Chris Czarnecki posted Comparing PaaS with IaaS Revisited on 2/12/2011 to the Learning Tree blog:

A few months ago, in response to being frequently asked about the differences between Platform as a Service (PaaS) and Infrastructure as a Service (IaaS), I posted an article which discussed the differences. With the rapid change in Cloud Computing, I felt the subject needed revisiting.

One of the main differences between PaaS and IaaS is the level of control and administration available. With PaaS services such as Azure and Google App Engine, the application is simply deployed; no access to servers or the underlying operating system is available. With IaaS, once the infrastructure is provisioned, total control of all aspects of the software, including operating systems and server software, is available. PaaS clearly has a big benefit, and this is a reason why Heroku, VMware and many others have developed platforms for various environments.

On the other hand, having no access to server software or operating systems is sometimes very constraining – for example, wanting to tweak IIS in Azure so that you can run more than one application on the same host. It is for exactly these reasons that Azure now provides elevated privileges, if required, on its PaaS. Azure also allows customer VMs to be uploaded. It seems that the Azure PaaS now offers IaaS features too – but only if access is required. Similarly, Amazon has released Elastic Beanstalk, which is PaaS on Amazon. This allows Java application developers to deploy to PaaS but, if required, gain access to the underlying infrastructure in the same way as IaaS.

Whilst the distinction between PaaS and IaaS is still clear, the changes vendors are making to their products make it more difficult to categorise them and appreciate what they have to offer. This is an example of the kind of subject addressed in Learning Tree’s Cloud Computing course. If you would like to get a deep, pragmatic understanding of Cloud Computing, why not consider attending?

    Wade Wegner (@WadeWegner) posted Cloud Cover Episode 36 - Mark Russinovich Talks Fabric Controller and Cyber Terrorism with Steve Marx (@smarx) in a 00:34:03 video segment on 2/11/2011:


    Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @cloudcovershow.

    In this episode, Steve and Wade are joined by Mark Russinovich, Technical Fellow at Microsoft, as they:

    • Discuss Mark's new book, Zero Day, and his role on the Fabric Controller team in Windows Azure
    • Discuss new leadership in Microsoft's Server and Tools Business
    • Explain the reason for the Windows Azure SDK 1.3 refresh
    • Share the Windows Azure AppFabric CTP February Refresh announcement

    Cloud Cover has been nominated for a "Niney" award in the "Favorite Channel 9 Show" category. If you're a fan of the show, vote for us!

    Show links:

    Microsoft Appoints Satya Nadella as President of Server and Tools Business
    Windows Azure Software Development Kit (SDK) Refresh Released
    New Version of the Windows Azure Service Management CmdLets
    Windows Azure AppFabric CTP February, Caching Enhancements, and the New LABS Portal
    Managing the SecureString in the DataCacheSecurity Constructor

    Dmitri Sotkinov [pictured below] prognosticated What Satya Nadella means to Azure future in a 2/11/2011 post:

With the recent changes in the leadership of one of Microsoft’s key business units – Server and Tools – from Bob Muglia to Satya Nadella, one can’t help speculating what this means for the business unit and how it will affect Microsoft’s cloud strategy, specifically Windows Azure – Microsoft’s platform as a service.

Here’s my uneducated guess, based on the assumption that, given a new task, humans tend to use the approaches that worked well for them last time, and that Satya [pictured right] definitely got this post as recognition for successfully rolling out Bing and transforming Microsoft’s search business from nothing into a competitor that really frustrates Google.

    Here’s what I think Satya will bring to Microsoft’s Server and Tools Business:

    • More focus on online (Azure) than on Windows Server: Bob Muglia made the Windows Server business a success; it was his kid, while Windows Azure (one could argue) was something of a step-child, imposed on him and added to his business during a re-org. Satya will likely feel much different: for the last few years he has been “living in the cloud” leading Bing, and Steve Ballmer very explicitly made lack of cloud focus the reason for changing the business unit leadership.
    • Compete against the market leader: Bing clearly was developed to compete against Google. I guess this means that now Azure development will become aggressively anti-Amazon.
    • Acquisitions and partnerships: so far Azure has been largely a ground-up effort by Microsoft engineers. The Bing team tried to buy Yahoo, and when that did not work, hired a lot of top talent from Yahoo and finally essentially acquired its search and ad business. Satya was directly involved in these efforts. So who is a runner-up in the IaaS business whom Microsoft could acquire to get more visible in that space? Rackspace? Savvis? One could argue that search share was more relevant in the search advertising business, in which the big get bigger (why even bother advertising with small players?), and that this advantage of scale is not as relevant in hosting, so acquisitions might not be as effective. We will see…
    • Not sure if Azure appliance emphasis will persist: Azure appliance made a lot of sense under old leadership. Server and Tools Business knows how to sell to enterprises, so let’s turn Azure into an appliance which we can sell to our existing biggest partners and customers. Will Satya feel the same? I don’t think Bing folks were paying much attention to Microsoft’s search appliance strategy leaving this all to SharePoint/FAST and concentrating on pure cloud play…

There were speculations after Ray Ozzie left that Azure might get de-emphasized – after all, Azure was one of Ray’s pet projects. With Satya’s appointment, I would say we should expect Azure to only gain priority at Microsoft. We’ll see how applicable Bing experience will be for making Windows Azure a top player in the cloud platform space.

Dmitri leads the new product research and development team for Quest’s Windows Management business unit.

    Bing still has a long way to go before “really frustrating Google” with a sizable market share.

    Dana Gardner adds a cloud computing spin in his Some Thoughts on the Microsoft and Nokia Tag Team on Mobile or Bust News Briefing Directs post of 2/11/2011:

    Given what they are and where they have been, there's little logical reason for Microsoft not dominating the mobile smartphone computing landscape. And it should have been a done-deal in many global major markets at least four years ago.

    The only reason that Microsoft is now partnering with Nokia on mobile -- clearly not the client giant's first and primary strategy on winning the market -- is because of a lack of execution. I surely recall speaking to Microsofties as many as 10 years ago, and they were all-in on the importance and imperative for mobile platforms. Windows CE's heritage is long and deep. Nokia just as well knew the stakes, knew the technology directions, knew the competition.

Now, in the above two paragraphs, replace the words “Microsoft” and “Nokia.” It still works. Both had huge wind in their sails (sales?) to steer into the mobile category for keeps, nay, to define and deliver the mobile category to a hungry world and wireless-provider landscape ... on the platform providers’ own terms!

    So now here we have two respective global giants who had a lead, one may even say a monopoly or monopoly-adjacency, in mobile and platforms and tools for mobile. And now it is together and somehow federated -- rather than separately or in traditional OEM partnership -- that they will rear up and gallop toward the front of the mobile device pack -- the iOS, Android, RIM and HP-Palm pack. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

How exactly is their respective inability, Microsoft’s and Nokia’s, to execute separately amid huge market-position advantages enhanced now by trying to execute in cahoots ... loosely, based mostly on a common set of foes? I’ll point you to the history of such business alliances, often based on fear, and it’s not any better than the history of big technology mergers and acquisitions. It stinks. It stinks for end-users, investors, partners and employees.

    But why not reward the leadership of these laggards with some more perks and bonuses? Works in banking.

    A developer paradise
And talk about an ace in the hole. Not long ago, hordes of developers and ISVs -- an entire global ecosystem -- were begging Microsoft to show them the mobile way, how to use their Visual Studio skills to skin the new cat of mobile apps. They were sheep waiting to be led (and not to slaughter). The shepherd, it turned out, was out to lunch. Wile E. Coyote, super genius.

And execution is not the only big reason these companies have found themselves scrambling as the world around them shifts mightily away. Both Microsoft and Nokia clearly had innovator’s dilemma issues in droves. But these were no secret. (See reason one above on execution again ... endless loop.)

Microsoft had the fat PC business to protect, which as usual divided the company, Titanic-like, on how to proceed on any other course. Nokia had the mobile voice business and mobile telecom provider channel to protect. So many masters, so many varieties of handsets and localizations to cough up. Motorola had a tough time with that one too. Yes, it was quite a distraction.

    But again, how do these pressures to remain inert inside of older models change by the two giants teaming up? Unless they spin off the right corporate bits and re-assemble them together under a shared brand, and go after the market anew, the financial pressures not to change fast remain steadfast. (See reason one above on execution again ... endless loop).

    What's more there's no time to pull off such a corporate shell game. The developers are leaving (or left), the app store model is solidifying elsewhere, the carriers are being pulled by the end-users expectations (and soon enterprises). And so this Microsoft-Nokia mashup is an eighth-inning change in the line-up and there's no time to go back to Spring training and create a new team.

    Too little, too late
    Nope, I just can't see how these synergies signal anything but a desperation play. Too little, too late, too complex, too hard to execute. Too much baggage.

At best, the apps created for a pending Nokia-Microsoft channel nee platform will be fourth down the list for native app support. More likely, HTML 5 and mobile web support (standards, not native) may prove enough to cover whatever market Microsoft and Nokia can muster together. But that won’t be enough to reverse their lackluster mobile position, or get them the synergies they crave.

Both Microsoft and Nokia were dark horses in the mobile devices and associated cloud services race. Attempting to hitch the two horses together with baling wire and press releases doesn’t get them any kind of leg up on the competition. It may even hobble them for good. [Emphasis added.]

Microsoft Loses Another Visionary Behind Windows Azure, about Amitabh Srivastava, was #1 in CRN’s Five Companies That Dropped The Ball This Week slide show of 2/11/2011:

    Amitabh Srivastava, a senior vice president in Microsoft's Server and Tools Business, is leaving Microsoft after 14 years at the company, the latest in a long string of Microsoft executive departures.

    The timing of Srivastava's resignation may have just been a coincidence, but Srivastava was one of the principal architects of Windows Azure, along with the departing Ray Ozzie, and was believed to be in the running for the top post at STB. Microsoft instead tapped Satya Nadella from Online Services to replace the pushed aside Bob Muglia.

    Ballmer is in the midst of an executive shakeup aimed at infusing more engineering talent into Microsoft's executive ranks. But when was the last time a couple of weeks passed without a key Microsoft executive leaving? The doors at Redmond have been getting quite the workout lately and that's getting tougher and tougher to spin in a positive light.

    <Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

An advertising-supported James Urquhart Cloud Expert Inside theCube at Strata Conference video interview was presented on 2/11/2011:

James Urquhart, cloud expert, blogger at CNET and strategist with Cisco, discusses with John Furrier and Dave Vellante the impact of cloud and the changing world of data, from the Strata Conference on Big Data.

    Go here to watch the video.

    James continues his promotion of a hybrid cloud model.

    <Return to section navigation list> 

    Cloud Security and Governance

Tanya Forsheit (@Forsheit) continued her analysis of the Song-Beverly Credit Card Act with a California Supreme Court Says Zip Codes are PII-Really. (As California Goes, So Goes the Nation? Part Two) article of 2/11/2011 posted to the Information Law Group blog:


Thinking hard about how business and consumer interests can be harmonized by effective and privacy/security-friendly policies and practices? We thought so. Worried that zip codes might be treated as personal information in this country? Probably not. All that may be changing. In a ruling already attracting criticism and attention from some high-profile privacy bloggers, the California Supreme Court ruled Thursday, in Pineda v. Williams-Sonoma, that zip codes are "personal identification information" for purposes of California's Song-Beverly Credit Card Act, California Civil Code section 1747.08, reversing the Court of Appeal's decision that we discussed last year.

    For those of you who may be wondering, yes - the statute provides for penalties of up to $250 for the first violation and $1,000 for each subsequent violation, and does not require any allegations of harm to the consumer.  California has already seen dozens, if not hundreds, of class action lawsuits around the Song-Beverly Credit Card Act.  The Court's interpretation of "personal identification information" as including zip codes is likely to spark a new round of class action suits. California retailers should carefully consider the Pineda decision in crafting and updating their personnel policies and training programs with respect to collection of information during credit card transactions.

    The legislation at issue prohibits retailers from asking customers for their personal identification information and recording it during credit card transactions. Section 1747.08(a) provides that "no . . . firm . . . that accepts credit cards for the transaction of business shall . . . [r]equest, or require as a condition to accepting the credit card as payment in full or in part for goods or services, the cardholder to provide personal identification information, which the . . . firm . . . accepting the credit card writes, causes to be written, or otherwise records upon the credit card transaction form or otherwise."  Subdivision (b) defines "personal identification information" as “information concerning the cardholder . . . including, but not limited to, the cardholder's address and telephone number.”

    The California Supreme Court reversed the Court of Appeal, holding that the definition means exactly what it says - personal identification information means any "information concerning the cardholder."  The Court cited Webster's, noting that "concerning" is "a broad term meaning 'pertaining to; regarding; having relation to; [or] respecting.'"  The Court rejected the Court of Appeal's reasoning that a zip code pertains to a group of individuals, not a specific individual, finding that the reference to address in the definition of "personal identification information" must also include components of an address. The Court attacked the Court of Appeal's assumption that a complete address and telephone number are not specific to an individual. The Court took the position that interpreting the term "personal identification information" to mean any information of any kind "concerning" a consumer is consistent with the consumer protection goals of the statute.  The Court reasoned:

    the legislative history of the Credit Card Act in general, and section 1747.08 in particular, demonstrates the Legislature intended to provide robust consumer protections by prohibiting retailers from soliciting and recording information about the cardholder that is unnecessary to the credit card transaction.

    The Court's discussion of "information concerning" reminds me of the boilerplate definitions we litigators always use (and then fight about) in discovery requests and meet and confers.  The litigators out there know what I am talking about:  "for purposes of these document requests, the term 'concerning' means 'discussing, describing, reflecting, containing, commenting, evidencing, constituting, setting forth, considering, pertaining to,'" and on, and on, and on . . . Such definitions, interpretations, and arguments may be fun for litigators, but in real life no one knows what they really mean and they have no practical application.  If "concerning" can mean anything, it kind of means nothing for purposes of providing practical guidance for reasonable business practices.

    Further, while the Court's reading of the statute might make sense in a vacuum as a matter of plain language statutory interpretation based on the phrase "information concerning," the Court's analysis seems to omit any discussion of the words "personal identification" in the term "personal identification information."  Zip codes may be information "concerning" a person, but they do not personally identify any individual.

    Finally, and perhaps most significantly, it is not clear how collection of zip codes, while perhaps unnecessary to credit card transactions, is of any potential harm to the consumer. And that, as the Court notes, is the point of the statute - consumer protection.  The Court does not discuss any potential harm to the consumer from collection of zip codes.  That is not surprising since collection of zip codes does not give rise to any obvious or apparent consumer harm. 

    I'm off to speak at the RSA Conference.  Look forward to hearing your thoughts on this one.  Happy weekend to all.

    <Return to section navigation list> 

    Cloud Computing Events

    • Beth Massi (@bethmassi) will present Creating and Consuming OData Services for Business Applications on 2/16/2011 at 6:30 to 8:30 PM in Microsoft’s San Francisco Office, 835 Market Street, Suite 700, San Francisco, CA 94103:

    Event Description

    The Open Data Protocol (OData) is a REST-ful protocol for exposing and consuming data on the web and is becoming the new standard for data-based services. In this session you will learn how to easily create these services using WCF Data Services in Visual Studio 2010 and will gain a firm understanding of how they work. You'll also see how to consume these services and connect them to other data sources in the cloud to create powerful BI data analysis in Excel 2010 using the PowerPivot add-in. Finally, we will build our own Office add-ins that consume OData services exposed by SharePoint 2010.
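    Because OData's query surface is just a set of URI conventions, any HTTP client in any language can consume a service. As a language-neutral sketch (the `odata_query` helper and the service URL are illustrative, not part of WCF Data Services), here is how the standard `$filter`, `$top` and `$orderby` system query options compose into a request URI:

```python
from urllib.parse import quote

def odata_query(service_root, entity_set, filter_expr=None, top=None, orderby=None):
    """Compose an OData query URI from standard system query options
    ($filter, $top, $orderby), percent-encoding the option values."""
    options = []
    if filter_expr is not None:
        options.append("$filter=" + quote(filter_expr))
    if top is not None:
        options.append("$top=" + str(top))
    if orderby is not None:
        options.append("$orderby=" + quote(orderby))
    url = service_root.rstrip("/") + "/" + entity_set
    if options:
        url += "?" + "&".join(options)
    return url

# Top five German customers, ordered by company name:
print(odata_query("https://example.org/Northwind.svc", "Customers",
                  filter_expr="Country eq 'Germany'",
                  top=5, orderby="CompanyName"))
```

    Building URIs like this is, roughly, what the WCF Data Services client does under the covers when it translates LINQ queries, and what PowerPivot issues when it refreshes a feed.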

    Speaker's Bio

    Beth Massi is a Senior Program Manager on the Microsoft Visual Studio BizApps team, which builds the Visual Studio tools for Azure, Office, and SharePoint, as well as Visual Studio LightSwitch. Beth is a community champion for business application developers and is responsible for producing and managing online content and community interaction for the BizApps team. She has over 15 years of industry experience building business applications and is a frequent speaker at various software development events. You can find her on a variety of developer sites including MSDN Developer Centers, Channel 9, and her blog. Follow her on Twitter @BethMassi.

    • 6:00 doors open (pizza and drinks)
    • 6:10 - 6:25 Lightning talks
    • 6:30 announcements
    • 6:45 - 8:15 presentation
    • 8:15 - 8:30 raffle
    This event is sponsored by Infragistics, JetBrains and Apress.

    • Beth Massi (@bethmassi) will be the presenter for the East Bay.NET User Group’s March meeting – Build Business Applications Quickly with Visual Studio LightSwitch on 3/9/2011 at 6:45 PM in the University of Phoenix Learning Center, 2481 Constitution Drive, Room 105, Livermore, CA:

    • When: Wednesday, 3/9/2011 at 6:45 PM
    • Where: University of Phoenix Learning Center in Livermore, 2481 Constitution Drive, Room 105

    Build Business Applications Quickly with Visual Studio LightSwitch—Beth Massi

    Microsoft Visual Studio LightSwitch is the simplest way to build business applications for the desktop and the cloud. In this session you will see how to build and deploy an end-to-end business application that federates multiple data sources. You’ll see how to create sophisticated UI, how to implement middle-tier application logic, how to interact with Office, and much more without leaving the LightSwitch development environment. We’ll also discuss architecture and additional extensibility points, as well as demonstrate how to author custom controls and data sources that LightSwitch can consume using Visual Studio Professional+ and your existing Silverlight and .NET skills.

    FUNdamentals Series
    Enterprise Requirements for Cloud Computing—Adwait Ullal

    When an Enterprise decides to adopt a cloud computing platform such as Azure, what are the key requirements, apart from security and the usual ones you hear in the media? Adwait Ullal will enumerate additional requirements based on his experience of evaluating Azure at a major financial services company.


    6:00 - 6:30 .NET FUNdamentals
    6:30 - 6:45 check-in and registration
    6:45 - 7:00 tech talk; announcements and open discussion
    7:00 - 9:00 main presentation and giveaways

    Presenters’ Bios:

    Beth Massi

    See above post.

    Adwait Ullal (FUNdamentals speaker)

    Adwait Ullal has over 25 years of software development experience in diverse technologies and industries. Adwait is currently an independent consultant who helps companies, large and small, with their IT strategy, methodology and processes. Adwait also maintains a jobs blog at

    The Enterprise Developers Guild will hold its February Meeting: Visual Studio LightSwitch Beta on 2/15/2011 at 6:00 PM in the Multipurpose Room (MPR) of the Microsoft Campus at 8055 Microsoft Way, Charlotte, NC 28273:

    Join us Tuesday, February 15, at 6:00 PM in the Multipurpose Room (MPR) of the Charlotte Microsoft Campus for our February meeting on a special night. Remember when we used to get a prototype up in hours and get the app out to the department in a few days? Those days are coming back. Guild founder Bill Jones will develop a simple app before our very eyes and then tell us "what's in there" when we create a Visual Studio LightSwitch project. Since LightSwitch implements the UI in Silverlight, creating Windows applications, web applications, or both is a deployment decision. LightSwitch has pros and cons, but you do end up with a Visual Studio project that can be maintained independently. If you need something fast or need to automate configuration or administration, you really should know about LightSwitch.

    The meeting presenter is Bill Jones Jr., MVP.

    As a Solutions Architect for Coca-Cola Bottling Company Consolidated, Bill specializes in software development using .NET, VB.Net, ASP.NET, C#, Windows Mobile, SQL CE and SQL Server. He is well versed in all phases of the development life cycle - process, work flow, class abstraction, data structures, reporting and user interfaces. In his spare time, Bill founded and continues to lead the Enterprise Developers Guild, the .Net User Group in Charlotte NC with more than 2700 members. He is also the INETA Membership Mentor for the two Carolinas. Bill's blog can be found at

    The meeting is sponsored by TEKsystems. …

    The Boston Azure Cloud Users Group will hold its next event, Data Security and Identity Management, on 2/24/2011 at 6:00 to 8:00 PM at Microsoft New England R&D Center, One Memorial Drive, Cambridge, MA 02142:

    • In the main talk, Walt Lapinsky, VP Cloud Security at Purposeful Clouds, will dig into Data Security concerns for the cloud - generally, and as applied to Windows Azure. Walt has a diverse background in this area, with over 40 years of proven expertise in supporting bet-your-business IT environments, including in the military and intelligence communities.
    • In the opening talk, Bill Wilder will introduce how to create a Single Sign On (SSO) experience in the cloud using capabilities such as Windows Identity Foundation (WIF) and Azure's Access Control Service (ACS), including how to easily outsource your authentication to Facebook, Gmail, LiveId, or other existing providers - with Azure's ACS and .NET's WIF doing the heavy lifting.
    • This is a FREE event. All are welcome.
    • Pizza and drinks will be provided. Please register so we know what to expect for attendance.
    • After we wrap up the meeting, feel free to join us for a drink at a local watering hole.
    • For MORE DETAIL and UP-TO-DATE INFORMATION please visit the group’s site. While there, also consider joining the (low volume) mailing list.
    Register Now

    The VBUG UK Users Group will hold the VBUG Spring Conference on 3/28 and 3/29/2011 at Wokefield Park, Mortimer, Reading RG7 3AH, UK:

    Day One (Mon 28 March):

    • Developing SharePoint 2010 with Visual Studio 2010 - Dave McMahon
    • Cache Out with Windows Server AppFabric - Phil Pursglove
    • Extending your Corporate Network in to the Windows Azure Data Centre with Windows Azure Connect - Steve Plank
    • Good Site Architecture in Kentico CMS - James Cannings
    • Silverlight Development on Windows Phone 7 - Andy Wigley 

    Day Two (Tues 29 March):

    • Self Service BI for your users, but what does that mean for you? - Andrew Fryer
    • Design Patterns – Compare and Contrast – Gary Short
    • Projecting your corporate identity to the cloud – Steve Plank
    • May the Silverlight 4 be with you – Richard Costall
    • The Step up to ALM – an Introduction to Visual Studio 2010 TFS for the Visual Sourcesafe User - Richard Fennell

    The Canada Partner Learning Blog announced on 2/11/2011 a Windows Azure Platform Acceleration Technical Training Tour in Toronto and Calgary starting April 26 in Toronto:


    Gain the information and skills to help you understand and start developing on the Windows Azure platform now. This two-day instructor-led workshop provides delegates with an early opportunity to gain insight into the nature and benefits of the Windows Azure platform. Delivered through workshop-style presentations and hands-on lab exercises, the workshop will focus on three key services - Windows Azure, SQL Azure and AppFabric.

    Windows Azure Platform is an internet-scale cloud services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. Windows Azure platform's flexible and interoperable platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities.

    SQL Azure extends the capabilities of Microsoft SQL Server into the cloud as a Web-based, distributed relational database. It provides Web-based services that enable relational queries, search, and data synchronization with mobile users, remote offices and business partners. It can store and retrieve structured, semi-structured, and unstructured data.

    AppFabric makes developing loosely coupled cloud-based applications easier; AppFabric includes access control to help secure your applications, as well as a service bus for communicating across applications and services. These hosted services allow you to easily create federated applications that span from on-premises environments to the cloud.

    Who Should Attend:
    This course is aimed at software developers with at least six months of practical experience using Visual Studio 2008 or 2010 and C#, and who are familiar with Virtual PC 2007 or Windows Virtual PC.

    Click here to register for the Toronto and Calgary sessions starting April 26

    Day 1
    Module 1: Windows Azure Platform Overview
    Module 2: Windows Azure Compute + Storage
    LAB: Introduction to Windows Azure
    Module 3: Introduction to SQL Azure
    LAB: Introduction to SQL Azure
    Module 4: AppFabric Service Bus
    LAB: Introduction to Service Bus
    Module 5: SQL Azure Advanced Tips and Tricks
    LAB: SQL Azure Tips and Tricks

    Day 2

    Module 6: Exploring Windows Azure Storage
    LAB: Exploring Windows Azure Storage
    Module 7: Building High Performance Web Apps
    LAB: Building Windows Azure Apps with Cache Service
    Module 8: Moving Apps to the Web with Web and Worker Roles and the VM Role
    LAB: Advanced Web and Worker Roles
    Module 9: Connecting Apps with Windows Azure Connect (Sydney demo)
    Module 10: Identity and Access Control in the Cloud
    LAB: Federated Authentication in a Windows Azure Web Role Application

    The HPC in the Clouds blog announced on 2/11/2011 a Platform Computing CEO, 451 Group Analyst to Discuss Enterprise Private Clouds Webinar on 2/16/2011 at 8:00 AM PST:

    Platform Computing announced that it will be hosting a webinar on February 16 featuring the company’s CEO, Songnian Zhou, and William Fellows, Principal Analyst at the 451 Group.

    The webinar, entitled, “The Journey to the Private Cloud” will outline potential strategies to help enterprise private cloud builders plan the process according to best practices gathered from use cases and existing private cloud implementations.

    The co-hosts will discuss their perspectives on the development of trends in private cloud computing within the enterprise context and will also examine how companies can build a practical business case to justify investments in moving to a private cloud model.

    Additionally, Zhou and Fellows will address planning and proof-of-concept issues, including how to best create a private cloud roadmap and see the process through from initial plan to final implementation.

    As Platform’s release about the upcoming event stated, “many are calling 2011 the year of the private cloud [but] the journey to adoption is still unchartered territory for most companies developing or re-examining their cloud strategies.”

    The webinar (registration required to participate), which is open to the public, will take place on Wednesday, Feb. 16 at 11:00 a.m. EST/8:00 a.m. PST/4:00 p.m. GMT.

    Platform invites all CIOs, IT managers, enterprise architects, business analysts, cloud strategy teams and others interested in the concept of private clouds within the enterprise to join.

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Joseph Galante asserted “James Hamilton's challenge: making data centers more efficient while fending off a threat from IBM, AT&T, and Microsoft” in a deck for his Amazon's Cloud-Computing Guru Honed Skills Fixing Lamborghinis post to Bloomberg BusinessWeek of 2/9/2011:

    Amazon (AMZN) Vice-President James Hamilton's schooling in computer data centers started under the hood of a Lamborghini Countach.

    Fixing luxury Italian autos in British Columbia while in his 20s taught Hamilton, 51, valuable lessons in problem solving, forcing him to come up with creative ways to repair cars because replacement parts were hard to find. "It's amazing how many things you can pick up in one industry and apply to another," Hamilton, who has also been a distinguished engineer at Amazon since 2009, says in an interview.

    Hamilton is putting these skills to use at Amazon, where he's central to an effort by Chief Executive Officer Jeff Bezos to make Amazon Web Services, which leases server space and computing power to other companies, as big as the core e-commerce business. He's charged with finding ways to make data centers work faster and more efficiently while fending off competition from Microsoft (MSFT) and IBM (IBM), his two prior employers, and AT&T (T).

    Revenue from the kinds of cloud services offered by Amazon is likely to surge to $56 billion in 2014, from more than $16 billion in 2009, according to research firm IDC. Amazon's Web services brought in about $500 million in revenue in the past year, according to estimates from Barclays Capital and Lazard Capital Markets, or about 1.5 percent of Amazon's $34.2 billion in sales. The company doesn't disclose revenue from Web services, also called cloud computing.

    Margin Concern

    As they pursue growth, Hamilton and his team will have to ensure that Amazon's investment in Web services is well-spent. Investors pummeled shares of Seattle-based Amazon on Jan. 28, the day after the company said it would boost spending on data centers and warehouses, fueling concern that margins will narrow.

    Although still relatively small, Amazon Web Services is growing at a faster rate than the company's core business, and it's more profitable, says Sandeep Aggarwal, an analyst at Caris & Co. Web services may generate as much as $900 million in sales this year, and operating margins could be as wide as 23 percent, compared with 5 percent margins in the main business, Aggarwal says. "There aren't many companies with an incremental business that's more profitable than their core business," he says.

    Hamilton, who has filed almost 50 patents in various technologies, is tasked with developing new ideas in cloud computing, which lets companies run their software and infrastructure in remote data centers on an as-needed basis, rather than in a computer room down the hall.

    He spends much of his time shuttling between departments, encouraging teams focused on storage, databases, networking, and other functions to work together. One aim: devising ways to squeeze costs out of multimillion-dollar data centers and pass those savings on to customers such as Eli Lilly (LLY) and Netflix (NFLX).

    Nascent Industry

    Hamilton and the teams must figure out how to build the business on the fly. Cloud computing is so nascent that engineers can't rely on case studies or past industry experience to guide their decisions, says Lew Tucker, chief technology officer of cloud computing at Cisco Systems (CSCO). "This is not in college textbooks yet," says Tucker, who has followed Hamilton's work as an industry peer. "These architectures are new and evolving every day."


    Photo from James’ blog.

    Derrick Harris reported “Rackspace is in talks with Microsoft about providing a managed version of Windows Azure” in his Lew Moorman Talks Anso Labs, OpenStack and Cloud Revenue post of 2/11/2011 to GigaOm’s Structure blog:

    During a phone call this morning, Rackspace Cloud President Lew Moorman flatly dismissed allegations that his company bought Anso Labs to gain extra power within the OpenStack community, and acknowledged that Rackspace is in talks with Microsoft about providing a managed version of Windows Azure. Rackspace has made the headlines twice this week — first for buying Anso Labs, a San Francisco-based consulting firm that helped NASA develop its Nova cloud-computing software and launch its Nebula cloud, then for topping $100 million in cloud revenue in 2010 — and Moorman addressed both in order.

    On Anso Labs and OpenStack

    The Anso Labs deal is very notable, actually, because Anso was the brains behind Nova, the computing software that runs NASA’s Nebula cloud and that comprises the compute component of OpenStack. Bringing it in-house likely will help improve the process of developing that technology, as well as the process of integrating it with the OpenStack Storage component originally developed by Rackspace. Issues arose, however, when someone started counting seats and realized that by acquiring Anso Labs, Rackspace now has 3 of 4 seats on the OpenStack governance board and 9 of 10 seats on the Project Oversight Committee. Because OpenStack is an open source community, such a sudden shift in power stoked fears that Rackspace bought Anso Labs so it could push its own agenda within OpenStack.

    Moorman says there is “no truth at all” to these allegations. In fact, he added, the concerns are reasonable, and Rackspace will listen to them before, probably, suggesting changes to the governing board. “If we end up becoming a dictator in this thing,” he explained, “it won’t go anywhere.” That’s why Rackspace has yet to deny membership to any contributor or to reject anybody’s code. Further, he noted that, with major vendors such as Citrix, Dell and Cisco getting more involved with OpenStack, Rackspace wouldn’t likely be able to rule the project with an iron fist even if it wanted to. The “community is going to continue to hold us to account,” he said.

    That being said, Rackspace does intend to make money off of OpenStack. It won’t do so by utilizing Anso Labs’ expertise to develop a Rackspace-only version of OpenStack or a premium version that it could sell to third parties, but Moorman said Rackspace is “working through” how it might offer support services to organizations deploying OpenStack. Anso Labs presents some interesting capabilities in terms of actual deployment support, Moorman noted, and there’s always Rackspace’s trademarked “Fanatical Support.” Another option he suggested might be selling Cloudkick’s monitoring and management software to OpenStack users. Rackspace will be “much more explicit” about its OpenStack support plans in the next month or so, but, Moorman clarified, adoption of the software is still its No. 1 goal, and Rackspace won’t compete too aggressively against its community partners.

    On Driving More Cloud Revenues

    Whatever money Rackspace is able to realize from OpenStack support will just be more padding on the “cloud revenue” line of its quarterly earnings statements. Cloud revenue consists of Cloud Servers, Cloud Files, Cloud Sites and Rackspace’s managed email offering (although Moorman said the former two drive it) and it topped $100 million in 2010. Moorman said Rackspace isn’t seeing any decline in interest for cloud services, and it only has plans to add more options to its cloud portfolio. He hinted at forthcoming PaaS and/or SaaS offerings, stating that Rackspace definitely will move up the stack, as well as bring traditional hosting services such as security and load balancing to the cloud. Surprisingly, Moorman also said Rackspace is “actively engaged” in talking with Microsoft about offering a managed version of Windows Azure, although there are no plans in place yet.

    In the end, though, Moorman does think OpenStack will be integral to Rackspace’s cloud success, even though the company has done just fine without it. The reason is that he views cloud computing as an ecosystem-driven business, a theory bolstered by the growing footprints of cloud providers like Amazon Web Services and VMware. He thinks OpenStack can serve the role of the cloud’s open ecosystem, and that Rackspace will reap a goodly amount of business from being part of it. And there’s a lot of business to be had; referencing Guy Rosen’s monthly count of which cloud providers are hosting the greatest number of large web sites, Moorman noted that the top six cloud providers combine to host less than 2 percent of the top 500,000 sites.

    If Rackspace emerges from what’s now a mini telco buying spree independent — a possibility that seems increasingly likely given its fast-growing business and penchant for making its own acquisitions lately — it appears poised to finally make a run at Amazon Web Services: if not in users, then in revenue, though not without a thriving OpenStack ecosystem.


    Microsoft licensing Rackspace to deliver “a managed version of Windows Azure” makes no sense to me.

    Randy Bias reminded readers that IaaS != Virtualization in a 2/11/2011 post:

    Just a brief note to highlight a fairly obvious misconception, so please excuse if I am preaching to the choir.

    Infrastructure-as-a-Service (IaaS) is not virtualization-on-demand.  I realize that there is a certain amount of terminology hijacking by marketers, but when you say ‘IaaS’, I don’t think ‘virtual servers on demand’, I think of ‘infrastructure on demand’, which includes: servers, storage (block and object), networking, DNS, and related.

    Simply put:

    Virtualization < IaaS

    I know many of you are clear on this.  I’m just taking a strong stand on the poisoning or dilution of the definition of IaaS.

    Randy’s right. Many pseudo-pundits confuse the two terms.

    Joe Panettieri reported VMware Says Microsoft, Oracle Are Blowing Cloud Smoke in a 2/10/2011 post to the Talkin’ Cloud blog:

    The gloves have finally come off at VMware Partner Exchange. After discussing product development and partner momentum for two days, VMware is now attacking cloud strategies promoted by Amazon, Microsoft and Oracle. VMware Chief Marketing Officer Rick Jackson says Amazon, Microsoft and Oracle are “blowing cloud smoke” instead of listening to CIO and customer needs.

    First came the usual channel applause: Partners now drive about 85 percent of VMware’s annual revenues, according to Carl Eschenbach, president of customer operations at VMware. But as customers begin to shift to the cloud, partners need to ensure they focus on end-customer needs rather than vendor hype, asserts VMware Chief Marketing Officer Rick Jackson.

    Then, Jackson warned attendees not to get locked into cloud strategies from Microsoft, Oracle and Amazon. He asserted that 85 percent of the cloud market will involve corporate private clouds, yet Microsoft is busy promoting the Windows Azure public cloud and Amazon is busy promoting Amazon Web Services.

    The punches didn’t stop there.

    Discussing Oracle’s cloud strategy: “You get a choice of Oracle hardware, Oracle middleware and Oracle databases topped off with Oracle’s applications. They call it cloud in a box. I call it a mainframe,” said Jackson.


    Discussing Microsoft: “Microsoft is smart. If you can entrap the developers with proprietary frameworks you can lock them in [to Azure]. But how many [partners expect to] see successful, growing, profitable businesses if Microsoft takes all of your customer data and moves it into Microsoft’s data centers?”

    Rival Views

    Does Jackson have a point? Perhaps. But it’s important to give VMware’s rivals equal time on the topic. In a recent interview with Talkin’ Cloud, Oracle Channel Chief Judson Althoff predicted Oracle Exadata and Exalogic solutions will emerge as top cloud computing platforms. And since acquiring Sun last year, Oracle has quickly restored Sun’s business to profitability while transitioning legacy Sun channel partners into the Oracle PartnerNetwork (OPN) Specialized partner program.

    Meanwhile, Microsoft has aggressively positioned Windows Azure as an open cloud that supports multiple closed source and open source software development tools. Microsoft recently celebrated Windows Azure’s first anniversary of operations, but Talkin’ Cloud thinks there are at least five different ways Microsoft can improve Windows Azure to better serve channel partners.

    As for Amazon Web Services, the platform truly is a public cloud play. But Eucalyptus Systems has developed an Amazon-compatible software platform for private clouds. In theory, that means customers and partners can move information and applications between private clouds and the Amazon public cloud.

    K. Catallo claimed Red Hat’s Makara Cloud Application Platform Makes it Easier to Deploy to the Rackspace Cloud in a 2/9/2011 post to the Red Hat news blog:

    Red Hat’s Makara team is pleased to announce the availability of the Makara Cloud Application Platform – Developer Preview. Makara is a Platform-as-a-Service (PaaS) that allows you to deploy, manage, monitor and auto-scale your new or existing Java and PHP applications in the cloud with little or no modifications. With this release: Makara is now available on the Rackspace Cloud, we’ve introduced a new MongoDB cartridge, support for Amazon micro-instances, 64-bit instance types across all clouds, plus a new security feature that gives administrators the ability to manage their organization’s user accounts across multiple clouds, clusters and applications.

    Makara has been supporting the Amazon public cloud for almost a year now, but we kept hearing from devops folks and app developers that they would love to see Makara available on the Rackspace Cloud. Ask and you shall receive! Getting started with Makara on the Rackspace Cloud is easy. Just head on over to the Try-It link on to get your trial started. For those of you not familiar with Makara and who’d like to learn more about how it works, check out our Resources page for videos, tutorials and how-to guides. The Rackspace Cloud team is also hosting a webinar on March 17, 2011 at 12 p.m. PT where Issac Roth, director of product marketing, Cloud Solutions for Red Hat, who joined the company via Makara, will be demonstrating how to deploy, manage, monitor and scale your Java or PHP app on the Rackspace Cloud. Plus, he’ll cover the new features in this release.

    And speaking of features, we are really excited to offer native support for MongoDB. We got some great insight from the folks over at 10Gen on where the pain points were in deploying, managing and scaling MongoDB, and we listened. In this release you gain the “point and click” ability to deploy a MongoDB cluster onto the cloud, including routers, config, shard and replica servers. Being that this is our first release with support for MongoDB, we know it might be a little rough around the edges, but we are planning on building out the feature set with your feedback and help. Please let us know what you think by engaging the Makara team and other users on IRC in room #makara or on our community page.

    In this release we’ve also added support for Amazon’s micro-instances, as well as 64-bit instance types across all clouds. This means both more affordable instances and bigger instances with more memory, bigger caches and better performance.

    The other cool feature in this release is the introduction of the concept of “organizations”. With Makara, a single user who is designated as an organization’s administrator can have an interface to manage all the organization’s developer accounts across clouds, clusters and applications. This means that an administrator can gain visibility and control around how cloud resources are being accessed and consumed.

    Finally, as many of you know, Makara was acquired by Red Hat in November 2010. Under Red Hat, we are providing the current version of the Makara Cloud Application Platform as a free Developer Preview. This Developer Preview is unsupported, though we will participate in community forums and IRC to offer advice and discussion. We expect that future versions will be incorporated into the upcoming Red Hat PaaS offering — stay tuned for more exciting news on this. For more information concerning Red Hat’s PaaS and Cloud Foundations offerings, visit the Cloud Foundations website.

    If you still have questions, drop us a line, we’d love to hear from you.

    Derrick Harris reported VMware Soups Up vCloud, Still Has PaaS Plans in a 2/8/2011 post to Giga Om’s Structure blog (missed when posted):

    VMware’s enterprise cloud computing story got stronger Tuesday morning with the announcement that the first three vCloud Datacenter partners are now online and a new tool for managing hybrid VMware clouds is now available. Despite lots of talk from enterprise customers about being interested in the cloud, it appears that few have actually taken the plunge to any significant degree. Regardless of whether they meet the strict definition of cloud computing pushed by some purists (see, e.g., this recent Wikibon post), these types of capabilities will bring enterprise users into the cloud fold, perhaps leading to even cloudier ambitions in the future.

    image The vCloud Datacenter program was announced during VMworld in late August along with several early partners, and now Verizon, BlueLock and Colt are online and ready to start serving enterprise cloud customers. The primary difference between vCloud Datacenter and vCloud Express, VMware’s initial foray into service-provider partnerships, is that vCloud Datacenter clouds utilize a variety of vSphere, vCloud and vShield products and must meet VMware-defined levels of security and compliance. Among these tools is vCloud Director, which lets customers turn all their on-premises and cloud-based VMs into a private cloud and includes capabilities such as self-service provisioning and chargeback. Other vCloud Datacenter partners not yet up and running are Terremark (recently bought by Verizon) and SingTel.

    image VMware also released vCloud Connector, a free vSphere plugin that provides a single interface for moving and managing applications across vSphere-based clouds. This includes, of course, the entire stable of vCloud Datacenter partners. As far as plug-ins go, vCloud Connector is pretty significant; Mathew Lodge, VMware’s senior director of cloud services, acknowledged that excitement around Connector has been surprisingly high, especially among day-to-day administrators. It’s understandable, though, because once IT has made the decision to roll out VMware-based cloud resources, Connector does enable hybrid cloud management, a capability often cited as a must-have for enterprise users.

    But don’t let VMware’s catering to conservative enterprise IT types fool you. As anyone watching VMware over the past two years has noticed, it’s actually pushing the cloud envelope in certain areas, such as PaaS. Lodge explained that VMware “think[s] about the two markets [IaaS and PaaS] differently”: whereas IaaS is about moving existing applications to the cloud — the clear goal of most vCloud efforts thus far — there will be a “transformation of how applications are built” over the next 10 years. As companies begin rewriting existing applications and bringing new ones into play, they will start using more “true” cloud offerings such as PaaS, which VMware is already pushing via both the Spring framework for Java applications and an in-development multi-language PaaS offering called Cloud OS. Ultimately, Lodge said, VMware sees its Spring and PaaS business being “as big as” the IaaS business that encompasses its vCloud efforts.

    Hearing this vision from VMware is nothing new — CEO Paul Maritz said as much at Structure last year — but seeing it actually take shape in the form of products is. With a respectable set of IaaS products and offerings now available, VMware customers, at least, should be starting their moves to the cloud. We’ll see how long it takes before this business peaks and begins to converge with a rising PaaS business.

    Photo courtesy Flickr user cote.
    <Return to section navigation list>