Sunday, March 11, 2012

Windows Azure and Cloud Computing Posts for 3/9/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


•• Updated 3/11/2012 10:00 AM PDT for articles marked ••.

• Updated 3/10/2012 7:00 AM PST with Bill Laing’s Root Cause Summary of Windows Azure Service Disruption on Feb 29th, 2012 in the Windows Azure Infrastructure and DevOps section below. Effective 3/10/2012, a + symbol will be added to title dates only when an update has occurred a day or more later than the original publication.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue and Hadoop Services

Bruce Kyle reported Big Data Hadoop Preview Comes to Windows Azure in a 3/9/2012 post:

Microsoft has announced plans to release an additional limited preview of an Apache Hadoop-based service for Windows Azure in the first half of 2012.

Since the first limited preview released in December, customers such as Webtrends and the University of Dundee are using the Hadoop-based service to glean simple, actionable insights from complex data sets hosted in the cloud.

Customers interested in signing up for the latest preview should visit

Microsoft + Big Data

Unlock business insights from all your structured and unstructured data, including large volumes of data not previously activated, with Microsoft’s Big Data solution. Microsoft’s end-to-end roadmap for Big Data embraces Apache Hadoop™ by distributing enterprise class, Hadoop-based solutions on both Windows Server and Windows Azure.

Key benefits:

  • Broader access of Hadoop to end users, IT professionals, and developers, through easy installation and configuration and simplified programming with JavaScript.
  • Enterprise-ready Hadoop distribution with greater security, performance, ease of management and options for Hybrid IT usage.
  • Breakthrough insights through the use of familiar tools such as Excel, PowerPivot, SQL Server Analysis Services and Reporting Services.
About Hadoop

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers by using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

Rather than rely on hardware to deliver high-availability, the library detects and handles failures at the application layer. This results in a highly-available service on top of a cluster of computers, each of which may be prone to failures.

For More Information

See the announcements at:

Joe McKendrick (@joemckendrick) reported HADOOP Enters the Enterprise Mainstream, and Big Data Will Never Be the Same in a 3/7/2012 article for Database Trends and Applications magazine’s March 2012 issue:

For enterprises grappling with the onslaught of big data, a new platform has emerged from the open source world that promises to provide a cost-effective way to store and process petabytes and petabytes worth of information. Hadoop, an Apache project, is already being eagerly embraced by data managers and technologists as a way to manage and analyze mountains of data streaming in from websites and devices. Running data such as weblogs through traditional platforms such as data warehouses or standard analytical toolsets often cannot be cost-justified, as these solutions tend to have high overhead costs. However, organizations are beginning to recognize that such information ultimately can be of tremendous value to the business. Hadoop packages up such data and makes it digestible.

For this reason, Hadoop "has become a sought-after commodity in the enterprise space over the past year," Anjul Bhambhri, vice president of Big Data Products at IBM, tells DBTA. "It is cost-effective, and it allows businesses to conduct analysis of larger data sets and on more information types than ever before, unlocking key information and insight." Hadoop couldn't have arrived on the scene at a better time, since it is estimated that 2.5 quintillion bytes of data are now being created every day, she adds.

The advantage that Hadoop provides is that it enables enterprises to store and analyze large data sets with virtually no size limits. "We often talk about users needing to throw data on the floor because they cannot store it," says Alan Gates, co-founder of HortonWorks and an original member of the engineering team that took the Pig subproject (an analysis tool run on Hadoop) from a Yahoo! Labs research project to an Apache open source project. "Hadoop addresses this, and does more," he tells DBTA. "By supporting the storage and processing of unstructured and semi-structured data, it allows users to derive value from data that they would not otherwise be able to. This fundamentally changes the data platform market. What was waste to be thrown out before is now a resource to be mined."

Hadoop is an open source software framework originally created by Doug Cutting, an engineer with Yahoo! at the time, and named after his son's toy elephant. The Hadoop framework includes the Hadoop Distributed File System (HDFS), which stores files across clustered storage nodes and is designed to scale to tens of petabytes of storage. Prominent Hadoop users include social media giants such as Facebook, Twitter, LinkedIn, Yahoo!, and Amazon. Facebook's Hadoop cluster is estimated to be the largest in the industry, with reportedly more than 30 petabytes of storage.

But it's no longer just the social media giants that are interested in Hadoop, as their leaders point out. Data managers within big data companies are growing just as enthusiastic about the potential that Hadoop offers to get big data under control, says Peter Skomoroch, principal data scientist at LinkedIn. Hadoop "is a disruptive force, hitting the mainstream and being adopted by the big players in the Fortune 100," he tells DBTA. "A year ago, Apache Hadoop needed to mature as a platform, particularly in security, and further define enterprise adoption outside of the consumer web space. Today, Hadoop has hit the milestone of a 1.0 release and the community has put a significant amount of thought and effort into security, with government organizations and large established companies making Hadoop a key part of their data strategies."


Hadoop also includes a robust tools ecosystem, which includes the MapReduce engine, originally designed by Google, which supports daemons called JobTracker and TaskTracker that seek to run applications in the same node or as close to data sources as possible, thereby reducing latency. Additional tools include ZooKeeper, a configuration and coordination service; Sqoop (SQL-to-Hadoop), a data import tool; Hive, a Hadoop-centric data warehouse infrastructure; Pig, an analysis platform for data from parallelized applications; Oozie, a workflow service; Hue, a graphical user interface for Hadoop; and Chukwa, a monitoring tool for large Hadoop-enabled systems.


While there is a lot of hype and excitement around Hadoop, David Gorbet, vice president of product strategy at MarkLogic Corp., urges companies to step back and evaluate their big data needs. "At its core, Hadoop was born out of a need for a parallel compute framework for large-scale batch processing," he tells DBTA. "Hadoop is exciting because it presents a new option for solving computationally expensive problems on commodity hardware. By breaking the problem up and farming pieces of it out to be executed in parallel by ‘map' functions, and then rolling up the result in a ‘reduce' function, it allows faster processing of data to enable complex tasks like finding, formatting or enriching unstructured data."

One of the most compelling value propositions for Hadoop-in combination with MapReduce-is the ability to apply analytics against large sets of data. From this perspective, "the current primary value of Hadoop is low cost storage for large volumes of data, along with support for parallel processing," Mark Troester, data management and IT strategist with SAS, tells DBTA.

"For my company, Hadoop means we can analyze extremely large sets of data at a very localized level to help us buy the best impressions for our customers' digital advertising campaigns," says Kurt Carlson, CTO of MaxPoint Interactive.

A popular application for Hadoop has been the ability to turn around large data processing jobs in a very short time. "We have seen broad adoption of Hadoop by users looking to shrink the time to run a batch job from weeks to hours," says Max Schireson, president of 10gen. "By giving engineers a flexible batch processing framework on which to program their batch processing jobs, Hadoop has gained enormous traction very quickly."

Industry observers also point out that we are only at the beginning stages of the innovations Hadoop will bring to data management operations. "We identified predictive analytics, visualization, and packages atop Hadoop Core to address a larger scope of problems," Murali Raghavan, senior vice president and head of Horizontal Services at iGATE Patni, tells DBTA. "The next phase around data cleanup will be the real-time analysis and decision-making, and we're looking at using those technologies to help our clients make more informed decisions based off their data."

The next stage for Hadoop will be "for processing analytic use cases directly in Hadoop," says Troester. "However, before analysis applications can fully leverage Hadoop, those solutions need to be able to identify the relevant data. Ideally, this would be done using a visual exploratory capability that is aware of the organizational context. The organizational context consideration is especially important, since data streams in. If an organization can identify information that is relevant and timely based on organizational knowledge built into email, wikis, product categories and other sources, it can better determine what data to analyze and process. The data that is not relevant at that point in time can be placed on cheap, commodity-based storage for later use." …


No significant articles today.


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

No significant articles today.

<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData


See Beth Massi (@bethmassi) explained Creating and Consuming LightSwitch OData Services in a 3/9/2012 post in the Visual Studio LightSwitch and Entity Framework v4+ section below.


<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

•• Yves Goeleven (@YvesGoeleven) asked Access Control Service – Why not to rely on the nameidentifier claim? in a 3/10/2012 post:

Over the past week, while preparing for a migration to our production account on Windows Azure, I learned an important lesson that I would like to share with you, so that you don’t have to make the same mistake as I did.

The web front end outsources authentication to various identity providers using the Windows Azure Access Control Service. All of these identity providers supply a common claim that can be used to identify the user: the nameidentifier claim. The value of the name identifier is unique for every user and each application. But in the scenario where there is a man in the middle, such as the access control service, this means that the value of the nameidentifier is actually unique per user per access control instance (as that is what the identity provider considers the application).

This prevents you from performing a number of operations with your access control service namespace, as the value of the nameidentifier changes when you switch access control service instances; in other words, you lose your customers.

Things you can no longer provide are:

  • Migration of your namespace
  • Disaster recovery
  • High availability
  • Geographic proximity for travelling users

Therefore it’s better to correlate the user’s information with an identity on another claim, email address for example, which remains stable across different access control service instances.

The Live ID provider, however, does not provide any additional information besides the nameidentifier, so I’m sad to report that I will have to stop supporting it!

And to make matters worse, I did not save any of the other claims, so now I have to go beg all of my users to help me upgrade their account :(

So if you are a user of , please help me update your account:

  • For LiveId users: Login with your account, navigate to profile > identities and associate any of the other providers.
  • For Non LiveId users: Just login with your account, this will automatically fix your identity.

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• The Microsoft patterns & practices Team updated Part 1 (Adatum) and Part 2 (Tailspin) of its Windows Azure Architectural Guide’s (WAAG) source code, hands-on labs and documentation on 3/9/2012, and published it to CodePlex:

Downloads
Release Notes

WAAG - Part 1 Samples and Hands-on-Labs are updated to support Windows Azure SDK 1.6 and Windows Azure Tools for Microsoft Visual Studio 2010 - March 2012 [Emphasis added.]

Issues with Part 2: As far as I’m aware, there is no March 2012 version of the Windows Azure Tools for Microsoft Visual Studio 2010. Also, attempts to install the required Microsoft Anti-Cross Site Scripting library fail for Part 2’s HOL with a 404 error:


Michael Collier (@MichaelCollier) continued his Windows Azure for Developers – Building Block Services Webcast series on 3/9/2012:

This past Wednesday, March 7th, I wrapped up the fourth webcast in my Windows Azure for Developers series. This one focused on what I sometimes refer to as the building block services for Windows Azure – Access Control Service (ACS), Caching, and Service Bus. These are incredibly powerful services available as part of Windows Azure. You can use them on their own*, outside of any Windows Azure hosted applications, to enhance your existing or new applications. They’re also great for enhancing developer productivity, because not many people want to write their own identity management, caching, or messaging layer. These services let you consume them and focus on your core application.

You can view the full webcast at

I’m making the slides available as well.

At the end of the webcast, a few questions came up that I mentioned I would try to provide some answers for on my blog.

Q: How to use ACS from a Silverlight application?

A: I haven’t personally tried this yet, but it seems others have. Here are a few resources I would recommend to learn more:

Q: How to use ACS with Active Directory?

A: Have a look at the “Single Sign-On from Active Directory to a Windows Azure Application Whitepaper”.

* I wouldn’t necessarily recommend using the cache service outside of Windows Azure as the network latency (going over the internet) would likely reduce any benefit of caching.

Wade Wegner (@WadeWegner) posted Episode 71 - Using Cloud9 IDE to Deploy to Windows Azure to Channel9 on 3/9/2012:

Join Wade and David each week as they cover Windows Azure. You can follow and interact with the show at @CloudCoverShow.

In this episode, Nathan Totten and Glenn Block are joined by Ruben Daniels and Matt Pardee of Cloud9 IDE who talk about how their tool can build and deploy Node.JS applications into Windows Azure. Glenn Block ends the show with a tip on how to save yourself frustration by clearing your node package manager cache.

In the news:

The Windows Azure Team published a Node.js Web Application with Storage on MongoDB tutorial on 3/8/2012:

This tutorial describes how to use MongoDB to store and access data from a Windows Azure application written in Node.js. This guide assumes that you have some prior experience using Windows Azure and Node.js. For an introduction to Windows Azure and Node.js, see Node.js Web Application. The guide also assumes that you have some knowledge of MongoDB. For an overview of MongoDB, see the MongoDB website.

In this tutorial, you will learn how to:

  • Add MongoDB support to an existing Windows Azure service that was created using the Windows Azure SDK for Node.js.

  • Use npm to install the MongoDB driver for Node.js.

  • Use MongoDB within a Node.js application.

  • Run your MongoDB Node.js application locally using the Windows Azure compute emulator.

  • Publish your MongoDB Node.js application to Windows Azure.

Throughout this tutorial you will build a simple web-based task-management application that allows retrieving and creating tasks stored in MongoDB. MongoDB is hosted in a Windows Azure worker role, and the web application is hosted in a web role.

The project files for this tutorial will be stored in C:\node and the completed application will look similar to:

task list application screenshot

Setting up the deployment environment

Before you can begin developing your Windows Azure application, you need to get the tools and set up your development environment. For details about getting and installing the Windows Azure SDK for Node.js, see Setup the Development Environment in the Node.js Web Application tutorial.

NOTE: this tutorial requires Node 0.6.10 or above. This version is included in the current Windows Azure SDK for Node.js; however if you have installed a previous version you will need to upgrade to the latest version.

Install Windows Azure Tools for MongoDB and Node.js

To get MongoDB running inside Windows Azure and to create the necessary connections between Node and MongoDB, you will need to install the AzureMongoDeploymentCmdlets package.

  1. Download and run the Windows Azure Tools for MongoDB and Node.js MSI from the MongoDB download site.

    Windows Azure Tools for MongoDB and Node.js Installer

Launching Windows Azure PowerShell for Node.js

The Windows Azure SDK for Node.js includes a Windows PowerShell environment that is configured for Windows Azure and Node development. Installing the Windows Azure Tools for MongoDB and Node.js also configures the environment to include the MongoDB Windows PowerShell cmdlets.

  1. On the Start menu, click All Programs, click Windows Azure SDK for Node.js - November 2011. Some of the commands require Administrator permissions, so right-click Windows PowerShell for Node.js and click Run as administrator.

    launch PowerShell environment

Download the MongoDB Binary Package
  1. You must download the MongoDB binaries as a separate package, in addition to installing the Windows Azure Tools for MongoDB and Node.js. Use the Get-AzureMongoDBBinaries cmdlet to download the binaries:

    PS C:\> Get-AzureMongoDBBinaries

    You will see the following response:

    Get-AzureMongoDBBinaries output

Create a new application
  1. Create a new node directory on your C drive and change to the c:\node directory. If you have completed other Windows Azure tutorials in Node.js, this directory may already exist.

    create directory

  2. Use the New-AzureService cmdlet to create a new solution:

    PS C:\node> New-AzureService tasklistMongo

    You will see the following response:

    New-AzureService cmdlet response

  3. Enter the following command to add a new web role instance:

    PS C:\node\tasklistMongo> Add-AzureNodeWebRole

    You will see the following response:

    task list application

  4. Enter the following command to change to the newly generated directory:

    PS C:\node\tasklistMongo> cd WebRole1
  5. To get started, you will create a simple application that shows the status of your running MongoDB database. Run the following command to copy starter code into your server.js file:

    PS C:\node\tasklistMongo\WebRole1> copy "C:\Program Files (x86)\MongoDB\Windows Azure\Nodejs\Scaffolding\MongoDB\NodeIntegration\WebRole\node_modules\azureMongoEndpoints\examples\showStatusSample\mongoDbSample.js" server.js
  6. Enter the following command to open the updated file in notepad.

    PS C:\node\tasklistMongo\WebRole1> notepad server.js

    You can see that the file is querying the mongoDB database to display status.

  7. Close the file when you are done reviewing it.

  8. The server.js code references a worker role called ReplicaSetRole that hosts MongoDB. Enter the following command to create ReplicaSetRole (which is the default role name) as a new MongoDB worker role:

    PS C:\node\tasklistMongo\WebRole1> Add-AzureMongoWorkerRole

    Add-AzureMongoWorkerRole cmdlet response

  9. The final step before you can deploy is joining the roles running Node and MongoDB so the web application can communicate with MongoDB. Use the following command to integrate them.

    PS C:\node\tasklistMongo\WebRole1> Join-AzureNodeRoleToMongoRole WebRole1

    Join-AzureNodeRoleToMongoRole cmdlet response

  10. The response of the Join-AzureNodeRoleToMongoRole command shows the npm (node package manager) command to use to install the MongoDB driver for Node.js. Run the following command to install the MongoDB driver:

    PS C:\node\tasklistMongo\WebRole1> npm install mongodb
  11. By default your application will be configured to include one WebRole1 instance and one ReplicaSetRole instance. In order to enable MongoDB replication, you need three instances of ReplicaSetRole. Run the following command to specify that three instances of ReplicaSetRole should be created:

    PS C:\node\tasklistMongo\WebRole1> Set-AzureInstances ReplicaSetRole 3
Running Your Application Locally in the Emulator
  1. Enter the following command to run your service in the emulator and launch a browser window:

    PS C:\node\tasklistMongo\WebRole1> Start-AzureEmulator -launch

    A browser will open and display content similar to the details shown in the screenshot below. This indicates that the service is running in the compute emulator and is working correctly.

    application running in emulator

    Running the application in the emulator also starts instances of mongod.exe and AzureEndpointsAgent.exe running on your local machine. You will see three console windows open for the mongod.exe instances—one for each replicated instance. A fourth console window will open for AzureEndpointsAgent.exe. Calling Stop-AzureEmulator will also cause the instances of these applications to stop.

    Note: In some cases, your browser window may launch and attempt to load the web application before your worker role instances are running, which will cause an error message to be displayed in the browser. If this occurs, refreshing the page in the browser when the worker role instances are running will result in the page being displayed correctly.

  2. To stop the compute emulator, enter the following command:

    PS C:\node\tasklistMongo\WebRole1> Stop-AzureEmulator
Creating a Node.js application using express and other tools

In this section you will pull additional module packages into your application using npm. You will then extend your earlier application using the express module to build a task-list application that stores data in MongoDB.

For the task-list application you will use the following modules (in addition to the mongodb module that you installed earlier):

  • express - A web framework inspired by Sinatra.

  • node-uuid - A utility library for creating universally unique identifiers (UUIDs) (similar to GUIDs)

  1. To install the modules, enter the command below from WebRole1 folder:

    PS C:\node\tasklistMongo\WebRole1> npm install express node-uuid
  2. You will use the scaffolding tool included with the express package, by entering the command below from WebRole1 folder:

    PS C:\node\tasklistMongo\WebRole1> .\node_modules\.bin\express
  3. You will be prompted to overwrite your earlier application. Enter y or yes to continue, and express will generate a folder structure for building your application.

    installing express module

  4. Delete your existing server.js file and rename the generated app.js file to server.js. This is needed because the Windows Azure web role in your application is configured to dispatch HTTP requests to server.js.

    PS C:\node\tasklistMongo\WebRole1> del server.js
    PS C:\node\tasklistMongo\WebRole1> ren app.js server.js
  5. To install the jade engine, enter the following command:

    PS C:\node\tasklistMongo\WebRole1> npm install

    npm will now download additional dependencies defined in the generated package.json file.

Using MongoDB in a Node application

In this section you will extend your application to create a web-based task-list application that you will deploy to Azure. The task list will allow a user to retrieve tasks, add new tasks, and mark tasks as completed. The application will utilize MongoDB to store task data.

Create the connector for the MongoDB driver
  1. The taskProvider.js file will contain the connector for the MongoDB driver for the tasklist application. The MongoDB driver taps into the existing Windows Azure resources to provide connectivity. Enter the following command to create and open the taskProvider.js file:

    PS C:\node\tasklistMongo\WebRole1> notepad taskProvider.js
  2. At the beginning of the file add the following code to reference required libraries:

    var AzureMongoEndpoint = require('azureMongoEndpoints').AzureMongoEndpoint;
    var mongoDb = require('mongodb').Db;
    var mongoDbConnection = require('mongodb').Connection;
    var mongoServer = require('mongodb').Server;
    var bson = require('mongodb').BSONNative;
    var objectID = require('mongodb').ObjectID;
  3. Next, you will add code to set up the TaskProvider object. This object will be used to perform interactions with the MongoDB database.

    var TaskProvider = function() {
      var self = this;

      // Create mongodb azure endpoint
      var mongoEndpoints = new AzureMongoEndpoint('ReplicaSetRole', 'MongodPort');

      // Watch the endpoint for topologyChange events
      mongoEndpoints.on('topologyChange', function() {
        if (self.db) {
          self.db.close();
          self.db = null;
        }

        var mongoDbServerConfig = mongoEndpoints.getMongoDBServerConfig();
        self.db = new mongoDb('test', mongoDbServerConfig, {native_parser: false});
      });

      mongoEndpoints.on('error', function(error) {
        throw error;
      });
    };
    Note that the mongoEndpoints object is used to get the MongoDB endpoint listener. This listener keeps track of the IP addresses associated with the running MongoDB servers and is automatically updated as MongoDB server instances come online and go offline.

  4. The remaining code to finish off the MongoDB driver is fairly standard code that you may be familiar with from previous MongoDB projects:

    TaskProvider.prototype.getCollection = function(callback) {
      var self = this;

      var ensureMongoDbConnection = function(callback) {
        if (self.db.state !== 'connected') {
 (error, client) {
            callback(error);
          });
        } else {
          callback(null);
        }
      };

      ensureMongoDbConnection(function(error) {
        if (error) {
          callback(error);
        } else {
          self.db.collection('task', function(error, task_collection) {
            if (error) {
              callback(error);
            } else {
              callback(null, task_collection);
            }
          });
        }
      });
    };

    TaskProvider.prototype.findAll = function(callback) {
      this.getCollection(function(error, task_collection) {
        if (error) {
          callback(error);
        } else {
          task_collection.find().toArray(function(error, results) {
            if (error) {
              callback(error);
            } else {
              callback(null, results);
            }
          });
        }
      });
    };
 = function(tasks, callback) {
      this.getCollection(function(error, task_collection) {
        if (error) {
          callback(error);
        } else {
          if (typeof(tasks.length) == "undefined") {
            tasks = [tasks];
          }

          for (var i = 0; i < tasks.length; i++) {
            task = tasks[i];
            task.created_at = new Date();
          }

          task_collection.insert(tasks, function(err) {
            callback(null, tasks);
          });
        }
      });
    };

    exports.TaskProvider = TaskProvider;
  5. Save and close the taskprovider.js file.

Modify server.js
  1. Enter the following command to open the server.js file:

    PS C:\node\tasklistMongo\WebRole1> notepad server.js
  2. Include the taskProvider and home modules. The home module does not exist yet, but you will create it shortly. Add the code below after the line that ends with express.createServer().

    server.js snippet

    var TaskProvider = require('./taskProvider').TaskProvider;
    var taskProvider = new TaskProvider();
    var Home = require('./home');
    var home = new Home(taskProvider);
  3. Replace the existing code in the route section with the code below. It will create a home controller instance and route all requests to "/" or "/home" to it.

    server.js snippet

    var home = new Home(taskProvider);
    app.get('/', home.showItems.bind(home));
    app.get('/home', home.showItems.bind(home));
  4. Replace the last two lines of the file with the code below. This configures Node to listen on the environment PORT value provided by Windows Azure when published to the cloud.

    server.js snippet

  5. Save and close the server.js file.

Create the home controller

The home controller will handle all requests for the task list site.

  1. Create a new home.js file in Notepad, using the command below. This will be the controller for handling the logic for the task list.

    PS C:\node\tasklistMongo\WebRole1> notepad home.js
  2. Replace the contents with the code below and save the file. The code below uses the JavaScript module pattern. It exports a Home function. The Home prototype contains the functions to handle the actual requests.

    module.exports = Home;

    function Home (taskProvider) {
      this.taskProvider = taskProvider;
    };

    Home.prototype = {
      showItems: function (req, res) {
        var self = this;
        this.getItems(function (error, tasklist) {
          if (!tasklist) {
            tasklist = [];
          }
          self.showResults(res, tasklist);
        });
      },

      getItems: function (callback) {
        this.taskProvider.findAll(callback);
      },

      showResults: function (res, tasklist) {
        res.render('home', { title: 'Todo list', layout: false, tasklist: tasklist });
      }
    };

    Your home controller now includes three functions:

    • showItems handles the request.

    • getItems uses the task provider to retrieve task items from MongoDB. Notice that the query can have additional filters applied; for example, a filter could show only tasks where completed is equal to false.

    • showResults calls the Express render function to render the page using the home view that you will create in the next section.

  3. Save and close the home.js file.
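The "completed equal to false" filter mentioned in the getItems description can be illustrated with a small in-memory sketch. In MongoDB this is simply the query selector passed to find(), e.g. task_collection.find({ completed: false }); the matchesSelector helper below is illustrative, not driver code:

```javascript
// Simulate a MongoDB query selector like { completed: false } over an
// in-memory array, to show what a filtered getItems would return.
function matchesSelector(doc, selector) {
  // a document matches when every selector field equals the doc's field
  return Object.keys(selector).every(function (key) {
    return doc[key] === selector[key];
  });
}

var tasks = [
  { name: 'ship build', completed: false },
  { name: 'write docs', completed: true },
  { name: 'fix bug',    completed: false }
];

var openTasks = tasks.filter(function (t) {
  return matchesSelector(t, { completed: false });
});
// openTasks holds 'ship build' and 'fix bug'
```

In the real application the filtering happens server-side in MongoDB, so only the matching documents cross the wire from the ReplicaSetRole worker instances to the web role.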

Modify the home view using jade
  1. From the Windows PowerShell command window, change to the views folder and create a new home.jade file by calling the following commands:

    PS C:\node\tasklistMongo\WebRole1> cd views
    PS C:\node\tasklistMongo\WebRole1\views> notepad home.jade
  2. Replace the contents of the home.jade file with the code below and save the file. The form below contains functionality for reading and updating the task items. (Note that currently the home controller only supports reading; you will change this later.) The form contains details for each item in the task list.

    title Index
    h1 My ToDo List
    form
      table(border="1")
        tr
          td Name
          td Category
          td Date
          td Complete
        each item in tasklist
          tr
            td #{}
            td #{item.category}
            td #{}
            td
              input(type="checkbox", name="completed", value="#{item.RowKey}")
  3. Save and close the home.jade file.

Run your application locally in the emulator
  1. In the Windows PowerShell window, enter the following command to launch your service in the compute emulator and display a web page that calls your service.

    PS C:\node\tasklistMongo\WebRole1\views> Start-AzureEmulator -launch

    When the emulator is running, your browser will display the following page, showing the structure for task items that will be retrieved from MongoDB:

    screenshot of app running in emulator

Adding new task functionality

In this section you will update the application to support adding new task items.

  1. First, add a new route to server.js. In the server.js file, add the following line after the last route entry for /home, and then save the file.

    app.post('/home/newitem', home.newItem.bind(home));

    The routes section should now look as follows:

    // Routes
    var home = new Home(client);
    app.get('/', home.showItems.bind(home));
    app.get('/home', home.showItems.bind(home));
    app.post('/home/newitem', home.newItem.bind(home));
  2. In order to use the node-uuid module to create a unique identifier, add the following line at the top of the home.js file after the first line where the module is exported.

    var uuid = require('node-uuid');
  3. To implement the new item functionality, create a newItem function. In your home.js file, paste the following code after the last function and then save the file.

    newItem: function (req, res) {
      var self = this;
      var createItem = function (error, tasklist) {
        if (!tasklist) {
          tasklist = [];
        }
        var item = req.body.item;
        item.completed = false;
        // Table storage keys; these particular key values are assumptions.
        item.PartitionKey = 'tasks';
        item.RowKey = uuid();
        self.taskProvider.insertEntity('tasks', item, function (error, tasks) {
          self.showItems(req, res);
        });
      };
      this.getItems(createItem);
    },

    The newItem function performs the following tasks:

    • Extracts the posted item from the body.

    • Inserts the item into the tasks table by calling the insertEntity function.

    • Renders the page by calling the showItems function once the insert completes.

  4. Now, update the view by adding a new form to allow the user to add an item. In the home.jade file, paste the following code at the end of the file and save. Note that in Jade, whitespace is significant, so do not remove any of the spacing below.

    form(action="/home/newitem", method="post")
      table
        tr
          td Item Name:
            input(name="item[name]", type="textbox")
          td Item Category:
            input(name="item[category]", type="textbox")
          td Item Date:
            input(name="item[date]", type="textbox")
          td
            input(type="submit", value="Add item")
  5. Because the Windows Azure emulator is already running, you can browse the updated application:

    PS C:\node\tasklistMongo\WebRole1\views> start http://localhost:81/home

    The browser will open and display the following page:

    screenshot of task list application in emulator

  6. Enter the following values:

    • Item Name: New task functionality

    • Item Category: Site work

    • Item Date: 01/05/2012

  7. Then click Add item.

    The item will be added to your tasks table in MongoDB and displayed as shown in the screenshot below.

    task list application running in emulator

  8. Enter the following command to stop the Windows Azure compute emulator.

    PS C:\node\tasklistMongo\WebRole1\views> Stop-AzureEmulator
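The bracketed field names used in the form above (item[name], item[category], item[date]) are what allow the posted values to arrive in newItem as a single req.body.item object: Express's body parser turns them into a nested object. The simplified stand-in parser below only illustrates that convention; it is my own sketch covering one level of nesting, not the middleware Express actually uses.

```javascript
// Minimal sketch of how bracketed form field names become a nested object,
// as the Express body parser does for the tutorial's form post.
function parseForm(body) {
  var result = {};
  body.split('&').forEach(function (pair) {
    var kv = pair.split('=');
    var key = decodeURIComponent(kv[0]);
    var value = decodeURIComponent(kv[1] || '');
    var m = key.match(/^(\w+)\[(\w+)\]$/);   // e.g. item[name]
    if (m) {
      result[m[1]] = result[m[1]] || {};
      result[m[1]][m[2]] = value;
    } else {
      result[key] = value;
    }
  });
  return result;
}

// The URL-encoded body the "Add item" form would post for the sample values:
var posted = 'item%5Bname%5D=New%20task&item%5Bcategory%5D=Site%20work&item%5Bdate%5D=01%2F05%2F2012';
console.log(parseForm(posted).item.name);   // New task
```

This is why newItem can simply read req.body.item and treat it as the task entity to insert.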
Deploying the application to Windows Azure

In order to deploy your application to Windows Azure, you need an account. Once you are logged in with your account, you can download a Windows Azure publishing profile, which will authorize your machine to publish deployment packages to Windows Azure using the Windows PowerShell commands.

Create a Windows Azure account

If you do not already have a Windows Azure account, you can sign up for a free trial account.

  1. Open a web browser, and browse to

  2. To get started with a free account, click on Free Trial in the upper right corner and follow the steps

Download the Windows Azure Publishing Settings

If this is the first Node.js application that you are deploying to Windows Azure, you will need to install publishing settings to your machine before deploying. For details about downloading and installing Windows Azure publishing settings, see Downloading the Windows Azure Publishing Settings in the Node.js Web Application tutorial.

Publish the Application
  1. MongoDB requires access to a Windows Azure storage account to store data and replication information. Before publishing, run the following command to set the proper storage account. This command specifies a storage account called taskListMongo that will be used to store the MongoDB data; the account will be created automatically if it doesn't already exist. Subscription-1 is the default subscription name used when you create a free trial account. Your subscription name may be different; if so, replace Subscription-1 with the name of your subscription.

    PS C:\node\tasklistMongo\WebRole1\views> Set-AzureMongoStorageAccount -StorageAccountName taskListMongo -Subscription "Subscription-1"
  2. Publish the application using the Publish-AzureService command, as shown below. Note that the name specified must be unique across all Windows Azure services. For this reason, taskListMongo is prefixed with Contoso, the company name, to make the service name unique.

    PS C:\node\tasklistMongo\WebRole1\views> Publish-AzureService -name ContosoTaskListMongo -location "North Central US" -launch

    After the deployment is complete, the browser will also open to the URL for your service and display a web page that calls your service.

Stop and Delete the Application

Windows Azure bills role instances per hour of server time consumed, and server time is consumed while your application is deployed, even if the instances are not running and are in the stopped state. With your web role plus three instances of the MongoDB worker role, your application is currently running four Windows Azure instances.

The following steps describe how to stop and delete your application.

  1. In the Windows PowerShell window, call the Stop-AzureService command to stop the service deployment created in the previous section:

    PS C:\node\tasklistMongo\WebRole1\views> Stop-AzureService

    Stopping the service may take several minutes. When the service is stopped, you will receive a message indicating that it has stopped.

  2. To delete the service, call the Remove-AzureService command:

    PS C:\node\tasklistMongo\WebRole1\views> Remove-AzureService
  3. When prompted, enter Y to delete the service.

    After the service has been deleted you will receive a message indicating that the service has been deleted.

Guy Harrison asserted Sentiment Analysis Could Revolutionize Market Research in a 3/7/2012 post to the Database Trends and Applications magazine’s March 2012 issue:

Knowing how your customers feel about your products is arguably as important as actual sales data but often much harder to determine. Traditionally, companies have used surveys, focus groups, customer visits, and similar active sampling techniques to perform this sort of market research.

Opposition to, or lack of faith in, market research takes a number of forms. Henry Ford once said, "If I had asked people what they wanted, they would have said faster horses," while Steve Jobs said, "People don't know what they want until you show it to them." The real problem with market research is more pragmatic: it's difficult and expensive to find out what people think. Customers don't want to complete surveys, and the ones who do are not representative of the ones who don't, so you are always working with a skewed sample. In addition, it takes time to collate survey information, which leads to an information lag that can be fatal.

The emerging discipline of sentiment analysis may address the deficiencies of traditional market research by leveraging the wealth of data generated by social networks such as Twitter and Facebook, as well as customer comments on ecommerce sites such as

Strictly speaking, sentiment analysis examines input data to determine the author's attitude toward a product or concept. The core of sentiment analysis involves natural language processing: making sense of human-generated text. This processing can be as simple as counting key phrases such as "awesome" and "awful." "Unsupervised" sentiment analysis of this type, using static rules, can provide some value, but typically fails to parse ambivalent, sarcastic, or ambiguous inputs. Increasingly sophisticated techniques have been developed to parse sentiment more accurately from more complex text. Some of these solutions are developed using machine learning techniques: initial machine sentiment guesses are validated by humans so that the algorithm "learns" to make better evaluations. In some cases, this human training is provided by crowdsourcing, farming out evaluations to labor marketplaces such as Amazon's Mechanical Turk.
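As a rough illustration of the static-rules approach described above, here is a toy keyword-counting scorer. The word lists and function name are purely illustrative, and, as noted, a scorer like this is easily fooled by sarcasm, negation, or ambivalence.

```javascript
// Toy "unsupervised" sentiment scorer: count positive vs. negative key phrases.
// The word lists here are illustrative stand-ins for a real lexicon.
var POSITIVE = ['awesome', 'great', 'love'];
var NEGATIVE = ['awful', 'terrible', 'hate'];

function sentiment(text) {
  var words = text.toLowerCase().match(/[a-z']+/g) || [];
  var score = 0;
  words.forEach(function (w) {
    if (POSITIVE.indexOf(w) >= 0) { score += 1; }
    if (NEGATIVE.indexOf(w) >= 0) { score -= 1; }
  });
  return score > 0 ? 'positive' : score < 0 ? 'negative' : 'neutral';
}

console.log(sentiment('This product is awesome, I love it'));  // positive
console.log(sentiment('Awful battery life'));                  // negative
```

A machine-learning approach replaces the static lists with weights learned from human-labeled examples, which is where the crowdsourced training mentioned above comes in.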

Sentiment analysis as an alternative to traditional market research suits some segments particularly well. Movies are a classic case, where a sentiment analysis of movie reviews can create "meta-scores" based on analysis of dozens or hundreds of reviews. Until recently, however, sentiment analysis has been of limited use in other areas.

Sentiment analysis is now becoming a hot topic as companies attempt to mine the increasing volume of information available from sites such as Facebook or Twitter. Twitter receives more than 250 million tweets per day from more than 100 million active users. Chances are, many of your customers will be tweeting, and many of them may be tweeting about your product. And Twitter's 140-character limit makes sentiment analysis relatively simple. Facebook provides an equally rich source of information-with 1 billion users anticipated by the middle of 2012.

To see sentiment analysis in action, take a look at This site allows you to generate sentiment analysis for recent tweets on any keyword. Tweets used in the analysis are listed so you can evaluate the accuracy of the algorithm. As an example, the site rates 91% of tweets on SOPA (the Stop Online Piracy Act) as negative, while 74% of tweets about "kittens" were rated positively.

Commercialization of sentiment analysis is progressing, with vendors such as offering tools that can analyze sentiment across a wide range of web sources, incorporating blogs, reviews, comments, and social networking data.

Sentiment analysis will not eliminate the need for traditional market research techniques, but as a practical application of "big data" analysis, it's definitely a game-changer.

Guy Harrison is a director of research and development at Quest Software, and author of Oracle Performance Survival Guide (Prentice Hall, 2009). Contact him at

Background for Microsoft Codename “Social Analytics.”

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Beth Massi (@bethmassi) uploaded Contoso Construction - LightSwitch Advanced Sample (Visual Studio 11 Beta) to the Windows Azure Dev Center on 3/6/2012 (missed when published):

This sample demonstrates some of the more advanced code, screen, and data customizations you can do with LightSwitch in Visual Studio 11 beta. It shows off new features in Visual Studio 11 beta and different levels of customization that you can do to provide enhanced capabilities. [Emphasis added.]

Download - Select a language: VB.NET

Please see this version if you are using Visual Studio LightSwitch in Visual Studio 2010.

If you are not a professional developer or do not have any experience with LightSwitch, please see the Getting Started section of the LightSwitch Developer Center for step-by-step walkthroughs and How-to videos. Please make sure you read the setup instructions below.

Features of this sample include:

  • A “Home screen” with static images and text
  • Creating and consuming OData services
  • Group boxes and new layout features of screens
  • New business types, percent and web address
  • Personalization with My Appointments displayed on log in
  • “Show Map..” links under the addresses in data grids
  • Picture editors
  • Reporting via COM interop to Word and Import data from Excel using the Office Integration Pack
  • Composite LINQ queries to retrieve/aggregate data
  • Custom report filter using the Advanced Filter Control
  • Emailing appointments via SMTP using iCal format in response to events on the save pipeline

Building the Sample

You will need Visual Studio 11 beta installed to run this sample. Before building the sample you will need to set up a few things so that all the pieces work. Once you complete the following steps, press F5 to run the application in debug mode.

1. Install Extensions

You will need the following extensions installed to load this application:
- Filter Control
- Office Integration Pack
- Bing Map control

These are .VSIX packages and are located in the root folder of this sample. Close Visual Studio and then double-click them to install.

2. Set Up Bing Map Control

In order to use the Bing Maps Control and the Bing Maps Web Services, you need a Bing Maps Key. Getting the key is a free and straightforward process you can complete by following these steps:

  • Go to the Bing Maps Account Center at
  • Click Sign In, to sign in using your Windows Live ID credentials.
  • If you haven’t got an account, you will be prompted to create one.
  • Enter the requested information and then click Save.
  • Click the "Create or View Keys" link on the left navigation bar.
  • Fill in the requested information and click "Create Key" to generate a Bing Maps Key.
  • In the ContosoConstruction application open the MapScreen screen.
  • Select the Bing Map control and enter the key in the Properties window.

3. Set Up Email Server Settings

When you create, update, or cancel an appointment between a customer and an employee, emails can be sent. In order for the emailing of appointments to work, you must add the correct settings for your SMTP server in the ServerGenerated project's Web.config:

  • Open the ContosoConstruction project and in the solution explorer select "File View".
  • Expand the Server project and open the Web.config file.
  • You will see the following settings that you must change to valid values:

<add key="SMTPServer" value="" />
<add key="SMTPPort" value="25" />
<add key="SMTPUserId" value="admin" />
<add key="SMTPPassword" value="password" />

  • Run the application and open the employees screen, select Test User and specify a
    valid email address. When you select this user on appointments, emails will be sent here.

4. Set Up OData Source from Azure DataMarket

In order to see how consuming OData services works in LightSwitch, this sample utilizes some free data on the Azure DataMarket. You will need to sign into this portal and update the settings in the Web.config for this to work.

  • Go to the Azure DataMarket at
  • Click Sign In, to sign in using your Windows Live ID credentials.
  • If you haven’t got an account, you will be prompted to create one.
  • Enter the requested information and then click Save.
  • Click the "My Account" link on the top navigation bar.
  • Note the customer ID and Account Key
  • Subscribe to the Data.Gov Crime data set by searching for in the search box at the top right
  • Select the Data.Gov Crime dataset and then click the Sign Up button on the right to activate the subscription
  • Open the ContosoConstruction project and in the solution explorer select "File View".
  • Expand the Server project and open the Web.config file.
  • Enter your customer ID and Account Key in the connection string:

<add name="4fdc6d24-73b7-42ef-9f56-d3c1951f22ff"
connectionString="Service Url=;
Is Windows Authentication=False;
Password=&quot;ACCOUNT KEY&quot;" />

Additional Setup Notes:


In order to more easily access the OData services that LightSwitch generates, change the Application Type to "Web", run the application to get the base URL & port #, then navigate your browser to http://localhost:####/ApplicationData.svc

Alternately you can use Fiddler to view the traffic & OData feeds. Download here:

You can load the Contoso Construction OData into Excel in order to do analysis on the data. Install the free add-in PowerPivot at A sample PowerPivot spreadsheet is included in the root folder called ContosoAnalysisPowerPivot.xlsx


The system is set to Forms Authentication, but if you change it to Windows Authentication, then for the "My Appointments" feature to work you will need to add yourself to the Employees table and specify your domain name as the user name. Make sure to specify a valid email address if you want to email appointments.

Excel Import:

In order to import data on the Materials Catalog screen, copy the StructuralMaterials.xls located in the root of this sample to your My Documents folder first. Then click the Import from Excel button on the screen and select the spreadsheet. You can then map the columns in the spreadsheet to the entity properties, and the data from the spreadsheet will appear as new rows on the Materials Catalog. Click Save to send the data to the database.

Word Reports:

In order to print out the Project Status report, copy the ProductStatusReport.docx report template located in the root of this sample to your My Documents folder first. Then click the Project Status button at the top of the Project Detail screen and select the report template.

Additional Resources

Here are some more Visual Studio LightSwitch resources to explore:

For questions related to this sample please contact me here. For other LightSwitch questions and troubleshooting please visit the LightSwitch in Visual Studio 11 Beta Forum.

Beth Massi (@bethmassi) explained Creating and Consuming LightSwitch OData Services in a 3/9/2012 post:

NOTE: This information applies to LightSwitch in Visual Studio 11 beta.

In the next version of LightSwitch, we’ve added support for OData services, both consuming external services as well as producing services from the LightSwitch middle-tier. The Open Data Protocol (OData) standardizes the way we communicate with data services over the web. Many enterprises today use OData as a means to exchange data between systems, partners, as well as provide an easy access into their data stores. So it makes perfect sense that LightSwitch, which centers around data, also should work with data services via OData. Since OData is a standard protocol, it also means that other clients can access the data you create through LightSwitch.

In my last post on OData in LightSwitch I showed you how we could use external OData services to enhance our LightSwitch applications. In this post I’ll show you how to consume OData services that LightSwitch exposes.

Creating OData Services with LightSwitch

Creating OData services in LightSwitch doesn’t require any skills beyond those you already have. In fact, you don’t have to do anything special to create these services. They are automatically created when you define your data and compile your application. Each data source you use in your LightSwitch application becomes a data service endpoint. Within those endpoints, each entity you define in LightSwitch is exposed automatically. Instead of the “black box” middle-tier we had in the first version of LightSwitch, we now have an open middle-tier that can be used to interface with other systems and clients.


What’s really compelling here is that not only can you model data services easily with LightSwitch, but any business logic (and user permissions) you have written on your entities will execute as well, no matter which client accesses the services. And since OData is an open protocol, there are a multitude of client libraries available for creating all sorts of applications on a variety of platforms: web, phone, and more.

Let’s dig into the service endpoints a little more and see what they look like. For this example I’m going to use the Contoso Construction sample.

Getting to Your LightSwitch OData Services

When you deploy your LightSwitch application in a three-tier configuration (either hosting the middle-tier in IIS or Windows Azure) then the service endpoints are exposed. The name of the services correspond to the name of your data sources. In the Contoso Construction sample, we have two service endpoints because we have two data sources. Within each of the services, we can navigate to all the entity sets we model in the data designer.


There are a couple ways you can get to the services when debugging. The easiest thing to do is to change the client application type in the project properties to “Web”. When you debug the app (F5) you will see the port number Visual Studio has assigned you in the address bar of your browser.


While debugging, open another tab in the browser and navigate to your OData service using the port number. You will see the list of entity sets available to query through the service.


The Open Data Protocol is a RESTful protocol based on AtomPub that defines a set of query operations which can be performed on the data using a set of URI conventions. You query the service with an HTTP GET request and the service returns a feed with the results in the response. To see the raw feed in IE, go to Tools –> Internet Options, Content tab. Under Feeds & Web Slices click “Settings” then uncheck “Turn on feed reading view”.


If we want to see all the customers in the system we can simply type the URL http://localhost:41155/ApplicationData.svc/Customers and you will get the following response from the service. Each entry in the feed is a customer entity which corresponds to a row in the database.


Keep in mind that the queries are case sensitive. Notice it’s Customers, not customers, in the URL. If you want to return the customer whose ID = 1 then use:


which would return only the first customer shown above. Similarly, the OData protocol defines a standard way of navigating relationships via navigation properties. If you want to get all the construction projects for a particular customer use:


which would return just that customer’s projects. If you want to return only Customers whose last name is “Massi” then use:

http://localhost:41155/ApplicationData.svc/Customers?$filter=LastName eq 'Massi'

Of course there are a whole slew of other query operations supported like OrderBy, Top, Skip, Sort, etc. Take a look at the Open Data Protocol URI conventions for a full list. There are also operations defined in the protocol for performing updates, inserts and deletes using standard HTTP verbs. The LightSwitch client communicates with the middle-tier data services this way.
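To make the URI conventions concrete, here is a small helper that assembles query URLs like the ones above. The helper and its option names are my own sketch, and the port number matches the local debugging address used in this post.

```javascript
// Sketch: build OData query URLs from a service root and an entity set name.
// Covers only $filter, $top, and $orderby; the OData spec defines many more options.
function odataUrl(serviceRoot, entitySet, options) {
  var url = serviceRoot + '/' + entitySet;
  var query = [];
  if (options && options.filter)  { query.push('$filter=' + encodeURIComponent(options.filter)); }
  if (options && options.top)     { query.push('$top=' + options.top); }
  if (options && options.orderby) { query.push('$orderby=' + encodeURIComponent(options.orderby)); }
  return query.length ? url + '?' + query.join('&') : url;
}

var root = 'http://localhost:41155/ApplicationData.svc';
console.log(odataUrl(root, 'Customers'));
// http://localhost:41155/ApplicationData.svc/Customers
console.log(odataUrl(root, 'Customers', { filter: "LastName eq 'Massi'", top: 10 }));
```

Remember that the entity set names in the path are case sensitive, as noted above, while the query options themselves always start with `$`.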

Another way to inspect the service requests and responses is to install a tool like Fiddler. Fiddler is a Web Debugging Proxy which logs all http(s) traffic between your computer and the Internet or localhost. This way you don’t have to change the client application type, you can leave it as a desktop application and still see the traffic. If you’re trying to build your own clients against OData services this is a must-have tool.


Now that you understand how OData services are exposed from the LightSwitch middle-tier, let’s move on and consume some of our data from another client outside the LightSwitch client. One of my favorite tools for analyzing data is Excel.

Consuming LightSwitch Services in Excel using PowerPivot

You don’t actually need to know anything about OData to consume these services in Excel. Excel has a free add-in aimed at power users called PowerPivot that you can install into Excel 2010 to get access to all sorts of data sources that Excel doesn’t support out of the box, including OData feeds. Download it at You can arm your power users with this add-in and point them to your LightSwitch data services to perform complex data analysis and create reports all in a familiar tool.

To connect Excel to an OData service, click on the PowerPivot tab and then click the “PowerPivot Window” button to launch PowerPivot. Click the “From Data Feeds” button and supply the Data Feed URL, then click next.


At this point you can select from a list of data sets.


Select the data you want and then click finish. PowerPivot will import the data you select into a set of spreadsheets. You can keep importing additional feeds or data sources into PowerPivot and then relate them together to create powerful data mashups. To create relationships between your data sets select the Design tab on the PowerPivot window and click Manage Relationships.


Once you set up the relationships you need, you can create pivot tables and charts by selecting the Home tab and dropping down the PivotTable button.


You will then see the PowerPivot Field List window that you can use to create charts like normal. Keep in mind that if you are not working against LightSwitch data services that are deployed but are instead trying this while debugging, you may need to update the port number to your data service. You can do this in the PowerPivot window by selecting the Design tab and then clicking “Existing Connections”. Then you can update the port number and refresh the data back on the Home tab.

The updated Contoso Construction sample contains a spreadsheet in the root folder called ContosoAnalysisPowerPivot.xlsx that has a variety of charts you can play with.

Download Sample Contoso Construction - LightSwitch Advanced Sample (Visual Studio 11 Beta)


Wrap Up

I hope you are now starting to realize how powerful LightSwitch can be not only to create business applications, but also data services. Being able to consume as well as expose LightSwitch data sources as OData services opens the door for a wide variety of client applications accessing your data through your business rules. In the next few weeks the team and I will take you on some more adventures with OData and show you some of the numerous possibilities you now have. Of course, we’ll also drill into other new features as well. Stay tuned!

Julie Lerman (@julielerman) reported Entity Framework is now Part of ASP.NET in a 3/9/2012 post to her Don’t Be Iffy blog:

Arthur Vickers, from the EF team, tweeted:

Also, very happy to be moving to @scottgu's org with the rest of the Entity Framework team. Great for us, great for EF, great for Microsoft.

Just a reminder of Scott’s current role: “responsible for delivering the development platform for Windows Azure, as well as the .NET Framework and Visual Studio technologies used in building Web and server applications.”

A little later there was this on twitter from Noah Coad who is at Dallas Days of .NET (follow #dodn12 on twitter)

@shanselman at #dodn12 "just announced, we have an M in MVC, Entity Framework is now part of ASP.NET"

And then this interesting question from Daniel Bradley:

if Entity framework is to be part of ASP.NET, does this mean open-sourcing of the tools & framework?

Stay tuned….

(and since someone asked, EF has always been part of ADO.NET)

This probably is good news.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Bill Laing posted a Root Cause Summary of Windows Azure Service Disruption on Feb 29th, 2012 to the Windows Azure blog on 3/10/2012:


As a follow-up to my March 1 posting, I want to share the findings of our root cause analysis of the service disruption of February 29th. We know that many of our customers were impacted by this event and we want to be transparent about what happened, what issues we found, how we plan to address these issues, and how we are learning from the incident to prevent a similar occurrence in the future.


Again, we sincerely apologize for the disruption, downtime and inconvenience this incident has caused. We will be proactively issuing a service credit to our impacted customers as explained below. Rest assured that we are already hard at work using our learnings to improve Windows Azure.

Overview of Windows Azure and the Service Disruption

Windows Azure comprises many different services, including Compute, Storage, Networking and higher-level services like Service Bus and SQL Azure. This partial service outage impacted Windows Azure Compute and dependent services: Access Control Service (ACS), Windows Azure Service Bus, SQL Azure Portal, and Data Sync Services. It did not impact Windows Azure Storage or SQL Azure.

While the trigger for this incident was a specific software bug, Windows Azure consists of many components and there were other interactions with normal operations that complicated this disruption. There were two phases to this incident. The first phase was focused on the detection, response and fix of the initial software bug. The second phase was focused on the handful of clusters that were impacted due to unanticipated interactions with our normal servicing operations that were underway. Understanding the technical details of the issue requires some background on the functioning of some of the low-level Windows Azure components.

Fabric Controllers, Agents and Certificates

In Windows Azure, cloud applications consist of virtual machines running on physical servers in Microsoft datacenters. Servers are grouped into “clusters” of about 1000 that are each independently managed by a scaled-out and redundant platform software component called the Fabric Controller (FC), as depicted in Figure 1. Each FC manages the lifecycle of applications running in its cluster, provisions and monitors the health of the hardware under its control. It executes both autonomic operations, like reincarnating virtual machine instances on healthy servers when it determines that a server has failed, as well as application-management operations like deploying, updating and scaling out applications. Dividing the datacenter into clusters isolates faults at the FC level, preventing certain classes of errors from affecting servers beyond the cluster in which they occur.

Figure 1. Clusters and Fabric Controllers

Part of Windows Azure’s Platform as a Service (PaaS) functionality requires its tight integration with applications that run in VMs through the use of a “guest agent” (GA) that it deploys into the OS image used by the VMs, shown in Figure 2. Each server has a “host agent” (HA) that the FC leverages to deploy application secrets, like SSL certificates that an application includes in its package for securing HTTPS endpoints, as well as to “heart beat” with the GA to determine whether the VM is healthy or if the FC should take recovery actions.

Figure 2. Host Agent and Guest Agent Initialization

So that the application secrets, like certificates, are always encrypted when transmitted over the physical or logical networks, the GA creates a “transfer certificate” when it initializes. The first step the GA takes during the setup of its connection with the HA is to pass the HA the public key version of the transfer certificate. The HA can then encrypt secrets and because only the GA has the private key, only the GA in the target VM can decrypt those secrets.

There are several cases that require generation of a new transfer certificate. Most of the time that’s only when a new VM is created, which occurs when a user launches a new deployment, when a deployment scales out, or when a deployment updates its VM operating system. The fourth case is when the FC reincarnates a VM that was running on a server it has deemed unhealthy to a different server, a process the platform calls “service healing.”

The Leap Day Bug

When the GA creates the transfer certificate, it gives it a one year validity range. It uses midnight UST of the current day as the valid-from date and one year from that date as the valid-to date. The leap day bug is that the GA calculated the valid-to date by simply taking the current date and adding one to its year. That meant that any GA that tried to create a transfer certificate on leap day set a valid-to date of February 29, 2013, an invalid date that caused the certificate creation to fail.
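The date arithmetic failure is easy to reproduce: naively adding one to the year of February 29 produces a calendar date that does not exist. A sketch (the function names are mine, not the GA's):

```javascript
// Naive valid-to computation, as described above: same month and day, year + 1.
function naiveValidTo(validFrom) {
  return { year: validFrom.year + 1, month: validFrom.month, day: validFrom.day };
}

// Check whether a {year, month, day} triple is a real calendar date.
// JavaScript's Date rolls invalid dates forward (Feb 29, 2013 becomes Mar 1),
// so a round-trip comparison detects the invalid input.
function isRealDate(d) {
  var probe = new Date(d.year, d.month - 1, d.day);
  return probe.getFullYear() === d.year &&
         probe.getMonth() === d.month - 1 &&
         probe.getDate() === d.day;
}

var leapDay = { year: 2012, month: 2, day: 29 };
var validTo = naiveValidTo(leapDay);   // { year: 2013, month: 2, day: 29 }
console.log(isRealDate(validTo));      // false: Feb 29, 2013 does not exist
```

A correct implementation would add a year by computing "the same date next year, or the last valid day of that month," which maps February 29 to February 28 (or March 1) of the following year.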

As mentioned, transfer certificate creation is the first step of the GA initialization and is required before it will connect to the HA. When a GA fails to create its certificates, it terminates. The HA has a 25-minute timeout for hearing from the GA. When a GA doesn’t connect within that timeout, the HA reinitializes the VM’s OS and restarts it.

If a clean VM (one in which no customer code has executed) times out its GA connection three times in a row, the HA decides that a hardware problem must be the cause since the GA would otherwise have reported an error. The HA then reports to the FC that the server is faulty and the FC moves it to a state called Human Investigate (HI). As part of its standard autonomic failure recovery operations for a server in the HI state, the FC will service heal any VMs that were assigned to the failed server by reincarnating them to other servers. In a case like this, when the VMs are moved to available servers the leap day bug will reproduce during GA initialization, resulting in a cascade of servers that move to HI.
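The escalation path described above (three consecutive 25-minute GA timeouts on a clean VM before the server is flagged Human Investigate) can be sketched as follows. The names and structure here are my own illustration, not Microsoft's actual HA/FC code:

```python
GA_TIMEOUT_MINUTES = 25
MAX_CLEAN_VM_TIMEOUTS = 3

def classify_server(clean_vm: bool, consecutive_timeouts: int) -> str:
    """Illustrative version of the HA's (flawed) inference: a clean VM
    that repeatedly times out is assumed to sit on faulty hardware,
    so the server is flagged Human Investigate (HI)."""
    if clean_vm and consecutive_timeouts >= MAX_CLEAN_VM_TIMEOUTS:
        return "HumanInvestigate"
    return "Retry"

# Time before a server is flagged: 3 timeouts x 25 minutes = 75 minutes
print(MAX_CLEAN_VM_TIMEOUTS * GA_TIMEOUT_MINUTES)  # 75
```

The 75-minute figure explains the timeline in the next section: clusters that hit the bug at rollout reached the server HI threshold exactly 75 minutes later.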

To prevent a cascading software bug from causing the outage of an entire cluster, the FC has an HI threshold that, when hit, essentially moves the whole cluster to a similar HI state. At that point the FC stops all internally initiated software updates, and automatic service healing is disabled. This state, while degraded, gives operators the opportunity to take control and repair the problem before it progresses further.

The Leap Day Bug in Action

The leap day bug triggered immediately at 4:00PM PST, February 28th (00:00 UTC February 29th), when GAs in new VMs tried to generate certificates. Storage clusters were not affected because they don’t run with a GA, but normal application deployment, scale-out and service healing would have resulted in new VM creation. At the same time, many clusters were also in the midst of the rollout of a new version of the FC, HA and GA. That ensured the bug would be hit immediately in those clusters, and the server HI threshold was hit precisely 75 minutes (three 25-minute timeouts) later, at 5:15PM PST. The bug worked its way more slowly through clusters that were not being updated, but the critical alarms on the updating clusters automatically stopped the updates and alerted operations staff to the problem. They in turn notified on-call FC developers, who researched the cause and identified the bug at 6:38PM PST.

By this time some applications had single VMs offline and some also had multiple VMs offline, but most applications with multiple VMs maintained availability, albeit with some reduced capacity. To prevent customers from inadvertently causing further impact to their running applications, unsuccessfully scaling-out their applications, and fruitlessly trying to deploy new applications, we disabled service management functionality in all clusters worldwide at 6:55PM PST. This is the first time we’ve ever taken this step. Service management allows customers to deploy, update, stop and scale their applications but isn’t necessary for the continued operation of already deployed applications. However stopping service management prevents customers from modifying or updating their currently deployed applications.

We created a test and rollout plan for the updated GA by approximately 10:00PM PST, had the updated GA code ready at 11:20PM PST, and finished testing it in a test cluster at 1:50AM PST, February 29th. In parallel, we successfully tested the fix in production clusters on the VMs of several of our own applications. We next initiated rollout of the GA to one production cluster and that completed successfully at 2:11AM PST, at which time we pushed the fix to all clusters. As clusters were updated we restored service management functionality for them and at 5:23AM PST we announced service management had been restored to the majority of our clusters.

Secondary Outage

When service management was disabled, most of the clusters either were already running the latest FC, GA and HA versions or almost done with their rollouts. Those clusters were completely repaired. Seven clusters, however, had just started their rollouts when the bug affected them. Most servers had the old HA/GA combination and some had the new combination, both of which contained the GA leap day bug, as shown below:

Figure 3. Servers running different versions of the HA and GA

We took a different approach to repair these seven clusters, which were in a partially updated state. Instead of updating them to the new HA with a fixed new GA, we restored the previous versions of the FC and HA, paired with a fixed GA. The first step we took was to test the solution by putting the older HA on a server that had previously been updated to the new HA, to keep version compatibility with the older GA. The VMs on the server started successfully and appeared to be healthy.

Under normal circumstances when we apply HA and GA updates to a cluster, the update takes many hours because we honor deployment availability constraints called Update Domains (UDs). Instead of pushing the older HA out using the standard deployment functionality, we felt confident enough with the tests to opt for a “blast” update, which pushes the update to the HA on all servers simultaneously.

Unfortunately, in our eagerness to get the fix deployed, we had overlooked the fact that the update package we created with the older HA included the networking plugin that was written for the newer HA, and the two were incompatible. The networking plugin is responsible for configuring a VM’s virtual network, and without its functionality a VM has no networking capability. Our test on the single server had not included verifying network connectivity to its VMs, which was in fact broken. Figure 4 depicts the incompatible combination.

Figure 4. Servers running the incompatible combination of HA and HA networking plugin

At 2:47 AM PST on the 29th, we pushed the incompatible combination of components to those seven clusters, and every VM, including ones that had previously been healthy, became disconnected from the network. Since major services such as Access Control Service (ACS) and Windows Azure Service Bus were deployed in those clusters, any application using them was now impacted by the loss of the services on which it depended.

We quickly produced a corrected HA package and at 3:40 AM PST tested again, this time verifying VM connectivity and other aspects of VM health. Given the impact on these seven clusters, we chose to blast out the fix starting at 5:40 AM PST. The clusters were largely operational again by 8:00 AM PST, but a number of servers were in corrupted states as a result of the various transitions. Developers and operations staff worked furiously through the rest of the day manually restoring and validating these servers. As clusters and services were brought back online we provided updates to the dashboard, and posted the last incident update to the Windows Azure dashboard that all Windows Azure services were healthy at 2:15 AM PST, March 1st.

Improving the Service

After an incident occurs, we take the time to analyze the incident and ways we can improve our engineering, operations and communications. To learn as much as we can, we do the root cause analysis but also follow this up with an analysis of all aspects of the incident. The three truths of cloud computing are: hardware fails, software has bugs and people make mistakes. Our job is to mitigate all of these unpredictable issues to provide a robust service for our customers. By understanding and addressing these issues we will continue to improve the service we offer to our customers.

The analysis is organized into four major areas, looking at each part of the incident lifecycle as well as the engineering process that preceded it:

  • Prevention – how the system can avoid, isolate, and/or recover from failures
  • Detection – how to rapidly surface failures and prioritize recovery
  • Response – how to support our customers during an incident
  • Recovery – how to reduce the recovery time and impact on our customers
  • Testing. The root cause of the initial outage was a software bug due to the incorrect manipulation of date/time values. We are taking steps to improve our testing to detect time-related bugs. We are also enhancing our code analysis tools to detect this and similar classes of coding issues, and we have already reviewed our code base.
  • Fault Isolation. The Fabric Controller moved nodes to a Human Investigate (HI) state when their operations failed due to the Guest Agent (GA) bug. It incorrectly assumed the hardware, not the GA, was faulty. We are taking steps to distinguish these faults and isolate them before they can propagate further into the system.
  • Graceful Degradation. We took the step of turning off service management to protect customers’ already running services during this incident, but this also prevented any ongoing management of their services. We are taking steps to have finer granularity controls to allow disabling different aspects of the service while keeping others up and visible.
  • Fail Fast. GA failures were not surfaced until 75 minutes of long timeouts had elapsed. We are taking steps to better classify errors so that we fail fast in these cases, alert on these failures and start recovery.
  • Service Dashboard. The Windows Azure Dashboard is the primary mechanism to communicate individual service health to customers. However, the service dashboard experienced intermittent availability issues, didn’t provide a summary of the situation in its entirety, and didn’t provide the granularity of detail and transparency our customers need and expect.
    • Intermittent availability: This dashboard is run on two different internal infrastructures, Windows Azure and a separate second system, to deal with the catastrophic failure of either one. It is also geo-replicated to deal with geography-specific incidents. However, the dashboard experienced intermittent availability issues due to exceptionally high volume and the fail-over/load balancing that was taking place. We have taken steps to correct this and ensure more robust service in the future.
    • Situation summary: The service dashboard provides information on the health status of 60+ individual services at the sub-region level. While this is valuable in understanding individual service status, the lack of summary information made it difficult for customers to understand the situation holistically. Customers have asked for a summarized view on the dashboard to quickly gain a comprehensive understanding of the scope and severity of the outage. We are taking steps to make this change.
    • Detail and transparency: Although updates are posted on an hourly basis, the status updates were often generic or repeated the information provided in the last couple of hours. Customers have asked that we provide more details and new information on the specific work taking place to resolve the issue. We are committed to providing more detail and transparency on steps we’re taking to resolve an outage as well as details on progress and setbacks along the way.
  • Customer Support. During this incident, we had exceptionally high call volumes that led to longer than expected wait times. While we are staffed to handle high call volumes in the event of an outage, the intermittent availability of the service dashboard and lack of updates through other communication channels contributed to the increased call volume. We are reevaluating our customer support staffing needs and taking steps to provide more transparent communication through a broader set of channels.
  • Other Communication Channels. A significant number of customers are asking us to better use our blog, Facebook page, and Twitter handle to communicate with them in the event of an incident. They are also asking that we provide official communication through email more quickly in the days following the incident. We are taking steps to improve our communication overall and to provide more proactive information through these vehicles. We are also taking steps to provide more granular tools to customers and support to diagnose problems with their specific services.
  • Internal tooling. We developed and modified some of our internal tooling to address this incident. We will continue to invest in our tools to help speed recovery and make recovery from intermediate states more predictable.
  • Dependency priorities. We are also examining our processes to make sure dependencies are factored into recovery to ensure that all Windows Azure infrastructure services, such as ACS and Windows Azure Service Bus, are recovered first to reduce the impact on customers.
  • Visibility. We are looking at how we can provide better visibility into recovery steps and provide customers with visibility into the intermediate progress being made.
Service Credits

Microsoft recognizes that this outage had a significant impact on many of our customers. We stand behind the quality of our service and our Service Level Agreement (SLA), and we remain committed to our customers. Due to the extraordinary nature of this event, we have decided to provide a 33% credit to all customers of Windows Azure Compute, Access Control, Service Bus and Caching for the entire affected billing month(s) for these services, regardless of whether their service was impacted. These credits will be applied proactively and will be reflected on a billing period subsequent to the affected billing period. Customers who have additional questions can contact support for more information.


We will continue to spend time to fully understand all of the issues outlined above and over the coming days and weeks we will take steps to address and mitigate the issues to improve our service. We know that our customers depend on Windows Azure for their services and we take our SLA with customers very seriously. We will strive to continue to be transparent with customers when incidents occur and will use the learning to advance our engineering, operations, communications and customer support and improve our service to you.


Bill Laing and the Windows Azure Team

Joe Brockmeier (@jzb) asserted Microsoft Trying Hard to Match AWS, Cuts Azure Pricing in a 3/9/2012 post to the ReadWriteCloud blog:

This should be fun. Just a few days after Amazon announced its 19th price cut, Microsoft is announcing its own price cuts for Azure Storage and Compute. If you're looking at running a smaller instance with a 100MB database, you can now get one for less than $20 a month. If you're doing heavy computing, though, Azure still seems a bit behind AWS.


The cuts are 12% to Azure Storage (now $0.125 per GB), and the extra small compute instance for Azure has been dropped by 50% to $0.02 per hour. If you're buying the six month plans, Microsoft has reduced storage prices by "up to 14%."

Comparing Azure to AWS

Microsoft breaks its prices down by month rather than by compute hour on its pricing calculator, so comparing prices isn't as simple as one might hope. But here's how it breaks down, assuming 732 billable hours per month:

  • Microsoft Extra Small (1GHz CPU, 768MB RAM, 20GB Storage): $0.02 per hour, $15.00 per month
  • Microsoft Small (1.6GHz CPU, 1.75GB RAM, 225GB Storage): $0.123 per hour, $90.00 per month
  • Microsoft Medium (2 x 1.6GHz CPU, 3.5GB RAM, 490GB Storage): $0.25 per hour, $180.00 per month
  • Microsoft Large (4 x 1.6GHz CPU, 7GB RAM, 1,000GB Storage): $0.49 per hour, $360.00 per month
  • Microsoft Extra Large (8 x 1.6GHz CPU, 14GB RAM, 2,040GB Storage): $0.98 per hour, $720 per month

How does that stack up with AWS EC2 pricing? I took a look at the pricing for Windows instances on EC2, without any reserved pricing bonuses:

  • AWS Micro (Up to 2 EC2 Compute Units in bursts, 613MB RAM, EBS Storage Only): $0.03 per hour, $21.96 per month
  • AWS Small (1 EC2 Compute Unit, 1.7GB RAM, 160GB Storage): $0.115 per hour, $84.19 per month
  • AWS Medium (2 EC2 Compute Units, 3.75GB RAM, 410GB Storage): $0.23 per hour, $168.36 per month
  • AWS Large (4 EC2 Compute Units, 7.5GB RAM, 850GB Storage): $0.46 per hour, $336.72 per month
  • AWS XL (8 EC2 Compute Units, 15GB RAM, 1,690GB Storage): $0.92 per hour, $673.44 per month

The EC2 compute units are described as "the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor," so it doesn't look like you're getting quite the same horsepower out of an AWS compute unit, and AWS is more stingy with storage. AWS instances have a wee bit more RAM, though. Pricing-wise, AWS is coming in cheaper if you accept the instances as roughly equivalent.
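The monthly figures above are roughly the hourly rate multiplied by the 732 billable hours the article assumes; a quick Python sketch of the arithmetic, using a few of the rates quoted above:

```python
BILLABLE_HOURS_PER_MONTH = 732

def monthly_cost(hourly_rate: float) -> float:
    # Approximate monthly cost from the quoted hourly rate.
    return round(hourly_rate * BILLABLE_HOURS_PER_MONTH, 2)

# Hourly rates as quoted in this article
rates = {
    "Azure Extra Small": 0.02,
    "Azure Small": 0.123,
    "AWS Micro (Windows)": 0.03,
    "AWS Small (Windows)": 0.115,
}

for name, rate in rates.items():
    print(f"{name}: ${monthly_cost(rate):.2f}/month")
```

Note that Microsoft's published monthly plan prices (e.g. $15.00 for Extra Small) are rounded plan figures, so they can differ by a few cents from rate × 732.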

Where it gets trickier is if you opt for longer-term pricing with Azure. You can buy six month plans with Azure for "up to 20%" less on a monthly basis. Amazon has several tiers of reserved instances, and additional discounts for big spenders as we covered earlier this week.

SQL Server Pricing

What gets trickier is adding in SQL Server. If you want, specifically, Microsoft's brand o' SQL, then you're looking at Microsoft's SQL Azure Database or running AWS EC2 instances with SQL Server. Here you must choose large or XL instances, as AWS doesn't offer micro, small or medium instances with SQL Server. You can run SQL Server Express Edition on any of the instance types, but for the full package you're limited to the two higher-end instances.

That inflates the pricing to $1.06 per hour for large instances, and $1.52 per hour for XL instances.

Azure, on the other hand, charges by the size of the database and the number of databases. So comparing the two services gets pretty hairy if you factor in SQL Server.

If you're looking at a lot of compute instances, Amazon still seems to have the edge in pricing. One might also be tempted to look at AWS after the Azure meltdown at the end of February, though AWS has had its own issues.

Microsoft does have plenty of money to throw at this, so this is unlikely to be the last price cut we see in 2012 from either vendor.

See the Google announced price cuts for cloud data storage in a 3/6/2012 update to the Pricing and Support page of the Google Developers site article in the Other Cloud Computing Platforms and Services section below.

David Linthicum (@DavidLinthicum) asserted “Cloud adoption should drive tremendous overall economic growth, if we actually adopt it” in a deck for his Those 14 million new cloud-driven jobs won't be in IT post of 3/9/2012 to InfoWorld’s Cloud Computing blog:

IDC, in a study sponsored by Microsoft, predicts that cloud computing will generate nearly 14 million jobs globally by 2015. This prediction assumes that companies moving to the cloud will generate more revenue and cost savings, which in turn will lead them to create jobs. The IDC study predicts that IT innovation enabled by the cloud could help increase business revenue by $1.1 trillion by 2015, concluding that this would lead to an uptick in jobs across many different fields.

Keep in mind that this is a sponsored study, so it naturally reflects positively on those who pay for the study (Microsoft, in this case). Even with that bias, I don't see the predictions being too far from reality. For years, I've tried to focus the use of cloud computing around the business case, more so than the technology itself. The problem is that business cases are boring, and the massive migration of enterprise computing resources to Internet-delivered shared services is very exciting. We love to talk about exciting things. Boring things are, well, boring.

What's different about the IDC findings is that they look at the business efficiency around the use of cloud computing. This is in direct contrast to the hype-driven excitement around the use of new and cool cloud computing technology that promises to change IT. In other words, cloud computing will help businesses do better and thus create jobs -- not that cloud computing will create the jobs required to support cloud computing. Those jobs related to overall business growth are much more systemic and longer lasting.

However, the path from the current circumstances to the future "better for businesses using cloud computing" scenario will take a good deal of work. This includes accepting some risks and some disruptions that come with adopting cloud computing -- or any new technology, for that matter. That's the biggest speed bump to cloud-driven jobs, if you believe this study.

“14 million new cloud-driven jobs” sounds to me like a Mitt Romney campaign promise.

Wenchang Liu reported the availability of Safer passwords with SqlCredential with .NET Framework 4.5 in a 3/9/2012 post to the ADO.NET Team blog:


Many users of SqlClient with SQL Server Authentication have expressed interest in setting credentials outside of the connection string to mitigate the memory-dump vulnerability of keeping the user name and password in the connection string. Starting with .NET Framework 4.5, we have introduced the ability to set the credentials outside of the connection string via the new Credential property of SqlConnection, which takes a SqlCredential object. Now the developer can create a SqlCredential object with a UserId and a SecureString Password to hold the credential values of a connection when connecting to a server. This helps mitigate the threat of credentials being leaked out to the page file in a page swap or being evident in a crash dump.

Use Case Example

System.Windows.Controls.TextBox txtUserId = new System.Windows.Controls.TextBox();
System.Windows.Controls.PasswordBox txtPwd = new System.Windows.Controls.PasswordBox();

using (SqlConnection conn = new SqlConnection("Server=myServer;Initial Catalog=myDB;"))
{
    SecureString pwd = txtPwd.SecurePassword;
    pwd.MakeReadOnly(); // SqlCredential requires a read-only SecureString
    SqlCredential cred = new SqlCredential(txtUserId.Text, pwd);
    conn.Credential = cred;
    conn.Open();
}

Alternatively we can use the new SqlConnection constructor overload which takes both a connection string and credential object:

SecureString pwd = txtPwd.SecurePassword;
pwd.MakeReadOnly(); // SqlCredential requires a read-only SecureString
SqlCredential cred = new SqlCredential(txtUserId.Text, pwd);
using (SqlConnection conn = new SqlConnection("Server=myServer;Initial Catalog=myDB;", cred))
{
    conn.Open();
}

SqlCredential Class

More information about the new SqlCredential class can be found at:

For information on how to get or set the SqlConnection.Credential property, please refer to:

It’s important to note that the SqlCredential constructor only allows SecureString marked as read only to be passed in as the Password parameter or it will raise an ArgumentException. The new credential property is incompatible with existing UserId, Password, Context Connection=True, and Integrated Security=True connection string keywords, and setting the credential property on an open connection is not allowed. It is strongly recommended that you set PersistSecurityInfo=False (default) so the credential property is not returned as part of the connection once it is opened.

Connection Pooling with Credential Property

With this new improvement now the connection pooling algorithm also takes the Credential property into consideration in addition to the connection string property when creating connection pools and getting connections from the pool. Connections with the same connection string and same instance of Credential property will use the same connection pool, but connections with different instances of the Credential property will use different connection pools, even if they use the same UserId and Password. For example, the developer tries to open several connections with different configurations as below:

string str1 = "Server=myServer;Initial Catalog=myDB;User Id=user1;Password=pwd1;";
string str2 = "Server=myServer;Initial Catalog=myDB;";
SqlCredential cred1 = new SqlCredential(user1, pwd1);
SqlCredential cred2 = new SqlCredential(user1, pwd1);
SqlCredential cred3 = new SqlCredential(user2, pwd2);

  1. SqlConnection conn1 = new SqlConnection(str1, null); // different connection string
  2. SqlConnection conn2 = new SqlConnection(str2, cred1); // different credential object
  3. SqlConnection conn3 = new SqlConnection(str2, cred2); // different credential object
  4. SqlConnection conn4 = new SqlConnection(str2, cred3); // different credential object and user/pwd

All four connections will use different connection pools. The most important thing to note here is that conn2 and conn3 will not share the same connection pool, because they use different credential object instances, even though those two instances use the same UserId and Password.
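In other words, the pool key incorporates the credential object's identity rather than its contents. A simplified Python model of that lookup (illustrative only; not the actual SqlClient pooling code):

```python
_pools = {}

class Credential:
    def __init__(self, user_id, password):
        self.user_id = user_id
        self.password = password

def get_pool(conn_str, credential=None):
    # Key on the credential *instance*, not its contents, so two
    # Credential objects with identical values map to different pools.
    key = (conn_str, id(credential) if credential is not None else None)
    return _pools.setdefault(key, [])

conn_str = "Server=myServer;Initial Catalog=myDB;"
cred1 = Credential("user1", "pwd1")
cred2 = Credential("user1", "pwd1")  # same values, different instance

print(get_pool(conn_str, cred1) is get_pool(conn_str, cred1))  # True
print(get_pool(conn_str, cred1) is get_pool(conn_str, cred2))  # False
```

The practical consequence is the same as in the article: reuse one SqlCredential instance across connections if you want them to share a pool.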

Using SqlCredential with other classes

To use the new secure password feature with SqlDataAdapter or SqlBulkCopy, a SqlConnection object with SqlCredential property needs to be constructed first and passed into the appropriate constructor of SqlDataAdapter and SqlBulkCopy that takes in a SqlConnection object rather than a connection string. The SqlDependency class currently does not support starting a listener for receiving dependency change notifications from SQL Server for both connection string and credential.

Of course the usage of SQL Server Integrated Authentication Mode is still the recommended way to authenticate for users with an Active Directory® infrastructure as there is no credential propagation and all security sensitive information is stored in the Active Directory’s database. And the usage of SQL Server Mixed Mode Authentication with UserId and Password specified in the connection string remains unchanged.

This would be a good feature for SQL Azure logins.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


•• Mick Badran (@mickba) and Scott Scovell posted a 42-slide Feature Decision Making with Hybrid Solutions presentation on 3/9/2012:



VTSP is a new acronym for me. Apparently it represents Microsoft Virtual Technical Solutions Professional.

<Return to section navigation list>

Cloud Security and Governance

•• Brian Musthaler reported New key technology simplifies data encryption in the cloud in a 3/9/2012 Network World article:

Data at rest has long been protected by technology called public key infrastructure (PKI), in which data is encrypted when it's created by a public key and only decrypted, in theory, by an authorized person holding the private key. But extending this type of data protection to the cloud can be complicated.

The migration to the cloud has introduced a new set of complex security issues for IT teams to manage due to the lack of direct control over the security of the data. Moreover, cloud providers believe that data security is a shared responsibility, where the service provider assures physical security and the subscribers must secure their servers and data. Presumably this would include a strategy for encryption and key management which requires that the keys be stored outside the cloud rather than in it.

Startup security company Porticor just released a solution that addresses the concern about data at rest in the cloud. Porticor offers a split key encryption solution where the cloud customer is the only one who knows the master key. What's more, Porticor handles all the complexity of encrypting data so the customer barely needs to think about it. The security and convenience is all in the unique implementation of key management.

The fundamental problem of encrypting data in the cloud is where to store the keys. The customer can't store the keys on a disk in the cloud because they could be vulnerable to hackers. The customer could allow a vendor to store its keys, but that means putting trust in a third party. The customer could bring the keys back into his own data center, but that seems to defeat the purpose of outsourcing data center services to the cloud. Porticor now offers an alternative for key management that is both simple and secure.

Porticor's approach is based on the concept of the safe deposit box that has two keys -- one for the customer and the other for the banker, or in this case, the Porticor Virtual Key Management Service. Just like the safe deposit box, the customer can't decrypt the data without the key held by Porticor, and Porticor can't decrypt the data without the master key held by the customer. In practice, the customer actually has one key per project, which is usually an application. Porticor has thousands of keys, one for each file or disk belonging to that project. Still, the keys must pair up in order to provide access to the encrypted data.

Beyond the keys being split between the customer and Porticor, the unique part of the solution is that the keys themselves are encrypted by the customer's master key, which only the customer holds and knows. As a result, Porticor holds project keys but cannot read them, because they are encrypted. By encrypting the "banker" keys with the customer's master key, Porticor gives the customer complete control over end-to-end data protection. The customer must write down the master key and literally store it in a steel box; once that is done, no one other than the customer ever sees the key. (Another option is to put the master key in an escrow service.) …
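Porticor has not published its algorithm, but the "two keys to open the box" idea can be illustrated with a toy XOR key-splitting scheme. This is a conceptual sketch only, not Porticor's implementation, and it omits the additional layer in which project keys are encrypted under the customer's master key:

```python
import secrets

def split_key(master_key: bytes):
    """Split a key into two shares (XOR secret sharing); neither
    share alone reveals anything about the original key."""
    customer_share = secrets.token_bytes(len(master_key))
    service_share = bytes(a ^ b for a, b in zip(master_key, customer_share))
    return customer_share, service_share

def combine(share_a: bytes, share_b: bytes) -> bytes:
    # Only the combination of both shares recovers the key.
    return bytes(a ^ b for a, b in zip(share_a, share_b))

project_key = secrets.token_bytes(32)   # per-project encryption key
customer_share, service_share = split_key(project_key)

assert combine(customer_share, service_share) == project_key
print("project key recovered only when both shares are combined")
```

Like the safe deposit box, each share is useless on its own: the customer's share and the service's share must be brought together to decrypt anything.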

Read more: 2, Next >

Bruce Kyle continued his Windows Azure Security Best Practices – Part 3: Identifying Your Security Frame on 3/9/2012:

When you are building out your cloud application, security should be front and center in your Windows Azure planning and execution. In part 1 I described the threat landscape and explained that your application should employ defense in depth. In part 2, I explained that security is a shared responsibility and that Windows Azure provides your application with security features that go beyond those you need to consider in an on-premises application. But it also exposes other vulnerabilities that you should consider.

In this part, I explore how you can examine the architecture of your application. The patterns & practices team provides the idea of a Security Frame as a way to look at your application to determine threats and your responses, before you even begin coding.

I also describe how you can use the Microsoft Security Development Lifecycle (SDL) in a prescribed way that you can adapt in your organization to address security in every phase of your application's lifecycle.

Security Frame


A Security Frame acts as a simple lens for you to look into your apps from a security viewpoint.

The concept is explained in depth in Windows Azure Security Notes. This document from the Patterns and Practices team was authored by J.D. Meier, Principal Program Manager, and Paul Enfield. Developed with help from customers, field engineers, product teams, and industry experts, it provides solutions for securing common application scenarios on Windows Azure based on common principles, patterns, and practices.

The paper provides an overview of the threats, attacks, vulnerabilities, and countermeasures you can take. It provides a detailed set of scenarios for many common application types, and presents a Security Frame as a way of thinking about security when designing and architecting Windows Azure applications.

The paper starts with a common ASP.NET application and identifies a set of categories as a set of actionable buckets:

  • Auditing and Logging
  • Authentication
  • Authorization
  • Communication
  • Configuration Management
  • Cryptography
  • Exception Management
  • Sensitive Data
  • Session Management
  • Validation

This approach helps you secure your solution by addressing the key security hotspots defined by the Security Frame.

For your on premises application, you need to handle each of these main issues. The visual model represents a fairly typical on-premise application architecture, and then pins hotspots against it.


With a managed infrastructure we can remove some of these concerns because they are handled by the managed infrastructure. For example, a Windows Azure application will not have permissions to create user accounts or elevate privileges.


The paper advises you to

Represent your architecture with a base diagram, and overlay the frame on it. Once the frame is overlaid, you can evaluate each item for applicability and quickly scope out categories not needing attention.
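As a rough illustration of that advice (the scoping choices here are mine, not from the paper), the overlay step amounts to checking each frame category against a component of your architecture and discarding the ones that don't apply:

```python
# The ten hotspot categories from the Security Frame.
FRAME = [
    "Auditing and Logging", "Authentication", "Authorization",
    "Communication", "Configuration Management", "Cryptography",
    "Exception Management", "Sensitive Data", "Session Management",
    "Validation",
]

def overlay(not_applicable):
    """Evaluate each category and scope out the ones not needing attention."""
    return [c for c in FRAME if c not in not_applicable]

# Example: a public, read-only web role with no sign-in (my example, not
# the paper's) can scope out the identity and session hotspots.
todo = overlay({"Authentication", "Authorization", "Session Management"})
```

What remains in `todo` is the short list of hotspots you actually need to design for on that component.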

The paper goes on to describe 28 different attacks and suggests 71 things you can do to make your application more secure. In fact, these represent coding standards you can use in your organization. (See Counter Measures Explained on page 18 for a great set of these standards that should be part of your software development).

Choosing Web Service Security Architectures

The paper explains several application scenarios that represent a set of common application implementations on the Windows Azure platform involving web-based services.

  • ASP.NET to WCF. This scenario illustrates Windows Azure hosting an ASP.NET application and a WCF service, and shows how to connect the two. It uses transport security, and calls to the service are made as a single identity.
  • ASP.NET On-Site to WCF on Windows Azure. In this scenario an ASP.NET application hosted on-site calls a WCF service hosted on Windows Azure using message security, and passing the identity of the original caller.
  • ASP.NET to WCF with Claims. This scenario uses the Windows Identity Framework to connect an ASP.NET application to a WCF service while authenticating using claims in an authentication token, and maintaining the identity of the authenticated user.
  • REST Service with AppFabric Access Control. In this scenario, a web service is implemented with a RESTful interface. To authenticate access to the REST service, the AppFabric Access Control is used to obtain a Simple Web Token (SWT). Access Control works in a trust relationship with an Identity Provider such as ADFS to issue Simple Web Tokens.
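For the REST scenario, the service's job on each request is to verify the token's HMAC-SHA256 signature against the key it shares with Access Control. The sketch below approximates that check in Python; the field layout follows the general SWT format and the issuer/audience values are made up, so treat it as an illustration rather than the exact ACS wire format:

```python
import base64
import hashlib
import hmac
import urllib.parse

def validate_swt(token: str, key: bytes) -> bool:
    """Verify the trailing HMACSHA256 parameter against the rest of the token."""
    body, sep, sig = token.rpartition("&HMACSHA256=")
    if not sep:
        return False  # no signature parameter at all
    expected = base64.b64encode(
        hmac.new(key, body.encode("ascii"), hashlib.sha256).digest())
    return hmac.compare_digest(
        expected, urllib.parse.unquote(sig).encode("ascii"))

# Build a sample token with made-up issuer/audience values.
key = b"key-shared-with-access-control"
body = "Issuer=https://example.accesscontrol.net/&Audience=http://api.example.com/"
signature = base64.b64encode(
    hmac.new(key, body.encode("ascii"), hashlib.sha256).digest())
token = body + "&HMACSHA256=" + urllib.parse.quote(signature)
```

The constant-time comparison (`hmac.compare_digest`) matters here: a naive `==` on signatures can leak timing information.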

The tradeoffs for each architecture are described, followed by use cases for each scenario.

And it provides the criteria to select the right fit for your application.

Choosing Data Security Architectures

The following application scenarios represent a set of common application implementations on the Windows Azure platform involving data storage security.

  • ASP.NET to Azure Storage: This scenario demonstrates securing access to Windows Azure Storage. It uses ASP.NET membership and roles, while mapping users to a single connection.
  • ASP.NET to SQL Azure: In this scenario we demonstrate SQL Azure access, providing users with differing levels of privilege to the data by mapping them to set roles.
  • ASP.NET On-Site to Data on Azure through WCF: This scenario illustrates Data as a Service by connecting an on-site application to data hosted in the cloud using WCF.
  • ASP.NET on Windows Azure to SQL Server On-site: In this scenario you have deployed an ASP.NET application to Windows Azure, but the data lives on-site. A WCF service is used to expose the data, and the Windows Azure Service Bus is used to expose the service through the corporate firewall to the Windows Azure Application.

The benefits and considerations of each data storage option are described for each data security architecture.


The article provides a checklist for securing your Windows Azure application. There is a checklist for each of the major buckets:

  • Architecture and Design
  • Deployment Considerations
  • Auditing and Logging
  • Authentication
  • Authorization
  • Communications
  • Configuration Management
  • Cryptography
  • Exception Management
  • Input and Data Validation
  • Sensitive Data

One example of the checklist, this one for Architecture and Design, defines the things you should be doing in your application architecture and highlights role-based security:

□ The application authentication code has been removed from application code, and is implemented separately.
□ Instead of the application determining who the user is, identify the user by the claims they provide.
□ The design identifies permissions required by the application.
□ The design verifies that required permissions do not exceed Windows Azure trust policies.
□ The design identifies storage requirements against storage options capabilities.
□ The application doesn’t use explicit IP addresses, it uses friendly DNS names instead.
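The second item above, identifying the user by presented claims rather than by an application-side lookup, can be sketched like this (Python stands in for the WIF object model purely as an illustration; the claim-type URIs are the common WS-* ones, and the "Approver" role is invented):

```python
# Claim-type URIs are the common WS-* identity claim types; the role
# name "Approver" is invented for the example.
NAME = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
ROLE = "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"

def can_approve(claims):
    """Authorization decision driven purely by the presented claims."""
    roles = {value for claim_type, value in claims if claim_type == ROLE}
    return "Approver" in roles

# The token, not the application, says who the user is and what roles she holds.
claims = [(NAME, "mary"), (ROLE, "Reader"), (ROLE, "Approver")]
```

The application code never consults a user store; it only inspects what the trusted issuer asserted.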

There’s a set of threats and countermeasures that you can use to design and implement your software for each category.

  • Cheat Sheet - Web Application Security Threats and Countermeasures at a Glance
  • Cheat Sheet - Web Service (SOAP) Security Threats and Countermeasures at a Glance
  • Cheat Sheet – Web Services (REST) Security Threats and Countermeasures at a Glance
  • Cheat Sheet - Data Security Threats and Countermeasures at a Glance

Those sheets and the other information in Windows Azure Security Notes provide a good place to start.

Setting the Security in Your Development Lifecycle

I’m often asked about how Microsoft approaches security for our own applications and for Windows Azure itself.

Microsoft requires that the Microsoft Security Development Lifecycle (SDL) be followed for any Microsoft-developed software deployed on Windows Azure.

The Security Development Lifecycle (SDL) is a software development security assurance process consisting of security practices grouped by seven phases: training, requirements, design, implementation, verification, release, and response.


Starting with the requirements phase, the SDL process includes a number of specific activities when considering development of applications to be hosted in the Microsoft cloud:

Requirements – The primary objective in this phase is to identify key security objectives and otherwise maximize software security while minimizing disruption to customer usability, plans, and schedules. This activity may include an operational discussion when dealing with hosted applications that focuses on defining how the service will utilize network connections and message transports.

Design – Critical security steps in this phase include documenting the potential attack surface and conducting threat modeling. As with the requirements phase, environmental criteria may be identified when going through this process for a hosted application.

Implementation – Coding and testing occur in this phase. Preventing the creation of code with security vulnerabilities and taking steps to remove such issues, if present, are the key practices during implementation.

Verification – The beta phase is when new applications are considered functionally complete. During this phase, close attention is paid to determining what security risks are present when the application is deployed in a real-world scenario and what steps can be taken to eliminate or mitigate the security risks.

Release – The Final Security Review (FSR) happens during this phase. If needed, an Operational Security Review (OSR) also occurs before the new application can be released into Microsoft’s cloud environment.

Response – For Microsoft’s cloud environment, the SIM team takes the lead in responding to security incidents and works closely with product, service delivery teams, and members of the Microsoft Security Response Center to triage, research, and remediate reported incidents.

Tools and Processes

SDL includes tools and processes that you can use freely.

SDL applies equally to applications built on the Windows Azure platform and any other platform. Most Windows Azure applications have been built, or will be built using agile methods. As a result, the SDL for agile process may be more applicable to applications hosted on Windows Azure than to the classic phase-based SDL. The Microsoft SDL Web site also covers SDL for Agile in detail.


Many thanks to J.D. Meier and Paul Enfield of the Patterns and Practices team for Windows Azure Security Notes.

For more information about SDL, see The Microsoft Security Development Lifecycle (SDL) page.

Next Up

Windows Azure Security Best Practices – Part 4: What You Need to Do. In addition to protecting your application from threats, there are additional steps you should take when you deploy your application. We provide a list of mitigations that you should employ in your application development and deployment.

<Return to section navigation list>

Cloud Computing Events

•• Tim Anderson (@timanderson) reported Microsoft’s platform nearly invisible at QCon London 2012 in a 3/10/2012 post:

QCon London ended yesterday. It was the biggest London QCon yet, with around 1200 developers and a certain amount of room chaos, but still a friendly atmosphere and a great opportunity to catch up with developers, vendors, and industry trends.

Microsoft was near-invisible at QCon. There was a sparsely attended Azure session, mainly I would guess because QCon attendees do not see that Azure has any relevance to them. What does it offer that they cannot get from Amazon EC2, Google App Engine, Joyent or another niche provider, or from their own private clouds?

Mark Rendle at the Azure session did state that Node.js runs better on Windows (and Azure) than on Linux. However, he did not have performance figures to hand. A quick search throws up these figures from Node.js inventor Ryan Dahl:


These figures are more “nothing to choose between them” than evidence for better performance, but since 0.6.0 was the first Windows release it is possible that it has swung in its favour since. It is a decent showing for sure, but there are other more important factors when choosing a cloud platform: cost, resiliency, services available and so on. Amazon is charging ahead; why choose Azure?

My sense is that developers presume that Azure is mainly relevant to Microsoft platform businesses hosting Microsoft platform applications; and I suspect that a detailed analysis would bear out that presumption despite the encouraging figures above. That said, Azure seems to me a solid though somewhat expensive offering and one that the company has undersold.

I have focused on Azure because QCon tends to be more about the server than the client (though there was a good deal of mobile this year), and at enterprise scale. It beats me why Microsoft was not exhibiting there, as the attendees are an influential lot and exactly the target audience, if the company wants to move beyond its home crowd.

I heard little talk of Windows 8 and little talk of Windows Phone 7, though Nokia sponsored some of the catering and ran a hospitality suite which unfortunately I was not able to attend.

Nor did I get to Tomas Petricek’s talk on asynchronous programming in F#, though functional programming was hot at QCon last year and I would guess he drew a bigger audience than Azure managed.

Microsoft is coming from behind in cloud - Infrastructure as a Service and/or Platform as a Service – as well as in mobile.

I should add the company is, from what I hear, doing better with its Software as a Service cloud, Office 365; and of course I realise that there are plenty of Microsoft-platform folk who attend other events such as the company’s own BUILD, Tech Ed and so on.


This is the basis for the claim that node.js runs better on Windows:

IOCP supports Sockets, Pipes, and Regular Files.
That is, Windows has true async kernel file I/O.
(In Unix we have to fake it with a userspace thread pool.)

from Dahl’s presentation on the Node roadmap at NodeConf May 2011.
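Dahl's parenthetical, faking async file I/O with a userspace thread pool, is easy to sketch. The snippet below uses Python's asyncio standing in for libuv, purely as an illustration: a blocking read runs on a worker thread while the event loop stays free.

```python
import asyncio
import concurrent.futures
import os
import tempfile

# A small worker pool plays the part of libuv's thread pool.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def _blocking_read(path):
    with open(path, "rb") as f:
        return f.read()

async def async_read(path):
    """Fake async file I/O: the blocking read runs on a pool thread."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(_pool, _blocking_read, path)

async def main():
    fd, path = tempfile.mkstemp()
    os.write(fd, b"hello")
    os.close(fd)
    try:
        return await async_read(path)
    finally:
        os.remove(path)

data = asyncio.run(main())
```

On Windows, by contrast, IOCP lets the kernel complete the file operation asynchronously, so no worker thread is burned on the wait.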

Related posts:

  1. Sold out QCon kicks off in London: big data, mobile, cloud, HTML 5
  2. QCon London 2010 report: fix your code, adopt simplicity, cool .NET things
  3. QCon London

Kevin Remde (@KevinRemde) recommended that you Kick-Start your Azure Cloud Development Skills in a 3/9/2012 post for US mid-westerners:

Are you interested in learning the ins-and-outs of working with Windows Azure? Wondering how to get started cheaply - as in “for FREE”? We’re giving you a chance to spend a day with some of the nation’s leading cloud experts and for you to learn how to build a web application that runs in Windows Azure. You will learn how to sign up for free time in the cloud, and how to build a typical web application using the same ASP.NET tools and techniques. You’ll explore web roles, cloud storage, SQL Azure, and common scenarios. Get your questions answered via open Q&A, and learn what workloads should not be moved to cloud.

Don't forget: we have tools for IT Pros to manage both Private and Public Clouds. This will be a hands-on learning experience. The invite links below have the details needed to set up your machine. Of course we’ll have help onsite to get the right software installed as well. Lunch will be provided, and prizes awarded. You can use the registration links below to get registered.

By the way, if you have an MSDN Subscription you already have free cloud benefits! This video shows you how to get your risk-free access to Windows Azure to explore and learn the cloud. Or activate your MSDN Cloud benefits here. If you have questions, send our Azure team members an email: msnextde at


Here is the schedule. Click the date to get location details and to register. Do it soon, because seats are limited.

No significant articles today.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Nicole Hemsoth reported IBM Big Data VP Surveys Landscape in a 3/9/2012 post to the Datanami blog:

When it comes to marking the evolution of data and the innumerable tools to tame it that have emerged over the last decade, Anjul Bhambhri has what amounts to a bird’s eye view of the industry.

An electrical engineer by training, Bhambhri now serves as IBM’s Vice President of Big Data Products, a role that taps into her experience working with IBM’s Optim app and other data management tools at companies like Informix and Sybase.

Known for her work in developing the XML core for IBM’s DB2 offering, Bhambhri has a rather unique perspective on both the software and hardware sides of the emerging big data ecosystem.

Even with all the cynicism about the big data market thrown in, it’s almost impossible not to point to IBM as one of the leading providers of big data solutions, particularly on the software side.

However, as we pointed out during our conversation with Bhambhri, the big data ecosystem is generating a lot of attention for startups and open source initiatives, some of which directly compete with IBM’s business on price, community support, and in other ways.

Bhambhri’s answer to the matter of blooming competition from both startups and open source (and what seems mostly in big data these days to be open source-based startups) was the following:

“I think it makes a lot of sense that solutions are mushrooming, which are leveraging the insights that can be gained from big data. I think for the customers that no longer want to build their own applications and if they can find a solution that fits their needs with maybe some customization, it’s important that those solutions be made available to customers so we are working with a lot of companies in the space who are providing solutions.

IBM has offerings in that space as well, from internally developed solutions to technologies we’ve acquired from Unica, for example. So we are obviously working with both the solution providers who are inside IBM as well as small companies or other companies that are providing big data solutions that we are partnering with that can be run on BigInsights, so they can analyze and show the results of the analytics capabilities that we can provide on big data. It makes their solutions more complete and more competitive than if they were not able to analyze this big data, so the answer would be partnering as well as enabling any solutions that IBM itself is building.”

As Bhambhri noted, however, this is nothing new and everyone in the industry is just getting started on the journey to big data solutions. As she noted, “For relational databases, a lot of players providing offerings in this space go through the cycle of what the needs are for structured data. As you can imagine, a lot of that work is also starting for unstructured or semi-structured data.”

Acquisitions on IBM’s part are a continual event and we asked what might be down the road for IBM and the big data division she leads. As she told us, however: “I cannot promise specifically on what we would do from an acquisition standpoint, but we are certainly partnering with a lot of players in the space and really making them successful, by providing our technology and sharing with them what the possibilities are around that. So I can’t promise specifically on what acquisitions we would be doing in this space or not.”

She says that while IBM isn’t missing any crucial pieces of the big data pie early on, this is a new area and, as such, the emphasis of IBM’s approach is focused on the platform. From our conversation:

“If you look at what we have done recently, you see we are paying attention to all fronts of the big data platform, especially in terms of how we can ingest data from a variety of sources, and also in terms of being able to analyze the data, perform historical analysis and uncover patterns. We are also focusing on providing tooling so that application developers can build new sources of applications and do ad hoc analysis as well as write, debug and deploy applications.”

She pointed to the value of this approach, saying that the company has already worked closely with over 200 customers over the last couple of years using this platform approach that looks to meet the needs of everyone from the analyst to the developer to the system admins.

Bhambhri notes, however, that there is still a lot of work happening around their big data solutions and around providing what she calls big data accelerators. As she told us, IBM has “around 100 sample applications that have been harvested from the work we have done for specific use cases and customers. These have been built into the product so the customer can spend time analyzing as opposed to implementing.”

To highlight this, she says that users can take these applications as a starting point; that way, if they want to, say, analyze data coming in from social media rather than starting from scratch, they can look at actual IBM data. While it might not be exact, it can provide a reasonable starting point that IBM can continue developing into a more complete solution.

On that note, she pointed to a number of existing IBM products and case studies that highlight the ways big data is being harnessed. At the core of this part of our chat was our discussion of the company’s approach to complex event processing. From the transcript of our discussion:

What we have that goes a step beyond complex event processing is called InfoSphere Streams. It has the ability to ingest, analyze and act on very large volumes of structured, semi-structured and unstructured data, and the data can be processed and analyzed at microsecond latency as it comes in.

When you look at the volume there are applications which needed real-time analytics, which we have deployed in the telecommunications sector. We are processing 6 billion calls per day. Those volumes are huge and beyond what any complex event processing system could handle or analyze with that kind of latency. The volume of data is huge and it could be structured or unstructured. The reason the telco wants to analyze this data is because they want to resolve billing disputes and understand customer choice. …

Read more.

Jeff Barr (@jeffbarr) reported New IAM Features: Password Management and Access to Account Activity and Usage Reports Pages in a 3/9/2012 post:

We are excited to announce that we have added several new AWS Identity and Access Management (IAM) features aimed at providing you more flexibility and control when managing your IAM users.
With today’s launch you can now:

  1. Grant your IAM users access to the Account Activity page and Usage Reports page on the AWS website.
  2. Create a password policy to enforce the password strength for your IAM users.
  3. Grant your IAM users the ability to change their own password.

Access to Account Activity and Usage Reports
This new feature allows you to create separate and distinct IAM users for business and technical purposes. You can grant your business users access to the Account Activity and/or Usage Reports pages of the AWS website to allow them to access billing and usage data without giving them access to other AWS resources such as EC2 instances or files in S3. Check out Controlling User Access to your AWS Accounts Billing Information for more details.

Password Policies
You can now define an account-wide policy in the IAM Console that enforces password strength for your IAM users upon password change. You can specify password length, and require that passwords include any combination of uppercase letters, lowercase letters, numbers, or symbols. For more details, check out Managing an IAM Password Policy.
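A policy of that shape is straightforward to model. The sketch below illustrates the kind of checks such a policy implies; it is not the IAM API itself, and the parameter names are invented:

```python
import string

def meets_policy(password, min_length=8, require_uppercase=True,
                 require_lowercase=True, require_numbers=True,
                 require_symbols=False):
    """Approximate an account-wide password-strength policy check."""
    checks = [len(password) >= min_length]
    if require_uppercase:
        checks.append(any(c.isupper() for c in password))
    if require_lowercase:
        checks.append(any(c.islower() for c in password))
    if require_numbers:
        checks.append(any(c.isdigit() for c in password))
    if require_symbols:
        checks.append(any(c in string.punctuation for c in password))
    return all(checks)
```

In IAM the policy is enforced account-wide at password-change time, which is why it only needs to describe composition rules, not store anything per user.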

Changing Passwords
We have also added a simple graphical user interface in the IAM Console that allows IAM users to change their own password. They can access this by selecting the “Security Credentials” option from the drop down in the upper right corner of the AWS Management Console. For more details, check out How IAM Users Change Their Own Password.

These features were created to address customer needs that have been brought to our attention via the IAM Forum. Please feel free to let us know what else we can do to make IAM an even better fit for your needs.

Amazon appears to be winning the new features race with Windows Azure.

Google announced price cuts for cloud data storage in a 3/6/2012 update to the Pricing and Support page of the Google Developers site:

Free Trial Quota

Google Cloud Storage offers a free trial quota until June 30, 2012. This quota is only applicable to your first project that uses Google Cloud Storage and gives you free usage of resources within that project, up to:

  • 5 GB of storage
  • 25 GB of download data (20 GB to Americas and EMEA*; 5 GB to Asia-Pacific)
  • 25 GB of upload data (20 GB to Americas and EMEA*; 5 GB to Asia-Pacific)
  • 30,000 GET, HEAD requests
  • 3,000 PUT, POST, GET bucket**, GET service** requests

The free quota will be reflected on your invoice at the end of the billing period. All usage above the trial quota will be charged at our standard rates below.
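Billing above the quota works out to a simple max-with-zero calculation per resource. In the sketch below the free-tier amounts come from the list above, but the per-GB rates are placeholders, not Google's actual standard rates:

```python
# Free-tier amounts from the list above; the per-GB rates are placeholders.
FREE_QUOTA = {"storage_gb": 5, "download_gb": 25, "upload_gb": 25}
RATE_PER_GB = {"storage_gb": 0.13, "download_gb": 0.12, "upload_gb": 0.10}

def monthly_charge(usage):
    """Charge only the usage above the free trial quota."""
    return round(sum(max(usage.get(k, 0) - FREE_QUOTA[k], 0) * RATE_PER_GB[k]
                     for k in FREE_QUOTA), 2)

# 3 GB of storage and 5 GB of upload over quota; downloads stay free.
bill = monthly_charge({"storage_gb": 8, "download_gb": 20, "upload_gb": 30})
```

Usage at or under the quota produces a zero charge, matching the promise that the free quota is reflected on the invoice at the end of the billing period.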

Note: If you signed up before May 10, 2011, please read the FAQ on how this trial offer affects you.


If you exceed your free trial quota, your Google Cloud Storage account is billed according to the following tables:



AppHarbor (@AppHarbor) finally posted Announcing pricing on 2/2/2012 (missed when published):

AppHarbor has been available in public beta for one year and we wanted to share some incredible statistics. Our users clocked more than 26,871,840 hours of free hosting. AppHarbor received an amazing 117,587 builds. 18,291 builds did not compile or failed unit tests and AppHarbor stopped them from going live. We deployed and scaled 99,296 builds!

AppHarbor cares deeply about our community. We responded 4,201 times on our support forum, our median response time is 1h 34m. Our community is not only on AppHarbor and we go where they go, including a thriving community on Twitter and a fast growing group on one of our favorite sites, StackOverflow.

When we founded AppHarbor, we set out to give .NET developers everywhere an amazing platform to deploy and scale their applications in the cloud. A portable platform so that developers could move their application somewhere else without any hassle. A platform where developers could easily use 3rd party add-ons to enrich their applications. A platform where any developer can throw together an application and deploy it live for free. And a platform that our users could rely on.

We are committed to always having a free offering. Over the last 12 months, one of the biggest concerns we have heard from customers is that you haven't been able to pay us. We realise you want to know that we will be around for years to come. We always planned on offering premium plans, and today we are announcing two new premium plans.

Meet Catamaran and Yacht, which will complement our newly branded free plan, Canoe. Canoe comes with one worker, a complimentary subdomain and piggyback SSL using the certificate.

Pricing page

Like all AppHarbor applications, apps on the Canoe plan have access to our build service and continuous integration infrastructure. This includes built-in source code repositories, integrations with external source code repository providers such as BitBucket and GitHub, source code compilation, unit test execution and notifications for completed builds. Canoe apps also have access to the full catalog of add-ons many of which have free plans. Canoe apps are load balanced from the start, so it's ready to scale out when you need it.

Catamaran, at $49/month, gives your application 2 workers along with custom hostnames and SNI SSL. Yacht comes with 4 workers, custom hostnames and IP SSL for $199/month. Multiple workers are recommended for production applications: they increase the number of requests your application can handle and ensure that your application runs with no downtime in case of a sudden server failure. That is why we built them into our paid plans.

If you are happy with the Canoe plan and just need custom hostnames or SNI SSL, those can be added a la carte for $10/month. If your app needs IP SSL with either the Canoe or Catamaran plans, that can be had for $100/month. Additional workers are $49/month and can be added to any plan.
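Putting the plan and a-la-carte prices above together, a monthly bill can be sketched as follows (an illustration of the stated rules, not AppHarbor's actual billing code):

```python
# Plan base prices and included workers, as described above.
PLANS = {"canoe": (0, 1), "catamaran": (49, 2), "yacht": (199, 4)}

def monthly_cost(plan, workers=None, sni_ssl=False, ip_ssl=False):
    base, included = PLANS[plan]
    cost = base
    if workers is not None and workers > included:
        cost += 49 * (workers - included)   # additional workers, $49 each
    if sni_ssl and plan == "canoe":
        cost += 10                          # a-la-carte hostnames / SNI SSL
    if ip_ssl and plan in ("canoe", "catamaran"):
        cost += 100                         # IP SSL add-on
    return cost
```

So a Yacht app scaled to five workers runs $248/month, while a Canoe app that only needs a custom hostname stays at $10/month.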

The workers in our plans can be used as either web or background workers. Background workers are not available yet, but we are working hard on the remaining pieces and will launch a beta shortly.

Applications that already have custom domains or are using SNI SSL can keep using those for free for another month, until March 1st. Once the grace period expires, we will either start charging (if a credit card is added to your account) or downgrade the app. Applications that are currently scaled to two or more instances will be placed on the Catamaran plan.

We want our prices to be as straightforward as the platform itself. You don't need to calculate how much I/O you're going to use, or what amount of bandwidth you require. We've set some soft and hard limits, but chances are that you'll never exceed those.

We chose to charge for custom domains because our free plan is meant to be a great developer sandbox. Custom domains are a sign that what is being built requires a certain branding, whether it is a business product or even a personal blog. We think it is fair that users who get value from using AppHarbor share the costs of running the platform. We kept the price of a custom domain low and we do not intend to make money from it. In fact we plan to donate any profits generated by custom domains to open-source projects and charity! In addition, we are looking at other ways to support applications that provide a valuable service to the .NET community, non-profits and education, for example by extending additional free resources to such apps. Please drop us a line if you think your application qualifies.

Charging for AppHarbor ensures you can keep using our service in the future. It also means that we can start to offer more advanced features - starting today scaling to more than two workers and IP SSL is generally available to all customers. As always, we value your comments and feedback and look forward to keep improving what we already think is the best .NET application platform.

AppHarbor also announced

in February 2012.

<Return to section navigation list>