Monday, December 06, 2010

Windows Azure and Cloud Computing Posts for 12/6/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available for HTTP download at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Bruno Terkaly posted 6-Minute Tutorial - Windows Azure Development - Unit Tests to Interact with Storage–Tables, Blobs, Queues on 12/6/2010:


This post is a continuation of the previous post: Previous Blog Post

Understanding how to work with tables, blobs, and queues is the hallmark of cloud computing. Although many developers may choose a relational data store, they won’t be able to achieve the same scalability and throughput as Azure tables, blobs, and queues.


A 6-minute video is posted. It’s super high quality and crystal clear. Download it and play it at your convenience; it’s typically just a 4-minute download.


6-Minute Tutorial - Windows Azure Development - Unit Tests to Interact with Storage–Working with the Developer and Storage Fabrics


Download the video

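Since the tutorial covers unit tests that talk to the local development fabric, here is a minimal sketch (ours, not Bruno's) of the fixed development-storage account such tests target. The account name, key, and local endpoints are the well-known values built into the Windows Azure SDK; the commented lines show where the StorageClient calls would go.

```csharp
using System;

// The fixed account exposed by the Windows Azure development storage fabric.
// Unit tests can target these endpoints without touching a real account.
class DevStorageFixture
{
    public const string AccountName = "devstoreaccount1";

    // Well-known development storage key, documented in the Azure SDK.
    public const string AccountKey =
        "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==";

    // Local endpoints exposed by the development fabric.
    public static string BlobEndpoint  = "http://127.0.0.1:10000/" + AccountName;
    public static string QueueEndpoint = "http://127.0.0.1:10001/" + AccountName;
    public static string TableEndpoint = "http://127.0.0.1:10002/" + AccountName;

    // The shortcut connection string the SDK understands.
    public const string ConnectionString = "UseDevelopmentStorage=true";

    static void Main()
    {
        // In a real test (requires Microsoft.WindowsAzure.StorageClient):
        // var account = CloudStorageAccount.DevelopmentStorageAccount;
        // var queue = account.CreateCloudQueueClient().GetQueueReference("unittests");
        // queue.CreateIfNotExist();
        Console.WriteLine(BlobEndpoint);
        Console.WriteLine(ConnectionString);
    }
}
```

The same test code then runs unchanged against a real storage account by swapping the connection string, which is the portability point the video makes.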


<Return to section navigation list> 

SQL Azure Database and Reporting

Kevin Ritchie (@KevRitchie) continued his Day 6 - Windows Azure Platform - SQL Azure Database series on 12/6/2010:

On the 6th day of Windows Azure Platform Christmas my true love gave to me the SQL Azure Database.

What is the SQL Azure Database?

The SQL Azure Database is a Windows Azure platform relational database system based on SQL Server technology, and because of this it provides a very familiar development model. So, as developers, you can still use familiar connection protocols such as ADO.NET, ODBC, and the Entity Framework, to name a few.

You can also use the standard SQL tools that you’re used to, like Management Studio, Integration Services, Analysis Services, and BCP. So, if you want to manage or move an existing database to the Cloud, it should be very simple.
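Because the development model mirrors on-premise SQL Server, connecting from ADO.NET only changes the connection string. A minimal sketch of the format SQL Azure expects ("myserver", "mydb", and the credentials are placeholders):

```csharp
using System;

// Builds an ADO.NET connection string for a SQL Azure database.
// Note the tcp: prefix, port 1433, the user@server login form,
// and that encryption is required.
class SqlAzureConnection
{
    public static string Build(string server, string database,
                               string user, string password)
    {
        return string.Format(
            "Server=tcp:{0}.database.windows.net,1433;Database={1};" +
            "User ID={2}@{0};Password={3};Trusted_Connection=False;Encrypt=True;",
            server, database, user, password);
    }

    static void Main()
    {
        string cs = Build("myserver", "mydb", "labUser", "Passw0rd!");
        // This string is passed straight to new SqlConnection(cs),
        // exactly as with an on-premise SQL Server.
        Console.WriteLine(cs);
    }
}
```

The only SQL Azure-specific quirks are the `user@server` login form and mandatory encryption; everything else is the familiar SqlClient model.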

But that’s not the only benefit of using a Cloud-based relational database system, there are more.

Let’s imagine that your server suddenly comes under some heavy workload; not to worry. SQL Azure replicates multiple redundant copies of your data to multiple physical servers to maintain data availability. But what if my server fails? Simple: SQL Azure provides automatic failover. All this without you having to manage a single thing :).

You can also scale up and scale down the service as your data grows or reduces and with the use of a “pay-as-you-grow” pricing model, this makes sure that you only pay for what you store.

I’ve only touched on a few benefits here, but what I wanted to show was the familiarity and ease with which you can store, manage and access data in the Cloud.

Tomorrow’s installment: SQL Azure - Reporting

Telerik explained Running Telerik Reports in the cloud with Windows Azure in a 12/6/2010 post and accompanying video segment:

Cloud computing is getting quite the hype these days, especially Windows Azure. To bring you up to speed, Windows Azure is a cloud services operating system that provides developers with on-demand computing and storage to host, scale, and manage .NET web applications on the internet through Microsoft datacenters.

In this post, we’ll demonstrate how to add a Telerik Report to a Windows Azure Project, publish it to Windows Azure, and serve it from the Azure cloud.


What you should have installed prior to starting:

Creating the project

We start by creating a Windows Azure project (make sure you change the framework version to .NET 3.5 or later in order to see the Visual Studio template) and select ASP.NET Web Role.


We won’t dig into this as it is explained thoroughly on Microsoft’s site. Here is a nice video - Windows Azure: Getting the Tools, Creating a Project, Creating Roles and Configuration.

To add the reports - as noted in Telerik Reporting’s Quickstart section, we create a class library that will contain Telerik Reports, or add an existing class library. To show the reports, we need a report viewer, and since we’ve added an ASP.NET Web Role, we use the Web Report Viewer:


Next step is to add reference to the report class library in the WebRole1 web application and after that assign a report to the report viewer:

protected void Page_Load(object sender, EventArgs e)
{
    this.ReportViewer1.Report = new ReportCatalog();
}

and we’re done. Set the WindowsAzureProject as the startup project and run it to make sure everything works in the cloud simulation environment.

Publish to Windows Azure

Once we’ve verified that everything works correctly (rendering the report, printing, exporting), the final step is to publish the project to Windows Azure. An important thing to note here is that we need the Telerik Reporting assemblies in the bin folder of the application; that is why we set their Copy Local property to True.

In order to publish, you need a Windows Azure account – log in with your credentials. Review the following video: Windows Azure Part III: Deploying Windows Azure Applications from Visual Studio, which elaborates on that matter.

The final result is Telerik Reporting in the cloud! Check it out on the Azure cloud yourself:


Attached to this post you can find the project we’ve published, which includes several of our demo reports. In order to run the sample you need to modify the connection string in the web.config file with your existing SQL Azure server name and login credentials. Connecting to and consuming an existing SQL Azure database in Telerik Report will be covered in an upcoming post.

As you can see, Telerik Reporting just works with the Azure platform and database.

The video below will guide you through the whole process. The video is also available on Telerik TV: Running Telerik Reports on Windows Azure.

<Return to section navigation list> 

Marketplace DataMarket and OData

See Rob Tiffany continued his series with Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure in an undated 12/2010 post in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Windows Azure AppFabric Team announced on 12/6/2010 Windows Azure AppFabric CTP December release – announcement and scheduled maintenance on 12/15/2010:

The next update to the Windows Azure AppFabric LABS environment is scheduled for December 15, 2010 (Wednesday). Users will have NO access to the AppFabric LABS portal and services during the scheduled maintenance down time.


  • START: December 15, 2010, 10 am PST
  • END:  December 15, 2010, 6 pm PST

Impact Alert:

The AppFabric LABS environment (Service Bus, Access Control, Caching, and portal) will be unavailable during this period. Additional impacts are described below.

Action Required:

Existing accounts and Service Namespaces will be available after the services are deployed.

However, ACS Identity Providers, Relying Party Applications, Rule Groups, Certificates, Keys, Service Identities and Management Credentials will NOT be persisted and restored after the maintenance. The user will be responsible for both backing up and restoring any ACS entities they care to reuse after the Windows Azure AppFabric LABS December Release.

Cache users will see the web.config snippet on their provisioned cache page change automatically as part of this release. We advise Cache customers to redeploy their application after applying the new snippet from the provisioned cache page to their application.

Thank you for working in LABS and giving us valuable feedback.  Once the update becomes available, we'll post the details via this blog. 

Stay tuned for the upcoming LABS release!

Waiting for the “exciting new features.”

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

MSDN presents Exercise 1: Connecting an Azure Web Role to an External SQL Server Database with Windows Azure Connect online version of the WAPTK from Microsoft Developer Network > Learn > Courses > Windows Azure Platform Training Course > Windows Azure:

Exercise 1: Connecting an Azure Web Role to an External SQL Server Database with Windows Azure Connect

In this exercise, you will set up network connectivity between a simple Web Role and your local machine. The Web Site used for sample purposes in this exercise will leverage Windows Azure Connect and connect to your local SQL Server instance to retrieve a list of customers that will be shown in a simple table within the site.

Task 1 - Configuring the Application to run in Windows Azure with Windows Azure Connect

To use Windows Azure Connect to connect external resources with your Azure service, you need to enable one or more of its roles. You do this by provisioning the role with the Connect plug-in that is part of the Windows Azure SDK v1.3 release. Only roles of the service provisioned with the Connect plug-in will be able to connect to external resources.

  1. To enable the Azure Web Role to connect to the database using SQL Server credentials, open SQL Server Management Studio and connect to the local SQL Server instance (i.e. .\sqlexpress).
  2. Right-click the server node and select Properties.
  3. Select Security and make sure SQL Server and Windows Authentication mode is selected.

    Figure 1

    SQL Server Properties - Security

  4. Click the OK button.
  5. Restart the SQL Server instance for the previous configuration change to take effect.

    Figure 2

    Restart SQL Server

  6. Open Visual Studio 2010 as an administrator. Go to File | Open | Project menu and select the Begin.sln located in Source\Ex1-ConnectingToExternalSQL\Begin folder of the lab.
  7. Press the F5 key to run the application.
  8. Notice in the connection information panel that the application is connected to the local SQL Server SQLEXPRESS instance.

    Figure 3

    Application running locally

  9. Navigate to
  10. Click the Virtual Network link in the Windows Azure Platform left pane. These are the contents related to Windows Azure Connect.

    Figure 4

    Clicking Virtual Network

  11. Click on {your-service-subscription-name} node located under Connect node on the upper side of left pane.
  12. Click OK on the Enable Windows Azure Connect popup. This popup appears only the first time you enable Windows Azure Connect with the current subscription.

    Figure 5

    Enabling Windows Azure Connect

  13. Once enabled, click Close on Enable Windows Azure Connect popup.

    Figure 6

    Windows Azure Connect enabled

  14. Click on {your-service-subscription-name} node to expand and see Windows Azure Connect information. To do this, click on Connect node on the upper side of the left pane.

    Figure 7

    Reviewing Windows Azure Connect information

  15. Click the “Get Activation Token” button. You will retrieve the “client activation token” for your Windows Azure service.

    Figure 8

    Getting Activation Token

  16. Click on Copy Token to Clipboard button on Get Activation Token for Windows Azure Roles popup to configure your Windows Azure Service.

    Figure 9

    Copying Client Activation token

  17. Click Yes if Microsoft Silverlight asks you to allow clipboard access.

    Figure 10

    Allowing Silverlight to access the clipboard

  18. Click OK to close the Get Activation Token for Windows Azure Roles popup.

    Figure 11

    Closing popup

  19. Go back to Visual Studio 2010. Under the CustomerSearch project, open the CustomersWebRole settings and select the VirtualNetwork tab. Ensure that the option labeled Activate Windows Azure Connect is selected. Paste from the clipboard the token you have copied in the previous step.

    Figure 12

    Filling Virtual Network tab

  20. Press Ctrl-S to save the config file.
  21. Open the Web.config file for the CustomersWebRole project to update the SQL connection string. Find the CustomersEntities connection string, and replace the .\SQLEXPRESS value in the Data Source attribute with {your-machine-name}\SQLEXPRESS,1433. The number 1433 in the attribute represents the port number. The following snippet shows the result after applying the update, assuming that your machine name is “YourMachine” (replace this value with your machine name):



    <add name="CustomersEntities" connectionString="metadata=res://*/Customers.csdl|res://*/Customers.ssdl|res://*/Customers.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=YourMachine\SQLEXPRESS,1433;Initial Catalog=Customers;Persist Security Info=True;User ID=labUser;Password=Passw0rd!;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" />



    Once you deploy the application to Windows Azure, the Web Role will connect to the SQL Server running on your machine through the machine name. That is why you need to change the .\SQLExpress value to explicitly use your machine name. Notice also that you need to explicitly specify the default port as part of the data source because the connection will be made using TCP/IP as the protocol.

  22. You need to deploy the solution to Windows Azure. You can deploy the application using the Windows Azure Tools for Visual Studio, or create a service package and use the portal to deploy it. For more information on deployment options, see the “Windows Azure Deployment” hands-on lab.
  23. Once the deployment completes successfully, you should see information about the roles in Virtual Network. To do this, click the Connect node in the left pane.

    Figure 13

    Roles information

  24. Click the Hosted Services, Storage Accounts & CDN link in the left pane to review your role information. If Hosted Services is not already selected, click Hosted Services to select it.

    Figure 14

    Selecting Compute, Storage & CDN

  25. Click on your service located on the center pane to review your service information. Once selected, click on the DNS Name link on the right pane. This opens the published Web site.

    Figure 15

    Clicking on Web Site URL

  26. Verify that the application is running in Windows Azure, without being able to connect to the external SQL server machine. You should see an exception saying that the connection to SQL Server could not be established.

    Figure 16

    Application running in Azure, showing an exception saying that the connection to SQL Server could not be established

The tutorial continues with 32 more figures and even more detailed steps for the following tasks:

See Steve Plank (@plankytronixx) announced on 12/6/2010 that he’ll present Plankytronixx Academy: Windows Azure Connect - Live Meeting on 15th December 16:00-17:15 UK time (08:00–09:15 PST) in the Cloud Computing Events section.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

I (@rogerjenn) updated my Strange Behavior of Windows Azure Platform Training Kit with Windows Azure SDK v1.3 under 64-bit Windows 7 post on 12/6/2010 with input from Steve Marx and Fernando Tubio and the results of a reinstallation of the WAPTK November 2010 update:

Update 12/6/2010: Steve Marx suggested in a 12/5/2010 reply to my forum thread that I might have installed the VS 2008 version (WAPTKVS2008-August2010.exe) instead of the VS 2010 version (WAPTKVS2010-November2010.exe), so I repeated the installation with the following result:

I just renamed my old C:\WAPTK folder, downloaded WAPTKVS2010-November2010.exe to my ...\Downloads folder, and installed it to C:\WAPTK.

Results were exactly as before; the installer failed to detect the Windows Azure SDK v1.3 installation and started the Upgrade Wizard. There might be something strange going on with my machine, but it's a new partition (not a VM) specifically for VS 2010 and Azure development with no related beta software installed.

Fernando Tubio observed on 12/6/2010:

From the screenshots in your blog post, I see that the problem you are having is in one of the demo scripts. The hands-on labs have been upgraded to 1.3 but the demo scripts still target 1.2. There's a note in the download page that mentions this but it's easy to miss.

That sounds like “not fully cooked” to me. The download page contains the following Overview paragraph:

The November update provides new and updated hands-on labs for the Windows Azure November 2010 enhancements and the Windows Azure Tools for Microsoft Visual Studio 1.3. These new hands-on labs demonstrate how to use new Windows Azure features such as Virtual Machine Role, Elevated Privileges, Full IIS, and more. This release also includes hands-on labs that were updated in late October 2010 to demonstrate some of the new Windows Azure AppFabric services that were announced at the Professional Developers Conference (including the Windows Azure AppFabric Access Control Service, Caching Service, and the Service Bus).

I assumed (wrongly) that all source code had been updated for SDK v1.3. The inclusion of a reference to the Microsoft.WindowsAzure.StorageClient v1.1 library (see the Solution Explorer screen capture below) led me to believe that the demo code had been updated.

Bill Zack provided on 12/6/2010 an MSDN map for developers Lost finding information about Windows Azure:


If you are an ISV overwhelmed by the tremendous amount of information available about Windows Azure, here is an outstanding index to the information on the Windows Azure Platform in the MSDN library.

This index guides you to the best information on how to:

  • Build a Windows Azure Application
  • Use the Windows Azure SDK Tools to Package and Deploy an Application
  • Configure a Web Application
  • Manage Windows Azure VM Roles
  • Administering Windows Azure Hosted Services
  • Deploy a Windows Azure Application
  • Upgrade a Service
  • Manage Upgrades to the Windows Azure Guest OS
  • Configure Windows Azure Connect

Thanks to Tejaswi Redkar

Here’s a current sample:

How to: Build a Windows Azure Application
How to Configure Virtual Machine Sizes
How to Configure Connection Strings
How to Configure Operating System Versions
How to Configure Local Storage Resources
How to Create a Certificate for a Role
How to Create a Remote Desktop Protocol File
How to Define Environment Variables Before a Role Starts
How to Define Input Endpoints for a Role
How to Define Internal Endpoints for a Role
How to Define Startup Tasks for a Role
How to Encrypt a Password
How to Restrict Communication Between Roles
How to Retrieve Role Instance Data
How to Use the RoleEnvironment.Changing Event
How to Use the RoleEnvironment.Changed Event

How to: Use the Windows Azure SDK Tools to Package and Deploy an Application
How to Prepare the Windows Azure Compute Emulator
How to Configure the Compute Emulator to Emulate Windows Azure
How to Package an Application by Using the CSPack Command-Line Tool
How to Run an Application in the Compute Emulator by Using the CSRun Command-Line Tool
How to Initialize the Storage Emulator by Using the DSInit Command-Line Tool
How to Change the Configuration of a Running Service
How to Attach a Debugger to New Role Instances
How to View Trace Information in the Compute Emulator
How to Configure SQL Server for the Storage Emulator

How to Configure a Web Application
How to Configure a Web Role for Multiple Web Sites
How to Configure the Virtual Directory Location
How to Configure a Windows Azure Port
How to Configure the Site Entry in the Service Definition File
How to Configure IIS Components in Windows Azure
How to Configure a Service to Use a Legacy Web Role

How to: Manage Windows Azure VM Roles
How to Create the Base VHD
How to Install the Windows Azure Integration Components
How to Prepare the Image for Deployment
How to Deploy an Image to Windows Azure
How to Create and Deploy the VM Role Service Model
How to Create a Differencing VHD
How to Change the Configuration of a VM role

How to: Administering Windows Azure Hosted Services
How to Setup a Windows Azure Subscription
How to Setup Multiple Administrator Accounts

How to: Deploy a Windows Azure Application
How to Package your Service
How to Deploy a Service
How to Create a Hosted Service
How to Create a Storage Account
How to Configure the Service Topology

How to: Upgrade a Service
How to Perform In-Place Upgrades
How to Swap a Service's VIPs

How to: Manage Upgrades to the Windows Azure Guest OS
How to Determine the Current Guest OS of your Service
How to Upgrade the Guest OS in the Management Portal
How to Upgrade the Guest OS in the Service Configuration File

How to: Configure Windows Azure Connect
How to Activate Windows Azure Roles for Windows Azure Connect
How to Install Local Endpoints with Windows Azure Connect
How to Create and Configure a Group of Endpoints in Windows Azure Connect

The Windows Azure Team reported on 12/6/2010 that the Latest BidNow Sample Application Leverages New Windows Azure Platform Features:

Wade Wegner has posted an update to the BidNow Sample application on the MSDN Code Gallery. This update increases the application’s use of the Windows Azure platform and incorporates many of the latest features and services.  He has also written a great blog post that details the new services and features supported in this release. The BidNow Sample is an online auction site that demonstrates how the comprehensive set of cloud services in the Windows Azure platform can be used to develop a highly scalable consumer application. You can see a live version of the sample application here.

BidNow Screenshot

With this latest update, BidNow leverages: SQL Azure for relational data, Windows Azure Access Control for authentication and authorization, Caching to store reference and activity data, OData services to extend reach, and a Windows Phone 7 client to demonstrate how to leverage cloud services from a mobile device.  Sample applications like BidNow, FabrikamShipping SaaS, and myTODO highlight the ways in which you can build applications that leverage the full capabilities of the Windows Azure platform.

For additional information, please see Getting Started with BidNow or watch the video, Setting Up the BidNow Sample Application for Windows Azure.

Bruce Kyle reported New MSDEV Developer Training for Azure Highlight Diagnosis, Data Manager, Sync Framework in a 12/6/2010 post to the US ISV Evangelism blog:

Three new training courses from MSDEV show you how to diagnose your service and perform troubleshooting, how to use the new lightweight database manager for SQL Azure, and how to get started using the Sync Framework with SQL Azure.

Windows Azure Diagnostics.  This webinar shows you how to implement Windows Azure Diagnostics, which lets you collect diagnostic data from a service running in Windows Azure. The diagnostic data can be used in debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis, capacity planning, and auditing.

Introduction to Database Manager for SQL Azure. This video is an introduction to the lightweight database management tool for SQL Azure, designed specifically for Web developers and other technology professionals who need a straightforward and easy-to-manage database management solution.

Sync Framework for SQL Azure. This video introduces and demonstrates the Sync Framework that provides bi-directional data synchronization between SQL Azure cloud databases and on-premises databases.

Free Support for Windows Azure Platform

For free support for your Windows Azure project, join Microsoft Platform Ready. Earn the Powered by Windows Azure logo. Sign up for a free month trial of Windows Azure – Use Promo Code DPWE01.

Cerebrata Software Private Ltd announced on 12/6/2010 updates to v2010.12.01.00 of the following products:

Azure Diagnostics Manager:

Configurable expansion of hosted services nodes
Until the current version, Azure Diagnostics Manager automatically tried to find all active deployments in all the hosted services under a subscription node, which could become cumbersome and time-consuming if you have many hosted services under that subscription, as Azure Diagnostics Manager would try to fetch this data for each and every hosted service. In this version, we have made this configurable so that you can instruct Azure Diagnostics Manager to fetch active deployments only for the hosted services whose diagnostics data you wish to see. Please click on the release notes demo video below to see more details about this feature.

Definition for last "n" hours changed
Until the current version, when you specified search criteria for the last 1 hour, Azure Diagnostics Manager returned data for the last clock hour instead of going back 60 minutes from the current time. In this version, we have changed that, so if you select last 1 hour, Azure Diagnostics Manager will bring back the data from the last 60 minutes instead of the last clock hour. Please click on the release notes demo video below to see more details about this feature.
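The difference between the two behaviors is easier to see in code. A small sketch (the method names are ours, not Cerebrata's) of the two interpretations of "last 1 hour":

```csharp
using System;

// Two interpretations of "last 1 hour" for a diagnostics query window.
class LastHour
{
    // Old behavior: the previous *clock* hour, e.g. at 10:47 the window
    // is 09:00 to 10:00.
    public static Tuple<DateTime, DateTime> ClockHour(DateTime now)
    {
        DateTime end = new DateTime(now.Year, now.Month, now.Day, now.Hour, 0, 0);
        return Tuple.Create(end.AddHours(-1), end);
    }

    // New behavior: a rolling 60-minute window, e.g. at 10:47 the window
    // is 09:47 to 10:47.
    public static Tuple<DateTime, DateTime> Rolling60(DateTime now)
    {
        return Tuple.Create(now.AddMinutes(-60), now);
    }

    static void Main()
    {
        DateTime now = new DateTime(2010, 12, 6, 10, 47, 0);
        Console.WriteLine(ClockHour(now));
        Console.WriteLine(Rolling60(now));
    }
}
```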

Release Notes Screencast
Download High Quality WMV

Cloud Storage Studio:

Blob container/folder statistics
In this version, we've included a feature that lets you find the total number of files in, and the size of, a blob container. It also lists file extensions, the count of files by extension, and the size of files by extension. Please click on the release notes demo video below to see more details about this feature.

Bug fixes

  • Moving Messages with Binary Content: There was an issue with moving binary messages. After the message was moved, its contents were corrupted. This issue is now fixed in this release.
  • Special characters in blob names: There was an issue with blobs with special characters (e.g. #, +, ? etc.) in their names when the blobs were copied/moved or deleted. This issue is now fixed in this release.

Release Notes Screencast
Download High Quality WMV

Full disclosure: Cerebrata has provided me a free license for these two products.

Igor Papirov asserted “It will get you and your team thinking about the impact of your choices in a new dimension” as a preface to his Five tips for implementing cost effective Windows Azure solutions on 12/6/2010:

Cloud-computing providers in general and Windows Azure in particular offer nearly infinite scalability, virtually unlimited capacity, blazing performance and extremely quick provision times.  However, to properly take advantage of these great benefits, teams need to plan ahead and understand all potential pitfalls and challenges.  One of the more significant differences between development of on-premise applications and cloud applications is a rather direct correlation between choices made during construction of an application and its support costs after deployment.  Because of the Windows Azure pricing model, every inefficient architectural decision and every inefficient line of code will show up as an extra line item on your Azure invoice.

This article will focus on a few actionable items that you can do today to minimize the cost of your Windows Azure application tomorrow.  The list of items is by no means exhaustive, but it will get you and your team thinking about the impact of your choices in a new dimension.

First, let's analyze the main moving parts of Windows Azure pricing. While seemingly straightforward individually, it is their combination together - and the obvious influence of already existing non-functional requirements for scalability, performance, security, availability, etc. - that makes architecting in the cloud a complex jigsaw puzzle.

  • Compute hours - the quantity and size of nodes (servers), charged by the hour while online
  • Transfer costs - data that crosses Microsoft's data center boundaries is subject to transfer charges
  • Azure Table Storage (ATS) costs - charged per gigabyte per month for the amount of space used
  • ATS transaction costs - charges for the number of requests your application makes to ATS
  • Size of SQL Azure databases - every database that you host in SQL Azure is charged by size

There are costs for other less frequently used services like Azure AppFabric or Content Delivery Network (CDN) that are not covered in this article.

Tip 1 - Avoid crossing data center boundaries
This is fairly straightforward.  Data that does not leave Microsoft data center is not subject to Transfer charges.  Keep your communication between compute nodes, SQL Azure, and Table Storage within the same data center as much as possible.  This is especially important for applications distributed among multiple geo-locations.  If you must communicate between different geo-locations, limit communication to non-transactional, batch calls that occur less frequently while utilizing compression where it makes sense to cut down on the amount of data transferred.  Employ caching technologies where possible.
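The batch-and-compress advice can be sketched with GZipStream, which ships in the .NET Framework (the payload shape below is illustrative):

```csharp
using System;
using System.IO;
using System.IO.Compression;
using System.Text;

// Batching cross-data-center traffic and compressing it before transfer.
// Repetitive batches compress very well; tiny per-record calls would not.
class TransferCompression
{
    public static byte[] Compress(string payload)
    {
        byte[] raw = Encoding.UTF8.GetBytes(payload);
        using (var output = new MemoryStream())
        {
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(raw, 0, raw.Length);
            }
            return output.ToArray();
        }
    }

    public static string Decompress(byte[] compressed)
    {
        using (var input = new MemoryStream(compressed))
        using (var gzip = new GZipStream(input, CompressionMode.Decompress))
        using (var reader = new StreamReader(gzip, Encoding.UTF8))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        var batch = new StringBuilder();
        for (int i = 0; i < 100; i++)
        {
            batch.Append("customerId,orderId,amount;");  // one queued record
        }
        string payload = batch.ToString();
        byte[] packed = Compress(payload);

        // The compressed batch is a fraction of the raw size, which cuts
        // the per-gigabyte charge for traffic that must cross data center
        // boundaries.
        Console.WriteLine(packed.Length < payload.Length);  // True
        Console.WriteLine(Decompress(packed) == payload);   // True
    }
}
```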

Tip 2 - Minimize the number of compute hours by using auto scaling
Compute hours will likely make up the largest part of your Azure bill and thus need to receive the greatest amount of attention.  It is important to remember that Windows Azure does not automatically scale down the number of compute nodes, even if there is little or no demand on your application.  Architect for and plan to have an automatic scaling strategy in place, where the amount of nodes increases when demand spikes up and decreases when demand tapers off.  This can easily cut your bill for compute hours in half.  Implementing a comprehensive auto-scaling engine can be more complex than it sounds.  While there are a number of open-source examples that show the basics of how this can be done, it is also a perfect opportunity to outsource the auto-scaling to third party services such as AzureWatch.

In order for auto-scaling to be most effective, group your system components by their scaling strategies into Azure Roles.  It is important to keep in mind that if you need high availability of your components and want to take advantage of Azure SLA, you will need to maintain at least two online nodes for each Azure Role you have deployed.
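A deliberately simple sketch of such a scaling decision (the threshold and the queue-length signal are illustrative; products like AzureWatch apply much richer rules). Note the floor of two instances, which the SLA point above requires:

```csharp
using System;

// Computes a target instance count for a role from a demand signal
// (here, the backlog in a work queue), never dropping below the
// two-instance minimum needed for the Windows Azure SLA.
class ScalingRule
{
    const int MinInstances = 2;           // SLA requires at least two nodes
    const int MessagesPerInstance = 100;  // illustrative capacity estimate

    public static int TargetInstanceCount(int queueLength)
    {
        // Ceiling division: instances needed to drain the backlog.
        int needed = (queueLength + MessagesPerInstance - 1) / MessagesPerInstance;
        return Math.Max(MinInstances, needed);
    }

    static void Main()
    {
        Console.WriteLine(TargetInstanceCount(0));    // quiet: stays at 2
        Console.WriteLine(TargetInstanceCount(950));  // spike: scales to 10
        // The chosen count would then be applied via the Windows Azure
        // Service Management API by updating the role's instance count.
    }
}
```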

Tip 3 - Use both Azure Table Storage (ATS) and SQL Azure
Try to not limit yourself to a choice between ATS or SQL Azure.  Instead, it would be best to figure out when to use both together to your advantage.  This is likely to be one of the tougher decisions that architects will need to make, as there are many compromises between relational storage of SQL Azure and highly scalable storage of ATS.  Neither technology is perfect for every situation.

On one hand accessing SQL Azure from within the boundaries of a data center is free and SQL Azure offers a familiar relational model which most developers will be comfortable with, transactions that assure data integrity, integration with popular ORM frameworks such as Entity Framework or NHibernate, and compatibility with numerous tools that work with relational databases.  On the other hand, ATS offers vastly greater scalability than SQL Azure and can hold a nearly infinite amount of data at a fraction of SQL Azure's cost.  You are charged, however, for every request made to ATS, even within the boundaries of a data center.
From a cost perspective, SQL Azure makes sense when access to data is not required to be highly scalable and when the amount of data is limited.  ATS makes sense for large amounts of data or when serious scalability is needed.

Tip 4 - ATS table modeling
If you have made the choice to use Azure Table Storage, you have essentially committed to converting parts of your data access components into mini database servers.  Setting aside Blob storage, which is primarily used for media files or documents, ATS provides three levels of data hierarchy (Table, PartitionKey, and RowKey) that can be accessed and navigated extremely efficiently.  However, anything beyond that will require custom code and CPU cycles on your compute nodes.  This is the key difference to work around.  It would be prudent to spend a significant amount of time modeling table storage with appropriate Blobs, Tables, PartitionKeys and RowKeys to accommodate efficient data storage and retrieval strategies.  This will not only speed up your transactions and minimize the amount of data transferred in and out of ATS, but also reduce the burden on the compute nodes that have to manipulate the data, translating directly into cost savings across the board.
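To make the key modeling concrete, here is one common pattern sketched in plain C#; the table, customer/order scheme, and key formats are my own illustrative assumptions, not part of the tip. The PartitionKey groups rows by a natural query boundary (the customer), and the RowKey inverts the timestamp so ATS's ascending RowKey order returns the newest rows first:

```csharp
using System;

// Illustrative key scheme for a hypothetical ATS "Orders" table: one
// partition per customer, rows sorted newest-first. ATS returns rows in
// ascending RowKey order, so inverting the tick count yields
// reverse-chronological scans without any custom sorting code.
public static class OrderKeys
{
    public static string PartitionKeyFor(int customerId)
    {
        return customerId.ToString("D8");    // fixed width keeps ordering sane
    }

    public static string RowKeyFor(DateTime orderedUtc)
    {
        long inverted = DateTime.MaxValue.Ticks - orderedUtc.Ticks;
        return inverted.ToString("D19");     // zero-pad so string sort == numeric sort
    }
}
```

With a scheme like this, "latest N orders for customer X" becomes a single cheap range query against one partition, which is exactly the kind of access path worth designing for up front.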

Tip 5 - Data purging in ATS
Because you are charged for every gigabyte stored in ATS, it may be prudent to have a data purging strategy.  However, while it may seem like a straightforward problem in a world of relational databases, this is not the case with ATS.  Since ATS is not relational, deletion of each and every row from an ATS table requires two transactions.  In certain cases it may be possible to delete a single row using only one transaction.  Either way, this is extremely slow, inefficient and expensive.  A better way would be to partition a single table into multiple versions (e.g. Sales2010, Sales2011, Sales2012, etc.) and purge obsolete data by deleting a version of a table at a time.
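The versioned-table idea reduces purging to string bookkeeping plus one table deletion per obsolete version. The sketch below uses only plain C# and leaves the actual delete (which goes through the storage client library) as a comment; the "Sales" naming is the tip's own example, while the method names are mine:

```csharp
using System;
using System.Collections.Generic;

// Sketch of period-partitioned table names (Sales2010, Sales2011, ...)
// so old data can be dropped one whole table at a time instead of
// row by row.
public static class SalesTables
{
    // Route each write to the table for its period.
    public static string TableFor(DateTime dateUtc)
    {
        return "Sales" + dateUtc.Year;       // ATS table names must be alphanumeric
    }

    // Given the existing table names, pick the versions older than the
    // retention cutoff; each would then be passed to the storage client's
    // delete-table operation in a single call.
    public static IEnumerable<string> ObsoleteTables(IEnumerable<string> tableNames, int cutoffYear)
    {
        foreach (var name in tableNames)
        {
            int year;
            if (name.StartsWith("Sales")
                && int.TryParse(name.Substring("Sales".Length), out year)
                && year < cutoffYear)
            {
                yield return name;
            }
        }
    }
}
```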

The shift to cloud computing represents a major leap forward and enables almost everyone, from small companies to large enterprises, to reduce their capital expenses, minimize time to market and significantly decrease support costs.  Even a small investment in planning ahead can turn cloud computing into meaningful savings and benefits.

Igor founded Paraleap Technologies, an emerging Chicago-based startup.

Adron Hall (@adronbh) posted Windows Azure SDK 1.3 Broken Deployments on 12/6/2010:

I’ve been using the 1.3 SDK now for about a week.  I have one single machine that can deploy an ASP.NET MVC Application to Windows Azure.  The other two machines, one 32 bit and one 64 bit, fail pretty much every time I do a build on them.  This is how it looks so far.

Machine 1 is: 32 bit, 4 GB RAM with Win 7, .NET 4.0 with VS 2008 + 2010 individual installs of software over a period of about 10 months.  In other words, it isn’t the cleanest installation anymore.

Machine 2 is: 32 bit, 4 GB RAM with Win 7, .NET 4.0 with VS 2010.  Relatively clean enterprise level installation of Windows.

Machine 3 is: 64 bit, 8 GB RAM with Win 7, .NET 4.0 with VS 2010.  The installation is literally 4 days old now.

Machine 3 fails to provide a build that is deployable every single time.  It creates what I call a “Ghost Deploy”.  See the video below:

Windows Azure Broken 1.3 SDK


Machine 2 actually completes a deployment after setting the assemblies in the ASP.NET MVC Web Application to local and removing the diagnostic configuration settings.  This is the only machine I’ve had successfully deploy a Windows Azure 1.3 SDK ASP.NET MVC Web Application in the last week.

Machine 1 actually just spools the deployment into the Web Role, but then it spins, busy without starting.  Once after about 45 minutes it did finally stop.

In the video I did not have a certificate added to the web role, but have since added one and tried to launch the web app (on Machine 2), still nothing.  It at least does not turn into a Ghost Deployment.  On machine 3 the deployment with the certificate still just dies.

I’ve tried to set all assemblies to local; I get no changes.  I’ve ensured that no configuration files have diagnostics in them.

Machines 1 and 2 both deployed Windows Azure ASP.NET MVC Application without any issue before SDK 1.3.  Machine 3 I’ve just built, so certain issues may be inherent in the machine load itself, but with the other issues still rearing their heads, I doubt that Machine 3 is at fault but instead am leaning toward SDK 1.3.

Anyone else out there having any issues?  Seen the continual spooling?  Compared to SDK 1.2 (which I didn’t even have a single issue with) this is really disconcerting.  I encourage cloud technology use to major enterprises, and this type of mistake is worrisome.  Fortunately for me (or maybe I should say fortunately for Microsoft) I’m not pushing any efforts with the 1.3 SDK right now.  It has a few new features I’d really like to try out, but at the current time I can rarely get a decent deployment, let alone something that tests out the new features.  Plz fix k thx bye.  :)

Adron Hall (@adronbh) reported Re: Cloudcamp Seattle on 12/5/2010:

Summary Statement:  CloudCamp rocked!  I got to meet a lot of smart people and have a lot of smart conversations!

Ok, so I probably shouldn’t write the summary statement first, but I’m not one for standard operating procedure.  But I digress, I’ll dive straight into the cloud topics and the event itself.

The event kicked off with an introduction and lightning talks by Tony Cowan, Mithun Dhar, Steve Riley, John Janakiraman, Margaret Dawson, and Patrick Escarcega.  Margaret and Steve really stood out to me in their talks; I’ll be keeping an eye on any future speaking engagements they may have.

One of the quotes that led off CloudCamp during the lightning talks was, “If you’re still talking about if the cloud is secure…” you’re already behind, out of touch, missing the reality of it, or simply not understanding the technology.  After further conversation though, it really boils down to the most common excuse.  The statement “the cloud isn’t secure enough” translates to “I’ve got my fingers in my ears and am not listening to your cloud talk”.

Margaret Dawson from Hubspan really took a great stance with her lightning talk.  The talk was titled “To Cloud or Not To Cloud” with “Don’t buy the cloud, buy a solution” as the summarized idea.  The other thing that she mentioned during her talk was she likes adding “AASes” to cloud computing, such as “BPaaS”.  I’ll admit I laughed guiltily along with a few dozen others and forgot to note what BPaaS stands for.  Whoops!  :)

An attempt at creating a generalized definition of cloud computing was also made.  It was stated that we can, as a community, agree on the following definitions of cloud computing.  The definition involved three parts:

  • Cloud computing is on demand.
  • Cloud computing can be turned off or on as needed.
  • Cloud computing can autoscale without issue to handle peaks and lulls in demand.

Another funny statement came from Dave Neilsen (@daveneilsen), CloudCamp Organizer, “I agree, the cloud isn’t right for everyone” to which someone in the crowd jokingly hollered back “You’re Fired!”  The energy in the audience and each of the sessions was great!

After the lightning talks Dave Neilsen led the conference with a cloud panel to field some questions.  A few topics related to this wikileaks thing :P came up along with some others.  I tried diligently to take good notes during this time, but it was a bit fast paced, so I set the note taking aside to be more involved in listening.

These activities kicked off the overall event, which then led into everyone breaking out to different sessions depending on topics created by the attendees.  The sessions included (and I may have missed one or two):

  • Open Source Software in the Cloud
  • Best Practices for Low Latency
  • Intro to Cloud Computing + Windows Azure
  • How does a traditional Microsoft Stack fit in Amazon Web Services (AWS)
  • Google Cloud Services
  • What are your personal projects?

Adron continued with “a few tweets that mentioned or had something useful in relation to the #cloudcamp + #seattle hashtags from last night.”

Danilo Diaz and Max Zilberman wrote Build Data-Driven Apps with Windows Azure and Windows Phone 7 for MSDN Magazine’s 12/2010 issue:

In the last 30 years, we’ve seen an explosion in the computer hardware industry. From mainframes to desktop computers to handheld devices, the hardware keeps getting more powerful even as it shrinks. Developers have, to some extent, become a bit spoiled by this constant increase in computing power and now expect limitless computer resources on every device for which they write applications. Many younger developers have no memory of a time when the size and efficiency of your code were important factors.

The latest trend in development is in embracing the rise in popularity of smartphones. When coding for smartphone devices, many developers have to adjust to the fact that, although today’s phones are extremely powerful compared to devices of just a few years ago, they do face limitations. These limitations are related to size, processor power, memory and connectivity. You need to understand how to work around these limitations when creating mobile applications to ensure good performance and the best user experience.

Some of the reasons for less-than-optimal app performance can be attributed directly to poor design decisions by the developer. However, in other cases, some of these factors are not directly in the control of the developer. A poorly performing application could be a symptom of a slow or offline third-party service, dropped mobile broadband connections or the nature of the data you’re working with (such as streaming media files or large sets of data).

Whatever the cause might be, the performance perceived by the end user of your application must be one of the top concerns of any software developer. In this article, we’ll cover some high-level considerations for designing robust, data-driven Windows Phone 7 applications in a way that can provide a great user experience and scale gracefully.

Let’s first take a moment and set up a scenario within which we can examine some design and coding choices. As an example, we’re going to work with a fictitious travel information application that provides information about user-selected airline flights. As shown in Figure 1, the main screen of the application shows a number of data elements including current weather and flight status. You can see that, as applications become more expressive and data-centric, developing them becomes a bit more challenging. There are simply more areas where your code can fall short.


Figure 1 The Flight Information Sample App


Figure 2 Flight Data Storage Schema

Danilo and Max continued with the following topics:

    • UI Thread Blocking
    • Dealing with Data
    • Cached and Persistent Data
    • Returning Data
    • Dealing with Network Failures
    • Using Push Notifications
    • Caching Data Locally
    • Caching Data on Your Server
    • Monitoring Service
    • Putting It All Together

Rob Tiffany (@robtiffany) continued his series with Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure in an undated 12/2010 post:

image Ever since my last blog post where I demonstrated how to create lightweight WCF REST + JSON services for consumption by Windows Phone 7, I’ve received many requests from folks wanting to know how to do the same thing from Windows Azure.  Using Visual Studio 2010, the Azure Development Fabric and SQL Server, I will show you how to move this code to the cloud. [Link added.]

Fire up VS2010 and create a new cloud project (you’ll be prompted to download all the Azure bits if you haven’t done so already).

Azure1 thumb Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure

Select WCF Service Web Role and move it over to your Cloud Service Solution.  Rename it to AzureRestService and click OK.

Azure2 thumb Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure

You’ll then be presented with the default Service1.svc.cs SOAP web service that implements the IService1.cs Interface.  Needless to say, you’ll need to make some modifications to these two files as well as Web.config if you want to be a true RESTafarian.

In Service1.svc.cs, delete the GetDataUsingDataContract method but leave the GetData method since you’ll use it to perform an initial test.

Next, open IService1.cs and delete the GetDataUsingDataContract [OperationContract] as well as the CompositeType [DataContract].  You should be left with the simple GetData [OperationContract].

Open Web.config.  You’ll notice that it’s already pretty full of configuration items.  After the closing </serviceBehaviors> tag, tap on your Enter key a few times to give you some room to insert some new stuff.  Insert the following just below the closing </serviceBehaviors> tag and just above the closing </behaviors> tag as shown:

                <endpointBehaviors>
                    <behavior name="REST">
                        <webHttp />
                    </behavior>
                </endpointBehaviors>

This provides you with the all-important webHttp behavior that enables lean REST calls using HTTP Verbs.

Below the closing </behaviors> tag and above <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />, insert the following as shown:

            <services>
                <service name="AzureRestService.Service1">
                    <endpoint address="" behaviorConfiguration="REST" binding="webHttpBinding" contract="AzureRestService.IService1" />
                </service>
            </services>

Here is where we define our service name and contract.  It’s also where we point our behaviorConfiguration at the webHttp behavior you named “REST” and set the binding to webHttpBinding.

Now it’s time to decorate your interface’s [OperationContract] with a WebGet attribute and utilize a UriTemplate to give the Windows Phone 7 caller a web-friendly Uri to call.  So beneath [OperationContract] and above string GetData(int value);, squeeze in the following:

[WebGet(UriTemplate = "/getdata?number={value}", BodyStyle = WebMessageBodyStyle.Bare)]

Since we want to call the GetData method via a GET request, we use WebGet and then we set our UriTemplate to something that anyone could access via their browser.  Lastly, we strip out all unnecessary junk by setting WebMessageBodyStyle.Bare.

It’s convenient that I mentioned using a browser to access this new REST service because that’s exactly how we’re going to test it.  Hit F5 in Visual Studio to fire up the Azure Development Fabric and start your Web Role.  Internet Explorer will come up and you’ll probably see an Error page because it points to the Root of your Role Site.  This is expected behavior.  In order to test the service, type the following in the IE address bar:

This points to a loopback address on your computer with a port number of 81.  If your environment uses a different port, then just change what you pasted in as appropriate.  After the port number and “/”, you type in the name of the service you created which is service1.svc.  After the next “/”, you type the format you described in the UriTemplate.  You can type any Integer you wish and if everything works, the browser will display the following result:

<string xmlns="">You entered: 5</string>

With your test REST service working from your local Azure Development Fabric, it’s time to bring over the business logic from my last blog post where I showed you how to return Customer information from an on-premise WCF Service connected to SQL Server.  I don’t necessarily expect you to have a SQL Azure account so you’ll add a connection string to Web.config that points to a local SQL Server Express instance.  Don’t worry, you can swap this connection string out later to point to our awesome cloud database.  Beneath the closing </system.web> tag and above the <system.serviceModel> tag, insert the following:

    <add name="ContosoBottlingConnectionString" connectionString="Data Source=RTIFFANY2\SQLEXPRESS;Initial Catalog=ContosoBottling;Integrated Security=True" providerName="System.Data.SqlClient" />

This is the same connection string from the last blog post and you’ll definitely need to modify it to work with both your local SQL Server instance and SQL Azure when you’re ready to deploy.  Bear with me as the rest of this blog post will be a large Copy and Paste effort.

Open IService1.cs and add the following:

using System.Collections.ObjectModel;


[WebGet(UriTemplate = "/Customers", BodyStyle = WebMessageBodyStyle.Bare, ResponseFormat = WebMessageFormat.Json)]
ObservableCollection<Customer> GetCustomers();

Open Service1.svc.cs and add the following:

using System.Web.Configuration;
using System.Collections.ObjectModel;
using System.Data.SqlClient;


//Get the Database Connection string
private string _connectionString = WebConfigurationManager.ConnectionStrings["ContosoBottlingConnectionString"].ConnectionString;


public ObservableCollection<Customer> GetCustomers()
{
    SqlConnection _cn = new SqlConnection(_connectionString);
    SqlCommand _cmd = new SqlCommand();
    _cmd.CommandText = "SELECT CustomerId, DistributionCenterId, RouteId, Name, StreetAddress, City, StateProvince, PostalCode FROM Customer";

    try
    {
        _cmd.Connection = _cn;
        _cn.Open();

        ObservableCollection<Customer> _customerList = new ObservableCollection<Customer>();

        SqlDataReader _dr = _cmd.ExecuteReader();
        while (_dr.Read())
        {
            Customer _customer = new Customer();
            _customer.CustomerId = Convert.ToInt32(_dr["CustomerId"]);
            _customer.DistributionCenterId = Convert.ToInt32(_dr["DistributionCenterId"]);
            _customer.RouteId = Convert.ToInt32(_dr["RouteId"]);
            _customer.Name = Convert.ToString(_dr["Name"]);
            _customer.StreetAddress = Convert.ToString(_dr["StreetAddress"]);
            _customer.City = Convert.ToString(_dr["City"]);
            _customer.StateProvince = Convert.ToString(_dr["StateProvince"]);
            _customer.PostalCode = Convert.ToString(_dr["PostalCode"]);

            //Add to List
            _customerList.Add(_customer);
        }
        return _customerList;
    }
    finally
    {
        _cn.Close();
    }
}

As you can see, the only remaining error squigglies refer to the lack of the Customer class I discussed in the on-premise WCF project from the last blog post.  To add it, I want you to right-click on your AzureRestService project and select Add | Class and name the class Customer.

Azure4 thumb Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure

Now I want you to paste the code below into this new class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Runtime.Serialization;
using System.ComponentModel;

namespace AzureRestService
{
    public class Customer : INotifyPropertyChanged
    {
        public Customer() { }

        private int customerId;
        private int distributionCenterId;
        private int routeId;
        private string name;
        private string streetAddress;
        private string city;
        private string stateProvince;
        private string postalCode;

        public int CustomerId
        {
            get { return customerId; }
            set { customerId = value; NotifyPropertyChanged("CustomerId"); }
        }

        public int DistributionCenterId
        {
            get { return distributionCenterId; }
            set { distributionCenterId = value; NotifyPropertyChanged("DistributionCenterId"); }
        }

        public int RouteId
        {
            get { return routeId; }
            set { routeId = value; NotifyPropertyChanged("RouteId"); }
        }

        public string Name
        {
            get { return name; }
            set { name = value; NotifyPropertyChanged("Name"); }
        }

        public string StreetAddress
        {
            get { return streetAddress; }
            set { streetAddress = value; NotifyPropertyChanged("StreetAddress"); }
        }

        public string City
        {
            get { return city; }
            set { city = value; NotifyPropertyChanged("City"); }
        }

        public string StateProvince
        {
            get { return stateProvince; }
            set { stateProvince = value; NotifyPropertyChanged("StateProvince"); }
        }

        public string PostalCode
        {
            get { return postalCode; }
            set { postalCode = value; NotifyPropertyChanged("PostalCode"); }
        }

        public event PropertyChangedEventHandler PropertyChanged;
        private void NotifyPropertyChanged(String propertyName)
        {
            if (null != PropertyChanged)
            {
                PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
            }
        }
    }
}

As I mentioned in the last article, this class is a little overkill since it inherits from INotifyPropertyChanged and adds all the code associated with firing NotifyPropertyChanged events.  I only do this because you will use this same class in your Windows Phone 7 project to support two-way data binding.

The Customer table you’ll be pulling data from is shown in SQL Server Management Studio below:

Azure5 thumb Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure

We’re now ready to roll so hit F5 in Visual Studio to debug this new cloud solution in the Azure Development Fabric.  When Internet Explorer comes up, type the following in the IE address bar:


You might be surprised to see the following dialog pop up instead of XML rendered in the browser:

Azure6 thumb Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure

The reason you see this is that you’re returning the data objects in wireless-friendly JSON format.  Notice that the dialog says the unknown file type is only 671 bytes.  This is a good thing.  Click the Save button and save this file to your desktop.

Now find the customer file on your desktop and rename it to customer.txt so you can view it in Notepad.  Double-click on this text file to reveal the tiny, JSON-encoded version of the data you just looked at in the previous SQL Server Management Studio picture.

Azure7 thumb Windows Phone 7 Line of Business App Dev :: Moving your WCF REST + JSON Service to Windows Azure


If you followed me through this example and all the code executed properly, you now know how to build Windows Azure REST + JSON services designed to conquer those slow, unreliable, high-latency wireless data networks we all deal with all over the world.  When combined with my last article, both your on-premise and Windows Azure bases are covered with WCF.  The only thing left to do is sign up for a Windows Azure Platform account and move this Web Role and SQL Azure database to the cloud.  In my next article, I’ll show you how to use the WebClient object from Silverlight in Windows Phone 7 to call these services.

Rob Tiffany is a Mobility Architect at Microsoft focused on designing and delivering the best possible Mobile solutions for his global customers.

The OnWindows blog posted Resource: T-Mobile deploys Windows Phone 7 on 12/2/2010 (missed when posted):

image T-Mobile USA, a leading provider of wireless services, wanted to create new mobile software to simplify communications for families. The company needed to implement the application and its server infrastructure while facing a tight deadline. T-Mobile decided to build the solution with Microsoft Visual Studio 2010 Professional and base it on Windows Phone 7 and the Windows Azure platform.

“We talked about a lot of different cloud options,” says Lipe. “At the end of the day, we felt that the Windows Azure platform best met our needs with the right feature set, compatibility, and reliability.”
With an outline for the project in place, T-Mobile and Microsoft asked Matchbox Mobile, a mobile software development company, to join the project team. Matchbox Mobile creates innovative applications for original equipment manufacturers, wireless operators, and other vendors.
“As a mobile phone operator, it’s a no-brainer to couple Windows Phone 7 with the Windows Azure platform,” says Joshua Lipe, product manager of Devices at T-Mobile.

<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.

<Return to section navigation list> 

Windows Azure Infrastructure

Bill Zack recommended David Pallmann on What's New from PDC10, Part 1 - Windows Azure in a 12/6/2010 post:

If you are an ISV thinking about moving your hosted or customer-premises-based application to the cloud, many of the new features announced at PDC10 can help you accomplish your goals. This is another post and video about the Windows Azure announcements that came out of PDC10 in November.  What makes this one somewhat unique is that it was done by one of our Windows Azure MVPs. 

That makes it more than just the usual laundry list of features.  It is part one of a four-part series that will unfold over the next few months.

In this episode David discusses the new integrated Windows Azure Management Portal.


He also covers features such as:

  • The new 1.3 SDK / Tools for Visual Studio,
  • Administrative Mode / Startup Tasks and Remote Desktop
  • Full IIS
  • Low-cost Compute Instance VM Size
  • SQL Azure Reporting Services
  • Windows Azure Connect
  • AppFabric Cache Service
  • VM Role
  • More extensive support for Java and Eclipse

Many of these features will be the subject of future webcasts in the series.

You should watch this video to get his unique viewpoint on these features and why they are important to your business. 

Mike Wood described Guest OS Version on Windows Azure in a 12/6/2010 post:

image One of the many options you have when deploying to Azure is what Azure OS version your Azure Service should run on.  In Azure you have a choice between the OS Family (which currently means Windows Server 2008 SP2 or Windows Server 2008 R2) and the OS Version.  I feel that this is slightly confusing because I’d normally say that the 2008 R2 was the version of the OS as compared to say, Windows Server 2003.  In the case of Azure the version relates to the Windows Azure Guest OS, or the additional Azure specific bits that are loaded on the top of the OS Family when an instance is provisioned by Azure for your code to run on (I’m not 100% sure when this combination is set up, so it may not happen at provisioning time).

You configure this choice in the Service Configuration file as attributes on the ServiceConfiguration element.  The choice you make covers ALL of the roles in your service.  So, for example, you can’t have one role running on one OS and a second role running on another.
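For reference, the two attributes sit on the root ServiceConfiguration element of the .cscfg file. Here is a minimal sketch with placeholder service and role names; osFamily="2" selects Windows Server 2008 R2, and osVersion="*" requests automatic guest OS upgrades:

```xml
<!-- Illustrative ServiceConfiguration.cscfg; serviceName and Role name
     are placeholders. osFamily="1" = Windows Server 2008 SP2,
     osFamily="2" = Windows Server 2008 R2, osVersion="*" = newest guest OS. -->
<ServiceConfiguration serviceName="MyService"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
                      osFamily="2"
                      osVersion="*">
  <Role name="WebRole1">
    <Instances count="2" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```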


The values for these attributes can be read about in the docs.  It’s important to note that the OS Family and Version also have a tie in with what SDK you used to create your service, so make sure to check out the compatibility matrix as well.  You must specify a value for the OS Family, but the OS version can be set to * (which is the default).  This translates to “I want the newest one”, which means that as new guest OS versions are released your service will be updated.  They update this using the same mechanism that you might use if you did a rolling update on your service, meaning they will upgrade each update domain one at a time taking the roles offline, applying the new OS version, redeploying and bringing the instances back online.

Note that if you set the OS Version to * it will do the upgrades for you automagically; however, you will NOT be upgraded to a new OS Family without input from you.  You can read more about this process in the MSDN docs, which also show how you can update these values directly from the management portal.

I wanted to blog about this because, as the docs will tell you, the value of * for the guest-level OS is a best practice.  Quote: “You can specify that Windows Azure should automatically upgrade the guest OS when a new version becomes available. Automatically upgrading the guest OS is a recommended best practice.”

Part of the benefit of the Platform as a Service (PaaS) model of cloud computing is that you no longer are concerned with patches and OS level updates.  The service provider makes sure that the correct security patches are applied and that the recommended OS updates are installed.  That being said, I’d like to suggest you take a moment to really decide on the behavior you want when you deploy your application and if this automatic update is the right choice for you.

For example, I don’t know too many enterprise shops that have Windows Update turned on for each server in their data center.  Most of them have hooks set up to push patches out to the servers after they have been tested to determine if any of them interfere with the operations of the servers and applications the data center supports.  Granted, you want to have this testing done quickly, especially with regards to security related patches, but you still want to make sure that you don’t break an application your company depends on by rolling out a patch.

In my opinion having your own data center servers running Windows Update would be similar to setting the OS Version to * in Azure.  You are making the choice that the update can happen without your having time to test it out in the staging environment.  I’m not saying this is bad, I’m just saying you need to be aware of the choice that is being made.

Taking the approach of updating the Guest OS Version on your own does remove some of the benefit that PaaS gives you; however, you should weigh that against possible breaking changes that may be caused by an update and how they would affect your business/application.  Doing the updates yourself means making sure, as new versions come out, that you do a full test of your code (hopefully automated as much as possible with tests) against the new OS level in staging, then updating the configuration to do the update in production.  This could be done as part of a rollout of new features, or on its own.  Depending on the size of your code or application this can also be very time consuming, which is why the automatic updates sound so nice.

In conclusion, I’d suggest that you think about how you want your upgrade process for OS versions to work.  You could choose the automatic route and never see an issue at all, or you might get bitten once in a while; it really is impossible to predict.  Your needs will determine the route you go, but it is a decision that should be made consciously.

After saying all of this, Microsoft does point out in their docs that they reserve the right to patch the OS for some security-related threats without input from you, or the ability for you to opt out.  If you think about it, this makes sense.  If a security hole is found that someone can exploit to take over the computer, I’d want all the servers in my own data center patched immediately, and I’d want the same for my assets in Azure as well.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Matthew Lynley asked Are hybrid clouds the path to cloud-computing nirvana? in a 12/6/2010 VentureBeat tutorial sponsored by Dell Computer Inc.:

image If you’re tackling your company’s computing needs, you’re going to have to get your head in the clouds. But which ones?

You’ve likely heard of cloud computing – shared computing resources available over the “cloud,” or the Internet. But it turns out there’s more than one way to cloud.

When most people talk about the cloud, they mean a public cloud — big server farms maintained by companies like Rackspace and available to and shared by a wide range of customers. They typically sell storage, bandwidth, and computing power at rates cheaper than most businesses could obtain on their own by maintaining their own computing infrastructure.

There are also cloud applications, like Salesforce.com’s customer-relationship management service, which provide both the software and the computing power needed to run it as a package deal. These, too, are a specialized form of public cloud.

The cost savings are compelling: Why own when you can rent? But cloud computing requires a shift in how programmers design and develop applications, and that’s a burden for businesses both large and small. Add to that lingering concerns over security and availability, and it’s easy to understand why not everyone’s rushing to the public cloud.

Security concerns with the public cloud are mostly a myth, said Jason Hoffman, founder and chief technology officer of cloud-computing provider Joyent. But most major companies will probably always have security standards that prevent them from moving their business into the public cloud. Many businesses don’t want to ship sensitive information off to public cloud servers, especially if they’re in regulated industries. And for time-sensitive tasks like, say, computerized trading, firms may not want to give up the edge they get from running their own servers.

That doesn’t mean companies can’t embrace cloud computing. The notion that the cloud is “all or nothing” is a myth, Amazon CTO Werner Vogels, a big public-cloud proponent, said earlier this year.

Some businesses are beginning to set up their own cloud-like pools of computing resources, called private clouds. They use the same kind of over-the-Internet architectures as public clouds, but they’re reserved for the use of the organization and can be firewalled off from the public Internet for a higher level of security and performance.

The best-of-both-worlds mix, where businesses use private clouds for their most important computing tasks and public clouds for occasional peaks of demand or less-sensitive tasks, like serving up images on a website, is the hybrid cloud. And it could be the way forward for businesses that aren’t ready to sail all the way to the cloud.

Startups and big software companies are gearing up for the hybrid-cloud opportunity. Eucalyptus Systems, a startup which recently raised $20 million, is making tools that help businesses adapt their applications to run on hybrid clouds. Microsoft and SAP are increasingly talking about hybrid clouds, where their software is available for installation on customer-owned servers and also provided as a service over the Internet.

Odds are that the public cloud will be the infrastructure that inevitably wins out, especially as its security gets tested and proven to the satisfaction of customers and regulators. But hybrid clouds could win in the short term, as a way to get businesses started on cloud architectures. And in some ways they live up to the ultimate promise of cloud computing — that it doesn’t matter where our servers are physically located. Public cloud, private cloud, hybrid cloud — as long as it’s in the cloud, and we’re getting more efficient, we’re headed in the right direction.

It’s not surprising that Dell would sponsor a post about private clouds; building them requires far more hardware than renting public-cloud capacity does.

Lori MacVittie (@lmacvittie) asserted The debate between private and public cloud is ridiculous and we shouldn’t even be having it in the first place as a preface to her It’s Called Cloud Computing not Cheap Computing post of 12/6/2010:

There’s a growing sector of the “cloud” market that is mobilizing to “discredit” private cloud. That ulterior motives exist behind this effort is certain (as followers of the movement would similarly claim regarding those who continue to support the private cloud), and these will certainly vary based on who may be leading the charge at any given moment.

Reality is, however, that enterprises are going to build “cloud-like” architectural models whether the movement succeeds or not. While folks like Phil Wainewright can patiently point out that public clouds are less expensive and have a better TCO than any so-called private cloud implementation, he and others miss that it isn’t necessarily about raw dollars. It’s about the relationship between costs, benefits, and risks, and analysis of that cost-risk-benefit relationship cannot be performed in a generalized, abstract manner. Such business analysis requires careful consideration of, well, the business and its needs – and that can’t be extrapolated and turned into a generalized formula without a lot of fine print, disclaimers, and caveats.

But let’s assume for a moment that no matter what the real cost-benefit analysis of private cloud versus public cloud might be for an organization that public cloud is less expensive.

So what?

If price were the only factor in IT acquisitions, then a whole lot of us would be out of a job. Face it: just because a cheaper alternative to “leading brand X” exists does not mean that organizations buy it (and vice-versa). Organizations have requirements for functionality, support, and compliance with government and industry regulations and standards; they have an architecture into which such solutions must fit, integrate, interoperate, and collaborate; they have operational and business needs that must be balanced against costs.

Did you buy a Yugo instead of that BMW? No? Why not? The Yugo was certainly cheaper, after all, and that’s what counts, right?

IT organizations are no different. Do they want to lower their costs? Heck yeah. Do they want to do it at the expense of their business and operational requirements? Heck no. IT acquisition is always a balancing act, and while there’s certainly an upper bound on pricing, it isn’t necessarily the deciding factor, nor is it always a deal breaker.

It’s about the value of the solution for the cost. For some infrastructure that’s about performance and port density. For others it’s about features and flexibility. For still others it’s how well supported it is by other application infrastructure. The value of public cloud right now is in cheap compute and storage resources. For some organizations that’s enough; for others, it’s barely scratching the surface. The value of cloud is in its ability to orchestrate – to automatically manage resources according to business and operational needs. Those needs are unique to each organization, and thus the cost-benefit-risk analysis of public versus private cloud must also be unique. Unilaterally declaring either public or private a “better value” is ludicrous unless you’ve factored in all the variables in the equation.


I will, however, grant that public cloud computing offerings are almost certainly cheaper resources than private. But let’s look at the cost to integrate public cloud-deployed applications with enterprise infrastructure and supporting architectural components versus a private cloud integration effort.

Applications deployed out in the cloud still require things like application access control (a.k.a. ID management), and data stores, and remote access and analytics and monitoring and, well, you get the picture. Organizations have two options if they aren’t moving the entirety of their data center to the public cloud environment:

  1. DUPLICATION Organizations can replicate the infrastructure and supporting components necessary in the public cloud. Additional costs are incurred to synchronize, license, secure, and manage.
  2. INTEGRATION Organizations can simply integrate and leverage existing corporate-bound infrastructure through traditional means or they can acquire emerging “cloud” integration solutions. The former is going to require effort around ensuring security and performance of that connection (don’t want requests timing out on users, that’s bad for productivity) and the latter will incur capital (and ongoing operational) expenses.

Integration of public cloud-deployed applications with network and application infrastructure is going to happen because very few organizations are “green fields”. That means the organization has existing applications and organization processes and policies that must be integrated, followed, and adhered to by any new application. Applications are not silos, they are not islands, they are not the cheese that stands alone at the end of the childhood game. And because organizations are not green fields, the expense of fork-lifting an entire data center architecture and depositing it in a public cloud – which would be necessary to alleviate the additional costs in effort and solutions associated with cross-internet integration – is far greater than the “benefit” of cheaper resources.

Andi Mann said it so well in a recent blog post, “Public Cloud Computing is NOT For Everyone”:

Public cloud might be logical for most smaller businesses, new businesses, or new applications like Netflix’ streaming video service, but for large enterprises, completely abandoning many millions of dollars of paid-for equipment, and an immeasurable amount of process and skill investment, is frequently unjustifiable. As much as they might want to get rid of internal IT, for large enterprises especially, it simply will not make sense – financially, or to the business.

Whether pundits and experts continue to disparage the efforts of enterprise organizations will not change the reality that those organizations are building such architectural models in their own data centers today. If the results are not as efficient, as cheap, or as “cloudy” as a public cloud, does it really matter, so long as the result offers the business and the IT organization value and benefits over what they had before?

The constant “put down” of private cloud and organizations actively seeking to implement them is as bad as the constant excuse of security (or lack thereof) in public cloud as a means to avoid them. Public and private cloud computing both aim to reduce costs and increase flexibility and operational efficiency of IT organizations. If that means all public, all private, or some mix of the two then that’s what it takes. 

That’s why I’m convinced that hybrid (sorry Randy) cloud computing will, in the end, be the preferred – or perhaps default - architectural model. There are applications for which public cloud computing makes sense in every organization, and applications for which private cloud computing makes sense. And then there are those applications for which cloud computing of any kind makes no sense.

Flexibility and agility is about choice; it’s about “personalization” of architectures and implementations for IT organizations such that they can build out a data center that meets the needs of the business they support. If you aren’t enabling that flexibility and choice, then you’re part of the problem, not the solution.

<Return to section navigation list> 

Cloud Security and Governance

John Moore published New PCI Standard: MSPs Deal With Semantics to the MSPMentor blog on 12/6/2010:

Sometimes an MSP’s job is as much semantic as technical. Take the latest version of the Payment Card Industry Data Security Standard (PCI DSS 2.0), which has been out for a few weeks now. MSPs say the standard clarifies the language of the previous iteration, which had some enterprises confused. The PCI standard prescribes security measures for businesses that handle customers’ credit card data. The PCI Security Standards Council, which manages the standard, said most of the changes in the new version “are modifications to the language, which clarify the meaning of the requirements and make adoption easier for merchants.” Here’s the update.

Not a bad idea, considering the case of one company that took too literally a directive to classify media so it could be identified as confidential. The company affixed “confidential” stickers to its backup tapes — a move that rather subverted the intent of the standard by inadvertently creating a tempting target.

Eric Browning, security engineer at SecureWorks Inc., a managed security services provider, related that story, noting that the MSSP quickly put the customer on a surer security footing.

But it’s not just end customers that have experienced language issues. Browning said some Qualified Security Assessors (QSAs), the companies that check for PCI DSS compliance, interpreted the previous standard as prohibiting virtualization. He said that was not the case, adding that SecureWorks, a QSA itself, has been advising customers accordingly. The latest standard takes up the issue, including wording that “explicitly allows virtualization to take place in a PCI environment,” Browning explained.

Still to Come

More advice on PCI and virtualization is forthcoming. One lingering question centers on whether every virtual machine on a server falls within the scope of PCI DSS, or whether one virtual machine may be deemed in scope and another not. Browning said a PCI council special interest group will provide guidance on scoping early next year.

Rahul Bakshi, vice president of managed services strategy and solutions design at  SunGard Availability Services, said the new standard also impacts monitoring and reporting.

In that area, the standard provides additional clarity around “not just being able to monitor things, but also making sure you are actually capturing the data you are monitoring,” Bakshi said.

In the case of automated monitoring, organizations need to make sure they have established the appropriate controls and alerting processes.

Overall, Bakshi said PCI DSS 2.0 marks the continued maturation of the standard. It also ushers in a three-year lifecycle for standards development as opposed to the previous two-year cycle.

That’s all the more time for MSPs and their customers to learn the language.


<Return to section navigation list> 

Cloud Computing Events

Steve Plank (@plankytronixx) announced on 12/6/2010 that he’ll present Plankytronixx Academy: Windows Azure Connect - Live Meeting on 15th December 16:00-17:15 UK time (08:00–09:15 PST):

I will be presenting a Live Meeting session on Windows Azure Connect for 1 hour 15 minutes on 15th December 16:00 UK time.

The session will include an overview of the networking and domain-join aspects of Windows Azure Connect, including demos of both features. I’ll also talk about things this is useful for and also when you might not want to use it, going in to a little more detail on the various blog posts I’ve made about these things.

You can join the meeting with this link on the 15th December at 16:00 UK time (08:00 PST).

If there are specific things you’d like me to cover, please leave comments and I’ll try to accommodate you, though please bear in mind we only have 1:15 and many people will know almost nothing about it at all.

I thought I’d do a series of these things I’ve nominally monikered “Plankytronixx Academy”. This one is the first…


If there’s a reasonable turnout and it’s a success I’ll do more of them.

From the look of the above image, Plankytronixx appears to be a flight school for aspiring DC-3 (known as a Dakota in the UK) pilots. Not sure what the aircraft has to do with Azure Connect.

Michael Coté reported on 12/6/2010 Real cloud aren’t fluffy – an upcoming webinar scheduled for 12/15/2010 at 9:00 AM PST:

Want to hear more practical talk about using cloud computing? I’ll be part of a free (of course) webinar next week discussing such stuff, along with a RedMonk client. If you dial into the live webinar, they’re giving away a ride in a jet fighter.

The Agenda

Here’s the description from them:


“The Cloud” offers enormous opportunity for modern business, and every IT vendor is talking about it. But what does it really mean? What is the public cloud? Is “private cloud” even a real thing? How could you use cloud services in your business today? Go behind the hype with Michael Coté from Redmonk and experts from the sponsor for a real-world, down-to-earth discussion. Michael will provide an analyst perspective on cloud services and opportunities, and will show a live demonstration of how you can make the most of this technology today.

We’re set for a discussion, question & answer format so you won’t have to put up with dreary slides from me.

When and registering

There are two broadcasts (and a recording afterwards, of course): Wednesday, December 15th at 9:00 AM PST, and Thursday, December 16th at 9:00 AM GMT.

If you’re interested, go on over there and register for it – tell me what you thought of it afterwards, and good luck with that jet ride and all ;>

Disclosure: the webinar’s sponsor is a client and is paying for my participation.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Mary Jo Foley reported Microsoft seeks to lure Salesforce users with new promotion for Dynamics CRM Online in this 12/6/2010 post to ZDNet’s All About Microsoft blog:

On the opening day of Salesforce.com’s Dreamforce user conference, Microsoft is going on the offensive by launching a new promotion aimed at getting Salesforce users to switch to its Dynamics CRM Online offering.

Microsoft is publishing an open letter targeted at Salesforce customers detailing the new “Cloud CRM for Less” offer. The deal, which runs from December 6 through June 30, 2011, calls for Microsoft to supply companies who switch to Microsoft’s CRM Online a rebate of $200 per user.

“This rebate can be applied for services to help you switch – such as migrating your data or customizing the solution for your unique business needs,” according to the terms and conditions. Switcher case studies and CRM Online demos are available on Microsoft’s site.

Microsoft’s CRM Online 2011 product is looking like a January 2011 deliverable. The on-premises software complement is due to launch in Q1 2011, shortly after the online version goes live.

Via today’s promotion, Microsoft is focusing its messaging around cost.

“At Microsoft, we do not believe you should be forced to pay a premium to achieve business success,” said Michael Park, Corporate Vice President, Sales, Marketing and Operations for Microsoft Business Solutions, via the Microsoft open letter.

Microsoft is playing up the fact that Salesforce.com’s Enterprise Edition product is two to three times more expensive than the comparable Dynamics CRM Online offering. The list price of Salesforce’s Enterprise Edition is $125 per user per month.

Microsoft officials said this fall that the company will be offering its CRM Online 2011 service for a promotional introductory price of $34 per user per month (for the first year the product is available).

The Softies also are highlighting Microsoft’s “financially backed 99.9 percent uptime commitment for every Microsoft Dynamics CRM Online customer,” its CRM analytics tools, and its integration with Outlook and Office as competitive selling points.


Lydia Leong (@cloudpundit) asked What does the cloud mean to you? in a 12/5/2010 post to her CloudPundit blog:

My [Gartner] Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting is done. The last week has been spent in discussions with service providers over their positioning and the positioning of their competitors and the whys and wherefores and whatnots. That has proven to be remarkably interesting this year, because it’s been full of angry indignation from providers claiming diametrically opposed things about the market.

Gartner gathers its data about what people want in two ways — from primary research surveys, and, often more importantly, from client inquiry, the IT organizations who are actually planning to buy things or better yet are actually buying things. I currently see a very large number of data points — a dozen or more conversations of this sort a day, much of it focused on buying cloud IaaS.

And so when a provider tells me, “Nobody in the market wants to buy X!”, I generally have a good base from which to judge whether or not that’s true, particularly since I’ve got an entire team of colleagues here looking at cloud stuff. It’s never that those customers don’t exist; it’s that the provider’s positioning has essentially guaranteed that they don’t see the deals outside their tunnel vision service.

The top common fallacy, overwhelmingly, is that enterprises don’t want to buy from Amazon. I’ve blogged previously about how wrong this is, but at some point in the future, I’m going to have to devote a post (or even a research note) to why this is one of the single greatest, and most dangerous, delusions that a cloud provider can have. If you offer cloud IaaS, or heck, you’re a data-center-related business, and you think you don’t compete with Amazon, you are almost certainly wrong. Yes, even if your customers are purely enterprise — especially if your customers are large enterprises.

The fact of the matter is that the people out there are looking at different slices of cloud IaaS, but they are still slices of the same market. This requires enough examination that I’m actually going to write a research note instead of just blogging about it, but in summary, my thinking goes like this (crudely segmented, saving the refined thinking for a research note):

There are customers who want self-managed IaaS. They are confident and comfortable managing their infrastructure on their own. They want someone to provide them with the closest thing they can get to bare metal, good tools to control things (or an API they can use to write their own tools), and then they’ll make decisions about what they’re comfortable trusting to this environment.

There are customers who want lightly-managed IaaS, which I often think of as “give me raw infrastructure, but don’t let me get hacked” — which is to say, OS management (specifically patch management) and managed security. They’re happy managing their own applications, but would like someone to do all the duties they typically entrust to their junior sysadmins.

There are customers who want complex management, who really want soup-to-nuts operations, possibly also including application management.

And then in each of these segments, you can divide customers into those with a single application (which may potentially have multiple components and be highly complex) and those who have a whole range of workloads that encompass more general data center needs. That drives different customer behaviors and different service requirements.

Claiming that there’s no “real” enterprise market for self-managed is just as delusional as claiming there’s no market for complex management. They’re different use cases in the same market, and customers often start out confused about where they fall along this spectrum, and many customers will eventually need solutions all along this spectrum.

Now, there’s absolutely an argument to be made that the self-managed and lightly-managed segments together represent an especially important segment of the market, where a high degree of innovation is taking place. It means that I’m writing some targeted research — selection notes, a Critical Capabilities rating of individual services, probably a Magic Quadrant that focuses specifically on this next year. But the whole spectrum is part of the cloud IaaS adoption phenomenon, and any individual segment isn’t representative of the total market evolution.

Jeff Barr (@jeffbarr) described Amazon Route 53 - The AWS Domain Name Service in a 12/5/2010 post to the Amazon Web Services blog:

In 1995 I registered my first domain name and put it online. Back then, registration was expensive and complex. Before you could even register a domain you had to convince at least two of your friends to host the Domain Name Service (DNS) records for it. These days, domain registration is inexpensive and simple. DNS hosting has also been simplified, but it is still a human-powered forms-based process.

Today we are introducing Amazon Route 53, a programmable Domain Name Service. You can now create, modify, and delete DNS zone files for any domain that you own. You can do all of this under full program control—you can easily add and modify DNS entries in response to changing circumstances. For example, you could create a new sub-domain for each new customer of a Software as a Service (SaaS) application. DNS queries for information within your domains will be routed to a global network of 16 edge locations tuned for high availability and high performance.

Route 53 introduces a new concept called a Hosted Zone. A Hosted Zone is equivalent to a DNS zone file. It begins with the customary SOA (Start of Authority) record and can contain other records such as A (IPv4 address), AAAA (IPv6 address), CNAME (canonical name), MX (mail exchanger), NS (name server), and SPF (Sender Policy Framework). You have full control over the set of records in each Hosted Zone.

You start out by creating a new Hosted Zone for a domain. The new zone will contain one SOA record and four NS records. Then you can post batches of changes (additions, deletions, and alterations) to the Hosted Zone. You'll get back a change id for each batch. You can poll Route 53 to verify that the changes in the batch (as identified by the change id) have been propagated to all of the name servers (this typically takes place within 60 seconds).
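As an illustration of what one of those posted change batches looks like, the sketch below assembles a ChangeResourceRecordSets request body in the XML shape of the 2010-10-01 API. This is not official SDK code: request signing and the HTTP POST itself are omitted, and the record name and address are documentation placeholders.

```python
# Sketch of a Route 53 change batch (2010-10-01 API XML shape).
# Signing and the HTTP POST are omitted; names/addresses are placeholders.

CHANGE_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2010-10-01/">
  <ChangeBatch>
    <Changes>{changes}</Changes>
  </ChangeBatch>
</ChangeResourceRecordSetsRequest>"""

def make_change(action, name, rtype, ttl, value):
    """Build one <Change> element (CREATE or DELETE) for a record set."""
    return (
        "<Change>"
        f"<Action>{action}</Action>"
        "<ResourceRecordSet>"
        f"<Name>{name}</Name><Type>{rtype}</Type><TTL>{ttl}</TTL>"
        "<ResourceRecords><ResourceRecord>"
        f"<Value>{value}</Value>"
        "</ResourceRecord></ResourceRecords>"
        "</ResourceRecordSet>"
        "</Change>"
    )

def change_batch(changes):
    """Wrap a list of <Change> elements in a full request body."""
    return CHANGE_TEMPLATE.format(changes="".join(changes))

batch = change_batch(
    [make_change("CREATE", "www.example.com.", "A", 300, "192.0.2.1")]
)
```

Because a batch can mix actions, modifying an existing record is expressed as a DELETE of the old record set plus a CREATE of the new one in the same batch.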

The zone's status will change from PENDING to INSYNC when all of the changes have been propagated. You can update your domain registration with the new nameservers at this point. Our Route 53 Getting Started Guide contains a complete guide to getting started with a new Hosted Zone.

Each record in a Hosted Zone can refer to AWS or non-AWS resources as desired. This means that you can use Route 53 to provide DNS services for any desired combination of traditional and cloud-based resources, and that you can switch back and forth quickly and easily.

You can access Route 53 using a small set of REST APIs. Toolkit and AWS Management Console support is on the drawing board, as is support for the so-called "Zone Apex" issue.

Route 53 will cost you $1 per month per Hosted Zone, $0.50 per million queries for the first billion queries per month, and $0.25 per million queries after that.  Most sites typically see an order of magnitude fewer DNS queries than page views. If your site gets one million page views per month, it would be reasonable to expect about 100,000 DNS queries per month. In other words, one billion queries is a lot of queries and many sites won’t come anywhere near this number. The results of a DNS query are cached by clients. You could set a high TTL (Time to Live) on the records in your Hosted Zone in order to reduce the number of queries and the cost.
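The pricing arithmetic above can be captured in a small estimator. This sketch reflects only the rates quoted in this post and is not a billing tool:

```python
def route53_monthly_cost(hosted_zones, queries):
    """Estimate a monthly Route 53 bill from the quoted rates:
    $1 per Hosted Zone, $0.50 per million queries for the first
    billion queries per month, and $0.25 per million after that."""
    first = min(queries, 1_000_000_000)
    rest = max(queries - 1_000_000_000, 0)
    return hosted_zones * 1.00 + (first / 1e6) * 0.50 + (rest / 1e6) * 0.25

# A site with ~1M page views (roughly 100K DNS queries) pays about $1.05/month.
cost = route53_monthly_cost(1, 100_000)
```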

Route 53 supports up to 100 Hosted Zones per AWS account. If you need more, simply contact us and we'll be happy to help.

The Route 53 / CloudFront team has openings for several software developers and a senior development manager.

Werner Vogels (@werner) added Route 53 background with Expanding the Cloud with DNS - Introducing Amazon Route 53 on 12/5/2010:

I am very excited that today we have launched Amazon Route 53, a high-performance and highly-available Domain Name System (DNS) service. DNS is one of the fundamental building blocks of internet applications and has been high on our customers' wish list for some time. Route 53 has the business properties you have come to expect from an AWS service: fully self-service and programmable, with transparent pay-as-you-go pricing and no minimum usage commitments.

Some Fundamentals on Naming

Naming is one of the fundamental concepts in distributed systems. Entities in a system are identified by their name, which is separate from the way you choose to access that entity, the address where the access point resides, and the route to take to get to that address.

A simple example is the situation with Persons and Telephones; a person has a name, a person can have one or more telephones and each phone can have one or more telephone numbers. To reach an individual you will look up him or her in your address book, and select a phone (home, work, mobile) and then a number to dial. The number will be used to route the call through the myriad of switches to its destination. The person is the entity with its name, the phones are access points and the phones numbers are addresses.

Names do not necessarily need to be unique, but it makes life a lot easier if they are. There is more than one Werner Vogels in this world, and although I never get emails, snail mail, or phone calls intended for any of my peers, I am sure they are somewhat frustrated when they type our name into a search engine :-).

In distributed systems we use namespaces to ensure that we can create rich naming without having to continuously worry about whether these names are indeed globally unique. Often these namespaces are hierarchical in nature such that it becomes easier to manage them and to decentralize control, which makes the system more scalable. The naming system that we are all most familiar with in the internet is the Domain Name System (DNS) that manages the naming of the many different entities in our global network; its most common use is to map a name to an IP address, but it also provides facilities for aliases, finding mail servers, managing security keys, and much more. The DNS namespace is hierarchical in nature and managed by organizations called registries in different countries. Domain registrars are the commercial interface between the DNS registries and those wishing to manage their own namespace.

DNS is an absolutely critical piece of the internet infrastructure. If it is down or does not function correctly, almost everything breaks down. It would not be the first time a customer thought his EC2 instance was down when in reality some name server somewhere was not functioning correctly.

DNS looks relatively simple on the outside, but is pretty complex on the inside. To ensure that this critical component of the internet scales and is robust in the face of outages, replication is used pervasively using epidemic style techniques. The DNS is one of those systems that rely on Eventual Consistency to manage its globally replicated state.

While registrars manage the namespace in the DNS naming architecture, DNS servers are used to provide the mapping between names and the addresses used to identify an access point. There are two main types of DNS servers: authoritative servers and caching resolvers. Authoritative servers hold the definitive mappings. Authoritative servers are connected to each other in a top down hierarchy, delegating responsibility to each other for different parts of the namespace. This provides the decentralized control needed to scale the DNS namespace.

But the real robustness of the DNS system comes from the way lookups are handled, which is where caching resolvers come in. Resolvers operate in a completely separate, bottom-up hierarchy, starting with software caches in a browser or the OS and moving up to a local resolver or a regional resolver operated by an ISP or a corporate IT service. Caching resolvers are able to find the right authoritative server to answer any question, and then use eventual consistency to cache the result. These caching techniques ensure that the DNS system doesn't get overloaded with queries.
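The caching behavior just described can be illustrated with a toy resolver that honors TTLs. This is purely pedagogical (real resolvers also handle referrals, negative caching, and much more), and `authoritative_lookup` is a stand-in for querying an upstream server:

```python
import time

class CachingResolver:
    """Toy caching resolver: answers are served from cache until their
    TTL expires, after which the authoritative source is consulted again.
    `authoritative_lookup(name)` must return an (answer, ttl_seconds) pair."""

    def __init__(self, authoritative_lookup):
        self._lookup = authoritative_lookup
        self._cache = {}  # name -> (answer, expiry timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        cached = self._cache.get(name)
        if cached and cached[1] > now:
            return cached[0]  # cache hit: upstream is not consulted
        answer, ttl = self._lookup(name)
        self._cache[name] = (answer, now + ttl)
        return answer
```

This is also why a high TTL on a record reduces query volume at the price of slower visibility of changes.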

The Domain Name System is a wonderful practical piece of technology; it is a fundamental building block of our modern internet. As always there are many improvements possible, and many in the area of security and robustness are always in progress.

Amazon Route 53

Amazon Route 53 is a new service in the Amazon Web Services suite that manages DNS names and answers DNS queries. Route 53 provides Authoritative DNS functionality implemented using a world-wide network of highly-available DNS servers. Amazon Route 53 sets itself apart from other DNS services that are being offered in several ways:

A familiar cloud business model: A complete self-service environment with no sales people in the loop. No upfront commitments are necessary and you pay only for what you have used. The pricing is transparent; no bundling is required and no overage fees are charged.

Very fast update propagation times: One of the difficulties with many of the existing DNS services is their very long update propagation time; sometimes it can take up to 24 hours before updates reach all replicas. Modern systems require much faster update propagation, for example to deal with outages. We have designed Route 53 to propagate updates very quickly and to give the customer the tools to find out when all changes have been propagated.
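Those "tools to find out when all changes have been propagated" map to Route 53's documented change-status model, where a submitted change reports `PENDING` until all servers have it and then `INSYNC`. The polling loop below is a hedged sketch: the `client` object and its `get_change` method are hypothetical stand-ins for whatever Route 53 library you use.

```python
import time

def wait_for_propagation(client, change_id, poll_seconds=10, timeout=300):
    """Poll until a Route 53 change reports INSYNC (propagated everywhere).

    `client.get_change(change_id)` is assumed to return the change status
    string; PENDING/INSYNC match the service's documented statuses, but the
    client interface here is illustrative, not a real SDK API.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if client.get_change(change_id) == "INSYNC":
            return True          # change visible at all replicas
        time.sleep(poll_seconds)
    return False                 # timed out while still PENDING
```

In practice you would submit a record-set change, capture the change ID from the response, and then call a loop like this before flipping traffic to a new endpoint.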

Low-latency query resolution: The query resolution functionality of Route 53 is based on anycast, which automatically routes each request to the closest DNS server. This achieves very low latency for queries, which is crucial for the overall performance of internet applications. Anycast is also very robust in the presence of network or server failures, as requests are automatically routed to the next-closest server.

No lock-in: While we have made sure that Route 53 works really well with other Amazon services such as Amazon EC2 and Amazon S3, you are not restricted to using it within AWS. You can use Route 53 with any of the resources and entities that you want to control, whether they are in the cloud or on premises.

We chose the name "Route 53" as a play on the fact that DNS servers respond to queries on port 53. But in the future we plan for Route 53 to also give you greater control over the final aspect of distributed-system naming: the route your users take to reach an endpoint. If you want to learn more about Route 53, read the blog post at the AWS Developer weblog.

James Hamilton rang in with Amazon Route 53 DNS Service on 12/6/2010:

Even working in Amazon Web Services, I’m finding the frequency of new product announcements and updates a bit dizzying. It’s amazing how fast the cloud is taking shape and the feature set is filling out. Utility computing has really been on fire over the last 9 months. I’ve never seen an entire new industry created and come fully to life this fast. Fun times.

Before joining AWS, I used to say that I had an inside line on what AWS was working on and what new features were coming in the near future. My trick? I went to AWS customer meetings and just listened. AWS delivers what customers are asking for with such regularity that it’s really not all that hard to predict new product features soon to be delivered. This trend continues with today’s announcement. Customers have consistently been asking for a Domain Name Service and, today, AWS is announcing the availability of Route 53: a scalable, highly redundant, reliable, global DNS service.

The Domain Name System is essentially a global, distributed database that allows various pieces of information to be associated with a domain name. In the most common case, DNS is used to look up the numeric IP address for a domain name. For example, when your browser accessed this blog (assuming you came here directly rather than using RSS), it would have looked up the blog's domain name to get an IP address. This mapping is stored in a DNS “A” (address) record. Other popular DNS records are CNAME (canonical name), MX (mail exchange), and SPF (Sender Policy Framework). A full list of DNS record types is available online. Route 53 currently supports:

  • A (address record)
  • AAAA (IPv6 address record)
  • CNAME (canonical name record)
  • MX (mail exchange record)
  • NS (name server record)
  • PTR (pointer record)
  • SOA (start of authority record)
  • SPF (sender policy framework)
  • SRV (service locator)
  • TXT (text record)
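The most common of these, the A record, can be exercised from any runtime through the operating system's resolver. The Python sketch below is illustrative (the function name is my own); it resolves a host name to its IPv4 addresses using only the standard library.

```python
import socket

def lookup_a_records(name):
    """Return the sorted IPv4 addresses (A records) for a host name.

    This asks the OS resolver, which consults its cache and then DNS --
    an illustration of A-record resolution, not a Route 53 API call.
    """
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, (ip, port)).
    return sorted({info[4][0] for info in infos})

# "localhost" resolves locally on most systems, without a network query.
print(lookup_a_records("localhost"))
```

Record types like MX or TXT need a DNS library that speaks the protocol directly, since the OS resolver interface only exposes address lookups.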

DNS, on the surface, is fairly simple and easy to understand. What is difficult with DNS is providing absolute rock-solid stability at scales ranging from one request per day on some domains to billions on others. Running a DNS service that is rock-solid, low-latency, and highly reliable is hard. And it’s just the kind of problem that loves scale. Scale allows more investment in the underlying service and supports a wide, many-datacenter footprint.

The AWS Route 53 Service is hosted in a global network of edge locations including the following 16 facilities:

United States

  • Ashburn, VA
  • Dallas/Fort Worth, TX
  • Los Angeles, CA
  • Miami, FL
  • New York, NY
  • Newark, NJ
  • Palo Alto, CA
  • Seattle, WA
  • St. Louis, MO


  • Amsterdam
  • Dublin
  • Frankfurt
  • London


  • Hong Kong
  • Tokyo
  • Singapore

Many DNS lookups are resolved in local caches but, when there is a cache miss, the query must be routed to an authoritative name server. The right approach to answering these requests with low latency is to route each query to the nearest datacenter hosting an appropriate DNS server. In Route 53 this is done using anycast. Anycast is a clever routing trick where the same IP address range is advertised from many different locations; here, from each datacenter in the world-wide fleet. This results in each request being routed to the nearest facility from a network perspective.

Route 53 routes to the nearest datacenter to deliver low-latency, reliable results. This is good, but Route 53 is not the only DNS service that is well implemented over a globally distributed fleet of datacenters. What makes Route 53 unique is that it’s a cloud service. Cloud means the price is advertised rather than negotiated. Cloud means you make an API call rather than talking to a sales representative. Cloud means it’s a simple API and you don’t need professional services or a customer support contact. And cloud means it’s running NOW rather than tomorrow morning when the administration team comes in. Offering a rock-solid service is half the battle, but it’s the cloud aspects of Route 53 that are most interesting.

Route 53 pricing is advertised and available to all:

  • Hosted Zones: $1 per hosted zone per month
  • Requests: $0.50 per million queries for the first billion queries per month and $0.25 per million queries beyond 1 billion per month
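As a back-of-the-envelope check on that price list, a monthly bill can be computed directly. The function below is my own sketch using the numbers advertised above; always verify against the current pricing page before relying on it.

```python
def route53_monthly_cost(hosted_zones, queries):
    """Estimate a monthly Route 53 bill in USD from the advertised prices:
    $1.00 per hosted zone, $0.50 per million queries for the first billion,
    and $0.25 per million queries beyond one billion per month."""
    first_tier = min(queries, 1_000_000_000)
    overflow = max(queries - 1_000_000_000, 0)
    return (hosted_zones * 1.00
            + first_tier / 1_000_000 * 0.50
            + overflow / 1_000_000 * 0.25)

# One zone serving a million queries a month costs about $1.50.
print(route53_monthly_cost(1, 1_000_000))
```

Tiered pricing like this is why the marginal cost per query drops once a domain crosses the billion-query threshold.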

You can have it running in less time than it took to read this posting. Go to the Route 53 details page. You don’t need to talk to anyone, negotiate a volume discount, hire a professional services team, call the customer support group, or wait until tomorrow. Make the API calls to set it up and, on average, 60 seconds later you are fully operational.

Tim Anderson (@timanderson) posted Google App Engine and why vendor honesty pays on 12/6/2010:

I’ve just attended a Cloudstock session on Google App Engine and new Google platform technologies – an introductory talk by Google’s Christian Schalk.

App Engine has been a subject of considerable debate recently, thanks to a blog post by Carlos Ble called Goodbye App Engine:

Choosing GAE as the platform four our project is a mistake which cost I estimate in about 15000€. Considering it’s been my money, it is a "bit" painful.

Ble’s point is that App Engine has many limitations. Since Google tends not to highlight these in its marketing, Ble discovered them as he went, causing frustration and costly workarounds. In addition, the platform has not proved reliable:

Once you overcome all the limitations with your complex code, you are supposed to gain scalabilty for millions of users. After all, you are hosted by Google. This is the last big lie.

Since the last update they did in september 2010, we starting facing random 500 error codes that some days got the site down 60% of the time.

Ble has now partially retracted his post.

I am rewriting this post is because Patrick Chanezon (from Google), has added a kind and respectful comment to this post. Given the huge amount of traffic this post has generated (never expected nor wanted) I don’t want to damage the GAE project which can be a great platform in the future.

He is still not exactly positive, and adds:

I also don’t want to try Azure. The more experience I gain, the less I trust platforms/frameworks which code I can’t see.

Ble’s post is honest, but many of the issues are avoidable, and arguably his main error was not researching the platform more thoroughly before diving in. He blames the platform for issues that in some cases are implementation mistakes.

Still, here at Cloudstock I was interested to see if Schalk was going to mention any of these limitations or respond to Ble’s widely-read post. The answer is no – I got the impression that anything you can do in Java or Python, you can do on App Engine, with unlimited scalability thrown in.

My view is that it pays vendors to explain the “why not” as well as the “why” of using their platform. Otherwise there is a risk of disillusionment, and disillusioned customers are hard to win back.

Related posts:

  1. One day of hacks, REST and cloud: Cloudstock
  2. Google App Engine 25% ready for prime time
  3. Google App Engine is easier than Windows Azure for getting started

Re item 3: For Pythonistas, perhaps, but not for .NET developers (or at least me.)

<Return to section navigation list>