Monday, November 15, 2010

Windows Azure and Cloud Computing Posts for 11/12/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Update 11/14/2010: Articles marked

• Update 11/13/2010: Articles marked

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters also are available for HTTP download at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Peter Kellner posted Building a Simple Azure Blob Tree Viewer With Azure StorageClient API on 11/12/2010:

Understanding how Azure Blob Storage can be used to simulate directory structures is a little tricky, to say the least. I’ve got a long forum thread on the Windows Azure Community site discussing the details. As always, Steve Marx has been a big help here with a bunch of code. Steve’s got a great blog where he provides lots of examples and insights. Neil Mackenzie has also contributed to getting to the answer.

Just so we now have an example, I’ve put together a simple Windows Forms app that lets you set a few variables in your app.config to point at your Azure storage account and container, lets you view your blobs as a tree, and shows the code for how it can be done. I have not commented the code much; I just thought it would be good to get it out there. The running application shows you the data as follows.


So, for the details, I’m pasting the meat of the code below. Basically, it does what you would expect in terms of iterating through the directories recursively to build the list. Again, just set your parameters in your app.config as follows:


and you can run it for yourself and see how it goes:
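Peter’s full listing isn’t reproduced here, but the heart of any such viewer is a recursive walk over the StorageClient API’s CloudBlobDirectory objects, which model the '/'-delimited blob names as virtual folders. A rough sketch (my own illustration, not Peter’s actual code; the WinForms TreeNode usage is an assumption):

```csharp
// Sketch: recursively populate a TreeView from a blob container's virtual
// directory structure (Windows Azure StorageClient, v1.x-era API).
using System.Collections.Generic;
using System.Windows.Forms;
using Microsoft.WindowsAzure.StorageClient;

static class BlobTreeBuilder
{
    public static void AddChildren(TreeNode node, IEnumerable<IListBlobItem> items)
    {
        foreach (var item in items)
        {
            var dir = item as CloudBlobDirectory;
            if (dir != null)
            {
                // A CloudBlobDirectory is a virtual folder derived from '/'
                // separators in blob names; recurse into it.
                var segments = dir.Uri.Segments;
                var child = node.Nodes.Add(segments[segments.Length - 1]);
                AddChildren(child, dir.ListBlobs());
            }
            else
            {
                // Leaf node: an actual blob.
                var blob = (CloudBlob)item;
                var segments = blob.Uri.Segments;
                node.Nodes.Add(segments[segments.Length - 1]);
            }
        }
    }
}
```

Starting the walk from container.ListBlobs() (without a flat-listing option) yields the top-level virtual directories and blobs.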

Joe Giardino of the Windows Azure Storage Team warned developers on 11/12/2010 that Windows Azure Storage Client Library: CloudBlob.DownloadToFile() may not entirely overwrite file contents:


There is an issue in the Windows Azure Storage Client Library that can lead to unexpected behavior when utilizing the CloudBlob.DownloadToFile() methods.

The current implementation of CloudBlob.DownloadToFile() does not erase or clear any preexisting data in the file. Therefore, if you download a blob which is smaller than the existing file, preexisting data at the tail end of the file will still exist.

Example: Let’s say I have a blob titled movieblob that currently contains all the movies that I would like to watch in the future. I want to download this blob to a local file, moviesToWatch.txt, which currently contains a lot of romantic comedies my wife recently watched. When I overwrite that file with the action movies I want to watch (which happens to be a smaller list), the existing text is not completely overwritten, which may lead to a somewhat random movie selection.


moviesToWatch.txt (original)

You've Got Mail;P.S. I Love You.;Gone With The Wind;Sleepless in Seattle;Notting Hill;Pretty Woman;The Runaway Bride;The Holiday;Little Women;When Harry Met Sally


movieblob

The Dark Knight;The Matrix;Braveheart;The Core;Star Trek 2:The Wrath of Khan;The Dirty Dozen;

moviesToWatch.txt (updated)

The Dark Knight;The Matrix;Braveheart;The Core;Star Trek 2:The Wrath of Khan;The Dirty Dozen;Woman;The Runaway Bride;The Holiday;Little Women;When Harry Met Sally

As you can see in the updated local moviesToWatch.txt file, the last section of the previous movie data still exists on the tail end of the file.
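The file-system behavior behind this is easy to reproduce without Azure at all: opening an existing file without truncating it and writing fewer bytes than it already holds leaves the old tail in place. A minimal sketch using only System.IO (my own illustration, not the Storage Client code itself):

```csharp
using System;
using System.IO;

class StaleTailDemo
{
    static void Main()
    {
        File.WriteAllText("movies.txt", "You've Got Mail;Sleepless in Seattle;");

        // FileMode.OpenOrCreate opens the existing file WITHOUT clearing it,
        // which is effectively what DownloadToFile() does today.
        using (var fs = new FileStream("movies.txt", FileMode.OpenOrCreate, FileAccess.Write))
        using (var writer = new StreamWriter(fs))
        {
            writer.Write("The Matrix;");
        }

        // The old data past the newly written bytes survives:
        // "The Matrix;Mail;Sleepless in Seattle;"
        Console.WriteLine(File.ReadAllText("movies.txt"));
    }
}
```

File.Create, by contrast, truncates the file to zero length, which is why the DownloadToStream() workaround the Storage Team recommends behaves correctly.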

This issue will be addressed in a forthcoming release of the Storage Client Library.


To avoid this behavior, you can use the CloudBlob.DownloadToStream() method and pass in the stream for a file on which you have already called File.Create; see below.

using (var stream = File.Create("myFile.txt"))
{
    // blob is the CloudBlob being downloaded; File.Create truncates
    // any existing file, so no stale data survives.
    blob.DownloadToStream(stream);
}

To reiterate, this issue only affects scenarios where the file already exists, and the downloaded blob contents are less than the length of the previously existing file. If you are using CloudBlob.DownloadToFile() to write to a new file then you will be unaffected by this issue. Until the issue is resolved in a future release, we recommend that users follow the pattern above.

Rainer Stropek described a Custom SSIS Data Source For Loading Azure Tables Into SQL Server with a 00:03:29 video segment and downloadable source code in an 11/11/2010 post:

Yesterday the weather in Frankfurt was horrible, so my plane from Berlin was late. I missed my connecting flight to Linz and had to stay in a hotel in Frankfurt. Therefore I had some time, and I used it to implement a little sample showing how you can use a custom SSIS data source to easily transfer data from Windows Azure Table Storage to SQL Server databases using the ETL tool "SQL Server Integration Services" (SSIS).

Here is the source code for download. Please remember:

  1. This is just a sample.
  2. The code has not been tested.
  3. If you want to use this stuff you have to compile and deploy it. Check out the post-build actions in the project to see which DLLs you have to copy to which folders in order to make them run.

Let's start by demonstrating how the resulting component works inside SSIS. For this I have created this very short video:

Now let's take a look at the source code.

Reading an Azure Table without a fixed class

The first problem that has to be solved is reading data from an Azure table without knowing its schema at compile time. There is an excellent post covering that in the Azure Community pages. I took the source code shown there and extended/modified it a little bit so that it fits what I needed.

The first class is just a helper representing a column in the table store (Column.cs):

using System;
using Microsoft.SqlServer.Dts.Runtime.Wrapper;

namespace TableStorageSsisSource
{
    public class Column
    {
        public Column(string columnName, string typeName, string valueAsString)
        {
            this.ColumnName = columnName;
            this.ClrType = Column.GetType(typeName);
            this.DtsType = Column.GetSsisType(typeName);
            this.Value = Column.GetValue(this.DtsType, valueAsString);
        }

        public string ColumnName { get; private set; }
        public Type ClrType { get; private set; }
        public DataType DtsType { get; private set; }
        public object Value { get; private set; }

        private static Type GetType(string type)
        {
            switch (type)
            {
                case "Edm.String": return typeof(string);
                case "Edm.Int32": return typeof(int);
                case "Edm.Int64": return typeof(long);
                case "Edm.Double": return typeof(double);
                case "Edm.Boolean": return typeof(bool);
                case "Edm.DateTime": return typeof(DateTime);
                case "Edm.Binary": return typeof(byte[]);
                case "Edm.Guid": return typeof(Guid);
                default: throw new NotSupportedException(string.Format("Unsupported data type {0}", type));
            }
        }

        private static DataType GetSsisType(string type)
        {
            switch (type)
            {
                case "Edm.String": return DataType.DT_NTEXT;
                case "Edm.Binary": return DataType.DT_IMAGE;
                case "Edm.Int32": return DataType.DT_I4;
                case "Edm.Int64": return DataType.DT_I8;
                case "Edm.Boolean": return DataType.DT_BOOL;
                case "Edm.DateTime": return DataType.DT_DATE;
                case "Edm.Guid": return DataType.DT_GUID;
                case "Edm.Double": return DataType.DT_R8;
                default: throw new NotSupportedException(string.Format("Unsupported data type {0}", type));
            }
        }

        private static object GetValue(DataType dtsType, string valueAsString)
        {
            switch (dtsType)
            {
                case DataType.DT_NTEXT: return valueAsString;
                case DataType.DT_IMAGE: return Convert.FromBase64String(valueAsString);
                case DataType.DT_BOOL: return bool.Parse(valueAsString);
                case DataType.DT_DATE: return DateTime.Parse(valueAsString);
                case DataType.DT_GUID: return new Guid(valueAsString);
                // DT_I4 is a 32-bit and DT_I8 a 64-bit integer, matching the
                // Edm.Int32/Edm.Int64 mapping in GetSsisType above.
                case DataType.DT_I4: return Int32.Parse(valueAsString);
                case DataType.DT_I8: return Int64.Parse(valueAsString);
                case DataType.DT_R8: return double.Parse(valueAsString);
                default: throw new NotSupportedException(string.Format("Unsupported data type {0}", dtsType));
            }
        }
    }
}

The second class represents a row inside the table store (without a strong schema; GenericEntity.cs):

using System.Collections.Generic;
using Microsoft.WindowsAzure.StorageClient;

namespace TableStorageSsisSource
{
    public class GenericEntity : TableServiceEntity
    {
        private Dictionary<string, Column> properties = new Dictionary<string, Column>();

        public Column this[string key]
        {
            get { return this.properties.ContainsKey(key) ? this.properties[key] : null; }
            set { this.properties[key] = value; }
        }

        public IEnumerable<Column> GetProperties()
        {
            return this.properties.Values;
        }

        public void SetProperties(IEnumerable<Column> properties)
        {
            foreach (var property in properties)
                this[property.ColumnName] = property;
        }
    }
}

Last but not least we need a context class that interprets the AtomPub format and builds the generic content objects (GenericTableContent.cs):

using System;
using System.Data.Services.Client;
using System.Linq;
using System.Xml.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

namespace TableStorageSsisSource
{
    public class GenericTableContext : TableServiceContext
    {
        public GenericTableContext(string baseAddress, StorageCredentials credentials)
            : base(baseAddress, credentials)
        {
            this.IgnoreMissingProperties = true;
            this.ReadingEntity += new EventHandler<ReadingWritingEntityEventArgs>(GenericTableContext_ReadingEntity);
        }

        public GenericEntity GetFirstOrDefault(string tableName)
        {
            return this.CreateQuery<GenericEntity>(tableName).FirstOrDefault();
        }

        private static readonly XNamespace AtomNamespace = "http://www.w3.org/2005/Atom";
        private static readonly XNamespace AstoriaDataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices";
        private static readonly XNamespace AstoriaMetadataNamespace = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

        private void GenericTableContext_ReadingEntity(object sender, ReadingWritingEntityEventArgs e)
        {
            var entity = e.Entity as GenericEntity;
            if (entity != null)
            {
                // Walk the raw AtomPub payload and turn each property element
                // into a Column on the generic entity.
                e.Data
                 .Element(AtomNamespace + "content")
                 .Element(AstoriaMetadataNamespace + "properties")
                 .Elements()
                 .Select(p => new
                 {
                     Name = p.Name.LocalName,
                     IsNull = string.Equals("true", p.Attribute(AstoriaMetadataNamespace + "null") == null ? null : p.Attribute(AstoriaMetadataNamespace + "null").Value, StringComparison.OrdinalIgnoreCase),
                     // Untyped properties default to Edm.String.
                     TypeName = p.Attribute(AstoriaMetadataNamespace + "type") == null ? "Edm.String" : p.Attribute(AstoriaMetadataNamespace + "type").Value,
                     p.Value
                 })
                 .Where(dp => !dp.IsNull)
                 .Select(dp => new Column(dp.Name, dp.TypeName, dp.Value.ToString()))
                 .ToList()
                 .ForEach(column => entity[column.ColumnName] = column);
            }
        }
    }
}

The Custom SSIS Data Source

The custom SSIS data source is quite simple (TableStorageSsisSource.cs):

using System.Collections.Generic;
using Microsoft.SqlServer.Dts.Pipeline;
using Microsoft.SqlServer.Dts.Pipeline.Wrapper;
using Microsoft.WindowsAzure;

namespace TableStorageSsisSource
{
    [DtsPipelineComponent(DisplayName = "Azure Table Storage Source", ComponentType = ComponentType.SourceAdapter)]
    public class TableStorageSsisSource : PipelineComponent
    {
        public override void ProvideComponentProperties()
        {
            // Reset the component.
            base.RemoveAllInputsOutputsAndCustomProperties();

            // Add the output.
            IDTSOutput100 output = ComponentMetaData.OutputCollection.New();
            output.Name = "Output";

            // Properties
            var storageConnectionStringProperty = this.ComponentMetaData.CustomPropertyCollection.New();
            storageConnectionStringProperty.Name = "StorageConnectionString";
            storageConnectionStringProperty.Description = "Azure storage connection string";
            storageConnectionStringProperty.Value = "UseDevelopmentStorage=true";

            var tableNameProperty = this.ComponentMetaData.CustomPropertyCollection.New();
            tableNameProperty.Name = "TableName";
            tableNameProperty.Description = "Name of the source table";
            tableNameProperty.Value = string.Empty;
        }

        public override IDTSCustomProperty100 SetComponentProperty(string propertyName, object propertyValue)
        {
            var resultingColumn = base.SetComponentProperty(propertyName, propertyValue);
            var storageConnectionString = (string)this.ComponentMetaData.CustomPropertyCollection["StorageConnectionString"].Value;
            var tableName = (string)this.ComponentMetaData.CustomPropertyCollection["TableName"].Value;
            if (!string.IsNullOrEmpty(storageConnectionString) && !string.IsNullOrEmpty(tableName))
            {
                // Use the first row of the table to derive the output columns.
                var cloudStorageAccount = CloudStorageAccount.Parse(storageConnectionString);
                var context = new GenericTableContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
                var firstRow = context.GetFirstOrDefault(tableName);
                if (firstRow != null)
                {
                    var output = this.ComponentMetaData.OutputCollection[0];
                    foreach (var column in firstRow.GetProperties())
                    {
                        var newOutputCol = output.OutputColumnCollection.New();
                        newOutputCol.Name = column.ColumnName;
                        newOutputCol.SetDataTypeProperties(column.DtsType, 0, 0, 0, 0);
                    }
                }
            }
            return resultingColumn;
        }

        private List<ColumnInfo> columnInformation;
        private GenericTableContext context;

        private struct ColumnInfo
        {
            public int BufferColumnIndex;
            public string ColumnName;
        }

        public override void PreExecute()
        {
            this.columnInformation = new List<ColumnInfo>();
            IDTSOutput100 output = ComponentMetaData.OutputCollection[0];
            var cloudStorageAccount = CloudStorageAccount.Parse((string)this.ComponentMetaData.CustomPropertyCollection["StorageConnectionString"].Value);
            context = new GenericTableContext(cloudStorageAccount.TableEndpoint.AbsoluteUri, cloudStorageAccount.Credentials);
            foreach (IDTSOutputColumn100 col in output.OutputColumnCollection)
            {
                ColumnInfo ci = new ColumnInfo();
                ci.BufferColumnIndex = BufferManager.FindColumnByLineageID(output.Buffer, col.LineageID);
                ci.ColumnName = col.Name;
                this.columnInformation.Add(ci);
            }
        }

        public override void PrimeOutput(int outputs, int[] outputIDs, PipelineBuffer[] buffers)
        {
            IDTSOutput100 output = ComponentMetaData.OutputCollection[0];
            PipelineBuffer buffer = buffers[0];
            foreach (var item in this.context.CreateQuery<GenericEntity>((string)this.ComponentMetaData.CustomPropertyCollection["TableName"].Value))
            {
                buffer.AddRow();
                for (int x = 0; x < columnInformation.Count; x++)
                {
                    var ci = columnInformation[x];
                    var value = item[ci.ColumnName].Value;
                    if (value != null)
                    {
                        buffer[ci.BufferColumnIndex] = value;
                    }
                }
            }
            buffer.SetEndOfRowset();
        }
    }
}

<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi recommended on 11/12/2010 Julie Lerman’s (@julielerman) Using the Entity Framework to Reduce Network Latency to SQL Azure article in MSDN Magazine’s November 2010 issue:

Julie Lerman, writing for MSDN Magazine, has written an article titled “Using the Entity Framework to Reduce Network Latency to SQL Azure.” The article examines mitigation strategies for addressing the network latency that can substantially affect overall application performance when connecting to SQL Azure Database from on-premises applications.

Fortunately, a good understanding of the effects of network latency leaves you in a powerful position to use the Entity Framework to reduce that impact in the context of SQL Azure Database.

These concerns are eliminated if the application is running on Windows Azure and is deployed to the same datacenter as your SQL Azure Database.

Read: Using the Entity Framework to Reduce Network Latency to SQL Azure

Alyson Behr reported Microsoft's PASS Summit features new business and data cloud products for SD Times on the Web on 11/11/2010:

Microsoft made several announcements at the Professional Association for SQL Server (PASS) 2010 Summit this week, including the general availability of SQL Server 2008 R2 Parallel Data Warehouse, previously codenamed “Madison”; CTP1 of SQL Server, codenamed “Denali”; and the release of a new cloud service, codenamed “Atlanta.”

According to the company, SQL Server 2008 R2 Parallel Data Warehouse is a scalable, high-performance appliance targeted toward large warehouses with hundreds of terabytes of data, as it is pre-architected to deliver simplicity and decreased deployment time. This release is the next-generation iteration of DATAllegro’s data warehouse appliance.

Ted Kummert, senior vice president of the Business Platform Division at Microsoft, said, “Enterprises today are facing challenges of increasing volumes of data from which they need to gain business insight rapidly.” He added: “SQL Server 2008 R2 Parallel Data Warehouse provides high-scale enterprise capabilities delivered as an appliance with choice and deployment simplicity.”

The product uses massively parallel processing (MPP) on SQL Server 2008 R2, Windows Server 2008 and other industry-standard hardware. Microsoft currently partners with HP and is looking to forge an additional partnership with France-based cluster provider Bull. The MPP architecture helps enable better scalability, more predictable performance, reduced risk and a lower cost per terabyte.

Query processing occurs within one physical instance of a database in a symmetric multi-processing architecture. The CPU, memory and storage impose physical limits on speed and scale. The new release partitions large tables across multiple physical nodes, each node having dedicated CPU, memory and storage, and each running its own instance of SQL Server in a parallel, shared-nothing architecture.

The warehousing appliance will be sold at a per-processor software price of US$38,255 in racks of 11 or 22 nodes.

Microsoft also unveiled the community technology preview (CTP) of SQL Server, codenamed “Denali." "The appliance is an emerging industry trend," said Microsoft General Manager Eugene Saburi. "Consumers are looking for simplicity in the way they consume technologies. We're trying to take some of that complexity out by shipping preconfigured and pre-optimized appliances."

The new version will feature SQL Server AlwaysOn, designed to reduce downtime and lower total cost of ownership, and a project codenamed "Crescent," a new Web-based data visualization and reporting solution.

imageNew capabilities for strengthened data management, performance and integration include Project Apollo, a new column-store database for increased query performance; Project Juneau, a new tool integrated into Visual Studio that unifies SQL Server and cloud SQL Azure development for database and application developers; and Data Quality Services, knowledge-driven tools that developers can use to create and maintain a Data Quality Knowledge Base. [Emphasis added.]

Also released is a new cloud service that oversees SQL Server configuration, called Atlanta. The secure cloud service assists IT in avoiding configuration problems and resolving issues. The Atlanta agent and gateway require Microsoft .NET Framework 3.5 SP1 and Silverlight 4, and can be run on recent versions of either Firefox or Internet Explorer. Atlanta is supported by most business versions of the 32- and 64-bit editions of Windows Server 2008 and 2008 R2.

<Return to section navigation list> 

Marketplace DataMarket and OData

Shawn Wildermuth described working with OData on WP7 in his Architecting WP7 - Part 7 of 10: Data on the Wire(less) post of 11/13/2010:

In this (somewhat belated) part 7 of my Architecture for the Windows Phone 7 series, I want to talk about dealing with data across the wire (or lack of wire, I guess). This is at the heart of the idea that the phone is one of those screens in '3 screens and the cloud'. The use-cases for using data are varied, including:

  • Consuming public data (e.g. displaying Netflix Queue or Amazon Catalog).
  • Consuming private data (e.g. showing your company's private data).
  • Data Entry on the phone.

When coming from Silverlight or the web, the real challenge is to meet your needs while realizing you're working within limitations. When you are creating an app for the desktop (e.g. browser, plug-in based, or desktop client) you can make assumptions about network bandwidth and machine memory. While most developers won't admit it, we will often (to help the project get done) just consume the data we need without regard to these limitations. For the most part on the desktop this works, as we often have enough bandwidth and lots of memory. On the phone this is definitely different.

You have a number of choices for gathering data across the wire(less), but the real job in architecting a solution is to get just enough data from the cloud. The limitations of a 3G (or eventually 4G) connection aside, making smart decisions about what to bring over is crucial. You may think that 3G should be enough to just get the data you want, but don't forget that you need to consume that data too.

I recently updated my Training app for the Windows Phone 7 to optimize the experience. I found that over 3G (which is hard to test without a device) the experience was less than perfect. When I originally built the app, I just pulled down all the data for my public courses into the application. In doing that, the start-up time was pretty awful. To address this, I purposely tuned the experience to make sure that I only loaded the data that I really needed. What that meant at the time was to pull down the information on the selected workshop only when the user requested it. In fact, I did some sleight of hand to load the outline and events of the workshop while the description of the workshop was shown to the user. For example:

void LoadWorkshopIntoView(int id)
{
  // Get the right workshop
  _workshop = App.CurrentApp.TheVM.Workshops
                                  .Where(w => w.WorkshopId == id)
                                  .FirstOrDefault();

  // Make the Data Bind
  DataContext = _workshop;

  // Only load the data if the data is needed
  if (_workshop != null && _workshop.WorkshopTopics.Count == 0)
  {
    // Load the data asynchronously while the user
    // reads the description
  }
}
I released the application to the marketplace and the performance was acceptable... but acceptable is not good enough. Upon refactoring the code, I realized that I was loading the entire entity from the server, even though there were a lot of fields I never used. It became clear that if the size of the payload were smaller, the performance could be much better.

This story does not depend on the nature of your data access. In my case I was using OData to consume the data.  My original request looked like this:

var uri = string.Concat("/Events?$filter=Workshop/WorkshopId eq {0}",

var qryUri = string.Format(uri, workshop.WorkshopId);

By requesting the entire Workshop (and the related EventLocation, Instructor and TrainingPartner) I was retrieving a small number of very large objects. Viewing the requests in Fiddler showed just how large these objects were:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Length: 20033
Content-Type: application/atom+xml;charset=utf-8
Server: Microsoft-IIS/7.5
DataServiceVersion: 1.0;
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sat, 13 Nov 2010 21:17:38 GMT

By limiting the payload to only the fields I needed, I thought I could see some small change. I decided to use the projection support in OData to retrieve only the fields I needed:

var uri = string.Concat("/Events?$filter=Workshop/WorkshopId eq {0}",

var qryUri = string.Format(uri, workshop.WorkshopId);
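In general, a projected version of such a query just appends a $select option to the URI; a sketch of the shape (the field names after $select are hypothetical, not Shawn's actual ones):

```csharp
// Sketch: the same event query with an OData projection added. $select
// trims the payload to just the listed properties (names illustrative).
var uri = string.Concat(
    "/Events?$filter=Workshop/WorkshopId eq {0}",
    "&$select=EventId,EventDate,City,Country");

var qryUri = string.Format(uri, workshop.WorkshopId);
```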

Selecting these fields, which were the *only* ones I used, reduced the size to:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Length: 8637
Content-Type: application/atom+xml;charset=utf-8
Server: Microsoft-IIS/7.5
DataServiceVersion: 1.0;
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sat, 13 Nov 2010 21:17:59 GMT

That's over a 55% savings! The size difference wasn't only in bandwidth; the cost in memory to deserialize the results, the in-memory storage size, and the CPU cost should be substantially lower too. I did the same thing with similar results for the Events list. I saw a 55% savings there too.

While using OData's projection mechanism ($select) worked for my case, OData isn't special here. You could do the same when you're building a web service or REST service. The only case where this type of decision isn't possible is when you're building an app on top of a service you don't control. Since most REST and web services don't have a built-in mechanism to limit the result set, you could proxy through your own servers and trim the oversized entities there.

The important thing to think about is that you're working with a different kind of platform than you have in the past (when working strictly on the desktop). You have to think about optimization the whole way through.

User20358 asked OData based WCF service or regular WCF service for Silverlight application? in StackOverflow on 11/13/2010:

Hi, I have just started evaluating whether I should be using OData-influenced WCF Data Services or a standard WCF service application as the primary data source for Silverlight applications. I would like your thoughts on which is a better way under what situation/circumstance. What is lighter over the wire, easier to maintain, etc.?

What I have gathered so far is:

  • There are no WCF data service templates in VS2010 that I know of, and I will need to create a web project first and then add a WCF data service, so it's going to affect how I structure my projects.
  • WCF Data Services expose actual table names over the service. I don't yet know of a way to alias them, and I'm not sure it's a good idea to let the world know my table structure.
  • In a standard WCF service I will need to write LINQ queries against the EF or domain service classes on the service side, while in a data service I can move that processing logic to the client.
  • At first glance, the classes exposed by WCF Data Services seem easier to read and understand than those exposed by the EF.

Do add your thoughts on this..

Mark Scheuner replied on the same date:

There are no WCF data service templates in VS2010 that I know of,

Not a project template - just an item template (for use inside an ASP.NET web site or web app). WCF Data Services are very tightly coupled to HTTP, so they only make sense in a web site/app.

WCF Data services expose actual table names over the service.

No! At least not necessarily. The whole point of EF is that you can decouple the actual physical structure of your database from the (conceptual) model that gets exposed. You can totally rename entities, you can map several entities onto a single table, split up an entity over several tables, you can leave out attributes - anything you like!

At first glance examining the classes exposed by the wcf data services seem easier to read and understand than those exposed by the EF

I doubt it - because by default, WCF Data Services will use a Linq-to-SQL or EF model as their basis, really. You can make that as simple or as complicated as you like.

Using a "regular" WCF service allows you to use the netTcpBinding for much faster performance (thanks to binary message encoding vs. textual messages for other bindings), when using your Silverlight 4 app in a company-internal network (doesn't work for internet scenarios) - not something you can do with WCF DataServices.
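For the intranet case, the binary TCP endpoint is a straight configuration change on the service side. A sketch (service and contract names are placeholders; Silverlight 4's net.tcp support uses the 4502-4534 port range and requires transport security to be disabled):

```xml
<system.serviceModel>
  <services>
    <service name="MyApp.CustomerService">
      <endpoint address="net.tcp://localhost:4502/CustomerService"
                binding="netTcpBinding"
                bindingConfiguration="slTcp"
                contract="MyApp.ICustomerService" />
    </service>
  </services>
  <bindings>
    <netTcpBinding>
      <binding name="slTcp">
        <!-- Silverlight's net.tcp support requires security mode None -->
        <security mode="None" />
      </binding>
    </netTcpBinding>
  </bindings>
</system.serviceModel>
```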

The main difference in my opinion is the SOAP vs. REST difference:

  • SOAP (traditional WCF) is oriented towards methods - you think and design your system in terms of methods - things you can do (GetCustomer, SaveOrder etc.)

  • REST (the WCF DataServices approach) is all about resources, e.g. you have your resources and collections of resources (e.g. Customers) and you expose those out to the world, with standard HTTP verbs (GET, POST, PUT, DELETE) instead of separate specific methods that you define

So both approaches have their pros and cons. I guess the most important question is: what kind of app are you creating, and what kind of user audience are you targeting?


  • for intranet / internal apps, I would think the advantage of a netTcpBinding (binary encoding) would justify using a classic WCF service - also for data-intensive apps, I personally find a method-based approach (GetCustomer, SaveCustomer) to be easier to use and understand

  • for a public-facing app, using HTTP and being as interoperable as possible is probably your major concern, so in that scenario, I'd probably favor the WCF Data Service - easy to use, easy to understand URLs for the user

Nina (Ling) Hu posted a link to her Building Offline Applications using Sync Framework and SQL Azure PDC2010 presentation on 11/13/2010. Notice the use of OData as the sync protocol:


Following is the demo summary:


The video presentation from the PDC 2010 archives is here.

Jon Udell (@JonUdell) explained How to translate an OData feed into an iCalendar feed in an 11/12/2010 post to the O’Reilly Answers blog:


In this week's companion article on the Radar blog, I bemoan the fact that content management systems typically produce events pages in HTML format without a corresponding iCalendar feed, I implore CMS vendors to provide iCalendar feeds, and I point to a validator that can help make them right. Typically these content management systems store event data in a database and flow the data through HTML templates to produce HTML views. In this installment I'll show one way that same data can be translated into an iCalendar feed.

The PDC 2010 schedule

The schedule for the 2010 Microsoft Professional Developers Conference was, inadvertently, an example of a calendar made available as an HTML page but not also as a companion iCalendar feed. There was, however, an OData feed at http://odata.microso...ataSchedule.svc. When Jamie Thomson noticed that, he blogged:

Whoop-de-doo! Now we can, get this, view the PDC schedule as raw XML rather than on a web page or in Outlook or on our phone, how cool is THAT?

Seriously, I admire Microsoft's commitment to OData, both in their Creative Commons licensing of it and support of it in a myriad of products but advocating its use for things that it patently should not be used for is verging on irresponsible and using OData to publish schedule information is a classic example.

I both understood Jamie's frustration, and applauded the publication of a generic OData service that can be used in all sorts of ways. Here, for example, are some questions that you could ask and answer directly in your browser:

Q: How many speakers? (http://odata.microso...Speakers/$count)
A: 79

Q: How many Scott Hanselman sessions?
A: 1

Q: How many cloud services sessions? (http://odata.microso...oud+Services%27)
A: 31

General-purpose access to data is a wonderful thing. But Jamie was right, a special-purpose iCalendar feed ought to have been provided too. Why wasn't that done? It was partly just an oversight. But it's an all-too-common oversight because, although iCalendar is strongly analogous to RSS, that analogy isn't yet widely appreciated.

To satisfy Jamie's request, and to demonstrate one way to translate a general-purpose data feed into an iCalendar feed, I wrote a small program to do that translation. Let's explore how it works.
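Jon's translation program isn't reproduced in full in this excerpt, but its shape is easy to see: fetch the Sessions and TimeSlots collections, join each session to its time slot, and emit one VEVENT per session. A compressed sketch (my own illustration; the variable and property names are assumptions, not Jon's actual code):

```csharp
// Sketch: emit iCalendar VEVENTs from already-fetched session objects.
// 'sessions' is assumed to pair each session with its resolved time slot.
var sb = new System.Text.StringBuilder();
sb.AppendLine("BEGIN:VCALENDAR");
sb.AppendLine("VERSION:2.0");
sb.AppendLine("PRODID:-//example//PDC2010//EN");
foreach (var s in sessions)
{
    sb.AppendLine("BEGIN:VEVENT");
    sb.AppendLine("UID:" + s.SessionId);
    sb.AppendLine("SUMMARY:" + s.ShortTitle);
    // iCalendar UTC timestamps use the basic yyyyMMddTHHmmssZ form.
    sb.AppendLine("DTSTART:" + s.Start.ToUniversalTime().ToString("yyyyMMdd'T'HHmmss'Z'"));
    sb.AppendLine("DTEND:" + s.End.ToUniversalTime().ToString("yyyyMMdd'T'HHmmss'Z'"));
    sb.AppendLine("END:VEVENT");
}
sb.AppendLine("END:VCALENDAR");
```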

The PDC 2010 OData feed

If you hit the PDC's OData endpoint with your browser, you'll retrieve this Atom service document:

<service xml:base="">
  <workspace>
    <collection href="ScheduleOfEvents" />
    <collection href="Sessions" />
    <collection href="Tracks" />
    <collection href="TimeSlots" />
    <collection href="Speakers" />
    <collection href="Manifests" />
    <collection href="Presenters" />
    <collection href="Contents" />
    <collection href="RelatedSessions" />
  </workspace>
</service>

The service document tells you which collections exist, and how to form URLs to access them. If you form the URL http://odata.microso...le.svc/Sessions and hit it with your browser, you'll retrieve an Atom feed with entries like this:

<content type="application/xml">
  <m:properties>
    <d:Tags>Windows Azure Platform</d:Tags>
    <d:SessionId m:type="Edm.Guid">1b08b109-c959-4470-961b-ebe8840eeb84</d:SessionId>
    <d:TrackId>Cloud Services</d:TrackId>
    <d:TimeSlotId m:type="Edm.Guid">bd676f93-2294-4f76-bf7f-60e355d8577b</d:TimeSlotId>
    <d:TwitterHashtag>#azure #platform #pdc2010 #cs01</d:TwitterHashtag>
    <d:StartTime m:type="Edm.Int32">0</d:StartTime>
    <d:ShortTitle>Building High Performance Web Apps with Azure</d:ShortTitle>
    <d:ShortDescription xml:space="preserve">Windows Azure Platform enables developers to build dynamically
scalable web applications easily. Come and learn how forthcoming new
application services in conjunction with services like the Windows ...</d:ShortDescription>
    <d:FullTitle>Building High Performance Web Applications with the Windows Azure Platform</d:FullTitle>
  </m:properties>
</content>

To represent this event in iCalendar doesn't require much information: a title, a description, a location, and the times. The mapping between the fields shown here and the properties of an iCalendar event can go like this:

  • ShortTitle → SUMMARY
  • ShortDescription → DESCRIPTION
  • Room → LOCATION
We're off to a good start, but where will DTSTART and DTEND come from? Let's check out the TimeSlots collection. It's an Atom feed with entries like this:


<content type="application/xml">
  <m:properties>
    <d:Id m:type="Edm.Guid">bd676f93-2294-4f76-bf7f-60e355d8577b</d:Id>
    <d:Start m:type="Edm.DateTime">2010-10-29T15:15:00</d:Start>
    <d:End m:type="Edm.DateTime">2010-10-29T16:15:00</d:End>
  </m:properties>
</content>

Nothing about OData requires the sessions to be modeled this way. The start and end times could have been included in the Sessions table. But since they weren't, we'll access them indirectly by matching the TimeSlotId in the Sessions table to the Id in the TimeSlots table.
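That indirect join can be sketched in isolation. Below is a hypothetical, self-contained miniature: the two lists stand in for what GetODataDicts returns for the Sessions and TimeSlots feeds, with values copied from the sample entries above.

```csharp
using System;
using System.Collections.Generic;

// A self-contained sketch of the Sessions/TimeSlots join; the sample
// dictionaries are stand-ins for the feeds the real program downloads.
public static class SessionJoin
{
    public static List<Dictionary<string, object>> SampleSessions()
    {
        return new List<Dictionary<string, object>> {
            new Dictionary<string, object> {
                { "ShortTitle", "Building High Performance Web Apps with Azure" },
                { "TimeSlotId", "bd676f93-2294-4f76-bf7f-60e355d8577b" }
            }
        };
    }

    public static List<Dictionary<string, object>> SampleTimeslots()
    {
        return new List<Dictionary<string, object>> {
            new Dictionary<string, object> {
                { "Id", "bd676f93-2294-4f76-bf7f-60e355d8577b" },
                { "Start", new DateTime(2010, 10, 29, 15, 15, 0) },
                { "End",   new DateTime(2010, 10, 29, 16, 15, 0) }
            }
        };
    }

    // For each session, find the timeslot whose Id matches the session's TimeSlotId.
    public static string Describe(List<Dictionary<string, object>> sessions,
                                  List<Dictionary<string, object>> timeslots)
    {
        var lines = new List<string>();
        foreach (var session in sessions)
        {
            var slot = timeslots.Find(ts => (string)ts["Id"] == session["TimeSlotId"].ToString());
            if (slot == null) continue; // no matching timeslot: skip
            lines.Add(string.Format("{0}: {1:t}-{2:t}",
                session["ShortTitle"], slot["Start"], slot["End"]));
        }
        return string.Join(Environment.NewLine, lines.ToArray());
    }

    public static void Main()
    {
        Console.WriteLine(Describe(SampleSessions(), SampleTimeslots()));
    }
}
```

A session whose TimeSlotId matches no timeslot (such as the all-zero test record mentioned later) is simply skipped.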

Reading the OData collections

OData collections are just Atom feeds, so you can read them using any XML parser. The elements in each entry are optionally typed, according to a convention defined by the OData specification. One way to represent the entries in a feed is as a list of dictionaries, where each dictionary has keys that are field names and values that are objects. Here's one way to convert a feed into that kind of list.

static List<Dictionary<string, object>> GetODataDicts(byte[] bytes)
{
  // the OData metadata namespace, used for the <m:properties> element
  XNamespace ns_odata_metadata =
    "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

  var xdoc = XmlUtils.XdocFromXmlBytes(bytes);
  IEnumerable<XElement> propbags =
    from props in xdoc.Descendants(ns_odata_metadata + "properties") select props;

  var dicts = UnpackPropBags(propbags);

  return dicts;
}

// walk and unpack an enumeration of <m:properties> elements
static List<Dictionary<string, object>> UnpackPropBags(IEnumerable<XElement> propbags)
{
  var dicts = new List<Dictionary<string, object>>();

  foreach (XElement propbag in propbags)
  {
    var dict = new Dictionary<string, object>();
    IEnumerable<XElement> xprops = from prop in propbag.Descendants() select prop;
    foreach (XElement xprop in xprops)
    {
      object value = xprop.Value;
      var attrs = xprop.Attributes().ToList();
      var type_attr = attrs.Find(a => a.Name.LocalName == "type");
      if (type_attr != null)
      {
        switch (type_attr.Value)
        {
          case "Edm.DateTime":
            value = Convert.ToDateTime(xprop.Value);
            break;
          case "Edm.Int32":
            value = Convert.ToInt32(xprop.Value);
            break;
          case "Edm.Float":
            value = float.Parse(xprop.Value);
            break;
          case "Edm.Boolean":
            value = Convert.ToBoolean(xprop.Value);
            break;
        }
      }
      dict.Add(xprop.Name.LocalName, value);
    }
    dicts.Add(dict);
  }
  return dicts;
}

In this C# example the XML parser is the one provided by System.Xml.Linq. The GetODataDicts method creates a System.Xml.Linq.XDocument in the variable xdoc, and then forms a LINQ expression to query the document for properties elements, which are in the OData metadata namespace (http://schemas.micro...vices/metadata). Then it hands the query to UnpackPropBags, which enumerates the property bags and does the following for each:

  • Creates an empty dictionary
  • Enumerates the properties
  • Saves each property's value in an object
  • If the property is typed, converts the object to the indicated type
  • Adds each property to the dictionary
  • Adds the dictionary to the accumulating list
Writing the iCalendar feed

Now we can use GetODataDicts twice, once for sessions and again for timeslots. Then we can walk through the list of session dictionaries, translate each into a corresponding iCalendar object, and finally serialize the calendar as a text file.

      // load sessions
      var sessions_uri = new Uri("");
      var xml_bytes = new WebClient().DownloadData(sessions_uri);
      var session_dicts = GetODataDicts(xml_bytes);

      // load timeslots
      var timeslots_uri = new Uri("");
      xml_bytes = new WebClient().DownloadData(timeslots_uri);
      var timeslot_dicts = GetODataDicts(xml_bytes);

      // create calendar object
      var ical = new DDay.iCal.iCalendar();

      // add VTIMEZONE
      var tzid = "Pacific Standard Time";
      var tzinfo = System.TimeZoneInfo.FindSystemTimeZoneById(tzid);
      var timezone = DDay.iCal.iCalTimeZone.FromSystemTimeZone(tzinfo);
      ical.AddChild(timezone);

      foreach (var session_dict in session_dicts)
      {
        var url = "";
        var summary = session_dict["ShortTitle"].ToString();
        var description = session_dict["ShortDescription"].ToString();
        var location = session_dict["Room"].ToString();
        var timeslot_id = session_dict["TimeSlotId"]; // find the timeslot
        var timeslot_dict = timeslot_dicts.Find(ts => (string) ts["Id"] == timeslot_id.ToString());
        if (timeslot_dict != null) // because test record has id 00000000-0000-0000-0000-000000000000
        {
          var dtstart = (DateTime)timeslot_dict["Start"]; // local time
          var dtend = (DateTime)timeslot_dict["End"];

          var evt = new DDay.iCal.Event();

          evt.DTStart = new iCalDateTime(dtstart);        // time object with zone
          evt.DTStart.TZID = tzid;                        // "Pacific Standard Time"
          evt.DTEnd = new iCalDateTime(dtend);
          evt.DTEnd.TZID = tzid;
          evt.Summary = summary;
          evt.Url = new Uri(url);
          if (location != null)
            evt.Location = location;
          if (description != null)
            evt.Description = description;

          ical.Events.Add(evt);
        }
      }

      var serializer = new DDay.iCal.Serialization.iCalendar.iCalendarSerializer(ical);
      var ics_text = serializer.SerializeToString(ical);
      File.WriteAllText("pdc.ics", ics_text);

In 2010 the PDC was held in Redmond, WA, and the times expressed in the OData feed were local to the Pacific time zone. But the event was globally available and live everywhere. I was following it from my home in the Eastern time zone, for example, so I wanted PDC events starting at 9AM Pacific to show up at 12PM in my calendar. To accomplish that, the program that generates the iCalendar feed has to do two things:

1. Produce a VTIMEZONE component that defines the standard and daylight saving offsets from GMT, and when daylight saving time starts and stops. On Windows, you acquire this ruleset by looking up a time zone id (e.g. "Pacific Standard Time") using System.TimeZoneInfo.FindSystemTimeZoneById. DDay.iCal can then apply the ruleset, using DDay.iCal.iCalTimeZone.FromSystemTimeZone, to produce the VTIMEZONE component shown below. On a non-Windows system, using some other iCalendar library, you'd do the same thing, but in those cases the ruleset is defined in the Olson (aka Zoneinfo, aka tz) database. If you've never had the pleasure, you should read through this remarkable and entertaining document sometime; it's a classic work of historical scholarship!

2. Relate the local times for events to the specified timezone. Using DDay.iCal, I do that by assigning the same time zone id used in the VTIMEZONE component (i.e., "Pacific Standard Time") to each event's DTStart.TZID and DTEnd.TZID. If you don't do that, the dates and times come out as floating local times like this:

DTSTART:20101029T090000
DTEND:20101029T100000

If my calendar program is set to Eastern time then I'll see these events at 9AM my time, when I should see them at noon.

If you do assign the time zone id to DTSTART and DTEND, then the dates and times come out like this:

DTSTART;TZID=Pacific Standard Time:20101029T090000
DTEND;TZID=Pacific Standard Time:20101029T100000

Now an event at 9AM Pacific shows up at noon on my Eastern calendar. Here's a picture of my personal calendar and the PDC calendar during the PDC:

Examining the iCalendar feed

Here's a version of the output that I've stripped down to just one event.



BEGIN:VCALENDAR
PRODID:-// DDay.iCal 1.0//EN
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Pacific Standard Time
BEGIN:STANDARD
TZNAME:Pacific Standard Time
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
END:STANDARD
BEGIN:DAYLIGHT
TZNAME:Pacific Daylight Time
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
DESCRIPTION:The Windows Azure Platform is an open and interoperable platfor
 m which supports development using many programming languages and tools.  
 In this session\, you will see how to build large-scale applications in th
 e cloud using Java\, taking advantage of new Windows Azure Platform featur
 es.  You will learn how to build Windows Azure applications using Java wit
 h Eclipse\, Apache Tomcat\, and the Windows Azure SDK for Java.
DTEND;TZID=Pacific Standard Time:20101029T100000
DTSTART;TZID=Pacific Standard Time:20101029T090000
SUMMARY:Open in the Cloud: Windows Azure and Java
END:VEVENT
END:VCALENDAR

As I discuss in this week's companion article on the Radar blog, this isn't rocket science. Back in the day, many of us whipped up basic RSS 0.9 feeds "by hand" without using libraries or toolkits. It's feasible to do the same with iCalendar, especially now that there's a validator to help you check your work.

Clint Rutkas and Bob Tabor present Windows Phone 7 for Absolute Beginners, a series of Channel 9 videos with downloadable source code:

This video series will help aspiring Windows Phone 7 developers get started. We'll start off with the basics and work our way up so in a few hours, you will know enough to build simple WP7 applications, such as a GPS aware note taking application.

We'll walk you through everything from getting the tools and knowing what an if statement is, to using the GPS built into the phone and much more!

Michael Washington (@ADefWebserver) posted Windows Phone 7 Development for Absolute Beginners (The parts you care about) on 11/13/2010:

Microsoft has posted a great series, Windows Phone 7 Development for Absolute Beginners. However, it starts with "how to program". This might drive you crazy if you already know how to program. Also, if you are like me and simply want the links to download the videos and the code, this should help:

Isolated Storage, ListBox and DataTemplates

In a continuation of the previous video, Bob demonstrates how to read a list of file names saved to the phone's flash drive and display those names in a ListBox. Each file name is displayed in a DataTemplate that contains a HyperlinkButton. This results in one Hyperlink button for each file name. Bob then demonstrates how to pass the file name in the QueryString to a second Silverlight page that allows the user to view the contents of the file.

Download the source code

Michael continues with 20+ entries with links and abstracts. If you're interested in the Windows Azure Marketplace DataMarket and OData, you'll probably want to write WP7 clients.

Alex James (@adjames) described Named Resource Streams in an 11/12/2010 post to the WCF Data Services blog:

One of the new WCF Data Services features in the October 2010 CTP is something called Named Resource Streams.


Data Services already supports Media Link Entries, which allow you to associate a single streamed blob with an entry. For example, you could have a Photo entry that lists the metadata about the photo and links directly to the photo itself.

But what happens if you have multiple versions of the photo?

Today you could model this with multiple MLEs, but doing so requires you to keep multiple copies of the metadata, one for each version of the stream. Clearly this is not desirable when the versions are essentially the same photo.

It turns out that this is a very common scenario, common enough that we thought it needed to be supported without forcing people to use multiple MLEs. So, with this release we’ve allowed an entry to have multiple streams associated with it such that you can now create services that do things such as expose a Photo entry with links to its print, web and thumbnail versions.

Let’s explore how to use this feature.


Once a producer exposes Named Resource Streams, you have two ways to manipulate them on the client. The first is via new overloads of GetReadStream(..) and SetSaveStream(..) that take a stream name:

// retrieve a person
Person fred = (from p in context.People
               where p.Name == "Fred"
               select p).Single();

// set custom headers etc. via args if needed.
var args = new DataServiceRequestArgs();

// make the request to get the 'Photo' stream on the Fred entity.
var response = context.GetReadStream(fred, "Photo", args);
Stream stream = response.Stream;

If you want to update the stream, you use SetSaveStream(..), something like this:
var fileStream = new FileStream("C:\\fred.jpeg", FileMode.Open);
context.SetSaveStream(person, "Photo", fileStream, true, args);

The other option is to use the StreamDescriptor, which hangs off the entity's EntityDescriptor and carries information about the SelfLink, EditLink (often they are the same, but you can sometimes get and update the stream via different URLs), ETag and ContentType of the stream:

// Get the entity descriptor for Fred
var dscptor = context.GetEntityDescriptor(fred);

// Get the ‘Photo’ StreamDescriptor
var photoDscptor = dscptor.StreamDescriptors.Single(s => s.Name == "Photo");
Uri uriOfFredsPhoto = photoDscptor.SelfLink;

With the StreamDescriptor you can do low-level network activities, or perhaps use this Uri as the 'src' of an <img> tag when rendering a webpage.


Adding Named Resource Streams to your Model

The first step to using Named Resource Streams is to add them into your model. Because data services supports 3 different types of data sources (Entity Framework, Reflection, Custom) there are three ways to add Named Resource Streams into your model.

Entity Framework

When using the Entity Framework, adding a Named Resource Stream to an EntityType is pretty straightforward: you simply add a structural annotation to your Entity Framework EDMX file, something like this:

<EntityType Name="Person">
  <Key>
    <PropertyRef Name="ID" />
  </Key>
  <Property Name="ID" Type="Edm.Int32" Nullable="false" />
  <Property Name="Name" Type="Edm.String" Nullable="true" />
  <m:NamedStream Name="PhotoThumbnail" />
  <m:NamedStream Name="Photo" />
</EntityType>
Here the m:NamedStream elements (xmlns:m="") indicate that Person has two Named Resource Streams, Photo and PhotoThumbnail.

Reflection Provider

To add Named Resource Streams to an EntityType using the Reflection Provider, we added a new attribute called NamedStreamAttribute:

[NamedStream("Photo")]
[NamedStream("PhotoThumbnail")]
public class Person
{
    public int ID { get; set; }
    public string Name { get; set; }
}

This example is the Reflection provider equivalent of the Entity Framework example above.

Custom Provider

When you write a custom provider (see this series for more) you add the Named Resource Streams via your implementation of IDataServiceMetadataProvider.

To support this we added a new ResourceStreamInfo class which you can add to your ResourceType definitions something like this:

ResourceType person = …
person.AddNamedStream(new ResourceStreamInfo("PhotoThumbnail"));
person.AddNamedStream(new ResourceStreamInfo("Photo"));


No matter how you tell Data Services about your Named Resource Streams, consumers always learn about them via $metadata, which is an EDMX document. So, as you might guess, we simply tell consumers about named streams using the same structural annotations used in the Entity Framework provider example above.

Implementing Named Resource Streams

Next you have to implement a new interface called IDataServiceStreamProvider2.

Implementing this interface is very similar to implementing IDataServiceStreamProvider which Glenn explains well in his two part series (Part1 & Part2).

IDataServiceStreamProvider2 extends IDataServiceStreamProvider by adding some overloads to work with Named Resource Streams.

The overloads always take a ResourceStreamInfo, from which you can learn which Named Resource Stream is targeted. When reading from or writing to the stream, you also get the eTag for the stream and a flag indicating whether the eTag should be checked.
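Implementing one of these overloads is mostly a matter of mapping the stream name to wherever the bytes actually live. The following is a rough, hypothetical sketch only: it assumes the CTP signature pattern described above (entity plus ResourceStreamInfo plus eTag and check flag), a Person entity type, and a file-per-stream layout on disk. Verify the exact signature against the October 2010 CTP assemblies before relying on it.

```csharp
// Hypothetical sketch: mirrors IDataServiceStreamProvider.GetReadStream with an
// added ResourceStreamInfo identifying which named stream is wanted.
public Stream GetReadStream(object entity, ResourceStreamInfo streamInfo,
                            string etag, bool? checkETagForEquality,
                            DataServiceOperationContext operationContext)
{
    var person = (Person)entity;
    // streamInfo.Name is "Photo" or "PhotoThumbnail"; map it to a file on disk
    var path = Path.Combine(@"C:\photos", streamInfo.Name, person.ID + ".jpg");
    return File.OpenRead(path);
}
```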


As you can see, Named Resource Streams add a significant new feature to Data Services, which is very useful if you have entities that contain metadata for anything with multiple representations, such as photos or videos.

Named Resource Streams are very easy to use client-side and build upon our existing Stream capabilities on the server side.

As always we are keen to hear your thoughts and feedback.

Alex James (@adjames) proposed Support for Any and All OData operators in an 11/11/2010 post to the OData blog and mailing list:

One thing that folks on this list and elsewhere have brought up is the need to express filters based on the contents of a collection, be that a set of related entities or a multi-valued property.

This is because currently there is no easy way to:

  • find all movies starring a particular actor
  • find all orders where at least one of the order-lines is for a particular product
  • find all orders where every order-line has a value greater than $400
  • find all movies tagged 'quirky' - where Tags is a multi-valued property.
  • find all movies with at least one actor tagged as a favorite

This proposal addresses that by introducing two new operators, "any" and "all".


For example it would be nice if this:

~/Movies/?$filter=any(Actors,Name eq 'John Belushi')

could be used to find any John Belushi movie. And this:

~/Orders/?$filter=any(OrderLines,Product/Name eq 'kinect')

would find orders that include one or more 'kinect' orderline. And this:

~/Orders/?$filter=all(OrderLines, Cost gt 400.0m)

would find orders where all orderlines costs more than $400.

Multi-Valued properties

All of the above query across relationships, but this also needs to work for multi-valued properties too:

The key difference here, from a semantics perspective, is that a multi-valued property is often just a primitive rather than an entity, so you often need to refer to 'it' directly in comparisons, perhaps something like this:

~/Movies/?$filter=any(Tags, it eq 'quirky')

In this example Tags is a multi-valued property containing strings. And 'it' or 'this' or something similar is required to refer to the current tag.

Given the need to be able to refer to the 'current' item being tested in the predicate, perhaps we should force the use of 'it' even when the thing referred to is an entity?

Which would mean this:

~/Movies/?$filter=any(Actors, Name eq 'John Belushi')

Would need to become this:

~/Movies/?$filter=any(Actors, it/Name eq 'John Belushi')

Interestingly forcing this would conceptually allow for queries like this too:

~/Movies/?$filter=any(Actors, it eq Director)

Here we are looking for movies where an actor in the movie is also the movie's director.

Note: Today OData doesn't allow for entity to entity comparisons like above, so instead you'd have to compare keys like this:

~/Movies/?$filter=any(Actors, it/ID eq Director/ID)

Nested queries

Our design must also accommodate nesting too:

~/Movies/?$filter=any(Actors,any(it/Tags, it eq 'Favorite'))

Here we are asking for movies where any of the actors is tagged as a favorite.

Notice though that 'it' is used twice and has a different meaning each time. First it refers to 'an actor' then it refers to 'a tag'.

Does this matter?

Absolutely, if you want to be able to refer to the outer variable inside an inner predicate. That ability is vital if you need to ask questions like:

  • Find any movies where any of the actors is tagged as a favorite
  • Find any movies where an actor has also directed another movie

Clearly these are useful queries.

LINQ handles this by forcing you to explicitly name your variable whenever you call Any/All.
from m in ctx.Movies
where m.Actors.Any(a => a.Tags.Any(t => t == "Favorite"))
select m;

from m in ctx.Movies
where m.Actors.Any(a => a.Movies.Any(am => am.Director.ID == a.ID))
select m;

If we did something similar it would look like this:
~/Movies/?$filter=any(Actors a, a/Name eq 'John Belushi')

This is a little less concise, but on the plus side you get something that is much more unambiguously expressive:

~/Movies/?$filter=any(Actors a, any(a/Movies m, a/ID eq m/Director/ID))

Here we are looking for movies where any of the actors is also a director of at least one movie.

Final Proposal

Given that it is nice to support nested scenarios like this, it is probably better to require explicit names.

Trying to be flexible for the sake of a little conciseness often leads to pain and confusion down the road, so let's not do that :)

That leaves us with a proposal where you have to provide an explicit alias any time you use Any or All.

Per the proposal here are some examples of valid queries:

~/Orders/?$filter=all(OrderLines line, line/Cost gt 400.0m)
~/Movies/?$filter=any(Actors a, a/Name eq 'John Belushi')
~/Movies/?$filter=any(Actors a, a/ID eq Director/ID)
~/Movies/?$filter=any(Actors a, any(a/Tags t, t eq 'Favorite'))
~/Movies/?$filter=any(Actors a, any(a/Movies m, m/Director/ID eq a/ID))


Adding Any / All support to the protocol will greatly improve the expressiveness of OData queries.

I think the above proposal is expressive, flexible and unambiguous, and importantly it also addresses the major limitations of MultiValue properties that a few of you were rightly concerned about.

As always I'm very keen to hear your thoughts…

Cumulux posted a SQL Azure OData Service overview on 11/11/2010:

OData is an emerging web protocol for querying and updating data over the Web. It allows data to be queried and updated in small, addressable pieces, freeing it from monolithic application silos. OData is built upon technologies such as HTTP, REST, the Atom Publishing Protocol and JSON to provide access to information from a variety of applications, services, and stores. OData is implemented in a manner consistent with Internet standards, which enables data integration and interoperability across a broad range of clients, servers, services, and tools.

OData can be used to access and update data from a variety of sources: relational databases, file systems, content management systems, or traditional websites. In addition, a variety of clients, from ASP.NET, PHP and Java websites to Excel and mobile device applications, can seamlessly access vast data stores through OData.

SQL Azure OData Service provides an OData interface to SQL Azure databases, hosted by Microsoft. This feature is currently available only as part of SQL Azure Labs. Users with a Live ID can sign in to try the feature.

This integration gives SQL Azure an additional advantage and offers developers a great technical service for controlling and working with the required data efficiently over the web. The end user gets a more optimized and quicker web service.

For a more detailed technical overview of the OData Service, refer to the Microsoft PDC 2010 session on OData Services.

Cumulux continued its Azure-related posts with Microsoft Azure Market Place on 11/11/2010:

In the announcement of the Azure Marketplace at the Microsoft Professional Developers Conference 2010, it is defined as "an online marketplace for developers to share, find, buy and sell building block components, training, service templates, premium data sets plus finished services and applications needed to build Windows Azure platform applications." There are two sections in the Marketplace: the Data Market and the App Market.

The Data Market, codenamed Project Dallas, contains data, imagery, and real-time web services from various leading data providers and authoritative public data sources. Users will have access to data points and statistics on demography, the environment, financial markets, retail business, weather, sports, etc. The Data Market uses visualizations and analytics to enable insight in addition to the data. This feature is commercially available; users can access the Microsoft Azure Marketplace DataMarket to shop for data.

The App Market will be launched as a Beta edition by the end of 2010. This will include listings of building block components, training, services, and finished services/applications. These building blocks are designed to be incorporated by other developers into their Windows Azure platform applications. Other examples include developer tools, administrative tools, components and plug-ins, and service templates.

This feature will redefine the way work is done and will give rise to innovative solutions in the cloud. It will define the landscape in which companies that develop products on and for Microsoft Azure will compete, deliver and grow. With this feature in place, data and information will be readily available to developers and users once the respective services are subscribed to.

For a more detailed overview of the Microsoft Azure Marketplace, refer to the Microsoft PDC 2010 session on the Azure Marketplace.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Igor Ladnik described Automating Windows Applications Using Azure AppFabric Service Bus in an 11/13/2010 post with demo and source code to the Code Project:

An unmanaged Windows GUI application with no exported program interface gets automated with injected code. A WCF-aware injected component allows clients located anywhere in the world to communicate with the automated application via the Azure AppFabric Service Bus, despite firewalls and dynamic IP addresses.

Introduction

This article is the third in a series of my CodeProject articles on automation of unmanaged Windows GUI applications with no exported program interface ("black box" applications). These applications are referred to below as target applications. While using the same injection technique, the articles present different ways of communication between the injected code and the outside world.


The suggested technique transforms the target application into an automation server, i.e., its clients (subscribers) running in separate processes are able to control the target application and get some of its services in either synchronous or asynchronous (by receiving event notifications) mode. In the previous articles [1] and [2], this transformation was described in detail with appropriate code samples, so here only a brief overview is given.

The course of actions for the transformation is as follows:

  • A piece of foreign code is injected into the target application using the remote thread technique [3];
  • The injected code subclasses windows of the target application, adding the desired [pre]processing for Windows messages received by the subclassed windows;
  • A COM object is created by the injected code in the context of the target application process. This object is responsible for communication of the automated target application with the outside world.

The above steps are common to all three articles. The main difference lies in the nature and composition of the COM object operating inside the target application. In the earliest article [1], the communication object was an ordinary COM object self-registered with the Running Object Table (ROT). It supports a direct interface for inbound calls and an outgoing (sink) interface for asynchronous notification of its subscribers. Such objects provide duplex communication with several subscriber applications (both managed and unmanaged) machine-wide, but cannot support communication with remote subscriber applications since the ROT is available only locally.

The communication object of the next generation [2] constitutes a COM Callable Wrapper (CCW) around a managed WCF server component. This allows the automated target application to be connected to both local and remote subscribers, thus considerably expanding the access range of the newly formed automation server. But this advanced communication object still has a limitation: it supports only remote subscribers located in the same Local Area Network (LAN). Subscribers outside the LAN are banished by the firewall and the lack of a permanent IP address for the target application machine. It means that if I automate a GUI application on my home desktop, then my daughter and my dog can utilize its services from their machines at home (within the same LAN), whereas a friend of mine is deprived of such pleasure in his/her home (outside my LAN).

Azure AppFabric Service Bus

This is where the Microsoft Azure platform [4] comes to the rescue. Among other useful things, it offers the AppFabric Service Bus, a technology and infrastructure enabling connection between a WCF-aware server and client with changeable IP addresses located behind their firewalls. Since a dynamic IP address and a firewall are obstacles for inbound calls but transparent for outbound calls, the Service Bus acts as an Internet-based mediator for client and server outbound calls.

It turned out that making the automated target application available world-wide with the Azure Service Bus is amazingly simple. Taking the software in article [2] as my departure point, I found that no change in code is required (well, actually I did change the code a little, but this change was not compulsory). Only three actions should be carried out:

  • Open an account for the Windows Azure AppFabric Service Bus and create a service namespace (see MSDN or, e.g., [5], chapter 11);
  • Install the Windows Azure SDK on both the target application and client machines; and
  • Change the configuration files for both the target application and the client.

New configuration files are needed because usage of the Service Bus implies a special relay binding with appropriate security settings and a special endpoint behavior enabling the Service Bus connection. In our sample, netTcpRelayBinding is used. The endpoint behavior provides issuerName and issuerSecret parameters for the Service Bus connection. Both of them should be copied from your Azure AppFabric Service Bus account. From the same account, the mandatory part of the WCF service base addresses should be copied:



(The SERVICE-NAMESPACE placeholder should be replaced with your service namespace.) Details of the configuration files can be seen in the sample.
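As an illustration, the service side of such a configuration might look something like the sketch below. This follows the general shape of the Azure AppFabric SDK configuration schema of that era; the behavior name, service name, contract, and address path here are placeholders rather than the sample's actual values:

```xml
<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="sharedSecretCredentials">
        <transportClientEndpointBehavior credentialType="SharedSecret">
          <clientCredentials>
            <sharedSecret issuerName="ISSUER-NAME" issuerSecret="ISSUER-SECRET" />
          </clientCredentials>
        </transportClientEndpointBehavior>
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="NotepadHandlerNET.ServiceImpl">
      <endpoint address="sb://SERVICE-NAMESPACE.servicebus.windows.net/NotepadHandler"
                binding="netTcpRelayBinding"
                behaviorConfiguration="sharedSecretCredentials"
                contract="NotepadHandlerNET.INotepadHandler" />
    </service>
  </services>
</system.serviceModel>
```

The client side uses the same binding, behavior, and address, with the endpoint declared under <client> instead of <services>.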

Code Sample

As in the previous articles, the well-known Notepad text editor is taken as the target application for automation. The console application InjectorToNotepad.exe injects NotepadPlugin.dll into the target application with the remote thread technique using the Injector.dll COM object. The injected NotepadPlugin subclasses both the frame and view windows of Notepad and creates the NotepadHandlerNET managed component in a COM Callable Wrapper (CCW). NotepadHandlerNET is responsible for communication with the outside world. This technique was explained in full detail in my previous article [2].

Build and Run Sample
  • Run Visual Studio 2010 as administrator and open NotepadAuto.sln.
  • Open the Solution Property Pages, select Multiple startup projects, set the Action attribute of the InjectorToNotepad and AutomationClientNET projects to Start, and press the OK button.
  • In the App.config files of the NotepadHandlerNET and AutomationClientNET projects, replace the values of the issuerName and issuerSecret attributes in the <sharedSecret> tag, and the SERVICE-NAMESPACE placeholder, with the appropriate values from your Azure AppFabric Service Bus account.
  • Build and run the solution.
The output of the projects will be placed in the Release or Debug (according to your build choice) working directory. The InjectorToNotepad application copies Notepad.exe to the same directory, and the configuration file Notepad.exe.config is created during the build. The InjectorToNotepad console application runs Notepad from the working directory and should report SUCCESS! However, it takes some time before the word [AUTOMATED] appears in the caption of the Notepad frame window. Then press the big button labeled "Bind to Automated Notepad via Azure Service Bus". Binding takes time (up to 30 seconds). On its completion, the big button becomes disabled and the three smaller operation buttons are enabled. Please see the Test Sample section to perform the test.
Run Demo

The demo contains two directories, namely Injector and Client. The Injector directory should be placed on the target application machine; it contains all components required for injection into the Notepad target application. The Client directory should be placed on client (subscriber) machines.

Before starting the sample, please install the Windows Azure SDK on both the target application and client machines. In the configuration files Notepad.exe.config and AutomationClientNET.exe.config, replace the values of the issuerName and issuerSecret attributes in the <sharedSecret> tag, and the SERVICE-NAMESPACE placeholder, with the appropriate values from your Azure AppFabric Service Bus account. Please note that cmd and exe files (except for the target application Notepad) should be run as administrator. Make sure that you have stopped any software protecting applications from injection (such protective software considers any injection a malicious action).

First, run the COMRegisteration.cmd command file. It registers the COM components WindowFinder.dll and Injector.dll with the regsvr32.exe utility, and the CCW for NotepadHandlerNET.dll with the regasm.exe utility. Then run the InjectorToNotepad.exe console application. It copies Notepad.exe from the system to the working directory (for successful operation of the injected COM object, Notepad should run from the working directory) and performs the injection. It takes some time after the injector prints the message SUCCESS! before Notepad shows the word [AUTOMATED] in its caption. The InjectorToNotepad.exe console application may then be closed. Now our Notepad target application is automated and ready to serve its clients.

From the Client directory (which can be placed on any machine world-wide), run the AutomationClientNET.exe client application and press its big button, "Bind to Automated Notepad via Azure Service Bus". It takes some time (approximately 30 seconds) before the client binds to the automated Notepad. If the binding succeeds, the big button becomes disabled while the operation buttons are enabled.

Test Sample

Now the sample is ready for testing. You may open Notepad's Find dialog or add a custom menu item by pressing the appropriate buttons of the client application. You may type some text in Notepad and copy it to the clients by pressing the client's "Copy Text" button. To test the asynchronous action, type some text in Notepad followed by a sentence-concluding sign (".", "?", or "!"). The typed text will be reproduced in an edit box of the client application.

You may test the sample with several clients. For this, you have to make sure that the URI of each client is unique. So, in the AutomationClientNET.exe.config file of another AutomationClientNET.exe client application, change the suffix of its service base address from /Subscriber1 to, say, /Subscriber2, start this new client, and bind it to the automated Notepad. All active clients receive copied text and asynchronous events when text typed in Notepad is concluded with ".", "?", or "!".


The described approach shows the great usefulness of the Azure AppFabric Service Bus for automating desktop applications. The Service Bus allows an automated application to serve its clients world-wide, overcoming firewall and dynamic IP address limitations, and this is achieved without additional code. The Service Bus may be used as a mediator between any kind of automated application and its clients.

In the sample for this article, the target application and its clients communicate only via the Service Bus. In real-world automation, different endpoints should be provided for a local client, for clients located in the same Local Area Network (LAN), and for "distant" remote clients. Service Bus endpoints should be of the hybrid type, allowing the most efficient way of communicating with each client.


The article presents the evolution of a technique for automating Windows applications with injected objects. A WCF-equipped .NET component inside the unmanaged target process provides communication between the automated application and its clients via the Azure AppFabric Service Bus. This approach overcomes obstacles such as firewalls and dynamic IP addresses on both sides, making the automation server available world-wide.


[1] Igor Ladnik. Automating Windows Applications. CodeProject.
[2] Igor Ladnik. Automating Windows Applications Using the WCF Equipped Injected Component. CodeProject.
[3] Jeffrey Richter, Christophe Nasarre. Windows via C/C++. Fifth edition. Microsoft Press, 2008.
[4] Roger Jennings. Cloud Computing with the Windows Azure Platform. Wiley Publishing, 2009.
[5] Juval Lowy. Programming WCF Services. Third edition. O'Reilly, 2010.

  • 12th November, 2010: Initial version

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Alik Levin explained Windows Phone 7 and RESTful Services: Delegated Access Using Azure AppFabric Access Control Service (ACS) And OAuth in an 11/12/2010 post:

This post is a summary of the steps I took in order to run the Windows Phone 7 sample that Caleb Baker recently published on the ACS CodePlex site. The sample demonstrates how to use the Azure AppFabric Access Control Service to allow end users to sign in to a RESTful service with their Windows Live ID, Yahoo!, Facebook, Google, or corporate accounts managed in Active Directory.

This sample answers the following question:

How can I externalize authentication when accessing RESTful services from Windows Phone 7 applications?


Windows Phone 7 Federated Authentication Scenario


Windows Phone 7 Federated Authentication Solution

Summary of steps:
  • Step 1 – Prepare development environment
  • Step 2 – Create and configure ACS project, namespace, and Relying Party
  • Step 3 – Create Windows Phone 7 Silverlight application (the sample code)
  • Step 4 – Enhance Windows Phone 7 Silverlight application with OAuth capability
  • Step 5 – Test your work
Step 1 – Prepare development environment

Several prerequisites need to be installed in the development environment. Here is the list:

Step 2 – Create and configure ACS project, namespace, and Relying Party
  1. Login to
  2. If you have not created your project and a namespace, do so now.
  3. On the Project page click on “Access Control” link next to your namespace in the bottom right corner.
  4. On the “Access Control Settings” page click on “Manage Access Control” link at the bottom.
  5. Sign in using any of the available accounts. I used Windows Live ID.
  6. On the “Access Control Service” page click on “Relying Party Applications” link.
  7. On the “Relying Party Applications” page click on “Add Relying Party Application”.
  8. Fill the fields as follows:
    1. Name: ContosoContacts
    2. Realm: http://ContosoContacts/
    3. Return URL: http://localhost:9000/Default.aspx
    4. Token format: SWT
    5. Token lifetime (secs): leave default, 600
    6. Identity providers: leave default, I have Google and Windows Live ID
    7. Rule groups: leave default “Create New Rule Group” checked
  9. Click “Generate” button for the token signing key field. Copy the key into a safe place for further reuse.
  10. Click “Save” button.
  11. Click on “Access Control Service” link in the breadcrumb at the top.
  12. On the “Access Control Service” page click on the “Rule Groups” link.
  13. On the “Rule Groups” page click on the “Default Rule Group for ContosoContacts” link.
  14. On the “Edit Rule Group” page click on the “Generate Rules” link at the bottom.
  15. On the “Generate Rules: Default Rule Group for ContosoContacts” page click on “Generate” button.
  16. On the “Edit Rule Group” page click “Save” button.
  17. Click on the “Access Control Service” link in the breadcrumb at the top.
  18. On the “Access Control Service” page click on “Application Integration” at the bottom.
  19. On the “Application Integration” page click on “Login Pages” link.
  20. On the “Login Page Integration” page click on ContosoContacts relying party link.
  21. On the “Login Page Integration: ContosoContacts” page copy the link at the bottom without the ending “&callback=”. It will be used in the application itself. Paste it to a safe place for further reuse.  Mine looks as follows:
Step 3 – Create Windows Phone 7 Silverlight application (the sample code)
  1. Make sure you completed all requirements in the Step 1 – Prepare your development environment.
  2. Run Visual Studio 2010 as Administrator.
  3. Open ContosoContactsApp.sln – this is the code you downloaded and extracted as part of the Windows Phone 7 Sample code.
  4. Open SignIn.xaml.cs in ContosoContactsApp project.
  5. Locate SignInControl.GetSecurityToken( new Uri(“...”)); and update with the URI from the previous step.
  6. Open MainPage.xaml.cs in the same project, ContosoContactsApp.
  7. Locate client.DownloadStringAsync(new Uri("...”)); and update with http://localhost/contacts/Directory.
  8. Add the CustomerInformationService project to the solution. You will be prompted to create an IIS virtual directory for it.
Step 4 – Enhance Windows Phone 7 Silverlight application with OAuth capability
  1. Add DPE.OAuth project to the solution. The project comes as part of the Source Code for FabrikamShipping Demo.
  2. Add a reference to the DPE.OAuth project from the CustomerInformationService project.
  3. Open web.config of the CustomerInformationService project.
  4. Locate issuerIdentifier=”https://[Service Namespace]” entry.
  5. Replace [Service Namespace] with your namespace. In my case it is my-namespace1. Look at (21) for reference in Step 2 – Create and configure ACS project, namespace, and Relying Party.
  6. Locate the serviceKey="[Insert Symmetric Token Signing Key]" entry and update it with your key. Look at (9) for reference in Step 2 – Create and configure ACS project, namespace, and Relying Party. If you missed that step, navigate to the ACS portal, click on your project, click on the “Access Control” link next to your namespace, click on the “Manage Access Control” link, click on “Certificates and Keys”, click on ”Default Signing Key for ContosoContacts”, and copy the value in the Key field on the “Edit Token Signing Certificate or Key” page.
Step 5 – Test your work 
  1. Switch to Visual Studio.
  2. Make sure the ContosoContactsApp solution is selected.
  3. Press the F5 button.
  4. You should see the Sign In screen:
  5. Click on the Sign In link. You should be presented with the options for signing in: 
  6. I clicked on Google, which presented me with Google’s sign-in page: 
  7. After I provided my credentials, Google asked for my consent for a service to request information on my behalf:
  8. Click the “Allow” button and receive the desired information from the service:
Related Books
Related Info

DZone posted on 11/11/2010 Design Patterns used with Windows Azure App Fabric: The Service Bus, a 00:11:30 video interview with Clemens Vasters (@ClemensV):

Clemens Vasters is a principal technical lead at Microsoft. His role is product manager and architect for the Service Bus area of the Windows Azure AppFabric. In this interview, you'll learn about the work he's doing, as well as about design patterns, practices, and techniques used with a service bus. He also gives real-world customer cases where these practices are used.

The Windows Azure AppFabric has two services: a Service Bus and Access Control. The Service Bus connects cloud applications to resources that are on-premises. This allows users of Azure to keep control of their data while still sending what they need up to the cloud. Access Control solves the issues surrounding federated identity. Hybrid cloud applications and solutions are especially well suited to Windows Azure AppFabric.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

The Windows Management Scripting Blog posted Azure CDN (Content Delivery Network) In Your App on 11/12/2010:

The Windows Azure CDN is a convenient way for application developers to minimize the latency that geographically dispersed application users will experience. As of February 2010, the Azure CDN is in CTP and incurs no charges.

Who Needs a CDN?

Application users who are geographically far away from the data center where a file they are attempting to access resides will experience a lag due to the distance the data has to travel to reach their client. CDNs attempt to solve this issue by caching copies of the file in various geographic locations across the globe; the user attempting to access the file is served from the nearest data center to minimize the latency. Latency is an especially acute problem for large files that are streamed to users (notably video files), but with the declining cost of CDNs they are more and more commonly used to deliver smaller static files – the theme files for (such as the logo and CSS files) are all served via the Amazon CloudFront CDN.

Introducing the Windows Azure CDN

The Azure CDN features 20 edge locations (geographic points where the cached data is served from) for blobs in public containers.

Using The Azure CDN

The Windows Azure CDN is tightly integrated into Azure Storage. To enable it, simply navigate to the appropriate storage account in the Azure developer portal and click Enable CDN. This turns on the CDN for that storage account, although there may be a delay of up to 60 minutes before it is available for use.
Once the CDN is functioning, the only change will be the URL from which your blobs are accessible; the URL will be in the following format – http://<guid><container name>/.

TTL – Time to Live

One issue you will have to contend with is the caching of the blobs. If you need to update a blob (i.e., replace the file with an updated version), the new version will not be accessible to users until the cache has expired and the Azure CDN refreshes the cache from the actual blob in the storage container. The expiry time for the cache is known as the TTL (Time To Live). The default for Azure is 72 hours, but this can be manually adjusted by setting the CloudBlob.Attributes.Properties.CacheControl property of the CloudBlob object in your code. Note that setting the TTL to a small value reduces the effectiveness of the CDN, as the CDN only reduces request time when the blob is cached at an edge location.
It should also be noted that a deleted blob remains available at any edge location where it is cached until the TTL expires.
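As a sketch of adjusting the TTL with the StorageClient library (the container and blob names are illustrative; `blobClient` is assumed to be an initialized CloudBlobClient, and `Properties` here is the same BlobProperties object reachable through `Attributes.Properties`):

```csharp
// Set a one-hour edge-cache TTL on an existing public blob.
CloudBlob blob = blobClient.GetBlobReference("public-container/logo.png");
blob.Properties.CacheControl = "public, max-age=3600"; // TTL in seconds
blob.SetProperties(); // persist the new Cache-Control header to the blob service
```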

Registering a Custom Domain for Azure CDN

If the default URL does not appeal, you can register a custom domain as the CDN URL for that storage account (note that only one custom domain can be registered per CDN endpoint). To register a custom domain, navigate to your storage account; if it has the Azure CDN enabled, it will have a URL in the format http://<guid> listed under Custom Domains. Click ‘Manage’ to the right of this URL. On the Custom Domain for CDN Endpoint page, enter the custom domain and click Generate Key. The next page shows a CNAME record which you must enter for the domain and then Validate, to prove you are the domain’s admin or owner.
Once the domain’s ownership has been validated, you can create a CNAME record for your chosen domain pointing to your default Azure CDN URL, which allows you to use your custom domain in all URLs.

You can also read:

  1. Part 3 – Auto migrate existing ASP.NET application onto Windows Azure
  2. Part 2 – Auto migrate existing ASP.NET application onto Windows Azure
  3. Part 1 – Auto migrate existing ASP.NET application onto Windows Azure
  4. Host PHP in the Cloud with Windows Azure
  5. Windows Azure Storage

Tavasoft published a BizTalk Development Made More Powerful with the Introduction of AppFabric Connect press release on 11/11/2010:

Powerful BizTalk development has become even more powerful with the advent of AppFabric Connect. Updates and upgrades make the product even more useful to the companies using it. Microsoft has proved this by adding a feature called AppFabric Connect to BizTalk Server 2010, which now allows BizTalk developers to use capabilities from Windows Azure AppFabric and Windows Server AppFabric for BizTalk development.

Windows Server AppFabric helps developers use the BizTalk Mapper and BizTalk LOB Adapters within Windows Workflow applications. AppFabric Connect in Windows Azure AppFabric, on the other hand, helps BizTalk developers working on any BizTalk development project to host service endpoints in the cloud for BizTalk artifacts. The "connection", when used in this way, facilitates communication with third parties without any complex firewall and security infrastructure.

AppFabric Connect for BizTalk development can be useful in different development scenarios, such as when the application does not need all BizTalk capabilities to function properly, or when additional BizTalk infrastructure is not needed. Microsoft has suggested a few scenarios where BizTalk developers can effectively use AppFabric Connect.

On-premises LOB integration: Through AppFabric Connect, developers can use the BizTalk LOB Adapter Wizard within workflow applications to create workflow activities that represent interactions with LOB systems. Developers can also use the BizTalk Mapper with the BizTalk LOB Adapter Wizard for schema transformation.

Integration with customers and partners: The reach of LOB services can be extended into Windows Azure AppFabric using AppFabric Connect. Though the services run on premises, the cloud can be used to access them easily and securely.

Developers can also use AppFabric Connect in BizTalk development projects for low-latency on-premises LOB integration. Though this scenario is not explained in much detail, it is suggested that the adapter enables fast, easy, and secure communication with partners and clients.

Beyond these scenarios, BizTalk developers can work out different ways to use the AppFabric adapter in their BizTalk development projects.

# # #

Tatvasoft is rightly called a software giant, as it provides powerful BizTalk development ( services using all the capabilities of the platform and its adapters.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Avkash Chauhan described Handling "System.AppDomainUnloadedException was unhandled" exception in Windows Azure service in an 11/13/2010 post:

When running a web service application in the Windows Azure development fabric, in some cases you may hit the following error:

System.AppDomainUnloadedException was unhandled

Message: Attempted to access an unloaded AppDomain.

This issue has been discussed in detail in the Azure forums:

First, I would say that there could be several different reasons for this issue; however, even if you receive this error in the development fabric, your service will have no problem in the cloud due to this exception. There are several ways to solve this issue; a few of them are listed below:

  1. Check the environment variable called _CSRUN_STATE_DIRECTORY and delete the contents of the folder it points to. Due to a known issue, the data in this folder could be corrupted.
  2. Create a brand-new VS2010 cloud application with any template code, and compile it first to ensure there is no error. Then add your code to it step by step, checking which piece of code causes the error.
  3. If you have any non-C# compiler references (e.g., J#) in your web.config, remove them completely.
  4. Ensure all your references have Copy Local set to true.

What I found while working with a partner is that the following change in Global.asax solved this problem:

  • Original:         <%@ Application Codefile="Global.asax.cs" Inherits="<Your_Service_Name>.Global" Language="C#" %>
  • Change to:    <%@ Application Language="C#" %>

The purpose of this change is to remove the dependency of your Global.asax on its code file and on the code-behind application class it inherits, if applicable.

Allan Spartacus Mangune posted a Building PHP Applications on Windows Azure tutorial on 11/8/2010:

This tutorial demonstrates Windows Azure's ability to run PHP applications. Before you start, make sure the following are installed/enabled on your development machine:

  • PHP
  • Visual Studio 2010
  • Windows Azure Tools for Visual Studio
  • IIS with WCF HTTP Activation

You can download PHP from. Choose the binaries that are labeled "VC9 x86 Non Thread Safe": they are optimized for FastCGI because the PHP process is not required to wait for thread synchronization, and they work best when you use PHP through the standard CGI interface.

Visual Studio Cloud Project

In this tutorial, you will use Visual Studio 2010 to build your Windows Azure project with PHP.

  1. Open the Visual Studio 2010 and create a new project. File > New Project > Visual C# > Cloud > Windows Azure Cloud Service.
  2. Enter PHPCloudService  as the name of the project.
  3. Click OK and then select CGIWebRole.
  4. Click the > arrow to create a Cloud Service Solution.
  5. Rename the solution to WebCgiRole.
  6. Click OK.

Visual Studio will create the Windows Azure project as shown in Figure 1.

Figure 1 - Windows Azure Cloud Service with PHP

As you can see, two separate projects were created in the solution: PHPCloudService and WebCgiRole. The PHPCloudService project holds information about the configuration of the WebCgiRole project and the worker role attached to it. WebCgiRole is a typical FastCGI project that can run PHP web pages.

The Cloud Service Project

The cloud service project, named PHPCloudService, holds two configuration files, namely ServiceConfiguration.cscfg and ServiceDefinition.csdef. ServiceConfiguration.cscfg, shown in Figure 2, is a configuration file that holds information about the web roles in the cloud service and the number of instances of each web role.

Figure 2 - The ServiceConfiguration.cscfg file

The major elements of the ServiceConfiguration.cscfg file as shown in Figure 2 are explained below.

  • ServiceConfiguration - The root of the configuration file; it contains the name of the cloud service.
  • Role - Points to a web role and sets the number of instances of that web role created in the cloud.
  • ConfigurationSettings - Sets the role's collection of configuration settings.
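In case the screenshot does not render, a minimal .cscfg with these elements has roughly the following shape (the instance count is illustrative):

```xml
<?xml version="1.0"?>
<ServiceConfiguration serviceName="PHPCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <!-- One Role element per web or worker role in the service -->
  <Role name="WebCgiRole">
    <!-- Number of instances of this role to run in the cloud -->
    <Instances count="1" />
    <ConfigurationSettings />
  </Role>
</ServiceConfiguration>
```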

The ServiceDefinition.csdef defines the available roles and how the native code is executed. It also sets the service endpoints and other configuration settings.

The major elements of the ServiceDefinition.csdef as shown in Figure 3 are explained below.

Figure 3 - ServiceDefinition.csdef file

  • ServiceDefinition - The root element, which defines the name of the service definition and its namespace.
  • WebRole - Defines the unique name of the web role and the protocol used for the external endpoint of the web role.
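A sketch of a matching .csdef for this FastCGI scenario (endpoint name and port are illustrative; enableNativeCodeExecution allows the role to launch the native php-cgi.exe process):

```xml
<?xml version="1.0"?>
<ServiceDefinition name="PHPCloudService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <!-- Native code execution must be enabled to host the PHP FastCGI interpreter -->
  <WebRole name="WebCgiRole" enableNativeCodeExecution="true">
    <InputEndpoints>
      <!-- External HTTP endpoint for the web role -->
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
  </WebRole>
</ServiceDefinition>
```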
PHP Interpreter

Access the PHP install folder on your computer and copy the interpreter's folder to the root of your CGI application, as shown in Figure 4 below:


The web.roleconfig file is used to configure IIS to run FastCGI in the Windows Azure cloud. Open the web.roleconfig of the WebCgiRole project and set the full path to php-cgi.exe as shown in Figure 5 below:

Figure 5 - Setting the path to the php-cgi.exe
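In case Figure 5 does not render, the registration follows the standard IIS fastCgi pattern; a sketch (assuming the interpreter was copied into a php folder under the web role's root, as in the previous step):

```xml
<?xml version="1.0"?>
<configuration>
  <system.webServer>
    <fastCgi>
      <!-- %RoleRoot% expands to the role's installation point in the cloud;
           the php folder is the interpreter copied into the project above -->
      <application fullPath="%RoleRoot%\approot\php\php-cgi.exe" />
    </fastCgi>
  </system.webServer>
</configuration>
```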

• Bruno Terkaly posted Source Code and Snippets for my one-hour screencast–Phone and Cloud–A natural pair of technologies

Purpose of Blog: I recently posted a huge, one-hour screencast of building mobile cloud applications, leveraging SQL Azure as a data source.


Full Source Code to Phone Cloud Application

I chose orange for the ticker symbol because the San Francisco Giants won the World Series. Email me if you would like access to the video at

One of the applications will look like this:


There is also a Windows Phone 7 Project here:


Windows Phone 7 Sample


Bruno continues with 14 source code snippets. I’ve requested access to his video.

• Patrick Butler Monterde asserted Azure and Patterns & Practices still apply in an 11/13/2010 post:

Migrating applications to Azure, or developing new applications on it, provides the opportunity to review architectures, refactor code, and apply best practices that we may have missed. This is the most current documentation available from the MS Patterns and Practices team:


Solution Development Fundamentals

Solution development fundamentals cover the cross-cutting aspects of solution development, such as security, caching, data access, validation, exception management, and so on. It also includes application architecture, development process, the software development life cycle (SDLC), and application life cycle guidance. You will find guidance and patterns that are generally applicable to solution development regardless of the specific architecture or scenario.

Active Releases
  • Enterprise Library. Enterprise Library is a collection of application blocks that address common cross-cutting concerns that developers face when developing applications. The latest version of Enterprise Library (version 5.0) was released in April 2010 and includes support for Visual Studio 2010, as well as many performance and functionality improvements.
  • Unity. Unity is our implementation of a dependency injection container. We recently released Unity 2.0 (also part of the latest version of Enterprise Library) which includes a number of key features, including support for policy-based interception and support for Silverlight. Unity provides a mature and stable foundation for building high-quality, flexible, pattern-based libraries and reference implementations.
  • A Guide to Claims–based Identity and Access Control. This guide gives you enough information to evaluate claims-based identity as a possible option when you're planning a new application or making changes to an existing one. It is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates Web applications and services that require identity information about their users.
  • Microsoft Application Architecture Guide, 2nd Edition. This guide provides design-level guidance for the architecture and design of applications built on the .NET Framework.
  • Performance Testing Guidance for Web Applications. This guide shows you an end-to-end approach for implementing performance testing for your Web applications.
General Guidance
Enterprise Library
Unity Application Block
Technical Articles

Gavin Clarke asked “MongoDB. Memcached. MariaDB?” in a prefix to his Microsoft coaches NoSQL options for Azure cloud post to the Channel Register blog of 11/12/2010:

Microsoft is working with NoSQL startups to simplify deployment of their databases on its Azure cloud.

MongoDB shop 10gen has told The Reg that it's close to announcing tighter integration between MongoDB and Azure. The news is expected next month. 10gen's Roger Bodamer told The Reg that MongoDB will be integrated with Azure, with the changes available to all MongoDB users.

Microsoft has also started working with Membase to give Azure an in-memory key-value store component using memcached. The pair are looking for ways to tweak Membase's supported version of memcached for Microsoft's cloud.

The thinking is that once any kinks are ironed out then Microsoft's Azure site would point to Membase's site as a kind of preferred version of memcached for use by Azure customers. Ultimately, Membase could be offered on Microsoft's cloud as a service.

Separately, Microsoft is understood to have had talks with Monty Widenius's Monty Program, busy forking and supporting Oracle's MySQL database with MariaDB. Monty Program and 10gen are on Microsoft's Technology Adoption Program (TAP) for Windows Azure, while Membase is a Microsoft partner.

TAP is a program Microsoft offers early in the life of a product that's used to gather feedback on the kinds of features customers would like to see. People on TAP provide input on features, can file bug reports, and get access to Microsoft's development teams via a technical sponsor.

You can already run some NoSQL databases on Microsoft's cloud, and Azure already offers a database option: SQL Azure, based on Microsoft's popular SQL Server database.

Robert Duffner, Windows Azure's director of product management, told The Reg that Microsoft wants to provide more choices for Azure customers. That attitude - interoperability over "anything you want as long as it's .NET" - is gaining a firm footing inside Microsoft's server and tools business, home to Azure. At the recent Professional Developers' Conference (PDC) Microsoft announced a Java SDK, class libraries, and Eclipse-based tooling for Java applications running on Azure.

The addition of NoSQL suits Microsoft - by bringing more people to Azure - and it suits the NoSQLers, because they get more Windows devs to support.

You can run NoSQL options like Mongo and Memcached on Azure after some fiddling and configuring. The goal now is to deliver a development, deployment, and management experience already familiar to those on Windows, SQL Server, and Visual Basic.

"People want to have choices," Duffner told The Reg. "People want to plug into various frameworks. If you don't want to use our table store but want to take advantage of Membase that's totally fine."

The work on MongoDB is more a matter of polish than major integration. Windows is already one of MongoDB's biggest development platforms next to Linux. "We want to make it feel a bit more native - by providing the Azure install wrapper and more of a native feel for the Windows users," Bodamer told us.

He noted it's still too early to tell if MongoDB will take off in large numbers, but he said that whenever 10gen holds events, people invariably ask about integration with Azure. 10gen's next MongoDB event is on Microsoft's Mountain View, California, campus on December 3.

Kevin Kell published a More Open Source on Azure post to the Learning Tree blog on 11/12/2010:

A few weeks ago, just for fun, I set up Moodle on an Amazon EC2 micro Windows instance I had created. I used the Windows Platform Installer to install the software and all required components. The process was quick and painless and within a very short time my test site was up, online and available for use.

This week I decided to try something different: hosting another community application – this time WordPress – on Azure. For that I used the Windows Azure Companion.

The first step is to download the zip file appropriate for your VM size. I want a small instance so I chose that file. The zip file contains an Azure service package and a configuration file.

Figure 1 Contents of downloaded zip file

The next step was to edit the configuration file. I needed to put in parameters for my particular deployment. My changes are highlighted below.

Figure 2 ServiceConfiguration.cscfg

The Azure Companion runs in a single Worker Role on Windows Azure. I have also provided credentials to an Azure storage account and information about the administrator of this instance.

Note that in this case, rather than writing my own product feed, I am just using the sample product feed contributed by Maarten Balliauw. Writing a custom feed will be the subject for another day!
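Putting those pieces together, the edited file amounts to a handful of service settings. The following is a rough sketch only: the setting names are approximate recollections of the Windows Azure Companion package (not verified against the actual download), and every value is a placeholder.

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch of a Windows Azure Companion ServiceConfiguration.cscfg.
     Setting names are approximate; all values are placeholders. -->
<ServiceConfiguration serviceName="WindowsAzureCompanion"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="AdminWebSite">
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- Storage account that persists the installed applications -->
      <Setting name="WindowsAzureStorageAccountName" value="mystorageaccount" />
      <Setting name="WindowsAzureStorageAccountKey" value="BASE64-KEY-HERE" />
      <!-- Administrator credentials for the Companion's web console -->
      <Setting name="AdminUserName" value="admin" />
      <Setting name="AdminPassword" value="ReplaceWithStrongPassword" />
      <!-- Product feed listing installable applications such as WordPress -->
      <Setting name="ProductListXmlFeed" value="http://example.com/feed.xml" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```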

I then simply deployed the package and configuration to a hosted Azure service that I had previously set up.

This screencast walks through the process of deploying the package then installing and configuring WordPress.

All in all it was a pretty good experience. It did seem to me to be a little more time-consuming and cumbersome than using the Windows Platform Installer on an EC2 instance but, hey, now I am hosted on the Azure platform! That means I get some of the advantages of PaaS over IaaS.

One final comment: in this example I used SQL Azure as the backend database. It may have been more cost-effective to use MySQL. Next week I will explore installing MySQL on Azure and try using it in my WordPress configuration instead.

Robert Duffner posted Thought Leaders in the Cloud: Talking with Jonathan Ellis, Co-Founder of Riptano on 11/11/2010:

Jonathan Ellis [pictured at right] chairs the Cassandra project at the Apache Software Foundation and is the co-founder of Riptano, a commercial entity that provides software, support, and training for Cassandra. Prior to the founding of Riptano in the spring of 2010, Jonathan was a system architect at Rackspace, and before that, he designed and built a large-scale storage system at Mozy.

In this interview, we cover:

  • The Cassandra distributed database, originally developed by Facebook
  • A critical benefit of the cloud: deferred capacity planning
  • Before databases like Cassandra, the Web 2.0 forefathers were stuck using SQL databases in a NoSQL way
  • Controversies and options for scaling with relational and distributed databases
  • High-profile attempts to move to Cassandra, such as Reddit and Digg
  • What's in store for the future of Cassandra

Click here for the interview transcript if it is not visible on the page at the above link.

David Chappell announced on 11/11/2010 the availability of his Microsoft-sponsored The Windows Azure Programming Model white paper:

Writing an application for Windows Azure is much like writing one for Windows Server. But it isn't exactly the same. In fact, Windows Azure is different enough to qualify as having its own programming model.

I've written a Microsoft-sponsored white paper, available here, that describes the Windows Azure programming model at a high level. There's no code in the paper; instead, I've tried to explain the concepts that underlie the model. 

The paper lays out three rules that Windows Azure applications should follow:

  • A Windows Azure application is built from one or more roles.
  • A Windows Azure application runs multiple instances of each role.
  • A Windows Azure application behaves correctly when any role instance fails.

You don't absolutely have to follow all of these rules when you create a Windows Azure app. But if you don't, you probably won't be happy with how your app behaves--this isn't the familiar Windows Server world.

Getting a handle on this new programming model isn't hard, but it does require learning a few new things. As the Windows Azure platform gets more important, both in the cloud and on-premises, more and more apps will be built on it rather than on Windows Server. If you plan to build apps on Windows Azure--that is, if you plan to continue developing in the Microsoft environment--I encourage you to read this paper.
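To make the third rule concrete, here is a hypothetical sketch; the paper itself deliberately contains no code, and none of this uses real Windows Azure APIs. The pattern the rule implies is queue-centric work dispatch: role instances pull work from a shared queue, so a message leased by a crashed instance simply becomes visible again and is handled by a surviving instance.

```python
# Hypothetical sketch (not from the paper) of why "behave correctly when any
# role instance fails" pushes Azure apps toward queue-centric designs: a
# message leased by a crashed instance becomes visible again and is picked
# up by a surviving instance. No real Windows Azure APIs are used here.
from collections import deque

class WorkQueue:
    def __init__(self, items):
        self.visible = deque(items)
        self.invisible = {}                 # message -> instance holding the lease

    def get(self, instance_id):
        if not self.visible:
            return None
        msg = self.visible.popleft()
        self.invisible[msg] = instance_id   # hidden while being processed
        return msg

    def complete(self, msg):
        self.invisible.pop(msg, None)       # processing finished: delete for good

    def expire_leases(self, dead_instance):
        # Messages leased by a crashed instance reappear on the queue.
        for msg, owner in list(self.invisible.items()):
            if owner == dead_instance:
                del self.invisible[msg]
                self.visible.append(msg)

q = WorkQueue(["job-1"])
m1 = q.get("instance-0")        # instance-0 takes the job, then crashes
q.expire_leases("instance-0")   # its lease times out
m2 = q.get("instance-1")        # a surviving instance recovers the same job
```

Because no job is ever bound permanently to one instance, losing an instance costs only a retry, which is exactly the behavior the three rules are driving at.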

The Microsoft Case Studies Group posted Mobile Operator Speeds Time-to-Market for Innovative Social Networking Solution on 11/9/2010 (missed when posted):

T-Mobile USA, a leading provider of wireless services, wanted to create new mobile software to simplify communications for families. The company needed to implement the application and its server infrastructure while facing a tight deadline. T-Mobile decided to build the solution with Microsoft Visual Studio 2010 Professional and base it on Windows Phone 7 and the Windows Azure platform. By taking advantage of an integrated development environment and cloud services, the company completed the project in just six weeks. Using a cloud platform instead of maintaining physical servers has also simplified management. As a result, developers have more time available to focus on enhancing the application. Customers will benefit from a streamlined, reliable communications solution with strong security, and T-Mobile is already designing new features for users to enjoy. …

In February 2010, T-Mobile met with Microsoft to discuss the possibility of deploying its application on Windows Phones. When the company learned more about an upcoming release, Windows Phone 7, it knew it had found the right choice. The phone includes unique features such as Hubs, which bring web content and applications together for specific tasks. This capability would be an ideal foundation for the T-Mobile® Family Room™ application.

Other technology enhancements would be helpful as well. For example, T-Mobile could use the Microsoft Push Notification Service in Windows Phone 7 to automatically update information across multiple Windows Phones. With other phones, a mobile application typically polls a web service to check for pending notifications. With Windows Phone 7, a web service could send an alert to the application, reducing the device’s bandwidth and battery consumption.
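The difference between the two models is easy to see in miniature. The following is a hypothetical sketch only; none of these names correspond to the real Microsoft Push Notification Service API. A polling client pays one round-trip per tick whether or not anything changed, while a push client is contacted only when an update actually occurs.

```python
# Toy illustration of polling vs. push notification delivery.
# All class and method names here are made up for illustration.

class Service:
    """A toy web service that can be polled or can push to subscribers."""
    def __init__(self):
        self.pending = []
        self.subscribers = []

    def publish(self, message):
        self.pending.append(message)
        for callback in self.subscribers:   # push path: service contacts device
            callback(message)

    def drain(self):                        # poll path: device contacts service
        out, self.pending = self.pending, []
        return out

class PollingClient:
    """Checks the service on a fixed schedule, whether or not news exists."""
    def __init__(self, service):
        self.service = service
        self.requests_made = 0

    def poll(self, ticks):
        received = []
        for _ in range(ticks):
            self.requests_made += 1         # one network round-trip per tick
            received.extend(self.service.drain())
        return received

class PushClient:
    """Registers a callback; wakes up only when an update actually arrives."""
    def __init__(self):
        self.deliveries = 0

    def on_notify(self, message):
        self.deliveries += 1                # one wake-up per real update

push_svc = Service()
push = PushClient()
push_svc.subscribers.append(push.on_notify)
push_svc.publish("photo added")
push_svc.publish("note posted")             # push client woken twice in total

poll_svc = Service()
poller = PollingClient(poll_svc)
poll_svc.publish("photo added")
messages = poller.poll(100)                 # 100 round-trips for one update
```

Here the poller makes 100 round-trips to collect a single update, while the push client is woken only twice for its two updates; that gap is the bandwidth and battery saving the case study describes.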

Working with Windows Phone 7 would also simplify implementation. The company could speed time-to-market with familiar tools such as the Microsoft Visual Studio 2010 Professional development system and the Microsoft .NET Framework 4. With an outline for the project in place, T-Mobile and Microsoft asked Matchbox Mobile, a mobile software development company, to join the project team. Matchbox Mobile creates innovative applications for original equipment manufacturers, wireless operators, and other vendors. The Microsoft Certified Partner offers a full range of services, including research and development, testing, and implementation.

The next step was choosing a cloud-based infrastructure for hosting the application and data. The team considered multiple services, including offerings from vendors it already worked with. In the end, it chose the Windows Azure platform, a set of cloud computing services that includes the Windows Azure operating system and the Microsoft SQL Azure relational database. “We talked about a lot of different cloud options,” says Lipe. “At the end of the day, we felt that the Windows Azure platform best met our needs with the right feature set, compatibility, and reliability.” Matthew Calamatta, Technical Consultant at Matchbox Mobile, adds, “We found that, with Windows Azure, we could build the solution very quickly.”

T-Mobile was also confident that it could better protect customer information with the Windows Azure platform. With Windows Communication Foundation—a component of the .NET Framework—the team could create a web service that used HTTPS and Transport Layer Security. It could also implement certificate-based authentication.

Working with Visual Studio 2010, the team began a rapid development process with a completion deadline of two months. Matchbox Mobile developers split into two groups, one that worked on the client application while the other designed software for the web server. However, developers from both groups quickly discovered they could work together to expedite the project. “The common tool set and sophistication of the development process with Visual Studio 2010 meant that we could get people from both teams working together,” says Calamatta. “So if people finished something on the client side, they could immediately work on integrating it with the web server.”

In just six weeks, the team delivered its first Family Room application for Windows Phone 7. The application includes a Live Tile displayed on the phone’s Start screen. By using Family Room, members can update calendar information and write notes to the group on a virtual chalkboard. They will also be able to share photos. A web-based application running on Windows Azure completes tasks such as sending push notifications to phones, authenticating devices, and registering new users. Data is stored with the SQL Azure database, and the company also uses SQL Azure analytic tools.

The application is available on the new HTC HD7 Windows Phone from T-Mobile.

Damir Dobric published on 11/11/2010 Windows Azure Starter Page with links to and brief descriptions of Windows Azure resources:

  • Introducing the Windows Azure Platform
    Using computers in the cloud can make lots of sense. Rather than buying and maintaining your own machines, why not exploit the acres of internet-accessible servers on offer today? Learn about the Windows Azure platform in this white paper.
  • Introducing Windows Azure
    Cloud computing is here. Running applications on machines in an Internet-accessible data center can bring plenty of advantages. Yet wherever they run, applications are built on some kind of platform. For on-premises applications, this platform usually includes an operating system, some way to store data, and perhaps more. Applications running in the cloud need a similar foundation. The goal of Microsoft's Windows Azure is to provide this. Part of the larger Windows Azure platform, Windows Azure is a platform for running Windows applications and storing data in the cloud.
  • Windows Azure and ISVs – A Guide for Decision Makers
    Why should an independent software vendor (ISV) care about cloud computing? The answer is simple: Using the cloud has the potential to increase an ISV's revenues and/or decrease its costs. Running code and storing data on computers in large Internet-accessible data centers owned and operated by another organization can offer compelling advantages. If you are responsible for charting your course as an ISV, you'll want to consider how cloud computing can positively impact your business. This white paper explains how ISVs can benefit by using Windows Azure.
  • Windows Azure Security Overview
    This document describes the array of security controls implemented within Windows Azure, so customers can evaluate if these capabilities and controls are suitable for their unique requirements.
  • An Introduction to Windows Azure AppFabric for Developers
    This overview paper introduces the Service Bus and Access Control for the Windows Azure platform AppFabric and how they fit together.
  • Overview of Microsoft SQL Azure
    Companies today are faced with ever-increasing amounts of data from numerous sources that need to be shared across a variety of devices. Meeting these needs requires constant investment in servers, operating systems, storage, and networking. Microsoft® SQL Azure Database provides an improved way to respond to these challenges with enhanced manageability, scalability, and developer agility. This whitepaper provides an overview of the SQL Azure Database. It lays out the advantages for using a cloud-based relational database service, and also describes practical usage scenarios that help you understand how SQL Azure Database can be used to optimize business solutions.
  • Getting Started with SQL Azure
    SQL Azure Database is a cloud based relational database service from Microsoft. SQL Azure provides relational database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This document provides guidelines on how to sign up for SQL Azure and how to get started creating SQL Azure servers and databases.
  • Similarities and Differences - SQL Azure vs. SQL Server
    SQL Azure Database is a cloud database service from Microsoft. SQL Azure provides web-facing database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper provides an architectural overview of SQL Azure Database, and describes how you can use SQL Azure to augment your existing on-premises data infrastructure or as your complete database solution.
  • Windows Azure Platform Storage: An Overview
    This whitepaper presents an overview of the Windows Azure and SQL Azure storage offerings, including a description of the different data types, available storage mechanisms, and a review of the different considerations between on-premises data and cloud-based data.
  • Microsoft Codename "Dallas" – Whitepaper
    Microsoft Codename "Dallas" is a new cloud service that provides a global marketplace for information including data, web services, and analytics. This whitepaper provides an overview on key features, typical scenarios and the architectural layout.
  • System Integrator Whitepaper
    This white paper describes how Microsoft® system integrator partners are using Windows Azure™, an Internet-scale cloud services platform that is hosted in Microsoft data centers, to develop applications and services that are quick to deploy, easy to manage, readily scalable, and competitively priced.
  • Windows Azure Drive Whitepaper – Using a Durable Drive for your NTFS Cloud Applications
    Customers have told us that one of the challenges is taking their already running Windows applications and running them in the cloud while making sure their data is durable while using the standard Windows NTFS APIs. With Windows Azure Drive, your Windows Azure applications running in the cloud can use existing NTFS APIs to access a durable drive. This can significantly ease the migration of existing Windows applications to the cloud. The Windows Azure application can read from or write to a drive letter (e.g., X:\) that represents a durable NTFS volume for storing and accessing data. The durable drive is implemented as a Windows Azure Page Blob containing an NTFS-formatted Virtual Hard Drive (VHD). This paper describes what a Windows Azure Drive is and how to use it.
For developers, developers, developers
  • Cloud Optimization – Expanding Capabilities, while Aligning Computing and Business Needs - A framework for making business decisions about cloud computing.
    Cloud computing now provides organizations with new ways to deploy and maintain enterprise applications— allowing for greater flexibility and reduced complexity. Fully understanding the range of potential cloud computing benefits requires a broad perspective that recognizes that real computing resource optimization aligns computing capabilities with business needs. So, in addition to uptime, organizations can now achieve agility, integration, scalability, accelerated deployment, better utilization, and transparent cost accounting.
  • Windows Azure Table – Programming Table Storage
    Windows Azure Table provides scalable, available, and durable structured storage in the form of tables. The tables contain entities, and the entities contain properties. The tables are scalable to billions of entities and terabytes of data, and may be partitioned across thousands of servers. The tables support ACID transactions over single entities and rich queries over the entire table. Simple and familiar .NET and REST programming interfaces are provided via ADO.NET Data Services. This paper describes these concepts and the advanced features of Windows Azure Table.
  • Windows Azure Blob – Programming Blob Storage
    Windows Azure Storage provides durable, scalable, available, and performance-efficient storage services for the cloud, and it does this through familiar and easy-to-use programming interfaces. Windows Azure Blob provides a simple interface for storing named files along with metadata for a file. This paper describes the Windows Azure Blob programming interface and the advanced blob concepts.
  • Windows Azure Queue - Programming Queue Storage
    Windows Azure Storage provides durable, scalable, available, and performance-efficient storage services for the cloud, and it does this through familiar and easy-to-use programming interfaces. Windows Azure Queue provides reliable storage and delivery of messages for an application. This paper describes the Windows Azure Queue programming interface and the advanced queue concepts.
  • A Developer's Guide to Access Control for the Windows Azure AppFabric
    This whitepaper shows developers how to use a claims-based identity model and Access Control to implement single sign-on, federated identity, and role based access control in Web applications and services.
  • A Developer's Guide to Service Bus for the Windows Azure AppFabric
    This whitepaper shows developers how to use the Service Bus to provide a secure, standards-based messaging fabric to connect applications across the Internet.
  • Security Best Practices For Developing Windows Azure Applications
    This paper focuses on the security challenges and recommended approaches to design and develop more secure applications for Microsoft's Windows Azure platform. Microsoft Security Engineering Center (MSEC) and Microsoft's Online Services Security & Compliance (OSSC) team have partnered with the Windows Azure team to build on the same security principles and processes that Microsoft has developed through years of experience managing security risks in traditional development and operating environments.
  • Security Guidelines for SQL Azure
    SQL Azure Database is a cloud database service from Microsoft. SQL Azure provides Web-facing database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This document provides an overview of security guidelines for customers who connect to SQL Azure Database, and who build secure applications on SQL Azure.
  • Developing and Deploying with SQL Azure
    SQL Azure Database is a cloud based relational database service from Microsoft. SQL Azure provides relational database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This document provides guidelines on how to deploy an existing on-premise SQL Server database into SQL Azure. It also discusses best practices related to data migration.
  • Scaling out with SQL Azure
    SQL Azure Database is a cloud database service from Microsoft. SQL Azure provides Web-facing database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper provides an overview on some scale-out strategies, challenges with scaling out on-premise, and the benefits of scaling out with SQL Azure.
  • Custom IIS Web/Microsoft SQL Server Application Migration Scenario
    Many organizations maintain applications that fit an IIS Web/Microsoft SQL Server architectural pattern. These applications frequently perform important functions and are particularly prevalent as internal or departmental applications. Due to their architecture, these applications tend to require few changes when migrated to Windows Azure, and for numerous other reasons, they often make excellent candidates for early migration. Since these applications tend to avoid high levels of complexity, they may be selected for Migration to Windows Azure as part of an overall application migration and may prove to be an excellent first step into the cloud for many organizations. Learn about migrating custom IIS Web/Microsoft SQL Server applications in this whitepaper.
  • Custom E-Commerce (Elasticity Focus) Application Migration Scenario
    E-commerce applications that deal with uneven computing usage demands are frequently faced with difficult choices when it comes to growth. Maintaining sufficient capacity to expand offerings while continuing to support strong customer experiences can be prohibitively expensive—especially when demand is erratic and unpredictable. The Windows Azure Platform and Windows Azure Services can be used to help these organizations add computing resources on an as-needed basis to respond to uneven demand, and when it makes sense, these organizations can even implement programmatic system monitoring and parameters to help them achieve dynamic elasticity, which is only possible when organizations become unconstrained by the costs and complexity of building out their on-premises computing infrastructure. While programmatic elasticity may be non-trivial to achieve, for certain organizations it can become a competitive advantage and a differentiating capability enabling faster growth, greater profitability, and improved customer satisfaction. Learn about migrating custom e-commerce (elasticity focus) applications in this whitepaper.
  • Custom Web (Rapid Scaling Focus) Application Migration Scenario
    Organizations hosting custom web applications that serve extremely large numbers of users frequently need to scale very rapidly. These organizations may have little to no warning when website traffic spikes, and many organizations that have launched this type of custom application were unprepared for subsequent sudden popularity and explosive growth. For some organizations, their resources became overburdened, and their sites crashed; instead of riding a wave of positive publicity, their upward trajectory was halted, and they needed to repair their reputations and assuage angry customers and site visitors. The Windows Azure Platform and Windows Azure Services can help organizations facing these challenges achieve web application architecture that is both highly scalable and cost effective. Learn about migrating custom web (rapid scaling focus) applications in this whitepaper.

Scott Crenshaw opined “Trade Lots of Little Lock-Ins for One Big Mother Lock-in!” as a deck to his Microsoft is the New "Open" post of 11/11/2010 to NetworkWorld’s Open Atmosphere blog:

The last few weeks have seen some amazing revelations from Redmond. From what I can glean, gone are the days when they fought tooth-and-nail to lock developers in to every element of their stack. They've apparently seen the light, and decided that the way to make more money is to be more ... open. Huh?

Forget .net as the supreme (and only) platform. Microsoft now says Java is a "first class citizen." Why? "We saw that more and more people were writing cloud applications in Java," Amitabh Srivastava, senior vice president of Microsoft's Server and Cloud Division, said. "... we're going to make the whole Eclipse integration with Azure be first class ... we're going to expose the APIs in Windows Azure in Java ... and ... we're investing in optimizing the performance of Java applications on Windows Azure."

For the past two decades, Microsoft has been trying to force people to use Microsoft for every stack component and every tool. The use of one component forced the use of another. And so on, until the entire stack was Microsoft.

Now, they're changing their tune: "You can't ask a developer to use a certain language or tools; you have to evolve the platform to support them where they are." True words, but who ever thought they'd hear them coming from M$?

If these revelations weren't enough, Microsoft acknowledges that HTML 5 will be more pervasive than Silverlight: "... HTML is the only true cross platform solution for everything, including (Apple's) iOS platform," according to Bob Muglia, president of the Server and Tools Division at Microsoft.

What's happening here? Why this tectonic shift?

Here's the answer: Bob Muglia wants to win the platform-as-a-service (PaaS) business, and he recognizes that to do it, he has to win over developers and applications that aren't in the Microsoft camp today. In his words:

"I think that people getting to platform as a service will be the key destination....

"I think there are only two other platforms as a service actually available in the market today: Google with AppEngine, and Salesforce. Both of those are very narrow and special purpose relative to what Windows Azure provides. Windows Azure has a much broader set of services that is applicable to a much broader set of apps than either of those."

And Microsoft is backing this with more than statements of intent. They are unleashing a several hundred million dollar marketing juggernaut to bring the world to Azure.

You've got to give Microsoft credit. They may have gotten beaten up by VMware, bruised by Amazon, and largely overlooked by the cloud market. But they have made some fundamental strategic changes designed to accomplish one thing, and one thing only: driving more applications and more users to their cloud. But the leopard hasn't really changed its spots. They've simply recognized that the control points in the industry are changing. Microsoft is being more open on the periphery to draw users into its monolithic, proprietary cloud platform.

Scott Crenshaw leads Red Hat's Cloud Business Unit, responsible for Red Hat's Cloud Computing and virtualization businesses. In addition, Scott leads the company's drive to integrate marketing across all business units.

Joydip Kanjilal posted a Developing Cloud Applications using Microsoft Visual Studio 2010 [or 2008] beginner’s tutorial to the CodeGuru site on 11/3/2010 (missed when posted):

Cloud Computing is an infrastructure that enables you to develop and deploy your applications in the cloud, i.e., on remote servers. The most important benefits of using a Cloud Computing infrastructure include the following: reduced cost, scalability, flexibility and efficiency. Microsoft's Windows Azure platform is the Operating System for the Cloud and comprises a group of cloud technologies, each providing a specific set of services. This article presents an overview of cloud computing and its benefits, and then discusses how you can develop cloud applications using Microsoft Visual Studio 2010.


To use the code examples illustrated in this article, you should have the following installed in your system:

  • Microsoft Visual Studio 2008
  • Windows Azure Tools for Microsoft Visual Studio

Alternatively, you can have Microsoft Visual Studio 2010 and Windows Azure Tools for Microsoft Visual Studio 2010.

What is Cloud Computing?

Cloud Computing is a buzzword these days. It may be defined as a phenomenon that promises to increase business agility by increasing the velocity with which applications are deployed and by lowering costs. Note that Cloud Computing is not a technology revolution. Rather, it is a business and process evolution. The Windows Azure platform is Microsoft's Cloud Computing framework, providing a wide range of Internet services consumable from both on-premises environments and the internet.

The Microsoft Windows Azure Services platform provides the hosting platform and the necessary tools to develop applications that can reside in the cloud. It comprises the following components:

  1. Windows Azure
  2. SQL Azure
  3. Microsoft .NET Services
  4. Live Services
Building Your First Cloud Service in Microsoft Visual Studio 2010

Microsoft Visual Studio 2010 comes packed with features that ease development of applications for both the desktop and the web. In this section we will explore how to develop cloud applications using Visual Studio 2010. Visual Studio 2010 contains Windows Azure Tools that help developers develop, debug and deploy applications and services that can reside in the cloud. MSDN states: "The Windows Azure Tools and Visual Studio 2010 makes it easy to create, edit, configure, debug and deploy applications that run on Windows Azure. They allow you to leverage your existing skills with ASP.NET and with Microsoft Visual Studio."

To create your first Cloud Service in Microsoft Visual Studio, follow these steps:

  1. Open Microsoft Visual Studio 2010 IDE
  2. Click on File->New Project
  3. Select Cloud Service as the project type

    Figure 1: Select Cloud Service as the project type

  4. Click on Enable Windows Azure Tools and click OK

    Figure 2: Click Enable Windows Azure Tools and click OK

  5. In the screen that comes up next, click on the "Download Windows Azure Tools" button

    In essence, you cannot start developing applications that can reside in the cloud using Microsoft Visual Studio 2010 unless you have the Windows Azure Tools for Visual Studio 2010 installed on your system. Once you have clicked the button shown in the figure above, the download of the Windows Azure Tools for Visual Studio 2010 starts. You can see the progress in the figure that follows.

    Figure 3: Download of the Windows Azure Tools for Visual Studio 2010 in progress

    Note that before you install Windows Azure Tools for Visual Studio 2010 you should have IIS installed in your system. Once the download of Windows Azure Tools for Microsoft Visual Studio 2010 is complete, double-click on the .msi file to install it. The following figure shows the installation progress.

    Figure 4: Installation progress

    When the installation of Windows Azure Tools for Visual Studio 2010 is complete, the following screen appears:

    Figure 5: Installation of Windows Azure Tools for Visual Studio 2010 complete

    Note that you should run Visual Studio 2010 as an administrator. Now that Windows Azure Tools for Microsoft Visual Studio 2010 and the necessary components have been installed, you will get the necessary templates and runtime components that are needed to develop cloud applications--you need not wait to register for access to Azure cloud services and invitation tokens.


<Return to section navigation list> 

Visual Studio LightSwitch

• Gary Short asserts Microsoft’s Office group should market LightSwitch in his TechEd Europe Day 2 – Is LightSwitch in the Wrong Product Group? post of 11/12/2010:

I spoke to a number of attendees who had been to the LightSwitch session today. They told me that the presenter walked the audience through LightSwitch and its place in the Visual Studio family of products. It quickly became clear to the attendees that there was a certain amount of push back from developers in the audience; the general feeling was that LightSwitch is not a developer tool. The feeling is most certainly that LightSwitch is a power user tool and does not belong in the Visual Studio stable. I mean, what does LightSwitch do? It helps you create Silverlight applications, right? Well, I’m a developer; if I want to create Silverlight apps I have ways of doing that already.

So, if LightSwitch shouldn’t be in the Visual Studio stable, where should it be? Personally, I’d like to see it in the Office stable, as that’s where the power users on the Microsoft platform hang out. People who are already creating “applications” using Excel and Access should now have the ability to create Silverlight applications using LightSwitch for Office. If it were in the Office stable, Microsoft would get a double win. Firstly, devs would get off Microsoft’s back about there being a “toy” in the Visual Studio line-up, and secondly, power users would win because they could stop writing mission-critical software in Excel and Access and get their applications on a more “professional” footing.

So why do so many power users write so many apps in Office tools? Simple. A great many enterprises have outsourced their IT capacity. So now to get even the most simple application written someone has to fill out a form in triplicate, submit it to four different people and pay through the nose for it. Not to mention that when the application finally gets delivered, it’s “a day late and a dollar short”. And why do they go through this pain? Well they have no choice normally as the terms of the outsourcing agreement means that all development and support work on applications must be done by the outsourcing company.

However, under many such agreements, Office is deemed to be “software of personal productivity” and so falls outside of the agreement, mainly because Office is on every desktop in the enterprise and if the outsourcing company had to support it, they’d never be done fielding calls of the “how do I make titles bold?” variety. However, this loophole means that if someone wants a new application this side of Armageddon, they can write it immediately using Excel or Access.

So what do you think, am I right? Should LightSwitch be in the Office stable? Leave your comments in the… well, comments.

The session discussed is Drew Robbins’ DEV206 - Building Business Applications with Visual Studio LightSwitch.

I described LightSwitch as a potential Access 2010 competitor in my forthcoming Microsoft Access 2010 In Depth book.

<Return to section navigation list> 

Windows Azure Infrastructure

James Hamilton (pictured below) provides power consumption details on Microsoft’s Quincy, Chicago, San Antonio and Dublin data centers in his Datacenter Power Efficiency post of 11/12/2010:

Kushagra Vaid presented Datacenter Power Efficiency at HotPower ’10. The URL to the slides is below and my notes follow. Interesting across the board, but most notable for all the detail on the four major Microsoft datacenters:

Services speeds and feeds:

  • Windows LiveID: More than 1B authentications/day
  • Exchange Hosted Services: 2 to 4B email messages per day
  • MSN: 550M unique visitors monthly
  • Hotmail: 400M active users
  • Messenger: 320M active users
  • Bing: >2B queries per month

Microsoft Datacenters: 141MW total, 2.25M sq ft

Quincy: 27MW, 500k sq ft total, 54W/sq ft

Chicago: 60MW, 707k sq ft total, 85W/sq ft, over $500M invested, $8.30/W (assuming $500M cost), 3,400 tons of steel, 190 miles of conduit, 2,400 tons of copper, 26,000 yards of concrete, 7.5 miles of chilled water piping, 1.5M man hours of labor. Two floors of IT equipment:

  • Lower floor: medium reliability container bay (standard ISO containers supplied by Dell)
  • Upper floor: high reliability traditional colo facility

Note the picture of the upper floor shows colo cages suggesting that Microsoft may not be using this facility for their internal workloads.

PUE: 1.2 to 1.5, 3,000 construction related jobs

Dublin: 27MW, 570k sq ft total, 47W/sq ft

San Antonio: 27MW, 477k sq ft total, 57W/sq ft

The power densities range between 45 and 85W/sq ft, which is incredibly low for relatively new builds. I would have expected something in the 200W/sq ft to 225W/sq ft range and perhaps higher. I suspect the floor space numbers are gross floor space numbers including mechanical, electrical, and office space rather than raised floor.

Kushagra reports typical build costs range from $10 to $15/W.

Based upon the Chicago data point, the Microsoft build costs are around $8/W. Perhaps a bit more at the other facilities, in that Chicago isn’t fully redundant on the lower floor whereas the generator count at the other facilities suggests they are.
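The density and cost-per-watt figures above follow directly from the quoted capacity, floor-space, and investment numbers. A quick check of the arithmetic (the only inputs are the figures in the slides as reported above):

```python
# Power density (W/sq ft) and build cost ($/W) from the figures quoted above.
# Floor-space numbers are the gross square footage reported in the slides.
datacenters = {
    "Quincy":      {"mw": 27, "sqft": 500_000},
    "Chicago":     {"mw": 60, "sqft": 707_000},
    "Dublin":      {"mw": 27, "sqft": 570_000},
    "San Antonio": {"mw": 27, "sqft": 477_000},
}

# W/sq ft = total watts divided by gross floor space
density = {name: dc["mw"] * 1e6 / dc["sqft"] for name, dc in datacenters.items()}

# Chicago: roughly $500M invested over 60MW of capacity
cost_per_watt = 500e6 / (60 * 1e6)

for name, d in density.items():
    print(f"{name}: {d:.0f} W/sq ft")
print(f"Chicago build cost: ${cost_per_watt:.2f}/W")
# Quincy 54, Chicago 85, Dublin 47, San Antonio 57; Chicago ≈ $8.33/W
```

The computed densities match the per-facility numbers quoted above, and the Chicago cost comes out at about $8.33/W, consistent with the $8.30/W figure in the slides.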

Gen 4 design: modular design, but the use of ISO standard containers has been discontinued. However, the modules are prefabricated and delivered by flatbed truck.

  • No mechanical cooling; very low water consumption
  • 30% to 50% lower costs
  • PUE of 1.05 to 1.1

Analysis of AC vs. DC power distribution concluded that DC is more efficient at low loads and AC at high loads. Overall, the best designs are within 1 to 2% of each other. Recommends higher datacenter temperatures.

Key observation: The best price/performance and power/performance is often found in lower-power processors

Kushagra (pictured right) found substantial power savings using the C6 processor sleep state and showed server idle power can be dropped to 31% of full-load power. Good servers typically run in the 45 to 50% range, and poor designs can be as bad as 80%.

The slides:

Kushagra is Principal Architect and Director, Datacenter Compute Infrastructure at Microsoft.

Matt Asay asserted “Private clouds will roll over” as a deck for his Memo to Microsoft: don't bet against Amazon Open...and Shut article of 11/12/2010 for The Register:

The technology world used to be fairly easy to understand. For a time, IBM dominated the back office while Microsoft monopolized the desktop. More recently, Microsoft and Linux split the difference on servers while Oracle bought the known universe to dominate enterprise middleware and applications.

But along comes the cloud, and everything is up for grabs. Everything.

The incumbent vendors want to pretend that the cloud is business as usual. Red Hat makes it easy to run instances of RHEL on Amazon, while Microsoft's Windows Azure tries to extend its server and database lead to the cloud. Oracle's cloud vision is much the same, except with Oracle products playing center stage. But these and other incumbent vendors may end up disappointed.

The reason is that the cloud creates all sorts of scaling problems, problems that require novel solutions. Fast IP co-founder Benjamin Black suggests that users will bury their preconceived notions of ideal technologies or vendors to achieve scale:

Lost in all the debates about SQL vs. NoSQL, ACID vs. BASE, CAP, and all the rest is simply this: focus on the process to build a great company and great products. That's what every successful technology company has done in order to reach the scale where things like BigTable and FlockDB are required. They didn't become successful because they built these systems. They built these systems because they became successful. If your success drives you to the scale where relational databases are no longer efficient, don't be afraid to look beyond them. That might mean adopting one of the existing NoSQL systems, or even building your own to meet your exact needs. This is what happens at scale.

This quest for and realization of scale initially has companies stopping off at private cloud computing, which favors incumbent technology giants, but it's fast leading to a more disruptive embrace of public cloud providers, according to Barron's Mark Veverka.

The first step away from old-school vendors is new-school private cloud providers like Rackspace's OpenStack, Eucalyptus, and more. Importantly, not only are these companies new, but they're using the underdogs' preferred tool, open source, to unseat calcified purchasing bureaucracies. 'Try before you buy' effectively translates to 'try me rather than buy them.' And a rising number of organizations are buying into cloud computing, according to Gartner.

This wouldn't matter if the public cloud were to remain unpalatable for large enterprises fearful of the security and performance issues sometimes associated with public cloud computing. But it won't. Already Gartner is projecting 100 per cent adoption of public cloud computing by Global 2000 organizations by 2016. One hundred per cent means "all of them." The public cloud threatens to be a public nuisance for incumbent technology vendors.

Why? Because, as Redmonk analyst Stephen O'Grady argues, we may see traditional incumbent vendors outmaneuvered by the scale public cloud providers have mastered. The benefits of cloud computing can be achieved in a limited way with private clouds, but are likely to only be fully realized in the public cloud, and the profits from the public cloud will almost certainly congregate in the new kids on the enterprise computing block like Amazon and Google:

Besides the economies of scale that Amazon, Google and others may bring to bear, which might be matched by larger commercial institutions, the experience of web native entities operating infrastructure at scale is likely to prove differentiating. It's not just a matter of running a datacenter at scale; it's knowing how to run applications at scale, which means understanding which applications can run at scale and which cannot. The cloud, public or private, must be more than a large virtualized infrastructure. Whether or not private cloud suppliers and their customers realize that yet is debatable; whether Amazon, Google et al do is not.

Along the way, we're seeing serious grappling for the hearts and minds of developers in the public cloud, most recently in the tug-of-war between Rackspace and Amazon. Some data favors Rackspace, while other data favors Amazon. And while both suit different needs, there is one common driver for both companies' success: the need for scale.

Beyond scale, however, Amazon has another thing going for it. Two things, actually. The first is developers. Amazon has been very effective in reaching out to developers and making EC2 a haven for developer experimentation. Rackspace has also done well with developers but its announced strategy of moving into the "managed cloud" and offering more expensive, premium services threatens to undermine its competitiveness with developers.

Which leaves Amazon to duke it out with Google, another main contender in the public cloud. In its fight with Google, however, Amazon's second big advantage comes into play: focus. Unlike Google, which dabbles in just about everything, Amazon is very focused, as Chris Dixon articulates.

The search for scale will significantly disrupt the technology industry, including enterprise software. Given the importance of cloud computing in this, and particularly the public cloud, it would be tough to bet against Amazon, the company that knows how to scale cloud applications better than anyone. ®

Matt Asay is chief operating officer of Ubuntu commercial operation Canonical. With more than a decade spent in open source, Asay served as Alfresco's general manager for the Americas and vice president of business development, and he helped put Novell on its open-source track.

Robert McNeill, Mike West and Bill McNee co-authored a Channels and the Cloud: Disruption, Innovation and Partnering Research Alert on 11/11/2010 for Saugatuck Technology (site registration required):

What Is Happening?  Blogs and analyst reports have been trumpeting the dramatic changes impacting independent software vendor (ISV) and technology vendor channels, as well as the emergence of new Cloud channels displacing traditional partner relationships.

The Cloud is significantly disrupting many of the traditional channels and routes to market that leading ISVs and technology providers have used in the past. New Saugatuck research from September 2010 indicates that almost half of the 546 executives we recently surveyed globally, across both SMBs and large enterprises, prefer to acquire Cloud IT solutions direct from large established vendors, as can be seen in Figure 1 below.

Figure 1: Cloud IT Preferences

Source: Saugatuck Technology, Cloud IT survey / Sept. 2010, N=546 (global)

As the data shows, the market for Cloud IT services is rapidly maturing, with Cloud hosting providers, managed services providers, and Global System Integrators just as appealing to potential buyers as Cloud pure-play vendors are.  This data indicates that Cloud IT is now truly mainstream.

The trio continue with the usual “Why Is It Happening” and “Market Impact” sections.

Rolf Harms, Director, and Michael Yamartino, Manager, Microsoft Corporate Strategy Group summarize a newly published, 22-page white paper, Economics of the Cloud in an 11/11/2010 post to TechNet’s Microsoft on the Issues blog:

Information technology is undergoing a seismic shift towards the cloud, a disruption we believe is as game-changing as the transition from mainframes to client/server.  This shift will impact every player in the IT world, from service providers and systems architects to developers and end users.  We are excited about this transformation and the vast improvements in efficiency, agility and innovation it will bring.

To prepare for the cloud transition, IT leaders who make investments in infrastructure, architectures, and skills have a critical need for a clear vision of where the industry is heading.  We believe the best way to form this vision is to understand the underlying economics driving this long-term trend.  We’ve done extensive analysis of these economics in Microsoft’s Corporate Strategy Group, leveraging Microsoft’s experience with cloud services like Windows Azure, Office 365, Windows Live, and Bing. We decided to share these insights with our customers, partners and the broader industry by publishing a new whitepaper, “The Economics of the Cloud.”

Our analysis uncovers economies of scale for cloud that are much greater than commonly thought.  We believe that large clouds could one day deliver computing power at up to 80% lower cost than small clouds.  This is due to the combined effects of three factors: supply-side economies of scale which allow large clouds to purchase and operate infrastructure cheaper; demand-side economies of scale which allow large clouds to run that infrastructure more efficiently by pooling users; and multi-tenancy which allows users to share an application, splitting the cost of managing that application.

In “The Economics of the Cloud” we also discuss how these economics impact public clouds and private clouds to different degrees and describe how to weigh the trade-off that this creates.  Private clouds address many of the concerns IT leaders have about cloud computing, and so they may be perfectly suited for certain situations.   But because of their limited ability to take advantage of demand-side economies of scale and multi-tenancy, we believe that private clouds may one day carry a cost that is as much as 10x the cost of public clouds.
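The multi-tenancy argument above is essentially arithmetic: a fixed per-application management cost amortizes across tenants, so per-tenant cost falls as the pool grows. A minimal sketch, using hypothetical numbers that are not drawn from the whitepaper:

```python
# Illustrative only: the dollar figures below are invented to show the shape
# of the curve, not Microsoft's actual cost data.
def per_tenant_cost(fixed_app_cost, variable_cost_per_tenant, tenants):
    """Fixed management cost is split across tenants; variable cost is not."""
    return fixed_app_cost / tenants + variable_cost_per_tenant

dedicated = per_tenant_cost(10_000, 50, 1)      # single tenant bears the full fixed cost
shared = per_tenant_cost(10_000, 50, 1_000)     # 1,000 tenants split the same fixed cost
print(dedicated, shared)  # 10050.0 60.0
```

The fixed term dominates at small scale and vanishes at large scale, which is the whitepaper's core claim about why large multi-tenant clouds can underprice small private ones.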

The conclusions of this paper are a powerful motivator behind Microsoft’s commitment to the cloud.  We invite you to read the full paper to share in our vision.  Microsoft is committed to bringing the best features of the cloud to developers and users, both through our Windows Azure platform and our Office 365 applications.  This transformation will take time, but as they say, ‘the journey is half the fun’, so we hope that you’ll join us on this exciting journey to the cloud.

Kevin Fogarty asked “In a bid to raise the profile of Hyper-V and Azure, Microsoft has launched new partnerships, features and packages. But is it enough to make them a credible leader in the cloud computing market?” in an introduction to his  Microsoft Ramps up Fight to Be Cloud Leader article of 11/11/2010 for

Microsoft (MSFT) has launched a series of partnerships, functional enhancements and product packages in an effort to make its cloud offerings more attractive to enterprise customers and raise the profile of its Hyper-V hypervisor as a potential ingredient for cloud-based systems.

It has also enhanced its Azure cloud service by giving customers the ability to launch and control Hyper-V based virtual machines and SQL Server instances, neither of which were available in earlier editions.

That additional support gives IT better control and makes it more feasible to migrate applications from an internal data center to a cloud environment, though not to the point that it's easy to just lift internal applications to Azure and press 'run,' according to Chris Wolf, a research VP at Gartner.

The core part of Microsoft's new offering isn't a product, but a set of deployment guides, partnerships with hardware manufacturers who can supply preconfigured servers ready for Hyper-V based clouds, and service providers able to provide everything from hosting to development help.

Microsoft is accrediting hosting providers, offering training and reference architectures for Hyper-V based clouds to both internal IT and external integration providers, and offering direct consulting from both its own Consulting Services or outside partners. It also built an online catalog listing partners and services by function in a listing called the Cloud Hypermarket.

The hardware and reference architectures for software stacks that work well in Microsoft-based cloud environments come from Dell (DELL), Fujitsu, Hitachi, Hewlett-Packard (HPQ), IBM (IBM) and NEC, and include both private and public cloud deployments.

Microsoft offers its Azure platform-as-a-service offering as one of the service-provider options, but not the only one, allowing customers to choose whether they need platform-, infrastructure-, or software-as-a-service or other hosting services for their particular cloud.

Getting Beyond Azure Lock-In Worries?

Much of Microsoft's effort is an attempt to bring attention to its cloud offerings and the potential to use Hyper-V as the basis for either external or internal clouds, Wolf says.

"They're getting a little beyond the point that had people worried about lock-in to Azure and focusing more on a wider type of deployment," Wolf says. Rival VMware (VMW), which owns the bulk of the virtual server market and has made cloud computing the basis of its strategy for almost two years, is far better established, has a wider variety of technical options and a far wider installed base in companies with cloud projects, according to Jonathan Reeve, vice president of product strategy at Hyper9, which develops capacity planning software for cloud and virtual environments.

Continue Reading: 2, next page

Phil Fersht continued his analysis with The Industry Speaks about Cloud, Part II: business execs fear its impact on work culture, IT execs doubt their ability to drive competitive advantage posted to the Enterprise Irregulars blog on 11/11/2010:

HfS Research and The Outsourcing Unit at the London School of Economics have surveyed 1,053 organizations on the future of Cloud Business Services

The coloss[al] Cloud Business Services study we just conducted, in conjunction with the Outsourcing Unit at the London School of Economics, has served up some contrasting concerns that business executives are having versus their IT counterparts:  Cloud’s potential impact on work culture versus its impact on the value of the today’s IT department.

Essentially, two-thirds of business executives have expressed concern over the impact Cloud business services could have on the speed by which they could be driven to operate in virtual environments.  Moreover, a similar number expressed concerns over Cloud impeding their ability to collaborate with other businesses.

Conversely, IT executives are hugely worried (80%) by the potential for Cloud providers to exploit customers, but contradict these fears by also worrying about competitors leveraging Cloud to steal competitive advantage from them:

The bottom-line: when the business execs look at Cloud, they sense a major cultural change in the way they work, while IT executives are terrified by the potential curtailment of their value  as the technology-enabler of core business processes.

The fact that the IT side of the house recognizes the competitive advantages Cloud can give business (see Part I) creates a massive challenge for the CIO today: how can their IT department become a vehicle for helping their organization find competitive value from Cloud? Because if the CIO fails to deliver this value, the business side will be forced to look at alternative avenues.  We’ll talk about the business transformation implications of Cloud shortly.  Stay tuned for more…


Alex Williams posted his Weekly Poll: Do You Believe Analyst Forecasts About the Cloud? to the ReadWriteCloud blog on 11/11/2010:

You know cloud computing is big.
Then you see the analyst forecasts like the ones compiled by Total Cloud.

And you have to ask yourself: Do you believe these numbers?

Here are the market estimates that Total Cloud collected and posted to its blog:

  • Small business spending on cloud computing will reach $100 billion by 2014.
  • Gartner estimates the Cloud market at $150B by 2013, while Merrill Lynch has it at $160B by 2011.
  • A recent survey of 500 IT decision-makers by SandHill found that about 50% of respondents cited business agility as their primary reason for adopting cloud applications.
  • Mobile and social computing are growing faster than anything before in the history of technology, and enterprise applications will need to adapt.
  • Gartner estimates that virtualization is growing rapidly and that by 2013, 60% of server workloads will be virtualized.
  • Public cloud infrastructure, applications and platforms are growing at 25%+ yet IDC projects that the market for enterprise servers will double by 2013.
  • IDC estimates the market for public cloud products and services at $16B in 2010, growing to $56B by 2014.

What do you think?

Poll: Do You Believe Analyst Forecasts About the Cloud?

Adron Hall outlined Cloud Software Architect Necessities in an 11/11/2010 post:

Cloud architecture is becoming more and more relevant in the software industry today.  A lot of efforts are becoming less about software and more about cloud software.  The whole gamut of cloud technologies (platform, service, infrastructure, or platforms on platforms) is growing rapidly in number.  The days when you didn't need to know what the cloud was are rapidly coming to an end.

As I move further along in my efforts with cloud development, both at application level and services levels, I’ve come to a few conclusions about what I do and do not need.  The following points are what I have recently drawn conclusions about, that cause frustration or are drawn from the beauty of cloud architecture.

REST is King, Period

When I use REST in this context, I don’t just mean a nice clean URI or “not SOAP”.  I’m talking about the whole entire enchilada of REST architecture principles.  A good place to start learning about REST is the Wikipedia article.  Other key resources to check out include: Roy T. Fielding’s dissertation, specifically Chapter 5, “Principled Design of the Modern Web Architecture,” and the O’Reilly book RESTful Web Services.

REST architecture is fundamental to the web and to the fact that the web is continuously available.  The web, or Internet, doesn’t go down, doesn’t crash, and is always available in some way.  REST architecture and the principles around it are what enable the web to be this way.  The cloud seeks to have the same abilities and functionality, to basically be always on, so REST architecture is key to that underpinning.
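Two of the REST constraints behind that always-on property are statelessness and a uniform interface with safe/idempotent verbs: any request can be retried or routed to another node because it carries everything the server needs. A minimal, hypothetical sketch (in Python for brevity, not from any of the cited sources) of a resource store manipulated only through verb semantics:

```python
# Sketch of a uniform interface over resources keyed by URI. GET is safe
# (never changes state); PUT and DELETE are idempotent (repeating the call
# leaves the same final state), which is what makes blind retries harmless.
class ResourceStore:
    def __init__(self):
        self._resources = {}  # URI -> representation; no per-client session state

    def get(self, uri):
        """Safe and idempotent: a read with no side effects."""
        return self._resources.get(uri)

    def put(self, uri, representation):
        """Idempotent: PUTting the same representation twice changes nothing."""
        self._resources[uri] = representation
        return representation

    def delete(self, uri):
        """Idempotent: deleting twice is the same as deleting once."""
        self._resources.pop(uri, None)

store = ResourceStore()
store.put("/customers/42", {"name": "Contoso"})
store.put("/customers/42", {"name": "Contoso"})  # retry is harmless
print(store.get("/customers/42"))  # {'name': 'Contoso'}
store.delete("/customers/42")
store.delete("/customers/42")  # also harmless
print(store.get("/customers/42"))  # None
```

POST is the deliberate omission here: it is the one common verb that is neither safe nor idempotent, which is why RESTful designs lean on PUT and DELETE where retries must be safe.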

Core Development

Currently I do most of my development in C# using the .NET Framework.  This is great for developing in cloud environments on Windows Azure and Amazon Web Services.  The .NET Framework has a lot of libraries that work around, with, and attempt to provide good RESTful architecture.  However, there are also a lot of issues, such as the fact that WCF and ASP.NET at their core aren’t built with good intent toward RESTful architecture.  ASP.NET MVC and some of the latest WCF releases (the out-of-band stuff) are extensively cleaning this up, and I hear that this may come sooner rather than later.  I look forward to having cleaner, bare-bones, fast implementations to use for RESTful development in Azure or AWS.

The current ASP.NET (Web Forms) architecture is built well above the level at which it could utilize REST architecture effectively.  It thus creates a lot of overhead and unnecessary hardware utilization, especially for big sites that want to follow good RESTful practice.

WCF on the other hand had some great ideas behind it, but as REST increased in popularity and demanded better use of the core principles the web is founded on, the WCF model has had to bend and give up some of its functional context in order to meet basic REST architecture.

Anyway, that’s enough for my ramblings right now.  Just wanted to get some clear initial thoughts written down around Cloud Architecture needs for the software developer.

Cumulux started a series on cloud-computing basics with Head in clouds – Part 1 on 1/11/2010:

Simple and effective. Cloud computing will play a major role in the future of enterprise IT. It’s the solution that will define a whole new shift in the way business is done. The hype and halo around the “Cloud” are real and pronounced. I say this not just because I have to write about it, but because the growing interest among enterprises, big and small, is an inevitable sign of the cloud’s growing prowess.

Even though it is a little clichéd, I would like to reiterate that the cloud delivers economic and technological benefits on a whole new scale. It is more appropriate to call it an evolution than a revolution. It is only natural that as the web and web technologies penetrate deeper and wider, business should leverage and scale on that platform.

When did the idea of the cloud start?
Honestly, really long ago. It’s always been the ultimate dream. If you look at the history of computers and technology, it has always been about pursuing a more interconnected world where data and information flow freely for the benefit and common good of everyone. What if all this could come with an economic advantage? Then that’s a win-win for businesses and end users.

So, why is this concept getting popular now?
Due to the advent of a more mobile world. As is well said, necessity is the mother of invention, and inventions lead to business and growth. Over the past 30-odd years, since the development of the World Wide Web, protocols and security have gotten better and better to suit the growing demands of the online space. A more open standard has evolved and has been serving as the roots of this huge banyan tree called the “Internet.”

With the development of the necessary infrastructural support and technological advances, and with enterprises driven to control costs and save energy, the light shined on cloud computing as the solution.

What is the economic and technological advantage?
The right question is how the cloud benefits its users. The usual business model requires the user to buy infrastructure to run the applications the business needs. Outsourcing firms and in-house teams develop, deploy, and maintain those applications, which can range from a simple web application to a complex, sophisticated service. In reality, users also have to address the overhead of maintenance, evaluating software and hardware vendors, and staffing an in-house workforce to coordinate IT tasks; above all, they must provide high availability and business continuity with a back-end infrastructure that can scale to fluctuating demand.

I can hear your thoughts: we have been doing this for so long that we are used to it. Precisely; that’s why I strongly recommend the cloud. It is the much-awaited disruptive force that will help us evolve to the next level of collaborative, distributed computing, leveraging the power of shared resources to optimize performance and achieve greater results.

The cloud can be visualized as well-connected, tightly coupled data centers distributed across the globe. Big players like Microsoft, Amazon, Google, etc. run these data centers and provide platform, infrastructure, and software as a service to the end user. The cloud also offers a low entry barrier for startups and SMEs, helping them use the power of the cloud to deliver their products rather than invest heavily in infrastructure and platforms.

Since it is a shared architecture and computing is treated as a utility like electricity, the cloud stands to deliver economically. Users register for the respective services (platform, infrastructure, or software) and are billed based on their usage. The major highlights of the cloud are that business continuity and on-demand scalability become default features: applications hosted in the cloud are highly available by default and can scale up and down at will. So the cloud stands to serve technologically too.
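The utility-billing model described above can be sketched in a few lines. The meters and rates here are purely illustrative, not any provider's actual pricing:

```python
# Hypothetical pay-as-you-go billing: usage is metered per unit and multiplied
# by a unit rate, with no up-front infrastructure cost. Rates are invented.
RATES = {
    "compute_hours": 0.12,     # $ per instance-hour
    "storage_gb_month": 0.15,  # $ per GB stored per month
    "bandwidth_gb": 0.10,      # $ per GB transferred
}

def monthly_bill(usage):
    """Sum metered usage times the unit rate for each service dimension."""
    return round(sum(RATES[meter] * amount for meter, amount in usage.items()), 2)

# One small instance running all month, 50 GB stored, 100 GB of traffic:
print(monthly_bill({"compute_hours": 720, "storage_gb_month": 50, "bandwidth_gb": 100}))
# 103.9
```

The point of the sketch is the shape of the model, not the numbers: the bill tracks consumption, so scaling down to zero usage means paying (close to) zero, which is what distinguishes utility billing from buying infrastructure up front.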

So, what options do we have in the cloud? What happens to the data? How am I billed? How does the cloud work? Doesn’t it have any disadvantages? What are the benchmarks and standards? Find out more in the next blog.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA) and Hyper-V Cloud

Jon Brodkin asserted “Amazon, VMware and even Microsoft partners threatened by Windows Azure move” as a deck for his Microsoft opens new competitive fronts with cloud-based Windows Server post of 11/12/2010 to NetworkWorld’s Data Center blog:


Microsoft doesn't want to admit it, but a Gartner analyst says the vendor's decision to offer Windows Server instances in the Azure cloud is opening a new competitive front against partner hosting companies.

Before 2010 is over, Microsoft will update Windows Azure with the ability to run Windows Server 2008 R2 instances in the Microsoft cloud service. The move could blur the lines between platform-as-a-service (PaaS) clouds like Azure, which provide abstracted tools to application developers, and infrastructure-as-a-service (IaaS) clouds such as Amazon's EC2, which provide raw access to compute and storage capacity.

This move also improves Microsoft's competitive stance against VMware, which is teaming with hosting companies to offer PaaS developer tools and VMware-based infrastructure clouds.

But the cloud-based Windows Server instances open up a new competitive front against Rackspace and other Web hosters who are Microsoft partners, according to Gartner analyst Neil MacDonald.

Microsoft has, to some extent, downplayed the new capabilities, saying the cloud-based Windows Server – which goes under the name Windows Azure Virtual Machine Role – is primarily an on-ramp to port some applications to the Azure cloud.

"What they really want is people using Azure," MacDonald says. At the same time, VM Role "is a form of infrastructure-as-a-service," he continues. "The reason Microsoft is being so vague is they really don't want to upset their ecosystem partners, all the hosters out there in the world making good money hosting Windows workloads. Microsoft doesn't really want to emphasize that it is competing against them."

Whereas IaaS clouds provide access to raw compute power, in the form of virtual machines, and storage that is consumed by those VMs, PaaS clouds provide what could be described as a layer of middleware on top of the infrastructure layer. Developers using PaaS are given abstracted tools to build applications without having to manage the underlying infrastructure, but have less control over the basic computing and storage resources. With Azure, developers can use the programming languages .NET, PHP, Ruby, Python or Java, and the development tools Visual Studio and Eclipse.

Microsoft officials have previously predicted that the lines between PaaS and IaaS clouds will blur over time, but stress that Windows Azure will remain a developer platform.

In response to MacDonald's comment, Windows Azure general manager Doug Hauger says "our partners provide a vast range of services to customers for hosting an infrastructure-as-a-service [cloud]. The VM Role does not compete with them in this space."

For what it's worth, Rackspace does view Microsoft as a cloud competitor. "The cloud market is going to be huge and there are many ways to win in it," Rackspace President Lew Moorman says. "Microsoft is serious about the market and we view them as an emerging competitor as well as partner. We are confident that our service difference will resonate to a large part of the market regardless of the technical offers that emerge from players such as Microsoft." …


Simon May posted What was talked about at TechEd Europe today? on 11/8/2010 (missed when posted):


Hyper-V Cloud

Hyper-V Cloud is IaaS (Infrastructure as a Service) for your own private use: you own and manage the hardware and the software.  Furthermore, we’ve announced partners who’ve produced hardware reference architectures (Fast Track) which you can buy direct from them to accelerate the process of building your own cloud.  One example is from HP, but there are others from Dell, NEC and Hitachi, among others.  So what do you need to build your own?  Windows Server 2008 R2 Hyper-V, System Center, and the Virtual Machine Manager Self-Service Portal.  For a deployment guide take a look here and you’ll get best practice for doing private cloud deployment.

Not only that, but market share for Hyper-V is up by 12%, which means that even more people are trusting Hyper-V.

System Center and Private Clouds

We saw a demo of the next version of System Center, which has a new console (not an MMC).  Server App-V, where applications on the server are virtualised, was introduced, along with the System Center features that let you design an application as a service using templates, so that an application owner can provision their own application in just a few clicks while administrators still control the deployment.  What System Center does is move an application from a standalone unit to an elastic service.  It’s a true private cloud.

Windows Azure and System Center

Much of what I explained in my PDC wrap-up was covered again from the IT Pro perspective; System Center will help you manage both in the future.  The image at the top of this post is the new Azure portal, which opens up all the features in that post.  One of the demos showed how ANY application written for Azure with .NET can be instantly monitored by System Center with almost zero effort, allowing almost instant application-level tracing.  What this means is you can suddenly tell your dev guys why the whizzy application they created is running slow.  It’s also possible to monitor SQL Azure databases from System Center too, and you can view them in the context of your network, that is, alongside your own services.

There was also some clarification around our thinking on PaaS and IaaS:

Windows Server = IaaS

Windows Azure = PaaS

<Return to section navigation list> 

Cloud Security and Governance

• Tanya Forsheit of the Information Law Group (@InfoLawGroup) will present Mitigating Data Security & Compliance Risks of Clouds at the UP 2010 Cloud Computing Conference on 11/16/2010 at 17:00 to 18:00 EST:

Many organizations are considering cloud solutions for enterprise email and other applications that may contain sensitive data. While the cloud holds the potential for cost savings, it also poses considerable risk for such companies. This program will explore some of the privacy and data security legal issues and compliance risks associated with cloud computing deals. It will also provide practical guidance for internally evaluating legal and compliance risks; conducting due diligence with respect to potential cloud providers; and negotiating terms that address, among other things, security for sensitive data and allocation of liability in the event of data security breach.

Tanya L. Forsheit (@Forsheit) is one of the Founding Partners of the InformationLawGroup, based in Los Angeles, California. Tanya founded the InformationLawGroup after 12 years as a litigator and privacy/data security counselor at Proskauer Rose LLP, where, most recently, she was Co-Chair of the firm’s international Privacy and Data Security practice group. In 2009, Tanya was named one of the Los Angeles Daily Journal’s Top 100 women litigators in California.

Certified as an information privacy professional by the International Association of Privacy Professionals (IAPP), Tanya works with clients to address legal requirements and best practices for protection of customer and employee information. She regularly advises clients regarding legal restrictions on information-sharing and data retention, and provides guidance regarding state laws requiring notification in the event of data security breaches. Tanya frequently speaks and writes on recent developments in federal and state privacy laws, and launched Proskauer’s popular Privacy Law Blog in early 2007.

<Return to section navigation list> 

Cloud Computing Events

I updated my Windows Azure, SQL Azure, OData and Office 365 Sessions at Tech*Ed Europe 2010 post on 11/13/2010 with tested links to TechEd Europe 2010 session videos and slide decks, which overcome problems with the TechEd site’s selection pages. Here’s a sample with a capture of the linked video page:

• ASI410 - Windows Azure AppFabric Service Bus - A Deep Dive: Code, Patterns, Code.
  • Session Type: Breakout Session
  • Track: Application Server & Infrastructure
  • Speaker(s): Clemens Vasters
  • Video and Slide Deck

In this session Clemens Vasters, Principal Technical Lead on the Service Bus team, will take you on a tour through Windows Azure AppFabric Service Bus. We’ll explore various ways that you can take existing services and applications and expose them across network boundaries for integration with cloud applications, either by changing the WCF bindings (simple) or by bringing out the bigger toolbox and fronting existing services with a tunnel or proxy solution via Service Bus (harder). We’ll explore the various security options including a drill down into how the Access Control service works together with Service Bus in a range of sophisticated scenarios. You will also learn how you can make your Windows Azure applications (and applications hosted elsewhere) more transparent by leveraging Service Bus for diagnostics and direct per-node control. And, last but not least, you will also get a code-level view at the new features of the latest AppFabric Labs release of Service Bus.


Bruce Kyle invited readers to Join Us For Windows Azure Discovery Events in Western US in an 11/13/2010 post to the ISV Developer Community blog:

Microsoft would like to invite you to a special event specifically designed for ISVs interested in learning more about the Windows Azure Platform.

The “Windows Azure Platform Discover Events” are one-day events that will be held worldwide with the goal of helping ISVs understand Microsoft’s cloud computing offerings with the Windows Azure Platform, discuss the opportunities for the cloud, and show ISVs how they can get started using Windows Azure and SQL Azure today.


The target audience for these events includes BDMs, TDMs, Architects, and Development leads.  The sessions are targeted at the 100-200 level with a mix of business focused information as well as technical information.

  • Denver, CO: December 1 (denverazure)
  • Los Angeles, CA: December 16 (laazure)
  • Redmond, WA: December 8 (redmondazure)
  • Silicon Valley, CA: December 1 (svcazure)

The UP 2010 Cloud Computing Conference announced Microsoft’s PaaS Solution: The Windows Azure Platform, a keynote by Niraj Nagrani, Director of Product Marketing at Microsoft, to be given on 11/17/2010 at 13:00 to 14:00 EST:

Abstract:
In this Keynote Session, you will gain an overview of the Windows Azure Platform. We will walk through the key building blocks such as the only enterprise-grade RDBMS in the cloud, common application scenarios, interop capabilities such as Java & PHP, and remarkable examples of how the Windows Azure platform has been used to date by small and large enterprises.

Doug Hauger will present The Move Is On: Cloud Strategies for Businesses on 11/14/2010 at 9:00 to 10:00 PST:


As cloud adoption is gaining industry momentum, the focus is turning increasingly toward practical application of this new computing model, and how businesses can adopt the cloud in the context of their existing software portfolio. In this Headline Keynote, Microsoft's Doug Hauger will delve into the benefits that the cloud offers and discuss, using production cases, how businesses can embrace the cloud.

About the Speaker:
Doug Hauger is General Manager of the Cloud Infrastructure Services Product Management group. He is responsible for bringing to market the Microsoft cloud service hosting platform. This includes product planning, product marketing, business model, and channel strategy.
Previously, Hauger was the Chief Operating Officer for Microsoft India. During his three years in India, Hauger was responsible for business planning, operations, marketing, advertising and public relations for Microsoft India.

Hauger has been with Microsoft since June of 1999 and has held a variety of roles responsible for driving business and technical strategy for the enterprise field organization.

The full conference agenda is here.

The TechEd Europe 2010 Web Site Team continues to have problems with navigating video download pages. Here’s the first page of the Cloud Computing & Online Services track:


Navigating to page 2 returns a totally unrelated group of session videos. The same problem occurs with Virtualization session videos. Page 1 is OK but other pages aren’t:


The problem appears related to appending Page2 (or Page1) to the URL for MVC routing.

Simon May published PDC10 for the IT Professional – the view from #ukpdc10 on 10/29/2010 (missed when posted):


I love it when we put on a good show: geeks, streams, quizzes, phones and, most importantly, TECH!  Last night we played host to a whole bunch of people at the Microsoft Campus in Reading who all left happy (twitter says so #ukpdc10) and who all learnt some new stuff about Azure, Windows Phone 7, and IE9.  There were some stonking announcements on the HD feed from Redmond given by Steve Ballmer and Bob Muglia, with special guest stars like Pixar studios and Buzz Lightyear.  This was a developer conference, so what’s important to the IT Pro in what was announced?

Windows Azure


TIP: If you don’t know what Azure is yet, jump to my blog and subscribe, where I’ll be explaining it next week, but…

Windows Azure is true PaaS: self-scaling (elastic) computing that grows and shrinks as the application needs to.  At PDC10 we announced the new Virtual Machine (VM) role, which is a rock-star move because with the VM role you can move an existing application to the cloud.  How is such a feat achieved?  Simple: take your application, install it on Windows Server 2008 R2, take an image to a VHD file (super easy if you’re using Hyper-V ‘cos you already have the file), then copy the file to the cloud.  With this new VM role you can do pretty much what you want: run the services you want, and run scheduled tasks if you want to.  Because it’s your server in the cloud you get to be the race car driver, make the decisions and be involved in the engineering process.  I can’t stress how excited this role makes me as an IT Professional…but it gets even better.
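As a rough sketch, the upload step might be scripted like this. The csupload tool ships with the Windows Azure SDK 1.3 tools; the argument names shown here, and all of the values (subscription ID, thumbprint, paths, region), are assumptions for illustration rather than details from the post:

```python
import subprocess  # only needed if you actually run the command

# Hypothetical connection string: substitute your own subscription ID
# and management-certificate thumbprint.
connection = ("SubscriptionId=00000000-0000-0000-0000-000000000000;"
              "CertificateThumbprint=0123456789ABCDEF")

# Assemble the csupload invocation as an argument list, ready for
# subprocess; the Add-VMImage verb and parameter names are assumed
# from the SDK 1.3 tooling and may differ in the shipping release.
cmd = [
    "csupload", "Add-VMImage",
    "-Connection", connection,
    "-LiteralPath", r"C:\images\baseimage.vhd",  # hypothetical local VHD
    "-Name", "baseimage.vhd",
    "-Location", "North Central US",             # hypothetical region
]

print(" ".join(cmd))          # preview the command line
# subprocess.check_call(cmd)  # run it on a machine with the SDK installed
```

Building the command as a list (rather than one string) keeps the eventual `subprocess` call safe even when paths or region names contain spaces.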

Next year the VM role will be able to take your Windows Server 2003 images (but do yourself a favour and go to 2008 R2, you might as well) and you’ll be able to build the VMs in the cloud rather than just on premises.

The Web role gets the enhancement of full-featured IIS, meaning that one role can run multiple sites and you can install IIS modules…oh yeah, and management becomes familiar with Remote Desktop (RDP), and by elevating privileges you can do more complex deployments.  So it’s now possible to install MSI files on a web role, for example.  By the way, the PDC site and even Channel 9 are running on Azure.

The announcement of Windows Azure Connect means you can plumb Windows Azure into your internal network.  That’s right: you can domain-join your Azure server roles so it’s just like it’s on premises, in your private data centre.  Just by way of an example, that means you could deploy your intranet site to a Web role or your expenses application to a VM role and, bosh, it’s just there…you can probably use the VM role to poke a DC up there too!  It’s all done using familiar IP networking and VPN-like connections.  That sounds like a job for the IT Professional to me.  Next year will bring SSL/TLS encryption for the pipes, dynamic content caching so less stuff goes over the pipes (a bit like BranchCache for the cloud), and a build-out of the networking infrastructure.

Azure licensing can be seen as too costly for some people, so we’ve downsized!  There’s a new Extra Small instance that costs just $0.05 per hour for a 1 GHz CPU, 768MB RAM and 20GB of storage…that sounds like the perfect kit on which to base the first instance of an elastic application.  All the Windows Azure roles are compute instances and so are charged the same.  There’s no CAL requirement to connect to an Azure VM role (awesome) and the Azure role license is covered through the compute costs…making it as cheap as (silicon) chips!
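To put that $0.05-per-hour rate in perspective, here is a back-of-the-envelope monthly cost for one always-on Extra Small instance, assuming 24×7 uptime over a 30-day month (Azure bills per compute-hour, so partial months simply scale down):

```python
# Rough monthly cost for a single Extra Small compute instance,
# using the $0.05/hour rate quoted above and a 30-day month.
HOURLY_RATE = 0.05  # USD per compute-hour (Extra Small instance)

hours_per_month = 24 * 30            # 720 compute-hours
monthly_cost = HOURLY_RATE * hours_per_month

print("Monthly cost: $%.2f" % monthly_cost)  # → Monthly cost: $36.00
```

Around $36 a month, before storage and bandwidth charges, which is why the Extra Small instance makes a cheap starting point for an elastic application.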

It’s all about to go Beta and we at UK TechNet will let you know when we drop the beta bomb.

So IT Pros need to skill up on:

  • Server 2008 R2
  • Hyper-V
  • IP
  • IIS7

You’ll be wanting to try Azure to get your head around it…trials are included as part of your MSDN subscription too.

SQL Azure

Community Technology Previews were announced for a bunch of new features, including Reporting, so reports can be authored using SQL Server Reporting Services tools and embedded in the database.  Data Sync CTP 2 can sync databases across datacentres and with the data on your premises in your own SQL Server.  That means you can have multiple geo-redundant SQL databases, or even just keep the data closest to the people who need it.  Say your business has 10 people in Japan, 10 people in Europe and 100 people data mining in India: the folks in Japan and Europe can access the database from SQL Azure in their fastest local DC, and the folks in India get a “caching” effect from having the data sync to their local SQL Server, saving on the cost of the main Internet pipe to the office.

The lightweight Database Manager formerly known as “Houston” (a stunning Silverlight-based app, if you have a look) has entered CTP too and will become part of the developer portal.

DBAs and IT Pros doing SQL stuff need to skill up on:

  • not a whole lot…but if you aren’t on SQL 2008 you need to nail that.

You’ll be wanting to Try SQL Azure to get your head around it…trials are included as part of your MSDN subscription too.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr described Converting an S3-Backed Windows AMI to an EBS-Backed AMI in this 11/11/2010 post to the Amazon Web Services blog:

If you are running a Windows Server 2003 AMI, it is most likely S3-backed. If you’d like to migrate it to an EBS-backed AMI so that you can take advantage of new features, such as the ability to stop it and then restart it later, I’ve got some good news for you.

We've just put together a set of step-by-step instructions for converting an S3-backed Windows AMI to an EBS-backed AMI.

You will need to launch the existing AMI, create an EBS volume, and copy some data back and forth. It is pretty straightforward and you need only be able to count to 11 (in decimal) to succeed.

The document also includes information on the best way to resize an EBS-backed Windows instance and outlines some conversion approaches that may appear promising but are actually dead-ends.

Windows Azure has provided features similar to this since its inception.

<Return to section navigation list> 

