Monday, November 22, 2010

Windows Azure and Cloud Computing Posts for 11/20/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Kyle McClellan described Windows Azure Table Storage LINQ Support in this 11/22/2010 post:

Windows Azure Table storage has minimal support for LINQ queries. It supports a few key operations, but the majority of operators are unsupported. For RIA developers used to Entity Framework development, this is a significant difference. I wanted to put together this short post to draw your attention to this link.

When you use an unsupported operator, your application will fail at runtime. Typically the error is something along these lines.

An error occurred while processing this request.
at System.Data.Services.Client.DataServiceRequest.Execute[TElement](DataServiceContext context, QueryComponents queryComponents)
at System.Data.Services.Client.DataServiceQuery`1.Execute()
at System.Data.Services.Client.DataServiceQuery`1.GetEnumerator()
at SampleCloudApplication.Web.MyDomainService.GetMyParentEntitiesWithChildren()
at …

with an InnerException that looks like this.

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="">
  <message xml:lang="en-US">The requested operation is not implemented on the specified resource.</message>
</error>

To solve this problem, you will need to bring the data to the mid-tier (where your DomainService is running) using the .ToArray() method. For example, “return this.EntityContext.MyEntities.OrderBy(e => e.MyProperty)” will fail but “return this.EntityContext.MyEntities.ToArray().OrderBy(e => e.MyProperty)” will succeed.
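The workaround is plain LINQ to Objects: once `.ToArray()` materializes the entities on the mid-tier, any operator can run locally. A minimal, self-contained sketch of the idea (an in-memory array stands in for `this.EntityContext.MyEntities`; against a real table-storage context, the un-materialized `OrderBy` would throw the error above):

```csharp
using System;
using System.Linq;

class ToArrayWorkaround
{
    // Stand-in for a table-storage entity class.
    class MyEntity { public string MyProperty { get; set; } }

    static void Main()
    {
        var myEntities = new[]
        {
            new MyEntity { MyProperty = "beta" },
            new MyEntity { MyProperty = "alpha" }
        };

        // ToArray() executes the (supported) table query first;
        // OrderBy then runs locally on the mid-tier.
        var ordered = myEntities.ToArray().OrderBy(e => e.MyProperty).ToArray();

        Console.WriteLine(ordered[0].MyProperty); // alpha
    }
}
```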

The lack of support is an issue on the mid-tier, but has very little effect on the client. The base class, TableDomainService, does a lot of work processing the query to correctly handle most things sent from the client. The one caveat is that some .Where filters are too complex to be processed. For example, most string operations like Contains can’t be used, and string equality is case sensitive (so be sure to set FilterDescriptor.IsCaseSensitive to true when using DomainDataSource filters). Just like with the other unsupported operators, you can still implement this on the mid-tier by using .ToArray() before performing the complex .Where query.

The Azure Cloud Computing Developers Group will hold its December Meetup on 12/6/2010 at 6:30 PM at Microsoft San Francisco (in the Westfield Mall, where Powell meets Market Street), 835 Market Street, Golden Gate Rooms, 7th Floor, San Francisco, CA 94103:

We have an exciting event planned for Dec 6th. First, the talented Neil Mackenzie, expert in the architecture of data-related software systems, will talk about the critically important topic of Windows Azure Diagnostics. Neil will delight you with information about the design, configuration, and management of Windows Azure Diagnostics.

Windows Azure Diagnostics enables you to collect diagnostic data from a service running in Windows Azure. You can use diagnostic data for tasks like debugging and troubleshooting, measuring performance, monitoring resource usage, traffic analysis and capacity planning, and auditing. Once collected, diagnostic data can be transferred to a Windows Azure storage account for persistence. Transfers can either be scheduled or on-demand.
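In code, collection and scheduled transfer are typically set up when the role starts. A hedged sketch against the 2010-era Diagnostics API (the connection-string setting name "DiagnosticsConnectionString" is an assumption):

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Push buffered trace logs to table storage every five minutes
        // (a scheduled transfer), keeping only warnings and above.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;

        // The setting name is assumed; it must point at the storage
        // account that will persist the diagnostic data.
        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
        return base.OnStart();
    }
}
```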

The second topic will be around using Unit Tests to understand how the Windows Azure Storage system works. Specifically, I will illustrate how to create, delete and update Azure storage mechanisms, including Tables, Blobs, and Queues. Unit Tests typically aren't used as a teaching tool, but I have found them to be a great way to quickly and easily grasp core concepts, including the related C# code.

Please join us for a very informative evening.

Bruno Terkaly [pictured above right], Organizer

See my Adding Trace, Event, Counter and Error Logging to the OakLeaf Systems Azure Table Services Sample Project post of 11/22/2010 in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section for more details on Windows Azure diagnostics.

Adron Hall (@adronbh) concluded his series with Windows Azure Table Storage Part 2 of 11/22/2010 (with C# code elided for brevity):

In the first part of this two part series I reviewed what table storage in Windows Azure is. In addition I began a how-to for setting up an ASP.NET MVC 2 Web Application for accessing the Windows Azure Table Storage (sounds like keyword soup all of a sudden!). This sample so far is enough to run against the Windows Azure Dev Fabric.

However, I still need to set up the creation, update, and deletion views for the site, so without further ado, let’s roll.

  1. In the Storage directory of the ASP.NET MVC Web Application right click and select Add and then View. In the Add View dialog select the Create a strongly-typed view option. From the View data class drop down select the EmailMergeManagement.Models.EmailMergeModel, select Create from the View content drop down box, and uncheck the Select master page check box. When complete the dialog should look as shown below.

    Add New View to Project Dialog

  2. Now right click and add another view using the same settings for Delete and name the view Delete.
  3. Right click and add another view using the same settings for Details and name the view Details.
  4. Right click and add another view for Edit and List, naming these views Edit and List respectively. When done, the Storage directory should have the following views: Create.aspx, Delete.aspx, Details.aspx, Edit.aspx, Index.aspx, and List.aspx.

The next steps are to set up a RoleEntryPoint class for the web role to handle configuration and initialization of the storage table. The first bit of this code will start the diagnostics connection string and wire up the event for role environment changes. After this, the cloud storage account will have its configuration setting publisher set so the configuration settings can be used. Finally, the role environment will be set up to recycle so that the latest settings and credentials are used when executing.

  1. Create a new class in the root of the ASP.NET MVC Web Application and call it EmailMergeWebAppRole.
  2. Add the following code to the EmailMergeWebAppRole class. …
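The elided class follows the standard web-role pattern described above; here is a hedged sketch of that pattern (the setting name and helper wiring are assumptions, not Adron's original code):

```csharp
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class EmailMergeWebAppRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start diagnostics and watch for configuration changes.
        DiagnosticMonitor.Start("DiagnosticsConnectionString");
        RoleEnvironment.Changing += RoleEnvironmentChanging;

        // Let CloudStorageAccount read settings from the service configuration.
        CloudStorageAccount.SetConfigurationSettingPublisher((name, setter) =>
            setter(RoleEnvironment.GetConfigurationSettingValue(name)));

        return base.OnStart();
    }

    private static void RoleEnvironmentChanging(object sender,
        RoleEnvironmentChangingEventArgs e)
    {
        // Recycle the role on any configuration setting change so the
        // latest settings and credentials are picked up on restart.
        if (e.Changes.Any(c => c is RoleEnvironmentConfigurationSettingChange))
            e.Cancel = true;
    }
}
```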

Now open up the StorageController class in the Controllers directory.

  1. Add an action method for the List view with the following code. …
  2. On the Index.aspx view add some action links for the following commands.
  3. Open up the Create.aspx view and remove the following HTML & other related markup from the page. …

There is now enough collateral in the web application to run the application and create new EmailMergeModel items and view them with the list view. Click F5 to run the application. The first page that should come up is shown below.

Windows Azure Storage Samples Site Home Page

Click on the Windows Azure Table Storage link to navigate to the main view of the storage path. There are now Create and List links. Click on the Create link and add a record to table storage. The Create view will look like the following when you run it.

Windows Azure Storage Samples Site Create Email Merge Listing

I’ve added a couple just for viewing purposes, with the List view now looking like this.

Windows Azure Storage Samples Site Listings

Well, that’s a fully functional CRUD (Create, Read, Update, and Delete) web application running against Windows Azure, with a details screen to boot.

Peter Kellner explained Using Cloud Storage Studio From Cerebrata Software For Azure Storage Viewing in this 11/21/2010 post:

Just a quick shout out to the makers of Cloud Storage Studio, Cerebrata Software. Thanks for a great product offering! I’ve been doing quite a bit of work recently with Microsoft Windows Azure Blob Storage and have really appreciated the insight into that storage Cloud Storage Studio gives me. I had been using another product to do the same thing, which I had thought was easier and faster, but after just a couple hours of working with Cloud Storage Studio, I’m finding I’ve really been missing out.

I’m attaching a screen shot below which shows what is actually in the storage (not a hierarchical false view) as well as a display of all the metadata. Either of those two features is a show stopper for me and enough to switch.


If you have not used it, give it a try (and have some patience; it takes a little bit of practice to find the true value).

Good luck!

<Return to section navigation list> 

SQL Azure Database and Reporting

Cihangir Biyikoglu posted SQL Azure Federations – Sampling of Scenarios where federations will help! on 11/22/2010:

In previous posts I talked about scaling options and techniques and introduced the concepts behind SQL Azure federations. Let’s solidify the concepts with a few sample scenarios that showcase federations. Don’t expect an exhaustive list... I am sure you all can come up with many other examples of how to use SQL Azure federations.

The sharding pattern has been popularized by internet-scale applications, so scenarios in this category turn out to be perfect fits for SQL Azure federations. I spent the last few years building one of these types of apps, working on an electronic health record store on the web called HealthVault. HealthVault stores electronic health records for the population of a whole country. It is available in the US and a number of other countries. Users and applications can put and get everything from small data such as body weight all the way to huge blobs like MRIs in their health records. Much like many other applications on the web, traffic changes on the site as news articles, blogs and TV appearances attract users.

Capacity requirements shift rapidly, and even though it is possible to predict capacity a few days or weeks in advance, predicting next month is much more difficult. Pick your favorite free site and it is easy to generate examples like this: imagine a news website or a blog site. On a given day, there isn’t always a way to predict which articles will go viral. As people read and comment on the popular viral articles, the database traffic goes up. The best way to handle such rapid shifts in resource requirements is to repartition data to evenly distribute the load and take advantage of the full capacity of all machines in your cluster. Such unpredictability makes capacity planning a tough day-to-day challenge. Add on top the unpredictable capacity requirements of software updates to these sites, with new and improved functionality, and one thing is crystal clear.

There isn’t a scale-up machine you can buy that can handle HealthVault or other internet-scale application workloads at their peak loads. This is where federations shine! Federations provide the ability to repartition data online. With no loss of availability, you can initiate an “ALTER FEDERATION … SPLIT AT”, for example, to spread the workload: the federation member database that contains 100s or 1000s of news articles becomes two equal federation members, each covering half the articles, giving twice the throughput.
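As a concrete sketch, assuming a federation named Articles_Fed distributed on a bigint key article_id (both names hypothetical, not from the post), the online split looks like:

```sql
-- Issued against the federation root: the member holding the boundary
-- value is split online into two members, with no loss of availability.
ALTER FEDERATION Articles_Fed SPLIT AT (article_id = 50000)
```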

Another great scenario is multi-tenant ISV apps. Much like us on the Azure teams, any software vendor building solutions for an arbitrary number of customers has to consider the long tail of customers they will need to handle as well as the large head. Imagine a web front-end that manages scheduling and reservations for solo doctors’ offices or small bed & breakfast operators, or a human resources app tracking each franchise store… Given the hurried pace of business, customers use self-service UIs to provision and then use the app right away… As your customers’ load shifts, you can adjust the tenant count per database through online SPLITs and MERGEs of your databases.

Some ISVs try to manage each customer with a dedicated-database approach, and that works when you have a small number of customers to service. However, for anyone serving a moderate to large number of customers, managing 100s or 1000s of databases becomes impractical quickly. Federations provide a great way to work with tenants, customers or stores as atomic units and use FILTERING connections to auto-filter based on a federation key instance. (In case you have not seen one of the talks at PDC or PASS on SQL Azure federations, I will talk about this more in upcoming posts.)

Federations can also help with sliding-window scenarios where you manage time-based data, such as inventory over months, or with entity-based federation, where a ticker symbol could be used to manage asks and bids in a stock trading platform. In all these partitioned cases, if you’d like to repartition data without losing availability and with robust connection routing, federations will help.

I am certain there are many more examples out there brewing. I hope some of these examples help solidify SQL Azure federations for you and help you visualize how to utilize them. If you have scenarios you’d like to share, just leave a note on the blog…

<Return to section navigation list> 

Marketplace DataMarket and OData

Gunnar Peipman described Creating [a] Twitpic client using ASP.NET and OData in this 11/22/2010 post:


Open Data Protocol (OData) is one of the new HTTP-based protocols for updating and querying data. It is a simple protocol that makes use of other protocols like HTTP, ATOM and JSON. One of the sites that allows you to consume its data over the OData protocol is Twitpic – the picture service for Twitter. In this posting I will show you how to build a simple Twitpic client using ASP.NET.

Source code

You can find the source code for this example in the Visual Studio 2010 experiments repository at GitHub.

Source code @ GitHub
Source code repository

The sample belongs to the Experiments.OData solution; the project name is Experiments.OData.TwitPicClient.

Sample application

My sample application is simple. It has some usernames on the left side that the user can select. There is also a textbox to filter images. If the user enters something in the textbox, that string is searched for in the message part of the pictures. Results are shown as on the following screenshot.

My twitpic client
Click on image to see it at original size.

The maximum number of images returned by one request is limited to ten. You can change the limit or remove it. It is here just to show you how queries can be done.

Querying Twitpic

I wrote this application in Visual Studio 2010. To get Twitpic data there I did nothing special – I just added a service reference to the Twitpic OData service. Visual Studio understands the format and generates all the classes for you so you can start using the service immediately.

Now let’s see how to query the OData service. This is the Page_Load method of my default page. Note how I built the LINQ query step by step.


My query is built by following steps:

  • create a query that joins users and images and adds a condition for the username,
  • if the search textbox was filled, add the search condition to the query,
  • sort the data by timestamp in descending order,
  • make the query return only the first ten results.
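The steps above might look like the following in code. This is a hedged sketch only: TwitpicContext, Image, and the property names (UserName, Message, Timestamp) are guesses for illustration, not the actual generated Twitpic model.

```csharp
using System.Linq;

// Hypothetical helper; "context" is an assumed generated OData service context.
static System.Collections.Generic.List<Image> LoadImages(
    TwitpicContext context, string userName, string searchText)
{
    // 1. Join user and images and add a condition for the username.
    var query = context.Images
        .Where(i => i.User.UserName == userName);

    // 2. If the search textbox was filled, add the search condition.
    if (!string.IsNullOrEmpty(searchText))
        query = query.Where(i => i.Message.Contains(searchText));

    // 3. + 4. Sort by timestamp descending and take only ten results.
    return query
        .OrderByDescending(i => i.Timestamp)
        .Take(10)
        .ToList();
}

// The client library translates this into an OData request roughly like:
// .../Images()?$filter=...&$orderby=Timestamp desc&$top=10
```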

Now let’s see the OData query that is sent to Twitpic OData service. You can click on image to see it at original size.

My OData query in Fiddler2
Click on image to see it at original size.

This is the query that our service client sent to the OData service (I searched for Belgrade in my pictures):


If you look at the request string you can see that all the conditions I set through LINQ are represented in this URL. The URL is not very short, but it has a simple syntax and is easy to read. The answer to our query is pure XML that is mapped to our object model when the answer arrives. There is nothing complex here, as you can see.


Getting our Twitpic client working was an extremely simple task. We did nothing special – we just added a reference to the service and wrote one simple LINQ query that we gave to a repeater that shows the data. As we saw from the monitoring proxy’s report, the answer is a little bit messy but still easy XML that we can also read using some XML library if we don’t have a better option. You can find more OData services on the OData producers page.

David Giard reported [Deep Fried Bytes] Episode 126: Chris Woodruff on OData on 11/22/2010:

In this interview, Deep Fried Bytes host Chris Woodruff explains the standard protocol OData and how to use .NET tools to create and consume data in this format.


The Deep Fried Bytes blog wasn’t up to date when I checked on 11/22/2010.

Scott Guthrie (@scottgu) listed additional Upcoming Web Camps that will include OData hands-on development in a 11/21/2010 post:

Earlier this year I blogged about some Web Camp events that Microsoft is sponsoring around the world.  These training events provide a great way to learn about a variety of technologies including ASP.NET 4, ASP.NET MVC, VS 2010, Web Matrix, Silverlight, and IE9.  The events are free and the feedback from people attending them has been great.

A bunch of additional Web Camp events are coming up in the months ahead.  You can find out more about the events and register to attend them for free here.

Below is a snapshot of the upcoming schedule as of today:

One Day Events

One-day events focus on teaching you how to build websites using ASP.NET MVC, WebMatrix, OData and more, and include presentations and hands-on development. They will be available in 30 countries worldwide.  Below are the currently known dates with links to register to attend them for free:

City | Country | Date | Technology | Registration Link
Bangalore | India | 16-Nov-10 | ASP.NET MVC | Already Happened
Paris | France | 25-Nov-10 | TBA | Register Here
Bad Homburg | Germany | 30-Nov-10 | ASP.NET MVC | Register Here
Bogota | Colombia | 30-Nov-10 | Multiple | Register Here
Chennai | India | 1-Dec-10 | ASP.NET MVC | Register Here
Seoul | Korea | 2-Dec-10 | Web Matrix | Coming Soon
Pune | India | 3-Dec-10 | ASP.NET MVC | Register Here
Moulineaux | France | 8-Dec-10 | TBA | Register Here
Sarajevo | Bosnia and Herzegovina | 10-Dec-10 | ASP.NET MVC | Register Here
Toronto | Canada | 11-Dec-10 | ASP.NET MVC | Register Here
Bad Homburg | Germany | 16-Dec-10 | ASP.NET MVC | Register Here
Moulineaux | France | 11-Jan-11 | TBA | Register Here
Cape Town | South Africa | 22-Jan-11 | Web Matrix | Coming Soon
Johannesburg | South Africa | 29-Jan-11 | Web Matrix | Coming Soon
Tunis | Tunisia | 1-Feb-11 | ASP.NET MVC | Register Here
Cape Town | South Africa | 12-Feb-11 | ASP.NET MVC | Coming Soon
San Francisco, CA | USA-West | 18-Feb-11 | ASP.NET MVC | Coming Soon
Johannesburg | South Africa | 19-Feb-11 | ASP.NET MVC | Coming Soon
Redmond, WA | USA-West | 18-Mar-11 | OData | Coming Soon
Munich | Germany | 31-Mar-11 | Web Matrix | Register Here
Moulineaux | France | 5-Apr-11 | TBA | Register Here
Moulineaux | France | 17-May-11 | TBA | Register Here
Irvine, CA | USA-West | 10-Jun-11 | ASP.NET MVC | Coming Soon
Moulineaux | France | 14-Jun-11 | TBA | Register Here

Two Day Events

Two-day Web Camps go into even more depth.  These events will cover ASP.NET, ASP.NET MVC, WebMatrix and OData, and will have presentations on day 1 with hands-on development on day 2.  Below are the current dates for the events:

City | Country | Date | Presenters | Registration Link
Hyderabad | India | 18-Nov-10 | James Senior & Jon Galloway | Already Happened
Amsterdam | Netherlands | 20-Jan-11 | James Senior & Scott Hanselman | Coming Soon
Paris | France | 25-Jan-11 | James Senior & Scott Hanselman | Coming Soon
Austin, TX | USA | 7-Mar-11 | James Senior & Scott Hanselman | Coming Soon
Buenos Aires | Argentina | 14-Mar-11 | James Senior & Phil Haack | Coming Soon
São Paulo | Brazil | 18-Mar-11 | James Senior & Phil Haack | Coming Soon
Silicon Valley | USA | 6-May-11 | James Senior & Doris Chen | Coming Soon

More Details

You can find the latest details and registration information about upcoming Web Camp events here.

Hope this helps,


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Wade Wegner announced the availability of his Code for the Windows Azure AppFabric Caching demo at PDC 2010 in this 11/22/2010 post:

This post is long overdue.  At PDC10 I recorded the session Introduction to Windows Azure AppFabric Caching, where I introduced the new Caching service.  As part of the presentation I gave a demo in which I updated an existing ASP.NET web application running in a Windows Azure web role to use the caching service.  For details on the caching service, please watch the presentation.

In this presentation I showed three uses of the caching service:

  1. Using the Caching service as the session state provider.
  2. Using the Caching service as an explicit cache for reference data stored in SQL Azure.
  3. Using the Caching service along with the local cache feature to store resource data in the web client.
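For use (1), the session state provider is wired up in web.config. A hedged fragment as the CTP-era documentation described it (the cache name and the dataCacheClient section, which supplies the service URL and credentials, are assumptions):

```xml
<system.web>
  <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
    <providers>
      <!-- Provider shipped with the Caching service CTP; the dataCacheClient
           configuration section (not shown) points at your cache endpoint. -->
      <add name="AppFabricCacheSessionStoreProvider"
           type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
           cacheName="default" />
    </providers>
  </sessionState>
</system.web>
```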

If you want to see this application running in Windows Azure, take a look here:

I had meant to immediately post the code to this application so that you can try it out yourself, but alas PDC10 and TechEd EMEA set me back quite a bit.  So, without further ado, here are the files you’ll need: one download contains the solution and projects required to run the application; the other contains the SQL scripts required to set up the SQL Azure database you’ll need to use (if you want to use SQL Server, you’ll have to update the connection string accordingly).

You’ll have to update a few items in the web.config file.  Search and replace the following items:

Once you have made these updates, hit F5 to run locally.

Please note: when you run this locally against the Caching service, you will encounter significant network latency!  This is to be expected, as you are making a number of network hops to reach your cache in the data center.  For the best results, deploy your application to South Central US, as this is where the AppFabric LABS portal creates your cache.

If you have any problems or questions, please let me know.

Will Perry posted Getting Started with Service Bus v2 (October CTP) - Connection Points on 11/1/2010 (missed when posted):

Last week at PDC we released a Community Technology Preview (CTP) of a new version of Windows Azure AppFabric Service Bus. You can download the SDK, Samples and Help on the Microsoft Download Center. There’s plenty new in this release and the first is the notion of Explicit Connection Point Management.

In v1, whenever you started up a service and relayed it through the cloud over Service Bus, we just quietly cooperated – this seems like a great idea until you think about a couple of advanced scenarios. First: what happens when your service is offline but a sender (client) tries to connect to it? Well, we don’t know if the service is offline or the address is wrong (and there’s no way to differentiate); there’s no way to determine what’s a valid service connection point and what isn’t. There’s also no way to specify metadata about the service that connects (this is an HTTP endpoint, this endpoint is only Oneway). Also, you might have noticed that we added support for Anycast in v2 – how can you determine which connection points are Unicast (one listener/server and many senders/clients) and which are Anycast (many load-balanced listeners and many clients)? That’s what connection points are for.

A Connection Point defines the metadata for a Service Bus connection. This includes the Shape (Duplex, RequestReply, Http or Oneway), the Maximum Number of Listeners, and the Runtime Uri which acts as the service endpoint. Connection Points are managed as an Atom feed using standard REST techniques. You’ll find the feed at the Connection Point Management Uri (https://&lt;YourServiceNamespace&gt;…/Resources/ConnectionPoints) and it’ll look a little bit like this:

<feed xmlns="…">
  <title type="text">ConnectionPoints</title>
  <subtitle type="text">This is the list of ConnectionPoints currently available</subtitle>
  <generator>Microsoft Windows Azure AppFabric - Service Bus</generator>
  <link rel="self" href="…" />
  <entry>
    <title type="text">TestConnectionPoint</title>
    <link rel="self" href="…" />
    <link rel="alternate" href="…" />
    <content type="application/xml">
      <ConnectionPoint xmlns="…" xmlns:xsd="…" xmlns:xsi="…">
      …
Writing some code to get your list of connection points is easy – in the rest of this post, I’ll show you how to (very simply) List, Create and Delete connection points. Before getting started, make sure that you’ve installed the October 2010 Windows Azure AppFabric SDK, then create a new .NET Framework 4.0 (Full) Console Application in Visual Studio 2010.

  1. Add a reference to the Microsoft.ServiceBus.Channels assembly (it’s in the GAC, or at C:\Program Files\Windows Azure AppFabric SDK\V2.0\Assemblies\Microsoft.ServiceBus.Channels.dll if you need to find it on disk)
  2. Add three string constants containing your Service Namespace, Default Issuer Name and Default Issuer Key. You can get this information over at the Windows Azure AppFabric Portal:
        private const string serviceNamespace = "<YourServiceNamespace>";
        private const string username = "owner";
        private const string password = "base64base64base64base64base64base64base64==";
  3. Now define the hostname of your service bus management service (you can find this in the portal, too):
        private static readonly string ManagementHostName = serviceNamespace + "";

Access to Service Bus Management is controlled by the Access Control Service. You’ll need to present the ‘’ claim with a value ‘Manage’ in order to have permission to Create, Read and Delete connection points. The default issuer (owner) is configured to have this permission by default when you provision a new account.

We’ll need to go ahead and get a Simple Web Token from the Access Control Service for our request. Note that, by default, the realm (relying party address) used for service bus management has the http scheme: you must retrieve a token for the management service using a Uri with the default port and http scheme even though you will in fact connect to the service over https:

  1. Get an auth token from ACS2 for the management service’s realm (remember, this must specify an http scheme):
        string authToken = GetAuthToken(new Uri("http://" + ManagementHostName + "/"));
  2. Add the token to the Authorization header of a new WebClient and download the feed – note that here we specify https:
    using (WebClient client = new WebClient())
    {
        client.Headers.Add(HttpRequestHeader.Authorization, "WRAP access_token=\"" + authToken + "\"");
        var feedUri = "https://" + ManagementHostName + "/Resources/ConnectionPoints";
        var connectionPoints = client.DownloadString(feedUri);
    }

Compile and run the app to see your current connection points (probably none). Now let’s go ahead and create a Connection Point – to do this we just POST an Atom entry to the ConnectionPoint Management service. You can achieve this using classes from the System.ServiceModel.Syndication namespace3 but for clarity and simplicity we’ll stick to just using a big old block of XML encoded as a string – it’s not pretty this way, but it gets the job done!

  1. Define the properties of the new Connection Point – you need to specify a name (alphanumeric), the maximum number of listeners, a channel shape (Duplex, Http, Oneway or RequestReply) and the endpoint address (sometimes called a Runtime Uri) for the service:
    string connectionPointName = "TestConnectionPoint";
    int maxListeners = 2;
    string channelShape = "RequestReply";
    Uri endpointAddress = ServiceBusEnvironment.CreateServiceUri(
    "sb", serviceNamespace, "/services/TestConnectionPoint");
  2. Now, go ahead and define the new Atom Entry for the connection point. Note how the name is set as the entry’s title and the endpoint address is set as an alternate link for the entry.
        var entry = @"<entry xmlns="""">
                        <title type=""text"">" + connectionPointName + @"</title>
                        <link rel=""alternate"" href=""" + endpointAddress + @""" />
                        <content type=""application/xml"">
                          <ConnectionPoint xmlns:i="""" xmlns="""">
                            <MaxListeners>" + maxListeners + @"</MaxListeners>
                            <ChannelShape>" + channelShape + @"</ChannelShape>
                          </ConnectionPoint>
                        </content>
                      </entry>";
  3. Create a HttpWebRequest, add the ACS Auth Token, set the Http Method to Post and configure the content type as ‘application/atom+xml’:
        var request = HttpWebRequest.Create("https://" + ManagementHostName + "/Resources/ConnectionPoints") as HttpWebRequest;
        request.Headers.Add(HttpRequestHeader.Authorization, "WRAP access_token=\"" + authToken + "\"");
        request.ContentType = "application/atom+xml";
        request.Method = "POST";
  4. Finally, write the new atom entry to the request stream and retrieve the response. You’ll discover an atom entry describing your newly created connection point in that response:
        using (var writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write(entry);
        }

        WebResponse response = request.GetResponse();
        Stream responseStream = response.GetResponseStream();
        var connectionPoint = new StreamReader(responseStream).ReadToEnd();

If you re-run your application now, you’ll see that you’ve created a connection point which is listed in the feed we retrieved earlier. The last thing to do now is clean up after ourselves by issuing a delete against the resource we just created:

  1. Connection Point management addresses are similar to OData addresses – we address the specific connection point to delete by appending the connection point management Uri with the name of the item to delete in parentheses:
        var connectionPointManagementAddress 
                = "https://" + ManagementHostName + "/Resources/ConnectionPoints(" + connectionPointName + ")";
  2. Add the ACS auth token to the header of a WebClient and issue a Delete against the entry:
        using (WebClient client = new WebClient())
         client.Headers.Add(HttpRequestHeader.Authorization, "WRAP access_token=\"" + authToken + "\"");
            client.UploadString(connectionPointManagementAddress, "DELETE", string.Empty);

So, there we go: Listing, Creating and Deleting Connection Points. Download the full example from Skydrive:



1 You’ll need to have a WRAP token in the Http Authorization Header to access this resource, so you can’t just navigate to it in a browser.

2 Read more about the Access Control Service at Justin Smith’s blog. Sample code to retrieve a token is included in the download at the end of this post.

3 An example of using the Syndication primitives to manipulate connection points is available in the October 2010 Windows Azure AppFabric Sample ‘ManagementOperations’.

MSDN presented Exercise 1: Using the Windows Azure AppFabric Caching for Session State in 11/2010 as the first part of the Building Windows Azure Applications with the Caching Service training course:

In this exercise, you explore the use of the session state provider for Windows Azure AppFabric Caching as the mechanism for out-of-process storage of session state data. For this purpose, you will use the Azure Store—a sample shopping cart application implemented with ASP.NET MVC. You will run this application in the development fabric and then modify it to take advantage of Windows Azure AppFabric Caching as the back-end store for the ASP.NET session state. You start with a begin solution and explore the sample using the default ASP.NET in-proc session state provider. Next, you add references to the Windows Azure AppFabric Caching assemblies and configure the session state provider to store the contents of the shopping cart in the distributed cache cluster provided by Windows Azure AppFabric.
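The key configuration step in the exercise is swapping the default in-proc provider for the cache-backed one in web.config. A sketch of what that section looks like follows; the provider type lives in the AppFabric Caching assemblies, and the cache name and provider name here are illustrative:

```xml
<system.web>
  <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
    <providers>
      <!-- session state provider shipped with the AppFabric Caching assemblies -->
      <add name="AppFabricCacheSessionStoreProvider"
           type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
           cacheName="default" />
    </providers>
  </sessionState>
</system.web>
```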

Here are links to the remaining exercises and parts:

Admin published Azure Extensions for Microsoft Dynamics CRM to the Result on Demand blog on 11/21/2010:

Microsoft Dynamics CRM 2011 supports integration with the AppFabric Service Bus feature of the Windows Azure platform. By integrating Microsoft Dynamics CRM with the AppFabric Service Bus, developers can register plug-ins with Microsoft Dynamics CRM 2011 that can pass run-time message data, known as the execution context, to one or more Azure solutions in the cloud. This is especially important for Microsoft Dynamics CRM Online, as the AppFabric Service Bus is one of two supported solutions for communicating run-time context obtained in a plug-in to external line-of-business (LOB) applications. The other is the external endpoint access capability available to a plug-in registered in the sandbox.

The service bus combined with the AppFabric Access Control services (ACS) of the Windows Azure platform provides a secure communication channel for CRM run-time data to external LOB applications. This capability is especially useful in keeping disparate CRM systems or other database servers synchronized with CRM business data changes.

In This Section
Introduction to Windows Azure Integration with Microsoft Dynamics CRM
Provides an overview of the integration implementation between Microsoft Dynamics CRM 2011 and Windows Azure.
Configure Windows Azure Integration with Microsoft Dynamics CRM
Provides an overview of configuring Microsoft Dynamics CRM 2011 and a Windows Azure solution for correct integration.
Write a Listener for a Windows Azure Solution
Provides guidance about how to write a Microsoft Dynamics CRM 2011-aware listener application for a Windows Azure solution.
Write a Custom Azure-aware Plug-in
Provides guidance about how to write a custom Microsoft Dynamics CRM plug-in that can send business data to the Windows Azure AppFabric Service Bus.
Send Microsoft Dynamics CRM Data over the AppFabric Service Bus
Provides information about how to use the Azure-aware plug-in that is included with Microsoft Dynamics CRM 2011.
Walkthrough: Configure CRM for Integration with Windows Azure
Provides a detailed walkthrough about how to configure Microsoft Dynamics CRM 2011 for integration with Windows Azure platform AppFabric.
Walkthrough: Configure Windows Azure ACS for CRM Integration
Provides a detailed walkthrough about how to configure Windows Azure ACS for integration with Microsoft Dynamics CRM 2011.
Sample Code for CRM-AppFabric Integration
Provides sample code that demonstrates Microsoft Dynamics CRM 2011 integration with Windows Azure platform AppFabric.
Related Sections

Plug-ins for Extending Microsoft Dynamics CRM

See Also

Windows Azure Platform Developer Center

Windows Azure Platform Getting Started

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

No significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

My (@rogerjenn) Adding Trace, Event, Counter and Error Logging to the OakLeaf Systems Azure Table Services Sample Project post of 11/22/2010 describes adding and analyzing Azure diagnostic features with LoadStorm and Cerebrata apps:

I described my first tests with LoadStorm’s Web application load-testing tools with the OakLeaf Systems Azure Table Services Sample Project in a Load-Testing the OakLeaf Systems Azure Table Services Sample Project with up to 25 LoadStorm Users post of 11/18/2010. These tests were sufficiently promising to warrant instrumenting the project with Windows Azure Diagnostics.


Users usually follow this pattern when opening the project for the first time (see above):

  • Click the Next Page link a few times to verify that GridView paging works as expected.
  • Click the Count Customers button to iteratively count the Customer table’s 91 records.
  • Click the Delete All Customers button to remove the 91 rows from the Customers table.
  • Click the Create Customers button to regenerate the 91 table rows from a Customer collection.
  • Click the Update Customers button to add a + suffix to the Company name field.
  • Click Update Customers again to remove the + suffix.

Only programmers and software testers would find the preceding process of interest; few would find it entertaining.

Setting Up and Running Load Tests

Before you can run a load test, you must add a LoadStorm.html page to your site root folder or a <!-- LoadStorm ##### --> (where ##### is a number assigned by LoadStorm) HTML comment in the site’s default page.

LoadStorm lets you emulate user actions by executing links with HTTP GET operations (clicking Next Page, then First Page) and clicking the Count Customers, Delete All Customers and Create Customers buttons in sequence with HTTP POST operations. …
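Because the sample is an ASP.NET WebForms page, each emulated button click is a postback, and the load tool has to echo back the page's hidden form fields along with the button's form data. LoadStorm handles this for you; the sketch below only illustrates the idea, with a simplified regex that assumes the hidden-input markup shape ASP.NET emits:

```python
import re

def extract_hidden_fields(html):
    """Collect the ASP.NET hidden inputs (__VIEWSTATE, __EVENTVALIDATION, ...)
    that a replayed POST must send back to the server."""
    # ASP.NET renders these with matching name and id attributes
    pattern = r'<input type="hidden" name="(__[A-Z]+)" id="\1" value="([^"]*)"'
    return dict(re.findall(pattern, html))

sample = '<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="dDwtMTA1" />'
fields = extract_hidden_fields(sample)  # {'__VIEWSTATE': 'dDwtMTA1'}
```

A load script would merge these fields into the POST body for each emulated button click before sending the next request.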

The post continues with additional details of adding and testing Azure diagnostics.

Gregg Duncan recommended MSI Explorer - A quick and simple means to spelunk MSIs on 11/21/2010:

sateesh-arveti - MSI Explorer

“Most of us are quite familiar [with] creating MSI or Setup [projects] for our applications. By using a MSI, we can make sure all dependencies for running the application will be placed properly. MSI is the best option available on Windows OS for packaging and distributing our application. For .NET developers, Visual Studio presents [a] lot of features in creating setup and deployment projects for our application. But there are no built-in tools in Visual Studio to look into the MSI contents, and making even a small change to an MSI requires rebuilding the entire Setup Project. So, I think it's better to design an application that will analyze the MSI and give the details of it, along with the capability of updating it without rebuilding.

Features present in this application:

  • It allows us to look into the contents of the MSI.
  • It allows us to export the contents of the MSI.
  • It allows us to update commonly changing properties without rebuilding it.
  • Easy to use UI.
  • Now, Updating an MSI is quite simple


While Orca has been around for like a billion years or so (since about when MSIs were introduced), it’s not the easiest tool to find, download and use. To get the official version you have to download the entire Microsoft Windows SDK (493 MB), install that, etc., etc.

That’s what I really liked about MSI Explorer: how easy it was to download and run. Just download the ZIP, unzip it, and, assuming you have the .NET Framework installed, run the EXE. Done. No install, no fuss, no muss. Quick, simple, easy and priced just right: free.

Snap of the ZIP’s contents:


The app running (with the IE9 Platform Preview 4 MSI open):



Related Past Post XRef:

Sean Iannuzzi explained Publishing to Windows Azure using Windows Azure Service Management API on 11/17/2010 (missed when posted):

In many cases publishing to Windows Azure is fairly straightforward, and it has become even more so with the fully integrated Windows Azure Service Management API.  Before the API was enabled, you had to create a package and deploy it manually to the Cloud by selecting your Package and Configuration file.  This is no longer the case with the Windows Azure Service Management API.

In addition, the Service Management API seems to be a bit more integrated into Windows Azure.  This could just be my perception of the services, but it seems that when I “cancel” or want to stop an already-running publish, it works faster using the API.  I have also seen cases where the Cloud Services dashboard appears to be a bit behind with the status of the deployment.

In this post I will provide the information you will need in order to setup the Windows Azure Service Management API as this service requires the registration of a client side certificate and subscription ID.  Below you will find screenshots of each step and the final result of a publish using the Management API.


  • When publishing with the Management API it also provides an automatic label so that you can easily track your deployments.
  • Before you begin the steps below, please either open your Windows Azure project or create a new one, and then follow the steps.

Figure 1.00 – Publish to Azure using the Management API – Publish

Right click on your Cloud Project and Click Publish


Figure 1.01 – Publish to Azure using the Management API – Add New Hosted Service Credential

In the publish Cloud Service Dialog click on the Credentials Dropdown and Click “Add”


Figure 1.02 – Publish to Azure using the Management API – Add New Hosted Service Credential

An authentication dialog will be shown prompting you to “Select” or “Create” a certificate that will be used for authentication with the Management API.  I suggest you create a “New” certificate.


Figure 1.03 – Publish to Azure using the Management API – Create New Authentication Certificate

In the Authentication certificate dropdown click on “Create”


Figure 1.04 – Publish to Azure using the Management API – Enter a Friendly Name for your Authentication Certificate

Enter a Friendly Name for your Authentication Certificate


Figure 1.05 – Publish to Azure using the Management API – Certificate Created.

Now that you have created the certificate, the next step is to upload it to the Cloud and associate it with your Subscription.


Figure 1.06 – Publish to Azure using the Management API – Copy Certificate Path

To save time click on the Copy Certificate Path link as shown below.



For these next steps you will need to login to your Windows Azure Account.

Figure 1.07 – Publish to Azure using the Management API – Managing My API Certificates

  1. Login to Windows Azure
  2. Click on your Azure Services
  3. Click on the Account Tab
  4. Click on the “Manage My API Certificates” Link


Figure 1.08 – Publish to Azure using the Management API – Paste the Copied Certificate Path

  1. Click in the Textbox Area or Click on the Browse button
  2. Paste in the path to the certificate you generated earlier.


Figure 1.09 – Publish to Azure using the Management API – Certificate Verification

Once you have uploaded your certificate you are just about finished.  Verify that your certificate was installed as shown below.


Figure 1.10 – Publish to Azure using the Management API – Certificate Verification

  1. Next click back on the Account Tab
  2. Then locate and select your Subscription ID
  3. Once you have selected your Subscription ID navigate back to your Visual Studio 2010 Windows Azure Project where the dialog is still shown and paste in the Subscription ID as shown below. 


Figure 1.11 – Publish to Azure using the Management API – Management API Services Enabled

Once you have completed the steps above the Management API Deployment Services should be connected to your Windows Azure Cloud Services and show you a list of available deployment options.


Note: When using the API by default the Deployment Wizard will add a Label so that you are able to track your deployments.

Figure 1.12 – Publish to Azure using the Management API – Deployment – Cancel

At this point you should be all set to use the Management API to publish your Cloud Application.  One of the nice features of using the Management API is the ability to cancel or stop a deployment in process.  While you can do this using the Azure Dashboard, I have found that using the API seems to be a bit more integrated, or simply works faster.  Judge for yourself, but this is a very nice feature that is completely seamless.


Figure 1.13 – Publish to Azure using the Management API – Deployment Successful

At this point everything should be set up and enabled so that you can deploy your Cloud Application using the Windows Azure Management API.
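Under the covers, the publish wizard is calling the Service Management REST API with the certificate and subscription ID you just registered. A rough sketch of one such call from outside Visual Studio is below; the PEM path is hypothetical (the certificate would have to be exported with its private key), and the version-header value is the one current in late 2010:

```python
import http.client
import ssl

MANAGEMENT_HOST = "management.core.windows.net"

def list_hosted_services_request(subscription_id):
    """Path and required headers for the 'List Hosted Services' operation."""
    path = "/" + subscription_id + "/services/hostedservices"
    headers = {"x-ms-version": "2010-10-28"}  # every call must state an API version
    return path, headers

def open_management_connection(client_cert_pem):
    # The API authenticates the connection with the client certificate
    # uploaded in Figure 1.08 (hypothetical PEM file path).
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(client_cert_pem)
    return http.client.HTTPSConnection(MANAGEMENT_HOST, context=ctx)

path, headers = list_hosted_services_request("00000000-0000-0000-0000-000000000000")
```

The same host and certificate handshake carry the create-, upgrade-, and delete-deployment operations the wizard drives for you.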


<Return to section navigation list> 

Visual Studio LightSwitch

The Virtual Realm blog published a comprehensive LightSwitch Links and Resources linkpost on 11/22/2010:

Microsoft Downloads
  • Visual Studio LightSwitch Beta 1 Training Kit (Link)
Microsoft Support Forums
  • Visual Studio LightSwitch – General (Beta 1) – (Link)
  • LightSwitch Extensibility – (Link)
MSDN Videos
  • How Do I: Define My Data in a LightSwitch Application? (Link)
  • How Do I: Create a Search Screen in a LightSwitch Application? (Link)
  • How Do I: Create an Edit Details Screen in a LightSwitch Application? (Link)
  • How Do I: Format Data on a Screen in a LightSwitch Application? (Link)
  • How Do I: Sort and Filter Data on a Screen in LightSwitch Application? (Link)
  • How Do I: Create a Master-Details (One to Many) Screen in a LightSwitch Application? (Link)
  • How Do I: Pass a Parameter into a Screen from the Command Bar in a LightSwitch Application? (Link)
  • How Do I: Write business rules for validation and calculated fields in a LightSwitch Application? (Link)
  • How Do I: Create a Screen that can both Edit and Add Records in a LightSwitch Application? (Link)
  • How Do I: Create and Control Lookup Lists in a LightSwitch Application? (Link)
  • How Do I: Set up Security to Control User (Link)
  • How Do I: Deploy a Visual Studio LightSwitch Application? (Link)
Channel 9 Videos
  • Steve Anonsen and John Rivard: Inside LightSwitch (Link)
  • Visual Studio LightSwitch Beyond the Basics (Link)
  • Introducing Visual Studio LightSwitch (Link)
LightSwitch Bloggers
LightSwitch Team Blogs
  • How to Communicate Across LightSwitch Screens (Link)
  • How to use Lookup Tables with Parameterized Queries (Link)
  • Creating a Custom Search Screen in Visual Studio LightSwitch – Beth Massi (Link)
  • How to Create a RIA service wrapper for OData Source (Link)
  • How Do I: Import and Export Data to and from a CSV File (Link)
  • How Do I: Create and Use Global Values in a Query (Link)
  • How Do I: Filter Items in a ComboBox or Modal Window Picker in LightSwitch (Link)
  • Overview of Data Validation in LightSwitch Applications (Link)
  • The Anatomy of a LightSwitch Application Part 3, the logic Tier (Link)
  • The Anatomy of a LightSwitch Application Part 2, the presentation Tier (Link)
  • The Anatomy of a LightSwitch Application Part 1, Architecture Overview (Link)
General Links and Resources
  • Microsoft LightSwitch Application using SQL Azure Database (Link)
  • Switch On The Light - Bing Maps and LightSwitch (Link)
  • How Do I: Import and Export Data to/From a CSV Files (Dan Seefeldt) (Link)
  • Visual Studio LightSwitch - Help Website (Link)
  • How to Reference security entities in LightSwitch (Link)
  • Filtering data based on current user in LightSwitch Apps (Link)
  • Telerik Blogs - Getting Started with #LightSwitch and OpenAccess (Link)
  • LightSwitch Student Information System (Link)
  • LightSwitch Student Information System (Part 2): Business Rules and Screen Permissions (Link)
  • LightSwitch Student Information System (Part 3): Custom Controls (Link)
  • Pluralcast 23: Visual Studio LightSwitch with Jay Schmelzer (Link)
  • How Do I: Filter Items in a ComboBox or Modal Window Picker in LightSwitch (Link)
  • How Do I: Create and Use Global Values in a Query (Link)
  • Filtering data based on current user in LightSwitch Applications (Link)
  • Printing SQL Server Reports (.rdlc) with LightSwitch (Link)
  • CodeProject - Beginners Guide to Visual Studio LightSwitch (Link)
  • TechEd Africa 2010 - Presentation by Lisa Feigenbaum (DTL217) #LightSwitch - Future of Business Application Development (Link)
  • Visual Studio LightSwitch: Connecting to SQL Azure Databases (Link)
  • Visual Studio LightSwitch: Implementing and Using Extension Methods with Visual Basic (Link)
  • Rapid Application Development with Visual Studio LightSwitch (Link)

The South Bay .NET User Group announced that Introduction to Visual Studio LightSwitch will be the topic of the 12/1/2010 meeting at the Microsoft Mt. View Campus, 1065 La Avenida Street, Bldg 1, Mountain View, CA 94043 from 6:30 to 8:45 PM:

Event Description:

LightSwitch is a new product in the Visual Studio family aimed at developers who want to easily create business applications for the desktop or the cloud. It simplifies the development process by letting you concentrate on the business logic while LightSwitch handles the common tasks for you.

In this demo-heavy session, you will see, end-to-end, how to build and deploy a data-centric business application using LightSwitch. We’ll also go beyond the basics of creating simple screens over data and demonstrate how to create screens with more advanced capabilities.

You’ll see how to extend LightSwitch applications with your own Silverlight custom controls and RIA services. We’ll also talk about the architecture and additional extensibility points that are available to professional developers looking to enhance the LightSwitch developer experience.

Presenter's Bio: Beth Massi is a Senior Program Manager on the Visual Studio BizApps team at Microsoft, which builds the Visual Studio tools for Azure, Office, and SharePoint, as well as Visual Studio LightSwitch. Beth is a community champion for business application developers and is responsible for producing and managing online content and community interaction with the BizApps team.

She has over 15 years of industry experience building business applications and is a frequent speaker at various software development events. You can find her on a variety of developer sites including MSDN Developer Centers, Channel 9, and her blog. Follow her on Twitter @BethMassi.

Drew Robbins explained Building Business Applications with Visual Studio LightSwitch in his DEV206 session at TechEd Europe 2010:


Visual Studio LightSwitch is the simplest way to build business applications for the desktop and cloud. LightSwitch simplifies the development process by letting you concentrate on the business logic, while LightSwitch handles the common tasks for you. In this demo-heavy session you will see, end-to-end, how to build and deploy a data-centric business application using LightSwitch, as well as how you can use Visual Studio 2010 Professional and Expression Blend 4 to customize and extend the presentation and data layers of a LightSwitch application when the requirements grow beyond what is supported by default.

<Return to section navigation list> 

Windows Azure Infrastructure

Wriju summarized What’s new in Windows Azure on 11/21/2010:

What’s new in Windows Azure, and what’s coming up next? PDC 2010 announced a bunch of new features, like:

  • Virtual Machine Role
  • Extra Small Instance
  • Virtual Network
  • Remote Desktop
  • Azure Marketplace

And many more. To find more on it [new]

I feel strongly that, with the VM Role and Remote Desktop, migration will be much easier. We can install and configure third-party components.

Adrian Sanders asserted “Obviously we’re drinking the cloud kool-aid: maybe you should too!” in a deck for his The Top 5 Overlooked Reasons Why Business Belongs in the Cloud post of 11/21/2010:

There are plenty of “Top 5 lists” with generic reasons for why businesses should migrate into SaaS and cloud computing. Scalability, cost, mobility – they’re good reasons, sure, but we’ve heard them before: what else does cloud computing offer? If you’re thinking about moving your business into the cloud but haven’t yet, here are five reasons that are often overlooked:

1. Clients notice. Traditionally, IT has served a “backend” role in business. With the exception of email and websites, most businesses hide their IT solutions from clients, and with good reason: IT is ugly. Cloud computing changes that. Many SaaS offerings and cloud-based applications incorporate new ways of reaching clients as part of their workflow solutions. For example, Solve360, a popular online CRM, allows users to “publish” select materials from project workspaces, enabling real-time client collaboration. E-signature services allow clients to sign documents via a slick, paperless delivery model, and Helpdesk software lets clients access knowledge base forums and ticketed support in a branded, easy to use online environment. When it works, clients notice that you’re new, different, modern, and “slick.” IT itself becomes a branding mechanism.

2. Smarter architecture. Amid all the fuss about cloud differentiation it’s easy to forget that, aside from being cloud-based, many cloud apps are simply designed better than their on-premise counterparts. This could be attributable to a whole host of reasons, the most prominent of which is that (good) cloud apps have been designed entirely from the ground up. Whereas most on-premise solutions have strong ancestral roots in software designed 10-20 years ago, cloud apps have been developed much more recently, meaning they’ve benefited from years of accumulated programming and business experience. Cloud apps are designed for modern businesses: most on-premise apps simply aren’t.

3. Usability. One of the great innovations of cloud-computing has been the focus put on end-users. Many legacy apps put function first and usability second (MS Access, anyone?), whereas good cloud apps don’t see a difference between the two. This key principle can’t be underestimated: software is only as powerful as the people using it. Generally speaking (and yes, there are exceptions to this) cloud-based software understands that people matter, creating a better user experience and increasing efficiency.

4. Integration. We just published a blog post blasting API integration, but it’s worth noting that at least cloud-based software makes API integration a viable and affordable workflow solution. Good luck getting anything to work well with a legacy app, especially on the cheap: compare that reality with the generous and freely available API’s that most SaaS and cloud-based vendors offer and it’s an easy sell.

5. Quality of Service. This only applies to SaaS, but it’s a powerful enough attribute that I’m listing it as an argument for all cloud-computing. In a traditional IT setting, clients have a one-time transaction with vendors, repeated every few years for product upgrades. In the world of SaaS, clients generally pay vendors month-to-month and upgrades and bug-fixes are released on a significantly ramped-up timescale. This means that: A) clients can drop out at any time, giving vendors a perpetual incentive to innovate, and: B) clients get a product that’s updated far, far more frequently than before. In addition, SaaS vendors increasingly have robust forums and user communities where support questions and feature requests are addressed quickly, effectively, and by multiple user types. This establishes a culture of support and user-driven innovation that has long been missing from on-premise software.

Obviously we’re drinking the cloud kool-aid. Maybe you should too.

Adrian is the founder of VM Associates, an end-user consultancy for businesses looking to move to the cloud from pre-existing legacy systems.

Tim Crawford observed that Servers Purchased Today Will Not Be Replaced in an 11/20/2010 post:

In the spirit of paradigm shifts, here’s one to think about.

Servers that are purchased today will not be replaced. Servers have a useful lifespan. Typically that ranges in the 3-5 years depending on their use. There are a number of factors that contribute to this. The cost to operate the server grows over time and it becomes less expensive to purchase a new one. The performance of the server is not adequate for newer workloads over time. These (and others) contribute to the useful lifespan of a server.

At the current adoption rates of cloud-based services, said servers will not be replaced. But rather, the services provided from those systems will move to cloud-based services. Of course there are corner cases. But as the cloud market matures, it will drive further adoption of services. Within the same timeframe, when existing servers become obsolete, many of those services will move to cloud-based services.

This shift requires several actions depending on your perspective.

  • Server/ Channel Provider: How will you shift revenue streams to alternative offerings? Are you only a product company and can you make the move to a services model? Are you able to expand your services to meet the demand and complexities?
  • IT Organizations: It causes a shift in budgetary, operational and process changes. Not to mention potential architecture and integration challenges for applications and services.

These types of changes take time to plan and develop before implementation. 3-5 years is not that far away in the typical planning cycle for changes this significant. The suggestion would be to get started now if you haven’t started already. There are great opportunities available today as a way to start “kicking the tires”.

This doesn’t bode well for server vendors.

Brent Stineman (@BrentCodeMonkey) offered Microsoft Digest for November 19th, 2010 in a post of the same date to the Sogeti Blogs:

So PDC10 and TechEd Europe have both come and gone. You’d think that information would have dried up following these two huge brain dumps. However, there’s still been a huge amount of data being sent out. Here we go…

In my last Microsoft-specific cloud update, I talked about some of the new PDC10 announcements. If you’d like to hear a more definitive view, then check out James Conard’s update. For those of you not familiar with Mr. Conard, he’s the Sr. Director of Microsoft’s Developer & Platform Evangelism (DPE) group.

TechNet has published an article that takes a look Inside SQL Azure. This paper covers many great topics, but the key one I think anybody working with SQL Azure should look at is the three paragraphs on throttling. I still don’t consider this information comprehensive enough, but it’s a start.

If you’re less concerned about the internals than about new features such as SQL Azure Reporting Services, look no further.

One of the new features of the Azure AppFabric is explicit connection point management. Will, a member of the team, has published a quick tutorial on how to get started with this great new feature.

Clemens Vasters also recently updated his blog with samples from his PDC10 presentation on Azure AppFabric Service bus futures. This includes his Service Bus Management Tool, a new sample echo client/service, and a sample of the new durable message buffer.

Speaking of PDC10, many of the session materials (slide-decks, samples, etc..) have been made available. You can find them at:

I’ve been digging into the Service Bus a bit myself lately and realized all the samples use the owner for security. This didn’t seem realistic and Richard Seroter was kind enough to see my questions and blog on how he addressed it.

The Windows Azure team updated their blog with some additional info on Azure packages and their impact on pricing. They also announced that they’ve updated the David Chappell whitepapers for the latest enhancements.

I was presenting to a group that was going through some Azure training yesterday, discussing the latest PDC announcements. A question came up regarding the new SQL Azure partition federations and Azure Storage. This question, when to use which, comes up often. There’s no simple answer, but I did recently run across a TechNet article that helps you understand your options better. This will hopefully lead to better informed decisions. If this isn’t enough, then you might also want to check out the Azure Storage Team’s blog for how to get the most out of Azure Tables.

One feature a lot of folks are looking forward to is the upcoming remote desktop feature. If you want a walk-through, you’ve got it!

Tech is all good, but as more folks start to adopt Azure, there’s more of a demand for guidance on patterns and best practices. Fortunately, it looks like the PNP team is already aware of it and working to fill that need.

To round out this digest, I’ll get a little less technical. There are always folks asking for a comparison between Microsoft’s and Amazon’s offerings. There have been a ba-gillion comparisons, but IT World has put up one more highlighting 5 key differences.

Robert Duffner has also been updating the Azure Team blog with some interviews with cloud thought leaders. One I’ve found particularly interesting is Robert’s interview with Charlton Barreto, Intel’s Technology Strategist.

This will likely be my final Microsoft specific digest before the end of the year. I have presentations I need to prepare for as well as some technical blogging I want to do. Combine those with the coming holidays and I think you can understand.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA) and Hyper-V Cloud

Tim Anderson (@timanderson) described The Microsoft Azure VM role and why you might not want to use it in this 11/22/2010 article:

I’ve spent the morning talking to Microsoft’s Steve Plank – whose blog [Plankytronixx] you should follow if you have an interest in Azure – about Azure roles and virtual machines, among other things.

Windows Azure applications are deployed to one of three roles, where each role is in fact a Windows Server virtual machine instance. The three roles are the web role for IIS (Internet Information Server) applications, the worker role for general applications, and newly announced at the recent PDC, the VM role, which you can configure any way you like. The normal route to deploying a VM role is to build a VM on your local system and upload it, though in future you will be able to configure and deploy a VM role entirely online.


It’s obvious that the VM role is the most flexible. You will even be able to use 64-bit Windows Server 2003 if necessary. However, there is a critical distinction between the VM role and the other two. With the web and worker roles, Microsoft will patch and update the operating system for you, but with the VM role it is up to you.

That does not sound too bad, but it gets worse. To understand why, you need to think in terms of a golden image for each role, that is stored somewhere safe in Azure and gets deployed to your instance as required.

In the case of the web and worker roles, that golden image is constantly updated as the system gets patched. In addition, Microsoft takes responsibility for backing up the system state of your instance and restoring it if necessary.

In the case of the VM role, the golden image is formed by your upload and only changes if you update it.

The reason this is important is that Azure might at any time replace your running VM (whichever role it is running) with the golden image. For example, if the VM crashes, or the machine hosting it suffers a power failure, then it will be restarted from the golden image.

Now imagine that Windows Server needs an emergency patch because of a newly-discovered security issue. If you use the web or worker role, Microsoft takes responsibility for applying it. If you use the VM role, you have to make sure it is applied not only to the running VM, but also to the golden image. Otherwise, you might apply the patch, and then Azure might replace the VM with the unpatched golden image.

Therefore, to maintain a VM role properly you need to keep a local copy patched and refresh the uploaded golden image with your local copy, as well as updating the running instance. Apparently there is a differential upload, to reduce the upload time.

The same logic applies to any other changes you make to the VM. It is actually more complex than managing VMs in other scenarios, such as the Linux VM on which this blog is hosted.

Another feature which all Azure developers must understand is that you cannot safely store data on your Azure instance, whichever role it is running. Microsoft does not guarantee the safety of this data, and it might get zapped if, for example, the VM crashes and gets reverted to the golden image. You must store data in Azure database or blob storage instead.
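
The discipline this imposes can be sketched as a write-through pattern: anything worth keeping goes straight to a durable, instance-independent store, never to the instance’s local disk. The `DurableStore` interface and in-memory stand-in below are hypothetical illustrations of that pattern, not an Azure API; a real role would back the interface with Azure blob or table storage.

```python
# Sketch of the pattern the VM-reversion behavior forces: treat the instance's
# disk as disposable and write anything you care about through to durable
# storage. DurableStore and InMemoryStore are hypothetical stand-ins here.

class DurableStore:
    """Minimal interface for a durable, instance-independent store."""
    def put(self, key, value):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class InMemoryStore(DurableStore):
    """Stand-in used so the sketch is self-contained (not durable, of course)."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

def handle_upload(store, user_id, payload):
    # Never spool to the local file system: the instance can be re-imaged
    # from the golden image at any time. Persist immediately, then continue.
    key = "upload/%s" % user_id
    store.put(key, payload)
    return store.get(key)
```

The point is that no request handler assumes state written earlier to the local disk is still there; every read and write goes through the durable interface.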

This also impacts the extent to which you can customize the web and worker VMs. Microsoft will be allowing full administrative access to the VMs if you require it, but it is no good making extensive changes to an individual instance, since they could be reverted to the golden image at any time. The guidance is that if manual changes take more than five minutes to apply, you are better off using the VM role.

A further implication is that you cannot realistically use an Azure VM role to run Active Directory, since Active Directory does not take kindly to being reverted to an earlier state. Plank says that third parties may come up with solutions that involve persisting Active Directory data to Azure storage.

Although I’ve talked about golden images above, I’m not sure exactly how Azure implements them. However, if I have understood Plank correctly, it is conceptually accurate.

The bottom line is that the best scenario is to live with a standard Azure web or worker role, as configured by you and by Azure when you created it. The VM role is a compromise that carries a significant additional administrative burden.

<Return to section navigation list> 

Cloud Security and Governance

Patrick Butler Monterde published a current list of Azure Security Papers on 11/22/2010:

This is a list of Azure security papers published by Microsoft Global Foundation Services (GFS), the team that manages and supports the Azure data centers.

Information Security Management System for Microsoft Cloud Infrastructure (November 2010)


This paper describes the Information Security Management System program for Microsoft's Cloud Infrastructure, as well as some of the processes and benefits realized from operating this model. An overview of the key certifications and attestations Microsoft maintains to prove to cloud customers that information security is central to Microsoft cloud operations is included.

Windows Azure™ Security Overview  (August 2010)

To help customers better understand the array of security controls implemented within Windows Azure from both the customer's and Microsoft operations' perspectives, this paper provides a comprehensive look at the security available with Windows Azure. The paper provides a technical examination of the security functionality available, the people and processes that help make Windows Azure more secure, as well as a brief discussion about compliance.

Security Best Practices for Developing Windows Azure Applications (June 2010)

This white paper focuses on the security challenges and recommended approaches to design and develop more secure applications for Microsoft’s Windows Azure platform. It is intended to be a resource for technical software audiences: software designers, architects, developers and testers who design, build and deploy more secure Windows Azure solutions.

Chris Hoff (@Beaker) asked if The Future Of Audit & Compliance Is…Facebook? and answered in this 11/20/2010 post:

I’ve had an epiphany. The future is coming wherein we’ll truly have social security…

As the technology and operational models of virtualization and cloud computing mature and become operationally ubiquitous, ultimately delivering on the promise of agile, real-time service delivery via extreme levels of automation, the ugly necessities of security, audit and risk assessment will also require an evolution via automation to leverage the same.

At some point, that means the automated collection and overall assessment of posture (from a security, compliance, and risk perspective) will automagically occur (lest we continue to be the giant speed bump we’re described to be) and pop out indicatively with glee with an end result of “good” or “bad,” “pass” or “fail,” not unlike one of those in-flesh turkey thermometers that indicates doneness once a pre-set temperature is reached.

What does that have to do with Facebook?


When we’ve all been sucked into the collective hive of the InterCloud matrix, the CISO/assessor/auditor/regulator will look at the score, the resultant assertions and the supporting artifacts gathered via automation and simply click on a button.

You see, the auditor/regulator really is your friend. ;)

It’s a cruel future.  We’re all Zuck’d.


Image by Getty Images via @daylife and @Beaker.

Chris Hoff (@Beaker) posted Incomplete Thought: Compliance – The Autotune Of The Security Industry on 11/20/2010:

I don’t know if you’ve noticed, but lately the ability to carry a tune while singing is optional.

Thanks to Cher and T-Pain, the rampant use of the Autotune in the music industry has enabled pretty much anyone to record a song and make it sound like they can sing (from the Autotune of encyclopedias, Wikipedia):

Auto-Tune uses a phase vocoder to correct pitch in vocal and instrumental performances. It is used to disguise off-key inaccuracies and mistakes, and has allowed singers to perform perfectly tuned vocal tracks without the need of singing in tune. While its main purpose is to slightly bend sung pitches to the nearest true semitone (to the exact pitch of the nearest tone in traditional equal temperament), Auto-Tune can be used as an effect to distort the human voice when pitch is raised/lowered significantly.[3]

A similar “innovation” has happened in the security industry. Instead of having to actually craft and execute a well-tuned security program that manages risk in harmony with the business, we’ve simply learned to hum a little, add a couple of splashy effects and let the compliance Autotune do its thing.

It doesn’t matter that we’re off-key.  It doesn’t matter that we’re not in tune.  It doesn’t matter that we hide mistakes.

All that matters is that auditors can sing along, repeating the chorus and ensuring that we hit the Top 40.


Image by Getty Images via @daylife

<Return to section navigation list> 

Cloud Computing Events

Azure Users Group (Azug, Belgium) announced on 11/22/2010 a Sinterklaas goes cloudy! Integrating xRM meeting on 12/7/2010 at 6:00 PM at an undisclosed location:

  • Date: 12/7/2010
  • Start Time: 6:00 PM
  • End Time: 8:00 PM
  • Location TBD
  • Ticket Price: Free
  • Register for free

AZUG is organizing its third event in 2010. Come join us for an interesting session!

Integrating xRM by Yves Goeleven

Tired of writing the same type of application over and over again? Feel you don’t have enough time to spend on optimizing that complex algorithm because you have to create all of those data-management screens? Well, those days are over! Let me introduce you to the Dynamics CRM Online platform and its eXtended Relational Management (xRM) capabilities, which you can harness for your Windows Azure applications. In this session I will show you how to get the most out of both services!

Bruce Kyle noted on 11/20/2010 Northwest Gamers Come to the Cloud in Sessions on Windows Azure on 12/15/2010 at 5:30 PM  in Seattle, WA:

Knowing the competitive landscape of gaming, especially here in the Northwest, we want to invite you to a two-hour session to learn about building your next game on Windows Azure.

Hear from a game developer just like you, Sneaky Games, who has successfully leveraged the cloud and Windows Azure to build and deploy Facebook games such as Fantasy Kingdoms. The Windows Azure Platform can revolutionize the way you do business, making your company more agile, efficient and flexible while allowing you to grow your business, enhance your customers’ experience and save money, improving your bottom line.

Join us December 15 at 5:30 PM in Seattle, WA to learn more about what the Windows Azure Platform and cloud computing can do for you as a game developer, network with fellow developers, and enjoy an open bar and appetizers on us!

Register here.

K. Scott Morrison reported in his How to Fail with Web Services post of 11/19/2010:

I’ve been asked to deliver a keynote presentation at the 8th European Conference on Web Services (ECOWS) 2010, to be held in Ayia Napa, Cyprus this Dec 1-3. My topic is an exploration of the anti-patterns that often appear in Web services projects.


Here’s the abstract in full:

How to Fail with Web Services

Enterprise computing has finally woken up to the value of Web services. This technology has become a basic foundation of Service Oriented Architecture (SOA), which despite recent controversy is still very much the architectural approach favored by sectors as diverse as corporate IT, health care, and the military. But despite strong vision, excellent technology, and very good intentions, commercial success with SOA remains rare. Successful SOA starts with success in an actual implementation; for most organizations, this means a small proof-of-concept or a modest suite of Web services applications. This is an important first step, but it is here where most groups stumble.

When SOA initiatives fail on their first real implementation, it disillusions participants, erodes the confidence of stakeholders, and even the best-designed architecture will be perceived as just another failed IT initiative. For over six years, Layer 7 has been building real Web services-based architectures for government clients and some of the world’s largest corporations. In this time, we have seen repeated patterns of bad practice, pitfalls, misinterpretations, and gaps in technology. This talk is about what happens when Web services move out of the lab and into general use. By understanding this, we are better able to meet tomorrow’s challenges, when Web services move into the cloud.

Cyprus is a bit off the usual conference circuit, no?

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Carlos Ble said Goodbye Google App Engine (GAE) in this 11/21/2010 post:

Choosing GAE as the platform for our project was a mistake whose cost I estimate at about €15,000. Considering that it was my own money, it is a “bit” painful.

GAE is not exactly comparable to Amazon, Rackspace or other such hosting services, because it is a PaaS (Platform as a Service) rather than just a cluster of machines. This means that you just use the platform and gain scalability, high availability and all those things we want for our websites, without building any of that infrastructure yourself. Cool, isn’t it?

You do not pay until you get a lot of traffic to your site, so it sounds very appealing for a small startup. I read about this in a book called “The Web Success Startup Guide” by Bob Walsh.

So yes, everything looked cool and we decided to go with this platform a couple of months ago.

It supports Python and Django (without the ORM), which we love, so we tried it out. We made a spike, a kind of “hello world”, and it was easy and nice to deploy.

When we started developing features, we began running into the hard limitations imposed by the platform:

1. It requires Python 2.5, which is really old. On Ubuntu, that means you need a virtualenv or chroot with a separate environment in order to work with the SDK properly. OK, just a small frustration.

2. You can't use HTTPS with your own domain (a “naked domain”, as they call it), so secure connections have to go through your appspot.com subdomain instead. This just sucks.

3. No request can take more than 30 seconds to run; otherwise it is killed. Oh my god, this has been a pain in the ass the whole time. When we were uploading data to the database (called the datastore, a NoSQL engine), the upload was cut off after 30 seconds, so we had to split the files and do all kinds of difficult things to manage the situation. Background tasks (cron) have to be very, very carefully engineered too, because the same rule applies. Many website-administration operations need more than 30 seconds. Can you imagine?

4. Every GET or POST from the server to another site is aborted if it has not finished within 5 seconds. You can configure it to wait for 10 seconds at most. This often makes it impossible to work with Twitter and Facebook directly, so you need intermediate servers. Again, this doubles the time you need to accomplish what seemed to be a simple task.

5. You can't use Python libraries that are built on C, just libraries written in pure Python. Forget about those great libraries you wanted to use.

6. There is no “LIKE” operator for queries. There is a hack that gives you a kind of “starts with” operator, so forget about text search in the database. We had to work around this by normalizing and filtering data in code, which took us four times the estimated time for many features.

7. You can't join two tables. Forget about “SELECT table1, table2...”: you just can't. Now think about the complexity this introduces in your source code in order to make queries.

8. The database is really slow. You have to read up on how to split entities across tables using inheritance so that you can search in one table, get the key, and then fetch its parent, in order to avoid deserialization overhead and all kinds of weird things.

9. The database does not behave the same on the local development server as on Google's servers, so you need manual or Selenium tests that run against the cloud, because your unit and integration tests won't fail locally. Again, this means more money.

10. If you need to query a table on several fields, you need to create indexes. If those fields are big, it is easy to hit the “Too many indexes” runtime exception. That will not happen in a “hello world” application, but enterprise applications that query on several parameters run into this trouble easily. If you use a StringListProperty, you might not hit the problem until one user populates that field with a long list. A really nasty thing: we had to re-engineer our search engine once in production because we didn't know about this until it happened.

11. No query can retrieve more than 1,000 records. If there are more, you have to deal with offsets in your code to paginate the results. Taking into account that you sometimes also have to filter results in code because of the limitations of GQL (the query language), this means yet more complexity for your code. More and more problems, yay!

12. You can't access the file system. Forget about saving uploads to the file system and all those typical things. You have to read and learn about the inconvenient blobs. Again, more development time wasted.

13. The datastore and memcache can fail sometimes. You have to defend your app against database failures. What? Yep: make sure you know how to save user data when this happens, at any point in the request process. But remember, you can't use the file system. Isn't it fun?

14. Memcache values have a maximum size of 1 megabyte. Wanted to cache everything? You were just dreaming, mate: 1 megabyte. You need your own software architecture to deal with caching.
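
The “starts with” hack mentioned in point 6 relies on the datastore’s range filters: you ask for values greater than or equal to the prefix and strictly less than the prefix followed by a very high code point. A minimal, self-contained Python sketch of that trick (the in-memory sorted list stands in for a datastore index; this is an illustration, not GAE code):

```python
# Emulates the GAE "starts with" trick: there is no LIKE operator, but range
# filters exist, so a prefix search becomes two inequality filters, the same
# shape as: WHERE name >= :prefix AND name < :prefix + u'\ufffd'.

def prefix_query(sorted_names, prefix):
    """Return names starting with prefix using only range comparisons."""
    lo = prefix
    hi = prefix + u"\ufffd"  # U+FFFD sorts after any ordinary character
    return [n for n in sorted_names if lo <= n < hi]

names = sorted(["alice", "albert", "bob", "alan", "carol"])
print(prefix_query(names, "al"))  # ['alan', 'albert', 'alice']
```

On the real datastore, the two comparisons become inequality filters on an indexed property, which is why this is the only flavor of text search you get for free.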

There are some more things that make the development hard with this platform but I am starting to forget about them. Which is great.

So why did we stay with this platform? Because we discovered the problems as we went; we didn't know about many of them up front. And still, once you overcome all the limitations with your complex code, you are supposed to gain scalability for millions of users. After all, you are hosted by Google. This is the last big lie.

Since the update they rolled out in September 2010, we started facing random 500 errors that on some days took the site down 60% of the time: 6 times out of 10, users visiting the site couldn't register or use it. Many people complained about the same thing in the forums, while Google engineers gave us no answer. After a few days they would send an email saying that they were aware of the problem and were working on it, but no solutions were given. In November, an engineer answered in a forum that our website had to load into memory in less than 1 second, since pretty much every request was loading a completely new instance of the site. 1 second? Totally crazy. We have 15,000 lines of code and we have to load Django. How can we make it in under 1 second?

They encourage people to use their lightweight webapp framework. Webapp is great for a “hello world”, but it is bullshit for real applications.

So once we had suffered this poor support from Google and understood that they were effectively recommending GAE just for “hello world” applications, we finally decided to give up on it.

This is neither serious nor professional of Google. We are more than disappointed with GAE.

For us, GAE has been a failure, like Wave or Buzz, but this time we have paid for it with our own money. I was too stubborn just because this great company was behind the platform, but I've learned an important lesson: good companies make mistakes too. I didn't do enough spikes before developing actual features. I should have performed more proofs of concept before investing so much money. I was blind.

After this, we wanted to control the system ourselves, so we have moved to classical hosting based on PostgreSQL, Nginx, Apache2 and all that stuff. We rely on great system administrators who give us great support and make us feel confident. We are really happy to be able to make SQL queries and dump databases and use all those great tools that we were missing on the damned GAE.

Because we are software craftsmen, we built all our code using TDD and the rest of the XP practices. Thanks to this, just a pair of developers migrated 15,000 lines of code in one week from one platform to another, which was a deep change: from NoSQL back to SQL, from one database-access API to another. If we hadn't had these great test batteries, the migration would have taken months.
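
A sketch of why the test battery made the migration cheap: when the suite is written against a storage-agnostic interface, the same checks run against any repository implementation, so swapping the datastore-backed code for a SQL-backed version only has to make the existing suite pass again. Both repositories below are illustrative in-memory stand-ins, not the author’s actual code.

```python
# Backend-agnostic test battery: the suite knows only the repository
# interface (save/find), never the storage technology behind it.

class DatastoreUserRepo:
    """Stand-in for the old GAE datastore-backed implementation."""
    def __init__(self):
        self._entities = {}
    def save(self, user_id, name):
        self._entities[user_id] = name
    def find(self, user_id):
        return self._entities.get(user_id)

class SqlUserRepo:
    """Stand-in for the new PostgreSQL-backed implementation."""
    def __init__(self):
        self._rows = []
    def save(self, user_id, name):
        self._rows = [r for r in self._rows if r[0] != user_id]
        self._rows.append((user_id, name))
    def find(self, user_id):
        for uid, name in self._rows:
            if uid == user_id:
                return name
        return None

def run_repo_suite(repo_factory):
    """Runs the same behavioral checks against any implementation."""
    repo = repo_factory()
    repo.save(1, "ada")
    assert repo.find(1) == "ada"            # round trip
    repo.save(1, "grace")
    assert repo.find(1) == "grace"          # overwrite, not duplicate
    assert repo_factory().find(99) is None  # missing key
    return True

print(run_repo_suite(DatastoreUserRepo), run_repo_suite(SqlUserRepo))  # True True
```

The same suite validates both back ends; that is what turns a storage migration into “make the existing tests green again” rather than a months-long rewrite.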

Migrating the data took us another week, because of encoding problems and data transformations.

You might agree with me on this post or not, but there is one certain reality: developing on GAE introduced so much design complexity that working around it pushed us 5 months behind schedule. By the time we had developed tools and workarounds for all the problems we found in GAE, and were finally starting to make fast progress on features, we found the cloud was totally unstable and dodgy.

Carlos Ble is a Spanish Software Developer currently living in Tenerife, Spain. [Nice place!]

James Hamilton reviewed Very Low-Cost, Low-Power Servers in this 11/20/2010 post:

I’m interested in low-cost, low-power servers and have been watching the emerging market for these systems since 2008 when I wrote CEMS: Low-Cost, Low-Power Servers for Internet Scale Services (paper, talk). ZT Systems just announced the R1081e, a new ARM-based server with the following specs:

  • STMicroelectronics SPEAr 1310 with dual ARM® Cortex™-A9 cores
  • 1 GB of 1333MHz DDR3 ECC memory embedded
  • 1 GB of NAND Flash
  • Ethernet connectivity
  • USB
  • SATA 3.0

It’s a shared infrastructure design where each 1RU module has 8 of the above servers. Each module includes:

  • 8 “System on Modules“ (SOMs)
  • ZT-designed backplane for power and connectivity
  • One 80GB SSD per SOM
  • IPMI system management
  • Two Realtek 4+1 1Gb Ethernet switches on board with external uplinks
  • Standard 1U rack mount form factor
  • Ubuntu Server OS
  • 250W 80+ Bronze Power Supply

Each module is under 80W, so a rack with 40 compute modules would draw only 3.2 kW for 320 low-power servers, a total of 640 cores per rack. Weaknesses of this approach are: only 2 cores per server, only 1 GB per core, and the cores appear to run at only 600 MHz. Four-core ARM parts and larger physical-memory support are both under development.
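
Assuming 40 modules per rack (a figure used for illustration), the density arithmetic works out as follows; note that 320 dual-core servers give 640 cores:

```python
# Quick sanity check of the rack-level arithmetic, using the post's figures.
watts_per_module = 80       # each 1RU module is under 80 W
servers_per_module = 8      # eight SOMs per module
cores_per_server = 2        # dual Cortex-A9 cores per SOM
modules_per_rack = 40       # illustrative rack build-out

rack_watts = modules_per_rack * watts_per_module      # 3200 W, i.e. 3.2 kW
rack_servers = modules_per_rack * servers_per_module  # 320 servers
rack_cores = rack_servers * cores_per_server          # 640 cores per rack
print(rack_watts, rack_servers, rack_cores)  # 3200 320 640
```
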

Competitors include SeaMicro, with an Atom-based design (SeaMicro Releases Innovative Intel Atom Server), and the recently renamed Calxeda (previously Smooth-Stone), which has an ARM-based product under development.

Other notes on low-cost, low-powered servers:

From Datacenter Knowledge: New ARM-Based Server from ZT systems

<Return to section navigation list>