Friday, January 21, 2011

Windows Azure and Cloud Computing Posts for 1/21/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles, in the sections below.


Azure Blob, Drive, Table and Queue Services

David Anson (@DavidAns) posted "Might as well face it, you're addicted to blob..." [BlobStoreApi update adds container management, fragmented response handling, other improvements, and enhanced Amazon S3 support by Delay.Web.Helpers!] on 1/20/2011:

As part of my previous post announcing the Delay.Web.Helpers assembly for ASP.NET and introducing the AmazonS3Storage class to enable easy Amazon Simple Storage Service (S3) blob access from MVC/Razor/WebMatrix web pages, I made some changes to the BlobStoreApi.cs file it built on top of.

BlobStoreApi.cs was originally released as part of my BlobStore sample for Silverlight and BlobStoreOnWindowsPhone sample for Windows Phone 7; I'd made a note to update those two samples with the latest version of the code, but intended to blog about a few other things first...

Fragmented response handling

BlobStoreOnWindowsPhone sample

That's when I was contacted by Steve Marx of the Azure team to see if I could help out Erik Petersen of the Windows Phone team who was suddenly seeing a problem when using BlobStoreApi: calls to GetBlobInfos were returning only some of the blobs instead of the complete list. Fortunately, Steve knew what was wrong without even looking - he'd blogged about the underlying cause over a year ago! Although Windows Azure can return up to 5,000 blobs in response to a query, it may decide (at any time) to return fewer (possibly as few as 0!) and instead include a token for getting the rest of the results from a subsequent call. This behavior is deterministic according to where the data is stored in the Azure cloud, but from the point of view of a developer it's probably safest to assume it's random. :)

So something had changed for Erik's container and now he was getting partial results because my BlobStoreApi code didn't know what to do with the "fragmented response" it had started getting. Although I'd like to blame the documentation for not making it clear that even very small responses could be broken up, I really should have supported fragmented responses anyway so users would be able to work with more than 5,000 blobs. Mea culpa...

I definitely wanted to add BlobStoreApi support for fragmented Azure responses, but what about S3? Did my code have the same problem there? According to the documentation: yes! Although I don't see mention of the same "random" fragmentation behavior Azure has, S3 enumerations are limited to 1,000 blobs at a time, so that code needs the same update in order to support customer scenarios with lots of blobs. Fortunately, the mechanism both services use to implement fragmentation is quite similar and I was able to add most of the new code to the common RestBlobStoreClient base class and then refine it with service-specific tweaks in the sub-classes AzureBlobStoreClient and S3BlobStoreClient.
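The shared pattern is simple: keep issuing requests, passing along whatever continuation token the service returned, until no token comes back. A minimal sketch of that loop (method and type names here are my own illustration, not the actual BlobStoreApi code):

```csharp
// Hypothetical sketch of fragmented-response handling: keep requesting
// segments until the service stops returning a continuation marker.
// Azure reports a "NextMarker"; S3 reports IsTruncated plus a marker.
private void GetBlobInfosSegment(string marker, List<BlobInfo> results,
    Action<List<BlobInfo>> completed, Action<Exception> failed)
{
    ListBlobsSegment(marker,
        segment =>
        {
            results.AddRange(segment.Blobs);
            if (string.IsNullOrEmpty(segment.NextMarker))
            {
                // No more fragments; hand the complete list to the caller.
                completed(results);
            }
            else
            {
                // More data remains; follow up with the continuation token.
                GetBlobInfosSegment(segment.NextMarker, results, completed, failed);
            }
        },
        failed);
}
```

The service-specific differences (token header/element names, page-size limits) are the parts that live in the AzureBlobStoreClient and S3BlobStoreClient sub-classes.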

Fortunately, the extra work to support fragmented responses is all handled internally and is therefore invisible to the developer. As such, it requires no changes to existing applications beyond updating to the new BlobStoreApi.cs file and recompiling!

Container support

While dealing with fragmented responses was fun and a definite improvement, it's not the kind of thing most people appreciate. People don't usually get excited over fixes, but they will get weak-kneed for new features - so I decided to add container/bucket management, too! Specifically, it's now possible to use BlobStoreApi to list, create, and delete Azure containers and S3 buckets using the following asynchronous methods:

public abstract void GetContainers(Action<IEnumerable<string>> getContainersCompleted, Action<Exception> getContainersFailed);
public abstract void CreateContainer(string containerName, Action createContainerCompleted, Action<Exception> createContainerFailed);
public abstract void DeleteContainer(string containerName, Action deleteContainerCompleted, Action<Exception> deleteContainerFailed);

To be clear, each instance of AzureBlobStoreClient and S3BlobStoreClient is still associated with a specific container (the one that was passed to its constructor) and its blob manipulation methods act only on that container - but now it's able to manipulate other containers, too.
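Calling the new methods looks something like the following; the constructor arguments shown are illustrative (and, per the aside below, a null container name is a reasonable choice when an instance will only do container work):

```csharp
// Illustrative use of the new asynchronous container-management methods.
var client = new AzureBlobStoreClient(accountName, accountKey, null);

client.GetContainers(
    containerNames =>
    {
        foreach (var name in containerNames)
            Console.WriteLine(name); // list every container in the account
    },
    ex => Console.WriteLine("GetContainers failed: " + ex.Message));

client.CreateContainer("samples",
    () => Console.WriteLine("Created."),
    ex => Console.WriteLine("CreateContainer failed: " + ex.Message));

client.DeleteContainer("samples",
    () => Console.WriteLine("Deleted."),
    ex => Console.WriteLine("DeleteContainer failed: " + ex.Message));
```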

Aside: In an alternate universe, I might have made the new container methods static because they don't have a notion of a "current" container the same way the blob methods do. However, what I want more than that is to be able to leverage the existing infrastructure I already have in place for creating requests, authorizing them, handling responses, etc. - all of which take advantage of instance methods on RestBlobStoreClient that AzureBlobStoreClient and S3BlobStoreClient override as necessary. In this world, it's not possible to have static methods that are also virtual, so I sacrificed the desire for the former in order to achieve the efficiencies of the latter. Maybe one day I'll refactor things so it's possible to have the best of both worlds - but for now I recommend specifying a null container name when creating instances of AzureBlobStoreClient and S3BlobStoreClient when you want to ensure they're used only for container-specific operations (and not for blobs, too).

Other improvements

Delay.Web.Helpers AmazonS3Storage sample

With containers getting a fair amount of love this release, it seemed appropriate to add a containerName parameter to the AzureBlobStoreClient constructor so it would look the same as S3BlobStoreClient's constructor. This minor breaking change means it's now necessary to specify a container name when creating instances of AzureBlobStoreClient. If you want to preserve the behavior of existing code (and don't want to have to remember that "$root" is the magic root container name for Azure, and was the previous default), you can pass AzureBlobStoreClient.RootContainerName for the container name.
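In code, preserving the old default-container behavior after the breaking change looks like this (the credential arguments are illustrative):

```csharp
// Keep the pre-update behavior: target the "$root" container explicitly
// via the RootContainerName constant rather than remembering "$root".
var client = new AzureBlobStoreClient(
    accountName, accountKey, AzureBlobStoreClient.RootContainerName);
```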

In the process of doing some testing of the BlobStoreApi code, I realized I hadn't previously exposed a way for an application to find out about asynchronous method failures. While it's probably true that part of the reason someone uses Azure or S3 is so they don't need to worry about failures, things can always go wrong and sometimes you really need to know when they do. So in addition to each asynchronous method taking a parameter for the Action to run upon successful completion, they now also take an Action<Exception> parameter which they'll call (instead) when a failure occurs. This new parameter is necessary, so please provide a meaningful handler instead of passing null and assuming things will always succeed. :)

I've also added a set of new #defines to allow BlobStoreApi users to easily remove unwanted functionality. Because they're either self-explanatory or simple, I'm not going to go into more detail here (please have a look at the code if you're interested): BLOBSTOREAPI_PUBLIC, BLOBSTOREAPI_NO_URIS, BLOBSTOREAPI_NO_AZUREBLOBSTORECLIENT, BLOBSTOREAPI_NO_S3BLOBSTORECLIENT, and BLOBSTOREAPI_NO_ISOLATEDSTORAGEBLOBSTORECLIENT.

Aside: Actually, BLOBSTOREAPI_PUBLIC deserves a brief note: with this release of the BlobStoreApi, the classes it implements are no longer public by default (they're expected to be consumed, not exposed). This represents a minor breaking change, but the trivial fix is to #define BLOBSTOREAPI_PUBLIC which restores things to being public as they were before. That said, it might be worth taking this opportunity to make the corresponding changes (if any) to your application - but that's entirely up to you and your schedule.
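For example, restoring public visibility is a one-line change (a sketch; the symbol can equally be set in the project's conditional compilation symbols instead of the source file):

```csharp
// At the very top of BlobStoreApi.cs: restore the classes' former
// public visibility, which is now internal by default.
#define BLOBSTOREAPI_PUBLIC
```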

The last thing worth mentioning is that I've tweaked BlobStoreApi to handle mixed-case blob/container names properly. Formerly, passing in upper-case characters for a blob name could result in failures for both Azure and S3; with this change in place, that scenario should work correctly.

BlobStore with silly sample data

New BlobStore and BlobStoreOnWindowsPhone samples

I've updated the combined BlobStore/BlobStoreOnWindowsPhone download to include the latest version of BlobStoreApi with all the changes outlined above. Although there are no new features in either sample, they both benefit from fragmented container support and demonstrate how to use the updated API methods properly.

[Click here to download the complete BlobStore source code and sample applications for both Silverlight 4 and Windows Phone 7.]

New Delay.Web.Helpers release and updated sample

The Delay.Web.Helpers download includes the latest version of BlobStoreApi (so it handles fragmented containers) and adds new support for container management in the form of the following methods on the AmazonS3Storage class:

/// <summary>
/// Lists the blob containers for an account.
/// </summary>
/// <returns>List of container names.</returns>
public static IList<string> ListBlobContainers() { ... }

/// <summary>
/// Creates a blob container.
/// </summary>
/// <param name="containerName">Container name.</param>
public static void CreateBlobContainer(string containerName) { ... }

/// <summary>
/// Deletes an empty blob container.
/// </summary>
/// <param name="containerName">Container name.</param>
public static void DeleteBlobContainer(string containerName) { ... }

As I discuss in the introductory post for Delay.Web.Helpers, I'm deliberately matching the existing WindowsAzureStorage API with my implementation of AmazonS3Storage, so these new methods look and function exactly the same for both web services. As part of this release, I've also updated the Delay.Web.Helpers sample page to show off the new container support as well as added some more automated tests to verify it.
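Usage is correspondingly simple; this sketch assumes the S3 credentials have already been registered the way the introductory post describes, and the bucket name is illustrative:

```csharp
// Illustrative calls to the new static container-management methods.
foreach (var name in AmazonS3Storage.ListBlobContainers())
{
    Console.WriteLine(name); // one line per bucket in the account
}

AmazonS3Storage.CreateBlobContainer("my-sample-bucket");
AmazonS3Storage.DeleteBlobContainer("my-sample-bucket"); // must be empty
```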

[Click here to download the Delay.Web.Helpers assembly, its complete source code, all automated tests, and the sample web site.]


While I hadn't thought to get back to blobs quite so soon, finding out about the fragmented response issue kind of forced my hand. But all's well that ends well - I'm glad to have added container supported to Delay.Web.Helpers because that means it now has complete blob-related feature parity with the WindowsAzureStorage web helper and that seems like a Good Thing. What's more, I got confirmation that the Windows Azure Managed Library does not support Silverlight, so my BlobStoreApi is serving duty as an unofficial substitute for now. [Which is cool! And a little scary... :) ]

Whether you're using Silverlight on the desktop, writing a Windows Phone 7 application, or developing a web site with ASP.NET using MVC, Razor, or WebMatrix, I hope today's update helps your application offer just a little more goodness to your users!

Delay is the pseudonym of David Anson, a Microsoft developer who works with the Silverlight and WPF platforms. He obviously likes long blog titles.

Patriek Van Dorp (@pvandorp) described Building a State Machine using Azure Queues (1) in a 1/4/2011 post (missed when published):

Last night I attended a webcast on Queue Storage in Windows Azure. They gave an example of how Queue Storage can be used to create a state machine. In this post I’d like to show you how it’s done. First of all, let’s paint a picture of a scenario in which we transition an article’s state over time. When an author submits an article in MS Word or RTF format, the article’s state is initialized to ‘New’. Next, a producer reviews the article and either accepts it or denies it. Once the producer has accepted the article, a background process converts the MS Word or RTF document to a PDF document and publishes it online. Figure 1 shows the state machine diagram for this scenario.

State Machine

Figure 1: State Machine Diagram

Of course, we could use WF to create a state machine for this, but to my knowledge this wouldn’t be very scalable. I am, however, not a WF specialist, so I might get some comments on that remark (which is a good thing, because then I will learn more about WF in the process). It would require one or more Windows Azure Compute instances to host a Workflow Service, though, and it would require SQL Azure to persist the workflow state, which in turn costs money. Also, updating the WF state machine by adding or removing states is cumbersome (maybe this has improved with WF 4.0, I don’t know).

Using Queue Storage

Creating a state machine using Azure queues would be very easy, scalable, and maintainable. All you have to do is create a separate queue for each state. In this scenario an author would use a website, for instance an ASP.NET WebRole in Windows Azure, to upload an MS Word document (Figure 2.1). The document is stored in Blob Storage (not depicted in Figure 2). A CloudQueueMessage is placed on the ‘New’ queue, specifying the name of the document in Blob Storage and perhaps some metadata that could be used later.

State Machine Queues

Figure 2: The Flow

Next, a producer uses the same ASP.NET WebRole but, due to a different identity, is able to view a list of new documents by getting messages off the ‘New’ queue (Figure 2.2). The producer denies (Figure 2.3) a document by putting a CloudQueueMessage in the ‘Denied’ queue. On the other hand, the producer accepts (Figure 2.4) a document by putting a CloudQueueMessage in the ‘Accepted’ queue. A separate Worker Role polls the ‘Accepted’ queue (Figure 2.5) to see if there are any MS Word documents that need to be transformed to PDF. If the Worker Role finds a CloudQueueMessage in the ‘Accepted’ queue, it will transform the MS Word document that’s stored in Blob Storage to PDF and save it to Blob Storage again.
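In StorageClient terms, one state transition plus the worker's poll might be sketched like this (queue and blob names are illustrative, and the development storage account stands in for real credentials):

```csharp
// Sketch: move an article to the 'Accepted' state by enqueueing a message,
// then have the worker role poll that queue and process it.
var account = CloudStorageAccount.DevelopmentStorageAccount;
var queueClient = account.CreateCloudQueueClient();

var accepted = queueClient.GetQueueReference("accepted");
accepted.CreateIfNotExist();

// Web role: the producer accepts the article.
accepted.AddMessage(new CloudQueueMessage("article.docx")); // blob name as payload

// Worker role: poll for accepted articles.
var msg = accepted.GetMessage();
if (msg != null)
{
    // ... convert the Word document in Blob Storage to PDF here ...
    accepted.DeleteMessage(msg); // remove it once processing succeeds
}
```

Deleting the message only after the work completes means a crashed worker simply lets the message reappear on the queue and be retried.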

This is basically the flow that this example scenario follows. In part 2, I will go into the details and touch on some code.

Patriek is a Senior Microsoft Technology Specialist at Sogeti in the Netherlands.

D. R. McGhee posted Blog Post: My simple approach to pagination in Windows Azure Table Storage on 11/29/2010 (missed when posted):

As part of my simple Azure Assets application I first focused on pagination. Although my test data was relatively small I knew that there were a number of categories for the data and many types which, if not paginated, could look daunting to the user.

Unfortunately, and probably due to the statelessness of Windows Azure, the typical Take and Skip methods threw errors. This meant I needed to understand how I could implement pagination myself. Since my application was a web page, I read up on how ASP.NET can efficiently page through large amounts of data. By understanding the custom paging process I was able to link my ASP.NET ListView to a data source and bind it. I wanted to use as little code as possible. By using ObjectDataSource I could mirror the StartRowIndexParameterName and MaximumRowsParameterName.
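The wiring between the control and the data class can be sketched programmatically like this; the type and method names follow the data classes shown below, but the exact configuration is my own illustration:

```csharp
// Hypothetical equivalent of the page-markup wiring: point the
// ObjectDataSource at the data class and name the paging parameters
// ASP.NET should pass into the select method.
var ods = new ObjectDataSource("AssetDataSource", "GetAssets");
ods.EnablePaging = true;
ods.StartRowIndexParameterName = "StartRowIndex"; // mirrors GetAssets parameter
ods.MaximumRowsParameterName = "MaximumRows";     // mirrors GetAssets parameter
ods.SelectParameters.Add("PartitionKey", string.Empty);
```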

You may also find that you want to add other parameters to the query, like the partition key from your Windows Azure Storage. This is simply achieved by adding other select parameters and changing the default value:

ObjectDataSource1.SelectParameters["PartitionKey"].DefaultValue = value;

By adding a timer I could update the page regularly with any changes:

protected void Timer1_Tick(object sender, EventArgs e)

I then moved to my data classes which I modeled from the Windows Azure Platform Training Course simple Guest Book example in the Building Your First Windows Azure Application.

In this simple example you create a data project which contains a Source, a Context and a Data Entity.

public class AssetDataSource

** this is the service class for your calling web page

e.g. public IEnumerable<Asset> GetAssets(string PartitionKey, int MaximumRows, int StartRowIndex)

public AssetDataContext(string baseAddress, Microsoft.WindowsAzure.StorageCredentials credentials)
            : base(baseAddress, credentials)

** this is where the query occurs against the storage table

public class Asset : Microsoft.WindowsAzure.StorageClient.TableServiceEntity

** this is a simple data class that inherits behaviours and properties needed for Windows Azure Storage
After reading Scott Densmore on Paging with Windows Azure Table Storage I understood that in recent SDKs the CloudTableQuery<TElement> handles dealing with continuation tokens. Continuation tokens, I think, are a bit of a shim: a storable token the developer can use to record where the last page ended.

Unfortunately the documentation is a bit scant on how this all works, but thanks to good old patterns & practices (and as highlighted by Eugenio Pace) Scott's efforts have been documented in a sample application, Implementing Paging with Windows Azure Table Storage. At its core is a small ContinuationStack class you need to implement in your web page. As you move through the data returned from the DataContext query, a token returned from the stack is stored in session state.

This was all fine for me except I was using ObjectDataSource, which expects a synchronous method. For this reason I moved the ContinuationStack to my data source object, which provides the GetAssets method for the ObjectDataSource to bind to.

// start the async call
IAsyncResult asyncResult = BeginAsyncOperation(new AsyncCallback(this.EndAsyncOperation));

// because ObjectDataSource expects a synchronous result, block with WaitOne
// until EndAsyncOperation has captured the result of the operation
asyncResult.AsyncWaitHandle.WaitOne();

When the results are gleaned the following small method is implemented

private void EndAsyncOperation(IAsyncResult result)
{
    var cloudTableQuery = result.AsyncState as CloudTableQuery<Asset>;
    ResultSegment<Asset> resultSegment = cloudTableQuery.EndExecuteSegmented(result);
    this.assets = resultSegment.Results.ToList();
}
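For context, the matching Begin side might look roughly like this; the query construction, field names, and the place the continuation token is stored are my own illustration rather than the sample's exact code:

```csharp
// Illustrative Begin side: start a segmented table query, resuming from
// the continuation token saved by the previous page (null on page one).
private IAsyncResult BeginAsyncOperation(AsyncCallback callback)
{
    var query = new CloudTableQuery<Asset>(
        (DataServiceQuery<Asset>)this.context.Assets
            .Where(a => a.PartitionKey == this.partitionKey));
    return query.BeginExecuteSegmented(this.continuationToken, callback, query);
}
```

The query object is passed as the async state so EndAsyncOperation can retrieve it and call EndExecuteSegmented, as shown above.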

If you really want to get into more detail about this I’d recommend these two articles for further reading:

I’ll be updating this post later with my move to OData, which simplifies things a little.

For a demonstration of Windows Azure paging, downloadable sample code for paging tables, and links to a seven-part Windows Azure paging tutorial, see my OakLeaf Systems Azure Table Services Sample Project - Paging and Batch Updates Demo and Azure Storage Services Test Harness: Table Services 1 – Introduction and Overview of 11/17/2010.

<Return to section navigation list> 

SQL Azure Database and Reporting

LarenC described Clarifying Sync Framework and SQL Server Compact Compatibility on 12/16/2010:

Sync Framework and SQL Server Compact install with several versions of Visual Studio and SQL Server, and each version of Sync Framework is compatible with different versions of SQL Server Compact. This can be pretty confusing!

This article on TechNet Wiki clarifies which versions of Sync Framework are installed with Visual Studio and SQL Server, lays out a matrix that shows which versions of SQL Server Compact are compatible with each version of Sync Framework, and walks you through the process of upgrading a SQL Server Compact 3.5 SP1 database to SQL Server Compact 3.5 SP2.

This and the following five posts describe the Sync Framework 4.0 CTP. All but the first of these posts were missing from earlier OakLeaf posts because of a problem with the Sync Framework Team’s Atom feed.

LarenC reported Sync Framework Tips and Troubleshooting on TechNet Wiki on 12/1/2010:

I've started an article on the TechNet Wiki that collects usage tips and troubleshooting for Sync Framework. This can be the first place you look when you have a nagging question that isn't answered anywhere else.

Some tips already collected are recommendations about how to speed up initialization of multiple SQL Server Compact databases, how to find the list of scopes currently provisioned to a database, and how to improve the performance of large replicas for file synchronization.

It's a wiki, so you can add to it! If the answer you need isn't in the article, or if you have more to say about a tip that's already in the article, post what you know. By sharing our knowledge, we can all make Sync Framework even more useful for everyone.

Check out the article, and share your tips!

Sreedhar Pelluru announced Sync Framework 4.0 October 2010 CTP Refreshed on 11/16/2010:

We just refreshed the Sync Framework 4.0 CTP bits to add the following two features:

  • Tooling Wizard UI: This adds a UI wizard on top of the command line based SyncSvcUtil utility. This wizard allows you to select tables, columns, and even rows to define a sync scope, provision/de-provision a database and generate server-side/client-side code based on the data schema that you have. This minimizes the amount of code that you have to write yourself to build sync services or offline applications. This Tooling Wizard can be found at: C:\Program Files (x86)\Microsoft SDKs\Microsoft Sync Framework\4.0\bin\SyncSvcUtilHelper.exe.
  • iPhone Sample: This sample shows you how to develop an offline application on iPhone/iPad with SQLite for a particular remote schema by consuming the protocol directly. The iPhone sample can be found at: C:\Program Files (x86)\Microsoft SDKs\Microsoft Sync Framework\4.0\Samples\iPhoneSample.

More information about this release:

  • Download: The refreshed bits can be found at the same place as the public CTP released at PDC. You can find the download instructions here.

  • Documentation: We also refreshed the 4.0 CTP documentation online at MSDN here.

  • PDC Session: To learn more about this release 4.0 in general, take a look at our PDC session recording "Building Offline Applications using Sync Framework and SQL Azure".

(In this release, we decided to bump the version of all binaries to 4.0, skipping version 3.0 to keep the version number consistent across all components.)

Sreedhar Pelluru reported the availability of Updated Sync Framework 4.0 CTP Documentation on 11/16/2010:

The Sync Framework 4.0 CTP documentation on the MSDN library has been updated. It now includes the documentation for the SyncSvcUtilHelper UI tool, which is built on top of the SyncSvcUtil command-line tool. This UI tool exposes all the functionality supported by the command-line tool. In addition, it lets you create or edit a configuration file that you can use later to provision/deprovision SQL Server/SQL Azure databases and to generate server/client code.

Liam Cavanagh (@liamca) posted Announcing SQL Azure Data Sync CTP2 on 10/28/2010:

Earlier this week I mentioned that we will have one additional sync session at PDC that would open up after the keynote. Now that the keynote is complete, I am really excited to point you to this session “Introduction to SQL Azure Data Sync” and tell you a little more about what was announced today.

In the keynote today, Bob Muglia announced an update to SQL Azure Data Sync (called CTP2) to enable synchronization of entire databases or specific tables between on-premises SQL Server and SQL Azure, giving you greater flexibility in building solutions that span on-premises and the cloud. 

As many of you know, using SQL Azure Data Sync CTP1, you can now synchronize SQL Azure database across datacenters.  This new capability will allow you to not only extend data from your on-premises SQL Servers to the cloud, but also enable you to easily extend data to SQL Servers sitting in remote offices or retail stores.  All with NO-CODING required!

Later in the year, we will start on-boarding customers to this updated CTP2 service. If you are interested in getting access to SQL Azure Data Sync CTP2, please go here to register. If you would like to learn more and see some demonstrations of how this will work, and some of the new features we have added to SQL Azure Data Sync, please take a look at my PDC session recording. Here is the direct video link and abstract.

Video: Introduction to SQL Azure Data Sync (Liam Cavanagh) – 27 min

In this session we will show you how SQL Azure Data Sync enables on-premises SQL Server data to be easily shared with SQL Azure allowing you to extend your on-premises data to begin creating new cloud-based applications. Using SQL Azure Data sync’s bi-directional data synchronization support, changes made either on SQL Server or SQL Azure are automatically synchronized back and forth. Next we show you how SQL Azure Data Sync provides symmetry between SQL Azure databases to allow you to easily geo-distribute that data to one or more SQL Azure data centers around the world. Now, no matter where you make changes to your data, it will be seamlessly synchronized to all of your databases whether that be on-premises or in any of the SQL Azure data centers.

Sreedhar Pelluru posted Announcing Sync Framework 4.0 October 2010 CTP on 10/27/2010:

We are extremely happy to announce the availability of Sync Framework 4.0 October 2010 CTP. To learn more about this release please take a look at our PDC session recording "Building Offline Applications using Sync Framework and SQL Azure".

The Microsoft Sync Framework 4.0 October 2010 CTP is built on top of Sync Framework 2.1 and extends the Sync Framework capability of building offline applications to any client platform that is capable of caching data. The release enables synchronization of data stored in SQL Server/SQL Azure over an open standard network format, with a remote synchronization service handling all sync-specific logic. Moving all synchronization logic off the client enables clients that do not have the Sync Framework runtime installed to cache data and participate in a synchronization topology. Earlier versions of Sync Framework required Windows systems with the Sync Framework runtime installed as clients. This CTP allows other Microsoft platforms such as Silverlight, Windows Phone 7, and Windows Mobile, as well as non-Microsoft platforms such as HTML5, iPhone, Android, and other devices with no Sync Framework runtime installed, to act as clients.

The major new features included in this CTP are:

  • Protocol: In this release, we apply the principles of OData to the problem of data-sync and add synchronization semantics to the protocol format. Clients and the service use the protocol to perform synchronization, where a full synchronization is performed the first time subsequently followed by smaller incremental synchronization. The protocol is designed with the goal to make it easy to implement the client-side of the protocol and all the synchronization logic will be running on the service side. It is intended to be used to enable synchronization for a variety of sources including, but not limited to, relational databases and file systems.
  • Server and Client Components: The release includes server components that make it easy for you to build a synchronization Web service that exposes data from SQL Server or SQL Azure via the Sync protocol. The CTP release includes client components that make it easy for you to build offline applications on Silverlight for desktop and Windows Phone 7 platforms.
  • SyncSvcUtil.exe utility: The release includes a command-line tool, SyncSvcUtil.exe, which helps you with defining and developing sync services and clients.
  • Business Logic Extensibility on Server: The release allows you to plug in to the synchronization runtime on the service and enable custom business logic configuration using SyncInterceptors.
  • Diagnostic Dashboard: The release supports a diagnostic dashboard to diagnose the health of the deployed sync services.
  • Samples and Tutorials: The CTP ships with samples that include a sample service exposing a ToDo list data model as a synchronization service. It also ships the Silverlight, Windows Phone 7, Windows Mobile 6.5 and HTML5 clients that synchronize with the service to show you how to use the components and the protocol. The documentation for CTP contains tutorials, which walk you through creating and consuming a sync service that you can deploy to an on-premise Windows Server or Windows Azure.

The following features will be available in few weeks after PDC10 as a refresh to this release. We will keep you updated on this release on Sync Framework forums and Sync Framework Blog. (Update on 11/16, we just refreshed the bits with the two new features!)

  • Tooling Wizard UI: This adds a UI wizard on top of the command line based SyncSvcUtil utility. This wizard allows you to select tables, columns, and even rows to define a sync scope, provision/de-provision a database and generate server-side/client-side code based on the data schema that you have. This minimizes the amount of code that you have to write yourself to build sync services or offline applications.

  • iPhone Sample: This sample shows you how to develop an offline application on iPhone/iPad with SQLite for a particular remote schema by consuming the protocol directly.

For more details please visit

(In this release we decided to bump the version of all binaries to 4.0, skipping version 3.0 to keep the version number consistent across all components in the release.)

<Return to section navigation list> 

MarketPlace DataMarket and OData

Alex James (@adjames) published his promised Connecting to an OAuth 2.0 protected OData Service post on 1/21/2011:

This post creates a Windows Phone 7 client application for the OAuth 2.0 protected OData service we created in the last post.


To run this code you will need:

Our application:

Our application is a very basic Windows Phone 7 (WP7) application that allows you to browse favorites and, if logged in, create new favorites and see your personal favorites too. The key to enabling all this is authenticating our application against our OAuth 2.0 protected OData service, which means somehow acquiring a signed Simple Web Token (SWT) containing the current user’s email address.

Our application’s authentication experience:

When first started our application will show public favorites like this:


To see more or create new favorites you have to logon.
Clicking the logon button (the ‘gear’ icon) makes the application navigate to a page with an embedded browser window that shows a home realm discovery logon page – powered by ACS. This home realm discovery page allows the user to choose one of the 3 identity providers we previously configured our relying party (i.e. our OData Service) to trust in ACS:


When the user clicks on the identity provider they want to use, they will browse to that identity provider and be asked to Logon:


Once logged on they will be asked to grant our application permission to use their email address:


If you grant access the browser redirects back to ACS and includes the information you agreed to share – in this case your email address.

Note: because we are using OAuth 2.0 and related protocols your Identity provider probably has a way to revoke access to your application too. This is what that looks like in Google:


The final step is to acquire a SWT signed using the trusted signing key (only ACS and the OData service know this key) so that we can prove our identity with the OData service. This is a little involved so we’ll dig into it in more detail when we look at the code – but the key takeaway is that all subsequent requests to the OData service will use this SWT to authenticate, and gain access to more information:


How this all works:

There is a fair bit of generic application code in this example which I’m not going to walk-through; instead we’ll just focus on the bits that are OData and authentication specific.

Generating Proxy Classes for your OData Service

Unlike Silverlight or Windows applications, we don’t have an ‘Add Service Reference’ feature for WP7 projects yet. Instead we need to generate proxy classes by hand using DataSvcUtil.exe, something like this:

DataSvcUtil.exe /out:"data.cs" /uri:"http://localhost/OnlineFavoritesSite/Favorites.svc"

Once you’ve got your proxy classes simply add them to your project.

I chose to create an ApplicationContext class to provide access to resources shared across different pages in the application. So this property, which lets people get to the DataServiceContext, hangs off that ApplicationContext:

public FavoritesModelContainer DataContext {
   get {
      if (_ctx == null) {
         _ctx = new FavoritesModelContainer(new Uri("http://localhost/OnlineFavoritesSite/Favorites.svc/"));
         _ctx.SendingRequest += new EventHandler<SendingRequestEventArgs>(SendingRequest);
      }
      return _ctx;
   }
}

Notice that we’ve hooked up to the SendingRequest event on our DataServiceContext, so that if we know we are logged on (i.e. we have a SWT in our TokenStore) we include it in the Authorization header.

void SendingRequest(object sender, SendingRequestEventArgs e)
{
   if (IsLoggedOn)
      e.RequestHeaders["Authorization"] = "OAuth " + TokenStore.SecurityToken;
}

Then whenever the home page is displayed or refreshed the Refresh() method is called:

private void Refresh()
{
   AddFavoriteButton.IsEnabled = App.Context.IsLoggedOn;
   var favs = new DataServiceCollection<Favorite>(App.Context.DataContext);
   lstFavorites.ItemsSource = favs;
   favs.LoadAsync(new Uri("http://localhost/OnlineFavoritesSite/Favorites.svc/Favorites?$orderby=CreatedDate desc"));
   App.Context.Favorites = favs;
}

Notice that this code binds the lstFavorites control, used to display favorites, to a new DataServiceCollection that we load asynchronously via a hardcoded OData query URI. This means whenever Refresh() is executed we issue the same query; the only difference is that, thanks to our earlier SendingRequest event handler, we also send an Authorization header when we are logged on.

NOTE: If you are wondering why I used a hand coded URL rather than LINQ to produce the query, it’s because the current version of the WP7 Data Services Client library doesn’t support LINQ. We are working to add LINQ support in the future.
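Since every query has to be a hand-built URI, it helps to see how one is composed: service root, then entity set, then $-prefixed system query options. A small sketch in Python (the helper is hypothetical, not part of any library; the literal space in "CreatedDate desc" and its percent-encoded form are equivalent on the wire):

```python
from urllib.parse import quote

def build_odata_query(service_root, entity_set, **options):
    """Compose an OData query URI by hand: service root, entity set,
    then $-prefixed system query options such as orderby or top."""
    opts = "&".join("$%s=%s" % (k, quote(v)) for k, v in options.items())
    return "%s/%s?%s" % (service_root.rstrip("/"), entity_set, opts)

uri = build_odata_query(
    "http://localhost/OnlineFavoritesSite/Favorites.svc",
    "Favorites",
    orderby="CreatedDate desc")
print(uri)
# http://localhost/OnlineFavoritesSite/Favorites.svc/Favorites?$orderby=CreatedDate%20desc
```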

Logging on

The logon button is handled by the OnSignIn event that navigates the app to the SignOn page:

private void OnSignIn(object sender, EventArgs e)
{
   if (!App.Context.IsLoggedOn)
      NavigationService.Navigate(new Uri("/SignIn.xaml", UriKind.Relative));
}

The SignIn.xaml file is a modified version of the one in the Access Control Services Phone sample. As mentioned previously, it has an embedded browser (in fact the browser is embedded in a generic sign-on control called AccessControlServiceSignIn). The code behind the SignIn page looks like this:

public const string JSON_HRD_url = "";

public SignInPage()
{
   InitializeComponent();
}

private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
{
   SignInControl.RequestSecurityTokenResponseCompleted += new EventHandler<RequestSecurityTokenResponseCompletedEventArgs>(SignInControl_GetSecurityTokenCompleted);
   SignInControl.GetSecurityToken(new Uri(JSON_HRD_url));
}

This tells the control to GetSecurityToken from ACS. The JSON_HRD_url points to a URL exposed by ACS that returns the list of possible identity providers in JSON format.
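To make the home-realm-discovery step concrete, here is a sketch of the kind of JSON feed involved and how a client turns it into a list of logon choices. The field names (Name, LoginUrl) follow the ACS JSON format, but the sample URLs are illustrative placeholders, not real endpoints:

```python
import json

# A trimmed sample of the kind of JSON the ACS home-realm-discovery
# endpoint returns (illustrative values only).
hrd_feed = json.dumps([
    {"Name": "Google", "LoginUrl": "https://accounts.example.org/google"},
    {"Name": "Yahoo!", "LoginUrl": "https://accounts.example.org/yahoo"},
])

def identity_providers(feed):
    """Map each provider's display name to the login URL the embedded
    browser should navigate to when the user picks it."""
    return {p["Name"]: p["LoginUrl"] for p in json.loads(feed)}

providers = identity_providers(hrd_feed)
print(sorted(providers))  # ['Google', 'Yahoo!']
```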

Part of that URL corresponds to an MVC action we are going to add to our OData service website to get the SWT token into our WP7 application.

You can configure the URL of your MVC action via the Relying Party screen for your application in ACS:


Once you’ve set the Return URL correctly, to get the JSON Home Realm discovery URL from ACS, click on ‘Application Integration’, then ‘Logon Pages’, then your Relying Party; you should see something like this:


The second URL is the one we need.

That’s essentially all we need on the client for security purposes. Remember though we need a page in our website that acts as the Return URL.

We choose this URL ‘http://localhost/OnlineFavoritesSite/Security/AcsPostBack’ to receive the response so we need to create a SecurityController with an AcsPostBack action something like this:

public class SecurityController : Controller
   const string Page = @"<html xmlns="""">
<head runat=""server"">
<script type=""text/javascript"">
   // POST: /Security/AcsPostBack
   [ValidateInput(false)]
   public string AcsPostBack()
   {
      RequestSecurityTokenResponseDeserializer tokenResponse = new RequestSecurityTokenResponseDeserializer(Request);
      string page = string.Format(Page, tokenResponse.ToJSON());
      return page;
   }

This accepts a POST from ACS. Because ACS posts back markup-like content from another domain, we decorate the action with [ValidateInput(false)] so ASP.NET’s request validation doesn’t reject the post.

The post from ACS will basically be a SAML token that includes a set of claims about the current user. But because our phone client is going to use REST, we need to convert the SAML token, which is not header friendly, into a SWT token that is.

The RequestSecurityTokenResponseDeserializer class (again from the ACS phone sample) does this for us. It repackages the claims as a SWT token and wraps the whole thing up in a bit of JSON so it can be embedded in a little Javascript code.

The HTML we return calls window.external.Notify('{0}') to pass the token to the AccessControlServiceSignIn control in the phone application, which then puts it in TokenStore.SecurityToken so that it is available for future requests.

Once we have finally got the token this event fires:

void SignInControl_GetSecurityTokenCompleted(
   object sender,
   RequestSecurityTokenResponseCompletedEventArgs e)
{
   if (e.Error == null)
   {
      if (NavigationService.CanGoBack)
         NavigationService.GoBack();
   }
}

This code takes us back to the Main page, which forces a refresh, which in turn re-queries the OData service, this time with the SWT token, which gives the user access to all their personal favorites and allows them to create new favorites.

Mission accomplished!


In this post you learned how to use ACS and the Data Services Client for WP7 to authenticate with and query an OAuth 2.0 protected OData service. If you have any questions let me know.

Alex James
Program Manager

Alex James (@adjames) explained OData and OAuth – protecting an OData Service using OAuth 2.0 in a 1/20/2011 post to the WCF Data Services (Astoria) Team blog:

In this post you will learn how to create an OData service that is protected using OAuth 2.0, which is the OData team’s official recommendation in these scenarios:

  • Delegation: In a delegation scenario a third party (generally an application) is granted access to a user’s resources without the user disclosing their credentials (username and password) to the third party.
  • Federation: In a federation scenario a user’s credentials on one domain (perhaps their corporate network) imply access to resources on a resource domain (say a data provider). The key though is that the credentials used (if any) on the resource domain are not disclosed to the end users, and the user never discloses their credentials to the resource domain either.

So if your scenario is one of the above, or some slight variation, we recommend that you use OAuth 2.0 to protect your service; it provides the utmost flexibility and power.

To explore this scenario we are going to walkthrough a real-world scenario, from end to end.

The Scenario

We’re going to create an OData service based on this Entity Framework model for managing a user’s Favorite Uris:


As you can see this is a pretty simple model with just Users and Favorites.

Our service should not require its own username and password, which is a sure way to annoy users today. Instead it will rely on well-known third parties like Google and Yahoo to provide the user’s identity. We’ll use AppFabric Access Control Services (aka ACS) because it provides an easy way to bridge these third parties’ claims and rewrite them as a signed OAuth 2.0 Simple Web Token or SWT.

The idea is that we will trust email-address claims issued by our ACS service via a SWT in the Authorization header of the request. We’ll then use an HttpModule to convert that SWT into a WIF ClaimsPrincipal.

Then our service’s job will be to map the EmailAddress in the incoming claim to a User entity in the database via the User’s EmailAddress property, and use that to enforce Business Rules.

Business Rules

We need our Data Service to:

  • Automatically create a new user whenever someone with an unknown email-address hits the system.
  • Allow only administrators to query, create, update or delete users.
  • Allow Administrators to see all favorites.
  • Allow Administrators to update and delete all favorites.
  • Allow Users to see public favorites and their private favorites.
  • Allow Users to create new favorites. But the OwnerId, CreatedDate and ‘Public’ values should be set for them, i.e. whatever the user sends on the wire will be ignored.
  • Allow Users to edit and delete only their favorites.
  • Allow un-authenticated requests to query only public favorites.

Prerequisites

  • Windows Server 2008 R2 or Windows 7
  • Visual Studio 2010
  • Internet Information Services (IIS) enabled with IIS Metabase and IIS6 Configuration Compatibility
  • Windows Identity Foundation (WIF)
  • An existing Data Service Project that you want to protect.
Creating our Data Service

First we add a DataService that exposes our Entity Framework model like this:

public class Favorites : DataService<FavoritesModelContainer>
{
   // This method is called only once to initialize service-wide policies.
   public static void InitializeService(DataServiceConfiguration config)
   {
      config.SetEntitySetAccessRule("*", EntitySetRights.All);
      config.SetEntitySetPageSize("*", 100);
      config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
   }
}

Configuring ACS

You can create an AppFabric project, which will allow you to trial Access Control Services (or ACS) for free. The steps involved are:

  1. Sign in with your LiveId (or create a new one). Once you’ve logged on you’ll see something like this:
  2. Click the ‘create a project’ link and choose a name for your project.
  3. Click on your new project:
  4. Click ‘Add Service Namespace’ and choose a service namespace that is available:
    Then you will see this:
  5. You’ll have to wait about 20 seconds for Azure to provision your namespace. Once it is active, click on the ‘Access Control’ link:
  6. Click on ‘Identity Providers’ which will allow you to configure ACS to accept identities from Google, Yahoo and Live, by clicking ‘Add Identity Provider’ on the screen below:
  7. Once you’ve added Google and Yahoo click on ‘Return to Access Control Service’
  8. Click on ‘Relying Party Applications’:
    NOTE: As you can see there is already a ‘Relying Party Application’ called AccessControlManagement. That is the application we are currently using that manages our ACS instance. It trusts our ACS to make claims about the current user’s identity.
    As you can see this management application thinks I am an administrator (top right corner). This is because I logged on to AppFabric using the LiveId that owns this Service Namespace.
    Now we can create a relying party – i.e. something to represent our OData favorites service – which will ‘rely’ on ACS to make claims about who is making the request, to do this:
  9. Click on ‘Add Relying Party Application’.
  10. Fill in the form like this and then click ‘Save’
    Name: Choose a name that represents your Application
    Realm: Choose the ‘domain’ that you intend to host your application at. This will work even if you are testing on localhost first so long as web.config settings that control your OAuth security module match.
    Return URL: Choose some URL relative to your domain, like in the above sample. Note this is not needed by the server; it is only needed when we write a client – which we will do in the next blog post. You *will* need to change this value as you move from testing to live deployment, because your clients will actually follow this link.
    Error URL: Leave this blank
    Token format:
    Choose SWT (i.e. a Simple Web Token which can be embedded in request headers).
    Token lifetime (secs): Leave at the default.
    Identity providers: Leave the default.
    Rule groups: Leave the default.
    Token signing key: Click ‘Generate’ to produce a key or paste an existing key in.
    Effective date: Leave the default.
    Expiration date: Leave the default.
  11. Click on ‘Return to Access Control Service’.
  12. Click on ‘Rule Groups’
  13. Click on ‘Default Rule Group for [your relying party]’
  14. Click on ‘Generate Rules’
  15. Leave all Identity Providers checked and click the ‘Generate’ button.
    You should see these rules get generated automatically:

This set of rules will take claims from Google, Yahoo and Windows Live Id, and pass them through untouched, signing them with the Token Signing Key we generated earlier.

Notice that LiveId claims don’t include an ‘emailaddress’ or ‘name’, so if we want to support LiveId our OAuth module on the server will need to figure out a way to convert a ‘nameidentifier’ claim into a ‘name’ and ‘emailaddress’ which is beyond the scope of this blog post.

At this point we’ve finished configuring ACS, and we can configure our OData Service to trust it.

Server Building Blocks

We will rely on a sample the WIF team recently released that includes a lot of useful OAuth 2.0 helper code. This code builds on WIF adding some very useful extensions.

The most useful code for our purposes is a class called OAuthProtectionModule. This is a HttpModule that converts claims made via a Simple Web Token (SWT) in the incoming request’s Authorization header into a ClaimsPrincipal which it then assigns to HttpContext.Current.User.

If you’ve been following the OData and Authentication series, this general approach will be familiar to you. It means that by the time calls get to your OData service the HttpContext.Current.User has the current user (if any) and can be used to make decisions about whether to authorize the request.
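A SWT is nothing more than a form-encoded set of claims with an HMACSHA256 parameter appended, computed over the body with the shared signing key. The sketch below, in Python, shows the essence of the check a module like OAuthProtectionModule performs; the helper names are mine, the HMACSHA256/ExpiresOn parameter names come from the SWT draft format, and the key is the sample signing key shown in web.config later:

```python
import base64, hashlib, hmac, time
from urllib.parse import parse_qsl, quote, unquote, urlencode

# The shared signing key (only ACS and our service know it).
KEY = base64.b64decode("cx3SesVUdDE0yGYD+86BLzyffu0xPBRGUYR4wKPpklc=")

def sign_swt(claims, key):
    """Form-encode the claims and append an HMACSHA256 parameter over
    them - which is essentially all a Simple Web Token is."""
    body = urlencode(claims)
    sig = base64.b64encode(hmac.new(key, body.encode(), hashlib.sha256).digest())
    return body + "&HMACSHA256=" + quote(sig.decode())

def validate_swt(token, key):
    """What the server-side module must do: recompute the signature,
    compare, check expiry, then hand back the claims."""
    body, _, sig = token.rpartition("&HMACSHA256=")
    expected = base64.b64encode(
        hmac.new(key, body.encode(), hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(unquote(sig), expected):
        raise ValueError("signature mismatch")
    claims = dict(parse_qsl(body))
    if int(claims.get("ExpiresOn", "0")) < time.time():
        raise ValueError("token expired")
    return claims

token = sign_swt({"emailaddress": "alice@example.org",
                  "ExpiresOn": str(int(time.time()) + 3600)}, KEY)
print(validate_swt(token, KEY)["emailaddress"])  # alice@example.org
```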


There is a lot of code in the WIF sample that we don’t need. All you really need is the OAuthProtectionModule, so my suggestion is you pull that out into a separate project and grab classes from the sample as required. When I did that I moved things around a little and ended up with something that looked like this:

You might want to simplify the SamplesConfiguration class too, to remove unnecessary configuration information. I also decided to move the actual configuration into the web.config. When you make those changes you should end up with something like this:

public static class SamplesConfiguration
{
   public static string ServiceNamespace
   {
      get { return ConfigurationManager.AppSettings["ServiceNamespace"]; }
   }

   public static string RelyingPartyRealm
   {
      get { return ConfigurationManager.AppSettings["RelyingPartyRealm"]; }
   }

   public static string RelyingPartySigningKey
   {
      get { return ConfigurationManager.AppSettings["RelyingPartySigningKey"]; }
   }

   public static string AcsHostUrl
   {
      get { return ConfigurationManager.AppSettings["AcsHostUrl"]; }
   }
}

Then you need to add your configuration information to your web.config:

<!-- this is the Relying Party signing key we generated earlier, i.e. the key ACS will use to sign the SWT –
     that our module can verify by signing and comparing -->
<add key="RelyingPartySigningKey" value="cx3SesVUdDE0yGYD+86BLzyffu0xPBRGUYR4wKPpklc="/>
<!-- the dns name of the SWT issuer -->
<add key="AcsHostUrl" value=""/>
<!-- this is the ACS service namespace of your OData service -->
<add key="ServiceNamespace" value="odatafavorites"/>
<!-- this is the intended url of your service (you don’t need to use a local address during development –
     it isn’t verified) -->
<add key="RelyingPartyRealm" value=""/>

With these values in place the next step is to enable the OAuthProtectionModule too.

   <validation validateIntegratedModeConfiguration="false" />
   <modules runAllManagedModulesForAllRequests="true">
      <add name="OAuthProtectionModule" preCondition="managedHandler"

With this in place any requests that include a correctly signed SWT in the Authorization header will have the HttpContext.Current.User set by the time you get into Data Services code.

Now we just need a function to pull back a User (from the database) based on the EmailAddress claim contained in HttpContext.Current.User, by calling GetOrCreateUserFromPrincipal(..).

Per our business requirements this function automatically creates a new non-administrator user whenever a new EmailAddress is encountered. It talks to the database using the current ObjectContext which it accesses via DataService.CurrentDataSource.

public User GetOrCreateUserFromPrincipal(IPrincipal principal)
{
   var emailAddress = GetEmailAddressFromPrincipal(principal);
   return GetOrCreateUserForEmail(emailAddress);
}

private string GetEmailAddressFromPrincipal(IPrincipal principal)
{
   if (principal == null)
      return null;
   else if (principal is GenericPrincipal)
      return principal.Identity.Name;
   else if (principal is IClaimsPrincipal)
      return GetEmailAddressFromClaim(principal as IClaimsPrincipal);
   else
      throw new InvalidOperationException("Unexpected Principal type");
}

private string GetEmailAddressFromClaim(IClaimsPrincipal principal)
{
   if (principal == null)
      throw new InvalidOperationException("Need a claims principal to extract EmailAddress claim");

   var emailAddress = principal.Identities[0].Claims
      .Where(c => c.ClaimType == "")
      .Select(c => c.Value)
      .SingleOrDefault();

   return emailAddress;
}

private User GetOrCreateUserForEmail(string emailAddress)
{
   if (emailAddress == null)
      throw new InvalidOperationException("Need an emailaddress");

   var ctx = CurrentDataSource as FavoritesModelContainer;
   var user = ctx.Users.WhereDbAndMemory(u => u.EmailAddress == emailAddress).SingleOrDefault();
   if (user == null)
   {
      user = new User
      {
         Id = Guid.NewGuid(),
         EmailAddress = emailAddress,
         CreatedDate = DateTime.Now,
         Administrator = false
      };
      ctx.Users.AddObject(user);
   }
   return user;
}

Real World Note:

One thing that is interesting about this code is the call to WhereDbAndMemory(..) in GetOrCreateUserForEmail(..). Initially it was just a normal Where(..) call.

But that introduced a pretty sinister bug.

It turned out that often my query interceptors / change interceptors were being called multiple times in a single request and because this method creates a new user without saving it to the database every time it is called, it was creating more than one user for the same emailAddress. Which later failed the SingleOrDefault() test.

The solution is to look for any unsaved Users in the ObjectContext, before creating another User. To do this I wrote a little extension method that allows you to query both the Database and unsaved changes in one go:

public static IEnumerable<T> WhereDbAndMemory<T>(
   this ObjectQuery<T> sequence,
   Expression<Func<T, bool>> filter) where T : class
{
   // matches already in the database...
   var sequence1 = sequence.Where(filter).ToArray();
   // ...plus matching entities tracked (but possibly unsaved) in the context
   var state = EntityState.Added | EntityState.Modified | EntityState.Unchanged;
   var entries = sequence.Context.ObjectStateManager.GetObjectStateEntries(state);
   var merged = sequence1.Concat(
      entries.Select(e => e.Entity).OfType<T>().Where(filter.Compile()));
   return merged;
}

By using this function we can be sure to only ever create one User for a particular emailAddress.
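The trick is easier to see stripped of Entity Framework plumbing. Below is a pure-Python analogy (dicts standing in for entities, a list standing in for the unit of work's pending objects; all names are mine, not from the sample):

```python
def where_db_and_memory(db_rows, pending, predicate):
    """Same idea as the WhereDbAndMemory extension method: match rows
    already in the database AND objects the current unit of work has
    created but not yet saved."""
    return [r for r in db_rows if predicate(r)] + \
           [p for p in pending if predicate(p)]

def get_or_create_user(db_rows, pending, email):
    """Interceptors can fire several times per request; checking the
    pending list keeps us from creating two users for one address."""
    matches = where_db_and_memory(db_rows, pending,
                                  lambda u: u["email"] == email)
    if matches:
        return matches[0]
    user = {"email": email, "admin": False}
    pending.append(user)  # added to the unit of work, not yet saved
    return user

pending = []
a = get_or_create_user([], pending, "x@example.org")
b = get_or_create_user([], pending, "x@example.org")  # fires again, same request
print(a is b, len(pending))  # True 1
```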


To implement our required business rules we need to create a series of Query and Change Interceptors that allow different users to do different things.

Our first interceptor controls who can query users:

[QueryInterceptor("Users")]
public Expression<Func<User, bool>> FilterUsers()
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      throw new DataServiceException(401, "Permission Denied");

   User user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   if (user.Administrator)
      return (u) => true;
   else
      throw new DataServiceException(401, "Permission Denied");
}

Per our requirement this only allows authenticated Administrators to query Users.

Next we need an interceptor that only allows administrators to modify a user:

[ChangeInterceptor("Users")]
public void ChangeUser(User updated, UpdateOperations operations)
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      throw new DataServiceException(401, "Permission Denied");

   var user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   if (!user.Administrator)
      throw new DataServiceException(401, "Permission Denied");
}

And now we restrict access to Favorites:

[QueryInterceptor("Favorites")]
public Expression<Func<Favorite, bool>> FilterFavorites()
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      return (f) => f.Public == true;

   var user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);
   var emailAddress = user.EmailAddress;
   if (user.Administrator)
      return (f) => true;
   else
      return (f) => f.Public == true || f.User.EmailAddress == emailAddress;
}

As you can see administrators see everything, users see their favorites and everything public, and non-authenticated requests get to see just public favorites.

Finally we control who can create, edit and delete favorites:

[ChangeInterceptor("Favorites")]
public void ChangeFavorite(Favorite updated, UpdateOperations operations)
{
   if (!HttpContext.Current.Request.IsAuthenticated)
      throw new DataServiceException(401, "Permission Denied");

   // Get the current USER or create the current user...
   var user = GetOrCreateUserFromPrincipal(HttpContext.Current.User);

   // Handle Inserts...
   if ((operations & UpdateOperations.Add) == UpdateOperations.Add)
   {
      // fill in the OwnerId, CreatedDate and Public properties
      updated.OwnerId = user.Id;
      updated.CreatedDate = DateTime.Now;
      updated.Public = false;
   }
   else if ((operations & UpdateOperations.Change) == UpdateOperations.Change)
   {
      // Administrators can do whatever they want.
      if (user.Administrator)
         return;

      // We don't trust the OwnerId on the wire (updated.OwnerId) because
      // we should never do security checks based on something that the client
      // can modify!!!
      var original = GetOriginal(updated);
      if (original.OwnerId == user.Id)
      {
         // non-administrators can't modify these values.
         updated.OwnerId = user.Id;
         updated.CreatedDate = original.CreatedDate;
         updated.Public = original.Public;
         return;
      }

      // if we got here... they aren't allowed to do anything!
      throw new DataServiceException(401, "Permission Denied");
   }
   else if ((operations & UpdateOperations.Delete) == UpdateOperations.Delete)
   {
      // in a delete operation you can’t update the OwnerId – it is impossible
      // in the protocol, so it is safe to just check that.
      if (updated.OwnerId != user.Id && !user.Administrator)
         throw new DataServiceException(401, "Permission Denied");
   }
}

Unauthenticated change requests are not allowed.

For additions we always set the ‘OwnerId’, ‘CreatedDate’ and ‘Public’ properties, overriding whatever was sent on the wire.

For updates we allow administrators to make any changes, whereas owners can edit only their own favorites, and they can’t change the ‘OwnerId’, ‘CreatedDate’ or ‘Public’ properties.

It is also very important to understand that we have to get the original values before we check to see if someone is the owner of a particular favorite. We do this using this function that leverages some low level Entity Framework code:

private Favorite GetOriginal(Favorite updated)
{
   // For MERGE based updates (which is the default) 'updated' will be in the
   // ObjectContext.ObjectStateManager.
   // For PUT based updates 'updated' will NOT be in the
   // ObjectContext.ObjectStateManager, but it will contain a copy
   // of the same entity.
   // So to normalize we should find the ObjectStateEntry in the ObjectStateManager
   // by EntityKey not by Entity.
   var entityKey = new EntityKey("FavoritesModelContainer.Favorites", "Id", updated.Id);
   var entry = CurrentDataSource.ObjectStateManager.GetObjectStateEntry(entityKey);

   // Now we have the entry, let's construct a copy with the original values.
   var original = new Favorite
   {
      Id = entry.OriginalValues.GetGuid(entry.OriginalValues.GetOrdinal("Id")),
      CreatedDate = entry.OriginalValues.GetDateTime(entry.OriginalValues.GetOrdinal("CreatedDate")),
      Description = entry.OriginalValues.GetString(entry.OriginalValues.GetOrdinal("Description")),
      Name = entry.OriginalValues.GetString(entry.OriginalValues.GetOrdinal("Name")),
      OwnerId = entry.OriginalValues.GetGuid(entry.OriginalValues.GetOrdinal("OwnerId")),
      Public = entry.OriginalValues.GetBoolean(entry.OriginalValues.GetOrdinal("Public")),
      Uri = entry.OriginalValues.GetString(entry.OriginalValues.GetOrdinal("Uri")),
   };

   return original;
}

This constructs a copy of the unmodified entity setting all the properties from the original values in the ObjectStateEntry. While we don’t actually need all the original values, I personally hate creating a function that only does half a job; it is a bug waiting to happen.

Finally administrators can delete any favorites but users can only delete their own.


We’ve gone from zero to hero in this example: all our business rules are implemented, our OData Service is protected using OAuth 2.0 and everything is working great. The only problem is we don’t have a working client.

So in the next post we’ll create a Windows Phone 7 application for our OData service that knows how to authenticate.

Alex James
Program Manager

Jon Galloway (@jongalloway) reported a FIX: WCF Data Service with Entity Framework Code-First DbContext doesn’t accept updates in a 1/21/2011 post:


The Entity Framework Code First DbContext doesn’t expose the interfaces to support updates when exposed via WCF Data Services. Attempting to save changes results in a fault with the message "The data source must implement IUpdatable or IDataServiceUpdateProvider to support updates." The fix is to alter your WCF Data Service to expose the DbContext's underlying ObjectContext and to disable proxy generation.


Jesse Liberty and I have stumbled into some frontier country. Our work on a Windows Phone + WCF Data Services + EF Code First + MVC 3 solution for the The Full Stack has put us in that delightful developer place where the combination of two or more pre-release or newly-released bits can feel like you're in primordial-ooze-technical-preview stage. Honestly, it's our job to get there before you do, and we love it.

I've been pushing Entity Framework Code-First to anyone who will listen. My kids are sick of hearing about it... "yes, dad, POCO object, no config, we GET it!"

We'd figured out how to connect all the pieces together so that data in an ASP.NET MVC 3 using SQL CE and EF Code First could expose data to a Windows Phone client via a WCF Data Service. That sounds like a lot of moving parts, but it actually went pretty smoothly once we figured out the steps, as documented here.

The problem - the service was read-only

Our goal is to build out a contact manager application that allows you to quickly store and look up contacts by description, e.g. short, long hair, works for Microsoft, met at MIX 2010.

Our phone client did a fine job of reading the data from the service, but our attempts to save changes back to the service gave some errors that were hard to troubleshoot. Based on some help from Chris "Woody" Woodruff, we changed our update method to stop using batching, and we started seeing the specific error message: "The data source must implement IUpdatable or IDataServiceUpdateProvider to support updates."

That error message led me to a comment on a post by Rowan Miller on using WCF Data Services against a DbContext, which is exactly what we were doing. Rowan points out that if you're going by experience or documentation on using Entity Framework (before Code First), you'd create a simple WCF Data Service and point it at your Context class, and everything would work. However:

Now what if your BlogContext derives from DbContext instead of ObjectContext? In the current CTP4 you can’t just create a DataService of a derived DbContext, although you can expect this to work by the time there is an RTM release.

But there is some good news, DbContext uses ObjectContext under the covers and you can get to the underlying context via a protected member.

Aha, you say - that was for CTP4, and there's a newer release. Read on...

DbContext as a lightweight, convention-based wrapper over ObjectContext

DbContext is really easy to work with. It does the basic things you'd expect from ObjectContext, like tracking changes to your entities, batching changes, etc. The DbContext also adds some other convention-based goodness on top, though, which does things like infer what the database connection should be based on the entity name, creating the database if it doesn't exist, etc.

However, the DbContext abstracts away / hides some capabilities in the ObjectContext. Usually, those capabilities aren't things you'll miss, but occasionally you will. In this case, we need IUpdatable support, as the MSDN documentation for DataService<T> explains:

The type of the DataService<T> must expose at least one property that returns an entity set that is an IQueryable<T> collection of entity types. This class must also implement the IUpdatable interface to enable updates to be made to entity resources.

The workaround - Expose the DbContext's base ObjectContext

Rowan Miller explains the workaround, which involves two short steps. Fortunately, since he posted that info (valid for CTP4), it's become even easier, since DbContext directly exposes the ObjectContext with your having to write a property to expose it. Prior to EF Code First CTP 5, if you wanted to call into the base ObjectContext, you had to expose the underlying context via a property, like this:

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
    public DbSet<Post> Posts { get; set; }

    public ObjectContext UnderlyingContext
    {
        get { return this.ObjectContext; }
    }
}

In CTP5, the DbContext implements IObjectContextAdapter, which exposes the ObjectContext. So, putting that together, if you want to set up a WCF Data Service that exposes a DbContext named PersonContext, you’ll need to change from this:

public class PersonTestDataService : DataService<PersonContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}
to this:

public class PersonTestDataService : DataService<ObjectContext>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }

    protected override ObjectContext CreateDataSource()
    {
        var ctx = new PersonContext();
        var objectContext = ((IObjectContextAdapter)ctx).ObjectContext;
        objectContext.ContextOptions.ProxyCreationEnabled = false;
        return objectContext;
    }
}

Here’s the summary of what we did:

  • The DataService is typed as ObjectContext rather than the class which implemented DbContext (PersonContext).
  • We override CreateDataSource to get at the PersonContext’s underlying ObjectContext, returning the ObjectContext as the result.
  • We disable Proxy Creation, which apparently does something magical. I just got it from Rowan Miller’s post and it works, so I’m not going to complain.
Updating the client proxy

If you’d previously created a client service proxy (via DataSvcUtil) for the service, you’ll need to regenerate it because the ObjectContext based WCF Service now implements IUpdatable and thus exposes some new additional methods. In our test case - a flat object model with one class holding simple strings - the changes to the client proxy class were to set up property change notifications:

  • public event global::System.ComponentModel.PropertyChangedEventHandler PropertyChanged;
  • protected virtual void OnPropertyChanged(string property)
Fun with IObjectContextAdapter

I’ve run into other circumstances where I needed access to the DbContext’s ObjectContext – for instance working with the ObjectStateManager. It’s nice to be able to cast a DbContext to an IObjectContextAdapter to get at the ObjectContext, then go to town with the base ObjectContext.
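As a quick sketch of that pattern (PersonContext is the sample context from the service example above; the entity-state query is just an illustration, not code from the post):

```csharp
// Cast any CTP5 DbContext to IObjectContextAdapter to reach the
// underlying ObjectContext and, from there, the ObjectStateManager.
using (var ctx = new PersonContext())
{
    var objectContext = ((IObjectContextAdapter)ctx).ObjectContext;

    // Illustration only: enumerate entities currently tracked as Added.
    var addedEntries = objectContext.ObjectStateManager
        .GetObjectStateEntries(System.Data.EntityState.Added);
}
```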

Ralph Squillace uses Windows Azure DataMart as a data source for his Technical Note: Windows Phone 7: Basic Authentication over HTTPS post of 1/21/2011:

A note directly related to the larger blog project started with the last post: while I have been working on a lot of phone applications, almost all of the sample code around the net using HTTP web requests involves Web services -- REST or SOAP style -- that do not require much, if anything, in the way of authentication and security. Yet when a service provides data of reasonable value, that's what you need. Currently Windows Phone 7 supports only HTTPS to secure communications, but since that's what is used by the bulk of the Internet for selling music, books, tickets, and so on, it is more than sufficient for the vast majority of consumer applications. This post demonstrates how to make those calls on Windows Phone 7 using Basic Authentication over HTTPS using several different mechanisms. Each of these works against an HTTPS endpoint at the Windows Azure DataMarket to access free data sources, such as the United States crime statistics from 2008.

ASIDE: Some people have noticed that the WCF implementation on the phone does not support "easy" Basic authentication over HTTP. No, it doesn't. But that's by design, to protect you from "accidentally" sending passwords in the clear over the network. Perhaps that's too much "protecting you", but it is documented here, in the note in the topic. If you don't use HTTPS -- that is, if you use HTTP -- the Client.Credentials property is ignored by design.

SECURITY NOTE: Because sample code rarely mentions this, and there are always some people in a big hurry who need reminding: obtain the username/password in a secure fashion. The most reasonably secure approach is to prompt for it so that the user can enter it in a PasswordBox. However, there are also ways to strongly encrypt the storage of such data on the phone. More in the future about several of them, but one is described here by Rob Tiffany: Don’t forget to Encrypt your Windows Phone 7 Data.

Using the WebClient Class (a good example with slightly different style found here!)

using System.Net;

WebClient client = new WebClient();

// Handle the response when it comes.
client.DownloadStringCompleted += new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);

// Your password; some services will call this a key, or a secret, or something.
string accountkeyOrPassword = "<account or key or password goes here>";

// Add the username/accountkeyOrPassword to the Client.Credentials property by creating a new NetworkCredential
client.Credentials = new NetworkCredential("accountKey", accountkeyOrPassword);

// Start the asynchronous download; the completed handler above receives the response.
client.DownloadStringAsync(new Uri("$filter=City%20eq%20%27Seattle%27&$top=100"));
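The DownloadStringCompleted handler wired up above is not shown in the excerpt; a minimal sketch (the body is an assumption, not the author's code) might be:

```csharp
// Hypothetical completion handler for the DownloadStringAsync call above.
void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    if (e.Error == null)
    {
        // e.Result holds the raw response body (here, an OData/Atom feed).
        string responseString = e.Result;
        // ... parse or bind responseString as the application requires ...
    }
}
```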

Using the HttpWebRequest Class

HttpWebRequest client = WebRequest.CreateHttp(
    new Uri("$filter=City%20eq%20%27Seattle%27&$top=100")) as HttpWebRequest;

// Your password; some services will call this a key, or a secret, or something.
string accountkeyOrPassword = "<account or key or password goes here>";

// Add the username/accountkeyOrPassword to the Client.Credentials property by creating a new NetworkCredential
client.Credentials = new NetworkCredential("accountKey", accountkeyOrPassword);

// To support Windows Azure DataMarket
client.AllowReadStreamBuffering = true;

// Call and handle the response; the callback marshals back to the UI thread.
// (The BeginGetResponse call and Dispatcher wrapper are reconstructed here;
// responseString is assumed to be a field declared elsewhere.)
client.BeginGetResponse(
    (asResult) =>
    {
        Deployment.Current.Dispatcher.BeginInvoke(
            () =>
            {
                try
                {
                    var response = client.EndGetResponse(asResult);
                    System.IO.StreamReader reader = new System.IO.StreamReader(response.GetResponseStream());
                    responseString = reader.ReadToEnd();
                }
                catch (WebException failure)
                {
                    // Rethrow, preserving the stack trace.
                    throw;
                }
            });
    },
    null);

In a next post, I'll show how to use several different optimized client libraries to do this work. 

See Pablo Castro will present an Open Data: Designing Data-centric Web APIs session at the O’Reilly Strata Conference on 2/3/2011 at 10:40 AM in the Mission City M room of the Santa Clara Conference Center, Santa Clara, CA in the Cloud Computing Events section below.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Ralph Squillace described Boundaryless WCF and REST service communication -- The Windows Azure platform AppFabric Service Bus (with Access Control!) in a 12/29/2010 post (missed when published):

This past month I've been doing some work on the AppFabric Service Bus. It's hard to get a grip on the "feature" associated with this technology at first glance, but I'll take my first shot right here: "The Windows Azure platform Service Bus enables you to build Web services and clients that can traverse boundaries like firewalls and NAT routers without making any direct modifications to those boundaries themselves. What does this give you? Nothing short of the ability to glue applications together and stretch them around the world --anywhere the data or applications reside, or to any platforms (smart phones, anyone? Dishwashers? Toasters?) on which the callers reside. In short, it is the glue that makes secure Software plus Services not only a reality, but a reality that you can begin building today."

OK, that's a little overheated (geez: "a reality that you can begin building today?" ouch!! :-), but technically it is very close to what's going on here. You must realize that a production-quality scenario involves a lot of additional work -- but the genius-cool of this bit of tech is that you do NOT have to build the secure glue system between data chunks and applications -- just build your WCF service that exposes the data or application behavior, and instead of configuring it for local behavior, you configure it to use one of the relay bindings that come with the Service Bus, and to use tokens from the Access Service to authenticate and authorize when communicating with the Service Bus.

Somewhere in the cloud waits the Service Bus (and the Access Control service). When you start your local WCF service, the configured relay bindings communicate with the Service Bus in the cloud (if permission is granted) and then analyze the network between the two endpoints, establishing a bidirectional stream through any firewalls and NAT systems. Clients anywhere, on any Web-enabled platform, can then (if they have permission from the Access Control service) connect to the Service Bus as though it were the original service. The endpoint addresses that clients use -- Service Bus namespaces and names -- are protocol- and location-independent. Although you can secure the endpoint address information, the name itself has meaning only to the Service Bus -- and to nothing else.

What kind of tokens can you use to be authenticated and authorized? The easiest way is to use Simple Web Tokens, for which the Access Control service is a trusted provider to the Service Bus. Because the Access Control service is extensible, however, you can build a security system that uses the Access Control service to authenticate and authorize Service Bus access without changing the original WCF service or the Service Bus configuration at all. (And, as the Access Control service supports AD FS integration -- SAML tokens -- you can do very robust and integrated security.)

The ability to communicate across firewall and NAT boundaries without management involvement AND the ability to do this securely across platforms -- both of these features are needed to make it possible for you to build the cloud/desktop/server/gadget applications that you can envision for your customers ease and your productivity. This is simply way, way cool.

I'll do some examples very soon.

Haven’t discovered any examples so far.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Andy Cross described overcoming problems with Using Certificates in Windows Azure Management in a 1/21/2011 post:

Recently I ran into a problem with using Certificates to upload my project to Windows Azure. I was doing something wrong over and over again, so I thought I’d post the correct way of doing it. It’s something I’ve done many times, but on setting up a new computer I had to do it again for the first time in a while and ran into a brick wall. If you get the same error it will read like:

The HTTP request was forbidden with client authentication scheme ‘Anonymous’.

The remote server returned an error: (403) Forbidden.

When you want to upload to Azure, the Visual Studio Tools for Windows Azure require a certificate for security. You must configure a certificate before you can upload your project with the Visual Studio Tools.

Visual Studio Prompt for Credentials


Select from the Credentials drop down the option to <Add> a new Credential:

Add Credentials


Now you can select or create a certificate for use with these credentials:

Select a Certificate


You will see any existing certificates in this list, mine has a few – yours may be empty. However for this example we will create a new Certificate, so select <Create> and enter a friendly name:

Select Create a Certificate


Enter a friendly name


Now you will be taken back to the Windows Azure Project Management Authentication screen with the certificate selected. Next, open the Management Portal and upload the newly created certificate to it. The shortcut to doing this is to click the first and then second link at point 2 of the below dialog:

Prepare to upload your certificate


In the portal, make sure you go to the Management Certificates section. This was where I was going wrong! I was adding a certificate to a Hosted Service rather than the overall Management Portal.

Go to the Management Certificates area


Click the Add Certificate button at the top left of the screen, select the subscription to use and browse to the certificate in this location:

Update Certificate


Clicking “Done” will create the certificate, and then you will be able to upload your project to Windows Azure.

Done indeed!

How not to do it!

Just to be clear, I was adding my certificate to the wrong place:

Don't upload a certificate here for uploading - the wrong place!


This Certificates Node is used to manage a particular instance set, for things such as Remote Desktop.

It’s pretty easy to go wrong, but easy to notice that you have once you know what to look for – make sure your certificate upload dialog doesn’t contain a Password field! This is because the certificate for upload and the certificate for remote desktop are different (the remote desktop certificate contains the private key):

The wrong dialog!


Had I been a little more awake, I probably would have thought in more depth about why I didn’t know what the password was! However, if you specify a .cer file in here, the portal doesn’t need to use it, and so won’t mind that you just made it up! Trying to use this will result in the error message:

Error Message


If you run into this issue, just delete the certificate from within your Hosted Services subscription and instead add it to the Management Certificates part of the Portal.

Also, it’s probably time to get another coffee :-) It certainly was for me!

Stop Press!

As I was writing this, there was an update by David Hardin on a very similar subject:

PRWeb reported “The first Microsoft Dynamics CRM Independent Software Vendor to be invited to the BizSpark One program looks forward to a closer collaboration with Microsoft” in a Marketing Automation Vendor Selected to Join the Microsoft BizSpark One program press release of 1/21/2011 (via

Microsoft Dynamics CRM Marketing Automation vendor ClickDimensions today announced that it has been selected to join the Microsoft BizSpark One Program. Microsoft BizSpark was launched as a global program offering software, support and visibility to early stage startups. Microsoft BizSpark One is the latest expansion of the BizSpark program. This unique invitation-only program is designed to accelerate the growth of selected high potential startups through a one-to-one relationship with Microsoft and a global community of advisors, investors, and peers.

“We've set out to help businesses improve their sales and marketing functions by giving them tools to reach their prospects and customers and understand what interests them” commented ClickDimensions founder and CEO John Gravely. “To achieve this, our solution deeply embeds email marketing, web tracking, lead scoring, social discovery, campaign tracking, form capture and other capabilities into the Microsoft Dynamics CRM platform. Having marketing automation data reside inside CRM creates a powerful overall solution that goes well beyond what the typical integration between CRM and Marketing Automation provides.”

Microsoft Corporation is committed to serving as a valuable technology and business partner for emerging startups and their investors. BizSpark One pairs each startup with a dedicated relationship manager at Microsoft who works with the startup to identify its unique opportunities and build a tailored plan to promote the startup's visibility, expand its network of investors and mentors, expose it to business opportunities, and develop cutting edge applications.

“ClickDimensions' focus on the Microsoft Dynamics CRM and Windows Azure platforms is exciting for Microsoft, our partners and customers,” commented Rodney Bowen-Wright, business development manager, at Microsoft. “We look forward to working with ClickDimensions to accelerate its success.”

About ClickDimensions
ClickDimensions' Marketing Automation for Microsoft Dynamics CRM empowers marketers to generate and qualify high quality leads while providing sales the ability to prioritize the best leads and opportunities. Providing Email Marketing, Web Tracking, Lead Scoring, Social Discovery, Campaign Tracking and Form Capture, ClickDimensions allows organizations to discover who is interested in their products, quantify their level of interest and take the appropriate actions. A 100% Software-as-a-Service (SaaS) solution built on the Microsoft Windows Azure platform and built into Microsoft Dynamics CRM, ClickDimensions allows companies to track their prospects from click to close. For more information about ClickDimensions visit

Christian Weyer offered a brief tip about Maximum size of a Windows Azure deployment package (.cspkg) on 1/21/2011:

Note to self: The maximum Windows Azure package (.cspkg) size that can be deployed – e.g. by using Visual Studio or the cspack tool - is 600 MB (600 * 1024 * 1024 bytes).
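A trivial pre-deployment check against that limit might look like this (the package path is hypothetical):

```csharp
using System;
using System.IO;

class PackageSizeCheck
{
    static void Main()
    {
        // The documented deployment limit: 600 * 1024 * 1024 bytes.
        const long MaxPackageBytes = 600L * 1024 * 1024;

        var package = new FileInfo(@"C:\deploy\MyService.cspkg"); // hypothetical path
        if (package.Exists && package.Length > MaxPackageBytes)
        {
            Console.WriteLine(
                "Package exceeds the 600 MB deployment limit: {0} bytes.",
                package.Length);
        }
    }
}
```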

Klint Finley reported COBOL's Not Dead. Make it Play Nice with the Modern Enterprise in a 1/20/2011 post to the ReadWriteEnterprise blog:

COBOL, one of the oldest programming languages created, is often thought to be as dead as Latin. Yet, as we reported in October, COBOL remains a consistently in-demand skill for enterprise developers. Why? Maybe because, according to an announcement from application modernization company Micro Focus, there are still 220 billion lines of COBOL code in active use in enterprise applications today. Micro Focus claims that COBOL still powers 70% of the world's businesses.

How do you deal with all that legacy code? One way is to re-write everything. But Micro Focus offers another solution - and no, it's not COBOL on Cogs. Micro Focus offers a tool called Visual COBOL that enables developers to deploy COBOL code to Windows, UNIX, Linux, .NET, the Java Virtual Machine (JVM) and Windows Azure. Visual COBOL translates COBOL code directly to the JVM, allowing developers to integrate COBOL applications with Java.

Visual COBOL screenshot

Visual COBOL integrates with Microsoft Visual Studio and Eclipse to give developers a modern programming environment for COBOL.

From Micro Focus's announcement:

"There is an ongoing shift towards examining the impact and specific role of the chosen language in conjunction with selecting a platform," said Mark Driver, Research Vice President of Gartner Research. "Taking COBOL to new platforms like .NET, JVM or the cloud supports a growing trend toward developers choosing the best language for the job, independent of the choice of best deployment platform to use."

The obvious advantage is that this is much quicker and cheaper than re-writing all these legacy applications from scratch. The disadvantage is that there will still be a lot of COBOL lurking around your enterprise. But that might not be such a bad thing.

Some more interesting tidbits from the announcement:

  • COBOL systems are responsible for transporting up to 72,000 shipping containers, caring for 60 million patients, processing 80% of point-of-sales transactions and connecting 500 million mobile phone users.
  • It has been estimated that the average American relies on COBOL at least 13 times during the course of a routine day as they place phone calls, commute to and from work, and use credit cards.
  • Around 5 billion lines of new COBOL code are added to live systems every year.
  • There are over 200 times more transactions processed by COBOL applications than Google searches each day.

Doug Rehnstrom posted yet another Windows Azure Training – Creating Your First Azure Project to the Learning Tree blog on 1/20/2011:

This post will walk you through creating a Windows Azure application. It assumes that you have Visual Web Developer 2010 Express already installed (or Visual Studio 2010), along with the Azure Tools for Visual Studio. If you don’t, see my previous post Windows Azure Training – Setting up a Development Environment for Free.

You’ll create a simple ASP.NET application that just squares a number.

Creating a Windows Azure Project in Visual Studio

Start Visual Web Developer 2010 Express (or Visual Studio), and then select the File | New Project menu. The New Project dialog will open. Expand Visual C#, in the Installed Templates tree view, and then click on the Cloud branch. From the list of templates on the right, select Windows Azure Project.

Visual Basic would work just as well, but the code later on is in C#.

Give your project the name “SquareANumberAzureProject”. Also, set a location where you want to save the project. Then, click the OK button. See the screen shot below.

The New Windows Azure Project dialog will open. Select ASP.NET Web Role from the list on the left, and click on the right arrow to move it to the list on the right. Hover over the Web Role and a pencil icon will appear. Click on that icon, then change the name of the role to “SquareANumber.Web”. When done, click OK. See the screenshot below.

Editing an Azure Web Role in Visual Studio

Your Windows Azure project will be created, and will open in Visual Studio. You should see the Solution Explorer window on the right.

Two projects were created, an ASP.NET Web project and an Azure service project. The Azure Web application is edited the same way as any other ASP.NET application. The Azure service project is used to configure the application, and to create a deployment package when you are ready to upload the application to the cloud (we’ll cover that in a later post).

Let’s make the application do something. In Solution Explorer, double-click on the file Default.aspx to open it in the code editor. At the bottom of the code editor, click the Design button to see the graphical editor. Delete the existing content, and then drag a TextBox, Button and Label onto the Web page from the toolbox. The page should look similar to the screenshot below.

You need to write some code to take the number entered in the text box, square it, and put the answer in the label. Double-click on the button, and an event handler will be created. The code below will work well enough for this program.
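A handler along these lines will do the job (the control names TextBox1, Button1 and Label1 are the designer defaults and are an assumption; your project may differ):

```csharp
// Hypothetical click handler: square the number entered in TextBox1
// and display the result in Label1.
protected void Button1_Click(object sender, EventArgs e)
{
    double number;
    if (double.TryParse(TextBox1.Text, out number))
    {
        Label1.Text = (number * number).ToString();
    }
    else
    {
        Label1.Text = "Please enter a number.";
    }
}
```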

Running your Azure Project using the Windows Azure Platform Emulator

From the Visual Web Developer toolbar, click the green run button, or select the Debug | Start Debugging menu. You’ll likely get the error shown below.

Save your work and then close Visual Web Developer. To run Visual Web Developer with elevated privileges, right-click on its icon in the Start menu and select Run as administrator. See the screenshot below.

Once Visual Web Developer starts, open your project and try to run it again. It will take a little while, but your program should start and you will be able to square some numbers! The reason it takes a little while for the program to start is that it is running in your development machine’s local Windows Azure Platform Emulator. You should be able to see the emulator’s icon in the Windows taskbar on the lower right side of your screen. It’s the one that looks like a blue Windows logo, as shown below.

Right-click on the Azure emulator’s icon, and then select Show Compute Emulator UI. This will open a form that displays the Azure services running on your machine, and their status. See the screen shot below.


Visual Studio and the Azure tools make creating Windows Azure Web applications easy. The Windows Azure Platform Emulator makes it easy to test Azure applications on your local development machine. In a later post, I’ll cover deploying this application to the cloud.

To learn more about Windows Azure, come to Learning Tree course 2602, Windows Azure Platform Introduction: Programming Cloud-Based Applications.

Avkash Chauhan described a workaround for a WinHttpGetProxyForUrl(***) failed ERROR_WINHTTP_AUTODETECTION_FAILED (12180) - Error message in Windows Azure Infrastructure log on 1/20/2011:

When you have a diagnostics-enabled application running on Windows Azure, you might see your infrastructure logs filled with "ERROR_WINHTTP_AUTODETECTION_FAILED (12180)" errors related to Windows Azure storage access. Most of these errors appear in the infrastructure logs when you access a table, blob or queue within your Windows Azure storage account.

The error message looks like this:

WinHttpGetProxyForUrl(http://<your-Windows-Azure-Storage-Name> failed ERROR_WINHTTP_AUTODETECTION_FAILED (12180)



Even though you will see these errors in your infrastructure logs, you will not see any functional problem within your application.

These error messages come from the monitoring agent and do not indicate a problem on their own. They just mean that there is no auto proxy set up, so the agent defaults to the static proxy, which is normal in a role environment and not a cause for concern. You can safely disregard them.

Hopefully the new SDK will have something to reduce these errors to a minimum.

Nigel Parker (@nzigel) published Using Windows Azure with Windows Phone 7, HTML5 and iPhone to his MSDN blog on 1/18/2011:

Since the launch of Windows Phone 7, and while I was down at the iDev conference in Christchurch, I have come across more and more people looking for ways to create scalable backend services and sync solutions across a number of different client platforms. This got me thinking about what Windows Azure can potentially deliver to the solution.

The first area that piqued my interest is the CTP of Sync Framework 4. This framework enables synchronization of data stored in SQL Azure to a range of different clients including Silverlight, Windows Phone 7, HTML5, iPhone and Android. Check out the videos Introduction to SQL Azure Data Sync and Building Offline Applications using Sync Framework and SQL Azure.

In addition to this Steve Marx presented a session at PDC titled Building Windows Phone 7 Applications with the Windows Azure Platform. He has written a blog post that shares his code to DoodleJoy, a Windows Azure application that hosts an OData service and a Windows Phone 7 application that consumes it.

Since getting my Windows Phone 7 I have particularly enjoyed applications like AlphaJax that take advantage of the notification service on the device to let you know when your friends have played their turn. To leverage this functionality you need to deploy and host your own backend services, and Windows Azure can be a good solution for hosting them. Check out this article Windows Phone 7 – Toast Notification Using Windows Azure Cloud Service to find out more on how to do this.

It is great to see people starting to put all these pieces together. Jason Hunter from Gen-i took the challenge of scoping, building and deploying an application for WP7 using Windows Azure over the holiday break. He writes about his experiences on his blog.

How can you get on Windows Azure?

Do you have a registered company that is:

  • Developing a software product or service?
  • Privately held?
  • Less than three years old?
  • Making less than US $1M annually?

If so, you may like to sign-up for Microsoft BizSpark (it’s free!)

If you are already an MSDN subscriber (either personally or through your company's partner benefits) you can activate your Windows Azure platform benefits, valued at an estimated $1,800 USD per year per developer.

If you are not an MSDN subscriber there is a free introduction offer to try Windows Azure to dip your toes in.

See also the six 2010 posts by the Sync Framework Team starting with LarenC described Clarifying Sync Framework and SQL Server Compact Compatibility on 12/16/2010 in the SQL Azure Database and Reporting section above.

Ralph Squillace described his forthcoming Cloud Data: A Newbie Series -- The issues in a 1/3/2011 post (missed when published):

It's been an interesting couple of years for me -- I've been assigned to two different projects that have failed to find their way to RTM as originally conceived -- which cut my blog writing down a bit. It happens; I've been working most recently on some internal tools and demos designed to make use of the Microsoft cloud services and client technologies we have to make data accessible on any platform, always (or almost-always) available, reasonably efficient, and secure to the level that the value of the data requires.

Of course, that's the market-speak way of putting it; in point of fact, I wanted to build an application that makes use of private data, shares the appropriate amount publicly, and can be used from anywhere so that customers -- including me -- could use it the way that they wanted. This is a dumb, very normal idea -- get some data and build an app that actually uses it in a helpful way.

I'm actually trying to build two applications, but one is a tad behind schedule so I'll wait on that one. In any case, the first one required cutting the teeth a bit on a series of application technologies, assembled as loosely built cloud components using the following technologies:

  • Microsoft SQL Server 2008 and Express Edition with Advanced Services
  • Microsoft Linq2SQL -- a data access technology which is not our most supported technology, so far as I can see
  • Microsoft Entity Framework (EF)
  • SQL Azure
  • Windows Azure
  • -- a wonderful application for working with data in the cloud or locally
  • Visual Studio 2010
  • WPF
  • Windows Phone 7 with Silverlight
  • OData
  • WCF
  • WCF Data Services
  • Windows Azure DataMarket

I've touched each one of these items in order to create a working set of applications that can view, query, print, or send Microsoft technical and developer documentation topics based on internal metadata that is not used by search engines such as Google or Bing because they do not have access to it. The goal of this system, ultimately, is to enable any customer to find Microsoft information much faster than the global search engines can -- that is, you won't google or bing your question because you'll be able to locate the precise information you want based on query information the global services do not have.

Another way of saying this is that the applications aim to do one thing: unblock your work faster than any other current way within a specific problem domain. This isn't a global effort; I'm trying to keep the domain scope limited to data-centric .NET development with SQL Server 2008. But if your work touches this area, the idea is that the apps should be able to beat every other mechanism of finding your answer. If I can do that within a small domain, we can see where we can take that in larger problem spaces.

This is my gig; but along the way, I've touched so many different technologies that what I hope will end up being of help is the code that shows how to work with different technologies together -- you might not care for my specific effort. You have your own. :-)

<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure

Mario Meir-Huber posted “Part 1: A brief overview of what’s possible” on 1/21/2011 as the first in a Windows Azure Series – Introduction to Windows Azure:

I've done a Windows Azure Series together with Mario Szpusta, Software Architect Evangelist at Microsoft Austria, on Windows Azure. The series is in German and I decided to bring it to Cloud Computing Journal. I will modify some of the articles to bring them up-to-date. This series will cover about 15 articles that will be published once a week.

What Is Windows Azure?
Windows Azure is Microsoft's Platform as a Service offering for Cloud Computing. There are three major fields in Windows Azure. Figure 1 provides an overview of the platform.

Figure 1

The three fields are Windows Azure, SQL Azure and Windows Azure AppFabric. There are several other services that are part of each field. Windows Azure is a platform for web applications, SQL Azure is a comprehensive database in the cloud based on Microsoft SQL Server, and Windows Azure AppFabric contains enterprise techniques such as the service bus or access control. The name for Microsoft's cloud platforms is "Azure Services Platform" and Windows Azure is one part of this platform. Nevertheless, most people say Windows Azure and mean all three parts. Let's continue by examining each part of the Azure Services Platform.

Windows Azure
Windows Azure is the platform that allows us to build applications for the cloud. There are several components such as compute, storage, and content delivery. Figure 2 provides an overview of Windows Azure.

Figure 2

Windows Azure Compute allows developers to build cloud-based applications. There are three major roles: WebRole, WorkerRole and VMRole. The WebRole is designed for building web applications on top of Windows Azure; possible frameworks and tools include ASP.NET, ASP.NET MVC, and FastCGI-based platforms such as PHP. The WorkerRole is designed for high-performance tasks such as background processing; a WorkerRole might process tasks from a website (WebRole) to decouple applications. The Windows Azure VMRole allows users to upload their own images (virtual hard drives, VHDs) to the cloud, which enables enterprises to run existing servers in the cloud.

Another major part of Windows Azure is storage. Storage contains three elements: Table Storage, Blob Storage and the Message Queue. I personally like the Table Storage a lot as it's a NoSQL store that allows enterprises to store large amounts of data without the side effects of relational databases. The Blob Storage is designed to store large binary objects such as videos, images or documents. Finally, the Message Queue is designed to enable messaging between components. It's useful for scalable and distributed applications in the cloud.
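The WebRole/WorkerRole decoupling via the Message Queue described above can be sketched in a few lines. This is a hedged illustration only: the in-memory queue and the function names are stand-ins, and a real Windows Azure application would use the Queue Storage API from a .NET role.

```python
import queue

# Stand-in for the Windows Azure Message Queue: the web role enqueues
# work items, and one or more worker roles drain the queue independently.
task_queue = queue.Queue()

def web_role_enqueue(task_id, payload):
    """Simulates a WebRole handing off a long-running task."""
    task_queue.put({"id": task_id, "payload": payload})

def worker_role_drain():
    """Simulates a WorkerRole processing queued tasks in the background."""
    results = []
    while not task_queue.empty():
        task = task_queue.get()
        results.append((task["id"], task["payload"].upper()))  # pretend work
        task_queue.task_done()
    return results

web_role_enqueue(1, "encode video")
web_role_enqueue(2, "resize image")
print(worker_role_drain())  # → [(1, 'ENCODE VIDEO'), (2, 'RESIZE IMAGE')]
```

The point of the pattern is that the web tier never waits on the work: it returns to the user immediately, and the workers scale independently of the front end.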

The Windows Azure Virtual Network was launched at PDC 2010 in Redmond. There might be several additions to the product in the future. So far, Windows Azure Virtual Network contains one sub-product called "Windows Azure Connect." Windows Azure Connect allows direct IP-connections between the cloud and on-premise datacenters. This targets interoperability between existing platforms and future cloud platforms. A great feature of Windows Azure Connect is the Active Directory integration. Many companies use Active Directory for their rights management so this gives cloud-based solutions the opportunity to use existing rights for users in the cloud.

The Content Delivery Network is already known by the names "Windows Update" or "Zune Marketplace." It basically replicates data closer to the end user in different regions. The Content Delivery Network is combined with Windows Azure Storage and built for high-performance content delivery in various regions. The Content Delivery Network can be used to stream videos, and distribute files or other content to end users in a specific region.

Currently (January 2011) the newest product within Windows Azure is the Windows Azure Marketplace, which allows developers and vendors to sell their products online via the app market. Another great product is Windows Azure Marketplace DataMarket, which allows companies to buy and sell data. This data can be used easily in different applications.

SQL Azure
SQL Azure is Microsoft's relational database in the cloud. It's based on SQL Server 2008. Figure 3 gives an overview of each part of SQL Azure.

Figure 3

SQL Azure is another PaaS offering by Microsoft that is built on top of SQL Server Technologies. The main product is SQL Azure Database, a relational database in the cloud. The benefit of this is that there is no maintenance or installation required. SQL Azure also takes care of the scaling and partitioning. The thing I like most about SQL Azure is how easy it is to calculate the costs compared to other databases.

SQL Azure DataSync is built on top of the Sync Framework. Its main target is to enable data synchronization between different datacenters. SQL Azure Reporting adds reporting and BI functionality to SQL Azure. These two products are not yet commercially available (January 2011) but can be used as a preview.

Windows Azure AppFabric
Windows Azure AppFabric is a middleware for the cloud. It can be used to integrate existing applications and to allow interoperability. Windows Azure AppFabric is also very useful for hybrid cloud solutions.

Figure 4

There are currently five different products in Windows Azure AppFabric. The AppFabric Service Bus serves as a reliable means of messaging and service discovery in the cloud. Windows Azure Access Control allows users to authenticate with different credentials such as Facebook, Google, Yahoo and Windows Live, as well as enterprise authentication such as Active Directory.

Caching is often a problem in enterprise applications. If an application needs to scale over more instances, caching is often a bottleneck that might cause some negative side effects. With Windows Server 2008 AppFabric, caching was introduced to solve this problem. This is now also integrated into Windows Azure and solves caching problems between Windows Azure and SQL Azure that might occur with large-scale systems. Integration allows users to integrate existing BizTalk Server Tasks into Windows Azure. Last but not least there are composite applications that can be used to deploy distributed systems based on the Windows Communication Foundation and the Workflow Foundation.
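The caching bottleneck described above is usually addressed with a cache-aside pattern: the shared cache sits between the application instances and the database, the way the AppFabric caching service sits between Windows Azure roles and SQL Azure. A minimal sketch, with invented names and an in-memory dictionary standing in for both the cache and the database:

```python
cache = {}
database = {"user:1": "Alice", "user:2": "Bob"}  # stand-in for SQL Azure
db_reads = 0

def get(key):
    """Cache-aside read: serve from cache when possible, otherwise
    read through to the database and populate the cache."""
    global db_reads
    if key in cache:            # cache hit: no database round-trip
        return cache[key]
    db_reads += 1               # cache miss: read through to the database
    value = database.get(key)
    cache[key] = value
    return value

get("user:1"); get("user:1"); get("user:2")
print(db_reads)  # → 2 (the second read of user:1 was served from cache)
```

In a large-scale system the win is that every role instance shares one cache, so adding instances doesn't multiply the load on the database.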

There are a lot of products that come with Windows Azure. In the next couple of months I'll dig much deeper. This article is a brief overview of what is possible. The next article will focus on the Windows Azure Platform, where I'll describe the functionality of Windows Azure and the storage capabilities.

•   •   •

"This article is part of the Windows Azure Series on Cloud Computing Journal. The Series was originally posted on, the official Blog of the Developer and Platform Group at Microsoft Austria. You can see the original Series here:"

The Windows Azure Team suggested that you Watch Career Factor, A New Reality Show That Follows 9 IT Personalities As They Advance Their Careers on Windows Azure and other Microsoft Products on 1/21/2011:

Here's something very cool:  a new real-time online reality show called Career Factor has just debuted that will tell the stories of nine real individuals around the world, each working to improve their IT careers.  During the next five months, each will be working toward a career goal with the help of Microsoft, its partners, and the global IT community.  One of the people you can follow is Neil [Simon (@neilnasty)], a Windows Azure developer from Dublin, Ireland who wants to learn how to develop an application on Windows Azure and SQL Azure.

Each candidate has a personal page where you can explore their backgrounds, learn from their experiences, and share your findings. You'll find videos, links to learning resources, tips, and updates via their blog and various social media outlets. So take a look around, find some common ground and get inspired by Career Factor.

Reality videos for geeks?

David Chappell’s 17-page Windows Azure and HPC Server whitepaper v1.22 became available for downloading as a *.docx file on 1/20/2011 (site registration with Windows Live ID required). From the first section:

High-Performance Computing in a Cloud World

The essence of high-performance computing (HPC) is processing power. Whether implemented using traditional supercomputers or more modern computer clusters, HPC requires having lots of available computing resources.

Windows HPC Server 2008 R2 Suite lets organizations provide these resources with Windows Server machines. By letting those machines be managed as a cluster, then providing tools for deploying and running HPC jobs on that cluster, the product makes Windows a foundation for high-performance computing. Many organizations today use Windows HPC Server clusters running in their own data centers to solve a range of hard problems.

But with the rise of cloud computing, the world of HPC is changing. Why not take advantage of the massive data centers now available in the cloud? For example, Microsoft’s Windows Azure provides on-demand access to lots of virtual machines (VMs) and acres of cheap storage, letting you pay only for the resources you use.

The potential benefits for HPC are obvious. Rather than relying solely on your own on-premises cluster, using the cloud gives you access to more compute power when you need it. For example, suppose your on-premises cluster is sufficient for most—but not all—of your organization’s workloads. Or suppose one of your HPC applications sometimes needs more processing power. Rather than buy more machines, you can instead rely on a cloud data center to provide extra computing resources on demand. Depending on the kinds of jobs your organization runs, it might even be feasible to eventually move more and more of your HPC work into the cloud. This allows tilting HPC costs away from capital expense and toward operating expense, something that’s attractive to many organizations.

Yet relying solely on the cloud for HPC probably won’t work for most organizations. The challenges include the following:

  • Like other applications, HPC jobs sometimes work on sensitive data. Legal, regulatory, or other limitations might make it impossible to store or process that data in the cloud.
  • A significant number of HPC jobs rely on applications provided by independent software vendors (ISVs). Because many of these ISVs have yet to make their applications available in the cloud, those jobs can't use cloud platforms today.
  • Jobs that need lots of computing power often rely on large amounts of data. Moving all of that data to a cloud data center can be problematic.

While the rise of cloud computing will surely have a big impact on the HPC world, the reality is that on-premises HPC clusters aren’t going away. Instead, providing a way to combine the two approaches—cluster and cloud—makes sense. This is exactly what Microsoft has done with Windows HPC Server and Windows Azure, supporting all three possible combinations: applications that run entirely in an on-premises cluster, applications that run entirely in the cloud, and applications that are spread across both cluster and cloud.
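The cluster-plus-cloud combination the paper describes comes down to a simple placement rule: run a job on the on-premises cluster while free cores remain, and "burst" the overflow to the cloud. A minimal sketch of that decision (the capacities and job sizes here are invented for illustration, not drawn from the whitepaper):

```python
def place_jobs(jobs, cluster_cores, cloud_available=True):
    """Assign each job (name, cores) to the on-premises cluster until
    capacity is exhausted, then overflow to the cloud if allowed."""
    placements = {}
    free = cluster_cores
    for name, cores in jobs:
        if cores <= free:
            placements[name] = "cluster"
            free -= cores
        elif cloud_available:
            placements[name] = "cloud"   # burst: rent capacity on demand
        else:
            placements[name] = "queued"  # wait for cluster capacity
    return placements

jobs = [("fluid-sim", 64), ("render", 32), ("monte-carlo", 128)]
print(place_jobs(jobs, cluster_cores=96))
# → {'fluid-sim': 'cluster', 'render': 'cluster', 'monte-carlo': 'cloud'}
```

The same rule covers all three combinations: cloud off gives a pure on-premises cluster, a zero-core cluster gives pure cloud, and anything in between spreads work across both.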

This paper describes how Windows HPC Server 2008 R2 and Windows Azure can work together. After a brief tutorial on each one, it walks through the options for combining them, ending with a closer look at SOA applications, one example of the kind of HPC application this combination supports. The goal is to provide an architectural overview of what this technology is and why HPC customers should care. …

Architectural illustrations: (figures omitted)

High-performance computing is entering a new era. The enormous scale and low cost of cloud computing resources is sure to change how and where HPC applications run. Ignoring this change isn’t an option.

Yet the rise of the cloud doesn’t imply the death of today’s clusters. Data issues, hardware limitations, and other concerns mean that some HPC jobs remain best suited for on-premises compute clusters. This reality is why Windows HPC Server supports all three possibilities:

  • Running applications entirely in an on-premises cluster.
  • Running applications entirely in the cloud.
  • Running applications across both an on-premises cluster and the cloud.

David Chappell is Principal of Chappell & Associates in San Francisco, California. Microsoft sponsored the white paper.

Nati Shalom (pictured below) and Joseph Ottinger asserted PaaS shouldn’t be built in Silos in a 1/20/2011 post:

Last year I wrote a post titled Platform as a Service: The Next Generation Application Server?, where I referred to the shift from application servers into platforms.

In my 2011 predictions I referred to a new emerging PaaS category targeted at those looking to build their own custom PaaS offering.

The demand from many SaaS providers to offer their own platform to their ecosystem players (a good example in that regard is Salesforce), as well as the continuing pressure of enterprises on their IT to enable faster deployment of new applications and improve the utilization of their existing resources, will force those organizations to build their own specialized PaaS offerings. This will drive a new category of PaaS platform known as the Cloud Enabled Application Platform (CEAP), specifically designed to handle multi-tenancy, scalability, and on-demand provisioning, but unlike some of the public PaaS offerings it needs to come with a significantly higher degree of flexibility and control.

One of the immediate questions that came up in various follow-up discussions was “how is that different from the Application Servers of 2000?”

I’ll try to answer that question in this post.

The difference between PaaS and Application Servers

Lori MacVittie wrote an interesting post, "Is PaaS Just Outsourced Application Server Platforms?," which sparked an interesting set of responses on Twitter.

Lori summarized the aggregated responses to this question as follows:

PaaS and application server platforms are not the same thing, you technological luddite. PaaS is the cloud and scalable, while application server platforms are just years of technology pretending to be more than they are.

Now let me try to expand on that:

The founding concept behind application servers was Three-Tier architecture, where you break your application into a presentation tier, a business-logic tier and a data tier. It was primarily an evolution of Client/Server architecture, which consists of a rich client connected to a centralized database.

The main driver was the internet boom that allowed millions of users to connect to the same database. Obviously, having millions of users connected directly to a database through a rich client couldn’t work. The three-tier approach addressed this challenge by breaking the presentation tier into two parts – a Web Browser and a Web Server – and by separating the presentation and business-logic tiers into two separate parts.

In this model, Clients are not connected directly into the database except through a remote server.

Why is PaaS any different?

A good analogy to explain the fundamental difference between existing application servers and PaaS enablement platforms is the comparison between Google Docs and Microsoft Office. Google came up with a strong collaboration experience that made sharing of documents simple. I was willing to sacrifice many of the other features that Office provided just because of the ease of sharing.

The same goes with PaaS – when I think of PaaS I think of built-in scaling, elasticity, efficiency (mostly through Multi Tenancy) and ease of deployment as the first priority and I’m willing to trade many other features just for that.

In other words, when we switched from desktop Office to its Google/SaaS version we also changed our expectations of that same service. It wasn’t like we took the same thing that we had on our desktop and put it over the web – we expected something different. This brings me to the main topic behind this post:

PaaS shouldn’t be built in Silos

If I take the founding concept of the Application Server – i.e., the tier-based approach – and try to stretch it into the PaaS world, the thing that pops up immediately in my mind is the impedance mismatch between the two concepts.

Tiers are Silos – PaaS is about sharing resources.

Tiers were built for fairly static environment – PaaS is elastic.

Tiers are complex to setup – In PaaS everything is completely automated. Or should be.

Now this is an important observation – even if the individual pieces in your tier-based platform are scalable and easy to deploy, the thing that matters is how well they can:

  1. Scale together as one piece.
  2. Dance in an elastic environment as one piece.
  3. Run on the same shared environment without interfering with one another... as one piece, through multi-tenancy.

Silos are everywhere...

In the application server world, every tier in our infrastructure is built as a complete silo. By "silo," I mean that they are not just built separately, but that each component comes with its own deployment setup, scaling solution, high availability solution, performance tuning and the list goes on...

All that individual deployment and tuning in the context of a single application, well, if you stretch that into multi-tenant application, it becomes much worse.

Silos are not just the components that make our products. Silos are also a feature of those who build the components.

If you think of Oracle Fusion as an example, or IBM for that matter, you’ll see silos. Messaging, data, analytics – each is broken into a separate development group with its own management.

Each component has a different set of goals, different timelines, different egos, and in most cases the teams are even incentivized to keep those silos in place, simply because they are not measured on how well their products work with the other pieces but solely on how well their own thing does its (siloed) job. This leads to an even greater focus on silos – functionality that works as an end in itself and not as a contributor to general application development.

Oracle Fusion is an interesting attempt to break those silos. In the course of creating Fusion, Oracle even came up with an interesting announcement of bringing hardware and software together.

Is "bringing hardware and software together" good enough? It's certainly an attempt but if you look closely you’ll see that most of it is a thin glue that tries to put together pieces that were bought separately through acquisitions and are now packaged together and labeled as one thing.

However, most of it is still pretty much the same set of siloed products. (An Oracle RAC cluster is still very different from a WebLogic cluster, etc.) That doesn’t apply only to Oracle – think of VMware and its recent acquisitions, now labeled under vFabric.

Now imagine how you can scale an Erlang-based messaging system together with the other pieces of their stack as one piece. The sad thing about VMware is that it looks like they went and built a new platform targeted specifically at the PaaS world but inherited the same pitfalls from the old tier-based application server world.

Silos are also involved in the process by which we measure new technology.

Time and time again I see that even those who need to pick or build a PaaS stack go through the process of scoring the different pieces of the puzzle independently, with different people measuring each piece rather than looking at how well those pieces work together.

A good analogy is comparing Apple with Microsoft: Microsoft would probably score better on a feature-by-feature comparison (they can put a checkmark in more places), but the thing that brings many people to Apple is how well all of its pieces work together.

Apple even went farther and refused to put in features that might have been great - just because they didn’t fit the overall user experience. They often win with users because of the scarcity of features, even though it may not look as nice on a bullet list of features.

Sharing is everything.

The thing that makes collaboration so easy in the case of Google Docs is the fact that we all share the same application and underlying infrastructure. No more moving parts. No more incompatible parts.

All previous attempts to synchronize two separate documents through versioning, highlighting and many other synchronization tools pale compared to the experience that we get with Google Docs. The reason is that sharing and collaboration in Office was an afterthought. If we work in silos, we’ll end up… in silos. Synchronizing two silos to make them look as one is doomed to fail, even if we use things like revision control systems for our documents.

We learned through the Google docs analogy that sharing of the same application and its underlying infrastructure was a critical enabler to make collaboration simple.

There is no reason that this lesson wouldn’t apply to our application platforms as well. Imagine how simple our lives could be if our entire stack could use one common clustering mechanism for managing its scaling and high availability. All of a sudden the number of moving parts in our application would go down significantly, along with lots of complexity and synchronization overhead.

We can measure the benefits of having a shared cluster based on the fact that it comes with less moving parts compared with those that are based on Tier based model:

  • Agility – fewer moving parts means that we can produce more features more quickly without worrying whether one piece will break another piece.
  • Efficiency – fewer moving parts means that we’ll have less synchronization overhead and fewer network calls associated with each user transaction.
  • Reliability – fewer moving parts means a lower chance of partial failure and more predictability in the system’s behavior.
  • Scaling – fewer moving parts means that scaling is going to be significantly simpler, because scaling would be a factor of a "part" designed for that purpose.
  • Cost – fewer moving parts means fewer products, a smaller footprint, less hardware, reduced maintenance cost, even less brainpower dedicated to the application.
First Generation PaaS – Shared platform

If we look at the first-generation PaaS platforms such as Heroku, Engine Yard, etc., we see that they did what Oracle Fusion does. They provide a shared platform and carve out a lot of the deployment complexity associated with deploying those multi-tiered applications. (This makes it marginally better than Oracle Fusion, but not good enough... certainly not as good as it could or should be.)


I refer to these as “PaaS wrappers”. The approach that most of those early-stage PaaS providers took was to “wrap” the existing tier-based frameworks into a nice and simple package without changing the underlying infrastructure, to provide an extremely simple user experience.

This model works well if you are in the business of creating lots of simple and small applications but it fails to fit the needs of the more “real” applications that need scalability, elasticity and so forth. The main reason it was built this way was that it had to rely on an underlying infrastructure that was designed with the static tier-based approach in mind. An interesting testimonial on some of the current PaaS limitations can be found in Carlos Ble's post, "Goodbye Google App Engine (GAE)."

Toward 2nd Generation PaaS

In 2010, a new class of middleware infrastructure emerged, aimed at solving the impedance mismatch at the infrastructure layer. They were built primarily for a new, elastic world, with NoSQL as the new elastic data-tier, Hadoop for Big data analytics, and the emergence of a new category (DevOps) aimed at simplifying and automating the deployment and provisioning experience.

With all this development, building a PaaS platform in 2011 will - or should - look quite different than those that were built in 2009.

The second-generation PaaS systems will be those that can address real application loads and provide full elasticity within the context of a single application.

They need to be designed for continuous deployment from the get-go and allow shorter cycles between development and production.

They should allow changes to the data model and the introduction of new features without incurring any downtime.

The second-generation PaaS needs to include built-in support for multi-tenancy but in a way that will enable us to control the various degrees of tradeoffs between Sharing (utilization and cost) and Isolation (Security).

The second-generation PaaS should come with built-in and open DevOps tools that will enable us to gain full automation but without losing control, open enough for customization and allowing us to choose our own OSes and hardware of choice as well as allowing us to install our own set of tools and services, not limited to the services that come out of the box with the platform.

It should also come with fine-grained monitoring that will enable us to track down and troubleshoot our system’s behavior through a single point of access, as if the platform were a single machine.

The second-generation PaaS systems are not going to rely on virtualization to deliver elasticity and scaling but will instead provide fine-grained control to enable more efficient scaling.

At Structure 2010, Maritz said that “clouds at the infrastructure layer are the new hardware.” The unit of cloud scaling today is the virtual server. When you go to Amazon’s EC2 you buy capacity by the virtual server instance hour. This will change in the next phase of the evolution of cloud computing. We are already starting to see the early signs of this transformation with Google App Engine, which has automatic scaling built in, and Heroku with its notion of dynos and workers as the units of scalability.

Final words

Unlike many of the existing platforms, in this second-generation phase it’s not going to be enough to package and bundle different individual middleware services and products (web containers, messaging, data, monitoring, automation and control, provisioning) and brand them under the same name to make them look as one. (Fusion? Fabric? A rose is a rose by any other name – and in this case, it's not a rose.)

The second-generation PaaS needs to come with a holistic approach that couples all those things together and provides a complete, holistic experience. By that I mean that if I add a machine to the cluster, I need to see that as an increase in capacity across my entire application stack; the monitoring system needs to discover that new machine and start monitoring it without any configuration setup; the load balancer needs to add it to its pool; and so forth.
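The "add a machine and everything follows" behavior can be sketched as a single membership event fanned out to every subsystem at once. This is a toy model with hypothetical names, not any particular product's API:

```python
class Cluster:
    """Toy model of a holistic platform: one join event updates
    capacity, monitoring, and the load-balancer pool together."""
    def __init__(self):
        self.capacity = 0
        self.monitored = set()
        self.lb_pool = set()

    def add_machine(self, host, cores):
        self.capacity += cores      # capacity grows for the whole stack
        self.monitored.add(host)    # monitoring discovers the node itself
        self.lb_pool.add(host)      # load balancer adds it to its pool

cluster = Cluster()
cluster.add_machine("node-1", 8)
cluster.add_machine("node-2", 8)
print(cluster.capacity, sorted(cluster.lb_pool))  # → 16 ['node-1', 'node-2']
```

Contrast this with the siloed model, where each of those three updates is a separate manual configuration step owned by a different team.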

Our challenge as technologists is to move out of our current siloed comfort zone. That applies not just to the way we design our application architecture but to the way we build our development teams and the way we evaluate new technologies. Those who are going to be successful are those who design and measure how well all their technology pieces work together before anything else, and who look at a solution without reverence for past designs.


Written by Nati Shalom and Joseph Ottinger

Bruce Guptill asserted Cloud Strategy 101- Be Flexible, Learn, Repeat in a 1/20/2011 post to Saugatuck Technology’s Lens 360 blog:

Recent blog posts and industry media articles have suggested that IBM, among other large, Master Brand IT providers, is “flying blind” when it comes to Cloud business strategy and execution. According to this article, “the computing giant risks squandering its IT leadership position by lacking a clear strategy in this new era of cloud computing.”

No well-established Master Brand such as IBM currently risks squandering a leadership position in a market that is as new and as volatile as Cloud IT is right now. Cloud IT markets in 2011 are the equivalent of desktop personal computer markets in 1979. It’s new; it’s going in multiple directions at once; even the terminology is incomplete and far from standardized. Market leaders today tend to be pioneers, who may or may not be leaders in two to three years. Indeed, considering IT market history, few of today’s early movers in Cloud will even exist in another few years. As the joke about pioneers goes: “You go first, and I’ll kill whatever kills you.”

Not to defend IBM by any means (they can do that well enough themselves), but Cloud strategies from any vendor, especially traditionally-focused IT providers, are usually incomplete - and they are constantly shifting and adapting to map to a shifting and growing marketplace. Cloud-related strategies developed by practically every IT vendor/provider have changed, and will continue to change, as the marketplace matures.

IBM is learning the same Cloud lessons that Microsoft is learning, or that SAP is learning, or that Oracle is learning; it’s learning the same lessons that its own smaller partners are learning as well. It’s learning many of the lessons that Salesforce, OpSource, Amazon, and Google have already learned, and that Saugatuck began educating our clients on years ago.

The key lesson is this: No firm can get to the Cloud and make it work on their own. The Cloud is different; the Cloud is a whole new game; the Cloud is way more complex and expensive than it appears. Moving any business to the Cloud requires new and different business and technology strategies, capabilities, operations, management, and, most importantly, partners. There will be gaps in any firm’s portfolio; there will be gaps in any firm’s strategy. There must be; the nature and nascence of Cloud IT requires it. An overly-specific business or technical strategy is doomed to failure. Of course, any reasonable vendor wishing to continue to service its existing customers MUST have strategies that embrace and enable BOTH Cloud IT and traditional IT, for the future will be one of multiple hybrid, Cloud-plus-on-premises environments.

It’s a very new and amorphous market. The fact that providers exhibit uncertainty, or that they are building on established business models, should not be viewed as anything but normal. And yes, if they drag their feet over a period of years, they risk losing customers and partners. There needs to be some urgency to catch up; but there is (and should be) much more urgency to not mess up.

Full disclosure: IBM, along with many of the major traditional and Cloud IT providers, is a client of Saugatuck Technology Inc.

No significant articles today.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Jake Sorofman asserted Consider the Source: The Myth of the False Cloud in a 1/21/2011 post to the HPC in the Cloud blog:

At a very young age, my mother told me to consider the source—to never take authoritative pronouncements at face value. I hadn’t thought much about this back in those more innocent days, but as my cynicism has matured, I’ve learned that behind such declarations is often an agenda that’s hidden only in vain.

The worst offending is often the impossibly pithy—slogans that are a little too packaged and pristine to represent any authentic truth. If it sounds too neat and convenient, chances are it’s an agenda-backed instrument of manipulation.

A good example of this is the cautionary refrain from Marc Benioff, CEO of Salesforce.com; and Werner Vogels, CTO of Amazon Web Services:

“Beware of the false cloud!”

These gentlemen are visionaries and revolutionaries in their own rights; they have a great deal of credibility and they’ve made contributions to IT that will be remembered for many decades. But, on this particular topic, they’re anything but credible. As the titans of the public cloud, they have an obvious axe to grind.

As Public Enemy protested: “Don’t believe the hype!”

As their argument goes, if you own the hardware, it’s not a cloud.

I’m not buying it. The private cloud is anything but illegitimate.

Here’s why:

Cloud is About Agility

IT used to be your cable company. It held the local monopoly for IT services. The wait you experienced was a frustrating, but necessary part of your relationship with IT. You had no choice. But now you do. Public cloud has ended the wait.

Why wait three months when three minutes will do?

That question has captured the attention of IT leadership, which realizes that the public cloud has dramatically changed performance expectations for IT.

Traditionally, the CIO was expected to improve performance incrementally year over year; last year’s metrics were next year’s benchmarks and your goal was to ensure the curve was moving in the right direction. Today, the expectation is for a radical transformation in agility and responsiveness—from months to minutes.

Amazon can do it. Why can’t you?

But this question isn’t unleashing a wholesale migration of enterprise workloads to the public cloud; it’s the impetus for the private cloud transformation.
Private Cloud is the Entry Point for Enterprise IT

I’ve yet to see an analyst projection that doesn’t point to the private cloud as the beachhead for enterprise IT organizations making this transformation.

Three reasons that is the case:

  • Compliance—Private cloud mitigates the security and privacy issues related to regulated workloads running outside the firewall.
  • Culture—Private cloud is better aligned to the command-and-control cultures and expectations of enterprise IT organizations.
  • Economics—Private cloud leverages existing infrastructure investments and it is significantly more cost effective for long-running workloads. It turns out that “pennies an hour” compounds remarkably fast!

Private Cloud Was the Entry Point for Amazon EC2...

That’s right: Before Amazon EC2 was a public cloud it was a private cloud!

Why? For the same reasons enterprise IT organizations are building private clouds today: Flexibility and agility.

So, to call the private cloud illegitimate isn’t just illogical; it’s a little hypocritical.

It’s important to acknowledge that all of this hullaballoo around clouds may be an issue of semantics. Ultimately, it doesn’t much matter if you call this internal elastic infrastructure a cloud, a grid or some other such thing.

What matters is the acknowledgement that enterprise IT organizations must become self-service on-demand providers of infrastructure, platforms and applications. In the future, IT must look like a public cloud in its own right.

It’s also important to acknowledge that private clouds are the starting point on this journey and not necessarily the final destination. Most enterprises will want to blend together a variety of internal and external resources to create an integrated “hybrid cloud” that allows workloads to be dynamically retargeted to optimize for price, policy, performance and various service level characteristics.

Of course, this argument deserves one final acknowledgement: I, too, have my own biases and my own axe to grind. I’m pretty sure we all do.

So, don’t take my pronouncements at face value. Consider them as one perspective in forming your own version of the truth.

Jake is chief marketing officer for rPath.

<Return to section navigation list> 

Cloud Security and Governance


No significant articles today.

<Return to section navigation list> 

Cloud Computing Events

Nancy Medica (@NancyMedica) announced a How to scale up, out or down in Windows Azure – Webinar to be held by Common Sense on 1/26/2011 from 9:00 to 11:00 AM PST:

Learn how to achieve:

  • Scalability: linear scale, scale up vs. scale out, choosing VM sizes
  • Storage: cache
  • Elasticity: scaling out, scaling back, and automating scaling


Presented by Juan De Abreu, Common Sense Vice President. Juan is also Common Sense's Delivery and Project Director and a Microsoft Sales Specialist for Business Intelligence. He has nineteen years of Information Systems development and implementation experience.

Follow the event on Twitter #CSwebinar

Pablo Castro will present an Open Data: Designing Data-centric Web APIs session at the O’Reilly Strata Conference on 2/3/2011 at 10:40 AM in the Mission City M room of the Santa Clara Conference Center, Santa Clara, CA:

Challenges around open data come from many angles, from licensing to provenance to change management. We focus on one in particular: how to create Web APIs for open data that encourage new creative uses of data sets and do not get in the way.

In this session we’ll discuss key aspects of data-centric service interfaces, such as the huge power of simply applying HTTP and URLs to address parts of data sets with the appropriate granularity, the tough balance between allowing clients to ask very specific questions and ensuring that servers can still take the load, and how uniformity in data service interfaces across the industry can bring a huge opportunity to the table.

During the session we’ll share our various experiences designing the Open Data Protocol (OData), where we have many examples of ideas that worked great, of ideas that did not work at all, and of live services that have been using the protocol for a while to share data on the Web.
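As a concrete illustration of addressing parts of a data set "with the appropriate granularity" through plain HTTP and URLs, the sketch below composes OData-style query options onto a hypothetical service root. The URL and entity set are made up for illustration, and the snippet only builds URLs; it never contacts a live service:

```python
from urllib.parse import urlencode

SERVICE_ROOT = "https://example.org/odata.svc"  # hypothetical service root

def odata_url(entity_set, **options):
    """Compose an OData-style query URL from $-prefixed system options."""
    query = urlencode({f"${k}": v for k, v in options.items()}, safe="$")
    return f"{SERVICE_ROOT}/{entity_set}" + (f"?{query}" if query else "")

# Three levels of granularity, all expressed in the URL itself:
print(odata_url("Products"))                                        # whole collection
print(odata_url("Products", filter="Price gt 20", orderby="Name"))  # filtered slice
print(odata_url("Products", top="10", skip="20"))                   # a single page
```

The design point is that the URL itself carries the question being asked, so any HTTP client can address a collection, a filtered slice, or one page of results without a special-purpose API.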


Pablo Castro, Microsoft

Pablo is a software architect at Microsoft where he works on various topics that bring data and the Web together. He currently leads the design effort for the Open Data Protocol (OData) and is also involved on the design of the local data storage features in future versions of HTML, representing Microsoft in the WebApps working group at W3C.

Bill Zack reported in a 1/20/2011 post to his Ignition Showcase blog that Microsoft Management Summit 2011 will take place 3/21 through 3/25/2011 in Las Vegas NV:

Microsoft Management Summit (MMS) 2011 will empower you with the insight, solutions and networking opportunities to succeed in your job today while preparing for the future. MMS is the premier event of the year for deep technical information and training on the latest IT management solutions from Microsoft, Partners, and Industry Experts.


Whatever your IT management specialty, you’ll gain critical new knowledge and solutions at MMS 2011. Microsoft experts and industry leaders will present more than 150 sessions offering something for everyone, from executive keynotes to demo-packed technical sessions covering new solutions and best practices.

Presentation tracks include:

  • Cloud Management [Emphasis added.]
  • Client Management Technologies
  • Community
  • Microsoft IT
  • Operations Management
  • Security & Compliance Management
  • Server Management Technologies
  • Solution Accelerators
  • Systems Management
  • Virtualization

Reserve your place at Microsoft Management Summit 2011: Register Today!

The Cloud Management track offered 19 sessions as of 1/20/2011, many of which relate directly to the Windows Azure Platform. Early-bird registrations close after 2/14/2011.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Ayende Rahien (@ayende) offers a sneak peek at RavenDB message queues in his RavenMQ update of 1/21/2011:

It wasn’t planned so much as it happened, but RavenMQ just slipped into private beta stage. The API is still in a somewhat clumsy state, but it is working quite nicely.

You can see an example of the client API below:

using (var connection = RavenMQConnection.Connect("http://reduction:8181"))
{
    connection.Subscribe("/queues/abc", (context, message) =>
        Console.WriteLine(Encoding.UTF8.GetString(message.Data)));

    connection.PublishAsync(new IncomingMessage
    {
        Queue = "/queues/abc",
        Data = Encoding.UTF8.GetBytes("Hello Ravens")
    });
}

Please note that this is likely to be subject to many changes.

This is written about 10 minutes after I posted the code above:

using (var connection = RavenMQConnection.Connect("http://localhost:8181"))
{
    connection
        .Subscribe<User>("/queues/abc", (context, message) => Console.WriteLine(message.Name));

    connection.StartPublishing
        .Add("/queues/abc", new User { Name = "Ayende" })
        .PublishAsync();
}


I told you it would change…

P.J. Jakovljevic posted an analysis of salesforce.com’s Database.com in Dreamforce 2010: Of Cloud Proliferation – Part 2 of 1/20/2011:

Part 1 of this blog series talked about my attendance at Dreamforce 2010, salesforce.com’s annual user conference, which has over the past several years become a highly anticipated and entertaining end-of-the-year fixture for enterprise applications market observers. My post concluded that while Dreamforce 2009 was mostly about the continued growth of the cloud computing trailblazer and the unveiling of Salesforce Chatter, the company’s nascent social platform and collaboration cloud (as duly covered by my blog series), the overall Dreamforce 2010 theme was cloud proliferation (and salesforce.com’s further diversification).

In his blog post, Louis Columbus states that at the center of Dreamforce 2010 was the transformation of salesforce.com into an enterprise cloud platform provider, starting with endorsing open application programming interfaces (APIs) including REST (Representational State Transfer), which its developer community had reportedly been requesting for over a year. Moreover, after realizing the proprietary nature of its Force.com cloud platform (and its Apex code), CEO Marc Benioff and his co-founder Parker Harris have recently decided to decouple Force.com into a more open application layer for platform as a service (PaaS) purposes and a database layer for providing infrastructure as a service (IaaS).

Database in the Cloud

The latter layer was named Database.com, which should help to attract Java, Ruby on Rails, PHP, Perl, etc. developers, who have likely perceived Force.com (Apex) as non-mainstream technologies. When it becomes generally available (GA) in early 2011, customers will be able to use Database.com as the back-end for both on-premises and cloud-based applications. Open APIs will also provide compatibility for applications developed for smartphones and mobile devices.

We all learned at the Dreamforce 2010 media lunch that the “Database.com” name was Benioff’s failed entrepreneurial attempt between his tenure at Oracle and his salesforce.com run (sometime in 1999). Indeed, in light of salesforce.com’s resounding success, it is difficult to imagine Benioff failing, but it apparently happens to the best of us. At least the dormant domain name will come in handy now, since Database.com or “Database as a service (DaaS)” might be just what the doctor ordered for mobile and social applications, given that traditional on-premises databases are not easily accessible because they are usually behind firewalls.

Cloud database is not salesforce.com’s invention per se, given the existence of Amazon’s SimpleDB, FathomDB, LongJump, Xeround, FlatDb by Google, and Microsoft’s ongoing work on SQL Azure. But most of these offerings are only good for creating a few very large database tables (so-called “fat tables”) to be stored in the cloud.

Conversely, Database.com is an enterprise-class relational database technology for a vast number of tables, relations, database normalizations, etc. It is largely built on top of Oracle’s relational database technology, although salesforce.com has added a number of other traditional on-premises databases to its datacenter portfolio: the Dreamforce 2010 demo featured a recruiting application on top of a Progress OpenEdge database. The cloud database offering can be hosted on multiple IaaS offerings, including Amazon’s Elastic Compute Cloud (EC2), Microsoft Windows Azure, and salesforce.com’s own infrastructure.

Benioff asserted that the scalability offered by the cloud, together with Database.com’s affordable pricing, will make it an attractive proposition for salesforce.com’s customers. Some observers have suggested that the offering would put salesforce.com in competition with Oracle, IBM, Progress, and other on-premises database providers.

In truth, it rather makes the vendor a reseller of these on-premises databases. As a matter of fact, during the media lunch at Dreamforce 2010, Steve Fisher, the chief architect at salesforce.com, encouraged Oracle and IBM to offer their databases in the cloud, and even had a few good words for what Microsoft has been trying to do with SQL Azure (although one should keep in mind that SQL Azure and its on-premises SQL Server counterpart will seemingly not share the same code base). …

P. J., who’s an equal opportunity analyst with Technology Evaluation Centers, continues with commentary about Chatter and the Heroku acquisition.

The Apache Foundation Incubator announced an updated Road Map for Apache Whirr on 1/16/2011 (missed when published):

This page lists high-level milestones for Whirr development in rough order of priority, and with tentative release versions. If you are interested in helping out on any of these, please comment on the relevant JIRA or email the developers list.

  • Add more services
  • Support more cloud providers such as Terremark and GoGrid.
  • Allow Whirr to use Chef for configuration management (WHIRR-49).
  • Support EBS volumes for Hadoop.
  • Use Linux Containers to run services in, for testing. A future version of jclouds will support this, or a similar feature, with libvirt.
  • More mock object testing.
  • Support EC2 Cluster Compute Groups for Hadoop (WHIRR-63).
From the WhirrDesign Page: Goals
Whirr is cloud-neutral

Whirr should provide a cloud-neutral API for running services in the cloud. The choice of cloud provider should be a configuration option, rather than requiring the user to use a different API. This may be achieved by using cloud-neutral provisioning and storage libraries such as jclouds, libcloud, fog, or similar. However, Whirr's API should be built on top of these libraries and should not expose them to the user directly.

In some cases using cloud-specific APIs may be unavoidable (e.g. EBS storage), but the amount of cloud-specific code in Whirr should be kept to a minimum, and hopefully pushed to the cloud libraries themselves over time.

Whirr prefers minimal image management

Building and maintaining cloud images is a pain, especially across cloud providers where there is no standardization, so Whirr takes the approach of using commonly available base images and customizing them on or after boot. Tools like cloudlets look promising for maintaining custom images, so this is an avenue worth exploring in the future.

Whirr hides provisioning scripts

Whirr doesn't mandate any particular solution. Runurl is a simple solution for running installation and configuration scripts, but some services may prefer to use Chef or Puppet. The details should be hidden from the Whirr user.

Whirr provides a common service API

The Whirr API should not be bound to a particular version of the API of the service it is controlling. For example, you should be able to launch two different versions of Hadoop using the same Whirr API call, just by specifying a different configuration value. This is to avoid combinatorial explosion in version dependencies.

Of course, having launched a particular version of a Hadoop cluster, you will need the correct version of the Hadoop library to interact with it, but that is the client's concern, not Whirr's. We see this in the tests for Whirr - they have dependencies on particular versions of Hadoop, while the Whirr core library has no dependency on Hadoop.
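The "different version through the same API call" goal can be sketched as follows. The property names and the launcher itself are hypothetical stand-ins for illustration, not Whirr's actual configuration schema or API:

```python
def launch_cluster(config):
    """Toy launcher: which service, version and provider to run comes
    entirely from configuration, never from a different API call."""
    service = config.get("service", "hadoop")
    version = config.get("version", "latest")
    provider = config.get("provider", "ec2")
    return f"launching {service}-{version} on {provider}"

base = {"service": "hadoop", "provider": "ec2"}

# Two Hadoop versions via the identical call, varying only a config value:
print(launch_cluster({**base, "version": "0.20.2"}))
print(launch_cluster({**base, "version": "0.21.0"}))
# Switching cloud provider is likewise just configuration:
print(launch_cluster({**base, "version": "0.20.2", "provider": "rackspace"}))
```

The payoff of this shape is exactly the point made above: the launching library carries no compile-time dependency on any particular Hadoop (or provider) version, so adding a version never multiplies the API surface.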

Whirr has smart defaults

Whirr should start a service with sensible defaults, so the user can just start using it. It should be possible to override the defaults too. Selection of good defaults is something that a community-based project should be able to do well. Having the concept of a profile might be a good idea, since the characteristics of a small transient cluster for testing may differ from those of a large long-lived cluster.

Non-goals: Whirr does not have inter-language service control

It's not an immediate goal to be able to start a cluster using the Python API, then stop it using the Java API, for example.

jclouds reportedly supports Windows Azure, so it’s possible (but unlikely) that Apache Whirr might do so, too.

<Return to section navigation list>