Friday, April 15, 2011

Windows Azure and Cloud Computing Posts for 4/15/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article, and then navigate to the section you want.

Azure Blob, Drive, Table and Queue Services

Avkash Chauhan explained how to Mount a Page Blob VHD in any Windows Azure VM outside any Web, Worker or VM Role in a 4/15/2011 post:

Following is the C# source code to mount a Page Blob VHD in any Windows Azure VM outside any Web, Worker or VM Role:

Console.Write("Role Environment Verification: ");
if (!RoleEnvironment.IsAvailable)
{
    Console.WriteLine(" FAILED........!!");
    return;
}
Console.WriteLine(" SUCCESS........!!");
Console.WriteLine("Starting Drive Mount!!");
// Placeholders: substitute your storage account, container and VHD blob name (e.g. myvhd64.vhd).
var cloudDriveBlobPath = "http://<Storage_Account>.blob.core.windows.net/<Container>/<VHD_NAME>";
Console.WriteLine("Role Name: " + RoleEnvironment.CurrentRoleInstance.Role.Name);
StorageCredentialsAccountAndKey credentials =
    new StorageCredentialsAccountAndKey("<Storage_ACCOUNT>", "<Storage_KEY>");
Console.WriteLine("Deployment ID:" + RoleEnvironment.DeploymentId.ToString());
Console.WriteLine("Role count:" + RoleEnvironment.Roles.Count);

LocalResource localCache = null;
try
{
    localCache = RoleEnvironment.GetLocalResource("<Correct_Local_Storage_Name>");
    Char[] backSlash = { '\\' };
    String localCachePath = localCache.RootPath.TrimEnd(backSlash);
    CloudDrive.InitializeCache(localCachePath, localCache.MaximumSizeInMegabytes);
    Console.WriteLine(localCache.Name + " | " + localCache.RootPath + " | "
        + localCachePath + " ! " + localCache.MaximumSizeInMegabytes);
}
catch (Exception eXp)
{
    Console.WriteLine("Problem with Local Storage: " + eXp.Message);
}

CloudDrive drive = new CloudDrive(new Uri(cloudDriveBlobPath), credentials);
try
{
    Console.WriteLine("Calling Drive Mount API!!");
    string driveLetter = drive.Mount(localCache.MaximumSizeInMegabytes, DriveMountOptions.None);
    Console.WriteLine("Drive :" + driveLetter);
    Console.WriteLine("Finished Mounting!!");
    Console.WriteLine("************Lets Unmount now********************");
    Console.WriteLine("Press any key......");
    Console.ReadKey();
    Console.WriteLine("Starting Unmount......");
    drive.Unmount();
    Console.WriteLine("Finished Unmounting!!");
    Console.WriteLine("Press any key to exit......");
    Console.ReadKey();
}
catch (Exception exP)
{
    Console.WriteLine(exP.Message + "//" + exP.Source);
    Console.WriteLine("Failed Mounting!!");
}

Application Output:

Role Environment Verification: SUCCESS........!!
Starting Drive Mount!!
Role Name: VMRole1
Deployment ID:207c1761a3e74d6a8cad9bf3ba1b23dc
Role count:1
LocalStorage | C:\Resources\LocalStorage | C:\Resources\LocalStorage ! 870394
Calling Drive Mount API!!
Drive :B:\
Finished Mounting!!

************Lets Unmount now********************

Press any key......
Starting Unmount......
Finished Unmounting!!
Press any key to exit......

Possible Errors:

  1. Be sure to use the correct Local Storage folder name; otherwise you will get the error "no such local resource".
  2. Be sure your VHD exists in the container; otherwise you will get an error.
  3. If your VHD is not a page blob VHD, it will not mount.
  4. If you receive an error such as "Could not load file or assembly 'mswacdmi, Version=", your application build is set to x86. Set the build to 64-bit to solve this problem.

To Create VHD drive:

When using Microsoft Disk Management to create VHD be sure:

  • To create FIXED SIZE VHD
  • Use MBR Partition type (GPT based disk will not be able to mount)
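As a sketch of those two requirements, a diskpart session might look like the following; the file path and size are placeholders, and the exact disk-selection steps can vary on your machine:

```text
diskpart
DISKPART> create vdisk file="C:\temp\myvhd64.vhd" maximum=1024 type=fixed
DISKPART> select vdisk file="C:\temp\myvhd64.vhd"
DISKPART> attach vdisk
DISKPART> convert mbr
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign
DISKPART> detach vdisk
```

`type=fixed` produces the required fixed-size VHD, and `convert mbr` keeps the disk MBR-partitioned (avoid `convert gpt`, since GPT disks will not mount).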

I tested with a Web Role, a Worker Role and a VM Role, and I was able to mount a valid Page Blob VHD with this code in all three kinds of roles using Windows Azure SDK 1.4.

<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi reported the availability of a TechNet Wiki article, Overview of Security of SQL Azure, in a 4/15/2011 post:

TechNet has published an article that provides an overview of the security features of SQL Azure. It's a great summary of security considerations, and you should definitely take a look as you start putting your data into SQL Azure.

Click here to look at the article.

In a previous blog post we also covered SQL Azure security, with a good overview video and samples.

Bill Ramos explained Migrating Access Jet Databases to SQL Azure in a 4/14/2011 post:

In this blog, I’ll describe how to use SSMA for Access to convert your Jet database for your Microsoft Access solution to SQL Azure. This blog builds on Access to SQL Server Migration: How to Use SSMA using the Access Northwind 2007 template. The blog also assumes that you have a SQL Azure account setup and that you have configured firewall access for your system as described in the blog post Migrating from MySQL to SQL Azure Using SSMA.

Creating a Schema on SQL Azure

If you are using a trial version of SQL Azure, you’ll want to get the most out of your free 1 GB Web Edition database. By using a SQL Server schema, you can accommodate multiple Jet database or MySQL migrations into a single database and limit access to users for each schema via the SQL Server permissions hierarchy.
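As a sketch of that permissions idea (the login, user and schema names here are placeholders): create one login, map it to a user in the shared database, and grant that user rights only on its own schema.

```sql
-- Run against the master database:
CREATE LOGIN NorthwindLogin WITH PASSWORD = 'Str0ngPassword!';

-- Run against the shared user database (e.g. SSMADB):
CREATE USER NorthwindUser FOR LOGIN NorthwindLogin;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Northwind2007 TO NorthwindUser;
```

Each migrated application then gets its own schema, and each user sees only the schema it was granted.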

SSMA for Microsoft Access version 4.2 doesn’t support the creation of a database schema within the tool, so you will need to create the schema using the Windows Azure Portal. Launch the Windows Azure Portal with your Live ID and follow the steps as shown below.

01 Windows Azure Portal

  1. Click on the Database node in the left hand navigation pane.
  2. Expand out the subscription name for your Azure account until you see your databases.
  3. Select the target database that you created when you first connected to the Azure portal – see Migrating from MySQL to SQL Azure Using SSMA for how the SSMADB was created for this blog.
  4. Click on the Manage command to launch the Database Manager. You will log into the SQL Azure database as shown below.
    02 Login to DB Manager

Once in the Database Manager, you will need to press the New Query command as shown below so that you can create the target schema for the Northwind2007 database.

03 Create new query in DB Manager

Now that you have the new query window, you can do the following steps as illustrated below.

04 issue create schema command

  1. Type in the Transact-SQL command to create your target schema: create schema Northwind2007
  2. Press the Execute command in the toolbar to run the statement.
  3. Click on the Message window command to show that the command was completed successfully.

You are now ready to use SSMA for Access to migrate your database to SQL Azure into the Northwind2007 schema.

Creating a Migration Project with SQL Azure as the Destination

Start SSMA for Access as usual, but close the Migration Wizard that starts by default. The Migration Wizard will end up creating the tables in the dbo schema instead of the Northwind2007 schema that you created. Follow the steps shown below to create your manual migration project.

10 Create new project

  1. Click on the New Project command.
  2. Enter in the name of your project.
  3. Select SQL Azure for the Migration To option and click OK. If you forget to select SQL Azure, you’ll need to create a new project again because you can’t change the option once you have completed the dialog.

The next step is to add the Northwind2007 database file to the project and connect to your SQL Azure database as shown below.

11 Add databases and connect to SQL Azure

  1. Click on the Add Databases command and select the Northwind2007 database.
  2. Expand the Access Metadata node in the Access Metadata Explorer to show the Queries and Tables nodes and select the Tables checkbox.
  3. Click on the Connect to SQL Azure command.
  4. Complete the connection dialog to your SQL Azure database.
Choosing the Target Schema

To change the target schema, you need to modify the default value from master.dbo to the database name and schema that you created for your SQL Azure database – in this example, SSMADB.Northwind2007 – following the steps below.

12 Choosing the schema

  1. Click on the Modify button in the Schema tab.
  2. Click on the […] browse button in the Choose Target Schema dialog.
  3. Choose the target schema – Northwind2007 – and then click Select and then OK.
Migrate the Tables and Data with the Convert, Load, and Migrate Command

At this point, you are ready to proceed with the standard migration steps for SSMA which includes (ignoring errors):

  1. Click on the Tables folder for the Northwind2007 database in the Access Metadata Explorer to enable the migration toolbar commands.
  2. Click on the Convert, Load, and Migrate command to do all the steps to complete the migration with the one command.
  3. Click OK for the Synchronize with the Database dialog as shown below to create the tables in the Northwind2007 schema within the SSMADB database.
    13 Sync tables to target
  4. Dismiss the Convert, Load, and Migrate dialog assuming everything worked.
Using SSMA to Verify the Migration Result

To verify the results, you can use the Access and SQL Azure Metadata Explorers to compare data after the transfer as follows.

14 Verify the results

  1. Click on the source table Employees in the Access Metadata Explorer
  2. Select the Data tab in the Access workspace to see the data
  3. Click on the target table Employees in the SQL Azure Metadata Explorer
  4. Select the Table tab in the SQL Azure workspace to see the schema or the Data tab to view the data.

You can also use the SQL Azure Database Manager to view the table schema and data as described at the end of the blog post Migrating from MySQL to SQL Azure Using SSMA.

Creating Linked Tables to SQL Azure for your Access Solution

To make your Access solution use the SQL Azure tables, you need to create Linked tables to the SQL Azure database. To create the Linked tables, you need to select the Tables folder in the Access Metadata Explorer as shown below.

15 Link Tables

Right click on the Tables folder and select the Linked Tables command. SSMA will create a backup of the tables in your Access solution file and then create the Linked Table that connects to the table in SQL Azure.


As you can see, migrating your Access solution that uses Jet tables is as easy as:

  1. Creating a target schema in your target SQL Azure database.
  2. Creating a project with the Migrate To option set to SQL Azure.
  3. Following the normal steps for migrating schema and data within SSMA.
  4. Verifying the reports within SSMA or through the SQL Azure Database Manager.
  5. Creating Linked Tables to the SQL Azure database within your Access solution using SSMA.

Additional SQL Azure Resources

To learn more about SQL Azure, see the following resources.

<Return to section navigation list> 

MarketPlace DataMarket and OData

Sudhir Hasbe described Real-time Stock quotes from BATS Exchange available on Azure DataMarket in a 4/13/2011 post:

image Get instant access to real-time stock quotes for U.S. equities traded on NASDAQ, NYSE, NYSE Alternext and over-the-counter (OTC) exchanges. Quotes are provided by the BATS Exchange, one of the largest U.S. market centers for stock transactions. Each real-time quote includes trading volume as well as open, close, high, low and last sale prices.

image For developers of websites and software applications, XigniteBATSLastSale offers a high-performance web service API capable of scaling to meet the requirements of even the most demanding software applications and websites. For individual investors or professional traders, XigniteBATSLastSale offers simple, easy-to-use operations that let you cherry-pick exactly the data you want when you want it and pay only for as much data as you think you’ll need.

XigniteBATSLastSale: view samples & details

The WCF Data Services Team posted a Reference Data Caching Walkthrough on 4/13/2011:

This walkthrough shows how a simple web application can be built using the reference data caching features in the “Microsoft WCF Data Services For .NET March 2011 Community Technical Preview with Reference Data Caching Extensions” aka “Reference Data Caching CTP”. The walkthrough isn’t intended for production use but should be of value in learning about the caching protocol additions to OData as well as to provide practical knowledge in how to build applications using the new protocol features.

Walkthrough Sections

The walkthrough contains five sections as follows:

  1. Setup: This defines the pre-requisites for the walkthrough and links to the setup of the Reference Data Caching CTP.
  2. Create the Web Application: A web application will be used to host an OData reference data caching-enabled service and HTML 5 application. In this section you will create the web app and configure it.
  3. Build an Entity Framework Code First Model: An EF Code First model will be used to define the data model for the reference data caching-enabled application.
  4. Build the service: A reference data caching-enabled OData service will be built on top of the Code First model to enable access to the data over HTTP.
  5. Build the front end: A simple HTML5 front end that uses the datajs reference data caching capabilities will be built to show how OData reference data caching can be used.

The pre-requisites and setup requirements for the walkthrough are:

  1. This walkthrough assumes you have Visual Studio 2010, SQL Server Express, and SQL Server Management Studio 2008 R2 installed.
  2. Install the Reference Data Caching CTP. This setup creates a folder at “C:\Program Files\WCF Data Services March 2011 CTP with Reference Data Caching\Binaries\” that contains:
    • A .NETFramework folder with:
      • An EntityFramework.dll that allows creating off-line enabled models using the Code First workflow.
      • A System.Data.Services.Delta.dll and System.Data.Services.Delta.Client.dll that allow creation of delta-enabled Data Services.
    • A JavaScript folder with:
      • Two delta-enabled datajs OData library files: datajs-0.0.2.js and datajs-0.0.2.min.js.
      • A .js file that leverages the caching capabilities inside of datajs for the walkthrough: TweetCaching.js
Create the web application

Next you’ll create an ASP.NET Web Application where the OData service and HTML5 front end will be hosted.

  1. Open Visual Studio 2010 and create a new ASP.NET web application and name it ConferenceReferenceDataTest.  When you create the application, make sure you target .NET Framework 4.
  2. Add the .js files in “C:\Program Files\WCF Data Services March 2011 CTP with Reference Data Caching\Binaries\Javascript” to your scripts folder.
  3. Add a reference to the new reference data caching-enabled data service libraries found in “C:\Program Files\WCF Data Services March 2011 CTP with Reference Data Caching\Binaries\.NETFramework”:
    • Microsoft.Data.Services.Delta.dll
    • Microsoft.Data.Services.Delta.Client.dll
  4. Add a reference to the reference data caching-enabled EntityFramework.dll found in in “C:\Program Files\WCF Data Services March 2011 CTP with Reference Data Caching\Binaries\.NETFramework”.
  5. Add a reference to System.ComponentModel.DataAnnotations.
Build your Code First model and off-line enable it.

In order to build a Data Service we need a data model and data to expose.  You will use Entity Framework Code First classes to define the delta enabled model and a Code First database initializer to ensure there is seed data in the database. When the application runs, EF Code First will create a database with appropriate structures and seed data for your delta-enabled service.

  1. Add a C# class file to the root of your project and name it model.cs.
  2. Add the Session and Tweets classes to your model.cs file. Mark the Tweets class with the DeltaEnabled attribute. Marking the Tweets class with the attribute will force Code First to generate additional database structures to support change tracking for Tweet records.

public class Session
{
    public int Id { get; set; }
    public string Name { get; set; }
    public DateTime When { get; set; }
    public ICollection<Tweet> Tweets { get; set; }
}

[DeltaEnabled]
public class Tweet
{
    [Key, DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    [Column(Order = 1)]
    public int Id { get; set; }

    [Key, Column(Order = 2)]
    public int SessionId { get; set; }

    public string Text { get; set; }
    public Session Session { get; set; }
}
Note the attributes used to further configure the Tweet class for a composite primary key and foreign key relationship. These attributes will directly affect how Code First creates your database.

3.   Add a using statement at the top of the file so the DeltaEnabled annotation will resolve:

using System.ComponentModel.DataAnnotations;

4. Add a ConferenceContext DbContext class to your model.cs file and expose Session and Tweet DbSets from it.

public class ConferenceContext : DbContext
{
    public DbSet<Session> Sessions { get; set; }
    public DbSet<Tweet> Tweets { get; set; }
}

5. Add a using statement at the top of the file so the DbContext and DbSet classes will resolve:

using System.Data.Entity;

6. Add seed data for sessions and tweets to the model by adding a Code First database initializer class to the model.cs file:

public class ConferenceInitializer : IncludeDeltaEnabledExtensions<ConferenceContext>
{
    protected override void Seed(ConferenceContext context)
    {
        Session s1 = new Session() { Name = "OData Futures", When = DateTime.Now.AddDays(1) };
        Session s2 = new Session() { Name = "Building practical OData applications", When = DateTime.Now.AddDays(2) };

        Tweet t1 = new Tweet() { Session = s1, Text = "Wow, great session!" };
        Tweet t2 = new Tweet() { Session = s2, Text = "Caching capabilities in OData and HTML5!" };

        // Add the seed entities to the context so they are written to the database.
        context.Sessions.Add(s1);
        context.Sessions.Add(s2);
        context.Tweets.Add(t1);
        context.Tweets.Add(t2);
        context.SaveChanges();
    }
}

7. Ensure the ConferenceIntializer is called whenever the Code First DbContext is used by opening the global.asax.cs file in your project and adding the following code:

void Application_Start(object sender, EventArgs e)
{
    // Code that runs on application startup
    Database.SetInitializer(new ConferenceInitializer());
}


Calling the initializer will direct Code First to create your database and call the ‘Seed’ method above, placing seed data in the database.

8. Add a using statement at the top of the global.asax.cs file so the Database class will resolve:

using System.Data.Entity;
Build an OData service on top of the model

Next you will build the delta-enabled OData service. The service allows querying for data over HTTP just as any other OData service, but in addition provides a “delta link” for queries over delta-enabled entities. The delta link provides a way to obtain changes made to sets of data from a given point in time. Using this functionality, an application can store data locally and incrementally update it, offering improved performance, cross-session persistence, etc.
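Conceptually, what a client does with each delta response can be sketched independently of the wire format: keep a local array of entities and fold each change into it. The change-record shape below ({ op, entity }) is a simplification for illustration, not the CTP's actual payload format:

```javascript
// Minimal sketch of client-side delta handling (hypothetical change format).
// Each change is { op: "insert" | "update" | "delete", entity: { Id, ... } }.
function applyDelta(localRows, changes) {
    var rows = localRows.slice(); // work on a copy of the local store
    changes.forEach(function (change) {
        var idx = -1;
        for (var i = 0; i < rows.length; i++) {
            if (rows[i].Id === change.entity.Id) { idx = i; break; }
        }
        if (change.op === "delete") {
            if (idx >= 0) rows.splice(idx, 1);        // drop the deleted row
        } else if (idx >= 0) {
            rows[idx] = change.entity;                // update in place
        } else {
            rows.push(change.entity);                 // insert a row we haven't seen
        }
    });
    return rows;
}
```

After each merge, the client would store the new delta link returned with the response and use it for the next poll.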

1. Add a new WCF Data Service to the Project and name it ConferenceService.


2. Remove the references to System.Data.Services and System.Data.Services.Client under the References treeview. We are using the Data Service libraries that are delta enabled instead.

3. Change the service to expose sessions and tweets from the ConferenceContext and change the protocol version to V3. This tells the OData delta enabled service to expose Sessions and Tweets from the ConferenceContext Code First model created previously.

public class ConferenceService : DataService<ConferenceContext>
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Sessions", EntitySetRights.All);
        config.SetEntitySetAccessRule("Tweets", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
    }
}

4. Right-click on ConferenceService.svc in Solution Explorer and click “View Markup”. Change the Data Services reference from:

<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, System.Data.Services, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" Service="ConferenceReferenceDataTest.ConferenceContext" %>

to:

<%@ ServiceHost Language="C#" Factory="System.Data.Services.DataServiceHostFactory, Microsoft.Data.Services.Delta, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" Service="ConferenceReferenceDataTest.ConferenceService" %>

6. Change the ConferenceService.svc to be the startup file for the application by right clicking on ConferenceService.svc in Solution Explorer and choosing “Set as Start Page”.

7. Run the application by pressing F5.

  • In your browser settings, ensure feed reading view is turned off. In Internet Explorer 9, choose Tools -> Internet Options -> Content, then click “Settings” in the “Feeds and Web Slices” section. In the dialog that opens, uncheck the “Turn on feed reading view” checkbox.


<link rel="" href="http://localhost:11497/ConferenceService.svc/Tweets?$deltatoken=2002" />

The delta link is shown because the DeltaEnabled attribute was used on the Tweet class.

  • The delta link allows you to obtain any changes made to the data from that point in time onward. You can think of it as a reference to what your data looked like at a specific point in time. For now, copy just the delta link to the clipboard with Ctrl-C.

8. Open a new browser instance, paste the delta link into the nav bar: http://localhost:11497/ConferenceService.svc/Tweets?$deltatoken=2002, and press enter.

9. Note the delta link returns no new data. This is because no changes have been made to the data in the database since we queried the service and obtained the delta link. If any changes are made to the data (Inserts, Updates or Deletes) changes would be shown when you open the delta link.

10. Open SQL Server Management Studio, create a new query file and change the connection to the ConferenceReferenceDataTest.ConferenceContext database. Note the database was created by Code First convention based on the classes in model.cs.

11. Execute the following query to add a new row to the Tweets table:

insert into Tweets (Text, SessionId) values ('test tweet', 1)

12. Refresh the delta link and note that the newly inserted test tweet record is shown. Notice also that an updated delta link is provided, giving a way to track any changes from the current point in time. This shows how the delta link can be used to obtain changed data. A client can hence use the OData protocol to query for delta-enabled entities, store them locally, and update them as desired.

13. Stop debugging the project and return to Visual Studio.

Write a simple HTML5 front end

We’re going to use HTML5 and the reference data caching-enabled datajs library to build a simple front end that allows browsing sessions and viewing session detail with tweets. We’ll leverage a pre-written library that uses the reference data capabilities added to OData and the datajs local store capabilities for local storage.

1. Add a new HTML page to the root of the project and name it SessionBrowsing.htm.

2. Add the following HTML to the <body> section of the .htm file:

<button id='ClearDataJsStore'>Clear Local Store</button>
<button id='UpdateDataJsStore'>Update Tweet Store</button>
<br />
<br />
Choose a Session: 
<select id='sessionComboBox'></select>
<br />
<br />
<div id='SessionDetail'>No session selected.</div>

3. At the top of the head section, add the following script references immediately after the head element:

    <script src="" type="text/javascript"></script>
    <script src=".\Scripts\datajs-0.0.2.js" type="text/javascript"></script>
    <script src=".\Scripts\TweetCaching.js" type="text/javascript"></script>

4. In the head section, add the following javascript functions. These functions:

  • Pull sessions from our OData service and add them to your combobox.
  • Create a javascript handler for when a session is selected from the combobox.
  • Create handlers for clearing and updating the local store.
    <script type="text/javascript">
        //load all sessions through OData
        $(function () {
            //query for loading sessions from OData
            var sessionQuery = "ConferenceService.svc/Sessions?$select=Id,Name";
            //initial load of sessions
            OData.read(sessionQuery,
                function (data, request) {
                    $("<option value='-1'></option>").appendTo("#sessionComboBox");
                    var i, length;
                    for (i = 0, length = data.results.length; i < length; i++) {
                        $("<option value='" + data.results[i].Id + "'>" + data.results[i].Name
                            + "</option>").appendTo("#sessionComboBox");
                    }
                },
                function (err) {
                    alert("Error occurred " + err.message);
                });

            //handler for combo box
            $("#sessionComboBox").change(function () {
                var sessionId = $("#sessionComboBox").val();
                if (sessionId > 0) {
                    var sessionName = $("#sessionComboBox option:selected").text();
                    var localStore = datajs.createStore("TweetStore");

                    document.getElementById("SessionDetail").innerHTML = "";
                    $('#SessionDetail').append('Tweets for session ' + sessionName + ":<br>");
                    AppendTweetsForSessionToForm(sessionId, localStore, "SessionDetail");
                } else {
                    var detailHtml = "No session selected.";
                    document.getElementById("SessionDetail").innerHTML = detailHtml;
                }
            });

            //handler for clearing the store
            $("#ClearDataJsStore").click(function () {
            });

            //handler for updating the store
            $("#UpdateDataJsStore").click(function () {
                alert("Local Store Updated!");
            });
        });
    </script>

5. Change the startup page for the application to the SessionBrowsing.htm file by right clicking on SessionBrowsing.htm in Solution Explorer and choosing “Set as Start Page”.

6. Run the application. The application should look similar to:


  • Click the ‘Clear Local Store’ button: This removes all data from the cache.
  • Click the ‘Update Tweet Store’ button. This refreshes the cache.
  • Choose a session from the combo box and note the tweets shown for the session.
  • Switch to SSMS and add, update or remove tweets from the Tweets table.
  • Click the ‘Update Tweet Store’ button then choose a session from the drop down box again.  Note tweets were updated for the session in question.

This simple application uses OData delta enabled functionality in an HTML5 web application using datajs. In this case datajs local storage capabilities are used to store Tweet data and datajs OData query capabilities are used to update the Tweet data.


This walkthrough offers an introduction to how a delta enabled service and application could be built. The objective was to build a simple application in order to walk through the reference data caching features in the OData Reference Data Caching CTP. The walkthrough isn’t intended for production use but hopefully is of value in learning about the protocol additions to OData as well as in providing practical knowledge in how to build applications using the new protocol features. Hopefully you found the walkthrough useful. Please feel free to give feedback in the comments for this blog post as well as in our prerelease forums here.

Asad Khan posted Announcing datajs version 0.0.3 on 3/13/2011 to the WCF Data Services Team Blog:

Today we are very excited to announce a new release of datajs. The latest release of datajs makes use of HTML5 storage capabilities to provide a caching component that makes your web application more interactive by reducing the impact of network latency. The library also comes with a pre-fetcher component that sits on top of the cache. The pre-fetcher detects application needs and brings in the data ahead of time; as a result the application user never experiences network delays. The pre-fetcher and the cache are fully configurable. The application developer can also specify which browser storage to use for data caching; by default datajs will pick the best available storage.

The following example shows how to setup the cache and load the first set of data.

var cache = datajs.createDataCache({
    name: "movies",
    source: "$filter=Rating eq 'PG'"
});

// Read ten items from the start
cache.readRange(0, 10).then(function (data) {
    //display 'data' results
});

The createDataCache call has a ‘name’ property that specifies the name that will be assigned to the underlying store. The 'source' property specifies the URI of the data source and follows the OData URI conventions (to read more about OData, visit the OData website). The readRange call takes the range of items you want to display on your first page.

The application developer has full control over the size of the cache, the pre-fetch page size, and the lifetime of the cached items. For example, one can define the pageSize and the prefetchSize as follows:

var cache = datajs.createDataCache({
    name: "movies",
    source: "$filter=Rating eq 'PG'",
    pageSize: 20,
    prefetchSize: 350
});

This release also includes a standalone local storage component built on HTML5. This is the same component that the datajs cache uses for offline storage. If you are not interested in the pre-fetcher, you can use the local storage API directly to store items offline. datajs local storage gives you the same storage experience no matter which browser or OS you run your application on.

You can get the latest drop of the datajs library from the project site. You can also get the latest version of the sources, which are less stable but contain the latest features the datajs team is working on. Please visit the project page for detailed documentation, to give feedback, and to get involved in the discussions.

datajs is an open source JavaScript library aimed at making data-centric web applications better. The library is released under the MIT license; you can read the earlier blog post on datajs here, or visit the project release site.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control, Caching WIF and Service Bus

Cennest posted Microsoft Azure App-Fabric Caching Service Explained! on 4/14/2011:

MIX always comes with a mix of feelings…excitement at the prospect of trying out the new releases, and the heartache that comes with trying to understand the new technologies “in depth”…and so starts the “googling…oops, binging”: blogs, videos, etc. What does it mean? How does it impact me?

One such very important release at MIX 2011 is the AppFabric Caching service. At Cennest we do a lot of Azure development and migration work, and this feature caught our immediate attention as something that will have a high impact on the architecture, cost and performance of new applications and migrations.

So we collated information from various sources (references below), and here is an attempt to simplify the explanation for you!

What is caching?

The Caching service is a distributed, in-memory application cache service that accelerates the performance of Windows Azure and SQL Azure applications by allowing you to keep data in memory, saving you the need to retrieve that data from storage or a database. (Implicit cost benefit? Well, that depends on the pricing of the Caching service…yet to be released.)

Basically it’s a layer that sits between the database and the application, and it can be used to store data and prevent frequent trips to the database, thereby reducing latency and improving performance.


How does this work?

Think of the Caching service as Microsoft running a large set of cache clusters for you, heavily optimized for performance, uptime, resiliency and scale-out, exposed as a simple network service with an endpoint for you to call. The Caching service is a highly available, multitenant service with no management overhead for its users.

As a user, what you get is a secure Windows Communication Foundation (WCF) endpoint to talk to, the amount of usable memory you need for your application, and APIs for the cache client to call in to store and retrieve data.


The Caching service does the job of pooling in memory from the distributed cluster of machines it’s running and managing to provide the amount of usable memory you need. As a result, it also automatically provides the flexibility to scale up or down based on your cache needs with a simple change in the configuration.

Are there any variations in the types of caches available?

Yes, apart from using the cache on the Caching service there is also the ability to cache a subset of the data that resides in the distributed cache servers, directly on the client—the Web server running your website. This feature is popularly referred to as the local cache, and it’s enabled with a simple configuration setting that allows you to specify the number of objects you wish to store and the timeout settings to invalidate the cache.
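The local cache is turned on purely through the cache client configuration. A hypothetical sketch, using the element names from the on-premises Windows Server AppFabric client schema (the Azure Caching service settings may differ; the host name and values are illustrative):

```xml
<dataCacheClient>
  <!-- keep up to 1000 objects on the web server, invalidated after 300 seconds -->
  <localCache isEnabled="true" sync="TimeoutBased" objectCount="1000" ttlValue="300" />
  <hosts>
    <host name="yournamespace.cache.windows.net" cachePort="22233" />
  </hosts>
</dataCacheClient>
```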


What can I cache?

You can pretty much keep any object in the cache: text, data, blobs, CLR objects and so on. There’s no restriction on the size of the object, either. Hence, whether you’re storing explicit objects in the cache or storing session state, object size is not a factor in deciding whether you can use the Caching service in your application.

However, the cache is not a database! A SQL database is optimized for a different set of patterns than the cache tier is designed for. In most cases both are needed, and they can be paired to provide the best performance and access patterns while keeping costs low.

How can I use it?

  • For explicit programming against the cache APIs, include the cache client assembly in your application from the SDK and you can start making GET/PUT calls to store and retrieve data from the cache.
  • For higher-level scenarios that in turn use the cache, you need to include the ASP.NET session state provider for the Caching service and interact with the session state APIs instead of interacting with the caching APIs. The session state provider does the heavy lifting of calling the appropriate caching APIs to maintain the session state in the cache tier. This is a good way for you to store information like user preferences, shopping cart, game-browsing history and so on in the session state without writing a single line of cache code.
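A minimal sketch of the first option's GET/PUT style, assuming the AppFabric cache client assembly (Microsoft.ApplicationServer.Caching) and a hypothetical Customer type; the factory reads the endpoint and credentials from the dataCacheClient configuration:

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

public class Customer   // hypothetical payload type; must be serializable
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CacheSketch
{
    public static void Run()
    {
        // reads the <dataCacheClient> section from app/web.config
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // PUT: store any serializable object under a string key
        cache.Put("customer:42", new Customer { Id = 42, Name = "Contoso" });

        // GET: returns null on a miss (expired, evicted or never stored)
        var customer = (Customer)cache.Get("customer:42");
    }
}
```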


When should I use it?

A common problem that application developers and architects have to deal with is the lack of guarantee that a client will always be routed to the same server that served the previous request.

When these sessions can’t be sticky, you’ll need to decide what to store in session state and how to bounce requests between servers to work around the lack of sticky sessions. The cache offers a compelling alternative to storing any shared state across multiple compute nodes. (These nodes would be Web servers in this example, but the same issues apply to any shared compute tier scenario.) The shared state is consistently maintained automatically by the cache tier for access by all clients, and at the same time there’s no overhead or latency of having to write it to a disk (database or files).

How long does the cache store content?

Both the Azure and the Windows Server AppFabric Caching services use two techniques to remove data from the cache automatically: expiration and eviction. A cache has a default timeout associated with it, after which an item expires and is removed automatically from the cache.

This default timeout may be overridden when items are added to the cache. The local cache similarly has an expiration timeout. 

Eviction refers to the process of removing items because the cache is running out of memory. A least-recently used algorithm is used to remove items when cache memory comes under pressure – this eviction is independent of timeout.
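The per-item timeout override mentioned above is a single extra argument on Put in the AppFabric client API (a sketch; `cache` is an already-initialized DataCache client and `reportData` an illustrative object):

```csharp
// override the cache-wide default lifetime: this entry expires after
// 10 minutes, independently of LRU eviction under memory pressure
cache.Put("dailyReport", reportData, TimeSpan.FromMinutes(10));
```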

What does it mean to me as a Developer?

One thing to note about the Caching service is that it’s an explicit cache that you write to and have full control over. It’s not a transparent cache layer on top of your database or storage. This has the benefit of providing full control over what data gets stored and managed in the cache, but also means you have to program against the cache as a separate data store using the cache APIs.

This pattern is typically referred to as the cache-aside, where you first load data into the cache and then check if it exists there for retrieving and, only when it’s not available there, you explicitly read the data from the data tier. So, as a developer, you need to learn the cache programming model, the APIs, and common tips and tricks to make your usage of cache efficient.
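The cache-aside pattern described above can be sketched as follows, assuming an initialized DataCache client and a hypothetical GetCustomerFromDatabase data-tier call:

```csharp
public Customer GetCustomer(DataCache cache, int id)
{
    string key = "customer:" + id;

    // 1. try the cache first
    var customer = (Customer)cache.Get(key);
    if (customer == null)
    {
        // 2. cache miss: fall back to the data tier (hypothetical helper)...
        customer = GetCustomerFromDatabase(id);

        // 3. ...then populate the cache for subsequent requests
        cache.Put(key, customer);
    }
    return customer;
}
```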

What does it mean to me as an Architect?

What data should you put in the cache? The answer varies significantly with the overall design of your application. When we talk about data for caching scenarios, we usually break it down by data type and access pattern:

  • Reference Data (Shared Read): Reference data is a great candidate for keeping in the local cache or co-located with the client.


  • Activity Data (Exclusive Write): Data relevant to the current session between the user and the application.

Take, for example, a shopping cart! During the buying session, the shopping cart is cached and updated with selected products. The shopping cart is visible and available only to the buying transaction. Upon checkout, as soon as the payment is applied, the shopping cart is retired from the cache to a data source application for additional processing.

Such a collection of data is best stored in the cache server, providing access to all the distributed servers, which can send updates to the shopping cart. If this cache were stored only in the local cache, it would get lost.


  • Shared Data (Multiple Read and Write): There is also data that is shared, concurrently read and written, and accessed by lots of transactions. Such data is known as resource data.

Depending upon the situation, caching shared data on a single computer can provide some performance improvements, but for large-scale auctions a single cache cannot provide the required scale or availability. For this purpose, some types of data can be partitioned and replicated in multiple caches across the distributed cache.

Be sure to spend enough time in capacity planning for your cache. Number of objects, size of each object, frequency of access of each object and pattern for accessing these objects are all critical in not only determining how much cache you need for your application, but also on which layers to optimize for (local cache, network, cache tier, using regions and tags, and so on).

If you have a large number of small objects, and you don’t optimize for how frequently and how many objects you fetch, you can easily get your app to be network-bound.

Also, Microsoft will soon release the pricing for the Caching service, so obviously you need to ensure your usage of the service is “optimized”, and when it comes to the cloud, “Optimized = Performance + Cost”!!

Matias Woloski described Adding Internet Identity Providers like Facebook, Google, LiveID and Yahoo to your MVC web application using Windows Azure AppFabric Access Control Service and jQuery in 3 steps in a 4/12/2011 post:

If you want to achieve a login user experience like the one shown in the following screenshot, then keep reading…


Windows Azure AppFabric Access Control 2.0 was released last week after one year in the Labs environment, and it was officially announced today at MIX. If you haven’t heard about it yet, here is the elevator pitch of ACS v2:

Windows Azure AppFabric Access Control Service (ACS) is a cloud-based service that provides an easy way of authenticating and authorizing users to gain access to your web applications and services while allowing the features of authentication and authorization to be factored out of your code. Instead of implementing an authentication system with user accounts that are specific to your application, you can let ACS orchestrate the authentication and much of the authorization of your users. ACS integrates with standards-based identity providers, including enterprise directories such as Active Directory, and web identities such as Windows Live ID, Google, Yahoo!, and Facebook.

According to the blog post published today by the AppFabric team, you can use this service for free (at least through January 2012). Also, the Labs environment is still available for testing purposes (not sure when they will turn this off).

We encourage you to try the new version of the service and will be offering the service at no charge during a promotion period ending January 1, 2012.

Now that we can use this for real, I will show you how to create a little widget that allows users of your website to log in using social identity providers like Google, Facebook, LiveID or Yahoo.

I will use an MVC web application, but this can be implemented in WebForms as well, or even WebMatrix if you understand the implementation details.

Step 1. Configure Windows Azure AppFabric Access Control Service
  1. Create a new Service Namespace, either in the Labs environment or, if you have an Azure subscription, in the production portal.


  2. The service namespace will be activated in a few minutes. Select it and click on Access Control Service to open the management console for that service namespace.
  3. In the management console go to Identity Providers and add Google, Yahoo and Facebook (LiveID is added by default). It’s very straightforward to do. This is the information you have to provide for each of them. I just googled for the logos, and some of them are not the best quality, so feel free to change them.
  4. The next thing is to register the web application you just created in ACS. To do this, go to Relying party applications and click Add.
  5. Enter the following information
    Name: a display name for ACS
    Realm: https://localhost/<TheNameOfTheWebApp>/
    This is the logical identifier for the app. For this, we can use any valid URI (notice the I instead of L). Using the base url of your app is a good idea in case you want to have one configuration for each environment.
    Return Url: https://localhost/<TheNameOfTheWebApp>/
    This is the url where the token will be posted. Since there will be an HTTP module listening for any HTTP POST request coming in with a token, you can use any valid url of the app. The root is a good choice and, don’t worry, you can then redirect the user back to the original url she was browsing (in case of bookmarking).


  6. Leave the other fields with the default values and click Save. You will notice that Facebook, Google, LiveID and Yahoo are checked. This means that you want to enable those identity providers for this application. If you uncheck one of those, the widget won’t show it.
  7. Finally, go to the Rule Groups and click on the rule group for your web application.
  8. Since each identity provider will give us different information (claims about the user), we have to generate a set of rules to passthrough that information to our application. Otherwise by default that won’t happen. To do this, click on Generate, make sure all the identity providers are checked and save. You should see a screen like this


Step 2. Configure your application with Windows Azure AppFabric Access Control Service
  1. Now that we have configured ACS, we have to go to our application and configure it to use ACS.
  2. Create a new ASP.NET MVC Application. Use the Internet Application template to get the master page, controllers, etc.
    NOTE: I am using MVC3 with Razor but you can use any version.
  3. Before moving forward, make sure you have the Windows Identity Foundation SDK installed on your machine. Once you have it, right-click the web application and click Add STS Reference…. In the first step you will already have the right values, so click Next.

  4. In the next step, select Use an existing STS. Enter the url of your service namespace Federation Metadata. This URL has a pattern like this:


  5. In the following steps go ahead and click Next until the wizard finishes.
  6. The wizard will add a couple of HTTP modules and a section in the web.config that contains the thumbprint of the certificate that ACS will use to sign tokens. This is the basis of the trust relationship between your app and ACS; if that thumbprint changes, the trust is broken.
  7. The next thing you have to do is replace the default AccountController with one that works when authentication is outsourced from the app. Download the AccountController.cs, change the namespace to yours and replace it. Among other things, this controller will have an action called IdentityProviders that will return from ACS the list of identity providers in JSON format.
    public ActionResult IdentityProviders(string serviceNamespace, string appId)
    {
        string idpsJsonEndpoint = string.Format(IdentityProviderJsonEndpoint, serviceNamespace, appId);
        var client = new WebClient();
        var data = client.DownloadData(idpsJsonEndpoint);
        return Content(Encoding.UTF8.GetString(data), "application/json");
    }
Step 3. Using jQuery Dialog for the login box
  1. In this last step we will use the jQuery UI dialog plugin to show the list of identity providers when the user clicks the LogOn link. Open the _LogOnPartial.cshtml file.


  2. Replace the LogOnPartial markup with the following (or copy from here). IMPORTANT: change the service namespace and appId in the ajax call to use your settings.
    @if (Request.IsAuthenticated) {
        <text>Welcome <b>@Context.User.Identity.Name</b>!
        [ @Html.ActionLink("Log Off", "LogOff", "Account") ]</text>
    }
    else {
        <a href="#" id="logon">Log On</a>
        <div id="popup_logon"></div>
        <style type="text/css">
            #popup_logon ul {
                list-style: none;
            }
            #popup_logon ul li {
                margin: 10px;
                padding: 10px;
            }
        </style>
        <script type="text/javascript">
            $("#logon").click(function () {
                $("#popup_logon").dialog({ modal: true, draggable: false, resizable: false, title: 'Select your preferred login method' });
                $.ajax({
                    url: '@Html.Raw(Url.Action("IdentityProviders", "Account", new { serviceNamespace = "YourServiceNamespace", appId = "https://localhost/<YourWebApp>/" }))',
                    success: function (data) {
                        var dialogHtml = '<ul>';
                        for (var i = 0; i < data.length; i++) {
                            dialogHtml += '<li>';
                            if (data[i].ImageUrl == '') {
                                dialogHtml += '<a href="' + data[i].LoginUrl + '">' + data[i].Name + '</a>';
                            } else {
                                dialogHtml += '<a href="' + data[i].LoginUrl + '"><img style="border: 0px; width: 100px" src="' + data[i].ImageUrl + '" alt="' + data[i].Name + '" /></a>';
                            }
                            dialogHtml += '</li>';
                        }
                        dialogHtml += '</ul>';
                        // (assumed) inject the generated list into the dialog
                        $("#popup_logon").html(dialogHtml);
                    }
                });
                return false;
            });
        </script>
    }
  3. Include jQuery UI and the corresponding css in the Master page (Layout.cshtml)
    <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
    <link href="@Url.Content("~/Content/themes/base/jquery-ui.css")" rel="stylesheet" type="text/css" />
    <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
    <script src="@Url.Content("~/Scripts/jquery-ui.min.js")" type="text/javascript"></script>
Step 4. Try it!
  1. That’s it. Start the application and click on the Log On link. Select one of the login methods and you will get redirected to the right page. You will have to login and the provider may ask you to grant permissions to access certain information from your profile. If you click yes you will be logged in and ACS will send you a set of claims like the screen below shows.





I added this line in the HomeController to show all the claims:

ViewBag.Message = string.Join("<br/>", ((IClaimsIdentity)this.User.Identity).Claims.Select(c => c.ClaimType + ": " + c.Value).ToArray());

Well, it wasn’t 3 steps, but you get the point Winking smile. Now, it would be really cool to create a NuGet package that will do all this automatically…

Just for future reference, these are the claims that each identity provider will return by default

Facebook 619815976 2011-04-09T21:00:01.0471518Z Matias Woloski 111617558888963|2.k <stripped> 976|z_fmV<stripped>3kQuo Facebook-<appid>

Google UoU"><stripped>UoU Matias Woloski Google

LiveID WJoV5kxtlzEbsu<stripped>mMxiMLQ= uri:WindowsLiveID

Yahoo<stripped>58mGa#e7b0c Matias Woloski Yahoo!

Following up

Get the code used in this post from here.

If you are interested in other features of ACS, these are some of the things you can do:

DISCLAIMER: use this at your own risk, this code is provided as-is.

Asir Vedamuthu Selvasingh posted AppFabric ACS: Single-Sign-On for Active Directory, Google, Yahoo!, Windows Live ID, Facebook & Others to the Interoperability @ Microsoft blog on 4/12/2011:

Until today, you had to build your own custom solutions to accept a mix of enterprise and consumer-oriented web identities for applications in the cloud or anywhere. We heard you, and we have built a service to make it simpler.

Today at MIX11, we announced a new production version of Windows Azure AppFabric Access Control service, which enables you to build a Single-Sign-On experience into applications by integrating with standards-based identity providers, including enterprise directories such as Active Directory, and consumer-oriented web identities such as Windows Live ID, Google, Yahoo! and Facebook.

The Access Control service enables this experience through commonly used industry standards to facilitate interoperability with other software and services that support the same standards:

  • OpenID 2.0
  • OAuth WRAP
  • OAuth 2.0 (Draft 13)
  • SAML 1.1, SAML 2.0 and Simple Web Token (SWT) token formats
  • WS-Trust, WS-Federation, WS-Security, XML Digital Signature, WS-Security Policy, WS-Policy and SOAP.

And we continue to work with industry organizations to develop new standards where existing ones are insufficient for emerging cloud platform scenarios.

Check out the Access Control service! There are plenty of docs and samples available on our CodePlex project to get started.


Asir Vedamuthu Selvasingh, Technical Diplomat, Interoperability

<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team reported Now Available: Windows Azure SDK 1.4 Refresh with WebDeploy Integration on 4/15/2011:

We are pleased to announce the immediate availability of the Windows Azure SDK 1.4 refresh.  Available for download here, this refresh enables web developers to increase their development productivity by simplifying the iterative development process of web-based hosted services.

Until today, the iterative development process for hosted services has required you to re-package the application and upload it to Windows Azure using either the Management portal or the Service Management API. The Windows Azure SDK 1.4 refresh solves this problem by integrating with the Web Deployment Tool (Web Deploy).

Please note the following when using Web Deploy for interactive development of Windows Azure hosted services:

  • Web Deploy only works with a single role instance.
  • The tool is intended only for iterative development and testing scenarios.
  • Web Deploy bypasses package creation. Changes to the web pages are not durable. To preserve changes, you must package and deploy the service.

Click here to install the WebDeploy plug-in for Windows Azure SDK 1.4.

Steve Marx (@smarx) explained Building a Simple “” Clone in Windows Azure in a 4/14/2011 post:

A few weeks ago, I built a sample app that I used as a demo in a talk for the Dr. Dobb’s “Programming for the Cloud: Getting Started” virtual event. That pre-recorded talk went live today, and I believe you can still watch the talk on-demand.

For regular readers of my blog, the talk is probably a bit basic, but the app is yet another example of that common pattern of a web role front-end serving up web pages and a worker role back-end doing asynchronous work. The full source code is linked from the bottom of the page, so feel free to check it out.

Note that this is pretty much the most obvious use case ever for the CDN, but I didn’t use it. That was to keep the example as simple as possible for people brand new to the platform.

I don’t promise to keep the app running for very long, so don’t get too attached to your sweet URL. (Mine’s

Marius Revhaug described the Microsoft Platform Ready Test Tool - Windows Azure Edition in a 4/14/2011 post:

This is a guide for testing Windows Azure and SQL Azure readiness with the MPR Test Tool. With the Microsoft Platform Ready (MPR) Test Tool you can also test applications targeting the following platforms:

  • Windows® 7

  • Windows Server® 2008 R2

  • Microsoft® SQL Server® 2008 R2

  • Microsoft Dynamics™ 2011

  • Microsoft Exchange 2010 + Microsoft Lync™ 2010

This guide outlines the steps for using the Microsoft Platform Ready (MPR) Test Tool to verify a Windows Azure and SQL Azure application.

1. Download the MPR Test Toolkit
The MPR Test Toolkit is available from the Web site. You must first sign up to access the MPR Test Toolkit.
After you have successfully registered the application(s) you want to test, you will see the MPR Test Toolkit located on the Test tab as seen in figure 1.1.

Figure 1.1: MPR Test Tool download location

2. Start the MPR Test Toolkit
After you have successfully installed the MPR Test Toolkit you will see shortcuts to start the MPR Test Toolkit available on the Start menu and on the Desktop.

3. Select an operation
Once you have started the MPR Test Tool, you can either start a new test, resume a previous test or create reports. The options are shown in figure 1.2.

Figure 1.2: MPR Test Tool options

4. Start a new test
Click Start New Test to choose the type of platform you want to test as shown in figure 1.3.
If your application utilizes several platforms, you can select all of the platforms used on this screen. This guide will focus on Windows Azure and SQL Azure.
At this point, name your test by entering a name in the Test Name textbox. The name of the test should match the name you specified when you registered your application on the MPR Web site.

Figure 1.3: Name your test and select the platform(s) to test

5. Enter server information
On the next screen click the Edit links to enter appropriate information to connect to Windows Azure and SQL Azure.
For SQL Azure you need to specify the Server Name, Username and Password used to connect as seen in figure 1.4.

Figure 1.4: Specify server information

For Windows Azure you need to specify the Physical Server Address and Virtual IP Address used to connect as seen in figure 1.5.

Figure1.5: Specify server information.
The following screen summarizes the results of the Server Prerequisites test as seen in figure 1.6.

Figure 1.6: Summary of Server Prerequisites test

6. Test application functionality
Your application’s primary functionality must now be executed. Before proceeding, ensure that your application is already installed and fully configured. Execute all features in your application, including functionality that invokes drivers or devices where applicable. If your application is distributed across multiple computers, exercise all client-server functionality. Where applicable, you are required to test client components of your application concurrently with its server components. You may now minimize the MPR Test Tool, execute the primary functionality of your application and return to the MPR Test Tool.

Once you have executed all primary functionality make sure to check the “Confirm that all primary functionality testing of your application” option as seen in figure 1.7.

Figure 1.7: Execute all primary functionality of your application

7. See if your application passed
Once you click Next, you will see immediately if your application passed the test as seen in figure 1.8.
If one or more components in your application fail, you have the ability to submit a waiver.
You will find the instructions for how to acquire a waiver described at the bottom of the screen.

Figure 1.8: You will see immediately if your application has passed or failed

8. View the test results report
You have now completed the test portion. To view the report, click the Show button seen in figure 1.9.

Figure 1.9: Click the Show button to view the test report

Figure 1.10 displays an example of a test report.

Figure 1.10: Example of a test report

9. Generate a test results package
In this next step you will generate a test results package that you can submit to the MPR Web site.
Start this process by clicking the Reports button as seen in figure 1.11.

Figure 1.11: Click the Reports button to start the process for creating a test results package

10. Select reports to include in the Test Results Package
To create a Test Results Package start by selecting the test(s) you want to include.
You can include one or more tests. You can also click the Add button, as seen in figure 1.12, to select and include reports located on other computers.

Figure 1.12: Select the reports you want to include

11. View summary
The screen shown in figure 1.13 displays a summary of reports that will be included in the test results package.

Figure 1.13: Reports summary

12. Specify application name and version
Next you will specify the application name and version. Enter the application name and version in the textboxes shown in figure 1.14.
The application name should match the name you specified when you registered your application on the MPR Web site.

Figure 1.14: Specify application name and version

13. Specify test results package name and application ID
Next you will specify a name for the test package and application ID. Enter the name and application ID in the textboxes as shown in figure 1.15.
The application ID must match the application ID that was generated on the MPR Web site when you first registered your application.
You also need to agree to the Windows Azure Logo Program legal requirements before you can continue.

Figure 1.15: Specify the Test Results Package Name, Application ID and agree to Windows Azure Logo program requirements

14. Finish the test process
The test process is now complete. You can now upload the Test Results Package (zip file) to the MPR Web site.
The path to the Test Results Package is located in the Test Results Package Location textbox as seen in figure 1.16.

Figure 1.16: Test Results Package location

15. Upload Test Results Package to the MPR Web site

In this final step you will upload the Test Results Package created by the MPR tool.
To upload the Test Results Package visit the MPR Web site.
Click the Browse button to locate the generated Test Results Package zip file then click the Upload button as seen in figure 1.17.

Figure 1.17: Upload Test Results Package

The MPR tool will upload your Test Results Package to the MPR Web site.

The MPR tool’s UI has improved greatly since I used it to obtain the “Powered by Windows Azure” logo for my OakLeaf Systems Azure Table Services Sample Project - Paging and Batch Updates Demo several months ago.    

Wes Yanaga recommended that you Speed up your Journey to the Cloud with the FullArmor CloudApp Studio (US) on 4/14/2011:

We know that getting to the cloud is not always straightforward. That's why we are committed to bringing you the latest tools and resources to help you quickly get your solution cloud-ready.

We would like to introduce you to the FullArmor CloudApp Studio, which makes it easy to rapidly deploy applications and websites onto the Microsoft Windows Azure platform, making development for the cloud simple.

It's a tool designed to be used by anyone - not just developers - and doesn't require coding knowledge. Using a familiar interface it can take just a few minutes to get your application or website ready for Windows Azure.

You are invited to take part in a three month preview of the FullArmor CloudApp Studio, starting on April 14, 2011 and ending on June 30, 2011.

Steps to Get Started with the FullArmor CloudApp Studio Preview

  1. Sign up for a CloudApp Studio Preview Account.
  2. Log-in to the Windows Azure Dev portal as normal.
    • If you don’t already have a Windows Azure account, you can try Windows Azure for 30 days – at no cost. Click here and use Promo Code DPWE01 for your free Windows Azure 30 Day Pass. No credit card required*.
  3. Select Beta Programs, then VM Role, then Apply for Access [takes 3-7 business days].
  4. Once your VM Role access has been enabled, download the FullArmor CloudApp Studio Quick Start Guide for information on how to get started.
  5. That's it! Log into FullArmor CloudApp Studio to start migrating your applications or websites onto the Windows Azure platform.

FullArmor CloudApp Studio Tech Specs:

  • A single disk virtual machine running on Windows Server 2008 R2 Hyper-V X64
  • Admin access to the virtual machine (for the virtual machine to Azure VMRole web site migration)
  • Find more information in the CloudApp Studio Technical Guide

Get more out of the cloud with Microsoft Platform Ready for Windows Azure

Microsoft Platform Ready offers no-cost technical support, developer resources, software development kits and marketing resources to help you get into the cloud faster.

With Microsoft Platform Ready you gain exclusive access to developer resources and can test compatibility with online testing tools. Drive sales with customized marketing toolkits and when you’re done you can exhibit your solutions in Microsoft catalogs.

Windows Azure Platform developer resources

Try Windows Azure for 30 days – at no cost*

Take advantage of the CloudApp Studio preview with a free Windows Azure 30 day pass. You can quickly migrate your app or website to the cloud and see how it performs on the Windows Azure platform.

Use Promo Code DPWE01 to get your free 30 day Windows Azure Pass today

*Windows Azure 30 Day Pass Terms and Conditions

Note: If you would like to continue to use the Windows Azure platform after your 30 day pass has expired, you will need to sign up for a Windows Azure paying account here.

Julie Lerman (@julielerman) answered Why can’t I Edit or Delete my Data Service data? on 4/14/2011:

image I spent waaaaaay too much time on this problem yesterday and wanted to share the solution, or more accurately, the workaround.

I’m creating an Azure hosted Data Service and to as a consumer, an Azure hosted MVC application.

imageMy app allows users to query, create, edit and delete data.

I have the permissions set properly in the data service. In fact, in desperation I completely opened up the service’s permissions with

config.SetEntitySetAccessRule("*", EntitySetRights.All);

But calling SaveChanges when a modification or delete was involved always threw an error: Access is Denied, and in some situations a 404 Page Not Found.

When you develop Azure apps in Visual Studio, you do so with Visual Studio running as an Admin, and the Azure tools use IIS to host the local emulator. I so rarely develop against IIS anymore; I mostly just use the development server (aka Cassini) that shows up as localhost with a port number, e.g., localhost:37.

By default IIS locks down calls to PUT, MERGE and DELETE (i.e. update and delete).
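For anyone retracing these steps: besides Handler Mappings, IIS 7 request filtering is another place verbs can be restricted. A web.config sketch that explicitly allows the verbs OData uses (one place to check; not necessarily the culprit in this case):

```xml
<system.webServer>
  <security>
    <requestFiltering>
      <verbs allowUnlisted="true">
        <add verb="PUT" allowed="true" />
        <add verb="MERGE" allowed="true" />
        <add verb="DELETE" allowed="true" />
      </verbs>
    </requestFiltering>
  </security>
</system.webServer>
```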

It took me a long time to even realize this but Chris Woodruff asked about that.

So I went into IIS and even saw that “All Verbs” was checked in the Handler Mappings and decided that wasn’t the problem.

Eventually I went back and explicitly added PUT, POST, MERGE, DELETE and GET to the verbs list.

But the problem still didn’t go away.

I deployed the app to see if Azure would have the same problem and unfortunately it did.

I did a lot of whining (begging for ideas) on twitter and some email lists and none of the suggestions I got were panning out.

Today I was very fortunate to have Shayne Burgess and Phani Raj from Microsoft take a look at the problem, and Phani suggested using the workaround that the WCF Data Services team created for this exact problem – when POST is being honored (e.g. insert) but PUT, MERGE and DELETE are being rejected by the server.

There is a property on the DataServiceContext called UsePostTunneling.

Setting it to true essentially tricks the request into thinking it’s a POST and then, when the request gets to the server, says “haha, just kidding, it’s a MERGE or DELETE”. (My twisted explanation…Phani’s was more technical.)

This did the trick both locally and on Azure. But it is a workaround and our hope is to eventually discover the root cause of the problem.
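In client code the workaround is a one-liner on the context; a minimal sketch (the context class, Customers entity set, and service URI below are placeholders):

```csharp
// "MyEntities" is a placeholder for the generated DataServiceContext subclass.
var context = new MyEntities(
    new Uri("http://myapp.cloudapp.net/MyDataService.svc"));

// Send MERGE/PUT/DELETE as POST plus an X-HTTP-Method header,
// so servers that block those verbs still accept the request.
context.UsePostTunneling = true;

var customer = context.Customers.First();
customer.Name = "Updated name";
context.UpdateObject(customer);
context.SaveChanges(); // goes out as POST + X-HTTP-Method: MERGE
```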

Here's a blog post by Marcelo Lopez Ruiz about the property:

Hopefully my blog post has some search terms that will help others with the same problem find it. In all of the searching I did, I never came across post tunneling, though I do remember seeing something about flipping POST and PUT in a request header; the context was not in line with my problem enough for me to recognize it as a path to the solution.

Barbara Duck (@MedicalQuack) reported HealthVault Begins Storing Medical Images (Dicom) Using Windows Azure Cloud Services With Full Encryption on 4/14/2011:

This is cool and makes me think of what my mother as a senior has gone through with 2nd opinions and so on.  She has literally had to enlist the help of friends to pick up images and take them in hand to various doctors’ offices.  Yes, that is a pain, and having HealthVault as a storage point is great.  Dicom images are big files, so that is where the cloud comes in.  If you have images already in your possession, there’s an interface to upload them to HealthVault. 

If you share with a provider, they can download and keep the images in your record, so again this is a big timesaver.  Imaging centers can automatically send your images to your personal health record with HealthVault. 

The one vendor mentioned has a “drop box” for files, and if you have used any of those types of sharing programs, it works well and is pretty simple to use.  There’s always a link under resources on the right hand side of this site if you want to get started. 

It has been a while since I mentioned this: you can track your prescriptions from Walgreens and CVS too and have them all in one place.  Also, CVS Minute Clinic works with HealthVault, and from conversations I have had with a few locations, they now pretty much ask you right up front if you want to connect with HealthVault or Google Health too. 

I have uploaded documents to my file, but again, imaging files are huge, so using the cloud takes care of the “size” issue.  What is also nice is having the Dicom viewer and not needing another piece of software to view the images.  I can use one less image viewer on my computer too.  BD 

I'm always talking about how lame the health IT landscape is --- and with good reason. But there is one domain in which advances have been consistent and amazing and clearly impactful to care: medical imaging. Our ability to look inside a body and see with incredible detail what's going on inside is just staggering.

Well, today there is no way I could be more over-the-top excited to say --- medical imaging is now a core part of the HealthVault platform. We're talking full diagnostic quality DICOM images, with an integration that makes it immediately useful to consumers and their providers:

  • A new release of HealthVault Connection Center that makes it super-easy to upload images from a CD or DVD, and enables users to burn new discs to share with providers --- complete with a viewer and "autoplay" behavior that providers expect.
  • Full platform support for HealthVault applications to store and retrieve images themselves, enabling seamless integration with radiology centers and clinical systems.

Of course, we have to ensure that we keep the data secure and private as well --- we do this with a really cool piece of code that creates unique encryption keys for every image, stores those keys in our core HealthVault data center, and only sends encrypted, chunked data to Azure. This effectively lets us use Azure as an infinitely-sized disk --- warms this geek's heart to see it all come together.
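The per-image-key pattern he describes can be sketched roughly like this; to be clear, this is an illustration of the idea only, not HealthVault's actual code, and every name in it is invented:

```csharp
// Illustration: generate a unique AES key per image, keep the key in the
// trusted store, and ship only ciphertext out to blob storage.
using System.IO;
using System.Security.Cryptography;

static class ImageProtectionSketch
{
    static byte[] EncryptImage(byte[] dicomBytes, out byte[] key, out byte[] iv)
    {
        using (var aes = Aes.Create()) // fresh key/IV for every image
        {
            key = aes.Key;   // would stay in the core HealthVault data center
            iv = aes.IV;
            using (var ms = new MemoryStream())
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(),
                                             CryptoStreamMode.Write))
            {
                cs.Write(dicomBytes, 0, dicomBytes.Length);
                cs.FlushFinalBlock();
                return ms.ToArray(); // only this ciphertext goes to Azure
            }
        }
    }
}
```

In the scheme described above, the returned ciphertext would then be uploaded to Azure in chunks (e.g., as blob blocks), while the key and IV never leave the core data center.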



It’s all about the pictures: medical imaging arrives at HealthVault! - Family Health Guy - Site Home - MSDN Blogs

Bruce Kyle reported Windows Azure Updates SDK with New Deployment Tool, CDN Preview, Geo Performance Balancing on 4/13/2011:

Announcements at MIX11 covered a myriad of new Windows Azure services and functionality, including:

  • A new Web Deployment Tool that is part of an updated SDK.
  • Preview of Windows Azure Content Delivery Network (CDN) for Internet Information Services (IIS) Smooth Streaming.
  • Preview of Windows Azure Traffic Manager.

The announcements were made by the Windows Azure team on their blog Windows Azure News from MIX11.

New & Updated Windows Azure Features

Announcements include:

  • An update to the Windows Azure SDK that includes a Web Deployment Tool to simplify the migration, management and deployment of IIS Web servers, Web applications and Web sites. This new tool integrates with Visual Studio 2010 and the Web Platform Installer. Get the tools from Download Windows Azure SDK.
  • A community technology preview (CTP) of Windows Azure Traffic Manager, a new service that allows Windows Azure customers to more easily balance application performance across multiple geographies. To get the CTP, see the Windows Azure Traffic Manager section of Windows Azure Virtual Network.
  • A preview of the Windows Azure Content Delivery Network (CDN) for Internet Information Services (IIS) Smooth Streaming capabilities, which allows developers to upload IIS Smooth Streaming-encoded video to a Windows Azure Storage account and deliver that video to Silverlight, iOS and Android Honeycomb clients. See Smooth Streaming section at Windows Azure Content Delivery Network (CDN).
AppFabric Offers Caching, Single Sign-On

Previously reported in this blog:

  • Updates to the Windows Azure AppFabric Access Control service, which provides a single-sign-on experience to Windows Azure applications by integrating with enterprise directories and Web identities.
  • Release of the Windows Azure AppFabric Caching service in the next 30 days, which will accelerate the performance of Windows Azure and SQL Azure applications.

See Windows Azure AppFabric Release of Caching, Single Sign-on for Access Control Service.

    About MIX

    In its sixth year, MIX was created to foster a sustained conversation between Microsoft and Web designers and developers. This year’s event offers two days of keynotes streamed live and approximately 130 sessions available for download, all at

    See The Web Takes Center Stage at Microsoft’s MIX11 Conference.

    About Azure Traffic Manager

    Windows Azure Traffic Manager is a new feature that allows customers to load balance traffic to multiple hosted services. Developers can choose from three load balancing methods: Performance, Failover, or Round Robin. Traffic Manager will monitor each collection of hosted services on any http or https port. If it detects that a service is offline, Traffic Manager will send traffic to the next best available service. By using this new feature, businesses will see increased reliability, availability and performance in their applications.

    About Content Delivery Network

    The Windows Azure Content Delivery Network (CDN) enhances end user performance and reliability by placing copies of data closer to users. It caches your application’s static objects at strategically placed locations around the world.

    Windows Azure CDN provides the best experience for delivering your content to users. Windows Azure CDN can be used to ensure better performance and user experience for end users who are far from a content source, and are using applications where many ‘internet trips’ are required to load content.

    Both Channel 9 and this blog are served to you through CDN.

    Getting Started with Windows Azure

    See the Getting Started with Windows Azure site for links to videos, developer training kit, software developer kit and more. Get free developer tools too.

    For a free 30-day trial, see Windows Azure platform 30 day pass (US Developers) No Credit Card Required - Use Promo Code: DPWE01.

    For free technical help in your Windows Azure applications, join Microsoft Platform Ready.

    Learn What Other ISVs Are Doing on Windows Azure

    For other videos about independent software vendors (ISVs):

    The All-In-One Code Framework Team posted New Microsoft All-In-One Code Framework “Sample Browser” v3 Released on 4/13/2011:

    Microsoft All-In-One Code Framework Sample Browser v3 was released today, aiming to provide a much better code sample download and management experience for developers.

    Install: (a tiny ClickOnce install)

    Background of Sample Browser v3

    Last month, we reached the milestone of releasing Sample Browser v2, whose features focus on providing a better experience for browsing and searching the roughly 600 code samples of Microsoft All-In-One Code Framework.  We heard important feedback from developers after that release: the sample browser, bundled with all 600 code samples, was too large to download and hard to update when new releases came.  We took this feedback seriously and made the enhancement in this new version of Sample Browser.  In partnership with the MSDN Code Sample Gallery, the code sample download and management experience is now dramatically improved in Sample Browser v3.

    Key features of Sample Browser v3
    1. "on-demand" sample downloading

    The Sample Browser v3 itself is a very tiny and easy-to-install application. It searches for Microsoft All-In-One Code Framework code samples.  Based on your needs, you can choose between downloading an individual code sample by clicking the “Download” button next to it:

    and downloading all code samples in one click:

    2. Sample download management

    In Sample Browser v3, you can configure the folder for managing all downloaded code samples.

    We recommend that you download all code samples to a centralized place, e.g. "D:\Samples".  Microsoft All-In-One Code Framework Sample Browser will help you manage the downloaded code samples.  If a sample has already been downloaded, Sample Browser displays the “Open” button next to the code sample:

    If an update becomes available for a code sample, an [Update Available] button will remind you to download the updated code sample.

    3. Auto-update

    Thanks to ClickOnce, you will automatically get updates of Microsoft All-In-One Code Framework Sample Browser when its new versions come out.

    Besides the above key improvements, the font in the Sample Browser user interface is adjusted to have better contrast with the background color.

    If you have any feedback, please feel free to let us know by sending email to us here: Your suggestions are very much appreciated.


    Thanks to Steven Wilssens and his MSDN Code Sample Gallery team for enabling us to implement the on-demand download feature.

    Thanks to my team members Leco Lin and Dandan Shi for writing this new version of Sample Browser.

    Thanks to Celia Liang and Lixia Dai for creating banners and evangelizing the new Sample Browser.

    Brian Loesgen reported Just released: Windows Azure Migration Scanner on 4/12/2011:

    From the great minds of Hanu Kommalapati, Bill Zack, David Pallmann comes a new rules-based tool that can scan your existing .NET app and alert you to any issues you may have in migrating to Windows Azure.

    Great work guys!

    You can get it at

    Windows Azure Migration Scanner
    Windows Azure Migration Scanner (WAMS) is a tool that scans application code and identifies potential Windows Azure migration issues needing attention.
    WAMS works by scanning source files and looking for matches against a rules file. The rules file specifies text regular expression patterns that suggest areas requiring attention. For example, if a C#/.NET application makes references to the System.Messaging namespace, that flags a warning that use of MSMQ needs to be converted to Windows Azure Queue Storage.
    While Azure Migration Scanner can’t identify all migration issues using this technique, it will give you a heads-up on many migration issues. Since the rules file is extensible, you can add new rules and even support new languages by adding your own definitions.
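    The core scanning technique is simple to picture; here is a minimal sketch of the line-by-line regex matching described above (an illustration only, not WAMS's actual source; the rule values shown are hypothetical):

```csharp
// Sketch of rules-based line scanning: every source line is tested against
// each rule's regular expression, and matches are reported with context.
using System;
using System.IO;
using System.Text.RegularExpressions;

class ScanRule
{
    public string Category;  // e.g. "MSMQ"
    public string Pattern;   // e.g. @"System\.Messaging"
    public string Guidance;  // e.g. "Convert MSMQ usage to Azure Queue Storage."
}

class ScannerSketch
{
    static void ScanFile(string path, ScanRule[] rules)
    {
        string[] lines = File.ReadAllLines(path);
        for (int i = 0; i < lines.Length; i++)
            foreach (ScanRule rule in rules)
                if (Regex.IsMatch(lines[i], rule.Pattern, RegexOptions.IgnoreCase))
                    Console.WriteLine("{0}({1}): {2} - {3}",
                        path, i + 1, rule.Category, rule.Guidance);
    }
}
```

    Matching one line at a time is exactly what makes the approach both simple and, as noted below, occasionally foolable.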
    There are 3 ways to use the Azure Migration Scanner:
    1. WAMS Desktop Tool: wams.exe, a WPF-based desktop GUI tool
    2. WAMS Command Line Tool: wamscmd.exe, a console command
    3. WAMS Library: wamslib.dll, the WAMS library that you can invoke from your own code
    The WAMS Desktop Tool

    The WAMS Desktop Tool provides an easy to use GUI for scanning source code for migration. To use it, fill in the inputs (described below) and click the Start Scan button. When you run a scan, your setting preferences are saved for next time.

    • Scan Directory: the folder where the source code is.
      • All subfolders will also be scanned.
      • You can specify multiple folders by separating them with a semicolon.
    • Keyword File: the keyword rules XML file defining the scan rules.
      • By default, this is an online hosted rules file.
      • If you wish to view, extend or replace the rules, see the Keywords.xml file that is included in the installation which you can customize.
      • To use a customized file, specify its file location or URL here.
    • Output Options:
      • Choose the output: display results, save results to a .csv file, or both.
    • Remove duplicates
      • If checked, identical warnings (same category of issue for the same file) will be condensed into a single warning.
    • Auto Open
      • If checked, the output CSV will be opened automatically at the end of the scan.
    • Display Options:
      • All: Display all results
      • Top: Display the top n results (enter the max number to display)
    • Output Directory: the folder to write the csv output file to.
    • Output File: the name of the output file.
      • Click the Append Timestamp button to add a timestamp prefix to the filename.

    When you have defined the above parameters, click the Start Scan button. During the scan, a dialog will inform you of progress.

    When the scan completes, the results will be displayed and/or reside in an output file depending on the preferences you specified. The results display looks like this. Close the results window when done. The result columns are:
    • Filename: full path of the file that was scanned
    • Line No: line number of the source file
    • Category: the category of the rule that triggered a warning
    • Level: the severity of the warning
    • Line: the source code line
    • Guidance: a guidance note explaining the warning and what to do about it


    The output file is a CSV file which can be viewed in Microsoft Excel. Like the results display, the output is one line per issue showing filename, line number, issue category, level, source code line, and a guidance note.

    To adjust the column widths to fit the data, select Format > AutoFit Column Width from the Home tab.


    The WAMS Command Line Tool
    The WAMSCMD tool provides the same general functionality as the WAMS Desktop Tool in a command-line form that you can include in scripts and build processes.
    The command line form for wamscmd.exe is shown below.
    wamscmd -i <input-directory> -option [ <parameter> ]
    The wamscmd parameters are:

    • -i <path>: specifies the input directory(ies); separate multiple directories with a semicolon (e.g. -i c:\project1\src;c:\project1\inc)
    • -o <path>: specifies the output directory (default: current directory) (e.g. -o c:\project1\analysis)
    • -f <path>: specifies the output filename (default: wams.csv) (e.g. -f azurescan.csv)
    • -k <path>: specifies the file location or URL of the keywords.xml (rules) file (default: the online rules file) (e.g. -k keywords.xml)
    • A timestamp switch adds the current date yyyymmddhhmmss as a prefix to the output filename.
    • A quiet-mode switch suppresses console output.

    The only required parameter is the –i switch, which specifies the path of the source code to be scanned. Multiple paths may be specified if desired. The output file is a comma-separated values (CSV) file named <runname><filename>. You can set the output file location, run name, and filename with command line parameters. If not specified, these default to yyyymmddhhmmsswams.csv in the current directory.

    Keyword Rules File (Keywords.xml)
    The keywords.xml rules file contains the patterns for discovering and reporting migration issues. Below is a partial listing.
    The root element of Keywords.xml is <scancatalog>, which contains multiple <category> elements. Each <category> element describes a scan category, which specifies:
    • Category name
    • File type(s) to scan
    • Severity level
    • Whether or not case should be ignored in matching
    • Guidance text
    • Multiple <keyword> child elements that define regular expression patterns to match
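    Assembled from the schema just described (and the System.Messaging/MSMQ example earlier in the post), a hypothetical rules file entry might look like:

```xml
<scancatalog>
  <category name="MSMQ" filetypes=".cs" issuelevel="HIGH" ignorecase="true"
            guidance="MSMQ is not available in Windows Azure; consider converting to Windows Azure Queue Storage.">
    <keyword>System\.Messaging</keyword>
  </category>
</scancatalog>
```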
    WAMS Library
    The WAMS library is the core scanner that powers both the WAMS desktop tool and command line tool. The best way to understand its workings is to examine the source code of these projects, which is included in the codeplex project.
    WAMS can be a useful addition to your collection of migration tools but will not identify all migration issues. It has these limitations:
    • WAMS scans one source code line at a time in its rules matching. This means the current implementation cannot accommodate rules that have to look at multi-line patterns of program code or markup.
    • WAMS can be fooled. For example a string literal or a comment could contain a pattern match meant for program code.
    • WAMS’s rules don’t cover every migration issue. WAMS has a good starter set of rules but they are neither complete nor do they cover very many languages yet—just C#, Web.config, and .sql scripts at present. The breadth and coverage of rules will be expanded over time (volunteers welcome!).
    Community Project
    Windows Azure Migration Scanner is a community project maintained at
    Contributors: Hanu Kommalapati, Bill Zack, David Pallmann [see post below]

    The All-In-One Code Framework Team posted More Searchable, Interactive and Optimized for Code Samples - Microsoft All-In-One Code Framework has a new home on MSDN Code Sample Gallery on 4/12/2011:

    Microsoft All-In-One Code Framework - the free, centralized and developer-driven code sample library is having a new home on MSDN Code Sample Gallery to deliver a more friendly user experience to tens of thousands of developers worldwide.

    The new MSDN Code Sample Gallery provides developers a greater experience around learning from code samples and enables access to the code samples where it is most relevant and in context. Thanks to the partnership with the great MSDN Code Sample Gallery team, Microsoft All-In-One Code Framework is moving its ever growing code sample library and its code sample request service to the new gallery. Here are the key improvements based on this migration.

    ● Sample Browser online coming this week

    We heard important feedback from developers after we released Sample Browser v2 in February: the sample browser, bundled with all 580+ code samples, was too large to download and hard to update when new releases came.  We took this feedback seriously.  With our move to MSDN Code Sample Gallery, we are now able to solve this problem and give customers a much better sample download and management experience.  We will release a new Sample Browser in just a few days.  It is a very tiny ClickOnce-deployed application, so it is easy to download, install and upgrade.  It connects to MSDN Code Sample Gallery to search and browse code samples of Microsoft All-In-One Code Framework.  Users can download code samples on demand, and easily manage the downloaded code samples in Sample Browser online. 

    ● Better search function specially designed for code samples

    Our code sample pages in MSDN Code Sample Gallery provide you with a rich html description, online code browsing, and an optional Q/A section where you can ask questions about the code sample. You can directly copy and paste the code from the browse code tab or download the code sample and open it inside Visual Studio.

    ● Better feedback channels

    You can vote a code sample up or down based on its quality. You can also submit feedback on any code sample. Your voice will be sent directly to the Microsoft All-In-One Code Framework team to improve the content.

    ● Request a Code Sample

    Microsoft All-In-One Code Framework is migrating its code sample request service to the new MSDN Code Sample Gallery because of its direct support of requesting a code sample.  You can submit a request for a new code sample. Both Microsoft and the community can decide to write a code sample for your request. Other users can vote up a request. This creates a great mechanism for discovering content gaps and optimizing new sample efforts by pre-determining whether a code sample will be useful and popular. 

    ● Localization of Code Samples coming soon

    With the direct support of sample localization from the new MSDN Code Sample Gallery, customers will see an improved experience in finding localized Microsoft All-In-One Code Framework code samples.  We will start from integrating code samples localized to Simplified Chinese to the gallery pages. These samples are currently available in  If you want to see more localized code samples, please comment in the blog entry "Do you want "localized" code samples from Microsoft?", or email

    With the integration of all these great new features of MSDN Code Sample Gallery, we embrace the opportunity to serve developers' needs better and make developers' lives much easier.


    Our current portal on CodePlex will still be well maintained and updated, but we are encouraging developers to download & request code samples from our new home on MSDN Code Sample Gallery if you like its new features.

    David Pallman (@davidpallman) posted Announcing Windows Azure Migration Scanner on 4/10/2011 (missed when posted):

    I’m pleased to announce the availability of a new community tool to aid in migration of applications to Windows Azure: Windows Azure Migration Scanner, or WAMS. WAMS scans your source code and brings potential migration issues to your attention.

    As anyone knows who has tried it, a migration of software from one environment to another usually involves accommodating some differences and this can mean a small or a large amount of work at times. Accordingly, you really want to know what you’re in for before you start. Indeed, knowing the scope may help you decide whether or not the migration is even worth doing at all. In a move to cloud computing, migration analysis is doubly important since the cloud is so different from the enterprise.

    There are already some great technical and business tools out there for helping scope a migration to Windows Azure such as SQL Azure Migration Wizard, the Windows Azure TCO Tool, and Azure ROI Calculator. WAMS comes alongside these with one more form of insight that comes from scanning source code.

    What WAMS does is scan your source files, looking for regular expression matches against a keyword rules base. For example, a rule like this one tells WAMS to raise an issue when it sees a reference to transparent data encryption in SQL Server scripts because SQL Azure does not have a matching feature. You can customize the rules file with your own rules, or use the default rules which are maintained in a central location online.

    <category name="SQLAzure_TDE" filetypes=".sql" issuelevel="HIGH" ignorecase="true"
      guidance="SQL Azure does not currently support transparent data encryption (TDE).">
      <keyword>ENCRYPTION KEY</keyword>
      <keyword>SET ENCRYPTION</keyword>
    </category>

    This approach makes it simple to add rules for any text-based code, including scripts and configuration files. The rules mechanism is fairly powerful since you can specify not just text keywords but complete regular expressions.

    WAMS can display its findings in a window, output them to a CSV file you can view in Excel, or both. Each issue reported describes the filename, line number, rule category, severity level, code line, and guidance text. Options allow you to consolidate duplicate issues into a short list.

    WAMS is also supplied in the form of a command line tool you can use in scripts and builds. There’s an option to add a timestamp to the output file name.

    WAMS has some limitations. In its first incarnation, its rules base only applies to C#, .SQL scripts, and .NET config files. As we regularly extend the rules base, we’ll get more breadth and depth in the types of source code included and the coverage of the rules (contact me if you’d like to help in that effort!). WAMS can also be fooled, since the contents of a string literal or a comment could conceivably contain a match to one of its rules. Still, we think WAMS is a useful tool: while it is unlikely to uncover all of your migration issues, it’s nonetheless valuable in bringing considerations to your attention you might have otherwise missed.

    WAMS is the result of community collaboration. It resides on codeplex at and the project includes an installable .msi, documentation, the keywords rules XML file, and source code.

    <Return to section navigation list> 

    Visual Studio LightSwitch

    Beth Massi (@bethmassi) announced the availability of CodeCast Episode 104: Visual Studio LightSwitch with Beth Massi:

    I woke up early this morning to do a phone interview with an old friend of mine, Ken Levy, about Visual Studio LightSwitch. We chatted about what LightSwitch is and what it’s used for, some of the latest features in Beta 2, deployment scenarios including Azure deployment, and a quick intro to the extensibility model. I’m always pretty candid in these and this one is no different – lots of chuckling and I had a great time as always. Thanks Ken!

    Check it out: CodeCast Episode 104: Visual Studio LightSwitch with Beth Massi (Length: 47:44)

    Links from the show:

    And if you like podcasts, here are a couple more episodes with the team:

    Paul Patterson explained Microsoft LightSwitch – Simple Stakeholder Management in a 4/12/2011 post:

    Like many seasoned (NOT OLD!) information technology folk, experience has taught us a number of things. Here is a small tidbit of knowledge that I hope you can learn from and use for your own journey as an information technology professional. It’s a simple stakeholder management architecture implemented using LightSwitch.

    (With Downloadable Solution Source)

    Trust Me. I've Been There!

    Yes, the source project is available for download – at the bottom of the article…

    My experience and breadth of knowledge come from many years of line of business solution development and support. From my early days as a Business Analyst to my roles in development, consulting, and solution architecture, I have learned many an interesting thing.

    Back in the day, as a young whippersnapper, I thought I knew everything. I could walk through a class file and tell you exactly what every snippet was doing, with what namespaces, and why. I was confident, and could hold my own with any other like-minded people. I certainly had the passion but, as experience has taught me, I was a bit too presumptuous.

    Now that I have some years behind me (not that many though!), I thought I would start offering up some of the things I have learned so that others can benefit from the same and learn from.

    This first article is about what I have learned about the most common requirements, or inputs, into creating line of business software – stakeholder management.

    Stakeholder Management

    First off, and for sake of clarity, a definition…

    Stakeholder: a person, group, organization, or system who affects or can be affected by an organization's actions. (Thank you, Wikipedia.)

    Be it a customer, supplier, user, employee, or whatever, every solution I have worked on has requirements for managing business entities that can be abstracted into a high-level stakeholder type. Yes, each type of stakeholder entity may have different roles and responsibilities; however, at its root, a stakeholder is a stakeholder.

    So, back to what I have learned over the years. Stakeholder management is arguably the most critical piece of the line of business solution pie. Everything that happens in a solution revolves around how a stakeholder is managed. This includes how the processes, both system and business, handle stakeholder information. Stakeholder inputs and outputs are key to almost every process.

    Here is how I have engineered stakeholder information management, in the context of a solution.

    Most stakeholder business types can be generalized into a Stakeholder entity. For example, let's say we have a requirement to manage customer, supplier, and manufacturer information for an application that will be used by a service-based company. A customer, supplier, and manufacturer can each be considered a type of stakeholder.

    Here is an example generalization of each of the customer, supplier, and manufacturer stakeholder types…

    Just to make things interesting, I am going to throw in a couple more entities that will better show how this stakeholder abstraction works. Address information is a great entity to use, because what system does not require some information about a stakeholder’s address? I am also going to throw contacts into the mixer too. Oh, how about if a contact has addresses of their own too?

    Cool, so how do I implement this in LightSwitch? Well I am glad you asked. Here is how…

    Open Visual Studio and create a LightSwitch project. In this example, I’ve named my project SimpleStakeholderManagement.

    Create a new table named Stakeholder. Add StakeholderName and StakeholderType properties to the table as shown below.

    For this example, I know what types of Stakeholders will be managed by my application, so I am going to set up my Stakeholder data entity so that a predefined type can be applied to it.

    Select the StakeholderType property, and click the Choice List… link from the properties panel.

    Enter a list of choices as shown below, and click the OK button to save the choice list…

    With that, I am going to create a simple screen that will be the starting point for the solution.

    From the Stakeholder table designer, click the Screen button.

    From the Add New Screen dialog, select the List and Details Screen template. Then select Stakeholders from the Screen Data dropdown. Then click the OK button…

    I’m not too concerned about the screen designer at this point; however, I am curious what it looks like, so press the F5 key to run the application.

    Nothing fancy, but it works. In my example, I created a number of sample stakeholder records, each with an appropriate name and stakeholder type assigned…

    Now to provide for the management of addresses. Close the application and head back over to Visual Studio.

    Create a new table named Address with the following properties…

    For the AddressType property, create a Choice List with the following values…

    Now I want to create the relationship between the Address table and the Stakeholder table.

    Click on the Relationship button on the top of the Address table designer. In the Add New relationship dialog, select the Stakeholder entity from the To Name dropdown. Then select the Zero or One value from the Multiplicity dropdown, and then click the OK button to create the relationship.

    The zero to one multiplicity provides for a relationship where addresses can exist without a related stakeholder. I am selecting the zero or one multiplicity because I also want to be able to attach addresses to contact entities, without the constraint of the address being attached to a stakeholder. Confused? Don’t worry, you’ll see it in action later.
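    In relational terms, a zero-or-one multiplicity amounts to a nullable foreign key on the Address side. A rough sketch of that shape (the field names are illustrative, not the actual schema LightSwitch generates):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Address:
    street: str
    # Zero-or-one relationships: each key may be set or left empty,
    # so an address can belong to a stakeholder, a contact, or neither.
    stakeholder_id: Optional[int] = None
    contact_id: Optional[int] = None

# An address attached to a stakeholder...
a1 = Address("1 Main St", stakeholder_id=1)
# ...and one attached only to a contact, with no stakeholder constraint.
a2 = Address("2 Side Ave", contact_id=7)
```

    Because neither key is required, the same Address table can serve both stakeholder addresses and (later in this walkthrough) contact addresses.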

    I now see the relationship in the Address table designer…

    Now open the designer for the StakeholdersListDetail screen created earlier. In the designer, click the +Add Address link in the Stakeholders panel on the left.

    Clicking the Add Addresses link will add the Addresses collection as a data item for the screen. The action also wires up the relationship with the stakeholders.

    Now drag and drop the Address data item onto the layout panel, dropping the item within the Details Column Rows Layout group. When dragging the item, you can carefully position it in the appropriate location by slightly moving the mouse back and forth, left to right. It takes practice, but you’ll get the hang of it.

    You’ll see that all the properties of the Address entity have been added to the screen, including the Stakeholder property. I don’t want that to show up, so delete it from the screen layout.

    Now, run the application to see what it looks like.

    Super. In my example, I selected the first stakeholder and added an address to it.

    Now I want to add contacts to the stakeholder, and then have it so that individual contacts can have their own address records.

    Back in Visual Studio, I select to add a new table. The new table is named Contact and contains ContactName, ContactType, and EmailAddress properties. For the ContactType I define a choice list and then save the table…

    With my new Contact table I create a relationship to the Stakeholder, as shown below…

    Then, I create a relationship to the Address table…

    Similar to the stakeholder-to-address relationship, the contact-to-address relationship is defined with a Zero or One multiplicity. Be careful to define the correct From and To entities, because the relationship is being created from the Contact table here, not the Address table.

    Here is what the Contact table designer looks like with the relationships.

    Now back to the StakeholdersListDetail screen designer.

    Just like adding the addresses collection, click the +Add Contacts link to add a related contacts collection data item to the screen. This will add a Contacts collection to the screen data items. Next we want to add the addresses for those contacts as data for the screen.

    Within the Contacts data item (not the root Addresses data item), click the +Add Addresses link.

    Now all we need to do is modify the layout a bit, and then place the data items we just added.

    In the layout designer, add a new group to the Details Column…

    Change the layout type to a Tabs Layout…

    Now drag and drop the Address Data Grid to the new Tabs Layout group…

    Now create a new Rows Layout group within the Tabs Layout group and then rename the new group to “Contacts”.

    Next, drag and drop the Contacts data item to the new Contacts group. Then select the Stakeholder property of the Contacts grid and delete it.

    Next, drag and drop the Contact Addresses collection (named Addresses1) from the data items to the Rows Layout group named Contacts in the layout. Then delete both the Stakeholder and Contact items within the newly added data grid.

    Now run the application.

    The application will run and the Stakeholder List Detail screen will display a listing of stakeholders. Two tabs will exist on the right. The first tab will show the addresses for the selected stakeholder. The second tab will show the contacts for that stakeholder. A second grid in that tab will show the addresses for the contact selected in the contacts grid above.
    One thing you may notice is that when selecting to add or edit an address, the modal dialog window that appears will show input fields for either a stakeholder or contact. To fix this, a screen will need to be created to use for creating and editing the Address entity.

    Back in Visual Studio, select to add a new screen. From the Add New Screen dialog, select the New Data Screen template. Then select the Address table from the Screen Data dropdown. Rename the Screen Name to CreateNewAndEditAddress.

    In the CreateNewAndEditAddress screen designer, delete the Stakeholder and Contact items from the screen layout.

    This screen is going to be set up as the screen to be used when adding or editing address records for both stakeholder addresses, and contact addresses. For this to work, we need to customize how the screen is created and used.

    The customization will go something like this…

    • The screen is going to be opened from one of the two (stakeholder or contact) address grids.
    • One of three variables is going to be passed to the screen when opened. These are AddressID, StakeholderID, and ContactID.
      • If an AddressID is passed to the form, then we know that an existing address is to be edited.
      • If only a StakeholderID is passed to the form, then we know that a new address is being created for a stakeholder.
      • If only a ContactID is passed to the form, then we know that a new address is being created for the contact.

    To set all this up, I added three data items to the screen. Each is an Integer type, named AddressID, StakeholderID, and ContactID respectively. Each of these new data items is optional and is configured as a parameter.

    E.g., here is the AddressID setup…

    Make sure that each of these new data items is set as a parameter and is not required…

    In the screen designer, select the Write Code dropdown button and click CreateNewAndEditAddress_InitializeDataWorkspace.

    Update the CreateNewAndEditAddress_InitializeDataWorkspace method with the following.

    Private Sub CreateNewAndEditAddress_InitializeDataWorkspace(saveChangesTo As System.Collections.Generic.List(Of Microsoft.LightSwitch.IDataService))
      If Not Me.AddressID.HasValue Then
        ' No AddressID was passed in, so create a new address record.
        Me.AddressProperty = New Address()
        If StakeholderID.HasValue Then
          ' A StakeholderID was passed, so attach the new address to that stakeholder.
          Dim existingStakeholder = (From s In DataWorkspace.ApplicationData.Stakeholders Select s Where s.Id = StakeholderID).First
          Me.AddressProperty.Stakeholder = existingStakeholder
        End If
        If ContactID.HasValue Then
          ' A ContactID was passed, so attach the new address to that contact.
          Dim existingContact = (From c In DataWorkspace.ApplicationData.Contacts Select c Where c.Id = ContactID).First
          Me.AddressProperty.Contact = existingContact
        End If
      Else
        ' An AddressID was passed, so load the existing address for editing.
        Dim existingAddress = (From a In DataWorkspace.ApplicationData.Addresses Select a Where a.Id = AddressID).First
        Me.AddressProperty = existingAddress
      End If
    End Sub

    Save everything and build the project.

    Open the StakeholdersListDetail screen. Expand the Command Bar node for the stakeholder Address data grid. Right-click the Add… button and select the Override Code menu item.

    Update the AddressesAddAndEditNew_Execute() method to look like the following…

    Private Sub AddressesAddAndEditNew_Execute()
      ' Pass only the selected stakeholder's Id, so the screen
      ' creates a new address for that stakeholder.
      Dim stakeholderID = Me.Stakeholders.SelectedItem.Id
      Application.ShowCreateNewAndEditAddress(Nothing, stakeholderID, Nothing)
    End Sub

    Now do the same for the command bar used for the Contact Addresses. For example…

    Note that I should have probably renamed Addresses1 to something more appropriate, but you get the idea.

    Make sure to update the same methods for these commands. For example…

    Private Sub Addresses1AddAndEditNew_Execute()
      ' Pass only the selected contact's Id, so the screen
      ' creates a new address for that contact.
      Dim contactID = Me.Contacts.SelectedItem.Id
      Application.ShowCreateNewAndEditAddress(Nothing, Nothing, contactID)
    End Sub
    Private Sub Addresses1EditSelected_Execute()
      ' Pass the selected contact address's Id, so the screen
      ' loads that existing address for editing.
      Dim addressID = Me.Addresses1.SelectedItem.Id
      Application.ShowCreateNewAndEditAddress(addressID, Nothing, Nothing)
    End Sub
    Private Sub Addresses1EditSelected_CanExecute(ByRef result As Boolean)
      ' Only enable the command when a contact is actually selected.
      If Me.Contacts.SelectedItem IsNot Nothing Then
        result = True
      End If
    End Sub

    The CanExecute method above is updated to make sure that a contact is actually selected before trying to edit a contact address.

    Finally, clean things up a bit by removing the Address Detail and Create New And Edit Address menu items.

    Save everything and then run the application. You should now have yourself a generic address editor used for both adding and editing both stakeholder and contact addresses.

    This is a good starting point to building out your own contact management system.

    Downloadable Visual Studio 2010 Project –

    <Return to section navigation list> 

    Windows Azure Infrastructure and DevOps

    The Windows Azure Service Dashboard reported repeated problems with my OakLeaf Systems Azure Table Services Sample Project - Paging and Batch Updates Demo on 4/15/2011:


    Following are the messages related to this problem:

    image[RESOLVED] Windows Azure Network Performance Degradation

    • 3:35 AM UTC We are experiencing an issue with Windows Azure in the South Central US region, which may impact hosted services deployed in this region. Customers may experience increased latency on network connectivity, or dropped connections. We are actively investigating this issue and working to resolve it as soon as possible. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.
    • 4:33 AM UTC The repair steps are underway to restore full network service in this region. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.
    • 4:56 AM UTC We have implemented a mitigation and are working actively to restore full network service as soon as possible. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.
    • 5:26 AM UTC The repair steps have been implemented and full recovery is complete. Network service is fully functional again in South Central US region. We apologize for any inconvenience this caused our customers.
    • 5:40 AM UTC The repair steps have been implemented and full recovery is complete. Network service is fully functional again in South Central US region. We apologize for any inconvenience this caused our customers.

    I have only one instance running, so I can’t determine whether applications with the high-availability feature (two instances) would have encountered this issue.

    David Linthicum asserted “Even with the moves by virtualization giants, the virtualized hybrid cloud could be farther off than you think” as a deck for his Why virtualized hybrid clouds won't happen soon article of 4/15/2011 for InfoWorld’s Cloud Computing blog:

    image The architectural concept is compelling: Have sets of virtualized servers that exist in your data center, with the ability to move virtual machines dynamically between your data center and a public cloud provider -- in other words, a virtualized hybrid cloud.

    image What's so compelling is that you can load balance between your private and public clouds at the VM level, moving virtual machines between on-premise and public cloud servers, running them where they will be most effective and efficient. Perhaps you can even use an auto-balancing mechanism, and dynamic management allows a system to pick the location of the VMs based on cost and SLA information. Why would you not want one of those?

    However, I'm not sure those who drive existing virtualization technology understand the value here. Perhaps it's even counterproductive to their business model. In short, to make a virtualized private cloud possible, cloud vendors would have to provide a mechanism that allows virtual machines to be executed outside of their technology, so their license revenue would surely take a hit. Moreover, as time goes on, public clouds will provide the most cost-effective platform for these VMs, and support for the virtual hybrid cloud would offer customers a quick migration path.

    Thus, the movement made in this direction by vendors will have the core purpose of selling virtualization technology, not expanding provider choice.

    Regardless, it will be interesting to see how virtualization vendors roll out this technology. I suspect fees will be levied on public cloud providers, who will pass them to the public clouds' users. The public cloud option would become much less viable.

    The conundrum is enterprises want to leverage this technology via VM-level dynamic migration between private and public clouds. Providing a dynamic mechanism to move virtual machines around public and private server instances removes a lot of barriers around cloud computing, such as lock-in, security, governance, cost, and performance.

    Unfortunately, there are no good incentives to push the virtualization vendors in the true and open virtualized hybrid cloud direction. There are solutions emerging, but it's going to be a while before we have a usable option.

    James Downey posted Platform-as-a-Service: Liberation from System Administrators to his Cloud of Innovation blog on 4/14/2011:

    After finishing my prior post on Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS), I realized I perhaps misused the acronym PaaS. I described, or what markets as, as PaaS. While is a platform sold as a service, it would not qualify as PaaS according to the emerging definition.

    A platform is what we build upon. Its meaning varies with context. A programmer writing assembly language might describe a chip’s instruction set as a platform. A power user building a macro might describe Microsoft Word as a platform. And in between hardware and end-user applications lie many layers in the stack—operating systems, middleware, and services from email to ecommerce, each of which might serve as a platform. As a systems integrator, I tend to think of any customizable system with a rich API as a platform, which aptly describes

    But as cloud computing has evolved, the concept of PaaS has taken on a more specific, though far from fixed, meaning. PaaS vendors offer the following:

    • Middleware in the cloud that both runs applications and provides services, such as messaging, consumed by applications.
    • A deployment abstraction layer that frees developers from the tedium of deploying software to servers. Instead, developers deploy to the cloud. The PaaS service itself meets capacity demand by distributing the software to servers around the globe.
    • Data storage in the cloud.
    • Optionally, tools for moving code through development, QA, and production.

    PaaS removes the bottleneck of IT infrastructure. It frees developers to launch applications on their time frame, rather than that of system administrators. System administrators, responsible for the reliability of enterprise networks, value stability over rapid change. Out of necessity, system administrators put up road blocks and delays. PaaS vendors eliminate this friction by taking on the responsibilities once owned by system admins: stability, robustness, scalability.

    Gartner declared 2011 as the year of PaaS, and so far the prediction looks right on the mark. Several vendors have recently defined offerings in the space, making for a rapid pace of innovation.

    At the moment, I’m just trying to keep track of the choices. As I was reading posts on PaaS, I started putting together a list of the major offerings. I’ll share it below. It’s incomplete. I’m sure I’ve missed important vendors. And certainly there are more worthwhile attributes to track. At this point, I’ve not even filled in all the columns. But instead of waiting until I’ve achieved any sense of finality, I thought I’d throw it out there for feedback.

    Additional Resources:

    What is PaaS After All?:

    Comparison of Google App Engine, Amazon Elastic BeanStalk, and CloudBees RUN@Cloud:

    Amazon Elastic BeanStalk v. CloudBees RUN@Cloud: (Good debate in comments on PaaS relationship to IaaS)

    Gartner Says 2011 Will Be the Year of Platform as a Service:

    Darryl K. Taft said “Experts weigh in on the benefits of platform-as-a-service offerings and the dangers of vendor lock-in” and asked Will PAAS Solve All Developer Ills? in a 4/14/2011 article for

    image When it comes to the alphabet soup of cloud computing, at least one vendor is staying above the fray. Amazon Web Services, whose offerings include IAAS, PAAS and SAAS (infrastructure, platform and software as a service), is intent on not being grouped under any particular label.

    image “We don’t spend any time talking about the acronyms,” Andy Jassy, senior vice president of AWS, told eWEEK. “All those lines will get blurred over time. It’s a construct to box people in and it fits some stack paradigm. We started with raw storage, raw compute, and raw database in SimpleDB. And we’ve added load balancing, a relational database, Hadoop and Elastic Map reduce, a management GUI… All those lines start to get blurred, and you can expect to see additional abstraction from us."

    Amazon Web Services’ objections aside, the January release of the company’s Elastic Beanstalk service offers an excellent example of PAAS, versus IAAS: Developers upload their Java applications into the Elastic Beanstalk service, and Amazon handles all the capacity provisioning, load balancing, auto-scaling and application-health-monitoring details. The PAAS service taps lower-level AWS services to do the work, with compute power provided by Amazon’s Elastic Compute Cloud, an archetypal IAAS offering.

    If you mapped an existing IT organization to the new world of the cloud, your IT operations team would be the IAAS layer, the standard applications (email, social, office, ERP, CRM, etc.) would be available as SAAS, and the custom applications would run on a PAAS, Sacha Labourey, CEO and founder of Java PAAS player CloudBees, told eWEEK. So all three of them are really important as companies move toward the cloud, he said.

    Ross Mason, CTO and founder of MuleSoft, which offers an iPAAS (integration platform as a service) solution, said, “SAAS changes the way we acquire applications, IAAS changes the way we deploy and consume infrastructure and PAAS changes the way we build applications. Platform is the magic word; it creates a development platform for building software in the cloud. It's important to understand that, like enterprise software platforms, the PAAS universe is evolving to serve various development communities, e.g., languages, as well as serve different functions.”

    Bob Bickel, an advisor at CloudBees and chairman at eXo, said, “PAAS is for developers what virtualization was for system administrators. Virtualization let sys admins forget about the underlying servers and to really share resources a lot more effectively. PAAS will be the same, and in a long-term vision really supplants lower layers like OS and virtualization as being the key platform custom apps and SAAS are deployed on.”

    Patrick Kerpan, president, CTO and co-founder of CohesiveFT, a maker of onboarding solutions for cloud computing, told eWEEK: “The significance of PAAS will be the transition from OS-based features to network-based features, that take advantage of growing customer acceptance of the idea and the fact that their information assets (their ‘stuff’) is ‘out there somewhere’ and the increasing ability of PAAS (and applications built on top of it) to seem more local, controlled and secure.”

    Read more: Next: Applications on Network Services >>

    The Windows Azure Team sent the following “Changes to the Windows Azure platform Offer for Visual Studio with MSDN Subscribers” message to MSDN subscribers on 4/14/2011 (repeated from Windows Azure and Cloud Computing Posts for 4/13/2011+):

    imageWe are pleased to share with you the new Visual Studio with MSDN Professional, Premium and Ultimate offers, as well as changes to your current MSDN offer.  Subscribers with an active Visual Studio with MSDN subscription are now entitled to the monthly benefits listed below, based on their subscription level:


    For customers, like you, with a Windows Azure Platform MSDN subscription, we have additional good news: as of April 12, 2011, your monthly benefits will be automatically upgraded to the following levels:


    In exchange for these increased benefit levels, we are eliminating the 5% discount on compute, SQL Azure database, Access Control transactions and Service Bus connections for billing periods beginning on or after June 1, 2011. 

    If you’re looking for help getting started with the Windows Azure platform, click here to find helpful resources, such as SDKs, training kits, management tools, and technical documentation. To help spark your creativity, also be sure to check out what other developers have built using the Windows Azure platform.  Thanks again for your interest in the Windows Azure platform and happy developing!

    1,500 hours of a small compute instance lets you run two roles instead of the previous single role. My present MSDN subscription provides three 1-GB Web databases. I (and probably most other folks) would much rather have five 1-GB databases, which is the same price.
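    The arithmetic behind the two-role claim, assuming an always-on instance consumes 24 hours a day through the longest (31-day) billing month:

```python
# Hours consumed by one always-on role instance in a 31-day month.
hours_per_instance = 31 * 24          # 744
monthly_allotment = 1500

# Whole instances the allotment covers for the full month.
instances = monthly_allotment // hours_per_instance
print(instances)  # 2
```

    Two always-on small instances use at most 1,488 hours, just under the 1,500-hour benefit.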

    Jim O’Neill weighed in on Windows Azure Benefits for MSDN: New and improved on 4/12/2011:

    Play Rock, Paper, Scissors in the cloud!If you have an MSDN Premium or Ultimate subscription, hopefully you’re aware of and have perhaps taken advantage of a free monthly allotment of Windows Azure benefits.  If you haven’t provisioned your account yet, well now’s the time, because the benefits just got better – with a doubling of compute hours, a significant increase in storage (triple that previously offered to MSDN Premium subscribers), and an order-of-magnitude increase in bandwidth allotment… and here’s the best part:

    Visual Studio Professional MSDN subscribers are now included!

    The updated benefits chart appears below, and for a great way to get started with Azure, try your hand at the Rock Paper Azure Challenge running through May 13.  You’ll get your feet wet with cloud computing, have some fun, and perhaps even win an XBox 360/Kinect bundle!


    1 For Ultimate subscribers this is an increase of 2GB; for new Premium subscribers, it’s a decrease of 2GB from the previous plan.  Existing Premium subscribers will also receive the 5GB benefit.
    2 There is a reduction in number of access control transactions from the previous plans (1 million transactions); however, current plan holders will retain that benefit.
    3 There is no longer a distinction in bandwidth allotment between Europe/North America and Asia Pacific.

    Those of you with existing MSDN Ultimate and Premium subscriptions may be wondering what you now get.  All of the existing MSDN Premium and Ultimate Subscribers are automatically transitioned to the new MSDN Ultimate offer.  That means a doubling(!) of compute benefits, an increase in storage, and as you can see from footnote (2), you also retain the previous benefit of 1,000,000 access control transactions.

    These benefits are in place for as long as you retain the MSDN subscription (although we do reserve the right to modify the monthly benefit at some point in the future – like we just did).

    I repeat my complaint about SQL Azure database size versus number of databases.

    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


    No significant articles today.

    <Return to section navigation list> 

    Cloud Security and Governance


    No significant articles today.

    <Return to section navigation list> 

    Cloud Computing Events

    Jonathan Rozenblit announced on 4/15/2011 Azure At the Movies: Get In The Cloud to open in Toronto, Canada on 5/5/2011 from 9:00 AM to 12:00 PM:


    When was the last time you had a chance to go to the movie theatre during the day? More importantly, when was the last time you learned something new at the movie theatre? Well Azure at the Movies will not only get you out to the ScotiaBank Theatre on May 5, 2011 from 9 AM to noon, but it will also guarantee that you walk away having learned how to provide stability and elasticity for your web based solutions using Windows Azure.

    Join me and the gang from ObjectSharp, Barry Gervin, Cory Fowler, Dave Lloyd, Bruce Johnson, and Steve Syfuhs for a half day event in Toronto where we’ll explore how Silverlight, ASP.NET MVC, Team Foundation Server, Visual Studio, and Powershell all work together with Windows Azure to give you, the developer, the best development experience and your application the platform to reach infinite scale and success.

    Thursday, May 5, 2011
    Registration: 8:00 AM – 9:00 AM
    Seminar: 9:00 AM – 12:00 PM
    ScotiaBank Theatre, 259 Richmond Street West, Toronto

    Click here to register.

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Brian Gracely (@bgracely) posted on 4/15/2011 101 Thoughts about the "Cloud Foundry" Announcement by VMWare to his Clouds of Change blog:

    [This is still a work in progress - the blog post, not Cloud Foundry ]

    image This week's Cloud Foundry announcement has the Cloud Computing community buzzing about the OpenPaaS framework driven by VMware. Considering the size of VMware, the competitors it's taking on (Google, Microsoft, IBM, Oracle, Amazon AWS and RedHat), and the delivery model (open-source), it's no surprise that opinions and analysis are coming from every direction.
    Who knows if I'll make it all the way to 101, but my head's been spinning with all the angles, so here goes.... [NOTE: Some of these thoughts are specifically about Cloud Foundry, while others are about the idea of application portability & mobility, since frameworks tend to come and go over time.]


    1. VMware's hypervisor is always under pricing pressure from free alternatives (Hyper-V, Xen, KVM), its middleware is now open-source, and SPs probably aren't sure whether they are vendor or competitor. It will be interesting to see what the revenue model looks like long-term.
    2. Cloud Foundry creates a PaaS model that isn't dependent on VMware VMs, but does attack their largest competitors, so an interesting "creative disruption" strategy. This always takes courage from leadership. Paul Maritz is uber-smart and has been in the big-boy games before, so you have to expect he has a grand plan for all of this.
    3. It's not clear if "for fee" Cloud Foundry from VMware is attached to Mozy's service. Mozy just raised prices. If they are connected, it will be interesting to see if the "for fee" service can remain competitive at scale...or if that's really part of their strategy? 
    4. What does Cloud Foundry mean for existing VM admins?
    5. How will the 1000s of VMware channel partners adapt to this new model? Do they have the skills to understand this and sell this? Moving from infrastructure to applications is not a simple context switch for most people.
    6. How will Cloud Providers (SPs hosting the apps) react to the portability capabilities? Will we see minimal friction (eg. number portability), or subtle changes to their services (ToS, billing, network/security infrastructure) that reduce that flexibility?
    7. Several people are already connecting the open-source dots of CloudFoundry and OpenStack, so what does this ultimately do to vCloud Director (at SP or between ENT & SP)?
    8. Does VMware have the deep pockets to support the developer community enough to make it dominant (like Google and Microsoft did before them)? Is VM revenue enough of a cash cow to sustain it?


    1. If you're a customer considering Cloud Foundry, the bulk of the value-proposition is potentially falling directly into your lap. No licensing shackles, portability to internal or external systems, uses many existing languages. 
    2. If you decide this is an interesting approach, Bernard Golden does a nice job providing guidance about future investments and skills-development.
    3. Since Cloud Foundry can be deployed anywhere, do more Enterprises begin using it internally (greenfields, green "patches") since public cloud security/trust is still a top concern (whether technically valid or not), or is the expertise in running the underlying PaaS platform going to primarily reside with VMware or at Service Providers? 
    4. What will a typical team look like that runs a Cloud Foundry PaaS? How big? What skills?


    1. PaaS can be deployed on almost any infrastructure - silo of whatever, structured IaaS, etc. - but the mobility element means that there will need to be some structure or way to describe the infrastructure needed. Some form of Network as a Service or Network-level APIs.
    2. Networks underlying these PaaS environments will need to be provisioned via an automated mechanism, and preferably be virtualized at multiple levels (whether for VMs or bare-metal deployments).
    3. Just as the PaaS frameworks allow applications to be described (n-tiers, etc.), so too will there need to be a way to describe the underlying infrastructure (network segments, addresses, load-balancing, storage, etc.) so it can be requested as the application moves from cloud to cloud.
    4. How does the mobility effect storage for the application? How does the storage get bundled for movement (as a LUN, as a file-system, something else)? Does it mandate IP-based storage?
    5. Do we expect SPs or ENT to continue to build intelligence into the infrastructure (security, isolation, availability) or does the application eventually become the network?  
    6. While some people (hint, hint @reillyusa) have called this the death of IT infrastructure intelligence, I actually think it just adds a new level of complexity into those roles. They now have fixed (legacy), mobile (VMs) and hyper-mobile (CloudFoundry) applications and all the associated issues that go along with them (security, compliance, auditing, assurance, etc.).

    Mobility/Portability of Apps and Data

    1. Being able to move an application from one cloud to another is a powerful concept, but are developers educating themselves on the laws surrounding which country the data resides (data sovereignty), especially outside the US? 
    2. Will this create a new market for tools that can test portability of applications between clouds? Just like with Backup/Recovery, it's important to test the "Recovery" portion (bring it back...if so desired).

    1. Maybe I'm wrong about this, but I would have thought that developers went to a certain PaaS platform (Google App Engine, Salesforce, Azure, etc.) because they either valued the functionality around the platform (community, linkage to other properties within that platform, etc.) or they wanted to reuse existing applications (eg. Azure, .NET).
    2. By being everything to everyone and open-sourced (bring your own business model), doesn't that mean it doesn't necessarily do anything outstanding unless it gets forked into projects with a narrower focus? It seems the competition may have the advantage of focus.
    3. Let's suppose that Cloud Foundry becomes the dominant player, similar to Android's market-share vs. iPhone. Which existing player becomes the iPhone platform, or RIM? 
    4. Does this lead to a new round of PaaS/SaaS consolidation so that the competitors can offer a broader "market" of in-house services to developers (sales, social, data mining, collaboration, etc.)?
    1. Does this just increase the future value of companies like RightScale and Enstratus that build tools to manage multiple cloud environments?
    2. I wonder how long it will take before the Cloud Foundry App-Store opens for business and who will be the company taking 30% (ala the iTunes App Store). Is this VMware, somehow integrated with their "Project Horizon"?
    3. Beyond VMware, who is the first company to create solutions around this for the Enterprise (private/hybrid-cloud), or does the primary focus remain with ISVs and public-cloud deployments?
    4. Is there a clearinghouse or brokerage business to be built around helping apps move from cloud to cloud, or to federate them?
    5. Has the "Cloud Foundry for Dummies" book been written yet?


    1. How much will we see applications moving from one cloud to another, or is the portability and open-source aspect just an insurance policy for running it on an existing IaaS/PaaS environment that may not be able to create enough cross-linkage, traffic or overall value? 
    2. Anything that creates the ability for developers to create portability in an application is almost always a good thing. [NOTE: Can't think of when it wouldn't be a good thing...]
    3. One of the goals of Cloud Computing is to hide the "where did it come from" from the user. This seems to be a step in the right direction, whether you're an Enterprise, Web Company or an ISV/iPhone/Android developer.
    4. There are pros and cons to this being released as an open-source project. It obviously speeds up the pace of innovation, but equally speeds the race to $0 for many aspects around this business. It all depends on which side of that $0 you're on. 

    Ernest De Leon (a.k.a., The Silicon Whisperer) weighs in with his VMware disrupting the Cloud market with Cloud Foundry post of 4/15/2011:

    I’ve been waiting for a while now for VMware to bring a PaaS offering to market, but I was pleasantly surprised with the newly announced Cloud Foundry. In their own words, Cloud Foundry is “a VMware-led project [that] is the world’s first open Platform as a Service (PaaS) offering. Cloud Foundry provides a platform for building, deploying, and running cloud apps using Spring for Java developers, Rails and Sinatra for Ruby developers, Node.js and other JVM frameworks including Grails.”

    There are several awesome things about this new offering. The first is that it’s open source. This is huge in many ways, but the most important thing to businesses looking at Cloud Foundry is that being open source does a lot to eliminate vendor lock-in. Secondly, the offering spans public, private and hybrid cloud computing. This is an offering that hits many key areas across the cloud spectrum. Lastly, there will be a ‘micro cloud’ version coming out that allows developers to test on their own desktops and laptops. This is HUGE because you are not forced to deploy a full blown private cloud (or push to a public cloud) for development and testing. Let’s take a little time to discuss the three pieces of Cloud Foundry.

    First is CloudFoundry.com, VMware’s hosted commercial public cloud offering. From their own website: “CloudFoundry.com, the VMware hosted, managed and supported service, provides a multitenant PaaS from VMware that runs on the industry leading vSphere cloud platform. Initially, CloudFoundry.com supports Spring for Java apps, Rails and Sinatra for Ruby apps, Node.js apps and apps for other JVM frameworks including Grails. Cloud Foundry also offers MySQL, Redis, and MongoDB data services.” This is where most enterprises will deploy their apps when they want to go to the public cloud. It will be backed by VMware and offer support and services that wrap the offering. This is similar to Amazon’s AWS PaaS offering – Elastic Beanstalk, Google’s App Engine, Microsoft’s Azure and Salesforce’s

    Second is the open source project, CloudFoundry.org. From their own website: “The open-source community site, CloudFoundry.org, is the community where developers can collaborate and contribute to the Cloud Foundry project. For a full catalog of software services available in the open source stack, please refer to the community website at CloudFoundry.org.” This is the truly innovative piece of the Cloud Foundry line-up that no other big cloud vendor is offering. There is great potential synergy with other open source projects like OpenStack (which is backed by NASA and Rackspace among others) and Ubuntu Enterprise Cloud (UEC). This is where I expect to see a lot of interest and innovation in the short and long term.

    Lastly, (and the best part in my opinion) is the Cloud Foundry Micro Cloud. From their own website: “Micro Cloud is a single developer instance of Cloud Foundry. It provides developers with a personal PaaS that runs on their desktop. Micro Cloud is provided as a downloadable software image for VMware Fusion or VMware Player, as well a hosted image on selected cloud partners.” This is big in so many ways. Prior to the Micro Cloud concept, developers had to deploy a small private cloud (or spin up a paid instance from a public cloud provider like Amazon) in order to test their code. The Micro Cloud will now allow developers to deploy a developer instance on their own workstations (and/or laptops) for development and testing. Think of this as a cloud IDE. The Micro Cloud will save time and money in the development process.
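
    For context, the developer workflow against a Cloud Foundry instance (whether the Micro Cloud or the hosted service) was driven by the `vmc` command-line tool. A typical session looked roughly like the following illustrative transcript; the app name is an example, and the target endpoint would differ for a local Micro Cloud:

    ```
    $ gem install vmc                      # vmc ships as a Ruby gem
    $ vmc target api.cloudfoundry.com      # or the local Micro Cloud endpoint
    $ vmc login                            # prompts for email and password
    $ vmc push myapp                       # detects the framework, uploads and starts the app
    $ vmc apps                             # lists deployed applications
    ```

    The same commands work against any Cloud Foundry target, which is what makes the "develop locally on the Micro Cloud, then push to a public instance" workflow described above practical.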

    There is much more information available in the Cloud Foundry FAQ. As always, the VMware Guy is happy to come out to your location for professional services engagements centered around VMware and Cloud Computing. I am proud to announce that Cloud Foundry training and consulting services are now offered by the VMware Guy. Call now to schedule your services engagement.

    Sid Yadav (pictured below) asked Microsoft data center czar to head Apple’s expansion in the cloud? in a 4/15/2011 post to VentureBeat:

    Looking to expand its presence in the cloud, Apple has reportedly hired Kevin Timmons, the former head of Data Center Services at Microsoft, whose departure from the company was confirmed to Data Center Knowledge.

    The news comes after recent speculation that Apple is rumored to be working on a cloud-based storage service to help users store music, videos, and other media on the Internet and access it across their various devices. Some time back, the company was reportedly considering making MobileMe, its current cloud storage service, free in order to make way for the newer service.

    Timmons [pictured at right], who joined Microsoft in 2009 from Yahoo, helped oversee the completion of new data centers in Dublin and Chicago and was reportedly on target to reduce the building costs of its new data centers by fifty percent. The infrastructure developed during his tenure helped power some of the company’s most important online services such as Bing, Exchange Online, and Windows Azure.

    Last November, Olivier Sanche, Apple’s previous director of global data center operations, passed away unexpectedly from a heart attack. But according to data center expert David Ohara, Timmons will not be taking the position previously held by Sanche, which has already been filled by another Apple executive in the department.

    While his rumored position at the company is unclear, Timmons holds a strong reputation for building and managing cost-effective data centers, making him a valuable addition to help lead the expansion of Apple’s data center facilities.

    Geva Perry (@gevaperry) adds his two cents with a VMWare's Announcement post to his Thinking Out Cloud blog on 4/12/2011:

    Today VMWare is about to make a big announcement about CloudFoundry.com. I'm writing this post before the actual announcement was made and while on the road, so more details will probably emerge later, but here is the gist of it:

    VMWare is launching CloudFoundry.com. This is a VMWare owned and operated Platform-as-a-Service. It's a big step in the OpenPaaS initiative they have been talking about for the past year: "Multiple Clouds, Multiple Frameworks, Multiple Services".

    For those of you keeping track, CloudFoundry is the official name for the DevCloud and AppCloud services which have come out in various alpha and beta releases in the last few months.

    The following diagram summarizes the basic idea behind CloudFoundry:

    In other words, in addition to running your apps on VMWare's own PaaS service (CloudFoundry.com), VMWare will make this framework available to other cloud providers -- as well as for enterprises to run in-house as a private cloud (I'm told this will be in beta by the end of this year). In fact, they are going to open source CloudFoundry under an Apache license.

    They're also going to support multiple frameworks, not just their own Java/Spring framework but Ruby, Node.JS and others (initially, several JVM-based frameworks). This concept is similar to the one from DotCloud, which I discussed in What's the Best Platform-as-a-Service.

    On the services front, they are going to provide multiple services provided by VMWare itself, and eventually, open it up to the ecosystem for third-parties (similar to Heroku Add-Ons or AppExchange).

    Where it says in the diagram above "data service", for example, VMWare already has three offerings: MySQL (similar to Amazon RDS), MongoDB and Redis. For "message service" they will offer their own Rabbit MQ and other messaging services.

    Finally, you'll notice there is reference to a "Downloadable 'Micro Cloud'" in the diagram. This is a free offering from VMWare that lets you run a CloudFoundry cloud on a single VM, which you can carry around on a USB memory stick or run on an Amazon Machine Image (which is what Michael from RightScale is going to demonstrate today). The idea behind the Micro Cloud is to appeal mostly to developers and let them easily do development in any physical location and seamlessly load their app to a CloudFoundry cloud when they are ready.

    The Micro Cloud is one more aspect of this initiative that is intended to appeal to developers and encourage bottom-up adoption for the VMWare cloud. I've discussed the idea of developers being the driving force of cloud adoption before, and you can read more about it in this post.

    All in all, a very smart, if not unexpected, move by VMWare. But it remains to be seen how VMWare will handle the inevitable conflict between being a cloud provider and hoping to be the provider of infrastructure software for other providers. Case in point: VMForce, the joint offering announced by VMWare and Salesforce.com several months ago.

    Officially, both companies say everything is on track for the VMForce offering, and VMWare says VMForce is "powered by CloudFoundry". But as I am quoted in this TechTarget article, the companies were already on a collision course, especially after Salesforce.com's Heroku acquisition.

    From talking to some of the folks at VMWare, it's clear that they too believe that the future of computing is PaaS -- something I believe strongly in. It will be interesting to see how they execute on this grand vision.

    <Return to section navigation list>