Saturday, April 16, 2011

Windows Azure and Cloud Computing Posts for 4/16/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 4/16/2011 3:00 PM PDT with articles marked from Dinesh Haridas, Bruce Kyle, Matias Woloski and Liran Zelkha.

Note: This post is updated daily or more frequently, depending on the availability of new articles.


Azure Blob, Drive, Table and Queue Services

• Bruce Kyle announced the availability of an ISV Video: StorSimple Integrates On-Premises, Cloud Storage with Windows Azure on 4/16/2011:

ISV StorSimple provides a hybrid storage solution that combines on-premises storage with a scalable, on-demand cloud storage model. StorSimple integrates with several cloud storage providers, including Windows Azure.

Dr. Ian Howells, CMO for StorSimple, talks with Developer Evangelist Wes Yanaga about how the team created a solution that enables companies to seamlessly integrate cloud storage services for Microsoft Exchange Server, SharePoint Server, and Windows Server. Howells describes why the company offers storage on Azure and how that benefits its customers.

Video link: StorSimple Integrates On-Premises Storage to Windows Azure

StorSimple uses an algorithm that tiers storage between on-premises, high-performance disk and Windows Azure Blob storage to optimize performance.

To address performance issues with on-premises data storage, a data compression process eliminates redundant data segments and minimizes the amount of storage space that an application consumes. At the same time that StorSimple optimizes storage space for data-intensive applications, it uses Cloud Clones—a patented StorSimple technology—to persistently store copies of application data in the cloud.
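Deduplication of this kind is generally implemented by splitting data into segments and indexing each segment by a cryptographic hash, so repeated segments are stored only once. The following Python sketch illustrates the general idea only; it is not StorSimple's actual (proprietary) algorithm, and all names are illustrative:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: each unique segment is kept once."""

    def __init__(self, segment_size=4):
        self.segment_size = segment_size
        self.segments = {}  # sha256 digest -> segment bytes

    def write(self, data: bytes):
        """Split data into fixed-size segments and return the list of
        digests (the 'recipe') needed to reconstruct it later."""
        recipe = []
        for i in range(0, len(data), self.segment_size):
            seg = data[i:i + self.segment_size]
            digest = hashlib.sha256(seg).hexdigest()
            self.segments.setdefault(digest, seg)  # duplicates stored once
            recipe.append(digest)
        return recipe

    def read(self, recipe):
        """Reassemble the original data from a recipe."""
        return b"".join(self.segments[d] for d in recipe)

store = DedupStore()
recipe = store.write(b"abcdabcdabcdXYZ!")
# "abcd" occurs three times in the input but is stored only once
```

Production systems typically use variable-size, content-defined segment boundaries rather than the fixed-size split shown here, so that insertions don't shift every subsequent segment.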

With StorSimple and Windows Azure, customers maintain control of their data and can take advantage of public cloud services with security-enhanced data connections. StorSimple uses Windows Azure AppFabric Access Control to provide rule-based authorization to validate requests and connect the on-premises applications to the cloud. Customers who deploy StorSimple use a private key to encrypt the data with AES-256 military-grade encryption before it is pushed to Windows Azure, helping to enhance security and ensure that data can only be retrieved by the customer.

About StorSimple

StorSimple provides application-optimized cloud storage for Microsoft Server applications. StorSimple's mission is to bring the benefits of the cloud to on-premises applications without forcing the migration of applications into the cloud. StorSimple manages storage and backup for working set apps such as Microsoft SharePoint, shared file drives, Microsoft Exchange, and virtual machine libraries, offering SSD performance with cloud elasticity.

Other ISV Videos

For videos on Windows Azure Platform, see:

• Dinesh Haridas of the Windows Azure Storage Team described Using SMB to Share a Windows Azure Drive among multiple Role Instances on 4/16/2011:

We often get questions from customers about how to share a drive with read-write access among multiple role instances. A common scenario is a content repository that multiple web servers access and store content in. An Azure drive is similar to a traditional disk drive in that it may only be mounted read-write on one system. However, using SMB, it is possible to mount a drive on one role instance and then share it out to other role instances, which can map the network share to a drive letter or mount point.

In this blog post we’ll cover the specifics on how to set this up and leave you with a simple prototype that demonstrates the concept. We’ll use an example of a worker role (referred to as the server) which mounts the drive and shares it out and two other worker roles (clients) that map the network share to a drive letter and write log records to the shared drive.

Service Definition on the Server role

The server role has TCP port 445 enabled as an internal endpoint so that it can receive SMB requests from other roles in the service. This is done by defining the endpoint in the ServiceDefinition.csdef as follows:

      <InternalEndpoint name="SMB" protocol="tcp" port="445" />

Now when the role starts up, it must mount the drive and then share it. Sharing the drive requires the Server role to be running with administrator privileges. Beginning with SDK 1.3, it's possible to do that using the following setting in the ServiceDefinition.csdef file:

<Runtime executionContext="elevated"> 
Mounting the drive and sharing it

When the server role instance starts up, it first mounts the Azure drive and executes shell commands to

  1. Create a user account for the clients to authenticate as. The user name and password are derived from the service configuration.
  2. Enable inbound SMB protocol traffic through the role instance firewall
  3. Share the mounted drive with the share name specified in the service configuration and grant the user account previously created full access. The value for path in the example below is the drive letter assigned to the drive.

Here’s a snippet of C# code that does that:

String error;
ExecuteCommand("net.exe", "user " + userName + " " + password + " /add", out error, 10000);

ExecuteCommand("netsh.exe", "firewall set service type=fileandprint mode=enable scope=all", out error, 10000);

ExecuteCommand("net.exe", " share " + shareName + "=" + path + " /Grant:"
                    + userName + ",full", out error, 10000);

The shell commands are executed by the routine ExecuteCommand.

public static int ExecuteCommand(string exe, string arguments, out string error, int timeout)
{
    Process p = new Process();
    int exitCode;
    p.StartInfo.FileName = exe;
    p.StartInfo.Arguments = arguments;
    p.StartInfo.CreateNoWindow = true;
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardError = true;
    p.Start();
    error = p.StandardError.ReadToEnd();
    p.WaitForExit(timeout);
    exitCode = p.ExitCode;
    p.Close();

    return exitCode;
}

We haven’t touched on how to mount the drive because that is covered in several places including here.

Mapping the network drive on the client

When the clients start up, they locate the instance of the SMB Server and then identify the address of the SMB endpoint on the server. Next they execute a shell command to map the share served by the SMB server to a drive letter specified by the configuration setting localpath. Note that sharename, username and password must match the settings on the SMB server.

var server = RoleEnvironment.Roles["SMBServer"].Instances[0];
machineIP = server.InstanceEndpoints["SMB"].IPEndpoint.Address.ToString();
machineIP = "\\\\" + machineIP + "\\";

string error;
ExecuteCommand("net.exe", " use " + localPath + " " + machineIP + shareName + " " + password + " /user:"+ userName, out error, 20000);

Once the share has been mapped to a local drive letter, the clients can write whatever they want to the share, just as they would to a local drive.  

Note: Since the clients may come up before the server is ready, they may have to retry, or alternatively poll the server on some other port for status before attempting to map the drive. The prototype retries in a loop until it succeeds or times out.
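The retry-until-timeout pattern the note describes is language-agnostic; here is a minimal sketch in Python (the prototype itself is C#, and the function names below are illustrative, not from the prototype):

```python
import time

def retry_until(action, retry_interval=10, max_attempts=100, sleep=time.sleep):
    """Call action() until it succeeds (returns True) or max_attempts
    is exhausted, sleeping retry_interval seconds between attempts.
    Returns True on success, False if we gave up."""
    for _ in range(max_attempts):
        if action():
            return True
        sleep(retry_interval)  # wait before trying the server again
    return False

# Example: an action that fails twice before succeeding.
attempts = {"n": 0}
def map_share():
    attempts["n"] += 1
    return attempts["n"] >= 3

succeeded = retry_until(map_share, retry_interval=0)
```

With the prototype's values (10-second interval, roughly 100 attempts), the client gives up after about 17 minutes, matching the behavior described later in the post.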

Enabling High Availability

With a single server role instance, the file share will be unavailable while the role is being upgraded. If you need to mitigate that, you can create a few warm stand-by instances of the server role, ensuring that there is always one server role instance available to share the Azure Drive with clients.

Another approach would be to make each of your role instances a potential host for the SMB share. Each role instance could run an SMB service, but only one of them would have the Azure Drive mounted behind the SMB service. The clients can then iterate over all the role instances, attempting to map the SMB share from each one. The mapping will succeed when the client connects to the instance that has the drive mounted.
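The "every instance is a potential host" scheme boils down to trying each candidate until one mapping succeeds. A minimal sketch (Python; the hypothetical try_map callback stands in for the net use attempt against one instance):

```python
def find_active_server(instances, try_map):
    """Try to map the share from each candidate instance in turn.
    try_map(instance) should return True only for the instance that
    actually has the drive mounted. Returns that instance, or None
    if no candidate currently hosts the share."""
    for instance in instances:
        if try_map(instance):
            return instance
    return None

# Example: only "10.0.0.3" hosts the mounted drive.
active = find_active_server(
    ["10.0.0.1", "10.0.0.2", "10.0.0.3"],
    lambda ip: ip == "10.0.0.3",
)
```

In practice this loop would be combined with the retry logic above, since the drive may migrate to a different instance after a failure or upgrade.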

Another scheme is to have the role instance that successfully mounted the drive inform the other role instances so that the clients can query to find the active server instance.

Note: The high availability scenario is not captured in the prototype but is feasible using standard Azure APIs.

Sharing Local Drives within a role instance

It’s also possible to share a local resource drive mounted in a role instance among multiple role instances using similar steps. The key difference, though, is that writes to the local storage resource are not durable, while writes to Azure Drives are persisted and available even after the role instances are shut down.

Dinesh Haridas

Sample Code

Here’s the code for the Server and Client in its entirety for easy reference.

Server – WorkerRole.cs

This file contains the code for the SMB server worker role. In the OnStart() method, the role instance initializes tracing before mounting the Azure Drive. It gets the settings for storage credentials, drive name and drive size from the Service Configuration. Once the drive is mounted, the role instance creates a user account, enables SMB traffic through the firewall and then shares the drive. These operations are performed by executing shell commands using the ExecuteCommand() method described earlier. For simplicity, parameters like account name, password and the share name for the drive are derived from the Service Configuration.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

namespace SMBServer
{
    public class WorkerRole : RoleEntryPoint
    {
        public static string driveLetter = null;
        public static CloudDrive drive = null;

        public override void Run()
        {
            Trace.WriteLine("SMBServer entry point called", "Information");

            while (true)
            {
                Thread.Sleep(10000);
            }
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            // Initialize logging and tracing
            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);
            Trace.WriteLine("Diagnostics Setup complete", "Information");

            try
            {
                CloudStorageAccount account = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer driveContainer = blobClient.GetContainerReference("drivecontainer");
                driveContainer.CreateIfNotExist();

                String driveName = RoleEnvironment.GetConfigurationSettingValue("driveName");
                LocalResource localCache = RoleEnvironment.GetLocalResource("AzureDriveCache");
                CloudDrive.InitializeCache(localCache.RootPath, localCache.MaximumSizeInMegabytes);

                drive = new CloudDrive(driveContainer.GetBlobReference(driveName).Uri, account.Credentials);
                try
                {
                    drive.Create(int.Parse(RoleEnvironment.GetConfigurationSettingValue("driveSize")));
                }
                catch (CloudDriveException ex)
                {
                    // The drive may already exist
                    Trace.WriteLine(ex.ToString(), "Warning");
                }

                driveLetter = drive.Mount(localCache.MaximumSizeInMegabytes, DriveMountOptions.None);

                string userName = RoleEnvironment.GetConfigurationSettingValue("fileshareUserName");
                string password = RoleEnvironment.GetConfigurationSettingValue("fileshareUserPassword");

                // Modify path to share a specific directory on the drive
                string path = driveLetter;
                string shareName = RoleEnvironment.GetConfigurationSettingValue("shareName");
                int exitCode;
                string error;

                // Create the user account
                exitCode = ExecuteCommand("net.exe", "user " + userName + " " + password + " /add", out error, 10000);
                if (exitCode != 0)
                {
                    // Log the error and continue since the user account may already exist
                    Trace.WriteLine("Error creating user account, error msg:" + error, "Warning");
                }

                // Enable SMB traffic through the firewall
                exitCode = ExecuteCommand("netsh.exe", "firewall set service type=fileandprint mode=enable scope=all", out error, 10000);
                if (exitCode != 0)
                {
                    Trace.WriteLine("Error setting up firewall, error msg:" + error, "Error");
                    goto Exit;
                }

                // Share the drive
                exitCode = ExecuteCommand("net.exe", " share " + shareName + "=" + path + " /Grant:"
                    + userName + ",full", out error, 10000);
                if (exitCode != 0)
                {
                    // Log the error and continue since the drive may already be shared
                    Trace.WriteLine("Error creating fileshare, error msg:" + error, "Warning");
                }

            Exit:
                Trace.WriteLine("Exiting SMB Server OnStart", "Information");
            }
            catch (Exception ex)
            {
                Trace.WriteLine(ex.ToString(), "Error");
                Trace.WriteLine("Exiting", "Information");
            }
            return base.OnStart();
        }

        public static int ExecuteCommand(string exe, string arguments, out string error, int timeout)
        {
            Process p = new Process();
            int exitCode;
            p.StartInfo.FileName = exe;
            p.StartInfo.Arguments = arguments;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardError = true;
            p.Start();
            error = p.StandardError.ReadToEnd();
            p.WaitForExit(timeout);
            exitCode = p.ExitCode;
            p.Close();

            return exitCode;
        }

        public override void OnStop()
        {
            if (drive != null)
            {
                drive.Unmount();
            }
        }
    }
}
Client – WorkerRole.cs

This file contains the code for the SMB client worker role. The OnStart method initializes tracing for the role instance. In the Run() method, each client maps the drive shared by the server role using the MapNetworkDrive() method before writing log records at ten second intervals to the share in a loop.

In the MapNetworkDrive() method, the client first determines the IP address and port number of the SMB endpoint on the server role instance before executing the shell command net use to connect to it. As in the server role, the routine ExecuteCommand() is used to execute shell commands. Since the server may start up after the client, the client retries in a loop, sleeping 10 seconds between retries, and gives up after about 17 minutes. Between retries the client also deletes any stale mounts of the same share.

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

namespace SMBClient
{
    public class WorkerRole : RoleEntryPoint
    {
        public const int tenSecondsAsMS = 10000;

        public override void Run()
        {
            // The code here maps the drive shared out by the server worker role.
            // Each client role instance writes to a log file named after the role instance in the logs directory.

            Trace.WriteLine("SMBClient entry point called", "Information");
            string localPath = RoleEnvironment.GetConfigurationSettingValue("localPath");
            string shareName = RoleEnvironment.GetConfigurationSettingValue("shareName");
            string userName = RoleEnvironment.GetConfigurationSettingValue("fileshareUserName");
            string password = RoleEnvironment.GetConfigurationSettingValue("fileshareUserPassword");

            string logDir = localPath + "\\" + "logs";
            string fileName = RoleEnvironment.CurrentRoleInstance.Id + ".txt";
            string logFilePath = System.IO.Path.Combine(logDir, fileName);

            try
            {
                if (MapNetworkDrive(localPath, shareName, userName, password) == true)
                {
                    System.IO.Directory.CreateDirectory(logDir);

                    // do work on the mounted drive here
                    while (true)
                    {
                        // write to the log file
                        System.IO.File.AppendAllText(logFilePath, DateTime.Now.TimeOfDay.ToString() + Environment.NewLine);
                        Thread.Sleep(tenSecondsAsMS);
                    }
                }
                else
                {
                    Trace.WriteLine("Failed to mount " + shareName, "Error");
                }
            }
            catch (Exception ex)
            {
                Trace.WriteLine(ex.ToString(), "Error");
            }
        }

        public static bool MapNetworkDrive(string localPath, string shareName, string userName, string password)
        {
            int exitCode = 1;
            int i = 0;
            string machineIP = null;

            while (exitCode != 0)
            {
                string error;

                var server = RoleEnvironment.Roles["SMBServer"].Instances[0];
                machineIP = server.InstanceEndpoints["SMB"].IPEndpoint.Address.ToString();
                machineIP = "\\\\" + machineIP + "\\";

                exitCode = ExecuteCommand("net.exe", " use " + localPath + " " + machineIP + shareName + " " + password + " /user:"
                    + userName, out error, 20000);

                if (exitCode != 0)
                {
                    Trace.WriteLine("Error mapping network drive, retrying in 10 seconds, error msg:" + error, "Information");
                    // clean up stale mounts and retry
                    ExecuteCommand("net.exe", " use " + localPath + "  /delete", out error, 20000);
                    Thread.Sleep(tenSecondsAsMS);
                    i++;
                    if (i > 100) break;
                }
            }

            if (exitCode == 0)
            {
                Trace.WriteLine("Success: mapped network drive " + machineIP + shareName, "Information");
                return true;
            }
            else
            {
                return false;
            }
        }

        public static int ExecuteCommand(string exe, string arguments, out string error, int timeout)
        {
            Process p = new Process();
            int exitCode;
            p.StartInfo.FileName = exe;
            p.StartInfo.Arguments = arguments;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardError = true;
            p.Start();
            error = p.StandardError.ReadToEnd();
            p.WaitForExit(timeout);
            exitCode = p.ExitCode;
            p.Close();

            return exitCode;
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
            dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
            dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
            DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", dmc);
            Trace.WriteLine("Diagnostics Setup complete", "Information");

            return base.OnStart();
        }
    }
}

ServiceDefinition.csdef

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureDemo" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="SMBServer">
    <Runtime executionContext="elevated" />
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <ConfigurationSettings>
      <Setting name="StorageConnectionString" />
      <Setting name="driveName" />
      <Setting name="driveSize" />
      <Setting name="fileshareUserName" />
      <Setting name="fileshareUserPassword" />
      <Setting name="shareName" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="AzureDriveCache" cleanOnRoleRecycle="true" sizeInMB="300" />
    </LocalResources>
    <Endpoints>
      <InternalEndpoint name="SMB" protocol="tcp" port="445" />
    </Endpoints>
  </WorkerRole>
  <WorkerRole name="SMBClient">
    <Imports>
      <Import moduleName="Diagnostics" />
    </Imports>
    <ConfigurationSettings>
      <Setting name="fileshareUserName" />
      <Setting name="fileshareUserPassword" />
      <Setting name="shareName" />
      <Setting name="localPath" />
    </ConfigurationSettings>
  </WorkerRole>
</ServiceDefinition>


ServiceConfiguration.cscfg

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="AzureDemo" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
  <Role name="SMBServer">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="StorageConnectionString" value="DefaultEndpointsProtocol=http;AccountName=yourstorageaccount;AccountKey=yourkey" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=yourkey" />
      <Setting name="driveName" value="drive2" />
      <Setting name="driveSize" value="1000" />
      <Setting name="fileshareUserName" value="fileshareuser" />
      <Setting name="fileshareUserPassword" value="SecurePassw0rd" />
      <Setting name="shareName" value="sharerw" />
    </ConfigurationSettings>
  </Role>
  <Role name="SMBClient">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=yourstorageaccount;AccountKey=yourkey" />
      <Setting name="fileshareUserName" value="fileshareuser" />
      <Setting name="fileshareUserPassword" value="SecurePassw0rd" />
      <Setting name="shareName" value="sharerw" />
      <Setting name="localPath" value="K:" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>

Scott Densmore described adding Multi Entity Schema Tables in Windows Azure in a 4/14/2011 post:

The source for this is available on github.


When we wrote Moving Applications to the Cloud, we talked about implementing a-Expense with a multi-entity schema, yet never implemented it in code. As we finish up V2 of the Claims Identity Guide, we are thinking about how to update the other two guides for Windows Azure. I read an article by Jeffrey Richter about this same issue, so I decided to take our a-Expense with Workers reference application and update it to use multi-entity schemas based on that article. I also decided to fix a few bugs, most of which had to do with saving the expenses. In the old way of doing things, we would save the expense items, then the receipt images, and then the expense header. The first problem was that if you debugged the project things got a little out of sync: you could end up trying to update a receipt URL on an expense item before the item was saved. Also, if the expense header failed to save, you would have orphaned records, and you would then need another process to go out and scavenge them.


There have been a lot of code changes since the version shipped with Moving Applications to the Cloud. The major change is to the save method of the expense repository.

The main changes for this code are the following:

  1. Multi Entity Schema for the Expense
  2. Remove updates back to the table for Receipt URIs
  3. Update the Queue handlers to look for Poison Messages and move to another Queue

Multi Entity Schema

The previous version of the project was split into two tables: Expense and ExpenseItems. This created a few problems that needed to be addressed. The first was that you could not create a transaction across the two tables. The way we solved this was to create the ExpenseItem(s) before creating the Expense. If there was a problem between saving the ExpenseItems and the Expense, there would be orphaned ExpenseItems, which would require a scavenger process looking for all the orphaned records. This would all add up to more cost. Now we save the ExpenseItem and Expense entities in the same table.


This now lets us have one transaction across the table.
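Azure Table storage only supports transactions within a single table and partition (an entity-group transaction), which is why co-locating the expense header and its items under one PartitionKey matters. The key scheme can be sketched as follows (Python; the entity shapes and RowKey conventions are illustrative, not the guide's exact code):

```python
def expense_batch(expense_id, items):
    """Build the header and item entities for one expense so they share
    a PartitionKey and can be committed in one entity-group transaction."""
    entities = [{
        "PartitionKey": expense_id,
        "RowKey": "Expense",  # the header row
        "Kind": "Expense",
    }]
    for i, item in enumerate(items):
        entities.append({
            "PartitionKey": expense_id,  # same partition => same transaction
            "RowKey": "ExpenseItem_%03d" % i,
            "Kind": "ExpenseItem",
            "Description": item,
        })
    return entities

batch = expense_batch("exp-001", ["taxi", "hotel"])
partitions = {e["PartitionKey"] for e in batch}
```

Because every entity in the batch carries the same PartitionKey, either the header and all its items are saved together or none are, which is exactly what eliminates the orphaned-record problem described above.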

Remove Updates for Receipts

In the first version, when you uploaded receipts along with the expense, the code would post a message to a queue that would then update the table with the URIs of the thumbnail and receipt images. In this version, we use a more convention-based approach. Instead of updating the table, a new property, "HasReceipts", is added so that when displaying the receipts the code can tell whether a receipt exists. When there is a receipt, the URI is built on the fly and accessed. This saves the cost of the update to the table.
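A convention-based URI means the receipt's address is derived from data the entity already carries, so no table update is needed once the image has been processed. A small sketch of the idea (Python; the account, container, and naming convention below are made up for illustration):

```python
def receipt_uri(account, container, expense_id, has_receipts):
    """Build the receipt blob URI by convention instead of storing it
    in the table. Returns None when the expense has no receipt."""
    if not has_receipts:
        return None
    # Convention: receipt blobs are named after the expense id.
    return "https://%s.blob.core.windows.net/%s/%s.png" % (
        account, container, expense_id)

uri = receipt_uri("myaccount", "receipts", "exp-001", True)
```

The trade-off is that the convention becomes a contract: the worker that generates the thumbnails must write blobs to exactly the names the UI will compute.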


Poison Messages

In the previous version, when a message had been dequeued 5 times it was simply deleted; now you have the option of moving it to another queue instead.


	try
	{
	    // Process the message here ...
	    success = true;
	}
	catch (Exception)
	{
	    success = false;
	    if (deadQueue != null && message.DequeueCount > 5)
	    {
	        // Move the poison message to the dead-letter queue
	        deadQueue.AddMessage(new CloudQueueMessage(message.AsBytes));
	    }
	}

	// 'queue' is the source queue the message was dequeued from
	if (success || message.DequeueCount > 5)
	{
	    queue.DeleteMessage(message);
	}




This is the beginning of our updates to our Windows Azure Guidance. We want to show even better ways of moving and developing applications for the cloud. Go check out the source.


SQL Azure Database and Reporting

Per Ola Sæther published Push Notifications for Windows Phone 7 using SQL Azure and Cloud services - Part 1 to the DZone blog on 4/13/2011:

In this article series I will write about push notifications for Windows Phone 7 with a focus on the server side, covering notification subscription management, subscription matching and notification scheduling. The server side will use SQL Azure and a WCF Web Service deployed to Azure.

As an example I will create a very simple news reader application where the user can subscribe to receive notifications when a new article within a given category is published. Push notifications will be delivered as Toast notifications to the client. The server side will consist of a SQL database and WCF Web Services that I deploy to Windows Azure.

Creating the SQL Azure database

The database will store subscription information and news articles. To achieve this I will create three tables: Category, News and Subscription.

The Category table is just a placeholder for categories, containing a unique category ID and the category name. The News table is where we store our news articles; it contains a unique news ID, header, article, date added and a reference to a category ID. In the Subscription table we store information about the subscribers: a globally unique device ID, a channel URI and a reference to a category ID.

I create the database in Windows Azure as a SQL Azure database. To learn how to sign up for SQL Azure and create a database there, you can read an article I wrote earlier explaining how to sign up for and use SQL Azure. When you have signed up for SQL Azure and created the database, you can use the following script to create the three tables we will be using in this article.

USE [PushNotification]
GO

CREATE TABLE [dbo].[Category]
(
  [CategoryId] [int] IDENTITY (1,1) NOT NULL,
  [Name] [nvarchar](60) NOT NULL,
  CONSTRAINT [PK_Category] PRIMARY KEY ([CategoryId])
)
GO

CREATE TABLE [dbo].[Subscription]
(
  [SubscriptionId] [int] IDENTITY (1,1) NOT NULL,
  [DeviceId] [uniqueidentifier] NOT NULL,
  [ChannelURI] [nvarchar](250) NOT NULL,
  PRIMARY KEY ([SubscriptionId]),
  [CategoryId] [int] NOT NULL FOREIGN KEY REFERENCES [dbo].[Category]([CategoryId])
)
GO

CREATE TABLE [dbo].[News]
(
  [NewsId] [int] IDENTITY (1,1) NOT NULL,
  [Header] [nvarchar](60) NOT NULL,
  [Article] [nvarchar](250) NOT NULL,
  [AddedDate] [datetime] NOT NULL,
  PRIMARY KEY ([NewsId]),
  [CategoryId] [int] NOT NULL FOREIGN KEY REFERENCES [dbo].[Category]([CategoryId])
)
GO
By running this script, the three tables Category, Subscription and News used in this example are created. Be aware that I have not normalized the tables, to avoid complicating the code. You can create a normal SQL Server database and run it locally or on a server if you want to, but for this example I have used SQL Azure.

Creating the WCF Cloud service

The WCF Cloud service will be consumed by the WP7 app and will communicate with the SQL Azure database. The service is responsible for pushing notifications to the WP7 clients.

Steps to create the WCF Cloud service with Object Model

Create a Windows Azure Cloud project in Visual Studio 2010; I named mine NewsReaderCloudService. Select WCF Service Web Role; I named mine NewsReaderWCFServiceWebRole. Then you need to create an Object Model that connects to your SQL Azure database; I named mine NewsReaderModel and named the entity NewsReaderEntities. For a more detailed explanation of how to do this, you can take a look at an article I have written earlier on the subject.

If you have followed these steps, you should now have a Windows Azure Cloud project connected to your SQL Azure database.

Creating the WCF contract

The WCF contract is written in the auto-generated IService.cs file. The first thing I do is rename this file to INewsReaderService.cs. In this interface I have added seven OperationContract methods: SubscribeToNotification, RemoveCategorySubscription, GetSubscriptions, AddNewsArticle, AddCategory, GetCategories and PushToastNotifications. I will explain these methods in the next section when we implement the service based on the INewsReaderService interface. Below you can see the code for INewsReaderService.cs:

using System;
using System.Collections.Generic;
using System.ServiceModel;

namespace NewsReaderWCFServiceWebRole
{
    [ServiceContract]
    public interface INewsReaderService
    {
        [OperationContract]
        void SubscribeToNotification(Guid deviceId, string channelURI, int categoryId);

        [OperationContract]
        void RemoveCategorySubscription(Guid deviceId, int categoryId);

        [OperationContract]
        List<int> GetSubscriptions(Guid deviceId);

        [OperationContract]
        void AddNewsArticle(string header, string article, int categoryId);

        [OperationContract]
        void AddCategory(string category);

        [OperationContract]
        List<Category> GetCategories();

        [OperationContract]
        void PushToastNotifications(string title, string message, int categoryId);
    }
}
Creating the service

The service implements the INewsReaderService interface in the auto-generated Service1.svc.cs file, which I first rename to NewsReaderService.svc.cs. I start by implementing the members of the interface. Below you can see the code for NewsReaderService.svc.cs with empty implementations; in the next sections I will complete the empty methods.

using System;
using System.Collections.Generic;
using System.Data.Objects;
using System.IO;
using System.Linq;
using System.Net;
using System.Xml;

namespace NewsReaderWCFServiceWebRole
{
    public class NewsReaderService : INewsReaderService
    {
        public void SubscribeToNotification(Guid deviceId, string channelURI, int categoryId)
        {
            throw new NotImplementedException();
        }

        public void RemoveCategorySubscription(Guid deviceId, int categoryId)
        {
            throw new NotImplementedException();
        }

        public List<int> GetSubscriptions(Guid deviceId)
        {
            throw new NotImplementedException();
        }

        public void AddNewsArticle(string header, string article, int categoryId)
        {
            throw new NotImplementedException();
        }

        public void AddCategory(string category)
        {
            throw new NotImplementedException();
        }

        public List<Category> GetCategories()
        {
            throw new NotImplementedException();
        }

        public void PushToastNotifications(string title, string message, int categoryId)
        {
            throw new NotImplementedException();
        }
    }
}

SubscribeToNotification method

This is the method that the client calls to subscribe to notifications for a given category. The information is stored in the Subscription table and will be used when notifications are pushed.

public void SubscribeToNotification(Guid deviceId, string channelURI, int categoryId)
{
    using (var context = new NewsReaderEntities())
    {
        context.AddToSubscription(new Subscription
        {
            DeviceId = deviceId,
            ChannelURI = channelURI,
            CategoryId = categoryId,
        });
        context.SaveChanges();
    }
}

RemoveCategorySubscription method

This is the method that the client calls to remove a subscription to notifications for a given category. If the Subscription table has a match for the given device ID and category ID, that entry is deleted.

public void RemoveCategorySubscription(Guid deviceId, int categoryId)
{
    using (var context = new NewsReaderEntities())
    {
        Subscription selectedSubscription = (from o in context.Subscription
                                             where o.DeviceId == deviceId && o.CategoryId == categoryId
                                             select o).First();
        context.Subscription.DeleteObject(selectedSubscription);
        context.SaveChanges();
    }
}
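One caveat in the query above: First() throws an InvalidOperationException when the Subscription table has no match (for example, if a client unsubscribes twice). A defensive variant swaps in FirstOrDefault and checks for null. The sketch below illustrates the pattern against an in-memory list rather than the real entity model; the Subscription stand-in and SubscriptionStore names are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stand-in for the Subscription entity (for illustration only).
class Subscription
{
    public Guid DeviceId { get; set; }
    public int CategoryId { get; set; }
}

static class SubscriptionStore
{
    // FirstOrDefault returns null instead of throwing when nothing matches,
    // so a stale unsubscribe call becomes a harmless no-op.
    public static bool TryRemove(List<Subscription> table, Guid deviceId, int categoryId)
    {
        var match = table.FirstOrDefault(
            o => o.DeviceId == deviceId && o.CategoryId == categoryId);
        if (match == null)
        {
            return false;
        }
        table.Remove(match);
        return true;
    }
}
```

With the real NewsReaderEntities context the same shape applies: query with FirstOrDefault, and only call DeleteObject and SaveChanges when the result is non-null.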


GetSubscriptions method

This method returns all the categories that a device subscribes to.

public List<int> GetSubscriptions(Guid deviceId)
{
    var categories = new List<int>();
    using (var context = new NewsReaderEntities())
    {
        IQueryable<int> selectedSubscriptions = from o in context.Subscription
                                                where o.DeviceId == deviceId
                                                select o.CategoryId;
        categories.AddRange(selectedSubscriptions.ToList());
    }
    return categories;
}


AddNewsArticle method

This is a utility method that lets us add a new news article to the News table. I will use it later to demonstrate push notifications based on content matching: when a news article is published for a given category, only the clients that have subscribed to that category will receive a notification.

public void AddNewsArticle(string header, string article, int categoryId)
{
    using (var context = new NewsReaderEntities())
    {
        context.AddToNews(new News
        {
            Header = header,
            Article = article,
            CategoryId = categoryId,
            AddedDate = DateTime.Now,
        });
        context.SaveChanges();
    }
}


AddCategory method

This method is also a utility method so that we can add new categories to the Category table.

public void AddCategory(string category)
{
    using (var context = new NewsReaderEntities())
    {
        context.AddToCategory(new Category
        {
            Name = category,
        });
        context.SaveChanges();
    }
}


GetCategories method

This method is used by the client to get a list of all available categories. We use this list to let the user select which categories to receive notifications for.

public List<Category> GetCategories()
{
    var categories = new List<Category>();
    using (var context = new NewsReaderEntities())
    {
        IQueryable<Category> selectedCategories = from o in context.Category
                                                  select o;
        foreach (Category selectedCategory in selectedCategories)
        {
            categories.Add(new Category { CategoryId = selectedCategory.CategoryId, Name = selectedCategory.Name });
        }
    }
    return categories;
}


PushToastNotifications method

This method will construct the toast notification with a title and a message. The method will then get the channel URI for all devices that have subscribed to the given category. For each device in that list, the constructed toast notification will be sent.

public void PushToastNotifications(string title, string message, int categoryId)
{
    string toastMessage = "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                          "<wp:Notification xmlns:wp=\"WPNotification\">" +
                          "<wp:Toast>" +
                          "<wp:Text1>{0}</wp:Text1>" +
                          "<wp:Text2>{1}</wp:Text2>" +
                          "</wp:Toast>" +
                          "</wp:Notification>";
    toastMessage = string.Format(toastMessage, title, message);
    byte[] messageBytes = Encoding.UTF8.GetBytes(toastMessage);

    // Send the toast notification to all devices that subscribe to the given category.
    PushToastNotificationToSubscribers(messageBytes, categoryId);
}

private void PushToastNotificationToSubscribers(byte[] data, int categoryId)
{
    Dictionary<Guid, Uri> categorySubscribers = GetSubscribersBasedOnCategory(categoryId);
    foreach (Uri categorySubscriberUri in categorySubscribers.Values)
    {
        // Add headers to the HTTP POST message.
        var myRequest = (HttpWebRequest)WebRequest.Create(categorySubscriberUri); // the client's channel URI
        myRequest.Method = WebRequestMethods.Http.Post;
        myRequest.ContentType = "text/xml";
        myRequest.ContentLength = data.Length;
        myRequest.Headers.Add("X-MessageID", Guid.NewGuid().ToString()); // gives this message a unique ID
        myRequest.Headers["X-WindowsPhone-Target"] = "toast";
        // X-NotificationClass: 2 = push the toast immediately,
        // 12 = wait 450 seconds, 22 = wait 900 seconds.
        myRequest.Headers.Add("X-NotificationClass", "2");

        // Write the payload.
        using (Stream requestStream = myRequest.GetRequestStream())
        {
            requestStream.Write(data, 0, data.Length);
        }

        // Send the notification to this phone.
        try
        {
            var response = (HttpWebResponse)myRequest.GetResponse();
        }
        catch (WebException ex)
        {
            // Log or handle the exception.
        }
    }
}

private Dictionary<Guid, Uri> GetSubscribersBasedOnCategory(int categoryId)
{
    var categorySubscribers = new Dictionary<Guid, Uri>();
    using (var context = new NewsReaderEntities())
    {
        IQueryable<Subscription> selectedSubscribers = from o in context.Subscription
                                                       where o.CategoryId == categoryId
                                                       select o;
        foreach (Subscription selectedSubscriber in selectedSubscribers)
        {
            categorySubscribers.Add(selectedSubscriber.DeviceId, new Uri(selectedSubscriber.ChannelURI));
        }
    }
    return categorySubscribers;
}
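Note that the try/catch in PushToastNotificationToSubscribers discards the HTTP response, but MPNS reports the delivery outcome in response headers: X-NotificationStatus, X-SubscriptionStatus and X-DeviceConnectionStatus. A small sketch of turning those header values into an action follows; the header names and values are the real MPNS ones, while the mapping covers only the common cases and the advice strings are our own.

```csharp
using System;

static class MpnsResponse
{
    // Interprets the common MPNS response header combinations.
    public static string Interpret(string notificationStatus,
                                   string subscriptionStatus,
                                   string deviceConnectionStatus)
    {
        if (subscriptionStatus == "Expired")
            return "Channel expired: delete this row from the Subscription table.";
        if (notificationStatus == "Received")
            return "Notification accepted for delivery.";
        if (notificationStatus == "QueueFull")
            return "Device queue full: retry later.";
        if (deviceConnectionStatus == "Disconnected")
            return "Device not connected: notification was not delivered.";
        return "Suppressed or dropped: check the payload and notification class.";
    }
}
```

In PushToastNotificationToSubscribers you would read these with response.Headers["X-NotificationStatus"] (and the same for the other two) before deciding, for example, whether to prune an expired subscription.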


Running the WCF Cloud service

The WCF Cloud service is now completed and you can run it locally or deploy it to Windows Azure.

Deploying the WCF service to Windows Azure

Create a service package

Go to your service project in Visual Studio 2010. Right-click the project in Solution Explorer and click “Publish”, select “Create Service Package Only” and click “OK”. A file explorer window will pop up with the service package files that were created. Keep this window open; you will need to browse to these files later.

Create a new hosted service

Go to the Windows Azure Management Portal and log in with the account you used when signing up for SQL Azure. Click “New Hosted Service” and a pop-up window will be displayed. Select the subscription you created for the SQL Azure database, then enter a name for the service and a URL prefix. Select a region, choose to deploy to Stage and give the deployment a description. Now browse to the two files you created when you published the service in the step above and click “OK”. Your service will be validated and created (note that this step might take a while). Once it is validated and created, you will see the deployed service under Hosted Services in the Management Portal for Windows Azure.

Test the deployed WCF service with WCF Test Client

Open the WCF Test Client; on my computer it's located at C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\WcfTestClient.exe

Go to the Management Portal for Windows Azure and look at your hosted service. Select the NewsReaderService (type Deployment) and click the DNS name link on the right side. A browser window will open and most likely display a 403 Forbidden error message. That's OK; we only need to copy the URL. In WCF Test Client, select File and then Add Service, paste in the address you copied from the browser, append /NewsReaderService.svc to the end, and click OK. When the service is added you can see its methods on the left side; double-click AddCategory to add some categories to your database. In the request window, enter a category name as the value and click the Invoke button.

You can now log in to your SQL Azure database using Microsoft SQL Server Management Studio and run a SELECT on the Category table. You should see the categories you added from the WCF Test Client.
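If you would rather exercise the deployed service from code than from the WCF Test Client, a ChannelFactory sketch like the one below works too. Everything here is an assumption for illustration: http://yourprefix.cloudapp.net is a placeholder for your own DNS name, the trimmed-down contract mirrors the interface shown earlier in this article, and the default basicHttpBinding is assumed; the snippet obviously cannot run until the service is actually deployed.

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Trimmed-down copies of the service and data contracts from this article,
// so the sketch compiles on its own (hypothetical stand-ins).
[ServiceContract]
interface INewsReaderService
{
    [OperationContract]
    void AddCategory(string category);

    [OperationContract]
    List<Category> GetCategories();
}

[DataContract]
class Category
{
    [DataMember] public int CategoryId { get; set; }
    [DataMember] public string Name { get; set; }
}

class SmokeTest
{
    static void Main()
    {
        // Placeholder: substitute the DNS name copied from the portal.
        string baseAddress = "http://yourprefix.cloudapp.net";
        string serviceUrl = baseAddress.TrimEnd('/') + "/NewsReaderService.svc";

        var factory = new ChannelFactory<INewsReaderService>(
            new BasicHttpBinding(), new EndpointAddress(serviceUrl));
        INewsReaderService client = factory.CreateChannel();

        client.AddCategory("Sports");
        foreach (Category category in client.GetCategories())
        {
            Console.WriteLine("{0}: {1}", category.CategoryId, category.Name);
        }
        factory.Close();
    }
}
```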

Part 2

In part 2 of this article series I will create a Windows Phone 7 application that consumes the WCF service you just created. The application will also receive toast notifications for subscribed categories. Continue to part 2.

<Return to section navigation list> 

MarketPlace DataMarket and OData

Channel9 posted the video segment for Mike Flasko’s OData in Action: Connecting Any Data Source to Any Device on 4/14/2011:

imageWe are collecting more diverse data than ever before, and at the same time undergoing a proliferation of connected devices ranging from the phone to the desktop, each with its own requirements. This can pose a significant barrier to developers looking to create great end-to-end user experiences across devices. The OData protocol was created to provide a common way to expose and interact with data on any platform (databases, NoSQL stores, web services, etc.). In this code-heavy session we’ll show you how Netflix, eBay and others have used OData and Azure to quickly build secure, internet-scale services that power immersive client experiences, from rich cross-platform mobile applications to insightful BI reports.
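To give a concrete flavor of the "common way to expose and interact with data" the session describes, OData queries are just URIs with system query options such as $filter and $top. A minimal sketch of composing one by hand follows; the service root and entity set names are hypothetical.

```csharp
using System;

static class ODataQuery
{
    // Composes an OData query URI from a service root, entity set
    // and two common system query options.
    public static string Build(string serviceRoot, string entitySet,
                               string filter, int top)
    {
        return string.Format("{0}/{1}?$filter={2}&$top={3}",
            serviceRoot.TrimEnd('/'),
            entitySet,
            Uri.EscapeDataString(filter),
            top);
    }
}
```

For example, Build("http://example.org/odata.svc", "Titles", "Rating gt 4", 10) composes a URI asking the service for the first ten entries whose Rating exceeds 4, with the filter expression percent-encoded.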

Tony Bailey (a.k.a. tbtechnet) described B2B Marketplaces Emerging for Windows Azure at Ingram Micro in a 4/12/2011 post to the Windows Azure Platform, Web Hosting and Web Services blog:


image New “marketplaces” are emerging for developers, startups and ISVs that are building Windows Azure platform applications and who want visibility with VARs and to develop new channel development leads.

The new Ingram Micro Cloud Marketplace will feature solutions and help developers to participate in customized marketing campaigns.
Ingram Micro expects more than half of its 20,000 active solution providers to deploy cloud software and services in the next two years.

Ingram Micro Deliverables:
  • Ingram Micro Cloud Platform: Marketplace with links to your website
    • Your Cloud Service landing page with your services descriptions, specs, features and benefits
    • Post your service assets (training videos, pricing, fact sheets, technical documents)
    • Education section: post your white papers and case studies
  • Banner ads on Ingram Micro Cloud and on your microsite
  • Ingram Micro Services and Cloud newsletter presence (June and September)
  • Face-to-face events

imageJoin Ingram Micro at a live webinar on Thursday 28th April 10am PST to learn more about the opportunity to launch your solution in the Ingram Micro Cloud Marketplace.

Registration link:

<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Matias Woloski (@woloski) described Troubleshooting WS-Federation and SAML2 Protocol in a 4/16/2011 post:

image During the last couple of years we have helped companies deploy federated identity solutions using the WS-Fed and SAML2 protocols with products like ADFS and SiteMinder on various platforms. Claims-based identity has many benefits but, like every solution, it has its downsides. One of them is the additional complexity of troubleshooting issues if something goes wrong, especially when things are distributed and in production. Since authentication is outsourced and no longer part of the application logic, you need some way to see what is happening behind the scenes.

I’ve used Fiddler and HttpHook in the past to see what’s going on on the wire. These are great tools, but they are developer-oriented. If the user who is having trouble logging in to an app is not a developer, things get more difficult.

  • Either you have some kind of server side log with all the tokens that have been issued and a nice way to query those by user
  • Or you have some kind of tool that the user can run and intercept the token

Fred, one of the guys on my team, had the idea a couple of months ago to implement the latter. So together we coded the first (very rough) version of the token debugger. The code is really simple: we embed a WebBrowser control in a WinForms app and inspect the content on the Navigating event. If we detect a token being posted, we show it.
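The interception hinges on recognizing the WS-Federation response: after a successful sign-in, the STS returns an HTML page whose form auto-posts a hidden wresult field back to the application. A simplified sketch of pulling the token out of such a page follows; the real tool hooks the WebBrowser control's events, and the regex-based TokenDebugger helper here is just an illustration under that assumption.

```csharp
using System;
using System.Net;
using System.Text.RegularExpressions;

static class TokenDebugger
{
    // Extracts the hidden wresult field from a WS-Federation auto-post page.
    // Returns the HTML-decoded token XML, or null if the page carries no token.
    public static string ExtractWresult(string html)
    {
        Match match = Regex.Match(html,
            "name=\"wresult\"\\s+value=\"(?<token>[^\"]*)\"",
            RegexOptions.IgnoreCase);
        return match.Success
            ? WebUtility.HtmlDecode(match.Groups["token"].Value)
            : null;
    }
}
```

A page containing `<input type="hidden" name="wresult" value="&lt;t:RequestSecurityTokenResponse/&gt;" />` would yield the decoded `<t:RequestSecurityTokenResponse/>` XML, which is what the tool then displays or emails.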

Let’s see how it works. First you enter the URL of your app; in this case we are using wolof (the tool we use for our backlog), a Ruby app speaking the WS-Fed protocol.


After clicking the Southworks logo and entering my Active Directory account credentials, ADFS returns the token and it is POSTed to the app. In that moment, we intercept it and show it.


You can do two things with the token: send it via email (to someone who can read it) or continue with the usual flow. If there is another STS in the way, it will also show a second token.



Since I wanted to have this app handy, I enabled ClickOnce deployment and deployed it to AppHarbor (which works really well, btw).

If you want to use it browse to and launch the ClickOnce app @

If you want to download the source code or contribute @

Vittorio Bertocci (@vibronet) began a new series with ACS Extensions for Umbraco - Part I: Setup on 4/14/2011:

More unfolding of the tangle of new content announced with the ACS launch.

imageToday I want to highlight the work we’ve been doing for augmenting Umbraco with authentication and authorization capabilities straight out of ACS. We really made an effort to make those new capabilities blend into the Umbraco UI, and without false modesty I think we didn’t get far from the mark.

image722322222I would also like to take this chance to thank Southworks, our long-time partner on this kind of activity, for their great work on the ACS Extensions for Umbraco.

Once again, I’ll apply the technique I used yesterday for the ACS+WP7+OAuth2+OData lab post; I will paste here the documentation as is. I am going to break this in 3 parts, following the structure we used in the documentation as well.

Access Control Service (ACS) Extensions for Umbraco

'Click here for a video walkthrough of this tutorial'

Setting up the ACS Extensions in Umbraco is very simple. You can use the Add Library Package Reference from Visual Studio to install the ACS Extensions NuGet package to your existing Umbraco 4.7.0 instance. Once you have done that, you just need to go to the Umbraco installation pages, where you will find a new setup step: there you will fill in a few details describing the ACS namespace you want to use, and presto! You’ll be ready to take advantage of your new authentication capabilities.


Alternatively, if you don’t want the NuGet package to update Umbraco’s source code for you, you can perform the required changes manually by following the steps included in the manual installation document found in the ACS Extensions package. Once you have finished all the install steps, you can go to the Umbraco install pages and configure the extension as described above. You should consider the manual installation procedure only if you really need fine control over the details of how the integration takes place, as the procedure is significantly less straightforward than the NuGet route.

In this section we will walk you through the setup process. For your convenience we are adding one initial section on installing Umbraco itself. If you already have one instance, or if you want to follow a different installation route than the one we describe here, feel free to skip to the first section below and go straight to the Umbraco.ACSExtensions NuGet install section.

Install Umbraco using the Web Platform Installer and Configure It
  1. Launch the Microsoft Web Platform Installer from


    Figure 1 - Windows Web App Gallery | Umbraco CMS

  2. Click on Install button. You will get to a screen like the one below:


    Figure 2 - Installing Umbraco via WebPI

  3. Choose Options. From there you’ll have to select IIS as the web server (the ACS Extensions won’t work on IIS7.5).


    Figure 3 - Web Platform Installer | Umbraco CMS setup options

  4. Click on OK, and back on the Umbraco CMS dialog click on Install.
  5. Select SQL Server as database type. Please note that later in the setup you will need to provide the credentials for a SQL administrator user, hence your SQL Server needs to be configured to support mixed authentication.


    Figure 4 - Choose database type

  6. Accept the license terms to start downloading and installing Umbraco.
  7. Configure the web server settings with the following values and click on Continue.


    Figure 5 - Site Information

  8. Complete the database settings as shown below.



    Figure 6 - Database settings

  9. When the installation finishes, click on Finish button and close the Web Platform Installer.
  10. Open Internet Information Services Manager and select the web site created in step 7.
  11. In order to properly support the authentication operations that the ACS Extensions will enable, your web site needs to be capable of protecting communications. On the Actions pane on the right, click Bindings… and add one https binding as shown below.


    Figure 7 - Add Site Binding

  12. Open the hosts file located in C:\Windows\System32\drivers\etc, and add a new entry pointing to the Umbraco instance you’ve created so that you will be able to use the web site name on the local machine.


    Figure 8 - Hosts file entry

  13. At this point you have all the bits you need to run your Umbraco instance. All that’s left to do is make some initial configuration: Umbraco provides you with one setup portal which enables you to do just that directly from a browser. Browse to http://{yourUmbracoSite}/install/; you will get to a screen like the one below.


    Figure 9 - The Umbraco installation wizard

  14. Please refer to the Umbraco documentation for a detailed explanation of all the options: here we will do the bare minimum to get the instance running. Click on “Let’s get started!” button to start the wizard.
  15. Accept the Umbraco license.
  16. Hit Install in the Database configuration step and click on Continue once done.
  17. Set a name and password for the administrator user in the Create User step.
  18. Pick a Starter Kit and a Skin (in this tutorial we use Simple and Sweet@s).
  19. Click on Preview your new website: your Umbraco instance is ready.


    Figure 10 - Your new Umbraco instance is ready!

Install the Umbraco.ACSExtensions via NuGet Package

Installing the ACS Extensions via NuGet package is very easy.

  1. Open the Umbraco website from Visual Studio 2010 (File -> Open -> Web Site…)
  2. Open the web.config file and set the umbracoUseSSL setting with true.


    Figure 11 - umbracoUseSSL setting

  3. Click on Save All to save the solution file.
  4. Right-click on the website project and select “Add Library Package Reference…” as shown below. If you don’t see the entry in the menu, please make sure that NuGet 1.2 is correctly installed on your system.


Figure 12 - Add Library Package Reference

  5. Select the Umbraco.ACSExtensions package from the appropriate feed and click Install.


    At the time you read this tutorial, the ACS Extensions NuGet package will be available on the official NuGet package source: please select Umbraco.ACSExtensions from there. At the time of writing, the ACS Extensions are not yet published on the official feed, hence in the figure here we are selecting it from a local repository. (If you want to host your own feed, see Create and use a NuGet local repository.)


    Figure 13 - Installing theUmbraco. ACSExtensions NuGet package

If the installation takes place correctly, a green checkmark will appear in place of the install button in the Add Library Package Reference dialog. You can close Visual Studio, from now on you’ll do everything directly from the Umbraco management UI.

Configure the ACS Extensions

Now that the extension is installed, the new identity and access features are available directly in the Umbraco management console. You didn’t configure the extensions yet: the administrative UI will sense that and direct you accordingly.

  1. Navigate to the management console of your Umbraco instance, at http://{yourUmbracoSite}/umbraco/. If you used an untrusted certificate when setting up the SSL binding of the web site, the browser will display a warning: dismiss it and continue to the web site.
  2. The management console will prompt you for a username and a password, use the credentials you defined in the Umbraco setup steps.
  3. Navigate to the Members section as shown below.


    Figure 14 - The admin console home page

  4. The ACS Extensions added some new panels here. In the Access Control Service Extensions for Umbraco panel you’ll notice a warning indicating that the ACS Extensions for Umbraco are not configured yet. Click on the ACS Extensions setup page link in the warning box to navigate to the setup pages.


    Figure 15 - The initial ACS Extensions configuration warning.


    Figure 16 - The ACS Extensions setup step.

The ACS Extensions setup page extends the existing setup sequence, and lives at the address https://{yourUmbracoSite}/install/?installStep=ACSExtensions. It can be used both for the initial setup, as shown here, and for managing subsequent changes (for example, when you deploy the Umbraco site from your development environment to its production hosting, in which case the URL of the web site changes). Click Yes to begin the setup.

Access Control Service Settings

Enter your ACS namespace and the URL at which your Umbraco instance is deployed. These two fields are mandatory, as the ACS Extensions cannot set up ACS and your instance without them.

The management key field is optional, but if you don’t enter it, most of the extensions’ features will not be available.


Figure 17 - Access Control Service Settings


The management key can be obtained through the ACS Management Portal. The setup UI provides you with a link to the right page in the ACS portal, but you’ll need to substitute the string {namespace} with the actual namespace you want to use.

Social Identity Providers

Decide which social identity providers you want to accept users from. This feature requires you to have entered your ACS namespace management key: if you didn’t, the ACS Extensions will use whatever identity providers are already set up in the ACS namespace.

Note that in order to integrate with Facebook you’ll need to have a Facebook application properly configured to work with your ACS namespace. The ACS Extensions gather from you the Application Id and Application Secret that are necessary for configuring ACS to use the corresponding Facebook application.


Figure 18 - Social Identity Providers

SMTP Settings

Users from social identity providers are invited to gain access to your web site via email. In order to use the social provider integration feature, you need to configure an SMTP server.


Figure 19 - SMTP Settings

  1. Click on Install to configure the ACS Extensions.


    Figure 20 - ACS Extension Configured

If everything goes as expected, you will see a confirmation message like the one above. If you navigate back to the admin console and to the Members section, you will notice that the warning is gone. You are now ready to take advantage of the ACS Extensions.


Figure 21 - The Member section after the successful configuration of the ACS Extensions

Zane Adam reported Delivering on our roadmap: Announcing the production release of Windows Azure AppFabric Caching and Access Control services in a 4/12/2011 post:

image A few months ago, at the Professional Developer Conference (PDC), we released Community Technology Previews (CTPs) and made several announcements regarding new services and capabilities we are adding to the Windows Azure Platform. Ever since then we have been listening to customer feedback and working hard on delivering against our roadmap. You can find more details on these deliveries in my previous two blog posts on SQL Azure and Windows Azure Marketplace DataMarket.

image722322222Today at the MIX conference we announced the production release of two of the services that CTPed at PDC: the Windows Azure AppFabric Caching and Access Control services. The updated Access Control is now live, and Caching will be released at the end of April.

These two services are particularly appealing to web developers. The Caching service seamlessly accelerates applications by storing data in memory, and the Access Control service enables the developer to provide users with a single-sign-on experience by providing out-of-the-box support for identities such as Windows Live ID, Google, Yahoo!, Facebook, as well as enterprise identities such as Active Directory.

Pixel Pandemic, a company developing a unique game engine technology for persistent browser based MMORPGs, is already using the Caching service as part of their solution. Here is a quote from Ebbe George Haabendal Brandstrup, CTO and co-founder, regarding their use of the Caching service:

“We're very happy with how Azure and the caching service is coming along.
Most often, people use a distributed cache to relieve DB load and they'll accept that data in the cache lags behind with a certain delay. In our case, we use the cache as the complete and current representation of all game and player state in our games. That puts a lot extra requirements on the reliability of the cache. Any failed communication when writing state changes must be detectable and gracefully handled in order to prevent data loss for players and with the Azure caching service we’ve been able to meet those requirements.”

Umbraco, one of the most deployed Web Content Management Systems on the Microsoft stack, can be easily integrated with the Access Control service through an extension. Here is a quote from Niels Hartvig, founder of the company, regarding the Access Control service:

“We're excited about the very diverse integration scenarios the ACS (Access Control service) extension for Umbraco allows. The ACS Extension for Umbraco is one great example of what is possible with Windows Azure and the Microsoft Web Platform.”

You can read more about these services and the release announcement on the Windows Azure AppFabric Team Blog.

We are continuing to innovate and move quickly in our cloud services and release new services as well as enhance our existing services every few months.

The releases of these two services are a great enhancement to our cloud services offering enabling developers to more easily build applications on top of the Windows Azure Platform.

Look forward to more CTPs and releases in the coming months!

<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Wely Lau continued his RDP series with Establishing Remote Desktop to Windows Azure Instance – Part 2 on 4/16/2011:

Part 2 - Uploading Certificate on Windows Azure Developer Platform

image This is the second part of this blog post series about Establishing Remote Desktop to Windows Azure. You can check out the first part here.

So far we have only done the work on the development environment side. There is still something that needs to be done in the Windows Azure Portal.

Export Your Certificate

1. The first step is to export the physical certificate file, since we need to upload it to the Windows Azure Portal.

There are actually a few ways to export a certificate file. The most common is using MMC. Since we used Visual Studio to configure remote desktop, we can use that feature as well. Refer to step 4 of the first part of this post, where you created a certificate using the Visual Studio wizard. With the preferred certificate selected, click on View to see the details of your certificate.


2. Click on the Details tab of the Certificate dialog box, and then click on the Copy to File button. This brings you to the Certificate Export Wizard.


3. Clicking the Next button brings you to the step where you select whether to export the private key. For this first export, select “No, do not export the private key”, keep following the wizard, and on the last page name the physical file [Name].CER.


4. Repeat steps 1 to 3, but this time select “Yes, export the private key”, which will require you to define a password and export to a [Name].pfx file.

Upload the certificate to Windows Azure Portal

Now that we have exported both the private and public keys of the certificate, the next step is to upload them to Windows Azure.

5. Log in to the Windows Azure Developer Portal. I assume that you have your subscription ready with your Live ID.

6. Click on “Hosted Services, Storage Accounts & CDN” in the left-hand menu. In the upper part, click on Management Certificates. If you have previously uploaded certificates, you will see them listed.

7. Next, click on the Add Certificate button; a modal pop-up dialog will prompt you to select your subscription as well as upload your .CER certificate.


As instructed, go ahead and select your subscription and browse to the .CER file you exported in step 3. It may take a few seconds to upload the certificate. You have now successfully uploaded the public key of the certificate.

8. Now you will also need to upload the private key. To do that, click on the Hosted Services menu, then click the New Hosted Service button, and the Create a New Hosted Service dialog will show up. There are a few sections you need to complete here.


a. Enter the name of your service as well as the URL prefix. Please note that the URL prefix must be globally unique.

b. Next, select the region / affinity group where you want to host your service.

c. Then choose your deployment option: deploy immediately to the staging or production environment, or do not deploy yet. I assume you deploy to the production environment.

d. Give your deployment a name or label. People often use either a version number or the current time as the label.

e. Now browse to the package and configuration files you created in step 9 of the previous post.

f. Finally, you need to add a certificate again, but this time it is the private-key (.pfx) certificate that you exported in step 4 above.

Click OK when you are done. If a warning occurs stating that you have only one instance, consider whether to increase your instance count to meet the 99.95% Microsoft SLA. If you are doing this only for development or testing purposes, one instance doesn’t really matter. Click OK to continue.

9. It will take some time to upload the package and for the fabric controller to allocate a hosted service slot for you. You will see the status change slowly from “uploading” to “initializing”, “busy”, and eventually “ready”, if everything goes well.

Remote Desktop to Your Windows Azure Instance

10. Assuming your instance has deployed successfully, you can now remote desktop in by selecting the instance of your hosted service and clicking the Connect button in the upper menu.


This will prompt you to download an .rdp file.

11. Open the .rdp file and you will see a verification asking whether you want to connect despite certificate errors. Simply ignore it and click Yes.


It will then prompt you for the username and password you specified in Visual Studio when configuring remote desktop. But here’s a little trick: don’t just type your username, since it will then use your computer name as the domain. Instead, prefix it with “\” (backslash), for example: “\wely”. And of course you’ll need to enter your password as well.


12. If all goes well, you will see that you have successfully established a remote desktop session to your Windows Azure instance. Bingo!


Alright, that’s all for this post. Hope it helps! See you on another blog post.

Richard Seroter reported Code Uploaded for WCF/WF and AppFabric Connect Demonstration in a 4/13/2011 post:

A few days ago I wrote a blog post explaining a sample solution that took data into a WF 4.0 service, used the BizTalk Adapter Pack to connect to a SQL Server database, and then leveraged the BizTalk Mapper shape that comes with AppFabric Connect.

I had promised some folks that I’d share the code, so here it is.

The code package has the following bits:


The Admin folder has a database script for creating the database that the Workflow Service queries.  The CustomerServiceConsoleHost project represents the target system that will receive the data enriched by the Workflow Service.  The CustomerServiceRegWorkflow is the WF 4.0 project that has the Workflow and Mapping within it.  The CustomerMarketingServiceConsoleHost is an additional target service that the RegistrationRouting (an instance of the WCF 4.0 Routing Service) may invoke if the inbound message matches the filter.

On my machine, I have the Workflow Service and WCF 4.0 Routing Service hosted in IIS, but feel free to monkey around with the solution and hosting choices.  If you have any questions, don’t hesitate to ask.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Andy Cross (@andybareweb) described reducing iterative deployment time from 21 minutes to 5 seconds in his Using Web Deploy in Azure SDK 1.4.1 post of 4/16/2011:

The recently refreshed Azure SDK v1.4.1 now supports the in-place upgrade of running Web Role instances using Web Deploy. This post details how to use this new feature.

The SDK refresh is available for download here: Get Started

As announced by the Windows Azure team blog,

Until today, the iterative development process for hosted services has required you to re-package the application and upload it to Windows Azure using either the Management portal or the Service Management API. The Windows Azure SDK 1.4 refresh solves this problem by integrating with the Web Deployment Tool (Web Deploy).

A common complaint regarding Azure has been that roles take too long to start up, and since every update to your software requires a start-up cycle (provision, initialize, start up, run), the development process is slowed significantly. The Web Deploy tool can be used instead to upgrade a role in place – meaning the core of an application can be refreshed in an iterative development cycle without having to repackage the whole cloud project. This means a missing CSS link, validation element, code-behind element or other change can be made without having to terminate your Role Instance.

It is a useful streamlining of the Windows Azure deployment path, as it bypasses the lengthy Startup process. This must be considered when choosing between a full deployment and a partial Web Deploy, because startup tasks do not run when performing a Web Deploy. Any install task (for instance) has already been undertaken when the Role was initially created, and so does not execute again.

In this blog, I use the code from my previous post, Restricting Access to Azure by IP Address. I use that sample because it includes a startup task that adds a significant amount of time between the initial publish and the Role becoming available and responsive to browser requests. This delay is greater than the standard delay in starting an Azure Role because it installs a module into IIS. It is also this type of delay that the Web Deploy integration acts as a remedy for, as the startup tasks do not need to be repeated on a Web Deploy.

Firstly, one has to deploy the Cloud Project to Azure in the standard way. This is achieved by right clicking on the Cloud Project and selecting “Publish”.

Standard Publish

Standard Publish

On this screen, those familiar with SDK v1.4 and earlier will notice a new checkbox; initially greyed out, this allows Web Deploy to be enabled for your Azure role. We need to enable this, and the way to do so (as hinted by the screen label) is to first enable Remote Desktop access to the Azure role. Click the “Configure Remote Desktop Connections” link, check the “Enable Connections for all Roles” checkbox and fill out all the details required on this screen. Make sure you remember the password you enter, as you’ll need it later.

Set up Remote Desktop to the Azure Roles

Set up Remote Desktop to the Azure Roles

Once we have done this, the Enable Web Deploy checkbox is available for our use.

Check the Enable Web Deploy checkbox

Check the Enable Web Deploy checkbox

The warning triangle shown warns:

“Web Deploy uses an untrusted, self-signed certificate by default, which is not recommended for upload sensitive data. See help for more information about how to secure Web Deploy”.

For now I recommend you don’t upload sensitive data at all using the Web Deploy tool, and if in doubt, this should not be the preferred route for you to use.

Once you have enabled Web Deploy for the roles, click OK, and the deploy begins. This is the standard, full deploy that takes a while to provision an instance and instantiate it.

From my logs, this takes 21 minutes. The reason for this particularly long delay is that I have a new module being installed into IIS, which adds at least 8 to 9 minutes. Typically the deploy would take 10–15 minutes.

12:33:52 - Preparing...
12:33:52 - Connecting...
12:33:54 - Uploading...
12:35:08 - Creating...
12:36:04 - Starting...
12:36:46 - Initializing...
12:36:46 - Instance 0 of role IPRestricted is initializing
12:41:27 - Instance 0 of role IPRestricted is busy
12:54:12 - Instance 0 of role IPRestricted is ready
12:54:12 - Creating publish profiles in local web projects...
12:54:12 - Complete.
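As a sanity check, the phase durations can be recovered from the log timestamps above. This is a throwaway sketch of mine in Python, nothing Azure-specific:

```python
from datetime import datetime

def elapsed_minutes(start: str, end: str) -> float:
    """Minutes between two HH:MM:SS timestamps from the same deployment log."""
    fmt = "%H:%M:%S"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

# First and last lines of the log above.
total = elapsed_minutes("12:33:52", "12:54:12")
# Time spent between "initializing" and "ready" alone.
startup = elapsed_minutes("12:36:46", "12:54:12")
print(f"total: {total:.1f} min, startup: {startup:.1f} min")
```

Running this confirms that nearly all of the roughly 21-minute round trip is spent after the upload completes, in the provision/initialize/start phase that Web Deploy skips.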

This 21 minute delay is what Web Deploy is seeking to reduce. I will now access my role.

You are an ip address!

You are an IP address!

We will now make a trivial change to the MasterPage of the solution, changing “You are: an IP address” to “Your IP is: an IP address”. This change requires a recompilation of the ASPX Web Application, and so could be cosmetic like this change, or programmatic or referential. Some changes are not possible with Web Deploy – such as adding new Roles, changing startup tasks and changing ServiceDefinitions.

In order to update our code, right click on the Web Role APPLICATION Project in Visual Studio. This is different to the previous Publish, which was accessed by right clicking on the Cloud Project. Select “Publish”.

Web Deployment options

Web Deployment options

All of these options are completed for you, although you have to enter the Password that you entered when you set up the Remote Desktop connection to your Roles.

Clicking Publish builds and publishes your application to Azure. This happens for me in under 5 seconds.

========== Publish: 1 succeeded, 0 failed, 0 skipped ==========

That is that. It’s amazingly quick and I wasn’t even sure it was working initially :-)

Going to the same Service URL gives the following:

An updated web app!

An updated web app!

As you can see, the web application has been updated. We have gone from a deployment cycle of 21 minutes, to an iterative deployment cycle of 5 seconds. AMAZING.

One last thing to note, straight from the Windows Azure Team Blog regarding constraints on using Web Deploy:

Please note the following when using Web Deploy for interactive development of Windows Azure hosted services:

  • Web Deploy only works with a single role instance.
  • The tool is intended only for iterative development and testing scenarios
  • Web Deploy bypasses the package creation. Changes to the web pages are not durable. To preserve changes, you must package and deploy the service.

Happy clouding,


Neil MacKenzie described Windows Azure Traffic Manager in a 4/15/2011 post:

The CTP for the Windows Azure Traffic Manager was announced at Mix 2011, and a colorful hands-on lab was introduced in the Windows Azure Platform Training Kit (April 2011). The lab has also been added to the April 2011 refresh of the Windows Azure Platform Training Course – a good, but under-appreciated, resource. You can apply to join the CTP on the Beta Programs section of the Windows Azure Portal.

The Windows Azure Traffic Manager provides several methods of distributing internet traffic among two or more hosted services, all accessible with the same URL, in one or more Windows Azure datacenters. It uses a heartbeat to detect the availability of a hosted service. The Traffic Manager provides various ways of handling the lack of availability of a hosted service.


The Traffic Manager requests a heartbeat web page from the hosted service every 30 seconds. If it does not get a 200 OK response for this heartbeat three consecutive times, the Traffic Manager assumes that the hosted service is unavailable and takes it out of load-balancer rotation.  (In fact, on not getting a 200 OK, the Traffic Manager immediately issues another request – so failure actually requires three consecutive pairs of failed requests, 30 seconds apart.)
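That failure rule can be modeled as a tiny state machine. The sketch below is my own illustration in Python, not the actual Traffic Manager implementation; the class name and the reset-on-success behavior are my assumptions. Each `tick` stands for one 30-second interval, a failed probe is retried immediately, and three consecutive failed pairs pull the service from rotation.

```python
class HeartbeatMonitor:
    """Illustrative model of the rule described above: one probe every
    30 seconds, an immediate retry when the first probe fails, and
    removal from rotation after three consecutive failed pairs."""

    FAILED_PAIRS_LIMIT = 3

    def __init__(self, probe):
        self.probe = probe          # callable returning an HTTP status code
        self.failed_pairs = 0
        self.in_rotation = True

    def tick(self):
        """One 30-second heartbeat interval; returns rotation status."""
        if self.probe() == 200:
            self.failed_pairs = 0   # assumption: any success resets the count
        elif self.probe() == 200:   # immediate retry after a failed probe
            self.failed_pairs = 0
        else:
            self.failed_pairs += 1
            if self.failed_pairs >= self.FAILED_PAIRS_LIMIT:
                self.in_rotation = False
        return self.in_rotation
```

Under this model, only a sustained outage of roughly 90 seconds (three failed pairs in a row) removes a service; a single slow or dropped response costs nothing because of the immediate retry.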

Traffic Manager Policies

The Traffic Manager is configured at the subscription level on the Windows Azure Portal through the creation of one or more Traffic Manager policies. Each policy associates a load-balancing technique with two or more hosted services that are subject to the policy. A hosted service can be in more than one policy at the same time. Policies can be enabled and disabled.

The Traffic Manager supports various load balancing techniques for allocating traffic to hosted services.

  • failover
  • performance
  • round robin

With failover, all traffic is directed to a single hosted service. When the Traffic Manager detects that this hosted service is not available, it modifies DNS records and directs all traffic to the hosted service configured for failover. This failover hosted service can be in the same or another Windows Azure datacenter. Since it takes 90 seconds for the Traffic Manager to detect the failure, and a minute or two for DNS propagation, the service will be unavailable for a few minutes.

In a round robin configuration, the Traffic Manager uses a round-robin algorithm to distribute traffic equally among all hosted services configured in the policy. The Traffic Manager automatically removes from the load-balancer rotation any hosted service it detects as unavailable. The hosted services can be in one or more Windows Azure datacenters.

With performance, the Traffic Manager uses information collected about internet latency to direct traffic to the “closest” hosted service. This is useful only if the hosted services are in different Windows Azure datacenters.
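Stripped to their essentials, the three techniques are just endpoint-selection rules. Here is an illustrative sketch of mine in Python — the endpoint names are made up, and the real Traffic Manager applies these rules at the DNS level rather than per request:

```python
import itertools

def pick_endpoint(policy, endpoints, healthy, latency=None, rr_state=None):
    """Return the endpoint each policy would hand out.

    endpoints: ordered list of hosted-service names (first = primary
               for the failover policy); healthy: set of available ones;
    latency:   dict endpoint -> measured latency (performance policy);
    rr_state:  itertools.cycle over endpoints (round robin policy)."""
    candidates = [e for e in endpoints if e in healthy]
    if not candidates:
        return None                 # nothing available at all
    if policy == "failover":
        return candidates[0]        # primary first, then next in order
    if policy == "round_robin":
        while True:                 # skip endpoints out of rotation
            e = next(rr_state)
            if e in healthy:
                return e
    if policy == "performance":
        return min(candidates, key=lambda e: latency[e])
    raise ValueError(policy)

endpoints = ["primary-us", "backup-eu"]
rr = itertools.cycle(endpoints)
pick_endpoint("failover", endpoints, healthy={"backup-eu"})  # -> "backup-eu"
```

All three policies share the same health filter; they differ only in how they choose among the healthy candidates, which is why a hosted service can participate in several policies at once.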

Thoughts of Sorts

The Windows Azure Traffic Manager is really easy to use and works as advertised. The configuration is simple with a nice user experience. This is the type of feature that simplifies the task of developing scalable internet services. In particular, a lot of people ask about automated failover when they initially find out about distributed datacenters. And it is a feature I have seen a lot of hand waving over. It looks like the hand waving is about to end.

Tony Bailey (a.k.a. tbtechnet) asserted Speed and Support Matters in a 4/12/2011 post to the Windows Azure Platform, Web Hosting and Web Services blog:

Sure. You can put up a website on a hosted or on-premise environment. You can then spend hours a month patching, monitoring, configuring your server.

Or, sure. You can go cheap and put up a site in a low monthly cost environment.

Azure is not about that.

The Windows Azure platform is about low cost of development because you can build web services fast. Azure is about no hassle server management. Azure is about building scalable solutions to accommodate high demand when you need it and scale down when you don’t.

Content management systems from Umbraco, Kentico, SiteCore and Composite can enable you to build highly scalable web experiences on Windows Azure.

Check it out:

And with Microsoft, you get that other scale. The scale of a massive ecosystem of developers and all kinds of free technical support:

Sounds to me like a shot at VMware’s CloudFoundry. See the Other Cloud Computing Platforms and Services section below for more about CloudFoundry.

Tony Bailey (a.k.a. tbtechnet) posted Build at the Speed of Social: Facebook and Azure on 4/12/2011 to the Windows Azure Platform, Web Hosting and Web Services blog:

As I’ve always said, with Windows Azure and SQL Azure you can just concentrate on coding.

Let someone else worry about server provisioning, patching and configuring.

It now gets better for social applications.

Build Facebook apps quickly and on Windows Azure with less fuss.

Joe did it. So can you.

Need help? Get free technical support too:

<Return to section navigation list> 

Visual Studio LightSwitch

Matt Thalman described Invoking Tier-Specific Logic from Common Code in LightSwitch in a 4/12/2011 post (missed when posted):

Visual Studio LightSwitch makes use of .NET portable assemblies to allow developers to write business logic that can be executed on both the client (Silverlight) and server (.NET 4) tiers.  In LightSwitch terminology, we refer to the assembly that contains this shared logic as the Common assembly.  In this post, I’m going to describe a coding pattern that allows you to invoke code from the Common assembly that has different implementations depending on which tier the code is running on.

In my scenario, I have a Product entity which has an Image property and the image must be a specific dimension (200 x 200 pixels).  I would like to write validation code for the Image property to ensure that the image is indeed 200 x 200.  But since the validation code for the Image property is contained within the Common assembly, I do not have access to the image processing APIs that allow me to determine the image dimensions.

This problem can be solved by creating two tier-specific implementations of the image-processing logic and storing them in classes that derive from a common base class defined in the Common assembly.  During the initialization of the client and server applications, an instance of the tier-specific class is created and set as a static member available from the Common assembly.  The validation code in the Common assembly can then reference that base class to invoke the logic.  I realize that may sound confusing, so let’s take a look at how I would actually implement this.

This is the definition of my Product entity:


I now need to add some of my own class files to the LightSwitch project.  To do that, I switch the project to File View.


From the File View, I add a CodeBroker class to the Common project.


The CodeBroker class is intended to be the API to tier-specific logic.  Any code in the Common assembly that needs to execute logic which varies depending on which tier it is running in can use the CodeBroker class. Here is the implementation of CodeBroker:


public abstract class CodeBroker
{
    private static CodeBroker current;

    public static CodeBroker Current
    {
        get { return CodeBroker.current; }
        set { CodeBroker.current = value; }
    }

    public abstract void GetPixelWidthAndHeight(byte[] image, out int width, out int height);
}


Public MustInherit Class CodeBroker
    Private Shared m_current As CodeBroker

    Public Shared Property Current() As CodeBroker
        Get
            Return CodeBroker.m_current
        End Get
        Set(value As CodeBroker)
            CodeBroker.m_current = value
        End Set
    End Property

    Public MustOverride Sub GetPixelWidthAndHeight(image As Byte(), ByRef width As Integer, ByRef height As Integer)
End Class

I next add a ClientCodeBroker class to the Client project in the same way as I added the CodeBroker class to the Common project.  Here’s the implementation of ClientCodeBroker:


using Microsoft.LightSwitch.Threading;

namespace LightSwitchApplication
{
    public class ClientCodeBroker : CodeBroker
    {
        public override void GetPixelWidthAndHeight(byte[] image, out int width, out int height)
        {
            int bitmapWidth = 0;
            int bitmapHeight = 0;
            Dispatchers.Main.Invoke(() =>
            {
                var bitmap = new System.Windows.Media.Imaging.BitmapImage();
                bitmap.SetSource(new System.IO.MemoryStream(image));
                bitmapWidth = bitmap.PixelWidth;
                bitmapHeight = bitmap.PixelHeight;
            });
            width = bitmapWidth;
            height = bitmapHeight;
        }
    }
}


Imports Microsoft.LightSwitch.Threading

Namespace LightSwitchApplication
    Public Class ClientCodeBroker
        Inherits CodeBroker

        Public Overrides Sub GetPixelWidthAndHeight(image As Byte(), ByRef width As Integer, ByRef height As Integer)
            Dim bitmapWidth As Integer = 0
            Dim bitmapHeight As Integer = 0
            Dispatchers.Main.Invoke(
                Sub()
                    Dim bitmap = New Windows.Media.Imaging.BitmapImage()
                    bitmap.SetSource(New System.IO.MemoryStream(image))
                    bitmapWidth = bitmap.PixelWidth
                    bitmapHeight = bitmap.PixelHeight
                End Sub)
            width = bitmapWidth
            height = bitmapHeight
        End Sub
    End Class
End Namespace

(By default, my application always invokes this GetPixelWidthAndHeight method from the Logic dispatcher.  So the call to invoke the logic on the Main dispatcher is necessary because BitmapImage objects can only be created on the Main dispatcher.)

To include the server-side implementation, I add a ServerCodeBroker class to the Server project.  It’s also necessary to add the following assembly references in the Server project because of dependencies in my image code implementation: PresentationCore, WindowsBase, and System.Xaml.  Here is the implementation of ServerCodeBroker:


public class ServerCodeBroker : CodeBroker
{
    public override void GetPixelWidthAndHeight(byte[] image, out int width, out int height)
    {
        var bitmap = new System.Windows.Media.Imaging.BitmapImage();
        bitmap.BeginInit();
        bitmap.StreamSource = new System.IO.MemoryStream(image);
        bitmap.EndInit();
        width = bitmap.PixelWidth;
        height = bitmap.PixelHeight;
    }
}


Public Class ServerCodeBroker
    Inherits CodeBroker

    Public Overrides Sub GetPixelWidthAndHeight(image As Byte(), ByRef width As Integer, ByRef height As Integer)
        Dim bitmap = New System.Windows.Media.Imaging.BitmapImage()
        bitmap.BeginInit()
        bitmap.StreamSource = New System.IO.MemoryStream(image)
        bitmap.EndInit()
        width = bitmap.PixelWidth
        height = bitmap.PixelHeight
    End Sub
End Class

The next thing is to write the code that instantiates these broker classes.  This is done in the Application_Initialize method for both the client and server Application classes.  For the client Application code, I switch my project back to Logical View and choose “View Application Code (Client)” from the right-click context menu of the project.


In the generated code file, I then add the following initialization code:


public partial class Application
{
    partial void Application_Initialize()
    {
        CodeBroker.Current = new ClientCodeBroker();
    }
}

Public Class Application
    Private Sub Application_Initialize()
        CodeBroker.Current = New ClientCodeBroker()
    End Sub
End Class

This initializes the CodeBroker instance for the client tier when the client application starts.

I need to do the same thing for the server tier.  There is no context menu item available for editing the server application code but the code file can be added manually.  To do this, I switch my project back to File View and add an Application class to the Server project.


The implementation of this class is very similar to the client application class.  Since the server’s Application_Initialize method is invoked for each client request, I need to check whether the CodeBroker.Current property has already been set from a previous invocation.  Since the CodeBroker.Current property is static, its state remains in memory across multiple client requests.


public partial class Application
{
    partial void Application_Initialize()
    {
        if (CodeBroker.Current == null)
        {
            CodeBroker.Current = new ServerCodeBroker();
        }
    }
}


Public Class Application
    Private Sub Application_Initialize()
        If CodeBroker.Current Is Nothing Then
            CodeBroker.Current = New ServerCodeBroker()
        End If
    End Sub
End Class
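The null check is the whole trick: initialization is idempotent, so even if two early requests raced past the check, each would simply install a functionally identical broker. The same pattern in a language-neutral Python sketch (purely illustrative, names my own):

```python
class Broker:
    """Stand-in for CodeBroker: a class-level slot holds the singleton."""
    current = None

def initialize():
    """Idempotent, per-request initialization: only the first call
    (or a benign racing duplicate) actually creates the instance."""
    if Broker.current is None:
        Broker.current = Broker()
    return Broker.current

first = initialize()
second = initialize()   # a later request: the check skips re-creation
assert first is second
```

The same reasoning applies server-side in LightSwitch: because the static slot survives across client requests, repeated `Application_Initialize` calls after the first are no-ops.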

The next step is to finally add my Image property validation code.  To do this, I switch my project back to Logical View, open my Product entity, select my Image property in the designer, and choose “Image_Validate” from the Write Code drop-down button.


In the generated code file, I add this validation code:


public partial class Product
{
    partial void Image_Validate(EntityValidationResultsBuilder results)
    {
        if (this.Image == null)
        {
            return;
        }

        int width;
        int height;
        CodeBroker.Current.GetPixelWidthAndHeight(this.Image, out width, out height);
        if (width != 200 || height != 200)
        {
            results.AddPropertyError("Image dimensions must be 200x200.",
                this.Details.Properties.Image);
        }
    }
}


Public Class Product
    Private Sub Image_Validate(results As EntityValidationResultsBuilder)
        If Me.Image Is Nothing Then
            Return
        End If

        Dim width As Integer
        Dim height As Integer
        CodeBroker.Current.GetPixelWidthAndHeight(Me.Image, width, height)
        If width <> 200 OrElse height <> 200 Then
            results.AddPropertyError("Image dimensions must be 200x200.",
                Me.Details.Properties.Image)
        End If
    End Sub
End Class

This code can execute on both the client and the server.  When running on the client, CodeBroker.Current will return the ClientCodeBroker instance and provide the client-specific implementation for this logic.  And when running on the server, CodeBroker.Current will return the ServerCodeBroker instance and provide the server-specific implementation.

And there you have it.  This pattern allows you to write code that is invoked from the Common assembly but needs to vary depending on which tier is executing the logic.  I hope this helps you out in your LightSwitch development.

Michael Washington posted LightSwitch Procedural Operations: “Do This, Then Do That” to the LightSwitch Help blog on 4/11/2011:


LightSwitch is designed for “forms over data”: entering data into tables and retrieving that data. Procedural code is required when you need to manipulate data in “batches”. Normally you want to put all your custom code at the Entity (table) level, but for procedural code you want to use custom code at the Screen level.

The Inventory Program

To demonstrate procedural code in LightSwitch, we will consider a simple inventory management requirement. We want to specify sites and books, and we want to move those books, in batches, between the sites.


We enter sites into a table.


Each book is entered into the database.


We create one record for each book, and assign it to a site (yes a better interface to do this could be created using the techniques explained later).

Inventory Management (non-programming version)


Even procedural programming is simply moving data around. If you are not a programmer, you can simply make a screen that allows a user to change the site for each book. The problem with this approach is that it would be time-consuming to transfer a large number of books.

Inventory Management (using programming)


Using a little programming, we can create a screen that allows us to update the book locations in batches.


First we add a Screen.


We give the Screen a name, we select New Data Screen, and we do not select any Screen Data.


We Add Data Item.


We add a Property that will allow us to select a book.


It will show in the Screen designer.


We right-click on the top group, and select Add Group.


We are then able to add Selected Book to the screen as a drop down.


If we run the application at this point, we see that it allows us to select a book.


We add the various elements to the Screen.


Here is the code we use for the Get Existing Inventory button:

partial void GetExistingInventory_Execute()
{
    if (SiteFrom != null)
    {
        SiteOneBooks = SiteFrom.BookInventories.Where(x => x.Book == SelectedBook).Count();
    }
    if (SiteTo != null)
    {
        SiteTwoBooks = SiteTo.BookInventories.Where(x => x.Book == SelectedBook).Count();
    }
}


Here is the code for the Transfer Inventory button:

partial void TransfeerInventory_Execute()
{
    // Ensure that we have all needed values
    if ((SiteFrom != null) && (SiteTo != null) && (BooksToTransfeer > 0))
    {
        // Get the Books from SiteFrom
        // Loop based on the value entered in BooksToTransfeer
        for (int i = 0; i < BooksToTransfeer; i++)
        {
            // Get a Book from SiteFrom that matches the selected Book
            var BookFromSiteOne = (from objBook in SiteFrom.BookInventories
                                   where objBook.Book.BookName == SelectedBook.BookName
                                   select objBook).FirstOrDefault();
            // If we found a book at SiteFrom
            if (BookFromSiteOne != null)
            {
                // Transfer the Book to SiteTo
                BookFromSiteOne.Site = SiteTo;
            }
        }
        // Save changes on the ApplicationData data source
        this.DataWorkspace.ApplicationData.SaveChanges();
        // Set BooksToTransfeer to 0
        BooksToTransfeer = 0;
        // Refresh values
        GetExistingInventory_Execute();
    }
}

The download for the complete project is here:

The RSSBus Team reported that you can Connect to QuickBooks From LightSwitch Applications in a 4/11/2011 article with downloadable sample code:

The RSSBus Data Providers for ADO.NET include out-of-the-box support for Microsoft Visual Studio LightSwitch. In this demo, we will show you how to create a LightSwitch application that connects with QuickBooks and displays QuickBooks account data through the QuickBooks ADO.NET Data Provider. The same procedure outlined below can be used with any of the RSSBus ADO.NET Data Providers to integrate data with LightSwitch applications.

  • Step 1: Open Visual Studio and create a new LightSwitch Project.
  • Step 2: Add a new Data Source of type "Database".
  • Step 3: Change the data source to an RSSBus QuickBooks Data Source.
  • Step 4: Enter the connection details for your QuickBooks machine. In this example, we are using a separate machine to host QuickBooks on.


  • Step 5: Once you have your connection details set, select the tables and views you would like to add to the project.
  • Step 6: Now add a screen to the project. While adding the screen, select which table or view to associate with the screen. In our example we are using a Grid screen and the Accounts table.


  • Step 7: Run the LightSwitch application. The screen will automatically execute and populate with data.


LightSwitch Sample Project

To help you get started using the QuickBooks Data Provider within Visual Studio LightSwitch, download the fully functional sample project. You will also need the QuickBooks ADO.NET Data Provider to make the connection. You can download a free trial here.

Note: Before running the demo, you will need to change your connection details to fit your environment as detailed in Step 4.

<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Vishwas Lele (@vlele) cast his vote for Windows Azure in his Platform as a Service – When it comes to the cloud, PaaS is the point post of 12/10/2010 (missed when posted):

Today there is a lot of talk about the different cloud-based services, including Infrastructure as a Service (IaaS), Software as a Service (SaaS), Platform as a Service (PaaS) and so on. While each of the aforementioned services has its unique place, in my opinion Platform as a Service (PaaS) stands out in this mix. This is not to suggest that PaaS is somehow better than, say, IaaS. This would be an improper comparison. In fact, as shown in the diagram below, PaaS builds on the capabilities offered by IaaS.

Diagram 1

So here is my point: if you are a developer, IT shop or an ISV responsible for building, deploying and maintaining solutions, leveraging PaaS is where you reap the maximum benefits cloud computing has to offer. PaaS providers offer a comprehensive application development, runtime and hosting environment for cloud-enabled applications. PaaS simplifies the IT aspects of creating software-as-a-service applications by offering a low cost of entry, easier maintenance, scalability and fault tolerance, enabling companies to focus on their business expertise. This is why PaaS is seen as a game-changer in the IT world, fueling innovation from the larger players including Microsoft (Windows Azure Platform), Google (Google App Engine) and SalesForce (, as well as smaller players such as Bungee Labs and Heroku.

APPIRIO recently conducted a State of the Public Cloud survey. What was interesting about this survey is that it focused on companies (150+ mid-to-large companies in North America) that have already adopted at least one cloud application. The survey found that 68% of these respondents planned to have a majority of their applications in the public cloud in three years. There are two concepts to note in the key survey finding – applications and public cloud. Let us look at each of these concepts and how they relate to PaaS:

  • Public Cloud – As we will see shortly, in order to provide economies of scale and elasticity of resources at a price that is attractive to small and medium businesses, PaaS providers need to maintain a massive infrastructure. The cost of setting up and maintaining such an infrastructure can only be justified if there are a large number of tenants. So it is no coincidence that almost all the major PaaS providers (including Windows Azure Platform) are based on the public cloud.
  • Applications – The survey respondents also indicated that they are looking to move their applications to the cloud. This is different from taking their existing on-premise servers and moving them to a public (IaaS) or a private cloud. Doing so would not allow parts of the application to take advantage of elastic resources available in the cloud. Moving the server to IaaS platform has benefits in the form of availability of elastic virtualization. However, the benefits are limited because the various parts of the application (UI tier, business tier, etc.) cannot be individually scaled. This is why it is better to migrate the application to the cloud. In order to facilitate this migration, PaaS platforms offer application building blocks that are symmetrical to the ones available on-premise. In other words, when building cloud applications, developers can still use the same development tools and programming constructs as the ones they use when building on-premise applications.
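To make the second point concrete, here is a back-of-the-envelope comparison, with an invented flat price per instance-hour and made-up tier counts (my own illustration, not Azure pricing):

```python
PRICE_PER_INSTANCE_HOUR = 0.12   # hypothetical flat rate, not a real price

def monolith_cost(load_factor):
    """IaaS-style scaling: the whole four-tier image is cloned, so every
    tier scales together even if only one tier is actually busy."""
    instances = 4 * load_factor      # four tiers come along with each clone
    return instances * PRICE_PER_INSTANCE_HOUR

def paas_cost(ui, business, data, cache):
    """PaaS-style scaling: each tier gets only the instances it needs."""
    return (ui + business + data + cache) * PRICE_PER_INSTANCE_HOUR

# Tripling UI capacity: clone the whole stack vs. scale one tier.
print(monolith_cost(3))          # twelve instances' worth of spend
print(paas_cost(3, 1, 1, 1))     # six instances' worth of spend
```

Even with these toy numbers, scaling only the busy tier costs half as much as cloning the whole image, and the gap widens as the tiers diverge in load.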

The diagram below illustrates this concept in the context of a traditional four-tier ASP.NET-based web application. This application is deployed as a single virtual machine image within the on-premise data center. Moving this image to IaaS is certainly useful. It opens up the possibility of taking advantage of a shared pool of infrastructure resources. However, it is the PaaS platform (the Windows Azure Platform in this instance) that allows each tier to scale independently. And it is not just about the elasticity in scaling (perhaps your application does not have an immediate need for it). Mapping parts of your application to a set of pre-built application services (such as AppFabric Caching, AppFabric storage, etc.) can also improve fault tolerance and lower maintenance costs.


As you can see PaaS providers are well-suited to support the two aforementioned concepts (public cloud and applications). This is why PaaS is increasingly seen as such an important development going forward.

Let us take a deeper look at the benefits of PaaS with the aid of some concrete examples. Since I have hands-on experience with the Windows Azure Platform, I will reference its features to make the case for PaaS.

Server VM, Network, Storage

Even though organizations are benefiting from the advances in virtualization technology (live migration, virtualization-assist processors, etc.), the overall management experience is not as seamless as they would like. These organizations have to continue to worry about creating and maintaining virtual machine images, and configuring the necessary storage and network, before they can get to building and deploying their applications.

With PaaS, virtual machines, storage and networking come pre-configured by the provider. Furthermore, PaaS providers monitor the virtual machines for failures and initiate auto-recovery when needed.

As far as resources such as storage, compute and network are concerned, PaaS-based applications can simply acquire them as needed and pay only for what they use.

It is helpful to see the series of steps that the Windows Azure platform undertakes in order to meet the needs of an application:

  1. First, the application developer uploads the code (binaries) and resource requirements (number of web and middle-tier nodes, hardware, memory, firewall settings, etc.);
  2. Based on the resource requirements, compute and network resources are then allocated appropriately (refer to the diagram below). Windows Azure will attempt to allocate the requested number of web-tier nodes based on the resources that are available. Furthermore, it will try to place the nodes across different racks to improve fault tolerance.
  3. Then, Windows Azure creates appropriate virtual machine images by placing application specific code on top of base images and starts the virtual machines;
  4. It then assigns dynamic IP (DIP) addresses to the machines;
  5. Virtual IP addresses are allocated and mapped to DIPs; finally,
  6. It sets up the load balancer to route incoming client traffic appropriately.


The above diagram depicts a snapshot in time of available resources within a Windows Azure datacenter. Little blue boxes denote slots that are available. Windows Azure uses this information to dynamically allocate resources.

As you can see from the above steps, Windows Azure intelligently pulls together resources to create a custom setup that meets application-specific needs (applications can operate under the impression that there is a limitless supply of resources). Note also that throughout the steps above, application developers were not expected to set up the OS, log into the machines directly, or worry about IP addresses, routers and storage. Instead, the application developers are free to focus on implementing the business logic.
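The allocation step described above, spreading the requested web-tier nodes across racks for fault tolerance, can be sketched like this. This is toy logic for illustration only, not the actual Windows Azure fabric controller; the slot counts are invented.

```python
# Illustrative sketch: allocate `requested` nodes round-robin across racks so
# that a single rack failure takes down as few of them as possible.
from collections import defaultdict
from itertools import cycle

def allocate_nodes(free_slots_by_rack, requested):
    """free_slots_by_rack: dict rack_id -> free slots.
    Returns dict rack_id -> nodes allocated, or None if capacity is short."""
    if sum(free_slots_by_rack.values()) < requested:
        return None  # not enough capacity in this datacenter snapshot
    allocation = defaultdict(int)
    racks = cycle(sorted(free_slots_by_rack))
    while requested > 0:
        rack = next(racks)
        if allocation[rack] < free_slots_by_rack[rack]:
            allocation[rack] += 1
            requested -= 1
    return dict(allocation)

print(allocate_nodes({"rack-1": 4, "rack-2": 4, "rack-3": 4}, 6))
# -> {'rack-1': 2, 'rack-2': 2, 'rack-3': 2}
```

Six requested nodes land two per rack, so losing any one rack leaves two thirds of the web tier running.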

Patch, Service Release, New Version

Whether they are utilizing virtualization or not, organizations have to deal with changes in the form of patches, OS upgrades, etc. This is commonly the case even if the servers are placed with a “hoster”. As we saw earlier, PaaS providers are responsible for providing the OS image. Since they provide the OS image in the first place, they are also responsible for keeping it up to date.

It is helpful to see how the Windows Azure platform performs OS upgrades:

  • Windows Azure team applies patches once each month.
  • In order to ensure that upgrades are performed without violating the SLA, Windows Azure organizes the nodes that make up an application into virtual groupings called upgrade domains. Windows Azure upgrades one domain at a time – stopping all the nodes in it, applying the necessary patches and starting them back up.
  • By stopping only the nodes running within the current upgrade domain, Windows Azure ensures that an upgrade takes place with the least possible impact to the running service.
  • The underlying virtual machine image is not changed in this process thereby preserving any cached data.

As a case in point, consider the recent publicly disclosed vulnerability in ASP.NET. ASP.NET-based applications hosted on Windows Azure had the vulnerability patched for them when the Guest OS was automatically upgraded from version 1.6 to version 1.7.
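The upgrade-domain behavior described above can be sketched as follows. This is assumed logic for illustration, not the actual fabric controller; the point is simply that only one domain's worth of nodes is ever down at once.

```python
# Sketch of the upgrade-domain idea: nodes are partitioned into virtual
# domains, and one domain at a time is stopped, patched, and restarted,
# so the remaining domains keep serving traffic throughout.

def rolling_upgrade(nodes, domain_count, apply_patch):
    """Patch every node, one upgrade domain at a time; returns the largest
    fraction of nodes that was down at any moment."""
    domains = [nodes[i::domain_count] for i in range(domain_count)]
    max_down = 0.0
    for domain in domains:
        # stop, patch, and restart only this domain's nodes
        for node in domain:
            apply_patch(node)
        max_down = max(max_down, len(domain) / len(nodes))
    return max_down

patched = []
frac = rolling_upgrade(["n1", "n2", "n3", "n4", "n5", "n6"], 3, patched.append)
print(sorted(patched), frac)  # all six nodes patched; at most 1/3 down at once
```

With three upgrade domains, an application with six nodes never has more than a third of its capacity offline during patching.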

The Windows Azure platform recently introduced the notion of the Virtual Machine (VM) role. As part of using the VM Role, users are allowed to bring their own custom virtual machine image to the cloud, instead of using an image provided by Windows Azure. This is a powerful capability that allows users to control how the virtual machine image is set up. In many ways this is akin to the capability offered by IaaS platforms. But with great power comes great responsibility: customers using the VM Role are expected to set up and maintain the OS themselves, and Windows Azure does not automatically understand the health of applications running in a custom VM image.

No Assembly Required

Distributed applications by their very nature have a lot of moving parts and consequently a lot of steps are required to assemble the various components. For instance, assembly steps include installing and configuring caching components, access control components, database-tier components and so on.

PaaS greatly simplifies the assembly by providing many of these application components as ready-made services.

It is helpful to consider a few examples of application building block services offered by the Windows Azure platform:

Azure AppFabric Caching – This is a distributed in-memory cache for applications running in Windows Azure. Application developers do not need to bother with configuration, deployment or management of their cache. Applications simply provision the cache based on their needs (with the ability to dynamically adjust capacity), and only pay for what they use.
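The usage pattern such a cache enables is the classic cache-aside lookup. The client API below is invented for illustration (the real AppFabric Caching client is a .NET library with a different surface); only the shape of the pattern is the point.

```python
# Generic cache-aside sketch: check the distributed cache first, fall back to
# the database on a miss, and populate the cache for subsequent requests.
import time

class Cache:
    """Stand-in for a distributed cache client (invented API)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]
        return None
    def put(self, key, value, ttl_seconds=60):
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def load_task_summary(cache, user_id, query_database):
    cached = cache.get(f"task-summary:{user_id}")
    if cached is not None:
        return cached  # cache hit: no database round-trip
    summary = query_database(user_id)
    cache.put(f"task-summary:{user_id}", summary)
    return summary

summary = load_task_summary(Cache(), "alice", lambda user: [user, "task-1"])
print(summary)  # ['alice', 'task-1']
```

The second request for the same user is served entirely from the cache, which is exactly the point of provisioning a cache tier instead of hitting the database on every page refresh.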

AppFabric Access Control Service – This service simplifies access control for applications hosted in Windows Azure. Instead of having to handle different access control schemes (OAuth, OpenID, etc.), Windows Azure-based applications can integrate with AppFabric Access Control. The AppFabric Access Control service in turn brokers the integration with the various access control schemes, greatly simplifying the task of application developers.
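The brokering idea can be sketched in miniature: the application trusts one normalized claim format, and the broker maps tokens from multiple schemes onto it. All names and token shapes below are invented for illustration; the real service issues signed tokens and the real protocols are considerably more involved.

```python
# Toy sketch of access-control brokering: many identity schemes in,
# one normalized claim set out, so the application codes against one format.

def broker_token(raw_token, scheme):
    validators = {
        "openid": lambda t: t["claimed_id"],
        "oauth": lambda t: t["user_id"],
    }
    if scheme not in validators:
        raise ValueError(f"unsupported scheme: {scheme}")
    subject = validators[scheme](raw_token)
    # The application only ever sees this one normalized claim set,
    # regardless of which identity provider the user came from.
    return {"subject": subject, "issuer": "acs-sketch"}

print(broker_token({"user_id": "42"}, "oauth"))  # {'subject': '42', 'issuer': 'acs-sketch'}
```

Adding support for a new identity provider becomes a change to the broker, not to every application that relies on it.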

SQL Azure Service – This is a cloud-based relational database service available to applications hosted on Windows Azure. Application developers do not have to install, set up, patch or manage any software. High availability and fault tolerance are built in, and no physical administration is required.

There are two noteworthy things about the examples above. First, each of the aforementioned services has a corresponding on-premise equivalent. This is what makes it possible for developers to migrate their applications to the cloud. Second, even though these services are functionally and semantically equivalent to their on-premise counterparts, they exhibit the core characteristics of the cloud, namely elasticity and fault tolerance. To be specific, let us consider the SQL Azure service. SQL Azure supports most of the T-SQL functionality available on SQL Server running on-premise. However, SQL Azure has built-in fault tolerance (it keeps multiple copies of the database in the background) and elasticity (databases can be scaled out seamlessly). Additionally, there is no need to go through the typical setup steps such as installing the product, applying patches and setting up backup and recovery. Setting up a SQL Azure database is literally a matter of issuing a “create database” command.


In this post, we saw how PaaS providers offer a comprehensive application development, runtime and hosting environment for cloud-enabled applications. This greatly simplifies the IT aspects of creating software-as-a-service applications, enabling companies to focus on their business expertise.

Added at Vishwas’ request of 4/15/2011. See his Windows Azure Boot Camp at MIX post of 4/11/2011 in the Cloud Computing Events section below.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Phil Wainwright added a Private cloud discredited, part 2 sequel to ZDNet’s Software as Services blog on 4/13/2011:

I wrote part 1 of this post last October, highlighting a Microsoft white paper that convincingly established the economic case for multi-tenant, public clouds over single-enterprise, private infrastructures. Part 2 would wait, I wrote then, for “the other shoe still waiting to drop … a complete rebuttal of all the arguments over security, reliability and control that are made to justify private cloud initiatives. The dreadful fragility and brittleness of the private cloud model has yet to be fully exposed.”

The other shoe dropped last month, and from an unexpected direction. Rather than an analyst survey or research finding, it came in a firestorm of tweets and two blog posts by a pair of respected enterprise IT folk. One of them is Adrian Cockcroft, Cloud Architect for Netflix, a passionate adopter of public cloud infrastructure. The other is Christian Reilly, who engineers global systems at a large multinational and had been a passionate advocate of private cloud on his personal blog and Twitter stream until what proved to be a revelatory visit to Netflix HQ:

“The subsequent resignation of my self imposed title of President of The Private Cloud was really nothing more than a frustrated exhalation of four years of hard work (yes, it took us that long to build our private cloud).”

Taken together, the coalface testimony of these two enterprise cloud pioneers provides the evidence I’d been waiting for to declare private cloud comprehensively discredited — not only economically but now also strategically. There will still be plenty of private cloud about, but no one will be boasting about it any more.

As both these individuals make clear, the case for private cloud is based on organizational politics, not technology. The pace of migration to the public cloud is dictated solely by the art of the humanly possible. In Cockcroft’s words, “There is no technical reason for private cloud to exist.” Or as Reilly put it, “it can bring efficiencies and value in areas where you can absolutely NOT get the stakeholder alignment and buy in that you need to deal with the $, FUD and internal politics that are barriers to public cloud.”

Cockcroft’s post systematically demolishes the arguments against public cloud:

  • Too risky? “The bigger risk for Netflix was that we wouldn’t scale and have the agility to compete.”
  • Not secure? “This is just FUD. The enterprise vendors … are sowing this fear, uncertainty and doubt in their customer base to slow down adoption of public clouds.”
  • Loss of control? “What does it cost to build a private cloud, and how long does it take, and how many consultants and top tier ITops staff do you have to hire? … allocate that money to the development organization, hire more developers and rewrite your legacy apps to run on the public cloud.”

Then he adds his killer punch:

“The train wrecks will come as ITops discover that it’s much harder and more expensive than they thought, and takes a lot longer than expected to build a private cloud. Meanwhile their developer organization won’t be waiting for them.”

But it’s Reilly who adds the devastating coup de grace for private cloud:

“Building the private cloud that is devoid of any plan or funding to make architectural changes to today’s enterprise applications does not provide us any tangible transitional advantage, nor does it position our organization to make a move to public cloud.”

In a nutshell, an enterprise that builds a private cloud will spend more, achieve less and increase its risk exposure, while progressing no further along the path towards building a cloud applications infrastructure. It’s a damning indictment of the private cloud model from two top enterprise cloud architects who have practical, hands-on experience that informs what they’re saying. Their message is that private cloud is a diversion and a distraction from the task of embracing cloud computing in the enterprise. It can only make sense as a temporary staging post in the context of a systematically planned transition to public cloud infrastructure.

[UPDATED 03:45am April 14: I have made small amendments to Christian Reilly and Adrian Cockcroft's job descriptions at their request.  Reilly also commented via Twitter that this post "doesn't really capture the spirit my blog was written in," which I completely accept. He went on: "the point of the blog was to highlight the differences in how enterprises approach cloud versus orgs who build their business on it." Read his original post.]

Kenon Owens posted System Center Virtual Machine Manager 2012 Beta Direct from the Product Team on 4/12/2011:

On March 22nd at MMS 2011, we announced the Beta for System Center Virtual Machine Manager 2012. Our customers are telling us it is about the “Service”. They want to better manage their infrastructure while providing more levels of self-service and faster time to implementation. With VMM 2012 our customers will be able to provide all of this.

But VMM 2012 is different from VMM 2008 R2 SP1, and it can be a challenge to understand all of the new capabilities. With that, our VMM Program Management Team has created a series of blogs that provide more depth and detail into the new features and scenarios that VMM enables.

Check out the VMM blog and the new posts detailing all of the new capabilities. Some of these have already been posted, and there are plans for many more. I wanted to point you to these and let you know our plans for some future posts. Check out those that are available now:

And more of the themes coming in the near future:

  • VMM 2012 OOB and OSD
  • VMM 2012 Provision a Cluster
  • VMM 2012 Private Cloud - Create a Private Cloud/Delegate a Private Cloud
  • VMM 2012 PRO/Monitoring/Reporting
  • VMM 2012 Xen Support
  • VMM 2012 Dynamic Optimization
  • VMM 2012 Update Management - Hyper-V Orchestrated Cluster Patching
  • VMM 2012 Library - Creating and Using Resource Groups
  • VMM 2012 Services Overview
  • VMM 2012 Creating Virtual Application Packages with Microsoft Virtualization Sequencer
  • VMM 2012 Service Templates
  • VMM 2012 Service Deployment
  • VMM 2012 Service Servicing
  • VMM 2012 Service Template Import/Export
  • VMM 2012 Service Deployment - Troubleshooting Tips
  • VMM 2012 Service Scale In/Scale Out

Thank you to the Program Management Team for giving us this detail and insight into our new Beta. Also, check out the Server Application Virtualization blog, which explains Server App-V in more depth, direct from the Server App-V team.

Finally, if you are interested in a more guided evaluation, check out the Virtual Machine Manager 2012 Community Evaluation Program beginning May 26, and go here to fill out the application.

Enjoy the System Center Virtual Machine Manager 2012 Beta, and please let us know how it is going.

David Greschler asserted Windows Server Hyper-V and System Center Raise the Stakes in the Virtualization Race in a 4/13/2011 post to the Systems Center blog:

It is great to see InfoWorld acknowledge the significant progress we’ve made with Windows Server 2008 R2 SP1 Hyper-V (“Virtualization shoot-out: Citrix, Microsoft, Red Hat, and VMware”). We’re excited that the reviewer recognizes what our customers and respected industry analysts have been telling us for a while now: Hyper-V is ready to “give VMware a run for its money.”

This recognition comes on the heels of the Enterprise Strategy Group’s (ESG) report on Hyper-V R2 SP1 running key Microsoft workloads. ESG tested and verified industry-leading results that showed that single servers virtualized with Hyper-V R2 SP1 scaled to meet the IO performance requirements of 20,000 Exchange 2010 mailboxes, over 460,000 concurrent SharePoint 2010 users, and 80,000 simulated OLTP SQL Server users. InfoWorld’s results and ESG’s testing leave no doubt that Hyper-V is an enterprise-class hypervisor.

There are areas, of course, where I might quibble with the reviewer’s assessment. One such area is management. We believe that Microsoft has a key differentiation point in the management capabilities built into our System Center suite.

Just this week, IDC noted that the virtualization battleground will be won with management tools: “Looking ahead, the most successful vendors in the virtualization market will be those that can automate the management of an ever-escalating installed base of virtual machines as well as provide a platform for long-term innovation.” (They also state that the year over year growth of Hyper-V is almost three times that of VMware.)

This battleground is where Microsoft stands out, with System Center’s unique ability to provide deep insight into the applications running within the virtual machines (VMs), to manage heterogeneous virtualized environments, and to serve as a strong on-ramp to private cloud computing. Unlike the solutions of all other virtualization vendors, Microsoft’s management solution can manage not only the virtualization infrastructure but the actual applications and services that run inside the virtual machines. This is key to leveraging the capabilities of virtualization and the private cloud – it’s the apps that really matter at the end of the day.   

Of course, a management solution has to see all your assets to manage them. As InfoWorld and many others are starting to acknowledge, the days of a monolithic virtualization solution are over. That is why, three years ago, Microsoft added VMware management to System Center. This allowed one management infrastructure to manage all of the assets in IT, from physical to virtual, Microsoft to VMware, Windows to Linux. And with System Center 2012, we’ll extend that capability by enhancing our support for VMware and adding support for Citrix XenServer.

Virtualization is a major on-ramp to private cloud computing. As companies begin the shift to private cloud, they recognize that applications are the key services that the cloud delivers. Our customers—you—are telling us that the private cloud needs a new level of automation and management, beyond what traditional virtualization management offers. Last month at the Microsoft Management Summit, Brad Anderson talked about the advancements we’re building into System Center 2012 that will deliver against those needs.

And lastly, there is the issue of price. For the base virtualization layer, VMware’s solution is over three times the cost of the Microsoft solution. That’s a significant cost given the parity in performance and features that Hyper-V provides. But when you factor in management and the private cloud, the delta becomes even more pronounced. VMware’s new cloud and management offerings are all priced on a per-VM basis, unlike Microsoft’s, which are priced on a per-server basis. This means that the cost of the VMware solution will increase as you grow your private cloud – something you should take into account now.

I strongly encourage you to look into all that Microsoft has to offer in Virtualization and Private Cloud – and I’ll continue to discuss this theme in future posts. 

Jeff Woolsey announced MICROSOFT HYPER-V SERVER 2008 R2 SP1 RELEASED! on 4/12/2011:

The good news just keeps coming and we’re pleased to keep the momentum rolling with the latest release of our rock stable, feature rich, standalone Microsoft Hyper-V Server 2008 R2 with Service Pack 1! For those who need a refresher on Microsoft Hyper-V Server 2008 R2, it includes key features based on customer feedback such as:

  • Live Migration

  • High Availability with Failover Clustering

  • Cluster Shared Volumes

  • 10 Gb/E Ready

  • Processor Compatibility Mode

  • Enhanced Scalability

  • …and much more.

For more info, see the Microsoft Hyper-V Server 2008 R2 documentation. Service Pack 1 for Hyper-V Server 2008 R2 includes all the rollup fixes released since Microsoft Hyper-V Server 2008 R2 and adds two new features that greatly enhance VDI scenarios:

  • Dynamic Memory

  • RemoteFX

After installing the update, both Dynamic Memory and RemoteFX will be available to Hyper-V Server. These new features can be managed in a number of ways:

Dynamic memory is an enhancement to Hyper-V R2 which pools all the memory available on a physical host and dynamically distributes it to virtual machines running on that host as necessary. That means based on changes in workload, virtual machines will be able to receive new memory allocations without a service interruption through Dynamic Memory Balancing. In short, Dynamic Memory is exactly what it’s named. If you’d like to know more, I've included numerous links on Dynamic Memory below.
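The pooling-and-redistribution idea behind Dynamic Memory can be pictured with a toy balancer. This is invented logic for illustration only, not Hyper-V's actual algorithm; the minimum, host size, and demand figures are placeholders.

```python
# Toy model of dynamic memory balancing: give every VM a guaranteed minimum,
# then split the host's remaining memory in proportion to current demand.

def balance_memory(host_total_mb, demands_mb, minimum_mb=512):
    """demands_mb: dict vm_name -> current memory demand in MB.
    Returns dict vm_name -> allocated MB; allocations never exceed the host."""
    vms = list(demands_mb)
    spare = host_total_mb - minimum_mb * len(vms)
    total_demand = sum(demands_mb.values()) or 1
    return {vm: minimum_mb + spare * demands_mb[vm] // total_demand for vm in vms}

alloc = balance_memory(16384, {"vm1": 1000, "vm2": 3000})
print(alloc)  # {'vm1': 4352, 'vm2': 12032}
```

When vm2's demand spikes, rerunning the balancer shifts memory toward it without restarting either VM, which is the essence of Dynamic Memory Balancing described above.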

Configuring RemoteFX with Microsoft Hyper-V Server 2008 R2 SP1

Although using Dynamic Memory does not require any additional server-side configuration beyond installing the R2 SP1 update, enabling RemoteFX does require some additional configuration on the host. The exact steps for enabling RemoteFX are detailed below:

1) Verify the host machine meets the minimum hardware requirements for RemoteFX. 

2) Verify the host has the latest 3D graphics card drivers installed before enabling RemoteFX.

3) Enable the RemoteFX feature using the following command line:

Dism.exe /online /enable-feature /featurename:VmHostAgent

4) From a remote machine running the full version of Windows Server 2008 R2 SP1 or a client OS running the latest version of RSAT, connect to the Hyper-V Server machine, create a Windows 7 SP1 virtual machine and, under “Add Hardware”, select “RemoteFX 3D Video Adapter”. Select “Add”.

If the “RemoteFX 3D Video Adapter” option is greyed out, it is usually because RemoteFX is not enabled or the 3D video card drivers have not been installed on the host yet. Before attaching the RemoteFX adapter, make sure to set user access permissions, note the computer name and enable Remote Desktop within the VM first. When the RemoteFX 3D video adapter is attached to the VM, you will no longer be able to connect to the VM local console via the Hyper-V Manager Remote Connection.  You will only be able to connect to the VM via a Remote Desktop connection.  Remove the RemoteFX adapter if you ever need to use the Hyper-V Manager Remote Connection.

How much does Microsoft Hyper-V Server 2008 R2 SP1 cost? Where can I get it?

Microsoft Hyper-V Server 2008 R2 SP1 is free and we hope you enjoy it! Here’s the download link: Microsoft Hyper-V Server 2008 R2 SP1.

Here are the links to a six-part series titled Dynamic Memory Coming to Hyper-V and an article detailing 40% greater virtual machine density with DM.

Part 1: Dynamic Memory announcement. This blog announces the new Hyper-V Dynamic Memory in Hyper-V R2 SP1. It also discussed the explicit requirements that we received from our customers.

Part 2: Capacity Planning from a Memory Standpoint. This blog discusses the difficulties behind the deceptively simple question, “how much memory does this workload require?” Examines what issues our customers face with regard to memory capacity planning and why.

Part 3: Page Sharing. A deep dive into the importance of the TLB, large memory pages, how page sharing works, SuperFetch and more. If you’re looking for the reasons why we haven’t invested in Page Sharing this is the blog.

Part 4: Page Sharing Follow-Up. Questions answered about Page Sharing and ASLR and other factors to its efficacy.

Part 5: Second Level Paging. What it is, why you really want to avoid this in a virtualized environment and the performance impact it can have.

Part 6: Hyper-V Dynamic Memory. What it is, what each of the per virtual machine settings do in depth and how this all ties together with our customer requirements.

Hyper-V Dynamic Memory Density. An in depth test of Hyper-V Dynamic Memory easily achieving 40% greater density.

<Return to section navigation list> 

Cloud Security and Governance


No significant articles today.

<Return to section navigation list> 

Cloud Computing Events

Vishwas Lele (@vlele) described how to “‘Azure-enable’ an existing on-premise MVC 3 based web application” in his Windows Azure Boot Camp at MIX post of 4/11/2011 (missed when posted):

Later today I will be presenting the “Cloud computing with the Windows Azure Platform” boot camp at MIX. My session comprises three sections:

  • Overview of Cloud Computing
    Introduction to Windows Azure Platform
  • Review an on-premise MVC3 app
    Move the on-premise app to Windows Azure
  • Fault Tolerance and Diagnostics
    Tips & Tricks – Building Scalable Apps in the cloud

As part of the second session, I will go over the steps to “Azure-enable” an existing on-premise MVC 3 based web application. This blog is an attempt to capture these steps for the benefit of the attendees and other readers of this blog.

Before we begin, let me state the key objectives of this exercise:

    • Demonstrate that apps built using currently shipping, on-premise technologies (.NET 4.0 / VS 2010) can easily be moved to the Windows Azure Platform. (Of course, the previous version of .NET (3.5 SP1) is supported as well.)
    • Understand subtle differences in code between running on-premise and on the Windows Azure platform.
    • Reduce the number of moving parts within the application by “outsourcing” a number of functions to Azure based services.
    • Refactor the application to take advantage of the elasticity and scalability characteristics, inherent in a cloud platform such as Windows Azure.

Community Portal – An MVC3 based on-premise application

The application is simple by design. It is a Community Portal (hereafter interchangeably referred to as CP) that allows users to create tasks. To log on to the CP, users need to register themselves. Once a task has been created, users can kick off a workflow to guide the task to completion. The workflow represents a two-step process that routes the request first to the assignee and then to the approver. In addition to this primary use case of task creation and workflow, the Community Portal also allows users to access reports such as “Tasks complete/Month”.




The following diagram depicts the Community Portal architecture. It is comprised of two subsystems: 1) an MVC3-based Community Portal site, and 2) a WF 4.0-based Task Workflow Service. This is a very quick overview, assuming that you are familiar with core concepts such as MVC 3, EF, Code-First and so on.


Community Portal is an MVC 3-based site. The bulk of the logic resides within the actions of the TasksController class. Razor is used as the view engine. The model is based on EF-generated entities. Data annotations are used to specify validation for individual fields. Community Portal uses the built-in membership provider to store and validate user credentials. CP also uses the built-in ASP.NET session state provider to store and retrieve data from the cache. In order to avoid hitting the database on each page refresh, we cache the TaskSummary (the list of tasks assigned to a user) in the session state.


Tasks created by the users are stored in SQL Server. EF 4.0, in conjunction with the Code-First library, is used to access the data stored in SQL Server. As part of task creation, users can upload an image. Uploaded images are stored in SQL Server using the FILESTREAM feature. Since FILESTREAM allows the BLOB data to be stored on an NTFS volume, it provides excellent support for streaming large BLOBs [as opposed to the 2 GB limit imposed by varbinary(max)]. When the user initiates a workflow, a message is sent over to WCF using the BasicHttpBinding. This causes the WF instance to be created, which brings us to the second subsystem – the workflow host.

As stated, the Task Workflow Service consists of a WF 4.0-based workflow program hosted inside a console application (we could just as easily have hosted it in IIS). The WF program is built from the Receive, SendReply, and InvokeMethod activities, and has two stages. The first call into the workflow kicks off a new instance. Each new instance updates the task status to “Assigned” via the InvokeMethod activity and sends a reply to the caller. Then the workflow waits for the “task completed” message to arrive. Once the “task completed” message arrives, the workflow status is marked “complete”, followed by a response to the caller. The last step is repeated for the “task reviewed” message.


In order to ensure that all messages are appropriately correlated (i.e. routed to the appropriate instance), a content-based correlation identifier is used. Content-based correlation is new in WF 4.0. It allows an element in the message (in this case TaskId) to serve as the correlation token. Refer to the screenshot below: an XPath query is used to extract the TaskId from the message payload. The WF 4 runtime compares the result of the XPath expression to the variable pointed to by the CorrelatesWith attribute to determine the appropriate instance to which the message needs to be delivered.
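The mechanics of content-based correlation can be sketched with a toy dispatcher: pull the correlation value out of each message with XPath, then route to the instance keyed by that value. This is not the WF 4 runtime, and the message element names are assumed for illustration.

```python
# Toy content-based correlation: the TaskId element in each message selects
# the workflow instance the message is delivered to.
import xml.etree.ElementTree as ET

instances = {}  # correlation value -> workflow instance state

def dispatch(message_xml):
    task_id = ET.fromstring(message_xml).findtext("TaskId")
    # CorrelatesWith: route to the existing instance for this TaskId,
    # or start a new instance on first contact.
    instance = instances.setdefault(task_id, {"task": task_id, "step": 0})
    instance["step"] += 1
    return instance

dispatch("<TaskMessage><TaskId>17</TaskId></TaskMessage>")
state = dispatch("<TaskMessage><TaskId>17</TaskId></TaskMessage>")
print(state)  # both messages reached the same instance: {'task': '17', 'step': 2}
```

Two messages carrying the same TaskId land on the same instance, which is exactly what the XPath-based CorrelatesWith configuration achieves in the real workflow.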


A quick word on the reporting piece: again, for the sake of simplicity, the Community Portal relies on client report definition files (.rdlc). This is based on the local processing mode supported by the ReportViewer control. Since the ReportViewer control (a server-side control) cannot be used inside MVC, an alternate route was created.

Finally, the following diagram depicts the on-premise solution – it consists of two projects: 1) an MVC 3 based CommunityPortal project, and 2) a console application that hosts the WF 4.0 based TaskWorkflow.


This concludes our whirlwind tour of the on-premise app.

Changing Community Portal to run on Windows Azure

Let us switch gears and discuss how the on-premise version of Community Portal can be moved to the Windows Azure Platform.

Even though our on-premise application is trivial, it includes a number of moving parts. So if we were to move this application to an IaaS (Infrastructure as a Service) provider, we would have to set up and manage the various building blocks ourselves. For instance, we would need to install Windows Server AppFabric Caching and set up a pool of cache servers; we would also have to install SQL Server and set up high availability; we would have to install an identity provider (IdP) and set up a user database; and so on. We have not even discussed the work involved in keeping the servers patched.

Fortunately, the Windows Azure platform is a PaaS offering. In other words, it provides a platform of building-block services that make it easier to develop cloud-based applications. Application developers can subscribe to a set of pre-built services as opposed to creating and managing their own. There is one other important benefit of PaaS – the ability to treat the entire application (including the virtual machines, executables and binaries) as one logical service. For instance, rather than logging on to each server that makes up the application and deploying the binaries to them individually, Windows Azure allows developers to simply upload the binaries and a deployment model to the Windows Azure Management Portal – and, as you would expect, there is a way to automate this step using the management API. Windows Azure in turn allocates the appropriate virtual machines, storage and network based on the deployment model. It also takes care of installing the binaries on all the virtual machines that have been provisioned. For a more detailed discussion of the PaaS capabilities of Windows Azure, please refer to my previous blog post.

The following diagram illustrates the different pieces that make up a Windows Azure virtual machine (also referred to as a compute unit). The binaries supplied by the application developer are represented as the dark grey box. The rest of the components are supplied by the Windows Azure platform. The role runtime is bootstrapping code that we will discuss later in the blog post. Together, these components represent an instance of a software component (such as an MVC 3 site, a batch executable program, etc.) and are commonly referred to as a role. Currently, there are three role types – Web role, Worker role and VM role. Each role type is designed for a particular kind of software component. For instance, a web role is designed for a web application or anything related to it; a worker role is designed for long-running, non-interactive tasks. An application hosted within Azure (commonly referred to as an Azure-hosted service) is made up of one or more roles. Another thing to note is that an Azure-hosted application can scale by adding instances of a given role.


Now that you understand the core concepts (deployment model, role and role instances), let us resume our discussion of converting the on-premise application to run on Windows Azure, beginning with the deployment model. It turns out that the deployment model is an XML file that describes the various roles that make up our application, the number of instances of each role, configuration settings, port settings, and so on. Fortunately, Visual Studio provides tooling so that we don’t have to hand-craft the XML file. Simply add a Windows Azure project to the existing solution as shown below.


Select a worker role. Remember that a worker role is designed for long-running tasks; in this instance, we will use it to host instances of TaskWorkflow. There is no need to select a web role, as we will take the existing CommunityPortal project and map it to a web role.



We do this by clicking on Add Web Role in the solution (as shown below) and selecting the CommunityPortal project.


Here is a screenshot of our modified solution structure. Our solution now has two roles. To mark the transition to an Azure service, I dropped the OnPremise suffix from the solution name, so our solution is now named CommunityPortal.


Through these steps, we have generated our deployment model!

The following snippet depicts the generated ServiceDefinition file, which contains the definitions of roles, endpoints, etc. As you can see, we have one web role and one worker role as part of our service. The Sites element describes the collection of web sites hosted in a web role. In this instance, we have a single web site bound to an external endpoint listening for HTTP traffic on port 80 (lines 12 and 17).

 1:  <?xml version="1.0" encoding="utf-8"?>
 2:  <ServiceDefinition name="WindowsAzureProject5" xmlns="">   
 3:    <WorkerRole name="WorkerRole1">   
 4:      <Imports>   
 5:        <Import moduleName="Diagnostics" />    
 6:      </Imports>   
 7:    </WorkerRole>   
 8:    <WebRole name="CommunityPortal">   
 9:      <Sites>  
10:        <Site name="Web">  
11:          <Bindings>  
12:            <Binding name="Endpoint1" endpointName="Endpoint1" />  
13:          </Bindings>  
14:        </Site>  
15:      </Sites>  
16:      <Endpoints>  
17:        <InputEndpoint name="Endpoint1" protocol="http" port="80" />  
18:      </Endpoints>  
19:      <Imports>  
20:        <Import moduleName="Diagnostics" />  
21:      </Imports>  
22:    </WebRole>  
23:  </ServiceDefinition>
While the service definition file defines the settings, the values associated with those settings reside in the service configuration file. The following snippet shows the equivalent service configuration file. Note the Instances element (lines 4 and 10) that defines the number of instances for a given role. This tells us that changing the number of instances of a role is just a matter of changing a setting within the service configuration file. Also note that we have one or more configuration settings defined for each role. This seems a bit odd, doesn’t it? Isn’t Web.config the primary configuration file for an ASP.NET application? The answer is that we are defining configuration settings that apply to one or more instances of a web role. There is one other important reason for preferring the service configuration file over web.config, but to discuss it we need to wait until we have talked about packaging and uploading the application binaries to Windows Azure.
 1:  <?xml version="1.0" encoding="utf-8"?>   
 2:  <ServiceConfiguration serviceName="WindowsAzureProject5" xmlns="" osFamily="1" osVersion="*">   
 3:    <Role name="WorkerRole1">   
 4:      <Instances count="1" />   
 5:      <ConfigurationSettings>   
 6:        <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />   
 7:      </ConfigurationSettings>   
 8:    </Role>   
 9:    <Role name="CommunityPortal">  
10:      <Instances count="1" />  
11:      <ConfigurationSettings>  
12:        <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />  
13:      </ConfigurationSettings>  
14:    </Role>  
15:  </ServiceConfiguration>

Visual Studio makes it easy to work with the service configuration settings so developers don’t have to edit the XML directly. The following screenshot shows the UI for adjusting the configuration settings.


Now that we have the deployment model, it’s time to discuss how we package up the binaries (recall that to host an application on Windows Azure, we need to provide a deployment model and the binaries). Before we create the package, we need to make sure that all the files needed to run our application in Windows Azure are indeed included. By default, all the application assemblies will be included. However, recall that Windows Azure provides the environment in which our application runs, so we can expect the .NET 4.0 Framework (or 3.5 SP1) assemblies, C runtime components, etc. to be available. There are, however, some add-on packages that are not provided in the environment today, such as ASP.NET MVC 3. This means we need to explicitly include the MVC 3 assemblies in our package by setting Copy Local to true on those references.

Now that we have covered the concepts of a service package and configuration file, let us revisit the discussion of where to store configuration settings that need to change often. The only way to change a setting that is part of the package (such as a setting in web.config) is to redeploy it. In contrast, settings defined in the service configuration can easily be changed via the portal or the management API, without redeploying the entire package. So, now that I have convinced you to store such settings in the service configuration file, we must change the code to read settings from the new location, using RoleEnvironment.GetConfigurationSettingValue. The RoleEnvironment class gives us access to the Windows Azure runtime. In addition to accessing configuration settings, we can use this class to find out about all the role instances running as part of the service, and to check whether the code is running in Windows Azure at all. Finally, we also need to set the global configuration setting publisher inside Global.asax.cs as shown below:

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});
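With the publisher in place, reading a setting from the service configuration is a one-liner. The sketch below also falls back to web.config when running outside Windows Azure; that fallback is my own addition for local testing, not part of the original code:

```csharp
// Read the connection string from the service configuration when running
// under Windows Azure; otherwise fall back to web.config (assumed fallback).
string connectionString = RoleEnvironment.IsAvailable
    ? RoleEnvironment.GetConfigurationSettingValue("CommunityPortalEntities")
    : ConfigurationManager.ConnectionStrings["CommunityPortalEntities"].ConnectionString;
```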
That’s all the changes we will make to the web role for now. 

But we also need to modify the worker role we just added. The on-premise version relied on a console application to host the task workflow. The equivalent piece of code can be moved to the Run method within the RoleEntryPoint. This begs the question: what is RoleEntryPoint and why do I need it? Recall that when discussing the Azure VM, we briefly alluded to the role runtime. This is the other piece of logic (besides the application code files themselves) that an application developer can include in the package. It allows an application to receive callbacks (OnStart, OnStop, etc.) from the runtime environment and is therefore a great place to inject bootstrapping code such as diagnostics. Another important example is the notification raised when a service configuration setting changes. We noted that the role instance count is a service configuration setting, so if the user decides to add or remove role instances, the application can receive a configuration change event and act accordingly.

Strictly speaking, including a class derived from RoleEntryPoint is not required if there is no initialization to be done. That is not the case with the worker role we added to the solution. Here we need to use the OnStart method to set up the workflow service endpoint and open the WorkflowServiceHost, as shown in the snippet below. Once OnStart returns (with a return value of true), we will start receiving incoming traffic. This is why it is important to have the WorkflowServiceHost ready before we return from OnStart.

The first few lines of the snippet below require some explanation. While running on-premise, the base address was simply localhost and a dynamically assigned port. When the code runs inside Windows Azure, however, we need to determine at runtime which port has been assigned to the role. The worker role needs to set up an internal endpoint to listen for incoming requests to the workflow service. Choosing an internal endpoint is a decision we made; we could just as easily have allowed the workflow host to listen on an external endpoint (available on the public internet). However, in keeping with the idea of defense in depth, we decided to limit the exposure of the workflow host and only allow other role instances running as part of the service to communicate with it.

public override bool OnStart()
{
    // Discover at runtime the address and port assigned to the internal endpoint
    RoleInstanceEndpoint internalEndPoint =
        RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TaskEndpoint"];
    string protocol = "http://";
    string baseAddress = string.Format("{0}{1}/", protocol, internalEndPoint.IPEndpoint);

    host = new WorkflowServiceHost(new TaskService(), new Uri(baseAddress));
    host.AddServiceEndpoint("ITaskService",
        new BasicHttpBinding(BasicHttpSecurityMode.None) { HostNameComparisonMode = HostNameComparisonMode.Exact },
        "TaskService");
    host.Open();

    return base.OnStart();
}

The Run method then simply runs a never-ending loop, as shown below. This is equivalent to the static void Main method from our console application.

public override void Run()
{
    Trace.WriteLine("TaskServiceWorkerRole entry point called", "Information");
    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }
}

Once we have included the appropriate files, packaging the application binaries is just a matter of right-clicking the project and selecting the Publish option. This brings up the publish dialog shown below. For now we will simply select the “Create Service Package” option and click OK.


That’s it. At this point, we have both of the artifacts required to publish to Windows Azure. We can browse to the Windows Azure Management portal (shown below) and create a new service called CommunityPortal. Notice the two highlighted text boxes: here we provide the package location and the deployment model. Let us go ahead and publish our application to Windows Azure.


Not so fast …

We’re all done and the application is running on Windows Azure! Well, almost: we don’t yet have a database available to the code running in Windows Azure (this includes the Tasks database and the membership database), unless of course we make our on-premise database instances accessible to the Windows Azure hosted service. While doing so is technically possible, it may not be a good idea because of latency and security concerns. The alternative is to move the databases close to the Windows Azure hosted service. Herein lies an important concept: not only do we want to move these databases to Windows Azure, we also want to get out of the business of setting up high availability, applying patches, etc. for our database instances. This is where the PaaS capabilities offered by Windows Azure shine.

In the next few subsections, we will look at approaches that allow us to “get out of the way” and let the Windows Azure platform take care of as many moving parts as possible. One other thing: changing the on-premise code to take advantage of Azure-provided services sounds good, but it is only practical if there is semantic parity between the two environments. It is not reasonable to expect developers to maintain two completely different code bases. Fortunately, achieving semantic parity is indeed a key design goal of Windows Azure. The following diagram represents the symmetry between the on-premise and Windows Azure offerings. It is important to note that complete symmetry between the two is not there today and is unlikely to be there anytime soon. Frankly, achieving complete symmetry is a journey on which the entire ecosystem, including Microsoft, ISVs, SIs and developers like us, has to embark together. Only then can we hope to realize the benefits that PaaS has to offer.


Let us start with moving the database to the cloud.

Windows Azure includes a highly available database-as-a-service offering called SQL Azure. The following diagram depicts its architecture. SQL Azure is powered by a fabric of SQL Server nodes that allows for highly elastic databases (the ability to create as many databases as needed) that are also redundant (resilient against failures such as a corrupted disk). While the majority of on-premise SQL Server functions are available in SQL Azure, there is not 100% parity between the two offerings, mainly for the following reasons:

1) The SQL Azure team has simply not yet gotten around to building a given capability into SQL Azure, but a future release is planned to address it. An example of such a feature is CLR-based stored procedures: not available in SQL Azure today, but the team has gone on record about adding this capability in the future.

2) It simply does not make sense to enable certain features in a hosted environment such as SQL Azure. For example, the ability to specify filegroups would limit how the SQL Azure service can move databases around, so this feature is unlikely to be supported.


As application developers, all we need to do is provision the database on the Windows Azure portal by specifying its size, as shown below. For this sample we have a very modest size requirement, so we don’t have to worry about sharding our data across multiple databases. The next step is to replace the on-premise connection string with the SQL Azure connection string.
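For reference, a SQL Azure connection string follows the familiar SQL Server shape, but must use TCP on port 1433 and a user@server login, with encryption recommended. The server and user names below are placeholders, not values from this sample:

```
Server=tcp:yourserver.database.windows.net,1433;Database=CommunityPortal;
User ID=youruser@yourserver;Password={your password};Trusted_Connection=False;Encrypt=True;
```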


Remember from our earlier discussion that there is not 100% parity between on-premise SQL Server and SQL Azure. Reviewing our on-premise database reveals one aspect of our schema that cannot be migrated to SQL Azure: the FILESTREAM column that we used to efficiently store an uploaded image associated with a task. The FILESTREAM column type stores the image on a mounted NTFS volume outside the data file, something SQL Azure is unlikely to support. Fortunately, Windows Azure provides Blob storage, which stores large BLOBs efficiently. Furthermore, we don’t have to manage an NTFS volume (as is the case with FILESTREAM); Azure Blob storage takes care of that for us. It provides a highly elastic store that is also fault tolerant (it keeps multiple copies of the data). Finally, should we need to, we can serve the images from a location closer to our users by enabling the Content Delivery Network (CDN) option on our Blob containers. Hopefully, I have convinced you that Azure Blob storage is a much better place to store our BLOB data.

Now, let’s look at the code change needed to store the images in Blob storage. The change is rather simple. First, we upload the image to Azure Blob storage. Next, we take the name of the blob and store it in the Tasks table. Of course, we need to change the Tasks table to replace the FILESTREAM column with a varchar(255) column that holds the blob name.

public void SaveFile(HttpPostedFileBase fileBase, int taskId)
{
    // Generate a unique blob name that preserves the original file extension
    var ext = System.IO.Path.GetExtension(fileBase.FileName);
    var name = string.Format("{0:10}_{1}{2}", DateTime.Now.Ticks, Guid.NewGuid(), ext);

    CreateOnceContainerAndQueue();

    // Upload the image to the "imagelib" blob container
    var blob = blobStorage.GetContainerReference("imagelib").GetBlockBlobReference(name);
    blob.Properties.ContentType = fileBase.ContentType;
    blob.UploadFromStream(fileBase.InputStream);

    // Save the blob name into the Tasks table
    using (SqlConnection conn = new SqlConnection(
        RoleEnvironment.GetConfigurationSettingValue("CommunityPortalEntities")))
    using (SqlCommand comm = new SqlCommand(
        "UPDATE Tasks SET BlobName = @blobName WHERE TaskId = @TaskId", conn))
    {
        comm.Parameters.Add(new SqlParameter("@blobName", name));
        comm.Parameters.Add(new SqlParameter("@TaskId", taskId));
        conn.Open();
        comm.ExecuteNonQuery();
    }
}
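Reading an image back is equally simple. Given the blob name stored in the Tasks table (here a local variable blobName), we can hand the blob URI straight to the browser; this is my own sketch rather than code from the sample:

```csharp
// Build the public URI for a previously uploaded image (sketch).
// "imagelib" is the container used in SaveFile; blobName is the value
// read back from the Tasks.BlobName column.
var blob = blobStorage.GetContainerReference("imagelib").GetBlockBlobReference(blobName);
string imageUrl = blob.Uri.AbsoluteUri; // absolute URL of the blob, usable in an <img> tag
```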

With this change, we are ready to hook up the SQL Azure-based CommunityPortal database to the rest of our application. But what about the existing SSRS reports, now that the database has been moved to SQL Azure? We have two options: 1) continue to run the reports on an on-premise instance of SQL Server Reporting Services (SSRS), adding the SQL Azure database as a data source; or 2) move the reporting function to a Windows Azure-hosted SSRS instance (currently in a CTP). The benefit of the latter should be obvious by now: we subscribe to a “reporting as a service” capability without being bogged down installing and maintaining SSRS instances ourselves. The following screenshot depicts how, using Business Intelligence Development Studio (BIDS), we can deploy the existing report definition to a SQL Azure Reporting Services instance. Since our database has already been migrated to SQL Azure, we can easily add it as the report data source.


Outsourcing the authentication setup

What about the ASP.NET membership database? We could also migrate it to SQL Azure; in fact, there is a version of the ASP.NET membership database install scripts designed to work with SQL Azure, and once the data is migrated it is straightforward to rewire our Azure-hosted application to work with the new provider. While this approach works just fine, there is one potential issue: our users do not like the idea of registering a new set of credentials to access the Community Portal. Instead, they would like to authenticate with existing credentials such as Windows Live ID, Facebook, Yahoo! or Google. Since these credential systems allow a user to establish their identity, we commonly refer to them as identity providers, or IPs. It turns out that IPs rely on a number of different underlying protocols, such as OpenID and OAuth, and these protocols tend to evolve constantly. The task of interacting with a number of distinct IPs seems daunting. Fortunately, Windows Azure provides a service called the Access Control Service (ACS) that can simplify this task by acting as an intermediary between our application and the various IPs. Vittorio and Wade authored an excellent article that goes through the detailed steps required to integrate an application with ACS, so I will only provide a quick summary.

  • Log on to the ACS management portal.
  • Create a namespace for the Community Portal application as shown below.


    • Add the Identity Providers we want to support. As shown in the screenshot below, we have selected Google and Windows Live ID as the two IPs.


• Notice that the role of our application has changed from being an IP (via the ASP.NET membership provider that was in place earlier) to being an entity that relies on well-known IPs. In security parlance, our Community Portal application has become a relying party, or RP. Our next step is to register Community Portal as an RP with ACS, which will allow ACS to pass us information about authenticated users. The screenshot below illustrates how we set up Community Portal as an RP.


• The previous step concludes the ACS setup. All we need to do now is copy some configuration settings that ACS has generated for us (based on the IPs and RP we set up) and plug them into the Community Portal source code. The configuration information to be copied is highlighted in the screenshot below.


• Finally, inside Visual Studio we need to add ACS as our authentication mechanism. We do this by selecting the “Add STS reference” menu item and plugging in the configuration metadata we copied from the ACS portal, as shown in the screenshot below.


• The Windows Identity Foundation (WIF) classes needed to interact with ACS are not installed in the current OS versions available on Windows Azure. To get around this limitation, we need to install the WIF update as part of the web role initialization.
    • That’s it! Users will now be prompted to select an IP of their choice, as shown below:
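One common way to perform the WIF installation mentioned above is a startup task in the service definition. The script name below is hypothetical; the .cmd file and the WIF installer would need to be included in the role’s package:

```xml
<WebRole name="CommunityPortal">
  <Startup>
    <!-- Runs with elevated rights before the role starts; InstallWIF.cmd
         (hypothetical name) would invoke the WIF runtime installer silently. -->
    <Task commandLine="InstallWIF.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>
```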


    Outsourcing the caching setup

Up until now we have successfully migrated the task database to SQL Azure, and we have “outsourced” our security scheme to ACS. Let us look at one last example of a building-block service provided by the Windows Azure platform: caching. The on-premise version of Community Portal relies on the default in-memory ASP.NET session state. The challenge with moving this default implementation to the cloud is that all web role instances are network load-balanced, so storing the state in-process on one of the web role instances does not help. We could get around this limitation by installing caching components such as memcached on each of our web role instances. However, our stated goal is to minimize the number of moving parts. Fortunately, in what is now becoming a recurring theme of this blog post, we can leverage a Windows Azure platform service: the AppFabric Caching service. The next set of steps illustrates how this can be accomplished:

    • Since the AppFabric Caching service is in CTP, we first browse to the AppFabric Labs portal.
    • We create a new cache service namespace as shown below.


    • We are done creating the cache. All we need to do is grab the configuration settings from the portal (as shown below) and plug them into the web.config of the Community Portal web project.


    • With the last step, our ASP.NET session state provider is now based on the AppFabric Caching service. We can continue to use the familiar Session syntax to interact with session state.
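For reference, the settings copied from the portal end up in web.config along the lines of the sketch below. The cache host name is a placeholder for the service URL generated for your namespace, and I have omitted the security/authorization-token section for brevity:

```xml
<dataCacheClient>
  <hosts>
    <!-- Placeholder: use the service URL shown on the AppFabric Labs portal -->
    <host name="yournamespace.cache.appfabriclabs.com" cachePort="22233" />
  </hosts>
</dataCacheClient>
<system.web>
  <sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
    <providers>
      <add name="AppFabricCacheSessionStoreProvider"
           type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
           cacheName="default" />
    </providers>
  </sessionState>
</system.web>
```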


    We have taken an on-premise version of Community Portal built using currently shipping technologies and moved it to the Windows Azure platform. In the process we have eliminated many of the moving parts of the on-premise application. The Windows Azure-based version of Community Portal can now take advantage of the elasticity and scalability inherent in the platform.

    Hope you found this description useful. The source code for the on-premise version can be found here. The source code for the “Azure-enabled” version can be found here. Also, if you would like to test drive Community Portal on Windows Azure, please visit (I will leave it up and running for a few days).

    Steve Plank (@plankytronixx) reported UK Tech Days 2011: Building and deploying applications onto the Windows Azure Platform for cloud computing, 23rd May, repeated on 24th May, London on 4/15/2011:

    image This day is about getting you up to speed on why you should be looking at the Windows Azure Platform, what tools and services it offers and how you can start to explore it to create new solutions, migrate elements of existing solutions or integrate with existing systems.



    Register for Mon 23rd:

    Register for Tues 24th:

    Also, as part of the Tech Days series, there are 6 Windows Azure Bootcamps in the UK.

    If you are a developer looking to take advantage of cloud computing, but you haven’t yet taken the plunge, this free half-day of training is the quickest way to get up to speed with Microsoft’s offering: Windows Azure. We’ll take you from knowing nothing about the cloud to actually having written some code, deployed it to the cloud service and made a simple application available on the public Internet. You’ll get all the information you need to get up to speed with Windows Azure in a packaged and compressed form, ready for your consumption, without having to trawl through books, blogs and articles on your own. There will be experienced people available to guide you through each exercise. Once you have the basics in place, you’ll be off and running.

    The sessions are either morning or afternoon and are free please see agenda and link to registration below:-

    • Registration and Coffee
    • Windows Azure Introduction – architecture, roles, storage
    • First Lab: Hello World Windows Azure Web Role and deploying to your free Windows Azure subscription
    • Second Lab: Using Windows Azure Storage
    • Third Lab: Worker Role and how to use Windows Azure Storage to communicate between roles
    • Break, phone calls, coffee…
    • Introduction to SQL Azure
    • Fourth Lab: Using SQL Azure with a Windows Azure application
    • Review and wrap-up

    See Mike Taulty’s post below for Bootcamp location and registration data.

    Mike Taulty reported mostly Windows Azure Bootcamps in his Upcoming UK Events post of 4/14/2011:

    image There’s some great events coming up in the UK over the coming weeks/months – here’s a quick update, hope to see you there.

    § 4 Hours of Coding and You’ll get a 30 Day Azure Pass at the Windows Azure Bootcamp this May – If you’re a developer looking to take advantage of cloud computing, but haven’t taken the plunge, this free half-day of training is the quickest way to get up-to-speed with Microsoft’s Windows Azure. We’ll take you on a journey from, learning about the cloud, writing some code, deploying to the cloud and making a simple application available on the Internet. (Banners attached)


    Event: UK Tech Days 2011 - Windows Azure Bootcamp (1032481491 – 4th May 2011, AM)
    Location: Thames Valley Park, Reading, UK

    Event: UK Tech.Days 2011 - Windows Azure Bootcamp (1032481495 – 4th May 2011, PM)
    Location: Thames Valley Park, Reading, UK

    Event: UK Tech Days 2011 - Windows Azure Bootcamp (1032481492 – 12th May 2011, AM)
    Location: Thames Valley Park, Reading, UK

    Event: UK Tech.Days 2011 - Windows Azure Bootcamp (1032481496 – 12th May 2011, PM)
    Location: Thames Valley Park, Reading, UK

    Event: UK Tech.Days 2011 - Windows Azure Bootcamp (1032481635 – 27th May 2011, AM)
    Location: Cardinal Place, London, UK

    Event: UK Tech.Days 2011 - Windows Azure Bootcamp (1032482519 – 27th May 2011, PM)
    Location: Cardinal Place, London, UK

    § Register for UK Tech.Days 2011 Windows Azure Developer Sessions – This day is about getting you up to speed on why you should be looking at the Windows Azure Platform, what tools and services it offers and how you can start to explore it to create new solutions, migrate elements of existing solutions or integrate with existing systems. You can attend one of two repeated sessions on either Monday 23 May or Tuesday 24 May.

    § Register for UK Tech.Days 2011 Windows Phone 7 Developer Sessions – It's only a few months since Windows Phone 7 launched in the UK and already developers have published thousands of apps to the Windows Phone Marketplace, and hundreds of new apps are being added every day. On this day we'll explore a variety of topics to help you build better Windows Phone 7 apps and get them into Marketplace as quickly as possible. Expect sessions on cutting-edge design, making App Hub work for you, avoiding common publishing pitfalls (i.e. how to get your app approved without losing your mind), performance optimisation and making the most of Windows Phone 7 features. Click here to register!

    § Get rewarded for building Windows Phone 7 applications – Every successful and original application published in the Marketplace earns you one point to exchange for a range of awesome goodies. The more applications you publish, the bigger and shinier the stuff you get to choose from. The prizes are not going to hang around for too long, so get cracking!

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Liran Zelkha reported The NewSQL Market Breakdown in a 4/16/2011 post to the High Scalability blog:

    Matt Aslett from the 451 Group coined the term “NewSQL”. On its definition, Aslett writes:

    “NewSQL” is our shorthand for the various new scalable/high performance SQL database vendors. We have previously referred to these products as ‘ScalableSQL’ to differentiate them from the incumbent relational database products. Since this implies horizontal scalability, which is not necessarily a feature of all the products, we adopted the term ‘NewSQL’ in the new report.

    And to clarify, like NoSQL, NewSQL is not to be taken too literally: the new thing about the NewSQL vendors is the vendor, not the SQL.

    As with NoSQL, under the NewSQL umbrella you can see various providers, with various solutions.

    I think these can be divided into several sub-types:

    1. New MySQL storage engines. These give MySQL users the same programming interface but scale very well. Xeround and Akiban play in this field. The good part is that you still use MySQL; on the downside, other databases are not supported (at least not easily), and even MySQL users need to migrate their data to these new engines.
    2. New databases. These completely new solutions can support your scalability requirements. Of course, some (hopefully minor) changes to the code will be required, and data migration is still needed. Some examples are VoltDB and NimbusDB.
    3. Transparent Sharding. ScaleBase, which offers such a solution, lets you get the scalability you need from the database, but instead of rewriting the database, you can use your existing one. This allows you to reuse your existing skill set and eco-system, and you don’t need to rewrite your code or perform any data migration – everything is simple and quick. Other solutions in the field are dbShards for instance.

    As in NoSQL, I believe each NewSQL solution has its own spot, answering specific needs.

    As if NoSQL hasn’t contributed enough confusion, we now have NewSQL! Will Alex Popescu start a MyNewSQL blog?

    Maureen O’Gara asserted “Tested for production use, it’s supposed to have improved performance, greater stability, and extended authentication throughout” as a deck for her Cloudera Puts Out New Hadoop article of 4/15/2011 to the Cloud Computing Journal:

    Hadoop's prime commercializer, Cloudera, has started pushing its third rev of the popular Big Data open source data management framework inspired by Google's MapReduce as the competition for such widgetry is heating up.

    image Tested for production use, it's supposed to have improved performance, greater stability, and extended authentication throughout.

    image It integrates with business intelligence tools and RDBMS systems like Informatica, Jaspersoft, MicroStrategy, Netezza, Talend and Teradata and, besides Apache Hadoop, now includes HBase, the Hadoop database for random read/write access; Hive, the Facebook-built SQL-like queries and tables for large datasets; Pig, the Yahoo-developed dataflow language and compiler; Sqoop, Cloudera's MySQL-Hadoop connector; Flume, its own data-loading program; the Hue GUI; and the Zookeeper configuration tool.

    image Cloudera says the 100% open source CDH3 integrates all components and functions to interoperate through standard APIs, manages required component versions and dependencies and will be patched quarterly.

    It supports Red Hat, CentOS, SuSE and Ubuntu Linux and can run in the Amazon or Rackspace clouds.

    Small MapReduce jobs are supposed to run up to 3x faster and filesystem I/O is up to 20% faster, with 2x improved performance in HBase query throughput. …

    Twitter is already supposed to be using it broadly.

    Cloudera peddles a commercial version with proprietary tools and support.

    Carl Brooks (@eekygeeky) asked and answered What does VMware Cloud Foundry mean for the enterprise? in a 4/15/2011 post to the blog:

    Published: 15 Apr 2011

    Weekly cloud computing update

    VMware launched a unique new initiative this week called Cloud Foundry. It's a Platform as a Service with a decidedly new-school bent: You'll be able to sign up, tap into running database services like MySQL and MongoDB, and log in and write code directly in a Spring or Rails environment, much like Engine Yard or Heroku.



    On the other hand, you can run the platform yourself as a downloadable instance called Micro Cloud. It's packed up in a Linux-based OS built on VMware, and the whole shooting match is on Github as an open source project. Supposedly the downloadable, runnable Cloud Foundry does everything the online service arm does, including automatic provisioning of new instances as load increases and self-monitoring of the various services.

    This has got a lot of tongues wagging; it's being seen as a viable Platform as a Service (PaaS) alternative for the enterprise, something that can straddle the gap between easy-on cloud services and the enterprise need for control and security. If your IT shop does not like dumping the company Web app on Engine Yard but the dev team is threatening mutiny over working in a stone-age traditional Java production lifecycle ("that's so 2005, man"), Cloud Foundry can basically become the in-house option.

    And if moving production onto an Amazon Web Services environment makes sense, that can be taken care of by doing all the development on your Micro Cloud and pushing it out to Cloud Foundry services when you feel like it.

    If you're a Java house already using Spring, this may well represent an ideal state of affairs. It's an innovation that many people will take to like a duck to water. It's been called an evolution of the PaaS model, an "Azure killer" and many other glowing terms, most of which are at least faintly specious. One thing is for sure: from a utility and viability perspective, it knocks VMforce, VMware's previous PaaS venture with Salesforce.com, into a cocked hat. If Cloud Foundry is a race track, VMforce looks like a bumper car pavilion by comparison.


    Will Cloud Foundry battle Microsoft Azure?

    But the fact is, while Cloud Foundry's VMware pedigree will give it serious viability as an enterprise option, it's not going to attract the same users as Azure. Microsoft's PaaS is well on the way toward maturity and backed up by its own dedicated infrastructure.

    Azure's premise is built around being a Microsoft shop first and foremost. SQL Azure or Azure CDN, the theory goes, will be a choice the developer can make, right in the middle of picking out an environment, that comes with very little functional difference between it and your own SQL box. It's also clearly aimed at the .NET crowd.


    Cloud Foundry is for Java developers and IT shops that don't feel particularly invested in Microsoft; end of story. While it's going to get a great deal of attention, Cloud Foundry is very much in the "Web 2.0" camp; its greatest strength will come when VMware integrates it more completely with vCloud Express and shakes out those extra PaaS services. After all, it's not like it was that hard to get going on SpringSource or Ruby to begin with.

    It's nice that Cloud Foundry just made it that much easier, but it's a long way from a magic bullet. It remains to be seen how far VMware will take it: will it be a platform people take under their wing, or will it truly become a viable pipeline for highly developed infrastructure into the enterprise Java shop?

    Full disclosure: I’m a paid contributor to

    Matthew Weinberger reported VMware Counters Windows Azure With Cloud Foundry in a 4/14/2011 post to the TalkinCloud blog:


    VMware has launched Cloud Foundry, an open source platform-as-a-service designed for cloud service providers. Without mentioning Microsoft Windows Azure by name, Cloud Foundry sounds like VMware's attempt to counter Microsoft's platform as a service (PaaS) efforts. But how does VMware Cloud Foundry differ from OpenStack? We've got some initial perspectives.

    With Cloud Foundry, VMware is aiming to provide a “new generation” application platform, giving developers a way to streamline the deployment and scalability of their cloud applications — while also enabling a greater flexibility to choose between public and private clouds, developer frameworks, and application infrastructure services.

    According to VMware’s press release, Cloud Foundry leapfrogs PaaS offerings that force developers into using a specific or non-standard framework, potentially running from a single vendor-operated cloud. That sounds like a thinly veiled jab at Microsoft Windows Azure.

    So while PaaS is key to the future of cloud application development, vendor lock-in remains prevalent because workloads and software can't be moved from one cloud to another. But Cloud Foundry supports any number of popular, open frameworks and databases like Ruby On Rails and MySQL. VMware even claims Cloud Foundry supports non-VMware environments.

    There are four different ways to use VMware Cloud Foundry:

    • hosted by VMware;
    • as an open source, community download that’s especially useful for testing and evaluation;
    • as a Micro Cloud virtual appliance; and
    • most relevantly, a forthcoming commercial edition for enterprises and service providers.

    VMware says that the service provider edition will also enable portability across hybrid clouds, meaning that an enterprise can deploy internally and migrate to a vCloud provider’s platform.

    At least on paper, it sounds like VMware Cloud Foundry is aiming to do for PaaS what OpenStack is doing for IaaS. But where OpenStack is taking on Amazon EC2, VMware Cloud Foundry has Windows Azure in its sights. As always, adoption is the real measure of a product, so we’ll see if developers buy VMware’s open source line.

    John C. Stame (@JCStame) chimed in with You Bet Your PAAS it’s Open! on 4/14/2011:

    Earlier this week VMware introduced Cloud Foundry, an Open Platform as a Service (PaaS) hosted by VMware that will also be made available to service provider partners and customers, enabling them to host their own Open PaaS.


    Cloud Foundry provides a PaaS implementation that offers developers choice:

    • Choice of developer frameworks,
    • Choice of application infrastructure services, and
    • Choice of clouds to which to deploy applications

    This PaaS is different from other vendors' PaaS offerings that restrict developers to a specific or non-standard development framework, a limited set of application services or a single, vendor-operated cloud service; restrictions that raise lock-in concerns by inhibiting application portability.

    Instead, Cloud Foundry supports, or will support, frameworks like Spring for Java, Ruby on Rails, Sinatra for Ruby, Node.js and Grails with others promised in fairly short order. And for application services, it will initially support the open source NoSQL MongoDB, MySQL and Redis databases with plans to add VMware’s own vFabric services, the application platform in vCloud, as well as the RabbitMQ messaging system, another VMware property.

    What's really interesting is that VMware has rolled out its Cloud Foundry service as an open source project! An entire PaaS platform, which VMware will offer as a hosted service but which is also available for anyone to run within their own company or datacenter!

    Coming real soon, VMware plans to produce the Cloud Foundry Micro Cloud, a free, complete, downloadable instance of Cloud Foundry that runs in a single virtual machine developers can use on their own laptops to ensure that “applications running locally will also run in production without modification on any Cloud Foundry-based private or public cloud.”  Down the road, the plan is to provide a commercial Cloud Foundry for enterprises that want to offer PaaS capabilities in their own private clouds and for service providers that want to offer Cloud Foundry via their public cloud services.  Enterprises should then be able to integrate the PaaS environment with their application infrastructure services portfolios, and service providers with their hybrid cloud environments.

    Thorsten von Eicken described Cloud Foundry Architecture and Auto-Scaling in a 4/14/2011 post to the RightScale blog:

    image Yesterday’s blog post mostly covered the benefits of VMware’s Cloud Foundry PaaS and how it fits with RightScale. Today I want to dive a little into the Cloud Foundry architecture and highlight how IaaS and PaaS really are complementary. I’m really hoping that more PaaS options will become available so we can offer our users a choice of PaaS software.

    Cloud Foundry Architecture

    From a technical point of view I see two main innovations in Cloud Foundry. The first is that the software is released as an open source project with an Apache license, which gives users and third-parties access to make customizations and to operate Cloud Foundry on their own. The second is that Cloud Foundry is very modular and separates the data path from the control plane, i.e. the components that make user applications run from the ones that control Cloud Foundry itself and the deployment and scaling of user applications. The reason the latter innovation is significant is that it really opens up the door to innovate on the management of the PaaS as well as integrate it into existing frameworks such as RightScale’s Dashboard.

    Enough prelude, the pieces that make up Cloud Foundry are:

    • At the core the app execution engine is the piece that runs your application. It’s what launches and manages the Rails, Java, and other language app servers. As your app is scaled up more app execution engines will launch an app server with your code. The way the app execution engine is architected is nice in that it is fairly stand-alone. It can be launched on any suitably configured server, then it connects to the other servers in the PaaS and starts running user applications (the app execution engines can be configured to run a single app per server or multiple). This means that to scale up the PaaS infrastructure itself the primary method is to launch more suitably configured app execution engines, something that is easy to do in a RightScale server array!
    • The request router is the front door to the PaaS: it accepts all the HTTP requests for all the applications running in the PaaS and routes them to the best app execution engine that runs the appropriate application code. In essence the request router is a load balancer that knows which app is running where. The request router needs to be told about the hostname used by each application and it keeps track of the available app execution engines for each app. Request routers are generally not scaled frequently, in part because DNS entries point to them and it’s good practice to keep DNS as stable as possible, and also because a small number of request routers go a long way compared to app execution engines. It is possible, however, to place regular load balancers in front of the request routers to make it easy to scale them without DNS changes.
    • The cloud controller implements the external API used by tools to load/unload apps and control their environment, including the number of app execution engines that should run each application. As part of taking in new applications it creates the bundles that app execution engines load to run an application. A nice aspect of the cloud controller is that it is relatively policy-free, meaning that it relies on external input to perform operations such as scaling how many app execution engines each application uses. This allows different management policies to be plugged-in.
    • A set of services provide data storage and other functions that can be leveraged by applications. In analogy with operating systems these are the device drivers. Each service tends to consist of two parts: the application implementing the service itself, such as MySQL, MongoDB, Redis, etc., and a Cloud Foundry management layer that establishes the connections between applications and the service itself. For example, in the MySQL case this layer creates a separate logical database for each application and manages the credentials such that each application has access to its database.
    • A health manager responsible for keeping applications alive and ensuring that if an app execution engine crashes the applications it ran are restarted elsewhere.

    All these parts are tied together using a simple message bus, which, among other things, allows all the servers to find each other.
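    The request router described above can be sketched in a few lines of Python: keep a table from application hostnames to the app execution engines running each app, and round-robin requests among them. This is a conceptual sketch under assumed names, not the actual Cloud Foundry implementation (which, for example, learns about engines over the message bus rather than via a local `register` call).

```python
import itertools

class RequestRouter:
    """Toy sketch of the request-router idea: map each application's
    hostname to the app execution engines currently running that app,
    and round-robin incoming requests among them. Illustrative only."""

    def __init__(self):
        self._engines = {}  # hostname -> list of engine addresses
        self._cursors = {}  # hostname -> round-robin iterator

    def register(self, hostname, engine_addr):
        # In the real system an app execution engine would announce
        # itself over the message bus; here we just record it directly.
        self._engines.setdefault(hostname, []).append(engine_addr)
        self._cursors[hostname] = itertools.cycle(self._engines[hostname])

    def route(self, hostname):
        # Pick the next engine serving this app, or fail if none exists.
        if hostname not in self._cursors:
            raise LookupError("no app execution engine for " + hostname)
        return next(self._cursors[hostname])

router = RequestRouter()
router.register("myapp.example.com", "10.0.0.5:9022")
router.register("myapp.example.com", "10.0.0.6:9022")
print(router.route("myapp.example.com"))  # alternates between the two engines
```

Scaling an app up then amounts to registering more engines for its hostname, which is exactly the control-plane hook the cloud controller and health manager exploit.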

    Auto-scaling Cloud Foundry

    “So, does it auto-scale?” seems to be the question everyone asks. (I wonder who started this auto-scaling business? …) The answer is “no, but trivially so”. There are actually two levels at which Cloud Foundry scales, whether automatically or not. The first is at the Cloud Foundry infrastructure level, e.g. how many app execution engines, how many request routers, how many cloud controllers, and how many services there are. The second level is at the individual application level and is primarily expressed in how many app execution engines are “running” the application (really, how many have the application loaded and are accepting requests from the request router).

    The first level of scaling the Cloud Foundry infrastructure is the responsibility of the PaaS operator. The operator needs to monitor the load on the various servers and launch additional servers or terminate idle ones as appropriate. In particular, there should always be a number of idle app execution engines that can accept the next application or that can be brought to bear on an application that needs more resources. This level of scaling can be performed relatively easily, manually or automatically, in RightScale. The app execution engines can be placed in a server array and scaled based on their load.

    The second level of scaling is the responsibility of each application’s owner. The nice thing about the modularity of Cloud Foundry is that it exposes the necessary hooks for adding external application monitoring and scaling decisions. It is also interesting that Cloud Foundry in effect exposes the resource costs and lets the application owner decide how much to consume–and pay for. This is in contrast to other systems that make it difficult to limit the resources other than by setting quotas, at which point an application is suspended rather than simply running slower.

    What we envision in working with Cloud Foundry is simple: RightScale will be able to monitor the various servers in the Cloud Foundry cluster, and determine for example when its “slack pool” of warm, ready-to-go app execution engines has dropped below a given threshold (or exceeded an idle threshold), and either boot new servers to add to the “slack pool” or decommission unnecessary ones to save on cost, as appropriate.
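    A slack-pool policy like the one just described reduces to a small decision function: keep the count of warm, idle app execution engines between two thresholds. The thresholds and naming below are assumptions for illustration, not RightScale's actual algorithm.

```python
def scaling_decision(idle_engines, low_water, high_water):
    """Sketch of a slack-pool policy: keep the number of warm, idle app
    execution engines between low_water and high_water. Returns how many
    servers to launch (positive) or decommission (negative)."""
    if idle_engines < low_water:
        return low_water - idle_engines      # grow the slack pool
    if idle_engines > high_water:
        return -(idle_engines - high_water)  # shed idle capacity to save cost
    return 0                                 # within the band: do nothing

print(scaling_decision(1, low_water=3, high_water=10))   # 2: launch two servers
print(scaling_decision(15, low_water=3, high_water=10))  # -5: decommission five
```

In a RightScale deployment this decision would drive resizing of the server array holding the app execution engines; the same shape of function works for request routers, just with much less frequent changes.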

    PaaS and IaaS Synergy

    The benefits of PaaS come from defining a constrained application deployment environment. That makes it necessary for many applications to “punch out” and leverage services outside of the PaaS framework. In some cases this may be a simple service, like a messaging server or a special form of data storage. In other cases it will end up being almost a reversed situation where a large portion of the application runs outside of the PaaS and the portions in the PaaS are really just complements or front-ends for the main system. Cloud Foundry makes it relatively easy to make outside services available to applications in the PaaS, but these outside applications still need to be managed. This is where an IaaS management framework like RightScale is great because it can bring the whole infrastructure under one roof.

    Some examples of this punching out:

    • Databases from the SQL variety to NoSQL and other models. Accessing legacy databases as well as leveraging popular DB setups like our MySQL Manager, which provides master slave replication.
    • Different load balancers in front of the request routers, perhaps with extensive caching features, global load balancing, or other goodies. Examples would be Zeus, Squid and many others.
    • Legacy or licensed software, for example video encoding software or PDF generators.
    • Special back-end services, such as a telephony server.

    If there’s one thing I’ve learned about customers at RightScale it’s the incredible variety of needs, architectures, and software packages that are in use. For this reason alone I see PaaS as another very nice tool in the RightScale toolbox.

    Can you run Cloud Foundry without RightScale? Of course. It certainly runs on raw servers. They can PXE boot a base image and join the PaaS in one of the above server roles. However in a mixed environment it is much more beneficial to run the Cloud Foundry roles within a managed infrastructure cloud.

    It seems obvious from the traditional SaaS/PaaS/IaaS cloud diagrams that these different layers were made to interoperate. And that’s what we’ve already seen our customers doing: combining PaaS and IaaS in ways that meet their needs. There are a number of PaaS solutions in the market with more on the horizon. We will continue to support as many as we can and to the extent that their architectures allow it, because cloud is a heterogeneous world and customers want choice. In the case of Cloud Foundry, we have a particularly open architecture that provides a compelling fit – and we’re excited to see where our joint customers take us together.

    Paul Fallon noted that CloudFoundry doesn’t implement a fabric in a 4/16/2011 post:

    RT @reillyusa: Great intro to #cfoundry architecture by Thorsten von Eicken of #rightscale > << +1 but no Fabrics!! ;)

    Thorsten von Eicken explained how to Launch VMware’s CloudFoundry PaaS using RightScale in a 4/13/2011 post to the RightScale blog:

    VMware’s Cloud Foundry release has the potential to be quite a watershed moment for the PaaS world. It provides many of the core pieces that are needed to build a PaaS in an open source form — VMware has put it together in such a way that it is easy to construct PaaS deployments of various sizes and also to plug-in different management strategies. All this dovetails very nicely with RightScale in that we are providing multiple deployment configurations for Cloud Foundry and will add management automation over the coming months.

    Advent of private PaaS

    Until now the notion of PaaS has lumped together the author of the PaaS software and its operator. For example, Heroku developed its PaaS software and also offers it as a service. If you want to run your application on Heroku, your only choice is to sign up for their service and have them run your app. Google AppEngine has the same properties. All this is very nice and has many benefits, but it doesn’t fit all use-cases by a long shot. What if you need to run your app in Brazil but Heroku or your PaaS service of choice doesn’t operate there? Or if you need to run your app within the corporate firewall? Or if you want to add some custom hooks to the PaaS software so you can punch out to custom services that are co-located with your app? All these options become a reality with Cloud Foundry because the PaaS software is developed as an open-source project. You can customize it and you can run it where you want to and how you want.

    Of course you can also go to a hosted Cloud Foundry service whenever you don’t want to be bothered running servers. This could be a public Cloud Foundry service that is in effect competing with Heroku, AppEngine and others, but it could also be a private service offered by IT or your friendly devops team mate. This opens the possibilities for departmental PaaS services that may have a relatively small scale and can be tailored for the specific needs of their users.

    Benefits of PaaS

    PaaS is really about two things: simplicity of deployment and resource sharing. The way a PaaS makes deployment simpler is by defining a standard deployment methodology and software environment. Developers must conform to a number of restrictions on how their software can operate and how it needs to be packaged for deployment. Restrictions is perhaps the wrong word here; a set of standards is a better way to phrase it, because just as some flexibility is lost, a lot of benefits are gained out of the box. It’s similar to no longer writing applications that tweak device interfaces directly, instead going through a modern operating system’s device driver interface. In the PaaS context, instead of having custom deployment and scaling methodologies for each application there is a standard contract. This makes for much simpler and cheaper deployment and reduces the amount of interaction necessary between the teams that produce applications and those that run them.

    Resource sharing is a second benefit of PaaS in that many applications can time-share a set of servers. This is similar to virtualization but at a different level. Where this resource sharing becomes interesting is when there are many applications that receive an incredibly low average number of requests per second. For example, a corporate app that is used once a quarter for a few days is likely to receive just a trickle of requests at other times. If virtualization were used, then at least some virtual machines would have to consume sufficient resources to keep the operating system ticking, the monitoring system happy, log files rotating, and a number of other things that are just difficult to turn off completely without shutting the VMs down, which may not be desirable for a number of reasons. In a PaaS, the cost of keeping such applications alive drops significantly.

    PaaS running in IaaS — Cloud Foundry with RightScale

    PaaS is sometimes believed to be at odds with IaaS, as if you have to choose one or the other. We believe in both models, and Cloud Foundry starts to fulfill that vision. RightScale enables Cloud Foundry to be deployed in a number of different configurations that vary in size, in underlying cloud provider, in geographic location, and in who controls the deployment or pays for it.

    With RightScale it becomes easy to set up a number of Cloud Foundry configurations for different use-cases. It is possible to set up a large deployment for many applications and really leverage the resource sharing benefits. But as some applications mature, have more stable resource needs, and perhaps need to be separated from others to improve monitoring, resource metering, or to allow for customization, this can be easily accomplished by launching appropriate deployments. Finally, some applications may outgrow the capabilities of a PaaS environment and require a more custom deployment architecture.

    Try it out!

    We’ve created an All-In-One ServerTemplate in RightScale that launches Cloud Foundry in one server on Amazon EC2. If you do not have a RightScale account you can sign up for one free (you will have to pay for the EC2 instance time though). The ServerTemplate is called “Cloud Foundry All-In-One“. When you launch it, grab a coffee, come back, and you’ll be able to load your apps up! (Note that currently a lot of components are compiled at boot from the source repositories, so the server takes ~10-15 minutes to boot; we will be optimizing that as soon as the code base settles down a bit.)

    I must say that this is one of the more exciting cloud developments in a while. I’ve been wanting to add good PaaS support to RightScale for a long time, and Cloud Foundry is now making it possible. We’ve been talking to Mark Lucovsky about his secret project for many moons and it’s really refreshing to see the nice, clean, simple architecture he and his team (hi Ezra!) have developed finally see the light of day. We’re now planning RightScale features around PaaS support, so please let us know what you’d like to see from us!

    NB: I had wanted to write about the architecture of Cloud Foundry and how it fits with RightScale ServerTemplates, but the timing was too tight. Stay tuned for a follow-on blog post in the next couple of days… Update: I did write the follow-up post Cloud Foundry Architecture and Auto-Scaling

    Audrey Watters reported Apple Hiring a Team to Build "the Future of Cloud Services" in a 4/13/2011 post to the ReadWriteCloud blog:

    A new job posting on the Apple website indicates the company is looking to fill the position of "Cloud Systems Software Engineer." The listing, first discovered by Apple Insider, is for a full-time job at the company's Cupertino campus.

    Apple says it's looking for an engineer "who has built high-performance, scalable and extensible systems." That engineer will join the team that is building "the future of cloud services at Apple!" That exclamation is from the original posting, which, in a nod perhaps to some of the questions raised by our Mike Melanson about what exactly constitutes "the cloud," has now been expunged of all cloudy references, replaced with the word "Web."

    Of course, the name change is more likely a response from the super-secretive Apple. Indeed, the listing - in either its original or altered form - gives very little detail about what Apple is up to with "the future of cloud services." Or the future of Web services, as the case may be.

    Of course, Apple has offered "cloud services" for quite some time via MobileMe. But there has been much speculation, including a story in The Wall Street Journal, that the company will launch an enhanced version of MobileMe this year that will include cloud storage for photos, music, videos and so on. According to those reports, an updated MobileMe would give users the option to stream their digital content to their iPhones and iPads, eliminating the need for local memory on those devices.

    With the recent launch of the Amazon Cloud Drive and its ability to stream music, both Apple and Google are certainly feeling the pressure to make a similar service available.


    Image via Apple Insider

    <Return to section navigation list>