Friday, April 01, 2011

Windows Azure and Cloud Computing Posts for 3/30/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Updated 4/1/2011 with new diagram terminology and added Workflow Service. Added articles marked by Ron Jacobs, Steve Yi, Windows Azure Team, Cihan Biyikoglu, Transcender Team, Kunal Chowdhury, Brian Hitney, Chris Hoff, Ryan Bateman, Bruce Hoard, Mike West, Windows Azure Team, Ernest Mueller, Adam Grocholski and Bruce Kyle.

• Updated 3/31/2011 with Parts 1 and 2 of Gill Cleeren’s Silverlight in the Azure Cloud series marked in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the post by itself; the section links will then navigate within it.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

Cihan Biyikoglu updated his SQL Azure Federations - How to build large scale database solutions with SQL Azure article in the TechNet Wiki on 3/31/2011:

What are Federations in SQL Azure?

SQL Azure Federations provide the ability to scale out the database tier of an application. A federation represents a dataset that is spread over many nodes; the service manages connection routing and online repartitioning to help scale the database tier on demand.

Benefits of Federations?

  • Unlimited Scalability: With federations, data is distributed across a number of nodes, and the collective computational capacity of those nodes allows massive scalability.
  • Elasticity with Online Redistribution of Data: Applications using federations can scale to the demands of the workload at a given time by expanding and contracting the dataset across a varying number of nodes. SQL Azure repartitions the data for this expansion and contraction online, without application downtime.
  • Great Price-Performance Characteristics: In combination with the pay-as-you-go model, applications using federations gain elasticity, and with online repartitioning they can expand and contract at any time to reach the best price-performance for a given workload.
  • Multi-tenancy-Friendly Programming Model: Federations also make it easy to transition to a multi-tenant storage model, with many tenants in a single SQL Azure database. Multi-tenancy further improves the application's economics by increasing tenant density and reducing management overhead.
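Since the Federations CTP has not shipped yet, the following is only an illustrative sketch of the range-partitioning idea behind that scale-out model: each tenant key is routed to the federation member whose key range covers it. All member names and boundary values here are made up:

```python
import bisect

# Hypothetical federation members: each covers keys from its low
# boundary (inclusive) up to the next boundary (exclusive).
MEMBER_LOWS = [0, 1000, 5000, 20000]          # sorted range boundaries
MEMBER_DBS = ["member_0", "member_1", "member_2", "member_3"]

def route(tenant_id):
    """Return the member database whose key range covers tenant_id."""
    i = bisect.bisect_right(MEMBER_LOWS, tenant_id) - 1
    if i < 0:
        raise ValueError("tenant_id below the federation's key range")
    return MEMBER_DBS[i]
```

Online repartitioning then amounts to splitting one member's range in two and moving the affected rows, while a routing table like the one above is updated behind the scenes.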

Further Information

Here is a collection of information that is publicly available on the topic of federations.

  • SQL Azure Federations Series – Intro:
  1. Evaluation of Scale-up vs Scale-out: how to evaluate the scalability options?
  2. Intro to SQL Azure Federations: SQL Azure Federations Overview
  3. Perfect scenarios and typical applications that highlight the power of SQL Azure Federations technology  
  4. How to scale out an app with SQL Azure Federations – quick walk through of building an app with SQL Azure Federations.
  5. Robust Connectivity model for SQL Azure Federations

Not to mention my Build Big-Data Apps in SQL Azure with Federation cover story for the March 2011 issue of Visual Studio Magazine. However, it’s all slideware until the SQL Azure team releases its promised Federations CTP.

Steve Yi reported a new MSDN Article: How to Connect to SQL Azure Using on 3/31/2011:

It's important to understand that connecting to SQL Azure is very similar to connecting to SQL Server. MSDN has written a tutorial on how to do so. The article provides a quick walkthrough and offers some considerations for connecting to a SQL Azure database. Take a few minutes to read through it, and then give it a try.

Click here to read the MSDN article
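For the ODBC-style case, the only SQL Azure-specific twists over on-premises SQL Server are the login form (user@server) and requiring encryption. A hedged sketch of the connection-string construction; the server, database and credential names are placeholders, and the driver name depends on what is installed locally:

```python
def sql_azure_connection_string(server_name, database, user, password):
    """Build an ODBC-style SQL Azure connection string.

    SQL Azure expects the login written as user@server and listens on
    TCP port 1433; encrypted connections should be required.
    """
    host = "%s" % server_name
    return (
        "Driver={SQL Server Native Client 10.0};"
        "Server=tcp:%s,1433;"
        "Database=%s;"
        "Uid=%s@%s;"
        "Pwd=%s;"
        "Encrypt=yes;"
    ) % (host, database, user, server_name, password)
```

The same string should work from ADO.NET or any ODBC client once the placeholders are filled in with real values from the SQL Azure portal.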


Welly Lee described Migrating Data into SQL Server Using SSMA in a 3/30/2011 post to the SQL Azure team blog:

SSMA provides the flexibility to migrate your data with one of the two options below:

  1. Client-side data migration engine: this option migrates data through the client machine where SSMA is installed, and provides a quick and easy way to migrate smaller databases into SQL Server.

  2. Server-side data migration engine: this option migrates data directly from the source database to the target database, and should be considered when migrating large databases.

In order to enable the server-side data migration engine, you will need the following:

  1. Install the SSMA Extension Pack

The SSMA extension pack installation file comes in the SSMA download but requires a separate installation.

  2. SQL Server Agent

The data migration operation is initiated from SQL Server through a SQL Agent job.

Note that SSMA provides a warning when connecting to the target while SQL Server Agent is not running: "Common Requirement: SQL Server Agent is not running. You must start SQL Server Agent to use Server-side data migration engine". You can ignore this warning if you do not plan to perform data migration, or if you decide to use the client-side data migration engine.

You can set the data migration option through the project setting:

  1. Navigate to Tools menu and go to Project Settings
  2. Select the migration engine option from the Migration menu

Mark Kromer found Business Intelligence 2.0 & 3.0 missing in the Cloud on 3/28/2011:

In an earlier blog post here, I spoke about BI 2.0 as the next generation of business intelligence solutions, such as what can be accomplished with RIAs like Silverlight. As Bart Czernicki explains very well in his book, BI 2.0 makes business intelligence easy to use, easy to access and interactive. Silverlight is perfect for this and allows the Microsoft platform stack to provide the entire solution.

So if we agree with that definition, then it seems natural that BI 3.0 would take that interactivity and access to the next level, making BI ubiquitous. The next evolution would use the cloud and social networking to bring analytics, and the deep insights that BI provides, into mainstream activities on multiple devices, with a focus on mobile.

Companies like Panorama are looking to exploit this new frontier in BI. But I still don’t see a complete solution that builds social intelligence on social networking to quickly analyze huge amounts of data without requiring a PhD, all delivered over the cloud. There are bits and pieces of this today. Here in Microsoft land, I can put the database and application in the cloud with SQL Azure and Windows Azure, serve reports from the cloud with Azure Reporting Services, analyze huge disparate data sets quickly with PowerPivot, and deliver it all with compelling Silverlight applications that can run on a PC, laptop or mobile phone.

But that is really hybrid BI 2.0 / 3.0, or hybrid on-premises / cloud. PowerPivot cannot sit in the cloud today, and neither can SQL Server Analysis Services. But I do believe that the market is ready for it. The ubiquity of mobile devices and social networks continues to move all areas of IT in this same direction.

UPDATE: I do agree that SharePoint provides a further step toward BI 3.0, particularly with the integrated BI site types in SharePoint 2010. What is still missing from that picture is a fully cloud-based BI deployment that would include PowerPivot, PPS and Excel Services.

<Return to section navigation list> 

MarketPlace DataMarket and OData

The Transcender Team offered an OData overview and a pair of simple sample queries in an OData, Oh My! post of 3/30/2011:

Slogging through the .NET 4 certification path, I am happy to find Microsoft adopting even more open standards. As open standards become more popular, the ideal of developing application logic while ignoring the plumbing details seems more and more like a possible reality. Well, a programmer can dream, right?

Anyway, one of these open standards is OData. WCF Data Services, formerly known as ADO.NET Data Services, uses the Open Data Protocol (OData) to expose data through addressable URIs, similar to REST (representational state transfer) services. OData supports both Atom and JSON (JavaScript Object Notation) formats for the payload.

Okay, so again, what is OData? It’s a simple HTTP mechanism for accessing data. For example, let’s say that I have an application and want to retrieve all titles provided by Netflix that contain the notorious actor Charlie Sheen. Using OData, you can just type in the following URL:$filter=Name eq ‘Charlie Sheen’&$expand=TitlesActedIn

If you are using IE, then you need to turn off feed reading view to see the results. Go to Internet Options and under the Content tab, click the Settings button in the Feeds and Web Slices section. Turn off reading view by unchecking the Turn on feed reading view checkbox.

Go ahead, try it. (Yeah, I forgot he was in Platoon, too.) What this query does is access the People set, filter it to a single actor and include the related Titles set. The $filter and $expand are keywords that limit entries and include related entries, respectively.

Let’s say that you like to listen to music while at work and want to retrieve all awesome live concerts available for instant streaming. Then, you would type a URL similar to this one:‘Must-See Concerts’)/Titles?$filter=Instant/Available eq true&$select=Name,Synopsis

In this case, we choose the Titles set from the Genre “Must-See Concerts.” Notice the $select keyword is used to limit the entry properties to only the name and synopsis.
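The pattern behind both queries is mechanical enough to generate. A small sketch that builds such URLs (the service root here is a placeholder, and the keyword arguments map onto the $-prefixed system query options):

```python
from urllib.parse import quote

def odata_url(service_root, resource, **options):
    """Build an OData query URL from $-prefixed system query
    options such as filter, expand and select."""
    query = "&".join(
        "$%s=%s" % (name, quote(str(value)))
        for name, value in options.items()
    )
    url = "%s/%s" % (service_root.rstrip("/"), resource)
    return url + "?" + query if query else url

# e.g. the Charlie Sheen query against a hypothetical service root:
url = odata_url("http://example.invalid/Catalog", "People",
                filter="Name eq 'Charlie Sheen'",
                expand="TitlesActedIn")
```

Note that spaces and apostrophes in the filter expression are percent-encoded; OData services accept either the encoded or the literal form in a browser's address bar.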

Okay, enough hand-holding. Try it out for yourself. Netflix has some more examples and eBay even has its own OData implementation.  So there’s the plumbing; I’ll let you move on to creating the applications!

The Marketplace DataMarket Team posted the first no-charge Utility Rate Service data set on 3/30/2011:

The Microsoft Utility Rate Service is a database of electric utilities and rates indexed by location within the United States, gathered from partner utilities and/or from publicly available information. Using this service, developers can find electricity providers and rates when available in a given area, or area averages as provided by the U.S. Energy Information Administration (EIA).

Using the Microsoft Utility Rate Service

This documentation describes how to use the Microsoft Utility Rate service to get information on utility rates across a range of Postal Codes.
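DataMarket datasets are themselves OData feeds; at the time, requests were authorized with HTTP Basic authentication using your Marketplace account key as the password and an empty username. A sketch of just the header construction; treat that scheme as an assumption to verify against the DataMarket documentation:

```python
import base64

def datamarket_auth_header(account_key):
    """Build the Basic-auth Authorization header value: the account
    key is sent as the password with an empty username."""
    token = base64.b64encode((":" + account_key).encode("ascii"))
    return "Basic " + token.decode("ascii")
```

The resulting value goes in the Authorization request header when fetching the dataset's OData URL.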


Sudhir Hasbe reported ComponentArt releases Digital Dashboards built on DataMarket on 3/20/2011:

ComponentArt is a leader in delivering Silverlight-based tools for building rich dashboards. ComponentArt's data visualization technology allows you to present, navigate and visualize your data like never before. Check out the dashboards built on public-domain data from DataMarket.


Scott Hanselman explained Enabling dynamic compression (gzip, deflate) for WCF Data Feeds, OData and other custom services in IIS7 in a 3/29/2011:

I'm working on a thing that uses an HttpWebRequest to talk to a backend WCF Data Service, and it'd be ideal if the traffic used HTTP compression (gzip, deflate, etc.).

On the client side, it's easy to just add code like this:

request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

or more manually

var request = (HttpWebRequest)WebRequest.Create("http://foofoo");
request.Accept = "application/json"; // Accept is a restricted header; set it via the property
request.Headers["Accept-Encoding"] = "gzip, deflate";
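AutomaticDecompression simply inflates the response body according to the Content-Encoding header the server sends back. A rough sketch of that client-side step in Python, handling gzip plus the two common deflate variants:

```python
import gzip
import zlib

def decode_body(body, content_encoding):
    """Decompress an HTTP response body per its Content-Encoding header."""
    encoding = (content_encoding or "").strip().lower()
    if encoding == "gzip":
        return gzip.decompress(body)
    if encoding == "deflate":
        try:
            return zlib.decompress(body)                   # zlib-wrapped deflate
        except zlib.error:
            return zlib.decompress(body, -zlib.MAX_WBITS)  # raw deflate
    return body  # identity / unknown encoding: pass through unchanged
```

The raw-deflate fallback matters in practice: some servers send deflate bodies without the zlib wrapper.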

However, you need to make sure compression is installed and turned on in IIS7 on your server.

Launch your IIS Manager and go to the Compression module.

Compression Button in IIS Manager

There are check boxes but, if the module isn't installed, you may see this yellow alert on the right side.

Compression Alert in IIS Manager

If it's not installed, go to the Server Manager, Roles, Web Server. Under Role Services, check your installed Roles. If Dynamic Compression isn't installed, click Add Roles and install it.

The Dynamic Compression module in IIS manager is installed

You can go back to Compression for your site and ensure Dynamic Compression is checked. At this point Dynamic Compression should be set up, but you really need to be specific about which mimeTypes will be compressed.

Back in IIS Manager, go to the page for the SERVER, not the SITE. Click on Configuration Editor:

The Configuration Editor in IIS Manager

From the dropdown, select system.webServer/httpCompression:

Selecting the httpCompression node in the Configuration Editor in IIS Manager

Then click on Dynamic Types and, now that you're in the list editor, think about what types you want compressed. By default */* is False, but you could just turn that on. I chose to be a little more picky and added application/atom+xml, application/json, and application/atom+xml;charset=utf-8 as seen below. It's a little gotcha that application/atom+xml and application/atom+xml;charset=utf-8 are separate entries. Feel free to add whatever mimeTypes you like here.

Adding MimeTypes graphically in IIS Manager

After you've added them and closed the dialog, be sure to click Apply and Restart your IIS Service to load the new module. …
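The same dynamicTypes edits can be scripted rather than clicked. A sketch of the equivalent AppCmd calls — the section and attribute names follow IIS7's httpCompression schema, but the '+' in atom+xml needs extra escaping, so only simpler types are shown; verify the result against your own applicationHost.config:

```shell
:: Run from an elevated command prompt on the IIS7 server.
cd /d %windir%\system32\inetsrv

:: Add mimeTypes to the server-wide dynamic compression list.
appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/json',enabled='True']" /commit:apphost
appcmd set config -section:system.webServer/httpCompression /+"dynamicTypes.[mimeType='application/xml;charset=utf-8',enabled='True']" /commit:apphost

:: Restart IIS so the new settings are picked up.
net stop was /y & net start w3svc
```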

Scott continues with a GUIs suck! Command Lines Rule! description and concludes:

Turning on compression is a VERY low-effort and VERY high-reward thing to do on your servers, presuming they aren't already totally CPU-bound. If you're doing anything with phones or services over low-bandwidth 3G or EDGE networks, it's a total no-brainer. Make sure you know what's compressed on your systems and what's not, and if not, why not.

Be explicit and know what your system/sites HTTP Headers are doing. Compression is step 0 in service optimization. I think I mentioned this in 2004. :)

<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF, Service Bus and Workflow

Ron Jacobs reported on 3/31/2011 that the PDC10–WF4 Session “Windows Workflow Futures” video was just posted to Channel9:


image Last year at PDC10 we gave you a preview of the next release of Windows Workflow Foundation.  The video has just been posted to Channel 9.

Windows Workflow Futures
Speaker: Ron Jacobs

Learn about the key investments we’re making in Windows Workflow Foundation (WF). See the WF improvements we’re working on for workflow authoring, hosting and management. Learn how we’re bringing WF to the cloud, and how WF and Windows Azure AppFabric will provide a great middle-tier platform for building, deploying, running and managing scalable workflow solutions in Windows Azure.

What took so long to publish the video?

The Windows Azure AppFabric Team blog posted 3/31/2011 - Updated IP addresses for AppFabric Data Centers on 3/31/2011:

Today (3/31/2011) the Windows Azure platform AppFabric updated the IP ranges on which the AppFabric nodes are hosted. If your firewall restricts outbound traffic, you will need to perform the additional step of opening your outbound TCP ports and IP addresses for these nodes. Please see the 1/28/2010 “Additional Data Centers for Windows Azure platform AppFabric” post, which was updated today to include the new set of IP addresses.

The Windows Azure AppFabric Team reminded users on 3/31/2011 with a Windows Azure AppFabric Scheduled Maintenance Notification (April 7, 2011):

Due to upgrades and enhancements we are making to Windows Azure AppFabric, the AppFabric Portal, located at, will be locked for updates for a few hours. During that time you will not be able to create, update or delete any namespaces.

There will be no disturbance to the services themselves, nor will there be any breaking changes as result of the upgrades.


  • START: April 7, 2011, 9 am PST
  • END: April 7, 2011, 9 pm PST

Impact Alert: You will not be able to create, update or delete any namespaces in the AppFabric Portal.

Action Required: None

If you experience any issues or have any questions please visit our Windows Azure Platform Forums.

We apologize in advance for any inconvenience this might cause.

Microsoft’s Venice Team posted a Senior Software Test Engineer (SDET) Job advert on 3/30/2011. Haven’t heard the Venice code-name? Read on:

Our job is to drive the next generation of the security and identity infrastructure for Windows Azure and we need good people.

We are the Venice team and are part of the Directory, Access and Identity Platform (DAIP) team which owns Active Directory and its next generation cloud equivalents. Venice's job is to act as the customer team within DAIP that represents the needs of the Windows Azure and the Windows Azure Platform Appliance teams. We directly own delivering in the near future the next generation security and identity service that will enable Windows Azure and Windows Azure Platform Appliance to boot up and operate. If you have great passion for the cloud, for excellence in engineering, and hard technical problems, the DAIP Team would love to talk with you about this rare and unique opportunity.

In this role you will:

  • Own and deliver test assets of Windows Azure's next generation boot time security and identity infrastructure
  • Work closely with DAIP and Azure test teams, ensuring integration of the services
  • Advocate for product quality and provide critical input into team strategy and priorities
  • Initiate and promote engineering best practices constantly improving engineering experience


  • 5+ years of software design, and development experience with C++, C#, Java or .Net programming
  • 3+ years of experience leading/architecting test automation development for shipped software
  • Excellent technical skills, attention to detail, strong debugging and problem-solving skills
  • Expertise in test automation frameworks and utilization of appropriate test methodologies and tools
  • Strong customer focus and passion for doing the right thing for the customer
  • Experience working through the full product cycle from initial design to final product delivery
  • Experience with Agile methodologies such as Scrum and TDD a plus
  • A BS degree in computer science, related discipline or equivalent experience

We are an agile, small team operating in a dynamic environment with lots of room to grow. We have embraced the services world, with fast release cycles focusing on fundamentals, engineering hygiene and doing things right. We are hardworking but strive to maintain the right work-life balance. Come and join us!

No significant articles today.

<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Adam Grocholski described How to Create a VPN for Your Home Network with Windows Azure Connect in a 3/31/2011 post to his Think First, Code Later blog:

image Back in October 2010 at PDC, Microsoft unveiled a new feature of the Windows Azure platform called Windows Azure Connect (which is part of the larger Windows Azure Virtual Network stack). The goal of Windows Azure Connect is as follows:

Windows Azure Connect provides a simple and easy-to-manage mechanism to setup IP-based network connectivity between on-premises and Windows Azure resources. This capability makes it easier for an organization to migrate their existing applications to the cloud by enabling direct IP-based network connectivity with their existing on-premises infrastructure. For example, a company can deploy a Windows Azure application that connects to an on-premises SQL Server database, or domain-join Windows Azure services to their Active Directory deployment. In addition, Windows Azure Connect makes it simple for developers to setup direct connectivity to their cloud-hosted virtual machines, enabling remote administration and troubleshooting using the same tools that they use for on-premises applications.

The basic idea is to enable hybrid solutions that can easily leverage both on-premises and cloud-based assets. This is a great resource for enterprises that want to leverage the Windows Azure platform but have some assets that can’t or shouldn’t move to the cloud. However, the cloud doesn’t just have to be for business; you can use it too!

This initial idea came to me while getting ready for a business trip. In the past I’ve used things like Windows Live Mesh and Dropbox to sync files to the cloud so I can access them while on the go if needed. I’ve also set up my own VPN to connect to my home network to accomplish the same purpose, but there’s a lot of work and maintenance involved, things I’d rather not spend time doing. So I began to wonder if I could use Windows Azure Connect to set up a VPN back to my home network while on the road. Typically when you see Windows Azure Connect on-premises services are connecting to various roles running in Azure, but I didn’t see a reason why you couldn’t just have remote machines connect to one another using the same technology. Here’s what I did to give it a go:

First I logged on to the Windows Azure Management Portal and click the Virtual Network option that appears in the lower left-hand side menu.


I then clicked on the Install Local Endpoint option and ran the installer. This installs a small agent on the machine that will be used to connect to the cloud.


Next I repeated steps 1 and 2 for each machine I want to connect through the cloud.

However, having the agent on each machine wasn’t enough; a group had to be created in the management portal. The group enabled me to specify which endpoints (read: machines) could talk to one another. I created a group by clicking the Create Group button


which displayed the following dialog:


Obviously I needed to provide a name and description, but there are a couple of other things to note:

(a) When I clicked the Add button I received a list of all the machines I installed the agent on. I selected them as endpoints I wanted to connect.

(b) I also had to be sure to click the “Allow connections between endpoints in group” checkbox to enable my machines to talk to one another.

Since I was only connecting my machines to each other and not any Azure roles I ignored the bottom half of the dialog.

All I then had to do was wait for the group configuration to get pushed to each machine that was part of the group I created (it took < 5 minutes).

And that’s it. Within 15 minutes I created a VPN that enabled me to access my network from anywhere.

Yay Cloud!

Additional Windows Azure Connect Resources

Troubleshooting Windows Azure Connect

Kenneth van Sarksum recommended Tech: Put a VM on Azure on 3/30/2011:

In December last year we reported that Microsoft had made a VM role available in its Azure cloud-computing platform; in that article we outlined how creating the VM was overwhelmingly complex, especially compared to solutions from other vendors.

Giovanni Marchetti, who is a Senior Technical Evangelist at Microsoft, published an article on how to publish a virtual machine to Windows Azure.

Basically the steps are:

  1. Generate an x509 certificate for use with the management API
  2. Prepare the VM by creating it on Hyper-V, installing Windows Server 2008 R2, installing the Azure integration components (so that the device drivers and management services required by the Azure hypervisor and fabric controller are provisioned), installing and configuring applications, and finally sysprepping the VM.
  3. Upload the VM to Azure using the csupload command line utility provided with the Azure SDK.
  4. Prepare the Service Model, which is a service definition and a service configuration file, which can be generated using Visual Studio 2010.
  5. Creating the Service in Azure
  6. Connect to the machine in Azure using RDP.

Here’s Giovanni Marchetti’s article, Put a VM on Azure, in full:

I have summarized here all the steps you need to take in order to deploy an Azure VM.

Step 1: Get your certificates

I assume that you have an active Azure subscription, have installed Visual Studio 2010, the Azure SDK and tools, and have activated the VM role. You will need a management certificate for your subscription to deploy services, and one or more service certificates to communicate with those securely. To generate an x509 certificate for use with the management API:

1. Open the IIS manager, click on your server.

2. Select "Server Certificates" in the main panel.

3. Click "Create Self-Signed Certificate" in the actions panel

4. Give the certificate a friendly name.

5. Close IIS manager and run certmgr.msc

6. Find your certificate in "Trusted Root Certification Authorities"

7. Right-Click on it, select All Tasks / Export

8. Do not export the private key, choose the DER format, give it a name.

9. Navigate to the Windows Azure management portal.

10. Select Hosted Services / Management Certificates / Add a Certificate

11. Browse to the management certificate file and upload it.

Step 2: Prepare the VM

I assume that you are familiar with Hyper-V and how to build a virtual machine on a hyper-v host.

  1. Create a virtual machine on Hyper-V. Note that the maximum size of the virtual hard disk you specify will determine what size of Azure VM you will be able to choose. An extra-small machine will mount a VHD up to 15 GB, a small one up to 35 GB, and a medium or larger up to 65 GB. This is just the size of the system VHD; you will still receive local storage, mounted as a separate volume.
  2. Install Windows Server 2008 R2 on the VHD. It is the only supported O/S as of this writing.
  3. Install the Azure integration components in the VM. They are contained in the wavmroleic.iso file, which is typically located in c:\program files\windows azure sdk\<version>\iso. You need to mount that file on the VM and then run the automatic installation process. This provisions the device drivers and management services required by the Azure hypervisor and fabric controller. Note that the setup process asks you for a local administrator password and reboots the VM. The password is encrypted and stored in c:\unattend.xml for future unattended deployment.
  4. Install and configure any application, role or update as you normally would.
  5. Configure the Windows firewall within the VM to open the ports that your application requires. It is recommended that you use fixed local ports.
  6. Open an administrator command prompt and run c:\windows\system32\sysprep\sysprep.exe
  7. Select "OOBE", Generalize and Shutdown

This process removes any system-specific data (including the name and SID) from the image, in preparation for redeployment on Azure. If your application depends on that data, you will have to take appropriate measures at startup on Azure (e.g. run a setup script for your application). The VHD is now ready to be uploaded. It is recommended to make a copy of it to keep as a template.

Note that any deployment to Azure starts from this VHD. No status is saved to local disk if the Azure VM is recycled for any reason.

Step 3: Upload the VM to Azure

For this you will need a command-line utility provided with the Azure SDK.

  1. Open a windows azure command prompt as administrator.
  2. Type

csupload Add-VMImage -Connection "SubscriptionId=<YOUR-SUBSCRIPTION-ID>; CertificateThumbprint=<YOUR-CERTIFICATE-THUMBPRINT>" -Description "<IMAGE DESCRIPTION>" -LiteralPath "<PATH-TO-VHD-FILE>" -Name <IMAGENAME>.vhd -Location <HOSTED-SERVICE-LOCATION> -SkipVerify

The subscription ID can be retrieved from the Azure portal, and the certificate thumbprint refers to the management certificate you created and uploaded before; the thumbprint can be retrieved from the portal as well. The description is an arbitrary string; the literal path is the full absolute path on the local disk where you stored your VHD. The image name is the name of the file once it is stored in Azure, and the location is one of those available in the Azure portal. Note that the location must be specific, e.g. "North Central US"; a region (e.g. "Anywhere US") is not accepted. SkipVerify will save you some time.

This command will create a blob in configuration storage and load your VHD file into it for future use; it will not create a service or start a VM for you. In the Azure portal, the stored virtual machine templates can be found under "VM Images".

Step 4: Prepare the service model

Azure requires a service definition and a service configuration file before deploying any role. These are .xml files that are packaged and uploaded to the fabric controller for interpretation. You can generate one for the VM using Visual Studio 2010.

1. Open Visual Studio 2010 and create a new Windows Azure project.

2. Do NOT add any role to the project from the project setup wizard.

3. In the solution explorer panel, right click on the project name and select New Virtual Machine Role. Note that a service may be made of several roles, including multiple VMs.

4. In the VHD configuration dialog, specify your Azure account credentials and which of the stored virtual machine templates you'd like to use.

5. In the Configuration panel specify how many instances you'd like and what type. Remember the size constraints on the system VHDs.

6. In Endpoints, specify which ports and protocol must be open for your applications within the virtual machine (they should match those configured before).

7. Note that RDP connections are configured elsewhere.

8. Once the VM role configuration is done, right-click on the project name and select Publish. You have the option to create the service configuration package only, to be uploaded later via the portal, or to actually deploy the project. I am assuming that you do not have a service defined yet. It is advisable to configure RDP connections, for debugging purposes at least, during staging.

9. Select Enable connections, then specify a service certificate. This will contain a private key used to encrypt your credentials. If you have none, you can create one from this interface. If you do create a new certificate, click View, Details and Copy to File to export it. Make sure to include the private key.

10. Specify a user name and password to connect to this virtual machine. Change the account expiration date as necessary (but set it before the certificate expires).

11. Select "Create Service Package Only" and save the package file.

Step 5. Create the service in Azure

1. In the Azure Management Portal, select Hosted Services / New Service

2. Populate the form, specifying a name for your service and deployment options. Note that the location you select must be the same specified at upload time for the virtual machine you want to use. Select the configuration package and file that you saved before. Add the certificate that you exported before for RDP.

3. Click OK to deploy. Start your deployed machines.

Step 6: Connect and enjoy.

From the machine where you generated the RDP certificate, connect to your virtual machines and test. Simply select the virtual machine in the Azure portal and click "connect". A RDP file will be generated for you to save and open. Once debugging is finished, it is recommended to disable RDP connections for production.

Avkash Chauhan explained Troubleshooting problems with VHD upload using CSUPLOAD Tool in a 3/30/2011 post:

image While uploading VHD for your VM Role using CSUPLOAD tool, if you experience issues, you can use the following methods to troubleshoot the problems:

Verifying Connectivity Settings:

To verify your connection string, use the "csupload get-connection" command; you will see the connection settings as below:

C:\Program Files\Windows Azure SDK\v1.4\bin>csupload.exe get-connection

Windows(R) Azure(TM) Upload Tool
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

ConnectionString          : SubscriptionId=***********;CertificateThumbprint=**************; ServiceManagementEndpoint=
SubscriptionId            : *********************
CertificateSubjectName    : CN=<Certificate_Subject_Name>
CertificateThumbprint     : <Certificate_Thumbprint>
ServiceManagementEndpoint :

Network Connectivity Issues:

To verify the connection from your machine to the Windows Azure Management Portal, run the "csupload get-location" command. If you see output like the following, network connectivity from your machine to the Windows Azure Management Portal is working:

c:\Program Files\Windows Azure SDK\v1.4\bin>csupload get-location

Windows(R) Azure(TM) Upload Tool
for Microsoft(R) .NET Framework 3.5
Copyright (c) Microsoft Corporation. All rights reserved.

Using the saved connection string...
Location : Anywhere US
Location : South Central US
Location : North Central US
Location : Anywhere Europe
Location : North Europe
Location : West Europe
Location : Anywhere Asia
Location : Southeast Asia
Location : East Asia
A total of 9 record(s) were found.

c:\Program Files\Windows Azure SDK\v1.4\bin>

Verifying Browser connection to Windows Azure Management Portal:

Create a URL similar to the one below using your Subscription ID:<Your_Subscription_ID>/locations

Open this URL in IE; if a network connection is established, you should get a dialog asking you to choose a certificate.

If you have any networking issue, you will receive an error (most probably 403: Forbidden).

Getting more output in Command Window:

You can create a new Environment Variable as below:


After that, open a new Command Prompt window in Administrator mode and run the CSUPLOAD command. You will see that many more details are available in the command window.

Generating detailed activity log:

To generate a detailed log file with more information about CSUPLOAD command activity, remove the comment markers around (i.e., uncomment) the following lines in the csupload.exe.config file, located in the same folder as csupload.exe:

<!-- uncomment to help debug errors -->
<source name="Microsoft.WindowsAzure.ServiceManagementClient"
        switchName="Microsoft.WindowsAzure.ServiceManagementClient">
    <add name="Debug"/>
</source>
<source name="csupload"
        switchName="csupload">
    <add name="Debug"/>
</source>
<add name="Microsoft.WindowsAzure.ServiceManagementClient" value="Verbose"/>
<add name="csupload" value="Verbose"/>
<add name="Debug"
     initializeData="csupload.log.txt" />
<trace autoflush="true"/>

After the above change, open a new command window and run the same test. You will see that a csupload.log.txt file is created containing a detailed log of the CSUPLOAD command activity.

Avkash Chauhan described Error uploading VHD using CSUPLOAD: The request channel timed out attempting to send after 00:01:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding on 3/30/2011:

When using CSUPLOAD to upload your VHD for use with the VM role, you may receive the following error:

Using the saved connection string...

Using temporary directory C:\AzureVM\Virtual Hard Disks...

An unexpected error occurred: The request channel timed out attempting to send after 00:01:00. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.


This error occurs if a connection to the Windows Azure Management site cannot be established due to general connectivity issues. The CSUPLOAD tool uses the credentials set in the connection string, and if the connection cannot be established you will get this error.

To confirm this, you can run the following command:

>  CSUPLOAD get-location

If there is any kind of connectivity problem from your machine to the Windows Azure Management Portal, you will get exactly the same error you received during the VHD upload.


To solve this problem, check your network settings and networking components (proxy, firewall, etc.). If the same command succeeds when run outside the network boundary, or from some other location, that verifies the issue is with the network where CSUPLOAD is running.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bruce Kyle reported the availability of an ISV Video: BI Analytics Solution Links On-Premises To Azure on 3/31/2011:

The Star Command Center is used by enterprises across major industries to manage and automate disparate reporting, planning, and analysis environments with applications, data warehouses and business intelligence tools from multiple vendors.


By orchestrating interoperability between on-premise and Cloud applications on the Windows Azure Cloud platform, the new edition will allow users to accelerate business decisions by accessing time-sensitive automated application processes using mobile phones, tablets or PCs.

Video Link: BI Solutions Join On-Premises To Windows Azure Using Star Analytics Command Center

Allan Naim, an Architect Evangelist from Microsoft, interviews Quinlan Eddy and Stephan Enni from Star Analytics about how the Windows Azure platform provides an easy and cost-effective way for enterprises to address the explosion of business intelligence, hosted and legacy applications spanning private and public clouds and on-premise applications.

About Star Analytics

Star Analytics is the leader in application process automation and integration software, offering enterprises a cost-effective way to automate and integrate disparate financial and operational applications for better business decisions. Fortune 1000 companies in all major industries use Star Analytics to improve business planning and analytics and reduce the burden on IT by automating the integration and management of financial processes and data. Star Analytics is a certified Microsoft partner.

Other partners include QlikTech, Informatica, interRel Consulting, IBM Cognos, SAP BusinessObjects, and Oracle. Headquartered in San Mateo, California, Star Analytics is a privately-held company backed by Hummer Winblad Venture Partners and LightSpeed Venture Partners.

Other ISV Videos

For videos on Windows Azure Platform, see:

Brian Hitney posted Rock, Paper, Azure Deep Dive: Part 1 on 3/31/2011:

If you’re not sure what Rock, Paper, Azure (RPA) is all about, check out the website or look over some of my recent posts.   In this series of posts, I want to go into some of the technical nuts and bolts regarding the project.

First, you can download Aaron’s original project on github (here and here).   The first project is the Compete framework, an extensible framework designed to host games like Rock, Paper, Scissors Pro! (the second project).    The idea, of course, is that other games can be created to work within the framework.


Aaron and the other contributors to the project (I remember Aaron telling me some others had helped with various pieces, but I don’t recall who did what) did a great job in assembling the solution.   When moving it to Windows Azure, we had a number of issues – the bottom line is, our core requirements were a bit different than what was in the original solution.   When I describe some of these changes in this and other posts, don’t mistake it for criticism of Aaron’s project.   Obviously, having used it at code camps and as the basis for RPA shows I have a high regard for the concept, and the implementation, in many parts, was quite impressive.

So, if you download those two projects on github, the first challenge is getting it up and running.  You’ll see in a few locations there are references to a local path – by default, I believe this is “c:\compete”.  This is the local scratch folder for bots, games, the db4o database, and the logfile.  Getting this to work in Windows Azure was actually pretty straightforward.   A Windows Azure project has several storage mechanisms.  When it comes to NTFS disk I/O, you have two options in Azure:  Local Storage, or Azure Drives.  

Azure Drives are VHD files stored in Azure Blob Storage and can be mounted by a VM.   For our purposes, this was a little overkill because we only needed the disk space as a scratch medium: the players and results were being stored in SQL Azure.  The first thing we needed to do to get local storage configured is add a local storage resource:


In this case, we just created a local storage area called compete, 4GB in size, set to clean itself if the role recycles.
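In the service definition, a local storage resource like the one just described is declared with a LocalStorage element. A minimal sketch (the role name here is an assumption; the resource name, size and recycle behavior follow the text above):

```xml
<!-- ServiceDefinition.csdef fragment (sketch; role name assumed) -->
<WebRole name="Compete.Site">
  <LocalResources>
    <!-- 4 GB scratch area, wiped when the role recycles -->
    <LocalStorage name="compete"
                  sizeInMB="4096"
                  cleanOnRoleRecycle="true" />
  </LocalResources>
</WebRole>
```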

The next step was to remove any path references.  For example, in Compete.Site.Models, you’ll see directory references like this:


Because there’s so much disk I/O going on, we created an AzureHelper project to ultimately help with the abstraction, and have a simple GetLocalScratchFolder method that resolves the right place to put files:
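A minimal sketch of what that helper could look like (the class and method names, the resource name "compete" and the fallback path all come from the surrounding text; the exact implementation is an assumption):

```csharp
using Microsoft.WindowsAzure.ServiceRuntime;

public static class AzureHelper
{
    public static string GetLocalScratchFolder()
    {
        // Inside Azure, resolve the LocalStorage resource declared in the
        // service definition; outside Azure, fall back to the original path.
        if (RoleEnvironment.IsAvailable)
            return RoleEnvironment.GetLocalResource("compete").RootPath;

        return @"c:\compete";
    }
}
```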


Now, we inject that call wherever a directory is needed (about a half dozen or so places, if memory serves).   The next major change was deciding: to Spark, or not to Spark?  If you look at the project references (and in the views themselves, of course), you’ll see the Spark view engine is used:


I’m no expert on Spark but having worked with it some, I grew to like its simplicity:


The problem is, getting Spark to work in .NET 4.0 with MVC 2 was, at the time, difficult.  That doesn’t appear to be the case today as Spark has been revived a bit on their web page, but we started this a few weeks earlier (before this existed) and while we recompiled the engine and got it working, we ultimately decided to stick with what we knew best.

The end result is the Bot Lab project.   While we’re using RPA with the idea that it can help others learn about Azure while having fun, it’s also a great example of why to use Windows Azure.  The Bot Lab project is around 1 MB in size, and the Bot Lab itself can be up and running in no time (open solution, hit F5).

Imagine if you wanted to host an RPS style competition at a code camp.  If you have a deployment package, you could take the package and host it locally if you wanted, or upload it to Windows Azure – hosting an extra small instance for 6 hours at a code camp would cost $0.30.   Best of all, there’s no configuring that needs to be done (except for what the application dictates, like a username or password).  This, if you ask me, is one of the greatest strengths behind a platform as a service.

Gill Cleeren (@gillcleeren) posted Silverlight in the Azure cloud - Part 2 to the Silverlight Show blog on 3/30/2011 (Part 1 follows below):

In the first part of this series, we looked at how we could move a Silverlight application to Windows Azure. The biggest conclusion there was that, from a Silverlight point of view, not much work was required. The Silverlight application itself – the XAP file – is in this case packaged along when publishing the web role in which the Silverlight application resides.

The database was moved to SQL Azure, a SQL Server in the cloud, hosted as a service and therefore benefitting from high availability and scalability. Next to the database, the services (in our case, a WCF service) were moved to the cloud as well. The Silverlight application talked to the services hosted in the cloud. We just had to change the address of the service in the configuration to be up and running again.

In this second part, we’ll take a look at some more advanced stuff. We’ll start by looking at how we should work if the Silverlight application is using RIA Services for its data needs. Next, we’ll look at how we can work with Azure storage from Silverlight. We’ll finish off by looking at the combination Windows Phone 7/Azure. A lot of ground to cover so let’s get to it!

The demos for this second part can be found here.

RIA Services in the cloud

When we look at a typical RIA services application, there’s a clear link between the client and the server project, both when the services are hosted in a site and when we use a separate RIA services class library. This link can be seen in the fact that within the Silverlight application, code gets generated based on the code in the RIA services classes. Also, the client project has a link to the server project.


RIA Services requires a few assemblies to be available in the web project. When we add a RIA service to a project, Visual Studio automatically creates references to:

  • System.ServiceModel.DomainServices.EntityFramework
  • System.ServiceModel.DomainServices.Hosting
  • System.ServiceModel.DomainServices.Server
  • System.ComponentModel.DataAnnotations

The screenshot below shows these references in the web project. Note that if you are building a RIA services library, these assemblies are added in the server-side project as well.


When we deploy to Azure, we are actually deploying to a virtual machine running Windows Server 2008. There’s also an IIS instance that hosts your web application. However, this machine doesn’t have the RIA Services assemblies installed (they are not in the GAC, since no installer put them there). Therefore, when we deploy an application that consumes RIA Services, we need to make sure these assemblies are included in the package we deploy.

In the sample application, this is solved by setting the 3 System.ServiceModel.DomainServices assemblies to Copy Local → True. This ensures that these assemblies aren’t referenced from the GAC (as on our local development machine), but instead are copied to and referenced from the local bin directory.
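Behind the scenes, flipping Copy Local to True in the Properties window is persisted as a Private element on each reference in the web project’s .csproj file, roughly like this:

```xml
<!-- .csproj fragment: Copy Local = True is stored as <Private>True</Private> -->
<Reference Include="System.ServiceModel.DomainServices.Hosting">
  <Private>True</Private>
</Reference>
<Reference Include="System.ServiceModel.DomainServices.Server">
  <Private>True</Private>
</Reference>
```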


You can easily see if you did this correctly by looking at the size of the package created when publishing. In the sample case, with the assemblies correctly included, the package is almost 9MB; with Copy Local set to False, it was only 7.5MB.

Apart from this change, RIA services applications work perfectly in Windows Azure. There’s no need to perform any configuration changes.

Blob storage and Silverlight

Azure storage provides scalable storage for your files. Azure storage actually consists of three parts: Blob Storage, Table Storage and Queue Storage. Blob Storage can be used to store binary and textual files/data in the Azure cloud. Each file added to the cloud is replicated, so it is safely stored with an almost non-existent chance of loss. Table Storage provides scalable storage for non-relational, structured data. Finally, Queue Storage can store messages and can be used, for example, to let a web role and a worker role communicate with each other.

While all of these are usable from a Silverlight client application, I’ll focus on blob storage, since that is the one I have used most from a Silverlight perspective. If you want to use table or queue storage, take a look at their REST APIs. Note that blob storage also has a REST API, but I won’t be using it here.

Instead, I want to securely store files in blob storage and retrieve them from a Silverlight application. To store files, we first have to create a storage account via the Azure portal, as shown in the screenshot below. Each Windows Azure account can have multiple storage accounts, and each storage account can hold up to 100TB of data!


Once created, a URL gets created for this storage account (for all 3 services), as shown in the portal screenshot below.


The portal does not give you the ability to upload files. Instead, you need a tool such as CloudBerry or Cloud Explorer. The screenshot below shows Cloud Explorer on my storage account, where I have a few containers created. A container can be compared to a folder; it can contain other containers or files.


Per container, we can specify access permissions. A file can be publicly accessible or private. The files in this case will be used inside a public Silverlight application, so the container gets Full public read access. If you want to store files that users have to pay for, you could use private access and use Shared Access Signatures to allow users to access files with a key token.


The files are now accessible via a URL, shown below.


We can now use this URL inside of a Silverlight application. Files such as a video or an image work in the same way. In the code below, we are using the file stored in blob storage as the source of a MediaElement.

<Grid x:Name="LayoutRoot">
    <MediaElement Width="300" Height="200" Stretch="Uniform"
                  Source="" />
</Grid>
Reading an XML stored in blob storage

If we store XML files in Azure storage and want to read them from a Silverlight application, we will run into cross-domain restrictions. The Silverlight application runs from our domain, while the XML we want to read is stored in blob storage. The solution is quite simple, though: we need to place a clientaccesspolicy.xml file in the root container.

However, the root container can, out of the box, only contain other containers, not files. The solution is to create a container named $root. This $root is a special container that points back to the root. Inside this container, we can place the clientaccesspolicy.xml file as shown below.
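A typical clientaccesspolicy.xml for this scenario (this permissive version allows any client to read the account; tighten the domain list for production) looks like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <!-- Allow calls from any origin; restrict this in production. -->
      <allow-from http-request-headers="*">
        <domain uri="*" />
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```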


From Silverlight, the file can be read as follows, using the direct link to the XML in blob storage:

private void LoadTrendingTopics()
{
    WebClient client = new WebClient();
    client.DownloadStringCompleted +=
        new DownloadStringCompletedEventHandler(client_DownloadStringCompleted);
    client.DownloadStringAsync(new Uri("", UriKind.Absolute)); // direct blob URL of the XML
}

void client_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
{
    if (e.Error == null)
    {
        XDocument document = XDocument.Parse(e.Result);
        List<Trend> twitterData = (from status in document.Descendants("trend")
                                   select new Trend
                                   {
                                       TrendingTopic = status.Value,
                                       Url = status.LastAttribute.Value
                                   }).ToList();
        TrendingTopicListBox.ItemsSource = twitterData;
    }
}
XAP in Azure storage

In part 1, I concluded by mentioning that we can’t just update the XAP file when it’s hosted in a hosted service, because an update basically comes down to upgrading the instance to a new version. A solution to this problem is placing the XAP file in blob storage, using the following steps.

Step 1: Create a container for the XAP file

While not strictly necessary, I advise using a container specific to the XAP file.


Step 2: Place the XAP file in blob storage

After letting Visual Studio create the XAP file, copy it to blob storage using Cloud Explorer.


Step 3: Update the URL in your ASPX

In the web project, update the URL of the source parameter, like below.

<div id="silverlightControlHost">
    <object data="data:application/x-silverlight-2," type="application/x-silverlight-2"
            width="100%" height="100%">
        <param name="source" value="" />
        <param name="onError" value="onSilverlightError" />
        <param name="background" value="white" />
        <param name="minRuntimeVersion" value="4.0.50826.0" />
    </object>
</div>
Step 4: Publish the hosting site in a web role to Azure

We can now go ahead and publish the site to Azure, with the XAP being referenced from blob storage.

Using this approach, we can easily update the XAP file without having to republish the site.

Windows Phone and Azure

The final topic I want to discuss here is Windows Phone and Azure. Many applications for Windows Phone use some kind of service layer to talk to a database backend. If we host these services in Azure, Windows Phone can access them perfectly well. There’s no real difference, since we use Silverlight as the development platform.

Azure can be very interesting in a WP7 scenario. The cloud is always available, while Windows Phone apps don’t always have a network connection. Occasionally connected applications can store files locally and upload them to the cloud when a connection is available.
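As a sketch of that occasionally connected pattern (the class, file name and service call below are assumptions, not part of the sample code), a phone app can check for connectivity and queue its data in isolated storage until the network returns:

```csharp
using System.IO;
using System.IO.IsolatedStorage;
using System.Net.NetworkInformation;

public class OfflineCache
{
    private const string CacheFile = "pending.txt"; // assumed file name

    public void Save(string payload)
    {
        if (NetworkInterface.GetIsNetworkAvailable())
        {
            UploadToCloud(payload); // hypothetical call to the Azure-hosted WCF service
        }
        else
        {
            // No connection: append the payload to isolated storage for later upload.
            var store = IsolatedStorageFile.GetUserStoreForApplication();
            using (var stream = store.OpenFile(CacheFile, FileMode.Append))
            using (var writer = new StreamWriter(stream))
                writer.WriteLine(payload);
        }
    }

    private void UploadToCloud(string payload) { /* WCF service call goes here */ }
}
```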


This concludes our mini-series on Silverlight and Azure. In this second part, we looked at some practical implementations, including RIA Services, Windows Phone and blob storage.

Gill Cleeren (@gillcleeren) posted Silverlight in the Azure cloud - Part 1 to the Silverlight Show blog on 3/29/2011:

Cloud computing is a hot topic nowadays. The ability to have access to an unlimited amount of resources when we need it is awesome and can help us deliver better and more stable applications, while keeping the cost low. Many vendors, including Microsoft, Amazon and Google have their own cloud implementations.

Since PDC 2008, Microsoft jumped on the cloud-computing bandwagon and introduced Windows Azure. Microsoft envisions the cloud as the future. There are many great advantages that come with cloud computing, including nearly unlimited scaling, high availability and low cost to get started, all offered by Azure. When combining the power, scalability and high availability of Azure with a rich client platform like Silverlight, we can create even more compelling experiences.

Silverlight works great with Azure and vice versa. From a professional point of view, I have done a few migrations of existing Silverlight applications to the Azure platform for my company. Moreover, I now use Azure for delivering intermediate builds of my Silverlight projects to the customer, without all the hassle of obtaining and installing a server, making it available outside the domain, etc. With the tools I have at my disposal for Azure, I can deploy to the cloud in a snap, and we only pay for the days the server is online.

This all inspired me to give a webinar on SilverlightShow titled Switching on the cloud for Silverlight, delivered on March 23, 2011 and available for on-demand viewing here. While there’s a lot of information in the talk, I decided to write this 2-part article series as an extra companion to the webinar. Or maybe you don’t have an hour to spend and just want to read about Silverlight and Azure; in that case, these articles will give you the information you need to get started.

In this first part, we’ll be looking at how we can migrate a regular Silverlight application to the cloud. In the second part, I’ll be focusing on the more advanced stuff, such as migrating RIA services apps, XAP hosting and accessing cloud services from Windows Phone.

The demo for this part can be downloaded here.

Before we go ahead and look at how to migrate applications to the cloud, let me start by explaining what you need to know to get started with Azure. I’ve noticed that almost all .NET developers know what Windows Azure is, but not everyone is clear on all the parts that make up Windows Azure.

Things the Silverlight developer needs to know about Azure

To get rolling with Azure, it’s vital that you at least know what Windows Azure is. Basically, it’s Microsoft’s implementation of a cloud platform: an operating system for the cloud – a cloud that runs in Microsoft’s datacenters, which are spread all over the world. Roughly speaking, Azure provides us computing power, storage and the capability to connect our on-premise systems with machines in the cloud. The computing block is a place where code can be run, and the storage can be used to store an unlimited number of files with high availability and replication. Since the applications we want to host on Azure are web applications, it’s important for a Silverlight developer to know how to use the cloud: we may want to host services on Azure, store image files in Azure storage, etc.

When starting with Azure, you need to know what parts it encompasses and how they can be useful to you as a Silverlight developer. Let’s take a look.

Parts of Azure
  • Hosted services provide a way of executing applications. Web roles (a role can be seen as an instance) can be used to run an ASP.NET application or a service application. Worker roles can execute other EXEs, for example a different web server or runtime.
  • Storage gives us a way to store data in the cloud. Blob storage gives the ability to store files in the cloud. Each file that’s stored is replicated, so it’s safe to say a file is difficult to lose once stored in blob storage. Table storage provides a way of storing non-relational data, and queue storage provides a way for different roles to communicate by storing messages in a queue.
  • SQL Azure is a SQL Server database in the cloud
  • AppFabric includes services we can use in our own applications. Access Control allows us to integrate with other authentication systems such as GMail or Facebook. Caching allows us to use a distributed cache. The Service Bus is a messaging system that allows exchanging messages between cloud and on-premise systems.

The Silverlight developer should certainly take a look at Hosted Services, Storage and SQL Azure, since these can make his life a lot easier! Hosted services can be used to run a website from Azure: this can be a site that hosts WCF services as well as a site that hosts the Silverlight application. Azure Storage can be useful to store images, videos and access them from Silverlight. Finally, SQL Azure can replace an on-premise database server.

Migrating an existing Silverlight application to the cloud

To demonstrate the use of hosted services and SQL Azure in a Silverlight scenario, let’s follow the steps to migrate an existing application to Azure. The application, a typical Line-of-Business application, consists of the following components:

  • Data hosted in a SQL Server 2008 R2 database
  • Data is made available over a Silverlight-enabled WCF Service (using a BasicHttpBinding)
  • Silverlight application is hosted in an ASP.NET application
  • Silverlight Navigation application uses the data exposed from the WCF service. It has a service reference to this service

Let’s take a look at the different steps to migrate this application.

Step 1: the database

Up first is the database. The screenshot below shows the current – local – database schema.


In this local database, we already have data.

To migrate this to the cloud, we start by creating a new database in SQL Azure. Assuming you already have a Windows Azure account, we can create and manage these databases from the portal. In the screenshot below, a new database named CityHotels was created. Since the database is rather small, I selected Web as the edition.


With the database created, we need to migrate our existing database schema and the data to the cloud database. A useful tool for this is the SQL Azure Migration Wizard, available from CodePlex. By following this wizard-driven tool, we can analyze and migrate a database. The screenshot below shows the tool in action.


It’s recommended to let the tool do an analysis first. Not everything a regular SQL Server database can do can be copied, and this analysis will find exactly those things. I remember that at one customer we had to perform a manual change on an ASP.NET Membership table. After the tool is done, the database should be available.

To finish this first step, we change the connection string in the web.config of the services so it refers to the database in the cloud. Nothing else needs to change for this to work, as the database is accessible from our application.

<add name="CityHotelsEntities"
     connectionString="metadata=...;provider=System.Data.SqlClient;provider connection string=&quot;data source=;initial catalog=CityHotels;user id=gillcleeren;&quot;"
     providerName="System.Data.EntityClient" />
Step 2: the services

Once the database is moved to Azure, we can continue our migration quest by moving the service layer to Azure as well. Assume, as in the following screenshot, that the services are in a separate project called ServiceHostingSite.


To move this site to the cloud, we need to add a new Windows Azure Project (called here CityHotelBrowserCloud). We don’t need to have the template create any empty projects, so when adding the project, just click OK to continue.


In the Solution Explorer, the new project is added. Now right-click on the Roles folder and select Add Web role project in solution. Go ahead and select the ServiceHostingSite project.


By adding this role, we basically configured an instance to execute this code. If we right-click the project and select Properties, we can configure how the role should behave. In the screenshot below, I configured 2 instances, meaning that 2 virtual machines will be created that each run my code independently. Should one fail, the second stays up while the first reboots. We can also select the VM size here. It’s easy to see that the more instances you configure, the more expensive things get; selecting larger instances also increases the cost.
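The instance count set in the Properties window ends up in the service configuration file. A sketch of the relevant fragment (service and role names follow the sample project; everything else is the standard schema):

```xml
<!-- ServiceConfiguration.cscfg fragment: two instances of the service-hosting role -->
<ServiceConfiguration serviceName="CityHotelBrowserCloud"
    xmlns="">
  <Role name="ServiceHostingSite">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>
```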


If we now run this, we see the emulator start up.


The Compute Emulator shows the running instances.


If we look at the address in the browser bar, we can see that this site (the ServiceHostingSite) is now running in the emulator. We therefore need to update the endpoint address in the ServiceReferences.clientconfig of the Silverlight application as well:

<endpoint address=""
binding="customBinding" bindingConfiguration="CustomBinding_CityHotelService"
name="CustomBinding_CityHotelService" />

Finally, we can publish the service. Right-click the cloud project and select Publish, then choose the “Create Service Package Only” option. This generates 2 files: a package containing the site and a configuration file.


Now head back to the Azure portal and select Hosted Services. On that page, select New Hosted Service and enter a name for your service. At the bottom, browse to your package and configuration file. The screenshot below shows the filled-in screen.


After your service is deployed, remember to update the service reference in the Silverlight application to the “live” version of the services.

Step 3: the Silverlight application

We are now ready to move the Silverlight application itself to Azure as well. Create a new web role and select the Silverlight hosting site (CityHotelBrowser.Web) as the second web role. Alternatively, you can create a new cloud project and add a web role there. The latter would be the best choice in real-world scenarios; the former is cheaper but hosts 2 sites on different ports. For simplicity, I’ve chosen to host both in the same project here.


When publishing this second web role, the ClientBin, including the XAP file, is uploaded as part of the hosted service: all files are packaged, so the XAP file is included as well. When a user now browses to the hosting website, the Silverlight XAP file is downloaded from the cloud server instead of our own server.

This does, however, pose a problem. When we want to update just the XAP file, we can’t change that one file; instead, we need to republish. More on this in the second part of this series!


In this first part, we looked at how we can easily migrate a typical Line-of-Business application to the Windows Azure cloud. As you can see, for Silverlight itself not much changes. For the server-side code, however, quite a lot does, mostly behind the scenes. By moving to the Azure cloud, our services, database and hosting website benefit from the high availability and scalability of Windows Azure.

Stay tuned for the second part [see above] where I’ll be looking at more advanced scenarios.

Gill is a Microsoft Regional Director for Belgium, Silverlight MVP (former ASP.NET MVP), INETA speaker bureau member and Silverlight Insider.

Doug Rehnstrom considered Amazon EC2 or Microsoft Windows Azure for hosting an ASP.NET Dynamic Data site in a 3/30/2011 post to the Learning Tree blog and settled on Windows Azure:

I haven’t written a blog post for a while because I’ve been programming. So much fun!

I’m working on a new Web site using Microsoft ASP.NET Dynamic Data and Entity Framework. ASP.NET Dynamic Data is a bit like Ruby on Rails: it allows Web pages to be automatically generated based on the data they display. Entity Framework automates all the data access code. I’m using what’s called a “model-first” implementation of Entity Framework, so I don’t even have to create the database manually. That is automated as well!

The whole thing is very cool. I create the models using a graphical designer, and apply attributes for field validation and formatting. Then, on one side the database code is generated, and on the other side the user interface is generated. It’s a bit more complicated than I make it out to be, but once you have it figured out, making changes to the application is very simple. That’s why I wanted to use this approach in the first place. (Check Learning Tree course 2620 to learn more about it.)
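As a quick sketch of what that attribute-driven approach looks like (a hypothetical entity, not Doug’s actual model), Dynamic Data reads System.ComponentModel.DataAnnotations attributes like these to generate both validation and formatting in the scaffolded pages:

```csharp
using System;
using System.ComponentModel.DataAnnotations;

// Hypothetical model-first entity; Dynamic Data scaffolds the list and
// edit pages from it and enforces these attributes in the generated UI.
public class Course
{
    public int CourseId { get; set; }

    [Required]
    [StringLength(100)]
    public string Title { get; set; }

    [DataType(DataType.Currency)]
    [Range(0, 10000)]
    public decimal Fee { get; set; }

    [DisplayFormat(DataFormatString = "{0:d}")]
    public DateTime StartDate { get; set; }
}
```

With model-first Entity Framework, the database table for this entity is generated from the model, and the Dynamic Data page templates render and validate each field according to the attributes.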

Okay, so what does this have to do with the cloud? I have to decide where to deploy my program: on EC2 or on Windows Azure. I guess I’m writing this article so I can weigh the pros and cons of each.

Advantages of Amazon EC2
  1. EC2 is cheaper, at least to start. I can get an EC2 Windows 2008 R2 Server instance up and running for about $40 per month.
  2. EC2 is familiar. The nice thing about EC2 is it’s like having your own Windows Server without buying the hardware. I can do anything I want to it; I just have to remote desktop into it.
  3. I’m already using EC2 for a couple of projects.
Advantages of Microsoft Windows Azure
  1. Azure may be cheaper than EC2 in the long run. Azure is a zero-maintenance solution: you just deploy your application, and Microsoft takes care of the software, patches and backups. There’s a cost to maintenance which has to be taken into account when comparing EC2 to Azure. The problem is that this cost is a bit hard to calculate.
  2. Scalability with Windows Azure is seamless. There’s a good chance this application will grow to have many users and consume a massive amount of data. If it does, adding additional machines with Windows Azure is as simple as changing a value in the configuration file.
  3. It’s completely integrated with Visual Studio. Once set up, deploying changes from Visual Studio to Windows Azure is just a couple clicks.
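The “value in the configuration file” mentioned in point 2 is the role’s instance count in ServiceConfiguration.cscfg. A minimal sketch (the service and role names here are placeholders):

```xml
<ServiceConfiguration serviceName="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Scale out by raising this count; no code changes required -->
    <Instances count="3" />
  </Role>
</ServiceConfiguration>
```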


Well, I haven’t made up my mind yet, but interestingly I’m not even considering setting up my own server. Maybe I should go to Learning Tree’s Cloud Computing course. That course covers cloud computing in general and explores a number of different vendors and options for taking advantage of the cloud.

You might also like to go to Learning Tree’s Windows Azure course. That course covers Windows Azure in detail.

I wrote and linked many posts about ASP.NET Dynamic Data when it was in beta and first released to the Web in 2009. Click here to check them out. (Click Show Older Posts to view the 2009 articles.) Entity Framework v4.1 makes ASP.NET Dynamic Data a much better data scaffolding solution.

Wade Wegner (@WadeWegner) announced on 3/29/2011 his 00:10:19 Getting Started with the Windows Azure Toolkit for Windows Phone 7 video segment on Channel9:

I have just published a short screencast that demonstrates how to get started with the Windows Azure Toolkit for Windows Phone 7.  This toolkit is designed to make it easy to build phone applications that use cloud services running in Windows Azure.


In this screencast you will learn:

  • Where to download the toolkit
  • How to install and setup the toolkit
  • How to use the new project templates

To get started, head to the CodePlex project. For additional information, you can also take a look at my previous blog post announcing the toolkit or an excellent explanation of the toolkit from Mariano Converti. You can also watch Cloud Cover Episode 41 to learn more about the toolkit.

<Return to section navigation list> 

Visual Studio LightSwitch

Kunal Chowdhury described How to install LightSwitch Beta 2? in a 3/29/2011 post:

Visual Studio LightSwitch is a new tool for building data-driven Silverlight applications using the Visual Studio IDE. It automatically generates the user interface for a data source without writing any code. You can also write a small amount of code to meet your requirements.

LightSwitch is currently in its Beta 2 stage. Read this post to learn about the installation process and where to download the installer for free.

If you are very new to LightSwitch, you can easily download it from Microsoft Download Centre and install it. You can find it here:

Before installing LightSwitch Beta 2, make sure that you don't have a previous version of LightSwitch installed.

If you already have LightSwitch Beta 1 installed, then follow the below steps:

  1. First of all, uninstall LightSwitch Beta 1
  2. Uninstall LightSwitch Beta 1 VSLS Server prerequisites
  3. If you have Visual Studio 2010, install Visual Studio 2010 SP1
  4. Install LightSwitch Beta 2 at the end

LightSwitch installer will install/update the following items depending on the installed package:

  • .NET Framework 4.0
  • Silverlight 4.0 Runtime and SDK
  • SQL Server Compact Edition and dependent libraries
  • TFS 2010 Object Model
  • Visual C++ runtime and redistributable
  • WCF RIA Services V1 SP1
  • Visual Studio LightSwitch Beta 2 and dependent packages

See related Parts 1 and 2 of Gill Cleeren’s Silverlight in the Azure Cloud series marked • in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section.

Robert Green announced an Updated Post on Extending a LightSwitch Application with SharePoint Data on 3/30/2011:

I have just updated Extending A LightSwitch Application with SharePoint Data for Beta 2. I reshot all the screens and have made some minor changes to both the text and the narrative. There are two primary differences. The minor change is that I need to add code to enable editing of data in two different data sources. The more significant change is in what I need to do to create a one-to-many relationship between Courses in SQL Server and KBs in SharePoint.

I have now updated all but one of my Beta 1 posts. One more to go.

Bing Videos offered a 00:08:31 An Overview of Microsoft Visual Studio LightSwitch segment on 3/17/2011 (missed when published):

An overview of Visual Studio LightSwitch Beta 2, a simpler way to create high-quality business applications for the desktop and the cloud.

It’s only had 103 views. You might find some of the related VSLS video segments interesting, although most cover beta 1.

<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

•• Mike West compared PaaS offerings from Salesforce, Microsoft, Oracle, Progress, Apprenda, Corent and GigaSpaces in his SaaS Enablement Platforms: Fast Path to Multi-tenancy or Lock-In? Research Alert of 3/31/2011 for Saugatuck Technology (site registration required):

What Is Happening:

As independent software vendor (ISV) migration to the Cloud continues, and as the many benefits of multi-tenancy have become clearer, Cloud platform providers have recently been touting better, faster, cheaper ways for ISV applications to become multi-tenant.


Established solutions from Salesforce and Microsoft provide a multi-tenant solution for applications written from the ground up. Microsoft also offers a virtualization solution on Azure that is not truly multi-tenant, as do many other providers in the market. Enablement offerings from Apprenda, Corent, GigaSpaces, and others, all promise a much quicker route to the Cloud for ISVs through a middleware platform for multi-tenancy. Oracle and Progress Software have solutions for their current ISVs that achieve multi-tenancy quickly, as well as new-build platforms for multi-tenant solutions.

However, as always, there are tradeoffs to each of these solutions. None of them is the holy grail – a truly plug-and-play middleware solution for instant multi-tenancy. ISVs will have to make imperfect choices to move to the Cloud, or else miss the market window.

Mike continues with the usual “Why Is It Happening” and “Market Impact” sections.

•• The Windows Azure Team announced Windows Azure Guest OS 2.3 (Release 201102-01) on 3/31/2011:


Now deploying to production for guest OS family 2 (compatible with Windows Server 2008 R2).

The following table describes release 201102-01 of the Windows Azure Guest OS 2.3:

Friendly name: Windows Azure Guest OS 2.3 (Release 201102-01)

Configuration value:

Release date: March 28, 2011

Features: Stability and security patch fixes applicable to Windows Azure OS.
Security Patches

This release includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

  • Vulnerabilities in Microsoft Data Access Components Could Allow Remote Code Execution
  • Cumulative Security Update for Internet Explorer
  • Vulnerability in Internet Information Services (IIS) FTP Service Could Allow Remote Code Execution
  • Vulnerability in the OpenType Compact Font Format (CFF) Driver Could Allow Remote Code Execution
  • Vulnerability in JScript and VBScript Scripting Engines Could Allow Information Disclosure
  • Vulnerabilities in Windows Kernel Could Allow Elevation of Privilege
  • Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege
  • Vulnerabilities in Kerberos Could Allow Elevation of Privilege

Windows Azure Guest OS 2.3 is substantially compatible with Windows Server 2008 R2, and includes all Windows Server 2008 R2 security patches through February 2011.

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.

•• The Windows Azure Team announced Windows Azure Guest OS 1.11 (Release 201102-01) on 3/31/2011:

The following table describes release 201102-01 of the Windows Azure Guest OS 1.11:

Friendly name: Windows Azure Guest OS 1.11 (Release 201102-01)

Configuration value:

Release date: February 15, 2011

Features: Stability and security patch fixes applicable to Windows Azure OS.

Security Patches

The Windows Azure Guest OS 1.11 includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

  • Vulnerability in ASP.NET Could Allow Information Disclosure
  • Vulnerability in .NET Framework Could Allow Remote Code Execution
  • Cumulative Security Update for Internet Explorer
  • Vulnerabilities in the OpenType Font (OTF) Driver Could Allow Remote Code Execution
  • Vulnerability in Task Scheduler Could Allow Elevation of Privilege
  • Vulnerability in Windows Address Book Could Allow Remote Code Execution
  • Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege
  • Vulnerability in Consent User Interface Could Allow Elevation of Privilege
  • Vulnerability in Windows Netlogon Service Could Allow Denial of Service
  • Vulnerability in Hyper-V Could Allow Denial of Service

Windows Azure Guest OS 1.11 is substantially compatible with Windows Server 2008 SP2, and includes all Windows Server 2008 SP2 security patches through December 2010.

Note: When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.

Ryan Bateman ranked Windows Azure #1 of the top 15 cloud service providers in his Cloud Provider Global Performance Ranking – January post of 3/16/2011 to the CloudSleuth blog (missed when posted):

A few weeks ago we released the calendar year Q4 “Top 15 Cloud Service Providers – Ranked by Global Performance.”  My intention here is to start a trend: releasing our ranking of the top cloud service providers based on a global average of real-time performance results as seen from the Gomez Performance Network and its Last Mile nodes.


If you missed out on the last post and are curious about the methodology behind these numbers, here is the gist:

1. We (Compuware) signed up for services with each of the cloud providers above.

2. We provisioned an identical sample application to each provider.  This application represents a simple ecommerce design: one page with sample text, thumbnails and generic nav functions, followed by another similar page with a larger image.  This generic app is specifically designed so as not to favor the performance strengths of any one provider.

3. We choose 200 of the 150,000+ Gomez Last Mile peers (real user PCs) each hour to run performance tests from.  Of the 200 peers selected, 125 are in the US.  The remaining 75 are spread across the top 30 countries based on GDP.

4. We gather those results and make them available through the Global Provider View.

These decisions are fueled by our desire to deliver the closest thing to an apples-to-apples comparison of the performance of these cloud providers.  Got ideas about how to tweak the sample application or shift the Last Mile peer blend or anything else to make this effort more accurate?  Let us know, we take your suggestions seriously.

Ryan’s post was reported (but usually not linked) by many industry pundits and bloggers.

Click here for more details about the technology Cloudsleuth uses.

Here’s a 4/1/2011 screen capture of the worldwide response-time results for the last 30 days with Google App Engine back in first place:



Windows Azure still takes the top spot in North America, as shown here:


Checking response time by city by clicking the arrow at the right of the Provider column indicates that the data is for Microsoft’s North Central US (Chicago) data center:


Lori MacVittie (@lmacvittie) asserted “What distinguishes these three models of cloud computing are the business and operational goals for which they were implemented and the benefits derived” in an introduction to her Public, Private and Enterprise Cloud: Economy of Scale versus Efficiency of Scale post of 3/30/2011 to F5’s DevCentral blog:

A brief Twitter conversation recently asked how one would distinguish between the three emerging dominant cloud computing models: public, private and enterprise. Interestingly, if you were to take a "public cloud" implementation and transplant it into the enterprise, it is unlikely to deliver the value IT was expecting.

Conversely, transplanting a private cloud implementation to a public provider would also similarly fail to achieve the desired goals. When you dig into it, the focus of the implementation – the operational and business goals – play a much larger role in distinguishing these models than any technical architecture could.

Public cloud computing is also often referred to as "utility" computing.

That's because its purpose is to reduce the costs associated with deployment and subsequent scalability of an application. It's about economy of scale – for the customer, yes, but even more so for the provider. The provider is able to offer commoditized resources at a highly affordable rate because of the scale of its operations. The infrastructure – from the network to the server to the storage – is commoditized. It's all shared resources that combine to form the basis for an economically viable business model in which resources are scaled out on-demand with very little associated effort. There is very little or no customization (read: alignment of process with business/operational goals) available because economy of scale is achieved by standardizing as much as possible and limiting interaction.

Enterprise cloud computing is not overly concerned with scalability of resources but is rather more focused on the efficiency of resources, both technological and human.

An enterprise cloud computing implementation has the operational and business goal of enabling a more agile IT that serves its customers (business and IT) more efficiently and with greater alacrity. Enterprise cloud computing focuses on efficient provisioning of resources and automating operational processes such that deployment of applications is repeatable and consistent. IT wants to lay the foundation for IT as a Service. Public cloud computing wants to lay the foundation for resources as a service. Nowhere is that difference more apparent than when viewed within the scope of the data center as a whole.

Private cloud computing, if we're going to differentiate, is the hybrid model; the model wherein IT incorporates public cloud computing as an extension of its data center and, one hopes, its own enterprise cloud computing initiative.

It's the use of economy of scale to offset costs associated with new initiatives and scalability of existing applications without sacrificing the efficiency of scale afforded by process automation and integration efforts. It's the best of both worlds: utility computing resources that can be incorporated and managed as though they are enterprise resources.

Public and enterprise cloud computing have different goals and therefore different benefits. Public cloud computing is about economy of scale of resources and commoditized operational processes. Forklifting a model such as AWS into the data center would be unlikely to succeed. The model assumes no integration or management of resources via traditional or emerging means and in fact the model as implemented by most public cloud providers would inhibit such efforts. Public cloud computing assumes that scale of resources is king and at that it excels. Enterprise cloud computing, on the other hand, assumes that efficiency is king and at that, public cloud computing is fair to middling at best. Enterprise cloud computing implementations recognize that enterprise applications are holistic units comprising all of the resources necessary to deploy, deliver and secure that application. Infrastructure services from the network to the application delivery network to storage and security are not adjunct to the application but are a part of the application. Integration with identity and access management services is not an afterthought, but an architectural design. Monitoring and management is not a "green is good, red is bad" icon on a web application, but an integral part of the overall data center strategy.

Enterprise cloud computing is about efficiency of scale; a means of managing growth in ways that reduces the burden placed on people and leverages technology through process automation and devops to improve the operational posture of IT in such a way as to enable repeatable, rapid deployment of applications within the enterprise context. That means integration, management, and governance is considered part and parcel of any application deployment. These processes and automation that enable repeatable deployments and dynamic, run-time management that includes the proper integration and assignment of operational and business policies to newly provisioned resources are unique, because the infrastructure and services comprising the architectural foundation of the data center are unique.

These are two very different sets of goals and benefits and, as such, cannot easily be substituted. They can, however, be conjoined into a broader architectural strategy that is known as private (hybrid) cloud computing.

PRIVATE CLOUD: EFFICIENT ECONOMY of SCALE

There are, for every organization, a number of applications that are in fact drivers of the need for economy of scale, i.e. a public cloud computing environment. Private (hybrid) cloud computing is a model that allows enterprise organizations to leverage the power of utility computing while addressing the very real organizational need for, at a minimum, architectural control over those resources for integration, management and cost-containment governance.

It is the compromise of cheap resources coupled with control that affords organizations the flexibility and choice required to architect a data center solution that can meet the increasing demand for self-service of its internal customers while addressing ever higher volumes of demand on external-facing applications without substantially increasing costs.

Private (hybrid) cloud computing is not a panacea; it's not the holy grail of cloud computing but it is the compromise many require to simultaneously address both a need for economy and efficiency of scale. Both goals are of interest to enterprise organizations – as long as their basic needs are met. Chirag Mehta summed it up well in a recent post on CloudAve: "It turns out that IT doesn’t mind at all if business can perform certain functions in a self-service way, as long as the IT is ensured that they have underlying control over data and (on-premise) infrastructure."  See: Cloud Control Does Not Always Mean ‘Do it yourself’.

Control over infrastructure. It may be that these three simple words are the best way to distinguish between public and enterprise cloud computing after all, because that's ultimately what it comes down to. Without control over infrastructure, organizations cannot effectively integrate and manage their application deployments. Without control over infrastructure, organizations cannot achieve the agility necessary to leverage a dynamic, services-based governance strategy over the performance, security and availability of applications. Public cloud computing requires that control be sacrificed on the altar of cheap resources. Enterprise and private (hybrid) cloud computing do not.  That means the latter are more likely able to empower IT to realize the operational and business goals for which IT undertook a cloud computing initiative in the first place.

JP Morgenthal (@jpmorgenthal) asked SaaS, PaaS, IaaS: Which of These Things is not Like the Others? and answered “SaaS” in a 3/30/2011 post:

There’s an interesting debate raging over at (a newly formed site that is dedicated to facilitating the sharing and exchange of information as well as providing access to subject matter experts). The question was posed, "Is Facebook a cloud?" Clearly, there are differing opinions in response to this question, which makes for good reading and opens the door for discussion. Incorporated into this question is an underlying skepticism that I addressed in my entry, Scale is the Common Abstraction of Cloud Computing: does Software-as-a-Service (SaaS) really belong within the definition of cloud computing?

Let’s use Facebook as the reference model for answering this question. As I posited in the discussion, how Facebook chooses to implement their application is mostly irrelevant to us as consumers of that application. To make assumptions about their application’s architecture, or to incorporate knowledge from interviews and articles about how Facebook works into our decision to call Facebook cloud, introduces irrelevant information into the discussion. To incorporate SaaS or any application under the moniker of cloud merely begs the question of the value of the term to the industry and the role marketing is playing in formulating this industry.

Indeed, SaaS by the nature of what it is should relish the abstraction of itself from its implementation. After all, what they’re selling customers is their ability to provide a highly-available and easily accessible application. Now, as I also posited in the discussion, Facebook also provides a platform for authenticating users and authorizing access to their data. This component of Facebook I would be open to incorporating into the cloud discussion under the moniker of Platform-as-a-Service (PaaS). However, due to ambiguity, to blindly state that Facebook is cloud without clarifying that you are discussing the PaaS component of Facebook would lead one to believe that you are discussing the SaaS component of Facebook, which I would continue to argue does not belong to the class of things that are incorporated under the cloud moniker.

I have danced around this topic for some time now, but this discussion finally pushed me to come out against including SaaS in the definition of cloud computing moving forward. Software-as-a-Service is merely a consumer of cloud computing and not a component of cloud computing. Or, as we like to say in the architecture world, SaaS uses cloud, not SaaS is a cloud. Hence, the Facebook application is not cloud.

I realize there’s going to be a lot of unhappy campers who read these words, but having written an entire book on semantics and ontology, I would be remiss if I did not raise my hand up and say that we need some aspect of rational thought about what’s in the cloud class and what’s outside the cloud class. Now, since no one group or person officially owns the definition of cloud computing, SaaS vendors will most likely reject this entry and continue to stomp all over the term in favor of being included in the class of “what’s hot”, which is most likely the root cause for Larry Ellison’s statement that the computer industry is more fashion-driven than women’s fashion.

JP is the author of Enterprise Information Integration.

David Hardin described Configuring WAD via the diagnostics.wadcfg Config File in a 3/29/2011 post:

Azure 1.3 added the ability to control Windows Azure Diagnostics (WAD) via a config file.  The MSDN documentation covering diagnostics.wadcfg explains that the capability was added to support the VM role.  The documentation also says to continue configuring WAD via code in OnStart for the other role types.

I instead recommend using diagnostics.wadcfg for all role types to perform the majority of the configuration, and only configuring via code when required, such as when using a custom performance counter.  This allows WAD to capture diagnostics prior to OnStart’s execution, plus it is easier to maintain a config file than code.
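For the cases that do require code, such as the custom performance counter mentioned above, the usual pattern is to amend the diagnostics configuration in OnStart. This is a sketch based on the SDK 1.3-era diagnostics API; the counter, sample rate and transfer period are illustrative:

```csharp
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Get the default initial configuration and add only what
        // can't be expressed in diagnostics.wadcfg.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        config.PerformanceCounters.DataSources.Add(
            new PerformanceCounterConfiguration
            {
                CounterSpecifier = @"\Processor(_Total)\% Processor Time",
                SampleRate = TimeSpan.FromSeconds(30)
            });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}
```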

The documentation discusses the location Azure reads diagnostics.wadcfg from; each role type uses a different location.  What isn’t explained is how to add diagnostics.wadcfg to your Visual Studio solution such that Visual Studio packages the file into the correct location.  Others have blogged about this topic in sufficient detail so I’ll just say that for a web role, the hardest of the three, add an XML file to the root of your web project called diagnostics.wadcfg then change its properties so that “Build Action = Content” and “Copy to Output Directory = Copy always”.  “Copy if newer” should work too but I personally prefer “Copy always”.


The sample config XML in the MSDN documentation demonstrates most of the configuration capabilities but WAD fails if the sample is copied as-is into your project.  As shown below, the sample XML specifies paths and local storage names which may not exist.  Comment out the entire <DataSources> element to get the configuration working.  In my next blog post I’ll show how to get custom logs working.

<Directories bufferQuotaInMB="1024"
   scheduledTransferPeriod="PT1M">

   <!-- These three elements specify the special directories
        that are set up for the log types -->
   <CrashDumps container="wad-crash-dumps" directoryQuotaInMB="256" />
   <FailedRequestLogs container="wad-frq" directoryQuotaInMB="256" />
   <IISLogs container="wad-iis" directoryQuotaInMB="256" />

   <!-- For regular directories the DataSources element is used -->
   <DataSources>
      <DirectoryConfiguration container="wad-panther" directoryQuotaInMB="128">
         <!-- Absolute specifies an absolute path with optional environment expansion -->
         <Absolute expandEnvironment="true" path="%SystemRoot%\system32\sysprep\Panther" />
      </DirectoryConfiguration>
      <DirectoryConfiguration container="wad-custom" directoryQuotaInMB="128">
         <!-- LocalResource specifies a path relative to a local
              resource defined in the service definition -->
         <LocalResource name="MyLoggingLocalResource" relativePath="logs" />
      </DirectoryConfiguration>
   </DataSources>
</Directories>

WAD automatically maps the wad-crash-dumps, wad-frq, and wad-iis containers to special folders which only exist in web and worker roles.  For VM roles comment out the CrashDumps, FailedRequestLogs, and IISLogs elements.

Another issue is with the various “QuotaInMB” settings.  WAD automatically allocates 4096 MB of local storage named DiagnosticStore.  WAD fails if the overallQuotaInMB value is set higher than the local storage allocated or if the various “QuotaInMB” values add up to within about 750 MB of overallQuotaInMB.  Either:

  • Decrease some of the “QuotaInMB” values until the config works.
  • Add a LocalStorage setting named DiagnosticStore to ServiceDefinition.csdef and increase overallQuotaInMB.

It isn’t documented, but the number of MBs allocated to DiagnosticStore is a hard limit which WAD can’t exceed.  The various WAD quotas are soft limits which control when WAD starts deleting old data.  It is possible for WAD to exceed the quotas for a brief period of time while performing the delete.
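A sketch of the second option: declaring the DiagnosticStore local storage explicitly in ServiceDefinition.csdef raises the hard limit so that overallQuotaInMB in diagnostics.wadcfg can be increased (the service and role names here are placeholders):

```xml
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <LocalResources>
      <!-- Overrides the 4096 MB default; keep overallQuotaInMB in
           diagnostics.wadcfg comfortably below this size -->
      <LocalStorage name="DiagnosticStore" sizeInMB="8192" cleanOnRoleRecycle="false" />
    </LocalResources>
  </WebRole>
</ServiceDefinition>
```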

For more posts in my WAD series:

The aircraft in David’s logo is a WWII Stearman Navy trainer (Boeing-Stearman Model 75).

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Bruce Hoard reported System Center 'Concero' the Latest Management Blast in a 3/31/2011 post to his The Hoard Facts blog for the Virtualization Review:

Virtualization and cloud management are hot. I get one press release after another from vendors in this area. Right now VMware and Microsoft are grabbing the headlines in an effort to carve out market share in this increasingly lucrative market.

Over the past few weeks, VMware made a couple of noteworthy announcements. The first was with vCenter XVP Manager and Converter, a plug-in that enables users to manage Hyper-V from a central console, and the second was vCenter Operations, a performance monitoring and capacity management tool.

I like the idea that I recently heard from Simon Bramfitt, founder and principal analyst of Entelechy Associates, who suggested it might be a good idea for VMware to drop its hypervisor prices and make up for the revenue by competing more strongly and earning more money in the management market, where in Simon's words, "the real value-add is."

Moving on to Redmond, Microsoft has just taken its latest shot by boosting System Center in the spate of introductions that took place at the recent Microsoft Management Summit. Highlights included System Center Virtual Machine Manager 2012 beta, System Center Configuration Manager 2012, and a "sneak peek" at System Center "Concero."

Concero is Latin for "connected."

Delivered as part of a common management toolset for private and public cloud applications and services aimed at bolstering IT as a service, the new System Center 2012 offerings were created to help customers create and manage their private and public clouds based on Windows Server 2008 R2, Hyper-V and other virtualization platforms.


"Concero provides a web-based and simple experience for the application owner who will be consuming cloud capacity," blogs Microsoft's Wilfried Schadenboeck. "Concero will enable customers to deploy, manage and control applications and services on private clouds built using System Center Virtual Machine Manager 2012 and in the public cloud offering of Windows Azure. This provides a consistent and simple user experience for service management across these clouds."

The list of top-line Concero features includes access to resources across Virtual Machine Manager (VMM) servers, the ability to register and consume capacity from multiple Windows Azure subscriptions, and the ability to copy service templates and optional resources from one VMM Server to another. Other Azure-related features enable multiple users to be authenticated through Active Directory to access a single Azure subscription, and make it possible to copy Azure configuration, package files and VHDs from on-premises and between Azure subscriptions.

Full Disclosure: Virtualization Review is an 1105 Media publication. 1105 Media also publishes Visual Studio Magazine, for which I’m a contributing editor.

<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) reported FYI: New NIST Cloud Computing Reference Architecture on 3/31/2011:


In case you weren’t aware, NIST has a WIKI for collaboration on Cloud Computing.  You can find it here.

They also have a draft of their v1.0 Cloud Computing Reference Architecture, which builds upon the prior definitional work we’ve seen before and has pretty graphics. You can find that here (dated 3/30/2011).


Related articles

The Wiki homepage has links to sign up for the Reference Architecture, Taxonomy, Business Use Cases, Security, and Standards mailing lists.

NIST logo via Wikipedia

Chris Wysopal published Navigating Cloud Application Security: Myths vs. Realities to the InfoSecurity blog on 3/8/2011 (missed when posted):

Developers and IT departments are being told they need to move applications to the cloud and are often left on their own to navigate the challenges related to developing and managing the security of applications in those environments. Because no one should have to fly blind through these uncertain skies, it’s important to dispel the myths, expose the realities and establish best practices for securing cloud-based applications.

Inherent Threats

Whether we are talking about IaaS (Infrastructure as a Service), PaaS (Platform as a Service) or SaaS (Software as a Service), perceived security vulnerabilities in the cloud are abundant. A common myth is that organizations utilizing cloud applications should be most concerned about someone breaking in to the hosting provider, or an insider gaining access to applications they shouldn’t. This is an outdated, generic IT/infrastructure point of view. What’s more important and elemental is to examine if the web application being used is more vulnerable because of the way it was built, then deployed in the cloud – versus focusing on cloud security risks from an environmental or infrastructure perspective.

It’s imperative to understand the inherent (and non-storied) threats facing applications in virtualized environments. Common vulnerabilities associated with multi-tenancy and cloud provider services, like identity and access management, must be examined from both a security and compliance perspective. Obviously in a multi-tenant environment, hardware devices are being shared among other companies – potentially by competitors and other customers, as well as would-be attackers. Organizations lose control over physical network or computing systems, even local storage for debugging and logging is remote. Additionally, auditors may be concerned about the fact that the cloud provider has access to sensitive data at rest and in transit.

Inherent threats are not only present in the virtualized deployment environment, but also in the way applications for the cloud are developed in the first place. Consider the choices many architects and designers are forced to make when it comes to developing and deploying applications in the cloud. Because they are now in a position where they are relying on external controls put in place by the provider, they may feel comfortable taking short cuts when it comes to building in application security features. Developers can rationalize the speed-to-market advantages of being able to use, and test, less code. However, by handing external security controls to the provider, new attack surfaces quickly emerge related to VM, PaaS APIs and cloud management infrastructure.

Security – Trust No One

Security trust boundaries completely change with the movement of applications from internal or DMZ to the cloud. As opposed to traditional internal application infrastructures, in the cloud the trust boundary shrinks down to encompassing only the application itself, with all the users and related storage, database and identity management systems becoming “external” to that application. In this situation, “trust no one” takes on great significance to the IT organization. With all these external sources wanting access to the application, how do you know what request is legitimate? How can we make up for the lack of trust? It boils down to establishing an additional layer of security controls. Organizations must encrypt all sensitive data stored or transmitted and treat all environmental inputs as untrusted in order to protect assets from attackers and the cloud provider itself.

Fasten Your Seatbelts

Best practices aimed at building protection must be incorporated into the development process to minimize risks. How can you help applications become more secure? It starts with a seatbelt – in the form of application level security controls that can be built into application code or implemented by the cloud services provider itself.

Examples of these controls can include encryption at rest, encryption in transit, point-to-point and message contents, auditing and logging, or authentication and authorization. Unfortunately, in an IaaS environment, it may not be an option to have the provider manage these controls. The advantages of using PaaS APIs to establish these controls, for example, is that in most cases the service provider has tested and debugged the API to speed time to market for the application. SaaS environments offer no choice to the developer, as the SaaS provider will be totally in control of how data is secured and identity managed.

Traditional Application Security Approaches Still Apply

Another myth that must be debunked is the belief that any approach to application security testing – perhaps with a slightly different wrapper on it – can be used in a cloud environment. While it is true that traditional application security issues still apply in the cloud, and that you still need to take advantage of established processes associated with requirements, design, implementation and testing, organizations can’t simply repackage what they know about application security. Applications in the cloud require special care. IT teams can’t be content to use mitigation techniques only at the network or operating system level anymore.

Security testing must be done at the application level, not the environmental level. Threat modeling and design phases need to take additional cloud environmental risks into account. And, implementation needs to use cloud security aware coding patterns in order to effectively eliminate vulnerability classes such as Cross-Site Scripting (XSS) and SQL Injections. Standards such as OWASP Top 10 and CWE/SANS Top 25 are still applicable for testing IaaS and PaaS applications, and many SaaS extensions.

Overall, dynamic web testing and manual testing are relatively unchanged from traditional enterprise application testing, but it’s important to get permission and notify your cloud provider if you plan to do dynamic or manual testing, especially on a SaaS extension you have written, so it doesn’t create the appearance that your organization is attempting an attack on the provider.

It’s also important to note that cloud design and implementation patterns are still being researched, with efforts being led by organizations like the Cloud Security Alliance and NIST. Ultimately, it would be valuable for service providers to come up with a recipe-like implementation for APIs.

Pre-Flight Checklists

After applications have been developed, application security testing has been performed according to requirements of the platform, and you are presumably ready to deploy, how do you know you are ready? Each environment, IaaS, PaaS or SaaS, requires its own checklist to ensure the applications are ready for prime time.

For example, for an IaaS application, the organization must have taken steps such as securing the inter-host communication with channel level encryption and message based security, and filtered and masked sensitive information sent to debugging and logging functions. For a PaaS application, threat modeling must have incorporated the platform API’s multi-tenancy risks. For SaaS, it’s critical to have reviewed the provider’s documentation on how data is isolated from other tenants’ data. You must also verify the SaaS provider’s certifications and their SDLC security processes.

Future Threats

Myth: just because you are prepared for a safe flight, doesn’t mean it will be. Even with all the best preparation and safety measures in place, there is no debating the nascent nature of this deployment environment, leaving much more research that needs to be done. One effective approach is to use threat modeling to help developers better understand the special risks of applications in the cloud. For example, using this approach, they can identify software vulnerabilities that can be exploited by a “pause and resume” attack where a virtual machine becomes temporarily frozen. A seemingly innocent halt to end-user productivity can actually mean a hacker has been able to enter a system to cause damage by accessing sensitive information or planting malicious code that can be released at a future time.

As a security community, with security vendors, cloud service providers, research organizations and end-users who all have a vested interest in securely deploying applications in the cloud, we have the power to establish guidelines and best practices aimed at building protection into the development process to prevent deployment risks. Fasten your seatbelts, it’s going to be a fun ride.

Chris is co-founder and CTO of Veracode.

<Return to section navigation list> 

Cloud Computing Events

Brian Hitney announced the RockPaperAzure Coding Challenge on 3/29/2011:

I’m pleased to announce that we’re finally launching our Rock, Paper, Azure Challenge!


For the past couple of months, I’ve been working with Jim O’Neil and Peter Laudati on a new Azure event/game called Rock, Paper, Azure.  The concept is this:  you (hopefully) code a “bot” that plays rock, paper, scissors against the other players in the game.  Simple, right?

Here’s where it gets interesting.  Rock, paper, scissors by itself isn’t all that interesting (after all, you can’t really beat random in a computer game – assuming you can figure out a good random generator!), so there are two additional moves in the game.  The first is dynamite, which beats rock, paper, and scissors.   Sounds very powerful – and it is – however, you only have a few per match so you need to decide when to use them. The other move is a water balloon. The water balloon beats dynamite, but it loses to everything else. You have unlimited water balloons.
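For concreteness, the expanded rules can be captured in a few lines. This is a sketch with hypothetical names, not the actual challenge SDK:

```python
# Rock, Paper, Azure move resolution, per the rules described above:
# dynamite beats rock, paper, and scissors (but your supply is limited);
# water balloon beats dynamite and loses to everything else.
# Names and structure are illustrative only.

BEATS = {
    "rock":         {"scissors", "waterballoon"},
    "paper":        {"rock", "waterballoon"},
    "scissors":     {"paper", "waterballoon"},
    "dynamite":     {"rock", "paper", "scissors"},
    "waterballoon": {"dynamite"},
}

def resolve(a, b):
    """Return 1 if move a wins, -1 if move b wins, 0 for a tie."""
    if a == b:
        return 0
    return 1 if b in BEATS[a] else -1
```

The interesting strategy question is when to spend the scarce dynamite moves, since an opponent who predicts them can answer with a free water balloon.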


Now, with the additional rules, it becomes a challenge to craft an effective strategy.   We do what we call “continuous integration” on the leaderboard – as soon as your bot enters, it’s an all out slugfest and you see where you are in near real time.   In fact, just a few minutes ago, a few of us playing a test round were constantly tweaking our bots to defeat each other – it was a lot of fun trying to outthink each other.

Starting next week, we’ve got some great prizes on the line – including Xbox systems, Kinect, and gift cards – so be sure to check it out!   The project homepage is here:

See you in the game!

Interop Las Vegas published on 3/28/2011 the complete Cloud Computing Conference Track for Interop 2011 to be held 5/8 to 5/12/2011 at the Mandalay Bay hotel in Las Vegas, NV:


Interop's Cloud Computing track brings together cloud providers, end users, and IT strategists. It covers the strategy of cloud implementation, from governance and interoperability to security and hybrid public/private cloud models. Sessions include candid one-on-one discussions with some of the cloud's leading thinkers, as well as case studies and vigorous discussions with those building and investing in the cloud. We take a pragmatic look at clouds today, and offer a tantalizing look at how on-demand computing will change not only enterprise IT, but also technology and society in general.

Enterprise Cloud Summit

In just a few years, cloud computing has gone from a fringe idea for startups to a mainstream tool in every IT toolbox. The Enterprise Cloud Summit will show you how to move from theory to implementation.

Learn about practical cloud computing designs, as well as the standards, infrastructure decisions, and economics you need to understand as you transform your organization's IT.

Enterprise Cloud Summit – Public Clouds (5/8/2011)
In Day One of Enterprise Cloud Summit, we'll review emerging design patterns and best practices. We'll hear about keeping data private in public places. We'll look at the economics of cloud computing and learn from end users' actual experience with clouds. Finally, in a new addition to the Enterprise Cloud Summit curriculum, major public clouds will respond to our shortlist questionnaire, giving attendees a practical, side-by-side comparison of public cloud offerings.
Enterprise Cloud Summit – Private Clouds (5/9/2011)

On Day Two of Enterprise Cloud Summit, we'll turn our eye inward to look at how cloud technologies, from big data and turnkey cloud stacks, are transforming private infrastructure. We'll discuss the fundamentals of cloud architectures, and take a deep dive into the leading private cloud stacks. We'll hear from more end users, tackle the "false cloud" debate, and look at the place of Platform-as-a-Service clouds in the enterprise.

Carrier Cloud Forum (5/9/2011)

Service Providers will learn to build a cloud infrastructure that is manageable and billable, ensures high-performance security and service quality to meet SLA demands, and recognizes best practices for packaging and monetizing XaaS services and applications, along with tips for vertical industry customization.

Cloud Computing Free Programs (5/10 through 5/12/2011)

All Interop attendees can choose from a full calendar of free programs, Tuesday through Thursday, including the following covering Cloud Computing.

Visit the site for lists of sessions and their abstracts.

Kevin Griffin organized the Bay Area Windows Azure Users Group on 3/23/2011:




<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Ernest Mueller described Why Amazon Reserve Instances Torment Me in a 3/31/2011 post:

We’ve been using over 100 Amazon EC2 instances for a year now, but I’ve just now made my first reserve instance purchase. For the untutored, reserve instances are where you pay a yearly upfront fee per instance and you get a much, much lower hourly cost. On its face, it’s a good deal – take a normal Large instance you’d use for a database.  For a Linux one, it’s $0.34 per hour.  Or you can pay $910 up front for the year, and then it’s only $0.12 per hour. So theoretically, it takes your yearly cost from $2,978.40 to $1,961.20.  A great deal, right?

Well, not so much. The devil is in the details.

First of all, you have to make sure and be running all those instances all the time.  If you buy a reserve instance and then don’t use it some of the time, you immediately start cutting into your savings.  The crossover is at 172 days – if you don’t run the instance at least 172 days out of the year then you are going upside down on the deal.
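Using the prices quoted above, the 172-day crossover is easy to verify (a back-of-the-envelope sketch, not AWS billing code):

```python
# Break-even for a reserved Large Linux instance, using the prices above:
# on-demand $0.34/hr vs. $910/yr upfront plus $0.12/hr reserved.
ON_DEMAND_HOURLY = 0.34
RESERVED_HOURLY = 0.12
RESERVED_UPFRONT = 910.00

def breakeven_days():
    """Days of uptime per year at which reserved and on-demand cost the same."""
    hourly_savings = ON_DEMAND_HOURLY - RESERVED_HOURLY   # $0.22 saved per hour
    return RESERVED_UPFRONT / (hourly_savings * 24)

# breakeven_days() comes out to about 172.3 days; run the instance fewer
# days than that in the year and the reservation loses money.
```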

But what’s the big deal, you ask?  Sure, in the cloud you are probably (and should be!) scaling up and down all the time, but as long as you reserve up to your low water mark it should work out, right?

So the big second problem is that when you reserve instances, you have to specify everything about that instance.  You aren’t reserving “10 instances”, or even “10 large instances” – you have to specify:

  • Platform (UNIX/Linux, UNIX/Linux VPC, SUSE Linux, Windows, Windows VPC, or Windows with SQL Server)
  • Instance Type (m1.small, etc.)
  • AZ (e.g. us-east-1b)

And tenancy and term. So you have to reserve “a small multitenant Linux instance in us-east-1b for one year.” But having to specify down to this level is really problematic in any kind of dynamic environment.

Let’s say you buy 10 m1.large instances for your databases, and then you realize later you really need to move up to an m1.xlarge.  Well, tough. You can, but if you don’t have 10 other things to run on those larges, you lose money. Or if you decide to change OS.  One of our biggest expenditures is our compile farm workers, and on those we hope to move from Windows to Linux once we get the software issues worked out, and we’re experimenting with best cost/performance on different instance sizes. I’m effectively blocked from buying reserve for those, since if I do it’ll put a stop to our ability to innovate.

And more subtly, let’s say you’re doing dynamic scaling and splitting across AZs like they always say you should do for availability purposes.  Well, if I’m running 20 instances, and scaling them across 1b and 1c, I am not guaranteed I’m always running 10 in 1b and 10 in 1c, it’s more random than that.  Instead of buying 20 reserve, you instead have to buy say 7 in 1b and 7 in 1c, to make sure you don’t end up losing money.

Heck, they even differentiate between Linux and SUSE and Linux VPC instances, which clearly crosses over into annoyingly picky territory.

As a result of all this, it is pretty undesirable to buy reserve instances unless you have a very stable environment, both technically and scale-wise. That sentence doesn’t describe the typical cloud use case in my opinion.

I understand, obviously, why they are doing this.  From a capacity planning standpoint, it’s best for them if they make you specify everything. But what I don’t think they understand is that this cuts into people willing to buy reserve, and reserve is not only upfront money but also a lockin period, which should be grotesquely attractive to a business. I put off buying reserve for a year because of this, and even now that I’ve done it I’m not buying near as many reserve as I could be because I have to hedge my bets against ANY changes to my service. It seems to me that this also degrades the alleged point of reserves, which is capacity planning – if you’re so picky about it that no one buys reserve and 95% of your instances are on demand, then you can’t plan real well can you?

What Amazon needs to do is meet customers halfway.  It’s all a probabilities game anyway. They lose specificity of each given reserve request, but get many more reserve requests (and all the benefits they convey – money, lockin, capacity planning info) in return.

Let’s look at each axis of inflexibility and analyze it.

  • Size.  Sure, they have to allocate machines, right?  But I assume they understand they are using this thing called “virtualization.”  If I want to trade in 20 reserved small instances for 5 large instances (each large is 4x a small), why not?  It loses them nothing to allow this. They just have to make the effort to allow it to happen in their console/APIs. I can understand needing to reserve a certain number of “units” but those should be flexible on exact instance types at a given time.
  • OS. Why on God’s green earth do I need to specify OS?  Again, virtualized right? Is it so they can buy enough Windows licenses from Microsoft?  Cry me a river.  This one needs to leave immediately and never come back.
  • AZ. This is annoying from the user POV but probably the most necessary from the Amazon POV because they have to put enough hardware in each data center, right?  I do think they should try to make this a per region and not a per AZ limit, so I’m just reserving “us-east” in Virginia and not the specific AZ, that would accommodate all my use cases.
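The "units" idea in the Size bullet could be sketched like this (entirely hypothetical; EC2 offered no such conversion, and the names and ratios are illustrative, based on the 4x small-to-large relationship mentioned above):

```python
# Hypothetical "reserved capacity units" scheme: a reservation covers any
# mix of instance sizes whose unit total fits within what was purchased.
UNITS = {"m1.small": 1, "m1.large": 4, "m1.xlarge": 8}

def covered(reserved_units, running):
    """True if the running mix {instance_type: count} fits the reserved units."""
    used = sum(UNITS[t] * n for t, n in running.items())
    return used <= reserved_units

# 20 reserved smalls (20 units) would then also cover 5 larges,
# which is exactly the trade-in the bullet above argues for.
```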

In the end, one of the reasons people move to the cloud in the first place is to get rid of the constraints of hardware.  When Amazon just puts those constraints back in place, it becomes undesirable. Frankly even now, I tried to just pay Amazon up front rather than actually buy reserve, but they’re not really enterprise friendly yet from a finance point of view so I couldn’t make that happen, so in the end I reluctantly bought reserve.  The analytics around it are lacking too – I can’t look in the Amazon console and see “Yes, you’re using all 9 of your large/linux/us-east-1b instances.”

Amazon keeps innovating greatly in the technology space but in terms of the customer interaction space, they need a lot of help – they’re not the only game in town, and people with less technically sophisticated but more aggressive customer services/support options will erode their market share. I can’t help thinking that Rackspace’s play of backing OpenStack is the purest example of this – “Anyone can run the same cloud we do, but we are selling our ‘fanatical support’” is the message.

See also Christopher Thorpe (@drthorpe) posted Pricing Fail: Amazon EC2 Dedicated Instances on 3/28/2011 below.

Matthew Weinberger asked Amazon Cloud Drive: A More Active Storage? in a 3/30/2011 post to the TalkinCloud blog:

Amazon.com — the retail business, not the Amazon Web Services subsidiary that pioneered IaaS — has launched Amazon Cloud Drive, a consumer cloud storage offering. You can store any type of file you’d like, but Amazon is heavily promoting it as the power behind the also-new Amazon Cloud Player, a SaaS app that plays any music you’ve stashed in the cloud. Is it an enterprise offering? No. Should cloud service providers still keep an eye on Amazon Cloud Drive? Yes. Here’s why.

The basic pricing scheme for the Amazon Cloud Drive and Amazon Cloud Player one-two punch is simple: you get 5GB free no matter what, forever. If you buy any album from the (personal favorite) Amazon MP3 DRM-free music store, you get a one-time credit of an additional 15GB of storage for a year. Above that, Amazon offers packages of additional storage at 50GB increments or so for a basic rate of $1/GB/year.

The offering comes with a couple of twists: any music you download from Amazon MP3 not only gets synced to your Amazon Cloud Drive account, but doesn’t count against your storage quota. And since you can access the Amazon Cloud Player from any computer with a browser or a Google Android-based smartphone, not only is your content available from anywhere, but you can actually use it.

It’s that last part that means it’s worth watching. Cloud storage isn’t new to Amazon — the Amazon S3 offering has been around for a while now. But Amazon Cloud Drive wraps the concept up in a user-friendly interface that also enables users not only to move files in and out of the cloud, but to access them in useful and appealing ways directly from the browser. Just as Google Docs (incidentally cheaper at $5.00/20GB/year) lets its users edit documents in the cloud, Amazon Cloud Drive enables customers to interact with their media.
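The parenthetical price comparison works out as follows (a back-of-the-envelope check of the quoted list prices, nothing more):

```python
# Per-GB/year comparison of the two quoted storage rates.
amazon_per_gb_year = 1.00       # Amazon Cloud Drive: $1/GB/year
google_per_gb_year = 5.00 / 20  # Google Docs storage: $5.00 for 20 GB/year

# Google's quoted rate works out to $0.25/GB/year, a quarter of Amazon's.
```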

I don’t think Amazon is looking to take on Google Docs with a productivity suite or similar. At least, not yet. But Amazon Cloud Drive represents a potential shift in the way end users expect to interact with their cloud data. If it catches on, people are going to start asking why they can’t use their other data in the same way.

In fact, Amazon Cloud Drive may, in its own way, be the best Google Chrome OS value-add yet, since it would let users keep and access their personal files with Amazon and their business data in Google Docs. Call it a sign of things to come, but I’m predicting that users won’t be satisfied by what I’m going to call cold storage for much longer. …

Read More About This Topic

Read more about Amazon Cloud Drive in Windows Azure and Cloud Computing Posts for 3/29/2011+’s “Other Cloud Computing Platforms and Services” section.

Robert Duffner posted Thought Leaders in the Cloud: Talking with Jonathan Boutelle, cofounder and CTO of Slideshare on 3/30/2011:

Jonathan Boutelle is cofounder and CTO of Slideshare, a site for the online social sharing of slideshows. A software engineer by training, his interests lie at the intersection of technology, business, and customer experience. He studied computer science at Brown University, and previously worked as a software engineer for Advanced Visual Systems (a data visualization company), CommerceOne (a B2B enterprise software company), and Uzanto (a user experience company). He writes an occasional article on his blog.

In this interview, we discuss:

  • The scale of SlideShare (45 million visitors per month)
  • Two kinds of expensive missteps with the cloud
  • Aligning SaaS revenue with cloud costs
  • Concrete examples of the benefits of hybrid architectures
  • The higher administrative burden of infrastructure as a service
  • The opportunity to build cloud hosted special purpose application services
  • Front end complexity and back end cost risks

Robert: Jonathan, could you take a moment and introduce yourself?

Jonathan Boutelle: Sure. I'm the CTO and cofounder of SlideShare. I came up with the idea for SlideShare four years ago when I was organizing an unconference, a BarCamp in Delhi, India. People were coming up and asking me how they could put the PowerPoint into a Wiki. That was when I got the idea for SlideShare.

Prior to SlideShare, I had a small online consulting company, and before that, I was a software engineer at a B2B startup called Commerce One.

Robert: Could you talk a little bit about the scale you're working with at SlideShare, in terms of your traffic handling, the amount of data you're storing, the number of documents, and those kinds of things?

Jonathan: We handle 45 million unique visitors a month and we're growing at about 10 percent a month right now. We handle tens of thousands of new documents every day. Once they get uploaded to the system, just the process of converting all of those documents and preparing them for viewing on the web is a scaling challenge in its own right.

We have tens of millions of documents in our repository, and we're getting a lot more every day. It's a really big site that has a lot of simultaneous load on it at all times, because we're very global.

Robert: Can you talk a little bit about the stack that you run?

Jonathan: We have a strong preference for open source software, because it's easier to tinker with and troubleshoot if it doesn't work. We use MySQL on the back end.

Robert: Are you using MySQL in a relational way or more in a non-SQL fashion?

Jonathan: We're using MySQL in a classic relational way. It's not like the kind of stuff you've heard about from Facebook, where it's essentially using MySQL for key value pairs. It's traditional grouping of business objects that we're doing sorting on.

One of the saving graces of SlideShare is that, from a scaling perspective, it's a lot of read traffic. There's not a fantastic amount of write traffic, because the overwhelming amount of activity that comes to the site is people browsing and reading content. That's quite a bit different from a site like Twitter or Facebook where there's tremendous amount of content being written into the system by the people who are using it.

There's really no way to use Facebook without writing a lot of data into their database, but at SlideShare, we get a lot of people browsing and looking at content. It's analogous to YouTube, in that sense. The nice thing about that use case is that you can put many layers of caches in between the user and the database, which can help you scale up to a very high level while using a fairly traditional database architecture.

Our first tier of caching is a reverse-proxy cache. We use Varnish, and we keep HTML pages around once they're rendered. If you're viewing a slideshow and you're not logged in, we keep them around for four hours or so. We'll happily serve that up to you if you come along and request that page.

The next layer back is a tier of memcached servers where we save data. That's the data that we use to build up web pages. If we retrieve, say, a user name for a particular user, we'll save that in memcached. If we need that information within a certain amount of time, we'll pull it from the memory cloud rather than bothering the database with it.

The database is the last layer. The stuff that doesn't get caught by those two caching layers is what comes back to the database.
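As a generic illustration of the read path Jonathan describes (page cache, then memcached, then database), a read-through lookup might look like the following. This sketches the pattern only; it is not SlideShare's code:

```python
# Read-through across layered caches: fastest tier first, database last.
# Plain dicts stand in for Varnish and memcached; db_fetch stands in for MySQL.

def layered_get(key, caches, db_fetch):
    """caches: list of dict-like tiers, fastest first; db_fetch: fetch from DB."""
    for cache in caches:
        if key in cache:
            return cache[key]      # hit: shield the slower tiers from the request
    value = db_fetch(key)          # miss everywhere: bother the database
    for cache in caches:
        cache[key] = value         # populate each tier on the way out
    return value
```

A real deployment adds expiry (SlideShare keeps rendered pages around for about four hours) and invalidation on writes, which the dict stand-ins omit.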

Robert: You've got a great article titled, "Lessons from SlideShare: Cloud Computing Fiascos and How to Avoid Them" where you talk about how to lose $5,000 without even trying. People think of the cloud as a huge cost saver, but what can you share about expensive missteps?

Jonathan: I think there are two categories of expensive missteps that most people are susceptible to when they're starting to deploy cloud computing in an enterprise environment. The first is big blunders. What happened to us was that we were doing some very heavy Hadoop-based log analysis, and the software was not able to crunch through the data fast enough. We decided just to throw more hardware at the problem.

It's very seductive to do that, because if you throw 100 servers at the problem, you're still only paying several dollars an hour, which feels very affordable. The problem can come when you don't make sure to shut it down as soon as the work is done or as soon as you've determined that there's actually a problem with your software rather than a problem with the availability of hardware.

We ended up leaving the servers running for several days, and we got a very high bill that month from Amazon because we had been so sloppy. This just doesn't happen with conventional hardware, because you wouldn't buy 100 servers to see whether throwing hardware at a temporary problem will fix it. Because the cloud gives you that power to scale up so fast, you need to make sure you remember to scale back down when you don't need it anymore. You need to have more discipline, not less.

The second kind of problem that can really bite you is just the drip, drip, drip of occasional servers that have been spun up and haven't been shut down. The first category I described was like the big screw up. This category is more like just being a little bit sloppy, having several servers sitting around. This happens even with conventional hardware.

There's that box in the corner that nobody really knows what it does, but everybody is afraid to unplug it, because they think maybe it does something critical. You can get many more metaphorical boxes in the corner with cloud computing, because you've empowered more people in the organization to do procurement. Procurement is just spinning up a node.

Robert: When I'm traveling in Europe with data roaming turned on, I'll get a message from AT&T telling me that I have very heavy data usage and high costs associated with it. That really doesn't happen with the cloud, but what kinds of alerts like that would you like to see that would help detect those kinds of $5,000 mistakes before they happen?

Jonathan: I think that alerting on the basis of spikes in costs, like you described with the AT&T scenario, would be extremely helpful. I also think that daily or weekly reporting of costs would be extremely valuable. When you drill down into your spend on cloud computing, it can be challenging to figure out exactly where the money is going, when the costs originated, who authorized them, and things like that.

Being able to get a weekly or a daily report of what your spend was and a chart that shows the difference between today and yesterday would go a long way toward helping organizations cut out these kinds of extra costs.
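The spike alert and daily report Jonathan asks for could be as simple as comparing each day's spend against a trailing average. This is a hypothetical sketch; no such alerting existed in the cloud billing tools of the time:

```python
# Flag any day whose cloud bill jumps more than `factor` times the
# trailing `window`-day average: a crude version of the alerting
# discussed above, with made-up parameter names.

def spike_days(daily_costs, window=7, factor=2.0):
    """Return indices of days whose cost exceeds factor x trailing-window mean."""
    flagged = []
    for i in range(window, len(daily_costs)):
        trailing_avg = sum(daily_costs[i - window:i]) / window
        if daily_costs[i] > factor * trailing_avg:
            flagged.append(i)
    return flagged
```

Fed by a daily export of billing data, a check like this would have caught the $5,000 Hadoop mistake within a day instead of at the end of the month.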

Robert: I hear a lot of this stuff first-hand from our Windows Azure customers, and there are definitely a lot of parallels to the mobile phone industry. If you look back to what mobile plans looked like five or 10 years ago, compared to how they have evolved today, they are definitely much more attuned to how users consume. At one time, you had to do a lot of math to figure out what your costs were going to be.

You've also talked about the freemium model, and you posted a great slide about this. When a cloud provider charges by the drink and a SaaS (software-as-a-service) provider wants to charge per user, how do you determine where you're going to draw that free/premium line?

Jonathan: I find pricing interesting. The link between SaaS and freemium and cloud computing is that, in all cases, you're paying for cloud computing resources as you use them. Presumably, if you're running a SaaS or a freemium business, you're collecting money as your users use it. The challenge with freemium is that there's a large percentage of your users that are not paying you. So maybe you're relying on them as a distribution strategy.

You're hoping that your free users will convert to paid, and what that means is that you're starting to pay for computing resources at the beginning, but you're only collecting money once a given user converts to being a paying customer, which might be two or three months out and is only going to happen a certain percentage of the time.

That makes business modeling a little bit more complicated, but it's still much better to use a cloud computing solution where you can spin up more compute resources as you have more users than to have to front load that cost and pay for the users that you hypothetically hope that you'll get.

Robert: You've also talked about cloud advantage of "success-based scaling." Can you elaborate a little bit on that?

Jonathan: What's really powerful about cloud computing is that the cost of failure is dramatically reduced. When the cost of failure is low enough, innovation can happen much more freely. You can do experiments assuming that the vast majority of them are going to fail and that in your portfolio of experiments, one will work. It's only when it works that you start to incur real infrastructure costs.

This is really powerful, because it means that you can try to build a lot of different types of solutions. That means you'll probably get more innovative, creative solutions to the problems coming faster.

Robert: You've also talked about the dangers of storage sprawl with the cloud. Can you talk a little bit about knowing what to store? After all, with big data and distributed-cost processing, you can ask a lot of "I wonder" questions if you've bothered to archive the data.

Jonathan: I think storage fits in the same category as compute, really, in the sense that because there's no hard limit on how much storage you have, it's easy to go overboard and just store everything. It's especially easy to be sloppy and then not know exactly what you're storing and where you stored it.

If you had a conventional disk array, your system administrator would come back to you much earlier and say, "Look, we're running out of space. We need to prune this data and only save the things that are necessary." The constraint of physical hardware forces you to be more disciplined, so in the case of cloud computing, you need to have more sophisticated processes.

You need to address what is saved where, define policies for what data should be saved, and automate the process of removing data from storage when it's no longer needed. That helps you contain your costs and make sure that you're only saving the valuable data.

Both storage and compute resources are becoming cheaper over time, so data is becoming more valuable because the cost of working on it is lower and the insights that come out of it are still worth the same. Therefore, you probably want to save a lot of information, but you still don't want to save everything. You need to make sure that your team is on the same page and is only saving the data that's required.

For example, we save our load balancing logs for a couple of months on the off chance that we'll want to parse through them and understand our traffic patterns. But the log files themselves are just too bulky to save forever on the hypothetical basis that they'll be useful for something someday.
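That kind of retention policy is easy to automate once it's written down. A sketch of a nightly sweep (the file names and the 60-day window are assumptions for illustration):

```python
from datetime import date, timedelta

def expired_logs(log_files, retain_days=60, today=None):
    """Return log file names older than the retention window.
    log_files maps a file name to the date it was written."""
    today = today or date.today()
    cutoff = today - timedelta(days=retain_days)
    return sorted(name for name, written in log_files.items() if written < cutoff)

logs = {
    "lb-2011-01-10.log": date(2011, 1, 10),
    "lb-2011-03-25.log": date(2011, 3, 25),
}
print(expired_logs(logs, today=date(2011, 3, 30)))  # ['lb-2011-01-10.log']
```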

Robert: You're definitely pretty bullish on the cloud, but you've also written about hybrid advantages. Can you talk us through some of those?

Jonathan: The cloud has one huge Achilles heel, which is I/O performance. It is usually very slow to access the disk in a cloud-based solution because, with virtualization, there's another layer of software between you and the disk. So, for example, if you're trying to build a conventional web application, you might need a very high-performance database. At SlideShare, our database server has eight 15K RPM spindles and 32 gigs of memory; it's basically just an I/O monster.

You can't get something like that in the cloud, which means that if you're going to build a really big website that's 100% in the cloud, you have to have a much more complicated back end data model. You have to do all of your sharding from the very beginning. That can be complicated and expensive. I think hybrid architectures are really exciting, where you have a back end database that's a physical machine that's very high performance, which is surrounded by proximate nodes that are cloud computing nodes handling the web application tier, the web server tier, and everything else, except for the data layer where you need to have very high I/O throughput.
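Sharding from the beginning means every data access must first be routed to the right database. A minimal sketch of stable key-based routing (the shard names and the fixed modulo scheme are illustrative; a real system would use a lookup directory so shards can be rebalanced later):

```python
import hashlib

# Hypothetical shard names; a real deployment would map these to hosts.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

def shard_for(user_id: str) -> str:
    """Map a user id to a shard with a stable digest (Python's built-in
    hash() varies between processes, so it can't be used for routing)."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same key always routes to the same shard.
print(shard_for("alice"), shard_for("alice") == shard_for("alice"))
```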

It's interesting to consider who's going to arrive at a really good solution like that first. You could imagine cloud computing vendors like Azure and Amazon renting out access to dedicated hardware on an hourly basis. I'm not sure whether there are plans to offer something like that, but it would certainly meet a very compelling need. On the flip side, aggressive hosting providers are moving rapidly into the cloud computing space. You have companies like Rackspace and SoftLayer who are offering cloud computing more and more in addition to their very mature dedicated hosting offerings.

So the question is: who's going to arrive first at a hybrid computing nirvana where you can get everything from one vendor and it's really good? Nobody's there right now, and that's why at SlideShare, we actually have a hybrid architecture that uses different vendors. We have our dedicated hosting in SoftLayer and we have our cloud computing in Amazon.

Robert: Maybe you can comment on some of the trends you're seeing in the industry. The infrastructure-as-a-service players are starting to move toward becoming more like platform-as-a-service. Then you have platform-as-a-service vendors, us in particular, moving a little bit toward infrastructure-as-a-service.

It seems like we're going to meet somewhere in the middle. I think the distinction between infrastructure and platform as a service is going to go away. We're doing it primarily because we need to make it easier for customers to on ramp into a platform as a service. Any thoughts on how you see the market moving there?

Jonathan: Well, I would agree that there's convergence and that everything is basically becoming platform-as-a-service, because that delivers so much more value to a customer than raw infrastructure-as-a-service. The sysadmin requirements of working with infrastructure-as-a-service are, if anything, higher than using dedicated hosting solutions, because you need to figure out not just how to administer all these servers, but also how to handle the case that they're likely to disappear at any moment because they're virtual computers, rather than physical ones.

I think that platform-as-a-service is going to be a really huge trend in the coming years, as evidenced by Salesforce acquiring Heroku and Amazon Web Services launching Beanstalk and talking about using Engine Yard as a potential platform as a service for their Ruby community. I think that Beanstalk is particularly interesting, because I've been surprised at how long it's taken for there to be really credible platform-as-a-service offerings for Java.

This space has huge market share, and it's been completely underserved relative to Ruby on Rails, for example, which has two excellent platform-as-a-service offerings competing for developer mind share.

Robert: From the enterprise perspective, I've talked to a bunch of architects and senior execs around this issue of cloud adoption. At least within the enterprise, a lot of these organizations just aren't ready to move some of their data to a public cloud. It's been one of the biggest barriers of adoption.

Jonathan: You know, enterprise IT people will be waiting 10 years to use this. As a startup, I can adopt the new good stuff immediately because I don't have hang-ups, and that's a competitive advantage. I don't spend a lot of time worrying about that. I do think there's another trend, though, that is just as big and probably doesn't get a lot of attention. I don't even know what the word for it is, really, but it's offering point solutions to particular application problems.

For example, SendGrid is the vendor we use to send email at SlideShare. It completely outsources the entire technological problem of delivering emails to a bunch of inboxes, doing rate limiting, making sure that there aren't too many spam complaints, all that kind of stuff.

Similarly, Recurly is the provider that we use for handling our recurring billing. That means that we don't have to build our own billing system. We're looking at vendors for other things as well. Video transcoding is a really good example of something that you can just outsource and use on the basis of a REST API that you talk to from a provider.

I don't know what the word is for that, but I think it's a huge trend that has definitely made it easier to build creative, new solutions to problems. Because you don't have to build the entire solution yourself.

Robert: Forrester analysts talk about being a pure cloud provider, a pure cloud application. I know exactly what you're talking about here. We have a number of companies that have architected applications which interface primarily through RESTful APIs; they perform a particular function and offer a whole new set of functionality in a way that leverages the scale-out a cloud platform provides.

I'll give you an example. There's a company called RiskMetrics; I think they're now part of MSCI, which acquired them. They run sophisticated Monte Carlo simulations to analyze portfolios for hedge funds and to look at very complicated instruments like collateralized debt obligations. They'll spin up anywhere from 10 to 20 thousand servers at a time: go in, run the analysis, go back out.

It's just amazing. I've also seen another company called MarginPro in the US doing the same thing. They're evaluating the profitability of a bank's loan in the market. Every night, they're pulling down the rates and running the analysis. Again, they didn't have to make any capex investments to build out all these servers.

Think of how much lower the bar is going to be for startups that don't need that initial round of angel investment to pay for a capex.

Jonathan: Absolutely, and that's exactly what Amazon has talked about from the beginning. Removing the stuff that everybody always has to do and centralizing it in one place. Startups and general businesses using IT can focus on the higher level, and can focus on the incremental added value, rather than the core infrastructure that always has to be done.

Platform as a service is the next jump in those terms. Infrastructures that are accessed via APIs are another big jump in that direction. What it means is that you can rapidly prototype a new idea with very low capital requirements, bring it to market, see what the response is, and then invest in it only if it starts to get traction.

Robert: There's a site called Data Center Map that lets you search for data centers in specific geographic locations. For a global site like SlideShare, how do you balance centralization for efficiency with being close to users for performance?

Jonathan: We centralize our infrastructure, our dynamic page generation, for efficiency. We have one big cluster of physical servers and one big cluster of cloud servers, but the thing to realize about a site like SlideShare is that nine tenths of the traffic is actually downloading content, rather than HTML.

So it's the slides that are the overwhelming majority of the bandwidth load, and all that goes through our CDN infrastructure. If it misses on the CDN, it goes back to our cloud storage infrastructure on Amazon.

So nine tenths of the traffic on SlideShare never, ever hits a server that we're personally responsible for keeping up. It's all handled by Akamai and Amazon. That's tremendously liberating, and it lets you focus while leaving chores like administering huge, constantly growing storage arrays to a third party.

Robert: With less complexity associated with in-house IT, where should startups know that the new complexity will show up?

Jonathan: Interestingly enough, the new complexity for us is on the front end. As we start to explore HTML5 features and build solutions on top of WebSockets, we have to take into account the fact that 95 percent of the browsers out there don't have WebSockets yet, and that number is changing fast. So we're having to be a lot smarter in terms of our front-end coding, and we're having to be a lot more clever in terms of our JavaScripting.

The fact that your infrastructure is dynamic exposes you to the risk of essentially unlimited potential costs. That means you have to be much more careful, and you have to build monitoring systems for that yourself, especially since vendors don't seem to do a very good job of providing those systems right now.

We've written scripts that try to keep track of that cost on a daily basis, and we look at the data pretty carefully. Operations pays a lot of attention to what our current spending is.

That's a definite area of increased complexity. Another megatrend, I think, is server automation and making sure that operations people don't end up doing the same job more than once. That's a best practice in a traditional hosting environment, but in the cloud, it's even more necessary, just because you have computers appearing and disappearing on a continual basis.

You need to be able to have a fully scripted way of creating a computer with a particular role. We use Puppet for that, which is a really great infrastructure for automating the configuration of your servers.

Robert: Is there anything else you'd like to talk about?

Jonathan: One thing I'm personally really excited about is a new SlideShare feature that we are launching next week, called ZipCasting. ZipCasts are easy-to-start online meetings that are completely browser-based.

You can start a ZipCast with one click, and you can invite someone to join a ZipCast with one click, and then you're sharing slides with them, and you're broadcasting video to them. This is a much faster way of doing online collaboration than has traditionally been available, and it's also at a much lower price point. The majority of the features are free. What you pay for is password protection and ad removal.

I'm really excited about the potential for ZipCasts to create a new type of social media experience. We think of it as being Ustream for nerds: a real-time, social-learning, one-to-many experience that is driven by a social media website.

Robert: One thing that I'm still waiting for is more robust collaborative whiteboarding.

Jonathan: That is definitely a pain point for our organization when we're having remote meetings. There is one company that I've heard of that has been working on a whiteboarding app for the iPad, which is pretty cool.

Robert: I feel like what you're doing here would be the perfect service to acquire through the iPad, right? Or through any other tablet technology for that matter.

Jonathan: What we're really waiting for is the front-facing web camera on the iPad 2. Once that comes out, online meetings on the iPad will really pop, because you'll be able to broadcast video and you'll be able to advance slides. That will be really cool.

Robert: Are you guys keeping a close look at Honeycomb and some of the other Android-based tablets like Xoom as well?

Jonathan: It's funny that you should ask that. A lot of the developers in our Delhi office have taken to carrying around these seven inch Android tablets. They're really enjoying them as a lighter way of having a computing device with them for taking notes during meetings and things like that.

We do a lot of testing of our mobile web site on these Samsung Galaxy Tabs, as well as on iPhones, iPads, and everything else. I don't see the tablet market as going 100 percent to Apple, but the Android devices are only starting to come out now. We'll just have to wait some time before we really see what they can do.

Robert: Thanks a lot, and good luck with your February 16th launch.

Jonathan: Thank you.

Cisco Systems published Cisco Announces Intent to Acquire newScale on 3/29/2011:

SAN JOSE, Calif. – March 29, 2011 – Cisco today announced its intent to acquire privately-held newScale, a leading provider of software that delivers a service catalog and self-service portal for IT organizations to select and quickly deploy cloud services within their businesses. Based in San Mateo, Calif., newScale allows commercial and enterprise customers to initiate the provisioning of their own systems and infrastructure on an as-needed basis.


Frequently Asked Questions

Q: Will Cisco continue to offer newScale’s products?

A: Cisco will continue to offer all of newScale’s products and implementation services.

Q: How will newScale’s products be sold and serviced after the acquisition?

A: NewScale’s products will continue to be sold and serviced by Cisco. The newScale sales and services team will be joining Cisco as part of this acquisition.

Q: How will newScale’s customers be affected by this acquisition?

A: Impact to newScale customers will be minimal. Cisco will continue to fulfill the terms of newScale’s current customer contracts. newScale's customers will continue to receive the same level of support, services, and technology that they are accustomed to receiving.

Q: I am a newScale customer today. Will this transaction shift newScale’s attention away from us?

A: Cisco and newScale share a common culture - one of the key attributes being a relentless focus on customer success. We expect to preserve this aspect of the culture on customer focus and build upon it as we scale the business. We will continue to operate as independent entities until the transaction closes.

A propos the above, Tom Nolle asked and answered Cisco and cloud computing: Go with the networking giant? in a 3/29/2011 post to

Of all the cloud computing service providers out there today, there's only one network equipment vendor that can host your cloud -- Cisco Systems. But is there any special value to having a cloud offering from a network vendor, and will Cisco deliver on any of the special values that a network provider might offer? The answer to both questions is "yes," under the right conditions.

Cisco's cloud vision is a three-pillar structure that consists of Unified Computing, Unified Fabric and Unified Network Services. To many enterprises, the greatest value Cisco brings to their cloud needs is how all the elements are collected into a single offering and separated into key components. Each component can then be tuned to match the combination of enterprise needs and the current level of investment in the three areas of the cloud.

What makes Cisco a special provider is the Unified Computing System (UCS) server family, a product line that includes either blade servers with a fabric-integrated chassis and chassis extenders or rackmount servers and a set of multiprotocol switches (the Nexus line) that connect servers and storage. Both Cisco offerings are designed to couple software virtualization and cloud tools (VMware's vSphere and vCloud, for example) and create virtualization-friendly data centers, which are then connected to become cloud data centers. Cisco's UCS blade strategy integrates all data center and network components into a Cisco-created cloud, and the rackmount Nexus-based strategy will allow enterprises to easily include non-Cisco servers.

Choosing a Cisco cloud strategy
So which of these strategies is the best for your enterprise, and are either of them better than what's offered by other vendors?

The first step in answering that question is determining if your current data center, network and server investments are replaceable. If an enterprise recognizes that its current data center is too old, that its servers aren't easily optimized for virtualization (for example, too few CPU cores), and that its aging LAN technology won't easily support integrating storage networking with server networking, there may be real value in refreshing the whole data center, or at least quickly evolving to a new architecture. And Cisco, of course, presents a unique one-stop shopping capability for that kind of upgrade.

The most compelling application for a Cisco cloud offering is one where the data center and network are going to be substantially replaced, or where there will be an entirely new installation. While Cisco's server pricing is comparable to that of major competitors, the UCS fabric integration and management integration is likely superior; this could provide better performance and reduce total cost of ownership (TCO).

Another important question is the extent to which the current LAN and WAN are based on Cisco equipment. Enterprises know that their support staff's skills in managing switches and routers are normally vendor-specific. If your staff is experienced in working with Cisco networks, then a Cisco-dominated cloud will preserve your investment in staff skills development. And even if your current network is a mixture of vendors and your data center a mixture of servers from multiple manufacturers, the harmonization of management and support activity around a single management platform and skill set may be a major savings and a selling point in switching to Cisco.

Determining Cisco's value to you

The value of a common management platform and support skill set is the most significant factor in deciding whether any WAN commitments justify standardizing on Cisco in the rest of the cloud. The value of a tight coupling between the data center network, the servers and any virtualization and cloud software is very clear, but it's not as clear what the technical benefit is to having the same vendor supplying these components and the WAN components. The support value is real, though, and it will be even more beneficial as the size of the WAN and the number of support personnel increase. Large enterprises are more likely to find this particular value proposition compelling than smaller ones.

Enterprises do suggest that where a private cloud will be created from multiple distributed data centers, there's value in getting all of the pieces from a single player. Those types of clouds create the largest number of design and support issues, and a single source of product there would be of great value. The classic stories of finger-pointing are true, but the real benefit is ensuring that complex issues like capacity and performance planning are handled. Complicated issues of that sort grow even more complex when considered across a structure with this many layers of traffic and performance.

Computing plus networking equals clouds; that's a basic equation of cloud computing. The place where network giant Cisco shines is where the integration of the computing and networking elements has to be near-perfect. As a provider of all the products involved, Cisco would have the knowledge to be that kind of integrator.


Tom is president of CIMI Corporation, a strategic consulting firm.

Full Disclosure: I’m a paid contributor to

Christopher Thorpe (@drthorpe) posted Pricing Fail: Amazon EC2 Dedicated Instances on 3/28/2011:

image Amazon’s “Dedicated Instances are Amazon EC2 instances … that run hardware dedicated to a single customer.” Great idea, but their pricing structure discourages all but the biggest customers to use the service – when the real win would be to make it attractive to start your infrastructure on Amazon instead of their competitors.

At Blueleaf, where I’m the CTO and security dude ex officio, all confidential customer data go through dedicated metal in a SAS 70 Level 2 certified, ISO-27000 framework-following data center. As awesome as the cloud is, we don’t take risks even passing usernames and passwords to our trusted third party (we never write them to disk, encrypt our swap, and scrub them from the logs).

When I read this, I thought: "Great! Maybe I can build my infrastructure on EC2, and reap all the benefits of scaling in the cloud, while still knowing the bits that fly through our encrypted connections aren't stored anywhere that anyone else shares." (Let's ignore those valid concerns about storage on AWS for now.)

Then I looked at the pricing table. An on-demand large instance, with slightly better CPU performance than we have now and a similar memory footprint, costs $302.40 in a 30-day month, over 40% less than what we’re currently paying. If I get a dedicated reserved instance for a year, I can cut it down to an average of $245/mo – less than half the price.

Then I looked again:


Pay only for what you use with no long-term commitments. Dedicated Instance pricing has two components: (1) an hourly per instance usage fee and (2) a dedicated per region fee (note that you pay this once per hour regardless of how many Dedicated Instances you’re running).

Dedicated Per Region Fee
  • $10 per hour – An additional fee is charged once per hour in which at least one Dedicated Instance of any type is running in a Region.

Excuse me? $10 per hour as an "additional fee"? That's ~$7,200 per month. That basically means I'm going to have to scale to more than 20 machines before this possibly becomes cost effective versus paying for managed servers at my current provider (assuming they don't give me a discount at that scale).

In my view, Amazon has their pricing backwards. They’ve put up a gigantic barrier to my startup using this service. It means that as I’m scaling up to 10 or so, their dedicated instances are cost prohibitive – so I’ll go build my infrastructure elsewhere. And once I’ve done that, the switching costs go up dramatically, meaning that I’ll probably have to scale much higher than 20 instances before it makes sense to switch. And Amazon being there gives me leverage to drive down the pricing of my current provider, which I’m likely to try to do long before I imagine moving a couple of dozen servers I depend on.
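The break-even arithmetic behind that "more than 20 machines" estimate can be sketched directly (the $245 dedicated-instance and $520 managed-server monthly figures are assumptions for illustration, in the ballpark of the numbers in the post):

```python
# All monthly figures are assumptions for illustration.
REGION_FEE_MONTHLY = 10 * 24 * 30      # $10/hr flat dedicated fee, 30-day month
DEDICATED_PER_INSTANCE = 245.0         # assumed reserved dedicated instance, per month
MANAGED_PER_SERVER = 520.0             # assumed managed-hosting server, per month

def break_even_instances():
    """Smallest fleet size at which dedicated instances plus the flat
    region fee become cheaper than the same count of managed servers."""
    n = 1
    while DEDICATED_PER_INSTANCE * n + REGION_FEE_MONTHLY >= MANAGED_PER_SERVER * n:
        n += 1
    return n

print(break_even_instances())  # 27
```

With these assumed prices the flat region fee only amortizes at 27 instances, consistent with the post's "more than 20 machines" estimate.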

Now, I understand that we’re a special case and most startups probably don’t care about sharing a machine with other sandboxed VM’s. But I know we’re not alone there. This seems like a missed opportunity to get new customers to establish the cornerstone of their infrastructure on your platform, ensuring a sticky relationship for a long time to come.


<Return to section navigation list>