Friday, February 18, 2011

Windows Azure and Cloud Computing Posts for 2/17/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 2/18/2011 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Azure Blob, Drive, Table and Queue Services

• Joe Giardino of the Windows Azure Storage Team delivered a Windows Azure Blob MD5 Overview on 2/17/2011:

The Windows Azure Blob service provides mechanisms to ensure data integrity at both the application and transport layers. This post details these mechanisms from the service and client perspectives. MD5 checking is optional on both PUT and GET operations; however, it provides a convenient way to ensure data integrity across the network when using HTTP. Because HTTPS already provides transport-layer security, additional MD5 checking over HTTPS would be redundant and is not needed.

To ensure data integrity, the Windows Azure Blob service uses MD5 hashes of the data in a couple of different ways. It is important to understand how these values are calculated, transmitted, stored, and eventually enforced in order to design your application to utilize them appropriately for data integrity.

Please note, the Windows Azure Blob service provides a durable storage medium and uses its own integrity checking for stored data. The MD5 values used when interacting with an application are provided for checking the integrity of the data as it is transferred between the application and the service via HTTP. For more information regarding the durability of the storage system, please refer to the Windows Azure Storage Architecture Overview.

The following table shows the Windows Azure Blob service REST APIs and the MD5 checks provided for them:

| REST API | MD5 value | Validated by |
| --- | --- | --- |
| Put Blob | MD5 value of blob's bits | Full blob |
| Put Blob | MD5 value of blob's bits | Full blob; if x-ms-blob-content-md5 is present, Content-MD5 is ignored |
| Put Block | MD5 value of block bits | Validated prior to storing the block |
| Put Page | MD5 value of page bits | Validated prior to storing the page |
| Put Block List | MD5 value of blob's bits | Client on subsequent download; stored as the Content-MD5 blob property to be downloaded with the blob for client-side checks |
| Set Blob Properties | MD5 value of blob's bits | Client on subsequent download; sets the blob Content-MD5 property |
| Get Blob | MD5 value of blob's bits | Returns the Content-MD5 property if one was stored/set with the blob |
| Get Blob (range) | MD5 value of blob's range bits | If the client specifies x-ms-range-get-content-md5: true, the Content-MD5 header is dynamically calculated over the range of bytes requested; restricted to range requests <= 4 MB |
| Get Blob Properties | MD5 value of blob's bits | Returns the Content-MD5 property if one was stored/set with the blob |

Table 1: REST API MD5 Compatibility

Service Perspective

From the Windows Azure Blob Storage service perspective, the only MD5 values that are explicitly calculated and validated on each transaction are the transport-layer (HTTP) MD5 values. MD5 checking is optional on both PUT and GET operations. (As noted above, HTTPS already provides transport-layer security, so additional MD5 checking is not needed over HTTPS.) We will discuss two separate MD5 values that provide checks at different layers:

  • PUT with Content-MD5: When a Content-MD5 header is specified, the storage service calculates the MD5 of the data it receives and compares it with the Content-MD5 that was sent. If the two hashes do not match, the operation fails with error code 400 (Bad Request). These values are transmitted via the Content-MD5 HTTP header. This validation is available for PutBlob, PutBlock, and PutPage. Note that when uploading a block, page, or blob, the service returns the Content-MD5 HTTP header in the response, populated with the MD5 it calculated for the data received.
  • PUT with x-ms-blob-content-md5: The application can also set the Content-MD5 property that is stored with a blob by passing the x-ms-blob-content-md5 header; its value is stored as the Content-MD5 header returned on subsequent GETs for the blob. It can be set when using PutBlob, PutBlockList, or SetBlobProperties. If a user provides this value on upload, all subsequent GET operations return this header with the client-provided value. The x-ms-blob-content-md5 header was introduced for scenarios where the HTTP request content is not fully indicative of the actual blob data, such as PutBlockList: there, the Content-MD5 header provides transactional integrity for the message contents (the block list in the request body), while the x-ms-blob-content-md5 header sets the service-side blob property. To reiterate: if an x-ms-blob-content-md5 header is provided, it supersedes the Content-MD5 header on a PutBlob operation; for PutBlock and PutPage operations it is ignored.
  • GET: On a subsequent GET operation, the service populates the Content-MD5 HTTP header if a value was previously stored with the blob via PutBlob, PutBlockList, or SetBlobProperties. For range GETs, an optional x-ms-range-get-content-md5 header can be added to the request. When this header is set to true and specified together with the Range header, the service dynamically calculates an MD5 for the range and returns it in the Content-MD5 header, as long as the range is less than or equal to 4 MB in size. If this header is specified without the Range header, or set to true when the range exceeds 4 MB, the service returns status code 400 (Bad Request).
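Concretely, the Content-MD5 value is just the base64-encoded MD5 digest of the request body. A minimal sketch, independent of any Azure client library (the header names come from the REST API described above; the blob contents and the request itself are assumed):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Value of the Content-MD5 HTTP header: the base64-encoded
    MD5 digest of the request body."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

# Headers for a hypothetical PutBlob request over HTTP
blob_bits = b"example blob contents"
headers = {
    # Checked by the service against the bytes it receives
    "Content-MD5": content_md5(blob_bits),
    # Stored as the blob's Content-MD5 property for later GETs
    "x-ms-blob-content-md5": content_md5(blob_bits),
}
```

On a PutBlob both headers carry the same digest of the full blob; as noted above, when both are present the service stores the x-ms-blob-content-md5 value.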
Client Perspective

We have already discussed above how the Windows Azure Blob service can provide transport-layer integrity checking via the Content-MD5 HTTP header, or transport security via HTTPS. In addition, the client can store and manually validate MD5 hashes of the blob data at the application layer. The Windows Azure Storage Client library provides this calculation functionality via the exposed object model and relevant abstractions such as BlobWriteStream.

Storing Application layer MD5 when Uploading Blobs via the Storage Client Library

When utilizing the CloudBlob convenience-layer methods, in most cases the library will automatically calculate and transmit the application-layer MD5 value. However, there are exceptions to this behavior when a call to an upload method results in:

  • A single PUT operation to the Blob service, which occurs when the source data is smaller than CloudBlobClient.SingleBlobUploadThresholdInBytes.
  • A parallel upload (length > CloudBlobClient.SingleBlobUploadThresholdInBytes and CloudBlobClient.ParallelOperationThreadCount > 1).

In both of the above cases, no MD5 value is passed in to be checked, so if the client requires data-integrity checking in these scenarios it needs to use HTTPS. (HTTPS can be enabled when constructing a CloudStorageAccount, via the useHttps constructor parameter, or by specifying https as part of the baseAddress when manually constructing a CloudBlobClient.)

All other blob upload operations from the convenience layer in the SDK send MD5 values that are checked at the Blob service.
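For the upload methods that follow the threshold rule, the behavior above reduces to a simple predicate; here is a sketch in plain Python (the parameter names mirror the CloudBlobClient properties described above; this illustrates the rule and is not library code):

```python
def sends_md5(length: int, single_blob_upload_threshold: int,
              parallel_operation_thread_count: int) -> bool:
    """True when the convenience layer sends an application-layer MD5:
    the blob is large enough to be uploaded as blocks, and the upload
    is not parallelized."""
    return (length >= single_blob_upload_threshold
            and parallel_operation_thread_count == 1)

FOUR_MB = 4 * 1024 * 1024
assert not sends_md5(1024, FOUR_MB, 1)         # single PUT: no MD5 sent
assert not sends_md5(8 * FOUR_MB, FOUR_MB, 4)  # parallel upload: no MD5 sent
assert sends_md5(8 * FOUR_MB, FOUR_MB, 1)      # single-threaded block upload
```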

In addition to the exposed object methods, you can also provide the x-ms-blob-content-md5 header via the Protocol layer on a PutBlob or PutBlockList request.

The convenience methods used to upload blobs support sending MD5 checks as follows: one of them always sends the MD5 (note: that function is not currently supported for PageBlob), while each of the other upload methods sends the MD5 only if:

  • Length is >= CloudBlobClient.SingleBlobUploadThresholdInBytes, AND
  • CloudBlobClient.ParallelOperationThreadCount == 1

Table 2: Blob upload methods MD5 compatibility

Validating Application Layer MD5 when downloading Blobs via the Storage Client Library

The CloudBlob download methods do not provide application-layer MD5 validation; as such, it is up to the application to verify the returned Content-MD5 against the data returned by the service. If an application-layer MD5 value was specified on upload, the Windows Azure Storage Client Library will populate it in CloudBlob.Properties.ContentMD5 on any download (i.e., DownloadText, DownloadByteArray, DownloadToFile, DownloadToStream, and OpenRead).

The example below shows how a client can validate the blob's MD5 hash once all the data is retrieved.


// Initialization
string blobName = "md5test" + Guid.NewGuid().ToString();
long blobSize = 8 * 1024 * 1024;

StorageCredentialsAccountAndKey creds = 
        new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount account = new CloudStorageAccount(creds, false);
CloudBlobClient bClient = account.CreateCloudBlobClient();

// Set CloudBlobClient.SingleBlobUploadThresholdInBytes, all blobs above this 
// length will be uploaded using blocks
bClient.SingleBlobUploadThresholdInBytes = 4 * 1024 * 1024;

// Create Blob Container 
CloudBlobContainer container = bClient.GetContainerReference("md5blobcontainer");
container.CreateIfNotExist();
Console.WriteLine("Validating the Container");

// Populate Blob Data
byte[] blobData = new byte[blobSize];
Random rand = new Random();
rand.NextBytes(blobData);

// Upload Blob
CloudBlob blobRef = container.GetBlobReference(blobName);

// Any upload method will work here: byte array, file, text, stream
blobRef.UploadByteArray(blobData);

// Download will re-populate the client MD5 value from the server
byte[] retrievedBuffer = blobRef.DownloadByteArray();

// Validate MD5 Value
var md5Check = System.Security.Cryptography.MD5.Create();
md5Check.TransformBlock(retrievedBuffer, 0, retrievedBuffer.Length, null, 0);     
md5Check.TransformFinalBlock(new byte[0], 0, 0);

// Get Hash Value
byte[] hashBytes = md5Check.Hash;
string hashVal = Convert.ToBase64String(hashBytes);

if (hashVal != blobRef.Properties.ContentMD5) 
     throw new InvalidDataException("MD5 Mismatch, Data is corrupted!");

Figure 1: Validating a blob's MD5 value

A note about Page Blobs

Page blobs are designed to provide a durable storage medium that can sustain a high rate of IO. Data can be accessed in 512-byte pages, allowing a high rate of non-contiguous transactions to complete efficiently. If HTTP needs to be used with MD5 checks, the application should pass in the Content-MD5 on PutPage, and then use the x-ms-range-get-content-md5 header on each subsequent GetBlob using ranges less than or equal to 4 MB.
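Because MD5-checked range GETs are capped at 4 MB and pages are 512 bytes, a download loop has to split a large page blob into ranges of at most 4 MB. A small sketch of that chunking arithmetic (the two limits come from the text above; the function is illustrative, not part of any SDK):

```python
PAGE_SIZE = 512
MAX_MD5_RANGE = 4 * 1024 * 1024  # largest range the service will MD5 on a GET

def md5_checked_ranges(blob_size: int):
    """Yield (start, length) ranges covering the blob, each small enough
    for x-ms-range-get-content-md5 to be honored by the service."""
    if blob_size % PAGE_SIZE != 0:
        raise ValueError("page blobs are sized in 512-byte pages")
    start = 0
    while start < blob_size:
        length = min(MAX_MD5_RANGE, blob_size - start)
        yield start, length
        start += length

# e.g. an 8 MB page blob splits into two 4 MB MD5-checked GETs
```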


Currently the convenience layer of the Windows Azure Storage Client Library does not support passing in MD5 values for PageBlobs, nor returning Content-MD5 when getting PageBlob ranges. As such, if your scenario requires data-integrity checking at the transport level, it is recommended that you use HTTPS, or that you utilize the Protocol layer and add the additional Content-MD5 header.

In the following example we will show how to perform page blob range GETs with an optional x-ms-range-get-content-md5 via the protocol layer in order to provide transport layer security over HTTP.


// Initialization
string blobName = "md5test" + Guid.NewGuid().ToString();
long blobSize = 8 * 1024 * 1024;

// Must be divisible by 512
int writeSize = 1 * 1024 * 1024;

StorageCredentialsAccountAndKey creds = 
    new StorageCredentialsAccountAndKey(AccountName, AccountKey);
CloudStorageAccount account = new CloudStorageAccount(creds, false);
CloudBlobClient bClient = account.CreateCloudBlobClient();
bClient.ParallelOperationThreadCount = 1;

// Create Blob Container 
CloudBlobContainer container = bClient.GetContainerReference("md5blobcontainer");
Console.WriteLine("Validating the Container");

int uploadedBytes = 0;
// Upload Blob
CloudPageBlob blobRef = container.GetBlobReference(blobName).ToPageBlob;

// Populate Blob Data
byte[] blobData = new byte[writeSize];
Random rand = new Random();
rand.NextBytes(blobData);
MemoryStream retStream = new MemoryStream(blobData);

// Create the page blob, then upload it one write at a time
blobRef.Create(blobSize);

while (uploadedBytes < blobSize)
{
    blobRef.WritePages(retStream, uploadedBytes);
    uploadedBytes += writeSize;
    retStream.Position = 0;
}

HttpWebRequest webRequest = BlobRequest.Get(
                                        blobRef.Uri,        // URI
                                        90,                 // Timeout
                                        null,               // Snapshot (optional)
                                        1024 * 1024,        // Start Offset
                                        3 * 1024 * 1024,    // Count 
                                        null);              // Lease ID ( optional)

webRequest.Headers.Add("x-ms-range-get-content-md5", "true");
WebResponse resp = webRequest.GetResponse();

Figure 2: Transport Layer security via optional x-ms-range-get-content-md5 header on a PageBlob


This article has detailed various strategies for utilizing MD5 values to provide data integrity. As in many cases, the correct solution depends on your specific scenario.

We will be evaluating this topic in future releases of the Windows Azure Storage Client Library as we continue to improve the functionality offered. Please leave comments below if you have questions.

<Return to section navigation list> 

SQL Azure Database and Reporting

The Microsoft Download Center announced the availability of Microsoft JDBC Driver 3.0 for SQL Server and SQL Azure on 2/17/2011:

Brief Description

Download the SQL Server JDBC Driver 3.0, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and above. This version of SQL Server JDBC Driver 3.0 adds support for SQL Azure. [Emphasis added.]


In our continued commitment to interoperability, Microsoft has released a new Java Database Connectivity (JDBC) driver. The SQL Server JDBC Driver 3.0 download is available to all SQL Server users at no additional charge, and provides access to SQL Azure, SQL Server 2008 R2, SQL Server 2008, SQL Server 2005 and SQL Server 2000 from any Java application, application server, or Java-enabled applet. This is a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and above.

This release of the JDBC Driver is JDBC 4.0 compliant and runs on the Java Development Kit (JDK) version 5.0 or later. It has been tested against major application servers including IBM WebSphere, and SAP NetWeaver.

Note: By downloading the SQL Server JDBC Driver 3.0, you are accepting the terms and conditions of the End-User License Agreement (EULA) for this component. Please review the End-User License Agreement (EULA) located on this page and print a copy of the EULA for your records.


System Requirements
  • Supported Operating Systems: Linux; Unix; Windows 7; Windows Server 2003; Windows Server 2008 R2; Windows Vista; Windows XP
  • The list above is an example of some of the supported operating systems. The JDBC driver is designed to work on any operating system that supports the use of a Java Virtual Machine (JVM). However, only Sun Solaris, SUSE Linux, and Windows XP or later operating systems have been tested.
  • Java Development Kit: 5.0 or later
  • SQL Azure or SQL Server 2008 R2 or SQL Server 2008 or SQL Server 2005 or SQL Server 2000

• Beth Massi (@bethmassi) explained Step-by-Step: Installing SQL Server Management Studio 2008 Express after Visual Studio 2010 in a 2/18/2011 post:

One of the first things I always do after installing Visual Studio is to install SQL Server Management Studio (SSMS). Visual Studio 2010 installs SQL Server 2008 Express on your machine but doesn’t include SSMS. Although you can use Visual Studio to create/connect/design databases, I like having SSMS around for advanced management. I recall SSMS for SQL Server 2005 was a simple install; unfortunately, they threw the kitchen sink into the SSMS 2008 installer, and I’m always confused about which buttons I need to push to get it to do what I want. So I’m writing up this blog post for two reasons: 1) so I remember the steps, and 2) so you can be less frustrated :-) (BTW, a birdie tells me that the SQL team is looking at making this installer much simpler in the future. Hooray!)

Okay the first thing you need is to make sure you get the right version of SSMS. If you installed Visual Studio 2010 then you will need the 2008 version (not R2).

STEP 1: Download Microsoft® SQL Server® 2008 Management Studio Express and select either SQLManagementStudio_x64_ENU.exe or SQLManagementStudio_x86_ENU.exe depending on your machine’s OS bit-ness. I’m running Windows 7-64bit so I’ll be detailing the steps as it pertains to that OS.

STEP 2: Make sure you’re logged in as an administrator of the machine then right-click on the exe you downloaded and select “Run as Administrator”. If you’re on Windows 7 then you’ll get a compatibility warning. Click past it for now to continue with the install. Later you’ll need to apply SQL 2008 Service Pack 2.

STEP 3: You should now see the “SQL Server Installation Center” window. Yes it looks scary. Select the “Installation” tab.


STEP 4: Select “New SQL Server stand-alone installation or add features to an existing installation”. It will then run a rule check. Make sure there are no failures and then click OK.


STEP 5: The next step is misleading. The Setup Support Files window looks like it’s doing something and stuck on “Gathering user settings.” It’s actually waiting for you to click the Install button! Doh!


STEP 6: Another rule check. You’ll probably end up with a Windows Firewall warning this time. If you want to enable remote access to SQL Server you’ll need to configure the firewall later. Since I’m using SQL Server Express for development purposes on this machine only, I won’t need to worry about that. Click Next.


STEP 7: Okay here is the step I always mess up because it’s just not intuitive at all. On the Installation Type window you have a choice between “Perform a new installation of SQL Server 2008” OR “Add features to an existing instance of SQL Server 2008”. You need to select new installation, NOT add features. I know I know, totally weird. You would think that since you just want to add SSMS that it would be Add features to existing instance – I mean I don’t want a new instance, just the dang tools. Sigh. Click Next.


STEP 8: Next you get the Product Key window. You obviously don’t need a product key for SQL Server Express since it’s free so just click Next.


STEP 9: Accept the License Terms and click Next.


STEP 10: Okay now for the window we’ve all been waiting for - Feature Selection. Check off “Management Tools – Basic” and then click Next.


STEP 11: Verify your disk space requirements and click Next.


STEP 12: Help Microsoft improve SQL Server features and services by selecting to send usage and error reports to them (or not). Click Next.


STEP 13: Another quick rule check runs. Click Next.


STEP 14: Now it looks like we’re ready to install. Click the Install button.


The install will kick off and will take about 5 minutes to complete.

STEP 15: Once the installation completes, click the Next button again.


STEP 16: Complete! Click the Close button and you should be all set.


STEP 17: Fire up SQL Server Management Studio! You should now see it in your Programs under Microsoft SQL Server 2008. Make sure you run it as an administrator for access to all the features.


And don’t forget at some point to install the latest SQL Server 2008 Service Pack. I hope this helps people who have just installed Visual Studio 2010 but also want to install SQL Server Management Studio. I know I’ll be referring to my own post on this when I need to do it again :-)

• Mark Bower compared costs per gigabyte per month in his Choosing between SQL Azure and Windows Azure Storage post of 2/16/2011:

I was having a chat with an Azure architect at Microsoft last week, and he pointed out that SQL Azure storage costs 66x (yes, sixty-six times) as much as Table Storage.

Not quite believing it, I went back to check the pricing. And sure enough, it’s true. In fact, the ratio is probably even higher, as you have to buy SQL Azure in chunks of 1, 5, 10, 20 GB, etc.

If you have 15 GB of data in SQL Azure, you need a 20 GB database @ ~$200/month.

15 GB of data in Table Storage = 15 * $0.15 = $2.25 / month.

That would make SQL Azure around 89x as expensive as Table Storage.


That’s not quite the full story, though. With Table Storage the idea is that you would often not normalize your data, so data is likely to be stored multiple times. Additionally, there are transactional costs associated with Table Storage ($0.01 per 10,000 transactions). Both factors make estimating cost trickier.

As a guesstimate starting point, let’s say I need to store each data item 3 times in Table Storage, meaning that 15 GB of normalized data gives me a requirement of 45 GB of Table Storage. If my system has 1,000 users each performing 10,000 transactions / month (10,000,000 transactions in total), then my total Table Storage costs would be:

(45 * $0.15) + (1,000 * $0.01) = $6.75 + $10.00 = $16.75 / month

That’s about a 12x ratio (and is based on a lot of assumption too).
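The arithmetic above is easy to check; here is a quick sketch using the prices quoted in the post (all of these figures are the post's early-2011 assumptions, not current pricing):

```python
TABLE_STORAGE_PER_GB = 0.15   # $/GB/month
TABLE_TX_PRICE = 0.01         # $ per 10,000 transactions
SQL_AZURE_20GB = 200.00       # ~$/month for a 20 GB SQL Azure database

def table_storage_monthly(gb: float, transactions: int) -> float:
    """Monthly Table Storage cost: per-GB storage plus transaction fees."""
    return gb * TABLE_STORAGE_PER_GB + (transactions / 10_000) * TABLE_TX_PRICE

# 15 GB of normalized data stored 3x = 45 GB;
# 1,000 users x 10,000 transactions/month = 10,000,000 transactions
cost = table_storage_monthly(45, 10_000_000)   # -> 16.75
ratio = SQL_AZURE_20GB / cost                  # -> roughly 12x
```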

There’s probably a need for a simple spreadsheet to help look at the trade-offs here, but as a rule of thumb the pricing model from Microsoft is giving us some strong guidance: Windows Azure architects should look to put data in Windows Azure Storage first and SQL Azure conservatively.

The two scenarios I see when I would prefer SQL Azure over Table Storage are:

  • When I need SQL transactions – i.e., the ability to group together a bunch of database actions and commit them as a single atomic unit. There’s no concept of this type of transaction in Table Storage.
  • When I need reporting – especially enabling end-users to design their own queries and reports. In this case I don’t know at design time how the user will want to query the data, and will need to rely on SQL Server’s ability to query across tables to enable this.

Of course, SQL Azure’s per-month cost includes SQL Server 2008 compute capability. If you want to manipulate the content in Azure table, blob or queue storage, you need at least one Very Small compute instance at US$0.05 per hour or US$36.00 per month.

<Return to section navigation list> 

MarketPlace DataMarket and OData

Marcelo Lopez Ruiz described Handling errors in datajs in a 2/18/2011 post:

Today I want to talk a bit about how we handle errors in datajs.

Every operation that datajs runs asynchronously, whether a read or a general request, has both a success and an error callback. These can be passed in explicitly when the function is invoked, and this is pretty much always done with the success callback.

Often, however, there is little to do in the face of an error other than letting the user know that things didn't go well, perhaps offering a message with more details, and letting the user retry or fix something before retrying. At that point, writing an error callback for every function invocation is a bit of a pain, so datajs offers OData.defaultError.

This field holds a function that is used when an explicit error callback is omitted from a call; it is passed the error. By default, it simply throws the error object, which, depending on your browser and how you're running it, may trigger the debugger and let you look at why things failed.

As shown in the walk-through from a few days ago, however, it's a very convenient location to re-enable controls (if you disabled them while waiting for a server response), and display a message to the user.

OData.defaultError = function (err) {
    // Re-enable controls that were disabled while waiting for the server
    $("button").attr("disabled", false);

    // Display a message to the user (the "#error" element is illustrative)
    $("#error").text(err.message);
};

• Glenn Gailey (@ggailey777) explained MERGE Requests and the WCF Data Services Client in a 2/18/2011 post:

I came across an interesting blog post by Alex van Beek the other day, in which he demonstrates how to reduce the payload size of MERGE requests to an OData service when using the WCF Data Services client.

The OData protocol defines an HTTP MERGE action that enables a client to request updates to only specific properties of an entity, which sounds neat and perhaps a way to save bandwidth when updating an entity. But, as Alex points out, the WCF Data Services client library always includes all of the properties of an entity (at least the ones it knows about) in the request.

For example, the following is the payload of a MERGE request sent to a Northwind-based data service.

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<entry xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
       xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
       xmlns="http://www.w3.org/2005/Atom">
  <category scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme"
    term="NorthwindModel.Customer" />
  <title />
  <author>
    <name />
  </author>
  <content type="application/xml">
    <m:properties>
      <d:Address>Obere Str. 57</d:Address>
      <d:CompanyName>Alfreds Futterkiste</d:CompanyName>
      <d:ContactName>New Name</d:ContactName>
      <d:ContactTitle>Sales Representative</d:ContactTitle>
      <d:Region m:null="true" />
    </m:properties>
  </content>
</entry>
In this example, even though the only property that I changed on the client was ContactName, all of the properties were included by the client in the MERGE request. Why does the client include all of the properties in the MERGE request payload, even when those properties haven’t changed? As Alex correctly points out, it is because—unlike the ObjectContext in the ADO.NET Entity Framework—the DataServiceContext doesn’t have an ObjectStateManager-like component that tracks property-level changes on the client. All the client can do when an object has been marked as changed is send all of the properties that it knows about to the data service. (From the client’s perspective, the main difference between MERGE and PUT is that because a PUT is essentially an entity replace, any property values that the client doesn't know about are reset to their default values.) These MERGE request payloads can get large when the entity has BLOB properties (although you should really consider streaming BLOBs to an OData service—for more info see my Data Services Streaming Provider series).

By wrapping the DataServiceContext to implement his own property-level tracking, Alex’s example does reduce the payload of MERGE requests (PATCH requests, which will likely be supported in the future) to an OData service. If you think that this functionality should be added to the official WCF Data Services client, you can vote for it now at the WCF Data Services Feature Suggestions site.
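The idea behind Alex's wrapper, stripped of the C# specifics, is to snapshot an entity's property values and serialize only the properties that have since changed when building the MERGE body. A language-neutral sketch of that property-level tracking (illustrative only; the real implementation wraps DataServiceContext, and the class and sample values here are assumptions):

```python
class PropertyTracker:
    """Snapshot an entity's values so a MERGE payload can carry only deltas."""

    def __init__(self, entity: dict):
        self._snapshot = dict(entity)  # values as last seen from the service
        self.entity = entity

    def changed_properties(self) -> dict:
        """Properties whose current value differs from the snapshot."""
        return {name: value for name, value in self.entity.items()
                if name not in self._snapshot
                or self._snapshot[name] != value}

customer = {"ContactName": "Maria Anders",
            "CompanyName": "Alfreds Futterkiste"}
tracker = PropertyTracker(customer)
customer["ContactName"] = "New Name"
delta = tracker.changed_properties()  # only ContactName goes into the MERGE
```

With tracking like this, the MERGE body above would shrink to a single d:ContactName element instead of every known property.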

I gave it 3 votes.

Marcelo Lopez Ruiz offered a datajs DateTime sample in a 2/17/2011 post:

Building on the dev notes I recently posted, here is a short sample page you can use to try out the new datajs-0.0.2 library. Just download the files to your local disk and add this HTML file there; then you can point your browser to it. The event-handling style is pretty horrible, but it's as short as I could write it without going into how to hook up events, which you probably know anyway (if not, leave me a note and I'll get to it in a future post).

<!-- saved from url=(0014)about:internet -->
<title>DateTime Sample</title>
<script type="text/javascript" src='datajs-0.0.2.min.js'></script>
<button onclick='
OData.jsonHandler.recognizeDates = !OData.jsonHandler.recognizeDates;
'>Toggle recognizeDates</button>
<button onclick='
if (OData.defaultMetadata.length > 0) {
  // Reset metadata.
  OData.defaultMetadata = [];
} else {
  // Push a schema object with just one declared entity type and property.
  OData.defaultMetadata.push(
    { namespace: "NetflixCatalog.Model",
      entityType: [ {
        name: "Title", property: [
          { name: "DateModified", type: "Edm.DateTime" }
        ] } ] });
}
'>Toggle metadata</button>
<button onclick='
var target = document.getElementById("outputDiv");
target.innerHTML =
  "defaultMetadata #: " + OData.defaultMetadata.length + "<br />" +
  "recognizeDates: " + OData.jsonHandler.recognizeDates + "<br />" +
  "<br />";
OData.defaultHttpClient.enableJsonpCallback = true;
OData.read("http://odata.netflix.com/Catalog/Titles?$top=10",
  function (data) {
    for (var n in data.results) {
      var i = data.results[n];
      target.appendChild(
        document.createTextNode(i.Name + " - " + i.DateModified));
      target.appendChild(document.createElement("br"));
    }
  });
'>Read titles</button>
<div id='outputDiv'></div>

Let's analyze this a bit.

  • The first button simply toggles the 'recognizeDates' value on the JSON handler. It defaults to false, which means that strings won't be pattern-matched to Dates.
  • The second button toggles a bit of metadata, just enough to recognize the DateModified property of Title entities as a Date.
  • The last button simply enables JSONP in the library and fires off a request for a few Netflix titles, then outputs the settings along with the title names and their DateModified values.

Depending on the values you set up using the buttons, you'll see output similar to the following:

defaultMetadata #: 0
recognizeDates: false
Red Hot Chili Peppers: Funky Monks - /Date(1282379236000)/
Rod Stewart: Storyteller 1984-1991 - /Date(1254891982000)/

The above shows string values - there is no metadata to tell us they are Date values, and recognizeDates is false (the default).

defaultMetadata #: 0
recognizeDates: true
Red Hot Chili Peppers: Funky Monks - Sat Aug 21 01:27:16 PDT 2010
Rod Stewart: Storyteller 1984-1991 - Tue Oct 6 22:06:22 PDT 2009

The above shows Date values, because I turned recognizeDates on. Note, however, that if a string value that merely looked like a date had sneaked in where a Date wasn't supposed to be, it would have shown up as a Date too.

defaultMetadata #: 1
recognizeDates: false
Red Hot Chili Peppers: Funky Monks - Sat Aug 21 01:27:16 PDT 2010
Rod Stewart: Storyteller 1984-1991 - Tue Oct 6 22:06:22 PDT 2009

The above shows Date values, because I included metadata. This is the preferred solution to get Date values.

defaultMetadata #: 1
recognizeDates: true
Red Hot Chili Peppers: Funky Monks - Sat Aug 21 01:27:16 PDT 2010
Rod Stewart: Storyteller 1984-1991 - Tue Oct 6 22:06:22 PDT 2009

The above shows Date values because I included metadata. Even though recognizeDates is true, when we have metadata for an entity we never try to pattern-match its values.

Nicole Hemsoth described Big Data, Big Demand: Navigating the Cloud Storage Landscape in a 2/17/2011 post to the HPC in the Cloud blog:

A rapid-fire search for the terms "big data" and "cloud storage" will reveal no shortage of options for users in need of a secure place to store and quickly access critical information. As the data deluge continues to send a never-ending swell of information into already overstuffed datacenters, an increasing number of organizations are looking to the cloud to handle their massive demands—and for some, their processing needs as well.

A number of enterprise users are completely reliant on massive data wells to drive their businesses, and like high performance computing users, they have unique concerns when it comes to storage. However, given that the space is rather new for an ever-diversifying breed of applications, storage concerns are overlooked in many conversations about big data.

To clear the clouds that muddle the big-picture view of the storage landscape, we asked a number of big data storage experts how users should evaluate cloud storage options and what the future of cloud storage looks like, both for those with massive volumes of data to contend with and for high performance computing users.

This week technology leaders from Panasas, EMC, NetApp, Cirtas, TwinStrata, Cleversafe, Virident, and Infineta Systems each weighed in on a particular element of cloud storage--from the nature of public clouds to scalability and cost concerns, to larger trends that are affecting the decision-making process.

Big Data and the Public Cloud Storage Problem

Garth Gibson, founder and CTO of Panasas, provided a very direct perspective on cloud storage, especially on public cloud storage. Gibson claims that the fundamental challenge with public cloud resources is that despite their focus on computation, the important element of storage is considered only as an afterthought.

“An opposing perspective is what is necessary for data-intensive HPC workloads. Here, big data is the true asset and computation is just part of the infrastructure. So instead of looking to potential big data applications to justify buying into the latest market hype on utility cloud computing in a far off state or country, the aggressive innovator focuses on raising top-line revenue and will spec the infrastructure (compute clusters) to the needs of the asset (big data).  Some leasing company nearby, working with a colo facility and an integrator pouncing on all things posing as cloud software, will be more than happy to build and operate the private cloud appropriate for your Big Data.  And at a reasonable price to boot.

Let’s get it straight. Understand your big data -- what is it, where is it, what is needed to extract value out of it. And then build around it the private cloud best suited for it.”

Gibson remarked on the paradoxical problem with storing large amounts of data, stating that big data is “difficult and time-consuming to create, awkward to manage, expensive and slow to move, critical to gaining a competitive edge, and far from technically mature.”

This issue has been echoed from a number of quarters; the data is the central advantage for many businesses, yet it is also incredibly burdensome. Without mature solutions, this creates a giant problem for many enterprise users. Furthermore, as Gibson asks, given the value of this data:


David Vellante [pictured below] asked and answered What Does Big Data Mean to Infrastructure Professionals? in a 2/17/2011 post to the Wikibon blog:


Is your data warehouse facing extinction?

There’s been plenty of talk about big data lately and it’s finally spilling into the world of infrastructure. I’m not surprised. But there’s a lot of confusion and many frustrating misconceptions. I understand. It’s a confusing topic – especially to us infrastructure people. We like to simplify things. Big data = big iron, big RDBMS, beyond terabytes. Big. Data. I get it!


My new friend Andreas Weigend said to me the other day, “Dave, infrastructure is irrelevant.” Ouch. The truth is he’s largely correct. Infrastructure is plumbing. Plumbing is only relevant when it’s not working – then it matters a lot – otherwise there are way more important things to talk about that deliver true value.

But infrastructure enables applications, you say – infrastructure can be profitable. Well, I’m all for enabling value and profits – that’s cool. And we need infrastructure to deliver data value, and people can profit from that – but it’s not going to be your daddy’s infrastructure that powers big data.

Ten Big Data Realities

Here are the first ten points that I want you to think about when you’re grokking big data:

  1. Oracle is not big data
  2. Big data is not traditional RDBMS
  3. Big data is not Exadata  
  4. Big data is not Symmetrix
  5. Big data is not highly structured
  6. Big data is not centralized
  7. IT people are not driving big data initiatives
  8. Big data is not a pipe dream – big data initiatives are adding consumer and business value today. Right now. Every second of every minute of every hour of every day.
  9. Big data has meaning to the enterprise
  10. Data is the next source of competitive advantage in the technology business.

The Next Big Thing

Visionary Tim O’Reilly underscored this last point when he spoke with me and my colleague John Furrier at Strata’s Making Data Work event. O’Reilly is the man who led the industry in spotting huge trends including open source software and Web 2.0.  He told us that he came to the conclusion that data was the next point of leverage by observing the PC analogy. He cited IBM’s mistaken assumption during the PC era that hardware was the primary source of lock-in, only to blindly support the commoditization of hardware and hand over its monopoly to Microsoft. Tim started to ask “what happens when software becomes a commodity too?” This is when he realized that the next source of competitive advantage would be “large databases generated through collective action over the Internet.” Check out Tim’s comments in this short video.

Visit site to play video.

This brings me back to so-called big data. Back in the day, if you had lots of data to analyze you’d buy the biggest Unix box you could lay your hands on and if you had any money left over, you’d pay Oracle through the nose for some database licenses. That big Unix box became a “data temple” and the DBA held the keys to the kingdom. You’d bring all of your data into that box where function resided in the form of code; all revolving around a relational database.

When Google started its search operation it realized that it couldn’t suck this huge volume of dispersed information into a data temple – it just wouldn’t work – so it developed MapReduce, and the early days of big data were born. That led to Doug Cutting and his friends inventing Hadoop (with some help from Yahoo), and since then a whole ecosystem around big data – Apache projects, Cassandra, Cloudera and a zillion other important pieces – has exploded.

No Oracle. No Symmetrix. No RDBMS. Very unstructured. Highly dispersed. Lots of data hacking from multiple sources on the Internet. Inside and outside of firewalls. The basic premise was don’t bring petabytes of data to a temple, rather bring megabytes of code to the data and avoid the network bottleneck– whoa – V8 moment!
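The “bring megabytes of code to the data” premise is the heart of MapReduce. A toy, single-machine sketch (my own illustration, not Hadoop’s API) shows the shape of it – the map and reduce functions are the thing that moves, while each data partition stays put:

```python
from collections import defaultdict

def map_reduce(partitions, map_fn, reduce_fn):
    """Toy MapReduce: ship map_fn/reduce_fn to the data, not the reverse."""
    grouped = defaultdict(list)
    for partition in partitions:  # in a real cluster, this loop runs where the data lives
        for record in partition:
            for key, value in map_fn(record):
                grouped[key].append(value)
    # Reduce each key's values down to a single result.
    return {key: reduce_fn(key, values) for key, values in grouped.items()}

# Word count, the canonical example, over two "partitions" of lines:
partitions = [["big data", "big iron"], ["data temple"]]
counts = map_reduce(
    partitions,
    map_fn=lambda line: [(word, 1) for word in line.split()],
    reduce_fn=lambda key, values: sum(values),
)
# counts -> {'big': 2, 'data': 2, 'iron': 1, 'temple': 1}
```

In a real cluster the outer loop is parallelized across nodes holding the partitions, and only the small intermediate key/value pairs cross the network – which is what avoids the bottleneck Vellante describes.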

So lots of Internet companies have hopped on the big data bandwagon – Google, Yahoo, Facebook, Twitter, etc. But it’s not just the Web whales. Abhi Mehta, who at the time was with BofA, told me financial services is all over big data and it’s changing the business model. For example, sampling, he said, is dead. Rather than building fraud detection models on samples – only to see an outlier break the model and force a tear-down and rebuild – financial services firms can now analyze five years of fraud, every single instance, and operate on the entire data set to spot patterns and trends – in orders of magnitude less time. That’s game-changing.

And it’s not just financial services and Internet businesses. Manufacturing, energy, government, retail, health care…everyone has a big data problem – or should I say opportunity? White space is everywhere. Securing data, automating the data pipeline, new ways to visualize data, new products built on data (see LinkedIn Skills), new services – the big data list goes on and on. And it’s here today – just Google “pizza” and you’ll tap an enormous database from your mobile access point and you can get recommendations, menus, directions, deliveries – whatever you need to make a decision.

What Does Big Data Mean to Infrastructure Professionals?

Here are the next ten things you should know about big data:

  1. Big data means the amount of data you’re working with today will look trivial within five years.
  2. Huge amounts of data will be kept longer and have way more value than today’s archived data.
  3. Business people will covet a new breed of alpha geeks. You will need new skills around data science, new types of programming, more math and statistics skills and data hackers…lots of data hackers.
  4. You are going to have to develop new techniques to access, secure, move, analyze, process, visualize and enhance data; in near real time.
  5. You will be minimizing data movement wherever possible by moving function to the data instead of data to function. You will be leveraging or inventing specialized capabilities to do certain types of processing- e.g. early recognition of images or content types – so you can do some processing close to the head.
  6. The cloud will become the compute and storage platform for big data which will be populated by mobile devices and social networks. 
  7. Metadata management will become increasingly important.
  8. You will have opportunities to separate data from applications and create new data products.
  9. You will need orders of magnitude cheaper infrastructure that emphasizes bandwidth, not iops and data movement and efficient metadata management.
  10. You will realize sooner or later that data and your ability to exploit it is going to change your business, social and personal life; permanently.

Are you ready?

Derrick Harris reported Forrester: Big Data, Cloud Will Merge in 2011 in a 2/14/2011 post to GigaOm’s Structure blog:

In a new Forrester report titled “2011 Top 10 IaaS Cloud Predictions For I&O Leaders,” authors James Staten and Lauren E. Nelson highlight widespread convergence of big data products and cloud computing. Specifically, the authors advise infrastructure and operations (I&O) leaders to encourage their data analysts to get hip to cloud-based analytics tools if they aren’t already, and to seriously consider the bottom-line value of making their organizational data available to the public as a cloud resource. This is pretty progressive advice coming from a large analyst firm, and some evidence suggests it’s right on.

The two trends – cloud computing and big data – already appear to be coming together. A third-quarter 2010 Forrester survey has cloud adoption picking up significantly in 2011, particularly among large enterprises and high-tech companies. These are prime targets for cloud-based data tools, as well, because they likely have large amounts of data to analyze, some of which might actually be valuable to other organizations. In fact, these are often the same types of organizations cited as early adopters of Hadoop, NoSQL and other products targeting big data.

Staten and Nelson highlight a number of tools, including Amazon Elastic MapReduce, Splunk and GoodData, that can help these types of organizations, or any other organization so inclined, get started with cloud-based analytics tools and reap the rewards of early adoption. As the authors point out, cloud-based offerings often are simple, low-cost ways to get familiar with advanced analytics, often in a customized manner. Specifically, they offer this advice to I&O teams:

Here’s another empowerment opportunity. If your information and knowledge management professionals aren’t considering these solutions, introduce them to the concept and demonstrate your proactive willingness to help them get started. If cost is a significant barrier to BI, consider following the 12,000 enterprises, including British Telecom, USDA, Tata Communications, and QED Financial Systems, that have already downloaded Jaspersoft, an open source BI solution, as a starting point.

Jaspersoft, coincidentally, just announced technology integrations with just about every noteworthy big data product on the market, including Hadoop and a large number of NoSQL databases.

Staten and Nelson also advise organizations to consider whether they could make money from their data by making it publicly available a la the Associated Press, Dun & Bradstreet and Esri. Although placing any proprietary data into a public data marketplace (Windows Azure DataMarket or InfoChimps, for example) is a decision beyond the realm of IT, IT can play an important role in the process by developing self-service cloud portals to access the data, as well as by ensuring proper security protocols, permissions and the like are in place.

It’s unclear to me that too many organizations, especially large businesses, are keen on giving away any advantage derived from their analytical efforts, but I think the proposition has to be compelling. Especially if a data set has already served its primary purpose, there might be relatively little harm in making it available to others that might be interested in it for entirely different purposes, and that might be willing to pay for it. I’m thinking specifically of a conversation I had with IBM last year, in which CTO for Emerging Internet Technologies David Boloker told me about an IBM analysis of 10 years worth of patents that could be a valuable resource for other organizations.

The rest of the report’s predictions focus on more traditional cloud computing concerns such as delivery models, standards and security, but it’s the advice on taking data analysis to the next level that steals the show. It’s great to see even analyst firms with largely prudent clients pushing them not only to step up their data efforts, but to do so in the cloud.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Patriek Van Dorp (@pvandorp) started “a series that will end with a federated scenario in Windows Azure” with a Creating a Custom STS with Windows Identity Foundation post of 2/18/2011:

A while ago I was putting together a demo for Engineering World that would demonstrate how you could create your own SecurityTokenService (STS) using Windows Identity Foundation (WIF). I figured I could create an STS that would authenticate users against LinkedIn. To use the STS in my demo I needed a place to host the STS temporarily, so I thought it would be a good idea to host the STS in Windows Azure.

While I was putting together the demo I was reading the “Programming Windows Identity Foundation” book written by Vittorio Bertocci. As work on my demo progressed I read in Vittorio’s book that writing your own STS is probably not a good idea. An STS needs to be:

  • Secure – Relying Parties need to trust your STS. If it is not secure this will damage the trust you built up.
  • Available – Because an STS is a single entry point for possibly multiple applications, it must be available all the time. If your STS is down, none of the relying parties will be able to function correctly.
  • High-performing – Login experience is very important. While the number of users increases as more and more relying parties are added, performance must remain acceptable.
  • Manageable – You must be able to add or remove relying parties, monitor performance, manage claims, add or remove users, etc.

All these requirements imply that your STS needs to be thoroughly tested for security, functionality and performance. You need a trustworthy infrastructure to host your STS, which would cost a lot of money to provision and to maintain. Although the Windows Azure platform could take away some of the worries, such as performance and availability, it’s still probably not a good idea to write your own STS and you will probably be better off using a thoroughly tested STS like ADFS v2.

Ok, that being said, I had progressed way too far in my demo to stop now. Besides,…it was a nice challenge and it would give me a lot of insight into how the WS-Federation protocol works. Additionally I learned a lot about OAuth v1.0a as well, since LinkedIn uses this protocol to authenticate users. Mixing the two protocols turned out to be a great exercise in the beautiful world of Identity.

In this post I will show how to create a plain STS using the ASP.NET Security Token Service Web Site project template you get out-of-the-box when you install WIF. I will show what you get when you create the STS and how WS-Federation applies to it. I will also show what the “Add STS Reference…” functionality does to an application that relies on WIF for authentication and authorization (a Relying Party or just RP).

In future posts I will show how to authenticate using the LinkedIn API and how you can convert your working project to host it in a Windows Azure WebRole. I will also show how you can use Windows Azure AppFabric Access Control Service as a Federation Provider (FP), which I’ll explain when I talk about Federated Identity. Finally I will explain why I didn’t deliver my prepared demo.

For all you folks who attended my talk at Engineering World, in this post I will also uncover why my improvised demo mysteriously failed.

ASP.NET Security Token Service Web Site

When you installed the Windows Identity Foundation SDK, you got some Visual Studio project templates for free, which make your life as a developer so much easier. One of them is the ASP.NET Security Token Service Web Site template, which creates a website project with some pages, a configuration file with FormsAuthentication enabled and a metadata file used by WIF.


Figure 1: ASP.NET Security Token Service Web Site Template

Figure 1 shows the artifacts present in the ASP.NET Security Token Service Web Site template. The website is configured to use FormsAuthentication, thus every unauthenticated user will be redirected to the Login.aspx page where they will need to provide a username and password (or, as we’ll see in a future post, use some other form of authentication). The template does not implement the actual authentication of users, for there is no user store behind it. The fact alone that the template is implemented in a Website project (as opposed to a Web Application project) indicates once again that it’s probably not a good idea to use it for a production STS. It’s meant mainly to provide the developer an easy way to create a stub for testing an RP.

Next to the .aspx pages and the Web.config, you see a FederationMetadata.xml which is used by a wizard called FedUtil.exe to generate an STS reference. There are also some classes in the App_Code folder used for extending default logic and for creating an IClaimsIdentity. I will get into the details of these artifacts shortly, but first I want to look at the WS-Federation protocol and see how it’s implemented in the STS template.


WS-Federation is an authentication protocol used to externalize authentication to provide better interoperability between applications and businesses. WS-Federation works in active scenarios (rich clients) as well as passive scenarios (stateless request/response). The scenario we’re interested in here is the passive scenario, as we’re building more and more internet-based applications using the stateless HTTP protocol.


Figure 2: The WS-Federation Protocol

Figure 2 depicts the communication process specified by the WS-Federation protocol. Let me write the process out in 9 steps:

  1. The user requests Page.aspx in my web application.
  2. In my web application’s configuration a couple of HttpModules are registered to handle incoming requests. The WSFederationAuthenticationModule processes the request and finds that the request does not contain a valid token. It responds to the browser with an HTTP/1.1 302 (FOUND) message with the address of the STS in the Location header, which is configured in my web application’s configuration file.
  3. The browser gets redirected to the Default.aspx page of the STS with some typical WS-Federation query string parameters (I won’t go into the details of these query string parameters, because then I would elaborate too much).
  4. Because the STS is configured to use FormsAuthentication and I’m not yet authenticated, the user gets redirected to the Login.aspx page as is configured in the Web.config of the STS. Here the user needs to, for instance, enter her username and password, which will be validated by some self-implemented mechanism (e.g. ASP.NET Membership Provider).
  5. When the user is authenticated she gets redirected back to the Default.aspx page which in turn processes the signin request by generating a SecurityToken with whatever claims you specified (I will go into this in a later post) and putting it into a signin response.
  6. Next, the browser gets redirected to the Page.aspx page of my web application. The WSFederationAuthenticationModule intercepts the request and now finds a (hopefully) valid token.
  7. The WSFederationAuthenticationModule passes the token on to the SessionAuthenticationModule, which is also registered in the Web.config of my web application. The SessionAuthenticationModule sets a cookie named FedAuth in the response to the browser. For this it uses an HTTP/1.1 302 (FOUND) message with the address of the requested page in the Location header.
  8. The browser gets redirected back to the Page.aspx page with the FedAuth cookie in the header of the request. The SessionAuthenticationModule processes the cookie, creates an IClaimsPrincipal with an IClaimsIdentity in it, and lets the request pass to its destination.
  9. Finally, my web application’s code is hit, returning the requested Page.aspx page.

The above steps are generally what the WS-Federation protocol describes. Of course the process is a lot more complicated under the hood, but with these 9 steps you get a good idea of what’s going on.
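The redirect issued in step 2 is just a 302 whose Location header carries the WS-Federation sign-in parameters. A language-neutral sketch in Python (the URLs and the wctx value are placeholders of my own, not from the STS template) shows what that Location URL looks like:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_signin_redirect(sts_url, realm, context):
    """Build the Location URL for a WS-Federation passive sign-in redirect."""
    params = {
        "wa": "wsignin1.0",  # the action: this is a sign-in request
        "wtrealm": realm,    # identifies the relying party to the STS
        "wctx": context,     # opaque context the STS echoes back after sign-in
    }
    return sts_url + "?" + urlencode(params)

# Placeholder endpoints for illustration:
location = build_signin_redirect(
    "https://sts.example.com/Default.aspx",
    "http://localhost:777/",
    "rm=0&id=passive&ru=/Page.aspx",
)
```

The STS side of the conversation (Codeblock 1 below) keys off exactly these parameters: wa selects sign-in versus sign-out, and wtrealm tells the STS which RP the token is for.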

Signing In
protected void Page_PreRender( object sender, EventArgs e )
{
    string action = Request.QueryString[WSFederationConstants.Parameters.Action];
    try
    {
        if ( action == WSFederationConstants.Actions.SignIn )
        {
            // Process signin request.
            SignInRequestMessage requestMessage = (SignInRequestMessage)WSFederationMessage.CreateFromUri( Request.Url );
            if ( User != null && User.Identity != null && User.Identity.IsAuthenticated )
            {
                SecurityTokenService sts = new CustomSecurityTokenService( CustomSecurityTokenServiceConfiguration.Current );
                SignInResponseMessage responseMessage = FederatedPassiveSecurityTokenServiceOperations.ProcessSignInRequest( requestMessage, User, sts );
                FederatedPassiveSecurityTokenServiceOperations.ProcessSignInResponse( responseMessage, Response );
            }
            else { throw new UnauthorizedAccessException(); }
        }
        else if ( action == WSFederationConstants.Actions.SignOut ) { throw new InvalidOperationException(); }
    }
    catch ( Exception exception )
    {
        throw new Exception( "An unexpected error occurred when processing the request. See inner exception for details.", exception );
    }
}

Codeblock 1: Process wsignin1.0

A signin request in WS-Federation always has a wa QueryString parameter with the value wsignin1.0. If the STS encounters this QueryString parameter, a SignInRequestMessage is created from the URL in the request. If the user is not authenticated at this point, the request will be redirected to the Login.aspx page as configured in the Web.config. When the user is authenticated, the request is redirected back to the Default.aspx page. Because the QueryString parameters haven’t changed, the signin process will begin. Now that the user is authenticated, a new instance of CustomSecurityTokenService is created based on the CustomSecurityTokenServiceConfiguration. The CustomSecurityTokenService class contains some methods that enable the developer to modify the default behavior with regard to the validation of the relying party, which are out of scope for this post. The CustomSecurityTokenService also contains a method called GetOutputClaimsIdentity (Codeblock 2), which enables the developer to control what claims will be returned.

protected override IClaimsIdentity GetOutputClaimsIdentity( IClaimsPrincipal principal, RequestSecurityToken request, Scope scope )
{
    if ( null == principal )
        throw new ArgumentNullException( "principal" );

    ClaimsIdentity outputIdentity = new ClaimsIdentity();

    // Issue custom claims.
    // TODO: Change the claims below to issue custom claims required by your application.
    // Update the application's configuration file too to reflect new claims requirement.

    outputIdentity.Claims.Add( new Claim( System.IdentityModel.Claims.ClaimTypes.Name, principal.Identity.Name ) );
    outputIdentity.Claims.Add( new Claim( ClaimTypes.Role, "Manager" ) );

    return outputIdentity;
}

Codeblock 2: GetOutputClaimsIdentity

The ASP.NET Security Token Service Web Site template implements the GetOutputClaimsIdentity method so that it returns an IClaimsIdentity with two claims: Name and Role. In this method claims can be added or transformed as needed.


Typically an STS contains a publicly accessible metadata file, usually called FederationMetadata.xml. This is a signed XML file that contains information about the owner of the STS, the offered claims, the endpoints on which an RP can reach the STS and much more that’s beyond the scope of this post. Because the file is signed, it cannot be changed in any way without invalidating the signature. The template generates the file as one long string of XML, and even if you only change the layout by reformatting the document in Visual Studio (Ctrl+A, then Ctrl+K, Ctrl+F), the signature is invalidated and the metadata is useless to any RP wanting to use it.
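Why does a mere layout change break the metadata? The signature embeds a digest computed over the signed bytes. Real federation metadata uses XML-DSig over a canonicalized form, but a reindent still changes the bytes that canonical digest was computed over. This toy hashlib demo (the XML is a made-up fragment, not real metadata) illustrates the effect:

```python
import hashlib

# One-long-line XML, as the WIF template generates it:
original = b'<EntityDescriptor entityID="https://sts.example.com/"><RoleDescriptor /></EntityDescriptor>'

# The same element after a "format document" command adds whitespace:
reformatted = (
    b'<EntityDescriptor entityID="https://sts.example.com/">\n'
    b'  <RoleDescriptor />\n'
    b'</EntityDescriptor>'
)

# The digests differ, so a signature computed over the first form
# no longer verifies against the second.
digest_before = hashlib.sha1(original).hexdigest()
digest_after = hashlib.sha1(reformatted).hexdigest()
```

This is why you regenerate the metadata with a tool rather than hand-editing it.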

There are tools, however, that let you enter your information and the claims you want your STS to offer, and they will generate a valid signed metadata file, which you can include in your STS project. One of these tools is the WS-Federation Metadata Generation Wizard by ThinkTecture.

The WS-Federation metadata is used by tooling to generate the right configuration for RPs. This way a lot of tedious and error-prone work can be done by generators, making our lives as developers a lot easier once more.

Relying Party

Now that we have a basic STS in place, our RP needs to reference that STS and start using the WS-Federation protocol. When we installed the WIF SDK, we got an extra option in the context menu of a project in Visual Studio named “Add STS Reference…”. This option opens the FedUtil wizard that will guide us through a few simple steps to add a reference to our STS.


Figure 3: FedUtil Step 1: The Audience

In the first step we provide the location of the Web.config of our RP. When we open FedUtil from within the context of a Visual Studio project, this value is already filled in for us. In the same step we need to provide the URL the application will be listening on. This can be localhost for debugging. FedUtil only makes changes to the Web.config of our application, so any values entered here can be changed easily at deploy time. If our application is not yet hosted in IIS and we don’t use SSL, we get a warning when we want to continue, saying that we’re not using SSL. We can safely ignore this message for now. Of course in production we should always make sure we have valid certificates installed and that the application URI scheme is HTTPS.


Figure 4: FedUtil Step 2: The STS

Next we need to select the location of the WS-Federation metadata. This can be a URL to the metadata XML file that is publicly accessible on the STS, but during development we can also browse for the file on the file system.


Figure 5: FedUtil Step 3: Certificate Validation

In the next step we need to indicate whether we want to validate the certificate chain or not. Validating the certificate chain ensures that the certificate is issued by a trusted Certificate Authority (CA). In our sample we won’t validate the certificate chain, because we will use self-signed certificates.


Figure 6: FedUtil Step 4: Token Encryption

The next step in the process is indicating whether we want our tokens to be encrypted or not. For the sake of simplicity we won’t use token encryption here either.


Figure 7: FedUtil Step 5: Offered Claims

Next we get an overview of claims that the STS offers. These are the claims that are configured in the WS-Federation metadata of the STS. We cannot change the claims here.


Figure 8: FedUtil Step 6: Summary

Finally, we get an overview of all the information we have entered, and all we have to do now is click ‘Finish’ and our RP will be configured to use WS-Federation and to externalize its authentication to our STS.

WRONG!!! This is exactly what went wrong with my improvised demo at Engineering World. Let’s look at what FedUtil has generated.

First of all FedUtil generated a FederationMetadata.xml file in my RP. I won’t go into this for now, but an STS can use this to validate the RP and to enable dynamic claims generation and stuff. Furthermore FedUtil changed the Web.config file.

  <section name="microsoft.identityModel" type="Microsoft.IdentityModel.Configuration.MicrosoftIdentityModelSection, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />

Codeblock 3: configSection/section microsoft.identityModel

<location path="FederationMetadata">
  <system.web>
    <authorization>
      <allow users="*" />
    </authorization>
  </system.web>
</location>
Codeblock 4: Publicly Accessible FederationMetadata

FedUtil adds a new configSection named microsoft.identityModel and it opens up the FederationMetadata folder so that the WS-Federation metadata for the RP is publicly accessible (just as with the STS).

<!--Commented out by FedUtil-->
<!--<authorization><allow users="*" /></authorization>-->
<authentication mode="None" />
<compilation debug="true" targetFramework="4.0">
	<add assembly="Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
	<add assembly="System.Design, Version=, Culture=neutral, PublicKeyToken=B03F5F7F11D50A3A" />
<!--Commented out by FedUtil-->
<!--<authentication mode="Forms"><forms loginUrl="~/Account/Login.aspx" timeout="2880" /></authentication>-->

Codeblock 5: Added Microsoft.IdentityModel Assembly and Changed Authentication Mode

FedUtil further adds the Microsoft.IdentityModel assembly and changes the authentication mode to ‘None’ in the system.web section of the Web.config.

<modules runAllManagedModulesForAllRequests="true">
  <add name="WSFederationAuthenticationModule" type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler" />
  <add name="SessionAuthenticationModule" type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler" />
</modules>

Codeblock 6: WSFederationAuthenticationModule and SessionAuthenticationModule

The WSFederationAuthenticationModule and SessionAuthenticationModule HttpModules are added to the system.webServer section.

		<add value="http://localhost:777/" />
		<securityTokenHandlerConfiguration saveBootstrapTokens="true" />
		  <!--          Following are the claims offered by STS ''. Add or uncomment claims that you require by your application and then update the federation metadata of this application.-->
		  <claimType type="" optional="false" />
		  <claimType type="" optional="true" />
		  <claimType type="" optional="true" />
		  <claimType type="" optional="true" />
		  <claimType type="" optional="true" />
		  <claimType type="" optional="true" />
		  <!--Following are the claims offered by STS ''. Add or uncomment claims that you require by your application and then update the federation metadata of this application.-->
		  <!--          <claimType type="" optional="true" />-->
		  <!--          <claimType type="" optional="true" />-->
		  <claimType type="" optional="true" />
		  <claimType type="" optional="true" />
		  <!--Following are the claims offered by STS ''. Add or uncomment claims that you require by your application and then update the federation metadata of this application.-->
		  <!--<claimType type="" optional="true" />-->
		  <!--<claimType type="" optional="true" />-->
		  <!--<claimType type="" optional="true" />-->
		  <!--<claimType type="" optional="true" />-->
		  <!--<claimType type="" optional="true" />-->
		  <claimType type="" optional="true" />
		  <claimType type="" optional="true" />
	  <certificateValidation certificateValidationMode="None" />
		<wsFederation passiveRedirectEnabled="true" issuer="" realm="http://localhost:777/" requireHttps="false" />
		<cookieHandler requireSsl="false" />
	  <issuerNameRegistry type="Microsoft.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
		  <add thumbprint="39AC3758F0EBFABE20F0E7EB889430549983D1A9" name="" />
		  <add thumbprint="6230CDA6652AD17F887E7EF36D5149C8B38D64B1" name="https://.../" />
		  <add thumbprint="824413764DC4829DB867087DC15B59BE5028B484" name="https://..." />

Codeblock 7: The microsoft.identityModel Configuration Section

The microsoft.identityModel configuration section contains all the settings we entered in FedUtil and the claim types offered by the configured STS.

Now here it comes. While these changes, made by FedUtil, work perfectly in IIS 7 using integrated pipeline mode, they don’t work in the development web server. We need to do two things:

  1. Add the httpRuntime element to the system.web section with the requestValidationMode set to ‘2.0’
  2. Add the WIF modules to the system.web section as well
<httpRuntime requestValidationMode="2.0" />
  <add name="WSFederationAuthenticationModule" type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  <add name="SessionAuthenticationModule" type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
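Putting the two fixes together, the relevant web.config shape looks roughly like this (a sketch: the Version value is assumed to be the WIF 3.5 assembly version, 3.5.0.0, so match it against the Microsoft.IdentityModel reference in your own project):

```xml
<configuration>
  <system.web>
    <!-- ASP.NET 4 request validation rejects the sign-in POST from the STS;
         2.0 mode defers validation so the WIF module can process it first -->
    <httpRuntime requestValidationMode="2.0" />
    <httpModules>
      <!-- Same modules as in system.webServer, but this section is the one
           the development web server reads -->
      <add name="WSFederationAuthenticationModule" type="Microsoft.IdentityModel.Web.WSFederationAuthenticationModule, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      <add name="SessionAuthenticationModule" type="Microsoft.IdentityModel.Web.SessionAuthenticationModule, Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </httpModules>
  </system.web>
</configuration>
```

IIS 7 in integrated mode keeps using the modules registered under system.webServer/modules; the duplicate registration is harmless because each host reads only its own section.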

Adron Hall (@adronbh) posted A Short Introduction to Windows Azure AppFabric on 2/17/2011:

Windows Azure AppFabric is arguably the main feature set that truly sets Windows Azure apart from other cloud services on the market today. AppFabric allows cloud users to hook up on-premises services to their cloud services, to secure cloud and on-premises services with new or existing security frameworks (identity-based, Active Directory, or otherwise), to cache Internet or other content, and, on top of all that, to build out and enable composite application integration.

At the same time that Windows Azure AppFabric is one of the main features that sets Windows Azure apart from the competition, it is often one of the most misunderstood or unknown parts of the entire offering. In this chapter I’ll cover the main parts of AppFabric, including the service bus, access control, caching, integration, and patterns for integrating composite applications.

Home of Windows Azure AppFabric

The first thing to do in order to begin building with Windows Azure AppFabric is to download the SDK, check out the various Windows Azure AppFabric sites, and familiarize yourself with what it is and how it fits into the Windows Azure Platform.

The main web presence is located at

Windows Azure AppFabric Site

On the main site you will primarily find a marketing presence, but with links to many locations with useful architectural, development, and related technical information.  On the left hand side of the site there is a navigation bar that provides access to specific information describing the service bus, access control, and other features of the Windows Azure AppFabric.

The next major web presence that is extremely useful for Windows Azure AppFabric is the Windows Azure AppFabric Team Blog. This site is regularly updated with development tips, patterns and practices, links to related MVP and Microsoft Evangelist posts, updates on the SDK, CTPs, and other technical information.

The last major link that should be reviewed and checked often in relation to Windows Azure AppFabric development is the Windows Azure AppFabric Developers’ Center. Next to the blog, which often links to this page, this site is probably the most useful for AppFabric development. There are headlines, quick starts, and other documentation covering AppFabric development with Ruby, Java, and other languages and technology stacks.

I don’t want to provide a direct link to the SDK. The reason is that Microsoft’s method for tracking and providing download links to SDKs, CTPs, and other related software often changes. The best way to find the current download location for the Windows Azure AppFabric SDK is to use a search engine and enter the keywords “Windows Azure AppFabric SDK” (click the link, as I set up the search for you). The first links provided will get you to the current location to download the SDK and other related files discussed below.

As of the current v1.0 release of the SDK, several matching downloads provide documentation and samples. There are separate WindowsAzureAppFabricSDK-x64.msi and WindowsAzureAppFabricSDK-x86.msi installers, each specific to the 64-bit or 32-bit architecture. Along with the SDK, v1.0 C# and Visual Basic sample files are available for download that include multiple examples of how Windows Azure AppFabric works. The last file included on the Windows Azure AppFabric download page is WindowsAzureAppFabric.chm, the documentation file for the Windows Azure AppFabric SDK.

What Exactly is Windows Azure AppFabric?

Windows Azure AppFabric can be seen as the all-encompassing fabric that interconnects on-premises solutions to Windows Azure solutions, and even Windows Azure solutions to other solutions within the cloud. Windows Azure AppFabric is broken down into two main feature offerings, simply called the service bus and access control. Some new features are coming online in the very near future; the main one I’ll discuss is Windows Azure AppFabric Caching.

Windows Azure AppFabric


That’s all I have right now, and would love any feedback on things I should mention, discuss or otherwise add to this write up.  I’m thinking of using it as an intro to AppFabric in some of the pending presentations I have coming up.  So please throw in some feedback on what else you might like.  -thanks

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Sarah Perez explained How the Nokia/Microsoft Partnership Affects Developers in a 2/18/2011 post to the ReadWriteMobile blog:

Just prior to the start of this week's annual Mobile World Congress event in Barcelona, Microsoft and Nokia shook up the mobile industry with the announcement that Nokia would abandon work on its MeeGo smartphone operating system (OS) and would begin shipping handsets running Windows Phone 7 starting sometime this year.

Eventually, said Nokia, Symbian handsets will be phased out as Windows Phone becomes the company's new flagship OS. So what does this mean for the developer community?

The Symbian Transition

Despite reports of analysts now calling Symbian a "dead" smartphone platform, Nokia stresses that the Symbian platform isn't going away any time soon. It still plans to ship another 150 million Symbian handsets this year, which will join the 100 million already out there. 75 million of those in use today were built using the latest version of Qt, the Nokia application development framework, and run the latest version of QML, the language used for building Nokia's mobile apps. In other words, there's a very large and very active ecosystem of Symbian end users, and they will still want to download and use mobile applications for a long time.

Because this partnership news is so fresh, neither Nokia nor Microsoft has details on how training programs, financial support, technical support or other incentives will be handled to help Symbian developers move from one platform to the next. It's all still "to be determined."

However, Microsoft has publicly said that its Windows Phone Developer Tools (Visual Studio 2010, Express 4, Silverlight and the XNA Framework) will continue to be available for free and that it will work with Nokia so that when new Nokia handsets ship, there won't be extra work needed to make WP7 apps and games run on the new Nokia-built devices.

Flurry Reports Increased Interest in Windows Phone 7

As of now, there are over 8,000 Windows Phone 7 applications and 28,000+ registered developers working on Windows Phone 7 apps. From early reports, it appears that number may grow much higher over the coming weeks.

Analytics firm Flurry, which tracks new application starts within its system after major market events to gain insight into upcoming trends, recently analyzed the movement of the Windows Phone platform within its network. After speculation about the Nokia/Microsoft partnership began circulating around the Web, Flurry measured a 66% increase in Windows Phone 7 project starts during one week. In comparison, Flurry had reported increased project starts for iOS applications after the iPad announcement, too - up by 185%, in that case.


According to Flurry's VP Marketing Peter Farago, "this week’s spike in Windows Phone 7 developer activity shows that developers not only believe Nokia has given Microsoft Windows Phone7 a shot in the arm, but also that Nokia and Microsoft together can build a viable ecosystem."

Developers Get Free Phones, Nokia to Have Presence at Microsoft Events

Although, again, the details aren't worked out yet, it seems that Nokia will have a presence at Microsoft's next big industry events for developers. For example, Microsoft's MIX11 conference planned for April will feature Nokia's presence as will Microsoft's PDC (Professional Developers Conference).

Also, according to a report from SlashGear, Nokia Launchpad program members will receive a new Nokia E7 device (a non-U.S. handset) and one free Nokia WP7 device, as soon as it becomes available. Launchpad members get early and exclusive access to alpha and beta APIs and SDKs, market intelligence and industry reports, invites to Nokia events, handset discounts and more.

Launchpad is typically a paid program, but access is now available for free for the first year, SlashGear noted.

Microsoft Developers to Gain Access to Nokia's Operator Relationships, and Other Changes

One of the key aspects of this new partnership from the developer standpoint concerns Nokia's operator relationships. The company currently has billing arrangements with 103 operators in 32 countries worldwide. This allows apps to be paid for through line items that appear on an end user's monthly phone bill. Those same operator billing relationships will become available to Windows Phone developers, too. What timeframe is involved before that becomes possible is still unknown.

Other changes that will both directly and indirectly affect the developer community have to do with the integration of new Microsoft APIs into the Ovi publisher portal and the fusion of various Nokia and Microsoft services. For example, Nokia's Ovi Maps will replace Bing Maps - not merge with it, but actually replace it. Bing, however, could integrate its search technology into the Nokia mapping service.

Nokia's Local Business Portal, Ovi Primeplace, will be enhanced by Microsoft's ad business AdCenter and Xbox LIVE content may find its way into Nokia's Ovi Store, too.

Microsoft's Involvement with Nokia Community

According to Brandon Watson, Microsoft's head of mobile, his company isn't going to force itself into Nokia's developer community immediately. Its goal is to sit back, listen, talk to developers and other community insiders, discover what pain points there are, and then learn how to best address them. It's still very early in the process, however.

Microsoft is interested, too, in leveraging this new partnership to extend other services to Nokia developers - including those still working on Symbian apps. In the future, Microsoft wants to deliver libraries for Symbian that would provide developers with access to Microsoft's Live services, like Photo Gallery and SkyDrive, for example, plus access to Microsoft's streaming music service Zune Pass and Windows Azure, its cloud computing and Web services platform.

When asked if Microsoft was worried about some developers abandoning Nokia altogether in light of this news, and if the company was preparing to reach out to some of the high-profile developers within Nokia's Symbian community, Watson would only say that, to Microsoft, "every Symbian developer is important."

See Also

Obviously, the Nokia partnership will have a major effect on Windows Azure for mobile development.

• Andy Cross (@andybareweb) described Custom Azure NLog Integration with Azure Diagnostics SDK 1.3 using NuGet in a 2/18/2011 post to his Bare Web blog:

NLog is an advanced .NET logging platform, allowing well-formatted and filtered logging to be added easily to any project. It is implemented using targets, which take messages and output them to a medium depending on their function; examples are File and Console. There are no targets designed to be used in Windows Azure, so using NLog there has involved a complex programmatic setup using RoleEnvironment LocalResources. I have put together two custom NLog targets that solve this problem by integrating NLog with the Windows Azure Diagnostics infrastructure. There are targets for WADLogsTable and the Azure emulator UI. Source code is provided. I used NuGet to set up NLog with the latest version.

Firstly, I started with a vanilla Azure project, with a single WebRole:

Add Web Role to a new Vanilla Azure Project


This is the WebRole that will be doing the logging. I will insert the basic logging into Page_Load of default.aspx and have it output some basic text values. We need to set up the library references for NLog. I chose to do this with NuGet, which was new to me, and I found the process amazingly painless.


Firstly, install NuGet using the library package manager built into Visual Studio 2010 as described in the Getting Started guide. Then right click on your solution and click “Add Library Package Reference”.

Add Library Package Reference


This brings up the Add Library Package Reference dialog. On the left side of the dialog, click “Online” and then “NuGet official package source”. This brings up all the available packages for you:

Add Library Package Reference with NuGet


Type NLog into the Search Online search box at the top right, and the list will be filtered to the items that you want:

Filter by NLog


Once you have done this, you can click on the package you want to install, and then click on the Install button. Waiting a few seconds shows that the package has been added to your solution.

NuGet has installed NLog!


You will now have the packages in your reference list:

Reference List


That’s it! Easy as can be.

NLog Setup

NLog requires a config file to be placed in the root of the application. I added this as a Web.Config file, and just renamed it to NLog.config.

Add NLog.config


Once we have done this, we need to set the build configuration options to include the config file:

Content Copy Always


The content of this file we’ll get back to, but any basic one will work for now:

NLog.Config example 1


Next we’ll move to a bit of code!


Now we must add a class library that will contain our new Targets.

Add Class Library for Targets


Now we add in references and two classes for the work we want to perform:

References for new Class lib


As you can see, I have added two classes, AzureLogger and EmulatorLogger. AzureLogger will write to the Windows Azure Diagnostics core table WADLogsTable, and EmulatorLogger will write to the emulator UI.

The classes are straightforward, but because we are interfacing two different logging frameworks, we have to do some work to map between their log levels. When we want to log, we add an appropriate type of TraceListener to the System.Diagnostics Trace.Listeners collection and then write to that TraceListener, and the message appears in the desired output location.
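For the WADLogsTable side, the same idea without NLog uses the SDK's DiagnosticMonitorTraceListener (a minimal sketch; the helper class name is illustrative, and in the Visual Studio templates this listener is normally registered in web.config rather than in code):

```csharp
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;

// Hypothetical helper, shown only to illustrate the TraceListener pattern
public static class AzureTraceSetup
{
    public static void Register()
    {
        // Route System.Diagnostics tracing to Windows Azure Diagnostics;
        // the diagnostic monitor later transfers the output to WADLogsTable.
        Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
        Trace.TraceInformation("This message ends up in WADLogsTable.");
    }
}
```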

Here is the code for the Emulator Logger, the other target is also included in the attached source code:

using System;
using NLog;
using NLog.Targets;

namespace BareWeb.NLogAzure
{
     // The Target attribute maps this class to xsi:type="Emulator" in NLog.config;
     // NLog resolves the target name through this attribute when it scans the assembly.
     [Target("Emulator")]
     public class EmulatorLogger : TargetWithLayout
     {
          protected override void Write(LogEventInfo logEvent)
          {
               string logMessage = this.Layout.Render(logEvent);

               System.Diagnostics.TraceLevel level = GetTraceLevel(logEvent.Level);

               LogMessageWithAzure(logMessage, level);
          }

          /// <summary>
          /// Gets the TraceLevel from the LogLevel.
          /// </summary>
          /// <param name="logLevel">The log level.</param>
          /// <returns>The equivalent System.Diagnostics trace level.</returns>
          private System.Diagnostics.TraceLevel GetTraceLevel(LogLevel logLevel)
          {
               if (logLevel == LogLevel.Debug)
                    return System.Diagnostics.TraceLevel.Verbose;
               else if (logLevel == LogLevel.Error)
                    return System.Diagnostics.TraceLevel.Error;
               else if (logLevel == LogLevel.Fatal)
                    return System.Diagnostics.TraceLevel.Error;
               else if (logLevel == LogLevel.Info)
                    return System.Diagnostics.TraceLevel.Info;
               else if (logLevel == LogLevel.Trace)
                    return System.Diagnostics.TraceLevel.Verbose;
               else if (logLevel == LogLevel.Warn)
                    return System.Diagnostics.TraceLevel.Warning;
               else if (logLevel == LogLevel.Off)
                    return System.Diagnostics.TraceLevel.Off;

               throw new ArgumentOutOfRangeException("logLevel");
          }

          /// <summary>
          /// Logs the message via the Azure emulator.
          /// </summary>
          /// <param name="logMessage">The log message.</param>
          /// <param name="traceLevel">The trace level.</param>
          private void LogMessageWithAzure(string logMessage, System.Diagnostics.TraceLevel traceLevel)
          {
               // We could build a new table writing structure here. For our purposes, we just write to the emulator.
               var traceListener = new Microsoft.ServiceHosting.Tools.DevelopmentFabric.Runtime.DevelopmentFabricTraceListener();
               traceListener.WriteLine(logMessage, traceLevel.ToString());
          }
     }
}



In addition to this, we must add the xml into NLog.config to set this up:

<?xml version="1.0" encoding="utf-8" ?>
<nlog xmlns="" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <add assembly="BareWeb.NLogAzure" />
  </extensions>
  <targets>
    <target name="logAzure" xsi:type="Emulator" />
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="logAzure" />
  </rules>
</nlog>
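With the extension, target, and rule in place, the Page_Load logging mentioned at the start is just the standard NLog API (a sketch; the class name is whatever your default.aspx code-behind uses):

```csharp
using System;
using NLog;

public partial class _Default : System.Web.UI.Page
{
    // The usual NLog pattern: one static logger per class
    private static readonly Logger Logger = LogManager.GetCurrentClassLogger();

    protected void Page_Load(object sender, EventArgs e)
    {
        // Info meets the minlevel="Info" rule, so it reaches the logAzure target
        Logger.Info("Page_Load fired at {0}", DateTime.UtcNow);
    }
}
```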

Here is the emulator output:

Emulator output from NLog


Here is the source code: NLogAzure

• Wade Wegner (@WadeWegner) described Running Multiple Websites in a Windows Azure Web Role in a 2/17/2011 post:

Prior to the release of the Windows Azure 1.3 SDK, Web roles hosted your application in IIS Hosted Web Core (HWC).  While there were certainly ways to extend HWC, for the most part you were stuck with a single application bound to no more than a single HTTP or HTTPS endpoint.  This model enabled only a minimal number of configurations, and prevented you from fully benefiting from the full capabilities of IIS.  Here’s what the ServiceDefinition.csdef file looks like when using HWC:

Code Snippet

  1. <?xml version="1.0" encoding="utf-8"?>
  2. <ServiceDefinition name="WindowsAzureProject15" xmlns="">
  3. <WebRole name="WebRole1">
  4. <Endpoints>
  5. <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  6. </Endpoints>
  7. <Imports>
  8. <Import moduleName="Diagnostics" />
  9. </Imports>
  10. </WebRole>
  11. </ServiceDefinition>

While you can still use this code today and run HWC in your Web role, you’ll miss out on a lot of great capabilities.

One of the capabilities that, surprisingly, people still don’t seem to be aware of in Windows Azure is the ability to run multiple websites in a Web role.  In fact, this capability is easy to enable by adding seven lines of configuration.

Code Snippet

  1. <?xml version="1.0" encoding="utf-8"?>
  2. <ServiceDefinition name="WindowsAzureProject15" xmlns="">
  3. <WebRole name="WebRole1">
  4. <Sites>
  5. <Site name="Web">
  6. <Bindings>
  7. <Binding name="Endpoint1" endpointName="Endpoint1" />
  8. </Bindings>
  9. </Site>
  10. </Sites>
  11. <Endpoints>
  12. <InputEndpoint name="Endpoint1" protocol="http" port="80" />
  13. </Endpoints>
  14. <Imports>
  15. <Import moduleName="Diagnostics" />
  16. </Imports>
  17. </WebRole>
  18. </ServiceDefinition>

The difference here is found in lines #4 through #10.  We now have a <Sites> element that includes a default <Site> named “Web”.  When run (locally or in Windows Azure), this translates into an actual web site running in IIS.


You can see that our website is now running in IIS.  And what’s nice is that the syntax used in the ServiceDefinition.csdef file is nearly identical to how this is traditionally accomplished in the system.applicationHost – there’s not much new we have to learn.

Run the Same Project in Two Sites in the Web Role

So, how can we take this further and run a second website in a Web role?  It’s as simple as adding an additional <Site> to the <Sites> element.

Copy lines #5 through #9 above, and paste them below line #9.  You’re going to have to make a few updates:

  1. Change the site name.  You can’t have multiple sites with the same name.
  2. You’ll have to specify a physical directory for the second site.

Note: the only reason we don’t have to specify a physical path for the first site is because “Web” is considered a special case, and Visual Studio knows that it’s referring to the path of the Web role project.  You can confirm this by renaming “Web” to “Web1” – once you do this you’ll have to specify a physical path for the site.

  3. Create a host header entry for the binding element.  The reason is that both sites listen on port 80.  This is the same behavior you’ll find in IIS – you cannot have more than one site listening on the same port without adding a host header.

Once you make these modifications, your <Sites> element will look something like this:

Code Snippet

  1. <Sites>
  2. <Site name="Web">
  3. <Bindings>
  4. <Binding name="Endpoint1" endpointName="Endpoint1" />
  5. </Bindings>
  6. </Site>
  7. <Site name="Web2" physicalDirectory="..\WebRole1">
  8. <Bindings>
  9. <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="" />
  10. </Bindings>
  11. </Site>
  12. </Sites>

While this syntax is correct, the host header will not locally resolve to our application.  To make this work, we have to update our hosts file so the address resolves to 127.0.0.1 (the loopback address). Update the hosts file (C:\Windows\System32\drivers\etc\hosts) to include the following hostnames:

Code Snippet
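The entries themselves are missing from this copy of the post; as an illustration (these hostnames are placeholders, not the ones from the original), each line of a hosts file maps a hostname to the loopback address:

```
127.0.0.1    www.fabrikam.test
127.0.0.1    www.contoso.test
```

After saving the file, both placeholder hostnames resolve locally to 127.0.0.1, so requests to either name reach the compute emulator.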


After you’ve done this, you can confirm that they’re resolving correctly by pinging one of the addresses:


The reply should come from 127.0.0.1.

With this complete, hit F5 and run again.  You’re now running the same project in two different websites – take a look at IIS:


In fact, if you take a look at the bindings on Web2, you’ll see that the host name has been set to


And if you type (or whatever port the compute emulator is using), you’ll see the website display.


This is pretty cool, especially when you want to use the same application for multiple websites – you can handle the requests in such a way that you either pull from different databases or even display different CSS files to make the website display differently.

Note: Some of you may have noticed above that IIS showed the site running under port 5100.  You may be wondering why it’s not on port 81, which is what we’re browsing to in IE.  This is because the compute emulator is listening on port 81 and routing to port 5100.  This is necessary so that, when we’re running more than one instance of our web application, both with the same host header, they’re not running on the same port.  Nifty, eh?

Now we can even go further.  Let’s take a look at how we can take a completely separate web application and run it in the same Web role.

Run Two Different Projects in the Web Role

Open up a new instance of Visual Studio 2010, and create a new Web Application.  I’d recommend you make a few changes to make it easily recognizable (e.g. change “Welcome to ASP.NET!” to something else).  Be sure to Build your solution (failure to do this will cause problems later).  Copy the Project Folder for the project, and then return to the ServiceDefinition.csdef file in your Windows Azure project.

Create a new <Site> element where the physicalDirectory value is the location of the Project Folder you copied a moment ago.  Also, update the site name and the hostHeader.  It should look something like this:

Code Snippet

  1. <Site name="Web3" physicalDirectory="c:\users\wwegner\documents\visual studio 2010\Projects\WebApplication7\WebApplication7">
  2. <Bindings>
  3. <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="" />
  4. </Bindings>
  5. </Site>

We’ve now created a third website, but this time – instead of pointing to the Web role project in our solution – we’ve pointed to a completely separate project that lives outside of our solution.  Hit F5 and take a look at IIS:


We now have three sites running, the third which includes the files from “WebApplication7”.  Try it out by updating the URL to and you’ll see:


You can see that this is the not the Web role project itself, but the second project we created.  The packaging tools know to include all the files needed from the physicalDirectory location and make them a part of the CSPKG that is ultimately given to the fabric controller.

This is pretty awesome, because it demonstrates that you can run two completely different websites in the same Web role with a minimal amount of work.

Now, let’s take this even one step further … because we can.

Run a Virtual Application Within a Site

There may be times when you want to run a Virtual Application within your site (see Wenlong Dong’s post Virtual Application vs Virtual Directory for some interesting information).  Given that the semantics of the <Sites> element is similar to system.applicationHost, it’s not hard to accomplish.

Create a brand new Web Application, and this time update <h2> with a different description (e.g. “This is my virtual application!”).  Build the solution, then grab the project’s path.  Return to your Windows Azure project and the ServiceDefinition.csdef file.  We’re going to add this virtual application to the “Web3” site.  We do this by adding a <VirtualApplication> element with name and physicalDirectory values.  Here’s what it will look like:

Code Snippet

  1. <Site name="Web3" physicalDirectory="c:\users\wwegner\documents\visual studio 2010\Projects\WebApplication7\WebApplication7">
  2. <VirtualApplication name="VirtualApp" physicalDirectory="c:\users\wwegner\documents\visual studio 2010\Projects\WebApplication8\WebApplication8" />
  3. <Bindings>
  4. <Binding name="Endpoint1" endpointName="Endpoint1" hostHeader="" />
  5. </Bindings>
  6. </Site>

The <VirtualApplication> element is on line #2.  Go ahead and hit F5, and take a look at IIS:


You can see that now, in Web3, our virtual application called VirtualApp is running.  And if we browse to, we’ll see that it is indeed “WebApplication8” that’s running.


So there you have it!

You’ve now seen how to:

  1. Run multiple websites that run off of the same project in the same Web role.
  2. Run multiple websites that use different projects in the same Web role.
  3. Run a virtual application within a website that uses a different project in a Web role.

Of course, when you deploy this to Windows Azure, you’ll want to use your own domain name and set the host headers accordingly.  For tips on how to do this – especially as it relates to setting up CNAMEs for your domain – take a look at Steve Marx’s post on Custom Domain Names in Windows Azure.

Wade and Steve Marx (@smarx) produced a 00:28:27 Cloud Cover Episode 37 - Multiple Websites in a Web Role for Channel9 on 2/18/2011:

Join Wade and Steve [pictured at right] each week as they cover the Windows Azure Platform. You can follow and interact with the show at @cloudcovershow.

In this episode, Steve and Wade:

  • Discuss how to maximize your use of Windows Azure by running multiple websites in a Web Role
  • Take a look at sharing folders between instances running in Windows Azure
  • Review an article that explains—in minutia—how to setup the Windows Azure management pack for System Center Operations Manager
  • Explain some tips for using Windows Azure Drives and Full IIS in the local compute emulator
  • Highlight Vittorio Bertocci’s post covering the "Small Business Edition" of his Fabrikam Shipping SaaS solution

Cloud Cover has been nominated for a "Niney" award in the "Favorite Channel 9 Show" category. If you're a fan of the show, vote for us!

Show links:

• Mickey Gousset noted that he expected Team Foundation Server to be ported to Windows Azure in his Year in Review: Visual Studio TFS and ALM article of 2/16/2011 for Visual Studio Magazine:

I thought I would take a moment with this column and look back at 2010 through the lens of Visual Studio 2010, Team Foundation Server (TFS) 2010, and Application Lifecycle Management (ALM) in general. There was a lot of movement on all those fronts, with new products and guidance released, as well as a strong uptake in Scrum.

Some of the biggest news for readers of this column was, of course, the shipping of Visual Studio 2010, Team Foundation Server 2010, and the .NET Framework 4. Both Visual Studio 2010 and Team Foundation Server 2010 shipped with multiple new features, including the much asked for hierarchical work items. On a sad note, the "Team System" name branding was dropped in an effort to simplify the multiple products in the platform.

One of the new features in TFS that many people were looking forward to, Visual Studio 2010 Lab Management (which I covered here), did NOT ship with the rest of the products. Instead, it was in a "release candidate" stage when TFS 2010 was released. However, later in the year, it was released as a full product, and has started to find wide acceptance.

Distributed version control systems have become very popular, especially with remote teams. Codeplex, the open source project community that runs on Team Foundation Server 2010, announced support for Mercurial, allowing projects to use it as their distributed version control system. And it was hinted that distributed version control may be added to TFS sometime in the future.

The Visual Studio ALM Rangers released an incredible amount of guidance in 2010. Some of their hits include:

  • Visual Studio 2010 Quick Reference Guide
  • Visual Studio 2010 Team Foundation Server Upgrade Guide
  • Visual Studio 2010 Team Foundation Server Branching Guide
  • Visual Studio 2010 Team Foundation Server Requirements Management Guide
  • Visual Studio 2010 and Team Foundation Server 2010 VM Factory Guide
This guidance continued to be updated and added to throughout the year.

Integrating security into the development lifecycle was also important in 2010. The Microsoft Security Development Lifecycle (SDL) team released process templates for both TFS 2008 and TFS 2010, allowing users to integrate SDL best practices into their Agile development processes.

Microsoft released a new version of Team Explorer named Team Explorer Everywhere. This product, based on the acquired Teamprise product, allows developers to work in non-Microsoft environments while still taking advantage of all the features in Team Foundation Server 2010, including work item tracking, version control and build automation.

With the big investment Microsoft has made in cloud computing and Windows Azure, it is no surprise that TFS is also being moved into the cloud. Brian Harry teased at this concept during the launch of TFS 2010, and showed off the partially finished product at the Microsoft Professional Developers Conference later that year (read PDC keynote coverage here). At this point, it is not a question of if, but when, TFS will be available with Windows Azure. [Emphasis added.]

The Team Foundation Server team was also very active in their Power Tools area, which provides extra functionality for Team Foundation Server users. They released an update shortly after the release of TFS 2010, with a second update in September of 2010. Microsoft also introduced the concept of Visual Studio 2010 Feature Packs. These feature packs extend Visual Studio 2010 with capabilities that enhance and complement the existing tools in Visual Studio 2010. However, these feature packs were only available to users with an MSDN subscription.

2010 will also be remembered as the year Scrum seemed to hit critical mass. Although many companies have used the Agile development process over the years, in 2010 Microsoft finally released its official Scrum 1.0 process template for Team Foundation Server 2010, and helped develop the Professional Scrum Developer Program, which offers a training course that guides teams on how to use Scrum, Visual Studio 2010 and Team Foundation Server 2010. Urban Turtle, a third-party tool that interacts with the TFS 2010 Scrum process template, was also released, adding some nice additional functionality to Team Web Access. I covered the Scrum 1.0 Process Template back in July.

Finally, Microsoft showed both its commitment and its passion around ALM with its first ALM Summit. The idea behind the summit was to provide a forum for ALM practitioners to gain deeper insight into ALM, best practices to address development challenges, and gain in-depth knowledge of Microsoft's ALM solutions. All the sessions from the summit are available at the ALM Summit Web site.

With all the new releases of products, new process templates, new conferences, and new guidance coming out in 2010, it is easy to see how it was a bumper year for both Team Foundation Server 2010 and ALM practices in general. As far as what looms in 2011, we know that Service Pack 1 for Visual Studio 2010 and Team Foundation Server 2010 should be coming out. Other than that, it is anyone's guess. I can't wait to see what 2011 brings.

Mickey Gousset is a consultant for Infront Consulting Group and lead author of Professional Application Lifecycle Management with Visual Studio 2010 (Wrox). He’s also one of the original Team System MVPs.

Full disclosure: I’m a contributing editor for Visual Studio Magazine.

David Linthicum explained Why SOA using Cloud Requires a New Approach to Testing in a 2/17/2011 article for ebizQ’s Where SOA Meets Cloud blog:

So, why are SOA and SOA using cloud computing so different that we need a different approach to testing? As I've been stating here on this blog, many of the same patterns around testing a distributed computing system, such as a SOA, are applicable here. We are not asking you to test all that differently; only to consider a few new issues. There are some clear testing differences to note when cloud computing comes into the mix.

First, we don't own or control the cloud computing-based systems, so we have to deal with what they provide us, including the limitations, and typically can't change it. Thus, we can't do some types of testing, such as finding the saturation points of the cloud computing platform to determine the upper limits on scaling, or attempting to crash the cloud computing system; that type of testing may get you a nasty e-mail. White-box testing of the underlying platform or services (that is, viewing the code) is also not supported by most cloud computing providers, though it is clearly something you can do when you own and control the systems under test.

Second, the patterns of usage are going to be different, including how one system interacts with another, from enterprise to cloud. Traditionally, we test systems that are on-premise, and almost never test a system that we cannot see or touch. This includes issues with Internet connectivity.

Third, we are testing systems that are contractually obligated to provide computing service to our architecture, and thus we need a way to validate that those services are being provided now, and into the future. Thus, testing takes on a legal aspect, since if you find that the service is not being delivered in the manner outlined in the contract, you can take action.
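That third point lends itself to automation. As a minimal sketch (the endpoint URL, availability threshold, and helper names below are hypothetical, not from the article), a recurring probe can record whether the contracted service answered, so that measured availability can later be held against the SLA promised in the contract:

```python
"""Sketch of a recurring SLA-validation probe. The endpoint and the
contracted availability target are placeholder assumptions."""

import urllib.request

SLA_AVAILABILITY = 0.999  # e.g. "three nines" promised in the contract
ENDPOINT = "https://example.com/health"  # hypothetical health-check URL


def probe(url, timeout=5):
    """Return True if the service answered with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False


def measured_availability(results):
    """Fraction of successful probes; the number to hold against the SLA."""
    return sum(results) / len(results) if results else 0.0


def sla_met(results, target=SLA_AVAILABILITY):
    """Compare observed availability against the contracted target."""
    return measured_availability(results) >= target
```

Run on a schedule and persisted, a log like this gives you the evidence trail Linthicum alludes to if you ever need to take action on the contract.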

Finally, cloud computing is relatively new. As such, IT is a bit suspicious about the lack of control. Rigorous and well-defined testing will eliminate many of those fears. We must be hyper-diligent to reduce the chances of failure, and work around the fear of what's new.

The Windows Azure team posted Real World Windows Azure: Interview with Shivaji Dutta, Senior Manager, Microsoft Worldwide Partner Group on 2/17/2011:

As part of the Real World Windows Azure series, we talked to Shivaji Dutta, Senior Manager in the Microsoft Worldwide Partner Group, about using the Windows Azure platform to create the Microsoft Partner Transition Tool.

MSDN: Can you please tell us about the Microsoft Worldwide Partner Group and the services you offer?

Dutta:  The Microsoft Worldwide Partner Group (WPG) supports the more than 400,000 partners globally that make up the Microsoft Partner Network (MPN). WPG is responsible for overseeing communications with these partners and providing official Microsoft guidance on certification requirements, trainings, and benchmarks for key business activities, such as marketing initiatives and sales volume.   

MSDN: What business need did the Windows Azure platform help the WPG address?

Dutta: In 2009, Microsoft decided to restructure the requirements for participation in its global Microsoft Partner Network. As of November 2010, we've moved from a three-tier membership structure to a two-tier system. Now, partners work toward achieving either Silver or Gold status in 28 specific business areas, or competencies, such as Application Integration and Virtualization. It was critical for us to help make the transition to the new program as straightforward as possible. In March 2010, we began working with Tikalogic, a Microsoft partner located in Redmond, to start the process of designing and building a web-based Partner Transition Tool.

MSDN: Can you provide an overview of the process of building and hosting the Partner Transition Tool?

Dutta: The team from Tikalogic used Microsoft Silverlight 4 to develop a web application, which we then published to Windows Azure. By using Windows Azure Tools for Microsoft Visual Studio, the team could access prebuilt templates and documentation for configuring Web and Worker roles, setting up security certificates, and creating the different application environments. The team used the ability to run the application locally and do live debugging through the IntelliTrace tool to save on development time. It took a six-person team six weeks to complete development, which was actually 50 percent faster than we'd expected.

The Partner Transition Tool provides Microsoft partners worldwide with an easy way to track progress toward certification requirements under the newly restructured Microsoft Partner Network program.

MSDN: How did you use the Windows Azure platform to meet the security requirements for the Partner Transition Tool?

Dutta: In rolling out the tool, we needed to ensure the highest levels of data security, so partners know that their business information is fully protected every time they interact with the application. We built on support in Windows Azure for user authentication through the Windows Live ID service to meet this goal. 

In addition to providing us with a rigorous security mechanism, this approach means partners don't need to remember a separate password to access the Partner Transition Tool site. They can just use the Windows Live ID credentials they established as part of their partner profile.

MSDN: What kinds of benefits have you realized through the use of Windows Azure?

Dutta: The Partner Transition Tool has provided a way for partners in more than 140 countries to ease the transition to the new MPN program requirements. Over the first couple of months since it's been live, we've seen usage of the tool double, and partners continue to tell us that the interface provides an easy way to find all the information that they need. In addition to saving at least three weeks on the development and rollout of the application, we saved on infrastructure costs without sacrificing scalability. Now we have the network resources that we need in place to handle the continued adoption of the tool, and we can scale back quickly and efficiently as the need for the tool declines over time.

Read the full story at:

To read more Windows Azure customer success stories, visit:

Full disclosure: I’m a registered member of the Windows Partner Network.

David Linthicum [pictured below] asked “The U.S. CIO outlines how the federal government can accelerate the adoption of cloud computing -- but will this help?” in a deck for his The feds' cloud migration strategy may not be enough article of 2/17/2011 for InfoWorld’s Cloud Computing blog:

It was encouraging to see that the United States, under the direction of Vivek Kundra, the country's first CIO, just posted "A Strategy for Cloud Migration" (PDF) for government agencies on its website. This document includes a decision framework for cloud computing adoption by the government, case-study examples of success in the cloud, and old but mostly accepted definitions of cloud computing and its benefits.

My favorite part is the first paragraph: "The federal government's current information technology environment is characterized by low asset utilization, a fragmented demand for resources, duplicative systems, environments that are difficult to manage, and long procurement lead times. These inefficiencies negatively impact the federal government's ability to serve the American public." Hear, hear, brother!

As taxpayers, we should all be behind the feds' use of cloud computing to get to a more cost-effective and efficient state. However, the U.S. government's move toward cloud computing has been slow to appear as CIOs at all agencies struggle with inter- and intra-agency politics and with budgets that do not yet support cloud computing.

In this document, the CIO's office lays out the "potential spending on cloud computing by agency" in Appendix 1. I guess the key word is "potential" and not "budgeted." However, the top three departments are Homeland Security, Treasury, and Defense, with a total potential expenditure of $20 billion for all agencies combined from a total IT budget of $80 billion. I'm assuming that's yearly.

The trouble with these kinds of documents is that there isn't enough detailed information to be useful. Migration to cloud computing is complex and far-reaching, and it takes a lot of thoughtful planning that can't be explained in 40 pages. Moreover, without control of agency budgets, it's going to be difficult for the U.S. CIO to move quickly, if at all, before the current administration goes into re-election mode. I suspect the path to the cloud for the government will continue to be frustratingly slow. I applaud the CIO's work in this area, though. I think he's pushing all of the buttons he can push right now.

Last year at this time, I pointed out that the U.S. government was mandating the use of cloud computing, or at least that agencies have a plan or a cloud in place by 2012. It will be interesting to see if that happens -- I predict everyone will ask for exemptions and extensions.

What's needed is a funded group of cloud computing ninjas in Washington who understand how to do cloud computing and are available to any agency that needs them to assist in planning, budgets, architecture, development, and deployment. Also, money needs to go to agencies that can't yet fit cloud computing into their budget until their stream of funding is more operationally focused.

This is a very solvable problem, but it's one that requires resources and commitment.

No significant articles today.

Kevin Kell explained Remote Desktop Functionality in Azure Roles in a 2/16/2011 post to the learning Tree blog:

One of the cool new features of version 1.3 of the Azure SDK is the ability to connect to Azure roles using Remote Desktop. The increased visibility into the role makes monitoring activity and troubleshooting problems easier than ever. This allows for unprecedented flexibility and control over the role instance.

Configuring a role for Remote Desktop access is relatively straightforward. From the Publish dialog in Visual Studio you can configure the Remote Desktop connection parameters.

Figure 1 Publish Dialog

Figure 2 Configure Remote Desktop connections

In the configuration you need to specify a user name and password to use and a certificate to encrypt those credentials. The certificate will be uploaded to the hosted service on Azure. Thus a secure channel is established for RDP. If you want more information on how certificates can be used to deliver passwords securely and cloud security in general, consider Learning Tree’s course 1220: Securing the Cloud.
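For readers curious about what the Publish dialog actually emits, the choices land in the service model files. Here is a rough sketch assuming the SDK 1.3 plugin naming conventions; the user name, expiration date, encrypted password, and certificate thumbprint below are placeholders, not values from a real deployment:

```xml
<!-- ServiceDefinition.csdef: import the RemoteAccess plugin into each role
     you want to reach, and the RemoteForwarder plugin into exactly one role -->
<Imports>
  <Import moduleName="RemoteAccess" />
  <Import moduleName="RemoteForwarder" />
</Imports>

<!-- ServiceConfiguration.cscfg: the password is encrypted with the
     certificate that Visual Studio uploads to the hosted service -->
<ConfigurationSettings>
  <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
  <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="rdpadmin" />
  <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="BASE64-ENCRYPTED-PASSWORD" />
  <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-12-31T23:59:59.0000000-08:00" />
  <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
</ConfigurationSettings>
<Certificates>
  <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption"
               thumbprint="0123456789ABCDEF0123456789ABCDEF01234567" thumbprintAlgorithm="sha1" />
</Certificates>
```

With settings like these in place, the portal's Connect option (or an RDP client pointed at the instance) uses the decrypted credentials over the secured channel.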

After the project is published in this manner and is started on Azure you can connect to the running instance using Remote Desktop. As usual, I think a demo makes this clearer.

Azure continues to evolve, as do the services offered by the other cloud providers. One thing is for sure, competition benefits consumers! In our course 1200: Cloud Computing Technologies a Comprehensive Hands-on Introduction we develop a framework for understanding cloud computing that will be useful now and in the future!

<Return to section navigation list> 

Visual Studio LightSwitch

The ADO.NET Team announced New Whitepaper: T4 Templates and the Entity Framework on 2/17/2011:

Recently we published a new whitepaper to MSDN written by Scott Allen that explains what T4 is and how the Entity Framework uses it for code generation.  It starts with an introduction to T4, covers editing and debugging tips, and shows how you can use it to customize and take total control of your generated classes.  The article's extremely detailed and contains everything you need to get up and running with T4 and EF4!

T4 Templates and the Entity Framework

<Return to section navigation list> 

Windows Azure Infrastructure

The Windows Azure Team recommended in a 2/18/2011 post that you Follow @wapforums on Twitter to stay connected with the Windows Azure Platform MSDN forums:

The Windows Azure Platform MSDN forums provide free online community support for Windows Azure and SQL Azure developers from Microsoft insiders, MVPs, and like-minded developers.  The forums are a great way to get quick answers and support on all things Windows Azure without having to log a support ticket. The Windows Azure Platform forums average more than 360 new posts a month, and 99% of the posted questions receive a response within one day. 

To make it easier to stay on top of what's happening on the forums, you can now follow @wapforums on Twitter to keep track of new forum questions and discussions as they're posted. When you're logged into Twitter, you'll be able to see questions as they're asked, get community tips as they're posted and identify discussions that you may want to participate in.

And be sure to watch for posts from #MSTechTalk for tips and discussions from Microsoft engineering teams and #WAPSupport for posts related to top support questions.

I’m sold. Following. 

• Brent Stineman (@BrentCodeMonkey) answered Why Windows Azure? in a 2/18/2011 post:

I get asked this by both colleagues and clients: why Windows Azure? What makes it special? I’ve answered that I believe that if we don’t get “with the cloud”, we’ll be the COBOL programmers of the next generation. Now before anyone gets offended, I’ve done my share of COBOL. I also need to put this in context.

Some of you may be aware that I also write a blog called the Cloud Computing Digest. In my latest update I made the following statement.

When talking to people about cloud, I keep telling them that IMHO Microsoft is betting on PaaS and playing the long game. The patterns that are being pioneered in products like Windows Azure will be coming back on premises and affect the way we build solutions. I believe this so firmly that I’ve essentially bet the next 5-10 years of my career on it. So it was nice to get some validation.

Now this resulted in a question back from a colleague in Ohio…

Clarify this for me, if you would. I perceive it as: since cloud platforms are so massively scalable, their architectures will be mimicked by enterprises that still take a traditional approach to their platforms.

I use the word “traditional” to mean: “Here is our server room” as in it is on-premise at the client site.

Is that what back on premises means?

This made me realize that the discussion that follows my statement when I’m talking to folks helps put that statement in context. While it’s difficult to have a discussion on a blog, below is my attempt at a more detailed answer.

Microsoft, unlike many of the other vendors, is already fully, deeply, and some say irrevocably entrenched in the on-premises world. Windows servers still drive the enterprise. Other cloud computing players are almost entirely hosted. Google, Amazon, Rackspace… they don’t have this advantage.

If you’ve looked at what Microsoft does, they pick a direction they know they can control and exploit it. Now they already had a good set of tools to manage their own data centers. Tools built on the back of offerings like MSN, Hotmail, Bing, BPOS, etc… So it made sense to take those tools to the next level and create something like Windows Azure. A product built on a vision where you no longer have to configure each server manually and install applications into it. A vision where a “fabric” of computing hardware could also dynamically shift the load to whatever hardware was necessary to best leverage the resources in the datacenter.

So that’s where we are at today… what about tomorrow…

We have all these servers back on-premises. And we’re building up the tools to make them easier to manage (the latest Hyper-V updates, for example). But even if we make it easier to virtualize and manage those resources, we still have the application deployment challenges. One-click made it easier, but there’s still environmental setup and the lack of the ability to move stuff around in an automated fashion.

So the next step is to make this easier. We do this with another fabric approach and add support for discrete application units that can be scaled and connected into more complex solutions (gee, just like we do in Windows Azure). Essentially taking SOA and finally making it practical.

IMHO this is where things are going. Cloud, and specifically PaaS requires us to think differently about how we architect our solutions. This new approach WILL leave the cloud and find its way into the on-premises world. As such, we’re on the front end of a new trend in computing, the likes of which we haven’t seen in almost 20 years.

That is what I’m banking the next 5-10 years of my career on. If this fizzles, I’ll be heavily invested in an architectural approach that hasn’t panned out and, more to the point, be an expert in a failed platform. But I’m confident that this will not be the case, and that those of us who adopt this new thinking now will be uniquely qualified to take the lead when the future comes to pass.

I’m in the same boat as Brent.

Avkash Chauhan observed that Windows Azure is not HIPAA compliant in a 2/17/2011 post:

First I would say it is a very delicate matter to discuss HIPAA compliance, as its definitions and standards cross many boundaries. Both the application and the infrastructure where the application runs define the aspects of HIPAA compliance. HIPAA uses the concepts of Business Associate and Service Provider as defined in the HITECH Act and ancillary Federal Register rules. Running a HIPAA-compliant application requires that every piece of information be verified and stored, and that every action on the data be recorded and audited. HIPAA compliance means security, privacy, accountability, auditing and many more things. Any healthcare application hosted in Windows Azure that processes HBI data falls under the HIPAA / HITECH Act, a level of compliance that is not available in Windows Azure today.

Windows Azure is currently rated for Low Business Impact (LBI) information and is getting closer to the Medium Business Impact (MBI) standard. Protected Health Information (PHI) is considered High Business Impact (HBI) information; however, the current Windows Azure architecture is not ready to handle HBI information yet, so running an application on Windows Azure does not mean it is HIPAA compliant.

That means “don’t store personally identifiable PHI” in Windows Azure or SQL Azure.

Eric Webster [pictured below] reported Lessons From Nokia: Choose Your Cloud Partners Wisely to the TalkinCloud blog on 2/17/2011:

Electronics blog Engadget recently leaked a memo from Nokia CEO Stephen Elop to his employees. In the memo, Elop writes that Nokia is “standing on a burning platform.” He goes on to say that Nokia has failed by falling behind the competition. If you haven’t read the memo, I highly recommend reading it, and never sending a memo like this to your team. If Elop intended the memo to be motivational for his employees, it resulted in exactly the opposite: loss of internal confidence in Nokia’s future and a public relations nightmare, not to mention disgust toward Elop that likely extends to other executive team members.

Aside from how to not motivate your employees, there are other lessons to be learned from the memo and from Nokia’s position in the market today.

As the memo stated, “Apple owns the high-end range” of the mobile market, and “they are now winning the mid-range, and quickly they are going downstream to phones” for Android. Elop’s burning platform analogy is probably accurate, but the question remains: how do companies maintain their competitive advantage and avoid becoming the Nokia of their industry?

Place Your Cloud Partner Bets Carefully

For IT service providers, staying competitive and relevant in the market is critical to your success and longevity. Much of your differentiation from the competition will depend on the vendors you decide to partner with. Imagine you were a Nokia reseller competing with Apple resellers: how do you think your business would be doing, considering that Nokia’s brand preference has declined internationally?

Perhaps the most important lesson for channel companies is this: Don’t become another Nokia.

Instead, select your partnerships wisely and make sure the companies and people you choose to align yourself with will continue to provide you with innovative solutions that you can in turn provide to your customers. There are the IT service providers who continue to go the Nokia route—minimal innovation, head in the sand, refusing to change. Then there are IT service providers who not only seek out innovation in partnerships, but demand it. These are the same companies you see in the news acquiring their competition and growing their market share.

When looking at cloud providers, keep the same lesson in mind: Are they a Nokia, or are they an Apple? I would place my bet on Apple every time. Make sure you place your bets wisely and don’t be bashful in partnering with a company that is innovating in the cloud when others aren’t. Remember, Nokia’s lens to the phone market was fundamentally different than Apple’s. Now history has been made, and Nokia has lost the battle.

Eric is VP of Sales and Marketing at Doyenz, a leading provider of cloud-based disaster recovery to the channel. Monthly guest blogs such as this one are part of TalkinCloud’s annual platinum sponsorship.

Read More About This Topic

Nicole Hemsoth reported First Academic Journal Devoted to Cloud Computing Launched in a 2/17/2011 story for the HPC in the Cloud blog:

Although there are a number of computing-related peer-reviewed academic journals that discuss cloud computing in the context of more specific fields, the field was open for a cloud-driven academic publication until just recently. 

The International Journal of Cloud Computing (IJCC), which launched at the beginning of 2011, is the first journal with a mission to uncover research in the academic cloud space. The publishers have stated that the resource will provide technical reports, use cases and peer-reviewed academic articles covering a wide array of issues, including high-performance computing.

The journal will be managed by Dr. Yi Pan from the Department of Computer Science at Georgia State University. Other members of the editorial board include Dr. Jack Dongarra, Dr. Ernesto Damiani, Dr. Geoffrey Fox, and a host of other luminaries in the academic cloud research space as listed here.

According to Inderscience Publishers, the journal is set to be published quarterly. The objectives are to “develop, promote, and coordinate the development and practice of cloud computing.” They hope to appeal to academics, researchers and those working in the fields of grid, virtual and cloud computing.

The “international” angle is important, according to the journal’s founders because there is a “need to overcome cultural and national barriers and to meet the needs of accelerating technological changes in the global economy.”

JP Morganthau posted The Game is Always Changing: A Non-technical Perspective on Cloud Computing on 2/17/2011:

It’s not about what’s different; it’s about getting on board with the changes in the game or getting left behind.

People say, “What’s so different about Cloud Computing?”, “This is nothing more than managed hosting or mainframe computing redux”, “what’s old is new again.” What was it that Gordon Gekko said to Jake on the subway in “Wall St.: Money Never Sleeps”? Oh yeah: “a fisherman can always see another fisherman coming.” Technically, Cloud may be perceived as only an incremental change over what already exists, but those who argue that Cloud is just hype have already missed the bigger picture—the game is always changing, and this is the next major change.

In this case, the game has shifted from “infrastructure and applications matter” to “data matters.”  The players driving the Cloud revolution have initiated the path to commoditizing the rest of the computing universe, and all you should care about is the data and the services that operate on that data.  Computing hardware was already heavily commoditized but, up to a few years ago, software was still the realm of the wizards.  With the emergence of cloud computing delivery models—IaaS, PaaS & SaaS—consumers now have value-based vehicles to select from to support their various computing needs, from do-it-yourself (DIY) to do-it-for-you (DIFY).  This is a realization that just occurred to me this past week: cloud computing brings IT into the realms of Pep Boys and Home Depot.

Those of us that come from engineering backgrounds often miss the subtle drivers that move the universe and may even question how these changes come about.  Let’s face it, you can’t talk about this scenario without referencing VHS vs. Betamax.  There are a group of engineers that have been employed by the market makers to bring about this revolution.  I regard this move as the chess players moving the pawns on the board.  These market makers see a world that they could have a piece of if they could bring about the vision in their head, so they employ smart people to push the bounds of what can be done and raise the bar to new extremes.

Do you think the Amazon Web Services cloud is perfect?  It’s a work in progress.  I guarantee if you check service levels within a large-scale deployment against current levels within well-staffed, well-managed data centers, you will see more failures in AWS than you have in the past five years in the data center; and that’s okay, because we’re in the midst of the game changing.  I had a CEO in one of my startups who used to say, “by the time we have 50 customers our first 5 customers are going to hate us and leave us.”  The reality is that that first group, no matter how specially you treat them for being first, is going to live through significant pain working with you through a game change.

What’s really interesting about this change is the support from the public sector.  What has traditionally been a laggard in information technology is, in my opinion, turning out to be one of the strongest driving forces behind this game change; this really turns the concept of market makers on its head.  After all, it’s easy to comprehend where the game is going when Bill Gates, Larry Ellison, Jeff Bezos, etc. are the market makers, but considering that the U.S. Federal CIO is also part of that group means that the scale of the market driving the change is considerably larger than anything we’re used to seeing.

While maybe not readily apparent, because market makers are primarily focused on the private sector, direction is most often not influenced by acquisition policy, congressional politicking, military and intelligence confidentiality requirements, or Presidential races.  The landscape of this market once filtered through the lenses of these and other factors will have grown significantly faster and offer a great deal more variation and value.  More importantly, and we are seeing this happen already, it will be more open and transparent than past market changes.

So, technically speaking, it doesn’t matter if Cloud really offers a revolutionary change over what we already had because in the minds of the masses, the change is real and it’s happening now.  The question is, will you figure out your role in the changed game or believe the game change will fail leaving you unchanged?

JP Morganthau is the author of Enterprise Information Integration.

The Microsoft Platform Ready (MPR) team updated its landing page in 2/2011:


Unitek Education announced the formation of a new cloud computing division, Stratos Learning, in a 2/2011 post:

For over a decade, Unitek Education has provided award-winning training in Microsoft, NetApp, Cisco and many other technologies. Unitek now brings that same training standard to its new cloud computing division, Stratos Learning.

Stratos Learning provides four well-structured training programs that focus on specific areas of cloud computing: characteristics, architectures, and impact on system design. Each program progresses incrementally in complexity; however, each course can be taken individually or in sequence.

Stratos Learning has partnered with HyperStratus, a well-known name in the virtualization and cloud computing world. Through this partnership, the two companies combine superior courseware and excellent teaching credentials with extensive virtualization field experience that no other training entity can match.

About Unitek

Armed with decades of IT Training experience, Unitek has been awarded Learning Partner of the Year awards from five different IT market leaders. The experience and real world know-how of our team of instructors cannot be matched by any other IT Training provider.

If your company has already rolled out its initial cloud applications and is in need of a customized cloud-training solution, our experienced team of instructors can tailor our programs and bring the training to your doorstep through Onsite Training. To learn more about how this technology can revolutionize your growth and operations, click here to contact a Training representative now!

Cloud Courses

Stratos Learning features the following in its repertoire of Cloud Computing training courses:


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Larry Dignan (@ldignan) posted on 2/18/2011 a video clip of Dave Cullinane, eBay’s Chief Information Security Officer, reaffirming eBay’s plan to build its own private cloud with Windows Azure:

At the RSA 2011 conference here in San Francisco, technology futurist Paul Saffo talks to eBay’s chief information security officer about the company’s intent to build an eBay version of the cloud to handle its growth.

Dave (pictured at right) said at 00:00:30:

eBay’s going to the cloud. … The whole intent is to build an eBay version of the cloud that we can run in our data center or, if we need to, in a Microsoft data center. From February to November, [eBay’s] volume is fairly consistent, but then in November we get these enormous peaks with people buying things for the holidays, and then it tails off a little at Christmastime and then goes back up as people are selling all the stuff they got for Christmas. …

Last year our peak [traffic] was about 25 Gbits per second, which is just about off the charts. So we must have that kind of capacity in our data center plus headroom for the growth. That costs an awful lot of money. We’re almost at the point where we must build another data center. If I can build an enterprise version of eBay that runs either in my data center or [Microsoft’s], I can take that peak increase and burst it into Microsoft’s data center instead of having to go build another data center. We’re talking about hundreds of millions of dollars of savings per year. …

Larry Dignan is Editor in Chief of ZDNet and SmartPlanet as well as Editorial Director of ZDNet's sister site TechRepublic.

Matthew Weinberger reported RightScale and Partner to Promote Hybrid Clouds in a 2/17/2011 post to the Talkin Cloud blog:

Cloud management solution RightScale and channel-focused IaaS provider hit the road this week to promote a vision for a hybrid future as the industry shifts towards software-as-a-service. TalkinCloud was on hand at their “Cloud Conversations and Cocktails” stop in New York City to hear the pitch and get some perspective on where the two open source developers see the industry going.

The case for the hybrid cloud, according to RightScale VP of Business Development Josh Frasier, is simple: while Amazon EC2 may be the de facto cloud standard right now, and one that everything from OpenStack to’s own CloudStack has to support, relying on Amazon Web Services availability zones may not always deliver the best performance if you’re global. And that’s putting aside the compliance issues of keeping data offsite.

That’s where using the cloud-agnostic RightScale to manage a private platform on hardware behind the firewall comes in. Chief Marketing Officer Peder Ulander was on hand to show how it was possible to keep certain resource pools strictly on-premises and redirect intense workloads to a public Amazon EC2 instance — as long as that’s permitted by security rules.

For the most part, “Clouds Conversations and Cocktails” was just a way to demonstrate to the partners of and RightScale alike how their platforms could be used to meet a rising need for hybrid clouds. And RightScale highlighted the ISV relationships they’ve built thanks to their cloud server template library.

Overall, and RightScale’s belief in the hybrid cloud model certainly provides a lot of food for thought.

Follow Talkin’ Cloud via RSS, Facebook and Twitter. Sign up for Talkin’ Cloud’s Weekly Newsletter, Webcasts and Resource Center. Read our editorial disclosures here.


John Treadway (@CloudBzz) posted Ready! Fire! Aim! to his CloudBzz blog on 2/17/2011:

Time to talk about cloud stacks again.  No, not that there are too many (though there are), but rather the one-track mind that many IT buyers I encounter have with respect to cloud.  One of the customers I met with this week is looking to implement a private cloud for part of their business, and they are actively evaluating “cloud stacks” from a limited number of vendors (mainly their strategic suppliers).

I had the opportunity to meet the lead director for infrastructure and data center operations and a couple of his key guys.  I asked about their process for their private cloud initiative, and specifically how far along they were on their top-down requirements analysis and documentation.  The reply I received was more than a little bit surprising.

“We have not created a high-level analysis of requirements.  We’re evaluating vendor solutions and will pick the best one.”

If they haven’t thought through their vision for private cloud, translated that into capabilities requirements, and aligned their business users, how can they possibly choose a technology?  And why do they think that all they need is a cloud stack?

Is the dog wagging the tail, or is it the other way around?  In this case, it’s clearly a case of tails wagging the dog as they get bombarded with vendor cries of “buy this cloud now and it will solve world hunger and peace.”

Cloud projects should follow a flow that definitively answers a VERY LONG set of key questions.  Here are JUST A FEW of them:

  1. What are the strategic objectives for my cloud program?
  2. How will my cloud be used?
  3. Who are my users and what are their expectations and requirements?
  4. How should/will a cloud model change my data center workflows, policies, processes and skills requirements?
  5. How will cloud users be given visibility into their usage, costs and possible chargebacks?
  6. How will cloud users be given visibility into operational issues such as server/zone/regional availability and performance?
  7. What is my approach to the service catalog?  Is it prix fixe, a la carte, or more like value meals?  Can users make their own catalogs?
  8. How will I handle policy around identity, access control, user permissions, etc?
  9. What are the operational tools that I will use for event management & correlation, performance management, service desk, configuration and change management, monitoring, logging, auditability, and more?
  10. What will my vCenter administrators do when they are no longer creating VMs for every request?
  11. What will the approvers in my process flows today do when the handling of 95% of all future requests is policy-driven and automated?
  12. What levels of dynamism are required regarding elasticity, workload placement, data placement and QoS management across all stack layers?
  13. Beyond a VM, what other services will I expose to my users?
  14. How will I address each of the key components such as compute, networking, structured & object storage, virtualization, security, automation, self-service, lifecycle management, databases and more?
  15. What are the workloads I expect to see in my cloud, and what are the requirements for these workloads to run?

This list goes on for several pages.  If you have not done this level of analysis, you are not ready to evaluate a cloud stack.  Sure, you can research and hear from vendors – often a good way to educate yourself and prompt new thinking about the concepts above.  However, customers should stay away from asking for POCs, asking for pricing from vendors, setting budgets, and all of the other dance routines we face in the procurement process.

The unfortunate truth is that most vendors don’t want you to do this because it slows down the sales cycle.  But I’m going to quote the carpenter’s axiom here:

Measure twice, cut once!

Would you build a house without a vision, rendering and architectural blueprints?  Of course not.  However, the other unfortunate truth I am finding is that too many customers I see are falling into this trap.  They have the cart way out in front of the horse.

They are practicing “Ready! Fire!  Aim!”  I’m sure we all can guess how that will work for them…



No significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

• David Chernicoff reported Obama takes a leap of faith and trusts the government to the cloud in a 2/18/2011 post to ZDNet’s Five Nines: The Next Data Center blog:

There have been a huge number of articles in the last few months talking about the President’s tacit acknowledgment of the future of cloud computing by pushing a federal budget that relies heavily on the cloud as part of the datacenter consolidation that is being required by the plan to control governmental IT costs. Now I realize that a President’s actual contribution to the design of the federal budget likely amounts to little more than accepting what they are told by their advisors, but one really has to wonder why those advisors are so readily drinking the cloud Kool-Aid.

Or perhaps those advisors aren’t but are instead pandering to a vague public perception that “the cloud is the solution.” Every time I write about the cloud, I get a flood of public and private responses that have a very common theme: how can you trust the security of a solution that you cannot control end to end? Even the cloud-positive responses often focus on a specific set of security or data control issues which the writer feels their business has properly addressed.

The National Institute of Standards and Technology, in light of the “cloud-first” directive from US government CIO Vivek Kundra, has issued two NIST Special Publications: SP 800-144, Guidelines on Security and Privacy in Public Cloud Computing, and SP 800-145, The NIST Definition of Cloud Computing. The main problem with the security document is that it really presents nothing new; the guidelines presented are pretty much the same as the recommendations that any competent IT security professional would give their employer or client.  The issues that cloud security will present as the technology matures and becomes more prevalent, which also means that more bad guys will be looking for cracks, aren’t really discussed.

The fact that NIST is basically recommending that agencies take responsibility for their own security and not just trust the cloud vendor falls in that same category of common-sense advice. The problem is this: given the widely reported security problems with existing governmental and military networks (failures to prevent unrestricted unauthorized physical access, and a raft of malware, Trojan and virus attacks by foreign governments, including the recent successful Anonymous attack on HBGary), what makes anyone think that there will be a simple, or even near-term, solution to securing the potential petabytes of governmental data that will be migrated to the cloud?

The reality is that any cloud service provider with a contract with a US government agency will become a lightning rod for external attacks from everyone from bored script kiddies to inimical foreign agents. And the cloud just isn’t ready for that.


• Carl Brooks (@eekygeeky) posted Why enterprise security hates the cloud: Change is hard to the blog on 2/17/2011:

    Weekly cloud computing update

    It’s one of the conundrums of cloud computing that makes it so irresistibly difficult; taken as a whole, cloud is so different from the way IT usually operates that it feels like there should be a whole new model or set of techniques for protecting an infrastructure. But when you start to break it down into actual achievable, meaningful steps, it’s not. It's just jumbled all around or stretched out in weird directions.

    “At the end of the day, we still have all the security problems we had before.”

    Carl Brooks

    For instance, use public cloud and you're doing website security. You're just not used to doing that for your front end, back end and data layer all at the same time. Use a private cloud in-house for your dev team and you're doing a witch's brew of IP management, operational management and project management. Day-to-day governance and risk management is not traditional fare in the weekly project team meeting.

    Everyone's gotten used to living in a bubble

    The red-tape brigade responsible for compliance and setting policies wants cloud to magically fit into the slots they've already got in their precious paperwork, even though it’s a new kind of animal. If you need to hit an audit twice a year, how do you sleep at night knowing Bob and Ted and Alice might be commissioning and decommissioning hundreds of servers with company IP on them in between?

    The operations guys are perturbed, roused from their hypnotic, CYA-based routine of fighting off clueless users and management buffoons to keep the lights on and data safe (despite the CIO's best efforts to irrevocably screw it up). How do you convince the IT basement trolls that cloud storage is safe when they can't even sniff the hard drives to see if they're ripe? It's like poking bears.

    And the security chiefs are too busy freaking out about cyber supercriminals stealing their brainwaves, or cat-burgling the company safe through a ventilation shaft, to worry about sending data to some "bucket" somewhere. They're more focused on the security team knowing kung fu than on something as insane as cloud computing.

    Operations trolls in their beer-soaked caverns, bloodless GRC bureaucrats in their ivory tower and security ninjas slinking around installing 'pick-proof' locks on the executive johns. None of them are prepared to just pick up and run with the cloud.

    When will cloud security issues be resolved?
    Cloud computing, especially private cloud (because let's face it, you're either all the way in the public cloud running an e-business or you're sticking in stuff that doesn't matter), makes everybody rub together uncomfortably. No one's sure where it's all going to shake out and nobody is getting answers they like. The only people successfully overhauling their IT infrastructure into cloud-like operations are doing it whole hog and reinventing security along the way.


    Speaking of uncomfortable rubbing together, the grandmother of security dog and pony shows finishes this week. At RSA Conference 2011, we interviewed an enterprise with a good handle on how to approach cloud security that really illustrates this conundrum. If you can get through his diagram without a searing headache, congratulations; you're IBM's Watson in disguise.

    US Federal CIO Vivek Kundra declared 20% of the Fed's IT infrastructure is going to be on clouds. Security pundit Chris Hoff came on stage directly afterward to basically mock everything going on in cloud and security. One analyst gave a talk and did a variation on a Seinfeld gag. "What's the deal with antivirus software?" So many announcements went out, you'd think the problem was licked.

    But at the end of the day, we still have all the security problems we had before. Cloud really hasn't made that much of a dent; it's just put us in a fun house mirror. What it should really be doing is making us question why we're still doing the same old stuff (why do we need so much useless AV software?). The potential for change is there; the reality is it’s a long way off.

    Carl is the Senior Technology Writer for

    Full disclosure: I’m a paid contributor to

    • CloudTimes reports about the Cloud Security Alliance Summit At RSA in a 2/17/2011 post:

    At the RSA conference this week, the Cloud Security Alliance (CSA) held a half-day summit which served as the venue for the global introduction of several research projects, including research on governance, cloud security reference architectures and cloud-specific computer security incident response teams. The emphasis of the conference on cloud computing security indicates the importance of this area for mainstream adoptions of cloud computing by organizations. Cloud computing, which moves on-premise organizations resources into a shared virtual environment, has become appealing to organizations of all sizes because of its scalability, increased efficiency, and potential cost savings.

    The Summit was kicked off with a keynote from the CEO of one of the industry’s leading cloud service providers, Marc Benioff of

    Benioff introduced his vision of ‘Cloud 2’. “The best days are still very much ahead of us in this industry”, he told the audience.

    As social networking goes beyond email, and next-generation internet and applications emerge, “people are changing not only what they are doing online, but how they are doing it”, he said.

    “For the first time, we have seen an instance of PC shipping numbers dropping, as we move into the year of the tablet. This is one of the most exciting technology opportunities of our time”, Benioff argued, as he explained his belief that security and innovation are very closely linked. The company’s launch of ‘Chatter’ – a Cloud 2 initiative, a collaboration application for the enterprise to connect and share information securely with people at work in real time – “is moving us into a social, mobile, more open cloud. The new question we should be asking is ‘why isn’t all enterprise software like Facebook?’”

    With continuous new innovations in the cloud, new security threats are popping up, and CSA is being proactive in trying to respond to them. According to Jim Reavis, executive director of CSA, “We have learned from previous technical innovations that we cannot ignore security. We are dealing with such accelerated innovation in the cloud that there will continue to be a lot of risk if we don’t maintain eternal vigilance.”

    In addition, CSA has put together an incident response research program with cloud providers and security experts. This program will gather, share, and collaborate on security threat incidents to find the best-suited remedy. CSA also established, in September 2010, a certification program for cloud IT professionals. The test is based on the information contained in the best-practice guide, “Guidance for Critical Areas of Focus in Cloud Computing”; the cloud computing security issues covered in that work will help IT professionals better understand what questions to ask, current recommended practices, and potential mistakes to avoid. CSA has done little to promote this certification, but it plans to extend it to include more technical cloud computing material.

    • Andrew R. Hickey reported from RSA Conference 2011: RSA, VMware Stump For Trust In Cloud Security for CRN on 2/15/2011:

    Cloud computing and virtualization will reshape IT security as the world knows it, and the proof that the cloud can be secure is already starting to bubble to the surface, RSA executive chairman Art Coviello [pictured at right] told a jam-packed room of thousands during his opening keynote at RSA Conference 2011.

    And to adapt for the inevitable shift to cloud and virtualized systems, RSA, VMware and others are teaming up to launch new security services to ensure data and information moving and stored in cloud and virtualization environments is secure.

    "Last year, my keynote was about the promise; this year it's about the proof," Coviello told thousands of security pros.

    Cloud computing has created an interesting paradigm for security: more information is being created, the physical infrastructure is dissolving and identities are proliferating. Regardless, Coviello said, the goal of security remains the same: to get the right information to the right people, unhindered and uninterrupted.

    "It may at first seem that cloud and virtualization may confuse the problem," he said, later adding that despite the concern, confusion and uncertainty, cloud computing and virtualization have become a reality. "Regardless of fear and uncertainty, businesses are moving to the cloud anyway."

    Calling virtualization "our silver lining in the cloud," Coviello said virtualization can create a new level of control not available in physical environments. First, in virtualization and cloud environments, security becomes information-centric. Second, security becomes built-in and automated. And third, security becomes risk-based and adaptive.

    Richard McAniff, chief development officer and co-president at VMware, said there are three distinct phases to the virtualization journey affecting how security is perceived: capex, resiliency and agility.

    "We need to think about security in a fundamentally different way," McAniff said, noting that historically, security systems were built on the notion of static infrastructure with applications built on top of it. Cloud computing and virtualization change that as the movement of security policies becomes automated and users are connected to information regardless of physical infrastructure.

    In order to protect information and users in the cloud and in virtual environments, McAniff said vendors need to work together to create a security ecosystem.

    "If we're going to work together to keep the bad guys out...we need a little help from everyone."

    In the spirit of teamwork, Coviello gave RSA Conference 2011 attendees a sneak peek at the RSA Cloud Trust Authority, an upcoming set of services that RSA and VMware will launch to provide visibility, control and security to cloud computing and virtualization environments. The RSA Cloud Trust Authority, which will see its first set of services launch in the second half of 2011, will offer visibility and control into identity, information and infrastructure. Along with RSA and VMware, Cisco (NSDQ:CSCO), Intel (NSDQ:INTC), Citrix (NSDQ:CTXS) and others will be part of the partnership.

    "Trust in the cloud, it is achievable; not in the distant future, but today," Coviello said. "Virtualization and the cloud will change the evolution of security dramatically and positively in years to come."

    <Return to section navigation list> 

    Cloud Computing Events

    • Cloudcor reported Cloud Slam'11 Goes Hybrid, Live in person event to be held April 18 at Microsoft Conference Centre Silicon Valley in a press release of 2/17/2011:

    Cloudcor® today announced Microsoft as headline Diamond Sponsor of Cloud Slam'11®  - a hybrid format Cloud Computing conference taking place April 18 – 22, 2011. Microsoft will speak at the opening day live event, which will be held at the Microsoft Silicon Valley Conference Center on April 18, 2011.

    The opening day event will drill into such themes as how the cloud enables businesses and the public sector to dramatically innovate through the new cloud-based technologies they can harness. Distinguished experts and local startups will also showcase business models made possible by cloud computing.

    Building on the successes of Cloudcor's sister hybrid conference UP in Nov. 2010, which attracted hundreds of in-person executives and a whopping 7,500 virtual delegates, the third-annual Cloud Slam conference is expected to attract thousands of management-level executives from public and private sectors.


    "We are truly excited to have Microsoft join us as our Diamond Sponsor and co-host of the Cloud Slam'11 Day 1 live event," said Khazret Sapenov, Chairman of Cloudcor Inc. "Microsoft is a leader with its Windows Azure platform-as-a-service offering, working with established and emerging technology companies. The company's expertise will help educate our delegates for the Day 1 event and broader 5 Day virtual proceedings."

    "We are delighted to collaborate with Cloudcor as the Diamond Partner for the Cloud Slam 2011 cloud computing conference," said Matt Thompson, General Manager of Developer & Platform Evangelism at Microsoft. "This conference is the perfect setting for us to share cloud industry best practices around the significant benefits associated with embracing the cloud. Senior-level business leaders from public and private enterprises attending the conference will learn about Microsoft's vision, strategy and direction for the cloud space in 2011 and beyond."

    How To Participate

    Delegates wishing to attend can purchase a VIP ticket (priced at $299), which also provides full access to the entire Cloud Slam'11 conference (18 – 22 April 2011) via

    About Cloudcor®

    Cloudcor provides industry leaders and professionals insights via leading edge conferences, research, analysis and commentary, providing a platform to network with leading experts in cloud computing and IT industry. Learn more about Cloudcor at

    The Windows Azure blog recommended Learn How to Manage Your Website In The Cloud with Windows Azure in A Free Webinar February 22, 2011 with SiteCore in a 2/17/2011 post:


    Given the right circumstances, website management in the cloud drastically reduces costs and time, and allows your company to devote resources to its core competencies. But is website management in the cloud a good fit for your organization? Join a free webinar on Tuesday February 22 at 8:00 am PST with web content management experts from SiteCore to learn:

    • Which types of websites and online business strategies benefit most from the cloud
    • How web management on the cloud can drastically improve IT efficiency
    • How Windows Azure and website management are converging, and how you can benefit
    • How Microsoft and Sitecore are working to move customers to a cloud computing model

    Click here to learn more and to register.

    To learn more about SiteCore's own success story using Windows Azure, please read the case study.

    Sebastian W (@qmiswax) reported on his xRMVirtual User Group session on 2/17/2011:

    Last week I had the pleasure of presenting the session “Where Microsoft Dynamics CRM 2011 meets Windows Azure” at the xRMVirtual User Group. During the session I promised that the source code of the examples would be on my blog, so there you go; today brings the first one, “twitterToCrm”. It’s a simple worker role that checks followers every 60 seconds using the Twitter API; if there is a new follower on the list, the program will create a new lead in MS CRM Online. This is a working POC; I must admit the code is far from perfect, but the point is that you can see the working concept, improve it, and make it sing and dance. All settings are in app.config, so test it before you deploy it. Have fun.

    Link to source code — please see Disclaimer

    Link to recording

    This source code is provided “as is” and without WARRANTY OF ANY KIND. The author of this program is not responsible for any data loss or any other type of damage this software may do to your systems. By using this software, you agree to use it AT YOUR OWN RISK.
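    The mechanism Sebastian describes (poll every 60 seconds, diff the follower list, create a lead for each new follower) can be sketched roughly as follows. This is a hypothetical Python sketch of the concept, not the actual C# worker-role code; `fetch_follower_ids` and `create_crm_lead` stand in for the Twitter API call and the CRM Online lead-creation call:

    ```python
    import time

    def find_new_followers(previous_ids, current_ids):
        """Return the follower ids present now but not seen on the last poll."""
        return sorted(set(current_ids) - set(previous_ids))

    def poll_loop(fetch_follower_ids, create_crm_lead, interval=60, iterations=None):
        """Every `interval` seconds, re-fetch the follower list and create a
        CRM lead for each follower that was absent on the previous poll."""
        known = set(fetch_follower_ids())  # baseline snapshot at startup
        done = 0
        while iterations is None or done < iterations:
            time.sleep(interval)
            current = set(fetch_follower_ids())
            for follower_id in find_new_followers(known, current):
                create_crm_lead(follower_id)
            known = current
            done += 1
    ```

    The set difference is the whole trick: only followers who appear between two polls trigger a lead, so restarting the loop never re-creates leads for the existing audience.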

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Matthew Weinberger reported Microsoft’s Case Against Ex-Cloud Leader Heats Up in a 2/18/2011 post to the TalkinCloud (@talkin_cloud) blog:

    There are some new developments in Microsoft’s lawsuit to keep former public cloud sales specialist Matt Miszewski (pictured) from joining When Miszewski resigned, Microsoft alleges, he took no fewer than 25,000 pages of classified sales strategies, marketing materials, and other top-secret documents with him. That’s a problem because Miszewski is seeking to hold a similar position at

    First, as always, a TalkinCloud tip of the hat to SYS-CON Media, whose Maureen O’Gara had the scoop. Apparently, last week Microsoft made their claims to the same Washington State Supreme Court that granted a restraining order against Miszewski’s assumption of the senior VP, Global Public Sector role at

    Microsoft claims Miszewski is in violation of both professional ethics codes and a non-compete agreement he signed, and Microsoft is seeking to block him from joining All that said, we’ll stick to the old “innocent until proven guilty” motto. We’ll be watching the case closely. …


    • Werner Vogels (@werner) reported that he is using a New AWS feature: Run your website from Amazon S3 in a 2/17/2011 post:

    As of a few days ago, this weblog serves 100% of its content directly out of the Amazon Simple Storage Service (S3), without the need for a web server to be involved.  Because my blog is almost completely static content, I had wanted to run in this very simple configuration since the launch of Amazon S3. It allows the blog to be powered by the incredible scale and reliability of Amazon S3 with a minimum of effort from my side. I know of several other customers who had asked for this greatly simplifying feature as well. I had held out implementing an alternative to my simple blog server, which had been running at a traditional hosting site for many years, until this preferred simple solution became available: today marks that day, and I couldn't be happier about it.

    The Amazon S3 team launched a new feature today that makes serving a complete (static) web site out of Amazon S3 dead simple: you set a default document for buckets and subdirectories, which is most likely an index.html document. This enables Amazon S3 to know what document to serve if one isn't explicitly requested: for example returns the index.html at the bucket level and the index.html from that subdirectory.  The other document you can specify is a custom error page that is presented to your customers when a 4XX-class error occurs (e.g. a non-existent page is requested), so they get something more appropriate than just the barebones response from the browser. Click on if you want to see what this blog's error page looks like. The background is courtesy of @nalden.

    All of this can be done from the AWS console as well as with the AWS SDKs.  You will also need to set access control to make sure that your content is publicly accessible. I have used a bucket policy to make all documents world readable, but you could create one that restricts it to referrers, network address range, time of day, etc. I can now turn on Amazon CloudFront, the content delivery service, with one simple click, whenever needed.
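    The world-readable bucket policy Werner mentions looks roughly like the following sketch (the bucket name here is hypothetical; the policy is the standard anonymous `s3:GetObject` grant on every object):

    ```python
    import json

    def public_read_policy(bucket_name):
        """Build an S3 bucket policy granting anonymous read (s3:GetObject)
        on every object in the bucket -- the minimum needed for website serving."""
        return json.dumps({
            "Version": "2008-10-17",
            "Statement": [{
                "Sid": "PublicReadGetObject",
                "Effect": "Allow",
                "Principal": {"AWS": "*"},
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::%s/*" % bucket_name,
            }],
        })

    print(public_read_policy("example-bucket"))
    ```

    The same Statement could be tightened with a Condition block (for example on aws:Referer or aws:SourceIp) to implement the referrer or network-address restrictions Werner alludes to.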

    There are a few small pieces of the blogging process that I still need a server for: editing postings, managing comments and serving search; all of these can easily be run out of a single Amazon EC2 micro instance.

    Amazon S3 FTW! More details about the website feature of Amazon S3 can be found here and in Jeff Barr's blog post on the AWS developer blog. [See next item.]

    Jeff Barr (@jeffbarr) explained how to Host Your Static Website on Amazon S3 in a 2/17/2011 post:

    We've added some new features to Amazon S3 to make it even better at hosting static websites.

    Customers have been hosting their images and video for their websites on Amazon S3 for a long time. However, it was not that easy to host your entire website on S3. Why? If a user enters a site address ( and the CNAME in the site's DNS record resolves to the root of an S3 bucket (, Amazon S3 would list the contents of the bucket in XML form. In order to work around this, customers would host their home page on an Amazon EC2 instance. This is no longer necessary.

    You can now host an entire website on Amazon S3.

    You can now configure and access any of your S3 buckets as a "website." When a request is made to the root of your bucket configured as a website, Amazon S3 returns a root document. Not only that, if an error occurs your users receive an HTML error document instead of an XML error message. You can also provide your own error documents for use when a 4xx-class error occurs.

    Here's more detail on the new features...

    Website Endpoints
    To access this website functionality, Amazon S3 exposes a new website endpoint for each region (US Standard, US West, EU, or Asia Pacific), each with its own endpoint hostname. Existing buckets and endpoints continue to work the same way they always have.
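
The website endpoints follow a predictable hostname pattern, sketched below. The bucket name and region label are illustrative examples; consult the Amazon S3 documentation for the actual list of per-region endpoints.

```python
def website_endpoint(bucket: str, region: str) -> str:
    """Compose the website-endpoint hostname for a bucket in a region,
    following the documented <bucket>.s3-website-<region>.amazonaws.com pattern."""
    return f"{bucket}.s3-website-{region}.amazonaws.com"

print(website_endpoint("example-blog", "ap-southeast-1"))
# example-blog.s3-website-ap-southeast-1.amazonaws.com
```

Pointing a CNAME record for your own domain at this hostname is what lets visitors reach the bucket under your site's address.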

    Root and Index Documents
    When you configure your bucket as a website, you can specify the index document you want returned for requests made to the root of your website or to any subdirectory. For example, a GET request for the root or for a subdirectory path (made either directly or via a CNAME) will return the S3 object you designated as the index document for that location.
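
As an illustration (not the service's actual implementation), the index-document rule amounts to a simple key-resolution step: a request path that names the root or ends in a slash gets the configured index suffix appended.

```python
def resolve_key(path: str, index_suffix: str = "index.html") -> str:
    """Map a website-endpoint request path to the S3 object key served."""
    key = path.lstrip("/")          # S3 object keys have no leading slash
    if key == "" or key.endswith("/"):
        key += index_suffix         # root or "subdirectory" gets the index doc
    return key

print(resolve_key("/"))                # index.html
print(resolve_key("/photos/"))         # photos/index.html
print(resolve_key("/photos/cat.jpg"))  # photos/cat.jpg
```

Requests that name a concrete object, like the last example, pass through unchanged.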

    Error Document
    When you access a website-configured bucket through the new website endpoint and an error occurs, Amazon S3 now returns an HTML error page instead of the XML error it previously returned. Also, you can now specify your own custom error page for use when a 4XX-class error occurs.
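
The index and error documents are declared in a small XML body sent with the bucket's website configuration. Below is a sketch of building that body; the element names follow the S3 REST API's WebsiteConfiguration document, while the "index.html" and "error.html" keys are placeholders.

```python
import xml.etree.ElementTree as ET

def website_configuration(index_suffix: str = "index.html",
                          error_key: str = "error.html") -> str:
    """Build the XML body for S3's PUT ?website bucket subresource."""
    root = ET.Element("WebsiteConfiguration")
    # The index document is given as a suffix applied to directory-style paths
    ET.SubElement(ET.SubElement(root, "IndexDocument"), "Suffix").text = index_suffix
    # The error document is given as a concrete object key
    ET.SubElement(ET.SubElement(root, "ErrorDocument"), "Key").text = error_key
    return ET.tostring(root, encoding="unicode")

print(website_configuration())
```

The SDKs and the AWS Management Console generate and send this document for you; building it by hand is only needed when calling the REST API directly.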

    You can use the S3 tab of the AWS Management Console to enable your bucket as a website.

    The CloudBerry S3 Explorer includes support for this cool new feature.

    The newest version of Bucket Explorer also supports website hosting.

    The AWS Java, .NET, and PHP SDKs support this new feature; for more information, consult the Amazon S3 Developer Guide. As always, we also encourage developers of libraries and tools to add support for this as well. If you are such a developer, leave me a comment or send me some email once you are ready to go.

    I'm pretty excited by this new feature and hope that you are too. I think that it will be pretty cool to see website owners simply and inexpensively gain world class performance by hosting their entire website on Amazon S3. In fact, Amazon CTO Werner Vogels is already doing this! Check out his post, New AWS feature: Run your website completely from Amazon S3, for more information.

    Lydia Leong (@cloudpundit) announced an Open invite for public cloud IaaS Magic Quadrant in a 2/17/2011 post:

    As I alluded to in some earlier posts, we are doing a mid-year Magic Quadrant for public cloud IaaS. Specifically, this is for multi-tenant, on-demand, self-provisioned, compute services (with associated storage, networking, etc.). That would be services like Amazon EC2 and Terremark Enterprise Cloud. The intended context is virtual data center services — i.e., environments in which a business can run multiple applications of their choice — as they would be bought by Gartner’s typical IT buyer clients (mid-market, enterprise, and technology companies of all sizes).

    Vendors invited to participate will see a formal research-initiation email sometime in the next week or two (or so I hope). This is just an early heads-up.

    If you are a public cloud compute IaaS provider and you didn’t participate in the last Magic Quadrant (i.e., you did not do a survey for qualification last year), and you are interested in trying to qualify this year, please feel free to get in touch with me, and I’ll discuss including you in the qualification survey round. (Anyone who got a survey last time will get one this time.) You do not need to be a Gartner client.

    Of late, I’ve seen some enthusiastic PR folks sign up executives at totally inappropriate companies to talk to me about qualifying for MQ inclusion. Please note that the MQ is for service providers, not enablers (i.e., not software or hardware companies who make stuff to build clouds with). Moreover, it is for public cloud (i.e., multi-tenant elastic services), not custom private clouds or utility hosting and certainly not colocation or data center outsourcing. And it is for the virtual data center services, the “computing” part of “cloud computing” — not cloud storage, PaaS, SaaS, or anything else.

    Krishna Sankar (@ksankar) posted Hadoop NextGen – From a Framework To a Big Data Analytics Platform on 2/15/2011 (missed when posted):

    Exciting news: Hadoop is evolving! As I was reading Arun Murthy’s blog, multiple thoughts crossed my mind on the evolution of this lovable toy animal — my first impressions …

    • From Data at Scale To Data at Scale + Complexity – connected & contextual
      • This, I think, is the essence: from a generic computation framework for scalability to a platform where we can process data at scale and with complexity – connected & contextual. For example, the Watson Jeopardy dataset [Link] [Link]
    • From (relatively) static MapReduce To a (somewhat) dynamic analytic platform
      • While we might not see a real-time Hadoop soon, the proposed ideas do make the platform more dynamic
      • The “support for alternate programming paradigms to MapReduce” by decoupling the computation framework is an excellent idea
      • I think it is still MapReduce at the core (I am not sure if it will deviate), but generic computation frameworks can choose their own patterns! I am looking forward to bioinformatics applications
      • The “support for short-lived services” is interesting. I had blogged a little about this. Looking forward to how this evolves …
      • I am hoping that it would be possible, via extensible programming models, to interface with programming systems like R.
      • Embeddable, domain-specific capabilities (for example, algorithmics specific to bioinformatics) could be interesting

    There are also a few things that might not be part of this evolution

    • From Cluster to Cloud?
      • There is a proposed keynote by Dr. Todd Papaioannou, VP of Cloud Architecture at Yahoo, titled “Hadoop and the Future of Cloud Computing”.
      • Actually, I would prefer to see “Cloud Computing in the Future of Hadoop” ;o) I blogged about this a few weeks ago … I was hoping for a project fluffy!

        We need to move from a cluster to an elastic framework (from a compute and storage perspective), especially as Hadoop moves toward an analytic platform. “The separation of management of resources in the cluster from management of the life cycle of applications and their component tasks” is a good first step; now the resources can be instantiated via different mechanisms – cloud being the premier one
    • GPU
      • In the context of my coursework at JHU (bioinformatics), I had a couple of talks with the folks working on DataScope. They plan to run Hadoop as one of the applications in their GPU cluster!
      • GPU computing is accelerating, and the capability for Hadoop to run on a GPU cluster would be interesting
    • Streamlined logging, monitoring and metering?
      • One of the challenges we are facing in our Unified Fabric Big Data project is that it is difficult to parse the logs and make inferences that help us qualify & quantify MapReduce jobs.
      • This also will help in creating an analytic platform based on the Hadoop ecosystem. Today, services like EMR most probably do second-order metering by charging for the cloud infrastructure, as they spin up separate VMs for every user (from my limited view)

    In short, exciting times are ahead for Hadoop! There is a talk tomorrow at the Bay Area HUG (Hadoop User Group) on this topic … I plan to attend, and later contribute – this is exciting; I cannot remain on the sidelines … I will blog on the points from tomorrow’s talk …

    Krishna is a co-author of Enterprise Web 2.0 Fundamentals and a Distinguished Engineer at Cisco Systems.

    Jeffrey Schwartz reported CEO Insists Rackspace Is Not for Sale in a 2/17/2011 article for his new Schwartz Cloud Report blog:


    With Verizon Communications agreeing to acquire Terremark for $1.4 billion last month and Time Warner Cable following up with a deal to buy NaviSite for $230 million, the question arises: are we going to see a wave of consolidation in the cloud computing industry? The obvious answer is: of course.

    One company that claims it's not in play is Rackspace Hosting. Rackspace would appear to be a natural acquirer or acquisition target. The company is one of the leading players in the cloud computing industry. Revenues in 2010 were up 24 percent, topping $780 million, the company announced last week. At that run rate, Rackspace's revenues this year will fall just shy of $1 billion.

    CEO Lanham Napier insists Rackspace is not for sale. His game plan is to grow Rackspace into a "giant" organization. "We are a committed long term player to this market," Napier said during the company's earnings call last week. "There is going to be a Web giant that emerges from this technology shift and we want to be that giant."

    He put it more bluntly in an interview with Forbes: "Our company is not for sale, and we want to whup these guys," he said, acknowledging it would be his fiduciary obligation to shareholders to consider any kind of substantial bid. Considering we are in the midst of a cloud computing gold rush, there are some deep pocketed companies that could make Napier an offer hard to refuse. Naturally any company is for sale at the right price.

    Committed to hitting that $1 billion revenue milestone in this, its 13th year, Napier believes the $2 billion goal is in sight along with growing from 130,000 customers to 1 million. "Today we believe it is very much within our reach and that we have a solid plan to get there," he said during the earnings call.

    And that plan does not call for buying any rivals to grow share, he added. "We will continue to be an organic based growth company," he said. "We will do acquisitions like we just announced [such as the deal to buy Cloudkick in December] but these are acquisitions about building capability, not trying to buy revenue or scale."

    Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine. 1105 Media also publishes the site that hosts the Schwartz Cloud Report blog.

    Clint Finley reported Video: How SimpleGeo Built a Scalable Geospatial Database with Apache Cassandra on 2/17/2011:

    SimpleGeo provides a platform for developers to build location-aware applications, and it has the distinction of being our Most Promising Company For 2011.

    Spatial data is multidimensional, so SimpleGeo had to build its own indexing scheme for Apache Cassandra to handle its data. In this presentation, Mike Malone, an infrastructure engineer at SimpleGeo, explains how and why the company did it.
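
SimpleGeo's exact scheme is described in the talk, not reproduced here, but the core trick of linearizing two dimensions into a single sortable key is exactly what a geohash does: nearby points share key prefixes, so a range scan over ordered row keys becomes a proximity query. A minimal encoder of the standard geohash algorithm (the function name and default precision are illustrative):

```python
# Geohash base32 alphabet (omits a, i, l, o to avoid ambiguity)
_BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash(lat: float, lon: float, precision: int = 11) -> str:
    """Interleave longitude/latitude bisection bits and base32-encode them."""
    lat_rng, lon_rng = [-90.0, 90.0], [-180.0, 180.0]
    chars, bit, ch, even = [], 0, 0, True
    while len(chars) < precision:
        # Even bits refine longitude, odd bits refine latitude
        rng, val = (lon_rng, lon) if even else (lat_rng, lat)
        mid = (rng[0] + rng[1]) / 2
        if val >= mid:
            ch = ch * 2 + 1
            rng[0] = mid
        else:
            ch = ch * 2
            rng[1] = mid
        even, bit = not even, bit + 1
        if bit == 5:                  # 5 bits per base32 character
            chars.append(_BASE32[ch])
            bit, ch = 0, 0
    return "".join(chars)

print(geohash(57.64911, 10.40744))  # u4pruydqqvj (the classic test vector)
```

Truncating the hash widens the bounding box, which is what makes prefix queries over an ordered store useful for "what's near me" lookups.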

    Malone also shared a slide deck explaining all of this last year:

    Scaling GIS Data in Non-relational Data Stores

    View more presentations from Mike Malone.

    If you want even more information on different types of consistency in NoSQL databases, check out this presentation from MongoDB's Roger Bodamer.

    Michelle Price doesn’t mention Microsoft or Windows Azure in her Pinning Down the Cloud survey piece of 2/15/2011 for the Wall Street Journal:

    It's been likened to the Industrial Revolution in terms of its potential to change lives. But just what is cloud computing and how can companies turn it to their advantage?

    The world of information technology is stuffed with bewildering acronyms and new-fangled fads, but cloud computing is one technology phenomenon not to be dismissed.


    Likened by some technophiles to the Industrial Revolution, cloud computing is already transforming the world around us and it promises to shape our future world too. Despite its growing importance, however, many companies are struggling to pin down exactly how this technological miracle can truly benefit their balance sheets.

    The practice of cloud computing is something with which consumers all over the world are already relatively familiar, even if the term itself leaves the lay technophobe scratching their head. Sending an email using a third-party Web-based email service provider, such as Google Mail, for example, is a basic form of cloud computing.

    In this example, a user accesses a Web-based application maintained by a third party via his or her Internet connection and browser. The email application itself, and all the data it stores, exists not on the user's own computer but is delivered as a service via "the cloud" of the Internet. "Cloud computing represents a paradigm shift in how IT infrastructure and software are delivered and consumed," says Christian Klezl, vice president and cloud leader for International Business Machines Corp. in Northeast Europe.

    This shift is most pronounced when viewed within the context of corporate IT. Companies have traditionally purchased or developed software application products and maintained that software themselves. Most large companies, for example, operate and maintain their own corporate email systems.


    The cloud-computing model represents the "natural evolution" from this proprietary approach to software provision, says Mr. Klezl, by enabling IT products to be consumed as services. These services are provided by a third party over the Internet and can be consumed on-demand and on a pay-as-you-go basis.

    "Cloud computing is built on the concept that you have a distributed spectrum of users who can source data or services from a centralized pool of resources, at any time in any place, when they need it," he says.

    Power provision is a commonly used analogy. Most homes and companies do not build and run their own onsite power generation but instead source electricity from the grid when they need it. This model allows consumers and companies with varying power requirements to scale their power consumption up and down at their convenience, and to pay only for what they use.

    Cloud-computing advocates envisage the IT phenomenon in the same terms: "With cloud computing we're seeing a shift from an IT product-led world into an IT service or utility world," says Simon Wardley, a researcher at CSC's Leading Edge Forum, a global research and advisory program for chief information officers.

    The building blocks of cloud computing have existed for nearly a decade, but the IT model has only gained major traction in recent years as its enabling technologies, such as broadband, have developed, and as business processes and attitudes have matured.

    Together, these developments have conspired to promote cloud computing as a major force in the global IT market, attracting users ranging from NATO to manufacturing giants such as Siemens AG. By 2015 some 50% of Global 1000 enterprises are expected to use cloud computing for their top 10 revenue-generating processes, according to Gartner Inc.

    For such institutions, cloud services offer many benefits, not the least of which is cheaper IT. "Cloud gives you economies of scale and allows companies to establish a closer link between what they use and what they pay," says Mr. Wardley.

    By allowing companies to mobilize IT resources quickly, cloud computing also improves business agility. "From an institutional standpoint, the benefits of cloud computing are concrete," says Alan Goldstein, chief information officer for BNY Mellon Asset Management. "You're able to more rapidly deploy infrastructure and applications and to scale-up horizontally. That ability to be able to rapidly provision is really meaningful in terms of expediting speed to market."

    Because cloud-computing is a Web-enabled phenomenon, the model also allows companies to access their IT services and the data stored in it from anywhere in the world. "The key for us is to be able to run our business and access our business data anywhere in the world," says Dominic Shine, chief information officer for Reed Exhibitions, a conference company that uses around 10 different types of cloud-based services.

    But while improved cost-efficiency and greater business agility are attractive, what really excites cloud enthusiasts are the macro-economic possibilities.

    Many cloud evangelists believe that the phenomenon enables companies to boost overall productivity by allowing them to satisfy what Mr. Wardley describes as the "long tail" of unmet demand for IT resources found within most firms. This has led some experts to liken cloud computing to the Industrial Revolution.

    Far-fetched though this may sound, research published by the London-based Centre for Economics and Business Research in December seems partly to reinforce this view. It predicts that the increased productivity, job creation, business development and competitive advantage brought about by cloud computing will generate an additional €763 billion ($1.04 trillion) in economic value and will create some 2.4 million jobs in Europe during the next five years.

    But not everyone will benefit equally. Cloud-computing—like the Internet it is enabled by—is a disruptive technology. By making IT cheap and accessible, cloud services threaten to lower the barriers to entry in a number of industries and in some instances may undermine the prevailing operating model.

    "One of the unique things about cloud computing is that it's a very democratic technology," says George Hu, executive vice president for platform and marketing at, the 10-year-old enterprise cloud-computing company that is widely regarded as a poster child of the phenomenon. "It's the first technology that can service companies of all sizes."

    Reed Exhibitions' Mr. Shine agrees: "Years ago, if you were a small company, you had access to a small number of vendors because that's all you could afford. But today, small companies can access the powerful IT of their larger rivals. If I were setting up a start-up, I would run the whole thing from the cloud."

    Barriers are also set to be pulled down in the consumer world where cloud computing promises to make software increasingly accessible and easy to use. Take Google Inc., another icon of the cloud-computing age. In December, the IT services giant launched a prototype laptop that requires no software to be installed whatsoever: All applications and data are stored and delivered by Google via the Internet.

    The Google prototype provides a glimpse of a future in which users are not required to pay up-front for software, or negotiate painful software installations, upgrades and anti-virus programs. Instead, all applications and data will be stored in the cloud, which consumers will be able to access from any location.

    Mobile technologies are rapidly making this a reality. The emergence of the super-functional smartphone and tablets, such as the iPhone and iPad, are bringing cloud-based applications, such as the hyper-successful social networking application Facebook and the micro-blogging application Twitter, onto mobile devices—along with a wide range of other applications. By moving onto mobile devices, applications guarantee their own ubiquity and relevance.

    In time, more and more cloud-led IT services will follow suit, including business applications, says Mr. Hu. "The Facebook model is now the new model for the future: Why can't business applications be like Facebook? That is, inherently social, in the cloud, and accessed more and more through mobile devices."

    Cloud computing is still an evolving IT model, however, and some IT chiefs believe it will take time for all companies, particularly highly regulated industries such as financial services, to embrace it.

    "The cloud will be important in people's IT strategy: The prize is pretty considerable," says Michael Fahy, global head of IT infrastructure at investment bank Nomura. "But the commercial model is not yet sufficiently developed to operate on the scale we want to operate on, and there are still questions around data security."

    The European Union shares these concerns; last year it recommended the creation of standards and a regulatory framework for cloud computing. But this is unlikely to impede the technology's advance, believes Mr. Wardley: "Do you have a choice when it comes to the advance of cloud computing? Not really."

    Ms. Price is the Trading and Technology Editor at Financial News in London.

    <Return to section navigation list>