Sunday, March 13, 2011

Windows Azure and Cloud Computing Posts for 3/11/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


•• Updated 3/13/2011 with additional articles by John Gilham, David Linthicum, Lydia Leong and Ernest Mueller marked ••.

• Updated 3/12/2011 with additional articles by Robin Shahan, Andy Cross, Steve Marx, Glenn Gailey, Bill Zack, Steve Yi, Greg Shields, Lucas Roh, Adron Hall, Jason Kincaid and Microsoft Corporate Citizenship marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single post; the links will then take you to the article you want.

Azure Blob, Drive, Table and Queue Services

Dan Liu listed .NET Types supported by Windows Azure Table Storage Domain Service in a 3/11/2011 post:

Kyle [McClellan] has several blog posts [from November 2010 and earlier] explaining how to use Windows Azure Table Storage (WATS) Domain Service, the support of which comes from WCF RIA Services Toolkit.

Below is a list of types that are currently working with WATS Domain Service.


Jerry Huang described A Quick and Easy Backup Solution for Windows Azure Storage in a 3/9/2011 post to his Gladinet blog:

Windows Azure Storage is getting more popular, and more and more Gladinet customers are using it now.

One of the primary use cases for Windows Azure Storage is online backup. It is pretty cool to have your local documents and folders backed up to Windows Azure, inside data centers run by Microsoft, a strong brand name that you can trust with your important data.

This article will walk you through the new Gladinet Cloud Backup 3.0, a simple yet complete backup solution for your Windows PCs, servers and workstations.

Step 1 – Download and Install Gladinet Cloud Backup


Step 2 – Add Windows Azure as Backup Destination


From the drop down list, choose Windows Azure Blob Storage


You can get the Azure Blob Storage credentials from the Azure web site.



Step 3 – Manage Backup Tasks from Management Console


There are two modes of backup. You can use the one that works the best for you.

  1. Mirrored Backup – the local folder will be copied to Azure Storage. Once it is done, the copy on the Azure storage is exactly the same as your local folder’s content.
  2. Snapshot Backup – A snapshot of the local folder will be taken first and then the snapshot will be saved to Azure Storage.

If you need to back up SQL Server and other types of applications that support the Volume Shadow Copy Service, you can use the snapshot backup to get it done.

Related Posts

<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi posted SQL Azure Pricing Explained on 3/11/2011:

We created a quick 10 minute video that provides an overview of the pricing meters and billing associated with using SQL Azure. The pricing model for SQL Azure is one of the most straightforward in the industry - it's basically just a function of database size and amount of data going in and out of our cloud datacenters.

What's covered:

  • the business benefits of utilizing our cloud database
  • a comparison and overview of the different editions of SQL Azure database
  • understanding the pricing meters
  • examples of pricing
  • a walk-through of viewing your bill

You can also view it in full-screen mode by going here and clicking on the full-screen icon in the lower right when you play the video.

If you haven't already, for a limited time, new customers to SQL Azure get a 1GB Web Edition database for no charge and no commitment for 90 days. You also get free usage of Windows Azure and AppFabric services through June 30. This is a great way to evaluate SQL Azure and the Windows Azure platform without any risk. Details on the offer are here. There are also several different offers available on the Windows Azure offer page.

… We have some great information lined up for next week.  I'm a huge fan of the Discovery Channel, and Shark Week is an event I annually mark on my calendar.  Imitation is the sincerest form of flattery, so next week will be "Migration Week" here on the blog, where we'll have some materials and guidance on migrating from Access and on-premises databases to SQL Azure and why it's important. … [Emphasis added.]

<Return to section navigation list> 

MarketPlace DataMarket and OData

• Glenn Gailey (@gailey777) answered When Should I Version My OData Service? in a 3/12/2011 post:

While researching an answer [to] a forum post on versioning, I was digging into one of the v1 product specs and found a couple of very useful logical flowcharts that describe how to handle data service versioning scenarios. Sadly, this information has yet to make it into the documentation. I will do my best in this post to distill the essence of these rather complex flowcharts into some useful rules of thumb for when to version your data service. (Then I will probably also put this information in the topic Working with Multiple Versions of WCF Data Services.)

Data Model Changes that Recommend a New Data Service Version

The kinds of changes that require you to create a new version of a data service can be divided into the following two categories:

  • Changes to the service contract, which includes updates to service operations or changes to the accessibility of entity sets (feeds)
  • Changes to the data contract, which includes changes to the data model, feed formats, or feed customizations

The following table details for which kinds of changes you should consider publishing a new version of the data service:


* You can set the IgnoreMissingProperties property to true to have the client ignore any new properties sent by the data service that are not defined on the client. However, when inserts are made, the properties not sent by the client (in the POST) are set to their default values. For updates, any existing data in a property unknown to the client may be overwritten with default values. In this case, it is safest to use a MERGE request (the default). For more information, see Managing the Data Service Context (WCF Data Services).
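As a hedged illustration of the footnote above (the context class name and service URI are hypothetical; `IgnoreMissingProperties` is the WCF Data Services client property being described):

```csharp
using System;
using System.Data.Services.Client;

// NorthwindEntities stands in for a context class generated by
// Add Service Reference; the URI below is a placeholder.
var context = new NorthwindEntities(
    new Uri("https://example.com/v1/Northwind.svc"));

// Ignore properties the service returns that the client type
// does not define, instead of throwing on materialization.
context.IgnoreMissingProperties = true;
```

With this set, prefer MERGE-style updates (the client default) so that server-side properties unknown to the client are not overwritten with defaults.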

How to Version a Data Service

When required, a new OData service version is defined by creating a new instance of the service with an updated service contract or data contract (data model). This new service is then exposed by using a new URI endpoint, which differentiates it from the previous version.
For example:

Note that Netflix has already prepared for versioning by adding a v1 segment in the endpoint URI of their OData service:

When you upgrade your data service, your clients will also need to be updated with the new data service metadata and the new root URI. The good thing about creating a new, parallel data service is that it enables clients to continue to access the old data service (assuming your data source can continue to support both versions). You should remove the old version when it is no longer needed.
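To make the side-by-side idea concrete (these URIs are hypothetical, following the v1 segment convention mentioned above):

```text
https://services.example.com/v1/Catalog.svc   existing clients keep using this endpoint
https://services.example.com/v2/Catalog.svc   updated clients target the new contract here
```

Both endpoints can run against the same data source until the old version is retired.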

The Microsoft Office 2010 Team published Connecting PowerPivot to Different Data Sources in Excel 2010 to the MSDN Library in 3/2011:

  • Summary:   Learn how to import data from different sources by using Microsoft PowerPivot for Excel 2010.
  • Applies to:   Microsoft Excel 2010 | Microsoft PowerPivot for Excel 2010
  • Published:   February 2011
  • Provided by:   Steve Hansen, Microsoft Visual Studio MVP and founder of Grid Logic.


The PowerPivot add-in for Microsoft Excel 2010 can import data from many data sources, including relational databases, multidimensional sources, data feeds, and text files. The real power of this broad data-source support is that you can import data from multiple data sources in PowerPivot, and then either combine the data sources or create ad hoc relationships between them. This enables you to perform an analysis on the data as if it were from a unified source.

Download PowerPivot for Excel 2010

Code It

This article discusses three of the data sources that you can use with PowerPivot:

  • Microsoft Access
  • Microsoft Azure Marketplace DataMarket
  • Text files

See the Read It section in this article for a comprehensive list of data sources that PowerPivot supports.

Microsoft Access

Microsoft Access 2010 is a common source of data from analytical applications. By using PowerPivot, you can easily connect to an Access database and retrieve the information that you want to analyze.

To use Microsoft Access with PowerPivot
  1. Open a new workbook in Excel and then click the PowerPivot tab.

  2. Click the PowerPivot Window button on the ribbon.

  3. In the Get External Data group, click From Database, and then select From Access.

  4. Click the Browse button, select the appropriate Access database, and then click Next.

    At this point, you have two options.

    • You can import data from a table that is in the database or from a query that is in the database. If you do not want all of the records from a table, you can apply a filter to limit the records that are returned.
    • You can import the data by using an SQL query. Unlike the first option, where you can return data based on a query that is defined in the database, you use the second option to specify an SQL query in PowerPivot that is executed against the database to retrieve the records.

    The next steps show how to retrieve data from tables and queries in the database.

  5. Select the option, Select from a list of tables…, and then click Next.

  6. Select the tables that you want to import.

  7. By default, all of the columns and records in the selected tables are returned. To filter the records or specify a subset of the columns, click Preview & Filter.

  8. To automatically include tables that are related to the selected tables, click Select Related Tables.

  9. Click Finish to load the data into PowerPivot.


Importing data from a Microsoft SQL Server database requires a connection to a SQL Server instance, but is otherwise identical to importing data from a Microsoft Access database.

Microsoft Azure Marketplace DataMarket

The Microsoft Azure Marketplace DataMarket offers a large number of datasets on a wide range of subjects.


To take advantage of the Azure Marketplace DataMarket, you must have a Windows Live account to subscribe to the datasets.

To use Microsoft Azure Marketplace DataMarket data in PowerPivot
  1. In PowerPivot, in the Get External Data group, select From Azure Datamarket.


    If you do not see the option, you have an earlier version of PowerPivot and should download the latest version of PowerPivot.

    Figure 1. PowerPivot, Connect to an Azure DataMarket Dataset

  2. Click View available Azure DataMarket datasets to browse the available datasets.

  3. After you locate a dataset, click the Subscribe button in the top-right part of the page. Clicking the button subscribes you to the dataset and enables you to access the data in it. After you subscribe, go back to the dataset home page.

  4. Click the Details tab. Partway down the page, locate the URL under Service root URL and copy it into the PowerPivot import wizard. This URL is the dataset URL that PowerPivot uses to retrieve the data.

    Figure 2. Azure Marketplace DataMarket, Service Root URL

  5. To locate the account key that is associated with your account, click Find in the Table Import Wizard to open a page that displays the key. (You might have to log in first to view the key.)

  6. After you locate the key, copy and paste it into the import wizard and then click Next to display a list of available tables. From here, you can modify the tables and their corresponding columns to select which data to import.

  7. Click Finish to import the data.


The data from the Azure DataMarket is exposed by using Windows Communication Foundation (WCF) Data Services (formerly ADO.NET Data Services). If you select the From Data Feeds option when you retrieve external data, you can connect to other WCF Data Services data sources that are not exposed through the Azure DataMarket.

Text File

PowerPivot lets users import text files with fields that are delimited by using a comma, tab, semicolon, space, colon, or vertical bar.
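As a small illustration (the rows are made-up sample data), a semicolon-delimited file of the kind PowerPivot can import might look like:

```text
Name;Region;Sales
Contoso;West;1200
Fabrikam;East;950
```

The first row supplies column headings, which matches the "Use first row as column headers" option in the steps below.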

To load data from a text file
  1. In the Get External Data group in the PowerPivot window, select From Text.

  2. Click Browse, and then navigate to the text file that you want to import.

  3. If the file includes column headings in the first row, select Use first row as column headers.

  4. Clear any columns that you do not want to import.

  5. To filter the data, click the drop-down arrows in the field(s) that you want to use as a filter and select the field values to include.

  6. Click Finish to import the data.

Read It

One of the compelling features of PowerPivot is that it enables you to combine data from multiple sources and then use the resulting dataset as if the data were unified. For example, suppose that you manage a portfolio of office buildings. Suppose further that you have a SQL Server database that contains generic information about the buildings in your portfolio, an Analysis Services cube that contains financial information such as operating expenses for the buildings, and access to crime information via the Azure MarketPlace DataMarket. By using PowerPivot, you can combine all of this data in PowerPivot, create ad hoc relationships between the sources, and then analyze it all in Excel as if it were a single data source.

You can use PowerPivot to leverage data from the following supported data sources in your next analysis project:

  • Microsoft SQL Server
  • Microsoft SQL Azure
  • Microsoft SQL Server Parallel Data Warehouse
  • Microsoft Access
  • Oracle
  • Teradata
  • Sybase
  • Informix
  • IBM DB2
  • Microsoft Analysis Services
  • Microsoft Reporting Services
  • Data Feeds (WCF Data Services, formerly ADO.NET Data Services)
  • Excel workbook
  • Text file


This is not an exhaustive list; there is also an Others option that you can use to create a connection to data sources via an OLE DB or ODBC provider. This option alone significantly increases the number of data sources that you can connect to.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

John Gilham described Single Sign-on between Windows Azure and Office 365 services with Microsoft Online Federation and Active Directory Synchronization in a 3/13/2011 post to the AgileIT blog:

A lot of people don’t realize there are 2 very interesting features in Office 365 which make connecting the dots between your on-premise environment and Windows Azure easy. The 2 features are directory sync and federation. They mean you can use your AD account to access local apps in your on-premise environment, just like you always have. You can also use the same user account and login process to access Office 365 up in the cloud, and you could use either federation or a domain-joined application running in Azure to also use the same AD account and achieve single sign-on.


Extending the model to the cloud

Windows Azure Connect (soon to be released to CTP) allows you to not only create virtual private networks between machines in your on-premise environment and instances you have running in Windows Azure, but also to domain-join those instances to your local Active Directory. In that case, the model I described above works exactly the same, as long as Windows Azure Connect is configured in a way that allows the client computer to communicate with the web server (which is hosted as a domain-joined machine in the Windows Azure data centre). The diagram would look like this, and you can follow the numbered points using the list above:


Diagram 2: Extending AD in to Windows Azure member servers

Office 365

Office 365 uses federation to “extend” AD in to the Office 365 Data Centre. If you know nothing of federation, I’d recommend you read my federation primer to get a feel for it.

The default way that Office 365 runs is to use identities that are created by the service administrator through the MS Online Portal. These identities are stored in a directory service that is used by SharePoint, Exchange and Lync. They have names of the form:

However if you own your own domain name you can configure it in to the service, and this might give you:

…which is a lot more friendly. The thing about MSOLIDs that are created by the service administrator is that they store a password in the directory service. That’s how you get into the service.

Directory Synchronization

However, you can set up a service to automatically create the MSOLIDs in the directory service for you, so MSOLIDs corresponding to the accounts in your Active Directory domain are generated automatically. The password is not copied from AD. Passwords are still mastered out of the MSOLID directory.


Diagram 3: Directory Sync with on-premise AD and Office 365

The first thing that needs to happen is that user entries made in the on-premise AD need to have a corresponding entry made in the directory that Office 365 uses to give users access. These IDs are known as Microsoft Online IDs, or MSOLIDs. This is achieved through directory synchronization. Whether directory sync is configured or not, the MS Online Directory Service (MSODS) is still the place where passwords and password policy are managed. MS Online Directory Sync needs to be installed on-premise.

When a user uses Exchange Online, SharePoint Online or Lync, the identities come from MSODS and authentication is performed by the identity platform. The only thing Directory Sync really does in this instance is ease the burden on the administrator, who would otherwise use the portal to manually create each and every MSOLID.

One of the important fields that is synchronised from AD to MSODS is the user’s AD ObjectGUID. This is a unique, immutable identifier that we’ll come back to later. It’s rename-safe, so although the username, UPN, First Name, Last Name and other fields may change, the ObjectGUID will never change. You’ll see why this is important.
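A hedged sketch of reading that immutable identifier from on-premise AD with System.DirectoryServices (the LDAP distinguished name below is a placeholder, not a value from the article):

```csharp
using System;
using System.DirectoryServices;

// Bind to a user object; the path is a placeholder for illustration.
var user = new DirectoryEntry(
    "LDAP://CN=Jane Doe,OU=Users,DC=contoso,DC=com");

// objectGUID is stored as a 16-byte value. It stays constant even if
// the username, UPN or display name are later changed.
byte[] raw = (byte[])user.Properties["objectGUID"].Value;
Console.WriteLine(new Guid(raw));
```

This stable GUID is what lets the sync process correlate an AD user with its MSOLID across renames.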


Read the complete article at Single-sign-on between on-premise apps, Windows Azure apps and Office 365 services - Plankytronixx - Site Home - MSDN Blogs

Ron Jacobs (@ronljacobs) announced the availability of the AppFabric WCF Service Template (C#) in a 3/11/2011 post:

Now available: download the AppFabric WCF Service Template (C#).

Windows Communication Foundation (WCF) is Microsoft’s unified programming model for building service-oriented applications. Windows Server AppFabric provides tools for managing and monitoring your web services and workflows.

The AppFabric WCF Service template brings these two products together providing the following features:

  • Monitor the calls to your service across multiple servers with AppFabric Monitoring
  • Create custom Event Tracing for Windows (ETW) events that will be logged by AppFabric Monitoring

To build and test an AppFabric WCF Service you will need the following:

  1. Visual Studio 2010 / .NET Framework 4
  2. IIS 7
  3. Windows Server AppFabric
  • Add a new project using the ASP.NET Empty Web Application Template.
  • Add a new item to your project using the AppFabric WCF Service template named SampleService.svc
  • Open the SampleService.svc.cs file and replace the SayHello method with the following code:
public string SayHello(string name)
{
    // Output a warning if name is empty
    if (string.IsNullOrWhiteSpace(name))
    {
        // The template generates an ETW event provider for custom events;
        // the member name used here is assumed for illustration
        eventProvider.WriteWarningEvent("SayHello",
            "Warning - name is empty");
    }

    // Log an informational event for each call
    eventProvider.WriteInformationEvent("SayHello",
        "Saying Hello to user {0}", name);

    return "Hello " + name;
}
Enable Monitoring
  • Open web.config
  • Enable the EndToEndMonitoring Tracking profile for your web application
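A sketch of what that web.config change can look like (element and attribute names follow Windows Server AppFabric's monitoring configuration section; the connection-string name is an assumption based on the AppFabric defaults):

```xml
<microsoft.applicationServer>
  <monitoring>
    <!-- Raise the monitoring level so per-call events reach the dashboard -->
    <default enabled="true"
             connectionStringName="ApplicationServerMonitoringConnectionString"
             monitoringLevel="EndToEndMonitoring" />
  </monitoring>
</microsoft.applicationServer>
```

You can also make this change from the IIS Manager AppFabric configuration UI rather than editing the file by hand.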
Verify with Development Server
  • In the Solution Explorer window, right click on the SampleService.svc file and select View in Browser.
  • The ASP.NET Development Server will start and the SampleService.svc file will load in the browser.
  • After the browser opens, select the URL in the address box and copy it (CTRL+C).
  • Open the WCF Test Client utility.
  • Add the service using the endpoint you copied from the browser.
  • Double click the SayHello operation, enter your name in the name parameter and click Invoke.
  • Verify that your service works.
Verify with IIS

To see the events in Windows Server AppFabric you need to deploy the Web project to IIS or modify your project to host the solution in the local IIS Server. For this example you will modify the project to host with the local IIS server.

Run Visual Studio as Administrator


If you are not running Visual Studio as Administrator, exit and restart Visual Studio as Administrator and reload your project.  For more information see Using Visual Studio with IIS 7.

  • Right click on the Web Application project you’ve recently created and select properties
  • Go to the Web tab
  • Check Use Local IIS Web Server and click Create Virtual Directory
  • Save your project settings (Debugging will not save them)
  • In the Solution Explorer window, right click on the SampleService.svc file and select View in Browser. The address should now be that of the IIS (“http://localhost/applicationName/”) and not of the ASP.NET Development Server (“http://localhost:port/”).
  • After the browser opens, select the URL in the address box and copy it (CTRL+C).
  • Open the WCF Test Client utility.
  • Add the service using the endpoint you copied from the browser.
  • Double click the SayHello operation, enter your name in the name parameter and click Invoke.
  • Verify that your service works.
  • Leave the WCF Test Client open (you will need to use it in the next step).
Verify Monitoring
  • Open IIS Manager (from command line: %systemroot%\system32\inetsrv\InetMgr.exe)
  • Navigate to your web application (in the Connections pane, open ComputerName > Sites > Default Web Site > ApplicationName)
  • Double Click on the AppFabric Dashboard to open it
  • Look at the WCF Call History; you should see some successful calls to your service.


  • Switch back to the WCF Test Client utility. If you’ve closed the utility, repeat steps 5-8 in the previous section, Verify with IIS.
  • Double click the SayHello operation, enter your name in the name parameter and click Invoke.
  • Change the name to an empty string, or select null from the combo box and invoke the service again. This will generate a warning event.
  • To see the monitoring for this activity switch back to the IIS Manager and refresh the AppFabric Dashboard.
  • Click on the link in the WCF Call History for SampleService.svc and you will see events for the completed calls. In this level you can see calls made to get the service’s metadata (Get calls) and calls for the service operations (SayHello calls).
  • To see specific events, right click on an entry for the SayHello operation and select View All Related Events.
  • In the list of related events you will see the user defined event named SayHello. The payload of this event contains the message logged by the operation.


Itai Raz continued his AppFabric series with Introduction to Windows Azure AppFabric blog posts series – Part 4: Building Composite Applications of 3/10/2011:

In the previous posts in this series we covered the challenges that Windows Azure AppFabric is trying to solve, and started discussing the Middleware Services in this post regarding Service Bus and Access Control, and this post regarding Caching. In the current post we will discuss how AppFabric addresses the challenges developers and IT Pros face when Building Composite Applications and how the Composite App service plays a key role in that.

Building Composite Applications

As noted in the first blog post, multi-tier applications, and applications that consist of multiple components and services which also integrate with other external systems, are difficult to deploy, manage and monitor. You are required to deploy, configure, manage and monitor each part of the application individually, and you lack the ability to treat your application as a single logical entity.

Here is how Windows Azure AppFabric addresses these challenges.

The Composition Model

To get the ability to automatically deploy and configure your composite applications, and later get the ability to manage and monitor your application as a single logical entity, you first need to define which components and services make up your composite application, and what the relationships between them are. This is done using the Composition Model, which is a set of extensions to the .NET Framework.

Tooling Support

You can choose to define your application model in code, but when using Visual Studio to develop your application, you also get visual design time capabilities. In Visual Studio you can drag-and-drop the different components that make up your application, define the relationships between the components, and configure the components as well as the relationships between them.

The image below shows an example of what the design time experience of defining your application model within Visual Studio looks like:

In addition to the development tools experience, you also get runtime tooling support through the AppFabric Portal. Through the portal you get capabilities to make runtime configuration changes, as well as get monitoring capabilities and reporting, which are discussed in the Composite App service section below.

Composite App service

By defining your application model, you are now able to get a lot of added value capabilities when deploying and running your application.

You can automatically deploy your end-to-end application to Windows Azure AppFabric from within Visual Studio, or you can create an application package that can be uploaded to the Windows Azure AppFabric Portal.

No matter how you choose to deploy your application, the Composite App service takes care of automatically provisioning, deploying and configuring all of the different components, so you can just start running your application. This reduces a lot of complexity and manual steps required from developers and IT Pros today.

In addition, the service enables you to define component-level as well as end-to-end, application-level performance requirements, monitoring, and reports. You are also able to more easily troubleshoot and optimize the application as a whole.

Another important capability of the Composite App service is to enable you to run Windows Communication Foundation (WCF) as well as Windows Workflow Foundation (WF) services on Windows Azure AppFabric.

These are two very important technologies that you should use when building service oriented and composite applications. The Composite App service enables you to use these technologies as part of your cloud application, and to use them as components that are part of your composite application.

So, as we showed in this post, Windows Azure AppFabric makes it a lot easier for you to develop, deploy, run, manage, and monitor your composite applications. The image below illustrates how you are able to include the different AppFabric Middleware Services, as well as data stores and other applications, including applications that reside on-premises, as part of your composite application:

A first Community Technology Preview (CTP) of the features discussed in this post will be released in a few months, so you will be able to start testing them soon. [Emphasis added.]

To learn more about the capabilities of AppFabric that enable Building Composite Applications, please watch the following video: Composing Applications with AppFabric Services | Speaker: Karandeep Anand.

Also make sure to familiarize yourself with Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF), these technologies already provide great capabilities for building service oriented and composite applications on-premises today, and will be available soon on Windows Azure AppFabric in the cloud.

As a reminder, you can start using our CTP services in our LABS/Preview environment. Just sign up and get started.

Other places to learn more on Windows Azure AppFabric are:

Be sure to start enjoying the benefits of Windows Azure AppFabric with our free trial offer. Just click on the image below and start using it today!

<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

• Steve Marx (@smarx) explained Using the Windows Azure CDN for Your Web Application in a 3/11/2011 post:

image The Windows Azure Content Delivery Network has been available for more than a year now as a way to cache and deliver content from Windows Azure blob storage with low latency around the globe. Earlier this week, we announced support in the CDN for caching web content from any Windows Azure application, which means you can get these same benefits for your web application.

To try out this new functionality, I built an application last night: it fetches web pages and renders them as thumbnail images. (Yes, most of this code is from, which I’ve blogged about previously.) This application is a perfect candidate for caching:

  1. The content is expensive to produce. Fetching a web page and rendering it can take a few seconds each time.
  2. The content can’t be precomputed. The number of web pages is practically infinite, and there’s no way to predict which ones will be requested (and when).
  3. The content changes relatively slowly. Most web pages don’t change second by second, so it’s okay to serve up an old thumbnail. (In my app, I chose five minutes as the threshold for how old was acceptable.)

I could have cached the data at the application level, with IIS’s output caching capabilities or Windows Azure AppFabric Caching, but caching with the CDN means the data is cached and served from dozens of locations around the globe. This means lower latency for users of the application. (It also means that many requests won’t even hit my web servers, which means I need fewer instances of my web role.)

The Result

You can try a side-by-side comparison with your own URLs at, but just by viewing this blog post, you’ve tested the application. The image below comes from

thumbnail of

If you see the current time in the overlay, it means the image was just generated when you viewed the page. If you see an older time, up to five minutes, that means you’re viewing a cached image from the Windows Azure CDN. You can try reloading the page to see what happens, or go play with

Setting up a CDN Endpoint

Adding a Windows Azure CDN endpoint to your application takes just a few clicks in the Windows Azure portal. I’ve created a screencast to show you exactly how to do this (also at

The Code

To take advantage of the Windows Azure CDN, your application needs to do two things:

  1. Serve the appropriate content under the /cdn path, because that’s what the CDN endpoint maps to. (e.g., maps to
  2. Send the content with the correct cache control headers. (e.g., Images from my application are served with a Cache-Control header value of public, max-age=300, which allows caching for up to five minutes.)

To meet the first requirement, I used routing in my ASP.NET MVC 3 application to map /cdn URLs to the desired controller and action:

    // Map /cdn URLs to the thumbnail controller; the route name and URL
    // pattern are assumed here, as only the defaults object appeared in
    // the original snippet
    routes.MapRoute(
        "CDN",
        "cdn/{*path}",
        new { controller = "Webpage", action = "Fetch" }
    );

To meet the second requirement, the correct headers are set in the controller:

public ActionResult Fetch()
{
    // These two Response.Cache lines set the Cache-Control header
    // (public, max-age=300) on the response
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
    return new FileStreamResult(GetThumbnail(Request.QueryString["url"]), "image/png");
}

The two Response.Cache lines are what set the Cache-Control header on the response. You can also use web.config to set cache control headers on static content.
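For static files, the web.config route mentioned above can be sketched with IIS's clientCache element (the five-minute max-age mirrors the controller's value; apply it to a location that serves only cacheable content):

```xml
<system.webServer>
  <staticContent>
    <!-- Emit Cache-Control: max-age=300 on static responses -->
    <clientCache cacheControlMode="UseMaxAge"
                 cacheControlMaxAge="00:05:00" />
  </staticContent>
</system.webServer>
```

This keeps static assets eligible for the same CDN caching without any controller code.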

[UPDATE 3:37pm] David Aiken pointed out that there’s a better way to get this header emitted in ASP.NET MVC 3. This code seems to be approximately equivalent to what I wrote originally. Thanks, David!

[OutputCache(Duration=300, Location=OutputCacheLocation.Any)]
public ActionResult Fetch()
{
    return new FileStreamResult(GetThumbnail(Request.QueryString["url"]), "image/png");
}

You can download the full source code here: Note that I have not included CutyCapt.exe, which is required for creating the thumbnails. You can get CutyCapt here. Also note that my CDN URL is hardcoded in this solution, so be sure to change that if you’re building your own application.

Side Note

This application uses ASP.NET MVC 3. Having already made use of several techniques for installing ASP.NET MVC 3 in Windows Azure, I decided it was time for a new one. This code includes curl and uses it to download the ASP.NET MVC 3 installer before running it.

Here’s the side-by-side view with the latest page from my Access In Depth blog:


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Robin Shahan (@robindotnet) reported that IIS Failed Request Logs haven’t been fixed in her Azure SDK 1.4 and IIS Logging post of 3/11/2011:

image I was excited about the release of Azure SDK 1.4. With 1.3, I had to put in a workaround to ensure that the IIS logs were transferred to blob storage correctly (as they were in version 1.2). In the release notes for Azure SDK 1.4, this is included in the list of changes:

Resolved an IIS log file permission Issue which caused diagnostics to be unable to transfer IIS logs to Windows Azure storage.

If true, this means I can take out the startup task I put in (explained in this article) to work around the problem.

Having learned never to assume, I decided to check out the new version and test two of my Azure instances – one that hosts a WCF service and one that hosts a web application.

I installed SDK 1.4, published the two cloud projects, and opened the URLs in the web browser. After navigating around a bit to create IIS logging, I put in some invalid links to generate Failed Request Logging. Then I used the Cerebrata Azure Diagnostics Manager to check it out.

The good news is that the IIS logging has been fixed. It transfers to blob storage as it did in Tools/SDK 1.2.

The bad news is that the IIS Failed Request logs have NOT been fixed. I RDP’d into the instance, and had to change the ownership on the directory for the failed request logs to see if it was even writing them, and it was not. So there’s no change from SDK 1.3.

If you want to use SDK 1.4 and have IIS Failed Request logs created and transferred to blob storage correctly, you can follow the instructions from this article on setting up a startup task and remove the code from the PowerShell script that applies to the IIS logs, leaving behind just the code for the IIS Failed Request logs. It will look like this:

echo "Output from Powershell script to set permissions for IIS logging."

Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime

# wait until the azure assembly is available
while (!$?)
{
    echo "Failed, retrying after five seconds..."
    sleep 5
    Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
}

echo "Added WA snapin."

# get the DiagnosticStore folder and the root path for it
$localresource = Get-LocalResource "DiagnosticStore"
$folder = $localresource.RootPath

echo "DiagnosticStore path"

# set the acl's on the FailedReqLogFiles folder to allow full access by anybody.
# can do a little trial & error to change this if you want to.

$acl = Get-Acl $folder

$rule1 = New-Object System.Security.AccessControl.FileSystemAccessRule(
  "Administrators", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$rule2 = New-Object System.Security.AccessControl.FileSystemAccessRule(
  "Everyone", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")

# apply the rules to the ACL, then write it back to the folder
$acl.AddAccessRule($rule1)
$acl.AddAccessRule($rule2)

Set-Acl $folder $acl

# You need to create a directory named "Web" under the FailedReqLogFiles folder
#   and immediately put a dummy file in there.
# Otherwise, MonAgentHost.exe will delete the empty Web folder that was created during app startup
# or later when IIS tries to write the failed request log, it gives an error on directory not found.
# Same applies for the regular IIS log files.

mkdir $folder\FailedReqLogFiles\Web
"placeholder" >$folder\FailedReqLogFiles\Web\placeholder.txt

echo "Done changing ACLs." 

This will work just fine until they fix the problem with the IIS Failed Request logs, hopefully in some future release.

Andy Cross (@andybareweb) described a Workaround: WCF Trace logging in Windows Azure SDK 1.4 in a 3/12/2011 post:

This post shows a workaround to the known issue in Windows Azure SDK 1.4 that prevents the capture of WCF svclog traces by Windows Azure Diagnostics. The solution is an evolution of RobinDotNet’s work on correcting IIS logging, and a minor change to the workaround I produced for Windows Azure SDK v1.3.

When using WCF trace logging, certain problems can be encountered. The underlying error revolves around file permissions on the log files, which prevent the Windows Azure Diagnostics Agent from accessing the log files and transferring them to Windows Azure blob storage. These permission problems were evident in SDK v1.3 and are still present in SDK v1.4. The particular problem I am focussing on now is getting access to the WCF trace logs.

This manifests itself in a malfunctioning Windows Azure Diagnostics setup – log files may be created but they are not transferred to Blob Storage, meaning they become difficult to get hold of, especially in situations where multiple instances are in use.

The workaround is achieved by adding a Startup task to the WCF Role that you wish to collect service-level tracing for. This Startup task then sets ACL permissions on the folder that the logs will be written to. In SDK version 1.3 we also created a null (zero-byte) file with the exact filename that the WCF log would assume. In version 1.4 of the SDK, this is not only unnecessary but actually prevents the logs from being copied to blob storage.

The Startup task should have a command line that executes a PowerShell script. This allows much freedom of implementation, as PowerShell is a very rich scripting language. The Startup line should read like:

powershell -ExecutionPolicy Unrestricted .\FixDiagFolderAccess.ps1>>C:\output.txt
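For reference, this command line runs as a Startup task declared in ServiceDefinition.csdef. A minimal sketch, assuming the PowerShell line above is wrapped in a small FixDiagFolderAccess.cmd batch file (wrapper name and role name are assumptions, not from the original post):

```xml
<ServiceDefinition name="WCFService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WcfRole">
    <Startup>
      <!-- Runs elevated so the script can change ACLs on the diagnostics folders -->
      <Task commandLine="FixDiagFolderAccess.cmd" executionContext="elevated" taskType="simple" />
    </Startup>
  </WebRole>
</ServiceDefinition>
```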

The main work, then, is done by the file FixDiagFolderAccess.ps1. I will run through that script now – it is included in full with this post:

echo "Windows Azure SDK v1.4 Powershell script to correct WCF Trace logging errors. Andy Cross @andybareweb 2011"
echo "Thank you RobinDotNet, smarx et al"

Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime

# wait until the azure assembly is available
while (!$?)
{
    echo "Failed, retrying after five seconds..."
    sleep 5
    Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
}

echo "Added WA snapin."

This section of code sets up the Microsoft.WindowsAzure.ServiceRuntime snap-in, a set of useful cmdlets that give us access to running instances and information about them. We will use this to get the path of a “LocalResource” – the writable file location inside an Azure instance that will be used to store the svclog files.

# get the WcfRole.svclog folder and the root path for it
$localresource = Get-LocalResource "WcfRole.svclog"
$folder = $localresource.RootPath

echo "DiagnosticStore path"

# set the acl's on the FailedReqLogFiles folder to allow full access by anybody.
# can do a little trial & error to change this if you want to.

$acl = Get-Acl $folder

$rule1 = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "Administrators", "FullControl", "ContainerInherit, ObjectInherit",
    "None", "Allow")
$rule2 = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "Everyone", "FullControl", "ContainerInherit, ObjectInherit",
    "None", "Allow")

# apply the rules to the ACL, then write it back to the folder
$acl.AddAccessRule($rule1)
$acl.AddAccessRule($rule2)

Set-Acl $folder $acl

At this point we have just set the ACL for the folder that SVCLogs will go to. In the previous version of the Windows Azure SDK, one also needed to create a zero-byte file in this location, but this is no longer necessary. Indeed doing so causes problems itself.

You can find an example in the following blog; full source for SDK version 1.4 is here: WCFBasic v1.4

A full listing follows:

echo "Windows Azure SDK v1.4 Powershell script to correct WCF Trace logging errors. Andy Cross @andybareweb 2011"
echo "Thank you RobinDotNet, smarx et al"

Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime

# wait until the azure assembly is available
while (!$?)
{
    echo "Failed, retrying after five seconds..."
    sleep 5
    Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
}

echo "Added WA snapin."

# get the WcfRole.svclog folder and the root path for it
$localresource = Get-LocalResource "WcfRole.svclog"
$folder = $localresource.RootPath

echo "DiagnosticStore path"

# set the acl's on the FailedReqLogFiles folder to allow full access by anybody.
# can do a little trial & error to change this if you want to.

$acl = Get-Acl $folder

$rule1 = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "Administrators", "FullControl", "ContainerInherit, ObjectInherit",
    "None", "Allow")
$rule2 = New-Object System.Security.AccessControl.FileSystemAccessRule(
    "Everyone", "FullControl", "ContainerInherit, ObjectInherit",
    "None", "Allow")

# apply the rules to the ACL, then write it back to the folder
$acl.AddAccessRule($rule1)
$acl.AddAccessRule($rule2)

Set-Acl $folder $acl

The Microsoft Corporate Citizenship site posted Microsoft Disaster Response: Community Involvement: 2011 Japan Earthquake on 3/11/2011:

On March 11, 2011 at 14:46 (local time), a magnitude 8.9 earthquake struck 81 miles (130km) east of Sendai, the capital of Miyagi prefecture (Japan), followed by a 13 foot tsunami. It is with great concern we are seeing the images from Japan. The scene of the devastation is quite amazing. It will be a while for all of us to get a full sense of the disaster and its impact. Microsoft has activated its disaster response protocol to monitor the situation in Japan and other areas on tsunami warning alert, and offer support as appropriate. We are taking a number of steps, including ensuring the safety of our employees and their families and assessing all of our facilities for any impact.

Microsoft is putting in place a range of services and resources to support relief efforts in Japan including:

  • Reaching out to customers, local government, inter-government and non-government agencies to support relief efforts.
  • Working with customers and partners to conduct impact assessments.
  • Providing customers and partners impacted by the earthquake with free incident support to help get their operations back up and running.
  • Offering free temporary software licenses to all impacted customers and partners as well as lead governments, non-profit partners and institutions involved in disaster response efforts.
  • Making Exchange Online available at no cost for 90 days to business customers in Japan whose communications and collaboration infrastructure may be affected. We hope this will help them resume operations more quickly while their existing systems return to normal.
  • Making a cloud-based disaster response communications portal, based on Windows Azure, available to governments and nonprofits to enable them to communicate between agencies and directly with citizens. [Emphasis added.]

The Windows Azure Team posted Real World Windows Azure: Interview with Glen Knowles, Cofounder of Kelly Street Digital on 3/11/2011:

As part of the Real World Windows Azure series, we talked to Glen Knowles, Cofounder of Kelly Street Digital, about using the Windows Azure platform to deliver the Campaign Taxi platform in the cloud. Here's what he had to say:

MSDN: Tell us about Kelly Street Digital and the services you offer.

Knowles: Kelly Street Digital is a self-funded startup company with eight employees, six of whom are developers. We created Campaign Taxi, an application available by subscription that helps customers track consumer interactions across multiple marketing campaigns. Designed for advertising and marketing agencies, it helps customers set up digital campaigns, add functionality to their websites, store consumer information in a single database, and present the data in reports.

MSDN: What was the biggest challenge Kelly Street Digital faced prior to implementing Campaign Taxi on the Windows Azure platform?

Knowles: The Campaign Taxi beta application resided on Amazon Web Services for seven months. We had to hire consultants to manage the cloud environment and the database servers. Not only was it expensive to hire consultants, but it was unreliable because they sometimes had conflicting priorities and they lived in different time zones. In December 2009, a few days before the Christmas holiday, the instance of Campaign Taxi in the Amazon cloud stopped running. Our consultant was on vacation in Paris, France. I couldn't call Amazon, and they provided no support options. The best we could do was post on the developer forum. When you rely on the developer community for support, you can't rely on them at Christmas time because they're on holiday. We needed a more reliable cloud solution.

MSDN: Describe the solution you built with the Windows Azure platform?

Knowles: We did a two-week pilot program with Campaign Taxi on the Windows Azure platform. The first thing we noticed was that the response time of the application was significantly faster than it had been with Amazon Web Services. It took one developer three weeks to migrate the application to Windows Azure. We sent our lead developer to a four-hour training through Microsoft BizSpark, and then he quickly wrote a script that ported the application's relational database to SQL Azure. The migration from Microsoft SQL Server to Microsoft SQL Azure was quite straightforward because they're very similar. We use Blob storage to import consumer data, temporarily store customers' uploaded data files, and store backup instances of SQL Azure.

The Campaign Taxi application aggregates consumer interactions during marketing campaigns.

MSDN: What benefits have you seen since implementing the Windows Azure platform?

Knowles: We paid U.S.$4,970 each month for Amazon Web Services; the cost of subscribing to the Windows Azure platform is only 16 percent of that ($795 a month) for the same configuration. I can use the annual cost savings to pay a developer's salary. Also, Campaign Taxi runs much faster on the Windows Azure platform. For that kind of improved latency, we would have paid a premium. Windows Azure is an unbelievable product. I'm an evangelist for it in my network of startups. We've chosen this cloud platform and we're sticking with it.

Steve Plank (@plankytronixx, pictured below) posted London scrubs up to a high gloss finish with Windows Azure and Windows Phone 7 on 3/11/2011:

The Mayor of London is an entertaining fellow. A regular on the TV news quiz “Have I Got News For You” (the clip is very funny…).

Here he presents a Windows Phone 7 app using Windows Azure at the back end which allows Londoners to report graffiti, rubbish dumping and so on. Of course the local council get a photo of the offending eyesore, plus its location.

Windows Azure and Windows Phone 7 app to clean up London

It’s a great example of that H+S+S (Hardware + Software + Services) caper that keeps raising its head. What I like about this is that it’s pretty difficult to forecast the likely success (as someone once said – “forecasting is a very difficult job. Especially if it’s about the future…”) of this. Will Londoners completely ignore it, or will they take to it like ducks to water? It’s one of those 4 key Windows Azure workloads.

Whether it’s a storming success with tens of thousands of graffiti reports per hour coming in from all corners of London, or just 2 or 3 reports – the local council can scale this to give whatever citizens use it a great experience. In that sense, I think it means government departments and local authorities can take chances on things they’d have never considered before because the cost of building an infrastructure for an idea that may never be successful is now so low.

Wade Wegner (@WadeWegner) posted Cloud Cover Episode 39 - Dynamically Deploying Websites in a Web Role on 3/11/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show @CloudCoverShow.

In this episode, Steve and Wade are joined by Nate Totten—a developer from Thuzi—as they look at a solution for dynamically deploying websites in a Windows Azure Web Role. This is a slick solution that you can use to zip up your website and drop it into storage, and a service running in your Web Role then picks it up and programmatically sets up the website in IIS—all in less than 30 seconds.

Reviewing the news, Wade and Steve cover:

Multitenant Windows Azure Web Roles with Live Deployments
Windows Azure Cloud Package Download and sample site
MultiTenantWebRole NuGet Package

Andy Cross (@andybareweb) described Implementing Azure Diagnostics with SDK V1.4 in a 3/11/2011 post:

The Windows Azure Diagnostics infrastructure has a good set of options available to activate in order to diagnose potential problems in your Azure implementation. Once you are familiar with the basics of how Diagnostics work in Windows Azure, you may wish to move on to configuring these options and gaining further insight into your application’s performance.

This post is an update to the previous post Implementing Azure Diagnostics with SDK v1.3. If you are interested in upgrading from 1.3 to 1.4, it should be noted that there are NO BREAKING CHANGES between the two.

Here is a cheat-sheet table that I have built up of the ways to enable the Azure Diagnostics using SDK 1.4. This cheat-sheet assumes that you have already built up a DiagnosticMonitorConfiguration instance named “config”, with code such as the below. This code may be placed somewhere like the “WebRole.cs” “OnStart” method. 

            string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));

            RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
            DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

Note that although roleInstanceDiagnosticManager has not yet been used, it will be later.

Alternatively, if you wish to configure Windows Azure Diagnostics at the start and then modify its configuration later in the execution lifecycle without having to repeat yourself, you can use “RoleInstanceDiagnosticManager.GetCurrentConfiguration()”. It should be noted that if you use this approach in the OnStart method, your IIS logs will not be configured, so you should use the previous option. 

                string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
                CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));                

                RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
                DiagnosticMonitorConfiguration config = roleInstanceDiagnosticManager.GetCurrentConfiguration();

The difference between the two approaches is the final line: the instantiation of the variable config.

If you debug both, you’ll find that config.Directories.DataSources has 3 items in its collection for the first set of code, and only 1 for the second. In brief this means that the first can support crashdumps, IIS logs and IIS failed requests, whereas the second can only support crashdumps. This difference is a useful indication of what this config.Directories.DataSources collection is responsible for – it is a list of paths (and other metadata) that Windows Azure Diagnostics will transfer to Blob Storage.

Furthermore, once you have made changes to the initial set of config data (choosing either of the above techniques), it is best practice to use the second approach, otherwise you will always overwrite any changes that you have already made.

Windows Azure logs (stored as Table data):

            config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);
            config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Undefined;

IIS 7.0 logs (stored as Blobs): collected by default; simply ensure directories are transferred:

            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

Windows Diagnostic infrastructure logs (stored as Table data):

            config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Warning;
            config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

Failed Request logs (stored as Blobs): add this to Web.config and ensure directories are transferred:

            <tracing>
              <traceFailedRequests>
                <add path="*">
                  <traceAreas>
                    <add provider="ASP" verbosity="Verbose" />
                    <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices" verbosity="Verbose" />
                    <add provider="ISAPI Extension" verbosity="Verbose" />
                    <add provider="WWW Server" areas="Authentication,Security,Filter,StaticFile,CGI,Compression,Cache,RequestNotifications,Module" verbosity="Verbose" />
                  </traceAreas>
                  <failureDefinitions statusCodes="400-599" />
                </add>
              </traceFailedRequests>
            </tracing>

            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

Windows Event logs (stored as Table data):

            config.WindowsEventLog.DataSources.Add("System!*");
            config.WindowsEventLog.DataSources.Add("Application!*");
            config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);

Performance counters (stored as Table data):

            PerformanceCounterConfiguration procTimeConfig = new PerformanceCounterConfiguration();
            procTimeConfig.CounterSpecifier = @"\Processor(*)\% Processor Time";
            procTimeConfig.SampleRate = System.TimeSpan.FromSeconds(1.0);
            config.PerformanceCounters.DataSources.Add(procTimeConfig); // register the counter with the configuration

Crash dumps (stored as Blobs):

            CrashDumps.EnableCollection(true);

Custom error logs (stored as Blobs): define a local resource, then use that resource as a path that is copied to a specified blob container:

            LocalResource localResource = RoleEnvironment.GetLocalResource("LogPath");

            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

            DirectoryConfiguration directoryConfiguration = new DirectoryConfiguration();
            directoryConfiguration.Container = "wad-custom-log-container";
            directoryConfiguration.DirectoryQuotaInMB = localResource.MaximumSizeInMegabytes;
            directoryConfiguration.Path = localResource.RootPath;
            config.Directories.DataSources.Add(directoryConfiguration); // register the directory with the configuration
After you have done this, remember to set the configuration back for use, otherwise all of your hard work will be for nothing!

            roleInstanceDiagnosticManager.SetCurrentConfiguration(config);


This completes the setup of Windows Azure Diagnostics for your role. If you are using a VM role or are interested in another approach, you can read this blog on how to use diagnostics.wadcfg.
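As a rough illustration of that declarative alternative, a diagnostics.wadcfg file looks along these lines (quota and transfer values here are assumptions, not from the original post):

```xml
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
                                configurationChangePollInterval="PT1M"
                                overallQuotaInMB="4096">
  <!-- Transfer basic Windows Azure logs to storage every minute -->
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Undefined" />
</DiagnosticMonitorConfiguration>
```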

Avkash Chauhan explained Handling Error while upgrading ASP.NET WebRole from .net 3.5 to .net 4.0 - The configuration section 'system.web.extensions' cannot be read because it is missing a section declaration in a 3/10/2011 post:

After you have upgraded your Windows Azure application which includes an ASP.NET WebRole from .NET 3.5 to .NET 4.0 (with Windows Azure SDK 1.2/1.3/1.4), it is possible that your role will be stuck in a Busy or Starting state.

If you are using Windows Azure SDK 1.3 or later, you have the ability to access your Windows Azure VM over RDP. Once you RDP to your VM and launch the internal HTTP endpoint in a web browser, you could see the following error:

Server Error
Internet Information Services 7.0

Error Summary 

HTTP Error 500.19 - Internal Server Error
The requested page cannot be accessed because the related configuration data for the page is invalid.
Detailed Error Information 
Module IIS Web Core
Notification Unknown
Handler Not yet determined
Error Code 0x80070032
Config Error The configuration section 'system.web.extensions' cannot be read because it is missing a section declaration 
Config File \\?\E:\approot\web.config
Requested URL http://<Internel_IP>:<Internal_port>/
Physical Path 
Logon Method Not yet determined
Logon User Not yet determined
Failed Request Tracing Log Directory C:\Resources\directory\<Deployment_Guid>.Website.DiagnosticStore\FailedReqLogFiles
Config Source 
277:      </system.web>  
278:      <system.web.extensions>  
279:            <scripting>

Web.config will have the system.web.extensions section as below:

    <system.web.extensions>
        <scripting>
            <scriptResourceHandler enableCompression="true" enableCaching="true"/>
        </scripting>
    </system.web.extensions>

You can also verify that ApplicationHost.config has extensions defined as below:


<add name="ScriptModule-4.0" type="System.Web.Handlers.ScriptModule, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" preCondition="managedHandler,runtimeVersionv4.0" />

To solve this problem you will need to add the following in <configSections> section in your web.config:

<sectionGroup name="system.web.extensions" type="System.Web.Configuration.SystemWebExtensionsSectionGroup, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
    <sectionGroup name="scripting" type="System.Web.Configuration.ScriptingSectionGroup, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
        <section name="scriptResourceHandler" type="System.Web.Configuration.ScriptingScriptResourceHandlerSection, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/>
        <sectionGroup name="webServices" type="System.Web.Configuration.ScriptingWebServicesSectionGroup, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
            <section name="jsonSerialization" type="System.Web.Configuration.ScriptingJsonSerializationSection, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="Everywhere"/>
            <section name="profileService" type="System.Web.Configuration.ScriptingProfileServiceSection, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/>
            <section name="authenticationService" type="System.Web.Configuration.ScriptingAuthenticationServiceSection, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/>
            <section name="roleService" type="System.Web.Configuration.ScriptingRoleServiceSection, System.Web.Extensions, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" requirePermission="false" allowDefinition="MachineToApplication"/>
        </sectionGroup>
    </sectionGroup>
</sectionGroup>

PRNewswire reported Matco Tools implementing the Virtual Inventory Cloud (VIC) solution from GCommerce, which is built on the Microsoft Azure platform in a 3/9/2011 press release:

A major market leader in the tools & equipment aftermarket is taking a new cloud-based solution to market. Matco Tools, a manufacturer and distributor of professional automotive equipment, is implementing the Virtual Inventory Cloud (VIC) solution from GCommerce, which is built on the Microsoft Azure platform. [Emphasis added.]

“Special Orders continue to increase due to the continuous expansion of part numbers, and the industry cannot afford to continue throwing people at a broken process,” said Steve Smith, President & CEO of GCommerce. “Our partnership with Matco Tools will lead to further improvements in VIC for the entire automotive aftermarket industry and others like it.”

VIC™ enables automation of the special/drop-ship ordering process between distributors/retailers and manufacturers/suppliers, allowing both parties to capture sales opportunities and streamline a slow manual process. GCommerce has emerged as the leading provider of a B2B solution for the automotive aftermarket by providing Software as a Service (SaaS) using elements of cloud-based technologies to automate procurement and purchasing for national retailers, wholesalers, program groups and their suppliers.

Windows Azure and Microsoft SQL Azure enable VIC to be a more efficient trading system capable of handling tens of millions of transactions per month for automotive part suppliers. GCommerce’s VIC implementation with support from Microsoft allows suppliers to create a large-scale virtual data warehouse that eliminates manual processes and leverages a technology-based automation system that empowers people. This platform is designed to work with existing special order solutions and technologies, and has potential to transform distribution supply chain transactions and management across numerous industries.

“VIC is a top line and bottom line solution for Matco and its suppliers, enabling both increased revenue opportunities and process cost savings. It’s all about speed and access to information,” said Mike Smith, VP Materials, Engineering and Quality of Matco Tools. “There is a growing concern around drop-ship special orders, which is why we are approaching the issue with a solution that fits the industry at large, as well as Matco individually.”

Currently, it can be time-consuming and inefficient for retailers and distributors when tracking down a product for a customer. With VIC, information required to quote a special order can be captured in seconds vs. the minutes, hours, and sometimes days delay of decentralized manual methods.

About GCommerce
GCommerce, founded in 2000, is a leading provider of Software-as-a-Service (SaaS) technology solutions designed to streamline distribution supply chain operations. Their connectivity solutions facilitate real-time, effective information sharing between incompatible business systems and technologies, enabling firms to improve revenue, operational efficiencies, and profitability. For more information, contact GCommerce at (515) 288-5850 or info(at)gcommerceinc(dot)com.

About Matco Tools
Matco Tools is a manufacturer and distributor of quality professional automotive equipment, tools and toolboxes. Its product line now numbers more than 13,000 items. Matco also guarantees and services the equipment it sells. Matco Tools is a subsidiary of Danaher Corporation, a Fortune 500 company and key player in several industries, including tools, environmental and industrial process and control markets.

<Return to section navigation list> 

Visual Studio LightSwitch

Michael Otey asserted “Microsoft's new development environment connects to SQL Server and more” in a deck for his FAQs about Visual Studio LightSwitch article of 3/10/2011 for SQL Server Magazine:

Providing a lightweight, easy-to-access application development platform is a challenge. Microsoft had the answer at one time with Visual Basic (VB). However, as VB morphed from VB 6 to VB.NET, the simplicity and ease of development was lost. Visual Studio (VS) LightSwitch is Microsoft’s attempt to create an easy-to-use development environment that lets non-developers create data-driven applications. Whether it succeeds is still up in the air. Here are answers to frequently asked questions about LightSwitch.

1. What sort of projects does LightSwitch create?

LightSwitch lets you create Windows applications (two-tiered or three-tiered) and browser-based Silverlight applications. Two-tiered applications are Windows Communication Foundation (WCF) applications that run on the Windows desktop. Three-tiered applications are WCF applications that connect to IIS on the back end. LightSwitch desktop applications can access local system resources. LightSwitch web applications connect to IIS and can’t access desktop resources.

2. I heard Silverlight was dead. Why does LightSwitch generate Silverlight apps?

Like Mark Twain’s famous quote, the rumors of Silverlight’s death are greatly exaggerated. People talk about Silverlight being replaced by HTML 5, but Silverlight is meant to work with HTML and is designed to do the things that HTML doesn’t do well. Silverlight will continue to be a core development technology for the web, Windows, and Windows Phone.

3. How is LightSwitch different from WebMatrix or Visual Web Developer Express?

All three can be used to develop web applications. However, WebMatrix and Visual Web Developer Express are code-centric tools that help code web applications written in VB or C#.  LightSwitch is an application generator that lets you build working applications with no coding required.

4. Can LightSwitch work with other databases besides SQL Server?

The version of LightSwitch that’s currently in beta connects to SQL Server 2005 and later and to SQL Azure. However, LightSwitch applications at their core are .NET applications, and the released version of LightSwitch should be able to connect to other databases where there’s a .NET data provider.

5. Is it true that LightSwitch doesn’t have a UI designer?

Yes. Microsoft development has taken a misguided (in my opinion) route, thinking templates are an adequate substitute for screen design. LightSwitch doesn’t have a visual screen designer or control toolbox. Instead, when you create a new window, which LightSwitch calls Screens, you select from several predefined templates. It does offer a Customize Screen button that lets you perform basic tasks like renaming and reordering items on the screen.

6. Can you modify applications created by LightSwitch?

Yes. LightSwitch generates standard VS web projects that you can modify. Open the LightSwitch .lsproj project file or .sln solution file in one of the other editions of VS, then you can modify the project.

7. Will VS LightSwitch be free?

Microsoft hasn’t announced the pricing for LightSwitch; however, it’s not likely to be free. That said, as with other members of the VS family, the applications that you create with LightSwitch will be freely distributable, with no runtime charges or licensing required to execute them.

8. Where can I get LightSwitch?

LightSwitch is currently in beta. To find out more about LightSwitch and download the free beta go to the Visual Studio LightSwitch website.

I agree that lack of a UI designer for LightSwitch is misguided, but I doubt if it will gain one at this point. According to Microsoft’s “Soma” Somasegar, Beta 2 of LightSwitch is due “in the coming weeks.”

<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

•• David Linthicum asked Does SOA Solve Integration? in a 3/13/2011 post to ebizQ’s Where SOA Meets cloud blog:

Loraine Lawson wrote a compelling blog post: "Did SOA Deliver on Integration Promises?" Great question.

"So did SOA solve integration? No. But then again, no one ever promised you that. As Neil observes, we'll probably never see a 'turnkey enterprise integration solution,' but that's probably a good thing - after all, organizations have different needs, and such a solution would require an Orwellian-level of standardization."

The fact of the matter is that SOA and integration are two different, but interrelated concepts. SOA is a way of doing architecture, where integration may be a result of that architecture. SOA does not set out to do integration, but it may be a byproduct of SOA. Confused yet?

Truth be told, integration is a deliberate approach, not a byproduct. Thus, you need to have an integration strategy and architecture that's a part of your SOA, and not just a desired outcome. Treat it as a byproduct and you'll never get there, trust me.

The issue is that there are two architectural patterns at play here.

The first is the objective to externalize both behavior and data as sets of services that can be configured and reconfigured into solutions. That's at the heart of SOA, and the integration typically occurs within the composite applications and processes that are created from the services.

The second is the objective to replicate information from source to target systems, making sure that information is shared between disparate applications or complete systems, and that the semantics are managed. This is the objective of integration, and it was at the heart of the architectural pattern of EAI.

Clearly, integration is a deliberate action and thus has to be dealt with within architecture, including SOA. Thus, SOA won't solve your integration problems; you have to address those directly.

Ernest Mueller (@ernestmueller) reported a DevOps at CloudCamp at SXSWi! meetup in a 3/12/2011 post:

Isn’t that some 3l33t jargon?  Anyway, Dave Nielsen is holding a CloudCamp here in Austin during SXSW Interactive on Monday, March 14 (followed by the Cloudy Awards on March 15), and John Willis of Opscode and I reached out to him, and he has generously offered to let us have a DevOps meetup in conjunction.

John’s putting together the details but is also traveling, so this is going to be kind of emergent.  If you’re in Austin (normally or just for SXSW) and are interested in cloud, DevOps, etc., come on out to the CloudCamp (it’s free and no SXSW badge required) and also participate in the DevOps meetup!

In fact, if anyone’s interested in doing short presentations or show-and-tells or whatnot, please ping John (@botchagalupe) or me (@ernestmueller)!

• Bill Zack added another positive review of David Pallmann’s first volume in a Mapping your Move to the Windows Azure Platform post of 3/12/2011:

The Windows Azure Handbook, Volume 1 by David Pallmann has just been published. It is the most up-to-date work on Windows Azure that I have seen so far. It is available in print or you can get it as an eBook at

The Windows Azure Handbook

Volume 1 is on Planning and Strategy. Subsequent volumes will cover Architecture, Development and Management.

This is an excellent book for anyone evaluating and planning for Windows Azure. It also provides excellent comparisons of several other public cloud services, including Amazon Web Services and Google App Engine.

One of its strengths is detailed documentation of a complete methodology for evaluating the technical and business reasons for determining which of your applications should be moved to the cloud first.

Tad Anderson published a Book Review: Microsoft Azure Enterprise Application Development by Richard J. Dudley and Nathan Duchene on 3/11/2011, calling the title a “Nice concise overview of Microsoft Azure:”

I have not had any clients express interest in the cloud yet. They are not willing to hear about giving up control over their environments, which is a stigma the cloud conversation carries with it. I still wanted to know what is going on with Azure without having to spend 2 months mulling through a tome. This book was the perfect size and depth for getting me up to speed quickly.
The book is intended to give you enough information to decide whether or not further movement toward the cloud is something you want to do, and it does that perfectly.

The book starts off with an overview of cloud computing, an introduction to Azure, and covers setting up a development environment. After that the rest of the book designs and builds a sample application which is used to introduce the key components of Microsoft Azure.

The book has a chapter on each of the following topics: Azure Blob Storage, Azure Table Storage, Queue Storage, Web Role, Web Services and Azure, Worker Roles, Local Application for Updates, Azure AppFabric, Azure Monitoring and Diagnostics, and Deploying to Windows Azure.

Most chapters introduce the topic and then show a working example. The chapters that only describe their topic do so in enough detail that you come away with a good understanding, and they provide good references if you want to dig deeper.

The book did a really good job covering the different types of services and different types of storage available. It also did a great job of describing the differences between SQL Server and SQL Azure.

All in all I thought this book did exactly what it set out to do. It provided me with enough information that I now feel like I know what Microsoft Azure is all about.
I highly recommend this book to anyone who wants an introduction to the Microsoft Azure platform.

Steve Plank (@plankytronixx) recommended Don’t Kill the Messenger: Explaining to the Board how Planning for the Cloud will Help the Business in a detailed 3/11/2011 post:

What we need, in our dual roles as both technologists and business analysts, is a sort of kit bag of tools we can use to explain in simple, low-jargon, understandable language to, say, the CIO or other C-level executive of a business, how a move to the cloud will benefit the organisation.

It’s a problem more of communication than anything. They’ve all seen the latest, greatest things come and go like fashion. There’s no doubt about the debate and hubbub around cloud technologies, but it’s pretty confusing.

It’s like when you take a family holiday. You get back after 2 weeks in the sun and perhaps buy a newspaper at the airport and you realise “whoa – a lot of stuff has been going on”. You come straight in to the middle of 1 or 2 big news stories and because you weren’t there at the start, it’s confusing.

That’s what I mean when I say the cloud is confusing to non-technology audiences. There’s a lot of buzz about it in business circles and even in the mainstream media. But they are coming in to the middle of the story and it’s like a news story. It’s an evolving feast. There is no definitive story. It’s not like looking up Newton’s laws of physics or Darwin’s theories of evolution. In every business-focussed cloud presentation I’ve ever seen, there’s an attempt at a definitive explanation of what the cloud is. They try to distil it into one or two sentences. These things often end up looking like company mission statements. I’ve seen some of these mission statements that are so far up their own backsides they can see daylight at the other end. “Cloud Computing is the Egalitarianism of a Scalable IT Infrastructure”. Well, yes it is, but it’s more than just that – and does the language help somebody who has come along to find out what it really means? Hmmm – I’m doubtful. Perhaps the eyes of the CIO who has just had his budgets slashed glazed over at this point. Perhaps the CEO has no idea what a “Scalable IT Infrastructure” is or even why his car-manufacturing business needs one.

I personally think the notion of trying to reduce a description of cloud computing to the most compact statement you can create just leads to more misunderstanding.

However, when you talk to technologists like us – we can all identify with the 4 key workloads that are very suitable to cloud computing.


As technologists we can all immediately relate to these diagrams of load/time. These are not the only uses for the cloud, but they are the key uses. However, the C-level audience rapidly starts to lose interest. They have to have the stories associated with these diagrams, before they get to see the diagrams. Showing the diagrams after we’ve told the stories gives us credibility because it proves we’ve done our homework.

For example take “Predictable Bursting”. In my opinion, we avoid that language until the story has unfolded. We can say something like “I know as an online retailer, you run ad-hoc campaigns to promote certain products or services. I know some of these campaigns are massively successful and others don’t quite hit the mark. I know you have difficulty predicting which ones will be successful – you’d never run a campaign in the knowledge it would be unsuccessful. But in order to give your customers a better experience than your competitors do, it’s important that the service is there, doesn’t break down, is fast and doesn’t give an inconsistent experience. So you have to build the campaign’s IT requirements around the fact that it will be a massive run-away success”.

It’s only after we’ve articulated stories like these, in every-day language, that we can then show some slightly technical diagrams (which are also business diagrams, but they only become business diagrams after business stories have been articulated first). The technical diagrams give us credibility and put meat on the bones of the stories.

The cloud can often look simply like some new form of outsourcing. It’s not. All customers are treated the same. The contracts are, largely, non-negotiable. With Windows Azure, if you as MegaCorp buy a service, you get the same SLA as me. You’ll probably be getting a discounted price because at the Enterprise level this can be done by a licensing agreement, but the service you get will be the same as the service I get. It’s the consistency of this approach which makes the cloud so economical. A traditional outsourcer would be wanting to sell additional services – people, processes, data-centres, operations and so on. With every contract being utterly unique, the costs inevitably go up. Cloud operators sell cloud services.

So much is being said about the financial benefits of the cloud. Certainly moving from a Capex to a (much reduced) Opex model would grab the attention of any bean counter. But I believe we concentrate on that point at the expense of other points – the most important one I think, being agility.

This agility though, can be a double-edged sword. I remember, back in the early days of the PC when organisations were slowly moving away from Mainframe and Mini-computer based green-screen infrastructures to PC based client/server computing. Little islands of PCs would appear to deal with some very specific problem, usually with no involvement from the main IT function. Managers could buy a few PCs with their budgets and justify it easily with a business case that seemed to work. Of course these “deployments” were rarely thought through with the same rigour an IT department would give to such a project. When the PCs failed, the users would naturally call IT who’d say “nothing to do with me”. It was quite a disruptive time for many businesses. IT departments tried to get control of these deployments and business managers began to see IT as a blocker to doing business. There was a bigger case they were unaware of. Their actions generally had a wider impact on the entire business.

It can be a little like that with the cloud. Because complex compute services can be bought, in some cases, simply on expenses, there is a tendency to think of the cloud as a super agile platform. That is agility in the sense that from the conception of an idea to having it deployed as a service can be incredibly short. As long as it doesn’t undermine the other IT functions, cause other inconsistencies in data or applications, then that’s great.

I think we can therefore say that cloud affects not only IT, but many other stakeholders in the business. Indeed, in many (most?) cases, it will be these stakeholders that experience cloud computing before IT departments do.

So what are the things that business managers complain about? I think a scenario will paint a picture we can dissect:

Let’s say a product sales group has just been given what seems like an unachievable revenue target and a really tough scorecard for business performance. They’re in a flat spin, panicking about how they are going to achieve this. In a hastily convened meeting they brainstorm ideas. One of them is completely crazy, but hey – you know what? it will probably work. Remember these guys are panicking. They don’t do the usual due diligence on a business idea because they have such a short time to reach these new goals. They know they’ll have failed if they go through the normal process.

They need an application. It needs to be Internet facing and because of the size of the problem they are making the offers so compelling that millions of customers will be visiting. If you are a salesperson in this situation, it all seems logical and ever so slightly exciting. Until you outline your ideas to IT. Even though the business unit has budget for the development, it doesn’t stop there. IT say the procurement process just to get the hardware is 4 months. There’s no way they can turn this around in 4 weeks. They have to develop the service, find space in the data-centre, provision the hardware and software, build support in to their maintenance plans, add hardware if the service is as successful as you say it is. IT come up with all these problems that slow you down. The conversations among yourselves are “…do these guys not realise that sales is the lifeblood of this company? If we don’t sell anything they won’t have jobs. Why are they putting so many barriers in the way of this great idea?”

If the cloud is not already an integral part of the IT strategy, the sales team will probably eventually discover it themselves and bypass IT completely. In fact they probably won’t want IT to have any idea they are doing it, because, as far as they are concerned, IT is there to stop business, not promote it. The less IT know the better. So with a combination of local business budgets, expenses and other sources of money, they get the service they want. Agility. Great – it all worked fantastically without IT and let’s bear that in mind for the future.

Let’s now wind forward 2 years. The whole thing has been a runaway success and the CEO wants everything to be integrated in to the main business. There are lots of niggles. Customers have to have 2 ids. One for the main systems and one for this cloud app which was developed “free of IT interference”. Customers don’t like that every time they visit the site, they aren’t quite sure which id they should use. The format of the data in the cloud app and in the internal databases is different. To get consolidated reporting is now a big problem. A database specialist has had to be hired to try to create compound reports from all the data that is gradually being spread to more and more small cloud apps without the benefit of an IT orchestrated model for data.

If IT had been prepared for the onslaught, this model would be different. If IT had already thought about the cloud as a model for delivering these kinds of services – services that fit in to those service appetite diagrams above – only to have been done with the enterprise architecture of the entire system, on-premise and cloud, in mind – the story would be very different.

What are the questions that arise from this scenario (and this is only one scenario):

  • Can IT respond in a timely manner to the business?
  • Do projects overrun and under-deliver?
  • Are projects delayed because of infrastructure complexity?
  • Are projects delayed because of procurement complexity?
  • Are business managers demanding that IT change faster than it is capable of changing?
  • Does the business generate problems with mergers, acquisitions, divestitures?
  • Does IT make the best use of the resources it already owns in its data-centres?
  • Does the business move at an ever increasing pace, making greater and greater demands for flexibility and agility on IT?

Organisations do want to fix these problems. They want a lower cost base with more predictable and manageable costs. They only want to pay for the resources they use (some estimates say that between 70% and 90% of compute resources in most data-centres are unused (but still paid for)). The cost of risk is too high; businesses want to shield themselves financially against risk. They want to provide a platform that allows for business innovation, but they need to do it at a cost they can afford. They definitely want to be more agile and more able to respond quickly to competitive threats.

These are the sorts of things the cloud brings. Not to every business problem. But to many.

Most businesses (and governments come to that) have a 5-year time horizon. There’s always a “5 year plan”. I’d say this is the best time period to make the comparisons between the “what we’ll have if we don’t embrace the cloud” and the “what we’ll have if we do embrace the cloud”. Financially, this needs to cover the migration costs.

Having a reasonable time period, like 5 years, allows an organisation to build other factors in – for example a refresh cycle that would normally happen at say the 4 year point, could be delayed (or brought forward), if appropriate and the service moved to the cloud as part of the refresh planning. For example the organisation might be thinking of upgrading the CRM system at the 4-year point, when in fact, because moving to a cloud-based version of the same system might be cheaper, it would give them the option of bringing that “migration” forward (or moving it backward). Bringing it forward would give the benefits of the system earlier to help make the business more successful. Moving it back might help if the organisation is on a cost-drive (perhaps, as is often the case, simply to prove to the market they are taking costs seriously).
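The kind of 5-year comparison described above can be sketched with a toy cost model. Every figure and parameter below is hypothetical; a real analysis would use the organisation's own capex, opex, refresh-cycle and migration numbers:

```python
# Hypothetical 5-year comparison of "don't embrace" vs "embrace the cloud".
# All numbers are made up for illustration only.

def five_year_cost(capex_per_refresh, refresh_years, annual_opex,
                   migration_cost=0.0, horizon=5):
    """Total cost over the planning horizon, counting hardware refreshes."""
    refreshes = [y for y in range(1, horizon + 1) if y % refresh_years == 0]
    return migration_cost + capex_per_refresh * len(refreshes) + annual_opex * horizon

# On-premise: a big hardware refresh at the 4-year point plus steady running costs.
on_prem = five_year_cost(capex_per_refresh=400_000, refresh_years=4,
                         annual_opex=150_000)

# Cloud: no hardware refresh, a one-off migration cost, lower pay-per-use opex.
cloud = five_year_cost(capex_per_refresh=0, refresh_years=4,
                       annual_opex=110_000, migration_cost=120_000)

print(on_prem, cloud)  # 1150000 670000
```

Even this crude sketch shows why the horizon matters: over one year the migration cost dominates, but over five years the capex avoidance and lower opex swing the comparison.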

Overall, those organisations that have thought about the cloud as a strategic platform and as part of their overall enterprise architecture stand to benefit the most from an architecture that allows for integration and economies of scale. Organisations that neglect this planning will suffer the problems created by independent business units for many years to come.

I believe human stories, combined with technical data, evidence etc (in that order) is the way this can be most optimally achieved.

Brent Stineman (@brentcodemonkey) posted Cloud Computing Barriers and Benefits on 3/11/2011:

I stretched myself this week and gave a presentation to the local Cloud Computing User Group that wasn’t specifically about Windows Azure but instead a general take on cloud computing adoption. The idea for this arose from a session I conducted at the 2011 Microsoft MVP Summit with two other Windows Azure MVPs that specifically targeted Windows Azure. And since much of my time lately has been spent thinking as much strategically about the cloud as technically, it made sense to put my thoughts out there via a presentation.

Feedback on the presentation was good and the discussion was, as always, much better. And I promised those in attendance that I’d post the presentation. So here you go!

I apologize for the deck not being annotated. I also realize that this subject is one folks have written entire books on, so it just touches on complex topics. But there’s not much else you can do in a 60-90 minute presentation that covers such a broad topic.  That said, I’d still appreciate any feedback anyone has on it or on my presentation (if you were present yesterday).

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Greg Shields explained What converged infrastructure brings to private cloud in a 3/13/2011 post:

IT has once again found itself engaging in the age-old practice of "white boxing." This time, however, we’re not piecing together our servers; virtualization has instead driven us to white box our entire data center infrastructure.

The result of white boxing servers was often a data center full of inconsistencies, with additional administrative overhead and an increased chance of failure following every configuration change. This is why converged infrastructure has quickly become our industry's hottest topic, as white boxes are now being assembled to create virtual environments.

A few years ago, we discovered all the wonder and excitement associated with virtualization. In turn, virtualization was implemented as soon as possible on any available hardware. But the piecewise addition of more SAN equipment and networking, along with each new round of servers, created more and more colossal interconnections.

Before long, the complicated web of do-it-yourself virtual hardware stopped being wonderful and started being problematic. Doing it yourself, as we’ve only now rediscovered, doesn’t scale.

Solving this problem and beating back our industry’s second generation of white boxing is what converged infrastructure intends to achieve. That’s why, rather than "converged infrastructure," I prefer the term "hardware that's designed with virtualization and cloud in mind."

Converged infrastructure in a nutshell
At the end of the day, your virtual infrastructure or private cloud runs on monitoring data. That data explains how much capacity is on-hand, broken down into major categories: Compute, memory, networking and storage. Call this your supply of resources.

It also knows how many of those resources your virtual machines (VMs) require. This second number represents demand. With supply and demand now abstracted into some set of numerical values, you have now generated an easy-to-understand "economics of resources" that represents the state of your data center.

In a converged infrastructure, this provides a recognizable warning as to when more resources are needed. Trending the use of resources means knowing that more networking, storage or compute power will be required at a specific time. Supplied and consumed resources are now quantified and used, rather than organizations relying on best guesses. Trending also makes purchases substantially easier to plan and budget.
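The "economics of resources" idea above can be sketched in a few lines of Python. The resource names, module sizes, VM demands, and the 80% warning threshold are all hypothetical; a real converged-infrastructure toolset would pull these numbers from its monitoring layer:

```python
# Sketch of the supply/demand model: supply comes from hardware modules,
# demand from VMs, and a utilization check flags when more capacity is needed.

def total_capacity(modules):
    """Sum each resource across all hardware modules (the 'supply')."""
    supply = {"cpu_ghz": 0.0, "memory_gb": 0.0, "storage_tb": 0.0}
    for m in modules:
        for resource in supply:
            supply[resource] += m.get(resource, 0.0)
    return supply

def utilization(supply, vms):
    """Fraction of each resource consumed by the VMs (the 'demand')."""
    demand = {r: sum(vm.get(r, 0.0) for vm in vms) for r in supply}
    return {r: demand[r] / supply[r] for r in supply}

def needs_more(supply, vms, threshold=0.8):
    """Resources whose utilization has crossed the warning threshold."""
    return [r for r, u in utilization(supply, vms).items() if u > threshold]

# Two identical hypothetical modules supplying the pool...
modules = [{"cpu_ghz": 48, "memory_gb": 256, "storage_tb": 10},
           {"cpu_ghz": 48, "memory_gb": 256, "storage_tb": 10}]
# ...and two VMs consuming from it.
vms = [{"cpu_ghz": 30, "memory_gb": 200, "storage_tb": 4},
       {"cpu_ghz": 50, "memory_gb": 250, "storage_tb": 5}]

supply = total_capacity(modules)
print(needs_more(supply, vms))  # ['cpu_ghz', 'memory_gb'] (~83% and ~88% used)
```

Adding another module is just appending another dictionary to the list, which is exactly the Lego-style, buy-and-snap-in purchase model described below.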

To achieve its goals, converged infrastructure hardware is completely modularized, not unlike a stack of Legos or Tinkertoys. Each module connects with minimal effort into the greater whole that is your data center, much as an additional hard drive is snapped into a server or SAN today. The modules also contribute a known quantity of resources, increasing your economic supply. You can add to your total storage, computing power or memory in the same way.

More importantly, each module is something you'll purchase by popping over to a manufacturer’s website and clicking "buy." You’ve done this for years with servers; why couldn’t you do it with your entire data center? What will arrive are ready-to-insert components with minimal cabling and trivial installation. Wrapping around this entire system is a management toolset that recognizes new hardware and seamlessly adds it to your pool of resources.

And this isn't all in the future. For some manufacturers, the hardware is already here. For others, it's on the roadmap. Some of the components -- blades, modularized storage, dense networking and so on -- are being advertised by the major manufacturers, even if they haven’t yet explained how this new approach works. And the management toolsets are also well on their way.

With names like BladeSystem Matrix and Advanced Infrastructure Manager, these prepackaged virtual computing environments manifest the resource economics at the hardware layer while your hypervisor management tools deal with individual VM actions. The combination of these two pieces is the source of what we now think of as private cloud computing. Converged infrastructure is just the enabler.

So is converged infrastructure a fancy name or an actual technology? In a way, it’s a bit of both: not there to eliminate hypervisor management but to augment it. Converged infrastructure’s hardware and management tools provide a way to end, for the second time, our nasty practice of white boxing.


Greg is a Microsoft MVP and a partner at Concentrated Technology.

Full disclosure: I’m a paid contributor for

Srinivasa Sundara Rajan posted Cloud Computing: Private Clouds - Buy vs Reuse vs Build on 3/11/2011:

Private Cloud Is Hot
Several market studies indicate that private clouds will be hot in the near future. A few of these studies indicate how much private clouds are predicted to grow:

  • Industry seminars expect cloud infrastructures to grow at a compounded rate of 25 percent per year over the next few years, with 75 percent of that infrastructure growth coming from private clouds
  • IDC Predicts Private Cloud Server Market to Reach $11.8B by 2014
  • Gartner also predicts that Private Cloud Computing Ramps Up in 2011

Definition of Private Cloud
The cloud infrastructure is owned or leased by a single organization, delivered over a private network and is operated solely for that organization, either by that organization (internal private cloud) or by an outsourcer (external private cloud).

Private cloud is the implementation of cloud services on resources that are dedicated to your organization, whether they exist on-premises or off-premises. With a private cloud, you get many of the benefits of public cloud computing (including self-service, scalability, and elasticity) with the additional control and customization available from dedicated resources.

Build / Buy / Reuse Analysis
Most IT organizations conduct this analysis so that the investigation of the multiple options available to acquire a capability results in the best possible option while having the buy-in of all the stakeholders.

  • Buy - Purchasing a pre packaged solution like ‘Cloud In A Box' option
  • Reuse - Incorporating reusable assets like a public cloud and utilizing the same as a private cloud by adopting appropriate security controls
  • Build - Obtain individual components of Private Cloud and building on top of it

Buy
As mentioned, this is about buying a prepackaged solution that has all the elements of a private cloud, like a virtualization platform, machine images, a self-service portal, dynamic scaling capability, and the associated hardware and software.

The advantages of this kind of solution are that it makes for a very easy starting point on private clouds and can improve the time to market of new applications.

However, most of the ‘Buy' solutions on the market are limited to specific PaaS platforms and at this time have limited capability to expand to all enterprise needs for disparate application platforms.

WebSphere CloudBurst Appliance & IBM CloudBurst
WebSphere CloudBurst Appliance allows you to quickly set up an Enterprise WebSphere Cloud using existing hardware resources. WebSphere CloudBurst, delivered as an appliance, includes all the capabilities required to virtualize, dispense, and optimize WebSphere Application Server Environments. WebSphere CloudBurst delivers pre-installed and configured virtual images and topology pattern definitions ready for customization and fast deployment into the cloud.

Built on the IBM BladeCenter® platform, IBM CloudBurst provides pre-installed, fully integrated service management capabilities across hardware, middleware and applications.

The above option will only dispense WebSphere images, and the packaged solution, while easy to buy for quick deployment, may have ‘lock-in' issues on the platform.

Windows Azure Platform Appliance
Windows Azure platform appliance is a turnkey cloud platform that customers can deploy in their own datacenter, across hundreds to thousands of servers. The Windows Azure platform appliance consists of Windows Azure, SQL Azure and a Microsoft-specified configuration of network, storage and server hardware. This hardware will be delivered by a variety of partners.

The appliance is designed for service providers, large enterprises and governments and provides a proven cloud platform that delivers breakthrough datacenter efficiency through innovative power, cooling and automation technologies.

Again, this solution, while easy to deploy, is a Windows Azure PaaS-platform-only solution.

There are a few more vendors in this market: EMC VPLEX Private Cloud Appliance and Oracle Exalogic Elastic Cloud also offer a "Buy" option for private cloud. Like the others, Exalogic provides a WebLogic-centric private cloud solution.

Reuse
Unlike a software component, where reuse is facilitated by standards, there is no direct equivalent of reuse in the private cloud. The closest offering is to utilize a ‘Virtual Private Cloud' offering as part of a pre-built public cloud platform.

A Virtual Private Cloud (VPC) is a private cloud existing within a shared or public cloud. For example, Amazon VPC enables you to use your own isolated resources within the AWS cloud. Amazon VPC provides end-to-end network isolation by utilizing an IP address range that you specify, and routing all network traffic between the VPC and your datacenter through an industry-standard encrypted IPsec VPN. This allows you to leverage your preexisting security infrastructure, such as firewalls and intrusion detection systems, to inspect network traffic going to and from a VPC.
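Because the isolation boundary hinges on the IP address range you specify, a practical first step is making sure a candidate VPC range doesn't collide with the ranges already routed in your datacenter. As a minimal sketch (with purely hypothetical ranges), Python's `ipaddress` module can check this:

```python
import ipaddress

def overlaps_existing(candidate_cidr, existing_cidrs):
    """Return True if the candidate VPC range collides with any on-premises range."""
    candidate = ipaddress.ip_network(candidate_cidr)
    return any(candidate.overlaps(ipaddress.ip_network(c)) for c in existing_cidrs)

# Hypothetical datacenter ranges already reachable over the IPsec VPN
datacenter = ["10.1.0.0/16", "192.168.0.0/24"]

print(overlaps_existing("10.2.0.0/16", datacenter))    # False -- safe to assign
print(overlaps_existing("10.1.128.0/20", datacenter))  # True -- falls inside 10.1.0.0/16
```

An overlapping range would make the VPN routing ambiguous, so a check like this belongs in any provisioning workflow.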

While this option theoretically enables quick adoption of a private cloud, the isolation is logical, achieved through a combination of software and network management. The policies are still governed by the vendor, which makes this option less flexible than a classic ‘Private Cloud'.

This option is about procuring the basic building blocks of a cloud platform rather than the ready-made platform choices of the Buy option. While it takes some effort on the organization's part to fit the correct platform and services on top of these building blocks, it gives complete control of the private cloud while providing all its benefits.

HP BladeSystem Matrix and Cloud Service Automation
HP's BladeSystem Matrix provides a complete package of hardware, storage, software, service catalog and monitoring as part of the above private cloud offering. For companies who are looking to build private clouds and get started quickly, HP Cloud Service Automation for BladeSystem Matrix is an ideal starting point. The bundled solution provides one-touch provisioning, management, and monitoring of both applications and infrastructure using the familiar Matrix interface.

The key advantage of the Build option for private clouds is that the resulting solutions are built on an open architecture to manage heterogeneous environments.

HP BladeSystem Matrix is designed to work with the common technologies and processes used in data centers today. This open design approach ensures that Matrix integrates seamlessly with most storage and network fabrics, such as HP StorageWorks and HP Networking, as well as EMC and Cisco. It runs any application out of the box, and is integrated with the leading virtualization technologies from HP, Microsoft and VMware.

Other Custom Solutions
Eucalyptus Enterprise Edition (Eucalyptus EE) enables customers to implement private cloud computing using Eucalyptus, the most popular open source private cloud software, with purpose-built solutions that extend its functionality. Eucalyptus EE contains the open source core platform and a suite of products that allow Enterprises and Service Providers to implement the most portable, scalable and high performing private cloud solution available today.

While most large enterprises have not fully decided on the use of open source products, this solution is also not fully packaged, especially with respect to hardware, and may require support for multiple components.

We see the traditional ‘Build vs. Buy vs. Reuse' decision making very much applicable to choosing private clouds too. The Reuse option in this case is largely restricted to a VPC over a public cloud, whereas the Buy and Build options are very much alive and growing. We see that enterprises tend to take decisions based on the platform of their existing applications, such as Java EE or .NET, and on how disparate those applications are. In general, we see the ‘Build' option providing much flexibility and less lock-in for enterprises.

Srinivasan Sundara Rajan works at Hewlett Packard as a Solution Architect. Read his interview by the Windows Azure team’s Robert Duffner here.

<Return to section navigation list> 

Cloud Security and Governance

Lucas Roh asked “What controls can you put in place to restrict data loss?” as a deck for his How to Make Public and Private Clouds Secure post of 3/11/2011:

With Gartner predicting $150 billion in cloud-related revenues by 2013, the march towards "the cloud" is not abating. As the market grows, "Is the cloud secure?" is a very familiar refrain frequently asked by corporate management. While those in IT will certainly tell you no environment will be completely secure, there are measures you can take when moving to the cloud to mitigate security risks to reasonable levels. Transitioning to the cloud can often be more secure than a traditional in-house solution. Cloud providers have collectively invested billions in implementing standards and procedures to safeguard data. They need to compete based on not only price, but the breadth of their security, driving innovation to the benefit of the customer.

In a public cloud environment the end user has a solution that is highly automated. Customers can put their applications in the cloud and control all of the individual attributes of their services. If you develop products and services in a testing or development environment, the high level of scalability offered by an on-demand computing solution makes it easy to clone server images and make ad-hoc changes to your infrastructure.

The public cloud of course lacks the visibility and control of a private model. Choosing the public cloud also means giving up a measure of control in terms of where the processing takes place. With a single-tenant private cloud, you have more specialized control with fewer people sharing resources. Each environment poses security challenges that can be managed by following standards and choosing the right partners.

Ensuring Security
What controls can you put in place to restrict data loss?

A sophisticated identity management system is crucial for protection from password hacking. Instituting and following a real password management system with true randomization is essential for data security. While it seems like a relic from the 1990s, it is shocking to see the number of IT staff or administrators who still use "123456" or "admin" as a password, a practice that should be ruthlessly weeded out. Consider using a secure password management service that protects user ID and password data and can flag users that repeat passwords across various systems. Using LDAP controls and administering credentials will keep access information from being scattered around. Additional controls such as running scripts to remove access when employees leave the organization are also recommended for identity management security.
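The "true randomization" point is worth making concrete: passwords should come from a cryptographically secure source, not from habit or from `random`. A minimal sketch using Python's standard library (the alphabet and length here are illustrative, not a policy recommendation):

```python
import secrets
import string

def generate_password(length=16):
    """Generate a password from a CSPRNG; each character drawn independently."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run, e.g. for a new service account
```

The `secrets` module exists precisely for this use case; `random` is statistically uniform but predictable and should never be used for credentials.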

After your internal processes such as identity management are better implemented and followed, you should turn your attention to the best practices of your various outsourcers. You may be at the point where you are working with several different SaaS providers. Do they follow your preferred procedures for identity management security? If possible, the centralization of these practices under your review can provide an added measure of security. Also, when choosing a solution provider, you should ask not only about their identity management practices, but also hiring and background check procedures for their administrators and about how access to data is controlled.

Over time, as cloud technology evolves, providers are standardizing policies that dictate where data physically resides. You might see user-defined policies with restrictions on crossing certain state or country boundaries as companies become increasingly globalized.
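A user-defined residency policy of this kind can be pictured as a simple allowlist check applied before data is placed; the region names and the PII rule below are purely illustrative, not any provider's actual API:

```python
# Hypothetical policy: PII must stay within these regions; other data may go anywhere.
ALLOWED_PII_REGIONS = {"us-east", "us-west"}

def placement_permitted(region: str, contains_pii: bool) -> bool:
    """Gate a storage-placement decision against the residency policy."""
    return (not contains_pii) or region in ALLOWED_PII_REGIONS

print(placement_permitted("eu-west", contains_pii=False))  # True
print(placement_permitted("eu-west", contains_pii=True))   # False -- blocked by policy
```

In practice such checks would be enforced by the provider's placement engine rather than by customer code, but the shape of the rule is the same.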

Specifically for public environments, data in the cloud is typically stored in proximity to other customers' data. While providers encrypt this data, it's still important to look at how the data is being segregated. Ask your solution provider to follow best encryption practices to be sure your data is both safe and usable. When your data is no longer needed, the cloud provider should take the right steps for deletion. In addition, you want a provider to offer managed services, including firewalls and the latest intrusion detection systems.

Another important consideration is the legal ramifications and jurisdiction that cover data standards. While your data containing PII (Personally Identifiable Information) might be considered "secure" in one country, it may fall under different regulations in another. European governments often have very strict rules for privacy protections compared to other countries. When choosing a cloud solution provider, you need to make sure your data can be quickly relocated in case your service agreement ends. Knowing the physical location of your data is important for such recovery efforts.

Availability and uptime are of course important for end customer satisfaction, but as a client, you need guarantees for data availability. Will internal IT staff have consistent access to company data to perform their daily job functions? Defined service-level agreements should detail availability and include penalty clauses if the agreement's terms are not upheld.

According to Gartner research, 40 or more states have formal regulations in place governing how companies can protect PII. In addition to the traditional focus on credit card, bank account, and other finance-centric information, there are also concerns around Social Security numbers and any other type of data that warrants privacy restrictions. The implications are that a customer should choose an established cloud solution provider that places system controls on the movement of PII within their cloud network. If sensitive data ends up on servers outside of the United States, it can create serious legal issues. Beyond PII, many companies run the risk of exposing their own intellectual property or other trade secrets.

Companies need their cloud providers to implement and follow strict controls that are routinely checked by independent auditors. Auditors exist to validate reporting to make sure procedures are in place to protect PII and other data. Performing thorough reviews of physical and logical access controls, auditors can proactively alert companies to security holes before there is a data breach. Auditors can review if background checks aren't performed or are not completed properly. Backup procedures of customer data are also intensely scrutinized. Who has access to backup data? Does more than one organization touch the backup data?

As companies utilize more and more SaaS solutions to handle business needs, standards such as SAS 70 become more and more prevalent across multiple industries. As a flexible accounting standard that can be altered to fit the custom needs of SaaS providers, SAS 70 is becoming a de-facto standard in the space. While it is indeed a little disingenuous for companies to dictate their own control objectives to the auditing firm, those that take the auditing seriously can proactively find and fix minor issues before they become massive problems.

Choosing the Right Vendor and Managing Outsourcers
The barriers to entry for cloud solution providers are quite low. Less-established players might not be as fastidious about where your data might travel, or who has access to analyze that data. You can't go just on the cost of the service if the tradeoff is lack of security oversight or a broader risk of the company going under.

You need to ask potential solution providers a lot of questions, digging beneath their standard marketing literature. What about business continuity? Is there a documented process for this? If one of their data centers is destroyed, what does that mean for your business? Do they only have one location? If so, you need to explore their backup and disaster recovery procedures, as well as the security risks of those procedures. Another important consideration is the company's actions after a security breach. Do you trust them to tell you security has been compromised so you can take steps to mitigate damage?

Negotiating with the provider can afford extra levels of protection. Strengthened layers of encryption and set standards of data storage can be put in the contract as a safeguard.

You also need to look beyond the cloud provider at any other SaaS type provider, whether a CRM solution or any other kind. A complete cloud solution and other business processes are often enabled by a chain of outsourcers. For customers that manage very sensitive data, they should request independent security audits of outsourcers, for instance any hosting companies used by the cloud provider.

Nightmare scenarios develop when an outsourcer in the fourth degree of separation exposes confidential information. You need to properly review the data standards for all of these outsourcers and have the right to refuse certain unreliable outsourcers from having any contact with your data. All of these SaaS companies have an obligation to enforce and monitor where customer data goes and how it is accessed.

Outsourcers should follow defined password assignment standards that decrease the likelihood of password hijacking. With multi-tenant cloud environments, the risks are greater, so the vendor needs to illustrate the controls they put in place to afford some separation between tenants.

Putting It Together
Maintaining optimal security is a two-step process: first, outline data requirements in terms of privacy and user access; and second, vet the right solution provider that can implement both technical and philosophical strategies to minimize risks. With the rate of technological innovation across all sectors, new tools to protect and manage cloud-based data are being researched and developed. As these strategies move beyond development into the implementation stage, cloud providers will have additional weapons to safeguard customer data and ensure security.

Lucas founded Hostway, one of the top-five Web hosting companies globally, in 1998.

Chris Hoff (@Beaker) posted Incomplete Thought: Cloud Capacity Clearinghouses & Marketplaces – A Security/Compliance/Privacy Minefield? on 3/11/2011:

With my focus on cloud security, I’m always fascinated when derivative business models arise that take the issues associated with “mainstream” cloud adoption and really bring issues of security, compliance and privacy into even sharper focus.

To wit, Enomaly recently launched SpotCloud – a Cloud Capacity Clearinghouse & Marketplace in which cloud providers can sell idle compute capacity and consumers may purchase said capacity based upon “…location, cost and quality.”

Got a VM-based workload?  Want to run it cheaply for a short period of time?

…Have any security/compliance/privacy requirements?

To me, “quality” means something different than simply availability…it means service levels, security, privacy, transparency and visibility.

Whilst one can select the geographic location where your VM will run, as part of offering an “opaque inventory,” the identity of the cloud provider is not disclosed.  This begs the question of how the suppliers are vetted and assessed for security, compliance and privacy.  According to the SpotCloud FAQ, the answer is only a vague “We fully vet all market participants.”

There are two very interesting question/answer pairings on the SpotCloud FAQ that relate to security and service availability:

How do I secure my SpotCloud VM?

User access to VM should be disabled for increased security. The VM package is typically configured to automatically boot, self configure itself and phone home without the need for direct OS access. VM examples available.

Are there any SLA’s, support or guarantees?

No, to keep the costs as low as possible the service is offered without any SLA, direct support or guarantees. We may offer support in the future. Although we do have a phone and are more than happy to talk to you…

:: shudder ::

For now, I would assume that this means that if your workloads are at all mission critical, sensitive, subject to compliance requirements or traffic in any sort of sensitive data, this sort of exchange option may not be for you. I don’t have data on the use cases for the workloads being run using SpotCloud, but perhaps we’ll see Enomaly make this information more available as time goes on.

I would further assume that the criteria for provider selection might be expanded to include certification, compliance and security capabilities — all the more reason for these providers to consider something like CloudAudit which would enable them to provide supporting materials related to their assertions. (*wink*)

To be clear, from a marketplace perspective, I think this is a really nifty idea — sort of the cloud-based SETI-for-cost version of the Mechanical Turk.  It takes the notion of “utility” and really makes one think of the options.  I remember thinking the same thing when Zimory launched their marketplace in 2009.

I think ultimately this further amplifies the message that we need to build survivable systems, write secure code and continue to place an emphasis on the security of information deployed using cloud services. Duh-ja vu.

This sort of use case also begs the interesting set of questions as to what these monolithic apps are intended to provide — surely they transit in some sort of information — information that comes from somewhere?  The oft-touted massively scaleable compute “front-end” overlay of public cloud often times means that the scale-out architectures leveraged to deliver service connect back to something else…

You likely see where this is going…

At any rate, I think these marketplace offerings will, for the foreseeable future, serve a specific type of consumer trafficking in specific types of information/service — it’s yet another vertical service offering that cloud can satisfy.

What do you think?

Seny Kamara and Kristin Lauter published Considerations for the Cryptographic Cloud to the HPC in the Cloud blog on 3/11/2011:

With the prospect of increasing amounts of data being collected by a proliferation of internet-connected devices, and the task of organizing, storing, and accessing such data looming, we face the challenge of how to leverage the power of the cloud running in our data centers to make information accessible in a secure and privacy-preserving manner.  For many scenarios, in other words, we would like to have a public cloud which we can trust with our private data, and yet we would like to have that data still be accessible to us in an organized and useful way.

One approach to this problem is to envision a world in which all data is preprocessed by a client device before being uploaded to the cloud; the preprocessing signs and encrypts the data in such a way that its functionality is preserved, allowing, for example, for the cloud to search or compute over the encrypted data and to prove its integrity to the client (without the client having to download it). We refer to this type of solution as Cryptographic Cloud Storage. 
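The "sign before upload" half of that preprocessing can be sketched with nothing but the standard library. In this illustrative fragment (the key and record are placeholders, and a real cryptographic storage design would also encrypt the data and use proof-of-storage protocols rather than re-downloading), the client keeps an HMAC key to itself and tags each blob, so tampering by or at the provider is detectable:

```python
import hmac
import hashlib

KEY = b"client-held secret key"  # never leaves the client device

def tag(data: bytes) -> bytes:
    """Integrity tag computed client-side before upload, stored with the blob."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

original = b"patient record #1234"
stored_tag = tag(original)          # uploaded alongside the (encrypted) record

# Later, after retrieving the blob from the cloud:
retrieved = b"patient record #1234"
print(hmac.compare_digest(stored_tag, tag(retrieved)))   # True -- unchanged

tampered = b"patient record #9999"
print(hmac.compare_digest(stored_tag, tag(tampered)))    # False -- detected
```

`hmac.compare_digest` is used instead of `==` to avoid timing side channels during verification.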

Cryptographic cloud storage is achievable with current technologies and can help bootstrap trust in public clouds.  It can also form the foundation for future cryptographic cloud solutions where an increasing amount of computation on encrypted data is possible and efficient.  We will explain cryptographic cloud storage and what role it might play as cloud becomes a more dominant force.

Applications of the Cryptographic Cloud


Storage services based on public clouds such as Microsoft’s Azure storage service and Amazon’s S3 provide customers with scalable and dynamic storage. By moving their data to the cloud customers can avoid the costs of building and maintaining a private storage infrastructure, opting instead to pay a service provider as a function of its needs. For most customers, this provides several benefits including availability (i.e., being able to access data from anywhere) and reliability (i.e., not having to worry about backups) at a relatively low cost.  While the benefits of using a public cloud infrastructure are clear, it introduces significant security and privacy risks. In fact, it seems that the biggest hurdle to the adoption of cloud storage (and cloud computing in general) is concern over the confidentiality and integrity of data.

While, so far, consumers have been willing to trade privacy for the convenience of software services (e.g., for web-based email, calendars, pictures etc…), this is not the case for enterprises and government organizations. This reluctance can be attributed to several factors that range from a desire to protect mission-critical data to regulatory obligations to preserve the confidentiality and integrity of data. The latter can occur when the customer is responsible for keeping personally identifiable information (PII), or medical and financial records. So while cloud storage has enormous promise, unless the issues of confidentiality and integrity are addressed many potential customers will be reluctant to make the move.

In addition to simple storage, many enterprises will have a need for some associated services.  These services can include any number of business processes including sharing of data among trusted partners, litigation support, monitoring and compliance, back-up, archive and audit logs.   A cryptographic storage service can be endowed with some subset of these services to provide value to enterprises, for example in complying with government regulations for handling of sensitive data, geographic considerations relating to data provenance,  to help mitigate the cost of security breaches, lower the cost of electronic discovery for litigation support, or alleviate the burden of complying with subpoenas.

For example, a specific type of data which is especially sensitive is personal medical data.  The recent move towards electronic health records promises to reduce medical errors, save lives and decrease the cost of healthcare. Given the importance and sensitivity of health-related data, it is clear that any cloud storage platform for health records will need to provide strong confidentiality and integrity guarantees to patients and care givers, which can be enabled with cryptographic cloud storage.

Another arena where a cryptographic cloud storage system could be useful is interactive scientific publishing. As scientists continue to produce large data sets with broad value to the scientific community, demand will increase for a storage infrastructure that makes such data accessible and sharable.  To incent scientists to share their data, scientific publishers could establish a publication forum for data sets in partnership with hosted data centers.  Such an interactive publication forum would need to provide strong guarantees to authors on how their data sets may be accessed and used by others, and could be built on a cryptographic cloud storage system.

Cryptographic Cloud Storage


Seny and Kristin are members of the Microsoft Research Cryptography Group.

What most Windows Azure developers are seeking today is Transparent Data Encryption (TDE) for SQL Azure.

Mitesh Soni analyzed SAS 70 and Cloud Computing on 3/9/2011:

The Statement on Auditing Standards No. 70 (SAS 70) has become the ubiquitous auditing report by which all cloud computing service providers are judged.  So how did this financial auditing report become the standard by which we examine cloud service providers?  How much can we trust this report as a true representation of the security controls in place?

SAS 70 was originally titled “Reports on the Processing of Transactions by Service Organizations” but was changed by Statement on Auditing Standards No. 88 to “Service Organizations”. The guidance contained in SAS 70 is effective for all service auditors’ reports dated after March 31, 1993.

There are two types of service auditor reports:

Type I:
  • Reports on controls placed in operation (as of a point in time)
  • Looks at the design of controls, not operating effectiveness
  • Considered for information purposes only
  • Not considered significant for purposes of reliance by user auditors/organizations
  • Most often performed only in the first year a client has a SAS 70

Type II:
  • Reports on controls placed in operation and tests of operating effectiveness (for a period of time, generally not less than 6 months)
  • Differentiating factor: includes tests of operating effectiveness
  • More comprehensive
  • Requires more internal and external effort
  • Identifies instances of non-compliance
  • More emphasis on evidential matter

The rise of cloud computing pushed companies to search for a method to validate these new types of services.  Publicly traded companies that had to be compliant with SOX were already familiar with the SAS 70.  It was a natural evolution to adapt the report to auditing cloud computing service providers even though it was not originally intended for this purpose.

Amazon Web Services & SAS70 Type II audit procedures

Amazon Web Services’ controls are evaluated every six months by an independent auditor in accordance with Statement on Auditing Standards No. 70 (SAS70) Type II audit procedures. The report includes the firm’s opinion and the results of its evaluation of the design and operational effectiveness of AWS’s most important internal control areas, which are operational performance and security to safeguard customer data. The SAS70 Type II report, as well as the processes explained in this document, applies to all geographic regions within the AWS infrastructure.

AWS’ SAS70 Type II Control Objectives:

Security Organization Controls provide reasonable assurance that there is a clear information security policy that is communicated throughout the organization to users.
Amazon Employee Lifecycle Controls provide reasonable assurance that procedures have been established so that Amazon employee user accounts are added, modified and deleted in a timely manner and reviewed on a periodic basis to reduce the risk of unauthorized / inappropriate access.
Logical Security Controls provide reasonable assurance that unauthorized internal and external access to data is appropriately restricted and access to customer data is appropriately segregated from other customers.
Secure Data Handling Controls provide reasonable assurance that data handling between the customer’s point of initiation to an AWS storage location is secured and mapped accurately.
Physical Security Controls provide reasonable assurance that physical access to Amazon’s operations building and the data centers is restricted to authorized personnel.
Environmental Safeguards Controls provide reasonable assurance that procedures exist to minimize the effect of a malfunction or physical disaster to the computer and data center facilities.
Change Management Controls provide reasonable assurance that changes (including emergency / non-routine and configuration) to existing IT resources are logged, authorized, tested, approved and documented.
Data Integrity, Availability and Redundancy Controls provide reasonable assurance that data integrity is maintained through all phases including transmission, storage and processing.
Incident Handling Controls provide reasonable assurance that system incidents are recorded, analyzed, and resolved in a timely manner.

Limitations of SAS 70

  • It is not as robust as other security frameworks, such as ISO 27000 or the NIST 800 series.
  • ISO 27000 or the NIST 800 series take a broader approach to information security by reviewing the entire program from a risk management perspective.  In contrast, the SAS 70 is focused primarily on security controls and procedures surrounding the data center and financial implications.
  • The SAS 70 report can be misleading to the casual observer as it only focuses on controls and procedures that are agreed upon before the audit by the auditor and the company being audited.

Cloud & SAS 70

The Type I report only requires the auditor to make an opinion on the effectiveness of the controls in place at the time of the audit.  The Type II report takes this a step further by requiring the auditor to test the controls as well as document his opinion on their effectiveness.

The SAS 70 report is focused on accurate financial reporting so the auditors involved are typically from CPA firms.  A CPA firm possesses the education, training and experience to audit financial controls and may even have insight into other types of controls.  However, the question becomes should a CPA be validating information security controls?  If the auditor does not possess expertise in information security, it will be very difficult to provide much insight into the effectiveness of the controls.  There will be technical areas that will get overlooked just as a CISSP would not recognize inaccuracies in a financial audit.

Of the many regulations touching upon information technology with which organizations must comply, few were written with Cloud Computing in mind. Auditors and assessors may not be familiar with Cloud Computing generally or with a given cloud service in particular. That being the case, it falls upon the cloud customer to recognize:

  • Regulatory applicability for the use of a given cloud service
  • Division of compliance responsibilities between cloud provider and cloud customer
  • Cloud provider’s ability to produce evidence needed for compliance
  • Cloud customer’s role in bridging the gap between cloud provider and auditor/assessor

Should an organization interested in purchasing cloud related services even bother requesting this report from a prospective provider?  The SAS 70 can still be useful if the provider has tested more than the minimum number of controls; however, a vendor that provides a SAS 70 will most likely only be focused on areas of strength.  A vendor that does not provide a SAS 70 may or may not be serious about information security and protecting your data.

Recommendations include a right-to-audit clause; involvement of legal personnel and cloud-aware auditors; compliance with ISO/IEC 27001/27002 and SAS 70 Type II; evidence of compliance; and identification of the impact of regulations on infrastructure, policy and procedures, and information security.


<Return to section navigation list> 

Cloud Computing Events

The Windows Azure Team invited you on 3/11/2011 to Join The Webcast Tuesday March 22, "Running ERP on The Cloud: Benefits for IT and Business Leaders" with Microsoft and Acumatica:

Cloud technologies can provide businesses with many benefits, such as access from anywhere, faster deployment, lower hardware costs and flexible payment options.  If you're wondering if the Cloud is right for your business, join the free webcast, "Running ERP on the Cloud:  Benefits for IT and Business Leaders", on Tuesday, March 22, 2011 at 8:00 AM PST with Microsoft and Acumatica.

During this session speakers from both companies will explore the five key questions you need to ask before deploying on the Cloud.  Additionally, Acumatica executives will share the benefits of running on Windows Azure from the perspective of both an ISV and an end user.

Click here to learn more and to register.

Kevin Grant posted the Conference Program (Agenda) for Cloud Slam ‘11 on 3/11/2011:


Day 1 - Monday, April 18, 2011 (Mountain View & Livestream)
08:30 - 09:00 PST Registration
08:30 - 09:00 PST Welcome & Opening Keynote (Broadcast live over Internet)
09:00 - 10:00 PST Panel
10:00 - 11:00 PST Session Details TBA
11:00 - 12:15 PST Lunch Interval
13:15 - 14:15 PST Session Details TBA
14:15 - 15:40 PST Panel Discussion
15:40 - 15:50 PST Afternoon Break
15:50 - 16:30 PST Session Details TBA
16:30 - 17:30 PST Networking, Drinks & Close
Day 2 - Tuesday, April 19, 2011 (Webex)
10:00 - 11:00 EST Pre-Flight Checklists and Seatbelts for Your Application's Trip to the Cloud,
Chris Wysopal, Co-Founder & CTO at Veracode
10:00 - 11:00 EST IT Survival Tool for the Cloud Era: Transforming IT Into a Service Broker,
Keith Jahn, Director in the Office of the CTO for HP Software and Solutions at HP
11:00 - 12:00 EST Social Media's Midnight Run: The Regulations Are Coming!,
Rob May, CEO & Co-Founder at Backupify
11:00 - 12:00 EST A Ground-Level View of Cloud Operations and Deployment Issues - For Service Providers and Their Corporate Customers,
Raj Patel, Vice President, Collaboration Software Group, Cisco
11:00 - 12:00 EST Identifying and Removing the Obstacles to Cloud Adoption,
Mark Skilton, Capgemini and Penelope Gordon, Co-Founder at 1Plug Corporation
12:00 - 13:00 EST Session Details TBA,
13:00 - 14:00 EST Session Details,
14:00 - 15:00 EST Session Details TBA,
15:00 - 16:00 EST Bunker Busting: Government and the Cloud,
Logan Kleier, CISO at City of Portland
15:00 - 16:00 EST Getting Scalable Cloud Application Deployments Right the First Time,
Tobias Kunze, Director, Cloud Platform Engineering at Red Hat
15:00 - 16:00 EST From App to Cloud: Protecting Massive Content Archives,
Scott Genereux, President and CEO at Nirvanix
16:00 - 17:00 EST Session Details TBA,
17:00 - 18:00 EST Systematic Cloud Assessment and Roadmapping,
Tony Shan, Chief Cloudologist at Keane/NTT Data
17:00 - 18:00 EST Business Freemium - Leading This Decade in Cloud Innovation,
Matt Holleran, Venture Partner at Emergence Capital
17:00 - 18:00 EST Quick Scaling in the Cloud,
Michael Crandell, CEO at RightScale
18:00 - 19:00 EST How Application Design Can Power Cloud Economics,
James Staten, Vice President, Principal Analyst at Forrester
Day 3 - Wednesday, April 20, 2011 (Webex)
09:00 - 10:00 EST What you don't know about monitoring in the cloud,
Donn Rochette, CTO at AppFirst
09:00 - 10:00 EST Service Platforms: Power the Next Wave of Cloud Adoption,
Madhava Venkatesh, Group Project Manager at HCL
09:00 - 10:00 EST Seize the Cloud - The Business and Enterprise Architecture Dimensions of Cloud,
Erik van Ommeren, Director of Innovation at Sogeti VINT - Vision Inspiration Navigation Trends
10:00 - 11:00 EST Best practices encompassing next-generation process and compute capabilities,
Keith Lowery, Engineering Fellow at AMD
11:00 - 12:00 EST How Flexible and Scalable Is Hardware in the Cloud?,
Donato Buccella, CTO at Certeon
11:00 - 12:00 EST Preview the Next-Generation Cloud: Mobility and Productivity,
Neil Weldon, Director of Technology at Dialogic
12:00 - 13:00 EST Session Details TBA,
13:00 - 14:00 EST Session Details,
14:00 - 15:00 EST Session Details TBA,
15:00 - 16:00 EST Content Management for the Cloud,
Dean Tang, CEO at ABBYY USA
15:00 - 16:00 EST Chasing the Clouds - Best Practices for Optimizing Your Data Center with Cloud Strategies,
Dan Lamorena, Director, Storage and Availability Management Group at Symantec Corporation
16:00 - 17:00 EST Panel Session,
17:00 - 18:00 EST Cloud Computing Networking: The Race to Zero,
Peter Dougherty, Founder at OnPath Technologies
17:00 - 18:00 EST Cloud on the Go: How Business Will Take Our Cloud(s) Mobile,
John Barnes, CTO at Model Metrics
17:00 - 18:00 EST Preparing Your Data Center for the Transformation to Private Clouds,
Doug Ingraham, Vice President, Data Center Products at Brocade Communications
18:00 - 19:00 EST Primal fear: Enterprise cloud transformation and the fight or flight reflex,
Eric Pulier, CEO of ServiceMesh
19:00 - 20:00 EST Cloud 2: How Social, Mobile & Open Trends Impact the Workplace,
Chuck Ganapathi, Senior Vice President Products, Chatter and Mobile at
Day 4 - Thursday, April 21, 2011 (Webex)
08:00 - 09:00 EST Real Hybrid Clouds – No Barriers, No Boundaries,
Ellen Rubin, Co-Founder & VP Products at CloudSwitch
09:00 - 10:00 EST Dynamic Desktops: To the Cloud and Beyond!,
Tyler Rohrer, Founder at Liquidware Labs
09:00 - 10:00 EST Cloud Decision Framework for CIOs,
Varundev Kashimath, Researcher at Technical University of Delft
09:00 - 10:00 EST 6 Tips on Achieving Scaling Enterprise Applications in the Cloud,
Duncan Johnston-Watt, CEO at Cloudsoft
10:00 - 11:00 EST Session Details TBA,
11:00 - 12:00 EST Hybrid Clouds for SME,
Vineet Jain, CEO and Co-Founder at Egnyte
11:00 - 12:00 EST The Final Cloud Frontier: Driving Cloud Adoption for Mission Critical Apps,
Benjamin Frenkel, Head of Cloud Technology at Pegasystems
11:00 - 12:00 EST The Personality Cloud,
Simon Rust, VP of Technology at AppSense
12:00 - 13:00 EST Session Details TBA,
13:00 - 14:00 EST Session Details,
14:00 - 15:00 EST Session Details TBA,
15:00 - 16:00 EST From Image to Instance – Better Software Development with the Cloud,
Steve Taylor, COO at OpenMake Software
15:00 - 16:00 EST The Road to Cloud Services - An ING Case Study,
Corjan Bast, Global Product Manager at ITpreneurs
16:00 - 17:00 EST Session Details TBA,
17:00 - 18:00 EST Private Cloud Journey Made Easy: 4 Step Prescriptive Guidance,
Kurt Milne, Managing Director at IT Process Institute
17:00 - 18:00 EST Scaling Data in the Cloud,
Frank Weigel, Director of Product Management at Couchbase
17:00 - 18:00 EST The Government Cloud: Creating a 21st Century Citizen Experience,
Pete Stoneberg, Deputy CIO of the Government Cloud, RightNow Technologies
Day 5 - Friday, April 22, 2011 (Webex)
09:00 - 10:00 EST Heimdall Portal - Utilization of Service Measurement Index (SMI) as a Deployment Strategy in the Cloud,
Arfath Pasha, Cloud Software Technical Lead at Mycroft
09:00 - 10:00 EST Multi-Tenancy in the Cloud and Google App Engine,
Vikas Hazrati, Co-Founder and Technical Architect at Inphina Technologies
09:00 - 10:00 EST Avoiding Regulatory Mines in Your Cloud Business,
David Snead, Attorney & Counselor at W. David Snead, P.C.
10:00 - 11:00 EST Session Details TBA,
11:00 - 12:00 EST Capability Based IT Portfolio Management,
John Rogers, Chief, CIO and DPfM Division, Command, Control, Communications and Computer Systems (TCJ6), United States Transportation Command, Scott Air Force Base, Ill
11:00 - 12:00 EST Cloud Computing and the Internet of Things,
David Young, CEO and Founder at Joyent
11:00 - 12:00 EST The Cloud Makes IT Departments Go From Zeroes to Heroes,
Aaron Levie, CEO and Co-Founder at
12:00 - 13:00 EST Session Details TBA,
13:00 - 14:00 EST Session Details,
14:00 - 15:00 EST Session Details TBA,
15:00 - 16:00 EST Research in the Cloud: Speeding Retail Innovation With Live Customer Data,
Darren Vengroff, Chief Scientist at RichRelevance
15:00 - 16:00 EST The Service Provider Cloud Strategy - Quo Vadis?,
Uwe Lambrette, Service Provider Solutions, Internet Business Solutions Group of Cisco Systems
15:00 - 16:00 EST Cloud Computing Customer Loyalty and Advocacy: The Ingredient for Success,
Donald Ryan, Vice President at Market Probe
16:00 - 17:00 EST Cloud Computing – How We Got Here, Where We Are, and Where we are Heading,
Jeff Barr, Senior Web Services Evangelist at Amazon
17:00 - 18:00 EST Real-Time Cloud Platform,
Cyprien Noel, CEO at ObjectFabric
17:00 - 18:00 EST Hybrid Video Encoding: Keeping Your Feet on the Ground and Your Head in the Cloud,
David Dudas, VP Video Solutions at Sorensen Media
17:00 - 18:00 EST Managing Website Performance and Availability,
Joseph Ruscio, CTO at Librato
18:00 - 19:00 EST Process Flexibility and BPaaS,
Arijit Sengupta, CEO at BeyondCore

There are no speakers from Microsoft listed above, so I assume members of the Windows Azure Team will consume most of Monday’s open session slots.

PRNewswire reported Cloud Slam'11 Cloud Computing Conference Announces Rackspace as an Elite Gold Partner in a 3/11/2011 press release:

Rackspace's CTO John Engates to Deliver Headline Keynote Address at Cloud Slam'11 virtual conference

MOUNTAIN VIEW, Calif., March 11, 2011 /PRNewswire/ -- Cloudcor®  today announced that Rackspace® Hosting (NYSE: RAX), the world's leading specialist in the hosting and cloud computing industry, is a leading edge Gold Partner for Cloud Slam'11 - the world's premier hybrid-format cloud computing conference - scheduled for 18 - 22 April, 2011.

At the third annual cloud conference, Rackspace will share world class cloud computing strategies which encompass leading edge efficiencies and functionalities, transforming the way in which companies of all sizes do business today.

"We are delighted to add Rackspace as one of our elite sponsors for the third annual Cloud Slam'11 conference," said Cloud Slam'11 Chairman Khazret Sapenov. "Rackspace is a leading provider of Cloud services. The company's expertise in hosting services underpins its innovation and leadership in its core area of business today."

How To Register for Cloud Slam'11

Admission is priced to fit any budget, from $99 to $299. Please visit

About Cloud Slam'11®

Cloud Slam'11, produced by Cloudcor, Inc, is the premier Cloud Computing event. Cloud Slam'11 will take place April 18-22, 2011, delivered in hybrid format: Day 1 will be held in Mountain View, CA; Days 2 - 5 are virtual. Stay connected via Twitter @CloudSlam.

About Cloudcor®

Cloudcor, Inc provides industry leaders and professionals insights into leading-edge conferences, research, analysis and commentary, as well as a platform to network with leading experts in the cloud computing and IT industry.  For more information visit

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

•• Lydia Leong (@cloudpundit) described a Gartner research note on Cloud IaaS market segmentation in a 3/13/2011 post:

Over the past couple of months, I’ve been mulling over a way to structure and segment the cloud infrastructure as a service market. Some of those ideas have appeared on my blog, and have since been refined, heavily peer reviewed, and then trial-ballooned at clients. The result is a new research note, called The Structure of the Cloud Compute IaaS Market. (Sorry, Gartner clients only.)

In brief, I’ve used a two-axis strategy to break the market into eight segments.

The first axis is your general use case. Are you sourcing infrastructure that is focused on a single application (or a group of tightly-related applications, like your e-commerce application)? Or are you sourcing infrastructure for a range of diverse applications, essentially replacing a part or all of your data center? For the former, you are essentially doing a form of hosting. For the latter, you have a whole host of significantly more complex requirements.

The second axis is the level of management services. The first possibility is unmanaged: you’re doing pretty minimal operations, probably because this is a test/dev environment. The second possibility is self-managed: the provider offers the IaaS platform (data center, hardware, and virtualization), but you do the OS layer on up yourself. The third possibility is that the core foundation is service-provider managed: they also handle the OS management, usually with a security emphasis (patch management, for example). The fourth possibility is that some or all of the rest of the application stack, minus the app itself, is service-provider managed (which usually means DBA support, maintenance of a Java EE or .NET stack of middleware, etc.).

That gets you eight market segments, as follows:

| Scenario | Single Application | Multiple Applications |
| --- | --- | --- |
| Unmanaged | Developer-centric cloud hosting | Virtual lab environment |
| Self-Managed | Scale-out cloud hosting | Self-managed virtual data center |
| Core Foundation Managed | Simple managed cloud hosting | Turnkey virtual data center |
| Application Stack Managed | Complex managed cloud hosting | Cloud-enabled data center outsourcing |

Each of these segments has very different buyer profiles and requirements. No single service provider serves all of these segments. At best, a service provider might serve a few of these segments well, at the current state of the market. These are all cloud IaaS, but each segment serves a different kind of customer need.

Want more details? Read the research note.

Adron Hall (@adronbh) suggested Git Rid of Windows Azure and Amazon Web Services (AWS) SDKs with .NET + Git + AppHarbor Deployment Revolution in a 3/11/2011 post:

I’ve been wanting to do a quick write-up on the state of cloud apps from my perspective.  What’s my perspective?  Well, I’m keeping up with the SDKs from the big players, AWS and Windows Azure.  I’m also working on several cloud applications and providing consulting for people and companies that approach me about which stack to go with, or how to apply their current stacks (such as Ruby on Rails or .NET) when migrating to a cloud service provider.  Cloud services, or more accurately utility computing, have my personal and professional interest.  Above all, I keep trying to stay informed and know the best path for anyone who seeks my advice about moving into the SaaS, PaaS, or IaaS space.  Feel free to contact me with cloud questions:  adronhall at the famous gmail dot com.  :)

Now on to the good tidbits that have been released lately.

The latest Microsoft goodies are available.  For the Windows Azure SDK go check out the Microsoft MSDN Site.

For the latest awesome from AWS (Amazon Web Services) SDK check out the AWS .NET Site.

These two SDKs are great for customers who want to build on the bare bones of either platform.  Whatever language & stack one builds in, whether Ruby on Rails, .NET, Java, or PHP, one is tied to that.  But getting tied to the stack is kind of like breathing air: one has to live with the air one has.  You can’t exactly get a refund very easily on that.

The Cloud SDKs though for Azure and AWS provide a certain amount of lock in, in addition to the stack lock in you’re using.  One of the easiest ways to prevent this lock in is to use a general deployment method backed by source control on something like Git or Mercurial.  So far though, .NET has been left out the cold.  There has been almost zero support for pushing .NET via Git or Mercurial into a cloud.

Ruby on Rails, however, has had support for this since…  well, since the idea popped into the minds of the people at Heroku, EngineYard, and the other companies that are pushing this absolutely amazing and powerful technology pairing.

Again, for .NET, the problem is it has been left in the dust.  Smoked.  It has left a lot of .NET Developers moving to Ruby on Rails (which isn’t new, this is just one more thing that has pulled more developers away from the .NET stack).

Well, that’s changed a bit.  FINALLY someone has gotten the Git + .NET Pairing in the Cloud put together!  FINALLY you can get a cloud application running in a minute or two, instead of the absolutely inane amount of time it takes on Windows Azure (15+ minutes most of the time).  So who has done something about this?

AppHarbor is the first fully deployable solution for the cloud that allows Git + .NET to get going FAST!  I don’t work for these guys at all, so don’t think I’m shilling for them.  I’m just THAT happy that .NET has been pulled out of the dust bins and the community has this option.  I am flippin’ stoked matter of fact.

Currently, because of pricing and ease of deployment, I’ve been solely using AWS.  I can have a .NET MVC app running in AWS in about 5-10 minutes.  Between that speed of setup and the pricing, I pay 2/3 as much as Azure would cost and can deploy much faster with a completely traditional .NET deployment.  No special project type needed, no extra configs, just a straight deployment with full control over the server (i.e. I can RDP in with no problem).  Anyway, the list of reasons I went with AWS over Azure really deserves an entire blog entry unto itself.



With AppHarbor though I can step into the realm of doing exactly the same thing a Ruby on Rails Developer would do with Heroku or EngineYard.  Fully PaaS Capable with the scalability and features without needing to port or migrate to an entirely new stack!  I’ll probably keep a number of things running on AWS (such as the pending WordPress Websites I am about to push up to AWS), but will absolutely be starting up some applications to run in AppHarbor.

If you’re a .NET Developer and you’ve been wanting, looking for, and frustrated that the .NET Community didn’t have a Git + Cloud Deployment option for .NET, wait no longer.  Give AppHarbor a look ASAP!

• Jason Kincaid provided early background information about AppHarbor in his YC-Funded AppHarbor: A Heroku For .NET, Or “Azure Done Right” TechCrunch article of 1/20/2011:

You may be noticing a trend: there are a lot of startups looking to mimic the easy-to-use development platform that made Heroku a hit with Ruby developers and offer a similar solution for use with other languages. In the last few weeks alone we’ve written about PHP Fog (which, as you’d guess, focuses on PHP) and dotCloud (which aims to support a variety of languages). And today we’ve got one more: AppHarbor, a ‘Heroku for .NET’. The company is funded by Y Combinator, and it’s launching today.

AppHarbor will be going up against Microsoft Azure, a platform that developers can use to deploy their code directly from Visual Studio. But co-founder Michael Friis says that Azure has a few issues. For one, it uses Microsoft’s own database system, which can lead to developer lock-in. And it also doesn’t support Git, which many developers prefer to use for collaboration and code deployment.

Other features: AppHarbor has automated unit testing, which developers can run before any code gets deployed (this reduces the chance that they’ll carelessly deploy something that breaks their site). The service also says that it takes 15 seconds to deploy code, rather than the fifteen minute wait seen on Azure.

Friis acknowledges that there are a few potential hurdles. For one, some .NET developers may be used to life without Git, so it may take some work to get them interested (Mercurial support is on the way, which many .NET developers already use, so this may not be a big deal). There’s also going to be competition for the small team, which currently includes Friis, Rune Sørensen and Troels Thomsen.

AppHarbor is first to launch, but there will be others: Meerkatalyst and Moncai are both planning to tackle the same problem, and they won’t be the last.

I’m not sanguine about AppHarbor’s success when jousting with the Microsoft juggernaut for enterprise .NET PaaS services. However, they might succeed as a hosting service for small-business Web sites. AppHarbor offers a free services tier and, as of 3/10/2011, AppHarbor now integrates with Bitbucket to support Mercurial.

The free services tier offers one free Web instance and 20 MB of shared SQL Server or MySQL database space. Additional Web and Worker instances are US$0.05 per hour, the same as Azure’s extra small instances. A 10 GB shared database goes for $10/month, 1 cent more than a SQL Azure instance of the same size.

Tim Anderson (@TimAnderson, pictured below) reported The Guardian enthuses about MongoDB, plans to ditch Oracle for a NoSQL database in a 3/11/2011 post:

The Guardian’s Mat Wall has spoken here at QCon London about why it is migrating its web site away from Oracle and towards MongoDB.

He also said there are moves towards cloud hosting, I think on Amazon’s hosted infrastructure, and that its own data centre can be used as a backup in case of cloud failure – an idea which makes some sense to me.

So what’s wrong with Oracle? The problem is the tight relationship between updates to the code that runs the site, and the Oracle database schema. Significant code updates tend to require schema updates too, which means pausing content updates while this takes place. Journalists on a major news site hate these pauses.

MongoDB by contrast is not a relational database. Rather, it stores documents in JSON (JavaScript Object Notation) format. This means that documents with new attributes can be added to the database at runtime.
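The schema flexibility Wall describes can be sketched in a few lines of Python. The documents and field names below are invented for illustration, and the real pymongo calls appear only as comments:

```python
import json

# Two "documents" destined for the same collection. The second carries a
# new attribute ("live_blog") that the first lacks; no schema change or
# content-update pause is needed before writing it.
article_v1 = {"_id": 1, "headline": "Budget 2011", "body": "..."}
article_v2 = {"_id": 2, "headline": "Election night", "body": "...",
              "live_blog": True}  # attribute added at runtime

collection = [article_v1, article_v2]  # stand-in for a Mongo collection

# Readers simply ignore attributes they do not know about.
for doc in collection:
    print(json.dumps(doc, sort_keys=True))

# With pymongo the inserts would look the same (names are hypothetical):
#   db.articles.insert(article_v1)
#   db.articles.insert(article_v2)
```

In a relational schema, adding `live_blog` would mean an ALTER TABLE and the deployment pause the Guardian wants to avoid.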

Although this was the main motivation for change, the Guardian discovered other benefits. Developer productivity is significantly better with MongoDB and they are enjoying its API.

Currently both MongoDB and Oracle are in use. The Guardian has written its own API layer to wrap database access and handle the complexity of having two radically different data stores.

I enjoyed this talk, partly thanks to Wall’s clear presentation, and partly because I was glad to hear solid pragmatic reasons for moving to a NoSQL data store.

Related posts:

  1. Oracle: a good home for MySQL?
  2. is the wrong kind of cloud says Oracle’s Larry Ellison
  3. Running Oracle on Amazon’s cloud

Matthew Weinberger reported Amazon EC2 Adds Migration For VMware vCenter in a 3/11/2011 post to the ReadWriteCloud blog:

Amazon Web Services wants to make it easier to move virtual machines from a private cloud to the Amazon Elastic Compute Cloud (EC2). The effort involves the so-called Amazon EC2 VM Import Connector vApp for VMware vCenter. It’s a virtual appliance that allows cloud integrators and IT managers to import pre-existing virtual machines into Amazon’s cloud.

According to the official Amazon blog entry announcing the VM Import Connector, it’s as easy for Amazon EC2 customers as selecting the virtual machine in the “familiar” VMware vSphere Client GUI, specifying the AWS Region and Availability Zone, operating system, instance size, and VPC details. Once all that basic stuff is put in, the instance is deployed in EC2.

While on the subject of VMware, this might be a good chance to refer TalkinCloud readers to a pair of virtualization stories that ran on The VAR Guy, our sister site, this week: VMware launched the vCenter Operations cloud management solution, aimed at automating IT-as-a-service. And the Apple iPad is the latest VMware View client.



The HPC in the Cloud blog reported Gemini Releases Amazon S3 Compliant Multi-Tenant Cloud Storage System in a 3/11/2011 press release:

FOSTER CITY, Calif., March 11, 2011 -- Gemini Mobile Technologies ("Gemini") announced today that several Internet Service Providers (ISPs) and Cloud Service Providers (CSPs) have started trials of its Cloudian software.  Cloudian allows ISPs and CSPs to provide Amazon S3 compliant "Cloud Storage on Demand" services to web service providers and enterprise applications. Applications developed for the Amazon S3 API will work on a Cloudian system without modification.  Cloudian allows the same physical storage cluster to be shared by thousands of tenant users, with flexible billing based on usage and traffic.  A fully redundant Cloudian service requires only 2 PCs and scales to hundreds of PCs in multiple datacenters.

Web Service Providers (e.g. gaming, SNS, file sharing, webmail) and Enterprises (e.g. ecommerce, data mining, email) are beginning to solve Big Data problems economically with NOSQL (Not Only SQL) technology, which originates from Cloud Storage technologies at Google, Facebook, and Amazon.  However, as NOSQL technologies are relatively new, early adopters are spending significant resources in the training, set up, and operations of NOSQL server clusters.  Storage OnDemand services enabled by Cloudian lower the technology and cost barriers for companies to benefit from NOSQL Technologies: companies pay for only usage and do not need to buy and operate servers and new database technologies.  Customers can choose which Cloudian enabled ISP/CSP to store their data.  In addition, Cloudian allows ISPs/CSPs to easily set up private clouds for customers requiring custom Service Level Agreements (SLAs).
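To see why "without modification" is plausible, here is a simplified Python sketch of S3-style request signing (AWS Signature Version 2, omitting the x-amz-* header canonicalization). Because the signing depends only on the request and the keys, an S3 client can target an S3-compatible store like Cloudian by changing nothing but the host name. All keys and names below are made up:

```python
import base64
import hashlib
import hmac

def s3_authorization(access_key, secret_key, verb, resource, date,
                     content_md5="", content_type=""):
    """Build an S3-style (Signature Version 2) Authorization header."""
    string_to_sign = "\n".join(
        [verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return "AWS %s:%s" % (access_key, signature)

# The same signed request could go to s3.amazonaws.com or to a
# hypothetical Cloudian endpoint; only the host name differs.
header = s3_authorization("AKIAEXAMPLE", "secret", "GET",
                          "/mybucket/myobject",
                          "Fri, 11 Mar 2011 12:00:00 GMT")
print(header)
```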

"To harness the power of the Social Web, every business with an online presence must find solutions to Big Data problems," said Michael Tso, co-founder and COO of Gemini.  "With Cloudian, NOSQL Cloud Storage technology becomes easier and cheaper, giving users more choices and better service."

Cloudian currently supports the Amazon S3 REST API with a Cassandra database backend.  Future versions will support other NOSQL database backends and their native APIs.  Cloudian is available for free evaluation at:

About Gemini

Gemini is a leading provider of high-performance, cloud-enabled messaging platforms.  Gemini is a pioneer in real time Big Data and NOSQL database technology, developing the Hibari® Key Value Store, Hibari Gigabyte-Maildir Email Store, and the Cloudian™ Multi-Tenant Cloud Storage System.  Gemini has offices in San Francisco, Tokyo and Beijing.  Gemini's customers include NTT DOCOMO, NTT Resonant, Softbank Mobile, Vodafone, and Nextel International; the company also has OEM partnerships with Alcatel-Lucent and Bytemobile.  Gemini is backed by Goldman Sachs, Mitsubishi-UFG Capital, Nomura Securities, Mizuho Capital, Access, and Aplix.  For more information, visit

Jeff Barr (@jeffbarr) reported IAM Now Available for Amazon CloudFront on 3/11/2011:

You can now use AWS Identity and Access Management (IAM) to regulate access to the Amazon CloudFront APIs. For example, you could easily create three separate IAM groups with names and permissions as follows:

Group CloudFrontManagement - Access to all CloudFront APIs.

Group Publisher - Access to the CreateDistribution, GetDistribution, UpdateDistribution, and GetDistributionConfig APIs.

Group StreamingPublisher - Access to the CreateStreamingDistribution, GetStreamingDistribution, UpdateStreamingDistribution, and GetStreamingDistributionConfig APIs.

You can create an IAM policy using the AWS Policy Generator and then apply it using the AWS Management Console.
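As a sketch, a policy for the Publisher group above might look like the following JSON, in the style the AWS Policy Generator emits (the `Resource` wildcard and the `cloudfront:` action prefix are illustrative assumptions):

```json
{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudfront:CreateDistribution",
        "cloudfront:GetDistribution",
        "cloudfront:UpdateDistribution",
        "cloudfront:GetDistributionConfig"
      ],
      "Resource": "*"
    }
  ]
}
```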

A number of third-party tools and toolkits are also providing support for this new feature. Here's what I know about:

CloudBerry Explorer for Amazon S3 (pictured at right) allows you to control access to individual APIs. Read their blog post for more info.

The newest version of Bucket Explorer supports IAM.

Support is expected momentarily in Boto as well.

Like many teams at Amazon, the CloudFront team is now hiring!

Amazon continues its “new features roll.”

Randy Bias (@randybias) backed up my assessment of Amazon’s “new features roll” by his Amazon Web Services’ Rapid Release Cycle post of 3/11/2011:

Last year I asked: “Is Amazon Web Services Winning the Cloud Race?” And during the Cloud Connect 2011 keynote this week I made some assertions that AWS is indeed running away with the ball and backed it up with actual numbers[1].

In addition to the keynote, I provided some updated information on AWS releases during panels I moderated this week at Cloud Connect.  Some audience members requested more details. I wanted to provide these details for everyone.

But first, as a reminder, here’s the graphic showing Amazon’s development momentum in terms of ‘significant’ feature releases per year:

AWS Release Counts by Year

This shows a strong development cycle and continued momentum on Amazon’s part.  The light grey bar on the right indicates a rough prediction for releases this year (2011).  If AWS meets this, that would be roughly 5-6 ‘significant’ releases per month.

The source data, originally in a Google Doc, is published here, and we appreciate any thoughts or feedback you have on our basic methodology, shown in the ‘decision criteria‘ tab.  This is how we decided whether something was ‘significant’ or not.

UPDATED: Changes made for clarification due to great feedback from Chris Hoff (@Beaker).

[1] We’re working on posting a full Cloud Connect 2011 update including a link/embed to my keynote.  Probably some time next week.

<Return to section navigation list>