Thursday, December 16, 2010

Windows Azure and Cloud Computing Posts for 12/16/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters also are available as free HTTP downloads from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

MSDN updated the Web and Data Services for Windows Phone’s Storing Data in the Windows Azure Platform for Windows Phone topic on 12/15/2010:

The Windows Azure platform provides several data storage options for Windows Phone applications. This topic introduces components of the Windows Azure platform and describes how they relate to a general architecture for storing non-public data in the cloud. For more information about how Windows Phone applications can use the Windows Azure platform, see Windows Azure Platform Overview for Windows Phone.

This topic describes current Windows Azure platform features and a basic Windows Azure application architecture. For information about the latest Windows Azure features, see the Windows Azure home page and the Windows Azure AppFabric home page.

WCF services and WCF Data Services hosted on the Windows Azure platform can be consumed by Windows Phone applications just like other HTTP-based web services. For more information about consuming web services in your Windows Phone applications, see Connecting to Web and Data Services for Windows Phone.

Architectural Overview

This basic “client-server” architecture consists of three tiers. The Windows Phone application is the “client” application in the client tier, the Windows Azure Web role is the “server” application in the Web services tier, and Windows Azure storage services and SQL Azure provide data storage in the data storage tier. This architecture is shown in the following diagram:

Windows Azure Platform Storage for Windows Phone

Note: This architecture is designed for non-public data that requires authentication. For public data, blobs and blob containers can be directly exposed to the Web and read via anonymous requests. For more information, see Setting Access Control for Containers.

Client Tier

The client tier consists of the Windows Phone application and isolated storage. Isolated storage is used to store the application data that is needed for subsequent launches of the application. Isolated storage can also be used to temporarily store data before it is saved to the data storage tier. For more information about isolated storage, see Isolated Storage Overview for Windows Phone.

Web Service Tier

The Web service tier consists of a Windows Azure web role that hosts one or more web services based on Windows Communication Foundation (WCF) or WCF Data Services. WCF is a part of the .NET Framework that provides a unified programming model for rapidly building service-oriented applications. WCF Data Services (formerly known as ADO.NET Data Services) enables the creation and consumption of Open Data Protocol (OData) services on the Web. For more information, see the WCF Developer Center and the WCF Data Services Developer Center.

In this architecture, the Web service tier enables abstraction of the data storage tier. By using widely available public specifications to define the protocols and abstract data structures that the web service implements, a wide variety of clients can interact with the service, including Windows Phone applications. Abstraction of the data storage tier also allows the data storage implementation to adapt to changing business requirements without affecting the client tier.

For WCF services, abstract data structures are defined by a data contract. The data contract is an agreement between the client and service that describes the data to be exchanged. For more information, see Using Data Contracts.

For OData services, abstract data structures are defined by a data model. WCF Data Services supports a wide variety of data models. For more information, see Exposing Your Data as a Service (WCF Data Services).

In this architecture, the Windows Phone application communicates with the web service to authenticate users based on their username and password. Depending on the value of the data, user credentials for accessing the Web service tier may or may not be stored on the phone. The Windows Phone application does not directly connect to the data storage tier. Instead, the Web role accesses the data storage tier on behalf of the Windows Phone application.

Security Note: We recommend that Windows Phone applications do not connect to the data storage tier directly. This prevents keys and credentials for the data storage tier from being stored or entered on the phone. In this architecture, only the Web role is granted access to the data storage tier. For more information about web service security, see Web Service Security for Windows Phone.

Data Storage Tier

The data storage tier consists of the Windows Azure storage services and SQL Azure. The Windows Azure storage services include the Blob, Queue, and Table services. SQL Azure provides a relational database service. A Windows Azure role can use any combination of these services to store and serve data to a Windows Phone application. For more information about these services, see Understanding Data Storage Offerings on the Windows Azure Platform.

Note: The Windows Azure platform also provides Windows Azure roles with a temporary storage repository named local storage. A Windows Azure role can access local storage like a file system. Local storage is not recommended for long-term durable storage of your data.

Configuring a Windows Azure Storage Service

Before using a Windows Azure storage service, the respective endpoint must first be created and configured programmatically. For example, to store images to a blob service for the first time, the Web role must first create and configure the blob container that will store the images.
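As a sketch of that first-run configuration, a Web role might ensure its blob container exists before serving its first upload. This assumes the 2010-era Windows Azure StorageClient library; the "DataConnectionString" setting name and "images" container name are invented for illustration:

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Sketch: run once (for example, during role startup) before the
// Web role stores its first image blob.
public static CloudBlobContainer EnsureImageContainer()
{
    CloudStorageAccount account =
        CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    CloudBlobClient blobClient = account.CreateCloudBlobClient();

    // Creates the container on first use; later calls are harmless.
    CloudBlobContainer container = blobClient.GetContainerReference("images");
    container.CreateIfNotExist();

    // Keep the container private so only the Web role can read the blobs.
    container.SetPermissions(new BlobContainerPermissions
    {
        PublicAccess = BlobContainerPublicAccessType.Off
    });

    return container;
}
```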

Configuring a SQL Azure Database

There are several ways that you can create and configure a SQL Azure database:

  • Windows Azure Platform Management Portal: Use Windows Azure Platform Management Portal to create and manage databases.

  • SQL Server Management Studio: Manage a SQL Azure database much as you would an on-premises instance of SQL Server, using SQL Server Management Studio from SQL Server 2008 R2.

  • Transact-SQL: Use a Windows Azure role to programmatically issue Transact-SQL statements to SQL Azure.

Important Note: You must first configure the SQL Azure firewall to connect to the database from inside or outside of the Windows Azure platform. For more information, see How to: Configure the SQL Azure Firewall.
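As a rough illustration of the Transact-SQL option above, a role could open a connection to the logical server's master database and issue CREATE DATABASE. The server name, credentials, and database name below are placeholders, not values from this article:

```csharp
using System.Data.SqlClient;

// Sketch: programmatically create a SQL Azure database from a
// Windows Azure role. Placeholder server and credentials throughout.
public static void CreateStoreDatabase()
{
    const string connStr =
        "Server=tcp:myserver.database.windows.net;Database=master;" +
        "User ID=admin@myserver;Password=<password>;Encrypt=True;";

    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();
        using (var cmd = conn.CreateCommand())
        {
            // CREATE DATABASE must be the only statement in its batch
            // and cannot run inside a transaction.
            cmd.CommandText = "CREATE DATABASE StoreData (MAXSIZE = 1 GB)";
            cmd.ExecuteNonQuery();
        }
    }
}
```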

Getting Started with the Windows Azure Platform

Perform the following steps to get started building web services like the one described in this topic:



Learn: Review the developer centers for the latest information about the products and educational references.

Windows Azure home page

SQL Azure home page

Windows Communication Foundation (WCF) Developer Center

WCF Data Services Developer Center

OData home page

Install: Install the development tools that emulate Windows Azure on your computer. Install SQL Server 2008 R2 Express to develop local or SQL Azure relational databases.

Windows Azure Downloads

Windows Azure Installation Tutorial

SQL Server 2008 R2 Express home page

Join: To use Windows Azure or SQL Azure, you will need an account. Note: An account is not required for developing applications and databases locally.

Windows Azure Getting Started - Get a Paid Account

How to Create a Storage Account

Getting Started with the SQL Azure Database

Create: Create your first Windows Azure local application. Create a local or SQL Azure database.

Walkthrough: Create Your First Windows Azure Local Application

Video: Use SQL Azure to build a cloud application with data access

Develop: Develop a web role that hosts a WCF service or WCF data service. For storage, the web role can use development storage (a local simulation of Windows Azure storage services) or a relational database. If using a database, the local web service can use SQL Azure or a local database that is hosted with SQL Express.

Building Windows Azure Services

Windows Azure Storage Services - Using Development Storage

Development Considerations in SQL Azure

WCF - Getting Started Tutorial

WCF Data Services - Quickstart

Deploy: Deploy your web role application and database to the cloud.

Walkthrough: Deploy and run your Windows Azure application

Developing and Deploying with SQL Azure

<Return to section navigation list> 

SQL Azure Database and Reporting

LarenC posted Clarifying Sync Framework and SQL Server Compact Compatibility to the Sync Framework Team blog on 12/16/2010:

Sync Framework and SQL Server Compact install with several versions of Visual Studio and SQL Server, and each version of Sync Framework is compatible with different versions of SQL Server Compact. This can be pretty confusing!

This article on TechNet Wiki clarifies which versions of Sync Framework are installed with Visual Studio and SQL Server, lays out a matrix that shows which versions of SQL Server Compact are compatible with each version of Sync Framework, and walks you through the process of upgrading a SQL Server Compact 3.5 SP1 database to SQL Server Compact 3.5 SP2.


Sync Framework and SQL Server Compact Versions that Install with Visual Studio and SQL Server

The following table lists the version of Sync Framework and SQL Server Compact that is installed with Visual Studio or SQL Server.


Compatibility between Sync Framework and SQL Server Compact

Early versions of Sync Framework were built to work with SQL Server Compact 3.5 SP1. When SQL Server Compact 3.5 SP2 was released, Sync Framework was redesigned to work with the new public change tracking API provided as part of the SQL Server Compact 3.5 SP2 release. Sync Framework components that use this new API are not compatible with earlier releases of SQL Server Compact, which is why later versions of Sync Framework are no longer compatible with SQL Server Compact 3.5 SP1.

The following table lists the versions of SQL Server Compact that are compatible with the SqlCeSyncProvider class on a desktop computer.


The following table lists the versions of SQL Server Compact that are compatible with the offline-only provider that is represented by the SqlCeClientSyncProvider class on a desktop computer.

Note that the SqlCeClientSyncProvider class should be used for existing applications only and has been superseded by the SqlCeSyncProvider class.


The following table lists the versions of SQL Server Compact that are compatible with the offline-only provider that is represented by the SqlCeClientSyncProvider class on a Windows Mobile 6.1 or 6.5 device.


Upgrading from SQL Server Compact 3.5 SP1 to SQL Server Compact 3.5 SP2

If you have a SQL Server Compact 3.5 SP1 database that participates in a synchronization community and you want to upgrade to a version of Sync Framework that is not compatible with SQL Server Compact 3.5 SP1, you can upgrade both Sync Framework and SQL Server Compact and continue to synchronize your database by following these steps:

  1. Start with a working application that uses Sync Framework 1.0 or Sync Framework 2.0 to synchronize a SQL Server Compact 3.5 SP1 database.
  2. Install Sync Framework 1.0 SP1 or Sync Framework 2.1.
  3. Install SQL Server Compact 3.5 SP2.
  4. Rebuild your application to use Sync Framework 2.1 or use assembly redirection to load the 2.1 assemblies. For more information, see Sync Framework Backwards Compatibility and Interoperability.
  5. Synchronize your SQL Server Compact database. Sync Framework detects that the version of SQL Server Compact has changed and automatically upgrades the format of your database metadata to work correctly with Sync Framework 2.1. For more information, see Upgrading SQL Server Compact.
  6. If you prefer, you can also explicitly upgrade the database by using the SqlCeSyncStoreMetadataUpgrade class.

LaurenC is a technical writer on the Microsoft Sync Framework team.

Michael Otey listed “4 reasons that prove Microsoft is serious about SQL Azure” as a preface to his SQL Azure Enhancements article of 12/15/2010 for SQL Server Magazine:

At its recent Professional Developers Conference (PDC) in Redmond, Microsoft reaffirmed its commitment to SQL Azure, the new cloud-based version of SQL Server, by announcing several important enhancements for SQL Azure. Microsoft is serious about addressing customers’ needs with the SQL Azure platform and bringing it more on par with the capabilities of an on-premises SQL Server installation. Four of the most important recent announcements for SQL Azure follow:

4. Database Backup

Announced before PDC as a part of SQL Azure Service Update 4, the ability to back up SQL Azure databases has been added to SQL Azure. I’m not really sure what Microsoft was thinking when it released the earlier versions without the ability to perform backups—perhaps that SQL Azure’s built-in availability obviated the need to perform backups.

However, Microsoft overlooked the need to provide protection against end-user error. Backing up with bcp or SQL Server Integration Services (SSIS) wasn’t a suitable replacement for database backup.

With SQL Azure Service Update 4, you can use the new copy feature to make SQL Azure-based database backups. Because the backups are copies of the database, they count toward the SQL Azure limit of 150 databases. Database backup is available in SQL Azure now. Learn more about it at Microsoft's MSDN site.

3. Database Manager for SQL Azure

In the past, managing SQL Azure databases was more difficult than managing on-premises systems, mainly because of the lack of management tools. As part of SQL Server 2008 R2, SQL Server Management Studio was modified to be able to connect to SQL Azure. You can find the free SQL Azure compatible version of SQL Server 2008 R2 Management Studio Express at Microsoft's download site.

However, this still means using an on-premises tool to manage your cloud databases. Database Manager for SQL Azure is a free web-based management tool that can be used to create schemas and run queries against SQL Azure databases. Watch a video demo at MSDN's blog about SQL Azure.

2. SQL Azure Data Sync

Tacitly acknowledging that SQL Azure will need to work in conjunction with one or more on-premises SQL Server systems, Microsoft announced the SQL Azure Data Sync feature. SQL Azure Data Sync is a cloud-based data synchronization service that’s built using the Microsoft Sync Framework. It will be able to synchronize data between on-premises SQL Server systems and SQL Azure in the cloud.
It can also replicate data to remote offices, and it will support scheduled synchronization and conflict handling for duplicate data. A second SQL Azure Data Sync CTP should appear by the end of 2010, and the service is expected in the first half of 2011.
Learn more about SQL Azure Data Sync and download the first CTP at Microsoft's SQL Azure site.

1. SQL Azure Reporting

Without a doubt, the most important new announcement at PDC was support for SQL Azure Reporting Services. Reporting Services is one of the most important features of an on-premises SQL Server installation, and it was definitely needed to drive adoption of SQL Azure.
With SQL Azure Reporting Services, reports can be created using BIDS (Business Intelligence Development Studio), published to SQL Azure, and managed using the cloud-based Windows Azure Developer Portal. SQL Azure Reporting is expected to be available in a CTP by the end of 2010 and to be generally available in the first half of 2011.
See the video demoing the new service at the Microsoft website.
Related Reading:

Pawel Kadluczka described EF Feature CTP5: Validation in a 12/15/2010 post to the ADO.NET Team Blog:

Validation is a new feature introduced in Entity Framework Feature Community Technology Preview 5 (CTP5) that enables you to automatically validate entities before trying to save them to the database, as well as to validate entities or their properties "on demand".


Validating entities before trying to save changes can save trips to the database, which are costly operations. Not only can those trips make the application look sluggish due to latency, but they can also cost real money if the application uses SQL Azure, where each transaction costs. Using the database as a tool for validating entities is not really a good idea, either. The database will throw an exception only in the most severe cases, where a value violates the database schema or constraints, so the validation the database can perform is not very rich (unless you start using advanced mechanisms like triggers, but then: is this kind of validation really the responsibility of the database? Shouldn't it be part of the business logic layer, which, with Code First, may already have all the information needed to actually perform validation?).

In addition, figuring out the real cause of the failure may not be easy. While the application developer can unwrap all the nested exceptions and get to the actual exception message thrown by the database to see what went wrong, the application user will not usually be able to (and should not even be expected to be able to) do so. Ideally, the user would see a meaningful error message and a pointer to the value that caused the failure, so it is easy for him to fix the value and retry saving the data. [Emphasis added.]


Automatically! In CTP5, validation is turned on by default. It uses validation attributes (i.e., attributes derived from the System.ComponentModel.DataAnnotations.ValidationAttribute class) to validate entities. “Accidentally,” one of the ways to configure a model when working with Code First is to use validation attributes. As a result, when the model is configured with validation attributes, validation can kick in and check whether entities are valid according to the model. But there is more to the story.

CodeFirst uses just a subset of validation attributes to configure the model, while validation can use any attribute derived from the ValidationAttribute class (this includes CustomValidationAttribute) giving even more control over what is really saved to the database.

Validation will also respect validation attributes put on types and will drill into complex properties to validate their child properties. In addition, if an entity or a complex type implements IValidatableObject interface the IValidatableObject.Validate method will be invoked when validating the given entity or complex property.

Finally, it is also possible to decorate navigation or collection property with validation attributes. In this case only the property itself will be validated but not the related entity or entities.
By default entities will be validated automatically when saving changes. It is also possible to validate all entities, a single entity or a single property (be it complex property or primitive property) “on demand”. Each of these scenarios is described in more details below.

One important thing to mention is that in some cases validation will trigger change detection. This is especially visible with automatic validation invoked from DbContext.SaveChanges(), where DetectChanges() will actually be called twice: once by validation and once by the “real” SaveChanges(). The reason for detecting changes before validation is obvious – the latest data should be validated, because that is what will be sent to the database. The reason for detecting changes after validation is that the entities could have been changed either during validation (CustomValidationAttributes, IValidatableObject.Validate(), and user-created validation attributes have full access to entities and/or properties) or by the user in one of the validation customization points.

Another thing worth noting is that validation currently works only when doing Code First development. We are considering extending it to also work for Model First and Database First.
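A minimal Code First sketch of the ideas above (the class and property names are invented for illustration; GetValidationErrors and ValidationErrors are the CTP5 validation API members discussed in the post):

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;

// An entity configured with validation attributes. Code First also
// uses [Required] and [StringLength] to shape the model.
public class Product
{
    public int Id { get; set; }

    [Required]
    [StringLength(50)]
    public string Name { get; set; }
}

public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public static class ValidationDemo
{
    public static void ShowErrors()
    {
        using (var context = new StoreContext())
        {
            context.Products.Add(new Product()); // Name is missing, so invalid.

            // "On demand" validation of all tracked entities.
            foreach (var result in context.GetValidationErrors())
            {
                foreach (var error in result.ValidationErrors)
                {
                    Console.WriteLine("{0}: {1}",
                        error.PropertyName, error.ErrorMessage);
                }
            }
        }
    }
}
```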

What’s the big deal here?

Since Code First uses validation attributes, you could potentially use the Validator class from System.ComponentModel.DataAnnotations to validate entities even before CTP5 was released. Unfortunately, Validator can validate only the direct child properties of the validated object. This means you need to do extra work to validate complex types that would otherwise not be validated. Even if you do that, you need to make sure you can actually access the invalid value and still know which entity it belongs to – that looks like a few more lines of code. By the way, is your solution generic enough to work with different models and databases? Hopefully, but making it generic must have cost another dozen lines. Now the custom validation is probably pretty big. Then someone adds a few lines in the OnModelCreating method, and the property that was attributed with the [Required] validation attribute is no longer required. Now what? Validation prevents saving a valid value to the database, because the Validator is using an attribute that no longer applies. This is kind of a problem…

Fortunately the built-in validation is able to solve all the above problems without having you add any additional code.

First, validation leverages the model so it knows which properties are complex properties and should be drilled into and which are navigation properties that should not be drilled into. Second, since it uses the model it is not specific for any model or database. Third, it respects configuration overrides made in OnModelCreating method. And you don’t really have to do the whole lot to use it. …

Pawel continues with validation source code for several scenarios.

Here are links to previous posts about EF v4 CTPs:

<Return to section navigation list> 

Marketplace, DataMarket and OData

MSDN updated the Web and Data Services for Windows Phone’s Open Data Protocol (OData) Overview for Windows Phone topic on 12/15/2010:

The Open Data Protocol (OData) is based on an entity and relationship model that enables you to access data in the style of representational state transfer (REST) resources. By using the OData client library for Windows Phone, Windows Phone applications can use the standard HTTP protocol to execute queries, and even to create, update, and delete data from a data service. This functionality is available as a separate library that you can download and install from the OData client libraries download page on CodePlex. The client library generates HTTP requests to any service that supports the OData protocol and transforms the data in the response feed into objects on the client. For more information about OData and existing data services that can be accessed by using the OData client library for Windows Phone, see the OData Web site.

The two main classes of the client library are the DataServiceContext class and the DataServiceCollection class. The DataServiceContext class encapsulates operations that are executed against a specific data service. OData-based services are stateless. However, the DataServiceContext maintains the state of entities on the client between interactions with the data service and in different execution phases of the application. This enables the client to support features such as change tracking and identity management.

Tip: We recommend employing a Model-View-ViewModel (MVVM) design pattern for your data applications where the model is generated based on the model returned by the data service. By using this approach, you can create the DataServiceContext in the ViewModel class along with any needed DataServiceCollection instances. For more general information about the MVVM pattern, see Implementing the Model-View-ViewModel Pattern in a Windows Phone Application.

When using the OData client library for Windows Phone, all requests to an OData service are executed asynchronously by using a uniform resource identifier (URI). Accessing resources by URIs is a limitation of the OData client library for Windows Phone when compared to other .NET Framework client libraries that support OData.

Generating Client Proxy Classes

You can use the DataSvcUtil.exe tool to generate the data classes in your application that represent the data model of an OData service. This tool, which is included with the OData client libraries on CodePlex, connects to the data service and generates the data classes and the data container, which inherits from the DataServiceContext class. For example, the following command generates a client data model based on the Northwind sample data service:


datasvcutil /uri: /out:.\NorthwindModel.cs /Version:2.0 /DataServiceCollection

By using the /DataServiceCollection parameter in the command, the DataServiceCollection classes are generated for each collection in the model. These collections are used for binding data to UI elements in the application.

Binding Data to Controls

The DataServiceCollection class, which inherits from the ObservableCollection class, represents a dynamic data collection that provides notifications when items get added to or removed from the collection. These notifications enable the DataServiceContext to track changes automatically without your having to explicitly call the change tracking methods.

A URI-based query determines which data objects the DataServiceCollection class will contain. This URI is specified as a parameter in the LoadAsync method of the DataServiceCollection class. When executed, this method returns an OData feed that is materialized into data objects in the collection.

The LoadAsync method of the DataServiceCollection class ensures that the results are marshaled to the correct thread, so you do not need to use a Dispatcher object. When you use an instance of DataServiceCollection for data binding, the client ensures that objects tracked by the DataServiceContext remain synchronized with the data in the bound UI element. You do not need to manually report changes in entities in a binding collection to the DataServiceContext object.
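A sketch of this binding pattern, assuming a proxy generated by DataSvcUtil.exe against the public Northwind sample service (the NorthwindEntities and Customer type names and the customersListBox control are assumptions for illustration):

```csharp
using System;
using System.Data.Services.Client;

// Fields typically held by a page or ViewModel class.
private NorthwindModel.NorthwindEntities context;
private DataServiceCollection<NorthwindModel.Customer> customers;

private void LoadCustomers()
{
    context = new NorthwindModel.NorthwindEntities(
        new Uri("http://services.odata.org/Northwind/Northwind.svc/"));

    customers = new DataServiceCollection<NorthwindModel.Customer>(context);

    // LoadCompleted fires on the UI thread, so binding here is safe.
    customers.LoadCompleted += (s, e) =>
    {
        if (e.Error == null)
        {
            customersListBox.ItemsSource = customers;
        }
    };

    // The relative URI names the entity set to load.
    customers.LoadAsync(new Uri("Customers", UriKind.Relative));
}
```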

Accessing and Changing Resources

In a Windows Phone application, all operations against a data service are asynchronous, and entity resources are accessed by URI. You perform asynchronous operations by using pairs of methods on the DataServiceContext class that start with Begin and End, respectively. The Begin methods register a delegate that the service calls when the operation is completed. The End methods should be called in the delegate that is registered to handle the callback from the completed operation.

Note: When using the DataServiceCollection class, the asynchronous operations and marshaling are handled automatically. When using asynchronous operations directly, you must use the BeginInvoke method of the System.Windows.Threading.Dispatcher class to correctly marshal the response operation back to the main application thread (the UI thread) of your application.

When you call the End method to complete an asynchronous operation, you must do so from the same DataServiceContext instance that was used to begin the operation. Each Begin method takes a state parameter that can pass a state object to the callback. This state object is retrieved using the IAsyncResult interface that is supplied with the callback and is used to call the corresponding End method to complete the asynchronous operation.

For example, when you supply the DataServiceContext instance as the state parameter when you call the DataServiceContext.BeginExecute method on the instance, the same DataServiceContext instance is returned as the IAsyncResult parameter. This instance of the DataServiceContext is then used to call the DataServiceContext.EndExecute method to complete the query operation. For more information, see Asynchronous Operations (WCF Data Services).

Querying Resources

The OData client library for Windows Phone enables you to execute URI-based queries against an OData service. When the BeginExecute method on the DataServiceContext class is called, the client library generates an HTTP GET request message to the specified URI. When the corresponding EndExecute method is called, the client library receives the response message and translates it into instances of client data service classes. These classes are tracked by the DataServiceContext class.

Note: OData queries are URI-based. For more information about the URI conventions defined by the OData protocol, see OData: URI Conventions.
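The Begin/End query pattern might look like the following sketch (the svcContext field, the query filter, and the Northwind proxy types are assumptions for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Data.Services.Client;

private void GetGermanCustomers()
{
    // URI-based query; $filter is an OData system query option.
    var query = new Uri("Customers?$filter=Country eq 'Germany'",
        UriKind.Relative);

    // Pass the context as the state object so the callback can finish
    // the operation on the same instance.
    svcContext.BeginExecute<NorthwindModel.Customer>(
        query, OnQueryCompleted, svcContext);
}

private void OnQueryCompleted(IAsyncResult result)
{
    // Marshal back to the UI thread before touching UI-bound state.
    Dispatcher.BeginInvoke(() =>
    {
        // Complete the query on the same context that began it.
        var context = (NorthwindModel.NorthwindEntities)result.AsyncState;
        IEnumerable<NorthwindModel.Customer> customers =
            context.EndExecute<NorthwindModel.Customer>(result);
    });
}
```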

Loading Deferred Content

By default, OData limits the amount of data that a query returns. However, you can explicitly load additional data, including related entities, paged response data, and binary data streams, from the data service when it is needed. When you execute a query, only entities in the addressed entity set are returned.

For example, when a query against the Northwind data service returns Customers entities, by default the related Orders entities are not returned, even though there is a relationship between Customers and Orders. Related entities can be loaded with the original query (eager loading) or on a per-entity basis (explicit loading).

To explicitly load related entities, you must call the BeginLoadProperty and EndLoadProperty methods on the DataServiceContext class. Do this once for each entity for which you want to load related entities. Each call to the LoadProperty methods results in a new request to the data service. To eagerly load related entries, you must include the $expand system query option in the query URI. This loads all related data in a single request, but returns a much larger payload.

Important Note: When deciding on a pattern for loading related entities, consider the performance tradeoff between message size and the number of requests to the data service.

The following query URI shows an example of eager loading the Order and Order_Details objects that belong to the selected customer:


When paging is enabled in the data service, you must explicitly load subsequent data pages from the data service when the number of returned entries exceeds the paging limit. Because it is not possible to determine in advance when paging can occur, we recommend that you enable your application to properly handle a paged OData feed. To load a paged response, you must call the BeginLoadProperty method with the current DataServiceQueryContinuation token. When using a DataServiceCollection class, you can instead call the LoadNextPartialSetAsync method in the same way that you call the LoadAsync method. For an example of this loading pattern, see How to: Consume an OData Service for Windows Phone.
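The "keep loading until no continuation token remains" pattern is independent of the client library. Here is a minimal Python simulation of it (the page contents and token names are invented for illustration):

```python
# Simulated paged responses: each page carries its entries and, when more
# data remains, a continuation token (OData surfaces this as a "next" link).
PAGES = {
    None:    (["ALFKI", "ANATR"], "page2"),
    "page2": (["ANTON", "AROUT"], "page3"),
    "page3": (["BERGS"], None),
}

def load_all(fetch_page):
    """Keep requesting pages while the service returns a continuation."""
    entries, token = [], None
    while True:
        page, token = fetch_page(token)
        entries.extend(page)
        if token is None:          # no continuation -> feed fully loaded
            return entries

customers = load_all(lambda token: PAGES[token])
print(customers)  # ['ALFKI', 'ANATR', 'ANTON', 'AROUT', 'BERGS']
```
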

Modifying Resources and Saving Changes

Use the AddObject, UpdateObject, and DeleteObject methods on the DataServiceContext class to manually track changes on the OData client. These methods enable the client to track added and deleted entities, as well as changes that you make to property values or to relationships between entity instances.

When the proxy classes are generated, an AddTo method is created for each entity in the DataServiceContext class. Use these methods to add a new entity instance to an entity set and report the addition to the context. Those tracked changes are sent back to the data service asynchronously when you call the BeginSaveChanges and EndSaveChanges methods of the DataServiceContext class.

Note: When you use the DataServiceCollection object, changes are automatically reported to the DataServiceContext instance.
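The essence of this "record locally, send on save" model can be illustrated with a toy change tracker (a Python sketch with invented names, not the actual DataServiceContext API):

```python
class ChangeTracker:
    """Toy analogue of DataServiceContext change tracking: operations are
    recorded locally and only sent to the service when saved, as a batch."""
    def __init__(self):
        self.pending = []            # (operation, entity) pairs

    def add_object(self, entity):    self.pending.append(("POST", entity))
    def update_object(self, entity): self.pending.append(("MERGE", entity))
    def delete_object(self, entity): self.pending.append(("DELETE", entity))

    def save_changes(self, send):
        """Flush all tracked changes through the supplied sender callable."""
        batch, self.pending = self.pending, []
        return [send(op, entity) for op, entity in batch]

ctx = ChangeTracker()
ctx.add_object({"CustomerID": "NEWCO"})
ctx.update_object({"CustomerID": "ALFKI"})
responses = ctx.save_changes(lambda op, e: "%s %s" % (op, e["CustomerID"]))
print(responses)  # ['POST NEWCO', 'MERGE ALFKI']
```
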

The following example shows how to call the BeginSaveChanges and EndSaveChanges methods to asynchronously send updates to the Northwind data service:


private void saveChanges_Click(object sender, RoutedEventArgs e)
{
    // Start the saving changes operation.
    svcContext.BeginSaveChanges(OnChangesSaved, svcContext);
}

private void OnChangesSaved(IAsyncResult result)
{
    // Use the Dispatcher to ensure that the asynchronous
    // call returns in the correct thread.
    Dispatcher.BeginInvoke(() =>
    {
        svcContext = result.AsyncState as NorthwindEntities;
        try
        {
            // Complete the save changes operation and display the response.
            svcContext.EndSaveChanges(result);
            messageTextBlock.Text = "Changes saved to the data service.";
        }
        catch (DataServiceRequestException ex)
        {
            // Display the error from the response.
            messageTextBlock.Text = ex.Message;
        }
        catch (InvalidOperationException ex)
        {
            messageTextBlock.Text = ex.Message;
        }
        finally
        {
            // Set the order in the grid.
            ordersGrid.SelectedItem = currentOrder;
        }
    });
}

Note: The Northwind sample data service that is published on the OData Web site is read-only; attempting to save changes returns an error. To successfully execute this code example, you must create your own Northwind sample data service. To do this, complete the steps in the topic How to: Create the Northwind Data Service (WCF Data Services/Silverlight).

Maintaining State during Application Execution

To enable seamless navigation by limiting the phone to run one application at a time, Windows Phone activates and deactivates applications dynamically, raising events for applications to respond to when their state changes. By implementing handlers for these events, you can save and restore the state of the DataServiceContext class as well as any DataServiceCollection instances when your application transitions between active and inactive states. This behavior creates an experience in which it seems to the user like the application continued to run in the background.

The OData client for Windows Phone includes a DataServiceState class that is used to help manage these state transitions. This state management is typically implemented in the code-behind page of the main application. The following table shows Windows Phone state changes and how to use the DataServiceState class for each change in state. …

Table with source code omitted for brevity.

… For an example of this pattern, see the sample application that is posted on the OData client libraries download page on CodePlex.

MSDN updated the Web and Data Services for Windows Phone’s How to: Consume an OData Service for Windows Phone on 12/15/2010:

This topic describes how to consume an Open Data Protocol (OData) feed in a Windows Phone application using the OData client library for Windows Phone. This library is not part of the Windows Phone Application Platform; it must be downloaded separately from the Open Data Protocol - Client Libraries download page on CodePlex.

The OData client library for Windows Phone generates HTTP requests to a data service that supports OData and transforms the entries in the response feed into objects on the client. Using this client, you can bind Windows Phone controls, such as ListBox or TextBox, to an instance of a DataServiceCollection class that contains an OData data feed. This class handles the events raised by the controls to keep the DataServiceContext class synchronized with changes that are made to data in the controls. For more information about using the OData protocol with Windows Phone applications, see Open Data Protocol (OData) Overview for Windows Phone.

A uniform resource identifier (URI)-based query determines which data objects the DataServiceCollection class will contain. This URI is specified as a parameter in the LoadAsync method of the DataServiceCollection class. When executed, this method returns an OData feed that is materialized into data objects in the collection. Data from the collection is displayed by controls when this collection is the binding source for the controls. For more information about querying an OData service by using URIs, see the OData: URI Conventions page at the OData Web site.

The procedures in this topic show how to perform the following tasks:

  1. Create a new Windows Phone application

  2. Download and install the OData client for Windows Phone

  3. Generate client data service classes that support accessing an OData service

  4. Query the OData service and bind the results to controls in the application

NoteNote: This example demonstrates basic data binding to a single page in a Windows Phone application by using a DataServiceCollection binding collection. For an example of a multi-page Windows Phone application that uses the Model-View-ViewModel (MVVM) design pattern and the DataServiceState object to maintain state during execution, see the project on the Open Data Protocol - Client Libraries download page on CodePlex.

This example uses the Northwind sample data service that is published on the OData Web site. This sample data service is read-only; attempting to save changes will return an error.

To create the Windows Phone application
  1. In Solution Explorer, right-click the Solution, point to Add, and then select New Project.

  2. In the Add New Project dialog box, select Silverlight for Windows Phone from the Installed Templates pane, and then select the Windows Phone Application template. Name the project ODataNorthwindPhone.

  3. Click OK. This creates the new Windows Phone application project.

  4. Download the file from the Open Data Protocol - Client Libraries download page on CodePlex and extract the contents of the compressed file to your development computer. This library is permitted for use in production applications, which means that it can be used to build applications that qualify for submission to the Windows Phone marketplace.

  5. Navigate to the directory where you extracted the library files and execute the following command at the command prompt (without line breaks):

    datasvcutil /uri: /out:.\NorthwindModel.cs /Version:2.0 /DataServiceCollection

    This generates the client proxy classes that are required by the Windows Phone application to access the Northwind sample data service. These data classes are created in the NorthwindModel namespace.

  6. In Solution Explorer, right-click the project and select Add Reference. In the Add Reference dialog box, click the browse tab, navigate to the directory where you extracted the library files, select System.Data.Services.Client.dll, and then click OK. This adds a reference to the OData client library assembly to the project.

  7. In Solution Explorer, right-click the project, select Add and then Existing Item. In the Add Existing Item dialog box, navigate to the directory where you extracted the library files and select the NorthwindModel.cs file. This adds the generated client data classes to the project.

To define the Windows Phone application user interface

In the project, double-click the MainPage.xaml file. This opens the XAML markup for the MainPage class that is the user interface for the Windows Phone application. Replace the existing XAML markup with the following markup, which defines the user interface for the main page that displays customer information:

<phone:PhoneApplicationPage
    x:Class="ODataNorthwindPhone.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    xmlns:shell="clr-namespace:Microsoft.Phone.Shell;assembly=Microsoft.Phone"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:my="clr-namespace:NorthwindModel"
    mc:Ignorable="d" d:DesignWidth="480" d:DesignHeight="768" 
    d:DataContext="{d:DesignInstance Type=my:Customer, CreateList=True}"
    FontFamily="{StaticResource PhoneFontFamilyNormal}"
    FontSize="{StaticResource PhoneFontSizeNormal}"
    Foreground="{StaticResource PhoneForegroundBrush}"
    SupportedOrientations="Portrait" Orientation="Portrait"
    shell:SystemTray.IsVisible="True" Loaded="PhoneApplicationPage_Loaded">
    <Grid x:Name="LayoutRoot" Background="Transparent">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="*"/>
        </Grid.RowDefinitions>
        <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
            <TextBlock x:Name="ApplicationTitle" Text="Northwind Sales" 
                       Style="{StaticResource PhoneTextNormalStyle}"/>
            <TextBlock x:Name="PageTitle" Text="Customers" Margin="9,-7,0,0" 
                       Style="{StaticResource PhoneTextTitle1Style}"/>
        </StackPanel>
        <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
            <ListBox x:Name="MainListBox" Margin="0,0,-12,0" ItemsSource="{Binding}">
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <StackPanel Margin="0,0,0,17" Width="432">
                            <TextBlock Text="{Binding Path=CompanyName}" TextWrapping="NoWrap" 
                                       Style="{StaticResource PhoneTextExtraLargeStyle}"/>
                            <TextBlock Text="{Binding Path=ContactName}" TextWrapping="NoWrap" 
                                       Margin="12,-6,12,0" Style="{StaticResource PhoneTextSubtleStyle}"/>
                            <TextBlock Text="{Binding Path=Phone}" TextWrapping="NoWrap" Margin="12,-6,12,0" 
                                       Style="{StaticResource PhoneTextSubtleStyle}"/>
                        </StackPanel>
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
        </Grid>
    </Grid>
</phone:PhoneApplicationPage>
To add the code that binds data service data to controls in the Windows Phone application
  1. In the project, open the code page for the MainPage.xaml file, and add the following using statements:

    using System.Data.Services.Client;
    using NorthwindModel;
  2. Add the following declarations to the MainPage class:

    private DataServiceContext northwind;
    private readonly Uri northwindUri = 
        new Uri("");
    private DataServiceCollection<Customer> customers;
    private readonly Uri customersFeed = new Uri("/Customers", UriKind.Relative);

    This includes the URIs of both the data service and the Customers feed.

  3. Add the following PhoneApplicationPage_Loaded method to the MainPage class:

    private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e)
    {
        // Initialize the context and the binding collection. 
        northwind = new DataServiceContext(northwindUri);
        customers = new DataServiceCollection<Customer>(northwind);

        // Register for the LoadCompleted event.
        customers.LoadCompleted 
            += new EventHandler<LoadCompletedEventArgs>(customers_LoadCompleted);

        // Load the customers feed by using the URI.
        customers.LoadAsync(customersFeed);
    }
    When the page is loaded, this code initializes the binding collection and the context, and registers the method that handles the LoadCompleted event of the DataServiceCollection object, raised by the binding collection.

  4. Insert the following code into the MainPage class:

    void customers_LoadCompleted(object sender, LoadCompletedEventArgs e)
    {
        if (e.Error == null)
        {
            // Handling for a paged data feed.
            if (customers.Continuation != null)
                // Automatically load the next page.
                customers.LoadNextPartialSetAsync();
            else
                // Set the data context of the listbox control to the sample data.
                this.LayoutRoot.DataContext = customers;
        }
        else
            MessageBox.Show(string.Format("An error has occurred: {0}", e.Error.Message));
    }

    When the LoadCompleted event is handled, the following operations are performed if the request returns successfully:

    • The LoadNextPartialSetAsync method of the DataServiceCollection object is called to load subsequent results pages, as long as the Continuation property of the DataServiceCollection object returns a value.

    • The collection of loaded Customer objects is bound to the DataContext property of the element that is the master binding object for all controls in the page.

Atanas Korchev described Binding Telerik Grid for ASP.NET MVC to OData on 12/16/2010:

We have just made a nice demo application showing how to bind the Telerik Grid for ASP.NET MVC to OData, using Telerik TV as the OData producer. The grid supports paging, sorting, and filtering using OData's query options.


To do that, we implemented a helper JavaScript routine (defined in an external JavaScript file that is included in the sample project) which binds the grid. Here is what the code looks like:

.Columns(columns =>
{
    columns.Bound(v => v.ImageUrl).Sortable(false).Filterable(false).Width(200).HtmlAttributes(new { style = "text-align:center" });
    columns.Bound(v => v.Description);
    columns.Bound(v => v.DatePublish).Format("{0:d}").Width(200);
})
.Scrollable(scrolling => scrolling.Height(600))
.ClientEvents(events => events.OnDataBinding("Grid_onDataBinding").OnRowDataBound("Grid_onRowDataBound"))

<script type="text/javascript">
function Grid_onRowDataBound(e) {
    e.row.cells[0].innerHTML = '<a href="' + e.dataItem.Url + '"><img src="' + e.dataItem.ImageUrl + '" /></a>';
}

function Grid_onDataBinding(e) {
    var grid = $(this).data('tGrid');
    // the bindGrid function is defined in telerik.grid.odata.js which is located in the ~/Scripts folder
    $.odata.bindGrid(grid, '');
}
</script>

//Include the helper JavaScript file
Html.Telerik().ScriptRegistrar().DefaultGroup(g => g.Add("telerik.grid.odata.js"));

We will provide native (read ‘codeless’) OData binding support in a future release.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Alik Levin reported Windows Azure AppFabric CTP December 2010 – New Features in Access Control Service on 12/16/2010:

The original announcement is available on the Windows Azure AppFabric team’s blog here.

Here is the list of the new features/changes as per the post:

  • Improved error messages by adding sub-codes and more detailed descriptions.
  • Added a primary/secondary flag to certificates to allow an administrator to control the lifecycle.
  • Added support for importing the Relying Party from the Federation Metadata.
  • Updated the Management Portal to address usability improvements and support for the new features.
  • Added support for custom error handling when signing in to a Relying Party application.

Details on each change and new feature are available at Release Notes - December Labs Release.


See PRNewswire reported Novell Joins Microsoft Windows Azure Technology Adoption Program to Test and Validate Novell Cloud Security Service on 12/16/2010 in the Cloud Security and Governance  section.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

Shaun Xu described Communication Between Your PC and Azure VM via Windows Azure Connect in a detailed tutorial of 12/16/2010:

With the new release of the Windows Azure platform, there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an Azure virtual machine. Now I would like to talk about another cool feature – Windows Azure Connect.

What’s Windows Azure Connect

I would like to quote the definition of Windows Azure Connect from MSDN:

With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services.

There is also a diagram available on MSDN that I would like to reproduce here:


As we can see, with Windows Azure Connect, Worker Role 1 and Web Role 1 are connected to development machines and database servers, some of which are inside the organization and some of which are not.

With Windows Azure Connect, roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to a local database and access shared local resources such as files, folders, and printers.

Difference between Windows Azure Connect and AppFabric

It may seem that Windows Azure Connect duplicates Windows Azure AppFabric. Both aim to solve the problem of communicating between resources in the cloud and inside the local network. The table below lists the differences as I understand them.


And here are some scenarios showing which of them should be used.


How to Enable Windows Azure Connect

OK, we have covered what Windows Azure Connect is and how it differs from Windows Azure AppFabric. Now let's see how to enable and use Windows Azure Connect. First of all, since this feature is in the CTP stage, we must apply before using it. On the Windows Azure Portal we can see our CTP feature status on the Home, Beta Programs page.


You can apply to join the beta programs from this page. Microsoft will send you an email (to the address of your Live ID) after a few days, when the feature is available.

In my case, Windows Azure Connect has been activated by Microsoft, so we can click the Connect button at the top, or click the Virtual Network item in the left navigation bar.

The first thing we need to do, if this is our first visit to the Connect page, is to enable Windows Azure Connect.


After that we can see our Windows Azure Connect information in this page.


Add a Local Machine to Azure Connect

As explained above, Windows Azure Connect makes an IP-sec connection between local machines and Azure role instances, so we first add a local machine to our Connect. To do this, click the Install Local Endpoint button at the top; the portal will then give us a URL. Open this URL on the machine we want to add, and it will download the endpoint software.


This software is installed on each local machine that we want to join to the Connect. After installation, a tray icon appears to indicate that the machine has joined our Connect.



The local endpoint software refreshes its status with the Windows Azure platform every 5 minutes, but we can click the Refresh button to retrieve the latest status at once. Now my local machine is ready to connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.


Add a Windows Azure Role to Azure Connect

Let's create a very simple Azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect, open the role's properties from Solution Explorer in Visual Studio, select the Virtual Network tab, and check Activate Windows Azure Connect.

The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token. Click it and the portal will display the token.


Copy this token and paste it into the box on the Visual Studio tab.


Then deploy the application to Azure. After the deployment completes, we can see the role instance listed in the Windows Azure Portal's Virtual Network section.


Establish the Connect Group

The final task is to create a connect group containing the machines and role instances that need to be connected to each other. This can be done very easily in the portal.

The machines and instances will NOT be connected until we create a group for them. A machine or instance can be used in one or more groups.
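In other words, two endpoints can talk only when at least one group contains them both. That rule can be expressed as a tiny predicate (a Python sketch with made-up endpoint names, not part of any Azure API):

```python
# Each connect group is a set of endpoint names (local machines or role
# instances); an endpoint may appear in more than one group.
groups = {
    "dev-group": {"my-laptop", "WebRole1_IN_0"},
    "db-group":  {"sql-server", "WebRole1_IN_0", "WorkerRole1_IN_0"},
}

def connected(a, b):
    """Two endpoints can communicate only if some group contains both."""
    return any(a in members and b in members for members in groups.values())

print(connected("my-laptop", "WebRole1_IN_0"))  # True  (share dev-group)
print(connected("my-laptop", "sql-server"))     # False (no common group)
```
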

In the Virtual Network section, click the Groups and Roles node in the left navigation bar, and then click the Create Group button at the top. This brings up a dialog. We need to specify a group name and description, and then select the local computers and Azure role instances to include in this group.


After the Azure Fabric updates the group settings, we can see the groups and endpoints on the page.


And if we switch back to the local machine, we can see that the tray icon has changed and the status has turned to Connected.


Windows Azure Connect updates the group information every 5 minutes. If the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.

Test the Azure Connect between the Local Machine and the Azure Role Instance

Now our local machine and Azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it by using the machine name or the IP address.

Windows Azure Connect uses IPv6 to connect local machines and role instances. You can get the IP address from the Windows Azure Portal's Virtual Network section when you select an endpoint.

I don't want to walk through a full example of how to use the Connect, but I would like to show two very simple tests. The first one is PING.

When a local machine and a role instance are connected through Windows Azure Connect, we can PING either of them if we open the ICMP protocol in the Firewall settings. To do this, we need to run a command before testing. Open a command window on both the local machine and the role instance, and execute the following command:

netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6

Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting. For the full discussion, please visit here.


You can use the Remote Desktop Access feature to log on to the Azure role instance. Please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure.

Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my Azure instance.


We can use the IPv6 address to PING each other as well. In the following image, I PING my role instance from my local machine through the IPv6 address.


Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and after logging on to the role instance, we can see the folder's contents in the file explorer window.



In this blog post I introduced another new feature – Windows Azure Connect. With this feature, our local resources and role instances (virtual machines) can be connected to each other. In this way, our Azure applications can use local resources such as database servers and printers without exposing them to the Internet.

So Xiyan transliterates to Shaun?

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

MSDN updated the Web and Data Services for Windows Phone’s Windows Azure Platform Overview for Windows Phone topic on 12/15/2010:

The Windows Azure platform is an internet-scale cloud services platform hosted through Microsoft data centers. It provides highly-scalable processing and storage capabilities, a relational database service, and premium data subscriptions that you can use to build compelling Windows Phone applications.

image This topic provides an overview of the Windows Azure platform features that you can use with the Windows Phone Application Platform. For information about how to use the Windows Azure platform for data storage, see Storing Data in the Windows Azure Platform for Windows Phone. For information about using web services with your Windows Phone applications, see Connecting to Web and Data Services for Windows Phone.

Windows Azure Compute Service

The Windows Azure Compute service is a runtime execution environment for managed and native code. An application built on the Windows Azure Compute service is structured as one or more roles. When it executes, the application typically runs two or more instances of each role, with each instance running as its own virtual machine (VM).

You can use Windows Azure roles to offload work from your Windows Phone applications and perform tasks that are difficult or not possible with the Windows Phone Application Platform. For example, a web role could directly query a SQL Azure relational database and expose the data via a Windows Communication Foundation (WCF) service. For more information about writing Windows Phone applications that consume web services, see Connecting to Web and Data Services for Windows Phone.

There are several benefits to using a Windows Azure Compute service in conjunction with your Windows Phone application:

  • Programming options: When writing managed code for a Windows Azure role, developers can use many of the .NET Framework 4 libraries common to server and desktop applications. Although a substantial number of Silverlight and XNA components are available for developing a Windows Phone application, there are limits to what can be done with those components.

  • Availability: Windows Azure roles run in a highly-available internet-scale hosting environment built on geographically distributed data centers. Considering that the phone can be turned off, a Windows Azure role may be a better choice for long-running tasks or code that needs to be running all the time.

  • Processing capabilities: The processing capabilities of a Windows Azure role can scale elastically across servers to meet increasing or decreasing demand. In contrast, on a Windows Phone, a single processor with finite capabilities is shared by all applications on the phone.

A Windows Azure web role can provide Windows Phone applications access to data by hosting multiple web services including Windows Communication Foundation (WCF) services and WCF data services. WCF is a part of the .NET Framework that provides a unified programming model for rapidly building service-oriented applications. WCF Data Services enables the creation and consumption of Open Data Protocol (OData) services from the Web (formerly known as ADO.NET Data Services). For more information, see the WCF Developer Center and the WCF Data Services Developer Center.

Windows Azure Storage Services

Storage resources on the phone are limited. To optimize the user experience, Windows Phone applications should minimize the use of isolated storage and only store what is necessary for subsequent launches of the application. One way to minimize the use of isolated storage is to use Windows Azure storage services instead. For more information about isolated storage best practices, see Isolated Storage Best Practices for Windows Phone.

The Windows Azure storage services provide persistent, durable storage in the cloud. As with the Windows Azure Compute service, Windows Azure storage services can scale elastically to meet increasing or decreasing demand. There are three types of storage services available:

  • Blob service: Use this service for storing files, such as binary and text data. For more information, see Blob Service Concepts.

  • Queue service: Use this service for storing and delivering messages that may be accessed by another client (another Windows Phone application or any other application that can access the Queue service). For more information, see Queue Service Concepts.

  • Table service: Use this service for structured storage of non-relational data. A Table is a set of entities, which contain a set of properties. For more information, see Table Service Concepts.

Note: To access Windows Azure storage services, you must have a storage account, which is provided through the Windows Azure Platform Management Portal. For more information, see How to Create a Storage Account.

We do not recommend that Windows Phone applications store the storage account credentials on the phone. Rather than accessing the Windows Azure storage services directly, we recommend that Windows Phone applications use a web service to store and retrieve data. The exception to this recommendation is for public blob data that is intended for anonymous access. For more information about using Windows Azure storage services, see Storing Data in the Windows Azure Platform for Windows Phone.
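The anonymous-access exception works because a public blob is reachable with a plain HTTP GET against a predictable URL, with no account key involved. A quick Python sketch of how such a URL is composed (the account, container, and blob names are made up):

```python
def public_blob_url(account, container, blob):
    """Anonymous blob access is a plain GET against the blob endpoint;
    no storage-account key is involved, so no secret lives on the phone."""
    return "http://%s.blob.core.windows.net/%s/%s" % (account, container, blob)

url = public_blob_url("myaccount", "images", "logo.png")
print(url)  # http://myaccount.blob.core.windows.net/images/logo.png
# Writes, or reads from a private container, would require the account key,
# which is why those operations belong behind a web service instead.
```
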

SQL Azure

Microsoft SQL Azure Database is a cloud-based relational database service built on SQL Server technologies. It is a highly available, scalable, multi-tenant database service hosted by Microsoft in the cloud. SQL Azure Database helps to ease provisioning and deployment of multiple databases. Developers do not have to install, set up, update, or manage any software. High availability and fault tolerance are built-in and no physical administration is required.

Similar to an on-premise instance of SQL Server, SQL Azure exposes a tabular data stream (TDS) interface for Transact-SQL-based database access. Because the Windows Phone Application Platform does not support the TDS protocol, a Windows Phone application must use a web service to store and retrieve data in a SQL Azure database. For more information about using SQL Azure with Windows Phone, see Storing Data in the Windows Azure Platform for Windows Phone.

SQL Azure enables a familiar development environment. Developers can connect to SQL Azure with SQL Server Management Studio (SQL Server 2008 R2) and create database tables, indexes, views, stored procedures, and triggers. For more information about SQL Azure, see SQL Azure Database Concepts.

Windows Azure Marketplace DataMarket

Windows Azure Marketplace DataMarket is an information marketplace that simplifies publishing and consuming data of all types. The DataMarket enables developers to discover, preview, purchase, and manage premium data subscriptions. For more information, see the Windows Azure Marketplace DataMarket home page.

The DataMarket exposes data using OData feeds. The Open Data Protocol (OData) is a Web protocol for querying and updating data. The DataMarket OData feeds provide a consistent Representational State Transfer (REST)-based API across all datasets to help simplify development. Because DataMarket feeds are based on OData, your Windows Phone application can consume them with the OData Client Library for Windows Phone or use the HttpWebRequest class. For more information, see Connecting to Web and Data Services for Windows Phone.

Note: In this release of the Windows Phone Application Platform, the Visual Studio Add Service Reference feature is not supported for OData data services. To generate a proxy class for your application, use the DataSvcUtil.exe utility that is part of the OData Client Library for Windows Phone. For more information, see How to: Consume an OData Service for Windows Phone.

There are two types of DataMarket datasets: those that support flexible queries and those that support fixed queries. Flexible query datasets support a wider range of REST-based queries. Fixed query datasets support only a fixed number of queries and supply a C# client library to help client applications work with data. For more information about these query types, see Fixed and Flexible Query Types.

See Also: Other Resources

Cory Fowler (@SyntaxC4) published Export & Upload a Certificate to an Azure Hosted Service on 12/16/2010:

Last night I started doing some research into the new features of the Windows Azure SDK 1.3 for a future blog series which I’ve been thinking about lately. The first step was to figure out what was installed on the default Windows Azure image, in order to determine what would need to be installed for my Proof of Concept.

There are two ways to set up the RDP connection into an Azure instance: a developer centric approach, which is configured in Visual Studio, and an IT centric approach which is configured through the [new] Windows Azure Platform Portal. I had thought it might be cool if this functionality was available using the Service Management API, however this is not publicly exposed [which probably is a good thing].

To minimize content repetition, I decided to split the export and upload process out into this blog post.

Exporting a Certificate

1. Open the Certificate (From Visual Studio Dialog, IIS or Certificate Snap-in in MMC)


2. Navigate to the Details Tab. Click on Copy to File…


3. Start the Export Process.


4. Select “Yes, export the private key”.


5. Click Next.


6. Provide a password to protect the private key.


7. Browse to a path to save the .pfx file.


8. Save the file.


9. Finish the Wizard.



Setting up a Windows Azure Hosted Service

If you’d like to see a more detailed explanation of this, I released some videos with Barry Gervin in my last entry, “Post #AzureFest Follow-up Videos”.

1. Create New Hosted Service.


2. Fill out the Creation Form.


Setting up a Windows Azure Storage Service

The Visual Studio Tools will not allow you to deploy a project without setting up a Storage Service.

1. Create a New Storage Service.


2. Fill out the Creation Form.


Upload the Certificate

1. Select the Certificates folder under the Hosted Service to RDP into. Click Add Certificate.


2. Browse to the Certificate (saved in last section).


3. Enter the Password for the Certificate.


4. Ensure the Certificate is Uploaded.


Moving Forward

This entry overviewed some of the common setup steps between Setting up RDP using Visual Studio, and Manual Configuration. In the Manual Configuration post I will overview how to use the Service Management API to install the Certificate to the server (instead of the Portal as described above).
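As a preview of that Service Management API approach, the sketch below (Python; the subscription ID and service name are placeholders, and the payload shape reflects my reading of the API's Add Certificate operation rather than code from this post) builds the request body that carries the exported .pfx file:

```python
import base64

AZURE_NS = "http://schemas.microsoft.com/windowsazure"

def build_add_certificate_body(pfx_bytes, password):
    """Build the XML payload for the Service Management API's
    Add Certificate operation: the exported .pfx file is base64-encoded
    and sent together with the password protecting its private key."""
    data = base64.b64encode(pfx_bytes).decode("ascii")
    return (
        '<CertificateFile xmlns="%s">'
        '<Data>%s</Data>'
        '<CertificateFormat>pfx</CertificateFormat>'
        '<Password>%s</Password>'
        '</CertificateFile>'
    ) % (AZURE_NS, data, password)

# The body would be POSTed (with an x-ms-version header and a management
# certificate for authentication) to a URI of this shape:
# https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/certificates
print(build_add_certificate_body(b"fake-pfx-bytes", "P@ssw0rd!"))
```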

Happy Clouding!

Wade Wegner recommended that you Use the WAPTK to help setup your Windows Azure development environment in a 12/16/2010 post:

As awesome as it is to have a lot of great local development tools, it can also be difficult to set up new development environments.  Downloading and installing the Windows Azure SDK is really only one step – you also have to ensure that local services are configured correctly (e.g. IIS), you may need additional SDKs (e.g. the Windows Identity Foundation SDK), set up additional tools (e.g. SSMS), and so on.  This takes not only time but also organizational skills.

So, is there anything that can help manage this process?

imageYes.  As part of the Windows Azure Platform Training Kit (WAPTK), we ship a Dependency Checker tool along with scripts that check your system for all the required software to complete the hands-on labs in the kit.  I routinely use this tool to ensure that I have all the software required in order to build great applications for Windows Azure.

Try it out.  First, grab latest version of the WAPTK here.  Then follow these steps.

  1. Click the Prerequisites tab.
  2. Click the Check dependencies link.
  3. If you are prompted to install the Dependency Checker tool, click OK to start the installation.
  4. Once the Dependency Checker tool is installed, hit F5 to refresh the page (this allows the script to launch the Dependency Checker tool).
  5. When prompted to allow the ConfigurationWizard to run, click the Allow button.
  6. Now that the Configuration Wizard has launched, click Next to begin.
  7. The first (and only) step is to check prerequisites for the Training Kit.  Click Next to continue.
  8. The tool will scan your system and look for required software.  When it finds that your system is missing required software, you are both notified and provided with a link to Install the software.
  9. Clicking the Install link will generally launch a process to install the feature.
  10. In some cases you will have the option to download the missing feature or software.  Click the Download links to launch a download.  You will then have to walk through the installation process for that feature.
  11. At any point you can click the Rescan button to scan your system again.  Any updates you’ve made will be reflected on the scan.
  12. Once you have all of the required software, you’ll be able to complete the wizard. However, if there is software you do not need or want to install, you can cancel at any time.

I hope you find this useful!  Please let me know if you have any feedback.

I’m checking to see if the December WAPTK has the problems I reported for the November issue in my Strange Behavior of Windows Azure Platform Training Kit with Windows Azure SDK v1.3 under 64-bit Windows 7 post of 12/8/2010.

Update 12/16/2010 1:45 PM PST: WAPTK’s December 2010 Update solved most problems reported in my post. However, a runtime exception still occurs in the GuestBook_WorkerRole.WorkerRole.vb’s OnStart() function. Check updated post here.

Microsoft Showcase interviewed Chris Kabat in a 00:06:10 MPS Partners Windows Azure Channel 9 Video segment on 12/15/2010:


MPS Partners, a Microsoft Gold Certified Partner, implements [I]nternet scale applications for their customers on a regular basis. The company has a go-to-market solution that defines a set of services around ‘Cloud Composite Applications’, and this solution involves the ability to pull data from both on premise and off premise applications and deliver in a single portal hosted in the cloud. The company has made an on-premise custom content management framework available on the Cloud in two days. MPS Partners sees value in how on-premise applications can be exposed to the cloud using the Service Bus.

Check the other 70 Azure-related videos, too.

Microsoft PressPass reported “Payment solution provided by NVoicePay is based on Windows Azure platform with Silverlight interface across the Web, PC and phone” as a preface to its Customer Spotlight: ADP Enables Plug-and-Play Payment Processing for Thousands of Car Dealers Throughout North America With Windows Azure press release of 12/16/2010:

The Dealer Services Group of Automatic Data Processing Inc. has added NVoicePay as a participant to its Third Party Access Program. NVoicePay, a Portland, Ore.-based software provider, helps mutual clients eliminate paper invoices and checks with an integrated electronic payments solution powered by the Windows Azure cloud platform, Microsoft Corp. reported today. [Link added.]

NVoicePay’s AP Assist e-payment solution offers substantial savings for dealerships that opt into the program. NVoicePay estimates that paying invoices manually ends up costing several dollars per check; however, by reducing that transaction cost, each dealership stands to save tens of thousands of dollars per year depending on its size.

“Like most midsize companies, many dealerships are using manual processes for their accounts payables, which is fraught with errors and inefficiency,” said Clifton E. Mason, vice president of product marketing for ADP Dealer Services. “NVoicePay’s hosted solution is integrated to ADP’s existing dealer management system, which allows our clients to easily process payables electronically for less than the cost of a postage stamp.”

The NVoicePay solution relies heavily on Microsoft Silverlight to enable a great user experience across multiple platforms, including PC, phone and Web. For example, as part of the solution, a suite of Windows Phone 7 applications allow financial controllers to quickly perform functions such as approving pending payments and checking payment status while on the go.

On the back end, the solution is implemented on the Windows Azure platform. This gives it the ability to work easily with a range of existing systems, and it also provides the massive scalability essential for a growth-stage business. According to NVoicePay, leveraging Windows Azure to enable its payment network allowed the NVoicePay solution to go from zero to nearly $50 million in payment traffic in a single year.

“The Windows Azure model of paying only for the resources you need has been key for us as an early stage company because the costs associated with provisioning and maintaining an infrastructure that could support the scalability we require would have been prohibitive,” said Karla Friede, chief executive officer, NVoicePay. “Using the Windows Azure platform, we’ve been able to deliver enterprise-class services at a small-business price, and that’s a requirement to crack the midmarket.”

About ADP

Automatic Data Processing, Inc. (Nasdaq: ADP), with nearly $9 billion in revenues and about 560,000 clients, is one of the world’s largest providers of business outsourcing solutions. Leveraging 60 years of experience, ADP offers the widest range of HR, payroll, tax and benefits administration solutions from a single source. ADP’s easy-to-use solutions for employers provide superior value to companies of all types and sizes. ADP is also a leading provider of integrated computing solutions to auto, truck, motorcycle, marine, recreational, heavy vehicle and agricultural vehicle dealers throughout the world. For more information about ADP or to contact a local ADP sales office, reach us at 1-800-CALL-ADP ext. 411 (1-800-225-5237 ext. 411).

About NVoicePay

NVoicePay is a B2B Payment Network addressing the opportunity of moving invoice payments from paper checks to electronic networks for mid-market businesses. NVoicePay’s simple efficient electronic payments have made the company the fastest growing payment network for business.

About Microsoft

Founded in 1975, Microsoft (Nasdaq “MSFT”) is the worldwide leader in software, services and solutions that help people and businesses realize their full potential.

Mark Kovalcson posted MS CRM 2011, the “Cloud”, Azure and the future to his CRM Scape blog on 12/15/2010:

Something really important is happening right now, and if you are involved in MS CRM take notice or fall by the wayside!  This is a real game changer!

Almost all of my MS CRM installations were on premises installations until just this Fall.  Sure there were IFD configurations and web portals with Silverlight etc., but the customer was always in tight control of the servers, even if the servers were hosted.

I thought the next big thing would be the tighter SharePoint 2010 integration with MS CRM 2011 and to be fair I have seen interest in that and it will be a growing thing.

What I didn’t expect is that suddenly I would almost exclusively be working with hosted MS CRM systems, MS CRM Online and Windows Azure and SQL Azure. BTW I’ll be working with the new SQL Azure Reporting Services CTP this week as well.

What is pretty obvious is that MS has a lot of technologies coming together at about the time that MS CRM 2011 is hitting the scene. Frankly I thought it all looked interesting, but I didn’t expect the market to react so quickly or for it all to fit together so well.

What I also didn’t expect was how much I would enjoy this transfer of responsibility. I enjoy being in control of things, but I also love abstracting away busy work to concentrate on solving real problems.  That is the point of MS CRM. System administration work is not where the fun is, at least for me.

I was also pleasantly surprised at how easy using SQL Azure is. Just make sure you are using SQL 2008 R2 Management Studio; then it is really nothing more than setting up the IP range you will be connecting through in the Azure portal and changing your connection string. All of your tables need clustered indexes, and you have to use SQL scripts for all of your modifications, but otherwise it is just like the SQL Server you already know. Another thing to be aware of is that a Web edition database holds 1 to 5 GB; if you expect to grow beyond that, you need to start with a Business edition database, which starts at 10 GB and can grow to 50 GB.
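The connection-string change Mark mentions is typically the only client-side difference. A rough sketch of what an ODBC-style SQL Azure connection string looks like (Python; the server name, database and credentials are placeholders, and the exact driver name depends on what is installed locally):

```python
def sql_azure_connection_string(server, database, user, password):
    """Build an ODBC-style connection string for SQL Azure: the fully
    qualified *.database.windows.net server name, the user@server login
    form, and encryption enabled."""
    return (
        "Driver={SQL Server Native Client 10.0};"
        "Server=tcp:%s.database.windows.net,1433;"
        "Database=%s;Uid=%s@%s;Pwd=%s;Encrypt=yes;"
    ) % (server, database, user, server, password)

# Placeholder values only; substitute your own server and credentials.
print(sql_azure_connection_string("myserver", "MovieDb", "admin", "secret"))
```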

I’m currently working on a project that is using a 10 GB SQL Azure database with the geography fields and spatial indexes that are new in SQL Server 2008. I imported about 7 GB of data in about 3.5 hours, and the database has been very responsive.  I did notice that when I ran a very intensive task, like generating a spatial index across 9.2 million records, it would occasionally idle my process, but it still finished in a respectable time.

Windows Azure, likewise, was very easy to set up: install the Azure SDK, create a Cloud Service project, set up the diagnostics and build a web application. If you make the cloud service your startup project, it can be debugged locally just as it will be deployed. There are a few configuration switches, but then again you are abstracting away all of the IIS administration.

Where this all gets interesting is that additional instances of your application can be added with a few key clicks, without purchasing equipment or understanding anything about network load balancing. Adjust the number of CPU cores and the amount of memory your application needs as it grows.  Scale it back during the off season or grow through the ceiling.
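Those "few key clicks" in the portal ultimately edit the Instances count in the service configuration (.cscfg) file. A stdlib-only sketch of the same change (Python; the service and role names are made up for illustration):

```python
import xml.etree.ElementTree as ET

CSCFG_NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"

def set_instance_count(cscfg_xml, role_name, count):
    """Return a copy of a ServiceConfiguration document with the named
    role's Instances count set to the requested number."""
    ns = {"sc": CSCFG_NS}
    root = ET.fromstring(cscfg_xml)
    for role in root.findall("sc:Role", ns):
        if role.get("name") == role_name:
            role.find("sc:Instances", ns).set("count", str(count))
    return ET.tostring(root, encoding="unicode")

# A minimal, made-up configuration with a single web role.
SAMPLE_CSCFG = (
    '<ServiceConfiguration serviceName="MyService" xmlns="%s">'
    '<Role name="WebRole1"><Instances count="1" /></Role>'
    '</ServiceConfiguration>' % CSCFG_NS
)

print(set_instance_count(SAMPLE_CSCFG, "WebRole1", 4))
```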

I have customers who want to scale up quickly to compete with some pretty large competitors with a better product, but without the infrastructure or small army to maintain it. The Azure platform allows a small guy with a better mousetrap to take on the world.

As a lone gun for hire, without a staff of sysadmins, what this means to me is that I can create some really really big solutions very quickly and without the setup time normally involved, or the meetings with IT departments to budget for servers, and schedule for installation and configuration.

What this also is going to do is let a lot more killer apps actually see the light of day. I think we are going to see a new wave of enabled developers make a lot of really cool things happen.

The cost of admission to see if something will stick has really dropped and if your solution is really that good and your website is nailed with traffic, all you need is a few key clicks to give you more bandwidth, memory and processing power.

I was a bit skeptical at first, but as the saying goes, “People vote with their wallets”.  Based on what I am seeing the people are voting and much quicker than I expected!

Avanade (UK) reported “h.e.t software wins national competition and scoops a support package worth over £80K” as a preface to its Avanade Unveils Healthcare Innovation Project as Winner of Windows Azure™ Cloud Competition press release of 12/6/2010 (missed when posted): 

UK-based h.e.t software is celebrating today after being announced the winner of the national Windows Azure competition from Avanade®, the business technology services provider. h.e.t software’s cloud dream will soon be made a reality, with over £80,000 worth of consultancy and hosting from Avanade and Microsoft®.

The Avanade Cloud Advantage Lab competition collected entries from private organisations across the UK. Applicants submitted business cases and proposals for cloud-based software applications, which were judged on originality and business growth potential. The winner, h.e.t software, develops IT solutions specifically for the social healthcare market including domiciliary homecare, residential and nursing care providers.

Noted for its forward-thinking outlook, strong ethical philosophy and potential to use the cloud to truly expand its offerings, the independent software vendor’s business plan was selected as the most innovative cloud-based application idea. h.e.t software has experienced year-on-year organic growth from sales of software designed for homecare providers and is a major player in the healthcare market sector in the UK and Australia. It is renowned for its user-focused software, designed to improve business processes.

The product recognised in the winning entry, CareOnline, allows service users or their families to see online information about the care they receive and, in particular, to evaluate the outcomes of that care, so that management and the local authority commissioner of care can monitor and adjust the service to keep standards consistently high. Avanade will be working through its Cloud Lab in conjunction with the team at h.e.t software to help them achieve the kind of increased flexibility and ROI the platform can offer.

“The judging panel were looking for a business that had an innovative and exciting cloud vision,” commented Nic Merriman, director and cloud lead at Avanade and a member of the judging panel. “h.e.t software was one of the three finalists invited to our ‘Dragon’s Den’ scenario and really impressed the judges, not only with their clear application strategy, but also with their creative use of Windows Azure. The cloud platform will allow h.e.t software to take CareOnline beyond the current on-premise delivery. Not only does that make management easier for the end-user, but also it will open up a whole wealth of opportunity to markets abroad, as they take services to the next level of care.”

“Needless to say, we are delighted to have won the Avanade competition,” commented John Mayhew, executive chairman at h.e.t software. “This is the springboard we need, to project us into new markets; we predict the cloud offering to be of general appeal for care providers around the globe. We have invested a lot of time and resource into developing our management tools, and the ability to migrate our service user satisfaction software to the cloud is the missing part of the puzzle. Healthcare organisations simply want software that works without the management headache. Being able to offer CareOnline as a scalable service will really help care providers guarantee their service users the best care they deserve and allow their feedback to be used as a way to further improve their standards.”

Thanks to the Haverhill Weekly News for the heads-up in its Software victory means firm has head in cloud post of 12/16/2010.

<Return to section navigation list> 

Visual Studio LightSwitch

Andy Kung posted How to Create a Many-to-Many Relationship (Andy Kung) to the Visual Studio LightSwitch Team blog on 12/16/2010:

Many business applications require many-to-many relationships. For example, an author can write many books, and a book can be written by many authors. To model this relationship in LightSwitch, you need to create a mapping table for the 2 objects (e.g. AuthorToBook mapping table for Author and Book).

In this post, we will create a movie database application. This application requires a many-to-many relationship between the movie and genre entities: each movie can have many genres, and each genre can apply to many movies. In addition, we will build a “list-box mover” UI to interact with the relationship. To give you an idea of what we’re building, here is a sneak peek of the screen you will end up with by the end of the post.


Start with data

We will start by adding a Movie table and a Genre table, each with some relevant fields.


  • Title (String, required)
  • ReleaseDate (Date, required)
  • Length (Int32, required)
  • Storyline (String, required)



  • Name (String, required)


Next, we need to create a mapping table between Movie and Genre. Let’s create a new table called MovieGenre, but don’t specify any additional fields. Then, use the “Add New Relationship” dialog to set up the many-to-many relationship. Since it is a mapping table, each entry only contains a reference to a movie and a genre. Using the dialog, we want to make sure “one Movie can have many MovieGenre,” and “one Genre can have many MovieGenre.”



We should now have the 3 tables set up as follows.


We can also add a summary field to the mapping table so it has a meaningful string representation by default.

Private Sub Summary_Compute(ByRef result As String)
    ' Use the related genre's name as this mapping entry's display string.
    result = Genre.Name
End Sub

For more details of how to customize an entity’s summary field, please see Getting the Most out of LightSwitch Summary Properties by Beth Massi.

Create a Screen

Now that we have the data set up, let’s create a screen via the “Add New Screen” dialog. We will use the “New Data Screen” template and make sure Movie is selected under “Screen Data” and both Movie Details and Movie MovieGenres are checked.


After clicking OK, you should have a screen like this:

If you run the application (F5) at this point, you will see a screen that lets you enter basic movie information, as well as a grid for its genres.


“List-Box Mover” UI

Let’s customize the screen to make it more user-friendly by creating a, for lack of a better term, “list-box mover” UI. It essentially has two lists: one shows the genres associated with the movie, and the other shows all the possible genres a user can choose from. Users can then use “Add <<” and “Remove >>” buttons to move items between the lists.

Our screen already has the list of genres associated with the movie (currently showing as a grid). We still need a list of all possible genres. To do this, we will add a screen query that returns all genres using the “Add Screen Item” dialog.


You should now see a GenreCollection in the members list of screen designer:

For the “list-box mover” UI, we essentially want a horizontal stack of 3 items: movie genre list, command group, and all-genre list. I find it useful sometimes to draw out what I want to build in blocks first. In this case:


We will use Horizontal Stack instead of Vertical Stack for the BOTTOM ROW. Use the “+ Add” button to add the GenreCollection below the MovieGenreCollection. Change both collections to use List control. Remove all the commands associated with both Lists.


Next, we need to add some groups in the middle of the Horizontal Stack for the mover buttons. Since we want two buttons stacked vertically in the middle group, create two groups within it using Vertical Stack.


Right click on Group 1 to add a button. Rename the generated method to AddGenre. Similarly, add a button to Group 2 with a RemoveGenre method. We can change the display name of the buttons to be “<<” and “>>”.


Finally, we can write some code for AddGenre and RemoveGenre:

Private Sub AddGenre_Execute()
    ' Create a mapping entry linking the selected genre to this movie.
    If (GenreCollection.SelectedItem IsNot Nothing) Then
        Dim mg As MovieGenre = MovieGenreCollection.AddNew()
        mg.Movie = Me.MovieProperty
        mg.Genre = GenreCollection.SelectedItem
    End If
End Sub

Private Sub RemoveGenre_Execute()
    ' Delete the selected mapping entry to detach its genre from the movie.
    If (MovieGenreCollection.SelectedItem IsNot Nothing) Then
        MovieGenreCollection.SelectedItem.Delete()
    End If
End Sub

That’s it! Now run the application and play with the “list-box mover” UI. In this example, I pre-populated the genre list using another screen.


Hope that helps!

Dave Mendlen and Tim Huckaby discussed Visual Studio LightSwitch in their 12/14/2010 Bytes by MSDN video segment:


One of the most talked about features in Visual Studio 2010 is IntelliTrace. Dave Mendlen, Senior Director at Microsoft, and Tim Huckaby, founder of InterKnowlogy, cover the latest and greatest with Visual Studio 2010, including LightSwitch, Intellitrace and the most recent Feature Pack. Tune in to find out why Dave refers to IntelliTrace as a “time machine.”

Video Downloads
WMV (Zip) | WMV | iPod | MP4 | 3GP | Zune | PSP

Audio Downloads
AAC | WMA | MP3 | MP4

About Dave Mendlen

Dave Mendlen is the Senior Director, Developer Platform and Tools. Prior to this role, he served as the speech writer for Bill Gates and Steve Ballmer. In his time at Microsoft, he was the Director of Web Services strategy in the Developer Platform and Evangelism division at Microsoft responsible for driving Web Services excitement and standards across the industry. He has also served as the Director of Windows Product Management responsible for the marketing of Windows XP Home, Pro, Tablet PC and Windows XP Media Center Edition. He started at Microsoft in the developer division and served as the lead product planner on .NET and Visual Studio .NET driving a team to bring web services and .NET to millions of developers.

About Tim Huckaby

Tim Huckaby is the Founder of InterKnowlogy, experts in Microsoft .NET and Microsoft Platforms, and has 25+ years experience including serving on a Microsoft product team as a development lead on an architecture team. Tim is a Microsoft Regional Director, an MVP and serves on multiple Microsoft councils and boards. Currently, Tim is focused on RIA & Rich Client Technologies like WPF, VSTO, Surface, Silverlight, Windows 7 Touch, and Windows Phone 7. He has been called a "Pioneer of the Smart Client Revolution" by the press. Tim has been awarded multiple times for the highest-rated keynote for Microsoft and numerous other technology conferences around the world and is consistently rated in the top 10% of all speakers at these events. Tim has also done keynote demos for numerous Microsoft executives including Bill Gates and Steve Ballmer.

Dave Mendlen and Tim Huckaby recommend you check out

<Return to section navigation list> 

Windows Azure Infrastructure

See also Rakesh Dogra asked is there a Microsoft Cloud over the Orient? in a 12/16/2010 post to the Data Center Journal in the Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds section below.

Alves Arlindo reported New Windows Azure Platform Features Available Today in a 12/15/2010 post to his Microsoft Belgium blog:

Building out an infrastructure that supports your web service or application can be expensive, complicated and time consuming. Forecasting the highest possible demand, building out the network to support your peak times, getting the right servers in place at the right time, and managing and maintaining the systems all require time and money.

The Windows Azure platform is a flexible cloud computing platform that lets you focus on solving business problems and addressing customer needs instead of building the infrastructure to run your business on. Furthermore, with the platform there is no need to invest upfront in expensive infrastructure at all. Pay only for what you use, scale up when you need capacity and pull it back when you don’t; the Windows Azure platform puts all this power at your fingertips.

During PDC 2010 we announced much new functionality that would become available by the end of this calendar year. Some of this new functionality is available as of today:

  • Full Administrative Access
  • Full IIS Access
  • Remote Desktop
  • Windows Azure Connect
  • VM Role

Reading about cloud computing is one thing; experimenting and trying it out is a completely different thing. Microsoft therefore provides different ways for you to explore these new functionalities, making cloud computing, and Windows Azure in particular, more accessible to you and your business. All this and much more can be done in three easy steps.

Set Up a Free Account

You will need an account and subscription to access the Windows Azure Portal allowing you to deploy your applications. Microsoft offers two choices for having a free subscription:

  • Windows Azure Introductory Special: This is a new offer specially made for you. It is limited to one per customer and includes a base amount of Windows Azure platform services, free of charge and with no monthly commitment. To sign up:
    1. Navigate to the Microsoft Online Services Customer Portal.
    2. Select the country you live in and press continue.
    3. Right click on the sign in link to sign in the portal.
    4. Click on the View Service Details link under the Windows Azure Platform section.
    5. Locate the Windows Azure Platform Introductory Special offer and click on buy.
    6. Provide a name for the subscription.
    7. Check the Rate Plan check box below and click next
    8. Enter the Billing information and click next
    9. Check the Agreement box and click purchase.
  • Windows Azure Platform MSDN Premium: MSDN subscribers receive Windows Azure platform benefits as part of their subscription. To activate and manage them:
    1. Sign in to the Microsoft Online Services Customer Portal.
    2. Click on the Subscriptions tab and find the subscription called “Windows Azure Platform MSDN Premium”.
    3. Under the Actions section, make sure one of the options is “Opt out of auto renew”.  This ensures your benefits will extend automatically.  If you see “Opt in to auto renew” instead, select it and click Go to ensure your benefits continue for another 8 months.
    4. After your first 8 months of benefits have elapsed (you can check your start date by hovering over the “More…” link under “Windows Azure Platform MSDN Premium” on this same page), you will need to come back to this page and choose “Opt out of auto renew” so that your account will close at the end of the 16-month introductory benefit period.  If you keep this account active after 16 months, all usage will be charged at the normal “consumption” rates.

Note: You can have both offers active at the same time, providing even more free access to the Windows Azure platform and the related new functionality.

Download the Required Tools

The following tools are required to access the new features on the Windows Azure platform:

Use and Experience the New Features

As part of the release of the new features, new detailed walkthroughs are available to help you learn how to use them:

  • Introduction to Windows Azure: In this walkthrough, you explore the basic elements of a Windows Azure service by creating a simple application that demonstrates many features of the Windows Azure platform, including web and worker roles, blob storage, table storage, and queues.
  • Deploying Applications in Windows Azure: In this walkthrough, you learn how to deploy your first application in Windows Azure by showing the steps required for provisioning the required components in the Windows Azure Developer Portal, uploading the service package, and configuring the service.
  • Virtual Machine Role: Windows Azure Virtual Machine Roles allow you to run a customized instance of Windows Server 2008 R2 in Windows Azure, making it easier to move applications to the cloud. In this walkthrough, you explore Virtual Machine roles and you learn how to create custom OS images that you deploy to Windows Azure.

Repeated from Windows Azure and Cloud Computing Posts for 12/15/2010+ due to importance.

Mary Jo Foley summarized the preceding post in her Microsoft delivers more pieces of its Azure cloud roadmap article of 12/16/2010 for ZDNet’s All About Microsoft blog.

Jay Fry (@jayfry3) reported Survey points to the rise of 'cloud thinking' in a 12/16/2010 post to his Data Center Dialog blog:

In any developing market, doing a survey is always a bit of a roll of the dice. Sometimes the results can be pretty different from what you expected to find.

I know a surprise like that sounds unlikely in the realm of cloud computing, a topic that, if anything, feels over-scrutinized. However, when the results came back from the Management Insight survey (that CA Technologies sponsored and announced today), there were a few things that took me and others looking at the data by surprise.

Opinions of IT executives and IT staffs on cloud don’t differ by too much. We surveyed both decision makers and implementers, thinking that we’d find some interesting discrepancies. We didn’t. They all pretty much thought cloud could help them on costs, for example. And regardless of both groups’ first impressions, I’m betting cost isn’t their eventual biggest benefit. Instead, I’d bet that it’s agility – the reduced time to having IT make a real difference in your business – that will probably win out in the end.

IT staff are of two minds about cloud. One noticeable contradiction in the survey was that the IT staff was very leery about cloud because they see its potential to take away their jobs. At the same time, one of the most popular reasons to support a cloud initiative was that it familiarizes them with the latest and greatest technology and IT approaches. It seems to me that how each IT person deals with these simultaneous pros and cons will decide a lot about the type of role they will have going forward. Finding ways to learn about and embrace change can’t be a bad thing for your resume.

Virtualization certainly has had an impact on freeing people to think positively about cloud computing. I wrote about this in one of my early blogs about internal clouds back at the beginning of 2009 – hypervisors helped IT folks break the connection between a particular piece of hardware and an application. Once you do that, you’re free to consider a lot of “what ifs.”
This new survey points out a definite connection between how far people have gotten with their virtualization work and their support for cloud computing. The findings say that virtualization helps lead to what we’re calling “cloud thinking.” In fact, the people most involved in virtualization are also the ones most likely to be supportive of cloud initiatives. That all makes sense to me. (Just don’t think that just because you’ve virtualized some servers, you’ve done everything you need to in order to get the benefits of cloud computing.)

The survey shows people expect a gradual move from physical infrastructure to virtual systems, private cloud, and public cloud – not a mad rush. Respondents did admit to quite a bit of cloud usage – more than many other surveys I’ve seen. That leads you to think that cloud is starting to come of age in large enterprises (to steal a phrase from today’s press release). But it’s not happening all at once, and there’s a combination of simple virtualization and a use of more sophisticated cloud-based architectures going on. That’s going to lead to mixed environments for quite some time to come, and a need to manage and secure those diverse environments, I’m betting.

There are open questions about the ultimate cost impact of both public and private clouds. One set of results listed cost as a driver and an inhibitor for public clouds, and as a driver and an inhibitor for private ones, too. Obviously, there’s quite a bit of theory that has yet to be put into practice. I bet that’s what a lot of the action in 2011 will be all about: figuring it out.

And who can ignore politics? Finally, in looking at the internal organizational landscape of allies and stonewallers, the survey reported what I’ve been hearing anecdotally from customers and our folks who work with them: there are a lot of political hurdles to get over to deliver a cloud computing project (let alone a success). The survey really didn’t provide a clear, step-by-step path to success (not that I expected it would). I think the plan of starting small, focusing on a specific outcome, and being able to measure results is never a bad approach. And maybe those rogue cloud projects we hear about aren’t such a bad way to start after all. (You didn’t hear that from me, mind you.)

Take a look for yourself

Those were some of the angles I thought were especially interesting, and, yes, even a bit surprising in the survey. In addition to perusing the actual paper that Management Insight wrote (registration required) about the findings, I’d also suggest taking a look at the slide show highlighting a few of the more interesting results graphically. You can take a look at those slides here.


I’m thinking we’ll run the survey again in the middle of next year (at least, that seems like about the right timing to me). Two things will be interesting to see. First, what will the “cloud thinking” that we’re talking about here have enabled? The business models that cloud computing makes possible are new, pretty dynamic, and disruptive. Companies that didn’t exist yesterday could be challenging big incumbents tomorrow with some smart application of just enough technology. And maybe with no internal IT whatsoever.

Second, it will be intriguing to see what assumptions that seem so logical now will turn out to be – surprisingly – wrong. But, hey, that’s why we ask these questions, right?

This blog is cross-posted on The CA Cloud Storm Chasers site.

Be sure to check out CA Technologies’ Cloud Computing Survey Points to Arrival of “Cloud Thinking”, the source of the preceding post (site registration required).

defined the Windows Azure Fabric Controller on 12/15/2010:

The Azure Fabric Controller (FC) is the part of the Windows Azure platform that monitors and manages servers and coordinates resources for software applications.

The Azure Fabric Controller functions as the kernel of the Azure operating system. It provisions, stores, delivers, monitors and commands the virtual machines (VMs) and physical servers that make up Azure.

According to Dr. Mark Russinovich, a Technical Fellow working on the Windows Azure team, "The Fabric Controller, which automates pretty much everything including new hardware installs, is a modified Windows Server 2008 OS..."

The generic term fabric is a synonym for framework. Microsoft uses it in a proprietary manner to describe the servers, high-speed connections, load balancers and switches that make up the Azure cloud computing platform. The term fabric controller can generally be applied to any component that manages complex connections, but such components are often called by proprietary names. For instance, the OpenStack Compute fabric controller is called Nova.

See also: switching fabric, virtual server

Learn more about Azure Fabric Controller

How Azure actually works, courtesy of Mark Russinovich
The FC has two primary objectives: to satisfy user requests and policies and to optimize and simplify deployment. It does all of this automatically.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Rakesh Dogra asked is there a Microsoft Cloud over the Orient? in a 12/16/2010 post to the Data Center Journal:

Microsoft Japan is gearing up to establish a data center to provide cloud computing services. At least that’s what the industry buzz says. The Japan-based subsidiary of Microsoft is forming a partnership with Fujitsu to provide cloud computing services from Tokyo.


Fujitsu and Microsoft had announced a cloud computing partnership some time ago. Fujitsu began using the Windows Azure platform in its own data centers, and the company was also able to run its own applications on Windows Azure.

The association benefits both companies in unique ways. Fujitsu is planning major expansion in the area of cloud computing. The alliance with Microsoft gives the company the opportunity to become even more inclusive and competitive in their offerings. Microsoft also gets a major boost in its customer base with access to Fujitsu’s clients. Even small and medium-sized businesses will now have access to the very popular Windows platform. Enterprise customers can also expect this alliance to provide application solutions to suit their requirements.

The alliance can deliver fully integrated solutions backed by Fujitsu’s strengths in infrastructure, systems integration, and consulting and by Microsoft’s products and services. Innovative solutions will be driven by the alliance’s IT knowledge and experience.

Fujitsu apparently has strategic plans for cloud computing users. The strategy includes IaaS (Infrastructure as a Service), Application as a Service, and Activity as a Service. Fujitsu looks at the development of its cloud computing services in a “human-centric” fashion, which essentially means that it can develop its cloud computing offerings and growth plans through the requirements driven by change in society. For instance, the company’s focus on Activity as a Service means that customers will have access to business services described in business terms rather than technology terms. Fujitsu’s vision of a global network connected by IT solutions will certainly gain a boost from Fujitsu’s association with Microsoft.

Fujitsu can offer customers and vendors a wide array of cloud computing services that include features like managed services, transition to the cloud, and system integration. The company has also drawn up extensive plans in terms of personnel. Sales, consulting, and development teams will be created and trained to work in tandem with customers and vendors to ensure that the cloud services are promoted comprehensively and new applications can be easily integrated with client IT DNA.

With the synergy of strengths between the two companies, the customer can access business solutions that are capable of easily and rapidly scaling up to keep pace with dynamic business environments. Although an association with Fujitsu for use of its data centers may certainly be in the cards, according to Microsoft, the possibility of setting up its own data centers is also high.

Fujitsu may also create a branded Windows Azure platform appliance that a customer can install in his own data center along with hardware technology provided by Fujitsu.

Incidentally, Fujitsu is one of the top Microsoft System Integrator partners and is a Microsoft Gold Certified Partner. Its alliance has been in the news for various other products and solutions as well: for instance, the company’s GlobalStore, which is a fully loaded POS and back office management system, and its Business Analytics for Retail and Store Modernization solutions.

Also, Fujitsu’s expertise in networking and telecommunications helps to deploy a strong and resilient cloud for a company. The existing Microsoft data center in Tokyo is catering to web-based email, and its data centers in Singapore and the U.S. provide cloud delivery services.

Renai LeMay reported “'Utility Services' to host local version of Microsoft's Windows Azure Platform Appliance” in an HP's private Cloud moves local post of 12/16/2010:

Technology giant HP will begin pushing its 'Utility Services' private Cloud model in the Australian market from early next year, with plans to provide virtualised business applications from the likes of Microsoft, Oracle and SAP.

As with other private cloud models already being advanced locally by rivals such as Telstra, Optus, Fujitsu and CSC, HP's offering will give customers the ability to dynamically provision underlying infrastructure (infrastructure as a service) through a self-service portal.

Speaking to journalists in Sydney, the local head of the company's Enterprise Services division (formerly EDS), David Caspari, was quick to talk up the advantages of the platform, the Australian version of which has been six months in the making.

Caspari said the local private Cloud would benefit from HP's massive scale - with approximately 300,000 staff globally - and inherent development capabilities.

"It's something that should not be underestimated," he said.

The executive said the ability to evolve and develop the company's cloud platform with "hundreds, maybe thousands" of clients in multiple geographies gave HP "an incredible advantage" compared with those which were only innovating in "the relatively small marketplace in Australia".

Until recently, HP was hesitant to acknowledge whether its Utility Services model would be made available in Australia.

The company's local head of Utility Services, David Fox, claimed HP's model was "more comprehensive" than its rivals, offering infrastructure, platform and software as a service.

Contracts under the Utility Services model will likely range from one to five years but can be broken up into three month cycles. Different tiers of service would also be provided to match each customer's differing needs.

All customer data will remain within Australia to avoid regulatory headaches.


The localisation of the private Cloud will likely provide a boost to financial services, governments and other regulated industries who have previously been unable to utilise services like Microsoft's Windows Azure. Microsoft platform evangelism director, Gianpaolo Carraro, recently confirmed to Computerworld Australia that HP, along with Fujitsu, will provide the local version of the service, dubbed Windows Azure Platform Appliance or WAPA. [Emphasis added.]

Globally, HP already has about 300 customers using the Utility Services platform, with Swiss agribusiness Syngenta mentioned as one particular example.

Despite the ongoing shift to private and public cloud services in Australia, the pair noted there would always be a place locally for the more traditional outsourcing services that HP - and before it, EDS - have offered for some time.

But Fox maintained Australian organisations would continue to adopt cloud computing.

"There's undoubtedly a significant shift in the marketplace," he said.

<Return to section navigation list> 

Cloud Security and Governance

PRNewswire reported Novell Joins Microsoft Windows Azure Technology Adoption Program to Test and Validate Novell Cloud Security Service on 12/16/2010:


Today Novell announced it has joined the Microsoft* Windows Azure* Technology Adoption Program to address cloud security challenges through the Novell® Cloud Security Service.  Microsoft and Novell will work together on pre-release, non-commercial, internal testing and validation of Novell Cloud Security Service on Windows Azure with a goal to deliver a consistent access, security and compliance management framework for applications hosted on Microsoft's cloud application platform, Windows Azure.

According to research from the Cloud Security Alliance and Novell, managing and enforcing security in the cloud is a top concern among IT executives considering cloud-based solutions. The combination of Novell Cloud Security Service on Windows Azure reduces security concerns by leveraging seamless cross-platform authentication, single sign-on and audit for users. Interoperability between Novell Cloud Security Service and Windows Azure will allow businesses to save money when moving data resources to Windows Azure.

"Microsoft is excited to have Novell participate in the Windows Azure Technology Adoption Program.  The Windows Azure platform enables Novell to deliver cloud-based services that extend the value of on-premise software without the need to manage technology infrastructure," said Robert Duffner, Director of Product Management for Windows Azure, Microsoft Corp.  "Integrating Novell Cloud Security Service into Windows Azure ensures an identity, security and compliance management framework is in place to give our customers peace of mind without sacrificing investments in existing applications."

With Novell Cloud Security Service, developers using Windows Azure will be able to quickly and easily extend any business identity and access management infrastructure to Windows Azure. Changes made to user access permissions within an organization's application infrastructure using Novell Cloud Security Service will be immediately replicated in the cloud. The result will help customers using Windows Azure to deliver a consistent identity and security framework to address compliance needs in heterogeneous cloud environments, while keeping critical identity data secure behind customers' firewalls.

As cloud models mature and businesses look at cloud solutions to reduce costs and increase agility, security questions such as protecting data in the cloud and ensuring regulatory compliance are becoming increasingly important. Part of Novell's WorkloadIQ™ vision, Novell Cloud Security Service offers a multi-tenant environment with built-in metering and auditing so cloud service providers, like Windows Azure, can offer a secure, compliant computing environment to their customers.

Windows Azure offers a simple, reliable and powerful cloud computing platform that enables customers to focus on business opportunities as opposed to operational hurdles. Windows Azure provides developers with on-demand compute and storage to host, scale and manage services on the Internet through Microsoft data centers.

"Running applications and storing data in the cloud can have clear benefits," said Josh Dorfman, director of Global Alliance Marketing at Novell. "Today's enterprises are embracing cloud services, but security is top of mind when it comes to the cloud. We are pleased to be working with Microsoft to offer an interoperable set of identity, security and compliance management products for Windows Azure. Our goal is to give service providers the ability to enable cloud computing while mitigating risk and maintaining compliance."

About Novell

Novell, Inc. (Nasdaq: NOVL), a leader in intelligent workload management, helps organizations through WorkloadIQ securely deliver and manage computing services across physical, virtual and cloud computing environments.  We help customers reduce the cost, complexity, and risk associated with their IT systems through our solutions for identity and security, systems management, collaboration and Linux-based operating platforms. With our infrastructure software and ecosystem of partnerships, Novell integrates mixed IT environments, allowing people and technology to work as one. For more information, visit

Copyright © 2010 Novell, Inc.  All rights reserved.  Novell, the Novell logo and the N logo are registered trademarks, and WorkloadIQ is a trademark of Novell, Inc. in the United States and other countries.

All third party trademarks are the property of their respective owners.

Note to editors: For more information, news and perspectives from Microsoft, please visit the Microsoft News Center at Web links, telephone numbers and titles were correct at time of publication, but may have changed. For additional assistance, journalists and analysts may contact Microsoft's Rapid Response Team or other appropriate contacts listed at

Lori MacVittie (@lmacvittie) asserted Many denial of service attacks boil down to the exploitation of how protocols work and are, in fact, very similar under the hood. Recognizing these themes is paramount to choosing the right solution to mitigate the attack as a preface to her The Many Faces of DDoS: Variations on a Theme or Two article of 12/16/2010 for F5’s DevCentral blog:

image When you look across the “class” of attacks used to perpetrate a denial of service attack you start seeing patterns. These patterns are important in determining what resources are being targeted because it provides the means to implement solutions that mitigate the consumption of those resources while under an attack. Once you recognize the underlying cause of a service outage due to an attack you can enact policies and solutions that mitigate that root cause, which better serves to protect against the entire class of attacks rather than employing individual solutions that focus on specific attack types. This is because attacks are constantly evolving, and the attacks solutions protect against today will certainly morph into a variation on that theme, and solutions that protect against specific attacks rather than addressing the root cause will not necessarily be capable of defending against those evolutions.

In general, there are two types of denial of service attacks: those that target the network layers and those that target the application layer. And of course, as we’ve seen this past week or so, attackers are leveraging both types simultaneously to exhaust resources and cause outages across the globe.


Network-focused DoS attacks often take advantage of the way network protocols work innately. There’s nothing wrong with the protocols, no security vulnerabilities, nada. It’s just the way they behave and the inherent trust placed in the communication that takes place using these protocols. Still others simply attempt to overwhelm a single host with so much traffic that it falls over. Sometimes successful, other times it turns out the infrastructure falls over before the individual host and results in more a disruption of service than a complete denial, but with similar impact to the organization and customers.


A SYN flood is an attack against a system for the purpose of exhausting that system’s resources. An attacker launching a SYN flood against a target system attempts to occupy all available resources used to establish TCP connections by sending multiple SYN segments containing incorrect IP addresses. Note that the term SYN refers to a type of connection state that occurs during establishment of a TCP/IP connection. More specifically, a SYN flood is designed to fill up a SYN queue. A SYN queue is a set of connections stored in the connection table in the SYN-RECEIVED state, as part of the standard three-way TCP handshake. A SYN queue can hold a specified maximum number of connections in the SYN-RECEIVED state. Connections in the SYN-RECEIVED state are considered to be half-open and waiting for an acknowledgement from the client. When a SYN flood causes the maximum number of allowed connections in the SYN-RECEIVED state to be reached, the SYN queue is said to be full, thus preventing the target system from establishing other legitimate connections. A full SYN queue therefore results in partially-open TCP connections to IP addresses that either do not exist or are unreachable. In these cases, the connections must reach their timeout before the server can continue fulfilling other requests.
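The SYN queue described above surfaces to applications as the `listen()` backlog: a finite queue of connections the server has not yet accepted. The following Python sketch (parameters deliberately tiny and illustrative; this is a demonstration of the queue concept, not attack code) shows how un-accepted connections consume that queue:

```python
import socket

# The listen() backlog bounds the queue of connections that have completed
# (or are completing) the handshake but have not yet been accept()ed.
# A SYN flood aims to keep these queues full so legitimate clients cannot
# get a connection. The backlog of 5 here is deliberately small to make
# the limit visible.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))   # let the OS pick a free port
server.listen(5)                # at most ~5 pending, un-accepted connections
host, port = server.getsockname()

# Open client connections without the server ever calling accept();
# once the queue is full, further connects stall until they time out.
pending = []
for _ in range(5):
    c = socket.create_connection((host, port), timeout=1)
    pending.append(c)
print(f"holding {len(pending)} un-accepted connections")

for c in pending:
    c.close()
server.close()
```

SYN cookies and shorter half-open timeouts are the usual countermeasures: both shrink the time a bogus handshake can occupy a queue slot.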


The ICMP flood, sometimes referred to as a Smurf attack, is an attack based on a method of making a remote network send ICMP Echo replies to a single host. In this attack, a single packet from the attacker goes to an unprotected network’s broadcast address. Typically, this causes every machine on that network to answer with a packet sent to the target.


The UDP flood attack is most commonly a distributed denial-of-service attack (DDoS), where multiple remote systems are sending a large flood of UDP packets to the target. 


The UDP fragment attack is based on forcing the system to reassemble huge amounts of UDP data sent as fragmented packets. The goal of this attack is to consume system resources to the point where the system fails.


The Ping of Death attack is an attack with ICMP echo packets that are larger than 65535 bytes. Since this is the maximum allowed ICMP packet size, this can crash systems that attempt to reassemble the packet.


The theme with network-based attacks is “flooding”. A target is flooded with some kind of traffic, forcing the victim to expend all its resources on processing that traffic and, ultimately, becoming completely unresponsive. This is the traditional denial of service attack that has grown into distributed denial of service attacks primarily because of the steady evolution of web sites and applications to handle higher and higher volumes of traffic. These are also the types of attacks with which most network and application components have had long years of experience with and are thus well-versed in mitigating. 


Application DoS attacks are becoming the norm primarily because we’ve had years of experience with network-based DoS attacks and infrastructure has come a long way in being able to repel such attacks. That and Moore’s Law, anyway.

Application DoS attacks are likely more insidious because, like their network-based counterparts, they take advantage of application protocol behaviors, but unlike their network-based counterparts they require far fewer clients to overwhelm a host. This is part of the reason application-based DoS attacks are so hard to detect – because there are fewer clients necessary (owing to the large chunks of resources consumed by a single client) they don’t fit the “blast” pattern that is so typical of a network-based DoS. It can take literally millions of ICMP requests to saturate a host and its network, but it requires only tens of thousands of requests to consume the resources of an application host such that it becomes unreliable and unavailable.

And given the ubiquitous nature of HTTP – over which most of these attacks are perpetrated – and the relative ease with which it is possible to hijack unsuspecting browsers and force their participation in such an attack, an attack can be in progress and look like nothing more than a “flash crowd” – a perfectly acceptable and, in many industries, desirable event.

A common method of attack involves saturating the target (victim) machine with external communications requests, so that the target system cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable. In general terms, DoS attacks are implemented by forcing the targeted computer to reset, or by consuming its resources so that it can no longer provide its intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.


An HTTP GET flood is exactly as it sounds: it’s a massive influx of legitimate HTTP GET requests that come from large numbers of users, usually connection-oriented bots. These requests mimic legitimate users and are nearly impossible for applications and even harder for traditional security components to detect. The result of this attack is similar to the <choose your popular aggregator> effect: server errors, increasingly degraded performance, and resource exhaustion. This attack is particularly dangerous to applications deployed in cloud-based environments (public or private) that are enabled with auto-scaling policies, as the system will respond to the attack by launching more and more instances of the application. Limits must be imposed on auto-scaling policies to ensure the financial impact of an HTTP GET flood does not become overwhelming.
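Transaction-rate policies of the kind used to blunt GET floods are often implemented as per-client token buckets. Here is a minimal Python sketch; the rate and burst values are illustrative, and this is a generic implementation of the technique, not any vendor's:

```python
import time

class TokenBucket:
    """Per-client token-bucket rate limiter: a sketch of the kind of
    transaction-rate policy that can blunt a GET flood. The rate and
    burst values used below are illustrative, not recommendations."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = rate        # tokens (requests) replenished per second
        self.capacity = burst   # maximum burst size
        self.tokens = burst
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if one request may proceed, False to drop or challenge it."""
        now = self.clock()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Simulate 15 requests from one client arriving at the same instant
# (a frozen clock makes the outcome deterministic):
bucket = TokenBucket(rate=5, burst=10, clock=lambda: 0.0)
results = [bucket.allow() for _ in range(15)]
print(results.count(True), "allowed,", results.count(False), "rejected")
```

The same shape of policy, applied per source, also bounds the auto-scaling exposure described above: requests over the budget are rejected instead of spawning new instances.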


Slowloris consumes resources by “holding” connections open by sending partial HTTP requests. It subsequently sends headers at regular intervals to keep the connections from timing out or being closed due to lack of activity. This causes resources on the web /application servers to remain dedicated to the clients attacking and keeps them unavailable for fulfilling legitimate requests.


A slow HTTP POST is a twist on Slowloris in which the client sends POST headers with a legitimate content-length. After the headers are sent, the message body is transmitted at slow speed, thus tying up the connection (server resources) for long periods of time. A relatively small number of clients performing this attack can effectively consume all resources on the web / application server and render it useless to legitimate users.


Notice a theme here? That’s because clients can purposefully (and sometimes inadvertently) effect a DoS on a service simply by filling its send/receive queues slowly. The reason this works is similar to the theory behind SYN flood attacks, where all available queues are filled and thus render the server incapable of accepting/responding until the queues have been emptied. Slow pulls or pushes of content keep data in the web/application server queue and thus “tie up” the resources (RAM) associated with that queue. A web/application server has only so much RAM available to commit to queues, and thus a DoS can be effected simply by using a small number of v e r y  slow clients that do little other than tie up resources with what are otherwise legitimate interactions.
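The slow-client attacks above succeed when a server waits indefinitely for bytes to arrive. A minimal defensive sketch, assuming a raw-socket HTTP server (the timeout value and the helper name are illustrative assumptions, not a real server's API): enforce a receive timeout while reading request headers so a dribbling client is disconnected rather than pinning a worker and its queue memory.

```python
import socket

# A per-connection receive timeout is the simplest guard against slow-read /
# slow-write clients: one that dribbles bytes slower than the timeout is
# dropped instead of holding a worker indefinitely. HEADER_TIMEOUT and
# read_request_headers are illustrative, not a production server's API.
HEADER_TIMEOUT = 5.0

def read_request_headers(conn: socket.socket, limit: int = 8192) -> bytes:
    """Read until the end of the HTTP header block, or give up."""
    conn.settimeout(HEADER_TIMEOUT)      # applies to every recv() below
    data = b""
    while b"\r\n\r\n" not in data:
        try:
            chunk = conn.recv(1024)
        except socket.timeout:
            raise ConnectionError("client too slow; dropping connection")
        if not chunk or len(data) > limit:
            raise ConnectionError("connection closed or headers too large")
        data += chunk
    return data
```

Production servers typically pair such timeouts with caps on concurrent connections per source IP, which is why a high-capacity intermediary that absorbs slow clients is the mitigation suggested later in this piece.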

While the HTTP GET flood (page flood) is still common (and works well), the “slow” variations are becoming more popular because they require fewer clients to be successful. Fewer clients make it harder for infrastructure to determine an attack is in progress because, historically, flooding using high volumes of traffic is more typical of an attack and solutions are designed to recognize such events. They are not, however, generally designed to recognize what appears to be a somewhat higher volume of very slow clients as an attack.


Recognizing the common themes underlying modern attacks is helpful in detecting an attack and subsequently determining what type of solution is necessary to mitigate it. In the case of flooding, high-performance security infrastructure and policies regarding transaction rates, coupled with rate shaping based on protocols, can mitigate attacks. In the case of slow consumption of resources, it is generally necessary to leverage a high-capacity intermediary that essentially shields the web/application servers from the impact of such requests, coupled with emerging technology that enables a context-aware solution to better detect such attacks and then act upon that knowledge to reject them.

When faced with a new attack type, it is useful to try to determine the technique behind the attack – regardless of implementation – as it can provide the clues necessary to implement a solution and address the attack before it can impact the availability and performance of web applications. It is important to recognize that solutions only mitigate denial of service attacks. They cannot prevent them from occurring.

Keir Thomas reported “The New Zealand[, Australian, and Irish] government[s have] warned about storing data in the cloud. Local laws could become another cloud stumbling block” as a lead for his Government Warnings Could Kill the Cloud article of 12/16/2010 in NetworkWorld:

New Zealand has joined the ranks of an increasing number of governments that have issued warnings for businesses thinking about cloud computing.

The N.Z. Inland Revenue Department, which is responsible for taxation, issued an alert earlier this week reminding businesses that by law, they must keep their tax records in the country. With cloud computing, however, the data might be stored just about anywhere on the planet.

There's no issue with keeping backups of records overseas, the alert continued; yet the law says primary copies of accounts need to be kept in New Zealand, seemingly so they're instantly accessible to tax inspectors.

This should mean there's no problem with businesses using backup services such as Mozy, which store data in the cloud. However, there potentially would be a problem with a business relying solely on a service such as Google Docs. Depending on what's included in the definition of accountancy data, software-as-a-service (SaaS) outfits such as might also be ruled out.

In reality cloud providers utilize data centers as close as possible to their clients, although larger countries fare better than smaller ones. In the United States there are various data centers across the country, for example, although for many European states the data center lies beyond national boundaries.

European users of Amazon Simple Storage Service (S3) will find their data is stored in Ireland. Those in the Asia Pacific area will find their data stored in Singapore. New Zealand users of S3 will probably find their data stored in Singapore too.

The New Zealand warning follows one by the Australian Prudential Regulation Authority in November, warning that the rush to the cloud is "not being subjected to the usual rigor of existing outsourcing and risk management framework". The Irish Department of Finance issued a similar warning in February.

It's not clear what's at fault in New Zealand's case: Is the law simply out of date, or is cloud computing threatening to tear down international boundaries in a way that governments find objectionable? It's a curious fact that the countries issuing these warnings are smaller rather than larger. Could this be a misplaced desire to protect national interests?

Whatever the case, it's yet more proof that--from a business perspective--cloud computing raises concerns beyond the mere logistics of making a switch. Cloud service providers are no doubt waiting for such issues to be worked out during implementation, but this could prove litigiously expensive for organizations using their services--and lead to damaged reputations, should the authorities attempt to make an example out of them.

One solution to the location problem is for cloud providers to run data centers in every country. While this might be a realistic prospect once (and if) the cloud gathers enough users, at the moment it's highly unlikely. And with countries that are physically close to each other--such as the United Kingdom and Ireland, or Belgium and France--it's always going to be unlikely.


Fortunately, the current US administration is promoting cloud computing.

<Return to section navigation list> 

Cloud Computing Events

Jeff Barnes reported PDC 2010 & Silverlight 5 Firestarter Analysis on Connected Show #39 in a 12/16/2010 post to the Innovation Showcase blog:

Peter Laudati & Dmitry Lyalin host the edu-taining Connected Show developer podcast on cloud computing and interoperability. Check out episode #39, “The Battle Of The 5s.”

In this episode, frequent guest and co-host Andrew Brust (@andrewbrust) joins Peter to recap the recent Microsoft PDC 2010 conference and the Silverlight 5 Firestarter. Andrew and Peter discuss the latest Azure platform developments, including the VM role, Admin mode, and SQL Azure sharding. In the mix, they cover some news for PHP developers looking to the cloud for a solution.


Also, Andrew provides us with some blue badge analysis on the future of Silverlight in an HTML5 world.


If you like what you hear, check out previous episodes of the Connected Show. You can subscribe on iTunes or Zune. New episodes appear approximately every two weeks!

David Lemphers blogged My “Business Impact of Cloud Computing” Session Video is Available! on 12/16/2010:

A couple of weeks ago, I was very fortunate to be invited to OpenStack’s Design Summit for 2010--not only to attend and meet some amazing folks who I believe are at the leading edge of cloud computing, both as a technology shift and as an emerging billion-dollar industry, but also to speak on the topic of the business impact of cloud computing.

My Session and My Slides

The more I work with clients and business partners, the more passionate I become about the business transformation that cloud computing makes possible. As organizations start to embrace cloud technologies and transform into on-demand businesses, the whole landscape of product and services supply and demand is changing, through both evolution and revolution. It’s so exciting!

David is a former member of the Windows Azure team.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Polvi reported Cloudkick joins the Rackspace Family in an e-mail message and post on 12/16/2010:

Since we started Cloudkick, we have had the fortunate opportunity to work with a number of great companies. However, one in particular has stood out, not only as a leader in its industry but as an exceptional company to work with: Rackspace. Today, we are very excited to announce that Cloudkick will be joining the Rackspace family.

This is great news for our customers, product, and team.

Great for Our Customers

In the coming months we will be integrating Cloudkick’s offering into the Rackspace family of products. For existing Cloudkick customers, this means you will see the same continued development of the product, with the backing and resources of one of the most trusted companies in cloud computing at our side. We can’t wait to show you what we have in store.

We are committed to continuing support for multiple clouds within the Cloudkick tools. This means no matter which cloud infrastructure vendor you choose or what data center you have physical servers in, you can continue using Cloudkick. At the same time, our customers can also gain access to Rackspace’s Fanatical Support and additional products, as Cloudkick becomes part of one of the world’s great service companies.

To “kick” things off, starting today, all basic monitoring checks (HTTP, HTTPS, PING/ICMP, SSH, DNS, and TCP) are now 100% free on an unlimited number of servers for all our accounts--from the free-forever developer account to our 1,000-server plan.

Cloudkick has always let you see, in your dashboard, any cloud servers for which you supply credentials, whether monitored or not. Now you can also run any number of basic checks on an unlimited number of servers, on any cloud provider or in any datacenter, all from a single interface.
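The “basic checks” listed above are standard network probes. As a rough illustration of what such a check does under the hood--this is a generic sketch using only Python’s standard library, not Cloudkick’s actual implementation--a TCP and an HTTP check look something like this:

```python
import socket
import urllib.request


def tcp_check(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def http_check(url, timeout=5):
    """Return True if the URL answers with a 2xx or 3xx status code."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:
        return False
```

A monitoring service runs probes like these from multiple locations on a schedule and alerts when a check flips from passing to failing.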

Great for Our Product

By joining Rackspace, we have the ability to enhance Cloudkick much faster than we would have ever imagined as an independent company. We can rapidly expand our engineering capabilities so we can iterate on the product faster and get to feature requests more quickly.

We expect our monitoring and management tools to become more and more sophisticated, capable of keeping an eye on, literally, entire clouds at once. We also expect our user experience to become deeply integrated into existing and new Rackspace products.

In addition, we’ve always had a passion for Open Source projects and our team hails from organizations ranging from the Apache Software Foundation to Mozilla. Cloudkick has been an active member of the OpenStack community, an open-source cloud project founded by Rackspace, so we hope to continue building Cloudkick tools for monitoring and managing OpenStack clouds.

Great for Our Team

By acquiring Cloudkick, Rackspace will be expanding into the San Francisco Bay Area. Cloudkick will remain headquartered in San Francisco and we expect to continue to grow our team.

On that note, we are hiring! If you are interested in getting in early on one of the major cloud players as it expands to San Francisco, there has never been a better time to be a Racker.

Please check out our FAQ for additional Q&A about the acquisition.

If you have additional questions or concerns, of course you can always contact us as well.

Kicking Off What’s Next

This is an exciting time in the evolution of the cloud and infrastructure tools. As we continue to build out the foundations of cloud computing, we have a strong vision: an infrastructure that can run any application in the cloud, with minimal operations support required on the user’s end.

As we join Rackspace we can’t wait to see what comes next. We look forward to expanding the breadth and functionality of the product while continuing to support you, our customers. We couldn’t have gotten to where we are today without you!

For more info, check out the FAQ.

Alex Williams analyzed Rackspace’s acquisition in his Another Giant Gets Another Sexy Startup: Rackspace Acquires Cloudkick post of 12/16/2010 to the ReadWriteCloud blog:

Just two years after starting its business, Cloudkick has been acquired by Rackspace. The terms of the deal were not disclosed.

What this means for Cloudkick is a big ramp-up in its operations. It will also establish a presence for Rackspace in San Francisco--a first for the San Antonio, Texas company.

But perhaps more significant is that another smart, modern San Francisco-based cloud computing company has been acquired by one of the major cloud computing services. Last week, Salesforce.com announced its planned purchase of Heroku, a Ruby on Rails platform for developers.

Cloudkick provides cloud server monitoring and management tools that support modern APIs and the elastic nature of the cloud.

Perhaps what it does best is take some of the complexity out of managing cloud environments. Earlier this year, Cloudkick launched a hybrid cloud model, which will give Rackspace more opportunity with organizations seeking to extend their data centers into cloud environments.

The Cloudkick API is a core part of the service and illustrates the company's advances in giving customers dynamic ways to manage their servers.
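The article doesn't show the API itself, but services like this typically expose a REST interface. As a purely hypothetical sketch--the endpoint, header, and field names below are invented for illustration and are not Cloudkick's actual API--a client might list its monitored nodes like this:

```python
import json
import urllib.request

# Hypothetical base URL; a real service would document its own.
API_BASE = "https://api.example-monitoring.com/1.0"


def parse_nodes(payload):
    """Extract (name, status) pairs from a hypothetical JSON node list."""
    return [(n["name"], n["status"]) for n in json.loads(payload)["nodes"]]


def list_nodes(api_key):
    """Fetch the monitored-node list (illustrative request shape only)."""
    req = urllib.request.Request(
        API_BASE + "/nodes",
        headers={"Authorization": "Bearer " + api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_nodes(resp.read())
```

The appeal of this style is that the same calls work whether the servers being monitored live at Rackspace, AWS, or in a private datacenter.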


Visualization is a key feature of the company's service.


Cloudkick likes to say it offers a human-friendly way to manage servers. Its tools reflect this philosophy and the people who started the company. The approach has apparently worked: Cloudkick says it has thousands of customers and works with all the major cloud computing providers. Salesforce.com gained tremendous intellectual capital in its purchase of Heroku. With Cloudkick, Rackspace also gains some of the smartest minds in the cloud computing world.

For example, Alex Polvi is chief executive officer and co-founder. He is also lead contributor to libcloud, the open source library for developers to build portable cloud applications. Previously, he worked on open infrastructure projects for the Mozilla Foundation, Google, and the Oregon State Open Source Lab.

Rackspace offers what it calls fanatical support. Cloudkick offers what it calls fanatical programming. They've worked together and have complementary capabilities.

How will the two fare? Cloudkick's tools do scale, but the challenge will be competing with long-established companies such as CA, which acquired Nimsoft earlier this year. How the dynamics play out between Rackspace and Cloudkick will define how the two compete against a host of very large technology companies.

Derrick Harris asked  Did Rackspace Buy Cloudkick to Keep Up With AWS? in this 12/16/2010 post to GigaOm’s Structure blog:

Rackspace is moving up the cloud stack by acquiring Cloudkick, a San Francisco-based startup that provides server management and monitoring as a service. The acquisition represents a quick exit for Cloudkick, which only left beta in January and added on-premise server management as recently as March.

The need for Rackspace to offer monitoring and management tools is clear, but Rackspace already has a partnership with CA-owned Nimsoft. I suspect the purchase was spurred by a desire to compete more closely with Amazon Web Services.

What’s curious is that Rackspace and Nimsoft announced their relationship only in April, and collaborated on new Nimsoft features just this month. And although Nimsoft and Cloudkick can both monitor cloud, managed and on-premise servers, Nimsoft appears to provide management capabilities beyond Cloudkick's, especially as they relate to managing physical servers. I’m awaiting word on how the Cloudkick acquisition affects the Nimsoft relationship, but it might not affect it at all. Part of Rackspace’s business as an MSP is to provide support atop a variety of third-party services, so it might give customers the choice of using either service.

What is clear, though, is that AWS just announced a slew of new features for its CloudWatch monitoring service, and Rackspace might not want to rely on third parties for monitoring its services. Cloudkick also gives Rackspace users the ability to monitor services outside Rackspace’s, including AWS, GoGrid, SoftLayer and Linode. Users won’t get that with Amazon’s CloudWatch. Rackspace might not be within spitting distance of AWS in terms of cloud market share, but it is, by all accounts, a solid No. 2 and certainly doesn’t want to lose that position by sitting idle.

Also, integration into OpenStack — the Rackspace-led open source IaaS platform — might have something to do with this acquisition. Cloudkick was already contributing to the project, although not by offering its code. Rackspace denies any plans to incorporate Cloudkick code into OpenStack in the future, but we can’t ignore the possibility of that happening. Rackspace need not give away the entirety of the Cloudkick product via OpenStack, but some basic monitoring capabilities would make for a more-complete offering.


John Brodkin asserted “Microsoft-Google rivalry expanded to new battlefields in the past year” as a lead to his The 10 bloodiest battles Microsoft and Google fought in 2010 article of 12/16/2010 for NetworkWorld, which includes the following on page 2:

Battle for the cloud


Despite Google's lack of market share, the company's innovations have forced Microsoft to greatly expand its cloud-based offerings. Microsoft is fighting against numerous competitors on the cloud front, including Amazon's EC2, which Microsoft counters with Windows Azure. The threat from Google Apps is one reason Microsoft is overhauling the cloud-based versions of Office, Exchange and SharePoint with Office 365, which is available in beta and will see a broader release next year.

<Return to section navigation list>