Thursday, December 23, 2010

Windows Azure and Cloud Computing Posts for 12/23/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post's title to display the single post, and then navigate to the article you want.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi revisited Sharding With SQL Azure in a 12/23/2010 post to the SQL Azure Team blog:

Earlier this week we published a whitepaper entitled Sharding with SQL Azure to the TechNet wiki. In the paper, Michael Heydt and Michael Thomassy discuss the best practices and patterns to select when using horizontal partitioning and sharding with your applications.

Specific guidance shared in the whitepaper:

  • Basic concepts in horizontal partitioning and sharding
  • The challenges involved in sharding an application
  • Common patterns when implementing shards
  • Benefits of using SQL Azure as a sharding infrastructure
  • High level design of an ADO.NET based sharding library
  • Introduction to SQL Azure Federations


So what are sharding and partitioning, and why are they important?

Often the need arises for an application's data layer to support both high capacity for many users and very large data sets that demand lightning performance.  Or perhaps you have an application that by design must be elastic in its use of resources, such as a social networking application, a log processing solution or an application with a very high number of requests per second.  These are all use cases where partitioning data across physical tables residing on separate nodes, via sharding or SQL Azure Federations, can provide a performant scale-out solution.

In order to scale-out via sharding, an architect must partition the workload into independent units of data or atomic units.  The application then must have logic built into it to understand how to access this data through the use of a custom sharding pattern or through the upcoming release of SQL Azure Federations.
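As a rough sketch of that application-level logic (the class names and hashing scheme below are hypothetical illustrations, not from the whitepaper), a custom sharding pattern might route each atomic unit's key deterministically to one shard:

```java
import java.util.List;

// Hypothetical application-level sharding helper: maps an atomic unit's
// key (e.g. a customer ID) to one of N shard connection strings.
class ShardRouter {

    private final List<String> shardConnectionStrings;

    ShardRouter(List<String> shardConnectionStrings) {
        this.shardConnectionStrings = shardConnectionStrings;
    }

    // Deterministic routing: the same key always lands on the same shard.
    int shardIndexFor(String atomicUnitKey) {
        // Mask to a non-negative value (Math.abs overflows for MIN_VALUE).
        int hash = atomicUnitKey.hashCode() & 0x7fffffff;
        return hash % shardConnectionStrings.size();
    }

    String connectionStringFor(String atomicUnitKey) {
        return shardConnectionStrings.get(shardIndexFor(atomicUnitKey));
    }
}
```

Because every row belonging to a given atomic unit lands on the same shard, queries scoped to that unit touch only a single database; the hard part, as the whitepaper notes, is rebalancing when shards are added or removed.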

Multi-Master Sharding Archetype

The paper also introduces a multi-master sharding pattern in which all shards are considered read/write and there is no inherent replication of data between shards; the model is referred to as "shared nothing" because no data is shared or replicated between shards.

Use the Multi-Master Pattern if:

  • Clients of the system need read/write access to data

  • The application needs the ability for the data to grow continuously

  • Linear scalability of data access rate is needed as data size increases

  • Data written to one shard must be immediately accessible to any client

To use sharding with SQL Azure, the application architecture must take into account:

  • Current 50GB resource limit on SQL Azure database size

  • Multi-tenant performance throttling and connection management when adding/removing shards

  • Currently, sharding logic must be written at the application level until SQL Azure Federations is released

  • Shard balancing is complicated and may require application downtime while shards are created or removed

SQL Azure Federations

SQL Azure Federations will provide the infrastructure to support the dynamic addition of shards and the movement of atomic units of data between shards, providing built-in scale-out capabilities.  Federation members are shards that act as the physical container for a range of atomic units.  Federations will support online repartitioning, and connections are established with the root database to maintain continuity and integrity.
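Purely as an illustration of that range-based idea (SQL Azure Federations had not shipped at this writing, and none of the names below come from the product), a federation's mapping of key ranges to members can be sketched as a lookup from each member's low boundary:

```java
import java.util.NavigableMap;
import java.util.TreeMap;

// Illustrative sketch only: a federation maps ranges of the federation
// key to "members" (physical databases). A TreeMap keyed by each range's
// low boundary lets us find the member covering any given key.
class FederationMap {

    private final NavigableMap<Integer, String> membersByLowBoundary = new TreeMap<>();

    // Register a member that owns keys from lowBoundary (inclusive) upward,
    // until the next member's low boundary.
    void addMember(int lowBoundary, String memberDatabase) {
        membersByLowBoundary.put(lowBoundary, memberDatabase);
    }

    // floorEntry returns the member whose range contains the key.
    String memberFor(int federationKey) {
        return membersByLowBoundary.floorEntry(federationKey).getValue();
    }
}
```

Online repartitioning then amounts to splitting or merging these ranges and moving the affected atomic units between members, without the application changing its routing code.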

More Information

Additionally, we have spoken publicly about the coming SQL Azure Federations technology at both PDC and PASS this year. Since that time we have published a number of blog posts and whitepapers for your perusal:

Jason Sparapani interviewed Mark Kromer (@mssqldude, pictured below) in Q&A: Microsoft surveys horizon as cloud computing providers mount, a post of 12/21/2010 to the blog:

Everyone wants a piece of the cloud these days, it seems: Microsoft, Amazon, Google and, soon, Salesforce with its upcoming offering. And it’s no wonder; the cloud’s no-muss, no-fuss setup and maintenance means freed resources and lower costs. So, what does the changing cloudscape mean for SQL Azure? In this edition of “SQL in Five,” Microsoft data platform technology specialist Mark Kromer discusses the expanding market and what it means for SQL Azure, what brings business to the cloud in the first place and how the new platform changes the role of the database administrator.

How concerned is Microsoft about Salesforce’s announcement?

Mark Kromer: The announcement from Salesforce came very recently. But in general, Microsoft has done a good job of being a leader in terms of utility cloud-based computing with SQL Azure. We’ve been competing in the market with Amazon’s cloud-based database offerings, and now, with Salesforce entering the market, we’ll continue to have the advantages of a nearly seamless transition for developers and DBAs [database administrators] from traditional SQL Server to SQL Azure. Compared to those other competitors in cloud databases, Microsoft has a strong position, particularly in terms of having an established offering and a clear roadmap, which was discussed at the recent Microsoft PDC and PASS [Professional Association for SQL Server] conferences. Much more database and BI [business intelligence] functionality, as well as data replication and synchronization capabilities, are coming in the next release of SQL Azure, with several CTPs [community technology previews] already available [on the SQL Azure CTP site].

What will customers get with SQL Azure that they won’t with Salesforce’s offering?

Kromer: It’s very early to comment on comparisons at this time. But when I think about what a lot of our customers tell us in terms of their experience when using or trialing SQL Azure, a few things seem to resonate with DBAs and developers: The experience is nearly identical to SQL Server, the transition to cloud is very easy and their applications and users will not notice any changes or differences. For Microsoft, those points are very important. The ability to use existing application SQL code and application programming code with no changes, just swapping in new connection strings and credentials, was a necessary requirement since SQL Server is already the most popular database in IT. With SQL Azure, there are no developer changes to T-SQL commands and you can use your existing programming languages and data connectors to move from your on-premises SQL Server to SQL Azure. Salesforce’s technology, as I understand it, has been used at Salesforce for some time. But as a new database back end for applications and developers, it is a new platform that they will need to learn, understand and try out, and for which they will need to configure new data connectors.

These are still early days for the cloud in general. What do you think the players, Microsoft included, need to do to stay competitive?

Kromer: There are two areas that I receive the most inquiries about from customers when discussing a move to cloud computing these days. Now, granted, I spend my days and nights working with database administrators and developers, so this doesn’t necessarily translate to all other IT areas of cloud computing. But I would say that cloud vendors need to continue to focus on providing transparency to their data center infrastructure. Many businesses do not like the idea of giving up command and control of their data centers to someone else. Additionally, businesses generally feel more comfortable about security and disaster recovery when you open up and provide details about your infrastructure. Another area is consistency with current on-premises traditional solutions. In the case of SQL Server, the fact that SQL Azure is so similar to SQL Server makes it easy to move to a cloud platform.

Under what scenarios do you see companies using cloud databases and how do you see that evolving?

Kromer: The most common include utilizing SQL Azure as part of the development lifecycle and taking advantage of the quick stand-up and tear-down times of cloud computing with SQL Azure. The ideal production use cases will leverage the elasticity of cloud computing to take advantage of those scaling capabilities. This is starting to become more of a reality as businesses feel comfortable with cloud computing. But to get started in the cloud, I see customers firing up a couple of small SQL Azure databases to use for development. With tools like the new Data Sync and the CodePlex Azure Migrator, they can move the databases around and even utilize the SQL Server 2008 R2 data-tier application capabilities in Visual Studio and Management Studio. And since it is very easy and quick to stand up a database, the SQL Server classic use case of serving a departmental application is very useful. A department that needs SQL Server but does not have the infrastructure or resources to maintain a database server can utilize SQL Azure instead and not worry about the administration of a server. It is then as easy as disabling that database when you are done; just as you pay your electric utility bill at home, your business is then no longer charged for the database because billing is based on utilization.

How do cloud databases impact the role of the DBA? Will it be harder, easier for them?

Kromer: A little bit of both. It is definitely different. If you are a DBA who is used to having full control of all the knobs and levers, backup and recovery and other infrastructure, you have to be prepared to leave that part of the Azure cloud infrastructure up to Microsoft. The database is replicated [times three] for [high availability] and backup. You create database logins, but you do not control the instances. Developers see much less of an impact although some features like CLR [Common Language Runtime] objects and partitioning are not there yet in the current version of SQL Azure. That being said, the DBA can monitor and manage the SQL Azure databases right from SSMS while the developer will access the SQL Azure databases through a new cloud-based Web tool for database development and management called Database Manager, which is based on Silverlight.

Editor’s note: For more from Mark, check out his blog at MSSQLDUDE

Mark Kromer has over 16 years of experience in IT and software engineering and is a leader in the business intelligence, data warehouse and database communities. Currently, he is the Microsoft data platform technology specialist for the mid-Atlantic region, having previously been the Microsoft Services senior product manager for BI solutions and the Oracle principal product manager for project performance analytics.

Itzik Ben-Gan explained Pivoting Data with the SQL Server PIVOT and UNPIVOT commands in an article for the January 2011 issue of SQL Server Magazine:


<Return to section navigation list> 

MarketPlace DataMarket and OData

Barb Levisay suggested that you Bag More Prospects with Pinpoint Accuracy in a post in the 1/1/2011 issue of Redmond Channel Partner online:

What if you had the opportunity to market your applications and services to 100,000 prospects each month? Sounds good, but you've heard it before. You spent hours filling out your solution profile according to the guidelines and then traded multiple e-mails with the Microsoft Pinpoint team through the approval process.

And you've gotten ... zip. A couple of views, a couple of solicitations from outsourcers but no qualified inquiries -- let alone real leads. Every month, those 100,000 prospects paying a visit to the Microsoft Pinpoint Partner Portal and the new Microsoft Dynamics Marketplace are ignoring you. It's time to change that.

First, a little background on Pinpoint. It's the official Microsoft customer-facing partner directory, and the software giant seems committed to investing in the underlying technology and to reaching out to partners and customers to build robust marketplaces.

Microsoft launched the online marketplace as a public beta in July 2008 and Pinpoint became a full release in 2009. The marketplace was designed to replace Solution Finder and other disparate partner directories that had been created by various Microsoft business groups in support of individual products.

While any directory runs a risk of leaving your solution listing stranded in an Internet island with no traffic, there's been strong evidence that Microsoft remains committed to Pinpoint as a framework for partner listings.

Recently, the company launched the beta of the Microsoft Dynamics Marketplace to challenge the AppExchange marketplace. The new Dynamics Marketplace is powered by Pinpoint, and listings created for one directory populate the other when appropriate.

Similarly, a Windows Azure Marketplace to support the Windows Azure release in 2010 is also built on Pinpoint. Both of those marketplaces are described as betas for now, and are only the first of several product marketplaces Microsoft plans to roll out on the Pinpoint search engines. [Emphasis added.]

Current partner participation levels are respectable, but not overwhelming. A recent search for all solutions in Pinpoint returned more than 7,600 listings, which represents steady growth over the 4,000 solutions that appeared in late 2008. Those numbers reflect the differentiation the marketplace can still offer to proactive partners, considering that Microsoft claims more than 640,000 partners worldwide. For partners with Dynamics or Windows Azure solutions to offer, it's even more of a greenfield. As of last month, there were only 260 listings in the Dynamics Marketplace and 29 listings in the Windows Azure Marketplace.

Targeted e-mail marketing, search-engine optimization and other methods should certainly be your priority for attracting new customers, but Pinpoint is worth a little time.

"In the scope of things, it's not going to be your primary lead generator; it may not generate you any leads, but it might," says M.H. "Mac" McIntosh, a marketing consultant based in Rhode Island. "But it's free, it just takes a little bit of time, and you may get some leads out of it. It's one of those things that I would block out a working lunch to go in and fill out the profile and update it and make sure that you're in there. I would also mark in Outlook to go back in 90 days and see what I would change."

Get to the Top of the List

Visit one of the Microsoft business product sites and you'll notice a significant increase in partner visibility. Most business solution sites now include "Find a Partner" navigation bars that open an embedded search pane with a list of Pinpoint Partners that support the product. For example, the Microsoft SharePoint 2010 Partner Finder page delivers 376 results for Pinpoint Partners that "Work with Office & SharePoint 2010."

Your first reaction is probably, "How can I get my company to the top of the list for searches on my competencies?" (There's a short answer -- but we'll come back to that later.) The more important question for a smart Pinpoint strategy is: "Which list will help us connect with the prospects searching for what we sell?"

There doesn't seem to be anything more difficult for most partners than defining limits of what they sell. But with Pinpoint -- as with most marketing -- if you try to be all things to all people you'll be lost in the crowd. This message may sound like Microsoft talking, but it's true. You must narrow your focus to make Pinpoint work in your favor.

Be the Prospect

To help tighten up your focus, think about which customers are the absolute best fit for your app or services. List common factors of those customers like industry, geography, title and business problem solved to define your target audience. Then, be the prospect.

Stand in the prospect's shoes. He has a problem and is searching the Internet for help with that problem. He doesn't really care that you "offer comprehensive solutions delivered by experienced professionals." He does care if you can train his team on Office 2010 or build a content management system for his international offices.

Browse Pinpoint by your focus categories, just as your prospects do. See what your competitors are doing. Then build your listing using each line to entice the prospect to click on your company:

Apps+Services Listings title: Make it descriptive and specific. Instead of "Cloud Services" use "Microsoft Office 365 Cloud Migration Service."

Ratings: This is the short answer for getting to the top of the list. Partners with the most customer reviews rise to the top. Reviews can be associated at the company, the application or services level, so pay attention to which level will work in your favor for each listing. Make it easy for your customer and include the link with your request for a review.

Full disclosure: Redmond Channel Partner is an 1105 Media property, as is Visual Studio Magazine, for which I’m a contributing editor.

The MSDN Library added an Open Data Protocol (OData) Overview for Windows Phone topic on 12/15/2010 (missed when posted):

The Open Data Protocol (OData) is based on an entity and relationship model that enables you to access data in the style of representational state transfer (REST) resources. By using the OData client library for Windows Phone, Windows Phone applications can use the standard HTTP protocol to execute queries, and even to create, update, and delete data from a data service. This functionality is available as a separate library that you can download and install from the OData client libraries download page on CodePlex. The client library generates HTTP requests to any service that supports the OData protocol and transforms the data in the response feed into objects on the client. For more information about OData and existing data services that can be accessed by using the OData client library for Windows Phone, see the OData Web site.

The two main classes of the client library are the DataServiceContext class and the DataServiceCollection class. The DataServiceContext class encapsulates operations that are executed against a specific data service. OData-based services are stateless. However, the DataServiceContext maintains the state of entities on the client between interactions with the data service and in different execution phases of the application. This enables the client to support features such as change tracking and identity management.

Tip: We recommend employing a Model-View-ViewModel (MVVM) design pattern for your data applications where the model is generated based on the model returned by the data service. By using this approach, you can create the DataServiceContext in the ViewModel class along with any needed DataServiceCollection instances. For more general information about the MVVM pattern, see Implementing the Model-View-ViewModel Pattern in a Windows Phone Application.

When using the OData client library for Windows Phone, all requests to an OData service are executed asynchronously by using a uniform resource identifier (URI). Accessing resources by URIs is a limitation of the OData client library for Windows Phone when compared to other .NET Framework client libraries that support OData.

Generating Client Proxy Classes

You can use the DataSvcUtil.exe tool to generate the data classes in your application that represent the data model of an OData service. This tool, which is included with the OData client libraries on CodePlex, connects to the data service and generates the data classes and the data container, which inherits from the DataServiceContext class. For example, the following command prompt generates a client data model based on the Northwind sample data service:

datasvcutil /uri: /out:.\NorthwindModel.cs /Version:2.0 /DataServiceCollection

By using the /DataServiceCollection parameter in the command, the DataServiceCollection classes are generated for each collection in the model. These collections are used for binding data to UI elements in the application.

Binding Data to Controls

The DataServiceCollection class, which inherits from the ObservableCollection class, represents a dynamic data collection that provides notifications when items get added to or removed from the collection. These notifications enable the DataServiceContext to track changes automatically without your having to explicitly call the change tracking methods.

A URI-based query determines which data objects the DataServiceCollection class will contain. This URI is specified as a parameter in the LoadAsync method of the DataServiceCollection class. When executed, this method returns an OData feed that is materialized into data objects in the collection.

The LoadAsync method of the DataServiceCollection class ensures that the results are marshaled to the correct thread, so you do not need to use a Dispatcher object. When you use an instance of DataServiceCollection for data binding, the client ensures that objects tracked by the DataServiceContext remain synchronized with the data in the bound UI element. You do not need to manually report changes in entities in a binding collection to the DataServiceContext object.

Accessing and Changing Resources

In a Windows Phone application, all operations against a data service are asynchronous, and entity resources are accessed by URI. You perform asynchronous operations by using pairs of methods on the DataServiceContext class that start with Begin and End, respectively. The Begin methods register a delegate that the service calls when the operation is completed. The End methods should be called in the delegate that is registered to handle the callback from the completed operations.

Note: When using the DataServiceCollection class, the asynchronous operations and marshaling are handled automatically. When using asynchronous operations directly, you must use the BeginInvoke method of the System.Windows.Threading.Dispatcher class to correctly marshal the response operation back to the main application thread (the UI thread) of your application.

When you call the End method to complete an asynchronous operation, you must do so from the same DataServiceContext instance that was used to begin the operation. Each Begin method takes a state parameter that can pass a state object to the callback. This state object is retrieved using the IAsyncResult interface that is supplied with the callback and is used to call the corresponding End method to complete the asynchronous operation.

For example, when you supply the DataServiceContext instance as the state parameter when you call the DataServiceContext.BeginExecute method on the instance, the same DataServiceContext instance is returned as the IAsyncResult parameter. This instance of the DataServiceContext is then used to call the DataServiceContext.EndExecute method to complete the query operation. For more information, see Asynchronous Operations (WCF Data Services).

Querying Resources

The OData client library for Windows Phone enables you to execute URI-based queries against an OData service. When the BeginExecute method on the DataServiceContext class is called, the client library generates an HTTP GET request message to the specified URI. When the corresponding EndExecute method is called, the client library receives the response message and translates it into instances of client data service classes. These classes are tracked by the DataServiceContext class.

Note: OData queries are URI-based. For more information about the URI conventions defined by the OData protocol, see OData: URI Conventions.
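The $-prefixed query options ($top, $filter, $expand and friends) come from the OData URI conventions rather than from any client library. As a small illustration (sketched here in Java; the helper class itself is hypothetical, only the option names are from the conventions), composing such a query URI is just string assembly:

```java
// Illustrative helper for composing OData query URIs. The option names
// ($top, $filter, $expand) are defined by the OData URI conventions;
// this builder class is a hypothetical convenience, not a library API.
class ODataQueryUri {

    private final StringBuilder uri;
    private boolean hasOptions = false;

    ODataQueryUri(String serviceRoot, String entitySet) {
        this.uri = new StringBuilder(serviceRoot).append('/').append(entitySet);
    }

    // Append a system query option; '?' before the first, '&' thereafter.
    ODataQueryUri option(String name, String value) {
        uri.append(hasOptions ? '&' : '?').append(name).append('=').append(value);
        hasOptions = true;
        return this;
    }

    @Override
    public String toString() {
        return uri.toString();
    }
}
```

For example, new ODataQueryUri("http://host/svc", "Customers").option("$top", "2") yields http://host/svc/Customers?$top=2, which a client would issue as a plain HTTP GET.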

Loading Deferred Content

By default, OData limits the amount of data that a query returns. However, you can explicitly load additional data, including related entities, paged response data, and binary data streams, from the data service when it is needed. When you execute a query, only entities in the addressed entity set are returned.

For example, when a query against the Northwind data service returns Customers entities, by default the related Orders entities are not returned, even though there is a relationship between Customers and Orders. Related entities can be loaded with the original query (eager loading) or on a per-entity basis (explicit loading).

To explicitly load related entities, you must call the BeginLoadProperty and EndLoadProperty methods on the DataServiceContext class. Do this once for each entity for which you want to load related entities. Each call to the LoadProperty methods results in a new request to the data service. To eagerly load related entries, you must include the $expand system query option in the query URI. This loads all related data in a single request, but returns a much larger payload.

Important note: When deciding on a pattern for loading related entities, consider the performance tradeoff between message size and the number of requests to the data service.

The following query URI shows an example of eager loading the Order and Order_Details objects that belong to the selected customer:'ALFKI')?$expand=Orders/Order_Details

When paging is enabled in the data service, you must explicitly load subsequent data pages from the data service when the number of returned entries exceeds the paging limit. Because it is not possible to determine in advance when paging can occur, we recommend that you enable your application to properly handle a paged OData feed. To load a paged response, you must call the BeginLoadProperty method with the current DataServiceQueryContinuation token. When using a DataServiceCollection class, you can instead call the LoadNextPartialSetAsync method in the same way that you call LoadAsync method. For an example of this loading pattern, see How to: Consume an OData Service for Windows Phone.
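The paged-loading pattern itself is language-agnostic: keep requesting pages and following each response's continuation (next) link until there is none. A minimal sketch, with the types and names invented purely for illustration (they do not correspond to the Windows Phone client library's API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of consuming a paged OData feed: request pages and
// follow each response's "next" link until none remains. FeedPage and
// PageFetcher are hypothetical stand-ins for the client library's
// response feed and HTTP layer.
class PagedFeedReader {

    interface PageFetcher {
        FeedPage fetch(String uri);
    }

    static class FeedPage {
        final List<String> entries;
        final String nextLink; // null when this is the last page

        FeedPage(List<String> entries, String nextLink) {
            this.entries = entries;
            this.nextLink = nextLink;
        }
    }

    static List<String> readAll(String firstPageUri, PageFetcher fetcher) {
        List<String> all = new ArrayList<>();
        String uri = firstPageUri;
        while (uri != null) {
            FeedPage page = fetcher.fetch(uri);
            all.addAll(page.entries);
            uri = page.nextLink; // follow the continuation / next link
        }
        return all;
    }
}
```

Writing the loop this way makes the application robust whether or not the service happens to page a particular result, which is exactly the recommendation above.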

Modifying Resources and Saving Changes

Use the AddObject, UpdateObject, and DeleteObject methods on the DataServiceContext class to manually track changes on the OData client. These methods enable the client to track added and deleted entities, as well as changes that you make to property values or to relationships between entity instances.

When the proxy classes are generated, an AddTo method is created for each entity in the DataServiceContext class. Use these methods to add a new entity instance to an entity set and report the addition to the context. Those tracked changes are sent back to the data service asynchronously when you call the BeginSaveChanges and EndSaveChanges methods of the DataServiceContext class.

Note: When you use the DataServiceCollection object, changes are automatically reported to the DataServiceContext instance.
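Conceptually, the change tracking described above can be sketched as follows. This is a simplified, hypothetical model written in Java for illustration only, not the actual client library: the context accumulates pending adds, updates and deletes, and "saving changes" drains that list into requests sent to the service.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified, hypothetical sketch of client-side change tracking.
class TrackingContext {

    enum ChangeKind { ADD, UPDATE, DELETE }

    static class Change {
        final ChangeKind kind;
        final Object entity;
        Change(ChangeKind kind, Object entity) { this.kind = kind; this.entity = entity; }
    }

    private final List<Change> pending = new ArrayList<>();

    void addObject(Object entity)    { pending.add(new Change(ChangeKind.ADD, entity)); }
    void updateObject(Object entity) { pending.add(new Change(ChangeKind.UPDATE, entity)); }
    void deleteObject(Object entity) { pending.add(new Change(ChangeKind.DELETE, entity)); }

    // Returns the number of changes "sent"; a real client would issue one
    // HTTP request per change (or a single batch) and clear the list only
    // after the service confirms success.
    int saveChanges() {
        int sent = pending.size();
        pending.clear();
        return sent;
    }
}
```

The actual C# example that follows plays the same role asynchronously: BeginSaveChanges starts sending the tracked changes and EndSaveChanges completes the operation.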

The following example shows how to call the BeginSaveChanges and EndSaveChanges methods to asynchronously send updates to the Northwind data service:

private void saveChanges_Click(object sender, RoutedEventArgs e)
{
    // Start the saving changes operation.
    svcContext.BeginSaveChanges(SaveChangesOptions.Batch,
        OnChangesSaved, svcContext);
}

private void OnChangesSaved(IAsyncResult result)
{
    // Use the Dispatcher to ensure that the
    // asynchronous call returns in the correct thread.
    Dispatcher.BeginInvoke(() =>
    {
        svcContext = result.AsyncState as NorthwindEntities;
        try
        {
            // Complete the save changes operation and display the response;
            // WriteOperationResponse (a helper, not shown) displays it.
            WriteOperationResponse(svcContext.EndSaveChanges(result));
        }
        catch (DataServiceRequestException ex)
        {
            // Display the error from the response.
            WriteOperationResponse(ex.Response);
        }
        catch (InvalidOperationException ex)
        {
            messageTextBlock.Text = ex.Message;
        }
        finally
        {
            // Set the order in the grid.
            ordersGrid.SelectedItem = currentOrder;
        }
    });
}

Note: The Northwind sample data service that is published on the OData Web site is read-only; attempting to save changes returns an error. To successfully execute this code example, you must create your own Northwind sample data service. To do this, complete the steps in the topic How to: Create the Northwind Data Service (WCF Data Services/Silverlight).

The article concludes with a detailed “Maintaining State during Application Execution” topic.

The Restlet Team added an OData Extension to Restlet 2.1 for OGDI (Open Government Data Initiative) for Java programmers in 12/2010:


REST can play a key role in order to facilitate the interoperability between Java and Microsoft environments. To demonstrate this, the Restlet team collaborated with Microsoft in order to build a new Restlet extension that provides several high level features for accessing OData services (Open Data Protocol).

The Open Government Data Initiative (OGDI) is an initiative led by Microsoft. OGDI uses the Azure platform to expose a set of public data from several government agencies of the United States. This data is exposed via a RESTful API which can be accessed from a variety of client technologies, in this case Java with the dedicated extension of the Restlet framework. The rest of the article shows how to start with this extension and illustrates its simplicity of use.

The OGDI service is located at this URI “” and exposes about 60 kinds of public data gathered in entity sets. Here are samples of such data:

  • Ambulatory surgical centers (here is the name of the corresponding entity set: “/AmbulatorySurgicalCenters”, relative to the service root URI),
  • Building permits (“/BuildingPermits”)
  • Fire stations (“/FireStations”),
  • etc.

Code generation

From the client perspective, if you want to handle the declared entities, you will have to create a class for each entity, define their attributes, and pray that you have correctly spelled them and defined their types. Thanks to the Restlet extension, a generation tool will make your life easier. It will take care of this task for you and generate the whole set of Java classes with correct types.

Overview of code generation

Just note the URI of the target service, and specify the directory where you would like to generate the code via the command line:

java -jar org.restlet.ext.odata.jar ~/workspace/testADO

Please note that this feature requires the use of the core Restlet, and additional dependencies such as Atom (used by OData services for all exchanges of data), XML (required by the Atom extension) and FreeMarker (used for the files generation). The following jars (take care of the names) must be present on the current directory:

  • org.restlet.jar (core Restlet)
  • org.restlet.ext.odata.jar (OData extension)
  • org.restlet.ext.atom.jar (Atom extension)
  • org.restlet.ext.xml.jar (XML extension)
  • org.restlet.ext.freemarker.jar (Freemarker extension)
  • org.freemarker.jar (Freemarker dependency)

You can also use the full command line, which includes the list of required archives for the classpath argument (note: take care with the OS-specific classpath separator) and the name of the main class:

java -cp org.restlet.jar:org.restlet.ext.xml.jar:org.restlet.ext.atom.jar:org.restlet.ext.freemarker.jar:
 org.restlet.ext.odata.jar:org.freemarker.jar org.restlet.ext.odata.Generator  ~/workspace/testADO

or programmatically:

String[] arguments = 
      { "",  // first argument: the target service URI
        "/home/thierry/workspace/restlet-2.0/odata/src" };
Generator.main(arguments);

As noted above, these same dependencies (core Restlet, the Atom, XML and FreeMarker extensions, and FreeMarker itself) must be on the classpath.

This will generate the corresponding Java classes in a directory tree.

The classes that correspond to entities are generated in their corresponding package (in our case, “ogdiDc”), as defined by the metadata of the target OData service.

The last class (“OgdiDcSession”) is what we call a session object. Such an object handles the communication with the data service and stores the state of the latest executed request and the corresponding response. You may think such a session looks like a Servlet session. Actually, it does not: the communication between the client and the server is still stateless.

That covers the theoretical aspects; let's see how to use the generated classes.

Get the first two building permits

The code below gets the first two entities and displays some of their properties. It produces this kind of output on the console:

*** building permit
Address :447 RIDGE ST NW
State   :DC

*** building permit
Address :144 U ST NW
State   :DC

The listing below shows how to retrieve the first two “BuildingPermit” entities:

OgdiDcSession session = new OgdiDcSession();
// The generated session exposes query factory methods; the exact
// method name below may differ slightly in your generated code.
Query<BuildingPermit> query =
      session.createBuildingPermitQuery("/BuildingPermits").top(2);

if (query != null) {
   for (BuildingPermit buildingPermit : query) {
      System.out.println("*** building permit");
      System.out.println("Owner   :" + buildingPermit.getOwner_name());
      System.out.println("City    :" + buildingPermit.getCity());
      System.out.println("District:" + buildingPermit.getDistrict());
      System.out.println("Address :" + buildingPermit.getFull_address());
      System.out.println("State   :" + buildingPermit.getState());
   }
}

The first step is the creation of a new session. This is the only required setup, and it must be done once, before anything else. Then, as we want to get a set of “BuildingPermit” entities, we just create a new query and specify the desired data. In addition, as the set of data is very large, we limit its size by setting the “top” parameter.

Under the hood, it actually makes a GET request to the “/BuildingPermits?$top=2” resource (relative to the service's URI) and receives an Atom feed document as a result. This document is parsed by the query, which provides the result as an Iterator. Finally, we can loop over the iterator and access each “BuildingPermit” instance.
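The URI composition described above can be sketched as follows. The helper below is hypothetical (the generated Query class builds the URI internally); it only illustrates how the OData $top query option is appended to an entity set URI:

```java
// Hypothetical sketch of how the OData $top query option is appended
// to an entity set URI; the generated Query class does this internally.
public class ODataUriSketch {

    static String withTop(String entitySetUri, int top) {
        return entitySetUri + "?$top=" + top;
    }

    public static void main(String[] args) {
        // prints /BuildingPermits?$top=2
        System.out.println(withTop("/BuildingPermits", 2));
    }
}
```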

Filter the set of the building permits

The code below gets the first five entities located in the city of Washington, specifically in the fifth district, and displays some of their properties. It produces this kind of output on the console:

*** building permit
Address :144 U ST NW

*** building permit
Owner   :212 36TH ST. LLC.
Address :1250 QUEEN ST NE

*** building permit
Address :1902 JACKSON ST NE

*** building permit
Address :336 ADAMS ST NE

*** building permit

The listing below shows how to retrieve the first five “BuildingPermit” entities in the fifth district of Washington:

// Again, the query factory method name comes from the generated session
// and may differ slightly in your generated code.
Query<BuildingPermit> search =
      session.createBuildingPermitQuery("/BuildingPermits")
             .filter("((city eq 'WASHINGTON') and (district eq 'FIFTH'))")
             .top(5);

if (search != null) {
   for (BuildingPermit buildingPermit : search) {
      System.out.println("*** building permit");
      System.out.println("Owner   :" + buildingPermit.getOwner_name());
      System.out.println("Address :" + buildingPermit.getFull_address());
   }
}

As we want to get a set of “BuildingPermit” entities, we just create a new query and specify the desired data. In addition, we add a filter that combines two criteria: the name of the city and the district. This filter property uses a subset of the WCF Data Services query syntax.

It makes a GET request to the “/BuildingPermits” resource and completes its URI with a query part including the $filter and $top parameters. As in the previous example, the received Atom feed document is parsed, which produces the result as an Iterator. Finally, we can loop over the iterator and access each “BuildingPermit” instance.
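To make that query part concrete, here is a hypothetical sketch of how a filter expression ends up in the request URI (the generated Query class handles this encoding for you; the helper name is illustrative):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Hypothetical sketch of how the $filter and $top query options are
// composed into an entity set URI; the Restlet extension does this
// internally when the query is executed.
public class ODataFilterSketch {

    static String withFilterAndTop(String entitySetUri, String filter, int top) {
        try {
            // URL-encode the filter expression so spaces, quotes and
            // parentheses are legal in the query string.
            return entitySetUri
                    + "?$filter=" + URLEncoder.encode(filter, "UTF-8")
                    + "&$top=" + top;
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed to be available on any JVM.
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(withFilterAndTop("/BuildingPermits",
                "((city eq 'WASHINGTON') and (district eq 'FIFTH'))", 5));
    }
}
```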


This article illustrates what can be done with the Restlet extension for OData services. We hope that you found it simple and useful. It is a good demonstration of how adopting REST and related standards such as HTTP and Atom facilitates interoperability across programming languages and execution environments.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Gunnar Peipman (@gpeipman) explained ASP.NET MVC: Using ProfileRequiredAttribute to restrict access to pages with the AuthorizeAttribute in a 12/23/2010 post:

If you are using AppFabric Access Control Services to authenticate users when they log in to your community site using Live ID, Google or some other popular identity provider, you need more than AuthorizeAttribute to make sure that users can access content that is there for authenticated users only. In this posting I will show you how to extend the AuthorizeAttribute so that users must also have a filled-in user profile.

Semi-authorized users

When a user is authenticated through an external identity provider, not all identity providers give us the user name or other information we ask users for when they join our site. What all identity providers have in common is a unique ID that helps you identify the user.

Example: users authenticated through Windows Live ID by AppFabric ACS have no name specified. Google's identity provider can give you the user name and e-mail address if the user agrees to publish this information to you. Both give you a unique ID for the user when he or she is successfully authenticated by their service.

There is a logical shift between ASP.NET and my site in when a user is considered authorized.

For ASP.NET MVC, a user is authorized when the user has an identity. For my site, a user is authorized when the user has a profile and a row in my users table. Having a profile means that the user has a unique username in my system and is always identified by this username by other users.

Community site & ASP.NET MVC: Is user authorized?

My solution is simple: I created my own action filter attribute that makes sure the user has a profile before accessing a given method; if the user has no profile, the browser is redirected to the join page.

Illustrating the problem

Usually we restrict access to a page using AuthorizeAttribute. The code looks something like this.


[Authorize]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);

    return View(profile);
}

If this page is only for site users and we have user profiles, then all users (the ones that have a profile and all the others that are merely authenticated) can access the information. That is okay because all these users have successfully logged in to some service supported by AppFabric ACS.

On my site, the users with no profile are in a grey area. They are halfway to being users because they have no username and profile on my site yet. So, looking at the image above again, we need something that adds a profile-existence condition to user-only content.


[ProfileRequired]
public ActionResult Details(string id)
{
    var profile = _userRepository.GetUserByUserName(id);

    return View(profile);
}


Now, this attribute will solve our problem as soon as we implement it.

ProfileRequiredAttribute: Profiles are required to be fully authorized

Here is my implementation of ProfileRequiredAttribute. It is pretty new and right now more of a working draft, but you can already play with it.

public class ProfileRequiredAttribute : AuthorizeAttribute
{
    private readonly string _redirectUrl;

    public ProfileRequiredAttribute()
    {
        _redirectUrl = ConfigurationManager.AppSettings["JoinUrl"];

        if (string.IsNullOrWhiteSpace(_redirectUrl))
            _redirectUrl = "~/";
    }

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        // Let AuthorizeAttribute handle the unauthenticated case first.
        base.OnAuthorization(filterContext);

        var httpContext = filterContext.HttpContext;
        var identity = httpContext.User.Identity;

        if (!identity.IsAuthenticated || identity.GetProfile() == null)
            if (filterContext.Result == null)
                filterContext.Result = new RedirectResult(_redirectUrl);
    }
}

All methods with this attribute work as follows:

  • if the user is not authenticated, he or she is redirected to the AppFabric ACS identity provider selection page,
  • if the user is authenticated but has no profile, the user is by default redirected to the main page of the site; but if you have an application setting named JoinUrl, the user is redirected to that URL.

The first case is handled by AuthorizeAttribute and the second by the custom logic in the ProfileRequiredAttribute class.

GetProfile() extension method

To get the user profile with less code in the places where profiles are needed, I wrote a GetProfile() extension method for the IIdentity interface. There are some more extension methods that read the user and identity provider identifiers from claims; based on this information, the user profile is read from the database. If you take this code with copy and paste I am sure it won't work for you as-is, but you get the idea.

public static User GetProfile(this IIdentity identity)
{
    if (identity == null)
        return null;

    var context = HttpContext.Current;

    if (context.Items["UserProfile"] != null)
        return context.Items["UserProfile"] as User;

    var provider = identity.GetIdentityProvider();
    var nameId = identity.GetNameIdentifier();

    var rep = ObjectFactory.GetInstance<IUserRepository>();
    var profile = rep.GetUserByProviderAndNameId(provider, nameId);

    context.Items["UserProfile"] = profile;

    return profile;
}


To avoid round trips to the database, I cache the user profile for the current request, because the chance that the profile changes in the meantime is minimal. The other reason is perhaps trickier: profile objects come from an Entity Framework context, and that context also has the HTTP request as its lifecycle.


This posting gave you some ideas on how to finish the user profile functionality when you use AppFabric ACS as an external authentication provider. Although there was a little shift between us and ASP.NET MVC in the interpretation of “authorized”, we were easily able to solve the problem by extending AuthorizeAttribute to get all our requirements fulfilled. We also wrote an extension method for IIdentity that returns the user profile based on username and caches the profile in HTTP request scope.

Vittorio Bertocci (@vibronet) reminded developers about the New Paper: Single Sign-On from Active Directory to a Windows Azure Application in a 12/23/2010 post:

I knooow, for many of you this is already Not News; but I can assure you that if you had spent the last few days wrestling with the airlines & bad weather combo in the EU, combined with the anechoic force field that seems to isolate my Mom’s house from the Internet, you’d be lagging behind too! :-)

Aaaanyway: just a few days ago we published a new whitepaper on my favorite subjects, claims-based identity and cloud computing. David Mowers and yours truly are listed as the authors, but in fact Stuart Kwan, Paul Becks and various others contributed hugely to it. We worked to package up some end-to-end guidance on how to reuse your existing investment in AD for handling access to your Windows Azure applications. The key enablers here are, of course, ADFS and WIF.

Give it a spin and let us know what you think! Holiday cheers, V.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

imageNo significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Avkash Chauhan reported a workaround for the Windows Azure VM Role - Handling Error : The sector size of the physical disk on which the virtual disk resides is not supported on 12/23/2010:

Once you have the VM role enabled for your subscription, you can start working with the VM role using the step-by-step guide provided below:

After you finish creating your VHD for the VM role and upload it using the CSUPLOAD script, you may encounter the following error:

Error: An error occurred trying to attach to the VHD.

  • Details: The sector size of the physical disk on which the virtual disk resides is not supported.
  • Note: This is your main error and other errors after it are just caused by this error so please do not give any consideration to them.

Reason: This error occurred because I was loading baseimage.VHD (as described in the csupload script above) from an external USB 3.0, 1.0 TB disk.

  • If I used a smaller external HDD, the error was not produced. So the error can occur with certain HDDs that have an incompatible sector size.
  • So far I do not have a list of disks with sector sizes compatible with a successful upload.


  • After I moved baseimage.vhd to my local hard disk, I could upload successfully. 
  • To solve your problem, … copy the VHD to your local disk and then reference it properly in the “-LiteralPath” parameter. 

PRNewswire reported RBA Consulting Develops Platform for New Facebook Application, ZipiniT™ on 12/22/2010:

RBA Consulting, an award-winning IT Consulting firm, announced today that it has developed and launched the platform for ZipiniT™, a new online tool for creating and sharing holiday greeting cards.

ZipiniT™ is a FREE service for users to create, share, send, receive and showcase their greeting card photos and letters in a private album accessed via or ZipiniT™ for Facebook. A user sends the online greeting directly to a private album controlled by the recipient.  The recipient can view the greeting by clicking on a link within an email or Facebook notification. Once the recipient activates their ZipiniT™ account they may view the greeting in their album with all their other greetings and can create and send their own online greetings. There is no charge to use the ZipiniT™ service.

RBA Consulting managed all aspects of the development of ZipiniT™ including project management, technical architecture, graphic design, application development, Facebook integration, testing and support. The application was built entirely on Microsoft technology including .NET, Azure and SQL Azure and is hosted on the Windows Azure platform. [Emphasis added.]

"The vision for ZipiniT™ was to build a platform that would be free to users and allow them to easily and privately send and receive holiday cards and letters with their friends," explained ZipiniT™ founder and CEO, John Gabos. "We knew to make it work that it would have to be all about the experience. With RBA's expertise in custom application development, they created the ZipiniT™ tool with robust functionality but it is still very easy to use for anyone with an email address or Facebook account."  

"RBA is proud to be a part of an innovative project that will fundamentally change the way people share online greetings and photos," explained RBA Consulting President and COO, Mike Reinhart. "Our team of developers worked tirelessly to meet the target of launching this groundbreaking tool during the holidays. We are excited to be able to use technology to create a unique application that combines the momentum of social media with the simplicity of email." 

ZipiniT™ allows users to create and distribute holiday cards using the same unique personalization previously used with paper photo cards and letters.  Users can create and save custom slideshows within their albums which are archived online to access in the future from anywhere. In addition to eliminating the expense, sending online cards via ZipiniT™ is more environmentally-friendly than traditional paper cards and photos.

About ZipiniT™

ZipiniT™ was founded in early 2010 by Tina and John Gabos after thinking about how to improve the experience of sending and receiving holiday greeting cards. "An emailed card is not the answer, there has to be a better way to enjoy holiday cards digitally in a single collection, on a computer or a digital picture frame in the kitchen, in a way that people will still feel is a personal experience," explained Tina. Thus began the process of developing the next generation platform for exchanging holiday cards and letters digitally. For more information, please visit

About RBA Consulting

RBA Consulting is a Microsoft Gold Certified Partner with offices in Minneapolis, Dallas and Denver. With an emphasis on the Microsoft technology platform, RBA offers a complete range of Application Development, Portals and Collaboration, Infrastructure, and Business Intelligence solutions to clients ranging from mid-market to enterprise. RBA's goal is to bridge the gap between where their clients are today and where they want to go tomorrow by delivering unparalleled technology expertise, an uncompromising commitment to service and an unmatched portfolio of technology and business solutions. RBA's entrepreneurial spirit combined with its challenging and rewarding employment experience attracts the industry's top talent. For more information, please visit

Avkash Chauhan described a workaround for a Windows Azure VM Role: Unable to Login your VM even with correct credentials problem on 12/22/2010:

If you have run the CSUPLOAD script successfully and can also successfully publish your VM role service on Azure, the next step is to RDP to your VM role. The detailed steps to get the VM role working are described here:

When you RDP to your VM role, it is possible that:

  1. Your credentials will not work even though you are sure you are typing them correctly
  2. Your application on the VM role may not work correctly

After my investigation I discovered something interesting. After creating your VHD, when you try to install the OS there is a very important step which I think most of us can very easily miss.

When you install the Windows Server 2008 R2 Enterprise Edition OS on the VHD, please follow the steps below:

Note: The above steps are already described in Exercise #1, Task #1, Step 12 at the link above.

You can verify your VM after you open it in hypervisor.

Correct VHD:

Incorrect VHD:

Avkash Chauhan also described how to overcome a Windows Azure VM Role - Handling Error : A parameter was incorrect. Details: The subscription is not authorized for this feature on 12/22/2010:

Once you have the VM role enabled for your subscription, you can start working with the VM role using the step-by-step guide provided below:

After you finish creating your VHD for the VM role and upload it using the CSUPLOAD script, you may encounter the following error:

Error: A parameter was incorrect. Details: The subscription is not authorized for this feature

Reason: This error can occur if any parameter in the CSUPLOAD script does not have the correct information.

  • In the above case, the error occurred because the –Location parameter was given “Anywhere US”, which is not accepted.
  • You need to specify the exact data center location name with the “–Location” parameter.

Solution: Use the exact data center name with the "-Location" parameter. The accepted locations are:       

  1. South Central US
  2. North Central US
  3. North Europe
  4. West Europe
  5. Southeast Asia
  6. East Asia

Mari Yi recommended Delivering Windows Azure Platform Cloud Assessment Services On Demand Partner Training on 12/22/2010:

Learn how you can deliver a Windows Azure Cloud Assessment for your customers to identify, win, and close Windows Azure application migration opportunities. It covers both business and architectural considerations, as well as an array of tools.

Click here to access this training

For additional training resources on Azure visit The Learning Plan Tool

<Return to section navigation list> 

Visual Studio LightSwitch

Danny Diaz noted the inclusion of LightSwitch sessions in Silverlight Awesomeness In The East Coast events in January and February 2011:

Windows 7 and Silverlight make for a fantastic platform to build and deploy applications.  Together, you have the power to build smart, visually stunning applications that truly light up.  Want to learn how to take advantage of your desktop OS using Silverlight?  Want to learn some Silverlight programming techniques?  Want to dialog about some Silverlight Futures?  Join us for a full day of Silverlight with a special emphasis on out-of-the-browser applications.

Session One – Silverlight on Windows Fundamentals - Get an introduction to the basics of working with Silverlight. We’ll cover how to create and lay out a user interface using XAML, and examine the rich control set for building applications. Then we’ll look at coding with C# and Visual Basic, as well as the cool things you can do without writing code using features like behaviors and data binding.  Finally we’ll look at how to continue your learning with an orientation to the numerous resources available today for learning more Silverlight.

Session Two – Empowering Line of Business (LOB) Applications on Windows - Line of Business Applications (LOB) are all about adding value to the business and helping it run at full capacity. In this session you will learn how to take LOBs to the next level. We will discuss full trust, out of browser scenarios, SharePoint integration, RIA Services and more. We will also discuss the basics of the MVVM pattern that will simplify the way you develop applications today. After this session you will have all the tools you need to make your Line of business applications stand out on Windows.

Session Three – Creating Interactive Windows applications using Media and Touch - You’re a developer, not a designer, but in recent years the media side of development is getting more intense.  How can you keep up?  A basic understanding of Expression Blend, Silverlight 4 and Microsoft Pivot, which you’ll have by the end of this session, will get you farther along this path than you thought possible.  Start leveraging the power of Multi-Touch, Video and Data Visualization to create engaging experiences that will light up on Windows.

Session Four - Building RAD Silverlight Applications using LightSwitch - Visual Studio LightSwitch is a new development tool (currently in beta) for building business applications. LightSwitch simplifies the development process, letting you concentrate on the business logic and doing much of the remaining work for you. By using LightSwitch, an application can be designed, built, tested, and in your user’s hands quickly. Come explore LightSwitch for forms-over-data applications in a rapid application development fashion.

Session Five - Light up your Silverlight Apps for Win7 using Native Extensions - Learn about a new toolkit called Native Extensions for Silverlight (NESL) , which provides Silverlight friendly hooks to a number of native Windows 7 features you can use in your Silverlight applications. The libraries make it significantly easier to leverage Windows 7 from Silverlight. The libs also help with lighting up a Silverlight application with a minimum of effort for Windows 7 with items like jumplists and icon overlays. Learn “how-to” with special rigging-it-up sessions on video capture, speech, and Win7 features  like jump lists, icon overlays, taskbar progress and more!

Session Six – Silverlight Futures - This session will highlight the compelling features coming in Silverlight 5. With tighter integration with Windows APIs, full trust operations, XAML debugging, and support for 64-bit browsers, you’ll see how Silverlight continues to be the dominant platform for building rich Internet and Windows applications.

All events are from 9:00 am to 5:00 pm. Doors Open at 8:30 am 

See Danny’s post for dates, locations, and sign-up links.

<Return to section navigation list> 

Windows Azure Infrastructure

Jim O’Neill reminded erstwhile developers that Azure is Free… Times Three in this 12/22/2010 post:

I’d like to say the season of giving has inspired us, but that’s not really the case.  Windows Azure usage – with limits, of course – has been freely available for a while, but I’ve found that not all that many folks know about it, or about how easy it is to sign up.

Now that my colleagues and I have completed our Fall 2010 Azure Firestarter tour, I thought it would be a good opportunity to produce a few short screencasts that walk through how to sign up for those free Windows Azure benefits that are available to practically anyone.

There are three screencasts in all, and the longest is just 5 1/2 minutes.  Why three?  Well, there are a number of offers in market, and they each have different qualification requirements, benefit allotments, and convertibility, as summarized below and detailed in each of the videos.


Introductory Special  
  • available to all
  • recurring monthly benefit (credit card required for overages)
  • available through March 31, 2011
  • convertible to other paid accounts


MSDN Subscribers  
  • available to MSDN Premium & Ultimate subscribers
  • recurring monthly benefit (credit card required for overages)
  • 8 month period (renewable once)
  • convertible to other paid accounts
Windows Azure Pass  
  • available in select markets only
  • single 30-day period of benefits
  • not convertible to other accounts

Now that you've got your account, what next?  Well, writing and running “Hello, world cloud”, of course!  These links should help you along:

and once you’ve gotten past that, the sky’s (ahem!) the limit!

Kasey Casells, e-editor for IDG Connect, offered in a 12/23/2010 email three cloud computing whitepapers for download:

Looking Ahead to The Cloud in 2015

Cloud computing is expected to change the face of IT over the next several years. Read this paper, written from the perspective of a CIO in 2015, to find out what cloud computing will be like in 2015. Will organizations think differently about cloud security? Will hardware be utilized more effectively? Will daily operations be more efficient overall? Look into the future of cloud computing today.

Download the white paper

Putting Cloud Security in Perspective

There is growing acceptance that the cloud delivery model offers real business benefits, however security concerns still threaten the general uptake of the cloud. Although cloud computing does present unique risks, it is no less secure than more traditional IT delivery models. Don't let security concerns overshadow the benefits - learn how to take advantage of cloud-based services while managing risk responsibly in this paper.

Download the white paper

Securing Remote Workers in the Cloud

Thanks to advances in technology and communication, more people are working remotely than ever before. It is estimated that there will be 1.19 billion mobile workers by 2013. This has created new security concerns for businesses. Read now to learn how managing security in a cloud environment can help overcome the challenges faced by remote workers. Read now to discover how a cloud-based environment could benefit your distributed organization.

Download the white paper

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Kennon Owens of the Microsoft Systems Center Team announced on 12/22/2010 Microsoft Server Application Virtualization CTP Released – Run More of your Applications on Windows Azure:

Today we are excited to announce the Community Technology Preview of Microsoft Server Application Virtualization (Server App-V), and the Server Application Virtualization Packaging Tool.

Microsoft Server Application Virtualization builds on the technology used in client Application Virtualization, allowing for the separation of application configuration and state from the underlying operating system. This separation and packaging enables existing Windows applications, not specifically designed for Windows Azure, to be deployed on a Windows Azure worker role. We can do this in a way where the application state is maintained across reboots or movement of the worker role. This process allows existing, on-premises applications to be deployed directly onto Windows Azure, providing yet more flexibility in how organizations can take advantage of Microsoft’s cloud capabilities. Server Application Virtualization delivers:

  • Application mobility: Server Application Virtualization enables organizations to move their applications from on-premises datacenters to Windows Azure to take advantage of Windows Azure’s scalability and availability. This application mobility provides a unique level of flexibility to organizations as their needs evolve, enabling movement from one environment to another as their business needs dictate without the need to re-compile or rewrite the application.
  • Simplified deployment: With Server Application Virtualization, organizations are able to virtualize applications once and then deploy these packages as needed. This process creates a method to manage applications, simply and efficiently across their Windows Server® platform or to Windows Azure.
  • Lower operational costs: By using Server Application Virtualization organizations can gain the lower management benefits of the Windows Azure platform for their existing applications. This is delivered through the virtualized application being deployed on the Windows Azure platform, meaning organizations get the benefit of Windows without the need to manage a Windows Server operating instance or image for that application. With Server Application Virtualization, organizations are able to virtualize applications once and then deploy the packages this process creates, simply and efficiently across their Windows Server® platform or to Windows Azure.

Microsoft Server Application Virtualization converts traditional server applications into state separated "XCopyable" images without requiring code changes to the applications themselves, allowing you to host a variety of Windows 2008 applications on the Windows Azure worker role. The conversion process is accomplished using the Server App-V Sequencer. When the server application is sequenced, the configuration settings, services, and resources that the application uses are detected and stored. The sequenced application can then be deployed via the Server Application Virtualization Packaging Tool to the worker role in Windows Azure as a file.

Server App-V and the Windows Azure VM Role

Microsoft Server Application Virtualization and the Windows Azure VM role are complementary technologies that provide options for migrating your existing Windows applications to Windows Azure. With the Windows Azure VM role, you are taking a full Hyper-V VHD file with the OS and Application installed on it, and copying that Virtual Machine up to Windows Azure. With Server App-V, you are capturing an image of the application with the Server Application Virtualization Sequencer, copying that image up to Windows Azure with the Server Application Virtualization Packaging Tool, and deploying it on a Windows Azure worker role.

Connect back to your Local Network

For most of you, the on-premises server applications that you want to virtualize probably have to access local resources within your domain or in your datacenter. A question you may have is, "How can I configure my virtualized application to still access my internal network once I have moved an application to Windows Azure?" With Windows Azure Connect, you can create that linkage from within Windows Azure back into your network. This creates IPsec protected connections between machines (physical or virtual) in your network and roles running in Windows Azure. Keep in mind that you will have to account for the latency between running part of your service on-premises and part off, because you are running part of your application in one datacenter and part in another. An example of how this may work is that you have a Standard 3-tier application. You can update your Web Tier to run as a Web Role in Windows Azure. You can virtualize your Application Tier and run that as a Server App-V instance on a Worker Role in Windows Azure. Then this application can use Windows Azure Connect to access the local SQL Server that is still running in your datacenter. Eventually, you may want to migrate that SQL Server to SQL Azure, and you can do that within your own planned timeframe.


In October, during Steve Ballmer’s and Bob Muglia’s keynote at Microsoft PDC 2010 (about 1 hour 53 minutes in), Bob mentioned that a Technology Preview would be available before the end of the year, and this announcement signifies that release. Currently, this is an invitation-only Community Technology Preview. The final release of this technology will be available to customers in the second half of 2011.

I am really excited to be writing about this technology, as it gives you a way to move some of those applications that may never be rewritten for Windows Azure to run on our Platform-as-a-Service offering. This gives you the option to move to the cloud at whatever pace you choose, on your terms and in your timeframe.

Please stay tuned for more information about this technology and some feedback on our progress in the New Year.

Gaurav Anand explained How Microsoft Cloud strategy is different than VMware in a 12/23/2010 post:

Yesterday Microsoft announced the community technology preview release of Server Application Virtualization. With Server App-V, you are capturing an image of the application with the Server Application Virtualization Sequencer, copying that image up to Windows Azure with the Server Application Virtualization Packaging Tool, and deploying it on a Windows Azure worker role.

There is no need to customize your apps for Windows Azure now, so your applications can take advantage of Microsoft's Azure cloud hosting without any special coding considerations.

Another way, facilitated by the Windows Azure VM role (still in beta), is to take a full Hyper-V VHD file with the OS and application installed on it and copy that virtual machine up as a Windows Azure VM role. This is where Microsoft is doing something different from VMware, which does not let you host your apps/VMs in its own hosted environment. VMware's vCloud software enables a vendor to create an Infrastructure-as-a-Service cloud and lets different organizations host their infrastructure in a cloud managed by that vendor. The difference in the Microsoft and VMware strategies is that Microsoft allows hosting both in its own environment and in a vendor's, while VMware does not host and manage your workloads itself; only its vendors do. How many of us will be confident that all those vendors have good experience in managing such cloud environments (except a few big names like HP, IBM, etc.)? I have used the vCloud solution and I like it, and it will be interesting to see how Microsoft's next releases of SCVMM (very focused on cloud) and SCCM, due next year, compete with the next version of vCloud, also due next year.

Bill Zack posted Hyper-V Survival Guide on 12/23/2010:

This is for the ISV who wants to know more about Hyper-V, the basis for Microsoft’s private cloud offering, Hyper-V Cloud. Here is a list of Hyper-V references that probably covers much more than you need to know about the subject, but it is very comprehensive.

Particularly snazzy is a Seadragon (Deep Zoom) Hyper-V poster that will let you zoom in on any portion of the poster.

This blog post covers a wealth of Hyper-V-related links to materials for planning, deploying, managing, optimizing, and troubleshooting Hyper-V. It is also chock full of links to videos, podcasts, blogs, and more.

Bill Zack reported Weekly Azure Updates from OakLeaf Systems on 12/22/2010:

The folks at OakLeaf Systems publish a blog listing interesting Windows Azure posts and update it on a daily basis.


In the current post:

This compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles can be found here

It is worth subscribing to.

Thanks, Bill! I’m trying to adhere to daily publishing during the week and at least one post during the weekends.

<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) posted The Curious Case Of the MBO Cloud with a warning about “false clouds” on 12/23/2010:

I was speaking to an enterprise account manager the other day regarding strategic engagements in Cloud Computing in very large enterprises.

He remarked on the non-surprising parallelism occurring as these companies build and execute on cloud strategies that involve both public and private cloud initiatives.

Many of them are still trying to leverage the value of virtualization and are thus often conservative about their path forward.  Many are blazing new trails.

We talked about the usual barriers to entry for even small PoCs: compliance, security, lack of expertise, budget, etc., and then he shook his head solemnly, stared at the ground and mumbled something about a new threat to the overall progress toward enterprise cloud adoption.

MBO Cloud.

We’ve all heard of public, private, virtual private, hybrid, and community clouds, right? But “MBO Cloud?”

I asked. He clarified:

Cloud computing is such a hot topic, especially with its promise of huge cost savings, agility, and reduced time-to-market for services and goods, that many large companies who might otherwise be unable or unwilling to pilot using a public cloud provider, and who also don’t want to risk much if any capital outlay for software and infrastructure to test private cloud, are taking an interesting turn.

They’re trying to replicate Amazon or Google but not for the right reasons or workloads. They just look at “cloud” as some generic low-cost infrastructure platform that requires some open source and a couple of consultants — or even a full-time team of “developers” assigned to make it tick.

They rush out, buy 10 off-the-shelf white-label commodity multi-CPU/multi-core servers, acquire a plain vanilla NAS or low-end SAN storage appliance, sprinkle on some Xen or KVM, load on some unremarkable random set of open source software packages to test with a tidy web front-end and call it “cloud.”  No provisioning, no orchestration, no self-service portals, no chargeback, no security, no real scale, no operational re-alignment, no core applications…

It costs them next to nothing and it delivers about the same because they’re not designing for business cases that are at all relevant, they’re simply trying to copy Amazon and point to a shiny new rack as “cloud.”

Why do they do this? To gain experience and expertise? To dabble cautiously in an emerging set of technological and operational models?  To offload critical workloads that scale up and down?

Nope.  They do this for two reasons:

  1. Now that they have proven they can “successfully” spin up a “cloud” — however useless it may be — that costs next to nothing, it gives them leverage to squeeze vendors on pricing when and if they are able to move beyond this pile of junk, and
  2. Management By Objectives (MBO) — or a fancy way of saying, “bonus.”  Many C-levels and their ops staff are compensated via bonus on hitting certain objectives. One of them (for all the reasons stated above) is “deliver on the strategy and execution for cloud computing.” This half-hearted effort sadly qualifies.

So here’s the problem…when these efforts flame out and don’t deliver, they will impact the success of cloud in general — everywhere from private cloud vendors to even potentially public cloud offerings like AWS.  Why?  Because as we already know, *anything* that smells at all like failure gets reflexively blamed on cloud these days, and as these craptastic “cloud” PoCs fail to deliver — even on minimal cash outlay — it’s going to be hard to get a second chance given the bad taste left in the mouths of the business and management.

The opposite point could also be made in regard to public cloud services — that these truly “false cloud*” trials, based on poorly architected and executed bubble gum and baling wire, will drive companies to public cloud (however long that may take as compliance and security catch up).

It will be interesting to see which happens first.

Either way, beware the actual “false cloud” but realize that the motivation behind many of them isn’t the betterment of the business or evolution of IT, it’s the fattening of wallets.

David Linthicum includes SOA governance technology in his 3 SOA/Cloud Trends to Watch in 2011 post of 12/23/2010 to ebizQ’s Where SOA Meets Cloud blog:

With the New Year right around the corner, and most of the 2011 prediction blogs already posted, perhaps it's time to look at the true trends that will occur in the world of SOA and Cloud Computing in 2011. I'll be brief so you can get back to your eggnog.

Trend 1: Cloud providers become more aware of SOA as an architectural approach to enable cloud.

Most cloud providers consider SOA to be like Bigfoot. They know it's out there, they see glimpses of it once in a while, but they have yet to capture it and put it in a zoo. However, as 2011 progresses that will certainly change, and I'm seeing more cloud providers become interested in the value of SOA in the application of their cloud services within enterprises, whether it's PaaS, SaaS, or IaaS.

Trend 2: SOA governance technology begins to provide true value.

There is certainly a tipping point where the number of services gets to such a level that you need service governance technology to manage it for you. That tipping point was reached in 2010 for many enterprises, and it will only get worse in 2011. The use of a good service governance technology platform is absolutely essential for a successful cloud-meets-the-enterprise deployment.

Trend 3: Centralized sharing of services becomes a focus.

While we've been learning to leverage services that come out of the clouds, and services that we externalize into the cloud, there has not been a lot of focus on how services are hosted and discovered for sharing among enterprises. I suspect that clouds will become better at allowing enterprises to onboard services, and re-share those services, in 2011. Kind of like an eBay for application and infrastructure services.

The trend is your friend. Good luck in 2011.

<Return to section navigation list> 

Cloud Computing Events

Kurt Claeys will present an MSDN Live Meeting - Windows Azure: New Features and Technologies on 1/18/2011 at 2:00 PM CET from Paris, France with details of the new features in the January 2011 update. From the abstract:


Join this MSDN Live Meeting to learn more about Windows Azure and the new features in the January 2011 update.

At PDC (Professional Developers Conference, held in Redmond on 28/29 October 2010) a lot of announcements were made about some great upcoming functionality in the Azure platform. The new functionality is driven by customer and partner feedback and consists of improved support for enterprises and ease of development. Customers can move existing applications to Windows Azure by using the Virtual Machine (VM) Role. The support for Elevated Privileges, full IIS, and Remote Desktop provides developers greater flexibility and control in developing, deploying, and running cloud applications. The introduction of Windows Azure Connect enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources. The Extra Small instance offering provides access to compute instances at a substantially lower cost. In this session you get an in-depth overview of all these new features.

Event ID: 1032473687

  • Presenter(s): Kurt Claeys
  • Language(s): English.
  • Product(s): Windows Azure.
  • Audience(s): Architect, NonProfessional Developer, Pro Dev/Programmer.
  • Register Online

Speaker: Kurt Claeys is a Technology Solution Professional for Windows Azure at Microsoft. He has a background as a solution architect for SOA and integration projects, was a Connected Systems Developer MVP, and is a trainer and co-author of ‘WCF 4.0’. In his current role at Microsoft he’s the technical resource to help partners and customers in their adoption of Windows Azure.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Derrick Harris posted Update: AWS Adding <em>Export</em> to VM Import to GigaOm’s Structure blog on 12/23/2010:

I reported earlier this week on Amazon’s (s amzn) new VM Import service, which some have described as a Hotel California for VMware (s vmw) images; once they’re in, they can’t leave. Today, I received word from Amazon Web Services that it’s planning to address this issue, to a degree, as the offering evolves. According to the new product home page:

[W]e will enable exporting Amazon EC2 instances to common image formats. We also plan to make VM Import available via a plugin for VMware’s vSphere console in the coming months.

We’ll keep an eye out for details when the new capabilities are released, but it looks like VM Import users will be able to re-convert their images into VMware images if need be, and from the vSphere console, to boot. If it works as advertised (however vaguely), it’s yet another sign that AWS understands how to sell traditional IT departments on cloud computing by giving them, essentially, the features they want (sub req’d). If it ends up being less than ideal, well, it’s still a net gain for everyone involved.

Related content from GigaOM Pro (sub req’d):

David Linthicum asserted “The ability to replicate VMs between private data centers and public clouds could make the hybrid cloud a reality” in a preface to his VM Import could be a game changer for hybrid clouds post of 12/23/2010 to InfoWorld’s Cloud Computing blog:

Amazon Web Services' new VM Import feature could be a game changer for hybrid cloud deployments. The product allows IT departments to move virtual machine images from their internal data centers to the cloud, as needed. Many applications can benefit from this neat little trick. The low-hanging fruit is disaster recovery, as well as any migration to the cloud required to bring more capacity online by allocating additional VMs.

The process is pretty simple: "To import images, IT departments use Amazon EC2 (Elastic Compute Cloud) API tools to point to a virtual machine image in their existing environment. Next, they specify the amount of compute capacity they need and where they want to run the image in Amazon's cloud platform. VM Import will then automatically transfer the image file, migrate the image, and create the instance in Amazon's cloud."

However, VM images are huge, and because of current bandwidth restrictions, the thought of doing this on a daily basis is a bit unreasonable. Then again, moving images up to the cloud over the weekend to protect against a business-killing outage is not a bad approach, considering your infrastructure is ready and waiting if you lose your internal systems.
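To put rough numbers on that bandwidth concern (the image size, link speed, and throughput figures below are illustrative assumptions, not measurements), a back-of-the-envelope calculation shows why nightly image uploads strain a typical uplink while a weekend job is workable:

```python
def upload_hours(image_gb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Estimate hours to upload a VM image of image_gb gigabytes over a
    link of link_mbps megabits/second at the given sustained efficiency."""
    bits = image_gb * 8 * 1000 ** 3                      # decimal GB -> bits
    seconds = bits / (link_mbps * 1000 ** 2 * efficiency)
    return seconds / 3600

# A 40 GB VHD over a 50 Mbps uplink at 80% sustained throughput:
print(round(upload_hours(40, 50), 1))   # roughly 2.2 hours per image
```

Multiply that by a fleet of images and a shared office uplink, and the weekend-window approach the post suggests starts to look like the practical floor.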

The greater potential of VM Import is the free migration of virtual images between internal data centers and cloud providers, letting IT mix and match resources across traditional systems (private clouds) and public clouds. This feature will have strong value in letting you replicate VM images between on-premises and cloud-delivered platforms, as well as load balance among them operationally.

I believe this type of hybrid cloud architecture functionality at the VM level will give enterprises the flexibility and bet-hedging approach they need to lower the risk of moving to the cloud. You can't blame them, and now it's here.

Tod Hoff recommended reading this Paper: CRDTs: Consistency without concurrency control in a 12/23/2010 post to the High Scalability blog:

For a great Christmas read, forget The Night Before Christmas, the heart-warming poem written by Clement Clarke Moore for his children that created the modern idea of Santa Claus we all know and anticipate each Christmas Eve. Instead, curl up with some potent eggnog (nog being any drink made with rum) and read CRDTs: Consistency without concurrency control by Mihai Letia, Nuno Preguiça, and Marc Shapiro, which describes CRDTs (Commutative Replicated Data Types), data types whose operations commute when they are concurrent.

From the introduction, which also serves as a nice concise overview of distributed consistency issues:

Shared read-only data is easy to scale by using well-understood replication techniques. However, sharing mutable data at a large scale is a difficult problem, because of the CAP impossibility result [5]. Two approaches dominate in practice. One ensures scalability by giving up consistency guarantees, for instance using the Last-Writer-Wins (LWW) approach [7]. The alternative guarantees consistency by serialising all updates, which does not scale beyond a small cluster [12]. Optimistic replication allows replicas to diverge, eventually resolving conflicts either by LWW-like methods or by serialisation [11].

In some (limited) cases, a radical simplification is possible. If concurrent updates to some datum commute, and all of its replicas execute all updates in causal order, then the replicas converge. We call this a Commutative Replicated Data Type (CRDT). The CRDT approach ensures that there are no conflicts, hence, no need for consensus-based concurrency control. CRDTs are not a universal solution, but, perhaps surprisingly, we were able to design highly useful CRDTs. This new research direction is promising as it ensures consistency in the large scale at a low cost, at least for some applications.

A trivial example of a CRDT is a set with a single add-element operation. A delete-element operation can be emulated by adding "deleted" elements to a second set. This suffices to implement a mailbox [1]. However, this is not practical, as the data structures grow without bound. A more interesting example is WOOT, a CRDT for concurrent editing [9], pioneering but inefficient, and its successor Logoot [13].

As an existence proof of non-trivial, useful, practical and efficient CRDT, we exhibit one that implements an ordered set with insert-at-position and delete operations. It is called Treedoc, because sequence elements are identified compactly using a naming tree, and because its first use was concurrent document editing [10]. Its design presents original solutions to scalability issues, namely restructuring the tree without violating commutativity, supporting very large and variable numbers of writable replicas, and leveraging the data structure to ensure causal ordering without vector clocks.

Another non-trivial CRDT that we developed (but we do not describe here) is a high-performance shared, distributed graph structure, the multilog [2]. While the advantages of commutativity are well documented, we are the first (to our knowledge) to address the design of CRDTs. In future work, we plan to explore what other interesting CRDTs may exist, and what are the theoretical and practical requirements for CRDTs.
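The mailbox example in the excerpt — an add-only set plus a second set of "deleted" elements, commonly called a two-phase set — is small enough to sketch. This is an illustrative implementation of that idea, not code from the paper:

```python
class TwoPhaseSet:
    """Two-phase set CRDT: adds and removes each go into a grow-only set.
    Concurrent updates commute, so replicas converge by merging (set union)
    without concurrency control. A removed element can never reappear, and
    the tombstone set grows without bound -- the impracticality the paper
    notes for this trivial CRDT."""

    def __init__(self):
        self.added = set()
        self.removed = set()   # tombstones for "deleted" elements

    def add(self, e):
        self.added.add(e)

    def remove(self, e):
        self.removed.add(e)    # emulate delete by recording a tombstone

    def __contains__(self, e):
        return e in self.added and e not in self.removed

    def merge(self, other):
        """Join two replicas; union is commutative, associative, idempotent."""
        self.added |= other.added
        self.removed |= other.removed

# Two replicas diverge, then converge after a merge, in either merge order.
a, b = TwoPhaseSet(), TwoPhaseSet()
a.add("msg1"); a.add("msg2")
b.add("msg2"); b.remove("msg2")
a.merge(b)
print("msg1" in a, "msg2" in a)   # True False
```

Because merge is just set union, any two replicas that have seen the same updates end up in the same state regardless of delivery order — the property that lets CRDTs skip consensus entirely.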

May all your Christmases be bright.

John Foley wrote Eli Lilly on Cloud Computing Reality as a 12/2010 article for InformationWeek::Analytics (requires registration):

There are some urgent business reasons why Eli Lilly has been an aggressive user of cloud computing the past couple years. Several of Lilly's key drugs are coming off patents, so it needs new sources of growth. Yet it can cost more than $1 billion to bring a drug to market. That means any way the company can speed research and development, and cut costs, is highly valuable to the company. And it opens the door for new approaches and new models, such as cloud computing. Lilly CIO Michael Heim and VP Mike Meadows spoke with InformationWeek's John Foley at this year's InformationWeek 500 conference. Heim doesn't claim to have all the answers. "I have no doubt but that we'll get it wrong, but that by starting that journey I think we'll find what's right, and we'll find the right trails to blaze," he says. Here are some excerpts.

<Return to section navigation list>