Thursday, August 19, 2010

Windows Azure and Cloud Computing Posts for 8/18/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available for HTTP download at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Neil MacKenzie provided Examples of the Windows Azure Storage Services REST API in this 8/18/2010 post:

In the Windows Azure MSDN Azure Forum there are occasional questions about the Windows Azure Storage Services REST API. I have occasionally responded to these with some code examples showing how to use the API. I thought it would be useful to provide some examples of using the REST API for tables, blobs and queues – if only so I don’t have to dredge up examples when people ask how to use it. This post is not intended to provide a complete description of the REST API.

The REST API is comprehensively documented (other than the lack of working examples). Since the REST API is the definitive way to address Windows Azure Storage Services I think people using the higher level Storage Client API should have a passing understanding of the REST API to the level of being able to understand the documentation. Understanding the REST API can provide a deeper understanding of why the Storage Client API behaves the way it does.


The Fiddler Web Debugging Proxy is an essential tool when developing using the REST (or Storage Client) API since it captures precisely what is sent over the wire to the Windows Azure Storage Services.


Nearly every request to the Windows Azure Storage Services must be authenticated. The exception is access to blobs with public read access. The supported authentication schemes for blobs, queues and tables are described here. Requests must be accompanied by an Authorization header constructed by computing a hash-based message authentication code (HMAC) using the SHA-256 hash.

The following is an example of computing the HMAC-SHA256 signature for the Authorization header:

private String CreateAuthorizationHeader(String canonicalizedString)
{
    String signature = string.Empty;
    using (HMACSHA256 hmacSha256 = new HMACSHA256(AzureStorageConstants.Key))
    {
        Byte[] dataToHmac = System.Text.Encoding.UTF8.GetBytes(canonicalizedString);
        signature = Convert.ToBase64String(hmacSha256.ComputeHash(dataToHmac));
    }

    String authorizationHeader = String.Format(
          "{0} {1}:{2}",
          AzureStorageConstants.SharedKeyAuthorizationScheme,
          AzureStorageConstants.Account,
          signature);

    return authorizationHeader;
}

This method is used in all the examples in this post.

AzureStorageConstants is a helper class containing various constants. Key is the secret key for the Windows Azure Storage Services account specified by Account. In the examples given here, SharedKeyAuthorizationScheme is SharedKey.

The trickiest part of using the REST API successfully is getting the correct string to sign. Fortunately, in the event of an authentication failure the Blob Service and Queue Service respond with the authorization string they used, and this can be compared with the authorization string used in generating the Authorization header. This greatly simplifies the use of the REST API.
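The same signing step can be cross-checked in any language with an HMAC implementation. Here is a rough Python equivalent of the C# helper above (the sample canonicalized string is an illustrative fragment, not a complete SharedKey string-to-sign):

```python
import base64
import hashlib
import hmac

def create_authorization_header(canonicalized_string, account, key_base64,
                                scheme="SharedKey"):
    """Sign the canonicalized string with HMAC-SHA256 and build the
    Authorization header value, mirroring the C# helper above."""
    key = base64.b64decode(key_base64)  # the account key is base64-encoded
    digest = hmac.new(key, canonicalized_string.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return "%s %s:%s" % (scheme, account, signature)

# Illustrative fragment only; a real string-to-sign includes the verb,
# headers and canonicalized resource as described in the documentation.
header = create_authorization_header("GET\n\n\n", "myaccount",
                                     base64.b64encode(b"secret").decode())
```

Comparing the locally computed signature against the one echoed back in an authentication failure response is exactly the debugging technique the paragraph above describes.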

Table Service API

The Table Service API supports the following table-level operations: Create Table, Query Tables, and Delete Table.

The Table Service API supports the following entity-level operations: Insert Entity, Query Entities, Update Entity, Merge Entity, and Delete Entity.

These operations are implemented using the appropriate HTTP verb:

  • DELETE - delete
  • GET - query
  • MERGE - merge
  • POST - insert
  • PUT - update
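The verb-to-operation mapping can be captured in a small lookup table. A sketch (the URL shape and helper name are illustrative assumptions, not taken from Neil's examples):

```python
# Maps Table Service entity operations to the HTTP verbs listed above.
OPERATION_VERBS = {
    "delete": "DELETE",
    "query": "GET",
    "merge": "MERGE",
    "insert": "POST",
    "update": "PUT",
}

def request_line(operation, table, entity_keys=""):
    """Build the HTTP request line for a Table Service operation.
    The URL shape is an illustrative simplification."""
    verb = OPERATION_VERBS[operation]
    return "%s /%s%s HTTP/1.1" % (verb, table, entity_keys)
```

For example, `request_line("insert", "Customers")` yields `POST /Customers HTTP/1.1`, while a single-entity query would address the entity by its PartitionKey and RowKey in the resource path.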

Neil continues with sections that provide examples of the Insert Entity and Query Entities operations, Get Entity, Blob Service API, Put Blob, Lease Blob, Queue Service API, Put Message, and Get Messages.

Mike Amundsen did a great deal of early work with the REST API.

Tofuller posted Announcing the Perfmon-Friendly Azure Log Viewer Plug-In on 8/18/2010:

The Background

About 3 months ago, as some colleagues and I were working on the "Advanced Debugging Hands On Lab for Windows Azure" (for more info contact me via this blog), we identified an interesting opportunity within the Azure MMC. If you've worked with this tool you may have seen that the default export option for the performance counters log is an Excel file; more specifically, a CSV file. This is fine for getting a dump of event log data or native Azure infrastructure logs, but for performance counter data this is not ideal.

The Problem

The format of the data exported by the Azure MMC made it impossible to use our existing Performance Monitor tool on Windows (perfmon) or our Performance Analysis of Logs (PAL) tool from CodePlex. These tools are essential for us in support as we do initial problem analysis. Issues like managed or unmanaged memory leaks, high- and low-CPU hangs, and process restarts are just a few examples of issues that start with perfmon analysis. These issues do not go away in the cloud!

The Solution

The Azure MMC team made an outstanding design decision that allowed us to very quickly build a plug-in that could provide additional options for exporting those performance counters however we want. Because the team decided to take advantage of the Managed Extensibility Framework (aka MEF), it was easy to extend the capabilities of the MMC with our own functionality. Here are the steps that were required to code up the solution:

  1. Start from a new WPF User Control Library project. 
  2. Next you'll want to set your project to target .NET Framework 3.5 if you are using Visual Studio 2010, because the version of MEF that the Azure MMC uses is pre-.NET 4.0.
  3. Add references to the following assemblies, located in your %InstallRoot%\WindowsAzureMMC\release folder:
    1. Microsoft.Samples.WindowsAzureMmc.Model
    2. Microsoft.Samples.WindowsAzureMmc.ServiceManagement
    3. MicrosoftManagementConsole.Infrastructure
    4. System.ComponentModel.Composition
    5. WPFToolkit
  4. Next, the implementation decision made in the Azure MMC base classes meant we had to follow the MVVM pattern for WPF to create a ViewModel that would bind to the controls rendered as part of the dialog when the "Show" button is clicked.  This ViewModel class must inherit from ViewerViewModelBase<T> which is found in the WindowsAzureMmc.ServiceManagement dll referenced above. T in this case will be your actual view with all of the XAML markup.
  5. The next part was easy,  to get the perfmon counter data we just needed to override the OnSearchAsync and OnSearchAsyncCompleted methods from the ViewerViewModelBase type which takes care of making the calls to the Azure APIs and retrieving the data.
  6. Now for the fun part, we still had to get the data in the proper structure to be readable by perfmon and PAL.  To do this we wrote an Export command (see Code Snippet 1) that would be triggered from the pop up in the same way it is triggered to export to excel.  The only difference is in this case we take all the incoming performance counter data and group it first by tickcount and then map that back to the unique columns which are created as "RoleName\RoleInstance\CounterID" (see Code Snippet 2).
  7. Now that we had all the data restructured, we needed to come up with a way to take the new CSV output and get it into the native BLG perfmon format.  Big thanks to my colleague Greg Varveris for the design approach here.  All we needed to do was use a Process.Start command to run a conversion of the CSV file to BLG using relog.exe and then launch the resulting file with perfmon directly!
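Steps 6 and 7 translate into straightforward code in any language. A hedged Python sketch (the sample record shape, function names, and relog.exe flags are assumptions based on the description above, not the plug-in's actual source):

```python
import csv
import io

def pivot_counters(samples):
    """Step 6: group raw counter samples by tick count, with one column
    per RoleName\\RoleInstance\\CounterID, the layout perfmon expects."""
    columns = sorted({r"%s\%s\%s" % (s["role"], s["instance"], s["counter"])
                      for s in samples})
    rows = {}
    for s in samples:
        col = r"%s\%s\%s" % (s["role"], s["instance"], s["counter"])
        rows.setdefault(s["tick"], {})[col] = s["value"]
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["tick"] + columns)
    for tick in sorted(rows):
        writer.writerow([tick] + [rows[tick].get(c, "") for c in columns])
    return out.getvalue()

def relog_command(csv_path, blg_path):
    """Step 7: build the relog.exe command line that converts the CSV
    to perfmon's native binary (BLG) format."""
    return ["relog.exe", csv_path, "-f", "BIN", "-o", blg_path]
```

The returned argument list is what a Process.Start-style call would launch, after which the resulting .blg file can be opened in perfmon directly.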

The Result

Once this DLL is built and dropped into the WindowsAzureMMC\release folder, it is automatically picked up by the MMC and you will see a new option in the "Performance Counters Logs" drop down. (Note: if you drop the DLL in the release folder when the Azure MMC is already running, you need to do a "refresh plugins" to get the new option to show up.)

Now when you click on "Show" there will be a new dialog that opens and allows you to open the data directly in perfmon.

Here's a quick snapshot of the perfmon data.  Notice it shows multiple roles, multiple instances, and multiple counters.  There is no limitation to the amount of data you want to pull down from the MMC and show in perfmon!

On top of getting to open the data directly in perfmon you also will find that the data is being saved on disk for you to collect and store for later viewing or feed into a tool like PAL to determine if you have any bad resource usage trends.  All you need to do is go to your C:\Users\%username%\AppData\Local\Temp and you will see the files there.

What Next?

We'd love to get feedback on this plug-in and whether or not it was useful to you for reviewing your Azure application performance data. You can download it here: Windows Azure MMC Downloads Page.

Tofuller continues with source code snippets.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry (@WayneBerry) explains The Real Cost of [SQL Azure] Indexes in this 8/19/2010 post to the SQL Azure Team blog:

In a previous blog post I discussed creating covered indexes to increase performance and reduce I/O usage. Covered indexes are a type of non-clustered index that “covers” all the columns in your query to deliver better performance than a table scan of all the data. An observant reader pointed out that there is a financial cost associated with creating covered indexes. He was right: SQL Azure charges for the space required to store the covered index on our servers. I thought it would be interesting to write a few Transact-SQL queries to determine the cost of my covered index; that is what I will be doing in this blog post.

Covered Indexes Consume Resources

There is no such thing as a free covered index; regardless of whether you are on SQL Server or SQL Azure, covered indexes consume resources. With SQL Server, you pay some of the costs upfront when you purchase the machine to run SQL Server, by buying additional RAM and hard drive space for the index. There are other costs associated with an on-premise SQL Server, like warranty, backup, power, cooling, and maintenance. All of these are hard to calculate on a monthly basis: you need to depreciate the machines over time, and some expenses are variable and unplanned for, like hard drive failures. These are overall server operating costs; there is no way to drill down and determine the cost of a single covered index. If there were, wouldn't it be nice to run a Transact-SQL statement to compute the monthly cost of the index? With SQL Azure you can do just that.

SQL Azure Pricing

Currently, SQL Azure charges $9.99 per gigabyte of data per month (official pricing can be found here). That is the cost for the range in which the actual size of the data you want to store falls, not the cap size of the database. In other words, if you are storing just a few megabytes on a 1 GB Web edition database, the cost is $9.99 a month. The ranges are 1, 5, 10, 20, 30, 40, and 50 gigabytes; the closer you are to those sizes, the lower the cost per byte to store your data. Here is a Transact-SQL statement that will calculate the cost per byte.

DECLARE @SizeInBytes bigint

SELECT @SizeInBytes =
 (SUM(reserved_page_count) * 8192)
    FROM sys.dm_db_partition_stats

DECLARE @CostPerByte float

SELECT    @CostPerByte = (CASE 
    WHEN @SizeInBytes/1073741824.0 < 1 THEN 9.99
    WHEN @SizeInBytes/1073741824.0 < 5 THEN 49.95
    WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99
    WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
    WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97
    WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96
    WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95
         END)  / @SizeInBytes
FROM    sys.dm_db_partition_stats

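The bracket lookup in the CASE expression can be sanity-checked outside the database. A sketch in Python (prices copied from the Transact-SQL above; illustrative only, not official pricing logic):

```python
GB = 1073741824.0

# (upper bound in GB, monthly price in dollars), mirroring the CASE expression.
PRICE_BRACKETS = [(1, 9.99), (5, 49.95), (10, 99.99), (20, 199.98),
                  (30, 299.97), (40, 399.96), (50, 499.95)]

def cost_per_byte(size_in_bytes):
    """Return the monthly cost per byte for a database of the given size."""
    size_gb = size_in_bytes / GB
    for upper, price in PRICE_BRACKETS:
        if size_gb < upper:
            return price / size_in_bytes
    raise ValueError("database exceeds the 50 GB maximum")

# A database using half of the 1 GB bracket still pays the full $9.99.
monthly = cost_per_byte(536870912) * 536870912
```

Because the whole bracket price is charged regardless of how full the bracket is, the cost per byte falls as the database approaches a bracket boundary, which is exactly what the article's query exposes.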
The Cost of Covered Indexes

Now that we know our true cost per byte, let’s figure out what each covered index costs. Note that the Transact-SQL can’t tell which indexes are covered indexes among all the non-clustered indexes; only the index creator knows why the index was created.

Here is some Transact-SQL to get our cost per month for each non-clustered index.

DECLARE @SizeInBytes bigint

SELECT @SizeInBytes =
 (SUM(reserved_page_count) * 8192)
    FROM sys.dm_db_partition_stats

DECLARE @CostPerByte float

SELECT    @CostPerByte = (CASE 
    WHEN @SizeInBytes/1073741824.0 < 1 THEN 9.99
    WHEN @SizeInBytes/1073741824.0 < 5 THEN 49.95 
    WHEN @SizeInBytes/1073741824.0 < 10 THEN 99.99  
    WHEN @SizeInBytes/1073741824.0 < 20 THEN 199.98
    WHEN @SizeInBytes/1073741824.0 < 30 THEN 299.97             
    WHEN @SizeInBytes/1073741824.0 < 40 THEN 399.96              
    WHEN @SizeInBytes/1073741824.0 < 50 THEN 499.95             
         END)  / @SizeInBytes
FROM    sys.dm_db_partition_stats

SELECT idx.name, SUM(reserved_page_count) * 8192 'bytes',
 	(SUM(reserved_page_count) * 8192) * @CostPerByte 'cost'
FROM sys.dm_db_partition_stats AS ps
    INNER JOIN sys.indexes AS idx ON idx.object_id = ps.object_id AND idx.index_id = ps.index_id
WHERE idx.type_desc = 'NONCLUSTERED'
GROUP BY idx.name

The results of my Adventure Works database look like this:


The most expensive index is IX_Address_AddressLine1_AddressLine2_City_StateProvince_PostalCode_CountryRegion, a covered index that speeds up lookups on the address table. It costs me almost 50 cents per month. With this information, I can weigh that cost against the performance benefits of having this covered index and determine if I want to keep it.

Keeping It All in Perspective

If I delete my IX_Address_AddressLine1_AddressLine2_City_StateProvince_PostalCode_CountryRegion index, does it save me money? Not if it doesn’t change the range I fall into. Because the Adventure Works database is less than 1 gigabyte, I pay $9.99 per month; as long as I don’t go over the top side of the range, creating another covered index doesn’t cost me any more. If you look at it that way, a covered index that doesn’t completely fill your database is basically free. Once you are committed to spending the money for the range you need for your data, you should create as many covered indexes (that increase performance) as will fit inside the maximum size.

This is basically the same as an on-premise SQL Server: once you have committed to spending the money for the server, adding additional covered indexes within the machine’s resources doesn’t cost you any more.


The prices for SQL Azure and the maximum database sizes can change in the future, so re-run the queries provided against current pricing to make sure your index costs remain accurate.

Mary Jo Foley (@maryjofoley) asserted It takes a lot of syncing to build a Microsoft personal cloud in this 8/19/2010 post to her All About Microsoft blog for ZDNet:

When Microsoft officials talk about syncing devices and services, they can be referring to any number of different Microsoft sync technologies. There’s ActiveSync, Windows Live Sync, Sync Framework, Windows Sync, the synchronization provided via the Zune PC software client… and lots more.

Several of these sync technologies are essential to realizing what Microsoft marketing execs (and Forrester Research analysts) have started referring to as the “personal cloud.” Microsoft execs talked up the potential of the personal cloud earlier this summer at both the Worldwide Partner Conference and the Microsoft Financial Analyst Meeting (FAM).

Corporate Vice President of Windows Consumer Marketing Brad Brooks told Wall Street analysts and press they’d be hearing and seeing more from Microsoft about the personal cloud later this year. He said that Microsoft is one of four tech companies (with the others being Apple, Google and Facebook) who are set to deliver “multiple components of the emerging Personal Cloud user experience.”

“Speaking of cloud and Windows, we have a unique point of view on the cloud for consumers, and we call it the PC. Only in this case we call it the personal cloud,” Brooks told FAM attendees in late July. “And the personal cloud, well, it’s going to connect all the things that are important to you and make them available and ready for you to use wherever you’re at, whenever you need it.”

Windows Live Essentials — a suite consisting of a number of Microsoft’s Windows Live services, and which just this week got a “beta refresh” — is one element of Microsoft’s personal cloud experience. Brooks also included Windows 7, Bing, Xbox Live, Zune, Windows Phone 7 and “client virtualization technologies that so far are aimed mostly at IT managers.”

Microsoft’s personal cloud experience works like this: Your Windows (preferably Windows 7) PC is the hub. From there, you can connect and sync various devices, like your phone, your gaming console, etc. In some cases, you’ll be able to sync directly from the devices to the cloud. But the main goal, from Microsoft’s standpoint, is to keep the PC at the center of a user’s syncing existence.

This fall/winter — when Microsoft rolls out the final version of its Windows Live Essentials 2011 bundle and its phone partners start selling Windows Phone 7 devices — Microsoft execs will be touting how these products are enhanced by the personal cloud. That sounds a lot fancier than saying Windows 7/Vista and Windows Phone 7 users will be able to install and run the new Windows Live Messenger, Mail, Family Safety parental controls and Live Sync (which is what they really mean).

Speaking of Live Sync, Microsoft officials have conceded (and beta testers realize) that the coming version allows users to sync their Windows PCs and Macs. But it doesn’t support phones — not even Windows Phones. And the Zune PC client (codenamed “Dorado”), which will enable the personal cloud synchronization between Windows PCs and Windows Phone 7 devices is dedicated to syncing digital media content, not things like contacts or e-mail, as Windows SuperSite’s Paul Thurrott (who is writing a Windows Phone 7 book) noted in a recent blog post.

Microsoft execs tend to gloss over this reality in their demos. What’s enabling them to seamlessly sync their photos on their Windows Phone 7s with their Windows 7 PCs? It’s not the coming version of Windows Live Sync. And I don’t think it’s the Zune sync, either.

Maybe it’s Windows Sync? That seemed to be what Brooks was telling FAM attendees last month, according to the transcript of his remarks (unless Brooks was talking about Windows Live Sync, which I’m doubting, since the coming 2011 Essentials version doesn’t support phones). Brooks said:

“Well, now with the new Windows Sync feature, I can choose one folder or choose all my folders on a PC, and choose to share them and they automatically sync up in the background whenever I’m connected to the Internet.  So, that means all my files and all my content across all these devices always keep in sync.”

Windows Sync is a feature built into Windows 7. According to Microsoft, “(u)sing Windows Sync, developers can write synchronization providers that keep data stores such as contacts, calendars, tasks, and notes on a computer or on a network synchronized with corresponding data stores in personal information managers (PIMs) and smart phones that support synchronization.”

After clicking around for a while on some of the Windows Sync links, I realized Windows Sync builds on Microsoft’s Sync Framework. What’s the Sync Framework? Microsoft’s description:

“Sync Framework is a comprehensive synchronization platform enabling collaboration and offline access for applications, services and devices. Developers can build synchronization ecosystems that integrate any application, any data from any store using any protocol over any network. Sync Framework features technologies and tools that enable roaming, sharing, and taking data offline.”

(By the way, Microsoft just rolled out this week Version 2.1 of the Sync Framework, which adds SQL Azure synchronization as an option.) [See article below.]

I tried to get more clarity from company execs about Microsoft’s consumer sync strategy but had little luck, as a result of many Softies being on vacation (and the fact that Microsoft is still, no doubt, ironing out the details of its fall rollouts.)

Does any of this under-the-covers stuff really matter? I realize Microsoft (hopefully) will isolate consumers from sync programming interfaces and sync providers, but I’m wondering how seamless — and complete — Microsoft’s personal cloud actually will be. “Version 1” of this personal cloud experience seems like it will be neither. Maybe that’s OK for a “V1.” I guess we’ll see soon….

LarenC announced Sync Framework 2.1 Available for Download in this 8/18/2010 post to the Sync Framework Team Blog:

Sync Framework 2.1 is available for download.

Sync Framework 2.1 includes all the great functionality of our 2.0 release, enhanced by several exciting new features and improvements. The most exciting of these lets you synchronize data stored in SQL Server or SQL Server Compact with SQL Azure in the cloud. We’ve added top customer requests like parameter-based filtering and the ability to remove synchronization scopes and templates from a database, and of course we’ve made many performance enhancements to make synchronization faster and easier. Read on for more detail or start downloading now!

SQL Azure Synchronization

With Sync Framework 2.1, you can leverage the Windows Azure Platform to extend the reach of your data to anyone that has an internet connection, without making a significant investment in the infrastructure that is typically required. Specifically, Sync Framework 2.1 lets you extend your existing on-premises SQL Server database to the cloud and removes the need for customers and business partners to connect directly to your corporate network. After you configure your SQL Azure database for synchronization, users can take the data offline and store it in a client database, such as SQL Server Compact or SQL Server Express, so that your applications operate while disconnected and your customers can stay productive without the need for a reliable network connection. Changes made to data in the field can be synchronized back to the SQL Azure database and ultimately back to the on-premises SQL Server database.

Sync Framework 2.1 also includes features to interact well with the shared environment of Windows Azure and SQL Azure. These features include performance enhancements, the ability to define the maximum size of a transaction to avoid throttling, and automatic retries of a transaction if it is throttled by Windows Azure. All of this is accomplished by using the same classes you use to synchronize a SQL Server database, such as SqlSyncProvider and SqlSyncScopeProvisioning, so you can use your existing knowledge of Sync Framework to easily synchronize with SQL Azure.
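The automatic retry behavior mentioned above follows a familiar pattern. As a rough illustration only (the ThrottledError type, apply_with_retry helper, and backoff parameters below are all hypothetical, not Sync Framework's actual API), a throttled-transaction retry loop might look like this:

```python
import time

class ThrottledError(Exception):
    """Hypothetical stand-in for the error raised when SQL Azure throttles."""

def apply_with_retry(apply_batch, max_retries=3, base_delay=0.01):
    """Retry a throttled batch of changes with exponential backoff.
    A sketch of the pattern only, not Sync Framework's implementation."""
    for attempt in range(max_retries + 1):
        try:
            return apply_batch()
        except ThrottledError:
            if attempt == max_retries:
                raise  # give up after the final retry
            time.sleep(base_delay * (2 ** attempt))

# A batch that is throttled twice before succeeding:
calls = {"n": 0}
def flaky_batch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottledError()
    return "applied"
```

Capping the transaction size, the other feature the paragraph mentions, complements this pattern by making each retried unit of work small enough to avoid being throttled in the first place.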

Bulk Application of Changes

Sync Framework 2.1 takes advantage of the table-valued parameter feature of SQL Server 2008 and SQL Azure to apply multiple inserts, updates, and deletes by using a single stored procedure call, instead of requiring a stored procedure call to apply each change. This greatly increases performance of these operations and reduces the number of round trips between client and server during change application. Bulk procedures are created by default when a SQL Server 2008 or SQL Azure database is provisioned.

Parameter-based Filtering

Sync Framework 2.1 enables you to create parameter-based filters that control what data is synchronized. Parameter-based filters are particularly useful when users want to filter data based on a field that can have many different values, such as user ID or region, or a combination of two or more fields. Parameter-based filters are created in two steps. First, filter and scope templates are defined. Then, a filtered scope is created that has specific values for the filter parameters. This two-step process has the following advantages:

  • Easy to set up. A filter template is defined one time. Creating a filter template is the only action that requires permission to create stored procedures in the database server. This step is typically performed by a database administrator.

  • Easy to subscribe. Clients specify parameter values to create and subscribe to filtered scopes on an as-needed basis. This step requires only permission to insert rows in synchronization tables in the database server. This step can be performed by a user.

  • Easy to maintain. Even when several parameters are combined and lots of filtered scopes are created, maintenance is simple because a single, parameter-based procedure is used to enumerate changes.

Removing Scopes and Templates

Sync Framework 2.1 adds the SqlSyncScopeDeprovisioning and SqlCeSyncScopeDeprovisioning classes to enable you to easily remove synchronization elements from databases that have been provisioned for synchronization. By using these classes you can remove scopes, filter templates, and the associated metadata tables, triggers, and stored procedures from your databases.

SQL Server Compact 3.5 SP2 Compatibility

The Sync Framework 2.1 SqlCeSyncProvider database provider object uses SQL Server Compact 3.5 SP2. Existing SQL Server Compact databases are automatically upgraded when Sync Framework connects to them. Among other new features, SQL Server Compact 3.5 SP2 makes available a change tracking API that provides the ability to configure, enable, and disable change tracking on a table, and to access the change tracking data for the table. SQL Server Compact 3.5 SP2 can be downloaded here.

For more information about Sync Framework 2.1, including feature comparisons, walkthroughs, how-to documents, and API reference, see the product documentation.

David Ramel recommended Try Your Own OData Feed In The Cloud--Or Not! in this 8/12/2010 post to Visual Studio Magazine’s Data Driver column:


So, being a good Data Driver, I was all pumped up to tackle a project exploring OData in the cloud and Microsoft's new PHP drivers for SQL Server, the latest embodiments of its "We Are The World" open-source technology sing-along.

I was going to throw some other things in, too, if I could, like the Pivot data visualization tool. I literally spent days boning up on the technologies and trying different tutorials (by the way, if someone finds on the Web a tutorial on anything that actually works, please let me know about it; you wouldn't believe all the outdated, broken crap out there--some of it even coming from good ol' Redmond).

So part of the project was going to use my very own OData feed in the cloud, hosted on SQL Azure. The boys at SQL Azure Labs worked up a portal that lets you turn your Azure-hosted database into an OData feed with a couple of clicks. It also lets you try out the Project Houston CTP 1 and SQL Azure Data Sync.

The portal states:

SQL Azure Labs provides a place where you can access incubations and early preview bits for products and enhancements to SQL Azure. The goal is to gather feedback to ensure we are providing the features you want to see in the product.

Well, I provided some feedback: It doesn't work.

Every time I clicked on the "SQL Azure OData Service" tab and checked the "Enable OData" checkbox, I got the error in Fig. 1.


Figure 1. Problems on my end when I try to enable the SQL Azure OData Service.

I figured the problem was on my end. It always is. I have an uncanny knack of failing where others succeed in anything technology-related. It's always some missed configuration or incorrect setting or wrong version or outdated software or … you get the idea. Sometimes the cause of my failure is unknown. Sometimes it's just the tech gods punishing me for something, it seems. Check out what just now happened as I was searching for some links for this blog. This is a screenshot of the search results:


Figure 2. Stephen Forte's blog can be difficult to read at times. But it's absolutely not his fault.

I mean, does this stuff happen to anyone else?

Basically, though, I'm just not that good. But hey, there are other tech dilettantes out there or beginners who like to muck around where they don't belong, so I keep trying in the hopes of learning and sharing knowledge. And, truthfully, 99 percent of the time, I persevere through sheer, stubborn, blunt force trial-and-error doggedness. But it usually takes insane amounts of time--you've no idea.

So I took my usual approach and started trying different things. Different databases to upload to Azure. Different scripts to upload these different databases. Different servers to host these different databases uploaded with different scripts. I tried combination after combination, and nothing worked. I combed forums and search engines. I found no help.

(By the way, try searching for that error message above. I can almost always find a solution to problems like this by Googling the error message. But in this case, nothing comes up except for one similar entry--but not an exact match. That is absolutely incredible. I didn't even think there were any combinations of words that didn't come up as hits in today's search engines.)

Usually, a call to tech support or a forum post is my last resort. They don't usually work, and worse, they often just serve to highlight how ignorant I am.

But, after days of trying different things, I sent a message to the Lab guys. That, after all, is the purpose for the Lab, as they stated above: feedback.

So things went something like this in my exchange with the Labs feedback machine:

Me: When I check box to enable Odata on any database, I get error:

SQL Azure Labs - OData Incubation Error Report
Error: Data at the root level is invalid. Line 1, position 1.
Time: 8/9/2010 2:56:16 PM

D., at SQL Azure Labs: Adding a couple others..
Anything change guys?

J., at SQL Azure Labs: None from me.

M., at SQL Azure Labs: I'm seeing the same error, though I haven't changed anything either.
J., can you look at the change log of the sources to see if anything changed?
D., have you published any changes in the past week?

J., at SQL Azure Labs: OK. Will have to wait until this afternoon, however.

M., at SQL Azure Labs: J.; might something have changed on the ACS side?
The error is coming from an xml reader somewhere, and occurs when the "Enable OData" checkbox is selected...

So that was a couple days ago and I'm still waiting for a fix. But at least I know it wasn't me, for a change. That's a blessed relief.

But that made me wonder: Just how many people are tinkering around with OData if this issue went unreported for some unknown amount of time, and hasn't been fixed for several days? Is it not catching on? Is no one besides me interested in this stuff? That's scary.

I'm not complaining, mind you. I was gratified to get a response, and a rapid one at that--not a common occurrence with big companies. I think Microsoft in general and the Lab guys specifically are doing some great stuff. I was enthusiastic about the possibilities of OData and open government/public/private data feeds being accessible to all. And the new ways coming out to visualize data and manipulate it are cool. And a few others out there agree with me, judging from various blog entries. But now I wonder. Maybe I'm in the tiny minority.

Anyway, today I found myself at Data Driver deadline with no OData project to write about, as I had planned to. So I had to come up with something, quick. Hmmm... what to write about?

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Alex James (@adjames) posted a detailed OData and Authentication – Part 8 – OAuth WRAP tutorial to the WCF Data Services Blog on 8/19/2010:

OAuth WRAP is a claims-based authentication protocol supported by AppFabric Access Control (ACS), which is part of Windows Azure.

But most importantly it is REST (and thus OData) friendly too.


The idea is that you authenticate against an ACS server and acquire a Simple Web Token or SWT – which contains signed claims about identity / roles / rights etc – and then embed the SWT in requests to a resource server that trusts the ACS server.

The resource server then looks for and verifies the SWT by checking it is correctly signed, before allowing access based on the claims made in the SWT.
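Concretely, the acquisition step is nothing more than a form POST. Here is a minimal sketch (in Python, purely as an illustration; the namespace, issuer name, key and service URL are placeholders) of what that request looks like on the wire, mirroring the C# WebClient code later in this post:

```python
from urllib.parse import urlencode

def build_token_request(namespace, issuer_name, issuer_key, applies_to):
    """Build the URL and form body for an OAuth WRAP token request.

    The client POSTs its issuer name/key plus the scope it wants a
    token for; ACS answers with a signed SWT.
    """
    url = "https://%s.accesscontrol.windows.net/WRAPv0.9" % namespace
    body = urlencode({
        "wrap_name": issuer_name,     # issuer name configured in ACS
        "wrap_password": issuer_key,  # issuer key (shared secret)
        "wrap_scope": applies_to,     # the scope's applies_to URL
    })
    return url, body

url, body = build_token_request(
    "mynamespace", "partner", "not-a-real-key",
    "https://odata.example.com/service.svc")
print(url)  # https://mynamespace.accesscontrol.windows.net/WRAPv0.9
```

The same three form fields appear again in Step 2's test code and in the client's GetToken() method.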

If you want to learn more about OAuth WRAP itself here’s the spec.


Now that we know the principles behind OAuth WRAP, it’s time to map them into the OData world.

Our goal is simple. We want an OData service that uses OAuth WRAP for authorization and a client to test it end to end.

Why OAuth WRAP?

You might be wondering why this post covers OAuth WRAP and not OAuth 2.0.

OAuth 2.0 essentially combines the best features of OAuth 1.0 and OAuth WRAP.

Unfortunately OAuth 2.0 is not yet a ratified standard, so ACS doesn’t support it yet. On the other hand OAuth 1.0 is cumbersome for RESTful protocols like OData. So that leaves OAuth WRAP.

However, once it is ratified, OAuth 2.0 will essentially deprecate OAuth WRAP and ACS will rev to support it. When that happens you can expect to see a new post in this Authentication series.


First we’ll provision an ACS server to act as our identity server.

Next we’ll configure our identity server with appropriate roles, scopes and claim transformation rules etc.

Then we’ll create an HttpModule (see part 5) to intercept all requests to the server, which will crack open the SWT, convert it into an IPrincipal and store it in HttpContext.Current.User. This way it can be accessed later for authorization purposes inside the Data Service.

Then we’ll create a simple OData service using WCF Data Services and protect it with a custom HttpModule.

Finally we’ll write client code to authenticate against the ACS server and acquire a SWT. We’ll use the techniques you saw in part 3 to send the SWT as part of every request to our OData service.

Step 1 – Provisioning an ACS server

First you’ll need a Windows Azure account and a running AppFabric namespace.

Once your namespace is running you also have a running ACS server.

Step 2 – Configuring the ACS server

To correctly configure the ACS server you’ll need to install the Windows Azure Platform AppFabric SDK, which you can find here.

ACM.exe is a command line tool that ships as part of the AppFabric SDK, and that allows you to create Issuers, TokenPolicies, Scopes and Rules.

For an introduction to ACM.exe and ACS look no further than this excellent guide by Keith Brown.

To simplify our acm commands you should edit your ACM.exe.config file to include information about your ACS like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="host" value="accesscontrol.windows.net"/>
    <add key="service" value="{Your service namespace goes here}"/>
    <add key="mgmtkey" value="{Your Windows Azure Management Key goes here}"/>
  </appSettings>
</configuration>

Doing this saves you from having to re-enter this information every time you run ACM.

Very handy.

Claims Transformation

Before we start configuring our ACS we need to know a few principles…

Generally claims authentication is used to translate a set of input claims into a signed set of output claims.
Sometimes this extends to Federation, which allows trust relationships to be established between identity providers, such that a user on one system can gain access to resources on another system.

However, in this blog post we are going to keep it simple and skip federation.
Don’t worry, though; we’ll add federation in the next post.


In ACS terms an Issuer represents a security principal. And whether we want federation or not our first step is to create a new issuer like this:

> acm create issuer
    -name: partner
    -issuername: partner

This will generate a key which you can retrieve by issuing this command:

> acm getall issuer
Count: 1

         id: iss_89f12a7ed023c3b7b0a85f32dff96fed2014ad0a
       name: odata-issuer
issuername: odata-issuer
        key: 9QKoZgtxxU4ABv8uiuvaR+k0cOmUxfEOE0qfPK2lCJY=
previouskey: 9QKoZgtxxU4ABv8uiuvaR+k0cOmUxfEOE0qfPK2lCJY=
  algorithm: Symmetric256BitKey

Our clients are going to need to know this key, so make a note of it for later.

Token Policy

Next we need a token policy. Token Policies specify a timeout indicating how long a new Simple Web Token (or SWT) should be valid; put another way, how long before the SWT expires.

When creating a token policy you need to balance security versus ease of use and convenience. The shorter the timeout the more likely it is to be based on up to date Identity and Role information, but that comes at the cost of frequent refreshes, which have performance and convenience implications.

For our purposes a timeout of 1 hour is probably about right. So we create a new policy like this:

> acm create tokenpolicy
    -name: odata-service-policy
    -timeout: 3600

Where 3600 is the number of seconds in an hour. To see what you created issue this command:

> acm getall tokenpolicy
  Count: 1

     id: tp_aaf3fd9ca64d4471a5c7b5c572c087fb
   name: odata-service-policy
timeout: 3600
    key: WRwJkQ9PgbhnIUgKuuovw/6yVAo/Dh0qrb7rqQWnsBk=

We’ll need both the id and key later.

This key is what we share with our resource servers, so that they can check SWTs are correctly signed. We’ll come back to that later.


A service may have multiple ‘scopes’, each with a different set of access rules and rights.

Scopes are linked to a token policy, which tells ACS how long SWTs should remain valid and how to sign them. Scopes also contain a set of rules which tell ACS how to translate incoming claims into the claims embedded in the SWT.

When requesting a SWT, a client must include an ‘applies_to’ parameter, which tells ACS which scope it needs a SWT for, and consequently which token policy and rules should apply when constructing the SWT.

Here are just some of the reasons you might need multiple scopes:

  • A multi-tenant resource server would probably need different rules per tenant.
  • A single-tenant resource server might have distinct sets of independently protected resources.

But for our purposes one scope is enough.

> acm create scope
    -name: odata-service-scope
    -appliesto: {your service url goes here}
    -tokenpolicyid: {your token policy id goes here}

For ‘appliesto’ I chose the URL for our planned OData service. Notice too that we bind the scope to the token policy we just created via its id.

You can retrieve this scope by executing this:

> acm getall scope
             Count: 1

                id: scp_c028015be790fb5d3ead59307bb3e537d586eac0
              name: odata-service
     tokenpolicyid: tp_d8c65f770fb14a90bc707e958a722df9

You’ll need to know the scopeid to add Rules to the scope.


ACS has one real job, which you could sum up with these four words: “Claims in, claims out”. Essentially ACS is just a claims transformation engine, and the transformation is achieved by applying a series of rules.

The rules are associated with a scope, and tell ACS how to transform input claims for the target scope (via applies_to) into signed output claims.

In our simple example, all we really want to do is this: “If you know the key of my issuer, we’ll sign a claim that you are a ‘User’.”

To do that we need this rule:

> acm create rule
    -name: partner-is-user
    -scopeid: {your scope id goes here}
    -inclaimissuerid: {your issuer id goes here}
    -inclaimtype: Issuer
    -inclaimvalue: partner
    -outclaimtype: Roles
    -outclaimvalue: User

"Issuer" is a special type of input claim type (normally input claim type is just a string that needs to be found in an incoming SWT) that says anyone who demonstrates direct knowledge of the issuer key will receive a SWT that includes that output claim specified in the rule*.

So this particular rule means anyone who issues an OAuth WRAP request with the Issuer name as the wrap_name and the Issuer key as the wrap_password will receive a signed SWT that claims Roles=User.

*NOTE: there are other ways that this particular rule can match, but they are outside the scope of this blog post; check out this excellent guide by Keith Brown for more.

To test that our rule is working try this:

WebClient client = new WebClient();
client.BaseAddress = "https://{your-namespace-goes-here}";
NameValueCollection values = new NameValueCollection();
values.Add("wrap_name", "partner");
values.Add("wrap_password", "9QKoZgtxxU4ABv8uiuvaR+k0cOmUxfEOE0qfPK2lCJY=");
values.Add("wrap_scope", "");
byte[] responseBytes = client.UploadValues("WRAPv0.9", "POST", values);
string response = Encoding.UTF8.GetString(responseBytes);
string token = response.Split('&')
    .Single(value => value.StartsWith("wrap_access_token="))
    .Split('=')[1];

When I run that code I get this:

As you can see, Roles%3dUser is simply a URL-encoded version of Roles=User, so assuming this is a correctly signed SWT (more on that in Step 3), our rule appears to be working.
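Pulling the SWT out of that response is just string processing: the body is form-encoded, so you split on ‘&’, find the wrap_access_token pair, and URL-decode its value. A sketch of the same parsing (in Python, with a made-up response body in the shape ACS returns) looks like this:

```python
from urllib.parse import unquote

def extract_swt(response_body):
    """Extract and URL-decode the SWT from an ACS WRAPv0.9 response body."""
    pair = next(p for p in response_body.split("&")
                if p.startswith("wrap_access_token="))
    return unquote(pair.split("=", 1)[1])

# A made-up response body shaped like the real one.
resp = ("wrap_access_token=Roles%3dUser%26Issuer%3dpartner"
        "&wrap_access_token_expires_in=3600")
print(extract_swt(resp))  # Roles=User&Issuer=partner
```

Note that `startswith("wrap_access_token=")` correctly skips the sibling `wrap_access_token_expires_in` pair, and the `split("=", 1)` keeps any encoded characters in the value intact.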

Step 3 – Creating the OAuth WRAP HttpModule

Now that we have our ACS server correctly configured, the next step is to create an HttpModule to crack open SWTs and map them into principals for use inside Data Services.

Let’s just take the code we wrote in parts 4 & 5 and rework it for OAuth WRAP, first by creating an OAuthWrapAuthenticationModule that looks like this:

public class OAuthWrapAuthenticationModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest +=
           new EventHandler(context_AuthenticateRequest);
    }

    void context_AuthenticateRequest(object sender, EventArgs e)
    {
        HttpApplication application = (HttpApplication)sender;
        if (!OAuthWrapAuthenticationProvider.Authenticate(application.Context))
            Unauthenticated(application);
    }

    void Unauthenticated(HttpApplication application)
    {
        // you could ignore this and rely on authorization logic to
        // intercept requests etc. But in this example we fail early.
        application.Context.Response.Status = "401 Unauthorized";
        application.Context.Response.StatusCode = 401;
        application.Context.Response.AddHeader("WWW-Authenticate", "WRAP");
    }

    public void Dispose() { }
}

As you can see this relies on an OAuthWrapAuthenticationProvider which looks like this:

public class OAuthWrapAuthenticationProvider
{
    static TokenValidator _validator = CreateValidator();

    static TokenValidator CreateValidator()
    {
        // these settings come from web.config (see below)
        string acsHostName = ConfigurationManager.AppSettings["acsHostName"];
        string serviceNamespace = ConfigurationManager.AppSettings["serviceNamespace"];
        string trustedAudience = ConfigurationManager.AppSettings["trustedAudience"];
        string trustedSigningKey = ConfigurationManager.AppSettings["trustedSigningKey"];
        return new TokenValidator(
            acsHostName, serviceNamespace, trustedAudience, trustedSigningKey);
    }

    public static TokenValidator Validator
    {
        get { return _validator; }
    }

    public static bool Authenticate(HttpContext context)
    {
        if (!HttpContext.Current.Request.IsSecureConnection)
            return false;

        if (!HttpContext.Current.Request.Headers.AllKeys.Contains("Authorization"))
            return false;

        string authHeader = HttpContext.Current.Request.Headers["Authorization"];

        // check that it starts with 'WRAP'
        if (!authHeader.StartsWith("WRAP "))
            return false;

        // the header should be in the form 'WRAP access_token="{token}"'
        // so lets get the {token}
        string[] nameValuePair = authHeader
                                    .Substring("WRAP ".Length)
                                    .Split(new char[] { '=' }, 2);

        if (nameValuePair.Length != 2 ||
            nameValuePair[0] != "access_token" ||
            !nameValuePair[1].StartsWith("\"") ||
            !nameValuePair[1].EndsWith("\""))
            return false;

        // trim off the leading and trailing double-quotes
        string token = nameValuePair[1].Substring(1, nameValuePair[1].Length - 2);

        if (!Validator.Validate(token))
            return false;

        var roles = GetRoles(Validator.GetNameValues(token));

        HttpContext.Current.User = new GenericPrincipal(
            new GenericIdentity("partner"), roles);
        return true;
    }

    static string[] GetRoles(Dictionary<string, string> nameValues)
    {
        if (!nameValues.ContainsKey("Roles"))
            return new string[] { };
        return nameValues["Roles"].Split(',');
    }
}

As you can see the Authenticate method does a number of things:

  • Verifies we are using HTTPS because it would be insecure to pass SWT tokens around over straight HTTP.
  • Verifies that the authorization header exists and it is a WRAP header.
  • Extracts the SWT token from the authorization header.
  • Asks a TokenValidator to validate the token. More on this in a second.
  • Then extracts the Roles claims from the token (it assumes there is a Roles claim that contains a ',' delimited list of roles).
  • Finally, if every check passes, it constructs a GenericPrincipal with a hard-coded identity set to ‘partner’ and the list of roles found in the SWT, and assigns it to HttpContext.Current.User.

In our example the identity itself is hard-coded because currently our ACS rules don’t make any claims about the username; they only make role claims. Clearly, though, if we added more ACS rules you could include a username claim too.

The TokenValidator used in the code above is lifted from Windows Azure AppFabric v1.0 C# samples, which you can find here. If you download and unzip these samples you’ll find the TokenValidator here:


Our CreateValidator() method creates a shared instance of the TokenValidator, and as you can see we are pulling these settings from web.config:

     <add key="acsHostName" value=""/>
     <add key="serviceNamespace" value="{your namespace goes here}"/>
     <add key="trustedAudience" value=""/>
     <add key="trustedSigningKey" value="{your token policy key goes here}>

The most interesting one is the trustedSigningKey.
This is a key shared between ACS and the resource server (in our case, our HttpModule); it is the key from the token policy we created in Step 2.

The ACS server uses the token policy key to create an HMACSHA256 hash of the claims, which gets appended to the claims to complete the SWT. To verify that the SWT and its claims are valid, the resource server simply re-computes the hash and compares.
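That sign-and-compare step is easy to see in miniature. Here is a hedged sketch (in Python, with a made-up key and a tiny claim set; real SWTs carry more claims, such as Issuer, Audience and ExpiresOn) of a resource server re-computing the HMACSHA256 over the claims and comparing it with the signature appended to the token:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote, unquote

def verify_swt(swt, signing_key_b64):
    """Re-compute the HMACSHA256 over a SWT's claims and compare.

    A SWT is a form-encoded string of claims whose final pair,
    HMACSHA256=<url-encoded base64 signature>, signs everything
    that precedes it with the shared token policy key.
    """
    claims, sep, sig = swt.rpartition("&HMACSHA256=")
    if not sep:
        return False  # no signature present
    key = base64.b64decode(signing_key_b64)
    expected = base64.b64encode(
        hmac.new(key, claims.encode("utf-8"), hashlib.sha256).digest())
    return hmac.compare_digest(unquote(sig).encode("utf-8"), expected)

# Round trip with a made-up 256-bit key and a tiny claim set.
key_b64 = base64.b64encode(b"0" * 32).decode()
claims = "Roles=User&Issuer=partner&ExpiresOn=1290000000"
sig = base64.b64encode(
    hmac.new(base64.b64decode(key_b64), claims.encode("utf-8"),
             hashlib.sha256).digest())
token = claims + "&HMACSHA256=" + quote(sig)
print(verify_swt(token, key_b64))                   # True
print(verify_swt("Roles=Admin&" + token, key_b64))  # False: claims tampered
```

Because the hash covers every claim, tampering with any of them (or with the signature) makes the comparison fail, which is exactly what the TokenValidator does for us in the C# code.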

Now that we’ve got our module we simply need to register it with IIS via the web.config like this:

       <add name="OAuthWrapAuthenticationModule"


Step 4 – Creating an OData Service

Next we need to add (if you haven’t already) an OData Service.

There are lots of ways to create an OData service using WCF Data Services, but by far the easiest way to create a read/write service is to use the Entity Framework.

Now, because we’ve converted the OAuth WRAP SWT into a GenericPrincipal by the time requests hit our Data Service, all the authorization techniques we already know using QueryInterceptors and ChangeInterceptors are still applicable.

So you could easily write code like this:

[QueryInterceptor("Orders")]
public Expression<Func<Order, bool>> OrdersFilter()
{
    if (!HttpContext.Current.Request.IsAuthenticated)
        return (Order o) => false;

    var user = HttpContext.Current.User;
    if (user.IsInRole("User"))
        return (Order o) => true;
    else
        return (Order o) => false;
}

And of course you can rework the HttpModule and interceptors as needed if your claims get more involved.

Step 5 – Acquiring and using a SWT Token

The final step is to write a client that will send a valid SWT with each OData request.

In part 3 we explored the available client-side hooks, so we know that we can hook up to DataServiceContext.SendingRequest like this:

ctx.SendingRequest += new EventHandler<SendingRequestEventArgs>(OnSendingRequest);

And in our event handler we can add headers to the outgoing request. For OAuth WRAP we need to add an authorization header in the form:

Authorization:WRAP access_token="{YOUR SWT GOES HERE}"

NOTE: the double quotes (") are actually part of the format, but the curly brackets ({ and }) are not. See the string.Format call below if you have any doubts.

So our OnSendingRequest event handler looks like this:

static void OnSendingRequest(object sender, SendingRequestEventArgs e)
{
    e.RequestHeaders.Add("Authorization",
        string.Format("WRAP access_token=\"{0}\"", GetToken()));
}

As you can see this uses GetToken() to acquire the actual SWT:

static string _token = null;

static string GetToken()
{
    if (_token == null)
    {
        WebClient client = new WebClient();
        client.BaseAddress = "https://{your-namespace-goes-here}";
        NameValueCollection values = new NameValueCollection();
        values.Add("wrap_name", "partner");
        values.Add("wrap_password", "{Issuer Key goes here}");
        values.Add("wrap_scope", "");
        byte[] responseBytes = client.UploadValues("WRAPv0.9", "POST", values);
        string response = Encoding.UTF8.GetString(responseBytes);
        string token = response.Split('&')
            .Single(value => value.StartsWith("wrap_access_token="))
            .Split('=')[1];

        _token = HttpUtility.UrlDecode(token);
    }
    return _token;
}

As you can see, we acquire the SWT once (by demonstrating knowledge of the Issuer key) and, assuming that is successful, we cache it for later reuse.

Finally, if we issue queries like this:

try
{
    foreach (Order order in ctx.Orders)
    {
        // process each order
    }
}
catch (DataServiceQueryException ex)
{
    //var scheme = ex.Response.Headers["WWW-Authenticate"];
    var code = ex.Response.StatusCode;
    if (code == 401)
        _token = null;
}

and our token has expired, as it will after 60 minutes, an exception will occur. We can then null out the cached SWT so that any retry forces our code to acquire a new one.
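The cache-and-invalidate pattern in the snippets above is language-agnostic. Here is a minimal sketch of it in Python (the acquire callable is a stand-in for GetToken()'s call to ACS):

```python
class TokenCache:
    """Cache a SWT and re-acquire it after a 401 (expired token).

    `acquire` is any callable that fetches a fresh token; here it
    stands in for the GetToken() logic above.
    """
    def __init__(self, acquire):
        self._acquire = acquire
        self._token = None

    def token(self):
        if self._token is None:
            self._token = self._acquire()
        return self._token

    def invalidate(self):
        # call this when a request comes back 401 Unauthorized
        self._token = None

calls = []
def fake_acquire():
    calls.append(1)
    return "swt-%d" % len(calls)

cache = TokenCache(fake_acquire)
assert cache.token() == cache.token()  # second call hits the cache
cache.invalidate()                     # simulate an expired token / 401
print(cache.token())  # swt-2
```

The trade-off mirrors the token policy timeout discussion in Step 2: a longer timeout means fewer round trips to ACS, at the cost of staler claims.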


In this post we’ve come a long way. We’ve now got a simple OData and OAuth WRAP authentication scenario working end to end.

It is a good foundation to build upon. But there are a few things we can do to make it better.

We could:

  • Configure our ACS to federate identities across domains, and configure our client code to do SWT exchange to go from one domain to another.
  • Create an expiring cache of Principals so that we don’t need to re-validate every time a new request is received.
  • Upgrade our Principal object so it can handle more general claims rather than just User/Roles.

We’ll address these issues in Part 9.

Vittorio Bertocci (@vibronet) announced “Programming Windows Identity Foundation” has been sent to the printer and “[this may look weird at first, but bear with me]” on 8/18/2010:

The Roman numerals notation emerged with Roman civilization itself, around the 9th century BC, though its roots go all the way back to the Etruscans.

It is not an especially handy system: it’s not well suited to representing large numbers, and arithmetic (especially multiplication and division) gets tricky real fast. Nonetheless it beat counting with fingers, scratches on sticks and stones, and backed the growth and development of Western civilization for more than two millennia. Although scientists and professionals managed to do their thing despite the inherent complexities of the system, the layman was forced to rely on experts for anything beyond trivial accounting.

What I find absolutely amazing is that Europe got exposed to Hindu-Arabic numerals, an obviously superior system, before the year 1000; and our good Fibonacci, who learned about the system in Africa, even wrote a book about it. Despite that, pretty much everybody stubbornly stuck with the old system well into medieval times.

You know what changed everything? Printing. Once printing was invented, information started to circulate fast and the superiority of the new system became evident to a wider and wider audience. Network effect and Darwinian selection did the rest, and today we pretty much all use the new system. Now anybody with basic education can do most of the math he or she needs, and science advanced to marvels which I doubt would have been invented or discovered if we were stuck in some Roman numerals-fueled steampunk nightmare.

Why did I bore you with that tangent? Because I believe there’s an important lesson to be learned here: no matter how incredibly good an idea is, it’s the availability of the right technology that can make or break its fortunes.

The idea of claims has been around for quite some time now; however, despite the wide consensus it gathered, it didn’t enjoy widespread adoption until recently. In fact, you just have to look at our platform to observe a Cambrian explosion of products and services which are taking hard dependencies on claims. What happened? Why now?

I’ll tell you what happened on our platform: Windows Identity Foundation showed up on the scene. Windows Identity Foundation, which is at the heart of Active Directory Federation Services and SharePoint 2010, can easily be in your applications and services, too. Windows Identity Foundation gave legs to ideas that, while very compelling, often failed to cross the chasm between the whiteboard and a functioning token deserializer or a manageable STS.

Windows Identity Foundation is what makes it possible for you to take advantage of the claims-based identity patterns, without feeling the pain of implementing the entire stack yourself. Since 2007 my job included evangelizing Windows Identity Foundation: a great experience, from which I learned a lot. One of the things which I’ve observed is that oftentimes people have a hard time using WIF in the right way, because they are stuck in mental models tied to the artifacts of the old way of doing things, such as dealing with credentials and protocols directly. This happens to security experts and to generalist developers alike. Invariably, just a bit of help in seeing things from the right angle is enough to push people past the bump and unleash great productivity; like many things on the Internet, once seen claims-based identity cannot be un-seen. The frustrating part of this is, though, that without that little help it’s not always easy to go past the bump. If you follow this blog you know that we go out of our way to provide you with samples, learning materials and occasions to learn through live and online sessions: but I wanted to do more, if possible. I wanted to capture some of the experience I gathered in the last few years and package it in a format that beginners and experts alike could consume.

The result of that effort has been sent to the printers yesterday, and it’s the book Programming Windows Identity Foundation.

In later posts I will perhaps go into further detail about the table of contents, the people who contributed to the book, and even some content excerpts; but right now I just want to breathe and look back at the reasons for which I took on this commitment, which is what I did while writing this weird post.

Writing this book has been hard work, but I truly, truly hope that it will help you past the bumps you may encounter and fully enjoy the power of claims-based identity.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team posted a Real World Windows Azure: Interview with Sinclair Schuller, Chief Executive Officer at Apprenda case study on 8/19/2010:

As part of the Real World Windows Azure series, we talked to Sinclair Schuller, CEO at Apprenda, about using the Windows Azure platform to deliver the company's middleware solution, which helps Apprenda's customers deliver cloud applications. Here's what he had to say:

MSDN: Tell us about Apprenda and the services you offer.

Schuller: Apprenda serves independent software vendors (ISVs) in the United States and Europe that build business-to-business software applications on the Microsoft .NET Framework. Our product, SaaSGrid, is a next-generation application server built to solve the architecture, business, and operational complexities of delivering software as a service in the cloud. 

MSDN: What was the biggest challenge you faced prior to implementing the Windows Azure platform?

Schuller: We wanted to offer customers a way to offload some of their server capacity needs to a cloud solution and integrate their cloud capacity with their on-premises capacity. We looked at other cloud providers, like Google App Engine, but those are fundamentally the wrong solutions for our target customers because they do not allow developers enough flexibility in how they build applications for the cloud.

MSDN: Can you describe the solution you built with the Windows Azure platform?

Schuller: SaaSGrid is the unifying middleware for optimizing application delivery for any deployment paradigm. With SaaSGrid and the Windows Azure platform, we can offer our customers more infrastructure options and they can build very sophisticated applications with the .NET Framework. We also see cloud computing as the biggest shift in how infrastructure is provisioned and consumed, and Windows Azure gives customers the ability to take full advantage of the simplified provisioning and database scaling capabilities of Windows Azure and Microsoft SQL Azure.

MSDN: What makes your solution unique?

Schuller: The software industry is moving to a software-as-a-service model. Embracing this change requires developers to refactor existing applications and build out new infrastructure in order to move from shipping software to delivering software. By coupling the infrastructure capabilities of Windows Azure with SaaSGrid, we can offer our customers an incredibly robust, highly efficient platform at a low cost. Plus, customers can go to market with their cloud offerings in record time.

MSDN: What benefits have you seen since implementing the Windows Azure platform?

Schuller: With SaaSGrid and Windows Azure, ISVs can move their existing .NET-based applications to the cloud up to 60 percent faster and with 80 percent less new code than developing from the ground up. Customers do not have to invest significant capital and attain lower application delivery costs while ensuring application responsiveness. At the same time, with Windows Azure, customers can plan an infrastructure around baseline average capacity, rather than building around peak compute-intensive loads, and offset peak loads with Windows Azure. This helps our customers reduce their overall IT infrastructure footprint by as much as 70 percent.

For more information about Apprenda, visit: To read more Windows Azure customer success stories, visit:

Scott Dunsmore, et al., released patterns & practices - Windows Phone 7 Developer Guide to CodePlex on 8/17/2010:

Welcome to patterns & practices Windows Phone 7 Developer Guide community site

This new guide from patterns & practices will help you design and build applications that target the new Windows Phone 7 platform.

The key themes for these projects are:

  1. A Windows Phone 7 client application
  2. A Windows Azure backend for the system [Emphasis added]

As usual, we'll periodically publish early versions of our deliverable on this site. Stay tuned!

Project's scope

Scott continues with lists of the project member’s blogs and recent posts.

<Return to section navigation list> 

VisualStudio LightSwitch

Mary Jo Foley (@maryjofoley) reported Microsoft starts Beta 1 rollout of new LightSwitch dev tool on 8/19/2010:

Microsoft is making the first beta of its LightSwitch development tool available to Microsoft Developer Network (MSDN) subscribers today, according to an August 19 Microsoft blog post.

Microsoft plans to make the LightSwitch beta available to the public on Monday, August 23, officials said earlier this month.


Microsoft is positioning LightSwitch, codenamed “KittyHawk,” as a way to build business applications for the desktop, the Web and the cloud. It’s a tool that relies on pre-built templates to make building applications easier for non-professional programmers. Microsoft officials have said LightSwitch is designed to bring the Fox/Access style of programming to .Net.

The LightSwitch Beta 1 documentation is available now on MSDN. The introduction to the documents makes it clear that while LightSwitch is meant to simplify development, it isn’t for non-programmers:

“The process of creating an application by using LightSwitch resembles other development tools. Connect to data, create a form and bind the data to the controls, add some validation based on business logic, and then test and deploy. The difference with LightSwitch is that each one of those steps is simplified.”

Many professional programmers have made their misgivings about LightSwitch public, claiming that non-professionals could end up creating a bunch of half-baked .Net apps using LightSwitch. Microsoft officials have countered those objections by saying LightSwitch applications can be handed off to professional developers to carry forward if/when needed.

LightSwitch allows users to connect their applications to Excel, SharePoint or Azure services. Applications built with LightSwitch can run anywhere Silverlight can — in a variety of browsers (Internet Explorer, Safari, Firefox), on Windows PCs or on Windows Azure. Microsoft is planning to add support for Microsoft Access to LightSwitch possibly by the time Beta 2 rolls around. Support for mobile phones won’t be available in version 1 of the product, Microsoft officials have said.

Microsoft officials have said LightSwitch will be a 2011 product.

(Note: I’m finding a number of the links on the MSDN blogs aren’t working and are redirecting users to log in. The pointer in the MSDN availability blog post to which I pointed at the start isn’t working. I have a question in to Microsoft about TechNet and BizSpark availability of the first LightSwitch beta. I’ll update when I hear more.)

Update: Only MSDN subscribers are getting the Beta 1 LightSwitch bits this week. TechNet, BizSpark and other users all have to wait until August 23, a Microsoft spokesperson confirmed.

I downloaded and installed LightSwitch Beta 1 on 8/19/2010 without incident. However, make sure you have a running instance of SQL Server 200x Express named SQLEXPRESS with Named Pipes and TCP enabled before compiling a project. (Like Windows Azure, the LightSwitch server uses this instance as a local development database.) If you’re running SQL Server 2008 R2 Express, make sure the version is RTM (10.50.1600.1).

Michael Washington wrote The First Hour With LightSwitch –BETA- on 8/18/2010, the night LightSwitch bits were first available to MSDN subscribers:



8:40 pm – I downloaded LightSwitch and it’s installing. I downloaded the .iso image and mounted it with PowerISO. This is a Beta, and the first one at that. All I hope to see is “idiot proof”. I already know how to program full-scale Silverlight applications.

This is not for me. It’s for “you” and “them”: the people who will hopefully have a tool that allows them to build useful applications… and who will need professionals like me when they are ready to take things to the next level.


Ok let’s do this…


Hmm that wasn’t what I was expecting to see next… (I don’t know what I expected actually…)


Ok now I’m wandering aimlessly… I hope this is what I’m supposed to do next…


Whew! Ok I think I’m back on track…


Yes! We have already achieved the start of Idiot Proof! (Must buy more MSFT stock…)


I spend 1 minute making a table that I call Messages.


I click on the Screen button.


Ok all this seems obvious. I am doing this as it happens so at this point I do not know if this will actually work…


Hmm… now what am I looking at? Must stare at this screen for a bit…


Ok I couldn’t resist, I had to click the Write Code button. Hmm, I like what I see: you are actually supposed to be able to write code if you need to.

But, I want to see what you can do if you don’t know how to write code.


Ok, one small detour. If I click on the “Save bar”…


I see additional places to write code :)


But, I have to admit I am totally lost. When in doubt, hit F5





[Exactly what happened to me! My copy of SQL Server 2008 R2 Express had outlived its “evaluation period” and wouldn’t start.]

Ok I cannot resist a small rant. This is not ok; it must work, period. If this were Microsoft Access, it would work, period. This kind of thing is also the problem I have with WCF. It is hard to deploy. The fact that it is so great means nothing if it doesn’t work. Things must work, period. Ok rant over.

In case you didn’t notice, I am no longer wearing my “happy face”. However, this is beta software, so consider the rant above my “early feedback”. At this point a “Popup Wizard” should open up and guide me through solving the connection problem.


Ok now I have my “professional programmer” hat on and I’m gonna try to figure this one out. I double-click on MessagesSet in the Solution Explorer


Click on Attach to External Data Source


Umm let’s click Database and then Next…


Ok I am connecting to one of my existing databases because I don’t know what else to do…


Ok I am connecting to the database used in this tutorial: RIATasks: A Simple Silverlight CRUD Example

I point that out because while that tutorial is easy, it is still a LOT for a person to go through to create an application. My hope is that LightSwitch will make things like that easy.

Now that it looks like I won’t have a problem connecting to my existing data source, let’s try to build an application with my existing Tasks table…


Ok fine, the smile is starting to return…


However, the Solution Explorer now looks like the image above. I am going to delete MessagesSet and CreateNewMessages screen…

Michael continues with a “Second Try: Application 2.” I’m as nonplussed so far as Michael. LightSwitch doesn’t appear intuitive to me, but I’ve only spent an hour with it.

Beth Massi (@BethMassi) reported LightSwitch Beta 1 Available to MSDN Subscribers Today, General Public on Monday on 8/18/2010 with a few additional links:

We just released Visual Studio LightSwitch Beta 1 to MSDN subscribers. Public availability will be this Monday, August 23rd, but if you are an MSDN subscriber, visit your subscriptions page to get access to the download now. Otherwise, check the LightSwitch Developer Center on Monday for the public download.


Here are some resources to help get you started -- we have a lot more for you on Monday via the Dev Center so stay tuned!

We're looking forward to your feedback,

Paul Patterson (@PaulPatterson) posted Microsoft LightSwitch – Creating My First Table and the two following LightSwitch articles on 8/18/2010:

Alrighty then. In my last post I documented my experiences in using my Visual Studio environment to create a new LightSwitch project. Now that I have my project created, and a few hours of sleep, I am going to wrangle some data together to create a simple table. What’s a Table? In my previous life as [...]

Microsoft LightSwitch – First Use


Oky doky, here we go… We’ve seen all the launch material, presentations, videos, MSDN stuff, blogs, and even followed the forums. Now it is time to sink some teeth into this LightSwitch. Here’s the deal though: instead of doing a one-off post on the end-to-end process of creating an application, I am going to take [...]

Microsoft LightSwitch – My First #fail at Beta 1 Installation


Hey, I don’t need a map. I know where I am going… Maybe I should have read the instructions before attempting the beta 1 install of LightSwitch. I fired up the virtual DVD drive and double clicked Setup.exe. The excitement was killing me… Doh! You would think I would have known better when I saw the “…Beta Prerequisite…” [...]

Paul Patterson (@PaulPatterson) announced Microsoft LightSwitch – Beta 1 Available for Download to MSDN Subscribers on 8/18/2010:


If you are an MSDN subscriber then you can now download the Beta 1 of Visual Studio LightSwitch (here).

I was out at an appointment, so I missed the announcement this afternoon; otherwise it would already have been downloaded and installed, with a blog post up about the installation experience.

LightSwitch Beta 1 Documentation on MSDN

Vision Clinic Application Walkthrough and Sample

Here goes…

Paul Patterson (@PaulPatterson) posted Microsoft LightSwitch – A Value Proposition for the Enterprise on 8/17/2010, a day before Microsoft released Beta 1 to MSDN subscribers:

LightSwitch is a tool that will be used to easily create .NET applications for the desktop as well as for the “cloud”. LightSwitch takes software development best practices and encloses them in an easy-to-use tool that developers can use to quickly build data-centric Silverlight 4 applications.


The more I understand LightSwitch, the more I can understand what the value proposition will be for using LightSwitch in a larger enterprise.  Although LightSwitch is not targeted specifically at organizations with dedicated IT business units, there is some value in enabling an enterprise with the tool.

My life in IT started out in a role as a Business Analyst. A Business Analyst is, essentially, the liaison between an organization’s business units and its information technology group. From line-of-business application support to defining requirements for a brand new system, a business analyst wears many hats. Fundamental to the role is making sure that IT-related issues are mapped and measured directly to the overall goals and objectives of the organization. I’ve worked as a Business Analyst for many years with a number of organizations, and have arguably worn every hat possible.

Here are some of things I have seen and learned over the years as a Business Analyst, as well as many other roles.


Like it or not, non-IT business units are going to build and use IT solutions.

Every single organization I have worked for has technology-related solutions that are not, or were not originally, under the watchful and controlling eye of that organization’s information technology group. I have seen everything from synchronizing mobile device applications to single-user database applications created and used by non-IT business units. Oftentimes these are applications that were created without the support of the IT group.

From a technology perspective, the larger the organization is, the more disparate business units become. This is likely because the group that is charged with managing an enterprise’s technology must carefully place priority on the biggest issues. A small business unit that needs a technology solution, like a simple database application, is likely not going to be put high on the IT priority list. And even if they get on the list, there are plenty of bigger, enterprise class IT fish that need to be fried.

Knowing Enough to be Dangerous.

Everyone in an organization has goals and objectives. Business units have their own goals mapped to the greater good of the enterprise. So if there is a potential to leverage a technology solution to help meet their own objectives, and they are not going to get support from the IT group, then what are they to do?

There were times when I would work 100% of my time on issues from software applications built by business units that had little or no experience in creating software. These applications were needed sooner rather than later and were created to solve real business problems. The risk of not solving a business need was such that a solution was required, regardless of its quality and supportability.

Microsoft Access based applications are a prime example of how business units create tools that help solve common business problems. Most of the time these types of applications are created by people who have very little experience with software design. I have seen many Access based applications created with table structures that were simply the duplication of existing Excel spreadsheets. With a little reading, a person can quickly replace a spreadsheet solution with an Access based application with tables, queries, and forms for data visualization.

Usage of business unit created applications grows quickly. Without formal IT support and control, business units can quickly respond to their own requirements. It is not uncommon to see simple applications like these become mission critical for the business unit. Even more growth occurs across business unit domains where other groups see and recognize the value in the solution. Before long, that once simple Access based application has become a division wide line-of-business application.

Where’s the Value?

LightSwitch has been criticized by some as being a Microsoft Access replacement, or as a tool that does the same as Access – a drag-and-drop development tool. In fact, LightSwitch is not a replacement for Access. Instead, LightSwitch offers an application developer a richer set of tools that delivers applications more quickly and effectively than Access can.

Notwithstanding, Access is still a great tool; however, it would take a lot more time and effort in Access to create an application that can be created using LightSwitch. Creating an Access application with three-tier architecture and model-driven abstraction that uses distributed business logic and a data tier via the cloud… You do the math. Without training and experience, how long would it take you to create an Access-based application to do all that?

From an enterprise perspective, there are a couple of value propositions here.

LightSwitch offers a data-centric approach to creating applications, without requiring a developer to know much about the plumbing required to create databases and user interfaces. In Access, a user needs to design tables, queries, and forms, and then wire the forms to the data. The workflow is somewhat similar in LightSwitch; however, LightSwitch uses a much more intuitive workflow.

Using this data centric approach, LightSwitch creates an application using software design best practices, without requiring a developer to write a single line of code. Creating sources of data is one thing, but wiring up the business logic, communication with the data source, and data presentation is another. LightSwitch takes care of all that for you. From a business perspective, that means the developer will save time in that the developer does not have to manually create the coding infrastructure needed to do all that.

LightSwitch can consume a variety of data sources. Using WCF RIA Services, LightSwitch can consume a number of data sources. This provides an opportunity to create enterprise-managed services that can be consumed by LightSwitch. Business units can then create applications that use services managed and controlled by the enterprise IT group. Rather than creating disparate islands of duplicate information, services provide integration points across the enterprise, including LightSwitch-specific extension points such as custom business types. This gives the IT stakeholders in an organization some level of manageability and supportability controls.

Some organizations may even see value in providing the infrastructure for LightSwitch application deployments. Business units who create LightSwitch applications can publish their applications to centrally managed locations. Publishing databases and applications to IT-managed servers could satisfy IT control issues such as backup, network traffic, and version control. For example, how many multi-user Access databases are out on the network, using linked databases that reside on shares?

There are probably plenty of other scenarios where using LightSwitch has value in an enterprise. There are also plenty of scenarios where using LightSwitch would not make sense. It really depends on the needs and capabilities of your own organization. If I had LightSwitch back when I was doing the Business Analyst stuff, I know I could have freed up a good chunk of time to do other things.

Interesting conclusions when you consider that Paul didn’t have his hands on the LightSwitch bits.

Andrew J. Brust (@AndrewBrust) asked Microsoft's New Tools: Harmony or Cacophony? in this 8/13/2010 post to his Redmond Diary blog for Visual Studio Magazine:

In this blog and in my column, I've written a lot lately about new technologies from Microsoft that seek to make software development easier. Technologies like ASP.NET Web Pages, Razor and WebMatrix, Access Web Databases and Visual Studio LightSwitch. Each of these technologies, I believe, is bringing much needed accessibility to programming on the Microsoft platform.

I've also written about Windows Phone 7 which, despite extreme skepticism in the press and analyst communities, has the potential to be an excellent SmartPhone platform. And I've explored rather deeply HTML5, a technology that I believe poses an existential threat to Windows and to Microsoft itself, if Redmond's inertia of the last several years persists.

As I consider all of these technologies, something emerges that, with hindsight, is frightfully obvious: they need to coalesce, unify and harmonize. LightSwitch, which produces Silverlight forms-over-data applications, needs to target Windows Phone 7. Access Web Databases, which deploy as forms-over-data SharePoint applications, should perhaps have some conformity with LightSwitch, and vice-versa. [Emphasis added.]

LightSwitch targets SQL Server Express by default. WebMatrix targets SQL Server Compact. Access Web databases target SharePoint lists and SQL Server Reporting Services. In other words, each of these exciting new tools targets SQL Server in some way (SharePoint lists are stored in SQL Server tables), but none of them targets the same edition of the product. I guess that's OK for the first versions of each of these tools, but I hope these anomalies are addressed in v2 releases. Microsoft likes to talk about impedance mismatches in data access technologies, and I think they've created a massive one of their own.

What about HTML5? Its threat could be blunted if Microsoft confronted it head-on. The version of Internet Explorer on Windows Phone 7 should be HTML5-compatible. LightSwitch v2 should target HTML5 as an alternate rendering target, which would enable LightSwitch apps to run on devices other than those running Microsoft operating systems, or the Mac OS (on Intel-based Macs). I wouldn't mind seeing SharePoint get more HTML5 savvy itself. It would enhance SharePoint's richness, in every single browser, including those that run on mobile devices.

To me, the biggest downside of HTML5 is the relative dearth of good developer tools for it, and the JavaScript heaviness it can bring about. But I would think that the HTML helpers in Razor and ASP.NET Web Pages could be a huge help there. Is Microsoft working on HTML5 Razor helpers now? If not, why not? And WebMatrix aside, it should add good IDE tooling for HTML5 in the full Visual Studio product.

Microsoft's got a lot of good answers to a great number of important software development questions. Now it just needs to make those answers coordinated and consistent. If it can really integrate these tools, then it should. Divided, some or all of these tools will fall. United, they might stand. They could even soar.

<Return to section navigation list> 

Windows Azure Infrastructure

Mary Jo Foley’s (@maryjofoley) Orleans: Microsoft's next-generation programming model for the cloud post of 8/18/2010 to ZDNet’s All About Microsoft blog updates the status of this Microsoft Research project:

One of Microsoft’s biggest selling points for its cloud platform is that developers can use .NET, Visual Studio and other programming tools they already know to write Azure applications.

But that’s not the end of the story. Microsoft researchers are working on a next-gen cloud programming model and associated tools. As those who’ve downloaded the Microsoft codename tracker I update each month know, something codenamed “Orleans” was believed to be Microsoft’s cloud programming model. But it’s only recently that I’ve found more details about what Orleans is and how it is evolving.

Blogger and cloud expert Roger Jennings was the one who first tipped me to the Orleans codename. Back in February 2009, he discovered a reference to the Orleans software platform, which described it as “a new software platform that runs on Microsoft’s Windows Azure system and provides the abstractions, programming languages, and tools that make it easier to build cloud services.”

So what is Orleans, exactly? Orleans is a new programming model designed to raise the level of abstraction above Microsoft’s Common Language Runtime (CLR). Orleans introduces the concept of “grains” as units of computation and data storage that can migrate between datacenters. Orleans also will include its own runtime that will handle replication, persistence and consistency. The idea is to create a single programming model that will work on clients and servers, which will simplify debugging and improve code mobility.

Here are a few slides from a recent Microsoft Research presentation that describe the platform in more depth:

[Three slides from the Microsoft Research presentation appear in the original post.]

There are some interesting related references in these slides. “Volta,” mentioned in the first slide, was a Microsoft Live Labs project that disappeared with little explanation a couple of years ago. Volta was considered a competitor to the Google Web Toolkit and was designed to enable the creation of distributed applications. There’s also something called “DC#” in the third slide. I’m wondering if this might be “Distributed C#.” Any other guesses?

One of the leaders of the Orleans work seems to be Jim Larus, who previously worked on Microsoft Research’s Singularity microkernel operating system. These days, Larus is Director of Research and Strategy for Microsoft’s eXtreme Computing Group, which the company established “to push the boundaries of computing.” One of the places computing’s boundaries are being pushed the furthest is in the cloud, where vendors are racing to make their datacenters bigger, faster, greener and more performant.

There is no mention in any of the new materials I found as to Microsoft’s planned schedule for Orleans. I can’t even tell if Orleans exists as a research prototype or is simply slideware at this point. Maybe we’ll hear more about it at Microsoft’s upcoming cloud-focused Professional Developers Conference in late October… Meanwhile, if anyone has any more Orleans information, whether it be real details or guesses, let’s hear it.

William Vambenepe (@vambenepe) asked The necessity of PaaS: Will Microsoft be the Singapore of Cloud Computing? in this 8/18/2010 essay:

From ancient Mesopotamia to, more recently, Holland, Switzerland, Japan, Singapore and Korea, the success of many societies has been in part credited to their lack of natural resources. The theory being that it motivated them to rely on human capital, commerce and innovation rather than resource extraction. This approach eventually put them ahead of their better-endowed neighbors.

A similar dynamic may well propel Microsoft ahead in PaaS (Platform as a Service): IaaS with Windows is so painful that it may force Microsoft to focus on PaaS. The motivation to “go up the stack” is strong when the alternative is to cultivate the arid land of Windows-based IaaS.

I should disclose that I work for one of Microsoft’s main competitors, Oracle (though this blog only represents personal opinions), and that I am not an expert Windows system administrator. But I have enough experience to have seen some of the many reasons why Windows feels like a much less IaaS-friendly environment than Linux: e.g. the lack of SSH, the cumbersomeness of RDP, the constraints of the Windows license enforcement system, the Windows update mechanism, the immaturity of scripting, the difficulty of managing Windows from non-Windows machines (despite WS-Management), etc. For a simple illustration, go to EC2 and compare, between a Windows AMI and a Linux AMI, the steps (and time) needed to get from selecting an image to the point where you’re logged in and in control of a VM. And if you think that’s bad, things get even worse when we’re not just talking about a few long-lived Windows server instances in the Cloud but a highly dynamic environment in which all steps have to be automated and repeatable.

I am not saying that there aren’t ways around all this, just like it’s not impossible to grow grapes in Holland. It’s just usually not worth the effort. This recent post by RightScale illustrates both how hard it is but also that it is possible if you’re determined. The question is what benefits you get from Windows guests in IaaS and whether they justify the extra work. And also the additional license fee (while many of the issues are technical, others stem more from Microsoft’s refusal to acknowledge that the OS is a commodity). [Side note: this discussion is about Windows as a guest OS and not about the comparative virtues of Hyper-V, Xen-based hypervisors and VMWare.]

Under the DSI banner, Microsoft has been working for a while on improving the management/automation infrastructure for Windows, with tools like PowerShell (which I like a lot). These efforts pre-date the Cloud wave but definitely help Windows try to hold it[s] own on the IaaS battleground. Still, it’s an uphill battle compared with Linux. So it makes perfect sense for Microsoft to move the battle to PaaS.

Just like commerce and innovation will, in the long term, bring more prosperity than focusing on mining and agriculture, PaaS will, in the long term, yield more benefits than IaaS. Even though it’s harder at first. That’s the good news for Microsoft.

On the other hand, lack of natural resources is not a guarantee of success either (as many poor desertic countries can testify) and Microsoft will have to fight to be successful in PaaS. But the work on Azure and many research efforts, like the “next-generation programming model for the cloud” (codename “Orleans”) that Mary Jo Foley revealed today, indicate that they are taking it very seriously. Their approach is not restricted by a VM-centric vision, which is often tempting for hypervisor and OS vendors. Microsoft’s move to PaaS is also facilitated by the fact that, while system administration and automation may not be a strength, development tools and application platforms are.

The forward-compatible Cloud will soon overshadow the backward-compatible Cloud and I expect Microsoft to play a role in it. They have to.

As Mary Jo Foley notes in her “Orleans” post [corrected URL here], I reported that Orleans emerged from obscurity on 2/25/2010 in my Microsoft Research Announces Cloud Computing Futures Group, Orleans Software for Azure at TechFest 2009 post.

David Linthicum asserted “Offerings that promise reserved portions of clouds could get enterprises off the fence around the use of public clouds” as a preface to his Bowing to IT demands, cloud providers move to reserved instances post to InfoWorld’s Cloud Computing blog on 8/18/2010:

With the rising interest in private cloud computing, many public cloud providers are moving to service offerings that promise reserved portions of clouds, which they are calling reserved instances or sometimes virtual private clouds. This is a step in the right direction, considering that many enterprises and government agencies are pushing back on public clouds. This is more about control issues than any legitimate technical arguments.

The latest proof point around reserved instances is the new Reserved Database Instances offering, which is now a part of the Amazon Web Services lineup. As the IDG News Service reported: "With Reserved Database Instances, users can make a one-time, up-front payment to reserve a database instance in a specific region for either one or three years, according to Amazon. In return, they get a discount off the ongoing hourly usage rate. A Reserved Database Instance costs from $227.50 for one year and $350 for three years plus $0.046 per hour. That compares to the standard hourly rate that starts at $0.11, according to Amazon's price list. If the database instance is used for the entire term, the discount can amount to up to 46 percent, Amazon said."

The price is actually pretty high considering that commodity servers and open source stacks are pretty cheap these days. However, the core costs of keeping these systems around are installation- and maintenance-related, and those costs are avoided when using the cloud.

Functionally, Amazon's Reserved Database Instances and On-Demand DB Instances are equivalent. What's more, you can add as many as 20 reserved instances and configure those instances as required to support the type of database processing you may need. The only limitation is that you can't go to a data center and actually see the blinking lights.

The ability to use the word "reserve" in the world of cloud computing is much less scary to IT, considering that we're seeing a clear pattern that those looking at cloud computing are much less likely to jump into sharing public cloud services with both feet, at least today.

I suspect that this is going to be a theme among the cloud computing providers over the next year or so: creating offerings that reflect more ownership than sharing, as that may be the only way to get the enterprises that are on the fence to bite on public clouds.
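The quoted figures are easy to sanity-check. Assuming full utilization at 8,760 hours per year (my assumption; the rates and up-front fees are exactly as quoted), the 46 percent maximum discount corresponds to the three-year term:

```python
# Compare Amazon RDS on-demand vs. reserved pricing from the quote above.
# Assumes full utilization at 8,760 hours/year (24 * 365).
HOURS_PER_YEAR = 24 * 365

def on_demand_cost(years, hourly=0.11):
    """Total cost of running on-demand for the whole term."""
    return years * HOURS_PER_YEAR * hourly

def reserved_cost(years, upfront, hourly=0.046):
    """Up-front reservation fee plus discounted hourly usage for the term."""
    return upfront + years * HOURS_PER_YEAR * hourly

for years, upfront in [(1, 227.50), (3, 350.00)]:
    od = on_demand_cost(years)
    rs = reserved_cost(years, upfront)
    discount = 1 - rs / od
    print(f"{years}-year term: on-demand ${od:,.2f}, reserved ${rs:,.2f}, "
          f"discount {discount:.0%}")
```

At full utilization, the one-year term works out to roughly a 35 percent discount and the three-year term to about 46 percent, matching the quoted maximum.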

IActionable worries about Google App Engine Updates - I hope Windows Azure keeps up! in an 8/18/2010 thread of the Windows Azure forum:

Multi-Tenancy Support, High Performance Image Service, Increased Datastore Quotas and More.

[See story in the Other Cloud Computing Platforms and Services section below.]

Don't get me wrong. Our company heavily uses Windows Azure and SQL Azure and we love it. But I can't help feeling that recent releases have been a bit of a letdown feature-wise. There are so many suggestions and items on the feedback site, so you guys can't be out of ideas :)

I haven't heard anything from the Windows Azure team about Multi-Tenancy features for a while now. Are there still plans to support this in SQL or Windows Azure?

Perhaps I'm too hopeful. I mean, we still don't even have a simple 'COUNT' function available with Table Storage.

Andreas Köpf complains in his response to IActionable:

It is indeed a little bit "scary" that Azure has not changed much since its preview release at PDC 2008. Viewed from the outside there seems to be nearly no progress, and for me personally it is not clear what commitment Microsoft has to cloud computing. Even the cancelled SDS project from the SQL Server team had built-in, programmatic, near-realtime queryable usage and billing monitoring. Maybe MS will not make a serious move in the NoSQL direction unless competitors like Amazon or Google get significantly ahead of them. Coding against the minimalistic feature set of Table Storage is currently a quite painful experience:

Regarding table-storage:

  1. Where are the long announced secondary indices? …

Which is the same question I’ve been asking for several months with no response.
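For readers wondering what the missing COUNT means in practice: because the Table service exposes no server-side aggregates and returns results in pages with continuation tokens, a client must retrieve every entity just to count them. The sketch below is my illustration, not a storage-library API; `query_page` stands in for whatever client call fetches one page:

```python
# Sketch: client-side COUNT over Azure Table Storage, which has no server-side
# aggregates. `query_page` stands in for a real storage-client call that
# returns (entities, continuation_token); the token is None on the last page.

def count_entities(query_page):
    """Page through a query with continuation tokens, counting entities."""
    total = 0
    token = None
    while True:
        entities, token = query_page(token)
        total += len(entities)
        if token is None:
            return total

# Demo with a fake three-page result set (1000 + 1000 + 250 entities).
pages = [(["e"] * 1000, "tok1"), (["e"] * 1000, "tok2"), (["e"] * 250, None)]

def fake_query_page(token, _pages=iter(pages)):
    # Ignores the token and just serves the next canned page.
    return next(_pages)

print(count_entities(fake_query_page))  # -> 2250
```

Every page is transferred over the wire just to be counted, which is why server-side aggregates and secondary indices top so many wish lists.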

Vivek Bhatnagar reported Windows Azure Billing: Moving to Invoice billing from credit card on 8/17/2010:

Since July 12th, any Windows Azure user can choose to move from credit card charges to invoice billing.

Following are the steps to request invoice billing:

1. Click on  . It will open a service request screen.

2. Provide your email ID if not logged in with a Live ID.

3. Provide the following details:

a. Subject: Invoice Billing

b. Question: Move my account to invoice billing. Our VL program and agreement number is …………… (You might get a credit check request if not a Microsoft VL (Volume Licensing) customer)

c. Topic: Subscription Billing Support

d. Phone Number: ….

4. On pressing continue button, you will receive a service tracking number.

It might take 2-3 business days to get confirmation from Microsoft about invoice billing. You can also call MOCP support with the service number for status.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

Lori MacVittie (@lmacvittie) reports “An IDC survey highlights the reasons why private clouds will mature before public, leading to the eventual consistency of public and private cloud computing frameworks” as a preface to her The Inevitable Eventual Consistency of Cloud Computing post of 8/18/2010 to F5’s DevCentral blog:

Network Computing recently reported on a very interesting research survey from analyst firm IDC. This one was interesting because it delved into concerns regarding public cloud computing in a way that most research surveys haven’t done, including asking respondents to weight their concerns as it relates to application delivery from a public cloud computing environment. The results? Security, as always, tops the list. But close behind are application delivery related concerns such as availability and performance.

Network Computing – IDC Survey: Risk In The Cloud

While growing numbers of businesses understand the advantages of embracing cloud computing, they are more concerned about the risks involved, as a survey released at a cloud conference in Silicon Valley shows. Respondents showed greater concern about the risks associated with cloud computing surrounding security, availability and performance than support for the pluses of flexibility, scalability and lower cost, according to a survey conducted by the research firm IDC and presented at the Cloud Leadership Forum IDC hosted earlier this week in Santa Clara, Calif.

“However, respondents gave more weight to their worries about cloud computing: 87 percent cited security concerns, 83.5 percent availability, 83 percent performance and 80 percent cited a lack of interoperability standards.”

The respondents rated the risks associated with security, availability, and performance higher than the always-associated benefits of public cloud computing: lower costs, scalability, and flexibility. This ultimately results in a reluctance to adopt public cloud computing, and it is likely driving these organizations toward private cloud computing, because public cloud providers can’t or won’t address these challenges at this point, while private cloud computing can and is – by architecting a collection of infrastructure services that can be leveraged by (internal) customers on an application-by-application (and sometimes request-by-request) basis.


What will ultimately bubble up and become more obvious to public cloud providers is customer demand. Clouderati like James Urquhart and Simon Wardley often refer to this process as commoditization or standardization of services. These services – at the infrastructure layer of the cloud stack – will necessarily be driven by customer demand; by the market. Because customers right now are not fully exercising public cloud computing as they would their own private implementation – replete with infrastructure services, business-critical applications, and adherence to business-focused service level agreements – public cloud providers are at a bit of a disadvantage. The market isn’t telling them what they want and need, thus public cloud providers are left to fend for themselves. Or they may be pandering necessarily to the needs and demands of a few customers that have fully adopted their platform as their data center du jour.

Internal to the organization there is a great deal more going on than some would like to admit. Organizations have long since abandoned even the pretense of caring about the definition of “cloud” and whether or not there exists such a thing as “private” cloud, and have forged their way forward past “virtualization plus” (a derogatory and dismissive term often used by some public cloud providers to describe such efforts) and into the latter stages of the cloud computing maturity model.

Internal IT organizations can and will solve the “infrastructure as a service” conundrum because they necessarily have a smaller market to address. They have customers, but it is a much smaller and well-defined set of customers which they must support and thus they are able to iterate over the development processes and integration efforts necessary to get there much quicker and without as much disruption. Their goal is to provide IT as a service, offering a repertoire of standardized application and infrastructure services that can easily be extended to support new infrastructure services. 

They are, in effect, building their own cloud frameworks (stack) upon which they can innovate and extend as necessary. And as they do so they are standardizing, whether by conscious effort or as a side-effect of defining their frameworks. But they are doing it, regardless of those who might dismiss their efforts as “not real cloud.” When you get down to it, enterprise IT isn’t driven by adherence to some definition put forth by pundits. They’re driven by a need to provide business value to their customers at the best possible “profit margin” they can. And they’re doing it faster than public cloud providers because they can.


What that means is that in a relatively short amount of time, as measured by technological evolution at least, the “private clouds” of customers will have matured to the point that they will be ready to adopt a private/public (hybrid) model and really take advantage of the public, cheap, compute-on-demand that’s so prevalent in today’s cloud computing market – not just use public clouds as inexpensive development or test playgrounds, but integrate them as part of their global application delivery strategy. The problem then is aligning the models, APIs and frameworks that have grown up in each of the two types of clouds.

Like the concept of “eventual consistency” with regards to data and databases and replication across clouds (intercloud) the same “eventual consistency” theory will apply to cloud frameworks. Eventually there will be a standardized (consistent) set of infrastructure services and network services and frameworks through which such services are leveraged. Oh, at first there will be chaos and screaming and gnashing of teeth as the models bump heads, but as more organizations and providers work together to find that common ground between them they’ll find that just like the peanut-butter and chocolate in a Reese’s Peanut Butter cup, the two disparate architectures can “taste better together.”

The question that remains is which standardization will be the one with which others must become consistent. Without consistency, interoperability and portability will remain little more than a pipe dream. Will it be standardization driven by the customers, a la the Enterprise Buyer’s Cloud Council? Or will it be driven by providers in a “if you don’t like what we offer go elsewhere” market? Or will it be driven by a standards committee comprised primarily of vendors with a few “interested third parties”?

David Pallman considers The Enigma of Private Cloud in this 8/17/2010 post:

If you swim in cloud computing circles you cannot escape hearing the term private cloud. Private cloud is surely the feature most in demand by the cloud computing market—yet perhaps the longest in coming, as cloud computing vendors have gone from initial resistance to the idea to coming to terms with the need for it and figuring out how to deliver it. The concept is something of a paradox, made worse by the fact that private cloud means different things to different people. There are at least five meanings of private cloud in use out there, and no two of them are alike. Despite all this, the market pressure for private cloud is so great that cloud computing vendors are finding ways to deliver it anyway. Let’s take a deeper look at what’s going on here.

What’s Behind The Demand For Private Cloud?
The desire for private cloud is easy enough to appreciate. Organizations are enamored with the benefits of cloud computing but don’t like certain aspects of it, such as the loss of direct control over their assets or sharing resources with other tenants in the cloud. This is where the paradox comes in, because management by cloud data centers and shared resources are core to what cloud computing is and why its costs are low. The market isn’t required to be logical or think through the details, however, and when there’s sufficient demand vendors find ways to innovate. Thus, while private cloud may seem at odds with the general premise of cloud computing, it turns out we need it and will have it.

There are some other drivers behind the need for private cloud that are hard to get around. Governments may have requirements for physical control of data that simply cannot be circumvented. In some countries there are regulations that business data must be kept in the country of origin. Another influence is the future dream of things working the same way in both the cloud and the enterprise. When that day comes, solutions won’t have to be designed differently for one place or the other and enterprises will be able to move assets between on-premise and cloud effortlessly.

Defining Private Cloud
How then is private cloud to be brought about? This is where we get into many different ideas about what private cloud actually is. My pet peeve is people who use the term private cloud without bothering to define what they mean by it. Let’s take a look at understandings that are in widespread use.

1. LAN Private Cloud
Some people use private cloud to simply mean their local network, similar to how the Internet can be referred to as the cloud without any specific reference to cloud computing proper. This use of the term is rather non-specific so we can’t do much with it. Let’s move on.

2. Gateway Private Cloud
This use of private cloud centers on the idea of securely connecting your local network to your assets in the cloud. Amazon’s Virtual Private Cloud is described as “a secure and seamless bridge between a company’s existing IT infrastructure and the AWS cloud” which “connects existing infrastructure to isolated resources in the cloud through a VPN connection.” In the Windows Azure world, Microsoft is working on something in this category called Project Sydney. Sydney was mentioned at PDC 2009 last year but until it debuts we won’t know how similar or different it will be to the Amazon VPC approach. Stay tuned.

This type of private cloud is valuable for several reasons. It potentially lets you apply your own network security and operations monitoring infrastructure to your assets in the cloud. It also potentially lets your cloud assets access something on your local network that they need, such as a server that you can’t or won’t put in the cloud.

3. Dedicated Private Cloud
In this flavor of private cloud you are using a cloud computing data center where an area is dedicated for just your use. From this you get the benefits you’re used to in the cloud, such as automated provisioning, management and elasticity, along with the comfort of isolation from other tenants.

Microsoft Online Services has offered this kind of private cloud with a dedicated version of the Business Productivity Online Suite (“BPOS-D”) for customers with a large enough footprint to qualify.

It seems axiomatic that dedicated private cloud will always be more expensive than shared use of the cloud.

4. Hardware Private Cloud
In hardware private cloud, cutting-edge infrastructure like that used in cloud computing data centers is made available for you to use on-premise. Of course there’s not only hardware but software as well. Microsoft’s recent announcement of the Windows Azure Appliance is in this category.

The nature of hardware private cloud makes it expensive and therefore not for everybody, but it is important that this kind of offering exist. First, it should allow ISPs to offer alternative hosting locations for the Windows Azure technology in the marketplace. Second, it allows organizations that must have data on their premises, such as some government bodies, to still enjoy cloud computing. Third, it solves the “data must stay in the country of origin” problem, which is a significant issue in Europe.

Is there something like the hardware private cloud that’s a bit more affordable? There is, our next category.

5. Software Private Cloud
Software private cloud emulates cloud computing capabilities on-premise such as storage and hosting using standard hardware. While this can’t match all of the functionality of a true cloud computing data center, it does give enterprises a way to host applications and store data that is the same as in the cloud.

An enterprise gets some strong benefits from software private cloud. They can write applications one way and run them on-premise or in the cloud. They can move assets between on-premise and cloud locales easily and reversibly. They can change their split between on-premise and cloud capacity smoothly. Lock-in concerns vanish. One other benefit of a software private cloud offering is that it can function as a QA environment—something missing right now in Windows Azure.

We don’t have software private cloud in Windows Azure today but there’s reason to believe it can be done. Windows Azure developers already have a cloud simulator called the Dev Fabric; if the cloud can be simulated on a single developer machine, why not on a server with multi-user access? There’s also a lot of work going on with robust hosting in Windows Server AppFabric and perhaps the time will come when the enterprise and cloud editions of AppFabric will do things the same way. Again, we’ll have to stay tuned and see.

Should I Wait for Private Cloud?
You may be wondering if it’s too soon to get involved with cloud computing if private cloud is only now emerging and not fully here yet. In my view private cloud is something you want to take into consideration—especially if you have a scenario that requires it—but is not a reason to mothball your plans for evaluating cloud computing. The cloud vendors are innovating at an amazing pace and you’ll have plenty of private cloud options before you know it. There are many reasons to get involved with the cloud early: an assessment and proof-of-concept now will bring insights from which you can plan your strategy and roadmap for years to come. If the cloud can bring you significant savings, the sooner you start the more you will gain. Cloud computing is one of those technologies you really should get out in front of: by doing so you will maximize your benefits and avoid improper use.

There you have it. Private cloud is important, both for substantive reasons and because the market is demanding it. The notion of private cloud has many interpretations which vary widely in nature and what they enable you to do. Vendors are starting to bring out solutions, such as the Windows Azure Appliance. We’ll have many more choices a year from now, and then the question will turn from “when do I get private cloud” to “which kind of private cloud should we be using?”

And if you have private cloud fever: please explain which kind you mean!

<Return to section navigation list> 

Cloud Security and Governance

Paula Rooney quotes John Pescatore in her Gartner: customers still don't get cloud computing post of 8/16/2010 to ZDNet’s Virtually Speaking blog:

Many corporate customers still do not grasp the key benefits of cloud computing, one top analyst says.

In a research note posted today, Gartner distinguished analyst John Pescatore said many corporate customers he’s talked to recently who are evaluating cloud computing for the first time are not interested in “true” cloud benefits — that is, the offloading of compute and storage to infrastructure as a service — but rather they are looking at the technology as a means to secure the virtual data center.

“A lot of those client calls are around dealing with the issues of business unit desire to use the cloud or IT wanting to use cloud,” Pescatore wrote in his blog. “But when you dig a bit deeper, the current business issues (not the hype) are really about (in order of currency and importance):

1. Maintaining security when the data center goes virtual, both VMware and SAN issues.

2. Being told “We are going to consume ‘X as a Service’ – go make sure it is secure.”

3. A narrower version of (2): “We are looking at Microsoft BPOS or Google Apps Premier Edition for email/office productivity as a service – is anyone like us doing that? If so, what about security?”

4. User desire to use consumer-grade services, like free online backup or other advertising-supported online offerings.

Pescatore wrote that questions about “true” cloud usage are tutorial in nature and that many clients have “no near term” plans to embrace IaaS.

Details on his finding will be released in a report later this month.

Can’t say I’m that surprised: a lot of IT pros who used mainframes or looked at mainframes didn’t get the benefits of automated workload management either.

<Return to section navigation list> 

Cloud Computing Events

Michael Coté proposes to go for the Enterprise Gold at SXSWi in this 8/19/2010 post:

It’s panel promotion time for SXSW 2010. This year, I’m going for a panel on selling to the enterprise, targeted at the SXSW crowd, of course. I thought this would be a fun contrast to the consumer-heavy, “free” stuff that the SXSW sessions and panels are usually full of. The best way to “monetize” is to get paid for what you do and sell, to put it one way.

You can help by voting for the panel and leaving a comment, I’d appreciate it!

Here’s the proposal:

Avoid Freeloaders, Go For The Enterprise Gold

Why cater to a market that makes you eat ramen when you can slap on a suit and get budget for sushi? In this panel industry analyst Michael Coté (RedMonk) will lead a discussion with other analysts and experts illustrating how to approach enterprises and large gold-holding organizations with your technologies and services. Selling to consumers is fun, but the pay is poor compared to corporate customers who actually would like to pay for good software. We’ll cover the expectations these outfits have, what types of technologies they’re looking for, sales processes, pricing, deflecting FUD from incumbents, and other aspects to help you bootstrap into the enterprise market. If you’re just holed up in an apartment waiting to get bought by Facebook or Google, there’s nothing for you here. But if you’d like to find out what all those dry-cleaned people are doing, come check it out and ask questions.

Questions Answered

  1. What types of functionality are enterprises looking for?
  2. How do I get around barriers put in by competitors and people who fear change?
  3. What advantages do new startups and offerings have that they can take advantage of?
  4. How do I build a sales and market program to reach enterprises?
  5. What technologies and services are low-hanging fruit?

While on a bus at some IBM function, I cooked up this idea with fellow analyst Merv Adrian – he’ll be on the panel (it was actually one of my many schemes to get more people to come to Austin for SXSW). Also, I was excited that Austin’s Kenny Van Zant volunteered on Twitter to be on the panel. As a long-time senior executive at SolarWinds, he has first-hand experience going after this kind of sell, but through the web instead of the usual steaks-and-strippers channels.

If that sounds interesting, it’d help get us closer to acceptance if you voted and left a comment for the panel. Hopefully we’ll see you at SXSW in March!

And while you’re in there, check out my friend Josh’s as well, about social media in gaming.

Simon Wardley (@swardley) answered whether OSCON 2010 was Arguably, the best cloud conference in the world? in this 8/18/2010 post:

For those of you who missed the OSCON Cloud Summit, I've put together a list of the videos and speakers. Obviously this doesn't recreate the event, which was an absolute blast, but at least it'll give you a flavour of what was missed.

Welcome to Cloud Summit [Video 14:28]
Very light introduction into cloud computing with an introduction to the speakers and the conference itself. This section is only really relevant for laying out the conference, so can easily be skipped.
With John Willis (@botchagalupe) of opscode and myself (@swardley) of Leading Edge Forum.

Scene Setting
In these opening sessions we looked at some of the practical issues that cloud creates.

Is the Enterprise Ready for the Cloud? [Video 16:39]
This session examines the challenges that face enterprises in adopting cloud computing. Is it just a technology problem, or are there management considerations? Are enterprises adopting cloud? Is the cloud ready for them, and are they ready for it?
With Mark Masterson (@mastermark) of CSC.

Security, Identity – Back to the Drawing Board? [Video 25:12]
Is much of the cloud security debate simply FUD or are there some real consequences of this change?
With Subra Kumaraswamy (@subrak) of eBay.

Cloudy Operations [Video 22:10]
In the cloud world new paradigms and memes are appearing: the rise of “DevOps”, “Infrastructure == Code” and “Design for Failure”. Given that cloud is fundamentally about volume operations of a commoditized activity, operations become a key battleground for competitive efficiency. Automation and orchestration appear to be key areas for the future development of the cloud. We review current thinking and who is leading this change.
With John Willis (@botchagalupe) of opscode.

The Cloud Myths, Schemes and Dirty Little Secrets [Video 17:38]
The cloud is surrounded by many claims, but how many of them stand up to scrutiny? How many are based on fact, and how many are simply wishful thinking? Is cloud computing green, will it save you money, will it lead to faster rates of innovation? We explore this subject and look at the dirty little secrets that no-one wants to tell you.
With Patrick Kerpan (@pjktech) of CohesiveFT.

Curing Addiction is Easier [Video 18:41]
Since Douglas Parkhill first introduced us to the idea of competitive markets of compute utilities back in the 1960s, the question has always been when would this occur? However, is a competitive marketplace in the interests of everyone and do providers want easy switching? We examine the issue of standards and portability in the cloud.
With Stephen O’Grady (@sogrady) of Redmonk.

Future Setting
In this section we heard from leading visionaries on the trends they see occurring in the cloud and the connection and relationships to other changes in our industry.

The Future of the Cloud [Video 29:00]
Cloud seems to be happening now but where is it going and where are we heading?
With J.P. Rangaswami (@jobsworth) of BT.

Cloud, E2.0 – Joining the Dots [Video 30:04]
Is cloud just an isolated phenomenon, or is it connected to many of the other changes in our industries.
With Dion Hinchcliffe (@dhinchcliffe) of Dachis.

The Questions
The next section was a Trial by Jury where we examined some of the key questions around cloud and open source.

What We Need are Standards in the Cloud [Video 45:17]
We put this question to the test, with prosecution Benjamin Black (@b6n) of FastIP, defence Sam Johnston (@samj) of Google and trial by a Jury of Subra Kumaraswamy, Mark Masterson, Patrick Kerpan & Stephen O’Grady

Are Open APIs Enough to Prevent Lock-in? [Video 43:21]
We put this question to the test, with prosecution James Duncan (@jamesaduncan) of Joyent, defence George Reese (@georgereese) of Enstratus and trial by a Jury of Subra Kumaraswamy, Mark Masterson, Patrick Kerpan & Stephen O’Grady

The Debates
Following the introductory sessions, the conference focused on two major debates. The first of these covered the “cloud computing and open source question”. To introduce the subject and the panelists, there were a number of short talks before the panel debates the impact of open source to cloud and vice versa.

The Journey So Far [Video 10:59]
An overview of how “cloud” has changed in the last five years.
With James Urquhart (@jamesurquhart) of CISCO.

Cloud and Open Source – A Natural Fit or Mortal Enemies? [Video 8:44]
Does open source matter in the cloud? Are they complementary or antagonistic?
With Marten Mickos (@martenmickos) of Eucalyptus.

Cloudy Futures? The Role of Open Source in Creating Competitive Markets [Video 8:43]
How will open source help create competitive markets? Do “bits” have value in the future and will there be a place for proprietary technology?
With Rick Clark (@dendrobates) of OpenStack.

The Future of Open Source [Video 9:34]
What will cloud mean to open source development and to linux distributions. Will anyone care about the distro anymore?
With Neil Levine (@neilwlevine) of Canonical.

The Debate – Open Source and the Cloud [Video 36:24]
Our panel of experts examined the relationship between open source and cloud computing.
With Rick Clark, Neil Levine, Marten Mickos & James Urquhart

The Future Panel followed the same format with first an introduction to the experts who will debate where cloud is going to take us.

The Government and Cloud [Video 10:27]
The role of cloud computing in government IT – an introduction to the large G-Cloud and App Store project under way in the UK; what the UK public sector hopes to gain from a cloud approach, an overview of the proposed technical architecture, and how to deliver the benefits of cloud while still meeting government’s stringent security requirements.
With Kate Craig-Wood (@memset_kate) of Memset.

Infoware + 10 Years [Video 10:38]
Ten years after Tim created the term infoware, how have things turned out and what is the cloud’s role in this?
With Tim O'Reilly (@timoreilly) of O'Reilly Media.

The Debate – A Cloudy Future or Can We See Trends? [Video 50:12]
The panel of experts examine what’s next for cloud computing, what trends can they foresee.
With Kate Craig-Wood, Dion Hinchcliffe, Tim O’Reilly & JP Rangaswami

So, why "arguably the best cloud conference in the world?"

As a general conference on cloud, the standard and quality of the speakers was outstanding. The speakers made the conference; they gave their time freely and were selected from a wide group of opinion leaders in this space. There were no vendor pitches and no paid-for conference speaking slots, and hence the discussion was frank and open. The audience themselves responded marvelously with a range of demanding questions.

It is almost impossible to pick a best talk from the conference because they were all great talks. There are real gems of insight to be found in each and every one and each could easily be the keynote for most conferences. In my opinion, if there is a TED of cloud, then this was it.

Overall, the blend of speakers and audience made it the best cloud conference that I've ever attended (and I've been to 50+). This also made my job as a moderator simple.

I'm very grateful to have been part of this and so my thanks goes to the speakers, the audience, the A/V crew who made life so easy and also Edd Dumbill (@edd), Allison Randal (@allisonrandal), Gina Blaber (@ginablaber) and Shirley Bailes (@shirleybailes) for making it happen.

Finally, huge thanks to Edd and Allison for letting me give a version of my Situation Normal, Everything Must Change talk covering cloud, innovation, commoditisation and my work at LEF.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Audrey Watters reported about Membase, The Database Powering Farmville in this 8/18/2010 post to the ReadWriteCloud:

65 million people play Zynga's online games every day. Millions of web browsers open to millions of farms and millions of frontiers. They take turns, they tend crops, they send gifts. They buy millions of objects and upgrades. It's a mind-boggling amount of data. It's a new sort of data, and it warranted development of a new sort of database management system.

That system, Membase, is a "NoSQL" database optimized for storing data behind web applications. Membase was developed by Zynga, NorthScale, and NHN, and its source code was released as open source in June of this year.


Membase is one of a number of new databases that break from the relational database management system (RDBMS) model. The RDBMS has a long history, dating back to the 1970s. In a relational database, data is stored in the form of tables, as is the relationship among the data. This system has worked well to handle transactional and structured data.

But as the amount of information, the kinds of information, and the number of users accessing that information have grown, the relational database has faced some challenges. With new data come new storage demands. And the traditional RDBMS is not optimized for the kind of environment that big data and cloud computing have created - one that's elastic and distributed.

From Cache to Database

Memcached is a tool that was developed to help address some of these problems as our computing needs shifted. Originally built by Brad Fitzpatrick for LiveJournal in 2003, memcached is a distributed memory caching system. But according to James Phillips, Senior VP of Products at NorthScale, many applications have been using memcached for more than just transient storage. "People like memcached because it represents a practically boundless place to easily cache data, at very low cost and with predictably stellar performance. No schemas, no tables, no sharding, no normalizing, no tuning."

NorthScale was founded by some of the leaders in the open source memcached community, and the company, along with Zynga, has taken that expertise to develop Membase, so that the same speed, flexibility, and simplicity of memcached could be made to really store data in a database, not just a cache.

Phillips says that memcached is designed to dock next to the relational database management system, and it's already seen as a "best practice bandaid." So from there, NorthScale argues that it can help move its customers from existing RDBMS technology towards a database architecture that's more scalable - in other words, to Membase.
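The "dock next to the RDBMS" role Phillips describes is the classic cache-aside read path: try the cache, fall back to the database on a miss, then populate the cache for the next reader. The sketch below is illustrative only (a plain dict stands in for a memcached client, and `query_database` is a hypothetical stand-in for an expensive SQL query), but real memcached clients expose the same get/set shape:

```python
# Cache-aside read path, the pattern memcached is typically used for.
# A plain dict stands in for a memcached client; query_database is a
# hypothetical stand-in for a slow RDBMS query.

cache = {}

def query_database(user_id):
    # Pretend this is an expensive SQL query against the RDBMS.
    return {"id": user_id, "name": "player-%d" % user_id}

def get_user(user_id):
    key = "user:%d" % user_id
    value = cache.get(key)               # 1. try the cache first
    if value is None:
        value = query_database(user_id)  # 2. cache miss: hit the database
        cache[key] = value               # 3. populate the cache for next time
    return value

first = get_user(42)   # miss: goes to the "database"
second = get_user(42)  # hit: served from the cache
```

Membase's pitch is essentially that once the cache tier can persist and replicate data itself, step 2 can disappear for many workloads.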

It helps, of course, if you're going to argue for a move away from the RDBMS to have an early success story like Zynga to point to: the 500,000 ops-per-second database behind Farmville.

The Google App Engine Team reported Multi-tenancy Support, High Performance Image Serving, Increased Datastore Quotas and More Delivered In New App Engine Release on 8/17/2010:

Today marks the 1.3.6 release of App Engine for Java and Python. In this release we have made available several exciting new features, relaxed quota and datastore limitations, and added various issue fixes.

Multi-tenant Apps Using the Namespaces API

We are pleased to announce support for multi-tenancy in applications via the Namespaces API. With multi-tenancy, multiple client organizations (or “tenants”) can all run the same application, segregating data using a unique namespace for each client. This allows you to easily serve the same app to multiple different customers, with each customer seeing their own unique copy of the app. No changes in your code are necessary to use this API: just a little extra configuration. Further, the API is designed to be very customizable, with hooks into your code that you can control, so you can set up multi-tenancy in any way you choose.

Check out our application examples for Java and Python to demonstrate how to use the Namespaces API in your application. The API works with all of the relevant App Engine APIs (Datastore, Memcache, and Task Queues). Check out our docs for Java and Python to learn more.
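The core idea is that every stored key is implicitly qualified by the current namespace, so two tenants can use identical keys without colliding. This toy class is a plain-Python illustration of that mechanism, not the real App Engine API (which sets the namespace via its `namespace_manager` module):

```python
class NamespacedStore:
    """Toy key-value store that qualifies every key with a namespace,
    mimicking how App Engine segregates each tenant's data."""

    def __init__(self):
        self._data = {}
        self._namespace = ""  # empty string is the default namespace

    def set_namespace(self, namespace):
        self._namespace = namespace

    def put(self, key, value):
        # The effective key is (namespace, key), so tenants never collide.
        self._data[(self._namespace, key)] = value

    def get(self, key):
        return self._data.get((self._namespace, key))

store = NamespacedStore()

store.set_namespace("tenant-a")
store.put("greeting", "hello from A")

store.set_namespace("tenant-b")
store.put("greeting", "hello from B")
b_value = store.get("greeting")   # tenant B sees only its own value

store.set_namespace("tenant-a")
a_value = store.get("greeting")   # tenant A's value is untouched
```

In the real API the application code calls `put`/`get` unchanged; only the namespace selection (typically derived from the signed-in user's domain) is added.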

High-Performance Image Serving

This release also includes a new, high-performance image serving system for your applications, based on the same infrastructure we use to serve images for Picasa. This feature allows you to generate a stable, dedicated URL for serving web-suitable image thumbnails. You simply store a single copy of your original image in Blobstore, and then request a high-performance per-image URL. This special URL can serve that image resized and/or cropped automatically, and serving from this URL does not incur any CPU or dynamic serving load on your application (though bandwidth is still charged as usual). It’s easy to use: just call the Python function get_serving_url or the Java function getServingUrl and supply a Blob key (with optional serving size and/or crop arguments), and you can serve dozens or hundreds of thumbnails on a single page with ease. To enable high performance image serving in your deployed application, you'll need to enable billing.

Custom Error Pages

Since launch, many developers have asked to be able to serve custom error pages instead of those automatically served by App Engine. We are happy to announce today we are supporting static HTML error pages that can be served for you automatically for over quota, DoS, timeout and other generic error cases, that you previously could not control. You can configure custom error handlers in your app.yaml or appengine-web.xml file. Check out the Java or Python docs for more information.
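For the Python runtime, the configuration is a short `error_handlers` block in app.yaml. The sketch below shows the general shape; the HTML file names are placeholders, and the supported `error_code` values should be confirmed against the docs linked above:

```yaml
error_handlers:
- file: default_error.html       # fallback for generic error cases
- error_code: over_quota
  file: over_quota.html
- error_code: dos_api_denial
  file: dos.html
- error_code: timeout
  file: timeout.html
```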

Increased Quotas

We have also continued the trend of lifting some system limitations that have been in place since launch. The Datastore no longer enforces a 1,000-entity limit for count and offset. Queries using these will now safely execute until they return or your application reaches the request timeout limit. Also, based on your feedback, we have raised nearly all of the burst quotas for free apps to the same level as the burst quotas for billed apps. Check out the docs for more information on quota limits.

Other features included in 1.3.6:

  • Java developers can now use the same app.yaml configuration file that App Engine uses for Python applications instead of appengine-web.xml (if preferred).
  • You can now pause task queues via the Admin Console interface.
  • Dashboard graphs in the Admin Console will now begin showing up to 30 days worth of data.
  • Content-Range headers are now supported with the Blobstore API.
You can read all about these features in the App Engine documentation for Java and Python. The new versions of the SDK can be found on our downloads page.

In addition to all of the new features, we’ve included several bug fixes, which you can read all about in the release notes for Java and Python.

<Return to section navigation list>