Monday, January 31, 2011

Windows Azure and Cloud Computing Posts for 1/31/2011

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

Toto Gamboa published on 1/31/2011 his Thoughts on Designing Databases for SQL Azure with a detailed analysis of SQL Azure sharding:

Cloud has been the new darling in the world of computing, and Microsoft has once again provided developers a new compelling platform on which to build their next-generation applications. Into this picture enters SQL Azure, a cloud-based edition of Microsoft’s leading database product, Microsoft SQL Server. One would ask how one designs an RDBMS database for the cloud, and specifically for the Azure platform. It is easy enough to create a simple database in SQL Azure, but database creation and database design are two different life forms. So we ask: how should databases be designed for the cloud and for SQL Azure? In this blog, I hope to provide an answer.

Scale Out

First and foremost, the cloud has become one of the primary solutions to consider when an application faces massive scalability issues. Or, to put it another way, the cloud was both a catalyst for the emergence of extremely large applications and a result of the webification of a lot of things. Design considerations need to factor in unheard-of scales that can come from almost everywhere. The bandwidth needed is global in scale, hard disk space is in the petabytes, and users are in the millions. When you talk about building an application to handle this scale, you can't do the usual stuff any one of us used to do. One now needs to think massively in terms of design. For example, no single piece of hardware can handle the kind of scale I just mentioned. So how does one design a simple database that is intended to scale up to that degree?

The Problem

Suppose we have a great idea for the next big thing, and we name it FreeDiskSpace. FreeDiskSpace allows users to upload massive amounts of files, where the files can be anything one can think of. The scale of this next big thing is thought to be in the range of 500 million users scattered around the globe, with each user allowed to upload an unlimited number of files.

The Challenge

With such an order of magnitude, the many challenges we face will force us to question every bit of knowledge and development capability we relied on when our gigs involved databases of a few gigabytes and a few thousand users. If each user uploads files in the range of not less than 5GB, one would need to store 2,500 petabytes worth of files. How does one store 2,500 petabytes worth of files and manage the uploading/downloading of 500 million users?

In the Cloud, you are less likely to see a bunch of supercomputers handling this kind of scale. Rather, you are more likely to see hundreds of thousands of commodity servers woven together into a cohesive unit. Making this mesh behave as if it were operating as a single unit requires design patterns that are more appropriate for these environments.

Database Sharding in SQL Azure

First, what is database sharding? To put it simply, database sharding is a design pattern for splitting data into shards, or fragments, where each shard can run on its own in a separate instance on a separate server located somewhere. Each shard has the same structure as the others but contains a different subset of the data.

The current version of SQL Azure offers NO feature that handles automatic scaling. This will surely change in the near future, as everybody building applications that handle these massive loads will need an automatic way for the application to cope with this amount of load and data easily. One approach that is guaranteed to be commonly used is database sharding. However, for now, sharding in SQL Azure isn’t automatic. If one needs to shard, one needs to do it by hand. As I have said, in the future Microsoft will be compelled to give us an easy way to practice sharding. Hopefully, that future is near.

Now we would ask: why do we need to shard databases in SQL Azure? The cloud, and specifically SQL Azure with its limitations, forces us to use this design pattern. SQL Azure only allows us a 50GB maximum database size, on a virtual instance running on an off-the-shelf commodity server. With that, we surely need a way to work around these constraints if we expect our data to grow to something like 500 terabytes. Second, if we foresee a huge number of users hitting our applications, doing this on a single piece of hardware would spawn concurrency nightmares that would choke our databases to death.

Though there are other approaches to solving our scaling challenges, there are interesting motivating factors that make sharding a very compelling option. First, a database server is likely to process a small database faster than a larger one. Second, by utilizing multiple instances of computing power, we can spread the processing load across these instances so more work can be executed all at once.

How To Shard

Let us go back to our FreeDiskSpace killer app (assuming we provide each user up to 1GB of database space) and practice some very simple sharding so we have a partial understanding of what sharding is all about. If we were to design the database using a single instance, the tables would look like this:


If we are to shard, we need to decide how to split up our database into logical, manageable fragments. We then decide what data is going to occupy each fragment (we can probably call the occupier our “tenant”). Based on the tables we see above, there are two fields that we can potentially use as keys for sharding. These fields are user and country. These fields should answer one question: how do we sensibly split up our database? Do we split it up by user, or by country? Both actually qualify as a basis for splitting up our database. Let's investigate each.

Sharding by field: User


By deciding to make each user a tenant in our database, the following statements would be true:

  • Each shard would only contain data related to a single user.
  • Various shards can be in the same server. Shards don’t need to be in separate servers.
  • It would render the country table a bit useless and awkward. But we can always decide to get rid of it and just maintain a column indicating a user’s country of origin.
  • Now with a one-user-one-database structure, 1 million users mean 1 million separate databases. In SQL Azure’s terms, assuming a 1GB database costs us $10 a month, designing our shards by user means we have to spend $10 million monthly. Now that is no joke!

Sharding by field: Country


By deciding to make each country a tenant in our database, the following statements would be true:

  • Each shard would only contain data related to a single country.
  • Various shards can be in the same server. Shards don’t need to be in separate servers.
  • It would render the country table unnecessary, as each database represents a country. But we can always decide to get rid of this table.
  • It is said there are only 196 countries in the world. With a one-country-one-database structure, this means that if we use the larger 10GB, $100-a-month SQL Azure databases, we only need to spend $19,600 a month.
  • This design decision is pretty useful until we run out of space in our 10GB database to contain all the users related to a country. There are solutions to this problem; however, I will try to discuss them in future blogs.

It is pretty clear that picking the right key for sharding has obvious ramifications. It is very important for us to determine which keys are appropriate for our needs. It is obvious that picking country is the most cost-effective choice in this scenario, while picking user would shield you much longer from the 10GB limit problem.
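To make this concrete, here is a minimal sketch (not from Toto's post) of what routing requests to a country shard might look like in application code; the server names, database names, and helper class are invented for illustration:

using System.Collections.Generic;
using System.Data.SqlClient;

// Hypothetical shard map: one SQL Azure database (shard) per country.
public static class CountryShardMap
{
    private static readonly Dictionary<string, string> ConnectionStrings =
        new Dictionary<string, string>
        {
            { "PH", "Server=tcp:server1.database.windows.net;Database=FreeDiskSpace_PH;User ID=...;Password=...;" },
            { "US", "Server=tcp:server2.database.windows.net;Database=FreeDiskSpace_US;User ID=...;Password=...;" }
            // ...one entry per country shard
        };

    // Every query for a given user runs against the shard that owns that user's country.
    public static SqlConnection OpenConnectionFor(string countryCode)
    {
        var connection = new SqlConnection(ConnectionStrings[countryCode]);
        connection.Open();
        return connection;
    }
}

The important design point is that the sharding key (country, in this case) has to be available wherever a connection is opened.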

This article only presents partial concepts of sharding, but it should nevertheless give you the very basic idea of how database sharding can be applied, and how it all translates when you choose SQL Azure as a platform.

Toto Gamboa is a Database Specialist, Software Designer, Bird Photographer, Roadtripper, Moviegoer, Cook, Guitarist, and Microsoft MVP.

See My Resource Links for SQL Azure Federations and Sharding Topics (updated 1/28/2011) for links to more articles about sharding SQL Azure databases.

Jiří Činčura (@cincura_net) described using SQL Azure and Entity Framework in a 1/31/2011 post:

From time to time I get a question about SQL Azure and Entity Framework. Can I use it? Does it work? Etc. Because SQL Azure is in fact a special version of MS SQL Server (you can check the unsupported and partially supported T-SQL statements), it works without any hassle. That’s the short story.

The following lines will quickly guide you through the steps of using SQL Azure with Entity Framework. You need to have an Azure account ready. There’s no way to install SQL Azure on your own server or anything like that. First, log in to the portal and select a project you have there. This will bring you to this screen.

By default only the master database is created, and it’s up to you to create additional databases. Right now you can choose from two editions and a few sizes: Web and Business, from 1GB to 50GB. The price you’ll pay depends on this selection (but not only on it). For the purposes of this article I created a 1GB Web edition database.

On the main screen you have a button to get a connection string. I often use it, as it’s easier for me to get the information from it than to compose it from pieces on screen. Mine is:

Data;Initial Catalog=test;Persist Security Info=True;User ID=jiri@lskqxi46a0;Password=******;MultipleActiveResultSets=True;Encrypt=True
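As an aside (not from Jiří's post), if you would rather build the Entity Framework connection string in code than paste it into App.config, a sketch like the following works; the server name and credentials are masked placeholders, and the Model1/Model1Container names come from the Model First steps described below:

using System.Data.EntityClient;

// Wrap the ADO.NET connection string from the portal in an EF connection string.
var builder = new EntityConnectionStringBuilder
{
    Provider = "System.Data.SqlClient",
    ProviderConnectionString =
        "Data Source=tcp:yourserver.database.windows.net;Initial Catalog=test;" +
        "User ID=user@yourserver;Password=******;MultipleActiveResultSets=True;Encrypt=True",
    // Points at the model metadata embedded in the assembly.
    Metadata = "res://*/Model1.csdl|res://*/Model1.ssdl|res://*/Model1.msl"
};

using (var ctx = new Model1Container(builder.ToString()))
{
    // ...query as usual
}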

After these initial steps we can start using the database. Because I was too lazy to create tables and so on manually, I tested one feature of Entity Framework I’m not much familiar with: Model First. You simply create the conceptual model and later generate a SQL script from it for your database (yes, Firebird has support too). I created a basic “blogging system” model. Author has many Posts. Posts have many Tags. Right-clicking on the model and selecting Generate Database from Model resulted, in my case, in a simple script.

-- --------------------------------------------------
-- Entity Designer DDL Script for SQL Server 2005, 2008, and Azure
-- --------------------------------------------------
-- Date Created: 01/29/2011 16:40:18
-- Generated from EDMX file: C:\Users\Jiri\Desktop\ConsoleApplication3\ConsoleApplication3\Model1.edmx
-- --------------------------------------------------

USE [test];

-- --------------------------------------------------
-- Dropping existing FOREIGN KEY constraints
-- --------------------------------------------------

IF OBJECT_ID(N'[dbo].[FK_BlogPostTag_BlogPost]', 'F') IS NOT NULL
    ALTER TABLE [dbo].[BlogPostTag] DROP CONSTRAINT [FK_BlogPostTag_BlogPost];
IF OBJECT_ID(N'[dbo].[FK_BlogPostTag_Tag]', 'F') IS NOT NULL
    ALTER TABLE [dbo].[BlogPostTag] DROP CONSTRAINT [FK_BlogPostTag_Tag];
IF OBJECT_ID(N'[dbo].[FK_BlogPostAuthor]', 'F') IS NOT NULL
    ALTER TABLE [dbo].[BlogPosts] DROP CONSTRAINT [FK_BlogPostAuthor];

-- --------------------------------------------------
-- Dropping existing tables
-- --------------------------------------------------

IF OBJECT_ID(N'[dbo].[BlogPosts]', 'U') IS NOT NULL
    DROP TABLE [dbo].[BlogPosts];
IF OBJECT_ID(N'[dbo].[Authors]', 'U') IS NOT NULL
    DROP TABLE [dbo].[Authors];
IF OBJECT_ID(N'[dbo].[Tags]', 'U') IS NOT NULL
    DROP TABLE [dbo].[Tags];
IF OBJECT_ID(N'[dbo].[BlogPostTag]', 'U') IS NOT NULL
    DROP TABLE [dbo].[BlogPostTag];

-- --------------------------------------------------
-- Creating all tables
-- --------------------------------------------------

-- Creating table 'BlogPosts'
CREATE TABLE [dbo].[BlogPosts] (
    [ID] int IDENTITY(1,1) NOT NULL,
    [Created] datetime  NOT NULL,
    [Heading] nvarchar(max)  NOT NULL,
    [Content] nvarchar(max)  NOT NULL,
    [Author_ID] int  NOT NULL
);

-- Creating table 'Authors'
CREATE TABLE [dbo].[Authors] (
    [ID] int IDENTITY(1,1) NOT NULL,
    [Name_FirstName] nvarchar(max)  NOT NULL,
    [Name_LastName] nvarchar(max)  NOT NULL,
    [LastLoggedIn] datetime  NOT NULL
);

-- Creating table 'Tags'
CREATE TABLE [dbo].[Tags] (
    [ID] int IDENTITY(1,1) NOT NULL,
    [Name] nvarchar(max)  NOT NULL
);

-- Creating table 'BlogPostTag'
CREATE TABLE [dbo].[BlogPostTag] (
    [BlogPosts_ID] int  NOT NULL,
    [Tags_ID] int  NOT NULL
);

-- --------------------------------------------------
-- Creating all PRIMARY KEY constraints
-- --------------------------------------------------

-- Creating primary key on [ID] in table 'BlogPosts'
ALTER TABLE [dbo].[BlogPosts]
ADD CONSTRAINT [PK_BlogPosts]
    PRIMARY KEY CLUSTERED ([ID] ASC);

-- Creating primary key on [ID] in table 'Authors'
ALTER TABLE [dbo].[Authors]
ADD CONSTRAINT [PK_Authors]
    PRIMARY KEY CLUSTERED ([ID] ASC);

-- Creating primary key on [ID] in table 'Tags'
ALTER TABLE [dbo].[Tags]
ADD CONSTRAINT [PK_Tags]
    PRIMARY KEY CLUSTERED ([ID] ASC);

-- Creating primary key on [BlogPosts_ID], [Tags_ID] in table 'BlogPostTag'
ALTER TABLE [dbo].[BlogPostTag]
ADD CONSTRAINT [PK_BlogPostTag]
    PRIMARY KEY CLUSTERED ([BlogPosts_ID], [Tags_ID] ASC);

-- --------------------------------------------------
-- Creating all FOREIGN KEY constraints
-- --------------------------------------------------

-- Creating foreign key on [BlogPosts_ID] in table 'BlogPostTag'
ALTER TABLE [dbo].[BlogPostTag]
ADD CONSTRAINT [FK_BlogPostTag_BlogPost]
    FOREIGN KEY ([BlogPosts_ID])
    REFERENCES [dbo].[BlogPosts] ([ID])
    ON DELETE NO ACTION ON UPDATE NO ACTION;

-- Creating foreign key on [Tags_ID] in table 'BlogPostTag'
ALTER TABLE [dbo].[BlogPostTag]
ADD CONSTRAINT [FK_BlogPostTag_Tag]
    FOREIGN KEY ([Tags_ID])
    REFERENCES [dbo].[Tags] ([ID])
    ON DELETE NO ACTION ON UPDATE NO ACTION;

-- Creating non-clustered index for FOREIGN KEY 'FK_BlogPostTag_Tag'
CREATE INDEX [IX_FK_BlogPostTag_Tag]
ON [dbo].[BlogPostTag]
    ([Tags_ID]);

-- Creating foreign key on [Author_ID] in table 'BlogPosts'
ALTER TABLE [dbo].[BlogPosts]
ADD CONSTRAINT [FK_BlogPostAuthor]
    FOREIGN KEY ([Author_ID])
    REFERENCES [dbo].[Authors] ([ID])
    ON DELETE NO ACTION ON UPDATE NO ACTION;

-- Creating non-clustered index for FOREIGN KEY 'FK_BlogPostAuthor'
CREATE INDEX [IX_FK_BlogPostAuthor]
ON [dbo].[BlogPosts]
    ([Author_ID]);

-- --------------------------------------------------
-- Script has ended
-- --------------------------------------------------

Now we can finally write some code to let Entity Framework do some of the dirty work. Let's create some authors and issue a non-trivial query to see what's what.

using (Model1Container ctx = new Model1Container())
{
    Author a1 = new Author();
    a1.Name.FirstName = "Foo";
    a1.Name.LastName = "Bar";
    a1.LastLoggedIn = DateTime.Now;
    ctx.Authors.AddObject(a1);

    Author a2 = new Author();
    a2.Name.FirstName = "Jiri";
    a2.Name.LastName = "Cincura";
    a2.LastLoggedIn = DateTime.Now;
    ctx.Authors.AddObject(a2);

    // save so the identity values come back from the database
    ctx.SaveChanges();

    Debug.Assert(a1.ID != default(int));
    Debug.Assert(a2.ID != default(int));

    var blogPosts = ctx.Authors
        .Where(a => a.Name.LastName == "Cincura")
        .SelectMany(a => a.BlogPosts)
        .Where(bp => bp.Tags.Any(t => t.Name == "Databases" || t.Name == "Azure" || t.Name == "Cloud"))
        .Select(bp => new { bp.Heading, bp.Created });

    Console.WriteLine((blogPosts as ObjectQuery).ToTraceString());

    foreach (var blogPost in blogPosts)
    {
        Console.WriteLine("{0} ({1})", blogPost.Heading, blogPost.Created);
    }
}

Surprise. Nothing bloodthirsty. It works as expected. Two authors are created. You can check that by querying the table or by looking at the database size in the portal. The query itself is a standard T-SQL query you can run on MS SQL Server as well; nothing magic.

SELECT
[Extent1].[ID] AS [ID],
[Extent2].[Heading] AS [Heading],
[Extent2].[Created] AS [Created]
FROM  [dbo].[Authors] AS [Extent1]
INNER JOIN [dbo].[BlogPosts] AS [Extent2] ON [Extent1].[ID] = [Extent2].[Author_ID]
WHERE (N'Cincura' = [Extent1].[Name_LastName]) AND ( EXISTS (SELECT
	1 AS [C1]
	FROM  [dbo].[BlogPostTag] AS [Extent3]
	INNER JOIN [dbo].[Tags] AS [Extent4] ON [Extent4].[ID] = [Extent3].[Tags_ID]
	WHERE ([Extent2].[ID] = [Extent3].[BlogPosts_ID]) AND ([Extent4].[Name] IN (N'Databases',N'Azure',N'Cloud'))
))

Theory tells us that if SQL Azure is more or less an MS SQL Server database, using more or less the same wire protocol and more or less the same T-SQL, it should work. And it does. Here you have a small proof.

Herve Roggero (@hroggero) explained MAXDOP in SQL Azure in a 1/30/2011 post:

In my search for a better understanding of the scalability options of SQL Azure I stumbled on an interesting aspect: query hints in SQL Azure. More specifically, the MAXDOP hint. A few years ago I did a lot of analysis on this query hint (see my article on SQL Server Central).

Here is a quick synopsis of MAXDOP: it is a query hint you use when issuing a SQL statement that gives you control over how many processors SQL Server will use to execute the query. For complex queries with lots of I/O requirements, more CPUs can mean faster parallel searches. However, the impact can be drastic on other running threads/processes. If your query takes all available processors at 100% for 5 minutes... guess what... nothing else works. The bottom line is that more is not always better. The use of MAXDOP is more art than science... and a whole lot of testing; it depends on two things: the underlying hardware architecture and the application design. So there isn't a magic number that will work for everyone... except 1... :) Let me explain.

The rules of engagement are different. SQL Azure is about sharing. Yep... you are forced to play nice with your neighbors.  To achieve this goal SQL Azure sets the MAXDOP to 1 by default, and ignores the use of the MAXDOP hint altogether. That means that all your queries will use one and only one processor.  It really isn't such a bad thing however. Keep in mind that in some of the largest SQL Server implementations MAXDOP is usually also set to 1. It is a well known configuration setting for large scale implementations. The reason is precisely to prevent rogue statements (like a SELECT * FROM HISTORY) from bringing down your systems (like a report that should have been running on a different server in the first place) and to avoid the overhead generated by executing too many parallel queries that could cause internal memory management nightmares for the host Operating System.

In summary, forcing MAXDOP to 1 in SQL Azure makes sense; it ensures that your database will continue to function normally even if one of the other tenants on the same server is running massive queries that would otherwise bring you down.

Last but not least, when you test your database code for performance on-premise, make sure to set the DOP to 1 on your SQL Server databases to simulate SQL Azure conditions.
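A rough sketch of that last tip (mine, not Herve's): capping an on-premise test server at a single degree of parallelism from code. The connection string is a placeholder, and the change requires appropriate server permissions.

using System.Data.SqlClient;

// Simulate SQL Azure's single-processor behavior on an on-premise SQL Server.
const string capDopSql = @"
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism', 1;
    RECONFIGURE;";

using (var connection = new SqlConnection(@"Server=.\SQLEXPRESS;Database=master;Integrated Security=true"))
using (var command = new SqlCommand(capDopSql, connection))
{
    connection.Open();
    command.ExecuteNonQuery();
}

// Individual statements can also be capped with the query hint: ... OPTION (MAXDOP 1)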

<Return to section navigation list> 

MarketPlace DataMarket and OData

Emil Stoychev (@estoychev) reported on 1/25/2011 that he’ll be Speaking at the Seattle Silverlight UG on March 2nd:

The MVP Summit this year takes place in Bellevue and Redmond on February 28 – March 2 and, as an MVP, I decided to come by this time. It will be my first time there and I’ve only heard how cool it is, so I’m eagerly waiting to see it for myself.

During that period there will also be a Seattle Silverlight UG meeting. Thanks to David Kelley, the UG lead, I’ll be presenting a session on OData/WCF Data Services with Silverlight on March 2nd. I’ll cover the basics of the OData protocol, the WCF Data Services client for Silverlight, and some key line-of-business scenarios like lazy loading, dynamic queries, filtering/sorting/grouping, etc.

Here is the preliminary agenda:

  1. OData, WCF Data Services
    • What is WCF?
    • What is WCF Data Services? OData?
    • Service architecture
  2. Working with data
    • Querying data
    • Hierarchical data
    • Filtering, grouping, sorting, paging
    • Lazy Loading
    • CRUD
    • Tracking Changes

The session level will be intermediate (200 according to Microsoft leveling) with drill down to specific topics.
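To give a flavor of the kind of code such a session typically covers, here is a minimal, hypothetical WCF Data Services client sketch; the service URI and the Customer type are invented for illustration only:

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

[DataServiceKey("CustomerID")]
public class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string Country { get; set; }
}

public static class ODataDemo
{
    public static void LoadCustomers()
    {
        // Filtering, sorting and paging compose into a single OData query URI.
        var context = new DataServiceContext(new Uri("http://example.com/Northwind.svc"));
        var query = (DataServiceQuery<Customer>)context.CreateQuery<Customer>("Customers")
            .Where(c => c.Country == "USA")
            .OrderBy(c => c.CompanyName)
            .Skip(20)
            .Take(10);

        // In Silverlight, queries must execute asynchronously.
        query.BeginExecute(ar =>
        {
            foreach (var customer in query.EndExecute(ar))
            {
                Console.WriteLine(customer.CompanyName);
            }
        }, null);
    }
}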

SilverlightShow will be giving some swag so be there if you want to be among the first to get the new SilverlightShow t-shirt.

As I understand from David, there are usually many Microsoft MVPs and employees at these UG meetings, so I believe it will be a lot of fun.

Julie Lerman (@julielerman) posted the source code for her Slice and Dice OData with the jQuery DataTables Plug-In article for MSDN Magazine’s 2/2011 issue to the MSDN Code Gallery on 1/24/2011 (missed when posted):

Resource Page Description
OData lets you access data over the Web through simple HTTP commands. We’ll show you how the jQuery DataTables plug-in along with the Microsoft .NET Framework and Silverlight OData client libraries let you retrieve and display this data quickly, easily and with style.

Read the article here:

The February issue wasn’t posted when the above article was posted. Stay tuned.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Vittorio Bertocci (@vibronet) created a Venn diagram of the Identity and Azure training kits and courses in his Windows Azure Platform Training Kit and MSDN Course Update: Closing January in Style post of 1/31/2011:

Wade just announced the release of the January 2011 update of the Windows Azure Training Kit and associated MSDN course: it is chock full of new stuff, go check it out!

One not-so-well-known tidbit is that there’s a significant portion of shared content between the Identity Training Kit & Course and the Windows Azure Platform Training Kit & Course; every time we update identity content that happens to touch on cloud scenarios, the content eventually finds its way into both. For example, this January release of the Windows Azure platform kit contains pretty much all the new stuff I announced in the corresponding update of the Identity Training Kit. In honor of my friend Eve Maler, who’s especially fond of Venn diagrams, here is a graphical depiction of the HOLs in the two kits. Have fun!


Eve is now a Principal Analyst at Forrester Research serving security and risk professionals and focusing primarily on identity and access management.

Jonathan Rozenblit (@jrozenblit) posted Getting to know Windows Azure AppFabric to his Clear as Cloud blog on 1/30/2011:


As you know, the Windows Azure platform has many moving parts – one of which is AppFabric. In a nutshell, AppFabric is a middleware platform for developing, deploying, and managing applications on the Windows Azure platform. It breaks down into these key areas:

  • Middleware Services: Access Control, Service Bus, Caching, and soon to be Integration, and Composite Application
  • Composite Application Environment (planned for future releases): Composition Model, Visual Design Tools managed as a Service
  • Scale-Out Application Infrastructure (planned for future releases): Composition Runtime, Sandboxing and Multi-tenancy, State Management, Scale-Out and High Availability, Dynamic Address Resolution and Routing

Itai Raz, Product Manager for Windows Azure AppFabric, just started a new blog post series that will explore key concepts and principles of Windows Azure AppFabric.  Before going into all of the technical details, Part 1 of the series (What is Windows Azure AppFabric trying to solve?) gives a great introduction and describes the challenges that Windows Azure AppFabric is meant to address. I love that he explains what AppFabric is trying to achieve - I believe that knowing why something was created before getting into its nitty-gritty ensures that it’s used in the way that it was intended to be used. Follow the series to learn more about the capabilities of AppFabric and how each of them will help address the challenges Itai talks about in Part 1.

You should also visit the Windows Azure AppFabric Developer Center on MSDN to help you get started learning Windows Azure AppFabric.  If you scroll towards the bottom of the page, you’ll find “How Do I?” videos and a couple of AppFabric samples you can download to get you going.

Here are a few additional places where you can learn about Windows Azure AppFabric:

On February 7, 2011, tune in to watch Windows Azure Boot Camp: Connecting with AppFabric, a 200 level webcast that will look at how to secure a REST Service, what you can do to connect services together, and how to work with firewalls and NATs.

And if the above wasn’t enough, you can also check out the Windows Azure AppFabric Team’s Blog and follow them on Twitter and Facebook.

This post also appears on Canadian Developer Connection

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

The Windows Azure Team updated on 1/31/2011 the “Asia-Pacific/Rest of World” section of its 20+ Nodes Available Globally for the Windows Azure CDN (now 23 nodes) originally posted on 8/9/2010:

The most common question we get asked about the Windows Azure Content Delivery Network (CDN) is, "Where are the nodes physically located?"  We're happy to say that customers choosing to serve data through the network today are offered 20 physical nodes* to improve delivery of performance-sensitive content around the globe.  Below is the list of current locations; we'll update this list as our network evolves (9/1/10: SEE UPDATES* BELOW) (1/31/11:  SEE UPDATE** BELOW).


United States
  • Ashburn, VA
  • Bay Area, CA
  • Chicago, IL
  • San Antonio, TX
  • Los Angeles, CA
  • Miami, FL
  • Newark, NJ
  • Seattle, WA

Europe
  • Amsterdam, NL
  • Dublin, IE
  • London, GB
  • Paris, FR
  • Stockholm, SE
  • Vienna, AT
  • Zurich, CH

Asia-Pacific/Rest of World

  • Hong Kong, HK
  • Moscow, RU**
  • São Paulo, BR
  • Seoul, KR*
  • Singapore, SG
  • Sydney, AU
  • Taipei, TW*
  • Tokyo, JP

Offering pay-as-you-go, one-click-integration with Windows Azure Storage, the Windows Azure CDN is a system of servers containing copies of data, placed at various points in our global cloud services network to maximize bandwidth for access to data for clients throughout the network. The Windows Azure CDN can only deliver content from public blob containers in Windows Azure Storage - content types can include web objects, downloadable objects (media files, software, documents), applications, real time media streams, and other components of Internet delivery (DNS, routes, and database queries).

No significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team reported Now Available: January Update for The Windows Azure Platform Training Kit on 1/31/2011:

A newly updated version of the Windows Azure Platform Training Kit is now available for download.  This January update provides new and updated demo scripts, as well as a new Windows Phone 7 hands-on lab (HOL) for the Windows Azure platform. Other new features include several new demo scripts and an improved mechanism for installing Visual Studio code snippets. Read more about this update in a post by Windows Azure Technical Evangelist Wade Wegner here.  For help getting started, check out the Windows Azure platform training course here.

Included in the January update are:

  • Windows Azure Connect demo script (NEW)
  • Web and Worker Role Enhancements demo script (NEW)
  • Windows Azure Virtual Machine Roles demo script (NEW)
  • Rafiki demo script (NEW)
  • Windows Phone 7 and The Cloud HOL (NEW)
  • Use Access Control Service to Federate with Multiple Business Identity Providers HOL (NEW)
  • Refreshed Identity HOLs
  • Improved Visual Studio code snippets installation
  • Several bug fixes in demos and labs

The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help developers learn how to use the Windows Azure platform.

Buck Woody (@buckwoody) posted Certification Notes: 70-583 Designing and Developing Windows Azure Applications on 1/31/2011:

It’s time for another certification, and we’ve just release[d] the 70-583 exam on Windows Azure. I’ve blogged my “study plans” here before on other certifications, so I thought I would do the same for this one. I’ll also need to take exam 70-513 and 70-516; but I’ll post my notes on those separately. None of these are “brain dumps” or any questions from the actual tests - just the books, links and notes I have from my studies.

I’ll update these references as I’m studying, so bookmark this site and watch my Twitter and Facebook posts for when I update them, or just subscribe to the RSS feed. A “Green” color on the check-block means I’ve done that part so far; red means I haven’t. First, I need to refresh my memory on some basic coding, so along with the Azure-specific information I’m reading the following general programming books:

The first place to start is at the official site for the certification. That’s here:

On that page you’ll find several resources, and the first you should follow is the “Save to my learning” so you have a place to track everything. Then click the “Related Learning Plans” link and follow the videos and read the documentation in each of those bullets.

There are six areas on the learning plan that you should focus on - make sure you open the learning plan to drill into the specifics.

  • Designing Data Storage Architecture (18%)
  • Optimizing Data Access and Messaging (17%)
  • Designing the Application Architecture (19%)
  • Preparing for Application and Service Deployment (15%)
  • Investigating and Analyzing Applications (16%)
  • Designing Integrated Solutions (15%)

Steve Marx updated his Windows Azure Startup Tasks: Tips, Tricks, and Gotchas post on 1/25, 1/26 and 1/31/2011:

In my last post, I gave a brief introduction to Windows Azure startup tasks and how to build one. The reason I’ve been thinking lately about startup tasks is that I’ve been writing them and helping other people write them. In fact, this blog is currently running on Ruby in Windows Azure, and this is only possible because of an elevated startup task that installs Application Request Routing (to use as a reverse proxy) and Ruby.

Through the experience of building some non-trivial startup tasks, I’ve learned a number of tips, tricks, and gotchas that I’d like to share, in the hopes that they can save you time. Here they are, in no particular order.

Batch files and Visual Studio

I mentioned this in my last post, but it appears that text files created in Visual Studio include a byte order mark that makes it impossible to run them. To work around this, I create my batch files in notepad first. (Subsequently editing in Visual Studio seems to be fine.) This isn’t really specific to startup tasks, but it’s a likely place to run into this gotcha.

[UPDATE 1/26/2011] If you do File/Save As…, you can click the little down arrow and choose a different encoding. “Unicode (UTF8 without signature) codepage 65001” works fine for saving batch files. Thanks, Travis Pettijohn, for emailing me this tip!

Debug locally with “start /w cmd”

This is a tip Ryan Dunn shared with me, and I’ve found it helpful. If you put start /w cmd in a batch file, it will pop up a new command window and wait for it to exit. This is useful when you’re testing your startup tasks locally, as it gives you a way to try out commands in exactly the same context as the startup task itself. It’s a bit like setting a breakpoint and using the “immediate” panel in Visual Studio.

Remember to take this out before you deploy to the cloud, as it will cause the startup task to never complete.

Make it a background task so you can use Remote Desktop

“Simple” startup tasks (the default task type) are generally what you want, because they run to completion before any of your other code runs. However, they also run to completion before Remote Desktop is set up (also via a startup task). That means that if your startup task never finishes, you don’t have a chance to use RDP to connect and debug.

A tip that will save you lots of debugging frustration is to set your startup type to “background” (just during development/debugging), which means RDP will still get configured even if your startup task fails to complete.

Log to a file

Sometimes (particularly for timing issues), it’s hard to reproduce an error in a startup task. You’ll be much happier if you log everything to a local file, by doing something like this in your startup task batch file:

command1 >> log.txt 2>> err.txt
command2 >> log.txt 2>> err.txt

Then you can RDP into the role later and see what happened. (Bonus points if you configure Windows Azure Diagnostics to copy these log files off to blob storage!)
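A hedged sketch of that bonus-points idea, using the SDK 1.3 diagnostics API; the local resource name and container name are assumptions, and the batch file would need to write its log files into that local resource:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

// In OnStart: transfer the startup task's log directory to blob storage periodically.
var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

config.Directories.DataSources.Add(new DirectoryConfiguration
{
    Path = RoleEnvironment.GetLocalResource("StartupLogs").RootPath, // assumed LocalStorage resource
    Container = "wad-startup-logs",                                  // destination blob container
    DirectoryQuotaInMB = 64
});
config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);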

Executing PowerShell scripts

To execute an unsigned PowerShell script (the sort you’re likely to include as a startup task), you need to configure PowerShell first to allow this. In PowerShell 2.0, you can simply launch PowerShell from a batch file with powershell -ExecutionPolicy Unrestricted ./myscript.ps1. This will work fine in Windows Azure if you’re running with osFamily=”2”, which gives you an operating system image based on Windows Server 2008 R2. If you’re using osFamily=”1”, though, you’ll have PowerShell 1.0, which doesn’t include this handy commandline argument.

For PowerShell 1.0, the following one-liner should tell PowerShell to allow any scripts, so run this in your batch file before calling PowerShell:

reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f

(I haven’t actually tested that code yet… but I found it on the internet, so it must be right.)

Using the Service Runtime from PowerShell

In Windows Azure, you’ll find a PowerShell snap-in that lets you interact with the service runtime APIs. There’s a gotcha with using it, though, which is that the snap-in is installed asynchronously, so it’s possible for your startup task to run before the snap-in is available. To work around that, I suggest the following code (from one of my startup scripts), which simply loops until the snap-in is available:

Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
while (!$?) {
    sleep 5
    Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
}
(That while loop condition amuses me greatly. It means “Did the previous command fail?”)

Using WebPICmdline to run 32-bit installers

[UPDATE 1/25/2011 11:47pm] Interesting timing! WebPI Command Line just shipped. If you scroll to the bottom of the announcement, you’ll see instructions for running it in Windows Azure. There’s an AnyCPU version of the binary that you should use. However, it doesn’t address the problem I’m describing here.

Using WebPICmdline (in CTP now, but about to ship) is a nice way to install things (particularly PHP), but running in the context of an elevated Windows Azure startup task, it has a surprising gotcha that’s difficult to debug. Elevated startup tasks run as NT AUTHORITY\SYSTEM, which is a special user in a number of ways. One way it’s special is that its user profile is under the system32 directory. This is special, because on 64-bit machines (like all VMs in Windows Azure), 64-bit processes see the system32 directory, but 32-bit processes see the SysWOW64 directory instead.

When the Web Platform Installer (via WebPICmdline or otherwise) downloads something to execute locally, it stores it in the current user’s “Local AppData,” which is typically under the user profile. This will end up under system32 in our elevated startup task, because WebPICmdline is a 64-bit process. The gotcha comes when WebPI executes that code. If that code is a self-extracting executable 32-bit process, it will try to read itself (to unzip) and search in the SysWOW64 directory instead. This doesn’t work, and will typically pop up a dialog box (that you can’t see) and hang until you dismiss it (which you can’t do).

This is a weird combination of things, but the point is that many applications or components that you install via WebPICmdline will run into this issue. The good news is that there’s a couple simple fixes. One fix is to use a normal user (instead of system) to run WebPICmdline. David Aiken has a blog post called “Running Azure startup tasks as a real user,” which describes exactly how to do this.

I have a different solution, which I find simpler if all you need to do is work around this issue. I simply change the location of “Local AppData.” Here’s how to do that. (I inserted line breaks for readability. This should be exactly four lines long.)

md "%~dp0appdata"
reg add "hku\.default\software\microsoft\windows\currentversion\explorer\user shell folders"
    /v "Local AppData" /t REG_EXPAND_SZ /d "%~dp0appdata" /f
"%~dp0webpicmdline" /AcceptEula /Products:PHP53 >>log.txt 2>>err.txt
reg add "hku\.default\software\microsoft\windows\currentversion\explorer\user shell folders"
    /v "Local AppData" /t REG_EXPAND_SZ /d %%USERPROFILE%%\AppData\Local /f

[UPDATE 1/31/2011] Prepended the %~dp0 to the webpicmdline call. Sometimes I just put a “cd %~dp0” at the top of the script to avoid gotchas around locations like this.

Use PsExec to try out the SYSTEM user

PsExec is a handy tool from Sysinternals for starting processes in a variety of ways. (And I’m not just saying that because Mark Russinovich now works down the hall from me.) One thing you can do with PsExec is launch a process as the system user. Here’s how to launch a command prompt in an interactive session as system:

psexec -s -i cmd

You can only do this if you’re an administrator (which you are when you use remote desktop to connect to a VM in Windows Azure), so be sure to elevate first if you’re trying this on your local computer.

The Windows Azure Team claimed We Want to Hear Your Thoughts and Suggestions about the Windows Azure web site on 1/31/2011:

Our objective is to make the usability experience intuitive and efficient. We'd like to know how we're doing on this objective, and if the information that we've presented on the site is valuable and easy to find.

If you've visited the site recently, we'd appreciate it if you could complete a brief survey about your experience. We'll be relying on this feedback to add new content and functionality to the site. We look forward to hearing from you!

The following shows how brief the feedback form is:


There’s an input box at the bottom to add free-form feedback.

Andy Cross (@andybareweb) posted Migrating to Azure Diagnostics with SDK v1.3 on 1/31/2011:

If you are planning to migrate an existing ASP.NET Web Application to Windows Azure, being able to diagnose any issues is a top priority. Windows Azure provides a comprehensive diagnostics system, designed to solve specific cloud computing diagnosis issues such as reporting statistics and errors across multiple instances in a common aggregated way. To take advantage of these diagnosis tools, you must undertake a few steps in your existing application in order to enable them. This post will detail those steps and the basics of how to migrate any Web Application to Azure.

This builds upon my implementation guide, and source code is provided at the end of the post.
This post uses a vanilla ASP.NET web application, and details every step needed to make it equivalent to a brand new Azure Web Role in terms of its ability to use Windows Azure Diagnostics. I created the Web Application using File | New | Web Application – yours will be much more complex, but as a ground up approach, this is my vanilla application:

Web Application for Migration


Next we have to create a Windows Azure Project.

Create a new Windows Azure Project


Again this is a vanilla Project. Since we already have our source code in the project that we want to migrate, we don’t need to add any Roles to the solution. So click OK without adding any roles as below:

Create an Empty Azure Project


This empty Azure project is now ready for us to add our Web Application to. For reference this is the default empty project’s structure:

Empty Azure project


Now add the existing Web Application by right clicking on the solution and adding an Existing project, browsing to your project and adding it to the solution:

Add Existing Project


Browse to the solution


Add the project to the solution


At this point you’ll notice that there are no Roles in your solution – this means there’s nothing for Azure to do, a Role is a project that will run on Azure. We must tell Visual Studio that the application that we just added is a Role, and to do that we right click on “Roles” and select a “Web Role in Solution”. This allows us to select from the project list that comes up – note that if you can’t see your project in the list, then you may need to migrate the project to being a Web Application first.

Here are the steps you need:

Add Web Role Project in Solution


Select a Project


Your Role added


Now if we start this project with Visual Studio, we’ll see the application run. However, we will not see any diagnostics within the Windows Azure Diagnostics framework, as we haven’t configured anything yet. Thus we must begin to configure the Diagnostics framework.

Start by adding a class to your Web Application project. You can add it anywhere in the application; since it will be compiled, its path is irrelevant. The name is also irrelevant; it is the inheritance hierarchy that is important.

Add a class to the application


You must make this class inherit from RoleEntryPoint:

RoleEntryPoint derivation


As you can see, the RoleEntryPoint class is underlined by Visual Studio; in this case that means the class isn’t found, because we are missing a reference and a using directive. You can add a reference to Microsoft.WindowsAzure.ServiceRuntime and add a using directive to solve this:

Add reference to Microsoft.WindowsAzure.ServiceRuntime


Add a using directive (Here shown using Intellisense)


Now we can begin to add our code to configure the Windows Azure Diagnostics framework. This is done using the code from my previous blog. First, however, we must override the OnStart method in order to implement the code in the right place:

Override OnStart()


Now you can put the following code as the OnStart method:

public override bool OnStart()
{
    // Put in here all the diagnostics setup that we want.
    // See my previous Windows Azure Diagnostics post for the details.
    string wadConnectionString = "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString";
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue(wadConnectionString));

    RoleInstanceDiagnosticManager roleInstanceDiagnosticManager = storageAccount.CreateRoleInstanceDiagnosticManager(RoleEnvironment.DeploymentId, RoleEnvironment.CurrentRoleInstance.Role.Name, RoleEnvironment.CurrentRoleInstance.Id);
    DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

    config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);
    config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Undefined;

    roleInstanceDiagnosticManager.SetCurrentConfiguration(config);

    // Note that a trace statement here will not append to the Windows Azure Diagnostics logs, as there is no
    // trace listener set up for RoleEntryPoint.cs.
    // Make sure you have added the trace listener to web.config as shown below.

    return base.OnStart();
}

This will again cause a couple of reference errors, so you should add a reference to Microsoft.WindowsAzure.Diagnostics and Microsoft.WindowsAzure.StorageClient.

Your references will look like this once complete:

Complete References


If you are using System.Diagnostics.Trace, you must also add a piece of configuration to your Web.config in order to append trace messages to the new diagnostics framework. If you are using Windows Event Logs, you do not have to do this, but you must adapt the above “OnStart” method to set up the transfer of Event Log details. Information on how to achieve transfer of Event Logs in Windows Azure can be found here.
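If you go the Event Log route, a minimal sketch of the extra lines (slotted into the same OnStart, before SetCurrentConfiguration is called) looks roughly like this; the Application!* channel is just an example:

// Transfer Windows Event Log entries (Application log) to the WADWindowsEventLogsTable.
config.WindowsEventLog.DataSources.Add("Application!*");
config.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1D);
config.WindowsEventLog.ScheduledTransferLogLevelFilter = LogLevel.Undefined;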

The Web Config entry is:

<system.diagnostics>
  <trace autoflush="true" indentsize="4">
    <listeners>
      <add name="AzureDiagnostics" type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </listeners>
  </trace>
</system.diagnostics>

Once you have completed this, you can then run your application, and any trace messages will be logged to the WADLogsTable in Development Storage:

Results in WADLogsTable
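For example, with the listener registered, an ordinary trace call anywhere in the web application ends up in that table:

// Any of the standard Trace methods flow through the DiagnosticMonitorTraceListener.
System.Diagnostics.Trace.TraceInformation("Home page rendered at {0}", DateTime.UtcNow);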


You can find the source code for this here: Migrate to Windows Azure Diagnostics

Andrew Dobson (@Azulloandrew) reported Respond platform now on Windows Azure on 1/31/2011:

Respond is now running on Microsoft’s Windows Azure cloud. We moved Respond to Azure to cope with the fast growth we have been experiencing. This will allow us to maintain performance as Respond grows and provide our publishers and advertisers with a rapid service.

From the Respond Web site:

Respond is an innovative advertising platform that helps publishers earn more revenue, and advertisers acquire new customers.

Respond helps solve the problem of falling ad click-through rates by making it possible for people to engage with advertisers without leaving the page they are visiting.

Learn more about how the Respond button beats banner blindness and offers an alternative to interruptive ads, or contact us now.

Dennis Burton (@dburton) described Environment Based Configuration in Windows Azure in a 1/31/2011 post:

image This is one of those examples of where demo code and samples diverge from the code you use in a real application. For almost every example you see for loading configuration settings, like connection strings, you will see the following code:


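That call is typically the familiar one-liner (shown here with the ConnString setting used in the example below):

string connectionString = RoleEnvironment.GetConfigurationSettingValue("ConnString");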
This code will pull the value in the service configuration file associated with the given key. This would be sufficient if we had the same tools for service configuration files that we have for web.config files. For web.config files, we have a set of XSL files that can be used to apply different settings for different environments. One common place this occurs is in connection strings. In the real world, we set up the configuration strings for production, staging and development environments once and use the config file generated for the current environment.

Adapting config in Azure

Until the XSL tools are available for the service configuration, this technique and helper class can reduce the amount of work you have to perform on your service configuration when changing environments during the development cycle. First, add an entry in your service configuration to indicate your current environment. I use values of Development, Staging, and Production for this setting. Then for your entries that have a different value between the different environments, create a setting with the original setting name prefixed with the environment name. The image below shows how this would look for the setting ConnString.


The AzureConfigurationSettings class (shown below) provides the functionality for the CurrentEnvironment setting to determine which of the ConnString values should be used for the current instance. Using this utility, all of the appropriate connection strings may be kept in the service configuration and only one setting needs to change when switching environments. In order to consume this in your code, use AzureConfigurationSettings instead of RoleEnvironment when reading configuration settings.
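Consumption is then a one-line change (again using the ConnString example):

string connectionString = AzureConfigurationSettings.GetConfigurationSettingValue("ConnString");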


It is worth noting that Alex Lambert has found a way to run the configuration transforms on service configuration files. He documents the process in a blog post. A side effect of this is that you lose some ability to use the visual tools for editing configuration. There is, however, much less noise in the configuration file than with the approach in this blog post. Until support for configuration transforms is added to Visual Studio, you will have to pick your pain points.

How it works

The AzureConfigurationSettings class will first attempt to use the configuration setting without the decoration for current environment. This ensures that code you have today will function the same. If no value was found without decoration, the code will attempt to use the configuration setting prefixed with the string found in the CurrentEnvironment setting. I also handled the RoleEnvironment.Changing event so that if a new configuration were loaded through the portal with the CurrentEnvironment setting changed, the cache of that value would be invalidated.

The Source

The code you see below may also be found on bitbucket with the rest of the solution.

public class AzureConfigurationSettings
{
    private const string CurrentEnvironmentKey = "CurrentEnvironment";
    private readonly static object lockObj = new object();

    static AzureConfigurationSettings()
    {
        RoleEnvironment.Changing += (sender, e) => {
            var hasCurrentEnvironmentChanged = e.Changes
                .OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any(change => change.ConfigurationSettingName == CurrentEnvironmentKey);
            if (hasCurrentEnvironmentChanged)
            {
                lock (lockObj)
                {
                    isCurrentEnvironmentLoaded = false;
                }
            }
        };
    }

    public static string GetConfigurationSettingValue(string key)
    {
        string value;
        if (TryGetValue(key, out value))
            return value;
        return null;
    }

    private static bool isCurrentEnvironmentLoaded = false;
    public static string currentEnvironment;
    public static string CurrentEnvironment
    {
        get
        {
            if (!isCurrentEnvironmentLoaded)
            {
                lock (lockObj)
                {
                    if (!isCurrentEnvironmentLoaded)
                    {
                        try
                        {
                            currentEnvironment = RoleEnvironment
                                .GetConfigurationSettingValue(CurrentEnvironmentKey);
                        }
                        catch (RoleEnvironmentException)
                        {}
                        isCurrentEnvironmentLoaded = true;
                    }
                }
            }
            return currentEnvironment;
        }
    }

    private static bool TryGetValue(string key, out string value)
    {
        value = null;
        try
        {
            value = RoleEnvironment.GetConfigurationSettingValue(key);
        }
        catch (RoleEnvironmentException)
        {
            try
            {
                if (!string.IsNullOrEmpty(CurrentEnvironment))
                    value = RoleEnvironment.GetConfigurationSettingValue(CurrentEnvironment + key);
            }
            catch (RoleEnvironmentException)
            {
                return false;
            }
        }
        return true;
    }
}

Dave Haynes reported Ayuda Makes Splash with Free Playback Software for DOOH (digital out-of-home media) on 1/31/2011:

One of my stops last week in Montreal was at the offices of Ayuda Media Systems, in a historic riverfront building in old Montreal. I didn’t know a whole bunch about the company, except that its background was in Enterprise Resource Planning (ERP) systems for the OOH media business. [Link edited.]

I also knew the company had more recently expanded its offer into Digital OOH, and that it won the RFP to develop a search and discovery tool for the Digital Place-Based Advertising Association, and that Ayuda had its own free planning and buying tools for agencies. The hook with Symphony is it’s free, but for a full, robust back-end with stuff like billing, the additional modules were fee-based. That’s where Ayuda would make money.

CEO Andreas Soupliotis kindly spent a couple of hours yakking with me last week, and hinted at a big announcement this week in Amsterdam, at the DISCO event that is running just before the ISE trade show. It’s pretty interesting. The company has added a digital signage software player, effectively bridging the gap between scheduling and playback and negating the need for sorting out a handshake between two different systems.

Called Splash, it is ERP software for DOOH ad networks, built on the cloud-based Microsoft Windows Azure platform. The system includes proposal generation, billing, financial reporting, and CRM on the back-end, but also a content management system for network operators and a free digital software player – called the “splash player.”

The really interesting thing is that the player is open-source, so that it can in theory work with any other digital signage CMS software, Windows OR Linux. The media player is the open-source Mplayer and the scheduling is rules-based, which is the more sophisticated way of building schedules, using data instead of stacked playlists.

I just have a copy of the PPT presented at DISCO, so I am working off bullet points and extrapolating on them. The arguments include how this fosters innovation, lets hardware companies focus on what they are good at instead of building custom systems, helps merging networks unify on a single platform and lets some operators work with a net as they worry about the fragile state of their existing software vendors.

Soupliotis suggested in closing (in the PPT) that there are lots of announcements to come, with multiple CMS systems managing OpenSplash players  and multiple OpenSplash Compatible Displays.

This will raise eyebrows in a few quarters, particularly at companies like BroadSign (whose calling card has for years been a DOOH-centric CMS) and NEC Displays, which launched Vukunet with the notion of tapping into the Digital OOH advertising market to stimulate sector growth and, in turn, panel sales.

Ayuda has a booth at DSE and this announcement will certainly drive some traffic.

More details likely to follow …

Avkash Chauhan describes forthcoming Hawaii components in his Microsoft Research Released Windows Phone 7 + Cloud Services SDK (Codename - Project Hawaii) post of 1/30/2011:

The Microsoft Research team just released a new SDK named “Windows Phone 7 + Cloud Services SDK” that combines Windows Phone 7 development with Windows Azure. The SDK was released under the code name Project Hawaii; a short summary is below:

Project Hawaii involves building web applications and services, as well as mobile applications. While there are many platforms available, we have currently chosen to use Windows Phone 7 as the mobile platform and either Windows Azure or Internet Information Services (IIS) as the web-application server. In the January 27, 2011 release the SDK has the following two parts:

imageRelay Service: Most mobile service providers do not provide mobile phones with consistent public IP addresses that would allow them to be reachable from other devices. This makes it difficult to write applications where mobile phones communicate with each other directly.

The Hawaii Relay Service provides a relay point in the cloud that mobile applications can use to communicate. It provides an endpoint naming scheme and buffering for messages sent between endpoints. It also allows for messages to be multicast to multiple endpoints.

Rendezvous Service: The Hawaii Rendezvous Service is a mapping service from well-known human-readable names to endpoints in the Hawaii Relay Service. These well-known human-readable names may be used as stable rendezvous points that can be compiled into applications.
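To make the relay idea concrete, here’s a purely hypothetical C# sketch of the pattern Avkash describes – a client pushing a message to a named endpoint that a relay buffers until the receiving phone polls for it. This is not the Hawaii Relay Service API; the relay URL and endpoint name are invented for illustration:

```csharp
using System;
using System.Net;

// Purely hypothetical sketch of the relay pattern -- NOT the Hawaii Relay Service API.
// The relay host and endpoint names below are invented.
class RelayPatternSketch
{
    const string RelayBaseUrl = "https://relay.example.com/endpoints/";

    static void Main()
    {
        using (var client = new WebClient())
        {
            // Phone A pushes a message to a named endpoint; the relay buffers it
            // because Phone B has no public IP address and may not be reachable right now.
            client.UploadString(RelayBaseUrl + "phone-b-inbox", "POST", "Hello from phone A");

            // Phone B later polls the same named endpoint and drains any buffered messages.
            string buffered = client.DownloadString(RelayBaseUrl + "phone-b-inbox");
            Console.WriteLine(buffered);
        }
    }
}
```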

The SDK also has a February 2011 release scheduled which will include the following two more components:

OCR in the Cloud: The Hawaii OCR in the Cloud service takes a photographic image that contains some text and returns the text. For example, given a JPEG image of a road sign, the service would return the text of the sign as a Unicode string.

Speech to Text: The Hawaii Speech to Text service takes a spoken phrase and returns text (right now in English).

Project Hawaii Web Page:

The SDK is immediately available for download at the link below:

Bill Zack reminded Azure developers on 1/30/2011 about availability of Microsoft Research’s Azure Throughput Analyzer:

image If you are a software company developer planning to convert your on-premise application to Windows Azure you should be aware of this tool.

The Microsoft Research eXtreme Computing Group cloud-research engagement team supports researchers in the field who use Windows Azure to conduct their research.

image As part of this effort, they have built a desktop utility that measures the upload and download throughput achievable from an on-premise client machine to Azure cloud storage (blobs, tables and queues).

imageThe download contains a desktop utility and an accompanying user guide. You simply install this tool on an on-premise machine, select a data center for the evaluation, and enter the account details of any storage service created within it.

The utility will perform a series of data-upload and data-download tests using sample data and collect measurements of throughput, which are displayed at the end of the test, along with other statistics.
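For developers who want a rough idea of what the utility measures, here’s a minimal C# sketch of the same kind of test – timing a blob upload and download with the Windows Azure StorageClient library of the time. It isn’t the Research tool itself, and the connection string, container and blob names are placeholders:

```csharp
using System;
using System.Diagnostics;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Minimal sketch of the kind of measurement the Throughput Analyzer performs:
// time the upload and download of a sample blob and report MB/s.
// Replace the placeholder connection string with your own storage account.
class BlobThroughputSketch
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOURACCOUNT;AccountKey=YOURKEY");
        var container = account.CreateCloudBlobClient().GetContainerReference("throughput-test");
        container.CreateIfNotExist();

        byte[] sample = new byte[4 * 1024 * 1024];          // 4 MB of test data
        new Random().NextBytes(sample);
        var blob = container.GetBlobReference("sample.bin");

        var watch = Stopwatch.StartNew();
        blob.UploadByteArray(sample);
        watch.Stop();
        Console.WriteLine("Upload:   {0:F2} MB/s", 4.0 / watch.Elapsed.TotalSeconds);

        watch = Stopwatch.StartNew();
        blob.DownloadByteArray();
        watch.Stop();
        Console.WriteLine("Download: {0:F2} MB/s", 4.0 / watch.Elapsed.TotalSeconds);
    }
}
```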

Add this to your toolbox.

<Return to section navigation list> 

Visual Studio LightSwitch

Alessandro Del Sole (@progalex) is writing Visual Studio LightSwitch Unleashed for Pearson Higher Education’s Sams Publishing imprint:

image ISBN-10: 0132684217
ISBN-13:  9780132684217

Publisher:  Sams Publishing
Copyright:  2011
Format:  On-line Supplement; 792 pp
Status: Not Yet Published
Estimated Availability: 05/20/2011

Online purchase price: $54.99

This item is not yet available for purchase.
See estimated availability date above.


Table of Contents

image Part I. Building Applications with LightSwitch
1. Introducing Visual Studio LightSwitch
2. Exploring the IDE
3. Building Data-centric Applications with LightSwitch
4. Building More Complex Applications with Relationships and Details Screens

image Part II. Manipulating Data
5. Customizing Data Validation
6. Querying, Filtering and Sorting data
7. Customizing Applications with User Controls
8. Implementing Reporting and Printing Features
9. Aggregating Data from Different Data Sources

Part III. Advanced LightSwitch
10. Handling Events in Code
11. Dissecting a LightSwitch Application
12. Advanced LightSwitch with Visual Studio 2010
13. Debugging LightSwitch Applications

Part IV. Securing and Deploying Applications
14. Implementing Authentication
15. Deploying LightSwitch Applications

Part V. Extending the Development Environment
16. Customizing the IDE
17. Downloading and installing Extensions
18. Creating and Sharing LightSwitch Shell Extensions

Appendix A. Installing and Configuring Visual Studio LightSwitch

Alessandro is the author of Sams’ Visual Basic 2010 Unleashed.

<Return to section navigation list> 

Windows Azure Infrastructure

From the 1/31/2011 update to my Windows Azure Compute Extra-Small VM Beta Now Available in the Cloud Essentials Pack and for General Use post:

imageHere’s a sample of the warning you’ll receive when your usage of Windows Azure compute hours reaches 75% of your free software benefit. In my case, the message relates to my OakLeaf Systems Azure Table Services Sample Project - Paging and Batch Updates Demo running on my MSDN Ultimate subscription benefits, which has a billing period that ends on the 5th of the month. You can expect a similar warning for Windows Azure 30-Day Passes and Windows Azure Platform Cloud Essentials for Partners benefits.


Mario Meir-Huber continued his Cloud Computing: Understanding Windows Azure series with Part 2: A look inside the Windows Azure datacenters on 1/31/2011:

image To understand Windows Azure and the Azure Services Platform, it's necessary to understand how the Microsoft datacenters work. This article provides an overview of how Microsoft designs its datacenters and why the Generation 4 Datacenters are so revolutionary.

The Building of Datacenters
imageMicrosoft has been building data centers for a long time. One of the best-known services Microsoft offers is Windows Update, which delivers updates through its content delivery network all over the world. But this is not the only product Microsoft's datacenters are famous for. Other important products are Windows Live Messenger, Hotmail and Windows Live ID. Windows Live Messenger is one of the largest instant-messaging services and Hotmail is one of the most widely used e-mail services. Microsoft authorizes millions of users every day with its Live services, which are used for Hotmail, Messenger and numerous other services. As you can see, Microsoft has experience building datacenters, but until now it hasn't sold datacenter capacity as a product like Windows Azure.

Microsoft's G4 - Generation 4 - Datacenters
Microsoft Research did a great job of improving Microsoft's datacenters, especially how they are built. Microsoft calls this the G4 - Generation 4 - Datacenter. They have an industrial design - components are standardized, which lowers the cost and enables the vendors to use templates when designing their servers for Microsoft. Generation 4 Datacenters are basically built in containers - yes, exactly the containers we think of as shipping containers. There are major advantages to this design. Imagine a datacenter needs to be relocated: Microsoft would only need a couple of trucks and some property and the relocation is almost done. Another advantage is that server vendors such as HP or Dell know exactly what the server racks should look like when packaging them in a container. If a datacenter needs to grow, a Generation 4 Datacenter just adds some additional containers to the existing ones. In addition, Microsoft focused on building standard tools for the cooling system so that local maintenance workers can easily get trained on the systems. It's important to note that the Generation 4 Datacenters aren't only a containerized server room. With the Generation 4 Datacenters Microsoft improves the entire life cycle of how the data centers are built and operated. This gives Microsoft some additional benefits such as faster time-to-market and reduced costs.

How Microsoft Datacenters Help Protect the Environment
The term "Green IT" has been around for a while. Microsoft takes this term seriously and tries to minimize the energy consumption of its datacenters. For Microsoft this is not only an opportunity to lower energy and cooling costs but also a way to protect the environment. With the Generation 4 Datacenters, Microsoft tries to build the containers with environmentally friendly materials and to take advantage of "ambient cooling." The latter focuses on reducing the amount of energy that needs to be invested to cool the server systems by taking advantage of the datacenter's environment. There are a couple of best practices and articles available on what Microsoft does to build environmentally friendly datacenters. I have included some links at the end of the article.

For an overview of Microsoft's datacenter design, watch this video, which explains how Generation 4 Datacenters are built.

Security in Microsoft's Datacenters
Microsoft has a long tradition of building datacenters and operating systems. For decades, Microsoft had to face hackers, viruses and other malware that tried to attack their operating systems. More than other vendors, Microsoft learned from these attacks and started to build a comprehensive approach to security. The document I refer to in this article describes Microsoft's strategy for a safe Cloud Computing environment. Microsoft built an online services security and compliance team that focuses on implementing security in their applications and platforms. Microsoft's key assets for a safe and secure cloud computing environment are the commitment to trustworthy computing and the need for privacy. Microsoft works with a "privacy by default" approach.

To secure its datacenters, Microsoft holds datacenter security certifications from various organizations such as ISO/IEC and the British Standards Institution. Furthermore, Microsoft uses the ISO/IEC 27001:2005 framework for security, which consists of the four steps "Plan, Do, Check, Act."

If you want to go deeper into this Topic, I recommend you read "Securing Microsoft's Cloud Infrastructure."

What Happens with the Virtual Machines?
Figure 1 explains exactly what is going on in a Windows Azure Datacenter. I found this information in David Lemphers's blog, where he gave an overview of what happens in the datacenter. First of all, the servers are started and a maintenance OS is downloaded. This OS now talks to a service called "Fabric Controller." This service is in charge of the overall platform management and the server gets the instruction to create a host partition with a host VM. Once this is done, the server will restart and load the Host VM. The Host VM is configured to run in the datacenter and to communicate with other VMs on a safe basis. The services that we use don't run in the host VM. There's another VM, called the Guest VM, that runs within the host VM (the host VM is booted natively). Since we now have the VMRole, every guest VM holds a diff-store that will store the changes that are made to the virtual machine. The standard image is never modified. Each Host VM can contain several guest VMs.


This article is part of the Windows Azure Series on Cloud Computing Journal. The Series was originally posted on, the official Blog of the Developer and Platform Group at Microsoft Austria. You can see the original Series here.

Redgate Software’s Simple-Talk blog posted on 1/20/2011 the transcription of an interview with Buck Woody as Azure: Towards 'Platform as a Service' (missed when posted):

imageWe met up with the great Buck Woody at PASS. We bought him a coffee- we positioned the cup,  punched the button and hot instant coffee poured out. We gave him  the cup of coffee and switched on the microphone. Instantly, from Buck Woody, out poured a wonderful[ly] clear explanation of Windows Azure. This is the transcription.

One word, many products

imageAzure actually consists of two big pieces, the first and most important of which is Windows Azure. We usually just refer to it as ‘Azure’. The second piece is always referred to as SQL Azure; it’s actually a SKU, a product within Microsoft that you buy separately. (A SKU is a Stock Keeping Unit and means an individual product or version of a product.)

"It is Software as a Service running on infrastructure as a service that I can control. That is Windows Azure."

Just as you might buy SQL Server 2008 R2 Enterprise edition, you can purchase, in a different way, SQL Azure. So the term ‘Azure’ can refer to two different things

Added to this, we have another piece of Azure that includes Office 365, Business Productivity Online Suite (it used to be called BPOS), Hotmail, and even something as far over to the left as Xbox LIVE, which I think is the largest purchased cloud product on the planet. Xbox is one of those products that meets all of the criteria of a Cloud application. You would probably never think of Xbox LIVE as being a cloud, but it is.

Buzzword Bingo.

Whenever I talk about cloud computing, I have to explain why I use buzzwords even though, in principle, I hate doing so. Buzzwords can be useful before a concept gets a shared vocabulary. I was reading recently a very interesting college textbook on development patterns and architecture. I think it was back from the 80s or 90s. They were using the term ‘ object-oriented’ and the professor said something like ‘I can’t stand these buzzwords but they’re required in an immature industry’. They give us something until we come up with a common lexicon that people will adapt to.

You will always start with buzzwords and then move on. None of us call everything “i” anymore – remember how Oracle was 9i and 10i and so on, to mean Internet? We used to put “e” in front of everything – remember that, when the Internet was young in the 90s? Everything was “e”-dash-this, “e”-dash-that, even computers, e-machines and so on, as if other machines are not electronic. In the same way, I think that, in the future, the term ‘cloud’ will go away. You can call them online services, but again, it’s an immature industry so people haven’t developed that lexicon yet.

Software as a utility

We are increasingly thinking of software in the same way as the utilities in our home, such as water or power. We use a utility rather than own it. You don’t own a power plant but you use the power and that’s a fine analogy for computing in the distance or in the cloud, but I think there’s a better analogy and it is the telephone company.

Back in the 1910s and 20s you literally bought a switchboard if your business wanted telephones in their building. Then, they were the ones with the old candlestick that you picked up and put one end in your ear and held up the other to your face. You paid for, and bought, a switchboard downstairs in the basement somewhere and you put people on that and they would take a wire and pull it from one location and put it into another. They’d hook your building up to the Internet of its day, the telephone wires that ran out on the street.

Infrastructure as a service

In those days, you would need to build and own everything just to connect to the telephone system. In effect you ran part of the telephone company. That’s what you’re doing now in IT– you buy and build computers and servers, install them in your building, you hire SysAdmins, DBAs, mail administrators, web administrators, they all live in your building and you run part of your computing here and you hook all that up to the interwebs, but you own it all, that’s what you do.

That’s what happens today with server-based applications. Some companies have said the equivalent of “We will take the switchboard, the whole thing, and we will run that switchboard on your behalf but you still own the switchboard.” This is taking a virtual PC and hosting it for you. A lot of companies do this and they call this the cloud. I don’t agree; we call that Infrastructure as a Service (IAAS). One definition of a cloud is that it is IAAS, however. It’s taking your physical platform and moving it somewhere else: you’ve eliminated the hardware layer but you own everything from the OS up, you have to patch it and so on. It’s still a computer, it may be a big one, but it’s a server; you can buy more memory for it and more hard drives for it and so on, but it is an atomically contained unit. It is your responsibility.

All you’ve actually done is to abstract away the hardware. Some people call this 'CoLo', some people call this Rack Hosting. You’ve only removed the switchboard but you still own everything there and more than likely you still even have people on campus that run it for you, up there.

It’s Somebody Else’s Hardware – SEH is what I call it – it’s somewhere else. So that is one definition of cloud, but I don’t subscribe to it. To me, it seems to be no more than just hosting. It is basically a Virtual Machine (VM) somewhere else.

Software as a service

So we move on in history in our telephone analogy up to the 1950s and 60s: the company says “We’re tired of running a switchboard, I don’t want to run a switchboard anymore; I just want to pick up the phone and make a call.” The telephone companies said “No problem. We will repair the lines. We have vans and people that drive the vans. We have the phone lines and we’ll put the phones into your offices for you. You can choose a black or green or in some cases a putrid yellow phone. We'll hook it up and you’ll pay us a fee. All you need to do is to dial calls, just make phone calls.”

So they provided the end-to-end service and in fact that is equivalent to Hotmail and all the other online offerings, software as a service, SAAS. You now have no control whatsoever; you just use it. That is the ‘power model’, but it's that 1960s Mad Men kind of telephone system where you just buy a telephone and you get everything, but you can't modify the service to fit your requirements.

And so that's Software as a Service, and it's got a place: maybe I just want to make phone calls, so I buy a phone. Now I happen to buy my phone at a local store, but I could even get a phone from my phone company and they would take care of that for me.

But in an enterprise there are problems with this: What if I need one extra field on Hotmail? Well Microsoft is not going to make that for you. What if I would like for my screen to be in different colours based on who I am? Nobody is going to make that for you. There you go, I brought you the food out, eat that, right?

Platform as a Service

Now we move up into what I like to call the cloud, which is the platform as a service, PAAS. Now this is a little different because now we are at the ‘Jack Bauer’ phone stage, the 24 phone stage where the phone company handles the wires, but I'm allowed to buy any phone I want. I can even have no phone and use a computer and I can control bits and pieces of it to almost any degree.

I can change the ringtone, I can change the routing of the calls and so on. It's very simple and easy to manipulate all this on the front end but I'm still not the phone company. I've empowered my phone to be different things but I don't have to be the phone company at all.

“It is Software as a Service running on infrastructure as a service that I can control. That is Windows Azure.”

The Anatomy of Azure

So Windows Azure is made up of three big pieces: the first is something called roles, the second is something called storage and the other is something called fabric or application fabric: They do different things.

Roles: There are three kinds of role: the Web, the Worker and the VM Role.

The first kind is what we call a web role, which is an HTTP Server. It is IIS under the covers, slightly different, but it’s IIS so that you can run ASPX code, Java code, any kind of code that runs in IIS today including C++ .

It's the face that you present to your users and that can scale. By ‘scale’, I mean that the developers have one login where they load code onto that role, while the manager has a different login where he pays for that. If the manager wants to pay for more of those, because it's Christmas time or the World Cup and pizzas are being delivered and so on, they can buy more; and when they don't need as much they can scale back and buy less.

The developer however couldn't care less, because code is code; so this also scales. The web role is your front end. Sitting behind, underneath, and around that is something we call a worker role. This role is analogous to a Windows service or a Unix daemon. Basically, the idea is that it doesn't present a front end at all, but it can be C# code or Java and so on, and it can talk with a web role via a queue; there's a little queue between them and they can communicate.

They did this because in Windows Azure the web and worker roles are stateless, I don't need to know which one has the ball, and it is this which allows ultimate flexibility in scale.

Whereas we, as developers, were thinking of the OS upwards, now we are only required to think of code up. Actually, it goes all the way back to the Minix, Unix, mainframes and VMS and you name it. The developers there had no clue where the disk packs were. They didn't know how much memory the overall unit had. They had no clue how to add another user to that system, they were dropped off in their world, they wrote code and it ran somewhere, that's what this is.

It's very 70s isn't it? The shortcoming on the mainframes back in the day was the front end, it wasn't as pretty as what you could get on a PC, which is why the PC revolution took off. However, the detriment to the PC is that it’s atomic; it's bound into a single box. Ever since then, we've been trying to figure out how to scale this thing out, through just incredibly stupid ways of trying to figure out how to tie PCs back together.

So Microsoft thought about it, and said “Stop, stop, just throw the code over, we'll run it across N nodes, we’ll balance it, as long as it's stateless!”. Statelessness is required, of course, for something of this magnitude, unless you want to kill your network. You make it stateless, you get a queue as a persistence layer if you need one, and then you have the worker role doing work behind these web roles.
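As a rough illustration of the hand-off Buck describes (not code from the interview), here’s a minimal C# sketch of a web role dropping work onto a Windows Azure queue for a worker role to drain, using the StorageClient library of the time; the queue name and connection string are placeholders:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Minimal sketch of the stateless hand-off: the web role drops work onto a queue
// and a worker role drains it. Names and the connection string are placeholders.
class QueueHandOffSketch
{
    static readonly CloudQueue Queue = CloudStorageAccount
        .Parse("DefaultEndpointsProtocol=https;AccountName=YOURACCOUNT;AccountKey=YOURKEY")
        .CreateCloudQueueClient()
        .GetQueueReference("work-items");

    // Called from the web role: enqueue and return immediately, keeping no state.
    static void EnqueueFromWebRole(string payload)
    {
        Queue.CreateIfNotExist();
        Queue.AddMessage(new CloudQueueMessage(payload));
    }

    // Worker role loop: any instance can pick up any message.
    static void WorkerRoleLoop()
    {
        while (true)
        {
            CloudQueueMessage msg = Queue.GetMessage();   // invisible to other workers while processed
            if (msg == null) { System.Threading.Thread.Sleep(1000); continue; }

            Console.WriteLine("Processing: " + msg.AsString);
            Queue.DeleteMessage(msg);                     // remove only after successful work
        }
    }
}
```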

We’ve now introduced the web and worker role. There's one more we've got called a VM role which is there to cater primarily for people who say “we still want that bounded box but we want you to run it.” That's fine. So we have that standard VM role. I'll just let you imagine what that looks like, it's not really that interesting to talk about

Storage: Behind the application are roles, and so how do front-end applications interact with them? Web, HTTP, OData or REST calls. So you can have fat applications, web applications, thin applications - it doesn't matter; we don't care, because Azure is the back-end.

"Windows Azure and SQL Azure are two different things and you can use one without the other."

I bought an iPad specifically to demo Azure applications, to explain to people that I don't care what you choose for the front end; it's up to you. It was my first Apple product in a long time. A Cloud service should take advantage of whatever hardware you have. The reason I demo Azure on an iPad is to show people that an Apple product or any product – Linux or anything else – will work with an Azure app, because Azure is the back end. It’s not Flash, it's not Silverlight.

Behind Azure is the storage. We have two types of storage, not counting the queue that talks between processes. These two types are blobs and tables. It is a dangerous thing to say the word ‘Tables’ to an audience of database administrators, but Azure does not use standard relational tables. Think ’key value pairs’, think ‘an XML document’, think ‘using sed or grep’, that kind of thing. It can be terabytes large.
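As an aside, here’s a minimal C# sketch of what those key/value ‘tables’ look like with the v1.x StorageClient library – an entity keyed by PartitionKey and RowKey, inserted through a TableServiceContext. The table, property and account names are illustrative only, not from the interview:

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Sketch of "think key/value pairs": an entity keyed by PartitionKey + RowKey,
// no schema, no joins. Table and property names here are illustrative only.
public class CustomerEntity : TableServiceEntity
{
    public CustomerEntity(string region, string customerId)
    {
        PartitionKey = region;      // groups related rows together
        RowKey = customerId;        // unique within the partition
    }
    public CustomerEntity() { }     // required for deserialization
    public string Name { get; set; }
}

class TableSketch
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=YOURACCOUNT;AccountKey=YOURKEY");
        var tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Customers");

        TableServiceContext ctx = tableClient.GetDataServiceContext();
        ctx.AddObject("Customers", new CustomerEntity("europe", "42") { Name = "Contoso" });
        ctx.SaveChanges();
    }
}
```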

The Table in Windows Azure is our ‘noSQL’ when you think about it. The blob storage is a 100 gigabyte unit. You can think of it as a folder if you like, or a file: it can be chunked out into smaller files inside, or just be a folder. Its claim to fame is that it is streaming-capable storage. So a blob could have video or audio on it.

For people who are not familiar with what streaming means: normally when we open a Word document the document can't show up until it's done opening, whereas when we open a movie and watch it over something like Netflix, the movie is actually playing while more of it is coming in the background; it allows that file to stay open, if you want to call it that. So blob storage is wonderful for that sort of thing.

So we have blob storage to handle the streaming file-based, folder-based, data. We keep the nodes, the web roles and the worker roles and so on, alive across multiple machines, hence the ‘no state’ requirement.

So if anything were to happen, and you can’t predict what that will be, we have failover built in. It is not point-in-time failover where you’d go to the state of the data 10 minutes ago, it means that you won't notice the disruption when it goes down.

This is maintained in data centres across your region, we spent a whole lot of money on these data centres to make this very robust before the first data-storage product ever went out the door.

So we have an Americas region, we have a European region and we have an Asian region and those will grow I'm sure, and expand as time goes on.

We guarantee that the service remains in the region because of data privacy laws, Europeans don't want their data in the US and vice versa. So you are able to affinitise your data as well if you want it to be closer to the compute nodes, or it could be spread across the region. This would be all invisible to you, but something is happening under the covers.

The third part is this notion of application fabric. Application fabric allows you to do SOA (Service Oriented Architecture) type things, if you want to think about service-oriented architectures. It also has your authentication layers in it.

There is some confusion in the word ‘Fabric’. There is a ‘fabric‘ within Windows Azure that is just meant to describe the computing between all the nodes, it's not meant to describe the application fabric which is entirely different.

Application fabric has our security in it, so you can do things such as federate, as with the product Signi that you saw released at the 2010 PDC, where you can federate the ADFS that you have locally in your organisation up into the cloud, so as to get certificate-based sign-on.

We did a study recently and found, just as Google and Amazon did, that the cloud applications are more secure because people were actually worried about security, ergo, they were coding correctly. Within their office building they have a false sense of security so the code standards are relaxed, and so the ones in your building are less secure than the ones you have in public. Now this is easy to think about: I dress differently in my house than I do when I go out on the street, hopefully. You don't mind walking around in your dirty T-shirt in the house, but that's not something you take outside. But if your doors are open and your windows are open, it's the same thing, right?

So what we found is that people think they are secure on premise and they’re not. That's a failure there, not a reaffirmation that the cloud is so secure, it's actually a reaffirmation that you’ve often got a false sense of the security when you’re using the traditional office-based application.

Hybrid Cloud Applications

One of the advantages of the Windows Azure or the Microsoft cloud computing approach is that you have the ability to use the technologies you use locally, Windows, C#, whatever. They transfer directly, and will even transfer across in such a way that I could write a fat application locally that uses things the way I've always done them, and then put some portion of it in the cloud if I wished. Let's think about that for a moment.

There's a well-known pizza company here in the US that came to us with a problem. Here in the US we have American football and our big contest is called the Super Bowl and it takes place on a certain Sunday once a year. It's the only holiday in the US where everybody stays home. On Super Bowl Sunday, everybody's inside the house and the last thing they want to do is cook, so they order pizza.

So this pizza company said to us “To buy enough computing power to do what we need to do for Super Bowl Sunday alone, we would need to fill this building and then we wouldn't use it again until next year, not to that level. We don't want to do that, tell us how you can help us?”

And we said okay. They explained how it all works and they said “Well we’re real concerned about security and, we can't have financial data up in your system. We don't want our credit cards up there, we're sure you’re good people, we just can't do it”. We accepted that.

So we took a look at the code, let's break the code down into its components. The components were interesting, because the majority of work we found, computing-wise, concerned the building of the pizza; choosing what ingredients would go into the pizza. The customer would log onto the web, they’d drag down a pizza pie. Then there'd be the crust to choose, and so they'd click the crust they wanted, then they would drag toppings out and then they would spread cheese on it, and then confirm the order. Theoretically, thirty minutes later someone would show up at their door with a pizza that looked similar to the one they’d chosen.

So we said, “How about we take that for you and we scale it? Whenever you need the computing power, you scale it up, and you can then rein it back down when you don’t need it. Whenever they say, on the web browser, ‘I'd like to pay for that’, all that comes down is the order. We can then pass this over to the existing system for the payment portion”. They said “That works great”, and so it does.

And so that’s…and they can do that in seconds. Some national event can come up which suddenly requires pizza, it's a phone call, it's not even a phone call, it's a logon, it's a credit card, it’s a PO and suddenly now they can get more computing resources in minutes.

You can't do that anywhere else. It also becomes an operational expense, not a capital one; they haven't purchased anything, they've rented something, that’s OpEx. Capital expenses are expensive, you have to pay tax on them, you have to depreciate them and so on; OpEx is just easy, and OpEx is wonderful because it follows revenue. As I need more, I pay for more, but I'm getting paid more for what I'm doing. As I need less, I'm getting less money from my customers, but I pay less because I'm using fewer things.

It's a beautiful model, and I can't imagine anybody not wanting to do likewise. I can't imagine a company looking at compute and saying “let's build overcapacity into our system, and then underutilise it most of the time”. Why would anybody do that?

Even for a non-customer-facing service – maybe everybody logs on at the same time to the HR system – do you want to buy enough capacity for the occasional peak in the use of the HR system, or stagger access to it? No! You’d want to put those portions of it that suffer fluctuations in demand into the cloud.

SQL Azure

imageYou can think of SQL Azure as SQL Server somewhere else, from the software layer upwards. You can connect to it with Management Studio. It's still TDS (tabular data stream). The server has a long name, but you're at the database level, not the instance level.

When you buy SQL Server, you purchase and install an instance and then it has databases within it. When you buy SQL Azure, you start with a database, though you can make more: you're charged by the number of databases, space on the hard drive and so on.

Within this environment I have a full database just like I would have now, but there are some concepts that are not the same. I don't monitor performance the same way because I don't control memory or the hard drives, so I don't set up files and file groups, it's not my concern, it's the software upwards.
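As a concrete aside (not from the interview), here’s what “SQL Server somewhere else” looks like from code – plain ADO.NET against a SQL Azure database. The server, database, table and credentials below are placeholders:

```csharp
using System;
using System.Data.SqlClient;

// A minimal sketch of connecting to SQL Azure with plain ADO.NET.
// Server name, database, table and credentials are placeholders.
class SqlAzureSketch
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:YOURSERVER.database.windows.net,1433;" +
            "Database=mydb;User ID=myuser@YOURSERVER;Password=YOURPASSWORD;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn))  // placeholder table
        {
            conn.Open();                          // same TDS protocol, different endpoint
            Console.WriteLine(cmd.ExecuteScalar());
        }
    }
}
```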

This is difficult because DBAs spend their lives worrying about file groups and index latches and contention; you just don't do that with Azure. When they hear that they go, "Oh, there’s no way that you have solved the problems I’ve solved; this is not going to work." Database size is limited – there are 1, 5 and 50 GB increments – though I'm sure that will change and get bigger.

We hear people say “Well look, my database is way bigger than that!” I visit their company. Now, I’ve been a consultant for years, way before Microsoft. We have huge databases, and they do as well, one or two of them, but the other 900 are under 5 GB., so I say “Okay, I’ll tell you what, let’s not talk about your 2 terabyte system, let’s just put that over to the side, we’ll agree that we’re not going to talk about that one. How about these other 900? How are you handling them? Do you like buying SQL Server for those? Do you like running the servers for that? Do you like patch Tuesday? Which is more problematic for you, the terabyte system or the 900? And if it’s the terabyte system, then fine, then my argument is done, but if it’s the other 900 should we still be talking?”

People are throwing up Excel or Access databases under their desk and they suddenly become mission critical. They’re not backed up, they’re not protected in any way, shape, form or fashion, you don’t even know about them, but suddenly they’re mission critical. They’re not under RPO/RTO, they’re not following proper security precautions and so on. And not only that, they suddenly become your problem and they become an expense. You have to buy a server to host them because now people can’t do their work effectively unless that little database is running.

So how about transforming that Access database into an Azure database? You can let the department still have as much flexibility as they like and they can pay for it with their own credit card so that it doesn’t come through IT. If they want that resource they’ll get it. There you go, knock yourself out.

So now you get a shared database system they can use that is secure and so on. We back that up three times redundantly; backup then also becomes an issue. We don’t just back up in case something fails; we sometimes back up because of data corruption and sometimes we back up to go back in time.

Now I will argue that those last two eventualities don’t happen as often as the huge failure that we’re protecting against. So clearly, the idea of having three copies of your data all the time everywhere is good in the event of a complete failure. But it’s not as much of an answer for those occasions where you want to go back 10 minutes or whatever.

I like this paradigm: the idea of Platform as a Service appeals to me because it's very ‘mainframey’ and I felt that the mainframe model worked well except for its expense.

And in fact, in that case, you didn't just buy a piece of the telephone company, you were the telephone company, so I think that's a bit like overkill. I love this because we are still a telephone company, right? And you are using those resources but in a very flexible way.

I like not having to care about anything below the code; I just care about the code upwards. If the OS changes and patch Tuesday happens, I don't care. Microsoft can now trickle in patches to the entire infrastructure of Azure; there is no patch Tuesday, it’s patch all the time. If anything goes wrong, we can roll it back and be back in business with the other nodes not even feeling the ripple in the water.

I'm not saying we do it invisibly; my point is we can do it more frequently as needed with no downtime. People don't mind patching, they don't like downtime, they don't like doing the work, so if it is invisible and there's no downtime is it patching? I don't care what you do as long as it's functional and doesn't break anything.

The other day, I did a rather lengthy, several-hundred-question security request for proposal here in the US – RFPs they’re called – which is where they say, well, we’ll talk to you about buying your thing but you have to answer all these questions first. Kind of like a screen for a blind date, in a weird way. I answered this in depth, answering questions such as ‘Who’s got access to the building?’, ‘How is the cleaning crew badged?’ and so on. There were questions that were, in effect, ‘Who can get to my data?’ Well, the fact is that we’re affinitised; and for one thing, you can store things encrypted if you like. You could encrypt it before you send it to us and then, even if we could get to it, it would be meaningless without your certificate; you’re the only one that can read it because you have the cert.

But because we’re stateless it would be difficult for anyone to know where it was. It's wandering all over all the time, so grabbing your data would be kind of hard to do. Think of caching and so on – going and getting a particular memory location out of a cache and deciding which part of the select statement that was, when there could be five of them.

I've got a blog entry on Azure. Just click on the security links. I did a fairly lengthy blog post on all the security stuff, because it is the number one question we get asked. See also my "Azure Learning Plan", which is Here.


Number one: Windows Azure and SQL Azure are two different things and you can use one without the other

Number two: I really believe in platform as a service. We do have software as a service and it fits, especially for smaller companies that just need Office and don't want to buy it. They can just get Office 365 and they’re done.

And I really am not as thrilled about infrastructure as a service, it doesn't excite me because it's not new, it's what I call somebody else's hard drives, it's just not that big a deal. So I put that way over to the left and I'm not worried about it.

Software as a service can be and is interesting – I do happen to use Hotmail – and platform as a service is terribly interesting to me because it will only get better and lets me get back to focusing on the code, which I care about.

No one expects you to take what you have today and just throw it up to the cloud. That would be foolhardy. No one is expecting you to take everything you have and just toss it up there. That's a myth, we don't want that, no one wants that. For one thing, if everybody did that you'd need five or six providers the size of Microsoft to handle the capacity. It doesn't make sense because you have a big investment in what you have today on-premise. The way to take advantage of the cloud is to leverage the cloud where it makes sense, like this pizza company I described.

What is more, you can bring the data back. You never lose it since you can stream back down with Sync Services or whatever other technology you choose. You don’t need to lose the ultimate control of your data even when you choose to store it in the Cloud.

Rob Sanfilippo posted PDF slides to Directions on Microsoft’s Windows Azure Platform Roadmap Telebriefing of 1/27/2011:

image The Windows Azure platform, a set of services that lets customers host applications in Microsoft's data centers, has been commercially available for one year. Many additions and enhancements to the platform were announced at the Professional Developers Conference in Oct. 2010.


This TeleBriefing provides an overview of the Windows Azure platform, including the Windows Azure, SQL Azure, and AppFabric components, and summarizes the latest additions and enhancements.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds


No significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted Claiming SSL is not computationally expensive is like saying gas is not expensive when you don’t have to drive to work every day in an introduction to her Dispelling the New SSL Myth post of 1/31/2011 to F5’s DevCentral blog:

My car is eight years old this year. It has less than 30,000 miles on it.

image Yes, you heard that right, less than 30,000 miles. I don’t drive my car very often because, well, my commute is a short trip down two flights of stairs. I don’t need to go very far when I do drive it’s only ten miles or so round trip to the grocery store. So from my perspective, gas isn’t really very expensive. I may use a tank of gas a month, which works out to … well, it’s really not even worth mentioning the cost.

But for someone who commutes every day – especially someone who commutes a long-distance every day – gas is expensive. It’s a significant expense every month for them and they would certainly dispute my assertion that the cost of gas isn’t a big deal. My youngest daughter, for example, would say gas is very expensive – but she’s got a smaller pool of cash from which to buy gas so relatively speaking, we’re both right.

The same is true for anyone claiming that SSL is not computationally expensive. The way in which SSL is used – the ciphers, the certificate key lengths, the scale – has a profound impact on whether or not “computationally expensive” is an accurate statement or not. And as usual, it’s not just about speed – it’s also about the costs associated with achieving that performance. It’s about efficiency, and leveraging resources in a way that enables scalability. It’s not the cost of gas alone that’s problematic, it’s the cost of driving, which also has to take into consideration factors such as insurance, maintenance, tires, parking fees and other driving-related expenses.

SSL is still computationally expensive. Improvements in processor speeds in some circumstances have made that expense less impactful. Circumstances are changing.

Commoditized x86 hardware can in fact handle SSL a lot better today than it ever could before –when you’re using 1024-bit keys and “easy” ciphers like RC4. Under such parameters it is true that commodity hardware may perform efficiently and scale up better than ever when supporting SSL. Unfortunately for proponents of SSL-on-the-server, 1024-bit keys are no longer the preferred option and security professionals are likely well-aware that “easy” ciphers are also “easy” pickings for miscreants.

In January 2011, NIST recommendations regarding the deployment of SSL went into effect. While NIST is not a standards body that can require compliance or else, it can and does force government and military compliance and has shown its influence with commercial certificate authorities. All commercial certificate authorities now issue only 2048-bit keys. This increase has a huge impact on the capacity of a server to process SSL and renders completely inaccurate the statement that SSL is not computationally expensive anymore. A typical server that could support 1500 TPS using 1024-bit keys will only support 1/5 of that (around 300 TPS) when supporting modern best practices, i.e. 2048-bit keys.
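To put those numbers in perspective, here’s a quick back-of-the-envelope calculation; the per-server TPS figures are the ones quoted above, while the target load and the C# framing are mine:

```csharp
// Back-of-the-envelope illustration of the 1500-vs-300 TPS figures quoted above.
// The target load is hypothetical; the per-server numbers come from the article.
class SslCapacitySketch
{
    static void Main()
    {
        const int targetTps = 3000;             // hypothetical peak handshake load
        const int tpsPer1024BitServer = 1500;
        const int tpsPer2048BitServer = 300;

        System.Console.WriteLine("Servers needed at 1024-bit keys: {0}",
            (targetTps + tpsPer1024BitServer - 1) / tpsPer1024BitServer);   // 2
        System.Console.WriteLine("Servers needed at 2048-bit keys: {0}",
            (targetTps + tpsPer2048BitServer - 1) / tpsPer2048BitServer);   // 10
    }
}
```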


Also of note is that NIST recommends ephemeral Diffie-Hellman - not RSA - for key exchange, and per TLS 1.0 specification, AES or 3DES-EDE-CBC, not RC4. These are much less “easy” ciphers than RC4 but unfortunately they are also more computationally intense, which also has an impact on overall performance.

Key length and ciphers becomes important to the performance and capacity of SSL not just during the handshaking process, but in bulk-encryption rates. It is one thing to say a standard server deployed to support SSL can handle X handshakes (connections) and quite another to simultaneously perform bulk-encryption on subsequent data responses. The size and number of those responses have a huge impact on the consumption rate of resources when performing SSL-related functions on the overall server’s capacity. Larger data sets require more cryptographic attention that can drag down the rate of encryption – that means slower response times for users and higher resource consumption on servers, which decreases resources available for handshaking and server processing and cascades throughout the entire system to result in a reduction of capacity and poor performance.

Tweaked configurations, poorly crafted performance tests, and a failure to consider basic mathematical relationships may seem to indicate SSL is “not” computationally expensive yet this contradicts most experience with deploying SSL on the server. Consider this question and answer in the SSL FAQ for the Apache web server:

Why does my webserver have a higher load, now that it serves SSL encrypted traffic?

SSL uses strong cryptographic encryption, which necessitates a lot of number crunching. When you request a webpage via HTTPS, everything (even the images) is encrypted before it is transferred. So increased HTTPS traffic leads to load increases.

This is not myth, this is a well-understood fact – SSL requires higher computational load which translates into higher consumption of resources. That consumption of resources increases with load. Having more resources does not change the consumption of SSL, it simply means that from a mathematical point of view the consumption rates relative to the total appear to be different. The “amount” of resources consumed by SSL (which is really the amount of resources consumed by cryptographic operations) is proportional to the total system resources available. The additional consumption of resources from SSL is highly dependent on the type and size of data being encrypted, the load on the server from both processing SSL and application requests, and on the volume of requests. 

Interestingly enough, the same improvements in capacity and performance of SSL associated with “modern” processors and architecture is also applicable to intermediate SSL-managing devices. Both their specialized hardware (if applicable) and general purpose CPUs significantly increase the capacity and performance of SSL/TLS encrypted traffic on such solutions, making their economy of scale much greater than that of server-side deployed SSL solutions.

Certainly if you have only one or even two servers supporting an application for which you want to enable SSL the costs are going to be significantly different than for an organization that may have ten or more servers comprising such a farm. It is not just the computational costs that make SSL deployed on servers problematic, it is also the associated impact on infrastructure and the cost of management.

Reports that fail to factor in the associated performance and financial costs of maintaining valid certificates on each and every server – and the management / creation of SSL certificates for ephemeral virtual machines – are misleading. Such solutions assume a static environment and a deep pocket or perhaps less than ethical business practices. Such tactics attempt to reduce the capital expense associated with external SSL intermediaries by increasing the operational expense of purchasing and managing large numbers of SSL certificates – including having a ready store that can be used for virtual machine instances.

As the number of services for which you want to provide SSL secured communication increase and the scale of those services increases, the more costly it becomes to manage the required environment. Like IP address management in an increasingly dynamic environment, there is a diseconomy of scale that becomes evident as you attempt to scale the systems and processes involved.


Obviously the more servers you have, the more certificates you need to deploy. The costs associated with management of those certificates – especially in dynamic environments – continues to rise and the possibility of missing an expiring certificate increase with the number of servers on which certificates are deployed. The promise of virtualization and cloud computing is to address the diseconomy of scale; the ability to provision and ready-to-function server complete with the appropriate web or application stack serving up an application for purposes of scale assumes that everything is ready. Unless you’re failing to properly provision SSL certificates you cannot achieve this with a server-deployed SSL strategy. Each virtual image upon which a certificate is deployed must be pre-configured with the appropriate certificate and keys and you can’t launch the same one twice. This has the result of negating the benefits of a dynamically provisioned, scalable application environment and unnecessarily increases storage requirements because images aren’t small. Failure to recognize and address the management and resulting impact on other areas of infrastructure (such as storage and scalability processes) means ignoring completely the actual real-world costs of a server-deployed SSL strategy.

It is always interesting to note the inability of web servers to support SSL for multiple hosts on the same server, i.e. virtual hosts.

Why can't I use SSL with name-based/non-IP-based virtual hosts?

The reason is very technical, and a somewhat "chicken and egg" problem. The SSL protocol layer stays below the HTTP protocol layer and encapsulates HTTP. When an SSL connection (HTTPS) is established Apache/mod_ssl has to negotiate the SSL protocol parameters with the client. For this, mod_ssl has to consult the configuration of the virtual server (for instance it has to look for the cipher suite, the server certificate, etc.). But in order to go to the correct virtual server Apache has to know the Host HTTP header field. To do this, the HTTP request header has to be read. This cannot be done before the SSL handshake is finished, but the information is needed in order to complete the SSL handshake phase. Bingo!

Because an intermediary terminates the SSL session and then determines where to route the requests, a variety of architectures can be more easily supported without the hassle of configuring each and every web server – which must be bound to IP address to support SSL in a virtual host environment. This isn’t just a problem for hosting/cloud computing providers, this is a common issue faced by organizations supporting different “hosts” across the domain for tracking, for routing, for architectural control. For example, and often end up on the same web server, but use different “hosts” for a variety of reasons. Each requires its own certificate and SSL configuration – and they must be bound to IP address – making scalability, particularly auto-scalability, more challenging and more prone to the introduction of human error. The OpEx savings in a single year from SSL certificate costs alone could easily provide an ROI justification for the CapEx of deploying an SSL device before even considering the costs associated with managing such an environment. CapEx is a onetime expense while OpEx is recurring and expensive.


The simplistic nature of the argument also fails to take into account the sensitive nature of keys and certificates and regulatory compliance issues that may require hardware-based storage and management of those keys regardless of where they are deployed (FIPS 140-2 level 2 and above). While there are secure and compliant HSM (Hardware Security Modules) that can be deployed on each server, this requires serious attention and an increase of management and skills to deploy. The alternative is to fail to meet compliance (not acceptable for some) or simply deploy the keys and certificates on commoditized hardware (increases the risk of theft which could lead to far more impactful breaches). 

For some IT organizations to meet business requirements they will have to rely on some form of hardware-based solution for certificate and key management such as an HSM or FIPS 140-2 compliant hardware. The choices are deploy on every server (note this may become very problematic when trying to support virtual machines) or deploy on a single intermediary that can support all servers at the same time, and scale without requiring additional hardware/software support. 


imageSSL “all the way to the server” has a profound impact on the rest of the infrastructure, too, and the scalability of services. Encrypted traffic cannot be evaluated or scanned or routed based on content by any upstream device. IDS and IPS and even so-called “deep packet inspection” devices upstream of the server cannot perform their tasks upon the traffic because it is encrypted. The solution is to deploy the certificates from every machine on the devices such that they can decrypt and re-encrypt the traffic. Obviously this introduces unacceptable amounts of latency into the exchange of data, but the alternative is to not scan or inspect the traffic, leaving the organization open to potential compromise.

It is also important to note that encrypted “bad” traffic, e.g. malicious code, malware, phishing links, etc… does not change the nature of that traffic. It’s still bad, it’s also now “hidden” to every piece of security infrastructure that was designed and deployed to detect and stop it.

A server-deployed SSL strategy eliminates visibility and control and the ability to rapidly address both technical and business-related concerns. Security is particularly negatively impacted. Emerging threats such as a new worm or virus for which AV scans have not yet but updated can be immediately addressed by an intelligent intermediary – whether as a long-term solution or stop-gap measure. Vulnerabilities in security protocols themselves, such as the TLS man-in-the-middle attack, can be immediately addressed by an intelligent, flexible intermediary long before the actual solutions providing the service can be patched and upgraded. 

A purely technical approach to architectural decisions regarding the deployment of SSL or any other technology is simply unacceptable in an IT organization that is actively trying to support and align itself with the business. Architectural decisions of this nature can have a profound impact on the ability of IT to subsequently design, deploy and manage business-related applications and solutions and should not be made in a technical or business vacuum, without a full understanding of the ramifications.

Ryan D. Hatch (@rdkhatch) asked Where’s My Order? A Real Life Transaction Failure and answered with a video clip of an Udi Dahan Tech*Ed presentation:

image So, today I went to Menards to buy hardware.  All my items were bar code scanned, I swiped my credit card, and hit the OK button on the card reader device.

Just then – I heard a beep. The cash register rebooted (an old PC with a cash drawer). The card reader device was locked up.

  • What happened to my order?
  • Had I bought my items… had I been charged?
  • I didn’t have a receipt.


Kristine (the clerk) below shrugs her shoulders.  Where was my Order?  Nobody really knew.

They charged my credit card again, and I “officially” owned the items.  Either that, or I paid for them twice.

This was an in-person failure.  Now, let’s change the picture.  If this had been an online order, I may never have been notified about the order failure.  I could be waiting forever for items that would never arrive.  Or get charged and never receive a receipt.  Yes, transaction failures can & do really happen.

Excellent presentation that demonstrates transaction failures & addresses tolerance:

High Availability: A Contrarian View   (by Udi Dahan)


Here’s what I learned from Udi’s presentation:

Prevention of data loss is responsibility of App Developer.

Data integrity cannot be outsourced to an IT department or Cloud.  Data integrity must be understood as a problem & addressed directly at the application-level. Just because it works smoothly on your development machine, does NOT mean it will work under high-stress loads in production.  Dev teams need to absorb the responsibility for the integrity of their data.

Amazon has a “Fed-in-2-pizzas” approach – meaning their team sizes are capped at 2 pizzas.  Each team is a small multi-disciplinary team – responsible for designing, writing, deploying, and maintaining their own software data systems.  This jigsaw-team approach puts both developers and IT directly engaged together into the production systems.

Cloud gives False sense of Reliability.

The Cloud = Millions of Highly-Available, Unreliable Blinking Lights.  Just because we have millions of blinking lights does not give our applications fault-tolerance or a data integrity strategy.  Network failures, timeouts, and database locks are all real-world examples that can be devastating to a system’s data integrity unless architected properly.  Messages Queues, like NServiceBus, can mitigate these issues by eliminating shared resource contention and storing the messages immediately in a queue.  Once queued, these messages can be processed accordingly.


Furthermore, the cloud (ie, Windows Azure) does not allow for distributed transactions.  So an ACID transaction has the scope of only 1 database.  For any cross-cutting transactions across multiple systems (ie, Message Queue + Database) – applications must implement the logic to join these transactions together.
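As a sketch of what that application-level logic can look like (assumed, not taken from Udi’s talk), here’s a C# fragment that processes a Windows Azure queue message idempotently, deleting it only after the database work succeeds and parking poison messages after repeated failures; SaveOrder is a placeholder for your own data access:

```csharp
using System;
using Microsoft.WindowsAzure.StorageClient;

// Sketch of coping without a distributed transaction: the database write and the
// queue delete cannot be committed atomically, so the handler must be idempotent
// and messages are only deleted after the work succeeds. SaveOrder() is a
// placeholder for your own data-access code.
class IdempotentQueueHandler
{
    public void ProcessNext(CloudQueue ordersQueue)
    {
        CloudQueueMessage msg = ordersQueue.GetMessage(TimeSpan.FromMinutes(2));
        if (msg == null) return;

        if (msg.DequeueCount > 5)
        {
            // Poison message: it keeps failing, so stop retrying it forever.
            ordersQueue.DeleteMessage(msg);
            return;
        }

        // Must be safe to run twice: if we crash after SaveOrder but before
        // DeleteMessage, the message reappears and is processed again.
        SaveOrder(msg.AsString);
        ordersQueue.DeleteMessage(msg);
    }

    void SaveOrder(string payload)
    {
        // e.g. an UPSERT keyed by order id, so replays do not duplicate rows.
    }
}
```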

High-Availability = Backwards Compatibility.

App versions 1 and 2 must run simultaneously during an upgrade; otherwise you have to take your app down to upgrade from v1 to v2. Just because your downtime is “planned maintenance” doesn’t mean it isn’t downtime – it is. The only route to highly available applications is backwards compatibility: versions N and N+1 must be able to run side by side, which lets you phase in upgrades across your servers without taking the entire system offline.
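To make that concrete, here is a toy sketch of a message handler that accepts both the old and the new payload shape while a rolling upgrade is in flight; the field names and versioning scheme are invented for the example:

```python
# Illustrative N / N+1 compatibility: the v2 handler still understands v1
# payloads, so old and new nodes can run side by side during the upgrade.
def handle_order_message(msg: dict) -> dict:
    version = msg.get("version", 1)
    if version == 1:
        # v1 sent a single "customer_name"; v2 splits it. Upgrade in place.
        first, _, last = msg["customer_name"].partition(" ")
        msg = {"version": 2, "first_name": first, "last_name": last,
               "total": msg["total"]}
    # From here on, only the v2 shape needs to be handled.
    return {"greeting": f"Thanks, {msg['first_name']}!", "total": msg["total"]}

# Both message versions are accepted while the fleet is mid-upgrade.
print(handle_order_message({"customer_name": "Ada Lovelace", "total": 10.0}))
print(handle_order_message({"version": 2, "first_name": "Ada",
                            "last_name": "Lovelace", "total": 10.0}))
```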

<Return to section navigation list> 

Cloud Computing Events

Mike Taulty (@mtaulty) reported that he’ll be at DevWeek 2011, to be held 3/14 through 3/18/2011 at the Barbican Conference Centre in London, UK:

Just a quick plug – I’ll be speaking at DevWeek 2011 in mid-March down in London and it’d be great to see you at the event.


I’ve been attending/speaking at DevWeek for quite a few years now, and it’s a fantastic event made up of both DevWeek and SQL Server DevCon, meaning that you get (wait for it) 7 developer tracks and 2 SQL tracks running side by side – which is pretty astonishing.

There’s a great set of speakers at DevWeek this year, including international luminaries such as Jeffrey Richter, Itzik Ben-Gan, Jeff Prosise, Juval Lowy, Christian Weyer, Bob Beauchemin, Dino Esposito, Ingo Rammer and a tonne of local UK speakers – you need to check the page for the whole list as there are simply too many to include here.

There’s also pre- and post- conference 1-day workshops for those who want to make a week of it and drill down into particular topic areas.

I’ll be down there for as many days as my Microsoft expenses allow (probably 1 day – times are hard!), and my colleague Giles will also be down there talking about ALM and tooling in TFS. Feel free to “stop us if you spot us” and have a chat.

Unlike at most current developer conferences, I found only one Windows Azure session in the DevWeek session timetable:

Introducing the Azure AppFabric Service Bus by Juval Lowy on 3/17/2011 at 9:30 AM:

The service bus is arguably the most accessible, ready-to-use, powerful, and needed piece of cloud computing. It allows clients to connect to services across any machine, network, firewall, NAT, routers, load balancers, virtualization, IP and DNS as if they were part of the same local network, and all without compromising on the programming model or security. The service bus also supports callbacks, event publishing, authentication and authorization, and does all of that in a WCF-friendly manner.

This session will present the service bus programming model, how to configure and administer solutions, working with the dedicated relay bindings including the available communication modes, relying on authentication in the cloud for local services and the various authentication options, and how to provide for end-to-end security. You will also see advanced WCF programming techniques, original helper classes, productivity-enhancing utilities and tools, as well as a discussion of design best practices and pitfalls.
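For readers new to the relay idea, the toy sketch below illustrates only the rendezvous concept behind such a relay – both the service and the client dial out to a publicly reachable relay, which then pipes bytes between them, so neither side needs an inbound firewall or NAT rule. It is emphatically not the AppFabric Service Bus protocol, bindings, or API:

```python
# Toy rendezvous relay: the service parks an OUTBOUND connection at the
# relay; when a client dials in, the relay shuttles bytes between the two.
import socket
import threading

def pipe(src, dst):
    """Copy bytes one way until the source closes."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)

def relay(service_port=9001, client_port=9002):
    svc_listener = socket.socket()
    svc_listener.bind(("0.0.0.0", service_port))
    svc_listener.listen(1)
    cli_listener = socket.socket()
    cli_listener.bind(("0.0.0.0", client_port))
    cli_listener.listen(1)
    service_conn, _ = svc_listener.accept()   # service dials out and waits here
    client_conn, _ = cli_listener.accept()    # client dials out when it wants to talk
    threading.Thread(target=pipe, args=(client_conn, service_conn),
                     daemon=True).start()
    pipe(service_conn, client_conn)           # shuttle traffic in both directions

if __name__ == "__main__":
    relay()
```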

Searching with “cloud” as the keyword turned up two additional sessions:

TECHNICAL KEYNOTE: Doctor, doctor, my app won’t scale! by Andrew Clymer, Richard Blewett, Dave Wheeler and Kevin Jones on 3/15/2011 at 9:30 AM:

So you have an application and either you can’t get it to scale or you can’t scale it beyond a certain point. Some scalability issues have well-known recipes, such as scaling websites with load balancing and caching. However, there is another class of problems where this approach is not enough. If you have to scale your *algorithms* then things just got a whole lot harder.

Should you be using lots of cores, grid computing, the cloud? Are the very languages you use holding you back? What are your options when your solution to scalability means joining the world of High Performance Computing? [Emphasis added.]

High-performance architectures by Matt Wood on 3/16/2011 at 11:30 AM:

Cloud computing provides a unique opportunity to develop highly available services with ease. In this talk, we’ll use real-world examples to introduce some of the architectural patterns developers can build upon to achieve scale and efficiency for a wide range of applications – from social games to data analysis and beyond.

We’ll aim to cover:

  • Building highly available web sites
  • On demand, high performance computing
  • Data analysis with map/reduce in the cloud
  • Virtualisation and reuse of data and services

This talk will include technical examples, architectural best practice and language-agnostic approaches to building high-performance computational systems in the cloud. [Emphasis added.]
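As a tiny, single-machine illustration of the map/reduce pattern the abstract mentions – the data and function names are invented, and real cloud frameworks distribute the same two phases across many commodity machines:

```python
# Word count, map/reduce style: map emits (word, 1) pairs, reduce sums them.
from collections import Counter
from itertools import chain

documents = ["the cloud scales out", "scale out with the cloud"]

def map_phase(doc):
    # Emit a (word, 1) pair for every word in the document.
    return [(word, 1) for word in doc.split()]

def reduce_phase(pairs):
    # Sum the counts for each word across all mapped pairs.
    totals = Counter()
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

mapped = chain.from_iterable(map_phase(d) for d in documents)
print(reduce_phase(mapped))
```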

Jonathan Rozenblit announced on 1/30/2011:

On February 7, 2011, tune in to watch Windows Azure Boot Camp: Connecting with AppFabric, a 200-level webcast that will look at how to secure a REST service, what you can do to connect services together, and how to work with firewalls and NATs.

See the Jonathan Rozenblit posted Getting to know Windows Azure AppFabric to his Clear as Cloud blog on 1/30/2011 item in the Windows Azure AppFabric: Access Control and Service Bus section above.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) listed AWS Events for February 2011 and marketing job openings in this 1/31/2011 post to the Amazon Web Services blog:

We've got a lot of interesting live events and webinars planned for February of 2011. First, the live events:

And then the webinars:

The AWS Marketing team is hiring. Here are some of our openings:

Amazon is greatly increasing its emphasis on evangelizing developers worldwide.

James Staten (@staten7) weighs in on Verizon’s Terremark acquisition with a Verizon steps into IaaS cloud leadership ranks post of 1/31/2011 to his Forrester Research blog:

Pop Quiz: What’s the fastest way to build a credible, enterprise-relevant and highly profitable cloud computing services practice? Buy one that already is. That’s exactly what Verizon did last week when it pushed $1.4B across the table to Terremark. Despite its internal efforts to build an infrastructure as a service (IaaS) business over the last two years, Verizon simply couldn’t learn the best practices fast enough to match the gains in the market it received through this move. Terremark has one of the strongest IaaS hosting businesses in the market and perhaps the best enterprise customer mix of the top-tier providers. It also has a significant presence with government clients, including the United States General Services Administration (GSA), which has production systems running in a hybrid mode between Terremark’s IaaS and traditional managed hosting services.

Confidential Forrester client inquiries have shown Verizon struggling to win competitive IaaS bids with its Computing as a Service (CaaS) offering, often losing to Terremark. This led to Verizon reselling the Terremark solution (as its CaaS for SMB) so it could try before the buy.

Forrester surveys of enterprise infrastructure & operations (I&O) professionals show that 29% are prioritizing investments in private cloud solutions in 2011 and that 40% see hosted private clouds as the most attractive option, as they place the least operational disruption on their staffs. Verizon and Terremark have both been investing in public and hosted cloud offerings to win this business. Terremark had a significant experiential lead, as it has been running its ESX-based The Enterprise Cloud offering for the past several years. Both companies are also VMware vCloud partners, with Terremark being one of the few service providers in the world to operate first a vCloud Express implementation and then one based on vCloud Director. Verizon was a poster child for vCloud Director at VMworld last fall and was expected to debut it as part of CaaS in early 2011. It will be interesting to see whether they keep those plans moving forward or defer to Terremark. I suspect the latter, as our customer insights point to Verizon having greater success reselling Terremark today than securing customers for CaaS.

This merger should be great news for I&O professionals and your cloud strategies, as two trusted partners have come together to better serve your needs. The two companies combined have a first-class global data center presence and proven best practices in IaaS and enterprise-class service management, control a significant segment of the Internet backbone, and boast a stable of additional managed services.

If Verizon wasn’t on the shortlist for your cloud RFP, it should be now. Expect to see more IaaS/MSP acquisitions as other leading telecommunications and outsourcing players shore up their cloud strategies.

Chuck Hollis, EMC’s Chief Cloud Officer, answers The 20 most asked questions about cloud computing in an interactive video:


<Return to section navigation list>