Monday, November 08, 2010

Windows Azure and Cloud Computing Posts for 11/8/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

John Treadway (@cloudbzz) posted SQL In the Cloud to his CloudBzzz blog on 11/8/2010:

Despite the NoSQL hype, traditional relational databases are not going away any time soon. In fact, based on continued market evolution and development, SQL is very much alive and doing well.

I won’t debate the technical merits of SQL vs. NoSQL here, even if I were qualified to do so. Both approaches have their supporters, and both types of technologies can be used to build scalable applications. The simple fact is that a lot of people are still choosing to use MySQL, PostgreSQL, SQL Server and even Oracle to build their SaaS/Web/Social Media applications.

When choosing a SQL option for your cloud-based solution, there are typically three approaches as outlined below. One note – this analysis applies to “mass market clouds” and not the enterprise clouds from folks like AT&T, Savvis, Unisys and others. At that level you often can get standard enterprise databases as a managed service.

  1. Install and Manage – in this “traditional” model the developer or sysadmin selects their DBMS, creates instances in their cloud, installs it, and is then responsible for all administration tasks (backups, clustering, snapshots, tuning, and disaster recovery). Evidence suggests that this is still the leading model, though that could soon change. This model provides the highest level of control and flexibility, but often puts a significant burden on developers who must (typically unwillingly) become DBAs with little training or experience.
  2. Use a Cloud-Managed DBaaS Instance – in this model the cloud provider offers a DBMS service that developers just use. All physical administration tasks (backup, recovery, log management, etc.) are performed by the cloud provider and the developer just needs to worry about structural tuning issues (indices, tables, query optimization, etc). Generally your choice of database is MySQL, MySQL, and MySQL – though a small number of clouds provide SQL Server support. Amazon RDS and SQL Azure are the two best known in this category.
  3. Use an External Cloud-Agnostic DBaaS Solution – this is very much like the cloud-based DBaaS, but has a value of cloud-independence – at least in theory. In the long run you might expect to be able to use an independent DBaaS to provide multi-cloud availability and continuous operations in the event of a cloud failure. FathomDB and Xeround are two such options.

Here’s a chart summarizing some of the characteristics of each model:

In my discussions with most of the RDBMSaaS vendors I have found that user acceptance and adoption are very high. When I spoke with Joyent a couple of months ago I was told that “nearly all” of their customers who spend over $500/month with them use their DBaaS solution. And while Amazon won’t give out such specifics, I have heard from them (both corporate and field people) that adoption is “very robust and growing.” The exception is FathomDB, which launched at DEMO2010. They seem to not have gained much traction, but I don’t get the sense they are being very aggressive. When I spoke with one of their founders I learned they were working on a whole new underlying DBMS engine that would not even be compatible with MySQL. In any event, they have only a few hundred databases at this point. Xeround is still in private beta.

The initial DBaaS value proposition of reducing the effort and cost of administration is worth something, but in some cases it might be seen as a nice-to-have vs. a need-to-have. Inevitably, the DBaaS solutions on the market will need to go beyond this to performance, scaling and other capabilities that will be very compelling for sites that are experiencing (or expect to experience) high volumes.

Amazon RDS, for instance, just added the ability to provision read replicas for applications with a high read/write ratio. Joyent has had something similar to this since last year when they integrated Zeus Traffic Manager to automatically detect and route query strings to read replicas (your application doesn’t need to change for this to work).
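The read-replica routing described above boils down to inspecting each query and sending writes to the master while spreading reads across replicas. Here's a minimal sketch of the idea – this is an illustration only, not Joyent's or Amazon's implementation; the endpoint names are hypothetical, and a real traffic manager would parse SQL at the proxy layer rather than pattern-match it:

```javascript
// Hypothetical endpoints; a real deployment would discover these.
const WRITE_ENDPOINT = "master.db.example.net";
const READ_REPLICAS = ["replica1.db.example.net", "replica2.db.example.net"];
let next = 0; // round-robin cursor over the replicas

function routeQuery(sql) {
  // Anything that mutates state must go to the master.
  if (/^\s*(INSERT|UPDATE|DELETE|CREATE|ALTER|DROP)\b/i.test(sql)) {
    return WRITE_ENDPOINT;
  }
  // Distribute SELECTs round-robin across the read replicas.
  const endpoint = READ_REPLICAS[next % READ_REPLICAS.length];
  next += 1;
  return endpoint;
}
```

The appeal of doing this at the infrastructure layer, as the post notes, is that the application issues queries against a single logical endpoint and never has to change.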

Xeround has created an entirely new scale-out option with an interesting approach that alleviates many of the trade-offs of the CAP Theorem. And ScaleBase is soon launching a “database load balancer” that automatically partitions and scales your database on top of any SQL database (at least eventually – MySQL will be first, of course, but plans include PostgreSQL, SQL Server and possibly even Oracle). My friends at Akiban are also innovating in the MySQL performance space for cloud/SaaS applications.
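At its core, the automatic partitioning a “database load balancer” performs is deterministic key-based routing: hash the partition key, map the hash to a shard, and always send that key's queries to the same database. The sketch below shows the essential mechanism with made-up connection strings; it is not any vendor's actual algorithm:

```javascript
// Deterministic hash over the key's characters (illustrative, not crypto).
function shardFor(partitionKey, shardCount) {
  let hash = 0;
  for (const ch of String(partitionKey)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return hash % shardCount;
}

// One database per shard; names are hypothetical.
const connectionStrings = [
  "Server=shard0.example.net;Database=app",
  "Server=shard1.example.net;Database=app",
  "Server=shard2.example.net;Database=app",
  "Server=shard3.example.net;Database=app"
];

// Every query for the same key lands on the same shard, so
// single-customer transactions never span databases.
function connectionFor(partitionKey) {
  return connectionStrings[shardFor(partitionKey, connectionStrings.length)];
}
```

The hard part these products sell is everything around this: rebalancing when shards are added, and fanning out the queries that *do* span shards.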

Bottom line, SQL-based DBaaS solutions are starting to address many (though not all) of the leading reasons why developers are flocking to NoSQL solutions.

All of this leads me to the following conclusions – I’m interested if you agree or disagree:

  • Cloud-based DBaaS options will continue to grow in importance and will eventually become the dominant model. Cloud vendors will have to invest in solutions that enable horizontal scaling and self-healing architectures to address the needs of their bigger customers. While most clouds today do not offer an RDS-equivalent, my conversations with cloud providers suggest that may soon change.
  • Cloud-Independent DBaaS options will grow but will be a niche as most users will opt for the default database provided by their cloud provider.
  • The D-I-Y model of installing/managing your own database will eventually also become a niche market where very high scaling, specialized functionality or absolute control are the requirements. For the vast majority of applications, RDBMSaaS solutions will be both easier to use and easier to scale than traditional install/manage solutions.

At some point in the future I intend to dive more into the different RDBMSaaS solutions and compare them at a feature/function level. If I’ve missed any – let me know (I’ll update this post too).

Other Cloud DBMS Posts:

Steve Yi announced 2010 PASS Summit Sessions on SQL Azure in his 11/8/2010 post to the SQL Azure Team blog:

The 2010 PASS Summit in Seattle, WA is being held November 8th–11th. In preparation, here is a list of sessions about SQL Azure being given – almost all of them by the SQL Azure team. This is a good chance to learn more about SQL Azure and ask questions.

A lap around Microsoft SQL Azure and a discussion of what’s new: Tony Petrossian

SQL Azure provides a highly available and scalable relational database engine in the cloud. In this session we will provide an introduction to SQL Azure and the technologies that enable such functions as automatic high-availability. We will demonstrate several new enhancements we have added to SQL Azure based on the feedback we’ve received from the community since launching the service earlier this year.

Building Offline Applications for Windows Phones and Other Devices using Sync Framework and SQL Azure: Maheshwar Jayaraman

In this session you will learn how to build a client application that operates against locally stored data and uses synchronization to keep up-to-date with a SQL Azure database. See how Sync Framework can be used to build caching and offline capabilities into your client application, making your users productive when disconnected and making your user experience more compelling even when a connection is available. See how to develop offline applications for Windows Phone 7 and Silverlight, plus how the services support any other client platform, such as iPhone and HTML5 applications, using the open web-based sync protocol.

Migrating & Authoring Applications to Microsoft SQL Azure: Cihan Biyikoglu

Are you looking to migrate your on-premise applications and database from MySQL or other RDBMSs to SQL Azure? Or are you simply focused on the easiest ways to get your SQL Server database up to SQL Azure? Then this session is for you. We cover two fundamental areas in this session: the application data-access tier and the database schema+data. In Part 1, we dive into the application data-access tier, covering common migration issues as well as best practices that will help make your data-access tier more resilient in the cloud and on SQL Azure. In Part 2, the focus is on database migration. We go through migrating schema and data, taking a look at tools and techniques for efficient transfer of schema through Management Studio and Data-Tier Application (DAC). Then, we discover efficient ways of moving small and large data into SQL Azure through tools like SSIS and BCP. We close the session with a glimpse into what is in store in the future for easing migration of applications into SQL Azure.

Building Large Scale Database Solutions on SQL Azure: Cihan Biyikoglu

SQL Azure is a great fit for elastic, large-scale and cost-effective database solutions with many TBs and PBs of data. In this talk we will explore the patterns and practices that help you develop and deploy applications that can exploit the full power of the elastic, highly available and scalable SQL Azure Databases. The talk will detail modern scalable application design techniques such as sharding and horizontal partitioning and dive into future enhancements to SQL Azure Databases.

Introducing SQL Azure Reporting Services: Nino Bice, Vasile Paraschiv

Introducing SQL Azure Reporting Services – An in-depth review of the recently announced SQL Azure Reporting Services feature, complete with demos, an architectural review, code samples and, just as importantly, a discussion of how this new feature can provide important cloud capabilities for your company. If you are a BI professional, System Integrator, Consultant, or ISV, or have operational reporting needs within your organization, then don’t miss this session, which speaks to Microsoft's ongoing commitment to SQL Azure and cloud computing.

Loading and Backing Up SQL Azure Databases: Geoff Snowman

SQL Azure provides high availability by maintaining multiple copies of your database, but that doesn't mean that you should just trust Azure and assume your data is safe. If your data is mission critical, you should maintain a backup outside the Azure infrastructure. The database is also vulnerable to administration errors. If you accidentally truncate a table in your production database, that change will immediately be copied to all replicas, and there is no way to recover that table. In this session, you'll see how to use SSIS and BCP to back up a SQL Azure database. We'll also demonstrate processes you can use to move data from an on-premise database to SQL Azure. Finally, we'll discuss procedures for migrating your database from staging to production, to avoid the risks associated with implementing DDL directly in your production database.

SQL Azure Data Sync - Integrating On-Premises Data with the Cloud: Mark Scurrell

In this session we will introduce you to the concept of "Getting Data Where You Need It". We will show you how our new cloud based SQL Azure Data Sync Service enables on-premises SQL Server data to be shared with cloud-based applications. We will then show how the Data Sync Service allows data to be synchronized across geographically dispersed SQL Azure databases.

SQLCAT: SQL Azure Learning from Real-World Deployments: Abirami Iyer, Cihan Biyikoglu, Michael Thomassy

SQL Azure was released in January 2010, and this session will discuss what we have learned from a few of our first live, real-world implementations. We will showcase a few customer implementations by discussing their architecture and the features required to make them successful on SQL Azure. The session will cover our lessons learned and best practices on SQL Azure connectivity, sharding and partitioning, database migration and troubleshooting techniques. This is a 300 level session requiring an intermediate understanding of SQL Server.

SQLCAT: Administering SQL Azure and new challenges for DBAs: George Varghese, Lubor Kollar

SQL Azure represents a significant shift in DBAs’ responsibilities. Some tasks become obsolete (e.g. disk management, HA), some need new approaches (e.g. tuning, provisioning) and some are brand new (e.g. billing). This presentation goes over the differences between administering the classic SQL Server and SQL Azure.

Cihangir Biyikoglu reported on 11/7/2010 that he’ll present two sessions at SQL PASS 2010, Nov 8th to 12th in Seattle:

Hi everyone, I’ll be presenting the following 2 sessions at PASS. If you get a chance, stop by and say hi!

Building Large Scale Database Solutions on SQL Azure

SQL Azure is a great fit for elastic, large-scale and cost-effective database solutions with many TBs and PBs of data. In this talk we will explore the patterns and practices that help you develop and deploy applications that can exploit the full power of the elastic, highly available and scalable SQL Azure Databases. The talk will detail modern scalable application design techniques such as sharding and horizontal partitioning and dive into future enhancements to SQL Azure Databases. We’ll do a deep dive into federations and see a demo of how one can take a regular application

Migrating & Authoring Applications to Microsoft SQL Azure

Are you looking to migrate your on-premise applications and database from MySQL or other RDBMSs to SQL Azure? Or are you simply focused on the easiest ways to get your SQL Server database up to SQL Azure? Then this session is for you. We cover two fundamental areas in this session: the application data-access tier and the database schema+data. In Part 1, we dive into the application data-access tier, covering common migration issues as well as best practices that will help make your data-access tier more resilient in the cloud and on SQL Azure. In Part 2, the focus is on database migration. We go through migrating schema and data, taking a look at tools and techniques for efficient transfer of schema through Management Studio and Data-Tier Application (DAC). Then, we discover efficient ways of moving small and large data into SQL Azure through tools like SSIS and BCP. We close the session with a glimpse into what is in store in the future for easing migration of applications into SQL Azure.

<Return to section navigation list> 

Marketplace DataMarket and OData

Mike Taulty offered a PDC 2010 Session Downloader in Silverlight on 11/7/2010:

image I was struggling a little on the official PDC 2010 site to download all of the PowerPoints and videos I wanted and I scratched my head for a while before I found an OData feed which looked like it contained all the data I needed. That feed is at;

and I figured it wouldn’t be too much hassle to plug that into a Silverlight application that made it easier to do the downloading. I spent maybe a day on it during the rainy bits of the weekend and that application is running in the browser below and you should be able to click the button to install it locally.
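The feed address itself isn't visible above (it was shown as an image), but consuming an OData feed like the PDC one generally reduces to building a URL with `$`-prefixed query options and fetching it. As a hedged sketch – the service root and entity set names below are invented for illustration:

```javascript
// Build an OData query URL from a service root, an entity set, and
// query options (keys get the "$" prefix OData expects).
function odataUrl(serviceRoot, entitySet, options) {
  const query = Object.entries(options || {})
    .map(([k, v]) => "$" + k + "=" + encodeURIComponent(v))
    .join("&");
  return serviceRoot.replace(/\/$/, "") + "/" + entitySet +
         (query ? "?" + query : "");
}

// e.g. fetching the sessions in one track as JSON with jQuery:
// $.getJSON(odataUrl("https://odata.example.com/pdc2010", "Sessions",
//     { filter: "TrackName eq 'Framework & Tools'", format: "json" }),
//   function (data) { /* render data.d */ });
```

The same feed can serve ATOM or JSON; asking for `$format=json` is what makes it convenient from script.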

Install the app from Mike’s blog.

I apologise that the XAP file is about 1MB; I think I could make this a lot smaller by taking out some unused styles, but I haven’t figured out exactly which styles are used/un-used at this point. I also left the blue spinning balls default loader on it.

This is an elevated out-of-browser application because, as far as I can tell, the site serving the feed doesn’t have a clientaccesspolicy.xml file, and also because I wanted to be able to write files straight into “My Videos” and “My Documents” without having to ask the user on a per-file basis.

Here’s the Help File

I’m not exactly the master of intuitive UI, so here are some quick instructions for use. Once you have the application installed you should see a screen like this;


with the circled tracks coming from the OData feed. When you click on a topic for the first time, the application goes back over OData;


and then should display the session list within that particular track;


You can then select which PowerPoints, High Def videos and Low Def videos you want to download.

Note – for the purposes of this app I only looked at the videos in MP4 format; I do not look for videos in WMV format. Note also that not all sessions seem to have downloadable videos at this point.

You can make individual selections by just clicking on the content types – maybe I want Clemens’ session in High Def and I want the PowerPoints;


or you can use the big buttons at the bottom to try and do a “Select All” or “Clear All” on that particular content type – below I’m trying to download all the PowerPoints and all the Low Def videos from the “Framework & Tools” track;


Your selections should be remembered as you switch from track to track so you can go around this way building up a list of all the things that you want to download.

Once you’ve got that list, click the download button in the bottom right;


and the downloading dialog should pop up onto the screen and work should begin. It should progress like this;


showing you progress in terms of how many sessions it has completed, how many files (in total) it has completed, which session and file it’s working on right now and (potentially) the errors that it encounters along the way.

You can hit the Stop button and the downloads should stop with a set of “cancelled” errors for all the files/sessions that didn’t get completed.

A download should not (hopefully) leave a half-completed file on your disk and it should not (hopefully) overwrite any existing file on your disk.

Where are the downloaded files going? Into your My Documents and My Videos (PDC2010Low/PDC2010High) folders;


and if you do encounter errors in there like (e.g.) here where I unplug the network cable part way through…


then you should get some attempt at error handling with a list of the problems encountered whilst doing the downloads.

Enjoy – feel free to ping me with bugs and I’ll try to fix them. And remember, this is just for fun – it’s not part of the “official” PDC site in any way.

Update – a few people asked me to post the source code here and so here’s the code – some notes;

  1. This was written quickly. I’d guess I spent maybe 5-6 hours on it.
  2. I wasn’t planning to share it.
  3. It has some MVVM ideas in it but they’re not taken to the Nth degree. It uses some commanding from Expression Blend.
  4. There’s probably a race condition or two in there as well.
  5. There are no unit tests. What testing I did was done in the debugger.
  6. There are probably a bunch of styles in there that aren’t used. I started with the JetPack styles and trimmed some bits out but I wasn’t exhaustive.
  7. There’s a lot of “public” that should be “internal” and the code generally needs splitting into libraries.

but you can download it from here if you still want that code to open up and poke around in after all those caveats.

Mike Flasko posted a summary of Recent OData Announcements on 11/6/2010:

At this past PDC we announced a number of new OData producers (eBay, TwitPic, etc.) and libraries. Check out these posts to get all the news:

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Steve Peschka is on a roll with The Claims, Azure and SharePoint Integration Toolkit Part 4 post of 11/8/2010:

This is part 4 of a 5 part series on the CASI (Claims, Azure and SharePoint Integration) Kit. Part 1 was an introductory overview of the entire framework and solution and described what the series is going to try and cover. Part 2 covered the guidance to create your WCF applications and make them claims aware, then move them into Windows Azure. Part 3 walked through the base class that you’ll use to hook up your SharePoint site with Azure data, by adding a new custom control to a page in the _layouts directory. In this post I’ll cover the web part that is included as part of this framework. It’s designed specifically to work with the custom control you created and added to the layouts page in Part 3.

Using the Web Part

Before you begin using the web part, it obviously is assumed that you a) have a working WCF service hosted in Windows Azure and b) that you have created a custom control and added it to the layouts page, as described in Part 3 of this series. I’m further assuming that you have deployed both the CASI Kit base class assembly and your custom control assembly to the GAC on each server in the SharePoint farm. I’m also assuming that your custom aspx page that hosts your custom control has been deployed to the layouts directory on every web front end server in the farm. To describe how to use the web part, I’m going to use the AzureWcfPage sample project that I uploaded and attached to the Part 3 posting. So now let’s walk through how you would tie these two things together to render data from Azure in a SharePoint site.

NOTE: While not a requirement, it will typically be much easier to use the web part if the WCF methods being called return HTML so it can be displayed directly in the page without further processing.

The first step is to deploy the AzureRender.wsp solution to the farm; the wsp is included in the zip file attached to this posting. It contains the feature that deploys the Azure DataView WebPart as well as jQuery 1.4.2 to the layouts directory. Once the solution is deployed and feature activated for a site collection, you can add the web part to a page. At this point you’re almost ready to begin rendering your data from Azure, but there is a minimum of one property you need to set. So next let’s walk through what that and the other properties are for the web part.

Web Part Properties

All of the web part properties are in the Connection Properties section. At a minimum, you need to set the Data Page property to the layouts page you created and deployed. For example, /_layouts/AzureData.aspx. If the server tag for your custom control has defined at least the WcfUrl and MethodName properties, then this is all you need to do. If you did nothing else the web part would invoke the page and use the WCF endpoint and method configured in the page, and it would take whatever data the method returned (ostensibly it returns it in HTML format) and render it in the web part. In most cases though you’ll want to use some of the other web part properties for maximum flexibility, so here’s a look at each one:

  • Method Name* – the name of the method on the WCF application that should be invoked. If you need to invoke the layouts page with your own javascript function the query string parameter for this property is “methodname”.
  • Parameter List* – a semi-colon delimited list of parameters for the WCF method. As noted in Parts 2 and 3 of this series, only basic data types are supported – string, int, bool, long, and datetime. If you require a more complex data type then you should serialize it to a string first, call the method, and deserialize it back to a complex type in the WCF endpoint. If you need to invoke the layouts page with your own javascript function the query string parameter for this property is “methodparams”.
  • Success Callback Address – the javascript function that is called back after the jQuery request for the layouts page completes successfully. By default, this property uses the javascript function that ships with the web part. If you use your own function, the function signature should look like this: function yourFunctionName(resultData, resultCode, queryObject). For more details see the jQuery AJAX documentation at
  • Error Callback Address – the javascript function that is called back if the jQuery request for the layouts page encounters an error. By default, this property uses the javascript function that ships with the web part. If you use your own function, the function signature should look like this: function yourFunctionName(XMLHttpRequest, textStatus, errorThrown). For more details see the jQuery AJAX documentation at
  • Standard Error Message – the message that will be displayed in the web part if an error is encountered during the server-side processing of the web part. That means it does NOT include scenarios where data is actually being fetched from Azure.
  • Access Denied Message* – this is the Access Denied error message that should be displayed if access is denied to the user for a particular method. For example, as explained in Part 2 of this series, since we are passing the user’s token along to the WCF call, we can decorate any of the methods with a PrincipalPermission demand, like “this user must be part of the Sales Managers” group. If a user does not meet a PrincipalPermission demand then the WCF call will fail with an access denied error. In that case, the web part will display whatever the Access Denied Message is. Note that you can use rich formatting in this message, to do things like set the font bold or red using HTML tags (i.e. <font color='red'>You have no access; contact your admin</font>). If you need to invoke the layouts page with your own javascript function the query string parameter for this property is “accessdenied”.
  • Timeout Message* – this is the message that will be displayed if there is a timeout error trying to execute the WCF method call. It also supports rich formatting such as setting the font bold, red, etc. If you need to invoke the layouts page with your own javascript function the query string parameter for this property is “timeout”.
  • Show Refresh Link – check this box in order to render a refresh icon above the Azure data results. If the icon is clicked it will re-execute the WCF method to get the latest data.
  • Refresh Style – allows you to add additional style attributes to the main Style attribute on the IMG tag that is used to show the refresh link. For example, you could add “float:right;” using this property to have the refresh image align to the right.
  • Cache Results – check this box to have the jQuery library cache the results of the query. That means each time the page loads it will use a cached version of the query results. This will save round trips to Azure and result in quicker performance for end users. If the data it is retrieving doesn’t change frequently then caching the results is a good option.
  • Decode Results* – check this box in case your WCF application returns results that are HtmlEncoded. If you set this property to true then HtmlDecoding will be applied to the results. In most cases this is not necessary. If you need to invoke the layouts page with your own javascript function the query string parameter for this property is “encode”.

* – these properties will be ignored if the custom control’s AllowQueryStringOverride property is set to false.
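When you do invoke the layouts page from your own javascript, you pass these same values via the query string parameters documented above. A minimal helper for the most common ones might look like this – the property-to-parameter mapping follows the list above, and the page name is the example from earlier in the post:

```javascript
// Map web part property names to the query string parameters the
// layouts page reads (per the starred properties documented above).
function buildLayoutsQuery(page, props) {
  const map = {
    methodName: "methodname",
    parameterList: "methodparams",
    accessDeniedMessage: "accessdenied",
    timeoutMessage: "timeout"
  };
  const parts = [];
  for (const [prop, param] of Object.entries(map)) {
    if (props[prop] !== undefined) {
      parts.push(param + "=" + encodeURIComponent(props[prop]));
    }
  }
  return page + (parts.length ? "?" + parts.join("&") : "");
}

// buildLayoutsQuery("/_layouts/AzureData.aspx",
//   { methodName: "GetCustomerByEmail", parameterList: "steve@contoso.local" })
// → "/_layouts/AzureData.aspx?methodname=GetCustomerByEmail&methodparams=steve%40contoso.local"
```

Note that all of these overrides are honored only while the custom control's AllowQueryStringOverride property is true.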

Typical Use Case

Assuming your WCF method returns HTML, in most cases you will want to add the web part to a page and set two or three properties: Data Page, Method Name and possibly Parameter List.

If you have more advanced display or processing requirements for the data that is returned by Azure then you may want to use a custom javascript function to do so. In that case you should add your javascript to the page and set the Success Callback Address property to the name of your javascript function. If your part requires additional posts back to the WCF application, for things such as adds, updates or deletes, then you should add that into your own javascript functions and call the custom layouts page with the appropriate Method Name and Parameter List values; the query string variable names to use are documented above. When invoking the ajax method in jQuery to call the layouts page you should be able to use an approach similar to what the web part uses. The calling convention it uses is based on the script function below; note that you will likely want to continue using the dataFilter property shown because it strips out all of the page output that is NOT from the WCF method:


$.ajax({
    type: "GET",
    url: "/_layouts/SomePage.aspx",
    dataType: "text",
    data: "methodname=GetCustomerByEmail&methodparams=steve@contoso.local",
    dataFilter: AZUREWCF_azureFilterResults,
    success: yourSuccessCallback,
    error: yourErrorCallback
});

Try it Out!

You should have all of the pieces now to try out the end to end solution. Attached to this posting you’ll find a zip file that includes the solution for the web part. In the next and final posting in this series, I’ll describe how you can also use the custom control developed in Part 2 to retrieve data from Azure and use it in ASP.NET cache with other controls, and also how to use it in SharePoint tasks – in this case a custom SharePoint timer job.

Open attached

Steve Peschka continued his The Claims, Azure and SharePoint Integration Toolkit Part 3 series on 11/8/2010:

This is part 3 of a 5 part series on the CASI (Claims, Azure and SharePoint Integration) Kit. Part 1 was an introductory overview of the entire framework and solution and described what the series is going to try and cover. Part 2 covered the guidance to create your WCF applications and make them claims aware, then move them into Windows Azure. In this posting I’ll discuss one of the big deliverables of this framework, which is a custom control base class that you use to make your connection from SharePoint to your WCF application hosted in Windows Azure. These are the items we’ll cover:

  • The base class – what is it, how do you use it in your project
  • A Layouts page – how to add your new control to a page in the _layouts directory
  • Important properties – a discussion of some of the important properties to know about in the base class
The CASI Kit Base Class

One of the main deliverables of the CASI Kit is the base class for a custom control that connects to your WCF application and submits requests with the current user’s logon token. The base class itself is a standard ASP.NET server control, and the implementation of this development pattern requires you to build a new ASP.NET server control that inherits from this base class. For reasons that are beyond the scope of this posting, your control will really need to do two things:

  1. Create a service reference to your WCF application hosted in Windows Azure.
  2. Override the ExecuteRequest method on the base class. This is actually fairly simple because all you need to do is write about five lines of code where you create and configure the proxy that is going to connect to your WCF application, and then call the base class’ ExecuteRequest method.

To get started, create a new project in Visual Studio and choose the Class Library project type. After changing your namespace and class name to whatever you want them to be, add a reference to the CASI Kit base class, which is in the AzureConnect.dll assembly. You will also need to add references to the following assemblies: Microsoft.SharePoint, System.ServiceModel, System.Runtime.Serialization and System.Web.

In your new class, add a using statement for Microsoft.SharePoint, then change your class so it inherits from AzureConnect.WcfConfig. WcfConfig is the base class that contains all of the code and logic to connect to the WCF application, exposes all of the properties that add flexibility to the implementation, and eliminates the need for all of the typical web.config changes that are normally necessary to connect to a WCF service endpoint. This is important to understand – you would typically need to add nearly 100 lines of web.config changes for every WCF application to which you connect, to every web.config file on every server, for every web application that used it. The WcfConfig base class wraps all of that up in the control itself, so you can just inherit from the control and it does the rest for you. All of the properties that would be changed in the web.config file can also be changed on the WcfConfig control, because it exposes properties for all of them. I’ll discuss this further in the section on important properties.

Now it’s time to add a new Service Reference to your WCF application hosted in Windows Azure. There is nothing specific to the CASI Kit that needs to be done here – just right-click on References in your project and select Add Service Reference. Plug in the Url to your Azure WCF application with the “?WSDL” at the end so it retrieves the WSDL for your service implementation. Then change the name to whatever you want, add the reference and this part is complete.

At this point you have an empty class and a service reference to your WCF application. Now comes the code writing part, which is fortunately pretty small. You need to override the ExecuteRequest method, create and configure the service class proxy, then call the base class’ ExecuteRequest method. To simplify, here is the complete code for the sample control I’m attaching to this post; I’ve highlighted in yellow the parts that you need to change to match your service reference: …

Double-spaced C# sample code omitted for brevity.

So there you have it – basically five lines of code, and you really can just copy and paste the code in the override for ExecuteRequest shown here directly into your own override. After doing so you just need to replace the parts highlighted in yellow with the appropriate class and interfaces your WCF application exposes. In the highlighted code above:

  • CustomersWCF.CustomersClient: “CustomersWCF” is the name I used when I created my service reference, and CustomersClient is the name of the class I’ve exposed through my WCF application. The class name in my WCF is actually just “Customers”; the Visual Studio Add Service Reference tool adds the “Client” part at the end.
  • CustomersWCF.ICustomers: “CustomersWCF” is the same as described above. “ICustomers” is the interface that I created in my WCF application, which my WCF “Customers” class implements.

That’s it – that’s all the code you need to write to provide that connection back to your WCF application hosted in Windows Azure. Hopefully you’ll agree that’s pretty painless. As a little background, the code you wrote is what allows the call to the WCF application to pass along the SharePoint user’s token. This is explained in a little more detail in this other posting I did:

Now that the code is complete, you need to make sure you register both the base class and your new custom control in the Global Assembly Cache on each server where it will be used. This can obviously be done pretty easily with a SharePoint solution. With the control complete and registered it’s time to take a look at how you use it to retrieve and render data. In the CASI Kit I tried to address three core scenarios for using Azure data:

  1. Rendering content in a SharePoint page, using a web part
  2. Retrieving configuration data for use by one to many controls and storing it in ASP.NET cache
  3. Retrieving data and using it in task type executables, such as a SharePoint timer job

The first scenario is likely to be the most pervasive, so that’s the one we’ll tackle first. The easiest thing to do once this methodology was mapped out would have been to just create a custom web part that made all of these calls during a Load event or something like that, retrieved the data and rendered it out on the page. That, however, I think would be a HUGE mistake. Wrapping that code up in the web part itself, so that it executes server-side during the processing of a page request, could severely bog down the overall throughput of the farm. I had serious concerns about having one to many web parts on a page making one to many latent calls across applications and data centers to retrieve data, and could easily envision a scenario where broad use would bring an entire farm to its knees. However, some code does have to run on the server, because it is needed to configure the channel to the WCF application to send the user token along with the request. My solution for this came to consist of two parts:

1. A custom page hosted in the _layouts folder. It will contain the custom control that we just wrote above and will actually render the data that’s returned from the WCF call.

2. A custom web part that executes NO CODE on the server side, but instead uses JavaScript and jQuery to invoke the page in the _layouts folder. It reads the data that was returned from the page and then hands it off to a JavaScript function that, by default, will just render the content in the web part. There’s a lot more that’s possible in the web part, of course, but I’ll cover it in detail in the next posting. The net effect, though, is that when a user requests the page it is processed without having to make any additional latent calls to the WCF application. Instead the page goes through the processing pipeline and comes down right away to the user’s browser; then, after the page is fully loaded, the call is made to retrieve only the WCF content.

The Layouts Page

The layouts page that will host your custom control is actually very easy to write. I did the whole thing in notepad in about five minutes. Hopefully it will be even quicker for you because I’m just going to copy and paste my layouts page here and show you what you need to replace in your page. …

ASP.NET code omitted for brevity.

Again, the implementation of the page itself is really pretty easy. All that absolutely has to be changed is the strong name of the assembly for your custom control. For illustration purposes, I’ve also highlighted a couple of the properties in the control tag itself. These properties are specific to my WCF service, and can be changed or in some cases removed entirely in your implementation. The properties will be discussed in more detail below. Once the layouts page is created it needs to be distributed to the _layouts directory on every web front-end server in your SharePoint farm. At that point it can be called from any site in any claims-aware web application in your SharePoint farm. Obviously, you should not expect it to work in a classic-authentication site, such as Central Admin. Once the page has been deployed it can be used by the CASI Kit web part, which will be described in part 4 of this series.

Important Properties

The WcfConfig control contains two big categories of properties – those for configuring the channel and connection to the WCF application, and those that configure the use of the control itself.

WCF Properties

As described earlier, all of the configuration information for the WCF application that is normally contained in the web.config file has been encapsulated into the WcfConfig control. However, virtually all of those properties are exposed so that they can be modified depending on the needs of your implementation. Two important exceptions are the message security version, which is always MessageSecurityVersion.WSSecurity11WSTrust13WSSecureConversation13WSSecurityPolicy12BasicSecurityProfile10, and the transport, which is always HTTPS, not HTTP. Otherwise the control has a number of properties that may be useful to know a little better in your implementation (although in the common case you don’t need to change them).

First, there are five read-only properties that expose the top-level configuration objects used in the configuration. By read-only, I mean the objects themselves are read-only, but you can set their individual properties if working with the control programmatically. Those five properties are:

  1. SecurityBindingElement SecurityBinding
  2. BinaryMessageEncodingBindingElement BinaryMessage
  3. HttpsTransportBindingElement SecureTransport
  4. WS2007FederationHttpBinding FedBinding
  5. EndpointAddress WcfEndpointAddress

The other properties can all be configured in the control tag that is added to the layouts aspx page. For these properties I used a naming convention that I was hoping would make sense for compound property values. For example, the SecurityBinding property has a property called LocalClient, which has a bool property called CacheCookies. To make this as easy to understand and use as possible, I just made one property called SecurityBindingLocalClientCacheCookies. You will see several properties like that, and this is also a clue for how to find the right property if you are looking at the .NET Framework SDK and wondering how you can modify some of those property values in the base class implementation. Here is the complete list of properties: …

Another couple of pages of C# code omitted for brevity.

Again, these were all created so that they could be modified directly in the control tag in the layouts aspx page. For example, here’s how you would set the FedBindingUseDefaultWebProxy property:

<AzWcf:WcfDataControl runat="server" id="wcf" WcfUrl="" OutputType="Page" MethodName="GetAllCustomersHtml" FedBindingUseDefaultWebProxy="true" />
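The flattening convention behind names like SecurityBindingLocalClientCacheCookies is easy to mirror in your own code. Here’s a hedged sketch of the idea (shown in Java for illustration; the nested types and names are stand-ins I invented, not the actual WCF binding classes or CASI Kit source): the compound property is just a wrapper that delegates to the nested object.

```java
// Hypothetical sketch of the flattened-property convention used by WcfConfig.
// The nested types here are stand-ins, not the real WCF binding classes.
public class WcfConfigSketch {
    public static class LocalClient {
        public boolean cacheCookies;
    }
    public static class SecurityBinding {
        public final LocalClient localClient = new LocalClient();
    }

    private final SecurityBinding securityBinding = new SecurityBinding();

    // Read-only reference to the compound object; its members remain settable.
    public SecurityBinding getSecurityBinding() { return securityBinding; }

    // Flattened property: SecurityBinding.LocalClient.CacheCookies becomes
    // one property named SecurityBindingLocalClientCacheCookies.
    public boolean getSecurityBindingLocalClientCacheCookies() {
        return securityBinding.localClient.cacheCookies;
    }
    public void setSecurityBindingLocalClientCacheCookies(boolean value) {
        securityBinding.localClient.cacheCookies = value;
    }

    public static void main(String[] args) {
        WcfConfigSketch cfg = new WcfConfigSketch();
        cfg.setSecurityBindingLocalClientCacheCookies(true);
        System.out.println(cfg.getSecurityBinding().localClient.cacheCookies); // prints "true"
    }
}
```

This delegation pattern is what lets deeply nested binding values be set as simple attributes in the control tag, as in the FedBindingUseDefaultWebProxy example above.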

Usage Properties

The other properties on the control are designed to control how it’s used. While there is a somewhat lengthy list of properties, note that they are mainly for flexibility of use – in the simple case you will only need to set one or two properties, or alternatively just set them in the web part that will be described in part four of this series. Here’s a list of each property and a short description of each.

  • string WcfUrl – this is the Url to the WCF service endpoint, i.e.
  • string MethodName – this is the name of the method that should be invoked when the page is requested; it determines which method is invoked by default. In addition, you can set the AllowQueryStringOverride property to false, which will restrict the page to ONLY using the MethodName you define in the control tag. This property can be set via query string using the CASI Kit web part.
  • string MethodParams – this is a semicolon-delimited list of parameter values that should be passed to the method call. They need to be in the same order as they appear in the method signature. As explained in part 2 of this blog series, parameter values really only support simple data types, such as string, bool, int and datetime. If you wish to pass more complex objects as method parameters then you need to make the parameter a string, serialize your object to Xml before calling the method, and then in your WCF application deserialize the string back into an object instance. If passing that as a query string parameter though you will be limited by the maximum query string length that your browser and IIS support. This property can be set via query string using the CASI Kit web part.
  • object WcfClientProxy – the WcfClientProxy is what’s used to actually make the call to the WCF application. It needs to be configured to support passing the user token along with the call, which is why the last bit of configuration code you write in your custom control’s ExecuteRequest override sets this proxy object equal to the service application proxy you created and configured to use the current client credentials.
  • string QueryResultsString – this property contains a string representation of the results returned from the method call. If your WCF method returns a simple data type like bool, int, string or datetime, then the value of this property will be the return value ToString(). If your WCF method returns a custom class that’s okay too – when the base class gets the return value it will serialize it to a string so you have an Xml representation of the data.
  • object QueryResultsObject – this property contains an object representation of the results returned from the method call. It is useful when you are using the control programmatically. For example, if you are using the control to retrieve data to store in ASP.NET cache, or to use in a SharePoint timer job, the QueryResultsObject property has exactly what the WCF method call returned. If it’s a custom class, then you can just cast the results of this property to the appropriate class type to use it.
  • DataOutputType OutputType – the OutputType property is an enum that can be one of four values: Page, ServerCache, Both or None. If you are using the control in the layouts page and you are going to render the results with the web part, then the OutputType should be Page or Both. If you want to retrieve the data and have it stored in ASP.NET cache then you should use ServerCache or Both. NOTE: When storing results in cache, ONLY the QueryResultsObject is stored. Obviously, Both will both render the data and store it in ASP.NET cache. If you are just using the control programmatically in something like a SharePoint timer job then you can set this property to None, because you will just read the QueryResultsString or QueryResultsObject after calling the ExecuteRequest method. This property can be set via query string using the CASI Kit web part.
  • string ServerCacheName – if you chose an OutputType of ServerCache or Both, then you need to set the ServerCacheName property to a non-empty string value or an exception will be thrown. This is the key that will be used to store the results in ASP.NET cache. For example, if you set the ServerCacheName property to be “MyFooProperty”, then after calling the ExecuteRequest method you can retrieve the object that was returned from the WCF application by referring to HttpContext.Current.Cache["MyFooProperty"]. This property can be set via query string using the CASI Kit web part.
  • int ServerCacheTime – this is the time, in minutes, that an item added to the ASP.NET cache should be kept. If you set the OutputType property to either ServerCache or Both then you must also set this property to a non-zero value or an exception will be thrown. This property can be set via query string using the CASI Kit web part.
  • bool DecodeResults – this property is provided in case your WCF application returns results that are HtmlEncoded. If you set this property to true then HtmlDecoding will be applied to the results. In most cases this is not necessary. This property can be set via query string using the CASI Kit web part.
  • string SharePointClaimsSiteUrl – this property is primarily provided for scenarios where you are creating the control programmatically outside of an Http request, such as in a SharePoint timer job. By default, when a request is made via the web part, the base class will use the Url of the current site to connect to the SharePoint STS endpoint to provide the user token to the WCF call. However, if you have created the control programmatically and don’t have an Http context, you can set this property to the Url of a claims-secured SharePoint site and that site will be used to access the SharePoint STS. So, for example, you should never need to set this property in the control tag on the layouts page because you will always have an Http context when invoking that page.
  • bool AllowQueryStringOverride – this property allows administrators to effectively lock down a control tag in the layouts page. If AllowQueryStringOverride is set to false then any query string override values that are passed in from the CASI Kit web part will be ignored.
  • string AccessDeniedMessage – this is the Access Denied error message that should be displayed in the CASI Kit web part if access is denied to the user for a particular method. For example, as explained in part 2 of this series, since we are passing the user’s token along to the WCF call, we can decorate any of the methods with a PrincipalPermission demand, like “this user must be part of the Sales Managers” group. If a user does not meet a PrincipalPermission demand then the WCF call will fail with an access denied error. In that case, the web part will display whatever the AccessDeniedMessage is. Note that you can use rich formatting in this message, to do things like set the font bold or red using HTML tags (i.e. <font color='red'>You have no access; contact your admin</font>). This property can be set via query string using the CASI Kit web part.
  • string TimeoutMessage – this is the message that will be displayed in the CASI Kit web part if there is a timeout error trying to execute the WCF method call. It also supports rich formatting such as setting the font bold, red, etc. This property can be set via query string using the CASI Kit web part.
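As background for the MethodParams property above, here is a rough sketch of how a semicolon-delimited value list could be mapped onto a simple-typed method signature (a hypothetical illustration in Java; this is not the CASI Kit’s actual parsing code, and the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: splitting a MethodParams-style value list and coercing
// each entry to a simple type, in signature order.
public class MethodParamsSketch {
    public static List<Object> parse(String methodParams, Class<?>... signature) {
        String[] raw = methodParams.split(";");
        List<Object> values = new ArrayList<>();
        for (int i = 0; i < signature.length; i++) {
            String s = raw[i].trim();
            if (signature[i] == Integer.class)      values.add(Integer.parseInt(s));
            else if (signature[i] == Boolean.class) values.add(Boolean.parseBoolean(s));
            else                                    values.add(s); // string (including serialized Xml)
        }
        return values;
    }

    public static void main(String[] args) {
        // e.g. a hypothetical method GetOrders(string customer, int max, bool includeClosed)
        List<Object> vals = parse("CONTOSO;25;true", String.class, Integer.class, Boolean.class);
        System.out.println(vals); // prints "[CONTOSO, 25, true]"
    }
}
```

This also shows why complex objects have to travel as serialized strings: the value list itself only carries text that can be coerced to simple types.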

Okay, this was one LONG post, but it will probably be the longest of the series because it’s where the most significant glue of the CASI Kit is. In the next posting I’ll describe the web part that’s included with the CASI Kit to render the data from your WCF application, using the custom control and layouts page you developed in this step.

Also, attached to this post you will find a zip file that contains the sample Visual Studio 2010 project that I created that includes the CASI Kit base class assembly, my custom control that inherits from the CASI Kit base class, as well as my custom layouts page.

Open attached

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

No significant articles today.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Buck Woody offered his recommendations for Designing for the cloud in an 11/8/2010 post:

With the advent of platform as a service (Azure and paradigms like it) it isn't enough to simply write the same code patterns as we have in the past. To be sure, those patterns will still function in Azure, but we are not taking full advantage of the paradigm if we don't exercise it more fully.

I've been reading Neal Stephenson's "Snow Crash" again for the first time in a long time, and the old adage holds true - today's science fiction is tomorrow's science fact. In this story there are interesting paradigms in code - a strange mixture of "The Sims" or Second Life and the Internet. There is also the idea of telecommuting, remote meetings and classes, which develop into a real second economy.

Developers will bring in something similar using cloud technologies. Data marketplaces such as the one we recently released will become the norm, and the App Store model from Apple will bubble down - not up - into code fragments that will allow other developers and non-developers alike to stitch together entirely new applications.

So what steps do you need to take now to get your skills and code up to speed? My recommendation is to modularize your code logic - not the code itself, but the thoughts behind it - in a more Service Oriented Architecture way. From the outset, think of the public and private methods the code provides, and most importantly, divorce it from the processing of the rest of the system as much as possible. And - this is the important part - keep the presentation layer separate.

Again, that concept isn't new. Lots of folks thought of this a long time ago. As software engineers we're always supposed to do that, but over and over again I find the code inextricably tied between the two. If you keep them separate, however, there is something else I propose you do: implement functional contracts between your front end and the presentation handler code. Let me explain.

I suggest you create three levels of code - all of them separate - one handling the processing of data, numerics, a-la SOA; the second handling multiple rendering representations such as text, HTML, polygons, vectors and so on; and the third for each front end you plan to address. That's right, multiple front ends. I know this is antithetical to a platform-independent approach, but I think it is necessary. When I view an app designed identically for a Win7 phone, an iPad and a web page, it is rubbish. Sorry, but there it is. Each platform has its own strengths, and you should code to those.
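The three levels with contracts between them can be sketched like this (a hypothetical illustration in Java; all interface and class names here are invented): the processing layer returns only data, each renderer turns that data into one representation, and each front end composes the two.

```java
// Hypothetical sketch of the three levels: processing, rendering, front end.
interface QuoteService {                 // level 1: pure processing, SOA-style
    double price(String symbol);
}
interface Renderer {                     // level 2: one renderer per representation
    String render(String symbol, double price);
}

public class ThreeLevelSketch {
    static class FixedQuoteService implements QuoteService {
        public double price(String symbol) { return 42.0; } // stand-in for real logic
    }
    static class HtmlRenderer implements Renderer {
        public String render(String symbol, double price) {
            return "<b>" + symbol + "</b>: " + price;
        }
    }
    static class PlainTextRenderer implements Renderer {
        public String render(String symbol, double price) {
            return symbol + " " + price;
        }
    }

    // Level 3: each front end picks the renderer suited to its platform.
    static String frontEnd(QuoteService svc, Renderer r, String symbol) {
        return r.render(symbol, svc.price(symbol));
    }

    public static void main(String[] args) {
        QuoteService svc = new FixedQuoteService();
        System.out.println(frontEnd(svc, new HtmlRenderer(), "MSFT"));      // prints "<b>MSFT</b>: 42.0"
        System.out.println(frontEnd(svc, new PlainTextRenderer(), "MSFT")); // prints "MSFT 42.0"
    }
}
```

Because the contracts stay fixed, swapping in a new front end or renderer never touches the processing code.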

Keeping a three-level architecture with contracts allows you to only rewrite front ends as needed, which of course need updating often anyway.

So there you have it - not necessarily a revolutionary concept or an original one, but something to keep in mind for programming against a Platform as a Service like Windows Azure.

Avkash Chauhan explained Adding SSL (HTTPS) security with Tomcat/Java solution in Windows Azure in an 11/7/2010 post:

Once you have your Tomcat/Java-based solution running on port 80 in Windows Azure you may decide to add an HTTPS endpoint to it, or from the start of your service development you may want to support both HTTP and HTTPS endpoints with your Tomcat/Java service. Adding an HTTPS endpoint to a regular .NET-based service is very easy and described everywhere; adding an HTTPS endpoint to a Tomcat/Java-based service is a little different. The reason is that in a managed service the SSL certificate is handled by managed modules, which obtain it from the system, whereas Tomcat manages its SSL certificate its own way. This document helps you get the HTTPS endpoint working with your Tomcat/Java service on Windows Azure.

Adding HTTPS security to any website is certificate-based security built on PKI concepts, and to start you first need to obtain a valid certificate for “client and server validation” from a Certificate Authority, e.g. “Verisign”, “Go Daddy” or “Thawte”. I will not be discussing here which CA to choose or how to get this certificate; I will assume that you know the concepts and have the certificate ready to add HTTPS security to your Tomcat service.

You will be given a certificate that verifies your domain name, so the certificate will be linked to your domain.

The process of adding SSL to tomcat is defined in the following steps:

  1. Getting certificates from CA and then creating keystore.bin file
  2. Adding keystore.bin file to tomcat
  3. Adding HTTPS endpoint to your tomcat solution
  4. Adding certificate to your Tomcat service at Windows Azure Portal

Here is the description of each above steps:

Getting certificates from CA and then creating keystore.bin file

To get these certificates you will need to create a CSR (certificate signing request) from your Tomcat/Apache server. Create a folder named keystore and save the CSR contents there.


Most of the time you will get a certificate chain, which includes your certificate, an intermediate certificate and a root certificate, so essentially you will have 3 certificates:

  1. RootCertFileName.crt
  2. IntermediateCertFileName.crt
  3. PrimaryCertFileName.crt

Once you have received the certificates, please save all 3 of them in the keystore folder.


The certificate will only work with the same keystore that you initially created the CSR with. The certificates must be installed to your keystore in the correct order.

We will be using keytool.exe, a Java tool, to link these certificates with Tomcat. The tool is located at:


Every time you install a certificate to the keystore you must enter the keystore password that you chose when you generated it, so you will keep using the same password.

Now open a command window and use the keytool binary to run the following commands.

Installing Root Certificate in keystore:

keytool -import -trustcacerts -alias root -file RootCertFileName.crt -keystore keystore.key

There are two possibilities:

  1. You may receive a success message, in which case we are good: "Certificate was added to keystore".
  2. You may also receive a message that says: "Certificate already exists in system-wide CA keystore under alias <...> Do you still want to add it to your own keystore? [no]." This is because the certificate may already be stored in the system-wide keystore, so select “Yes.”

You will see the message: "Certificate was added to keystore." Now we have added our Root certificate in the keystore.

Installing Intermediate Certificate in keystore:

keytool -import -trustcacerts -alias intermediate -file IntermediateCertFileName.crt -keystore keystore.key

You will see a message as: "Certificate was added to keystore".

We are good now.

Note: if you don’t have an intermediate certificate, that’s not a problem; you can skip this step.

Installing Primary Certificate in keystore:

keytool -import -trustcacerts -alias tomcat -file PrimaryCertFileName.crt -keystore keystore.key

You will see a message as: "Certificate was added to keystore".

We are good now.

After this we can be sure that we have all certificates installed in the keystore file.

Note you can actually see the contents of keystore.bin
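Besides viewing it in a tool, you can also inspect a keystore programmatically with the standard java.security.KeyStore API. Here is a sketch (the file name and password are assumptions – substitute the keystore file and password you used with keytool):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.util.Collections;

// Sketch: open a JKS keystore and list its aliases, e.g. to confirm that
// the root, intermediate and tomcat entries were all imported.
public class ListKeystore {
    public static void listAliases(String path, char[] password) throws Exception {
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(path)) {
            ks.load(in, password);
        }
        System.out.println("Entries: " + ks.size());
        for (String alias : Collections.list(ks.aliases())) {
            System.out.println(alias + " (certificate: " + ks.isCertificateEntry(alias) + ")");
        }
    }

    public static void main(String[] args) throws Exception {
        // Create an empty keystore just so the sketch is runnable end to end;
        // in practice, point this at the keystore you built with keytool.
        File f = File.createTempFile("keystore", ".key");
        char[] pwd = "changeit".toCharArray(); // assumed password for the sketch
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, pwd);
        try (FileOutputStream out = new FileOutputStream(f)) {
            ks.store(out, pwd);
        }
        listAliases(f.getAbsolutePath(), pwd); // prints "Entries: 0" for the empty store
    }
}
```

After importing the three certificates as above, you would expect to see the root, intermediate and tomcat aliases listed.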


Adding keystore.bin file to tomcat:

Our next step is to configure your server to use the keystore file.

Please get keystore.bin & yourdomain.key from the CSR creation location and then copy them to your Tomcat webapps folder:

Now open the server.xml file from Tomcat\conf and edit it as below:



  1. Please be sure to use the same password that you chose at CSR creation and used with the keytool application.
  2. There are other methods to add a keystore to Tomcat, so please look around on the internet if you prefer another method.

Verify that you have SSL working in Development Fabric:

Adding HTTPS endpoint to your tomcat solution

Our next step is to add the HTTPS endpoint to the Tomcat solution. Please open the Tomcat solution in Visual Studio, which is located at:


Once the solution is open please select the TomcatWorkerRole and open its properties dialog box as below:

Now please select the “Endpoints” tab and you will see the window as below:

In the “Endpoints” tab please select the “Add Endpoint” button and add a TCP endpoint with port 443.

For Tomcat the HTTPS endpoint is defined as a “tcp” endpoint, like HTTP. Setting the protocol to “http” or “https” means that Azure will perform an http.sys reservation for that endpoint on the appropriate port. Since Tomcat does not use http.sys internally, we needed to make sure to model Tomcat HTTPS endpoints as “tcp”.

You will also see that when you set the tomcatSSL endpoint to TCP with port 443, the SSL Certificate field gets disabled, so a regular certificate cannot be used, as below:

Now, to put things in full perspective: we already know that Tomcat has the SSL certificates in its keystore.bin, so using a TCP endpoint with port 443 will work even though there is no certificate associated with it.

Now please save the project and verify that ServiceDefinition.csdef has the following data:

Now if you build your package using Packme.cmd these new changes will be in effect.

Adding certificate to your Tomcat service at Windows Azure Portal

Our next step is to add certificates in the Azure portal.

You need to go to the Windows Azure portal and your Tomcat service page, where you have your production and staging slots, and go to the “Certificates – Manage” section as below:

Now please select Manage and you will see the following screen:

In the above web window please select all 2/3 PFX certificates (root, intermediate and primary) and enter the password correctly if one is associated.

Once the upload is done you will see all the certificates located on the portal as below:

You can also see the list of certificates installed on the server as below:


Now when you publish your service to Azure portal these certificates will be used to configure your Tomcat service and you will be able to use SSL with your Tomcat service.

<Return to section navigation list> 

Visual Studio LightSwitch

Wes Yanaga announced Now available: Visual Studio LightSwitch Developer Training Kit in an 11/8/2010 post to the US ISV Evangelism blog:

The Visual Studio LightSwitch Training Kit contains demos and labs to help you learn to use and extend LightSwitch. The introductory materials walk you through the Visual Studio LightSwitch product. By following the hands-on labs, you'll build an application to inventory a library of books.

The more advanced materials will show developers how they can extend LightSwitch to make components available to the LightSwitch developer. In the advanced hands-on labs, you will build and package an extension that can be reused by the Visual Studio LightSwitch end user.

Here’s the LightSwitch Training Kit’s splash screen:


Beth Massi (@BethMassi) listed LightSwitch Team Interviews, Podcasts & Videos on 11/8/2010:

Have a long commute to work or want something to watch on your lunch break? Catch the team in these audio and video interviews!

Podcasts -

Video Interviews -

You can access these plus a lot more on the LightSwitch Developer Center

<Return to section navigation list> 

Windows Azure Infrastructure

Mike Wickstrand’s Windows Azure: A Year of Listening, Learning, Engineering, and Now Delivering! post of 11/2/2010 got lost in the shuffle, so better late than never here:

Although I’ve worked at Microsoft for more than 11 years, 2009 marked the first time I had the opportunity to attend Microsoft’s Professional Developers Conference. When I walked around PDC 09 in Los Angeles last year and spoke with developers, I found that I was inundated with many great ideas on how to make Windows Azure better, almost too many to sort through and prioritize. As someone who helps chart the future course for Windows Azure this was a fantastic problem to have at that point, because in late 2009 we were finalizing our priorities and engineering plans for calendar year 2010.

Energized by those developer conversations and wanting a way to capture and prioritize it all, on the flight home I launched (Wi-Fi on the plane helped). It’s a simple site where Windows Azure enthusiasts and customers (big or small) can tell Microsoft’s Windows Azure Team directly what they need by submitting and voting on ideas. I wasn’t sure anyone would participate, so I submitted a few ideas of my own to get things going and gauge interest in some ideas we were kicking around within the Windows Azure Team. The goal of the site was and is to better understand what you need from Windows Azure and to build plans around how we make the things that "bubble to the top" a reality for our customers in the future.

So what happened? Well, with a year now gone by and a slew of features on tap for release it’s the perfect time to reflect back. In the past 12 months, more than 2000 unique visitors to have submitted hundreds of feature requests and cast nearly 13,000 votes for the features that matter most to them. There were also hundreds and hundreds of valuable comments and blog posts that grew out of the ideas people were sharing on the site. Thank you for this amazing level of participation!

With the announcements last week at PDC 10 and with a look forward to things that weren’t announced, but are in the works, I am pleased to let you know that we are addressing 62% of all of the votes cast with features that are already or soon will be available. Said another way, we are addressing 8 out of the top 10 most requested ideas (and more ideas lower down on the list) that in total account for roughly 8000 of the nearly 13,000 votes cast.

I hope you agree that we are sincerely listening to you and knocking these high-priority ideas off one by one. I am sure some of you want new features to come sooner, or perhaps you’re not happy because your requested feature isn’t yet available (or isn’t available exactly in the way you envisioned). With more than 2,000 people participating, this is going to happen; I just hope that what we are releasing makes you even more enthusiastic about keeping an active dialog going with me. Also, please realize this site is just one of many channels we use to determine our engineering and business priorities; it just happens to be the most public.

On that note, I received this e-mail the other day that I wanted to share with you:

From: Paul <last name and e-mail address withheld>
Sent: Saturday, October 30, 2010 4:15 AM
To: Mike Wickstrand
Subject: Windows Azure


I’ve been keeping a close eye on Windows Azure, and so far it’s been a case of “Wow, I’d love to develop for this, but it’s too expensive”.  I’ve been looking at the feedback site, and I have to say, the new announcements for $0.05 per hour instances and being able to run multiple roles <web sites> per instance have tipped me in favour of Azure enough to begin learning and developing for it.

Thanks so much for listening to the feedback of the developer community.  It gives me a warm feeling that we have our Microsoft of old back, who cares and listens to the developer community.

Honestly, this is great news.



(A born-again Microsoft fan-boi)

When I was sitting on that plane last year flying back from PDC 09, I hoped that in a year I would be able to look each of you in the eye and demonstrate that Microsoft listens and that the Windows Azure Team cares about what you need. In the best-case scenario I envisioned that I would hear from customers like Paul. I gave you my assurance that if you tell us what you want, I will do my best to champion those ideas within Microsoft to make them become a reality. I hope you feel I’ve lived up to that and that I’ve earned the right to keep hearing your ideas on how to make Windows Azure great for you and for your companies.

So…a big thank you to the more than 2000 people who shared ideas and voted for what you want and need from Windows Azure. To the thousands of Windows Azure customers who regularly receive e-mails from me asking for your opinions, thank you and please keep the feedback coming. And lastly, to the Windows Azure Team, thanks for making all of this happen, for Paul and for our thousands of customers just like him.

In the past year we’ve also added a few more ears to my team, so along the way please don’t hesitate to share your ideas with Haris, Adam, or Robert.

We look forward to coming back to you in another year after PDC 11 and having an even better story to tell.

Dustin Amrhein announced “The need for eventual standardization” as an introduction to his Is Standardization Right for Cloud-Based Application Environments? essay of 11/8/2010:

image For me, this week has been one of those weeks that I think all technologists enjoy. You know what I'm talking about. The week has been one of those rare periods of time when you get to put day-to-day work on the backburner (or at least delay it until you get back to your hotel at night), and instead focus on learning, networking, and stepping outside of your comfort cocoon.

This week, I am getting a chance to attend Cloud Expo, two CloudCamps, and QCon all within a four-day span. In the process, I am meeting many smart folks (all the while finding out there are indeed people behind those Twitter handles) and coming across quite a few interesting cloud solutions. I could easily write a post talking about the people I met and the cool things they are doing, but instead I want to focus on the cloud solutions I came across during the week. When it comes to the solutions, rather than focusing in on one or two specific solutions, I wanted to focus in on a class of solutions, specifically cloud management solutions.

It's an obvious trend: the number of cloud management solutions on display far, far outweighs any other class of offering. To be fair, calling a particular offering a cloud management solution is to paint with a broad brush. Accordingly, these solutions vary in some respects. Some focus more on delivering capabilities to set up and configure cloud infrastructure, while others emphasize facilities to enable the effective consumption of said infrastructure. While some of the capabilities vary, there is one capability that nearly all have in common: just about every solution I have seen provides some sort of functionality around constructing and deploying application environments in a cloud.

Now, the approach each solution takes to this particular capability varies widely. Before we get into that, let's start by agreeing on what I mean by an application environment. In this context, when I say application environment, I am thinking of two main elements:

  • Application infrastructure componentry: The application infrastructure componentry represents the building blocks of the application environment. These are the worker nodes (management servers, application servers, web servers, etc.) that support your application.
  • Application infrastructure configuration: Application infrastructure is a means to the end of providing an application. Users always customize the configuration of the infrastructure in order to effectively deliver their application.

As I said, in tackling the pieces of these application environments, the solutions took different approaches. Nearly all had a way of representing the environment as a single logical unit. The name of that unit varied (patterns, templates, assemblies, etc.), as did the degree of abstraction. Some, but not all, provided a direct means to include configuration into that representation. Others left it up to the users to work out a construct by which they could include the configuration of their environment into the logical representation of their application environment.

At this point in the cloud game, I believe most would expect this high degree of variation. In addition, I believe most would agree it is a healthy thing as it gives users a high degree of choice (even if it can be frustrating as a vendor to try to differentiate/explain your particular approach). However, as the market begins to validate the right approaches, I firmly believe we need some sort of standardization or commonality in how we approach representing application environments for the cloud.

As I see it, an eventual standardization in the space of representing application environments built for the cloud will enable several beneficial outcomes. This includes, but is not limited to:

  • Multi-system management: One of the most obvious benefits of a standardized application environment description is that it sets the course for the use of these representations by multiple different management systems. Users should be able to take these patterns, templates, assemblies, and move them from one deployment management system to another.
  • Policy-based management: A standard description of the environment and configuration paves the way for systems to be able to interact with the environment. Among other things, this may enable generically applicable policy-based management of the application environment.
  • Sustainable PaaS systems: My last post goes into this in more detail, but it is my belief that to build sustainable PaaS platforms we need a common representation of application environments.
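To make the portability idea concrete, here is a purely illustrative sketch (in Python; every field name is invented, since no such standard exists yet) of what a common application-environment representation might capture: the componentry and configuration elements defined above.

```python
import json

# Hypothetical, vendor-neutral descriptor of an application environment:
# the worker nodes (componentry) plus their configuration.
environment = {
    "name": "order-processing",
    "components": [
        {"role": "web-server", "image": "apache-2.2", "instances": 2},
        {"role": "app-server", "image": "websphere-7", "instances": 4},
        {"role": "management-server", "image": "admin-node", "instances": 1},
    ],
    "configuration": {
        "web-server": {"keepalive": True},
        "app-server": {"heap_mb": 2048, "session_replication": True},
    },
}

# Because the descriptor is plain data, any management system that understood
# the (hypothetical) format could deploy it -- the multi-system management
# benefit described above.
serialized = json.dumps(environment, indent=2)
restored = json.loads(serialized)
assert restored == environment
```

A standard along these lines is what would let a pattern, template, or assembly move between deployment management systems without being rebuilt.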

Admittedly, there is much more to this topic than a few words. These are just a few quick thoughts inspired by some of the emerging solutions I got an up-close look at this week. What do you think? Should we gradually move toward standardization in this space?

Dustin Amrhein joined IBM as a member of the development team for WebSphere Application Server.

Chris Dannen asserted “A new crop of mobile software startups is changing the way enterprises choose software, at the expense of big players like Microsoft” in a preface to his The Great Mobile Cloud Disruption article of 11/8/2010 for the MIT Technology Review:

Soon after Apple launched the iPad this spring, the TaylorMade-adidas Golf Company bought about 80 of the tablets for its marketing and sales departments. Before long, most of those employees began using a content-sharing tool called Box as a way to recommend and comment on articles about leadership and personal growth, even though the IT department never sanctioned the software. Says Jim Vaughn, TaylorMade's head of sales development: "I'm not even sure how or when Box was put into the picture." But the software is now in use among hundreds of TaylorMade employees with tablets and smart phones.

image TaylorMade was able to adopt this software so quickly because it's not hosted on the servers at its Carlsbad, California, headquarters—but rather on remote computers in the cloud. It's a story that's happening over and over at many large corporations. Mobile devices are upending the way enterprisewide software is bought and run, shifting decisions from IT departments to the users themselves. This mirrors the pattern of prior sea changes in technology, most notably the PC itself, which was at first heavily resisted by IT departments.

Palo Alto, California-based Box—along with venture capital-backed startups with names such as Yammer, Nimbula, and Zuora—is pouncing on this latest disruption now that it's clear that iPhones, iPads, Android phones, and other gadgets are not just a consumer phenomenon but the future of collaborative work.

Globally, the enterprise software market is worth more than $200 billion a year. But more and more of that market is moving into the mobile cloud. "Mobile is very hard to do from behind a corporate firewall," says David Sacks, CEO of Yammer, a corporate social network that enables employees at companies such as Molson Coors, Cargill, AMD, and Intuit to find coworkers with the right expertise if they encounter problems in the field.

These low-cost mobile cloud services (Box costs $15 per user per month) are poised to displace a sizeable chunk of the larger market; the mobile cloud itself is poised to grow into a $5.2 billion sector with 240 million users by 2015, says ABI Research.

The main incumbent for enterprise software, Microsoft, has been slow to react, largely because most of its customers for cash-cow products such as Office (for personal productivity) as well as SharePoint and Dynamics (for enterprise resource planning) are mainly using on-premise servers, not cloud storage. Such programs are generally sold to big companies under multimillion-dollar site licenses.

The new mobile-savvy cloud services can plug into one another, working together, making them de facto allies against Microsoft. For instance, Box and Yammer both connect with Google Apps—a set of cloud-based tools for creating and managing documents that Google sells as an annual subscription for $50 per user. For Google, the emergence of these mobile cloud startups reinforces its strategy, says Chris Vander Mey, senior product manager for Google Apps. The idea is to offer an alternative to Microsoft Office in the cloud while making it easy to share data among programs that do other tasks. "That enables us to focus on a few things and do them really well, like e-mail," says Vander Mey.

It was only with the introduction of its Windows Phone 7 this fall that Microsoft began to strike back in earnest, with new development tools for the mobile cloud. The software giant is rolling out new cloud-based versions of SharePoint and Office 365 for its mobile devices. Alan Meeus, a senior product manager at Microsoft, says that the company now sees its new phone and its new Windows Azure software as platforms for creating any kind of application for the mobile cloud.

But the startups began developing these new services more than three years ago, when the launch of the iPhone became a harbinger of change. Millions of employees started carrying two devices—one personal, another for work.

The redundancy of people carrying two devices was the "aha" moment that led venture capitalists to put money into Box, says CEO Aaron Levie, who cofounded the company in his dorm room at the University of Southern California. He figured that the consumer device would become the new business device, leading to the need for new kinds of software. "We realized we needed to follow the users," he says. By now, Box counts enterprises such as Nike, Marriott, Dole, and Clear Channel Communications among its 60,000 business customers.

And just as consumers like to try out new apps all the time, individual users can simply visit a mobile apps store to try out new business software on their devices. "With cloud services, the cost of failure goes way down," says Mark Brennan, director of business solutions at Pandora Media, the Internet radio startup.

Pandora is relying on a low-cost mobile subscription service from Zuora to bill customers who upgrade to its paid app. And since can unify all its software services in one view on any device, says Brennan, it's okay if different employees choose different brands of software. One group may like to work entirely inside Google Docs, he says, while other groups use Salesforce Content because their files are linked to sales accounts.

Moving to become an entirely cloud-based enterprise, Pandora pretty soon won't host a single piece of enterprise software at its Oakland, California, headquarters. As more and more companies go this route, the disruption will ripple through every part of the software industry. "The status quo is always bad for innovation," says Brennan. "A toppling is always a good thing."

It’s a stretch to equate cloud-based file storage with a threat to Microsoft Windows, Office, Office 365 or any other Microsoft SKU.

Markus Klems posted Between cost cutting and innovation – striving for operational efficiency on 11/8/2010:

Enterprise IT has undergone changes in the past decade that can promote innovation, bringing down the cost and time it takes to test new concepts.  Emerging cloud services help enterprises access new markets and develop go-to-market strategies that their predecessors could only dream of.

image And yet we find CIOs presently under so much pressure that they are struggling to remain innovative.  The problem arises in part because technology has become more accessible.  There are actually too many people involved in the strategic IT decision-making process.  Functional heads are now just as likely as CIOs to make the case for technology investments to support their business strategies.

So, as IT has become the domain of the many, the CIO has become a facilitator.  The question is: how can the CIO add value to the enterprise he supports?

The valuable CIO

One way would be to help his organization plot a way through the maze of “game-changing” technologies that have been unleashed with the characteristic hype of the IT industry.

Cloud computing has created a multitude of opportunities for small and medium sized businesses.  As large Internet enterprises have opened their scalable IT platforms for external users, cloud services such as Amazon’s Web Services and Google’s App Engine have become tremendously popular. The most successful cloud services attract a vibrant community of start-up companies and independent software developers who shape an ecosystem of innovation around a set of core services.  This is the innovation that efficient networked services should promote – and the innovation that CIOs at larger enterprises should facilitate.

Motivating CIO innovation

The success of these services can be explained by two major motives:

a)    On-demand access to large-scale resource pools, and

b)    the never-ending quest toward organizational efficiency.

While a) unlocks new opportunities for small businesses, b) is a particularly intriguing value proposition for larger enterprises that already own huge datacenters.

Cloud services add a new dimension to datacenter virtualization that goes beyond server consolidation. Traditionally, datacenter decision makers use virtualization technology as a cost-saving tool to multiplex isolated applications across smaller numbers of physical servers. However, downsizing physical infrastructure carries an inherent cost: an increased risk that unexpected workload spikes can no longer be absorbed. Amazon’s CTO Werner Vogels points in a different direction: the real value of virtualization is rather the enhancement of application deployment and management capabilities.
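The consolidation trade-off is easy to quantify with a back-of-the-envelope sketch (all numbers here are invented for illustration):

```python
# Steady-state loads of four isolated applications, in arbitrary capacity units.
steady_loads = [10, 15, 20, 25]
host_capacity = 40  # capacity of one physical server

def hosts_needed(total_load, capacity):
    """Minimum number of hosts to carry the load (ceiling division)."""
    return -(-total_load // capacity)

total = sum(steady_loads)                                  # 70 units
consolidated_hosts = hosts_needed(total, host_capacity)    # 2 hosts
headroom = consolidated_hosts * host_capacity - total      # 10 units spare

# An unexpected spike of 25 units no longer fits on the consolidated estate,
# whereas four dedicated hosts (160 units) would have absorbed it easily.
spike = 25
print(consolidated_hosts, headroom, spike <= headroom)  # 2 10 False
```

Consolidation cuts the server count but leaves only 10 units of slack; that shrinking headroom is the hidden cost described above.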

It’s here that CIOs can add real value to their organization in the quest for efficiency: the CIO must not become a facilitator, but remain an innovative force within the corporation.

And what would an innovative force do in the current enterprise IT environment?  In my next post, I will examine three ways in which the CIO can deliver value through the deployment of cloud services, and the greater efficiency it affords.

Markus Klems is a researcher at the Karlsruhe Institute of Technology

Visit our Gartner mashup page for our full reports from the Gartner Symposium/ITXpo.

Lydia Leong (@cloudpundit) posted Symposium, Smart Control, and cloud computing on 11/7/2010:

I originally wrote this right after [the Gartner] Orlando Symposium and forgot to post it.

image Now that Symposium is over, I want to reflect back on the experiences of the week. Other than the debate session that I did (Eric Knipp and I arguing the future of PaaS vs. IaaS), I spent the conference in the one-on-one room or meeting with customers over meals. And those conversations resonated in sharp and sometimes uncomfortable ways with the messages of this year’s Symposium.

The analyst keynote said this:

An old rule for an old era: absolute centralized IT control over technology people, infrastructure, and services. Rather than fight to regain control, CIOs and IT Leaders must transform themselves from controllers to influencers; from implementers to advisers; from employees to partners. You own this metamorphosis. As an IT leader, you must apply a new rule, for a new era: Smart Control. Open the IT environment to a Wild Open World of unprecedented choice, and better balance cost, risk, innovation and value. Users can achieve incredible things WITHOUT you, but they can only maximize their IT capabilities WITH you. Smart Control is about managing technology in tighter alignment with business goals, by loosening your grip on IT.

The tension of this loss of control was by far the overwhelming theme of my conversations with clients at Symposium. Since I cover cloud computing, my topic was right at the heart of the new worries. Data from a survey we did earlier this year showed that less than 50% of cloud computing projects are initiated by IT leadership.

Most of the people that I talked to strongly held one of two utterly opposing beliefs: that cloud computing was going to be the Thing of the Future and the way they and their companies would consume IT in the future, and that cloud computing would be something that companies could never embrace. My mental shorthand for the extremes of these positions is “enthusiastic embrace” and “fear and loathing”.

I met IT leaders, especially in mid-sized business, who were tremendously excited by the possibilities of the cloud to free their businesses to move and innovate in ways that they never had before, and to free IT from the chores of keeping the lights on in order to help deliver more business value. They understood that the technology in cloud is still fairly immature, but they wanted to figure out how they could derive benefits right now, taking smart risks to develop some learnings, and to deliver immediate business value.

And I also met IT leaders who fear the new world and everything it represents — a loss of control, a loss of predictability, an emphasis on outcomes rather than outputs, the need to take risks to obtain rewards. These people were looking for reasons not to change — reasons that they could take back to the business for why they should continue to do the things they always have, perhaps with some implementation of a “private cloud”, in-house.

The analyst keynote pointed out: The new type of CIO won’t ask first what the implementation cost is, or whether something complies with the architecture, but whether it’s good for the enterprise. They will train their teams to think like a business executive, asking first, “is this valuable”? And only then asking, “how can we make this work”? Rather than the other way around.

Changing technologies is often only moderately difficult. Changing mindsets, though, is an enormous challenge.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA) and Hyper-V Cloud 

Alan Le Marquand continues his series with Creating a Private Cloud – Part 2: Configuration on 11/8/2010:

In this post I’ll carry on from part 1 and cover the configuration side of the System Center products to deliver Private Cloud type functionality.

Where should you be after Part 1?

So, a quick recap from part 1: you should have Windows Server 2008 R2 installed, with the additional Roles and Features etc. installed. You should have promoted this machine to be a domain controller, with a name of your choice, and have DNS working.

SQL Server 2008 or higher should be installed, along with the System Center VMM server and admin console, plus System Center SSP 2.0. So right now we have everything installed but nothing configured. That’s the next step.

SQL Server tip

I’ve talked about and used SQL Server 2008 R2 Standard in this discussion so far. If you want to extend SCVMM to include System Center Operations Manager 2007, you will encounter two minor annoyances. First, SCOM doesn’t play nicely with SQL Server 2008 R2 using the standard install procedure off the DVD, which is not too surprising considering the release dates, but it isn’t the end of the install. You can use DBCreateWizard.exe to work around the installer’s reluctance to play nicely with R2; the procedure is documented in this KB article. The other annoyance is that you will need to uninstall the VMM admin console and reinstall the Ops Manager version. This is lower down the VMM install page; the reason for this is that the management packs are added in with this install.

The other tip: ensure the SQL Agent service is running. The Agent runs a daily job that accumulates the costs for the registered Business Units.

Just a couple of tips I found creating test setups.

Self-Service Portal Tips.

Now a couple of points to remember on the SSP install that I didn’t cover in part 1; apologies for not calling this out, and thank you to Didier for posting this on the blog to remind me. The deployment guide does say to enable Windows Authentication on the IIS server before installing SSP 2.0. Skipping this step could cause the install to fail; it will also cause the portal to prompt you for credentials each time you access it. Windows Authentication is not enabled by default.

Also worth noting is that during the SSP install you are asked for a site name and port number. Using host header names and a GlobalNames zone in DNS, you can set up a single-name site. On my setup, I configured a GlobalNames zone in DNS and added ITasaService to that zone, mapping it to the FQDN of the server. I then went back into IIS and altered the bindings for the SSP website to use port 80, and also added the host header ITasaService. My SSP users can now access the site via http://ITasaService.

One thing to note with the RC build, which has happened to me on every install of SSP 2.0 I’ve done: check the Self-Service Portal service after a reboot. I’ve noticed that, even though it’s set to automatic, the service may not start. It has always started when I’ve gone in and started it manually. It’s just a quirk of the RC build, but it’s still a pain when you are doing a demonstration and forget this trick.

Configuring SCVMM

I’m going to start with SCVMM, since this is the core of the environment. Basically, to create a self-service Private Cloud environment you need to configure the following:-

  • User Role group for the portal
  • Add any base images and ISO files to the library
  • Create templates of the workloads you want to offer.

It’s a surprisingly short list.

So the first step is to configure a new user group for the Self-Service side. The deployment docs for SSP 2.0 tell us to create a group called Self Service User and give it the Self Service role. This group is populated automatically by the portal as new Business Unit Admins are identified.

The next step is relatively easy too. The SCVMM library can store a number of file types, from ISOs to VHDs. When dealing with self-service requests, the ideal is that users pick from a predefined set of templates that you’ve already configured and made ready for use.

The next step is the creation of your templates. Templates are just preconfigured images that have been generalized for easy deployment. You can either take an existing VHD or install a new OS, then configure it the way it needs to be and prepare it. On the VM menu, selecting New Template will take the image and do all the work needed to generalize it. When you start this process the source image is destroyed, so if you need a copy, now is a good time to take one.

The New Template Wizard asks for some basic information and the location where you want to store the template. When you configured SCVMM’s library component during setup you created a share for the library; within that share you can create a folder for templates to help organize these files.

Once a template is created you have the minimum needed to try out the self-service portal, which is what we will now configure.

Configuring the Self-Service Portal

As the administrator, your first task is to configure the portal. Connecting to the website as the administrator, you need to configure just two of the four options on the Settings link. The main one is the “Configure Datacenter Management” link, where you configure these properties.

  • VMM Server. This is the FQDN of the machine you installed the VMM server role on.
  • Device. Configure the names of any SANs or load balancers you have.
  • Networks. These map to the names of the networks you configured in Hyper-V. The names here must match those in the Virtual Network Manager in the Hyper-V console.
  • Active Directory. Add any Domains you have.
  • Quota Cost. Enter the default values for memory and storage. You can alter these on a template basis later.
  • Environment. Enter the names of any environments you want to use to group infrastructures. This is purely for process and organization.

Save and close, and you are almost set.

The next part is to configure the templates. On the Template page, enter the library server you’d like to get the templates from. If the Self-Service Portal service is not running, this is the point where you find out, as the service will not return any information from the library server.

The list you configure here is the list that BUs can access; you can also set a cost for each template.
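Together with the quota-cost defaults, these per-template costs feed the daily job the SQL Agent runs. A toy sketch of how such an accumulation might work (the rates and the formula are invented for illustration; the real SSP job has its own schema):

```python
# Hypothetical daily charge-back rates, per day.
rates = {"memory_gb": 0.10, "storage_gb": 0.02, "template": 1.00}

# A business unit's running VMs and their allocated resources.
business_unit_vms = [
    {"name": "bu-web-01", "memory_gb": 2, "storage_gb": 40},
    {"name": "bu-web-02", "memory_gb": 2, "storage_gb": 40},
    {"name": "bu-sql-01", "memory_gb": 4, "storage_gb": 100},
]

def daily_charge(vms, rates):
    """Accumulate one day's cost for a business unit's VMs."""
    total = 0.0
    for vm in vms:
        total += vm["memory_gb"] * rates["memory_gb"]
        total += vm["storage_gb"] * rates["storage_gb"]
        total += rates["template"]  # flat per-VM template cost
    return round(total, 2)

print(daily_charge(business_unit_vms, rates))  # -> 7.4
```

Run every day, a job along these lines is what turns the quota and template costs into the charge-back figures the reporting side of the portal exposes.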

Now you are ready to try out the configuration of Business Units, infrastructures, and virtual machines. As the administrator you get to approve all requests, so while anyone can request a BU, you get the final say. If you are testing this, don’t use real BU names, because once you use a name you can’t delete it later, as I found when setting up a demo.

So what do you configure? I’ll run through the basics to get the system working in a test environment. On the BU registration, a lot of it is simply documentation; the administrators are the one area that interacts with VMM. The names you enter here are added to the role we configured earlier. When submitted, the request appears as an “Onboarding” request in the request queue.

Once approved, the BU can set up its infrastructures. From the first post, the diagram of what an infrastructure consists of gives us a good idea of what the request process will cover. The request is a 3-step process:-

  • Configure the Infrastructure name.
  • Configure the Service and Service Role
  • Add Templates for the VMs.

This maps to our diagram and pulls in the information and configurations we made earlier.

Configure the Infrastructure name.

The first part asks for the infrastructure name, the priority, how long the BU expects to keep it, and a forecast of the capacity it will use. The capacity you enter here is used when calculating free space when starting VMs or when creating new services and service roles. If you exceed the capacity later, you will need to submit an infrastructure change request to increase it.

Configure the Service and Service Role

In the Service and Service Roles step we set out how the infrastructure is organized. We provide the name of the service and add it to an environment; the environment is chosen from the list we added earlier when setting up the portal. The billing code and datacenter are names you use internally. It’s when you get to the service quota that you start to use information from the other pages. As mentioned above, the infrastructure capacity from the first page is used as the cap for services and roles.

The networks are those set up earlier, or you can request them on this page. These have to match the names used in the Hyper-V Virtual Network Manager. The services can have their own access control; you can enter additional admins beyond those for the BU, plus members who can access the service.

The service role is tucked away at the end; all you need to enter is the role name and the number of images it needs. The rest can be left at the defaults for our test environment.

Add Templates for the VMs.

The final part of the infrastructure request is to assign templates to the request. The list you get is the one from the template settings for the portal. The templates selected here are the ones that will be available when the BU tries to create VMs. If a template is not selected here and is needed later, a new infrastructure change request has to be created.

Once approved the virtual machine creation can begin and the billing starts.

Creating Virtual Machines

We’ve now got to the point where BUs can request virtual machines and manage their environment. The control on resources comes in here: when requesting a VM you are asked for the number of machines, their names, the infrastructure details, and the template to use. If they exceed the resources, the creation process stops. BUs can’t say one thing about capacity and then do something else.
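That enforcement amounts to a pre-flight check of the request against the forecast capacity. A sketch of the behavior (hypothetical numbers and field names, not the actual SSP implementation):

```python
# Forecast capacity the BU entered in its infrastructure request.
infrastructure = {"memory_gb": 16, "storage_gb": 200}

# Resources already consumed by the BU's running VMs.
in_use = {"memory_gb": 12, "storage_gb": 120}

# Per-VM footprint of the chosen template.
template = {"memory_gb": 2, "storage_gb": 40}

def can_create(count, template, infrastructure, in_use):
    """Stop the creation process if the request would exceed the forecast."""
    for resource, cap in infrastructure.items():
        if in_use[resource] + count * template[resource] > cap:
            return False
    return True

print(can_create(1, template, infrastructure, in_use))  # -> True
print(can_create(3, template, infrastructure, in_use))  # -> False
```

If the BU genuinely needs more than it forecast, the way out is the infrastructure change request described earlier, not a bigger VM request.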

Once created, the VMs are managed and controlled from the VM page; all members of the BU roles can start or stop the VMs created. You can also monitor the jobs from the Jobs page or, as a DC administrator, from the SCVMM console.

From this point on, you have enough configured to try things out. I used this information to demonstrate a Private Cloud at a trade show in the UK recently.

Beyond the Basics covered here.

These two posts covered the basics. With SCVMM and SCSSP 2.0 you can test out a simple Private Cloud scenario, and by adding more Hyper-V hosts you can certainly extend this model. However, there is more. I haven’t included System Center Operations Manager (SCOM), System Center Service Manager (SCSM), System Center Configuration Manager (SCCM) or the Offline Image Servicing Solution Accelerator in either of the posts. The roles they play become more critical the larger your deployment becomes, and they provide much more flexibility. To help, we’ve released some guidance and guides; these can be found at

Also the reporting and Dashboard components of the portal are another post on their own. These are important if you want to implement true charge-back modelling.

Mary Jo Foley (@maryjofoley) reported Microsoft teams with six server vendors to offer certified private-cloud stacks in a comprehensive post of 11/8/2010 to ZDNet’s All About Microsoft blog:

Microsoft is teaming with a handful of server vendors to provide validated private-cloud stacks, consisting of a variety of hardware, plus Microsoft virtualization and management software.

Microsoft took the wraps off its new “Hyper-V Cloud” programs and initiatives at the opening of its TechEd Europe conference in Berlin on November 8.

Microsoft is calling the reference architectures it has developed for customers who want faster and more “risk-free” private-cloud deployments “Hyper-V Cloud Fast Track.” Server vendors that have agreed to provide validated stacks as part of the Fast Track program include Dell, Fujitsu, Hitachi, HP, IBM and NEC. Starting on November 8, Dell and IBM will begin offering their pre-configured Hyper-V Cloud systems, and the other partners will follow in the coming months, according to Microsoft.

What’s included in the Hyper-V Cloud Fast Track stack? Windows Server 2008 R2 with Hyper-V; System Center Operations Manager; System Center Virtual Machine Manager; System Center Service Manager; Opalis (workflow automation); and System Center Virtual Machine Manager R2 Self-Service Portal. The Self-Service Portal is the product formerly known as the Dynamic Datacenter Toolkit. Microsoft officials announced the release-to-the-Web of the final version of the Self-Service Portal at Tech Ed today. Other than Windows Server, Hyper-V and Virtual Machine Manager, these components are “recommended,” rather than required, elements of the Hyper-V Cloud stack.

(A side note: In recent months, Microsoft also has cited its AppFabric middleware as part of its private-cloud solution. However, neither Windows AppFabric nor Windows Azure AppFabric is included as part of the Hyper-V Cloud stack outlined today. This is because the Hyper-V Cloud stack is more about base-level infrastructure than the cloud-application layer, the Softies said when I asked. At some point in the not-too-distant future, however, Microsoft may introduce more reference specs outlining private-cloud processes and recommendations at the Exchange, SharePoint and SQL Server layer, officials said.)

In addition to providing the spec behind the stack, Microsoft also is announcing a new Hyper-V Cloud Service Provider Program, via which more than 70 service providers worldwide will be able to implement the newly announced private cloud stacks. Microsoft also is making available Hyper-V Cloud Deployment Guides for customers who’d prefer to build out their own private clouds.

With its certified stack program, “VMware tells you what kinds of storage, blades and other components you need to buy,” said David McCann, General Manager of Product Management for Microsoft’s Windows Server & Cloud Division.

Microsoft’s goal in providing the Fast Track stacks is to offer mid-market and high-end customers more choices from more vendors when building shared networking, compute and storage resource pools inside their datacenters, he said.

Microsoft is leaving it up to each of the partnering vendors as to which storage or networking or blade devices they recommend. The only requirement is that the resulting Hyper-V private-cloud stacks follow the 90-page guideline that McCann’s team, Microsoft Consulting Services and the OEMs created over the past eight months.

“We can’t tell server vendors how to build out their servers. We are a software vendor,” said McCann. “We’ve been appropriately humble and are not trying to be a bully.”

Microsoft has been fleshing out its private-cloud strategy and deliverables for a couple of years. But the validated stack approach is new for the company. Microsoft is counting on the new strategy to increase its virtualization share, boost customers’ confidence in the private cloud and create new services opportunities for its partners, McCann said.

Microsoft also is continuing work on its Windows Azure Appliance — its private cloud-in-a-box offering that it announced this past summer. Microsoft is working with several of the same server partners, specifically, Dell, HP and Fujitsu, to create customized datacenter containers running Azure that will be available to customers who want to host their applications and data but not necessarily inside Microsoft’s own datacenters. It sounds like the appliances won’t be available to customers this year, as Microsoft officials said earlier this summer; the first part of 2011 seems to be the new Windows Azure Appliance delivery target.

McCann said Microsoft is expecting the Azure Appliances and the Hyper-V Cloud servers to appeal to different customer segments. Those with highly proprietary data who want/need to keep it on premises or those who absolutely need to ensure that their data stays inside the country may prefer the Hyper-V Cloud approach. “It depends on the vertical, the country, the sovereignty laws, the age of the application and other factors,” McCann explained.

Analysts at IDC were bullish about the new Hyper-V Cloud stack approach.

“(I)t is reasonable to expect that Hyper-V Cloud customers will quickly be able to both self-provision virtual machines (VMs) and optimize infrastructure dynamically using automated workload migration technology driven by pre-defined application performance policies and thresholds. For organizations that have historically overlooked System Center as a viable enterprise data center management software option, Hyper-V cloud signals that it is time to take a second look,” said Mary Johnston Turner, Research Director for Enterprise Systems Management.

IT pros: What’s your take on the Hyper-V Cloud stacks? Do they make private-cloud computing more interesting? Or do you agree with some of Microsoft’s competitors who claim the private cloud is a fake cloud?

Derrick Harris posted Microsoft Pushes Private Cloud Agenda with Hyper-V Cloud Services to Giga Om’s Structure blog on 11/8/2010:

Microsoft (s msft) has expanded its cloud computing options yet again, this time with a set of programs and offerings centered on its Hyper-V hypervisor. More strategic than technological, the new programs – Hyper-V Cloud Fast Track, Hyper-V Cloud Service Provider Program, Hyper-V Cloud Deployment Guides and Hyper-V Cloud Accelerate – essentially affirm Hyper-V plus System Center as Microsoft’s internal cloud play by slapping the “cloud” label on them.

The most significant program appears to be Fast Track, through which customers can purchase preconfigured architectures running Microsoft’s cloud software on Dell (s dell), Fujitsu, Hitachi (s hit), HP (s hpq), IBM (s ibm) or NEC hardware. HP has already productized this combination in the form of the HP Cloud Foundation for Hyper-V, which houses Hyper-V and System Center on HP’s BladeMatrix converged-infrastructure system. Microsoft already has cloud computing partnerships with HP and Dell, so their involvement shouldn’t be at all surprising.

The Hyper-V Cloud Service Provider Program looks like a take on the VMware vCloud strategy, only absent any talk of hybrid cloud computing. Whereas vCloud service-provider partners incorporate the vCloud API and vCloud Director to enable various degrees of interoperability between public and private VMware-based clouds, there is no mention of such a connection with the new Microsoft program. Partners, more than 70 worldwide, just offer infrastructure as a service built on Hyper-V and System Center. Notably, however, Microsoft did recently make Windows Server applications portable to Windows Azure.

The Hyper-V Cloud Deployment Guides and Cloud Accelerate are consulting services to help customers design and fund their Hyper-V clouds.

The new programs probably are necessary to help Microsoft sell Hyper-V and System Center as foundational pieces for internal clouds (although rebranding or prepackaging under the cloud banner wouldn’t hurt, either). Its biggest competition in hypervisor-based clouds is VMware (s vmw) (sub req’d), which has done a great job (sub req’d) marketing its myriad virtualization products as cloud software. Considering VMware’s significant leads in market and mind share, Microsoft needs to help users make the Hyper-V-is-cloud-computing connection if it wants to close the gap.

But the Hyper-V Cloud lineup also perpetuates Microsoft’s two-headed cloud attack. That’s not necessarily a good thing, because it forces Microsoft customers (those not cut out for the Windows Azure Appliance, at least) to choose between two very distinct platforms depending on their plans. All signs point to internal clouds seeing mass adoption first, which is why many internal-cloud software vendors are pushing products that mirror the public cloud experience. However, as I wrote several months ago (sub req’d), Microsoft appears determined to incorporate on-premise features into Windows Azure without integrating lessons learned in the public cloud back into its on-premise software. Microsoft is doing great things with Windows Azure, so why not bring some of that into Hyper-V and System Center to make them that much stronger?

Image courtesy of Microsoft.

Related content from GigaOM Pro (sub req’d):

Klint Finley claimed Channel Partners Need New Revenue Models to Avoid Being Left Behind in the Cloud Migration Process in an 11/8/2010 post to the ReadWriteCloud blog:

Forrester recently released a report titled "Channel Models In The Era Of Cloud." Forrester found that as vendors take advantage of cloud computing, channel partners are becoming increasingly nervous about being left behind. According to the report, more than 60% of tech industry revenue is generated by channel partners, "often by serving customers deemed otherwise unreachable by tech vendors." The firm cites the following as examples of channel companies: distributors, value-added resellers (VARs), direct market resellers (DMRs), managed service providers (MSPs), application hosting providers, and systems integrators (SIs) in the tech industry.

Forrester anticipates different classes of channel partners will be affected differently. Distributors and implementation-focused VARs will be hit the hardest. Channel partners are expecting to make less money in hardware and software sales and more in managed services and application hosting.

Managed services is the hottest area of growth, but everyone wants in on this game - different types of channel partners, vendors, and telcos alike. And vendors are making deals with telcos that are making channels very uncomfortable. For example, the Windows Azure Platform Appliance and Office 365 are available only to select telcos. Forrester predicts a reduction in the number of channel partner companies of about 12% to 15% as the managed services battle heats up.

Forrester does identify some alternate markets for channels to pursue. The firm's research suggests that many organizations remain hesitant about public clouds and will prefer public-private hybrid clouds, increasing the demand for hybrid cloud integration services. Forrester suggests that these services may be "low-hanging fruit" that channels should take advantage of.

Forrester notes that channel partners are also expanding their businesses into application development, outsourcing, smart computing (or, as we've been calling it, "the Internet of Things") integration, and financial consulting.

Forrester offers the following recommendations to vendors seeking to maintain mutually beneficial relationships with channel partners:

  • Help them overcome business model roadblocks
  • Make them better marketers
  • Connect them to each other in a collaborative ecosystem

Marius Oiaga asserted “With the introduction of Hyper-V Cloud Microsoft now emphasizes that it owns the most comprehensive collection of Cloud offerings available on the market today” in a deck for his Hyper-V Cloud, Microsoft Democratizes Private Cloud Infrastructure Deployments post of 11/8/2010 to the Softpedia blog:

At TechEd Europe 2010, Brad Anderson, corporate vice president of Microsoft’s Management and Security Division, revealed that Hyper-V Cloud provides customers looking to embrace cloud computing with yet another alternative on top of Windows Azure and the Windows Azure Platform Appliance, both of which were announced some time ago.

Essentially, there are no more excuses for companies not to jump into the Cloud. Microsoft is making available Platform as a Service, Software as a Service and Infrastructure as a Service.

Hyper-V Cloud complements existing Cloud offerings from the Redmond company such as Windows Azure and the Windows Azure Platform Appliance, democratizing the building and deployment of private cloud infrastructures.

What this means is that customers will be able to access a range of resources from Microsoft and its partners in order to implement private clouds with their organizations as fast as possible and with very little risk.

At the heart of Hyper-V Cloud is, as you might have already guessed, the company’s hypervisor, which is included by default in the latest iteration of Windows Server.
But of course, the virtualization technology goes hand in hand with Windows Server 2008 R2.

Moreover, on top of Windows Server 2008 R2 and Hyper-V, the Hyper-V Cloud also involves Microsoft System Center.

According to Anderson, Microsoft has inked agreements with no less than six hardware vendors in order to supply customers not only with prevalidated infrastructure for their deployments but also with additional services.

“Many of our customers have told us they want the benefits of Cloud computing – fast deployment, increased agility, lower costs – but with tight control over things like physical infrastructure and security policies,” Anderson explained.
“Our new private cloud offerings fulfill that need at the infrastructure level, while providing a clear migration path to Cloud services at the platform level.”

The Hyper-V Cloud Fast Track Program
Microsoft partnered with Dell, Fujitsu, Hitachi, HP, IBM, and NEC in order to offer customers what Anderson referred to as choice in terms of private Clouds.
Organizations can leverage predefined, validated configurations from the software giant and its partners noted above, through the Hyper-V Cloud Fast Track Program.

The reference architectures cover all the aspects of building and deploying a private Cloud, from compute to storage, but also networking, virtualization and management.

One example of the Hyper-V Cloud Fast Track Program in action is the partnership between the Redmond company and HP.

The two companies have jointly deployed the HP Cloud Foundation for Hyper-V, mixing together HP BladeSystem Matrix servers with System Center and Windows Server 2008 R2 Hyper-V.

Furthermore, HP is keen on getting customers up and running as fast as possible, and through the HP CloudStart for Hyper-V it promises that private Clouds can be launched within 30 days.

Additional Hyper-V Cloud programs
With the Hyper-V Cloud Service Provider program, in excess of 70 service providers worldwide working with Microsoft are offering customers infrastructure as a hosted service.

Companies such as Korean Internet Data Center, Fasthosts, Agarik and Hostway are enabling organizations to implement both private and public Clouds fast and with reduced costs.

Hyper-V Cloud Deployment Guides will be offered for those companies that will focus on making the best of the infrastructure that they already have in place in order to build private clouds.

Anderson promised that guidance will be available through Microsoft Consulting Services, which in turn will leverage extensive expertise from previous MCS customer engagements.

In addition, the software giant will offer further help for organizations looking to embrace the Cloud through the Hyper-V Cloud Accelerate program.

Both customers and partners will receive funded assessments, as well as help with proofs of concept and with actual deployment into production environments, from MCS and from members of the Microsoft Partner Network.

The fact is that Microsoft is still not offering its Cloud platform [appliance] for private Cloud deployments, but at the same time the company is doing everything save actually allowing customers to deploy Windows Azure in their own datacenters.

Lori MacVittie (@lmacvittie) asserted Like candy bars, it’s just a lot less messier than the alternative as a preface to her Why Virtualization is a Requirement for Private Cloud Computing post of 11/8/2010 to F5’s DevCentral blog:

Caramel. Chocolate nougat. Coconut. No matter what liquid, flowing, tasty goodness is hidden inside a chocolate bar, without the chocolate shell to hold it we’d be in a whole lot of trouble, because your mom would so be on you for that mess, let me tell you.


Every food-stuff that is liquid or gooey or both is encased in some sort of shell; even the tasty Swiss cheese and prosciutto hidden inside chicken cordon bleu is wrapped in, well, chicken. In many cases the shell isn’t even all that appealing, it’s all about getting to the center of that Tootsie Roll Pop as quickly as possible.

And that’s really what enterprise IT wants to do, too. They want to get to the heart of things as quickly as possible and as efficiently (and cost effectively) as possible. That’s why, in a chocolate-shell, (sure, nutshell would be more appropriate but it doesn’t fit well with the analogy, does it?) virtualization might as well be a requirement for the implementation of a private (on-premise) cloud computing initiative.


If you look at what you’re managing with cloud computing, it’s all about resources. RAM. CPU. Storage. Network. Those are the components that are fluid and without some sort of shell in which to “wrap them up neatly” they’re just going to make a big mess all over the data center. Now, cloud computing providers – many of whom were in business long before virtualization became “the one true way” to manage those fluid resources – may or may not leverage virtualization a la VMware, Microsoft, or Citrix to enable the management of the resources which make up the foundation of their dynamic, cloud computing environment. And they almost certainly don’t leverage any of the modern orchestration systems to manage those virtual machines (if they use virtualization) because, well, they didn’t exist when most providers built their implementations.

And it’s unlikely that providers will use those frameworks in the future because they tend to have more stringent needs in terms of multi-tenant support and integration with metering and billing systems. Additionally, they need to provide a method of management via an API and that’s almost always customized for their specific environment.

An enterprise, however, is very different.


While it’s certainly the case that virtualization is not a requirement for cloud computing, for the enterprise it’s as close as it gets. Without virtualization the systems and frameworks required to package, provision, and easily manage applications and their associated resources would require so much investment and time that ultimately it would take decades for them to implement and subsequently realize the benefits.

Virtualization therefore provides that all important encasing, chocolate shell in which to deploy applications and resources and to integrate those packages into a broader automation and orchestration framework. That framework may be custom-built or it may be based on the management offerings coming from the virtualization provider (a la VMware vSphere and vCloud and Microsoft Virtual Machine Manager, etc…). The enterprise needs the “package” because in most cases it simply doesn’t have the time to build it out themselves. Worse, the support and long-term management of such custom systems impinges far more on the realization of benefits than even developing the initial packaging system.

Leveraging “industry standard” - and nearly commoditized at this point - virtualization offerings also affords enterprises mobility and a growing selection of third-party management and migration tools that leverage the APIs and frameworks of those virtualization offerings. The chocolate shell is fairly standard, even if its contents are customized for the enterprise. It is familiar, recognizable, and there are an increasing number of folks in the field with expertise in managing such products.

Enterprises have other things to do with their time, like integrate applications, analyze data, troubleshoot the network, and develop new applications and features for existing applications. Providers don’t have these distractions, they have the time to build out whatever they’d like under the covers and abstract that into a nice, neat API that’s easy to consume. That’s their line of business, their focus. But the enterprise? Cloud computing and automation is a means to an end; a more efficient, automated data center, a la IT as a Service. Virtualization, especially industry “standard” virtualization solutions that are well supported by an existing and dynamically evolving management and tool ecosystem, gives enterprise IT a wrapper-ready solution that’s easy to consume.


Perhaps “requirement” is too strong a word, as there will always be those enterprises large enough and diverse enough and with large enough budgets to “roll their own”. But virtualization is – or should be – a best practice for any enterprise considering the implementation of a cloud computing environment. Without it, organizations will be left to their own devices (pun intended) for most related solutions – such as management and integration with charge-back/metering services – and consequently will take significantly more time to reach parity in terms of return on their investment. If they ever get there.

It is also a requirement in that virtualization of applications results in the ability to more easily and affordably entertain the possibilities of new architectures, such as those proposed by Dave McCrory in “The Real Path to Clouds.” His “phase 3” architecture is a ways off in terms of implementation, but just moving to phase 2 (scale out at both the web and application server tiers) can be problematic for larger organizations. Virtualization affords the ability to “play with” (for lack of a better way to say it) new architectures without significant investment and risk to the availability and business continuity required of production systems. Consider that it is far easier to share resources across both the web and application server tiers when in a virtual form factor, thus making it possible to adjust, dynamically, the resources necessary to sustain availability and performance requirements during spikes or peak usage periods.

So while virtualization is not a requirement for “cloud computing”, in general, it is for most enterprises who are planning on moving forward with cloud computing and data center automation and may be interested in new architectures that are better able to adjust to hyperscale requirements.

Microsoft’s public relations team claimed “Hyper-V Cloud programs will help customers and partners accelerate private cloud deployment” as it announced “Private Cloud” Takes Center Stage at Microsoft Tech•Ed Europe 2010 in an 11/8/2010 press release:

Less than two weeks after announcing major updates to Windows Azure at the Microsoft Professional Developers Conference, Microsoft Corp. again demonstrated the breadth of its vision for cloud computing today when it unveiled Hyper-V Cloud, a set of programs and offerings that makes it easier for businesses to build their own private cloud infrastructures using the Windows Server platform. Dell Inc., Fujitsu Ltd., Hitachi Ltd., HP, IBM Corp. and NEC Corp. have already signed on as Hyper-V Cloud partners, and will be working with Microsoft to deliver prevalidated infrastructure and services that enable organizations to implement private clouds with increased speed and reduced risk.

“Many of our customers have told us they want the benefits of cloud-computing — fast deployment, increased agility, lower costs — but with tight control over things like physical infrastructure and security policies,” said Brad Anderson, corporate vice president of Microsoft’s Management and Security Division. “Our new private cloud offerings fulfill that need at the infrastructure level, while providing a clear migration path to cloud services at the platform level.”

Building Private Clouds With Hyper-V Cloud and the Windows Server Platform

Windows Server 2008 R2, Microsoft’s server platform, already delivers comprehensive virtualization and management capabilities through Windows Server 2008 R2 Hyper-V. These technologies, along with Microsoft System Center, provide the components organizations need to implement private clouds. With the new Hyper-V Cloud Fast Track program, Microsoft and its partners will deliver a broad choice of predefined, validated configurations for private cloud deployments, comprising compute, storage, networking resources, virtualization and management software. These programs and offerings help reduce the risk and increase the speed of private cloud deployments.

As part of the Hyper-V Cloud Fast Track program, HP and Microsoft have jointly developed HP Cloud Foundation for Hyper-V, a reference architecture that combines HP BladeSystem Matrix, Microsoft System Center and Windows Server 2008 R2 Hyper-V to provide customers with a proven foundation for running business applications within a private cloud computing environment. This solution helps reduce infrastructure deployment time, enables dynamic flexing of physical and virtual resource pools, and delivers comprehensive management of Hyper-V virtualized infrastructure and applications. HP also offers HP CloudStart for Hyper-V to deliver to clients a fully operational, private cloud environment within 30 days.

“Organizations are challenged with IT sprawl and siloed infrastructure, which complicate the deployment, maintenance and management of IT services,” said Mark Potter, senior vice president and general manager, Industry Standard Servers and Software, HP. “Private cloud environments based on HP Cloud Foundation for Hyper-V and delivered by HP Cloud consulting services address these challenges by enabling domain experts to configure IT resources once, and IT service owners to provision and reconfigure resources as needed.”

In addition, Microsoft announced other components of Hyper-V Cloud:

  • Hyper-V Cloud Service Provider Program. More than 70 service providers around the world offer infrastructure as a finished, fully hosted service built on Microsoft technology. This option delivers a fast, cost-effective implementation for cloud services, both private and public. Service providers include Korean Internet Data Center; Fasthosts (U.K., U.S.); Agarik (France); and Hostway Corp. (U.S., U.K., Netherlands, Germany, France, Belgium, Romania).
  • Hyper-V Cloud Deployment Guides. For customers who want to build their own private clouds on top of existing infrastructure investments, Microsoft is now offering tools and guidance based on expertise developed during hundreds of Microsoft Consulting Services (MCS) customer engagements over the past few years. This element of the Hyper-V Cloud program optimizes for high levels of flexibility, control and customization.
  • Hyper-V Cloud Accelerate. To tie it all together, Microsoft is making significant investments to help customers and partners fund assessments, proofs of concept and production deployments. These services will be delivered by MCS and pre-qualified members of the Microsoft Partner Network.

More information on Hyper-V Cloud and additional details on how Dell, Fujitsu, Hitachi, HP, IBM and NEC are participating in the program can be found at

More Platform Enhancements

Microsoft made several other announcements at today’s event, including these:

  • The general availability of Microsoft System Center Virtual Machine Manager Self Service Portal 2.0, which makes it easier for customers to pool, allocate, consume and manage their compute, network and storage resources — critical components for a private cloud platform. More information on the new System Center Virtual Machine Manager Self Service Portal 2.0 can be found at
  • The availability of the release candidate of Forefront Endpoint Protection 2010, which is built on System Center Configuration Manager to unify management and security for desktops and servers.

More details on these and other announcements can be found in the full Tech•Ed Europe news fact sheet at
Following is the HTML version:

Server and Tools Business News: Microsoft Tech•Ed Europe 2010: Nov. 8–12, 2010

Microsoft Corp. enables customers and partners to take advantage of the cloud in the manner that best meets their business needs, whether it is through the public cloud — using the Windows Azure platform, the general purpose operating system for Platform as a Service (PaaS) — or taking advantage of the private cloud or a combination of the two. After announcing a number of new enhancements to the Windows Azure platform at the Microsoft Professional Developers Conference, the company is focusing on the private cloud at Tech•Ed Europe.

While PaaS represents the future of cloud computing, today many organizations are taking an incremental approach to cloud computing and currently require a higher level of controls or customization within their own IT environments. Microsoft’s approach to the cloud includes Infrastructure as a Service (IaaS), so customers and partners can build private cloud solutions on top of their existing datacenter investments.

For customers and partners looking to implement a private cloud strategy, Microsoft provides all of the necessary elements — including an interoperable server, virtualization, management and security capabilities — through Windows Server 2008 R2 with Hyper-V and Microsoft System Center. Today, Microsoft is announcing Hyper-V Cloud, a set of programs and initiatives to help customers and partners accelerate the deployment of private clouds:

  • Hyper-V Cloud Fast Track. For customers that need some level of customization but also want to help reduce risk and speed deployment, reference architectures can provide the perfect balance. Microsoft is collaborating with Dell Corp., Fujitsu, Hitachi, HP, IBM Corp. and NEC Corp. to deliver a broad choice of predefined, validated configurations for private cloud deployments — comprising compute, storage, networking resources, virtualization and management software.
  • Hyper-V Cloud Service Provider Program. More than 70 service providers around the world offer infrastructure as a finished, fully hosted service built on Microsoft technology. This option delivers a fast, cost-effective implementation for cloud services, both private and public. Service providers include Korean Internet Data Center; Fasthosts (U.K., U.S.); Agarik (France); and Hostway Corp. (U.S., U.K., Netherlands, Germany, France, Belgium, Romania).
  • Hyper-V Cloud Deployment Guides. For customers that want to build their own private clouds on top of existing infrastructure investments, Microsoft offers tools and guidance based on expertise developed during hundreds of Microsoft Consulting Services (MCS) customer engagements over the past few years. This element of the Hyper-V Cloud program optimizes for the highest levels of flexibility, control and customization.
  • Hyper-V Cloud Accelerate. To tie it all together, Microsoft is making significant investments to help customers and partners fund assessments, proofs of concept and production deployments. These services will be delivered via Hyper-V Cloud Services from MCS and prequalified members of the Microsoft Partner Network.

Several additional updates to enable more efficient end-to-end management across cloud and on-premises infrastructure include the following:

  • Availability of Microsoft System Center Virtual Machine Manager Self-Service Portal 2.0, a freely downloadable, partner-extensible solution that enables customers to pool, allocate and manage their compute, network and storage resources to deliver private cloud infrastructure. The solution is built on top of Windows Server 2008 R2, Hyper-V and System Center Virtual Machine Manager 2008 R2 (VMM).
  • Forefront Endpoint Protection 2010 release candidate helps improve endpoint protection across client and server infrastructure. Built on System Center Configuration Manager 2007, Forefront Endpoint Protection allows customers to use their existing client management infrastructure to deploy and manage desktop security. Forefront Endpoint Protection will release to manufacturing by the end of 2010. More information and a download of the release candidate are available at
  • Opalis 6.3 will be available by Nov. 30, 2010. As part of Microsoft’s ongoing commitment to continue to integrate Opalis with the System Center product line, the company will offer System Center Integration Packs for Virtual Machine Manager, Data Protection Manager, Configuration Manager and Service Manager, as well as platform support for Windows Server 2008 R2. These integration packs will allow customers to automate run book process workflows in an orchestrated manner, thus improving operational efficiency and service reliability.
  • Windows Azure Application Monitoring Management Pack complements the new Windows Azure portal announced at the Professional Developers Conference (PDC) and will allow existing users of System Center Operations Manager to monitor their applications and services alongside their on-premises infrastructure from within a single console. It is available as a release candidate at
  • The release candidates of Windows 7 SP1 and Windows Server 2008 R2 SP1 include powerful new features such as RemoteFX and Dynamic Memory, also vital for private cloud platforms. These features will help customers that choose to deploy Windows through Virtual Desktop Infrastructure (VDI) by enabling a more scalable and rich user experience. While Windows 7 clients will benefit from these new virtualization features, Windows 7 Service Pack 1 (SP1) itself adds no new features beyond security updates. The RCs are available for download at
  • Microsoft Application Virtualization (App-V) 4.6 SP1 will make virtualizing applications easy, fast and predictable. The Package Accelerators in App-V 4.6 SP1 will allow App-V to convert a Windows application’s installation files directly into an App-V virtual application, which can cut packaging time for complex applications from hours to 10–15 minutes. The technology will be demoed during a Tech•Ed Europe session and generally available in the first quarter of 2011, as part of Microsoft Desktop Optimization Pack (MDOP) 2011.
  • Windows Small Business Server 2011 enables small-business customers to run their organizations more efficiently and highly securely through centralized management of services and business applications.
    • Windows Small Business Server 2011 Essentials is for small-business customers that want to deploy their own servers and access business information from virtually anywhere. It will be available in the first half of 2011.
    • Windows Small Business Server 2011 Standard is for small-business customers that want enterprise-class server technology. It will be available in December 2010.

More information about Microsoft’s cloud offerings is available at

The Windows Azure Team updated their Windows Azure Platform Appliance (WAPA) page in time for Microsoft Tech•Ed Europe 2010:



<Return to section navigation list> 

Cloud Security and Governance

Dave Kearns reported “More than three-quarters of the respondents in a recent survey couldn't say who they believe should be responsible for data housed in a cloud environment” in a deck for his A hazy view of cloud security post of 11/5/2010 to NetworkWorld’s Security blog:

A recent survey of 384 business managers from large enterprises revealed that confusion abounds about cloud data security. More than three-quarters of the respondents couldn't say who they believe should be responsible for data housed in a cloud environment: 65.4% said that the company from which the data originates, the application provider and the cloud service provider are all responsible, and another 13% said they were not sure. There was no consensus on who the single party responsible for protecting that data should be.

Courion conducted the global survey in October, querying 384 business managers from large enterprises -- 86% of which had at least 1,000 employees. Among the other findings:

  • 1 in 7 companies admit that they know there are potential access violations in their cloud applications, but they don't know how to find them.
  • There is widespread confusion about who is responsible for securing cloud data -- 78.4% of respondents could not identify the single party responsible.
  • Nearly half (48.1%) of respondents said they are not confident that a compliance audit of their cloud-based applications would show that all user access is appropriate.  An additional 15.7% admitted they are aware that potential access violations exist, but they don't know how to find them.
  • 61.2% of respondents said they have limited or no knowledge of which systems or applications employees have access to.  This number spiked from 52.8% in 2009.
  • Enterprises are less confident this year than in 2009 that they can prevent terminated employees from accessing one or more IT systems.  And 64.3% said they are not completely confident, compared with 57.9% last year.
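The headline 78.4% "could not identify the single party responsible" figure is consistent with the two response groups Kearns reports separately; a quick check:

```python
# The excerpt reports the "all three parties" and "not sure" groups
# separately; together they make up the 78.4% headline figure.
shared_responsibility = 65.4   # % saying all three parties are responsible
not_sure = 13.0                # % saying they were not sure

could_not_identify = shared_responsibility + not_sure
print(f"{could_not_identify:.1f}%")  # → 78.4%
```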

These are frightening results. Business managers don't know who is responsible for protecting their data -- they don't even know who should be responsible. It's quite possible, given these results, that no one is taking on that responsibility!

It isn't only the data stored in the cloud that is at risk, of course. Any data linked to cloud-based applications is also at risk. Organizations need to know who is responsible for protecting that data and how it is being done. And that all starts with robust identity services so that only authorized persons get access and all access can be tracked back to the user.

Who's protecting your data?

<Return to section navigation list> 

Cloud Computing Events

Watch Brad Anderson’s keynote at Microsoft Tech•Ed Europe 2010:

You might need to scrub forward to the keynote's start at about 00:50:00 until the AV crew edits the stream.

Dave Anderson’s Office 365 demo starts at 01:22:30


Greg Jenson’s Infrastructure as a Service (System Center) demo starts at 1:43:30.

James Conard’s Windows Azure (Fabrikam Shipping) demo starts at 01:58:30

Vlad Joanovic’s System Center Monitoring of Dinner Now demo with AVIcode starts at 2:18:50

See the new Windows Azure Platform Appliance (WAPA) and Hyper-V Cloud section above for more news from Tech•Ed Europe 2010.

Session videos for Microsoft Tech•Ed Europe 2010 will be posted as they become available.

The SQL Server Team on 11/8/2010 invited readers to Watch the PASS Summit 2010 Keynotes Via Live Stream!:

Tuesday: PASS Summit 2010 Day One Live Streaming Keynote

Live streaming begins Tuesday, Nov 9, 2010 from 8:00am to 10:00am Pacific. The session will begin at 8:15am.

Speaker: Ted Kummert, Senior VP, Business Platform Division at Microsoft
With opening remarks from Rushabh Mehta, PASS President
Ted Kummert will kick off PASS Summit on Nov 9 by highlighting the continued innovation across Microsoft’s business and information platform. Kummert will explore Microsoft’s key technical investments as well as Mission Critical applications, and the accessibility of Business Intelligence.

Wednesday: PASS Summit 2010 Day Two Live Streaming Keynote

Live streaming begins Wednesday, Nov 10, 2010 from 8:00am to 10:00am Pacific. The session will begin at 8:15am.

Speaker: Quentin Clark, General Manager of Database Systems Group at Microsoft
With opening remarks from Bill Graziano, VP Finance at PASS
Quentin Clark will showcase the next version of SQL Server and will share how features in this upcoming product milestone continue to deliver on Microsoft’s Information Platform vision. Clark will also demonstrate how developers can leverage new industry-leading tools with powerful features for data developers and a unified database development experience.

Thursday: PASS Summit 2010 Day Three Live Streaming Keynote

Live streaming begins Thursday, Nov 11, 2010 from 8:00am to 10:00am Pacific. The session will begin at 8:15am.

Speaker: David DeWitt, Technical Fellow, Data and Storage Platform Division at Microsoft
With opening remarks from Rick Heiges, PASS Vice President, Marketing
David DeWitt will be discussing SQL query optimization and address why it is difficult to always execute good plans in his highly anticipated technical keynote. DeWitt will also cover new technologies that offer the promise of better plans in future releases of SQL Server.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

The HPC in the Cloud Blog asserted Handbook of Cloud Computing Presents Comprehensive View of Cloud Computing Trend in an 11/8/2010 post:

LexisNexis Risk Solutions today announced the publication of Handbook of Cloud Computing (Springer: 2010), a collection of essays featuring contributions from world experts in the field of cloud computing from academia, research laboratories and private industry. The book was edited by Armando Escalante, CTO of LexisNexis Risk Solutions, and Dr. Borko Furht, professor and chairman of the Department of Computer Science and Engineering at Florida Atlantic University.

Cloud computing is reshaping information technology, helping drive new cost efficiencies, accelerating time to market, providing access to greater computing resources, and increasing their availability and scalability. Handbook of Cloud Computing provides a comprehensive view into the cloud computing trend, presenting the systems, tools, and services of the leading providers including Google, Yahoo, Amazon, IBM, and Microsoft. The book introduces the basic concepts of cloud computing and its applications, as well as current and future technologies in the field.

Topics covered include:

  • Cloud Computing Fundamentals
  • Systems and Tools
  • Cloud Applications and Solutions
  • Data-Intensive Supercomputer in the Cloud
  • High Performance Computing
  • Virtual Private Clouds
  • Scientific Services and Data Management in the Cloud


The handbook is a bit pricey at $182.86 from Amazon.

Adron Hall continues his Cloud Throw Down: Part 3 in an 11/8/2010 post with more props to Amazon Web Services:

image Now we’re going to throw down on something that I’ve had more than a few requests for. I’m going to break out and get some charts, graphs, and price differentials on AWS and Windows Azure. This throw down entry is going to be nothing but money, money, and more money. Have any guesses yet how this one is going to come out? Well read on!

Previous Throw Down…

Relational Database Storing 1 GB to 50 GB

This comparison may shock you.  The two primary products from AWS and Windows Azure are AWS RDS, or Amazon Relational Database Service, and SQL Azure.  The following chart shows the initial cost at 1GB of storage in each, and then the progressive increase in price as we scale up to 50GB.  There is one thing to add here: SQL Azure stops at 50GB, so if you want more than 50GB of storage in a relational database, you don’t even have an option in SQL Azure.  But let’s just take a look, and then I’ll go through and explain the pricing and declare the victor.

Here’s a graph, with pricing along the left y axis and the storage requirement along the x axis.  Feel free to check out the original spreadsheet too.

RDBMS with SQL Azure and AWS RDS

Like I was saying about the surprise.  SQL Azure starts out cheaper than Amazon’s options but immediately goes into the stratosphere of pricing!  $499.95 just seems absolutely insane.  You really gotta love a limited SQL Server to go diving after that RDBMS versus Amazon’s more scalable RDS option, which never breaks $82.50.  Really, this isn’t a victory, it’s a wholesale slaughter of SQL Azure.
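The cost curves behind that chart are easy to reconstruct. Here is a minimal sketch assuming late-2010 list prices: tiered per-database rates for SQL Azure, and a small RDS instance billed hourly plus per-GB-month storage. The specific rates below are my assumptions for illustration, not the article's inputs, so the RDS curve lands near but not exactly on the article's ~$82.50 ceiling.

```python
import math

# Assumed late-2010 rates -- illustrative, not authoritative figures.
HOURS_PER_MONTH = 730

def sql_azure_monthly(gb):
    """Assumed SQL Azure tiers: Web Edition to 5 GB, then $99.99 per 10 GB."""
    if gb > 50:
        raise ValueError("SQL Azure was capped at 50 GB")
    if gb <= 1:
        return 9.99
    if gb <= 5:
        return 49.95
    return round(99.99 * math.ceil(gb / 10), 2)

def aws_rds_monthly(gb, hourly=0.11, storage_per_gb=0.10):
    """Assumed small RDS instance: hourly compute plus per-GB-month storage."""
    return round(hourly * HOURS_PER_MONTH + storage_per_gb * gb, 2)

for gb in (1, 10, 50):
    print(f"{gb:>2} GB  SQL Azure ${sql_azure_monthly(gb):>7.2f}"
          f"  RDS ${aws_rds_monthly(gb):>6.2f}")
```

With these assumptions the crossover comes almost immediately: RDS is flat except for its small storage component, while SQL Azure steps up in $99.99 increments to $499.95 at the 50GB cap.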

Rating & Winner:  Relational Database Storing 1 GB to 50 GB goes to AWS.

Single Instance .NET Web Application running on Windows

First off, let’s just take a look at the micro instances.  The instances are perfect for testing out, basic development work and developer servers, and even scaling to larger things.  Here’s how the costs pan out.

Windows Azure vs. AWS Micro Instances

The blue, Windows Azure’s color, shows the Windows Azure micro instance.  It costs almost double what a similar instance running Windows Server 2008 would cost you on AWS -- well more than double, in fact, as I’ll point out again in the next section.  AWS is the obvious cheaper candidate at the smallest instance sizes.  But what about the slightly larger sizes?  Let’s take a look at that.

Windows Azure vs. Amazon Web Services Middle Tier Instances

The charts are also available in the previously linked spreadsheet.  As one can see, the prices fluctuate as the instances increase in size.  The Linux instances are almost always cheaper than a comparable Windows Azure instance, and on an ECU/processor compute basis AWS almost always comes out less expensive than the similar Windows Azure offering.  I still haven’t compared actual processing power and performance, but I intend to do that over the next few weeks or months.  However, for instance pricing, the winner on options and lower price with generally equal processing power is…
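The hourly-to-monthly arithmetic behind comparisons like these is simple: rate times hours of continuous uptime. A sketch with assumed late-2010 rates for the smallest instance sizes (the specific figures are my assumptions, not the article's):

```python
# Turning hourly instance rates into monthly figures like those charted.
# All rates are assumptions (late-2010 list prices for the smallest
# instance sizes), not the article's exact inputs.
HOURS_PER_MONTH = 730

def monthly_cost(hourly_rate):
    """Approximate cost of a month of continuous uptime at an hourly rate."""
    return round(hourly_rate * HOURS_PER_MONTH, 2)

assumed_rates = {
    "Windows Azure extra-small": 0.05,   # assumed $/hr
    "AWS micro (Windows)": 0.03,         # assumed $/hr
    "AWS micro (Linux)": 0.02,           # assumed $/hr
}

for name, rate in assumed_rates.items():
    print(f"{name}: ${monthly_cost(rate):.2f}/month")
```

Under these assumed rates the Windows Azure micro-class instance comes in at roughly 1.7x the comparable AWS Windows instance, which matches the "almost double" observation above.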

Rating & Winner:  Single Instance .NET Web Application running on Windows goes to AWS.

Single Instance PHP, Java, or Ruby on Rails Web Application running on Linux

Now really, we don’t have to do too much more research for this measurement.  Again, AWS handily beats Windows Azure in price and instance capabilities for the Linux, LAMP stack, and general PHP applications.  For more information regarding this I also posted a link regarding WordPress Hosting on AWS & Windows Azure.  Technically feasible with both services, however astronomically cheaper on AWS.  Thus, in this category…

Rating & Winner:  Single Instance PHP, Java, or Ruby on Rails Web Application running on Linux goes to AWS.

This competition just wasn’t really a good bout.  AWS handily beats Windows Azure in price and compute power overall.  Even at the high end, AWS has high-performance compute options that aren’t even available on Windows Azure.  Don’t worry, Windows Azure fans, there is hope still.  In my next bout I intend to compare the two from a more PaaS-oriented point of view.  One of the features and capabilities that will come up is Windows Azure AppFabric.  That will be a much closer fight, I’m sure.  For now though…

Today’s winner is easily AWS.  The rest of my throw down series will be coming over the next week.  If you have any ideas or things I should compare the two services on, please let me know.  Thanks, and I hope you enjoyed another bout of the cloud giants.

To check out more about either cloud service navigate over to:

Max Cacas reported NIST defining steps to cloud computing in an 11/5/2010 article for Federal News Radio:

As the lead agency for federal cloud computing, the National Institute of Standards and Technology is being counted on by agency chief information officers to help establish the roadmap into the cloud.

Dawn Leaf, NIST's senior executive for Cloud Computing, announced that the agency's Information Technology Lab (ITL) now has a cloud computing test bed.

"The internal name for the cloud computing simulation project is Koala," she said during the second NIST Cloud Computing Forum and Workshop in Gaithersburg, Md. "The objective is to assess and characterize resource allocation algorithms within a public infrastructure-as-a-service cloud model."

She added they are hoping to have the first results of tests using Koala by early next year. The goal is to smooth the path of agencies into the cloud.
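Leaf doesn't describe Koala's internals here, but the kind of question such a test bed assesses -- how a resource-allocation algorithm behaves under a stream of requests in an infrastructure-as-a-service cloud -- can be illustrated with a toy first-fit VM placement sketch. The scenario and numbers below are hypothetical, not drawn from the Koala project.

```python
# Toy illustration of IaaS resource allocation: first-fit VM placement.
# Hosts and VM demands are hypothetical, not Koala's actual model.

def first_fit(hosts, vm_demands):
    """Place each VM (CPU units) on the first host with spare capacity.
    Returns a per-VM host index, or None where no host can take the VM."""
    free = list(hosts)                      # remaining capacity per host
    placement = []
    for demand in vm_demands:
        for i, cap in enumerate(free):
            if cap >= demand:
                free[i] -= demand
                placement.append(i)
                break
        else:
            placement.append(None)          # request cannot be satisfied
    return placement

# Three hosts with 8 CPU units each; a stream of VM requests.
print(first_fit([8, 8, 8], [4, 4, 6, 2, 8]))  # → [0, 0, 1, 1, 2]
```

Characterizing how policies like this one fragment capacity or reject requests under load is exactly the sort of result a simulation test bed can produce.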

"We listened to the CIO community, and the message was very clear, and I don't think it will surprise anyone who was here today," Leaf said. "The message was, 'We need guidance, and we need it right now.'"

To that end, federal chief information officer Vivek Kundra's keynote to the cloud conference tied together some recently reported headlines regarding efforts to fulfill the government's cloud computing needs.

"Three major companies, Google, Microsoft and IBM earlier this week, all have stepped up to the challenge, and all have launched .gov clouds," Kundra said. "They have addressed one of the biggest problems that the federal government has in migration to the cloud, which is security."

Kundra said the companies' platforms are providing cloud services in the context of federal security requirements.

Kundra added the General Services Administration late in October awarded a blanket purchase agreement to 11 vendors for infrastructure-as-a-service. He said the BPA will let agencies begin the early phases of migrating to the cloud.

Security is one of six aspects of the cloud computing roadmap expected to be discussed during the second day of the NIST conference today. Cita Furlani, director of NIST's Information Technology Laboratory, is hoping to leverage ITL's expertise in two realms of security in its work on the cloud.

"In cybersecurity…where our research focus has been, and continues to be is in key management, visualization and authentication, all underpinnings that are necessary for a robust cloud computing infrastructure and identity management," Furlani said. "We've been working in that area for decades, building on the work we've done on how we identify a person through biometrics, but understanding who you're communicating with across networks."

In addition, Furlani said ITL is working to integrate the needs of Internet Protocol version 6 (IPv6) into the cloud computing standards. And finally, NIST is working on a method allowing its experts to use metrics and other performance criteria to test conclusively whether the cloud computing standards they are formulating actually work.

"It's definitely our unique role and that's where we function and contribute," she said.

The NIST Cloud Computing Forum and Workshop wraps up today at the Gaithersburg Holiday Inn with individual workshop sessions on six key cloud computing topics.

Download mp3

<Return to section navigation list>