Friday, June 11, 2010

Windows Azure and Cloud Computing Posts for 6/9/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

Update 6/11/2010: This is the second of a series of posts with content from posts about TechEd North America 2010 sessions, as well as the usual sources.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Jim Nakashima described the Windows Azure Storage Browser in the Visual Studio Server Explorer in this 6/10/2010 post:

As part of the June 2010 release of the Windows Azure Tools, we now have a Windows Azure Storage browser in the Visual Studio Server Explorer:


It is our first cut at this feature and we've been iterating fairly quickly on the Windows Azure Tools, so I'm excited about having this feature not only for what it delivers today but also because it lays the foundation for the future.  In this post, I'll go over what you can and can’t do with the Windows Azure Storage browser and how we added some features to hopefully make it easier for you to handle navigating through large data sets.

Jim continues with illustrated “Connecting to a Storage Account”, “Browsing Blob Storage”, and “Browsing Table Storage” topics and concludes:

We are really dying to get the edit/write capability and Queue capability into the product.  Hopefully we’ll be able to schedule it soon! [Emphasis added.]

We built the Windows Azure Storage browser for you, so let me know what you like, don’t like, and what features you want to see next!

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Alex James offered Tip 56 - Writing an OData Service using the Reflection Provider in this 6/11/2010 post:

At TechEd I got a lot of questions about how to expose data as OData.

By now you probably know you can use Data Services and the Entity Framework to expose data from a database as an OData Service. You might even know you can use Data Services with a custom Data Service Provider to expose arbitrary data from anywhere.

But did you know about the Data Services Reflection provider?

Turns out the reflection provider is VERY simple to use.

To show you just how simple it is, I am going to create an OData service to expose some in-memory data [source code omitted for brevity]:

  • First you need some data …
  • Next you need a class to act as your Data Source. Data Services will expose all the IQueryable properties as Feeds and infer types for all the types exposed by those Feeds. …
  • Now all you need to do is create the Data Service and expose our sets. Simply add a WCF Data Service to your web application and modify the generated code …

That’s it, you are done.
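As a sketch of the steps above (type, property, and service names are my own illustrations, not from Alex’s omitted source), a minimal reflection-provider service might look like this:

```csharp
using System;
using System.Linq;
using System.Data.Services;
using System.Data.Services.Common;

// The reflection provider needs a key for each entity type;
// [DataServiceKey] supplies one when there is no "ID" property.
[DataServiceKey("Name")]
public class Team
{
    public string Name { get; set; }
    public string Group { get; set; }
}

// Data Services exposes each IQueryable<T> property on the
// data-source class as a feed and infers the entity types.
public class WorldCupData
{
    static readonly Team[] teams = new[]
    {
        new Team { Name = "Spain",  Group = "H" },
        new Team { Name = "Brazil", Group = "G" }
    };

    public IQueryable<Team> Teams
    {
        get { return teams.AsQueryable(); }
    }
}

// The service itself: point DataService<T> at the data source
// and open the sets read-only.
public class WorldCupService : DataService<WorldCupData>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}
```

Dropping a class like WorldCupService into a web project as a WCF Data Service (.svc) is all the hosting it needs; browsing to the .svc URL then returns the service document listing the Teams feed.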

A couple of points worth noting:

  • This service is read-only. If you want to make it read-write you have to implement IDataServiceUpdateProvider.
  • In this example the data comes from in-memory arrays, however if you have an IQueryable that supports it your data can come from anywhere.
  • Yes I know my data is all wrong, I’m writing this on a SouthWest flight from New Orleans to Denver with no internet. But thankfully there is a real OData Service for the World Cup here.

Wayne Walter Berry explains Working With Collations In SQL Azure in this 6/11/2010 post:

A collation encodes the rules governing the proper use of characters for either a language, such as Greek or Polish, or an alphabet, such as Latin1_General (the Latin alphabet used by western European languages). The default collation for character data in SQL Azure databases is SQL_Latin1_General_CP1_CI_AS. This collation is also used across the SQL Azure infrastructure to sort and compare metadata that defines database objects. The server and database level collations are not configurable in SQL Azure. However, you can use a collation of your choice at the column and expression level. This article will show you how.

Although the server and database collations cannot be configured in SQL Azure, you still can query both of these properties, for example:


Currently, both queries will return the default collation: SQL_Latin1_General_CP1_CI_AS.

If the solution you are building on SQL Azure requires a different collation for character data you will need to set the collation at the column level or use the expression level collation to explicitly cast to a specific collation. Keep reading to learn how.

Column Collation

When using SQL Server Management Studio’s Generate Script Wizard (more about using the Generate Script Wizard in this blog post), column collations are included by default. You can verify this by checking that the “Include collation” option is set to “True” (the default value).

This sample shows how to create columns with a specific collation:

CREATE TABLE t (
    id    int PRIMARY KEY,
    c1    nvarchar(20) COLLATE SQL_Latin1_General_CP1_CI_AS,
    c2    nvarchar(20) COLLATE Japanese_CI_AS)

To retrieve the column collation property for the example above:

SELECT name, collation_name FROM sys.columns
    WHERE    object_id = OBJECT_ID('t', 'U')
        AND name <> 'id'

For more information on how to use column or expression level collations see COLLATE and Setting and Changing Collations in SQL Server Books Online.
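For the expression-level case mentioned earlier, a hedged sketch (reusing the table and columns from the example above) might look like this:

```sql
-- Override the column's default case-insensitive collation
-- just for this comparison and sort:
SELECT c1, c2
FROM t
WHERE c1 = N'Widget' COLLATE SQL_Latin1_General_CP1_CS_AS
ORDER BY c1 COLLATE Japanese_CI_AS;
```

Because the COLLATE clause applies per expression, the same column can be compared case-sensitively in one query and sorted with a Japanese collation in another without changing the schema.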

Walter continues with “Temporary Tables” and “Static Strings” topics.

Alex James reported the availability of source code in his OData - WCF Data Services Best Practices from TechEd post of 6/11/2010:

Yesterday I promised to share all the code from my Best Practices – Creating an OData Service using WCF Data Services session at TechEd.

So here goes, essentially this is what I did:

  1. Downloaded, unzipped, opened and ran the MVC Music Store Sample
  2. Added an album to my cart, registered and ordered the album.
  3. Added a data service to that project to expose the Entity Framework model already in the Music Store sample:
    public class MusicService : DataService<MusicStoreEntities>
  4. Added specific EntitySetAccessRules:
    // We don’t want the Data service to expose carts at all
    config.SetEntitySetAccessRule("Carts", EntitySetRights.None);
    // You can only get 1 OrderDetail at a time
    config.SetEntitySetAccessRule("OrderDetails", EntitySetRights.ReadSingle);
    // Everything else you can read & query.
    config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
  5. Added Server Driven Paging limits:
    config.SetEntitySetPageSize("*", 5);
  6. Added a query interceptor to only allow users to see their own orders:

    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> OrdersFilter()
    {
        var user = HttpContext.Current.User.Identity.Name;
        if (string.IsNullOrEmpty(user))
            return (Order o) => false;
        else if (user == "Administrator")
            return (Order o) => true;
        else
            return (Order o) => o.Username == user;
    }

  7. Made my service web browser friendly by configuring the EDMX to map Genre.Name to the Entry/Title and Genre.Description to the Entry/Summary.
    See the EDMX in the final copy of the source to see how.
  8. Added a ClientAccessPolicy.xml so that Silverlight apps hosted on different sites can interact with our Data Service.
    NOTE: The two sites in question need to be in the same internet zone!
  9. Demoed using some JQuery JSON code that accesses the MusicService from the same site.
    NOTE: this doesn’t work x-site, for that you need JSONP.
  10. Added support for JSONP by applying the [DataServicesJSONP.JSONPSupportBehavior] attribute to our MusicService.
    You can download the source from CodeGallery.
  11. Demoed some JQuery JSONP code that accesses the MusicService from a remote site.
  12. Configured ASP.NET to expose an authentication service, so non-browser agents can easily connect and log on to the Forms Authentication service, by adding this to the web.config:

      <authenticationService enabled="true" requireSSL="false"/>

  • Tested the authentication service using Fiddler by sending this RAW request to authenticate:
    POST http://localhost:1397/Authentication_JSON_AppService.axd/Login HTTP/1.1
    Content-Type: application/json
    { "userName": "Alex", "password": "password", "createPersistentCookie":false}
  • Opened this Data Services client application to show how to authenticate against the forms authentication service.

You can download the final copy of the Music Service code if you want.
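The jQuery JSON and JSONP demos in steps 9 and 11 both have to deal with the wrapper object WCF Data Services puts around JSON results. A small hedged sketch of that unwrapping (function and property names are illustrative, not from the session code):

```javascript
// WCF Data Services v1 wraps JSON results directly in "d";
// v2 wraps them in "d.results". Normalize both shapes:
function unwrapOData(payload) {
    var d = payload.d;
    return (d && d.results) ? d.results : d;
}

// A response like one /MusicService.svc/Genres?$format=json might return:
var payload = { d: [{ Name: "Rock" }, { Name: "Jazz" }] };
var genres = unwrapOData(payload);
console.log(genres[0].Name); // "Rock"
```

Handling both shapes in one helper keeps the jQuery success callbacks identical whether the service is queried same-site with JSON or cross-site with JSONP.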

Nikos Anastopoulos offers a Summary of SQL Azure Announcements at Tech▪Ed 2010 in this 6/11/2010 post:

SQL Azure Extends to 50 GB

  • With our continued commitment to deliver increasing value in cloud data services to our customers, we are happy to announce that SQL Azure Database is now extending from 10GB to 50GB of database storage capacity.
  • With 50GB, the SQL Azure service offers customers much higher scalability for their applications and data, whether they extend their existing applications to the cloud or explore new cloud-only application scenarios. With 50GB capacity now available, customers can make much broader use of SQL Azure’s highly available relational database service for many of their LOB applications, taking full advantage of SQL Azure’s elasticity, ease of provisioning and management, and familiar development environment and tools.
  • The new 50GB database size will be available to customers worldwide starting June 28th.

SQL Azure Subscription Offer

  • Microsoft is offering more flexible ways for customers to try larger SQL Azure databases. Starting August 1st, we will have a new discounted SQL Azure promotional offer (SQL Azure Development Accelerator Core), allowing customers to subscribe to larger SQL Azure databases at a significant discount. This offer includes exclusively the SQL Azure Business Edition database and will enable customers to use the larger database size with discounting. More information is available at

Public Preview of the Data Sync Service

  • SQL Azure Data Sync Service offers customers flexibility and control over their data, allowing them to make discrete and granular decisions about how their data, and which components of it, should be distributed across multiple datacenters in different geographic locations, based on their internal policies and business needs. Customers are encouraged to register at and within 1-2 weeks they will receive an invitation to subscribe to the service.

CTP of the SQL Server Web Manager for SQL Azure to be offered this Summer

  • A Community Technology Preview (CTP) of the Microsoft SQL Server Web Manager (SSWM), a lightweight and easy-to-use database management tool for SQL Azure databases, will be offered this summer. SSWM is designed specifically for Web developers and other technology professionals seeking a straightforward solution to quickly develop, deploy, and manage their data-driven applications in the cloud. SSWM is geared toward basic database management tasks such as authoring and executing queries, designing and editing database schema, and editing table data.

Access 2010 Support for SQL Azure

  • Microsoft Office 2010 will natively support data connectivity to SQL Azure opening up opportunities for Office users and technology providers to create rich experiences using cloud computing.
  • For Information Workers, this gives an easy way to connect directly from Office applications to cloud-based relational databases, enabling ease of use and providing flexibility to IT. Microsoft partners will have more choices to integrate rich Office Business Applications that can connect directly to both on-premises and cloud databases, creating unique and agile solutions.

The Pre-announced Spatial Data Support to Become Live

  • As it was pre-announced at MIX back in March, SQL Azure will offer spatial data support. Spatial is a data type for storing location-based information (like latitude and longitude) that enables spatial operations like mapping, distance between points, and testing whether a location is in a region, and enables development of location-aware applications.
  • We introduced it on-premise with SQL Server 2008 and pre-announced for SQL Azure back in March.
  • We're announcing that this capability is now becoming available in SQL Azure.


Nikos is a Platform Solutions Specialist on Microsoft Hellas’s presales team (STU) for Enterprise Partners and Customers.

Wayne Walter Berry points out availability of Video: TechEd 2010 - What’s New in Microsoft SQL Azure in a 6/10/2010 SQL Azure Team blog post:

SQL Azure provides a highly available and scalable relational database engine in the cloud. In this demo-intensive video, learn how to quickly build Web applications with SQL Azure databases and familiar Web technologies. Patric McElroy demonstrates several new enhancements added to SQL Azure based on the feedback the SQL Azure Team has received from the community since launching the service earlier this year.

View The Video Here

The PowerPoint slides and Access database shown in the video are available for download at

Wayne Walter Berry reported Video: TechEd 2010 - Migrating Applications to Microsoft SQL Azure is available in his 6/10/2010 post to the SQL Azure Team blog:

Are you looking to migrate your on-premises applications and databases from MySQL or other RDBMSs to SQL Azure? Or are you simply focused on the easiest ways to get your SQL Server database up to SQL Azure? Then this video is for you. Cihan Biyikoglu covers two fundamental areas in this session: the application data-access tier and the database schema+data. In Part 1, he dives into the application data-access tier, covering common migration issues as well as best practices that will help make your data-access tier more resilient in the cloud and on SQL Azure.

In Part 2, the focus is on database migration. He goes through migrating schema and data, taking a look at tools and techniques for efficient transfer of schema through Management Studio and Data-Tier Application (DAC). Then, we discover efficient ways of moving small and large data into SQL Azure through tools like SSIS and BCP. He closes the video with a glimpse into what is in store in future for easing migration of applications into SQL Azure.

View The Video Here

Jason Short explained Exposing OData from an Entity Framework Model in his 6/10/2010 post:

After reading Scott Hanselman’s article on exposing OData for Stack Overflow, I thought it would be nice to update the previous post I did on data services to include the new WCF Data Services.  WCF Data Services (formerly called Data Services, and “Astoria”) can expose OData to callers through a very simple interface. LINQPad was not available to query the interface at the time, so I will also discuss how to use LINQPad to write queries against a Data Service.

For my example I am going to expose a VistaDB test database that shows SQL commands and examples of their syntax.  It is a very simple model, but provides interesting data to query against (other than Northwind!).  You can use any Entity Framework provider to perform these steps; they are not specific to VistaDB.

Being able to consume data across the web in a RESTful manner is part of the power of OData; lots of applications powered by .NET are going to be able to consume OData services very easily.  But the OData protocol is not just for .NET; PHP, Java, JavaScript and others also have the ability to consume the data.

Wayne Walter Berry links Video: TechEd 2010 – SQL Azure Development Best Practices in his 6/10/2010 post:

This video from Microsoft TechEd North America 2010 covers best practices for using the SQL Azure cloud relational database. Rick Negrin walks through the creation of a departmental application from scratch. We see firsthand how easy it is to provision a SQL Azure database and start developing against it. He also looks at importing and exporting data, and reporting. Time is also spent looking at strategies for migrating your existing applications to the cloud so that you are provided with high availability, fault tolerance and visibility to these often unseen data repositories. Finally, we see how the reach of the cloud provides you with opportunities to create a new, differentiated class of applications.

View The Video

The PowerPoint slides and Access database shown in the video are available for download at

Cihan Biyikoglu’s Pricing for the New Large SQL Azure Databases Explained post of 6/10/2010 describes how Microsoft charges for larger Web and Business Edition SQL Azure database instances:

With TechEd 2010 in New Orleans this week, we announced the pricing structure for the large SQL Azure databases. The information is available online here.

There are a few key improvements to point out:

Larger Database Sizes for SQL Azure: SQL Azure today offers 2 editions with a ceiling of 1GB for web edition and 10GB for business edition. With our next service release, ceiling sizes for both web and business editions will increase 5x.

    • Web edition will support a 5GB ceiling. With the added billing increment, web edition databases will be billed at the 1GB rate for databases below 1GB of total data, or at the 5GB rate for databases between 1GB and 5GB in size.
    • Business edition will support a 50GB ceiling and will be billed in 10GB increments (10GB, 20GB, 30GB, 40GB and 50GB).

Usage Based Billing: Even though both editions can now support larger ceiling sizes (web up to 5GB and business up to 50GB), you will be billed based on the peak db size in a day rolled up to the next billing increment.

    • Web edition will support 1GB and 5GB as billing increments.
    • Business edition will support 10, 20, 30, 40 and 50GB as billing increments.

Let’s look at a few examples. Assume we have a web edition database that has a MAXSIZE=5GB. If the database size is 800MB, the daily charge for the database will be at the 1GB rate for web edition. If, the next day, the database size grows to 3GB, the daily charge for that day will be based on the next billing increment for web edition, which is 5GB. If, the day after, after some data deletion, the size drops back to 900MB, the daily charge will be based on 1GB again.

The same example applies to a business edition database. Assume we have a business edition database with MAXSIZE=50GB. If the total database size is 8GB, the daily charge for the database will be at the 10GB rate. If the next day the database size grows to 25GB, the daily charge will be based on the next billing increment for the business edition, which is 30GB, and so on.

Cost Predictability Enhancements: Even though both editions can grow to larger ceiling sizes (such as 50GB), you can cap the data size per database and control your bill at the billing increments. The MAXSIZE option for CREATE/ALTER DATABASE will help set the cap on the size of the database. If the size of your database reaches the cap set by MAXSIZE, you will receive error code 40544. You will only be billed for the MAXSIZE amount for the day. When the database reaches the MAXSIZE limit, you cannot insert or update data, or create new objects, such as tables, stored procedures, views, and functions. However, you can still read and delete data, truncate tables, drop tables and indexes, and rebuild indexes. …
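The rollup rule in Cihan's examples can be sketched in a few lines (a reader's illustration of the arithmetic described above, not Microsoft's billing code):

```python
# Billing increments per edition, in GB, as described in the post.
WEB_INCREMENTS = [1, 5]
BUSINESS_INCREMENTS = [10, 20, 30, 40, 50]

def billed_increment_gb(peak_size_gb, increments):
    """Roll the day's peak database size up to the next billing increment."""
    for cap in increments:
        if peak_size_gb <= cap:
            return cap
    raise ValueError("peak size exceeds the edition's ceiling")

# The post's examples: 800MB web -> 1GB rate, 3GB web -> 5GB rate,
# 8GB business -> 10GB rate, 25GB business -> 30GB rate.
print(billed_increment_gb(0.8, WEB_INCREMENTS))      # 1
print(billed_increment_gb(25, BUSINESS_INCREMENTS))  # 30
```

Since the charge is recomputed daily from the peak size, shrinking the database drops it back to a lower increment the next day, as in the 900MB example above.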

Cihan continues with “The New CREATE & ALTER DATABASE T-SQL Syntax” details.

Wayne Walter Berry reported Video: How Do I: Manage SQL Azure Firewall rules? was available for download on 6/9/2010:

This video by Max Adams from TechNet discusses the IP firewall rules inherent in SQL Azure, and demonstrates connecting to a SQL Azure database using Microsoft SQL Server Management Studio 2008.

Faisal Mohamood advised developers to Remember to re-enable MARS in your SQL Azure based EF apps in this 6/9/2010 post:

MARS (Multiple Active Result Sets) is a feature in SQL Server / ADO.NET that allows multiple result sets to be streamed over a single connection; this enables simpler programmability in many scenarios. Entity Framework leverages MARS for supporting features such as lazy loading (for example, consider the case where you are iterating over a set of customers and loading the set of orders for each customer that is returned).

MARS is turned ON by default when you use the Entity Framework designer to build your application for SQL Server – however, Entity Designer in Visual Studio 2010 turns it OFF if your database is on SQL Azure. This was done because SQL Azure did not support MARS until recently.

Now that MARS is available on SQL Azure, it is a good idea to make sure that MARS is turned on after you generate your model based on SQL Azure. Simply find the connection string that is generated and set MultipleActiveResultSets to true.
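Concretely, the generated entity connection string in web.config changes along these lines (the names and server here are placeholders for whatever the designer generated, not Faisal's actual configuration):

```xml
<connectionStrings>
  <!-- Set MultipleActiveResultSets=True inside the provider connection string -->
  <add name="MyEntities"
       connectionString="metadata=res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl;provider=System.Data.SqlClient;provider connection string=&quot;Server=tcp:myserver.database.windows.net;Database=mydb;User ID=myuser@myserver;Password=...;MultipleActiveResultSets=True&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>
```

Note the setting lives in the inner provider connection string (the `&quot;`-quoted part), not in the Entity Framework metadata portion.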

Needless to say, we will make sure that MARS is automatically turned ON for SQL Azure based apps when we get around to releasing the next update to the product.

Wayne Walter Berry explains Generating a BCP Utility Script for SQL Azure in this 6/8/2010 post:

If you are migrating tables from SQL Server to SQL Azure, one of the easiest ways is to use the bcp utility to draw data out of your SQL Server into a file and then move the data from the file to SQL Azure. However, it can be tedious to write all the bcp utility commands by hand, since the bcp utility requires that you execute a single command for each table, moving one table’s worth of data at a time (find out more about how to use the bcp utility with SQL Azure in our earlier blog post). Would it not be nice to move all the tables with a single batch file? This article presents a Transact-SQL script that will create a batch file with all the bcp utility commands you need to move a whole database.


Before you run the script below:

  • You need to have already created the database schema on SQL Azure before you move files with bcp utility. You can do this using the Generate Script Wizard; see our previous blog post for more information.
  • The tables on the SQL Azure destination database should be empty, which means that you shouldn’t run the BCP utility batch file twice on the same destination database.
  • The script below runs in SQL Server Management Studio connected to the source SQL Server database. You will need to modify the variables at the top of the script to reflect your source SQL Server and destination SQL Azure server, before you execute the script.

The Transact-SQL script will generate the commands for a batch file using Transact-SQL PRINT statements. After executing the script, just copy the whole output to a file with a .bat extension. Once you have the batch file created, you can run it from the command line to facilitate the move of your database.
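Each generated line pairs a bcp export against the source server with an import into SQL Azure. A hypothetical pair for one table (server, database, and credential names are placeholders, not from Wayne's script):

```
REM Export one table from the local SQL Server in native format ...
bcp MyDb.dbo.Customers out Customers.dat -n -S localhost -T

REM ... then import it into SQL Azure, preserving IDENTITY values with -E
bcp MyDb.dbo.Customers in Customers.dat -n -E -S myserver.database.windows.net -U myuser@myserver -P mypassword
```

The -n flag keeps the data in native format for a faster round trip, and -E is what preserves the identity-based primary keys discussed below.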

Preserving Primary Keys

If you are using IDENTITY to generate the primary keys in your database, the bcp utility commands generated will preserve the numbering of your primary keys using the –E flag. However, the referential integrity of the foreign keys will not be checked when they are inserted. This was done so that rows could be inserted regardless of the dependencies amongst the tables – primary keys do not need to be inserted before foreign keys.

Because the primary keys are not regenerated, there should not be any constraints violated as long as the source database is not written to while the batch file is running. Here lies the hitch: you need to make sure that your source database is either in read-only mode, or that no application is writing data to it. The bcp utility commands are not wrapped inside a big transaction, and there can be significant time between when the first command in the batch file executes and the last, which gives an opportunity for data writes.

Wayne continues with the T-SQL script, which you can cut and paste to SQL Server Management Studio or Notepad.

Wayne Walter Berry describes this Video: How Do I: Use the RoleManager Class to Log SessionIDs in SQL Azure? in a 6/8/2010 post:

This video by Max Adams from TechNet shows us how to use the Azure Diagnostics assemblies, an important feature of the Microsoft Windows Azure Platform. Max delves into using the assemblies and the Trace class to log application information within Azure storage. The video includes connecting to SQL Azure and retrieving a SQL session ID, and using the Diagnostics assembly in Azure to store logs to Azure storage.

Dinakar Nethi posted Scaling Out SQL Azure to the TechNet Wiki on 6/4/2010:

SQL Azure Database is a cloud database service from Microsoft. SQL Azure provides web-facing database functionality as a utility service. Cloud-based database solutions such as SQL Azure can provide many benefits, including rapid provisioning, cost-effective scalability, high availability, and reduced management overhead. This paper provides an overview of some scale-out strategies, the challenges with scaling out on-premises, and how you can benefit from scaling out with SQL Azure.

Recommended reading. Thanks to Wayne Walter Berry for his 6/8/2010 heads-up.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Joannes Vermorel’s Designed for large scale forecasting with Windows Azure post of 6/11/2010 describes Lokad’s “cloudy architecture:”

Lokad is a proud user of Windows Azure, the cloud computing platform of Microsoft. Our forecasting technology would be nowhere near as scalable and accurate without Azure.

Cloud computing opens tremendous opportunities as far as reliability, security, performance and costs are concerned. Yet, to get the most out of the cloud, apps have to be natively designed for the cloud. Our current cloudy architecture is drawn below.

Our migration toward Azure cost us about a year of effort, spread from late 2008 to early 2009 (which was still pretty fast considering the tremendous challenge of redesigning the architecture from scratch).

Best patterns and practices for enterprise apps in the cloud are still a very nascent area. At Lokad, we want to share our experience and get feedback from the community.

Although Lokad is not an open source company, we release as open source those components that we believe to be applicable to other businesses. As a matter of fact, we support multiple open source projects such as:

  • Lokad.Cloud - an O/C mapper for Windows Azure (object to cloud)
  • Lokad.CQRS - Command-Query Responsibility Segregation for Windows Azure.

Willing to design an enterprise app on the cloud? Make sure you check those two projects.

Rinat Abdullin’s Lokad.Cloud vs. Lokad.CQRS post of 6/11/2010 analyzes the differences between the two Lokad projects:

Currently Lokad has three Tier 1 open source projects:

  • Lokad Shared Libraries - Libraries and guidance for building .NET applications efficiently.
  • Lokad.Cloud - .NET O/C mapper (object to cloud) and distributed executor for Windows Azure.
  • Lokad.CQRS - Command-Query Responsibility Segregation for Windows Azure.

Tier 1 means that:

  • Project is currently in production at Lokad.
  • It is being actively developed and maintained.

Why do we have and support two different projects for building Windows Azure applications?

That's because these frameworks focus on distinct scenarios and thus have different requirements and features.

Lokad.Cloud was designed for high-scale computing in the Cloud. It simplifies and reduces the friction of any Research and Development effort in this area, from exploration of distributed solutions up to highly scientific statistical analysis. That's why students in Computer Science successfully use this project for their assignments (e.g., building a multi-player game for Windows Azure). All experience in this field is embedded in this project.

Companies could also use Lokad.Cloud to implement cloud bursting scenarios for Windows Azure. Auto-scaling capabilities help to cope with bursts of CPU-intensive tasks, while optimizing costs at the same time.

Lokad.CQRS is for building scalable enterprise solutions and integrating them together. It does not have that many science-oriented features, but adds additional functionality and protection that help to deliver, deploy and maintain successful real-world business applications. Experience and theory in this field, as applied to Windows Azure, is being embedded into Lokad.CQRS.

If a company needs a scalable web site with reliable enterprise integration functionality that is intermixed with business logic and rapidly evolving in an ever-changing world, then that's what Lokad.CQRS is good at achieving fast and efficiently. Companies could use the theory, pieces of code, ideas, or everything altogether.

Microsoft’s News Center reported State of Florida Leverages Microsoft Cloud Solution for Census Count in this 6/11/2010 press release:

The Florida House of Representatives is making one final push over the next month for its state residents to be counted in the 2010 Census, through its MyFloridaCensus website and Web-based application. MyFloridaCensus is an innovative component in Florida’s overall effort to ensure a complete count of residents during the ongoing 2010 Census, supplementing door-to-door canvassing, which ends nationwide July 10.

MyFloridaCensus is hosted in the Windows Azure cloud platform and runs using Microsoft Silverlight for cross-browser compatibility. With the support of a Bing Maps interface, the collective technology allows visitors to share their experiences with the 2010 Census and build a social user-generated experience around the once-per-decade count. In turn, the Florida House provides the U.S. Census Bureau, state and local governments, and citizens with dynamic feedback and visual representations of that feedback. Unlike most traditional government websites, MyFloridaCensus offers Floridians the opportunity to take part in the gathering of information, and thus affords individual citizens the opportunity to speak for the betterment of their communities.

“Once Florida residents share the census impact in their communities, we use [it] to work with the U.S. Census Bureau to account for streets, neighborhoods and communities that may otherwise be missed in the 2010 Census,” said Florida state Rep. Dean Cannon. …

Ian Murphy reported Quest's TOAD hops onto the Cloud in this 6/11/2010 post:

Next week Quest Software will announce the release of TOAD for Cloud, extending the reach of its database tools into the Cloud environment. In the first version there will be support for four datasources - Amazon SimpleDB, Microsoft SQL Azure, Apache HBase and any database with an ODBC driver. [Emphasis added.]

TOAD for Cloud will be freely available from the Quest website and, according to Brent Ozar at Quest Software, "the target market is DBAs and data analysts who need to be able to do cross joins between Cloud platforms and their existing databases inside the organisation."

Ozar has already said that Quest will be adding support for more databases over time, including the Apache Cassandra project, but stopped short of identifying Oracle, Sybase and DB2 as early targets, despite the fact that all three have either shipped or announced Cloud versions of their products.

As this is built on the same underlying toolset as TOAD for Data Analysts, it is likely that the full reporting capabilities of that product will be available to the TOAD for Cloud product soon.

DotNet Solutions offers a Wikipedia Explorer case study of moving compute-intensive operations to 50 Windows Azure instances in this 6/11/2010 post:

The client: Microsoft is "All In" with the Cloud & was looking for a killer application to show off its power. They turned to Dot Net Solutions.

The challenge: Wikipedia Explorer is all about visualising relationships between documents within Wikipedia, in an attempt to improve the traditional, flat user experience. In the original version, all the data was downloaded from Wikipedia on the fly. This was very simple, but also very slow and meant different visualisations would be produced each time, based on which links were downloaded first. Microsoft wanted a much faster browsing experience – with much more continuity – created in Windows Presentation Foundation.

The solution: Dot Net worked alongside its design partner and the Microsoft developer and platform evangelism team in Redmond, Washington, to deliver a robust proof of concept for this new browsing experience.

Wikipedia makes available a database dump, which contains a complete snapshot of the whole site. The problem, however, is that the dump is enormous. Converting from wikicode (the format it is stored in) to XML/XAML is very processor-intensive – on a single high-powered server it would take somewhere in the region of 4-6 months to complete. Obviously, this was a non-starter.

By building the application on top of Windows Azure we could scale this process out to a large number of servers. It currently runs on 50 server instances. The same process that would have taken up to six months took a little over four days – almost exactly 1/50th the time – demonstrating the power of the Azure Services platform. It is easy to scale out processor-intensive tasks and have them completed much faster by provisioning more hardware.
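The speedup is possible because each article converts independently of the others. The sketch below (Python; a hypothetical illustration, not Dot Net's actual implementation) shows the simplest form of the idea: statically partition the dump's article IDs into one batch per worker instance, so 50 instances each take roughly 1/50th of the work.

```python
def partition(items, workers):
    """Split items into `workers` near-equal batches, one batch per instance."""
    return [items[i::workers] for i in range(workers)]

# Hypothetical numbers: 1,000 articles spread across 50 worker instances.
articles = list(range(1000))
batches = partition(articles, 50)

assert len(batches) == 50
assert sum(len(b) for b in batches) == len(articles)   # nothing lost
print(max(len(b) for b in batches))                    # -> 20 (even load)
```

In the real system a Windows Azure queue would hand batches out dynamically, which also tolerates slow or failed instances; static partitioning is just the quickest way to see why wall-clock time drops by roughly the instance count.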

Jim Nakashima chronicles TechEd Session Takeaways - Using Visual Studio to build Windows Azure Applications in a 6/10/2010 post:

Thank you to all of you who attended my session at TechEd 2010 - COS307 | Using Microsoft Visual Studio 2010 to Build Applications That Run on Windows Azure

Here are some of the key takeaways and links from the session:

Lots of New Tools

The June 2010 release of the Windows Azure Tools now includes:

  • Support for .NET 4
  • Deploy from Visual Studio
  • IntelliTrace debugging in the cloud
  • Windows Azure Storage and Compute integration in the Server Explorer
  • Windows Azure Activity Log window in VS to watch long running operations

Getting Started

The Web Platform Installer automates a number of the steps to install the Windows Azure SDK or to install IIS prior to installing the Windows Azure Tools for VS 2010.

Get the patches -


ASP.NET Web Roles vs ASP.NET Web Applications

The 3 differences are:

  • References to the Windows Azure specific assemblies: Microsoft.WindowsAzure.Diagnostics, Microsoft.WindowsAzure.ServiceRuntime, and Microsoft.WindowsAzure.StorageClient
  • Bootstrap code in the WebRole.cs/vb file that starts the DiagnosticMonitor and defines a default behavior of recycling the role when a configuration setting change occurs.
  • The addition of a trace listener in the web.config file: Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener.


The NerdDinner sample code can be found at:

ASP.NET Provider scripts for SQL Azure

To use the ASP.NET providers with SQL Azure, you can use these scripts to set up the database.

Using IntelliTrace

Using IntelliTrace to debug services / applications that are running in the Cloud

Eugenio Pace describes Windows Azure Architecture Guide – Part 2 – Saving surveys in Tailspin in his 6/9/2010 post:

As I wrote in my previous post, different sites in TailSpin have different scalability needs. The public site, where customers complete surveys, will probably need to scale to a large number of users.

The first consequence for the design is the separation of this website into its own web role in Windows Azure. In this way, TailSpin will have more flexibility in how to manage instances.

The “answer surveys” use case is also a great example of “delayed writes” to the backend. That is: when a customer submits answers for a specific survey we want to capture that submission immediately (as fast as possible), send a “Thank you” response, and then take our time to process those answers and include them in TailSpin’s data repository. The work of inserting the responses into the data model does not have to be tied to the user response – much less all the calculations involved in updating the summary statistics of that particular survey.

This pattern is exactly what we discussed here. Here’s an updated diagram for this particular use case:


  1. Customer gets a survey
  2. Completes answers
  3. Submits answers
  4. Web site queues the survey response
  5. Sends “Thank you” to the user (The goal is to make Tp as small as possible)
  6. A worker picks up the new survey and stores it in the backend, then updates the statistics

Nothing completely new here.
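The six steps above reduce to a small amount of code. Here is a minimal sketch of the delayed-write pattern (Python, with an in-process queue standing in for a Windows Azure queue; the names and storage are hypothetical, not Tailspin's actual code): the web handler only enqueues and acknowledges, while a separate worker persists the answers and updates the statistics.

```python
import queue
import threading

pending = queue.Queue()            # stands in for the Windows Azure queue
store = []                         # stands in for the survey data repository
stats = {"responses": 0}           # summary statistics maintained by the worker

def submit_survey(answers):
    """Web-role handler: enqueue and say thanks immediately (keeps Tp small)."""
    pending.put(answers)
    return "Thank you"

def worker():
    """Worker role: drain the queue, persist each response, update statistics."""
    while True:
        answers = pending.get()
        if answers is None:        # sentinel used to stop this demo worker
            break
        store.append(answers)
        stats["responses"] += 1

t = threading.Thread(target=worker)
t.start()
print(submit_survey({"q1": "yes"}))  # prints "Thank you" before any persistence
pending.put(None)                    # shut the demo worker down
t.join()
```

The user-facing latency is just the enqueue; inserting into the data model and recomputing the survey statistics happen on the worker's own schedule, which is exactly the decoupling that steps 4-6 of the diagram describe.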

Eugenio continues with “some interesting considerations.”

<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie claimed “If we look at cloud in terms of what it does offer instead of what it doesn’t, we may discover more useful architectures than were previously thought to exist” in her Is Your Glass of Cloud Half-Empty or Half-Full? post of 6/10/2010:

I have a fairly large, extended family. While I was growing up we gathered at our grandparents’ home during the holidays for, of course, a meal. Grandma would put extra chairs around the table but because she had five children (and spouses) there really wasn’t any room for us grandchildren. So we got to sit … at the little kids’ table. Eventually we weren’t “little kids” any more and we all looked forward to the day we could sit at the “big” table with the adults.

Now grandma was a stickler for time, and dinner was served at exactly twelve noon. Not 12:01, not 11:59. 12:00. Exactly. If you weren’t there, that was just too bad. So inevitably it was the case that someone wasn’t on time, and it was then that, in age-descending pecking order*, some of the “kids” got to sit at the “big” table. Until the Johnny-come-lately adults showed up, at which point we were promptly banished back to the kids’ table.

This “you can sit at the big table unless a grown-up needs your place” strategy is one that translates well to a hybrid cloud computing strategy.


There are myriad surveys out there regarding the inhibitors to cloud adoption. At the top is almost always security and control. CIOs are quick to indicate they do, in fact, have interest in the cloud and its purported operational benefits, but they aren’t necessarily willing to risk the security and availability of business-critical applications to get them.

As has been previously mentioned in Why IT Needs to Take Control of Public Cloud Computing, it may be that IT needs to adopt the view that the data center is the “big” table at which business-critical applications are deployed and the cloud, as the “little kids’ table,” is where all non-critical applications end up when the “big table” is full. A kind of cloud bursting, if you will, that assumes business-critical applications have priority over local data center compute resources. Non-critical applications may initially be deployed locally, but if business-critical applications need additional compute resources then non-critical workloads must give up their local resources and move to the cloud.

This strategy treats cloud as it is today, as compute on-demand and little more. It assumes, moreover, that the application needs very little “care and feeding” in terms of its supporting application and application delivery infrastructure. A little non-specialized load balancing for scale and a fat pipe might be all this application really needs. That makes it perfect for deployment in an environment that caters to providing cheap “utility” compute resources and little else because it can be migrated – perhaps even while live – to an off-premise cloud environment without negatively impacting the business.

That would not be true of a business-critical application for which there are strictly defined SLAs or compliance-related polices, many of which are implemented via complex integration with other components, systems, and applications internal to the data center. Migrating a business-critical “application” is a lot more complicated and time-consuming than a non-business critical, non-integrated application. That’s because the former requires either (a) migration of all related components and supporting infrastructure or (b) a secure, optimized tunnel to the off-premise cloud computing environment that enables the continued use of integrated application and network components.


The immaturity of cloud computing environments with regards to the availability of enterprise-class infrastructure services continues to be a root cause of cloud “reluctance.” Without the ability to deploy a critical application in an environment similar to that of the local data center, CIOs are going to be understandably cautious. But for applications that don’t need such a complex network of support infrastructure, cloud computing is well-suited for deployment, and doing so is certainly – at least from a CAPEX and long-term OPEX point of view – the most appealing option available. Before cloud can mature, before we reach “network standardization and services-based infrastructure,” we need the standards upon which standardization will be based. Interestingly enough, that doesn’t necessarily mean industry standards. The speed at which various standards organizations are moving today makes it highly probable that organizations that are moving more quickly will develop their own standards that will eventually form the basis for industry standards. Some might argue, in fact, that this is the way it should happen, as organizations are the ones that use and exercise the Infrastructure 2.0 APIs and frameworks currently available across the infrastructure spectrum to integrate and manage infrastructure components in their own data centers.

Without those standards and the resulting infrastructure services, organizations that want to reap the benefits of cloud computing should probably stop looking at cloud with a “glass is half-empty” view and take a “glass is half-full” perspective instead. Don’t look at cloud in terms of what it doesn’t offer, but instead what it does offer: inexpensive, easily and rapidly provisioned compute resources. Compute resources that can serve as overflow for non-critical applications when the really important applications need more compute power.

* As the oldest grandchild I was good with this order of operations, of course.

Phil Wainwright asserted “Like it or not, cloud providers have to engage with public policy makers, especially if they want to operate across national borders” in his Can the cloud avoid government? post of 6/11/2010 to his “Software as Services” blog for ZDNet:

Government should leave the tech industry well alone, ranted Mike Arrington earlier this week. But it isn’t so easy to avoid government if you’re a cloud computing provider with global ambitions. The cloud operates over the World-Wide Web, which as the name suggests, inevitably crosses national borders and touches many different legal regimes. Cloud-enabled globalization also challenges many of the protections that governments have put in place at the behest of established industries and interest groups that are adept at lobbying lawmakers.

For these reasons, the cloud industry can’t afford to ignore government. The minimum response should be to stay aware of what government is up to and how its actions may affect cloud providers. Many in the industry will probably side with Arrington in demanding less regulation and interference rather than more. But since government isn’t going away, it’s better to actively engage and make the case for the cloud industry — if nothing else to ensure that if there are negative impacts (such as those exposed in the UK’s recent Digital Economy legislation), they are at least purposeful rather than incidental.

With that in mind, cloud and SaaS providers in Europe will be gathering in Luxembourg later this month to discuss the challenges facing the industry and what role government and other public policy influencers can play in helping or hindering its success. Taking place on Monday 21st June, EuroCloud Congress is the first pan-European member meeting of the EuroCloud industry network that first launched last October [disclosure: of which I am an unpaid vice-president and conference organizer]. The meeting will bring together members from across Europe, along with public policymakers and influencers from national government and the European Commission.

In Europe, one of the first challenges for cloud providers is to unravel and make sense of the legislation that already exists. US businesses should consider themselves lucky that at least their huge domestic market is relatively free of legal pitfalls. In Europe, government and regulation is a fact of life for anyone looking beyond the confines of their own national borders — which is pretty much a given for those in the Internet, cloud and SaaS sectors. Each country has its own laws and practices on matters such as data privacy, business contracts and taxation. The scale of this challenge is difficult to explain to Americans, who can generally assume a common legal framework with occasional variations in individual states. Europeans, by contrast, find only rare exceptions where European harmonization has actually worked (the most shining example, in those countries that have adopted it, is the single currency, itself apparently now undermined by hidden inconsistencies between countries).

The industry thus has no choice but to ask government to interfere, in the hope of at least removing some of the inadvertent barriers to cloud services that have sprung up because of differences in national legislation. I’m told it’s possible to find examples where obeying privacy laws in one country will force a provider to break the prevailing laws in another country if its customers happen to be based there. No one intended such a result, but it’s up to the industry to show where such anomalies exist and why it’s urgent to resolve them.

At least the industry should find a willing ear from Europe’s commissioner for the Digital Agenda, Neelie Kroes, whose mandate includes the establishment of “an integrated single market for the delivery of electronic services,” and who will be represented at the Congress.

The agenda will examine the state of play in a number of different topic areas to establish where further action is most needed, whether by policy makers or by action within the industry. Experts contributing to the debate will include speakers from the European Commission and the Luxembourg government, academics and market researchers, and a cross-section of industry contributors, ranging from start-ups like SLA specialist Sensible Cloud to established players such as RightNow Technologies and Microsoft. There are six main topic headings under discussion:

  • Security and certification
  • Internationalisation within Europe and beyond
  • Service levels and customer experience
  • The legal framework for cloud and SaaS provision
  • Fostering and recognising innovation
  • Industry alliances and partnership

If Europe’s cloud providers can’t rapidly surmount some of the unintended regulatory barriers they face, then the continent’s small and mid-size enterprises will suffer competitively, as they’ll be less able to benefit from the responsiveness and cost savings that flow from using cloud services. No wonder the dominant cloud utility providers today are US companies such as Amazon, Google and Microsoft. European policy makers need to make sure their actions don’t hold back the emergence of the next generation of cloud-enabled businesses in the region. EuroCloud’s members will be hoping their Congress in Luxembourg marks an important milestone on the journey towards building a strong cloud industry in Europe.

David Linthicum asks “Cloud services fail when the demand overwhelms them -- but why is that allowed to happen in the first place?” in his Combating cloud outages: There's a simple solution post of 6/11/2010:

I was amused by Steve Jobs' Wi-Fi overload issues during his iPhone 4 presentation. While he could ask the audience to "put your laptops on the floor" and turn off their 3G-to-Wi-Fi devices, most cloud providers won't have the luxury of asking customers not to use their services when their cloud platforms get oversaturated.

There have been many recent availability issues with cloud providers, such as Twitter's and Google Calendar's struggles, as well as Google App Engine's datastore taking a dirt nap under demand. Or, as Google puts it in a recent post: "There are a lot of different reasons for the problems [with data store] over the last few weeks, but at the root of all of them is ultimately growing pains. Our service has grown 25 percent every two months for the past six months."

There are also many cloud outages and availability issues that aren't reported, but have the same negative effects on cloud users. What we hear in the press is the tip of the iceberg.

I think this increase in outages caused by saturation is just the start. I suspect with the increased use of cloud computing this year and next, clouds falling over due to stress will be more commonplace.

The core issue is the saturation of resources by too many users doing too much on the cloud provider's servers. Putting any architecture and design issues aside for now, it's as simple as that -- it's also a very old problem.

Eric Nelson claimed Windows Azure Platform eBook Update #2 [is] 100 pages of goodness in this 6/8/2010 post:

I previously mentioned I was working on a community authored eBook for the Windows Azure Platform. Well, today I assembled the 20 articles that made it through to the end of the review process into a single eBook – and it looks (and reads) great. Still a lot more to do (and stuff in the way of me doing it) but as a teaser, here is the (very draft) table of contents.

Eric Nelson answers How do I cancel my Windows Azure Platform Introductory Special? (or any Subscription) in this 6/9/2010 post:

Short answer: Don’t! Just kidding :-)

Long answer:

I believe it is the same process as for other Microsoft Online Services – but I have never tried it. Hence please post a comment if you follow this successfully or not and I will amend.

From, search for “cancel” and you get:


What I am not clear about is whether an Introductory Special is classed as a trial. Either way, the answer is to contact support and ask to cancel. I would suggest you are fully armed with details of your subscription which you can get from signing in to

You can contact support via an online web form at


You can call them.

The details are again on the support page


In the UK you can call 0800 731 8457 or (0) 20 3027 6039 Monday – Friday 09:00 – 17:00 GMT (UTC).

I hope that helps.

<Return to section navigation list> 

Cloud Security and Governance

Giuseppe Andrianò’s Web Signage In The Windows Azure Cloud post of 6/11/2010 describes digital signage solutions for Windows Azure:

Web Signage is ahead of Dynamax’s planned release of the first digital signage software available through the Windows Azure cloud computing platform.

Edisonweb, the Italian software house that develops the Web Signage platform for digital signage, becomes officially a ‘Front Runner’ and announces the release and immediate availability of its solution for Windows Azure.


The solution consists of web based management application delivered as a service and a player software managing the multimedia content playback on digital signage displays. Web Signage Player has successfully passed compatibility tests conducted by Microsoft for the 32 and 64-bit versions of Windows 7.

Riccardo D’Angelo, CEO of Edisonweb, told us: “We are particularly proud of having achieved a first, in the software for digital signage arena, this important compatibility goal that will allow us to further shorten release and development times and increase both scalability and performance, thanks to the great flexibility offered by the Windows Azure platform.”

Compatibility with Microsoft’s Windows Azure cloud services strengthens the offering in international markets: both software and infrastructure will be available as a service supplied through the Microsoft data centers spread throughout the world. Web Signage is always supplied by the nearest Windows Azure datacenter to ensure the highest performance.

Also, thanks to the evolution from SaaS to IaaS, a set of more flexible distribution and resale agreements, including OEM and co-brand formulas, will be available to partners.

Edisonweb is a software house specialized in developing innovative web-based solutions. For over fifteen years it has developed applications in the e-Government, healthcare, info-mobility, tourism and digital marketing fields, creating high-performance, simple-to-use solutions. Edisonweb is a Microsoft Gold Partner and has achieved the ISV/Software Solutions competence, which recognizes commitment, expertise and superiority in using Microsoft products and services.

Lori MacVittie asserts Multi-Tenant Security Is More About the Neighbors Than the Model in her 6/9/2010 post to F5’s DevCentral blog:

Scott Sanchez recently rebutted the argument that “Cloud Isn’t Secure Because It Is Multi-Tenant” by pointing out that “internal data centers are multi-tenant today, and you aren’t managing them as well as a public cloud is managed.”

Despite the truth of that statement, his argument doesn’t take into consideration that multi-tenant cloud security isn’t just about the risks of the model, it’s about the neighbors. After all, there’s no such thing as a “renters association” that has the right to screen candidate tenants before they move in and start drinking beer on their shared, digital lawn in a public environment. When an organization implements a multi-tenant model in their data center the tenants are applications with the same owner. In a public cloud the tenants are still applications, but those applications are owned by any number of different organizations and, in some cases, individuals.


With the exception of co-location and dedicated hosting, this is essentially the same risk that caused organizations not to embrace the less expensive option of outsourcing a web application and its infrastructure. Once the bits leave the building there is a loss of control, of visibility, and of ability to make decisions regarding what will and more importantly what won’t run on the same machine as a critical business application. If the bits stay in the building as Scott points out there’s still very little control afforded to the business stakeholder but there’s also less need for such concerns because every application running in the internal data center is ultimately serving the same business.

“Unlike the public clouds, the resources of the private cloud are shared only within the corporate community. They're controlled by the corporation, not a third-party vendor that has the ability to lease them to anyone it chooses.”

— from “Private Cloud Computing Takes Off in Companies Not Keen on Sharing,” DailyFinance

And if somehow one of the applications in the data center – whether multi-tenant or not – manages to chew up resources or utilize so much bandwidth that other applications starve, IT can do something about it. Immediately. When everything is running in the same controlled environment the organization has, well, more control over what’s going on.

The public cloud multi-tenant model is different because the organization’s neighbors may not be Mr. Rogers; they might just be Attila the Hun. And even if they are harmless today there’s no guarantee they will be tomorrow – or in the next hour. There’s no way to know whether applications of the serious business kind or of the serious making-money-phishing kind are running on or near the organization’s application. And that’s important because there is very little (if any) visibility into the cloud infrastructure, which is also inherently multi-tenant and shared.

There’s no transparency, nothing that’s offered to assuage the fears of the organization. No guarantees of bandwidth even if the app next door starts spraying UDP packets like water from a fire-hose and saturates the physical network or any one of several intermediate network devices between the server and the boundary of the cloud provider’s network. In many cases, the customer can’t even be assured that its data (you know, the lifeblood of the organization) is actually isolated on the network from cloud boundary to application. They can’t be certain that their data won’t be silently captured or viewed by someone lucky enough to have rented the room above their store for the night. Deploying an application that handles highly sensitive data in a public cloud computing environment is very nearly a crap shoot in terms of what kind of neighbors you’ll have at any given point in the day. 

Lori continues with a “THEN THERE’S BUSINESS RISK” topic.

Dana Gardner goes to “[T]he intersection of cloud computing, security, Internet services, and best practices to uncover differences between cloud perceptions and reality” in his Adopting cloud-calibre security now pays dividends across all IT security concerns post of 6/9/2010 to his Briefings Direct blog for ZDNet:

Today’s headlines point to more sophisticated, larger-scale malicious online activities. For some folks, therefore, the consensus seems to be that the cloud computing model and vision are not up to the task when it comes to security.

But at the RSA Conference earlier this year, a panel came together to talk about security and cloud computing, to examine the intersection of cloud computing, security, Internet services, and Internet-based security practices to uncover differences between perceptions and reality.

The result is a special sponsored BriefingsDirect podcast and video presentation that takes stock of cloud-focused security — not just as a risk, but also as an amelioration of risk across all aspects of IT.

Join panelists Chris Hoff, Director of Cloud and Virtualization Solutions at Cisco Systems; Jeremiah Grossman, the founder and Chief Technology Officer at WhiteHat Security, and Andy Ellis, the Chief Security Architect at Akamai Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Dana continues with transcriptions of “a few excerpts.”

Cumulux outlines Cloud Governance requirements in this 6/10/2010 post:

Cloud Computing enables a tremendous amount of flexibility and scalability for deploying and managing your applications on the cloud. With this flexibility comes a list of items that have to be managed more closely than in traditional systems. Availability, security, privacy, location of cloud services and compliance are just some of the aspects of the cloud that have to be monitored and managed closely.

Governance in the Cloud is about defining policies around managing the above factors and tracking/enforcing those policies at run time when the applications are running. Different cloud vendors have varying degrees of flexibility when it comes to giving their clients access to the underlying infrastructure. It becomes imperative for businesses to understand these capabilities and define policies that mirror the needs of the business.

Governance Policy Definitions

During the design and development stage, it is important to establish rules and policies around how the various services in the cloud are going to be monitored and managed. The Quality of Service (QoS) of the underlying Cloud infrastructure and the Service Level Agreement (SLA) levels of both the platform and the application have to be monitored and tracked. Additionally, defining access policies to control which roles in the organization have access to the cloud environment is a key component of establishing governance policies. For example, governance policies should be defined for the following:

  1. Role-based access to establish control over who has access to deploy and manage cloud assets
  2. Metrics for monitoring the application’s performance and other business-critical KPIs
  3. Rules for defining critical levels of the metrics defined above
  4. Service Levels (SLAs) of both the application and the underlying infrastructure
  5. Quality of Service levels

Governance Policy Enforcement
One of the main attractions of the cloud is the ability to reduce “time to market” significantly. Cloud gives businesses the ability to roll out changes to applications almost instantaneously compared to traditional models. This capability comes with its own set of issues around versioning, upgrades and compatibility of services. Well-defined and enforced policies are a must to ensure robustness of a cloud-based application. Policies can be enforced through the following:

  1. Change Management Reports to track and log the changes happening to cloud assets
  2. Alerts and Notifications to ensure that changes are captured and bubbled up to decision makers in a timely fashion
  3. Threshold-based actions based on pre-defined rules – for example, automatically increasing the footprint (read: the number of load-balanced Cloud instances) if the performance of the system falls below certain threshold levels
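The third item can be sketched as a small decision rule (Python; the metric, threshold values and instance bounds here are hypothetical illustrations, not any particular cloud vendor's API):

```python
def scale_decision(instances, avg_latency_ms,
                   threshold_ms=500, min_instances=2, max_instances=20):
    """Threshold-based action: add an instance while the SLA metric is over
    threshold, remove one when comfortably under it, otherwise hold steady."""
    if avg_latency_ms > threshold_ms and instances < max_instances:
        return instances + 1
    if avg_latency_ms < threshold_ms / 2 and instances > min_instances:
        return instances - 1
    return instances

print(scale_decision(4, 800))   # -> 5 (over threshold: scale out)
print(scale_decision(4, 100))   # -> 3 (well under threshold: scale in)
print(scale_decision(4, 400))   # -> 4 (in the dead band: no change)
```

The dead band between the scale-out and scale-in thresholds keeps the rule from oscillating; in practice the decision would feed the cloud vendor's management API rather than just return a number.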

<Return to section navigation list> 

Cloud Computing Events

Kevin McLaughlin offered Five Observations From Microsoft's TechEd Show in this 6/10/2010 article for CRN’s ChannelWeb blog. Here are the first two:

1. VMware Gets Props From TechEd Attendees

VMware's booth was situated on the periphery of the TechEd show floor, but the company wasn't far from the minds of show attendees. VMware's vSphere 4 won the Best Of TechEd award in the virtualization category, and it also won the Attendee’s Pick award as the product rated highest by TechEd attendees across all categories.

It's an ironic twist in light of the feuding that's been going on between the two companies in the red-hot virtualization market. At VMworld last September, Microsoft and Citrix complained about being relegated to a tiny 10x10 booth, which was widely perceived as VMware's revenge for Microsoft handing out poker chips inviting VMware customers to switch to Hyper-V at the previous year's event.

We surely haven't seen the last of this fight, but for now, Microsoft says it's confident about the momentum that Windows Server 2008 R2 has achieved.

"The big thing we're seeing now is customers actively pursuing a Microsoft virtualized server environment," Dai Vu, director of virtualization solutions marketing for Microsoft, told CRN in an interview at TechEd. "Even if customers have traditionally invested in VMware ESX, they're now deploying Hyper-V to those environments."

2. Microsoft's New Server & Cloud Division Paying Off

Microsoft last December underwent a re-organization that included the formation of a new Server & Cloud Division (SCD) that united the Windows Server & Solutions and Windows Azure groups. Leadership of the Windows Azure development team moved from Chief Software Architect Ray Ozzie to Server and Tools Business (STB) President Bob Muglia, and Amitabh Srivastava, Microsoft senior vice president, was appointed as head of the SCD.

In an interview at TechEd, Bill Laing, corporate vice president of the Windows Server & Solutions Division, said bringing these technologies together has gone "incredibly well" and will help streamline development. "It really feels like one organization, and that was our goal," Laing said. …

Kevin continues with: “3. The New Mantra: Multi-Tenancy”, “4. Microsoft Uses Humor In Mobile Mea Culpa”, and “5. Microsoft Not Shy About Pointing Out Cisco's UC Flaws.”

James Miller announced an Avoid PCI Burnout with a Security-focused Approach Webcast on 6/15/2010 at 11:00 AM PT:

Join us on June 15th as Gartner analyst Avivah Litan discusses how organizations are utilizing PCI to protect cardholder data and pass audits.  You’ll also learn how thousands of customers are using Tripwire to implement a security-based solution that allows them to continuously maintain PCI compliance.

Topic:    Avoid PCI Burnout with a Security-focused Approach

Presenter:  Avivah Litan, Gartner VP Distinguished Analyst on Security and PCI Compliance

When:  June 15 @ 11:00 AM PT (2 pm Eastern)

Register Now (Site registration required.)

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Amazon Web Services summarizes Recent Announcements in this 6/11/2010 e-mail message and Web post:

This month, we are excited to recap several recent announcements, including new features for our content delivery, database, and storage services. Also, to make it easier to build applications in the cloud, we have added Amazon S3 and Amazon RDS to the AWS Management Console. If you are a developer or architect, check out the newly released whitepapers on Web hosting and building fault-tolerant applications on AWS.

  • News & Announcements
    • Amazon CloudFront Adds HTTPS Support, Lowers Prices, Opens NYC Edge Location
    • AWS Management Console Adds Support for Amazon S3 and Amazon RDS
    • AWS Import/Export Exits Beta and Announces Web Service Support
    • Amazon Elastic MapReduce for Hadoop 0.20, Hive 0.5, and Pig 0.6
    • Enhanced Data Protection with Multi-AZ Deployments for Amazon RDS
    • New Storage Option: Reduced Redundancy Storage (RRS)
    • Provide Feedback on AWS Support
    • Featured Case Study: Guardian News & Media
  • Developer Resources
  • AWS On The Road
  • Virtual Events

Stacey Higginbotham reports Exclusive: VMware in Talks to Acquire EngineYard in this 6/10/2010 post to the GigaOM blog:

UPDATED: VMware is back on the hunt for new startups as it looks to further raise its profile in the platform-as-a-service market. Sources tell me its latest target is EngineYard, the Ruby on Rails cloud provider that’s raised $37 million from the likes of Amazon, Benchmark, DAG Ventures and Bay Partners. Neither VMware nor EngineYard could be reached for comment. Update: A VMware spokesperson responded to my query in email by saying, “We don’t comment on speculation or rumors.”

VMware has been buying startups such as Zimbra, SpringSource and others as it works to transition from providing a hypervisor to offering higher-value services. Earlier this year it released its VMforce platform as a service built on’s infrastructure using SpringSource’s Java-based framework. Adding a Ruby-focused platform or capability makes a lot of sense, and EngineYard has been working on a transition of its own — moving away from startups and more toward the enterprise, where VMware’s focus is.

For a closer look at how the PaaS business is evolving, check out our panel dedicated to the topic (GigaOM Pro sub req’d) at our Structure 2010 conference in two weeks. If platforms as a service don’t interest you, perhaps VMware’s CEO Paul Maritz will explain in his keynote what else his company is looking for in its acquisition spree.

Mary Jo Foley reports Microsoft hits back on expanded Novell-VMware alliance in this 6/10/2010 post to her All About Microsoft blog for ZDNet:

It’s relatively rare that Microsoft execs comment officially on Redmond’s competitors. Something’s got to really hit a nerve before that happens. It seems that occurred this week, based on a June 9 post on the Microsoft Virtualization Team Blog.

Novell and VMware announced an expanded partnership on June 9, via which VMware will distribute and support the SUSE Linux Enterprise Server operating system. VMware also announced plans to standardize its virtual-appliance-based product on SUSE Linux Enterprise Server.

The newly minted deal didn’t sit well with Microsoft — especially because Microsoft execs love to trot out Novell as an example of Microsoft’s interoperability love. Microsoft and Novell announced a similar distribution and support deal a couple of years ago (which also included patent-protection clauses that irked a number of customers and players in the open source camp). And just last week, Microsoft execs highlighted new high-performance advances achieved by Novell and Microsoft in their joint lab in Cambridge, Mass.

In a June 9 post, entitled “VMWare figures out that virtualization is an OS feature,” Patrick O’Rourke, director of communications, Server and Tools Business, highlights the 3.5-year partnership between Microsoft and Novell, claiming it has benefited more than 475 joint customers.

“(T)he vFolks in Palo Alto are further isolating themselves within the industry. Microsoft’s interop efforts have provided more choice and flexibility for customers, including our work with Novell. We’re seeing VMWare go down an alternate path,” O’Rourke says. …

Mary Jo concludes:

What do you think? Nothing but a war of words? Or did this deal between Novell and VMware really hit Microsoft where it hurt?

Bruce McNee and Lee Geishecker report a new NetSuite OEM agreement with Rootstock Software in their Signposts: Cloud Business Solutions Begin to Expand Beyond Core Front and Back Office into Manufacturing Research Alert of 6/10/2010 for Saugatuck Technologies:

What is Happening?
Leveraging NetSuite’s core (horizontally focused) ERP / CRM / eCommerce suite – and its SuiteCloud PaaS development platform – Rootstock is the first of what could be several SuiteCloud partner solutions that NetSuite ultimately elects to brand as its own – as it fleshes out broader portions of the application stack. In total, Saugatuck believes that NetSuite has more than 300 SuiteCloud partners.
More importantly, we believe this announcement signals the beginning of a broader trend toward SaaS in Manufacturing and the Supply Chain. This Research Alert provides a quick recap of activity in the space, highlighting more than a half dozen serious players to watch. …

The authors continue with the usual “Why is it Happening?” and “Market Impact” topics.

Bob Familiar announced Office Web Apps (OWA) Released in a 6/9/2010 post to the Innovation Showcase blog:

Microsoft has released Office Web Applications (OWA). OWA comprises browser-based versions of Word, Excel, PowerPoint, and OneNote. Users can create, edit, and share notebooks, spreadsheets, documents, and slideshows free of charge, whether or not they have Office software installed on their computers.


SkyDrive is used as the storage location for your OWA documents, with each user getting 25 GB of free storage. One thing you will notice right away is that there is no Save option: the documents are automatically saved to your SkyDrive.

The functionality of the OWA applications, while trimmed down from their rich-client counterparts, is still quite impressive. I was able to create a PowerPoint presentation very quickly that had a great-looking template and 3D SmartArt.

Bob continues with a series of OWA screen captures.

<Return to section navigation list> 
