Wednesday, December 29, 2010

Windows Azure and Cloud Computing Posts for 12/29/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

Wayne Walter Berry (@WayneBerry) recommended on 12/27/2010 that you Know Your Data before moving to an SQL Azure Database:

Knowing your data might be the most important quality of the new database administrator (DBA). Just as a marketing person needs to know their customers and a CEO needs to know their market, a DBA needs to truly understand their data. This article is about how knowing your data becomes even more important with cloud databases like SQL Azure Database and NoSQL-style data stores like Cassandra and Windows Azure Storage.

SQL Server

With a scale up approach for an on-premise SQL Server database, the SQL Server DBA needed:

  • A good working understanding of the relationships in the data, i.e. how tables relate to each other. This allowed the DBA to develop foreign key to primary key relationships, one of the mantras of a relational database.
  • An understanding of how data was being inserted into and updated in the tables, which allowed the DBA to optimize queries for locking, partitions for performance, and transaction logs for growth.
  • An understanding of how data was queried from the database and how paging worked, which allowed the database administrator to tune the database with covered indexes and denormalization of data for better performance.
  • A sense of how much data was being added over a period of time, how to scale up the database to grow with the data, and how to warehouse that data.
SQL Azure

If you are designing your SQL Azure data store for growth, you need to design in a scale out approach. For more about scaling out your SQL Azure database, read: Scale Out Your SQL Azure Database. Scaling out divides your data across multiple databases, a concept that SQL Azure calls federation – SQL Azure’s term for sharding. For more about SQL Azure Federation, read: Intro to SQL Azure Federations: SQL Azure Federations Overview.

Federating your data breaks transaction boundaries and pushes SQL Azure toward a NoSQL database structure. The reason I call it a step toward NoSQL is that SQL Azure doesn’t allow you to have transactions that span multiple databases. Transactions are limited to a single database in part because the databases exist on different hardware; if a transaction had to travel across the network, there would be considerable performance degradation. Because SQL Azure runs on commodity hardware, the maximum database size is 50 Gigabytes. If you need a database bigger than 50 Gigabytes, you need to federate your data across multiple databases and across transaction boundaries. You lose the ACID guarantees of a single database – closer to NoSQL than to a relational database.

The DBA needs to understand more about the data in order to partition or federate it correctly. Specifically, the DBA needs to know:

  • How to distribute the data into acceptable transaction boundaries.
  • How to find the natural dividing lines of the data so that you can partition it across multiple databases.
Transaction Boundaries

Every query on SQL Azure exists within a transaction context – a principle of ACID relational databases. A transaction is treated in a coherent and reliable way, independent of other transactions. Just like SQL Server, SQL Azure is capable of creating a transaction context for any statement that you might want to execute against the database, and it can roll back any transaction in which a failure prevents the whole transaction from completing.
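
To make that concrete, here is a minimal, hedged ADO.NET sketch – the server name, table, and values are placeholders rather than anything from Wayne’s article – showing a transaction whose statements all target one SQL Azure database and roll back together on failure:

using System;
using System.Data.SqlClient;

class SingleDatabaseTransactionSketch
{
    static void Main()
    {
        // Placeholder connection string: point it at a single SQL Azure database.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=MyAppDb;" +
            "User ID=user@yourserver;Password=...;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    // Both statements hit the same database, so SQL Azure accepts the transaction.
                    new SqlCommand("UPDATE Accounts SET Balance = Balance - 10 WHERE AccountId = 1",
                        conn, tx).ExecuteNonQuery();
                    new SqlCommand("UPDATE Accounts SET Balance = Balance + 10 WHERE AccountId = 2",
                        conn, tx).ExecuteNonQuery();
                    tx.Commit();
                }
                catch
                {
                    // A failure anywhere undoes the whole unit of work.
                    tx.Rollback();
                    throw;
                }
            }
        }
    }
}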

Regardless of SQL Azure’s transaction capabilities, typically very few of the statements executed against the database need a transaction context. For example, when selecting data with a forward-only, read-only cursor that only row-locks, the transaction context is restricted to the row being read. However, what if that table is only updated when the application is offline? The table doesn’t require any locking for reads – since there are never any writes. The database doesn’t know this; only the application does – or more specifically, the application developer.

The premise of the NoSQL database is that the application developer knows every case where they need a transaction context and every case where they do not. Since true transaction needs are typically very low, the application developer programs all transactions into the client code – and doesn’t require a database that supports transactions. They might even program the rollback code.

Since the NoSQL developer knows the data (some might say better than the relational database developer), they can program their application to use multiple databases to store the data – in other words they know how to scale out using homegrown transactions on the client application.

So they can use commodity hardware in the cloud, which is significantly more cost-effective than on-premise SQL Server installations managing the same data volume.

Dividing Lines

If the application is highly dependent on transactions, the developer can keep all transactions within a single database by partitioning the data on transaction boundaries. For example, if the application supports software as a service (SaaS) used by multiple users in such a way that no user modifies another user’s data, the user is the transaction boundary. All the data for a user is stored in the same database. This way, all the reads and writes for that user happen together in a single database and don’t span databases. Each database holds a set of users that it can support; however, no two databases hold data for the same user.
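
To make the idea concrete, here is a hedged C# sketch – the user-to-database map, database names, and helper class are illustrative assumptions, not code from Wayne’s article – of routing each SaaS user to the single member database that owns all of that user’s data:

using System.Collections.Generic;
using System.Data.SqlClient;

// Illustrative shard map: every user lives entirely in exactly one member database,
// so any transaction for that user stays inside a single SQL Azure database.
public static class UserShardRouter
{
    private static readonly Dictionary<string, string> UserToDatabase =
        new Dictionary<string, string>
        {
            { "alice", "AppDb_Member01" },
            { "bob",   "AppDb_Member01" },
            { "carol", "AppDb_Member02" }
        };

    public static SqlConnection OpenConnectionFor(string userName)
    {
        // Placeholder server and credentials.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "tcp:yourserver.database.windows.net",
            InitialCatalog = UserToDatabase[userName],
            UserID = "user@yourserver",
            Password = "...",
            Encrypt = true
        };

        var connection = new SqlConnection(builder.ConnectionString);
        connection.Open();
        return connection;
    }
}

Because every statement for a given user targets one database, the transaction boundary never has to cross the network.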


Knowing your data allows you to “see” the natural transaction boundaries and gives you an edge on scaling out. I foresee the role of the DBA trending away from hardware maintenance and toward database architecture.

Wayne also posted Scale Out Your SQL Azure Database on 12/24/2010 (missed when posted):

Scaling up an on-premise SQL Server means adding more computing resources to your existing hardware, such as memory, processing power, or storage. The purpose is to allow your SQL Server to handle more queries, gain performance, or store more data. When you do decide to go to the cloud, you should realize that SQL Azure doesn’t scale up like SQL Server – it scales out. Scaling out is adding additional machines with the same data and dividing the computation between them. This article will discuss the differences between scaling up and scaling out and how you need to plan for scale out on SQL Azure to gain performance.

Due to the hardware and licensing costs of SQL Server, it is much easier to add resources to existing hardware than it is to deploy an additional replicated SQL Server on new hardware to gain performance.

The problem with scaling out (adding more machines) in an on-premise SQL Server installation is that it adds a degree of complexity to the design that deters DBAs from this path. The DBA now has to consider clustering, deploying a replication schema, etc. It is easier to purchase bigger hardware and redeploy your current databases on it, replacing the existing hardware, than it is to deploy a cluster.

In essence, the SQL Server DBA has been trained to scale up; however, hardware costs prohibit infinite scale up. If you have to double your server capacity (RAM, disk, CPU) by purchasing a new server every time you double your workload and you can’t put the old server to use, your server costs get exponentially greater. In contrast, if you can add an additional server to a SQL Server cluster every time your workload doubles (scaling out), then your expenses grow at the same rate as your demand for resources. This is the beauty of scale out – additional resources cost you the same as the current resources – your costs grow linearly with your hardware.

However, most DBAs rely on Moore’s law: hardware speeds, disk storage, and memory capacity double every two years. This means that you can double the performance at the same price as the original cost of your two-year-old servers. So, if your workload doubles every two years, you end up purchasing a server twice as big (to scale up) at the same cost as the old server. In this case, all you have to do is purchase a server that will meet your current needs for two years – and hope that your capacity needs do not double beyond that.

SQL Azure

Currently, you cannot scale up SQL Azure processing and memory resources – you are allocated a fixed amount of computing and memory resources per database. Through SQL Azure load balancing, you might gain more resources on a particular machine in the data center, but you cannot crack the case and add more RAM to SQL Azure. For more information see: Inside SQL Azure.

You can, however, scale up SQL Azure storage to the limit of 50 Gigabytes. This is the same as increasing the storage system on an on-premise SQL Server. After 50 Gigabytes you need to scale out your storage.

The inability to scale up confuses many people and leads them to think that SQL Azure doesn’t scale at all. It does scale – it scales out. Not only does it scale out, it scales out at a fixed unit cost; doubling the resources roughly doubles the price.

The table in Wayne’s original post will help you determine your options for scaling:


* You have other options for better performance than just scaling; you can also optimize your queries. See Improving Your I/O Performance for more information.

Scaling Out With SQL Azure

In order to scale out with SQL Azure, you need to be able to partition your data, dividing the database across multiple databases. Each of these databases represents a fixed amount of computing and memory resources; the more databases that you divide your data across, the more computing and memory resources you have at your disposal.

Usually scaling involves work that a DBA is used to doing, for example:

  • Adding more RAM.
  • Upgrading the storage system.
  • Clustering two on-premise SQL Servers.
  • Optimizing stored procedures.

However, partitioning your data involves working with the developers who wrote the application that queries the database. Currently, you can’t scale out your SQL Azure database without modifying the client access layer to be aware of the partitioning. This is generally the developer’s area, not the DBA’s.

Vertical partitioning divides a table into multiple tables that contain fewer columns. For SQL Azure, we take this one step further: the tables the data is stored in reside in different databases. In other words, you scale out your storage, memory, and processing power across multiple databases. For more information see: Vertical Partitioning.

Horizontal partitioning divides a table into multiple tables. Each table then contains the same number of columns, but fewer rows. Like vertical partitioning, horizontal partitioning scales out your storage, memory, and processing power across multiple databases. For more information see: Horizontal Partitioning.
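
As a hedged illustration – the database names, key range, and helper below are assumptions made for the sketch, not Wayne’s code – a range-based horizontal partitioning scheme might route each row to one of several SQL Azure databases by its partition key:

using System;

// Illustrative range-based horizontal partitioning: rows for CustomerId below 1,000,000
// live in one SQL Azure database, the rest in another. All names are placeholders.
public static class CustomerPartitionMap
{
    public static string DatabaseFor(int customerId)
    {
        return customerId < 1000000 ? "OrdersDb_Partition1" : "OrdersDb_Partition2";
    }

    public static string ConnectionStringFor(int customerId)
    {
        // The client access layer builds the connection string from the partition map,
        // which is why the application code has to be partition-aware.
        return string.Format(
            "Server=tcp:yourserver.database.windows.net;Database={0};" +
            "User ID=user@yourserver;Password=...;Encrypt=True;",
            DatabaseFor(customerId));
    }
}

Queries that span key ranges have to fan out to every partition database and merge the results in the client.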

Another option is to shard your database, using multiple databases to store the same data; each database contains an identical copy of your data. With sharding, you scale out the processing power and RAM; however, the amount of data stored stays the same. The benefit of sharding is that your application doesn’t have to make multiple queries to collect all the data – since all the data can be found in any one of the many databases. However, for each additional database you are paying for the redundant data to be stored – paying for data storage to gain performance. For more information see: Sharding with SQL Azure.


In order to scale efficiently, whether with an on-premise SQL Server or SQL Azure in the cloud, you need to be able to scale out. In order to scale out, you need to design databases that scale – which means that the application developers need to be involved in designing a database that can be vertically or horizontally partitioned, or sharded.

<Return to section navigation list> 

MarketPlace DataMarket and OData

Patrick Riley (@priley86) demonstrated integrating a WP7 app with the DataMarket in his To the Cloud! post of 12/29/2010:

This week I am taking some time to discuss integrating your WP7 app with Windows Azure and the Microsoft “cloud”. This can be hugely beneficial if you happen to write the next “Farmville”. The cloud offers the possibility to scale up and out, and in a matter of minutes.

In this post, I will show a quick demo and the tools you’ll need for constructing your all-embracing nimbus. Let’s get the basics first. You’ll need IIS 7, the WP7 developer tools, and the Windows Azure SDK. Once you get those out of the way, head over to the MSDN All-In-One Code team blog and download the demo. Ahhh, now for the fun stuff.

But first a bit of background. The key to the Windows Azure platform has been neatly constructed in something we like to call roles – runtimes for execution, that is. The beauty of it all is that they can be quickly targeted to any platform “in the cloud”, and instances can be replicated instantly in your configuration file, or within the Azure platform AppFabric. This enables you to scale out as many web applications (web roles) across several endpoints running as many background processes (worker roles) as you need, all at the push of a button. I know…I was excited too.

This demo showcases the Windows Azure DataMarket, another service provided by Microsoft’s cloud, which allows you to consume cloud data using a REST service and the OData specification. Read the whitepaper for more fascinating tidbits. The Azure DataMarket requires your Microsoft Live ID and a data service subscription. Luckily, this demo uses a free Web Crime service; there are several others in the store which provide real-time data feeds.

Enough chit chat, let’s see this in action. First you’ll want to pull down your Account subscription key and add it to the CrimeService.svc.cs file, i.e.:

private static string azureDataMarketAccountKey = "ey1Oab3KdvlEkIhk=";

public List<CityCrimeWeight> GetCrimes()
{
    List<CityCrimeWeight> results = new List<CityCrimeWeight>();

    datagovCrimesContainer svc = new datagovCrimesContainer(new Uri(""));
    svc.Credentials = new NetworkCredential("yourLiveId", azureDataMarketAccountKey);

    // Obtain all crime data in New York.
    var query = from c in svc.CityCrime where c.State == "New York" select c;

    // (the remainder of the method is in the downloadable demo)

Next, fire up the demo in VS 2010 along with the Windows Azure Compute Emulator. This will enable you to watch the web roles boot up. Here you can see I have specified two instances in my service configuration (cscfg) file. The UI emulator will show your service endpoints along with your role runtime statements.

Once you have this piece in play, you’re almost ready to go. You can fire up your WP7 client application (preferably in another instance of Visual Studio) and see your cloud services at work. You may want to change the ServiceAddress property in the MainPage code behind depending on where the CrimeService web role was deployed, i.e.:

private static string ServiceAddress = "";

This demo shows how easy it is to expand your WP7 application into an Azure DataMarket Cloud Consuming Machine! Hope you enjoyed it.

Jeffrey Foreman described Windows Azure Cloud Mashups with DataMarket and JackBe in a 12/15/2010 post (missed when published):

Mashup, n. An audio recording that is a composite of samples from other recordings, usually from different musical styles.

Mashups in music are fun. Mashups in cloud computing can be powerful and even liberating for businesses. When creating a web application in a cloud environment, why should you be limited to your own creations if useful tools already exist? By creating a web mashup, you blend multiple web tools into your own application to create something that gives end users everything they need.

JackBe is a mashup tool provider using Microsoft’s Azure DataMarket to develop dashboard apps. By running JackBe in Azure, end users can create their own web application mashups, making use of data sources (SQL Azure, Dallas, or OData) already available in the Windows Azure Marketplace.

One example JackBe highlighted featured a logistics app that plans routes for delivery of perishable foods. Keeping food fresh is a priority with the delivery, so the mashup makes use of data sources from the marketplace such as:

  • Bing maps with Navteq dynamic routing
  • Microsoft Dynamics on Demand customer order data and real-time Weather Central data.
  • Microsoft SharePoint – sharing online information about the location and routes of delivery trucks
  • Azure DataMarket: constantly updated fuel prices and the locations of filling stations.

With advanced cloud computing power, businesses have the tools they need to blend streams of content seamlessly into their web applications, and Windows Azure, along with JackBe, makes this possible. For more information about JackBe’s mashups in Microsoft’s Azure Marketplace, you can watch their video walkthrough of the Open Government Data Mashup.

Julie Strauss posted Optimized DataMarket integration available with updated version of PowerPivot to the PowerPivot Team blog on 10/29/2010 (Missed when published; thanks to Andrew Brust for the heads up):

Yesterday at PDC2010, Microsoft announced the RTW of Windows® Azure™ Marketplace DataMarket (DataMarket). Windows® Azure™ Marketplace DataMarket was formerly known by the codename “Dallas”.

As some of you already know, we have been working with the DataMarket team for a while to ensure that PowerPivot had a great integrated experience with the DataMarket preview releases. The goal is to provide a seamless experience allowing developers and IWs to analyze commercial and public-domain data from DataMarket directly in PowerPivot with just a few clicks!

What does it mean?

It means that PowerPivot had to undergo a slight facelift – partly to accommodate better discoverability of the DataMarket integration, and partly to optimize the usability of connecting to DataMarket data feeds. The changes we have implemented are relatively small, but should make a significant difference for those of you who will be working with DataMarket data feeds.

It should be mentioned that even if you do not have the updated version of PowerPivot installed, you can obviously still connect and use the DataMarket data feeds using the standard data feed user interface. It may just take a little more effort :) (for more details see the previous blog post)

What do the changes look like?

When working in PowerPivot:

  • You will see that a dedicated DataMarket control has been added to the PowerPivot ribbon


  • When selecting the DataMarket ribbon control you will see a slightly modified version of the standard data feed Import Wizard page, which has been optimized for more efficient import of DataMarket data feeds. Modifications include a brief introduction to DataMarket, specific entry points for the account key, as well as links allowing you to navigate directly to your account key or to the page for browsing available datasets on the DataMarket


  • The rest of the wizard pages have not undergone any changes and are the same as the original data feed pages

When accessing from DataMarket:

  • You have the option of analyzing your dataset by launching PowerPivot directly from the DataMarket web site


  • Logic has been added to the service document allowing PowerPivot to seamlessly identify your feed as a DataMarket feed (rather than a standard Atom feed), taking you directly to the dedicated DataMarket dialog

When is it available?

The updated PowerPivot msi was made available for download from the PowerPivot download page today!

Where to learn more?

To learn more, please visit the SQL Server team blog, the DataMarket team blog or simply click here to watch the video showcasing the PowerPivot and DataMarket integration in action!

The current PowerPivot build is 10.50.1747.0 of 12/21/2010. The 32-bit version failed to work for me on 32-bit Excel 2010 running under 64-bit Windows 7; startup hangs forever when “Preparing the PowerPivot window.” The 64-bit version works as expected with 64-bit Excel running under 64-bit Windows Server 2008 R2 (new install), as well as a virtual 64-bit Windows 7 guest OS with a Windows Server 2008 R2 host OS (upgraded build).

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Alik Levin described Windows Azure AppFabric Access Control Service (ACS) v2 – Error Codes and How To Troubleshoot And Fix Them on 12/29/2010:

The ACS Error Codes page is an invaluable resource when you need to troubleshoot and fix errors related to Windows Azure AppFabric Access Control Service (ACS) v2. It is designed to concentrate in one place the following information: Error Number, Error Message, and How To Fix columns. It is also broken into several categories so it’s easier to scan through it looking for related materials and find the priceless answer faster. The categories are:

  • Active federation protocol errors, including SOAP and WS-Trust
  • OpenID protocol errors
  • Facebook Graph protocol errors
  • General security token service errors, including identity provider metadata
  • Rules engine, data, and management service errors
  • OAuth 2.0 protocol errors
  • Other errors

Bookmark it; it is a work in progress, but much work is done already, so the rubber can hit the road.

Related Books
Related Info

* If you haven’t done so already, check out Alik’s comprehensive Windows Identity Foundation (WIF) and Azure AppFabric Access Control Service (ACS) Survival Guide post on the Microsoft TechNet Wiki, last updated on 11/25/2010. Highly recommended!

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

Wely Lau explained How to establish virtual network connection between cloud and on-premise with Windows Azure Connect (Part 3– Activating and Enabling Windows Azure Connect) in a 12/29/2010 post:

This is the third post of Windows Azure Connect blog post series, please check out the first post on how to set up on-premise server and the second post for preparing the application.

Now let’s move to the main part of this series which is the configuring the Windows Azure Connect.

Requesting Beta Access to Azure Connect

At the time this post was written, Windows Azure Connect was in beta status. Therefore, if you do not yet have access, log on to the Windows Azure Developer Portal, click on Beta Programs, and click on the appropriate service.


Having done so, you will probably need to wait some time (it may vary from days to weeks) for access to be activated.

Activating Azure Connect

Assuming your request has been granted, on the Windows Azure Developer Portal, click on the Virtual Network link at the bottom-left of the page.


Select your appropriate subscription and click OK to activate your Windows Azure Connect when a pop-up occurs.

Applying Activation Token to Windows Azure Role


The next step is to get a token and apply it to our Windows Azure project. The intention of doing this is to tell Windows Azure Connect that this particular role is “Windows Azure Connect-enabled”.

To do that, click on Get Activation Token. When the pop-up occurs, copy the token by clicking on Copy to Clipboard.


If a Silverlight dialog shows up, click Yes to accept.

Go back to your Visual Studio solution and double-click on your intended role (in my case, WebRole1). Click on the Virtual Network tab, check the Activate Windows Azure Connect checkbox, and paste the token key that we copied from the portal earlier.


Installing Windows Azure Connect Endpoint on On-premise Machine

The next step is to install a small agent on our on-premise machine. Windows Azure Connect has the ability to connect back to the on-premise machine through this small agent.

Go back to the Windows Azure Developer Portal. To get the agent, click on the Install Local Endpoint button and copy the URL to the clipboard.


Paste it into your browser to download the agent. Please note that at the moment, we can only use IE to download.

When the download finishes, install the agent by following the wizard.


When it’s successfully installed, you will see a small icon on your taskbar.

You can click on Open Windows Azure Connect and see the details.


As expected, the agent is not connected since it has not been configured.

Go back to your Windows Azure Developer Portal; you will now see that your machine name (including the details of that machine) is shown in the Groups and Roles section.


In the next post, I’ll show you how to group the on-premise instance with cloud instances.

Wely’s live Windows Azure Connect Application reported the following problem when I tried it on 12/29/2010 at about 2:00 PM PST:


Jim O’Neill continued his series with Azure@home Part 13: Remote Desktop Configuration on 12/28/2010:

This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil.  Be sure to read the introductory post for the context of this and subsequent articles in the series.

As you likely know by now, the web and worker roles deployed as part of Azure@home, or as part of any Azure application for that matter, are ultimately hosted in a virtual machine (VM) running somewhere in the Azure data center that you selected for your application.  Until recently though, that was about all you knew.  The VMs are managed by the Azure fabric running through the data center - from the point you deploy your code, through upgrades, hardware failures, and operating system updates.  Sometimes though it would be great to take a look at your running application for monitoring, troubleshooting, or simply curiosity!

In this post, I’ll cover how to set up the deployment of an Azure application, specifically Azure@home, for remote desktop access, and in a follow-up post, I’ll take some time to poke around at live instances of the WebRole and WorkerRole within Azure@home.

Configuring the Application for Remote Desktop
Visual Studio Tooling Support

The easiest way to set up an application for Remote Desktop access is when you publish the Azure cloud service project from Visual Studio.  A link on the Deploy Windows Azure project dialog brings up a second dialog through which you can enable and configure Remote Desktop access.

Remote Desktop Configuration

Remote Desktop configuration requires installing a certificate in Windows Azure that will be associated with the hosted service (namely your deployed Azure application).  You can browse the certificates (in your Personal certificate store) via the dropdown list on the Remote Desktop Configuration dialog and select one, or just create a new one.

Certificates for Remote Desktop access require a KeySpec of AT_EXCHANGE versus AT_SIGNATURE.  If you are using the makecert utility, specify the -sky switch with a value of exchange (or 1).   Exchange keys can be used for signing and encrypted session key exchange, whereas signature keys are only for authentication (digital signing).  certutil is a great resource for peering into your certificate configurations and seems to provide more information than the Certificate Manager tool (certmgr.exe) or MMC plug-in (certmgr.msc).
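
For example – a hedged sketch in which the subject name and store are placeholders, not values from Jim’s post – a self-signed certificate with an exchange key can be created and dropped into the Personal store like this:

makecert -r -pe -n "CN=AzureRemoteDesktop" -sky exchange -ss My

Here -r creates a self-signed certificate, -pe marks the private key as exportable, -sky exchange sets the key type the Remote Desktop tooling expects, and -ss My places the certificate in the Personal certificate store.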

If you elect to create a new certificate (via the last option in the certificate drop down list), you’ll be prompted for a friendly name.  Next, you specify a user name, password, and expiration date; these are used to set up a temporary account on the remote VM which you will be accessing.

Remote Desktop Configuration

The net effect of these modifications is to add configuration settings in both the ServiceDefinition.csdef and ServiceConfiguration.cscfg files associated with the Windows Azure cloud services project.  You can also make these modifications (detailed next) in the XML editor directly.

ServiceDefinition Modifications

In the definition file, module import directives are associated with each of the roles in the Azure application.  For Azure@home, the updated file looks like the following; the Import elements are the additions.

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="AzureAtHome" upgradeDomainCount="2" xmlns="">
  <WebRole name="WebRole">
    <Sites>
      <Site name="Web">
        <Bindings><Binding name="HttpIn" endpointName="HttpIn" /></Bindings>
      </Site>
    </Sites>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="DataConnectionString" />
    </ConfigurationSettings>
    <Endpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </Endpoints>
    <Imports>
      <Import moduleName="RemoteAccess" />
    </Imports>
  </WebRole>
  <WorkerRole name="WorkerRole">
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="DataConnectionString" />
      <Setting name="FoldingAtHome_EXE" />
      <Setting name="AzureAtHome_PollingInterval" />
      <Setting name="AzureAtHome_PingServer" />
    </ConfigurationSettings>
    <LocalResources>
      <LocalStorage name="FoldingClientStorage" cleanOnRoleRecycle="false" sizeInMB="1024" />
    </LocalResources>
    <Imports>
      <Import moduleName="RemoteAccess" />
      <Import moduleName="RemoteForwarder" />
    </Imports>
  </WorkerRole>
</ServiceDefinition>

Each role in the project imports a RemoteAccess module, which means every VM hosting an instance of those roles will be eligible for remote access.

Also, exactly one role in the project (here, the WorkerRole) imports the RemoteForwarder module.  The role marked as RemoteForwarder receives all of the Remote Desktop traffic that flows to the application and then routes it to the appropriate role instance.  You can mark an existing role as the RemoteForwarder or use a dedicated role (perhaps deployed on an extra-small instance) to better isolate the Remote Desktop traffic from your own application role instances.

ServiceConfiguration modifications

The ServiceConfiguration.cscfg file is likewise modified to contain specific values relating to the remote desktop access.  Below you can see new ConfigurationSettings corresponding to the remote account user name, password, and expiry.  Also included is a separate Certificates section to indicate the certificate used to exchange the account information via the Remote Desktop session.  Configuration for the WebRole is identical with the exception that there is no markup to enable the RemoteForwarder, since only one role in the Azure project needs to incorporate and enable this module.

<Role name="WorkerRole">
  <Instances count="5" />
  <ConfigurationSettings>
    <Setting name="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName=azureathome;AccountKey=REDACTED" />
    <Setting name="DataConnectionString" value="DefaultEndpointsProtocol=https;AccountName=azureathome;AccountKey=REDACTED" />
    <Setting name="FoldingAtHome_EXE" value="Folding@home-Win32-x86.exe" />
    <Setting name="AzureAtHome_PollingInterval" value="15" />
    <Setting name="AzureAtHome_PingServer" value="" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="jimoneil" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="REDACTED" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-01-27T23:59:59.0000000-05:00" />
    <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
  </ConfigurationSettings>
  <Certificates>
    <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>

If you only wish to expose one role for remote access, you can do so, but you have to modify the configuration file XML yourself.  In fact, if you change any of the Remote Desktop configuration values generated by Visual Studio (even flipping a value from true to false), when you subsequently click the Configure Remote Desktop connections… link in the Visual Studio publish dialog, you’ll get a message box indicating that the Remote Desktop configuration was not generated by Windows Azure Tools, and you must use the Xml editor to modify the configuration.

Create Service Package

As the final step in Visual Studio, I’m opting to create the service package directly, via the Publish option.  As you might recall, this creates two files on the client machine, a .cspkg file and a copy of the .cscfg file, that you can then upload via the Windows Azure Management Portal into a new hosted service.

Alternatively, you can set up credentials on the client to deploy directly to Azure; however, to do so you must first set up a hosted service in the Azure portal and also install a management certificate to associate with your Windows Azure subscription.

In either case, the Remote Desktop Configuration aspect of the deployment remains the same as what we configured earlier in this article.

Configuring the Azure Service Deployment for Remote Desktop

In the Windows Azure Management portal (I’m using the new portal launched after PDC 10), you can deploy the package and configuration files and install the certificate required for remote desktop access in the same step.

Exporting the Remote Desktop Certificate

First though, you’ll need to export the certificate created above into a PKCS#12 formatted file stored on your local machine.  You can do that easily via the Certificate Export Wizard launched from the certmgr.msc MMC plugin, as shown in the short video clip below (note: there is no audio).

[See original post for embedded video clip.]

Deploying the Azure Service (and Certificate)

Now that you have the certificate file, service configuration document, and service package file on your development machine, you’re ready to deploy the service to Azure via the Windows Azure Management Portal.  You can browse to the two service files directly from the Create a new Hosted Service dialog (below) and of course, select the target data center region, service URL, and deployment type (staging or production). 

The Add Certificate button spawns a separate dialog (stylized below) via which you browse for the certificate file exported earlier and also supply the password that secures private key in the .pfx file (this is the same password you specified when exporting the certificate via the wizard in the previous step).

Creating a new service on Azure

Once the service has been deployed, you’ll see something along the lines of the following in the Windows Azure Management Portal.  Note that each of the individual role instances is represented; there are two instances of the Azure@home web role and five instances of the worker role in this deployment.  Also included below the highlighted service is the certificate we uploaded with the cloud application.

Azure@home deployed

Accessing an Instance via Remote Desktop

The Windows Azure Management Portal allows you to modify the Remote Desktop configuration as well as launch a Remote Desktop session.

Modifying Remote Desktop Configuration

When you select a Role node (not an instance node) in a deployment configured for Remote Desktop, the Enable checkbox and Configure option in the upper right of the Windows Azure Management Portal ribbon menu are enabled.  Choosing Configure brings up the dialog below to set the credentials for the Remote Desktop session as well as modify the certificate and expiration date (values we initially supplied in the Remote Desktop Configuration dialog in Visual Studio).

Remote Desktop configuration in the Windows Azure portal

Connecting via Remote Desktop

To connect via Remote Desktop to a specific instance running in the Azure data center, select a node instance, and then press the Connect button in the upper right of the portal’s ribbon menu, as shown below:

Launching Remote Desktop from the Windows Azure portal

You’ll be prompted to access a file with a .rdp extension that’s downloaded on your behalf; press Open, then Allow, and finally Connect if and when prompted; you can elect not to be prompted again on subsequent connection attempts.

Prompt to Open/Save Remote Desktop file

Prompt to allow Remote Desktop access

Prompt to confirm trust

Tip: save the .RDP file locally and you can bypass the Windows Azure Management Portal whenever you want to Remote Desktop in to the VM instance again.

When you press Connect on this final dialog (which is noting, as we’d expect, that the self-signed certificate provided cannot be verified), the Remote Desktop session starts up, prompting you for your credentials (see right), namely the user name and password you specified in the Visual Studio Remote Desktop Configuration dialog at the outset (or that you perhaps modified via the Windows Azure Management Portal).

Once you have supplied valid credentials, you should be logged in to the VM instance hosting the selected role, as shown below!  If it’s the first time you’ve accessed this VM, you’ll see that your user account is first configured – as it would be for any new user on a Windows system.  Once that’s done (and presuming the VM has not restarted), on subsequent logins you’ll be able to resume whatever you were doing in your last session.

Remote Desktop session to Azure instance

At this point the VM is yours to explore, and that’s where we’ll pick up things on the next post in this series.

Jim is a Developer Evangelist for Microsoft covering the Northeast District.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Nuno Filipe Godinho (@NunoGodinho) described Debugging Windows Azure Application with IntelliTrace on 12/29/2010:

Microsoft introduced IntelliTrace (previously called the Historical Debugger) in Visual Studio 2010 Ultimate Edition. It provides a way to debug against historical data by saving the information needed to debug an application even when we aren’t running live code or able to reproduce a specific behavior.

This capability is very important when we use it locally on our machines, but it is even more important in the cloud, where we don’t know what’s really happening. So here is how we can use IntelliTrace to debug Windows Azure roles.

1. Publish our Cloud Service and define that it should have IntelliTrace enabled.


2. After that, we should specify exactly what we want to be logged by IntelliTrace so we can analyze it afterwards.


     3. Now select the elements you need.


  • “IntelliTrace events only” vs. “IntelliTrace events and call information”: this defines exactly what information you need – whether the events alone are enough, or whether you also need the call stack, variable values, and so on.

4. Select which modules to Log


5. Select which processes to log


6. Select the Events we would like to be logged and used by IntelliTrace


7. Select the maximum amount of disk space that can be used by IntelliTrace


8. Now that all the settings are in place, just publish it


  Now just wait for it to be published


  Now that it’s published and running, let’s analyze the IntelliTrace logs

1. Open the Server Explorer and check that you have the recently installed environment with IntelliTrace


2. Now select an instance, right-click on it, and select the View IntelliTrace Logs option


3. Now just wait for the Logs to be downloaded


4. When the logs are downloaded you’ll get a summary of your IntelliTrace logs like this


    • This IntelliTrace summary has information about:
      • Threads that are being used
      • Exception Data list
      • System Information
      • Modules Loaded

5. Now just select the Start Debugging button to start using IntelliTrace


6. Don’t forget to show the IntelliTrace windows for Events and Calls


Hope this helps you better understand what’s happening with your Windows Azure cloud services.

Dominick Baier asked Windows Azure Diagnostics: Next to Useless? and answered “Yes!” in this 12/29/2010 post:

To quote my good friend Christian:

“Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.” [See post below.]


image The .NET Framework has this wonderful facility called TraceSource. You define a named trace and route that to a configurable listener. This gives you a lot of flexibility – you can create a single trace file – or multiple ones. There is even nice tooling around that. SvcTraceViewer from the SDK let’s you open the XML trace files – you can filter and sort by trace source and event type, aggregate multiple files…blablabla. Just what you would expect from a decent tracing infrastructure.

Now comes Windows Azure. I was already very grateful that starting with SDK 1.2 we finally had a way to do tracing and diagnostics in the cloud (kudos!). But the way the Azure DiagnosticMonitor is currently implemented could be called flawed.

The Azure SDK provides a DiagnosticMonitorTraceListener – which is the right way to go. The only problem is the way this works: all traces (from all sources) get written to an ETW trace. Then the DiagMon listens to these traces and copies them periodically to your storage account. So far so good.
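
For context, a minimal sketch of that flow – assuming the SDK 1.2/1.3-era diagnostics API; the transfer period and log level are arbitrary choices, not Dominick’s – looks something like this in a role’s OnStart:

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Route System.Diagnostics tracing through the Azure listener
        // (usually wired up in web.config/app.config rather than in code).
        Trace.Listeners.Add(new DiagnosticMonitorTraceListener());

        // Ship the buffered ETW traces to the WADLogsTable in the storage account
        // every five minutes.
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

        Trace.TraceInformation("Role starting");
        return base.OnStart();
    }
}

Every Trace.* call then surfaces as a row in the WADLogsTable in your storage account.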

But guess what happens to your nice trace files:

  • the trace source names get “lost”. They appear in your message text at the end. So much for filtering and sorting and aggregating (regex #fail or #win??).
  • Every trace line becomes an entry in an Azure Storage Table – the svclog format is gone. So much for the existing tooling.

To solve that problem, one workaround was to write your own trace listener (!) that creates svclog files inside of local storage and use the DiagMon to copy those. Christian has a blog post about that. OK done that.

Now it turns out that this mechanism does not work anymore in 1.3 with FullIIS (see here). Quoting:

“Some IIS 7.0 logs not collected due to permissions issues...The root cause to both of these issues is the permissions on the log files.”

And the workaround:

“To read the files yourself, log on to the instance with a remote desktop connection.”

Now then have fun with your multi-instance deployments….

So the bottom line is that currently you cannot copy IIS logs, FREB logs, and everything else that gets written by W3WP. Nice…


Hopefully, the Windows Azure team will fix this problem in a future SDK version.

Christian Weyer reported Writing trace data to your beloved .svclog files in Windows Azure (aka XmlWriterTraceListener in the cloud) fails with Windows Azure SDK v1.3 at the end of a 12/29/2010 post:

Tracing is probably one of the most discussed topics in the Windows Azure world. Not because it is freaking cool – but because it can be very tedious and partly massively counter-intuitive.

One way of doing tracing is to use System.Diagnostics features like traces sources and trace listeners. This has been in place since .NET 2.0. Since .NET 3.0 and the rise of WCF (Windows Communication Foundation) there was also extensive usage of the XmlWriterTraceListener. We can see numberless occurrences of the typical .svclog file extension in many .NET projects around the world. And we can view these files with the SvcTraceViewer.exe tool from the Windows SDK.

All nice and well. But what about Windows Azure?
In Windows Azure there is a default trace listener called Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener from the Microsoft.WindowsAzure.Diagnostics assembly.

If you use this guy and want to trace data via trace sources, your data will be stored in Windows Azure Storage tables. Take some time to play around with it and find out that the data in there is close to useless and surely not very consumer-friendly (i.e. try to search for some particular text or error message. Horror).

So, taking these two facts I thought it would be helpful to have a custom trace listener which I can configure just through my config file which uses Azure local storage to store .svclog files. From there on I am using scheduled transfers (which I demonstrated here) to move the .svclog files (which are now custom error logs for Windows Azure) to Azure blob storage. From there I can just open them up with the tool of my choice.

Here is the simplified code:

using System.Configuration;
using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace Thinktecture.Diagnostics.Azure
{
    public class LocalStorageXmlWriterTraceListener : XmlWriterTraceListener
    {
        public LocalStorageXmlWriterTraceListener(string initializeData)
            : base(GetFileName(initializeData))
        {
        }

        public LocalStorageXmlWriterTraceListener(string initializeData, string name)
            : base(GetFileName(initializeData), name)
        {
        }

        private static string GetFileName(string initializationData)
        {
            try
            {
                // initializeData has the form "LocalResourceName\FileName"
                var localResourceItems = initializationData.Split('\\');
                var localResourceFolder = localResourceItems[0];
                var localResourceFile = localResourceItems[1];

                var localResource = RoleEnvironment.GetLocalResource(localResourceFolder);
                var fileName = Path.Combine(localResource.RootPath, localResourceFile);

                return fileName;
            }
            catch
            {
                throw new ConfigurationErrorsException(
                    "No valid Windows Azure local resource name found in configuration.");
            }
        }
    }
}

In my Azure role (a worker in this particular case, but it also works with a web role) I configure the trace listener like this:

    <system.diagnostics>
      <trace autoflush="true">
        <listeners>
          <add type="Thinktecture.Diagnostics.Azure.LocalStorageXmlWriterTraceListener, 
                     AzureXmlWriterTraceListener, Version=, Culture=neutral, PublicKeyToken=null"
               initializeData="TraceFiles\worker_trace.svclog" />
        </listeners>
      </trace>
    </system.diagnostics>
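
With that listener in place, anything the role writes through System.Diagnostics tracing lands in the .svclog file inside the “TraceFiles” local resource – for example (a trivial, hypothetical call, not taken from Christian’s post):

// Hypothetical usage: this entry ends up in TraceFiles\worker_trace.svclog.
Trace.TraceInformation("Worker role heartbeat at {0} UTC", DateTime.UtcNow);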

After scheduling the transfer of my log files folder, I can use a tool like Cerebrata’s Cloud Storage Studio to look at my configured blob container (named traces) – and I can see my .svclog file.


Double-clicking on the file in blob storage opens it up in Service Trace Viewer. From here on it is all the “good ole” tracing file inspection experience.


Note: as you can see the Service Trace Viewer tool is not just for WCF – but you knew that before!

UPDATE: this does not properly work with Azure SDK 1.3 and Full IIS due to permission issues – there is more information in the SDK release notes. Very unfortunate.

Christian Weyer described Transferring your custom trace log files in Windows Azure for remote inspection in an earlier 12/29/2010 post:

You can write your trace data explicitly to files or use tracing facilities like .NET’s trace source and listener infrastructure (or third party frameworks like log4net or nlog or…). So, this is not really news.

In your Windows Azure applications you can configure the diagnostics monitor to include special folders – which you obtain a reference to through a local resource in the VM’s local storage – in its configuration; the files in those folders will then be transferred to the configured Azure Storage blob container.

Without any further ado:

public override bool OnStart()
{
    Trace.WriteLine("Entering OnStart...");

    var traceResource = RoleEnvironment.GetLocalResource("TraceFiles");

    var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
    config.Directories.DataSources.Add(
        new DirectoryConfiguration
        {
            Path = traceResource.RootPath,
            Container = "traces",
            DirectoryQuotaInMB = 100
        });
    config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(10);

    DiagnosticMonitor.Start("DiagnosticsConnectionString", config);

    return base.OnStart();
}

Note: remember that there are special naming conventions in Azure Storage, e.g. for naming blob storage containers. So, do not try to use ‘Traces’ as the container in the above code!

And a side note: of course this whole process incurs costs. Costs for data storage in Azure Storage. Costs for transactions (i.e. calls) against Azure Storage and costs for transferring the data from Azure Storage out of the data center for remote inspection.

Alright – this is the basis for the next blog post, which shows how to use a well-known trace log citizen from System.Diagnostics land in the cloud.

The Windows Azure Team posted Real World Windows Azure: Interview with Rok Bermež, Lead Programmer at Kompas Xnet, and Uroš Žunič, Development Manager at Kompas Xnet on 12/29/2010:

The Real World Windows Azure series spoke to Rok Bermež, Lead Programmer at Kompas Xnet, and Uroš Žunič, Development Manager at Kompas Xnet, about using the Windows Azure platform to deliver cloud-based website solutions for the company's customers. Here's what they had to say:

MSDN: Can you give us a quick summary of what Kompas Xnet does and who you serve?

Bermež: Kompas Xnet was originally established as the IT support and training group for the Slovenian company KOMPAS d.o.o. We became a separate entity in 1995, and now have offices throughout Slovenia. We've been a Microsoft Gold Certified Partner since 2005 and provide consulting and training services for developers and users of most of the standard business technology solutions, including software from Adobe, Corel, and Microsoft. We also deliver custom software and website development and IT infrastructure support for a variety of large and small businesses throughout Europe.

MSDN: Was there a particular challenge you were trying to overcome that led you to use the Windows Azure platform?

Žunič: We were contracted by Microsoft to develop a website for the NT Konferenca 2010, the largest annual tech-education conference in Slovenia, attended by more than 2,000 IT professionals from all over the region – basically all our colleagues and competitors. In previous years, the conference used solutions delivered by local hosting providers, but connectivity issues interrupted the availability of the site during every conference. 

We had to create a solution that could help manage every aspect of the conference, but our most important requirement was that the application had to be available at all times. We believed we could use cloud technology to deliver high functionality, ensure 100 percent availability, and avoid the high up-front cost of deploying an on-premises solution.

MSDN:  Can you describe the solution that you developed? Which components of the Windows Azure platform did you use? 

Žunič: We built the NT Konferenca 2010 website ( and a back-end administrative interface using Windows Azure and Microsoft SQL Azure. Attendees used the site for tasks such as registering, finding event schedules, and downloading materials. We also used Windows Azure to do a lot of the back-end functionality.

Bermež: Our developers used SQL Azure to manage the website databases and Windows Azure storage services to store profile-images, presentations, and other data. We used Windows Communication Foundation and Queue storage in Windows Azure to connect the website to on-premises security, registration, and accounting systems. We also used Microsoft Silverlight 3 to build an application for viewing conference lectures and presentations on the website.

MSDN: How does using Windows Azure help Kompas Xnet deliver more advanced solutions to its customers?

Žunič: Everybody at NT Konferenca 2010 was very happy with the website and the connectivity problems that had plagued earlier conferences were completely gone. With that success, we were able to coordinate the deployment of the conference website with other projects. We're already working on an online presence for the Slovenian chapter of UNICEF and a new website for one of the largest newspapers in Slovenia.

Bermež: Because Windows Azure works so seamlessly with classic web-development tools like ASP.NET and the Microsoft Visual Studio development system, developing for Windows Azure significantly lowers the barriers for entry into the cloud. With Windows Azure, we can develop high-availability solutions with cycles as short as a few days or even hours.


Figure 1. Kompas Xnet used components of the Windows Azure platform to build the functional elements of the NT Konferenca 2010 website ( [Corrected].

Read the full story at:

To read more Windows Azure customer success stories, visit:

Jas Sandhu posted Quicksteps to get started with PHP on Windows Azure on 12/27/2010 (was not accessible from MSDN blogs on 12/28/2010):

The weather in the northern hemisphere is still a little nippy, and if you're like me, you're spending a lot of time indoors with family and friends enjoying the holiday season. If you're spending some of your time catching up and learning new things in the wonderful world of cloud computing, we have a holiday gift of some visual walkthroughs and tutorials on our new "Windows Azure for PHP" center. We pushed up these articles to help you quickly get set up with developing for Windows Azure.

"Getting the Windows Azure pre-requisites via the Microsoft Web Platform Installer 2.0" will help you quickly set up your machine in a "few clicks" with all the necessary tools and settings you will need to work with PHP on Windows, IIS and SQL Server Express. We’ve included snapshots of the entire process you will need to get a developer working with the tools built by the “Interoperability at Microsoft” team.

"Deploying your first PHP application with the Windows Azure Command-line Tools for PHP" will visually walk you through getting the tool, getting familiar with how it's used and packaging up a simple application for deployment to Windows Azure.

“Deploying your first PHP application to Windows Azure” will build on top of the former articles with walkthroughs of how to deploy the application using the Windows Azure management console, both the “classic” and present versions.

I hope this will help you get over the first speed bump of working on Microsoft’s cloud computing platform and we look forward to bringing you more of these based on your feedback and input. So please check them out and let us know how you feel!

<Return to section navigation list> 

Visual Studio LightSwitch

MSN Video released Visual Studio LightSwitch: Integrate with Microsoft Office on 8/23/2010 (missed when posted):

Leverage the power of Microsoft Office quickly and easily in your business applications using Visual Studio LightSwitch. For example, with Visual Studio LightSwitch you can include data stored in your SharePoint sites inside your business applications without a lot of coding.


<Return to section navigation list> 

Windows Azure Infrastructure

Charlie Burns, Lee Geishecker, Lisa Pierce, Mike West, and Bill McNee co-authored on 12/29/2010 the Four Key Trends: A Look Ahead and Best Wishes for 2011 Research Alert for Saugatuck Technology (site registration required):

image As 2011 draws near, we take a look forward and identify the major factors which we expect to heavily influence IT strategies for the next twelve to twenty-four months. Of course every IT organization will be heavily influenced by “micro-factors” such as specific strategies pursued by the business units it supports. However, here we focus on four broader “macro-factors” which every IT organization should consider as plans and strategies are developed and fine-tuned for 2011 and beyond.

1. Consumer Convenience Meets Business Productivity: Over two years ago Saugatuck projected that mobile computing would be a key influence in enterprise IT (509MKT, “Mobility: Defining the Future”, 30Sept2008). No doubt, 2011 will see expanded and accelerated infusion of mobile devices into business operations. Traditional laptops will not be relegated to recycle centers; however, they – along with netbooks – will be increasingly left behind when their users head to meetings, sales calls, etc. carrying smartphones and tablets. Ease-of-portability and user-friendly interfaces will make the newest small form-factor devices the “must have” for business users from executives to field sales personnel.

New devices bring new challenges for IT organizations ranging from application design, to data security and network bandwidth, to proactive administration/management of both mobile devices and services. IT organizations will need to surmount these challenges and embrace this next wave of delivery of IT services.

2. Clouds Are Thickening and Spreading: 2011 will bring key changes in multiple areas of Cloud Computing. Despite the likely occurrence of a significant security breach, security for Public Clouds will be improved and many business customers will find it matches (and often surpasses) the security of in-house infrastructures. And, while many large enterprises will continue to select Private Clouds for various uses, this choice will increasingly be motivated by factors such as workload interdependency (e.g. data and scheduling), rather than concerns about security.

Similarly, major IT vendors will introduce and / or enhance their PaaS offerings striving to make them the logical / best choice for SaaS providers as well as for customers seeking to develop new workloads or re-engineer old workloads into an on-demand model. IT organizations should develop plans for adopting Cloud Computing where it can yield significant benefits such as infrastructure cost reductions or desired flexibility/agility. Such plans must entail thorough evaluation of existing workloads (818MKT, “Cloud IT Guidance”, 07Dec2010) to identify the relative difficulty of migration to a Cloud offering.

3. Recovering Economy: While worldwide economic uncertainty ruled the day in 2010 – with US and Global GDP pegged to close the year out with 2.8 percent and 4.6 percent growth respectively – the recent fiscal / tax policy compromise passed by the Obama administration and Congress suggests much stronger performance in 2011. Goldman Sachs, Morgan Stanley and Moody’s are all now forecasting US GDP growth of 4.0 percent in 2011, up from earlier consensus forecasts that suggested growth would be closer to 3.0 percent.

While there are a number of important threats to this scenario – including rising commodity prices, slowing emerging / BRIC market growth, jitters surrounding European sovereign debt, continued weakness in the US housing market, and US trade balances that continue to balloon – overall we have largely put the Great Recession behind us. According to Mark Zandi of Moody’s, “the banking system is much better capitalized, household deleveraging is well under way, and corporate America is very profitable.” Job growth should be strong in 2011, with 2.6 million new payrolls added – and unemployment forecast to fall to 9 percent by YE2011.

In this environment, a slightly stronger domestic and international economic scenario will likely lead to even stronger capital spending in 2011, as many global giants begin to invest for the future – spending down their incredibly strong balance sheet positions. However, IT budgets will likely remain constrained, as IT organizations will continue to be asked to support the business and provide new services without substantial new funding. This macro-factor amplifies or provides additional “motivation” for the first two macro-factors articulated above. With creative planning, IT organizations can accommodate all three. For example, an IT organization can move to adopt Cloud offerings for implementation of Virtual Desktops and provide increased mobility to users, increased security, and reduced support costs.

4. Early Effects of Net Neutrality: While this topic has been debated publicly for months, the US Federal Communications Commission (FCC) finalized its rulemaking on December 21. For more information please see FCC Docket # FCC 10-201. However, please note that as of this writing, the full decision is not yet available on the FCC website. As a result, readers are cautioned that this is our preliminary analysis.

Both mobile computing and adoption of Cloud Computing depend on the affordable and widespread availability of high speed networks. The FCC’s Net Neutrality ruling impacts both. However, in some cases the rules are more wide-ranging than even we recently anticipated (821MKT, “Net Neutrality and the Cloud”, 10 Dec2010).  For instance, the ruling extends the FCC’s regulatory authority to mobile Broadband access (3G/4G services), and to services that could be used as a substitute for the Internet, like MPLS and Ethernet, which are heavily used by both Cloud service providers and enterprises. While it’s highly likely that at least some of the rules contained in FCC 10-201 will be overturned in Congress and the Courts, this will be a protracted process. At the same time, it is unclear how the US FCC’s actions will ripple internationally.

Thus, WAN providers – at least in the US – may elect to limit large facilities-based investments in IP networks in favor of WAN and application optimization technologies like load balancing, deep packet inspection, etc. There is also some industry speculation that US mobile providers may begin implementing application-based pricing (for applications that use 3G and 4G services). Until the regulatory landscape is more certain, providers of cloud-based platform and applications services would do well to recall our advice in the Strategic Perspective cited above, which urges them to build/modify applications for use on WANs that are constrained by bandwidth, latency, etc. Business customers of cloud-based IaaS offers should follow the same advice.

The research alert continues with the usual “Why Is It Happening?” and “Market Impact” topics.

Jay Fry (@jayfry3) posted Making 'good enough' the new normal on 12/22/2010:

In looking back on some of the more insightful observations that I’ve heard concerning cloud computing in 2010, one kept coming up over and over again. In fact, it was re-iterated by several analysts onstage at the Gartner Data Center Conference in Las Vegas earlier this month.

The thought went something like this:

IT is being weighed down by more and more complexity as time goes on. The systems are complex, the management of those systems is complex, and the underlying processes are, well, also complex.

The cloud seems to offer two ways out of this problem. First, going with a cloud-based solution allows you to start over, often leaving a lot of the complexity behind. But that’s been the same solution offered by any greenfield effort – it always seems deceptively easier to start over than to evolve what you already have. Note that I said “seems easier.” The real-world issues that got you into the complexity problem in the first place quickly return to haunt any such project. Especially in a large organization.

Cloud and the 80-20 rule

But I’m more interested in highlighting the second way that cloud can help. That way is more about the approach to architecture that is embodied in a lot of the cloud computing efforts. Instead of building the most thorough, full-featured systems, cloud-based systems are often using “good enough” as their design point.

This is the IT operations equivalent of the 80-20 rule. It’s the idea that not every system has to have full redundancy, fail-over, or other requirements. It doesn't need to be perfect or have every possible feature. You don't need to know every gory detail from a management standpoint. In most cases, going to those extremes means what you're delivering will be over-engineered and not worth the extra time, effort, and money. That kind of bad ROI is a problem.

“IT has gotten away from ‘good enough’ computing,” said Gartner’s Donna Scott in one of her sessions at the Data Center Conference. “There is a lot an IT dept can learn from cloud, and that’s one of them.”

The experiences of eBay

In discussing his experiences at eBay during the same conference, Mazen Rawashdeh, vice president of eBay's technology operations, talked about his company’s need to understand what made the most impact on cost and efficiency and optimize for those. That meant a lot of “good enough” decisions in other areas.

eBay IT developed metrics that helped drive the right decisions, and then focused, according to Rawashdeh, on innovation, innovation, innovation. They avoided the things that would weigh them down because “we needed to break the linear relationship between capacity growth and infrastructure cost,” said Rawashdeh. At the conference, he laid out a blueprint for a pretty dynamic IT operations environment, stress-tested by one of the bigger user bases on the Web.

Rawashdeh couched all of this IT operations advice in one of his favorite quotes from Charles Darwin: “It’s not the strongest of species that survive, nor the most intelligent, but the ones most responsive to change.” In the IT context, it means being resilient to lots of little changes – and little failures – so that the whole can still keep going. “The data center itself is our ‘failure domain,’” he said. Architecting lots of little pieces to be “good enough” lets the whole be stronger, and more resilient.

Everything I needed to know about IT operations I learned from my cloud provider

So who seems to be the best at “good enough” IT these days? Most would point to the cloud service providers, of course.

Many end-user organizations are starting to get this kind of experience, but aren’t very far yet. Forrester’s James Staten says in his 2011 predictions blog that he believes end-user organizations will build private clouds in 2011, “and you will fail. And that’s a good thing. Because through this failure you will learn what it really takes to operate a cloud environment.” He recommends that you “fail fast and fail quietly. Start small, learn, iterate, and then expand.”

Most enterprises, Staten writes, “aren’t ready to pass the baton” – to deliver this sort of dynamic infrastructure – yet. “But service providers will be ready in 2011.” Our own Matt Richards agrees. He created a holiday-inspired list of some interesting things that service providers are using CA Technologies software to make possible.

In fact, Gartner’s Cameron Haight had a whole session at the Vegas event to highlight things that IT ops can learn from the big cloud providers.

Some highlights:

· Make processes experience-based, rather than set by experts. Just because it was done one way before doesn’t mean that’s the right way now. “Cloud providers get good at just enough process,” said Haight, especially in the areas of deployment and incident management.

· Failure happens. In fact, the big guys are moving toward a “recovery-oriented” computing philosophy. “Don’t focus on avoiding failure, but on recovery,” said Haight. The important stat with this approach is not mean-time-between-failures (MTBF), but mean-time-to-repair (MTTR); a short illustrative calculation follows this list. Reliability, in this case, comes from software, not the underlying hardware.

· Manageability follows from both software and management design. Management should lessen complexity, not add to it. Haight pointed toward tools trying to facilitate “infrastructure as code,” to enable flexibility.
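To see why the emphasis shifts to MTTR, here is a quick back-of-the-envelope sketch in Python (the numbers are mine and purely illustrative, not Gartner's) using the standard steady-state availability formula:

# Steady-state availability is commonly modeled as MTBF / (MTBF + MTTR).
# With these illustrative figures, halving MTTR buys exactly the same
# availability as doubling MTBF -- which is why recovery-oriented designs
# focus on repair time rather than on avoiding failure altogether.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

baseline = availability(1000, 10)      # ~0.9901
double_mtbf = availability(2000, 10)   # ~0.9950
halve_mttr = availability(1000, 5)     # ~0.9950  (same gain, usually far cheaper in software)

print(round(baseline, 4), round(double_mtbf, 4), round(halve_mttr, 4))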

Know when you need what

So, obviously, “good enough” is not always going to be, well, good enough for every part of your IT infrastructure. But it’s an idea that’s getting traction because of successes with cloud computing. Those successes are causing IT people to ask a few fundamental questions about how they can apply this approach to their specific IT needs. And that’s a useful thing.

In thinking about where and how “good enough” computing is appropriate, you need to ask yourself a couple questions. First, how vital is the system I’m working on? What’s its use, tolerance for failure, importance, and the like. The more critical it is, the more careful you have to be with your threshold of “good enough.”

Second, is speed of utmost importance? Cost? Security? Or a set of other things? Like Rawashdeh at eBay, know what metrics are important, and optimize to those.
Be honest with yourself and your organization about places you can try this approach. It’s one of the ideas that got quite a bit of attention in 2010 that’s worth considering.

Jay is Strategy VP for CA Technologies’ cloud business.

Srinivasan Sundara Rajan proposed a “4+1 view of cloud hosted applications” in his Cloud Architecture Diagramming Standards post of 12/29/2010:

Diagrammatic View of the System
Using diagrams to represent the architecture of a system has been a standard in enterprise applications, so that the information about the system is conveyed to multiple stakeholders in a standardized form.

Typically, enterprises have used 4+1 Views to depict the architecture of their systems. The 4+1 View Model describes software architecture using five concurrent views, each of which addresses a specific set of concerns.

The 4+1 model describes the architecture of software systems based on the use of multiple concurrent views. The views represent multiple stakeholders:

  • End users
  • Developers
  • Project managers

The four views are represented as:

  • Logical View
  • Implementation View
  • Process View
  • Deployment View
Coupled with a fifth view of selected use cases, this successful model for representing architectures diagrammatically is called the 4+1 View Model.

The following diagram explains how a 4+1 View provides a blueprint of the system.

4+1 View Tailoring Of Cloud Hosted Applications & Systems
As is evident, the 4+1 views and the associated diagrams provide a good representation of a cloud-hosted software application. However, there are some new factors specific to the cloud that may require further tailoring of these diagrams and views to fully represent a cloud system.

Logical View: This view presents the functionality of the system from the point of view of end users. It captures the functional requirements – what the system should provide in terms of services to its users. This view will be the same for a data center application and a cloud-hosted application, because the end-user functionality remains the same.

Process View: This view captures the dynamic aspects of the system, explains the system processes and how they communicate, and focuses on the runtime behavior of the system. For a cloud-hosted application, the Process View needs some tailoring so that the processes hosted by the cloud provider in support of the application are also represented, giving a full view of the system.

For example, if an application is hosted on the Amazon EC2 platform, the process view may also include the processes supported by components like:

  • Amazon Elastic Block Store
  • Amazon CloudWatch
  • Amazon Auto Scaling
  • Amazon Elastic Load Balancing
  • Amazon High Performance Computing

Implementation View: Focuses on how the system is built and which technological elements are needed to implement the system. Both application and infrastructure components are covered in this view.

This view will not be much different between a data center application and a cloud-hosted application. However, virtual servers play multiple roles in a cloud application and connect with each other using multiple protocols, all of which needs to be explained in this view.

If we consider the Azure cloud platform, this view will represent:

  • Azure instances in the Web Role
  • Azure instances in the Worker Role
  • SQL Azure instances
  • Other AppFabric components

Deployment View: This view is concerned with the topology of software components on the physical layer, as well as communication between these components. It will require the most tailoring for applications that are hosted on the cloud.

At this time there is no clear way to represent deployment aspects like:

  • On-Demand Instances
  • Reserved Instances
  • Spot Instances
  • Availability Zones
  • Other virtual server migration scenarios

However, some tailoring of the diagramming notations should be able to represent these missing pieces of a cloud deployment.

Because enterprises deploy applications to cloud platforms on a long-term basis, and because once deployed there needs to be a way to represent the architecture of the system diagrammatically, it is very important to consider the existing architectural diagramming standards. As explained above, some tailoring of these diagrams and notations is needed to clearly represent an application that is hosted on a cloud platform.

Jeremy Geelan commented “All this week Cloud Computing Journal authors are looking at the short- and mid-term future of the Cloud” as a lead to his My Top Five Cloud Predictions for 2011: Steve Jin post of 12/29/2010:

Continuing our series, we hear now from Steve Jin (pictured), Top 50 blogger on Cloud Computing, author of VMware VI & vSphere SDK (Prentice Hall), and founder of the open source VI Java API:


1. The focus of cloud computing will gradually shift from IaaS to PaaS, which becomes a key differentiator in competition. Developer enablement becomes more important than ever: in ecosystem evangelism, full software lifecycle integration, IDE support, APIs and frameworks, and so on.

2. Many more mergers and acquisitions will take place in the cloud space as companies build stronger cloud portfolios. For big players, that should include dual vertically complete stacks, both as services and as products. Whoever gets there first will gain enormous advantages over its competitors.

3. Virtualization will continue to play a pivotal role in transforming IT infrastructure toward cloud computing. 2011 will see more enterprises become 100% virtualized and start on the private cloud journey as a natural next step. Automation will be critical for operational efficiency, both for private clouds and for cloud service providers. Big enterprises will push for diversification of hypervisors in their cloud datacenters for tiered hybrid clouds, where different types of workloads run on different platforms.

4. Cloud and mobile computing will cross-empower each other, and therefore generate a new wave of innovations and start-ups. The cloud will be the ultimate powerhouse for mobile devices. And mobile devices will make the cloud more accessible and user friendly.

5. Customers will demand cloud interoperability and portability, and therefore drive more adoption of open source software and open standards. I expect key standard organizations will release important standards on API, security, communication, and so on.

Phil Wainewright prognosticated Six big trends to watch in 2011 in a 12/28/2010 post to ZDNet’s Software as Services blog:

I’ve long been a fan of Geoffrey Moore’s classic business books about the evolutionary path taken by emerging technologies and the companies that champion them. Especially now that several of the key emerging technologies I follow are at such different stages of their evolutions.

These contrasting cross-currents are going to make 2011 a fascinating and turbulent year, one in which SaaS enters the tornado and mobile enters the bowling alley at the very same time as cloud trips over the chasm. I’ve decided to highlight six trends in enterprise computing for the coming year, but here’s a seventh prediction: middle-of-the-road analysts and pundits will find it even harder than ever to make any sense of everything that’s going on right now.

1. Mainstream means mobile

For many years, mobile has been a peripheral afterthought when developing enterprise applications. Even when running in a browser, the laptop or desktop PC has been the primary user platform, and a mobile client was always an option at best. In 2011, there’s going to be a seismic shift. Significant numbers of enterprise software vendors will upend their development priorities and develop for mobile first, desktop second.

2. Fake cloud #fails the crowd

It should be no surprise to find me predicting that so-called ‘private cloud’ will disappoint. Cloud computing has ridden to the peak of the Gartner hype cycle, and fake cloud is now leading the way into the trough of disillusionment. Vendors and enterprises seeking to capture the benefits of cloud computing without understanding the core principles will come a cropper, and cloud’s reputation will suffer accordingly, even if undeservedly.

3. IT management gets wired to the cloud

The days when cloud computing came in an unaccountable black box are drawing to an end. Enterprise buyers rightly demand oversight and governance of their computing, even if hosted by a provider. Instead of take-it-or-leave it service levels, there’s a new trend towards visibility and accountability. Examples include RightNow’s Cloud Services Portal or the detailed reporting and governance built into managed cloud offerings from the likes of OpSource and Rackspace. 2011 will see instrumentation bringing new depth and detail to cloud and SaaS offerings.

4. Data just wants to be mined

The volume of data being accumulated every day is exploding, and it’s yielding huge new value for those who know how to mine and refine it. This emerging new value equation is changing the relationship between data and security, as Wikileaks has shown. Governments and corporations today (not to mention consumers) are sitting on rich seams of data whose value they have barely realized. Others are mining that wealth, whether openly or surreptitiously. I can’t put it better than I wrote back in 2006: “Value comes from the views that you create to filter, join and represent data — whether it’s your data or someone else’s (more often the latter).”

5. Social technologies remake enterprise apps

The ability to collaborate in real time, to instantly initiate conversations or to develop a thread across follow-the-sun timezones — all these capabilities are bringing people together in new ways that cut across the old business processes of industrial-era enterprise applications. The old way was to put the organization and its process automation first. Now applications are being remade to put people at the center of process and have automation serve their needs. The outcome will break down the old silos of resource-centric process management, to replace them with new, people-centric automation stacks.

6. Business transformation becomes the big story

The tech industry is obsessed with its pursuit of the new, new thing. In 2011 the new, new thing is not a technology at all, but a new way of doing business that’s enabled by all of the above. The new year’s most telling innovations will not be in mobile, cloud or social technologies but in how smart, entrepreneurial business people adapt to the potential that blossoms from those technologies.

Gregor Petri (@GregorPetri) published Cloud Predictions Beyond 2011 - 2: The need for a cloud abstraction model on 12/28/2010:

If the cloud is to fulfill on its promise we need to start thinking of it as a cloud, not as an aggregation of its components (such as VMs etc.)

As mentioned in a previous post, I‘ll use some of my upcoming posts to highlight some cloud computing "megatrends" that I believe are happening – or need to happen – beyond 2011. One of these would be the creation of an “abstraction model” that can be used to think about (and eventually manage) the cloud.  A nice setup to this was done by Jean-Pierre Garbani of Forrester, who in a recent post at Computerworld UK talks about the need to Consider the Cloud as a solution, not a problem.

In it he uses the example of the T-Ford – which was originally designed to use the exact same axle width as Roman horse carriages, until someone came up with the idea of paving the roads – to argue that customers should not “design cloud use around the current organization, but redesign the IT organization around the use of cloud .. The fundamental question of the next five years is not the cloud per se but the proliferation of services made possible by the reduced cost of technologies”.

I could not agree more: it is about the goal, not about the means. But people keep thinking in terms of what they already know. It was Henry Ford who once said, “If I had asked people what they wanted, they would have said faster horses.” Likewise, people think of clouds, and especially of Infrastructure as a Service (IaaS), in terms of virtual machines. It is time to move beyond that, think of what the machines are used for (applications/services), and start managing them at that level.

Just as we do not manage computers by focusing on the chips or the transistors inside, we should not manage clouds by focusing on the VMs inside. We need a model that abstracts from this. Just as object-oriented models free programmers from having to know how underlying functions are implemented, we need a cloud model that frees IT departments from having to know which VMs specific functions are running on and from having to worry about moving them.

In that context Phil Wainewright also wrote an interesting post, This global super computer the cloud, about a post that originated 10 years ago. First, it is amazing that the original article is still online after 10 years – imagine what it would take to do that in a pre-cloud era. Second, the idea of thinking of the cloud as a giant entity makes sense, but I disagree with him when he quotes Paul Buchheit’s statement on the cloud OS: “One way of understanding this new architecture is to view the entire Internet as a single computer. This computer is a massively distributed system with billions of processors, billions of displays, exabytes of storage, and it’s spread across the entire planet.” That is the equivalent of thinking of your laptop as a massive collection of chips and transistors, or of a program you developed as a massive collection of assembler put, get, and goto statements.

To use a new platform we need to think of it as just that, as a platform, not as what it is made of. If you try to explain how computers work by describing how electrons flow through semiconductors, nobody (well, almost nobody) will understand. That is why we need abstractions.

Abstractions often come in the form of models, like the client/server model or (talking about abstraction) the object oriented model or even the SQL model (abstracts from what goes on inside the database).

Unfortunately, the current cloud does not have such a model yet – at least not one we all agree on. That is why everyone is trying so hard to slap old models onto it and see whether they stick. For example, for IaaS (Infrastructure as a Service) most are trying to use models of (virtual) machines that are somehow interconnected, which makes everything overly complex and cumbersome.

What we need is a model that describes the new platform without falling into the trap of describing the underlying components (describing a laptop by listing transistors). The model most likely will be service-oriented and should be implementation-agnostic (REST or SOAP, Amazon or P2P, iOS or Android, Flash or HTML5). Let’s have a look at what was written 10 years ago that we could use for this; my bet would be on some of the object-oriented models out there.

Gregor is Advisor, Lean IT and Cloud Computing, at CA Technologies.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds


No significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

Drew Voros (@bizeditor) gets it wrong in his Voros: WikiLeaks pokes holes in cloud computing article of 12/29/2010 for the Oakland Tribune:

"I need help with tapping phones." When Panamanian President Ricardo Martinelli used his BlackBerry to send this urgent message in 2009 to the U.S. ambassador seeking help from U.S. drug agents to go after his political enemies, the president certainly didn't expect it to end up in newspapers across the world.

But that's what happened this week after WikiLeaks obtained this document as one of the many thought-to-be-safe communiques distributed via the U.S. Department of Defense's classified version of a civilian Internet and cloud-computing system.

Really, though, how many people are surprised that Drug Enforcement Agency agents are operating throughout the world as spies? Like most of what has been revealed in "Cablegate," the accepted idea that diplomacy is a game of mutual distrust and back-stabbing has been confirmed through the unauthorized distribution of this confidential material.

However, there's a bigger story behind these headlines, and it doesn't have anything to do with Tom Clancy spy-and-dagger plot lines.

For all the sound and fury that the WikiLeaks controversy has unleashed, no one seems to be talking much about how the world's most secure cloud-computing system, Uncle Sam's SIPRNet, was so easily breached. [Wikipedia link added.]

The idea that someone could access thousands of confidential documents armed with nothing more than the right military security clearance and a CD-ROM masquerading as a Lady Gaga music album should raise concerns in corporate IT departments.[* Emphasis added.]

Considering how cloud-computing is being pitched by Silicon Valley companies as the "New New Thing" for businesses, take a cautionary lesson from the Web's digital Deep Throat saga.

In this case where sensitive diplomatic correspondence was illegally obtained, an American soldier on duty in Iraq had legal access to the SIPRNet data bases and reportedly uploaded as many as 250,000 files onto a CD-ROM that was made to look like a disk full of personal music.

It doesn't take a rocket scientist to extrapolate that any cloud-computing system is only as secure as the people who touch the hardware. Do you trust your own employees more than a software company operating as a cloud-computing service? [Emphasis added.]

Most Americans use cloud computing all the time, and may not even realize it. If you have ever logged on to an e-mail service like Hotmail and written and stored e-mail there, you have been an active participant in cloud computing.

There are a lot of good financial and technical reasons for individuals and businesses to access and store files remotely.

There's also a requirement of blind faith, much like storing all your valuables 100 miles from your home.

The federal government, and more specifically the Defense Department, seem like perfect organizations to use cloud computing for security purposes.

But as any espionage fan will tell you, the human element is the key to any breach.

Like diplomacy, the decision to store vital information with a cloud-computing company requires a level of trust that might be as solid as, well, a cloud.

* Bradley Manning’s access to SIPRNet wasn’t a security breach of a cloud data source; it was an authorization governance failure of a (supposedly) private network. According to Wikipedia, “While in Iraq, Manning was able to access SIPRNet from his workstation. He was also said to have had access to the Joint Worldwide Intelligence Communications System.” As Voros stated, “It doesn't take a rocket scientist to extrapolate that any cloud-computing system is only as secure as the people who touch the hardware.” “Hardware” includes network servers as well as connected on-premises client devices.

Voros is the Oakland Tribune’s Business Editor.

<Return to section navigation list> 

Cloud Computing Events


No significant articles today.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Klint Finley analyzed Amazon Web Services, WikiLeaks and the Elephant in the Room in a 12/29/2010 post to the ReadWriteCloud blog:

Yesterday Amazon Web Services sent out a promotional email titled "Amazon Web Services Year in Review." Understandably, the email didn't mention one of the biggest AWS stories of the year: the company's decision to remove the WikiLeaks website from its servers.

Dave Winer noticed something else of note in the email: a paragraph about how the U.S. Federal Government is one of AWS's customers, with over 20 federal agencies taking advantage of the company's services. And, according to the announcement, that number is growing. Winer suggests this is the reason that Amazon closed WikiLeaks' account. "It makes perfect sense that the US government is a big customer of Amazon's web services. It also makes perfect sense that Amazon wouldn't want to do anything to jeopardize that business," Winer wrote. "There might not have even been a phone call, it might not have been necessary."

Winer also noted that after the U.S. Army announced it would be purchasing iPhone or Android devices for all its troops, Apple dropped a WikiLeaks app from the App Store.

Winer's explanation is purely speculative, and some might call it a conspiracy theory. But it points to a big issue for free speech in the cloud: what happens if one, smaller customer criticizes a bigger customer? In the Web 1.0 era, if you got kicked off a Web host you just found another. Today, the number of providers like AWS is small. As AWS's promotional material points out, cloud computing gives smaller outfits the ability to take advantage of high-performance computing.

Put WikiLeaks aside for a moment. What happens if a small journalistic outfit starts using a cloud provider to do some serious data journalism, but in the process offends one of its hosts' large customers. Maybe they're right-wing journalists criticizing the Obama administration or maybe it's a group of liberal muckrakers uncovering hidden truths about a major financial institution. It doesn't matter. What matters is that they have access to the resources they need to learn what they need to learn and publish what they need to publish.

And it's not just a free speech issue: freedom of commerce could be in jeopardy as well. What happens if a small company wants to compete with Netflix? Will AWS find a "terms of service" violation to slap it with? Considering the rumors that Amazon is considering competing with Netflix itself, this could become a problem quickly.

I've focused on AWS in this article, but these concerns apply to any provider. Practically all of the "2011 cloud computing predictions" type articles I've read this year mention consolidation as a major trend for 2011. If everyone's right, we'll likely see fewer Infrastructure-as-a-Service companies in the next few years. Are data centers becoming the new "means of production"?

Like net neutrality, it's a problem that seems difficult to solve without legislation. But I'm all ears: how else could this issue be solved, or is it really an issue at all?

Glenn Greenwald disclosed Wired's refusal to release or comment on the Manning chat logs in a 12/29/2010 post:

Last night, Wired posted a two-part response to my criticisms of its conduct in reporting on the arrest of PFC Bradley Manning and the key role played in that arrest by Adrian Lamo.  I wrote about this topic twice -- first back in June and then again on Monday.  The first part of Wired's response was from Editor-in-Chief Evan Hansen, and the second is from its Senior Editor Kevin Poulsen.  Both predictably hurl all sorts of invective at me as a means of distracting attention from the central issue, the only issue that matters:  their refusal to release or even comment on what is the central evidence in what is easily one of the most consequential political stories of this year, at least.

[Photo credit Wired, AP; Kevin Poulsen (left), Bradley Manning (center) and Evan Hansen (right)]

That's how these disputes often work by design:  the party whose conduct is in question (here, Wired) attacks the critic in order to create the impression that it's all just some sort of screeching personality feud devoid of substance.  That, in turn, causes some bystanders to cheer for whichever side they already like and boo the side they already dislike, as though it's some sort of entertaining wrestling match, while everyone else dismisses it all as some sort of trivial Internet catfight not worth sorting out.  That, ironically, is what WikiLeaks critics (and The New York Times' John Burns) did with the release of the Iraq War documents showing all sorts of atrocities in which the U.S. was complicit:  they tried to put the focus on the personality quirks of Julian Assange to distract attention away from the horrifying substance of those disclosures.  That, manifestly, is the same tactic Wired is using here:  trying to put the focus on me to obscure their own ongoing conduct in concealing the key evidence shining light on these events.

Guy Rosen analyzed the number of Amazon Web Services EC2 instances for 2007 through 2010 in his Recounting EC2 One Year Later post of 12/29/2010:

It’s been over a year since my original Anatomy of an EC2 Resource ID post. In what became my little claim to fame in the industry, I uncovered the pattern behind those cryptic IDs AWS assigns to every object allocated (such as an instance, EBS volume, etc.). The discovery revealed that underlying the IDs is a regular serial number that increases with each resource allocated. While this may sound technical and insignificant, it turned out to be very valuable: it enabled, for the first time, a glimpse into the magnitude of Amazon’s cloud.
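As a rough sketch of how such serial numbers translate into a launch rate, here is a simplified Python illustration. This is not Rosen's actual decoding – his post derives the real mapping from ID to serial number – and the IDs, dates, and the stand-in serial() function below are all hypothetical:

# The counting idea: allocate a resource at two known times, recover the
# underlying serial numbers, and the difference estimates how many resources
# the region allocated in between.
from datetime import datetime

def serial(resource_id):
    # Hypothetical stand-in for the real deobfuscation step: strip the "i-"
    # prefix and read the remaining hex digits as an integer.
    return int(resource_id.split("-", 1)[1], 16)

def launches_per_day(id_a, t_a, id_b, t_b):
    elapsed_days = (t_b - t_a).total_seconds() / 86400.0
    return (serial(id_b) - serial(id_a)) / elapsed_days

# Made-up IDs one day apart, chosen so the difference works out to ~50,000:
# launches_per_day("i-0007f200", datetime(2009, 9, 1),
#                  "i-0008b550", datetime(2009, 9, 2))  -> 50000.0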

The numbers gathered in that post back in September 2009 showed that approximately 50,000 instances were being spun up every day on EC2 (in the us-east-1 region). So what’s happened since? I joined forces with CloudKick, providers of a cloud management and monitoring platform, to dig up more data. Here’s what we found: (click to expand to an interactive chart)


The chart above plots the number of instances launched per day, from mid-2007 to the present day. Growth is, well, unmistakable. A couple of peaks dominate the landscape around February and October 2010, peaks which somewhat correlate with the availability of new AWS services (see interactive map). The evidence is highly circumstantial, though, as I find it hard to draw a direct conclusion on how these specific events pushed the daily launch count as high as 150,000.

Let’s zoom out now and look at EC2′s growth over the years:

For this chart, we averaged out the instance launch counts over each year (data for 2007 and 2010 may be partial). Based on the results, activity on EC2 has been multiplying several times over every year. The biggest step was 2008-2009, exhibiting 375% growth (that’s almost 5X). 2009 was the year AWS really exploded, but what strikes me as odd is that growth actually slowed down the following year (to 121%). One theory would be that the cloud has begun to saturate the early adopters and it is truly time to cross the chasm. Crossing, however, is turning out to be a difficult feat.

Responding to my previous research, a top Amazon official commented that a count of instance launches doesn’t really reflect anything meaningful (like the actual customer base, server count or revenues – all of which we’d all love to figure out). I respond that it’s examining the numbers one year later that provides the real value: it’s like looking at a mysterious dial on your car’s dashboard: even without understanding the exact parameter measured, if it shoots up then there’s a decent chance you’re driving faster.

Guy is Co-Founder & CEO of Onavo by day, and a cloud computing blogger by night.

Jeff Barr (@jeffbarr) reported AWS Import/Export Now in Singapore and provided an example of its use in a 12/29/2010 post to the Amazon Web Services blog:

You can now use the AWS Import/Export service to import and export data into and out of Amazon S3 buckets in the Asia Pacific (Singapore) Region via portable storage devices with eSATA, USB 2.0, and SATA interfaces.

You can ship us a device loaded with data and we'll copy it to the S3 bucket of your choice. Or you can send us an empty device and we'll copy the contents of one or more buckets to it. Either way, we'll return the device to you. We work with devices that store up to 8 TB on a very routine basis and can work with larger devices by special arrangement. We can handle data stored on NTFS, ext2, ext3, and FAT32 file systems.
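To put the service in context, here is a quick, purely illustrative calculation showing why shipping an 8 TB device can beat uploading it over the network. The link speeds and the helper function are my own assumptions, not part of the AWS announcement:

# Rough back-of-the-envelope: how long would the same data take over the wire?
def transfer_days(terabytes, megabits_per_second):
    bits = terabytes * 1e12 * 8                  # decimal TB -> bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400.0

print(round(transfer_days(8, 10), 1))    # ~74 days over a 10 Mbps uplink
print(round(transfer_days(8, 100), 1))   # ~7.4 days over a 100 Mbps uplink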

Our customers in the US and Europe have used this service to take on large-scale data migration, content distribution, backup, and disaster recovery challenges.

For example, Malaysia-based AMP Radio Networks runs a web platform for 9 FM radio stations. They host this platform and the associated audio streaming on EC2 and make use of CloudWatch, Auto Scaling, and CloudFront. AWS Import/Export allows AMP Radio Networks to transfer huge amounts of data directly from their facilities to AWS. They save time and money and can bring new content more quickly than they could if they had to upload it to the cloud in the traditional way.

Chris Czarnecki reported Google App Engine SDK Release New Features on 12/29/2010:

With grabbing a lot of the cloud computing headlines over the last few weeks with its new releases, announcements and acquisition of Heroku, the news that Google have released Google App Engine SDK 1.4.0 was easily missed. I thought that I would highlight what this important release adds to the SDK, as Google announced this was the most significant App Engine release of the year.

First I will address the improvements to existing APIs. Until now, background tasks run from Cron or the Task Queue were limited to 30 seconds. This limit has been extended to 10 minutes. Many API calls also had 1MB size limits. These have been relaxed: the URLFetch API limit has been increased from 1MB to 32MB. The same size increases also apply to Memcache batch get/put operations and Image API requests. Outgoing attachments on the Mail API have increased from 1MB to 10MB. These changes are welcome improvements to existing APIs, but it's the new features that are really exciting.

Firstly, the Channel API enables a bi-directional channel that allows server-side pushing of data to client-side JavaScript. This means there is no need to write JavaScript that polls the server-side application looking for changes. Secondly, a new feature known as Always On is provided. This is aimed at applications that have variable or low traffic levels. Without this feature, Google may unload your app if there is no traffic, and when a new request arrives there will be a delay as the application is loaded. Always On prevents this by reserving three machine instances that are never turned off. This is a chargeable feature that costs $9 per month. Finally, there are warm-up requests, a feature that anticipates when your application will require more machine instances and loads them before sending those instances requests.
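For a feel of the Channel API, here is a minimal server-side sketch for the App Engine Python runtime. The function names, the user_id client identifier, and the payload are illustrative choices of mine, not part of the SDK; the client-side JavaScript would open a channel with the returned token:

# Server-side use of the Channel API: create a channel token for a client,
# then push messages to that client without it having to poll.
from google.appengine.api import channel

def open_channel(user_id):
    # Returns a token the client-side JavaScript uses to open the channel.
    return channel.create_channel(user_id)

def notify(user_id, payload):
    # Pushes data from the server to the connected client.
    channel.send_message(user_id, payload)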

These new features are a welcome improvement to the Google App Engine Platform as a Service (PaaS). The competition in this area between major players such as, VMware, Microsoft, Red Hat and Google, amongst others, is good news for developers. The platforms are becoming more and more powerful, feature-rich and cost-effective.

If you would like to know more about the Google App Engine and how it may benefit your organisation, or indeed about Cloud Computing in general, why not consider attending Learning Tree’s Cloud Computing course.

Calvin Azuri reported Aprimo to be Acquired by Teradata Corporation in a 12/28/2010 post to the TMCNet blog:

A definitive agreement to acquire cloud-based integrated marketing software provider Aprimo has been signed by data warehousing and business analytics company Teradata Corporation.

The acquisition has been signed for approximately $525 million. Subject to adjustment, there will be approximately $25 million of cash at closing, which has also been included in the acquisition amount. Powerful business analytics with integrated marketing solutions will be brought together by the two top visionary companies. Corporations will therefore be able to improve and optimize marketing performance with data-driven insights. Aprimo's deep expertise in Cloud and Software as a Service or SaaS functionality will also be drawn upon by Teradata.

In a release, Mike Koehler, president and CEO of Teradata Corporation, said, "Combining these visionary companies positions Teradata as a leader in integrated marketing management, marketing resource management, and multi-channel campaign management, providing customers an end-to-end solution available in SaaS and on-premise environments."

Koehler added that the company’s addressable customer base will also be broadened with this combination. The combination will also fuel marketing innovation for the company’s customers. The future of integrated marketing management will be driven by Teradata and Aprimo. The customers will be also offered compelling business value.

In a release, Bill Godfrey, CEO at Aprimo, said, "Aprimo and Teradata are both laser focused on customer success, core business values that forge together our experience and commitment to help marketers revolutionize marketing and achieve their vision of integrated marketing management."

Godfrey added that the combined value proposition has come at a time when marketers are consolidating and integrating their marketing teams and systems. At the same time, the marketing teams are demanding more strategic analytics and intelligence. Teradata's powerful business analytics and Aprimo's cloud-based integrated marketing software will be beneficial to both the present and future customers of the company.

Aprimo will continue to market its products and services under the name Aprimo, even after it is integrated into Teradata’s operations. The strategy of Teradata's analytic and applications business will be supported by Aprimo once the acquisition is complete.

Calvin is a contributing editor for TMCnet.

The impending acquisition received little coverage in the computer press, probably because the announcement occurred on 12/22/2010, just before Christmas.

<Return to section navigation list>