Thursday, November 24, 2011

Windows Azure and Cloud Computing Posts for 11/21/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 11/25/2011 1:00 PM PST: Added lots of new articles, marked •.

Update 11/24/2011: This post was delayed due to Blogger’s recent HTTP 500 errors with posting and editing.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue and Hadoop Services

• Derrick Harris (@derrickharris) listed 6 reasons why 2012 could be the year of Hadoop in an 11/24/2011 post to Giga Om’s Structure blog:

Hadoop gets plenty of attention from investors and the IT press, but it’s very possible we haven’t seen anything yet. All the action of the last year has just set the stage for what should be a big year full of new companies, new users and new techniques for analyzing big data. That’s not to say there isn’t room for alternative platforms, but with even Microsoft abandoning its competitive effort and pinning its big data hopes on Hadoop, it’s difficult to see the project’s growth slowing down.

Here are six big things Hadoop has going for it as 2012 approaches.

1. Investors love it

Cloudera has raised $76 million since 2009. Newcomers MapR and Hortonworks have raised $29 million and $50 million (according to multiple sources), respectively. And that’s just at the distribution layer, which is the foundation of any Hadoop deployment. Up the stack, Datameer, Karmasphere and Hadapt have each raised around $10 million, and there are newer funded companies such as Zettaset, Odiago and Platfora. Accel Partners has started a $100 million big data fund to feed applications utilizing Hadoop and other core big data technologies. If anything, funding around Hadoop should increase in 2012, or at least cover a lot more startups.


2. Competition breeds success

Whatever reasons companies had to not use Hadoop should be fading fast, especially when it comes to operational concerns such as performance and cluster management. This is because MapR, Cloudera and Hortonworks are in a heated competition to win customers’ business. Whereas Cloudera and Hortonworks utilize open-source Apache Hadoop code for their distributions, MapR is pushing them on the performance front with its semi-proprietary version of Hadoop. This means an increased pace of innovation within Apache, and a major focus on management tools and support to make Hadoop easier to deploy and monitor. These three companies have lots of money, and it’s all going toward honing their offerings, which makes customers the real winners.

3. What learning curve?

Aside from the improved management and support capabilities at the distribution layer, those aforementioned up-the-stack companies are already starting to make Hadoop easier to use. Already, Karmasphere and Concurrent are helping customers write Hadoop workflows and applications, while Datameer and IBM are among the companies trying to make Hadoop usable by business users rather than just data scientists. As more Hadoop startups begin emerging from stealth mode, or at least releasing products, we should see even more innovative approaches to making analytics child’s play, so to speak.

4. Users are talking

It might not sound like a big deal, but the shared experiences of early Hadoop adopters could go a long way toward spreading Hadoop’s utility across the corporate landscape. It’s often said that knowing how to manage Hadoop clusters and write Hadoop applications is one thing, but knowing what questions to ask is something else altogether. At conferences such as Hadoop World, and on blogs across the web, companies including Walt Disney, Orbitz, LinkedIn, Etsy and others are telling their stories about what they have been able to discover since they began analyzing their data with Hadoop. With all these use cases abounding, future adopters should have an easier time knowing where to get started and what types of insights they might want to go after.

5. It’s becoming less noteworthy

This point is critical, actually, to the long-term success of any core technology: at some point, it has to become so ubiquitous that using it’s no longer noteworthy. Think about relational databases in legacy applications — everyone knows Oracle, MySQL or SQL Server are lurking beneath the covers, but no one really cares anymore. We’re hardly there yet with Hadoop, but we’re getting there. Now, when you come across applications that involve capturing and processing lots of unstructured data, there’s a good chance they’re using Hadoop to do it. I’ve come across a couple of companies, however, that don’t bring up Hadoop unless they’re prodded because they’re not interested in talking about how their applications work, just the end result of better security, targeted ads or whatever it is they’re doing.

6. It’s not just Hadoop

If Hadoop were just Hadoop — that is, Apache MapReduce and the Hadoop Distributed File System — it still would be popular. But the reality is that it’s a collection of Apache projects that include everything from the SQL-like Hive query language to the NoSQL HBase database to machine-learning library Mahout. HBase, in particular, has proven popular on its own, including at Facebook. Cloudera, Hortonworks and MapR all incorporate the gamut of Hadoop projects within their distributions, and Cloudera recently formed the Bigtop project within Apache, which is a central location for integrating all Hadoop-related projects within the foundation. The more use cases Hadoop as a whole addresses, the better it looks.

Disclosure: Concurrent is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, the founder of Giga Omni Media, is also a venture partner at True.


Derrick’s six points bolster the wisdom of the Windows Azure team decision to abandon Dryad and DryadLINQ in favor of Hadoop and MapReduce. See my Google, IBM, Oracle [and Microsoft] want piece of big data in the cloud post of 11/7/2011 to

Full disclosure: I’m a paid contributor to TechTarget’s

Brent Stineman (@BrentCodeMonkey) described Long Running Queue Processing Part 2 (Year of Azure–Week 20) in an 11/23/2011 post:

So back in July I published a post on doing long running queue processing. In that post we put together a nice sample app that inserted some messages into a queue, read them one at a time and would take 30 seconds to process each message. It did processing in a background thread so that we could monitor it.

This approach was all well and good but hinged on us knowing the maximum amount of time it would take us to process a message. Well, fortunately for us, in the latest 1.6 version of the Azure Tools (aka SDK), the storage client was updated to take advantage of the new “update message” functionality introduced to queues by an earlier service update. So I figured it was time to update my sample.


Fortunately for me, given the upcoming holiday (which doesn’t leave much time for blogging, given that my family lives in “the boonies” and hasn’t yet opted for an internet connection, much less broadband), updating a message is SUPER simple.

myQueue.UpdateMessage(aMsg, new TimeSpan(0, 0, 30), MessageUpdateFields.Visibility);

All we need is the message we read (which contains the pop receipt the underlying API uses to update the invisible message), the new timespan, and finally a flag to tell the API whether we’re updating the message content/payload or its visibility. In the sample above we are, of course, setting its visibility.

Ok, time for turkey and dressing! Oh wait, you want the updated project?

QueueBackgroundProcess w/ UpdateMessage

Alright, so I took exactly the same code we used before. It inserts 5 messages into a queue, then reads and processes each individually. The outer processing loop looks like this:

while (true)
{
    // read messages from queue and process one at a time…
    CloudQueueMessage aMsg = myQueue.GetMessage(new TimeSpan(0, 0, 30)); // 30 second visibility timeout

    // trap no message
    if (aMsg != null)
    {
        Trace.WriteLine("got a message, '" + aMsg.AsString + "'", "Information");

        // start processing of message in a background thread
        Work workerObject = new Work();
        workerObject.Msg = aMsg;
        Thread workerThread = new Thread(workerObject.DoWork);
        workerThread.Start();

        while (workerThread.IsAlive)
        {
            myQueue.UpdateMessage(aMsg, new TimeSpan(0, 0, 30), MessageUpdateFields.Visibility);
            Trace.WriteLine("Updating message expiry");
            Thread.Sleep(15000); // sleep for 15 seconds
        }

        if (workerObject.isFinished)
        {
            myQueue.DeleteMessage(aMsg.Id, aMsg.PopReceipt); // I could just use the message; illustrating a point
        }
        else
        {
            // here, we should check the dequeue count
            // and move the msg to a poison message queue
        }
    }
    else
    {
        Trace.WriteLine("no message found", "Information");
    }

    Trace.WriteLine("Working", "Information");
}

The while loop is the processing loop of the worker role that this all runs in. I decreased the initial visibility timeout from 2 minutes to 30 seconds, increased the interval for monitoring the background processing thread from every 1/10th of a second to every 15 seconds, and added the updating of the message visibility timeout.

The inner process was also upped from 30 seconds to 1 minute. Now here’s where the example kicks in! Since the original read specified only a 30-second visibility timeout, and my background process will take one minute, it’s important that I update the visibility timeout or the message would fall back into view. So I’m updating it with another 30 seconds every 15 seconds, thus keeping it invisible.

Ta-da! Here’s the project if you want it.

So unfortunately that’s all I have time for this week. I hope all of you in the US enjoy your Thanksgiving holiday weekend (I’ll be spending it with family and not working thankfully). And we’ll see you next week!

• Brad Calder of the Windows Azure Storage Team reported the availability of the Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency paper on 11/20/2011:

We recently published a paper describing the internal details of Windows Azure Storage at the 23rd ACM Symposium on Operating Systems Principles (SOSP) [#sosp11].

The paper can be found here. The conference also posted a video of the talk here. The slides are not really legible in the video, but you can view them separately here.

The paper describes how we provision and scale out capacity within and across data centers via storage stamps, and how the storage location service is used to manage our stamps and storage accounts. Then it focuses on the details for the three different layers of our architecture within a stamp (front-end layer, partition layer and stream layer), why we have these layers, what their functionality is, how they work, and the two replication engines (intra-stamp and inter-stamp). In addition, the paper summarizes some of the design decisions/tradeoffs we have made as well as lessons learned from building this large scale distributed system.

A key design goal for Windows Azure Storage is to provide Consistency, Availability, and Partition Tolerance (CAP) (all 3 of these together, instead of just 2) for the types of network partitioning we expect to see for our architecture. This is achieved by co-designing the partition layer and stream layer to provide strong consistency and high availability while being partition tolerant for the common types of partitioning/failures that occur within a stamp, such as node-level and rack-level network partitioning.

In this short conference talk we try to touch on the key details of how the partition layer provides an automatically load balanced object index that is scalable to 100s of billions of objects per storage stamp, how the stream layer performs its intra-stamp replication and deals with failures, and how the two layers are co-designed to provide consistency, availability, and partition tolerance for node- and rack-level network partitioning and failures.

Brad Calder

<Return to section navigation list>

SQL Azure Database and Reporting

• Mark Scurrell (@mscurrell) announced a SQL Azure Data Sync Service Update in an 11/21/2011 post to the Sync Framework Team Blog:

Thanks for trying out our Preview version and sending us suggestions and feedback. We released a minor service update a few days ago based on the input we have received so far.

Some of the important changes in this update are:

  • Log-ins with either username@server or just username are accepted.
  • Column names with spaces are now supported.
  • Columns with a NewSequentialID constraint are converted to NewID for SQL Azure databases in the sync group.
  • Administrators and non-Administrators alike are able to install the Data Sync Agent.
  • A new version of the Data Sync Agent is now available on the Download Center, but if you already have the Preview version of the Data Sync Agent it will continue to work.

<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

• Chris Ballard (@chrisaballard) described Creating an OData Feed to Import Google Image Search Results into a SQL Server Denali Tabular Model in an 11/23/2011 post to the Tribal Labs blog:

Recently, I worked on a prototype BI project which needed images to represent higher education institutions so that we could create a more attractive interface for an opendata mashup. It’s a common data visualisation problem: you have a set of data, perhaps obtained via some open data source, but the source does not include any references to image data which you can use. So the question was, where could we obtain images for each of the institutions that we could use? Using Google Image Search was the obvious answer, but how could I get the images returned from a search in a format which I could use and combine with the rest of my data? The Google Image Search API provides a JSON interface to allow you to programmatically send image search requests and receive results. Unfortunately, as of May 2011 this is being deprecated (and there appears to be no Google alternative available); however, it is still currently functional and so appeared to meet our needs.

Open Data Mashup

The data for our opendata mashup came from a number of different sources (expect a blog post on this very soon!) and we created a SQL Server 2012 (Denali) Tabular Model in Analysis Services to bring together a number of external opendata sets relating to higher education institutions. The main source of data about each HE institution was obtained by extracting data from the Department for Education Edubase system, which contains data on educational establishments across England and Wales. Initially, we wanted our app to query the Google Image Search API dynamically to return a corresponding image, however this proved too slow to realistically use. Instead, as our app will directly query the tabular model I decided to pull in the images associated with each institution directly into the tabular model so it is available for the app to query.

Both PowerPivot and the Tabular Model designer in BIDS allow you to import data from an OData (Open Data Protocol) feed to a model. OData is a web protocol for querying and updating data based on Atom syndication and RESTful HTTP-based services. You can find out more information over at the OData site. So, to bring the image search results into the tabular model, I built a simple data service using WCF Data Services which serves up the image URLs as an OData feed which can then be consumed in the tabular model. The data service uses the WCF Data Service Reflection Provider to define the data model for the service based on a set of custom classes.

One issue is that the image search needs to be based on a known list of institutions (which are already in our model), so the service needed to query the model first to get the list of institutions which could then be used as the basis of our query. This does have the advantage that the image URLs will be updated dynamically if the institution data in the model changes, however it is dependent on the feed data being processed after the institution data.

OData Service

To create the OData service I created a simple .NET class called WebImageSearch which can be used to call the Google Image Search REST Service and extract the first image matching the search results:

WebImageSearch Class

This class provides two methods, ImageSearch (which is used to call the Google Image Search REST service and carry out the search):

ImageSearch Method

and ProcessGoogleSearchResults to process the JSON returned from the service call. Note that this uses an open source library called Jayrock which allows you to easily process JSON using DataReaders.

ProcessGoogleSearchResults Method

The ImageSearch method returns a list of URLs as a collection of objects of type Image. This class is decorated with a number of attributes which tell the WCF Data Service Reflection Provider how to interpret the class as a data feed entity. In this case, a DataServiceKeyAttribute defines the property which is the key for the entity in the resulting data feed. EntityPropertyMapping attributes (whilst totally optional) define a mapping from class properties to entity properties. I found that unless these were specified, no data was returned in the feed when displayed in the browser.

Image Class
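The class appeared only as a screenshot in the original post, so here is a minimal sketch of what such an entity class could look like (the property names are illustrative assumptions, not the post’s actual code):

```csharp
using System.Data.Services.Common;

// Hypothetical sketch of the feed entity described above.
// DataServiceKey tells the reflection provider which property is the entity
// key; EntityPropertyMapping maps class properties into the Atom feed
// (the final "true" keeps the value in the entry content as well).
[DataServiceKey("InstitutionId")]
[EntityPropertyMapping("InstitutionName", SyndicationItemProperty.Title,
    SyndicationTextContentKind.Plaintext, true)]
[EntityPropertyMapping("ImageUrl", SyndicationItemProperty.Summary,
    SyndicationTextContentKind.Plaintext, true)]
public class Image
{
    public int InstitutionId { get; set; }      // relates the image back to its institution
    public string InstitutionName { get; set; } // institution name used for the search
    public string ImageUrl { get; set; }        // first matching Google Image Search result
}
```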

A helper class (in this case called InstitutionImages) does the business of reading the existing institution names from the tabular model, passing these to the ImageSearch method in the WebImageSearch class and building a List&lt;Image&gt; containing the returned image URLs and image IDs (so that we can relate them back to the institutions in the model). We also need to provide a property on this class which returns an IQueryable interface representing the entity set of Image entity types in the data feed:

Images property returning IQueryable Interface
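Again, the original shows a screenshot; roughly, the helper class and its entity-set property might be shaped like this (class and method names beyond those described above are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;

public class InstitutionImages
{
    // The reflection provider exposes each public IQueryable<T> property
    // as an entity set, so this surfaces as the "Images" feed.
    public IQueryable<Image> Images
    {
        get
        {
            List<Image> images = LoadImagesForInstitutions();
            return images.AsQueryable();
        }
    }

    private List<Image> LoadImagesForInstitutions()
    {
        // Hypothetical stub: the real code reads institution names from the
        // tabular model, passes each to WebImageSearch.ImageSearch, and
        // collects the resulting Image objects.
        return new List<Image>();
    }
}
```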

Finally to complete the picture, we need to add a new WCF Data Service to the project. In the data service definition we define it as a DataService of type InstitutionImages (which is the name of the class containing our IQueryable property) and also need to define Entity Set access rules:

WCF Data Service Definition
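For reference, a data service definition along the lines described might look like this (the service name and the exact access rule are assumptions):

```csharp
using System.Data.Services;
using System.Data.Services.Common;

// Hypothetical sketch: a DataService typed on the class exposing the
// IQueryable Images property, with read-only access to that entity set.
public class InstitutionImageService : DataService<InstitutionImages>
{
    // Called once to configure the service.
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Images", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion =
            DataServiceProtocolVersion.V2;
    }
}
```

With the service in place, browsing to a URL such as http://localhost/InstitutionImageService.svc/Images (hostname and service name hypothetical) returns the entity set as an Atom feed.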

Importing the data

Now we are ready to test the service! To return data from the OData service, we can simply define the entity set we are interested in in the URL, for example:


If we browse to this URL in the browser this will return the data feed as an Atom syndication. To add the feed to our tabular model in the SQL Server Denali BIDS Tabular Model Designer, click the Import from Data Source button, select “Other Feeds” and specify the URL to the OData service:

Connecting to data feed in SQL Server Denali

The data feed will then be imported into the tabular model:


Thoughts on other approaches

This is not an ideal solution by any means, but it does give a way to import more dynamic data which originates from APIs and services which don’t provide a specific OData feed into a tabular model. This approach would be fine if the data being imported was not dependent on the data in another table (in this case the URLs to return are dependent on each institution), however where there is a relationship, it becomes a bit messy as the service needs to query the model, creating a circular dependency.

A better approach would be to create a user defined function which can then be used to dynamically populate data in a new column based on existing data in the table (sort of like user defined functions in multidimensional modeling). Unfortunately this is not possible with a tabular model in SQL Server Denali. Perhaps this is something that the SQL Azure Data Explorer project may enable us to deal with more elegantly in the future by combining these sources prior to loading into the tabular model?

This has started me thinking of how OData support in SQL Server Denali could be enhanced in order to fully take advantage of the capabilities of OData, but that is the subject of another post I think!

The Data Explorer Team answered What was the best year for kids movies? with a video in an 11/22/2011 post:

In today’s post we are featuring a demo video which uses “Data Explorer” to answer the age-old question, “What was the best year for kids movies?”

Features showcased in the video include:

  • Importing data from OData and HTML sources.
  • Using data in one table to look up values in another.
  • Replacing values in a column.
  • Summarizing/grouping rows.

• James Governor described B2C Social Analytics: Capturing “Moments of Truth” in an 11/22/2011 post to his RedMonk blog:

I presented at ActuateOne Live, a customer event for the company behind BIRT, last week. The subject of my talk was Analytics and Data Science: the breaking wave. My key argument is that cratering costs of processing, RAM and storage, combined with a new generation of data processing technologies built and open sourced by web companies (noSQL), are combining to allow enterprises unprecedented opportunities to do the things with data they always wanted to but the DBA said they couldn’t afford.

I also sat on a panel looking at mobile, cloud and “agile analytics”. Seems the panel went well. I evidently triggered some thoughts from a colleague at another analyst firm, Richard Snow at Ventana Research.

I like Richard’s use of the phrase “moments of truth” to describe the customer service experiences that traditional CRM apps do such a terrible job of capturing.

“After consumers interact with a company in some way (for example, see an advertisement, visit a website, try to use a product, call the contact center, visit social media or even talk to a friend), they are left with a perception or feeling about that company. If the feeling is good, they feel satisfied, if it is bad they are unhappy; in either case they have had a Moment of Truth. Adding up all these moments of truth, a company can gauge their overall satisfaction level, their propensity to remain loyal and buy more, and the likelihood they will say good or bad things about the company to friends or on social media.”

As Richard says

James put forward the view that companies need to focus more on customer behavior and the likely impact on customer behavior of marketing messages, sales calls, social media content, product features, an agent’s attitude, IVR menus and other sources, or as I recently wrote, how customers are likely to react to moments of truth in their contacts. Understanding this requires analysis of masses of historic and current data, both structured and unstructured. It will be interesting to see what Actuate does in this area as it develops more customer-related solutions

Tracking social media interactions can give us insight into these moments of truth. Actuate offers Twitter integration, as do many other analytics companies, while Facebook integration is also heating up fast – see for example Adobe SocialAnalytics and Microstrategy Facebook CRM.

This stuff isn’t getting any easier though. It used to be that you could track what people said on social networks. But with Facebook turning on “automated sharing“, so tracking your apps and creating implicit declarations about what you like on your behalf, the data deluge is going to get a lot worse before it gets better.

We’ll need to understand that online persona may not always give us the “moments of truth“, because people online are trying to create a persona. There is a difference, for example, between what they share, and what they click on – that is, Kitteh vs Chickin.

Disclosure: Actuate and Adobe are both clients.

I believe James uses someone else’s portrait as a Twitter avatar.

My (@rogerjenn) More Features and the Download Link for My Codename “Social Analytics” WinForms Client Sample App of 11/22/2011 begins:

Blogger borks edits with Windows Live Writer, so instead of updating yesterday’s New Features Added to My Microsoft Codename “Social Analytics” WinForms Client Sample App post, I’ve added this new post which shows:

  • A change from numeric to text values from the ContentItemType enum
  • A Cancel button to terminate downloading prematurely
  • Addition of Replies to the Return Tweets, Retweets and Replies Only check box

Here’s the latest screen capture:


Download the source files from the Social Analytics folder of my SkyDrive account:


The downloadable items are:

  • SocialAnalyticsWinFormsSampleReadMe.txt
  • The source files in
  • A sample ContentItems.csv file for 23 days of data (92,668 total ContentItems).

Earlier posts in this series include:

Turker Keskinpala (@tkes) reported OData Service Validation Tool Update: 12 new rules added on 11/22/2011 to the OData wiki:

OData Service Validation Tool is updated once again with 12 new rules. Below is the breakdown of added rules:

This rule update brings the total number of rules in the validation tool to 109. You can see the list of rules that are under development here.

OData Service Validation Codeplex project was also updated with all recent changes.

The SQL Server Team announced Microsoft Codename "Social Analytics" Lab in an 11/21/2011 post:

A few weeks ago we released a new “lab” on a site, which some of you may be aware of, called SQL Azure | Labs. We created this site as an outlet for projects incubated out of teams who are passionate about an idea. The labs allow you to experiment and engage with teams on some of these early service concepts and while these projects are not committed to a roadmap your feedback and engagement will help immensely in shaping future investment directions.

Microsoft Codename "Social Analytics” is an experimental cloud service that provides an API enabling developers to easily integrate relevant social web information into business applications. Also included is a simple browsing application to view the social stream and the kind of analytics that can be constructed and integrated in your application.

You can get started with “Social Analytics” by exploring the social data available via the browsing application. With this first lab release, the data available is limited to two topics (“Windows 8” and “Bill Gates”). Future releases will allow you to define your own topic(s) of interest. The data in “Social Analytics” includes top social sources like Twitter, Facebook, blogs and forums. It has also been automatically enriched to tie conversations together across sources, and to assess sentiment.

Once you’re familiar with the data you’ve chosen, you can then use our API (based on the Open Data Protocol) to bring that social data directly into your own application.

Do you want to learn more about the Microsoft Codename “Social Analytics” Lab? Get started today, or for more information visit the official homepage.

Better yet, check out my (@rogerjenn) recent posts about Codename “Social Analytics”:

<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

• Paolo Salvatori described Handling Topics, Queues and Relay Services with the Service Bus Explorer Tool in an 11/24/2011 post:

The Windows Azure Service Bus Community Technology Preview (CTP), which was released in May 2011, first introduced queues and topics. At that time, the Windows Azure Management Portal didn’t provide a user interface to administer, create and delete messaging entities and the only way to accomplish this task was using the .NET or REST API. For this reason, I decided to build a tool called Service Bus Explorer that would allow developers and system administrators to connect to a Service Bus namespace and administer its messaging entities.

Over the last few months I continued to develop this tool and add new features with the intended goal of facilitating the development and administration of new Service Bus-enabled applications. In the meantime, the Windows Azure Management Portal introduced the ability for a user to create queues, topics, and subscriptions and define their properties, but not to define or display rules for an existing subscription. Besides, the Service Bus Explorer provides functionality, such as importing, exporting and testing entities, that is not currently available in the Windows Azure Management Portal. For this reason, the Service Bus Explorer tool represents the perfect companion for the official Windows Azure portal, and it can also be used to explore the features (session-based correlation, configurable detection of duplicate messages, deferring messages, etc.) provided out-of-the-box by the Service Bus brokered messaging.

I’ve just published a post where I explain the functioning and implementation details of my tool, whose source code is available on MSDN Code Gallery. In this post I explain how to use my tool to manage and test Queues and Topics.

For more information on the Windows Azure Service Bus, please refer to the following resources:

Read the full article on MSDN.

The companion code for the article is available on MSDN Code Gallery.

Avkash Chauhan (@avkashchauhan) answered Windows Azure Libraries for .NET 1.6 (Where is Windows Azure App Fabric SDK?) on 11/20/2011:

After the release of the latest Windows Azure SDK 1.6, you may have wondered where the Windows Azure AppFabric SDK 1.6 is. Before the SDK 1.6 release, the AppFabric SDK was shipped separately from the Azure SDK. However, things have changed now: the Windows Azure SDK 1.6 merges both SDKs together into one SDK. So when you install the new Windows Azure SDK 1.6, the AppFabric SDK 1.6 is also installed.

Here are a few things to remember now about the Windows Azure AppFabric SDK:

  • The AppFabric SDK components are installed as “Windows Azure Libraries for .NET 1.6”, as seen below:

  • Because the AppFabric components are now merged into the Windows Azure SDK, Add/Remove Programs will no longer have a separate entry for the Windows Azure AppFabric SDK.
  • As you know, Windows Azure AppFabric has two main components, Service Bus and Cache, so both of these components are inside the Azure SDK in separate folders as below:
  • Service Bus:
    • C:\Program Files\Windows Azure SDK\v1.6\ServiceBus

  • Cache:
    • C:\Program Files\Windows Azure SDK\v1.6\Cache

  • Since SDK 1.6 is a side-by-side install, the old AppFabric SDK 1.5 can still be found under C:\Program Files\Windows Azure AppFabric SDK\V1.5. Just uninstall it if you are going to use the SDK 1.6 binaries, to avoid issues.
  • Windows Azure Libraries for .NET 1.6 also includes the following updates for queues:
    • Support for UpdateMessage method (for updating queue message contents and invisibility timeout)
    • New overload for AddMessage that provides the ability to make a message invisible until a future time
    • The size limit of a message is raised from 8KB to 64KB
    • Get/Set Service Settings for setting the analytics service settings
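The first two queue updates above can be sketched together as follows (a hedged example against the 1.6 storage client; the queue name and payload are illustrative):

```csharp
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tasks");
queue.CreateIfNotExist();

// New AddMessage overload: a 1-hour time-to-live plus an initial
// visibility delay that keeps the message hidden for 2 minutes.
queue.AddMessage(new CloudQueueMessage("process-order"),
    TimeSpan.FromHours(1), TimeSpan.FromMinutes(2));

// UpdateMessage: extend a read message's invisibility window before it
// reappears to other consumers.
CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromSeconds(30));
if (msg != null)
{
    queue.UpdateMessage(msg, TimeSpan.FromSeconds(30),
        MessageUpdateFields.Visibility);
}
```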

Windows Azure SDK 1.6 Installation Walkthrough:

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

• Avkash Chauhan (@avkashchauhan) answered “No” to Can you programmatically enable CDN within your Windows Azure Storage Account? in an 11/23/2011 post:

I was recently asked if there is an API which can enable CDN for a Windows Azure storage account programmatically.

After a little digging I found that, as of now, you cannot programmatically enable CDN for a Windows Azure Storage account. You will have to access the storage account directly in the Windows Azure Management Portal and then manually enable CDN for that account as below:

I also found that you cannot programmatically get the CDN URL for your Windows Azure storage account; you would need to access it directly from the Windows Azure Management Portal.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Brian Swan (@brian_swan) described Packaging a Custom PHP Installation for Windows Azure in an 11/23/2011 post to The Silver Lining Blog:

One feature of the scaffolds in the Windows Azure SDK for PHP is that they all rely on the Web Platform Installer to install PHP when a project is deployed. This is great until I want my application deployed with my locally customized installation of PHP. Not only might I have custom settings, but I might have custom libraries that I want to include (like some PEAR modules or any of the many PHP frameworks out there). In this tutorial, I’ll walk you through the steps for deploying a PHP application bundled with a custom installation of PHP. This tutorial does not rely on the Windows Azure SDK for PHP, but you will need…

Ultimately, I’d like to make this process (below) easier (by possibly turning this into a scaffold to be included in the Windows Azure SDK for PHP?), so please provide feedback if you try this out…

1. Customize your PHP installation. Configure any settings and external libraries you want for your application.

2. Create a project directory. You’ll need to create a project directory for your application and the necessary Azure configuration files. Ultimately, your project directory should look something like this (we’ll fill in some of the missing files in the steps that follow):

-(application files)
-(any external libraries)

A few things to note about the structure above:

  • You need to copy your custom PHP installation to the bin directory.
    • Make sure that all paths in your php.ini are relative (e.g. extension_dir=".\ext")
  • Any external libraries need to be in your application root directory.
    • Technically, this isn’t true. You could use a relative path for your include_path configuration setting (relative to your application root) and put this directory elsewhere in your project directory.
  • Maybe this goes without saying, but be sure to turn off any debug settings (like display_errors) before pushing this to production in Azure.
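The project layout described in step 2 can be assembled with a few commands. Here is a minimal sketch; the names MyProject (project directory) and MyApp (application root) are hypothetical placeholders, not the names from the original post:

```shell
# Sketch: assemble the Azure project layout described in step 2.
# "MyProject" and "MyApp" are hypothetical names; substitute your own.
mkdir -p MyProject/MyApp/bin/php             # custom PHP installation goes here
mkdir -p MyProject/MyApp/lib                 # external libraries (relative include_path)
touch MyProject/MyApp/index.php              # application files
touch MyProject/MyApp/bin/configureIIS.cmd   # startup script (step 3)
touch MyProject/ServiceDefinition.csdef      # service definition (step 4)
find MyProject -type d | sort                # show the directory tree
```

Whatever names you choose here must match the names used in the ServiceDefinition.csdef file in step 4.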

3. Add a startup script for configuring IIS. IIS in a Web role is not configured to handle PHP requests by default, so we need a script that runs when an instance is brought online to configure IIS. (We’ll set this script to run on start up in the next step.) Create a file called configureIIS.cmd, add the content below, and save it in the bin directory:

SET PHP_FULL_PATH=%~dp0php\php-cgi.exe
SET NEW_PATH=%PATH%;%RoleRoot%\base\x86 
%WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /+"[fullPath='%PHP_FULL_PATH%',maxInstances='12',idleTimeout='60000',activityTimeout='3600',requestTimeout='60000',instanceMaxRequests='10000',protocol='NamedPipe',flushNamedPipe='False']" /commit:apphost
%WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /+"[fullPath='%PHP_FULL_PATH%'].environmentVariables.[name='PATH',value='%NEW_PATH%']" /commit:apphost
%WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /+"[fullPath='%PHP_FULL_PATH%'].environmentVariables.[name='PHP_FCGI_MAX_REQUESTS',value='10000']" /commit:apphost
%WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/handlers /+"[name='PHP',path='*.php',verb='GET,HEAD,POST',modules='FastCgiModule',scriptProcessor='%PHP_FULL_PATH%',resourceType='Either',requireAccess='Script']" /commit:apphost
%WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /"[fullPath='%PHP_FULL_PATH%'].queueLength:50000"

4. Add a service definition file (ServiceDefinition.csdef). Every Azure application must have a service definition file. The important part of this one is that we set the script above to run on start up whenever an instance is provisioned:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="YourProjectDirectory" xmlns="">
    <WebRole name="YourApplicationRootDirectory" vmsize="ExtraSmall" enableNativeCodeExecution="true">
        <Sites>
            <Site name="YourPHPSite" physicalDirectory="./YourApplicationRootDirectory">
                <Bindings>
                    <Binding name="HttpEndpoint1" endpointName="defaultHttpEndpoint" />
                </Bindings>
            </Site>
        </Sites>
        <Startup>
            <Task commandLine="configureIIS.cmd" executionContext="elevated" taskType="simple" />
        </Startup>
        <Endpoints>
            <InputEndpoint name="defaultHttpEndpoint" protocol="http" port="80" />
        </Endpoints>
    </WebRole>
</ServiceDefinition>

Note that you will need to change the names YourProjectDirectory and YourApplicationRootDirectory depending on what names you used in your project structure from step 2. You can also configure the VM size (which is set to ExtraSmall in the file above). For more information, see Windows Azure Service Definition Schema.

5. Generate a service configuration file (ServiceConfiguration.cscfg). Every Azure application must also have a service configuration file, which you can generate using the Windows Azure SDK. Open a Windows Azure SDK command prompt, navigate to your project directory, and execute this command:

cspack ServiceDefinition.csdef /generateConfigurationFile:ServiceConfiguration.cscfg /copyOnly

This will generate a ServiceConfiguration.cscfg file and a ServiceDefinition.csx directory in your project directory.

6. Run your application in the Compute Emulator. If you want to run your application in the Compute Emulator (for testing purposes), execute this command:

csrun ServiceDefinition.csx ServiceConfiguration.cscfg /launchbrowser

One thing to note about doing this: The configureIIS.cmd script will be executed on your local machine (setting your PHP handler to point to the PHP installation that is part of your Azure project). You’ll need to change this later.

7. Create a package for deployment to Azure. Now you can create the package file (.cspkg) that you need to upload to Windows Azure with this command:

cspack ServiceDefinition.csx ServiceConfiguration.cscfg
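Taken together, the SDK commands from steps 5 through 7 form a small build pipeline. Here is a dry-run sketch that only prints the commands (cspack and csrun are Windows-only tools from the Windows Azure SDK, so on other machines this just shows the sequence):

```shell
# Sketch: the packaging workflow from steps 5-7 as one script (dry run).
# cspack/csrun are Windows Azure SDK tools; here we only print the commands.
run() { echo "$@"; }   # change 'echo "$@"' to '"$@"' to actually execute
run cspack ServiceDefinition.csdef /generateConfigurationFile:ServiceConfiguration.cscfg /copyOnly
run csrun ServiceDefinition.csx ServiceConfiguration.cscfg /launchbrowser
run cspack ServiceDefinition.csx ServiceConfiguration.cscfg
```

Running this at a Windows Azure SDK command prompt (with the dry-run `echo` removed) reproduces steps 5, 6, and 7 in order.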

8. Deploy your application. Finally, you can deploy your application. This tutorial will walk you through the steps:

Again, I’d be very interested to hear feedback from anyone who tries this. Like I mentioned earlier, I think turning this into a scaffold that is included in the Windows Azure SDK for PHP might be very useful.

• Avkash Chauhan (@avkashchauhan) provided Troubleshooting details: Windows Azure Web Role was stuck due to an exception in IISConfigurator.exe process in an 11/23/2011 post:

Recently I was working on a partner problem in which the Azure web role was stuck, showing “preparing node” status in the Windows Azure Management Portal. Luckily, RDP access to the Azure VM was working, so investigating the problem was easier.

After logging in to the Azure VM over RDP, the IISConfigurator.log showed the following error:

IISConfigurator Information: 0 : [11/22/11 10:05:42.49] Started iisconfigurator with args 
IISConfigurator Information: 0 : [11/22/11 10:05:42.71] Started iisconfigurator with args /start
IISConfigurator Information: 0 : [11/22/11 10:05:42.72] StartForeground selected. Check if an instance is already running
IISConfigurator Information: 0 : [11/22/11 10:05:42.80] Starting service WAS
IISConfigurator Information: 0 : [11/22/11 10:05:43.28] Starting service w3svc
IISConfigurator Information: 0 : [11/22/11 10:05:43.53] Starting service apphostsvc
IISConfigurator Information: 0 : [11/22/11 10:05:43.96] Attempting to add rewrite module section declarations
IISConfigurator Information: 0 : [11/22/11 10:05:44.00] Section rules already exists
IISConfigurator Information: 0 : [11/22/11 10:05:44.00] Section globalRules already exists
IISConfigurator Information: 0 : [11/22/11 10:05:44.00] Section rewriteMaps already exists
IISConfigurator Information: 0 : [11/22/11 10:05:44.00] Adding rewrite module global module
IISConfigurator Information: 0 : [11/22/11 10:05:44.03] Already exists
IISConfigurator Information: 0 : [11/22/11 10:05:44.05] Enabling rewrite module global module
IISConfigurator Information: 0 : [11/22/11 10:05:44.07] Already exists
IISConfigurator Information: 0 : [11/22/11 10:05:44.07] Skipping Cloud Drive setup.
IISConfigurator Information: 0 : [11/22/11 10:05:44.07] Cleaning All Sites
IISConfigurator Information: 0 : [11/22/11 10:05:44.07] Deleting sites with prefix:
IISConfigurator Information: 0 : [11/22/11 10:05:44.08] Found site:Ayuda.WebDeployHost.Web_IN_0_Web
IISConfigurator Information: 0 : [11/22/11 10:05:44.11] Excecuting process 'D:\Windows\system32\inetsrv\appcmd.exe' with args 'delete site "<sitename>.WebDeployHost.Web_IN_0_Web"'
IISConfigurator Information: 0 : [11/22/11 10:05:44.22] Process exited with code 0
IISConfigurator Information: 0 : [11/22/11 10:05:44.22] Deleting AppPool: <AppPool_GUID>
IISConfigurator Information: 0 : [11/22/11 10:05:44.28] Found site:11600
IISConfigurator Information: 0 : [11/22/11 10:05:44.30] Excecuting process 'D:\Windows\system32\inetsrv\appcmd.exe' with args 'delete site "11600"'
IISConfigurator Information: 0 : [11/22/11 10:05:44.49] Process exited with code 0
IISConfigurator Information: 0 : [11/22/11 10:05:44.49] Deleting AppPool: 11600
IISConfigurator Information: 0 : [11/22/11 10:05:44.52] Deleting AppPool: 11600
IISConfigurator Information: 0 : [11/22/11 10:05:44.55] Unhandled exception: IsTerminating 'True', Message 'System.ArgumentNullException: Value cannot be null.
Parameter name: element
at Microsoft.Web.Administration.ConfigurationElementCollectionBase`1.Remove(T element)
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.WasManager.RemoveAppPool(ServerManager serverManager, String appPoolName)
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.WasManager.TryRemoveSiteAndAppPools(String siteName)
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.WasManager.CleanServer(String prefix)
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.WCFServiceHost.Open()
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.Program.StartForgroundProcess()
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.Program.DoActions(String[] args)
at Microsoft.WindowsAzure.ServiceRuntime.IISConfigurator.Program.Main(String[] args)'

If you study the IISConfigurator.exe exception call stack in the log above, you will see that prior to the crash the code was trying to delete the AppPool 11600; the deletion completed, and then there was an exception. After that, the role could not start correctly.

During the investigation I found the following details which I decided to share with all of you:

The Windows Azure application has a web role that was created using a modified version of the “Windows Azure Accelerator for Web Roles”. The application was customized so that when the role starts, the role startup code does the following:

  1. Creates a few sites in the IIS
  2. Creates appropriate bindings for the new sites created in step #1.

So when the Windows Azure VM was restarted/rebooted for any reason (OS update, manual reboot, etc.), the role got stuck due to the IISConfigurator exception.

This is because IIS was not clean when the machine started, so IISConfigurator was removing all the previous sites to prepare IIS for the web role. The IISConfigurator process has only one minute to perform all its tasks; if it cannot finish within that minute, an exception occurs due to the timeout.


Because the web role was creating all these sites during role startup, the ideal solution was to clean up IIS when the machine shuts down rather than putting the burden on IISConfigurator to clean up IIS during startup.

So the solution was to clean up IIS during the RoleEnvironment.Stopping event. To solve this problem, we added code in the RoleEnvironment.Stopping event that launches “appcmd” to delete all the sites created earlier, making sure IIS is left clean.
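The cleanup amounts to enumerating the sites the role created and deleting each one with appcmd. Here is a dry-run sketch that only prints the appcmd commands one might issue from the Stopping event handler; the site names are hypothetical, and on an Azure VM the commands would actually be executed:

```shell
# Dry run: print the appcmd commands that would clean up the sites
# this role created. SITES is a hypothetical list of site names.
APPCMD='%WINDIR%\system32\inetsrv\appcmd.exe'
SITES="MySite1 MySite2"
for site in $SITES; do
  echo "$APPCMD delete site \"$site\""
done
```

In the real role code these commands are launched as child processes from the Stopping event handler, so that IIS is already clean when IISConfigurator runs at the next startup.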

You can read my blog post below to understand what the Stopping event is and how to handle it properly in your code:

Ultimately, the problem was that IIS had residual settings at machine startup, and IISConfigurator could not clean IIS before the timeout kicked in, which caused the role startup problems. After adding the necessary cleanup code in the Stopping event, the web role started without any issues on reboot. You can also add a startup task to make sure IIS is clean prior to role start, as a failsafe.

• Joel Foreman described In-Place Updates in Windows Azure in an 11/21/2011 post to the Slalom Blog:

Recent improvements to Windows Azure will now give developers better flexibility and control over updating existing deployments. There is now better support for in-place updates across a wider range of deployment scenarios without changing the assigned VIP address of a service. You can read more about these changes in the MSDN blog Announcing Improved In-place Updates by Drew McDaniel. Here is a bit more about the problem space and the new changes.

When you deploy a new service to Windows Azure, it is assigned a VIP address. This VIP address will stay the same as long as a deployment continues to occupy the deployment slot (either production or staging) for this service. But if a deployment is deleted and removed entirely, the next deployment will be assigned a new VIP.

Someday we will live in a world where this VIP address will not matter. For some applications, it doesn’t, and the application continues to function just fine if it changes. But in my experience thus far, there are often cases where a dependency on this address occurs. One very common example is supporting an A record DNS entry for a top-level domain name. Another is the practice of IP “white-listing” for access to protected resources behind a firewall. I think that in the future there won’t be a need for this dependency, as the infrastructure world catches up to the cloud movement. But we are not there yet.

In order to preserve the VIP address of your existing service across deployments, developers can utilize two mechanisms: in-place upgrades or the “VIP swap,” which is the swapping of two running deployments (i.e., staging and production). But there were limitations to the types of deployments that either of these methods could support. What this really meant is that as long as the topology of your service didn’t change, this would work fine. But for major releases, it likely would not work. For instance, if your new deployment added an endpoint (e.g., HTTPS), an in-place upgrade could not be performed. Or if your deployment added a new role (e.g., a new worker role), a VIP swap could not be performed with the existing production instance. The end result was having to delete the existing deployment and deploy the new service. This would cause a brief interruption in service, cause a new VIP to be assigned, and any downstream dependencies on the VIP would have to be updated. It was a pain to deal with.

The new improvements to how deployments are handled, both in-place upgrades and VIP swaps, will eliminate these scenarios, along with many others! I am very pleased with these updates. In fact, off the top of my head I cannot think of a deployment scenario that cannot now be supported by an in-place upgrade or VIP swap. Check out the matrix provided in the link above to see all of the different deployment scenarios that are covered.

We can’t get around taking that dependency on the VIP address of our service, but at least now we are better enabled to deal with preserving the VIP address for our service.

David Linthicum (@DavidLinthicum) asserted “With rise of supercomputing and high-end platforms in public clouds, the day will come when you can't get them any other way” in a deck for his Why supercomputers will live only in the cloud article of 11/23/2011 for InfoWorld’s Cloud Computing blog:

The new public beta of Cluster Compute Eight Extra Large is Amazon Web Services' most powerful cloud service yet. Its launch indicates that Amazon Web Services (AWS) intends to attract more organizations into high-performance computing. "AWS's cloud for high-performance computing applications offers the same benefits as it does for other applications: It eliminates the cost and complexity of buying, configuring, and operating in-house compute clusters, according to Amazon," notes the IDG News Service story. The applications include physics simulations, seismic analysis, drug design, genome analysis, aircraft design, and similar CPU-intensive analytics applications.

This is a core advantage of cloud computing: the ability to access very expensive computing systems using a self-provisioned and time-shared model. Most organizations can't afford supercomputers, so they choose a rental arrangement. This is not unlike how I had to consume supercomputing services back when I was in college. Certainly the college could not afford a Cray.

The question then arises: What happens when these advanced computing services move away from the on-premises hardware and software model completely? What if they instead choose to provide multitenant access to supercomputing services and hide the high-end MIPS behind a cloud API?

This model may offer a more practical means of providing these services, and supercomputers are not the only platform where this shift may occur. Other, more obscure platforms and applications could be contenders for the cloud-only model: huge database services bound to high-end analytics, geo-analytics, any platform that deals with massive image processing, and other platforms and applications that share the same patterns.

I believe that those who vend these computing systems and sell about 20 to 30 a year will find that the cloud becomes a new and more lucrative channel. Perhaps they will support thousands of users on the cloud, an audience that would typically not be able to afford the hardware and software.

Moreover, I believe this might be the only model they support in the future, and the cloud could be the only way to access some platform services. That's a pity for those who want to see the hardware in their own data center, but perhaps that's not a bad thing.

See the Microsoft Research reported an updated CTP of Project Daytona: Iterative MapReduce on Windows Azure on 11/14/2011 article below (in this section).

Avkash Chauhan (@avkashchauhan) described Azure Diagnostics generates exception in SystemEventsListener::LoadXmlString when reading event log which includes exception as event data on 11/22/2011:

When you write a Windows Azure application, you may wish to include your own errors or notifications in the event log. Or you might like to save exceptions in the event log, which is ultimately collected by Windows Azure Diagnostics in the Azure VM and sent to Windows Azure storage.

When you create an event log entry with Exception.ToString(), the event can’t be consumed by Azure Diagnostics; instead you might see an exception in the Azure Diagnostics module.

The exception shows a failure in SystemEventsListener::LoadXmlString while loading the event string.

If you dig deeper, you will find that this exception is actually related to the “xmlns” attribute present in your exception string, as below:

<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="">
  <message xml:lang="en-US">The specified entity already exists.</message>
</error>

This exception is caused by a known issue in the Azure Diagnostics component, triggered by the string “xmlns” being included in the content of an event log entry.

To solve this problem you have two options:

  1. You can remove xmlns from the error data
  2. You can replace the double quotes with single quotes in the xmlns string
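Option 2 is a simple string substitution applied to the message before it is written to the event log. Here is a minimal sketch of the transformation using sed; the sample message and its namespace URL are hypothetical:

```shell
# Replace the double quotes around the xmlns value with single quotes
# so the string "xmlns" no longer appears with quoted XML syntax that
# trips up the diagnostics parser. Sample message is hypothetical.
MSG='<error xmlns="http://example.com/ns"><message>The specified entity already exists.</message></error>'
echo "$MSG" | sed 's/xmlns="\([^"]*\)"/xmlns='"'"'\1'"'"'/'
```

In role code, the same substitution can be done with an ordinary string replace on Exception.ToString() before writing the event.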

Brian Hitney announced Rock, Paper, Azure is back… in an 11/22/2011 post:

Rock, Paper, Azure (RPA) is back! For those of you who have played before, be sure to get back in the game! If you haven’t heard of RPA, check out my past posts on the subject. In short, RPA is a game that we built in Windows Azure. You build a bot that plays a modified version of rock, paper, scissors on your behalf, and you try to outsmart the competition.

Over the summer, we ran a Grand Tournament where the first place prize was $5,000! This time, we’ve decided to change things a bit and do both a competition and a sweepstakes. The game, of course, is a competition because you’re trying to win. But we heard from many who didn’t want to get in the game because the competition was a bit fierce.

Competition: from Nov 25 through Dec 16, we’ll give the top 5 bots a $50 Best Buy gift card each Friday. If you’re a top bot every Friday, you’ll get a gift card every Friday.

Sweepstakes: for all bots in the game on Dec 16th, we’ll run the final round and then select a winner at random to win a trip to Cancun. We’re also giving away an Acer Aspire S3 laptop, a Windows Phone, and an Xbox Kinect bundle. Perfect timing for the holidays!

Rock Paper Azure

Check it out at!

The last time I flew to Cancun (in my 1956 Piper PA23-150 Apache), it was called Puerto Juarez and the first hotels were being built. Wish I’d saved the 35-mm pictures I took.

Avkash Chauhan (@avkashchauhan) listed Resources to run Open Source Stacks on Windows Azure on 11/22/2011:


Apache Tomcat


Ruby on Rails


Scala & Play Framework:



Bruno Terkaly (@brunoterkaly) posted Presentation and Training Kit: Android Consuming Cloud Data – Powered By Windows Azure on 11/21/2011:

Goal of this post – To teach you how to demo an Android application consuming standards-based RESTful web services

This post has a simple goal – to prepare you to give a presentation on how you would communicate to the cloud from an Android phone. The presentation can be given in a time range of 30 minutes to an hour, depending on the level of detail you wish to provide. This talk has been given at the Open Android Conference. Details can be found here:

O'Reilly Open Android Conference

This is developer-centric – hands-on coding

This is designed to be a hands-on demo, meaning that there are working samples to demonstrate key concepts. Source code, PowerPoint slides, and videos are all part of this package. All the material is available on my blog posts.

Resources are publicly available

All of the materials for this talk are publicly available. This dramatically simplifies follow-up with audience members, who frequently ask for the presentation materials.

A flow has been defined for this talk

There are 4 main sections in this talk. Each section can take from 10 to 15 minutes. Following parts 1-4 below will allow you to give a deep, hands-on code demo of connecting Android mobile applications to the Microsoft Cloud – Windows Azure.


Bruno continues with details of the four parts of the presentation.

Microsoft Research reported an updated CTP of Project Daytona: Iterative MapReduce on Windows Azure on 11/14/2011:

Microsoft has developed an iterative MapReduce runtime for Windows Azure, code-named Daytona. Project Daytona is designed to support a wide class of data analytics and machine-learning algorithms. It can scale to hundreds of server cores for analysis of distributed data. Project Daytona was developed as part of the eXtreme Computing Group’s Cloud Research Engagement Initiative.

Download Details

File Name:


Date Published:
14 November 2011

Download Size:
11.16 MB


Note: By installing, copying, or otherwise using this software, you agree to be bound by the terms of its license. Read the license.


On Nov. 14, 2011 we released an updated Daytona community technical preview (CTP) that contains a number of performance improvements from our recent development sprint and improved fault tolerance, along with enhancements for iterative algorithms and data caching. Click the Download button above to get the latest package with these updates. Learn more about this release...


Project Daytona on Windows Azure is now available, along with a deployment guide, developer and user documentation, and code samples for both data-analysis algorithms and a client application. This implementation of an iterative MapReduce runtime on Windows Azure allows laboratories, small groups, and individual researchers to use the power of the cloud to analyze data sets of gigabytes or terabytes and run large-scale machine learning algorithms on dozens or hundreds of compute cores.

Included in the CTP Refresh (Nov. 14, 2011)

This refresh to the Daytona CTP contains the following enhancements:

  1. Updated
    1. Binaries
    2. Hosting project source
    3. API help reference file (CHM)
    4. Documentation

Included in the CTP Release (Nov. 14, 2011)

This CTP release consists of a ZIP file that includes our Windows Azure cloud service installation package along with the documentation and sample code.

  • The Deployment Package folder contains a package to be deployed on your Windows Azure account, a configuration file for setting up your account, and a guide that offers step-by-step instructions to set up Project Daytona on your Windows Azure service. This package also contains the user guide that describes the user interface for launching data analytics jobs on Project Daytona from a sample client application.
  • The Samples folder contains source code for sample data analytics algorithms written for Project Daytona as examples, along with source code for a sample client application to invoke data analytics algorithms on Project Daytona. The distribution also includes a developer guide for authoring new data analytics algorithms for Project Daytona, along with user guides for both the Daytona iterative MapReduce runtime and client application.

About Project Daytona

The project code-named Daytona is part of an active research and development project in the eXtreme Computing Group of Microsoft Research. We will continue to tune the performance of Project Daytona and add new functionality, fix any software defects that are identified, and periodically push out new versions.

Please report any issues you encounter in using Project Daytona to XCG Engagement or contact Roger Barga with suggestions for improvements and/or new features.

Thanks to Roger Barga for the heads-up about the new CTP.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

• Michael Washington (@ADefWebserver) reported in an 11/25/2011 Tweet:

Church+ is a “commercial application built with #Visual Studio #LightSwitch”:

Personal Details

Detailed Membership information and profile window: This window displays the full data of a particular member. Where obtainable, the picture of the member can be attached to their information. It is a comprehensive window that displays everything one needs to know about a member, a very important tool in following up on the spiritual and other aspects of the life of a parishioner.


Detailed data on each family in the church is captured by this window. The family name, father’s name, mother’s name, number of children/wards they have, and contact address of the family are captured here.

Membership Charts

Membership Charts: This window provides detailed analysis of the church mix. It displays the mix in four pie charts, each analyzing the church in terms of: membership status (e.g., first timer, full member, worker, etc.); membership by gender, i.e., the male/female mix of the church; membership by marital status, showing the percentage of married to singles; and the age mix of the church. It describes all these parameters in percentages.

Birth Management

Birth Management: This displays detailed information about new births in the church. It captures the Father and the Mother’s name of the baby, the name(s) of the baby, time and the date of birth of the child, the day of dedication and the officiating minister.

Birthday and Anniversary Reminders

The Birthday and Anniversary Reminder window is the first display the application shows once started. It displays birthdays and anniversaries that fall on the particular day church+ is opened. It pulls up birthday/anniversary information of members from the database and automatically reminds the administrator of these daily.

Detailed Church Activities

Church Activity: This view displays detailed data about a particular activity/service of the church. It captures the attendance, offerings, testimonies, the date, start and end time, description of the service, the preacher, topic of the sermon, the text and special notes. All necessary information about any type of service is captured, and can be called up at any point in time.

Church Attendance and Growth Analysis

Attendance and Church Growth Analysis Window: This view provides strategic tools to analyse how the church is doing. This is a window where detailed analysis of each activity/service is done. It helps the church to see the progress or otherwise of the services. Attendances per service are compared to previous services in a visual and graphical form using bar charts. It is a wonderful church growth tool that removes guesswork and forces the church leadership to ask intelligent questions that will result in decisions critical to the growth of the church.

Church Service Report

The Report Windows: church+ has printable reports for the first-timers, converts, members detailed report, church activity/service report, financial reports, and comparable church activity/service report over a period of time.

First Timer Report

The Visual Studio LightSwitch Team described LightSwitch Video Training from Pluralsight! in an 11/21/2011 post:

Pluralsight provides a variety of developer training on all sorts of topics, and they have generously donated some LightSwitch training for our LightSwitch Developer Center!

Just head on over to the LightSwitch “How Do I” video page and on the right you’ll see three video modules with over an hour and a half of free LightSwitch training.

1. Introduction to Visual Studio LightSwitch (27 min.)
2. Working with Data (30 min.)
3. Working with Screens (37 min.)

Also, don’t forget to check out all 24 “How Do I” videos as well as other essential learning topics on the Dev Center.

• Kostas Christodoulou asserted Auditing and Concurrency don’t mix (easily)… in an 11/20/2011 post:

In MSDN forums I came across a post addressing an issue I have also faced. Auditing fields can cause concurrency issues in LightSwitch (not exclusively).
In general, basic auditing includes keeping track of when an entity was created/modified and by whom. I say basic auditing because auditing in general is much more than this.

Anyhow, this basic auditing mechanism is very widely implemented (it’s a way for developers to be able to easily find a user to blame for their own bugs :-p), so let’s see what this can cause and why in LightSwitch.
In the aforementioned post but also in this one, I have clearly stated that IMHO the best way to handle concurrency issues is using RIA Services. If you don’t, read what follows.

Normally in any application, updating the fields that implement audit tracking would be a task completed in the business layer (or even the data layer in some cases; this could go as deep as a database trigger). So in LightSwitch, the first place one would look to put this logic would be the EntityName_Inserting and EntityName_Updating partial methods that run on the server. Which is right, but it causes concurrency issues: after saving, the client instance of the entity is not updated with the changes made at the server, and as soon as you try to save again, a concurrency error will occur.

So, what can you do, apart from refreshing after every save (which is not very appealing)? Update at the client. Not appealing either, but at least it can be done elegantly:

Let’s say all entities to implement auditing have 4 fields:

  • DateCreated
  • CreatedBy
  • DateModified
  • ModifiedBy
Add to the Common project a new interface called IAuditable like below:
namespace LightSwitchApplication{
    public interface IAuditable{
        DateTime DateCreated { get; set; }
        string CreatedBy { get; set; }
        DateTime DateModified { get; set; }
        string ModifiedBy { get; set; }
    }
}

Then, also in the common project, add a new class called EntityExtensions:
namespace LightSwitchApplication{
    public static class EntityExtensions{
        public static void Created<TEntityType>(this TEntityType entity, IUser user)
           where TEntityType: IAuditable{
           entity.DateCreated = entity.DateModified = DateTime.Now;
           entity.CreatedBy = entity.ModifiedBy = user.Name;
        }

        public static void Modified<TEntityType>(this TEntityType entity, IUser user)
           where TEntityType: IAuditable{
           entity.DateModified = DateTime.Now;
           entity.ModifiedBy = user.Name;
        }
    }
}
Now let’s suppose your entity’s name is Customer (imagination was never my strong point), the screen’s name is CustomerList and the query is called Customers.
First, viewing the Customer entity in the designer, click Write Code and make sure that:
partial class Customer : IAuditable{
}

Then, in your screen’s Saving method, write this:
partial void CustomerList_Saving(ref bool handled){
    foreach(Customer customer in this.DataWorkspace.ApplicationData.Details.GetChanges().AddedEntities.OfType<Customer>())
        customer.Created(this.Application.User);
    foreach(Customer customer in this.DataWorkspace.ApplicationData.Details.GetChanges().ModifiedEntities.OfType<Customer>())
        customer.Modified(this.Application.User);
}
This should do it. This way you can also easily move your logic to the server, as the interface and extension class are defined in the Common project and are also available to the server.
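For completeness, here is a minimal sketch of what the server-side equivalent might look like, assuming the default ApplicationDataService class and the Customer entity from the example above (the entity-set name Customers is an assumption). Keep in mind that doing it this way, without refreshing the client, re-introduces the concurrency issue discussed earlier:

```csharp
namespace LightSwitchApplication{
    public partial class ApplicationDataService{
        // Server-side hooks; method names assume an entity set called Customers.
        // Setting audit fields here, without updating the client's copy of the
        // entity, is exactly what triggers the concurrency error described above.
        partial void Customers_Inserting(Customer entity){
            entity.Created(this.Application.User);
        }

        partial void Customers_Updating(Customer entity){
            entity.Modified(this.Application.User);
        }
    }
}
```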

Return to section navigation list>

Windows Azure Infrastructure and DevOps

• Nicholas Mukhar quoted Apprenda CEO: Competition Non-Existent in .NET PaaS Space in an 11/23/2011 post to the TalkinCloud blog:

In late November 2011 Apprenda, a platform-as-a-service (PaaS) provider for enterprise companies, released its Apprenda 3.0 solution featuring more manageability and platform support for developers. Shortly after the release I spoke with Apprenda CEO Sinclair Schuller to learn about the company’s background and the state of competition in the PaaS market. My most surprising discovery? Schuller noted a lack of competition in the PaaS market for .NET applications. Here’s why:

“We’re seeing more competition now, but no one else is focusing on the .NET,” Schuller said. Apprenda attributes the lack of PaaS .NET competition to two factors:

  1. Research and development is easier around Java applications, meaning there’s an easier entrance into the marketplace if you’re developing PaaS for Java apps.
  2. Developers get nervous because Microsoft owns the .NET framework.

These factors don’t seem to bother Schuller, who said Apprenda is “happy to be part of the Microsoft ecosystem.” So what has Apprenda discovered that other PaaS providers have yet to catch on to? Schuller noted two untapped areas that Apprenda has since exploited — the lack of private cloud PaaS solutions and mobile PaaS.

“If you look at other PaaS providers, they wrote a lot of software, but it’s all tied to infrastructure,” he said. “But ours is portable. It can be installed anywhere you can get a traditional Windows Server. It can take over all Windows Servers that are working together and join them together to create a platform. Developers don’t have to worry about servers. They can tell Apprenda to run an application and it decides which server is best to run it on.”

Schuller’s idea to found Apprenda came from his enterprise IT background. Specifically, building accounting applications while employed at Morgan Stanley Portfolio Accounting. “We wrote web applications for Java in about one to four months, but then it took 30 to 90 days to get the applications deployed. And there was a lot of human error throughout that process,” he said. “Accountants want their applications delivered quickly. They were breathing down our necks, and when we scaled the applications across multiple projects, the application complexity becomes higher. We found ourselves rebuilding common components.” So Sinclair and a team of entrepreneurs decided to build Apprenda — a technology layer that’s sold as an application platform so developers don’t have to rebuild mission-critical components and IT staff can scale their applications.

The next step for Apprenda? The company has two releases each year with the next one scheduled for May 2012. Schuller said Apprenda is also focused on expanding the different types of applications its solution can support.


• Lori MacVittie (@lmacvittie) asserted #devops It’s a simple equation, but one that is easily overlooked in an introduction to her The Pythagorean Theorem of Operational Risk post of 11/23/2011 to F5’s DevCentral blog:

Most folks recall, I’m sure, the Pythagorean Theorem. If you don’t: any side of a right triangle can be computed if you know the other two by using the simple formula a² + b² = c². What’s really important about the theorem is that it clearly illustrates the relationship between three different pieces of a single entity. The lengths of the legs and hypotenuse of a triangle are intimately related; variations in one impact the others.

The components of operational risk – security, availability, performance – are interrelated in very much the same way. Changes to one impact the others. They cannot be completely separated. Much in the same way unraveling a braided rope will impact its strength, so too does unraveling the relationship between the three components of operational risk impact its overall success.


It is true that the theorem is not an exact match, primarily because concepts like performance and security are not easily encapsulated as concrete numerical values even if we apply mathematical concepts like Gödel numbering to it. While certainly we could use traditional metrics for availability and even performance (in terms of success at meeting specified business SLAs), still it would be difficult to nail these down in a way that makes the math make sense.

But the underlying concept – that it is always true* that the sides of a right triangle are interrelated – is equally applicable to operational risk.

Consider that changes in performance impact what is defined as “availability” to end-users. Unacceptably slow applications are often defined as “unavailable” because they render it nearly impossible for end-users to work productively. Conversely, availability issues can negatively impact performance as fewer resources attempt to serve the same or more users. Security, too, is connected to both concepts, though it is more directly an enabler (or disabler) of availability and performance than vice-versa. Security solutions are not known for improving performance, after all, and the number of attacks directly attempting to negate availability is a growing concern for security. So, too, is the impact of attacks relevant to performance, as increasingly application-layer attacks are able to slip by security solutions and directly consume resources on web/application servers, degrading performance and ultimately resulting in a loss of availability.

But that is not to say that performance and availability have no impact on security at all. In fact, the claim could easily be made that performance has a huge impact on security, as end-users demand more of the former, and that often results in less of the latter because of traditional security solutions’ impact on performance.

Thus we are able to come back to the theorem in which the three sides of the data center triangle known as operational risk are, in fact, intimately linked. Changes in one impact the other, often negatively, and all three must be balanced properly in a way that maximizes them all.


Devops plays (or could and definitely should play) a unique role in organizations transforming from their traditional, static architectures toward the agile, dynamic architectures necessary to successfully execute on cloud and virtualization strategies. The role of devops focuses on automation and integration of the various delivery concerns that support the availability, performance, and security of applications. While they may not be the ones that define and implement security policies, they are the ones that should be responsible for assuring they are deployed and applied to applications as they move into the production environment. Similarly, devops may not be directly responsible for defining availability and performance requirements, but they are the ones that must configure and implement the appropriate health monitoring and related policies to ensure that all systems involved have the data they need to make the decisions required to meet operational requirements. Devops should be responsible for provisioning the appropriate services related to performance, security, and availability based on their unique role in the emerging data center model.

Devops isn’t just scripts and automation; it’s a role requiring a unique skillset spanning networking, development, application delivery, and infrastructure integration. Devops must balance availability, performance, and security concerns and understand how the three are interrelated and impact each other. The devops team should be the ones who recognize gaps or failures in policies and are able to point them out, quickly.

The interrelationship between all three components of operational risk puts devops front and center when it comes to maintaining application deployments. Treating security, performance, and availability as separate and isolated concerns leads to higher risks that one will negatively impact the other. Devops is the logical point at which these three concerns converge, much in the same way application delivery is the logical data center tier at which all three concerns converge from a services and implementation point of view. Devops, like application delivery, has the visibility and control necessary to ensure that the three sides of the operational risk triangle are balanced properly.

* In Euclidian geometry, at least. Yes, other systems exist in which this may not be true, but c’mon, we’re essentially applying Gödel numbers to very abstract metrics and assuming axioms that have never been defined let alone proven so I think we can just agree to accept that this is always true in this version of reality.

• Lydia Leong (@CloudPundit) described Why developers make superior operators in an 11/21/2011 post:

Developers who deeply understand the arcana of infrastructure, and operators who can code and understand the interaction of applications and infrastructure, are better than developers and operators who understand only their own discipline. But it’s typically easier, from the perspective of training, for a developer to learn operations, than for an operator to learn development.

While there are a fair number of people who teach themselves on the job, most developers still come out of formal computer science backgrounds. The effectiveness of formal education in CS varies immensely, and you can get a good understanding by reading on your own, of course, if you read the right things — it’s the knowledge that matters, not how you got it. But ideally, a developer should accumulate the background necessary to understand the theory of operating systems, and then have a deeper knowledge of the particular operating system that they primarily work with, as well as the arcana of the middleware. It’s intensely useful to know how the abstract code you write actually turns out to run in practice. Even if you’re writing in a very high-level programming language, knowing what’s going on under the hood will help you write better code.

Many people who come to operations from the technician end of things never pick up this kind of knowledge; a lot of people who enter either systems administration or network operations do so without the benefit of a rigorous education in computer science, whether from college or self-administered. They can do very well in operations, but it’s generally not until you reach the senior-level architects that you commonly find people who deeply understand the interaction of applications, systems, and networks.

Unfortunately, historically, we have seen this division in terms of relative salaries and career paths for developers vs. operators. Operators are often treated like technicians; they’re often smart learn-on-the-job people without college degrees, but consequently, companies pay accordingly and may limit advancement paths accordingly, especially if the company has fairly strict requirements that managers have degrees. Good developers often emerge from college with minimum competitive salary requirements well above what entry-level operations people make.

Silicon Valley has a good collection of people with both development and operations skills because so many start-ups are founded by developers, who chug along, learning operations as they go, because initially they can’t afford to hire dedicated operations people; moreover, for more than a decade, hypergrowth Internet start-ups have deliberately run devops organizations, making the skillset both pervasive and well-paid. This is decidedly not the case in most corporate IT, where development and operations tend to have a hard wall between them, and people tend to be hired for heavyweight app development skills, more so than capabilities in systems programming and agile-friendly languages.

Here are my reasons why developers make better operators, or perhaps more accurately, an argument for why a blended skillset is best. (And here I stress that this is personal opinion, and not a Gartner research position; for official research, check out the work of my esteemed colleagues Cameron Haight and Sean Kenefick. However, as someone who was formally educated as a developer but chose to go into operations, and who has personally run large devops organizations, this is a strongly-held set of opinions for me. I think that to be a truly great architect-level ops person, you also have to have a developer’s skillset, and I believe it’s important for mid-level people as well, which I recognize as a controversial opinion.)

Understanding the interaction of applications and infrastructure leads to better design of both. This is an architect’s role, and good devops understand how to look at applications and advise developers how they can make them more operations-friendly, and know how to match applications and infrastructure to one another. Availability, performance, and security are all vital to understand. (Even in the cloud, sharp folks have to ask questions about what the underlying infrastructure is. It’s not truly abstract; your performance will be impacted if you have a serious mismatch between the underlying infrastructure implementation and your application code.)

Understanding app/infrastructure interactions leads to more effective troubleshooting. An operator who can CTrace, DTrace, sniff networks, read application code, and know how that application code translates to stuff happening on infrastructure, is in a much better position to understand what’s going wrong and how to fix it.

Being able to easily write code means less wasted time doing things manually. If you can code nearly as quickly as you can do something by hand, you will simply write it as a script and never have to think about doing it by hand again — and neither will anyone else, if you have a good method for script-sharing. It also means that forever more, this thing will be done in a consistent way. It is the only way to truly operate at scale.

Scripting everything, even one-time tasks, leads to more reliable operations. When working in complex production environments (and arguably, in any environment), it is useful to write out every single thing you are going to do, and your action plan for any stage you deem dangerous. It might not be a formal “script”, but a command-by-command plan can be reviewed by other people, and it means that you are not making spot decisions under the time pressure of a maintenance window. Even non-developers can do this, of course, but most don’t.

Converging testing and monitoring leads to better operations. This is a place where development and operations truly cross. Deep monitoring converges into full test coverage, and given the push towards test-driven development in agile methodologies, it makes sense to make production monitoring part of the whole testing lifecycle.

Development disciplines also apply to operations. The systems development lifecycle is applicable to operations projects, and brings discipline to what can otherwise be unstructured work; agile methodologies can be adapted to operations. Writing the tests first, keeping things in a revision control system, and considering systems holistically rather than as a collection of accumulated button-presses are all valuable.

The move to cloud computing is a move towards software-defined everything. Software-defined infrastructure and programmatic access to everything inherently advantages developers, and it turns the hardware-wrangling skills into things for low-level technicians and vendor field engineering organizations. Operations becomes software-oriented operations, one way or another, and development skills are necessary to make this transition.

It is unfortunately easier to teach operations to developers, than it is to teach operators to code. This is especially true when you want people to write good and maintainable code — not the kind of script in which people call out to shell commands for the utilities that they need rather than using the appropriate system libraries, or splattering out the kind of program structure that makes re-use nigh-impossible, or writing goop that nobody else can read. This is not just about the crude programming skills necessary to bang out scripts; this is about truly understanding the deep voodoo of the interactions between applications, systems, and networks, and being able to neatly encapsulate those things in code when need be.

Devops is a great place for impatient developers who want to see their code turn into results right now; code for operations often comes in a shorter form, producing tangible results in a faster timeframe than the longer lifecycles of app development (even in agile environments). As an industry, we don’t do enough to help people learn the zen of it, and to provide career paths for it. It’s an operations specialty unto itself.

Devops is not just a world in which developers carry pagers; in fact, it doesn’t necessarily mean that application developers carry pagers at all. It’s not even just about a closer collaboration between development and operations. Instead, it can mean that other than your most junior button-pushers and your most intense hardware specialists, your operations people understand both applications and infrastructure, and that they write code as necessary to highly automate the production environment. (This is more the philosophy of Google’s Site Reliability Engineering, than it is Amazon-style devops, in other words.)

But for traditional corporate IT, it means hiring a different sort of person, and paying differently, and altering the career path.

A little while back, I had lunch with a client from a mid-market business, who spent it telling me how efficient their IT had become, especially after virtualization — trying to persuade me that they didn’t need the cloud, now or ever. Curious, I asked how long it typically took to get a virtualized server up and running. The answer turned out to be three days — because while they could push a button and get a VM, all storage and networking still had to be manually provisioned. That led me to probe about a lot of other operations aspects, all of which were done by hand. The client eventually protested, “If I were to do the things you’re talking about, I’d have to hire programmers into operations!” I agreed that this was precisely what was needed, and the client protested that they couldn’t do that, because programmers are expensive, and besides, what would they do with their existing do-everything-by-hand staff? (I’ve heard similar sentiments many times over from clients, but this one really sticks in my mind because of how shocked this particular client was by the notion.)

Yes. Developers are expensive, and for many organizations, it may seem alien to use them in an operations capacity. But there’s a cost to a lack of agility and to unnecessarily performing tasks manually.

But lessons learned in the hot seat of hypergrowth Silicon Valley start-ups take forever to trickle into traditional corporate IT. (Even in Silicon Valley, there’s often a gulf between the way product operations works, and the way traditional IT within that same company works.)

Brent Stineman (@BrentCodeMonkey) described Cloud Computing as a Financial Institution in an 11/21/2011 post:

Ok, I’m sure the title of this post is throwing you a bit, but please bear with me.

I’ve been travelling the last few weeks. I’m driving, as it’s only about 250 miles away. And driving, unlike flying, leaves you with a lot of time to think. This train of thought dawned on me yesterday as I was cruising down I-35 south, somewhere between Clear Lake and Ames in northern Iowa.

The conventional thinking

So the common theme you’ll often hear when doing “intro to cloud” presentations is comparing it to a utility. I’ve done this myself countless times. The story goes that as a business owner, what you need is, say, light. Not electricity, not a generator, not a power grid.

Your utility company manages the infrastructure to deliver power to your door, so when you turn on the switch, you get the light you wanted. You don’t have to worry about how it gets there. Best of all, you only pay for what you use. No need to spend hundreds of millions on building a power plant and the infrastructure to deliver that power to your office.

I really don’t have an issue with this comparison. It’s easy to relate to and does a good job of illustrating the model. However, what I realized as I was driving is that this is a one-way example. I’m paying a fee and getting something in return. But there’s no real trust issue, except in my provider’s ability to deliver the service.

Why a financial institution?

Ok, push aside the “occupy” movement and the recent distrust of bankers. A bank is where you put assets for safekeeping. You get various services from the provider (ATM, checking account) that allow you to leverage those assets. You also pay various charges for using some of those services, while others are free. You have some insurance in place (FDIC) to help protect your assets.

Lastly, and perhaps most importantly, you need to have a level of trust in the institution. You’re putting your valuables in their care. You either need to trust that they are doing what you have asked them to do, or that they have enough transparency that you can know exactly what’s being done.

What really strikes me about this example is that you have some skin in the game and need a certain level of trust in your provider. Just as you trust the airline to get you to your destination on time, you expect your financial provider to protect your assets and deliver the services they have promised.

It’s the same for a cloud provider. You put your data and intellectual property in their hands, or you keep it under your mattress. Their vault is likely more secure than your box spring, but the box spring is what you are familiar with and trust. It’s up to you to find a cloud provider you can trust. You need to ask the proper questions to get to that point. Ask for a tour of the vault; audit their books, so to speak. Do your homework.

Do you trust me?

So the point of all this isn’t to get a group of hippies camped out on the doorstep of the nearest datacenter. Instead, the idea here is to make you think about what you’re afraid of, especially when you’re considering a public cloud provider. Cloud computing is about trusting your provider, but also about taking responsibility for doing your homework. If you’re going to trust someone with your most precious possessions, be sure you know exactly how far you can trust them.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

• Yung Chow (@yungchou) described System Center Virtual Machine Manager (VMM) 2012 as Private Cloud Enabler (4/5): Working with Service Templates in an 11/25/2011 post:

A key feature delivered by VMM 2012 is the ability to deploy an application based on a service template, which enables push-button deployment of a target application infrastructure. VMM 2012 signifies a direct focus, embedded in product design, on the entire picture of a delivered business function, rather than the fragmented views presented by individual VMs. VMM 2012 takes a major step forward and declares the arrival of IT as a Service by providing out-of-box private cloud readiness for enterprise IT.

In this fourth article of the 5-part series on VMM 2012, I further explain the significance of employing a service template.

Service Template

This is, in my view, the pinnacle of VMM 2012’s deliverables. The idea is apparent: to deliver business functions with timeliness and cost-effectiveness by standardizing and streamlining the application deployment process. Here I focus on the design and architectural concepts of a service template, to help a reader better understand how VMM 2012 accelerates private cloud deployment with consistency, repeatability, and predictability. The steps and operations to deploy and administer a private cloud with a service template will be covered in upcoming screencasts as supplements to this blog post series.

The term “service” in VMM 2012 means a set of VMs to be configured, deployed, and managed as one entity. A service template defines the contents, operations, dependencies, and intelligence needed to do a push-button deployment of an application architecture, with the target application configured and running according to specifications. This enables a service owner to manage not only individual VMs, but the business function in its entirety, delivered as a (VMM 2012) service. Here, for instance, a service template developed for StockTrader is imported and displayed in the Service Template Designer of VMM 2012 as shown below, revealing:

StockTrader service template

  • Four VM templates to construct a four-tier application architecture with web front-end, business service layer, operations, and SQL back-end
  • Each VM template specifying how to compose and instantiate an intended VM operationally including specifications of virtualization platforms, RAM, OS, product key, domain, local admin credentials, networking, application virtualization packages to be installed/configured, and the associated timing, procedures, and operations

Application Deployment as Service via IaaS

Since VMM 2008, Microsoft has offered private clouds deployed with IaaS. Namely, a self-service user can be authorized to provision infrastructure, i.e. deploy VMs to an authorized environment, on demand. While VMs can be deployed on demand, what is running within those VMs, when, and how is not, however, a concern of VMM 2008.

VMM 2012, on the other hand, is designed with service deployment and private cloud readiness in mind. In addition to deploying VMs, VMM 2012 can now deploy services. As mentioned earlier, a service in VMM 2012 is an application delivered by a set of VMs which are configured, deployed, and maintained as one entity. More specifically, VMM 2012 can deploy on demand not only VMs (i.e. IaaS), but VMs collectively configured as an instance of a defined application architecture hosting a target application, by employing a service template. As VMs are deployed, an instance of the defined application architecture is automatically built, and the target application hosted in the architecture becomes functional and available. VMM 2012 therefore converts an application deployment into a service via IaaS.

The Rise of Service Architect

Importantly, a service template capturing all the relevant details of an application deployment is an integral part of application development and production operations. A seasoned team member (whom I call a Service Architect) with a solid understanding of application development and specifications, private cloud fabric construction, and production IT operations is an ideal candidate for authoring service templates.

Context and Operation Models

In a private cloud setting, an enterprise cloud admin constructs fabric, validates service templates, and acts as a service provider. Service owners are those self-service users authorized to deploy services to intended private clouds using the VMM 2012 admin console, and they act as consumers. So while enterprise IT provides fabric, a service owner deploys services based on authorized service templates to authorized private clouds on demand. Notice a self-service user can access authorized templates, instances of VMs and services, private clouds, etc. A self-service user nevertheless does not see the private cloud fabric in the VMM 2012 admin console.

Setting the context at the application level, a service owner deploys a service based on an authorized service template to an authorized private cloud on demand and acts as a service provider. At the same time, an authorized end user can access the application’s URL and acts as a consumer. Here an end user does not, and need not, know how the application is deployed. As far as a user is concerned, the experience is similar to accessing a web application.

Standardization, Consistency, Repeatability, and Predictability

What is specified in a service template are static definitions and pre-defined criteria (the what, how, and when, plus the inter-dependency and event-driven information) to automate the deployment process of an application. To be able to deploy an application multiple times with the same service template in the same environment, there is also instance information, like machine names, which is generated, validated, and locked down by VMM 2012 right before deployment, when clicking Configure Deployment from the Service Template Designer. The separation of instance information from static variables, together with the intelligence of inter-dependencies and event-driven operations among an application’s VMs included in a service template, offers an opportunity to make a deployment standardized, consistent, repeatable, and predictable.

A service template is, in essence, a cookie cutter which can reproduce content according to preconfigured specifications, in this case the shape of a cookie. A service based on a VMM 2012 service template can be deployed multiple times on the same fabric, i.e. the same infrastructure, by validating the instance information of each deployment. This is similar to using the same cookie cutter with various cookie doughs. The instances are different; the specifications are identical.

Upgrading [the] Service

Deployment with a service template can greatly simplify the upgrade scenario of an already deployed application. First, the production application infrastructure of StockTrader can be realistically and relatively easily mimicked in a test environment by configuring and deploying the same service template to a private cloud for development, such as an isolated logical network on a 192.168.x.x subnet defined in the network pool of the private cloud fabric in VMM 2012. A new release of the application, 2011.11.24 for example, based on a service template (Release 2011.11) can then be developed and tested in this development environment.

Once the development process is concluded and Release 2011.11.24 is ready to deploy, a cloud administrator can import the service template and associated resources, as applicable, of Release 2011.11.24 into the private cloud fabric, and then validate the resource mapping so that all references in Release 2011.11.24 point to those in production. Upgrading an application from Release 2011.11 to Release 2011.11.24 at this point is simply a matter of applying the service template of Release 2011.11.24 to the production instance. It is as straightforward as, from the VMM 2012 admin console, right-clicking a running service and setting a target template, as shown below.

Upgrading application with service template

This process is wizard-driven. Depending on how an application’s upgrade domain is architected, current application state, and the natures of changes, application outage may or may not be necessary. The following highlights a process of replacing a service template from Release 2011.11 to Release 2011.11.24 on an instance of StockTrader service.

Replacing service template

There are different ways a new service template can be applied to a running instance. For an authorized self-service user, the above process can also be easily carried out with App Controller, which I will detail in Part 5 of this blog post series.

Retiring Service

In VMM 2012, deleting a running service stops and erases all the associated VM instances. Nevertheless, the resources referenced in the service template remain in place. To delete a service template, all configured deployments and deployed instances must be deleted first.

Archiving Services

As private clouds are built and services are deployed, releases of services can be documented by archiving individual service templates with their associated resources. Notice this is not about backing up the instances of an application and their data, but about keeping records of all the resources, configurations, operations, and intelligence needed to successfully deploy the application.

Closing Thoughts

With the maturity of virtualization and the introduction of cloud computing, IT is changing at an increasing speed and the industry is transforming as we speak. VMM 2012 essentially substantiates the arrival of IT as a Service in enterprise IT. While the challenges are overwhelming, the opportunities are at the same time exciting and extraordinary. IT professionals should not and must not hesitate any longer, but get started on private cloud, and get started now. Be crystal clear on what cloud is and why virtualization alone is far from cloud. Do master Hyper-V and learn VMM 2012. And join the conversation and take a leading role in building private cloud. With the strength of a new day dawning and the beautiful sun, there is no doubt in my mind that a clear cloudy day is in sight for all of us.

[To Part 1, 2, 3, 4, 5]

• Daniele Muscetta (@dani3l3) commented on my Configuring the Systems Center Monitoring Pack for Windows Azure Applications on SCOM 2012 Beta on 11/24/2011:

Daniele Muscetta has left a new comment on your post "Configuring the Systems Center Monitoring Pack for...":

Roger - I am not sure if you saw my reply to your comment on

There are multiple reasons for not enabling those rules by default - the most important of which is that the customer gets billed for IO and storage transactions, and we want them to consciously enable this being aware of it - not find out later when they get a higher bill.

Another reason is that it is not guaranteed that the customer actually has those counters enabled and collected in table storage.

Another couple of points: the UI behaviour (not showing "enabled" after having enabled it through overrides) is by design in OM - only the default value is shown in the grid; the overridden value is shown in the "overrides" view and in the "overrides" report, among other places.

One last note is about the diagram/architecture you took from our blog - that diagram depicts the APM feature in OpsMgr 2012 - at the time of writing that is just for on-premise monitoring of IIS machines that have the OpsMgr agent installed.

The current Azure MP works differently, not deploying an agent on the Azure VMs, but using the WAD tables as well as the Azure management API.

• Kevin Remde (@KevinRemde) continued his series with Cloud on Your Terms Part 24 of 30: Hybrid for Public Cloud on 11/24/2011:

All the free eval software to build a test cloud can be found here. Back in part 2 of this 30-part series, John Weston introduced the topic of the Hybrid Cloud: the combination of Public, Private, and/or traditional IT into a system that works for you. Today in Part 24, he continues the discussion with a look at some examples of using public cloud services that extend from and augment your internal infrastructure.

Check out his blog post HERE.

And if you have missed any of the series posts, check out my summary post for links to all of the articles available so far at

• Derrick Harris (@derrickharris) asked Will cultural pushback kill private clouds? in an 11/21/2011 post to Giga Om’s Structure blog:

For years, we’ve heard that cultural hurdles within large enterprises are a big problem for the adoption of public cloud computing. That’s why, conventional wisdom suggests, private clouds will be far more popular within those types of companies, because it lets them get the business benefits without having to bend on issues such as security. But if a recent blog post by Gartner analyst Lydia Leong is telling, it looks as if cultural hurdles are also impeding private cloud adoption — at least when it comes to doing it right.

The gist of Leong’s post is that companies want to build internal IT operations like those at Amazon (s amzn), Rackspace (s rax) and Facebook, but they don’t want to make the organizational changes necessary to actually do it. Cloud providers and others operating at webscale don’t typically have top-heavy management structures, but, rather, have flatter IT organizations where managers are replaced by team leaders who also play critical roles in (gasp!) writing code and maintaining systems. And that, says Leong, doesn’t sit well with some of her clients.

To a degree, their concerns are fair enough. Some of the folks responsible for deciding to build a private-cloud infrastructure and then funding it don’t want to lose their jobs. Cloud computing naturally ushers in new job descriptions such as devops, and it almost certainly means that decision makers won’t sit in ivory towers free from IDEs and servers. Others, Leong notes, don’t want to lose the ability to attract talent by eliminating the clear management hierarchy that promises promotions up the ladder.

But what do these attitudes mean for the growing number of private cloud deployments? That companies are deploying cloud computing software but not using it to its full potential because they won’t make the necessary organizational changes? It’s troubling if that’s the case, because it means private cloud computing looks more like a wasted opportunity than an IT revolution. It looks a lot more like Virtualization 2.0 than Amazon (s amzn) Web Services. Provisioning resources might be a smoother process, and maybe application development is easier, but IT departments themselves are still inflexible and inefficient.

What everyone loves about companies such as Google and Amazon, though, is that they’re able to deliver quality services while themselves running highly agile, automated and innovative operations. My colleague Stacey Higginbotham recently wrote about Google’s approach of granting much power to IT generalists who can work across divisions to ensure the efforts of various teams are aligned and will result in a better system.

Even if they accept that cloud computing is an application-centric operations model, as James Urquhart recently explained, and implement fairly strict standards for new application development, as Leong suggests, IT managers eventually have to learn to get out of the way. CEOs are expecting a lot more from IT departments, and having layer upon layer of bureaucracy isn’t going to help them deliver.

Image courtesy of Flickr user Cushing Memorial Library and Archives, Texas A&M.

Alan Le Marquand posted Announcing the Release of the ‘System Center Virtual Machine Manager 2012” course to the Microsoft Virtual Academy on 11/22/2011:

The Microsoft Virtual Academy team would like to announce the release of the System Center Virtual Machine Manager 2012 course.

This course focuses on how using System Center Virtual Machine Manager 2012 can help your business build, deploy, and maintain a private cloud.

After completing these three modules you will have learnt about the Virtual Machine Manager 2012 product, and the features it utilizes to build and support the virtualized and physical resources that are part of your private cloud infrastructure.

The course will also expose you to cloud computing at the business level, from the perspective of Virtual Machine Manager 2012, and show how to extend that knowledge to the technical level.

The last module will show you how to manage applications within your private cloud using Virtual Machine Manager 2012 to deploy, update, and manage them.

After completing this course, try out what you’ve learnt by downloading VMM from the TechNet Evaluation Center. Download Microsoft System Center 2012 Pre-Release Products

Sign up and take this course!

<Return to section navigation list>

Cloud Security and Governance

• Mike Small asserted “Adopting cloud computing can save money, but good governance is essential to manage the risk” in an introduction to his Good Governance Controls Risk in the Cloud article of 11/25/2011 for the Cloud Security Journal:

Cloud computing provides organizations with an alternative way of obtaining IT services and offers many benefits including increased flexibility and cost reduction. However, many organizations are reluctant to adopt the cloud because of concerns over information security and a loss of control over the way IT service is delivered. These fears have been exacerbated by recent events reported in the press including outages by Amazon[1] and the three day loss of BlackBerry services from RIM[2]. What approach can an organization take to ensure that the benefits of the cloud outweigh the risks?

To understand the risks involved it's important to understand that the cloud is not a single model. The cloud covers a wide spectrum of services and delivery models ranging from in-house virtual servers to software accessed by multiple organizations over the Internet. A clear explanation of this range is described by NIST[3]. This document describes the five essential characteristics that define the cloud, the three service models, and the four deployment models. The risks of the cloud depend on both the service model and the delivery model adopted.

When moving to the cloud it's important that the business requirements for the move are understood and that the cloud service selected meets these needs. Taking a good governance approach, such as COBIT[4], is the key to safely embracing the cloud and the benefits that it provides:

  • Identify the business requirements for the cloud-based solution. This seems obvious but many organizations are using the cloud without knowing it.
  • Determine the cloud service needs based on the business requirements. Some applications will be more business-critical than others.
  • Develop scenarios to understand the security threats and weaknesses. Use these to determine the response to these risks in terms of requirements for controls and questions to be answered. Considering these risks may lead to the conclusion that the risk of moving to the cloud is too high.
  • Understand what the accreditations and audit reports offered by the cloud provider mean and actually cover.

The risks associated with cloud computing depend on both the service model and the delivery model adopted. The common security concerns are ensuring the confidentiality, integrity, and availability of the services and data delivered through the cloud environment. Particular issues that need attention when adopting the cloud include ensuring compliance and avoiding lock-in.

To manage risk, an organization moving to the cloud should make a risk assessment using one of the several methodologies available. An independent risk assessment of cloud computing[5] was undertaken by ENISA (the European Network Information and Security Agency). This identifies 35 risks that are classified according to their probability and their impact. When the risks important to your organization have been identified, these lead to the questions you need to ask the cloud provider. I propose the following top 10 questions:

  1. How is legal and regulatory compliance assured?
  2. Where will my data be geographically located?
  3. How securely is my data handled?
  4. How is service availability assured?
  5. How is identity and access managed?
  6. How is my data protected against privileged user abuse?
  7. What levels of isolation are supported?
  8. How are the systems protected against Internet threats?
  9. How are activities monitored and logged?
  10. What certification does your service have?

The cloud service provider may respond to these questions with reports from auditors and certifications. It's important to understand what these reports cover.

There are two common types of report that are offered: SOC 1 and SOC 2. SOC stands for "Service Organization Controls" and the reports are based on the auditing standard SSAE[6] no. 16 (Statement on Standards for Attestation Engagements which became effective in June 2011):

  • SOC 1 report: Provides the auditor's opinion on whether or not the description of the service is fair (it does exist) and whether or not the controls are appropriate. Appropriate controls could achieve their objectives if they were operating effectively.
  • SOC 2 report: It's similar to a type 1 report but includes further information on whether or not the controls were actually working effectively. It includes how the auditor tested the effectiveness of the controls and the results of these tests.

Note that these reports are based on the statement of the service that the organization claims to provide - they are not an assessment against best practice.

A service organization may also provide an auditor's report based on established criteria such as Trust Services (including WebTrust and SysTrust). The Trust Services Principles and Criteria[7] were established by the AICPA and cover security, availability, processing integrity, privacy, and confidentiality. A typical auditor's report[8] on a cloud service will simply refer to which of the five areas are covered by the report and it's up to the customer to evaluate whether the Trust Principle and criteria are appropriate for their needs. In addition ISACA have recently published a set of IT Control Objectives for Cloud Computing[9].

Cloud computing can reduce costs by providing alternative models for the procurement and delivery of IT services. However, organizations need to consider the risks involved in a move to the cloud. The information security risks associated with cloud computing depend on both the service model and the delivery model adopted. The common security concerns of a cloud computing approach are maintaining the confidentiality, integrity, and availability of data. The best approach to managing risk in the cloud is one of good IT governance covering both cloud and internal IT services.


  1. PCWorld

Mike Small is a Fellow of the BCS, a Senior Analyst at KuppingerCole and a member of the London Chapter of ISACA.

<Return to section navigation list>

Cloud Computing Events

• Michael Collier (@MichaelCollier) reported about a new Windows Azure Developer Webcast Series to start 12/7/2011 at 10:00 AM PST:

Starting next month, I will be holding a series of monthly webcasts focusing on the Windows Azure developer experience. This will be a four-part series starting on Wednesday, December 7th and running to Wednesday, March 7th. Each session will run from 1pm – 2pm ET. Click the link in the session titles below to sign up.

Windows Azure Storage – Wednesday, December 7, 2011
There are three core tenets of Windows Azure storage – queues, tables, and blobs. One of the great features of Windows Azure is that we can consume the storage services from any platform that communicates via a REST interface. Libraries are available which make working with the native REST interface a more natural experience, but not all features are available in every library. In this webcast we will take a look at Windows Azure storage from a developer’s point of view. We’ll look at using the native REST interface, as well as the .NET storage client library, for working with Windows Azure storage. [Emphasis Michael’s.]

Windows Azure Service Management – Wednesday, January 11, 2012
With the Windows Azure Service Management API we can control nearly all aspects of a Windows Azure service. This allows us to easily manage areas such as deployments, service upgrades, and subscription management. Additionally, with the PowerShell cmdlets we gain even greater power over the management of a Windows Azure service. In this webcast, we will take a look at managing a Windows Azure service from a developer’s point of view. We’ll look at using both the Windows Azure Service Management API and PowerShell cmdlets to exercise control over our Windows Azure services.

Windows Azure Role Communication – Wednesday, February 8, 2012
Understanding how Windows Azure roles can communicate with each other is a key aspect to developing enterprise caliber applications on the Windows Azure platform. A Windows Azure role communicates with the outside world via input endpoints, and with other roles via internal endpoints. In this webcast, we will take a look at Windows Azure role communication from a developer’s point of view. We’ll see how to configure input endpoints for exposing a service to the outside world, as well as how to use internal endpoints for role-to-role communication.

Windows Azure AppFabric – Wednesday, March 7, 2012
Windows Azure AppFabric is a set of next-generation building-block services for creating connected services. With services such as Caching, Service Bus (relay, queues, and topics), and Access Control Services (ACS), developers can focus more on building great solutions and less on the plumbing services necessary to do so. In this webcast, we will take a look at the many features offered as part of Windows Azure AppFabric. We’ll see just how easy it can be to add scalable caching with AppFabric Caching, create robust connected solutions with the Service Bus, and secure applications with ACS.

Eric Nelson (@ericnel) reported Six Weeks of Windows Azure will start on … Monday 23rd January 2012 on 11/24/2011:

Yesterday we locked in the dates for Six Weeks of Windows Azure, which is fantastic news.

We also confirmed that our core set of webinars and online surgeries will happen on the Monday and Wednesday of each week – with additional activities happening on other days which we will announce nearer the start.

Which gives us the following key dates:

  • Week 1
    • Monday 23rd Jan: Kick off + Webinar <- try not to miss this one!
    • Wednesday 25th: Surgery + Webinars
  • Week 2
    • Monday 30th Jan: Weekly check-in + Webinar
    • Wednesday 1st Feb: Surgery + Webinars
  • Week 3
    • Monday 6th Feb: Weekly check-in + Webinar
    • Wednesday 8th Feb: Surgery + Webinars
  • Week 4
    • Monday 13th Feb: Weekly check-in + Webinar
    • Wednesday 15th Feb: Surgery + Webinars
  • Week 5
    • Monday 20th Feb: Weekly check-in + Webinar
    • Wednesday 22nd Feb: Surgery + Webinars
  • Week 6
    • Monday 27th Feb: Weekly check-in + Webinar
    • Wednesday 29th Feb: Surgery + Webinars
  • And finally
    • Monday 5th March: “After show party” :-)

We will be sharing more details on the topics very soon.

We also have locked in the timings which we will normally follow (subject to speaker commitments)

  • Monday
    • Weekly check-in 12pm to 1:30pm
    • Technical webinar 3pm to 4:30pm
    • Bonus evening webinar 6pm to 7pm (not all weeks)
  • Wednesday
    • Technical Webinar 10pm to 11pm
    • Surgery 12pm to 1pm
    • Commercial Webinar 3pm to 4pm
    • Bonus evening webinar 6pm to 7pm (not all weeks)

All the above will be going into the “official” document – but we wanted to share early.

Couple of obvious questions:

  • Do you need to attend every session? No. We will be recording all sessions and some simply will not be relevant for your specific application/scenario.
  • Will we be sharing these as meeting requests? Yes. Just not quite yet.
  • Is there always a clear split between Technical and Commercial webinars? No. Sometimes the content will span both.

Don’t forget to sign up today at as it helps us scale the activities appropriately.

Robert Cathey reported CloudBeat 2011: Talking About What’s Next in the Cloud on 11/21/2011:

On November 30, Randy Bias [@randybias, pictured at right] is headed to Redwood City to talk about the future of cloud at VentureBeat’s CloudBeat 2011. Joining him are folks like Allan Leinwand of Zynga, Amit Singh of Google, Thomas Kelly of Best Buy, Adam Selipsky of AWS, and Lew Moorman of Rackspace. There are others. It’s a solid program.

Randy will participate in two sessions. First up, Randy will moderate a fireside chat with Adrian Cockcroft of Netflix. The conversation will focus on how Netflix is expanding its engagement with AWS globally. Adrian will offer his thoughts regarding whether or not anyone can close the lead AWS has opened up in public cloud. He’ll also give his candid opinion of OpenStack.

Next up, Lew Tucker of Cisco will join Randy in a conversation about how the opportunities in cloud look very different today than they did two years ago. And if you take a thoughtful look at how the industry has evolved, several useful patterns begin to reveal themselves. Understanding this mosaic can lead to better deployment strategies, better business models and smarter cloud startup investments. They’ll talk big data, web-scale cloud, open systems and more.

If you’re in the Bay Area, come join us. There’s a discount if you follow this link.

Jeff Barr (@jeffbarr) announced on 11/21/2011 a Webinar: Getting Started on Microsoft Windows With AWS scheduled for 12/8/2011 at 9:00 to 10:00 AM:

We're going to be running a free webinar on December 8th at 9 AM PST.

Designed for business and technical decision makers with an interest in migrating Windows Server and Windows Server applications to the AWS cloud, the webinar will address the following topics:

  • Support for the Microsoft .NET platform.
  • Ways for you to take advantage of your existing Microsoft investments and skill set to run Windows Server applications such as Microsoft SharePoint Server, Microsoft Exchange Server, and SQL Server on the AWS Cloud without incurring any additional Microsoft licensing costs.
  • The AWS pay-as-you-go model that allows you to purchase Windows Server computing resources on an hourly basis.
  • An architecture that allows you to quickly and easily scale your Windows Server applications on the AWS Cloud through pre-configured Amazon Machine Images (AMIs) to start running fully supported Windows Server virtual machine instances in minutes.

Please register now and join us on December 8th, at 09:00 am – 10:00 am PST.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Todd Hoff reported a Paper: Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS in an 11/23/2011 post to his High Scalability blog:

Teams from Princeton and CMU are working together to solve one of the most difficult problems in the repertoire: scalable geo-distributed data stores. Major companies like Google and Facebook have been working on multiple datacenter database functionality for some time, but there's still a general lack of available systems that work for complex data scenarios.

The ideas in this paper--Don’t Settle for Eventual: Scalable Causal Consistency for Wide-Area Storage with COPS--are different. It's not another eventually consistent system, or a traditional transaction-oriented system, or a replication-based system, or a system that punts on the issue. It's something new, a causally consistent system that achieves the ALPS system properties. Move over CAP, NoSQL, etc.; we have another acronym: ALPS - Available (operations always complete successfully), Low-latency (operations complete quickly, in single-digit milliseconds), Partition-tolerant (operates through a partition), and Scalable (just add more servers to add more capacity). ALPS is the recipe for an always-on data store: operations always complete, they are always successful, and they are always fast.

ALPS sounds great, but we want more, we want consistency guarantees as well. Fast and wrong is no way to go through life. Most current systems achieve low latency by avoiding synchronous operation across the WAN, directing reads and writes to a local datacenter, and then using eventual consistency to maintain order. Causal consistency promises another way.

Intrigued? Let's learn more about causal consistency and how it might help us build bigger and better distributed systems.

In a talk on COPS, Wyatt Lloyd defines consistency as a restriction on the ordering and timing of operations. We want the strongest consistency guarantees possible because it makes the programmer's life a lot easier. Strong consistency defines a total ordering on all operations, and what you write is what you read, regardless of location. This is called linearizability, and it is impossible to achieve together with the ALPS properties. Remember your CAP. Sequential consistency still guarantees a total ordering on operations, but the order is not required to match real time. Sequential consistency and low latency are impossible to achieve together on a WAN. Eventual consistency can be delivered by an ALPS system (Cassandra, for example), but it is a weak property that doesn't give any ordering guarantees at all.

There's a general idea that if you want an always-on scalable datastore, you have to sacrifice consistency and settle for eventual consistency. There's another form of consistency, causal consistency, that sits between eventual consistency and the stronger forms of consistency. Causal consistency gives a partial order over operations, so clients see operations in an order governed by causality. Theoretically, causal consistency is a stronger consistency guarantee that is also scalable and maintains the ALPS properties. It's a sweet spot, providing the ALPS features along with strongish consistency guarantees.

A key property of causal consistency to keep in mind is that it guarantees you will be working on consistent values, but it doesn't guarantee you will be working on the most recent values. That's a property of strong consistency. So under a network partition your operations won't match those in other datacenters until they are made eventually consistent.

The driver for causal consistency is low latency. They want operations to always be fast. Other approaches emphasize avoiding write-write conflicts via transactions and latency isn't as important. You'll never do a slow 2PC across a WAN.

Here's a money quote describing causal consistency in more detail:

The central approach in COPS involves explicitly tracking and enforcing causal dependencies between updates. For instance, if you upload a photo and add it to an album, the album update “depends on” the photo addition, and should only be applied after it. Writes in COPS are accepted by a local datacenter that then propagates them to other, remote, datacenters. These remote datacenters check that all dependencies are satisfied by querying other nodes in the cluster before applying writes. This approach differs from traditional causal systems that exchange update logs between replicas. In particular, the COPS approach avoids any single serialization point to collect, transmit, merge, or apply logs. Avoiding single serialization points is a major factor in enabling COPS to scale to large cluster sizes.

Even though COPS provides a causal+ consistent data store, it is impossible for clients to obtain a consistent view of multiple keys by issuing single-key gets. (This problem exists even in linearizable systems.) In COPS-GT, we enable clients to issue get transactions that return a set of consistent values. Our get transaction algorithm is non-blocking, lock-free, and takes at most two rounds of inter-datacenter queries. It does, however, require COPS-GT to store and propagate more metadata than normal COPS. Our evaluation shows that COPS completes operations in less than a millisecond, provides throughput similar to previous systems when using one server per cluster, and scales well as we increase the number of servers in each cluster. It also shows that COPS-GT provides similar latency, throughput, and scaling to COPS for common workloads.

Michael Freedman gives an example involving three operations on a social networking site:

  1. Remove boss from friends group.
  2. Post looking for a new job.
  3. A friend reads the post.

Causality is given by the following rules:

  1. Thread-of-execution rule. Operations done by the same thread of execution are ordered by causality: the second operation happens after the first.
  2. Gets-From rule. Operations that read a value are ordered after the write operation that produced it.
  3. Transitive closure rule. If one operation is before a second, and the second is before a third, then the first is before the third; here, removing the boss is before the friend's read of the post.

The result is that operations happen in the order you expect. The post for a new job happens after the boss is removed from the friends group. In another example, a photo upload followed by adding a reference of the photo album will always happen in that order so you don't have to worry about dangling references. This makes the job of the programmer a lot easier, which is why we like transactional systems so much: the expected happens.
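As a rough illustration, the three rules can be encoded as a tiny happens-before computation. This is a toy sketch, not COPS code; the operation names and client IDs below are invented for the example.

```python
# Toy happens-before check for the three causality rules: thread order,
# gets-from, and transitive closure. Purely illustrative.
from itertools import product

# Operations from the social-network example: name -> (client, sequence number)
ops = {
    "remove_boss": ("alice", 1),
    "post_job":    ("alice", 2),
    "read_post":   ("bob",   1),
}

# Rule 1: thread-of-execution order (same client, increasing sequence number)
edges = {(a, b)
         for a, b in product(ops, ops)
         if ops[a][0] == ops[b][0] and ops[a][1] < ops[b][1]}

# Rule 2: gets-from -- Bob's read returns the value Alice's post wrote
edges.add(("post_job", "read_post"))

# Rule 3: transitive closure
changed = True
while changed:
    changed = False
    for (a, b), (c, d) in product(list(edges), list(edges)):
        if b == c and (a, d) not in edges:
            edges.add((a, d))
            changed = True

# Removing the boss is causally before the friend's read, so no causally
# consistent datacenter may show the post without the friend-list change.
assert ("remove_boss", "read_post") in edges
```

The point of the sketch is that causality yields only a partial order: the two Alice operations and the read are ordered, but two writes from unrelated threads would have no edge between them at all.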

How does causality handle conflicting updates? Say two writes in different datacenters happen to the same key at the same time. This is unordered by causality because the operations do not occur in the same thread of execution. What we want is for all datacenters to agree on a value. By default the rule is that the last writer wins. You can also supply application-specific handlers so that all datacenters converge on the same value. This sounds a lot like eventual consistency to me. They call this combination of causal consistency and convergent conflict handling causal+ consistency.
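One way to picture the default last-writer-wins handler is to tag each write with a (timestamp, datacenter) version and compare versions lexicographically, so every datacenter deterministically picks the same winner. The function and names below are illustrative assumptions, not the COPS API.

```python
# Minimal last-writer-wins merge for concurrent, causally unordered writes.
# Versions are (lamport_time, datacenter_id) tuples; lexicographic comparison
# breaks timestamp ties, so all datacenters converge on the same value.

def lww_merge(a, b):
    """Return the winning (version, value) pair of two conflicting writes."""
    return a if a[0] > b[0] else b

# Two datacenters write the same key at the same logical time:
write_east = ((12, "us-east"), "v1")
write_west = ((12, "us-west"), "v2")

# The merge is commutative, so applying it in either order picks one winner.
assert lww_merge(write_east, write_west) == lww_merge(write_west, write_east)
```

The datacenter ID in the version tuple is what makes the tie-break deterministic; an application-specific handler would replace `lww_merge` with its own convergent function.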

Their innovation is to create a causal+ consistent system that is also scalable. Previous systems used log shipping, which serializes at a centralized point. Instead of logs they use dependency meta data to capture causality. They replace the single serialization point with distributed verification. They don't expose the value of a replicated put operation until they confirm all the causally previous operations have shown up in the datacenter.

COPS is their system implementing causal+ consistency:

  • Organized as a geo-replicated system with a cluster of nodes in each datacenter.
  • Each cluster stores all data.
  • Scale-out architecture with many nodes inside each cluster.
  • Consistent hashing to partition keys across nodes.
  • Assumes partitions do not occur within a datacenter, so strongly consistent replication is used within a datacenter. They use chain replication, though they could use Paxos.
  • Between datacenters where latency is high, data is replicated in a causal+ consistent manner.
  • They use a thick client library. It tracks causality and mediates local cluster access.
  • Values are written immediately to the local datacenter and immediately queued for asynchronous replication.
  • Clients maintain dependency information, which includes a version number uniquely identifying a value. This information is inserted into a dependency list; any future operations are causally after the current operation. The system uses this information to resolve dependencies.
    • Why not just use vector clocks? Because they've targeted very large distributed systems where the vector clock state would get out of control.
  • Get transactions give a consistent view of multiple keys with low latency. They have only read transactions; write conflicts are handled by last-writer-wins or application-specific reconciliation.
  • They've found their system gives high throughput and near linear scalability while providing causal+ consistency.
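The replication rule described above — don't expose a replicated put until its causal dependencies are visible — can be sketched as follows. All class and field names here are invented for illustration; a real COPS node would do this with dependency checks against partitioned storage, not an in-memory dict.

```python
# Hypothetical sketch of COPS-style causal replication: a datacenter buffers
# a replicated put until every version it depends on is already visible
# locally, so readers never observe an effect before its cause.

class Datacenter:
    def __init__(self):
        self.store = {}       # key -> value, the locally visible data
        self.visible = set()  # versions already exposed in this datacenter
        self.pending = []     # (key, value, version, deps) awaiting deps

    def replicate_put(self, key, value, version, deps):
        self.pending.append((key, value, version, set(deps)))
        self._drain()

    def _drain(self):
        # Repeatedly apply any pending write whose dependencies are all
        # visible; applying one write may unblock others.
        progressed = True
        while progressed:
            progressed = False
            for item in list(self.pending):
                key, value, version, deps = item
                if deps <= self.visible:
                    self.store[key] = value
                    self.visible.add(version)
                    self.pending.remove(item)
                    progressed = True

dc = Datacenter()
# The album update (v2) depends on the photo upload (v1). It arrives first,
# but stays buffered until v1 shows up -- no dangling reference is visible.
dc.replicate_put("album", ["photo.jpg"], version="v2", deps={"v1"})
assert "album" not in dc.store          # still waiting on its dependency
dc.replicate_put("photo.jpg", b"...", version="v1", deps=set())
assert dc.store["album"] == ["photo.jpg"]
```

This is the photo-album example from above: because the album write lists the photo's version as a dependency, the order of arrival between datacenters stops mattering.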

The details of how all this works quickly spiral out of control. Best to watch the video and read the paper for the details. The questioning at the end of the video is contentious and entertaining. I'd like to see that part go on longer, as everyone seems to have their own take on what works best. It's pretty clear from the questions that there's no one best way to build these systems. You pick what's important to you and create a solution that gives you that. You can't have it all, it seems, but what you can have is the question.

Will we see a lot of COPS clones spring up immediately, like we saw when the Dynamo paper was published? I don't know. Eventually consistent systems like Cassandra get you most of what COPS has without the risk, though COPS has a lot of good features. Causal ordering is a beautiful property for a programmer, as are the ALPS properties in general. The emphasis on low latency is a winner too. Thick client libraries are a minus because they reduce adoption rates; complex client libraries are very difficult to port to other languages. Not being able to deal with write-write conflicts in an equally programmer-friendly manner while maintaining scalability for large systems is unfortunate, but that's just part of the reality of a CAP world. You could argue that using a strongly consistent model in each datacenter limits the potential size of your system. But all together it's interesting and different. Low latency and geo-distribution, combined with a more intuitive consistency model, could be big drivers of adoption among developers, and it's developers that matter in these sorts of things.

Related Articles

Jeff Barr (@jeffbarr) described New - AWS Elastic Load Balancing Inside of a Virtual Private Cloud in an 11/21/2011 post:

The popular AWS Elastic Load Balancing feature is now available within the Virtual Private Cloud (VPC). Features such as SSL termination, health checks, sticky sessions and CloudWatch monitoring can be configured from the AWS Management Console, the command line, or through the Elastic Load Balancing APIs.

When you provision an Elastic Load Balancer for your VPC, you can assign security groups to it. You can place ELBs into VPC subnets, and you can also use subnet ACLs (Access Control Lists). The EC2 instances that you register with the Elastic Load Balancer do not need to have public IP addresses. The combination of the Virtual Private Cloud, subnets, security groups, and access control lists gives you precise, fine-grained control over access to your Load Balancers and to the EC2 instances behind them, and allows you to create a private load balancer.

Here's how it all fits together:

When you create an Elastic Load Balancer inside of a VPC, you must designate one or more subnets to attach. The ELB can run in one subnet per Availability Zone; we recommend (as shown in the diagram above) that you set aside a subnet specifically for each ELB. To allow room (IP address space) for each ELB to grow as part of the intrinsic ELB scaling process, the subnet must contain at least 100 IP addresses (a /25 or larger).
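As a rough sketch, the setup described above might look like the following with today's AWS CLI (the original post predates it; at the time you would have used the `elb-*` command-line tools). The subnet, security group, and instance IDs are placeholders.

```shell
# Hypothetical sketch; subnet-aaaa, sg-bbbb and i-cccc are placeholder IDs.
# Create an internal (private) load balancer in a VPC subnet, with a
# security group attached:
aws elb create-load-balancer \
    --load-balancer-name my-vpc-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --subnets subnet-aaaa \
    --security-groups sg-bbbb \
    --scheme internal

# Register back-end instances; they do not need public IP addresses.
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-vpc-lb \
    --instances i-cccc
```

The `--scheme internal` flag is what makes this a private load balancer reachable only from inside the VPC; omitting it creates an Internet-facing one.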

We think you will be able to put this new feature to use right away. We are also working on additional enhancements, including IPv6 support for ELB in VPC and the ability to use Elastic Load Balancers for internal application tiers.

<Return to section navigation list>

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, Project “Daytona”, Codename “Social Analytics”, Windows Phone 7 Mango, Windows Phone 7.5, WP 7.5, System Center Operations Manager 2012, SCOM 2012, PHP, PHPAzure, Hadoop, MapReduce