Sunday, February 05, 2012

Windows Azure and Cloud Computing Posts for 2/3/2012+

A compendium of Windows Azure, Service Bus, Big Data, Hadoop, HPC, Connect, SQL Azure Database, and other cloud-computing articles.


• Updated 2/5/2012 (pre-Super Bowl) with new articles marked ••.

• Updated 2/4/2012 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue and Hadoop Services

Haydn Shaughnessy (@haydn1701) listed me as #7 in his Who Are The Top 20 Influencers in Big Data? article of 2/3/2012 for Forbes Magazine:

A month back I used Traackr to look at influencers in mobile (look out for an upcoming piece on Kred if you are interested in the community of influencers around social business). One notable feature of Traackr is that the influencer list changes on almost a weekly basis. Because it is driven by a set of keywords, writers come in and out of the list, or go up and down, depending on what they are writing about and how often. …

This month I tracked the influencers in Big Data. And again this list will change regularly.

What’s interesting about this for me is that Big Data is an area I know comparatively little about. Many of the names on the list are new to me.

That means, I think, that I have a research tool here that can quickly allow me to become more knowledgeable in new areas.

This is vitally important in the age of knowledge flow – we need to be able to take snapshots, and we need to frame a subject area in an instant.

Though Traackr is marketed as an influence tool, I think it’s a good research tool in general. On the mobile project, I’ve now set up my own monitoring service (more in the next post) which keeps track of what influencers are saying about Nokia (NOK), Samsung, and the iPhone (AAPL).

Below are the results of the Big Data survey. Influencers are ranked by Reach, Resonance and Relevance. The terms are explained below. Here are the top 5 according to Traackr.

UPDATE – In fact Kristen Nicole is now with SiliconAngle.

These are being driven by keywords like “data analytics”, “big data economics”, “big data and transport”, “big data and health”, etc. Bear in mind that the score mixes frequency with being on-topic, as well as how they impact an audience, and it measures across blogs, Facebook and Twitter.

Bob Gourley of CTO Vision comes out on top with David Smith of Revolution Analytics, and Klint Finley of SiliconAngle.

As I said earlier this is not an area I much cover, so it’s great to find new names and to know they have relevance for the topics I’m interested in.

Numbers 6 – 10, bearing in mind these can change weekly and so include some surprises. I hadn’t thought of Dion Hinchcliffe as a Big Data writer, but I can see from Traackr that he has been getting traction for his end-of-year post on Emerging Tech Trends.

Read more: Page 2 Next Page ». Haydn notes on Page 2:

… The fact that some experts have different lives in different communities is interesting, and an issue I’ll look at again next week. The sites Wikibon (part of the SiliconAngle group) (David Vellante) and Oakleaf Systems (Roger Jennings) are new to me. …

Ricardo Villalobos (@ricvilla) posted a link to his Supporting mobile device applications using RESTful services running on Windows Azure MSDN article on 2/3/2012:

I recently wrote an article with my friend and colleague Bruno Terkaly for MSDN magazine on using Windows Azure to support mobile device applications (including iOS, Android, and Windows Phone) that require back-end services. The solution that we propose is based on a REST style architecture, a very popular approach that provides flexibility and speed. I hope that you enjoy it and find it useful.

Here is the link:

The Microsoft Codename “Trust Services” Team revised its Learn More about Microsoft Codename "Trust Services" TechNet Wiki article of 1/26/2012 on 2/2/2012:

Protect your data in the Cloud

Trust and Security have been hot topics for the public cloud since its inception. Corporate IT departments and CIOs have repeatedly expressed concerns over the loss of control associated with moving various levels of sensitive data to a public cloud. At the same time, the overall benefits of a public cloud are tremendous and continue to gain momentum. This means that many organizations have a pressing need to migrate to public cloud infrastructure in spite of ongoing concerns about security.

Encryption is one of the fundamental tools required for protecting data in the cloud. However, encrypting the data in the cloud, and then storing the encryption keys in the cloud in order to be able to access the data, provides only a very minor improvement over simply storing the data in the cloud in the first place.
Trust Services provides a unique combination of end-to-end application-level encryption and the power of the cloud to roam encryption keys in a totally secure way. It enables data-driven applications to work with sensitive data, securely stored in different cloud-based stores, while continuing to maintain control over access to this data.
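The custody split that Trust Services is addressing can be sketched in a few lines. The sketch below is purely illustrative and is not the Trust Services API: a one-time pad stands in for real application-level encryption, and the point is only that the cloud store ever sees ciphertext while the key stays with the client.

```python
import secrets

# Illustrative only: a one-time pad stands in for real application-level
# encryption. The point is the custody split, not the cipher: the cloud
# store only ever sees ciphertext; the key stays with the client.

def encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Return (key, ciphertext); the key never leaves the client."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# Client side: 'blob' is what gets uploaded to the cloud store;
# without 'key' it is just noise.
key, blob = encrypt(b"patient-record-42")
assert decrypt(key, blob) == b"patient-record-42"
```

Storing the key in the same cloud as the blob would collapse the scheme back to the "very minor improvement" the article warns about.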


Learn how to use "Trust Services" by following these samples.
To download samples, visit Trust Services Samples Download page .

Trust Services Copy SQL Database Sample

An application that copies one or more SQL Server tables from one database to another, while encrypting and decrypting columns containing sensitive data.

Encrypt Files Using PowerShell

A sample script that encrypts all files recursively in a directory using the Trust Services SDK PowerShell snap-in.

Encrypt Windows Azure Blob Store

A sample application that demonstrates an end-to-end scenario, from defining data policies to encryption of the data using the Trust Services SDK.

Yet another stealth release of a Codename “Whatever” CTP. I’ve requested an invitation, which the team promises to deliver within two business days. Stay tuned for more details next week.

• Update 2/4/2012: I received my invitation code the next day (2/4/2012) and have begun to test CTP1.

Note: The Trust Services Lab SDK and Shell (x64) of 2/2/2012 downloaded on 2/3/2012 failed on my Windows 7 Professional test machine with the following error:


I subsequently received my invitation code and reported the problem, which Christian Weyer also encountered, to the “Trust Services” team.

IBM offered on 2/3/2012 a download of Enterprise Hadoop Solutions: A Forrester Research, Inc. Report dated 2/2/2012 (detailed registration required):

The Forrester Wave™: Enterprise Hadoop Solutions, Q1 2012

Forrester Research, Inc. views Hadoop as "the open source heart of Big Data", regarding it as "the nucleus of the next-generation EDW [enterprise data warehouse] in the cloud," and has published its first-ever Forrester Wave™: Enterprise Hadoop Solutions report (February 2, 2012).

This report evaluates 13 vendors against 15 criteria with IBM being placed in the Leaders category.

The IEEE Computer Society published Simon S. Y. Shim’s Guest Editor's Introduction: The CAP Theorem's Growing Impact (PDF) in the February 2012 issue of Computer magazine. From the abstract and first two paragraphs:

The computing community has been developing innovative solutions to meet the formidable challenge of handling the exponential growth in data generated by the Web.

For a long time, commercial relational database management systems with ACID (atomicity, consistency, isolation, and durability) properties from vendors such as Oracle, IBM, Sybase, and Microsoft have been the default home for computational data. However, with the phenomenal growth of Web-generated data—which Vint Cerf referred to as an “information avalanche”—this conventional way of storing data has encountered a formidable challenge.

Because the traditional way of handling petabytes of data with a relational database in the back end does not scale well, managing this phenomenon, referred to as the big data challenge, has become problematic. Highly scalable database solutions are needed to meet the demand of handling this data explosion. With the abundance of inexpensive commodity servers, public and private clouds can store and process big data effectively by scaling horizontally into distributed systems. …

Unfortunately, Eric Brewer’s Pushing the CAP: Strategies for Consistency and Availability article for the same issue is behind a paywall and only this abstract is freely available:

The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three. The featured Web extra is a podcast from Software Engineering Radio, in which the host interviews Dwight Merriman about the emerging NoSQL movement, the three types of nonrelational data stores, Brewer's CAP theorem, and much more.
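Brewer's trade-off is easy to see in a toy model. The sketch below is entirely hypothetical (a two-replica in-memory store, not any real database): during a network partition, a "CP" store refuses writes to stay consistent, while an "AP" store accepts them and lets replicas diverge until the partition heals.

```python
class Replica:
    """One node's local copy of the data."""
    def __init__(self, name):
        self.name = name
        self.data = {}

class TinyStore:
    """Toy two-replica store illustrating the CAP trade-off."""
    def __init__(self, mode):
        self.mode = mode                      # "CP" or "AP"
        self.a, self.b = Replica("a"), Replica("b")
        self.partitioned = False

    def write(self, replica, key, value):
        if self.partitioned:
            if self.mode == "CP":
                # Consistent but unavailable: refuse rather than diverge.
                raise RuntimeError("unavailable: cannot reach peer")
            replica.data[key] = value         # AP: accept, diverge for now
        else:
            self.a.data[key] = value          # no partition: replicate
            self.b.data[key] = value

ap = TinyStore("AP")
ap.partitioned = True
ap.write(ap.a, "x", 1)                        # accepted on one side only
assert ap.a.data != ap.b.data                 # replicas have diverged

cp = TinyStore("CP")
cp.partitioned = True
try:
    cp.write(cp.a, "x", 1)
except RuntimeError:
    pass                                      # consistent but unavailable
```

Brewer's point in the paywalled article is precisely that real systems need not pick one branch globally: they can detect partitions and recover afterwards.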

You can listen to the podcast here (free).

<Return to section navigation list>

SQL Azure Database, Federations and Reporting

Wade Wegner (@WadeWegner) and Steve Marx (@smarx) produced Episode 69 - SQL Azure Federations with George Huey and released it on 2/3/2012:

Join Wade and Steve each week as they cover the Windows Azure platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Wade is joined by George Huey—Principal Architect Evangelist for Microsoft—to discuss how to scale-out with SQL Azure Federations. George is the author and creator of the SQL Azure Migration Wizard and the SQL Azure Federation Data Migration Wizard.

In the news:

My Loading Big Data into Federated SQL Azure Tables with the SQL Azure Federation Data Migration Wizard v1.2 and Adding Missing Rows to a SQL Azure Federation with the SQL Azure Federation Data Migration Wizard v1 posts of 1/17 and 1/18/2012 described using George’s SQL Azure Federation Data Migration Wizard to load about 5 GB data into a six-member federation.

Cihan Biyikoglu (@cihangirb) described SQL Azure Migration Wizard: Fantastic tool for moving data between SQL Azure Federations and SQL Server and scale out single SQL Azure DBs in a 2/3/2012 post:

If you have worked with federations, I am sure you already know about online tools like the SQL Azure Management Portal that give you the ability to orchestrate your federations with repartitioning operations or to resize member MAXSIZE and edition.

In episode #69 of Cloud Cover, George and Wade cover moving schema and data between SQL Server and SQL Azure databases with Federations! So you can move SQL Server databases to a SQL Azure Federation, move single scale-up SQL Azure databases to SQL Azure Federations, or simply transfer data between SQL Azure Federations. All of this is possible with the well-known community tool: SQL Azure Migration Wizard. What is great is that the code is available on CodePlex, and the tool generates scripts you can run from the command line. You don’t get the great retry logic or the parallelism that SQL Azure Migration Wizard provides when you do that, however…

There is one more piece of good news in all this; George does not cover it in his talk, but you can also use SQL Azure Migration Wizard to perform the missing MERGE repartitioning operation manually. An article on that topic is coming soon. You simply export a member, then DROP the existing member that has been exported, then import the data back in. Fantastic!
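The manual MERGE workflow Cihan describes (export a member, DROP it, import its rows into a surviving member) can be sketched abstractly. Everything below is hypothetical: an in-memory dictionary stands in for the federation, with key-range tuples for members; the real work is done by the Migration Wizard against SQL Azure.

```python
# Hypothetical in-memory model of a federation: each member owns a key
# range and holds its rows. Merging member (mid, high) into (low, mid)
# mirrors the manual workflow: export, DROP, then import into the survivor.

def merge_members(federation, low, mid, high):
    a_rows = federation.pop((low, mid))
    b_rows = federation.pop((mid, high))       # "export" then "DROP"
    federation[(low, high)] = a_rows + b_rows  # "import" into the survivor
    return federation

fed = {(0, 100): [{"id": 5}], (100, 200): [{"id": 150}]}
merge_members(fed, 0, 100, 200)
assert list(fed) == [(0, 200)]
assert len(fed[(0, 200)]) == 2
```

The real operation, of course, also has to worry about retries, concurrent writes, and the federation distribution key, which is exactly what the Wizard's retry logic is for.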

Well, here is the episode; watch it for yourself: Cloud Cover Episode #69

My Adding Missing Rows to a SQL Azure Federation with the SQL Azure Federation Data Migration Wizard v1 post of 1/18/2012 explains the use of MERGE with George’s Wizard.

<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

• Anupama Mandya posted a 15-minute WCF Data Services and OData for Oracle Database video clip to the Oracle Learning Library on 2/2/2012:

This tutorial covers developing WCF Data Services and Open Data Protocol (OData) applications for the Oracle Database using Visual Studio.

The video is part of Oracle’s .NET and Visual Studio 2010 series.

Alex James (@ADJames) described CQRS with OData and Actions? in a 2/3/2012 post:

I love the Actions feature in OData – which is hardly surprising given I was one of its designers.

Here’s the main reason why I love it: Actions allow you to move from a CRUD architecture style, where you query and modify data using the same model, to an architectural style where you have a clear separation between the model used to query and the model used to update. To me this feels a lot like Greg Young’s baby CQRS, or Command Query Responsibility Segregation.

I’ll admit I’m taking some liberties here because these two models are actually ‘merged’ into a single metadata document ($metadata) that describes them both, and you can share types between these two models… however this feels insignificant because the key benefits remain.

Why would you want to move from a CRUD style application to a CQRS style one?

Let’s look at a simple scenario, imagine you have Products that look like this:

public class Product
{
    public int ID { get; set; }
    public string Name { get; set; }
    public decimal Cost { get; set; }
    public decimal Price { get; set; }
}

And imagine you want to Discount Marmite (a product in your catalog) by 15%. Today using the CRUD style, the default in OData before Actions, there is only one option: you PUT a new version of the Marmite resource with the new Price to the URL that represents Marmite, i.e. something like this:

PUT ~/Products(15) HTTP/1.1
Content-Type: application/json

{
  // abbreviated for readability
  "ID": 15,
  "Name": "Marmite",
  "Cost": 3.50,
  "Price": 4.25 // ($5 – 15%)
}

Notice to support this you have to allow PUT for Products. And this has some real issues:

  • People can now make changes that we don’t necessarily want to allow, i.e. modifying the Name & Cost or changing the Price too much.
    • Basically “Update Product” is NOT the same as “Discount Product”.
  • When a change comes through we don’t actually know it is a Discount. It just looks like an attempt to update a Product.
  • If you need information that is not part of Product to perform a Discount (perhaps a justification), there is nowhere to put that information.

More generally the CRUD model is painful because:

  • If you want to update lots of resources simultaneously (imagine, for example, that you want to discount every product in a particular category), you first have to retrieve every product in that category, and then you have to do a PUT for each of them. This of course introduces a lot of unnecessary latency and introduces consistency challenges (it is hard to maintain a transaction boundary across requests, and the longer the ‘transaction’ lasts, the more likely a concurrency check will fail).
  • If you want to update something you have to allow it to be read somewhere.

Back to our scenario: it would be much better to disable PUT completely, create a Discount action, and advertise its availability in the Marmite resource (to keep your system as Hypermedia-driven as possible):

{
  "__metadata": {
    // abbreviated for simplicity
    "Actions": {
      "#MyActions.Discount": [{ "title": "Discount Marmite", "target": "Products(15)/Discount" }]
    }
  },
  "ID": 15,
  "Name": "Marmite",
  "Cost": 3.50,
  "Price": 5.00
}

The name of the Action (i.e. #MyActions.Discount) is an ‘anchor’ into the metadata document that can be found at ~/$metadata that says you need to provide a percentage.

POST ~/Products(15)/Discount HTTP/1.1
Content-Type: application/json

{ "percentage": 15 }

This is much better. Notice this doesn’t allow me to modify the Cost or the Name, and indeed can easily be validated to make sure the percentage is within an acceptable range, and it is semantically much clearer what is happening.
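A server-side handler for such an action might look like the sketch below. The function name and the MAX_DISCOUNT rule are assumptions, not part of Alex's post: the point is that the command payload carries only the parameters the action needs, so Name and Cost are simply not expressible, and the percentage can be range-checked before any state changes.

```python
# Hypothetical handler for the Discount action. The payload carries only
# the command's parameters, so Name/Cost cannot be touched, and the
# percentage is validated before any state changes.

MAX_DISCOUNT = 50  # assumed business rule, purely for illustration

def handle_discount(product: dict, payload: dict) -> dict:
    pct = payload.get("percentage")
    if not isinstance(pct, (int, float)) or not 0 < pct <= MAX_DISCOUNT:
        raise ValueError("percentage out of range")
    product["Price"] = round(product["Price"] * (1 - pct / 100), 2)
    return product

marmite = {"ID": 15, "Name": "Marmite", "Cost": 3.50, "Price": 5.00}
assert handle_discount(marmite, {"percentage": 15})["Price"] == 4.25
```

Compare this with the PUT version, where the server can only diff the incoming Product against the stored one and guess at intent.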

In fact by moving from a CRUD style architecture to one inspired by CQRS but based on actions you can:

  • Give targeted update capabilities that:
    • Allow only certain parts of the ‘Read’ model to be modified.
    • Allow things that are not even in the ‘Read’ model to be modified or provided if needed.
  • Selectively give users permissions to only the ‘Actions’ or Commands they need.
  • Log every Action and replay them at a later date to rebuild your data (i.e. Event Sourcing).
  • Capture what is requested (i.e. Discount the product) and respond immediately before the change has actually been made, safe in the knowledge you will eventually process the request and achieve “Eventual Consistency”.
  • Capture more information about what is happening (i.e. User X discounted Marmite by 15% is much better than User X updated Marmite).
  • Create Actions that manipulate a lot of entities simultaneously (i.e. POST ~/Categories(‘Yeast Spreads’)/Products/Discount…)

Of course simply separating the read/write models in OData doesn’t give you all of these advantages immediately, but at least it creates a foundation that can scale to support things like Event Sourcing or Eventual Consistency later if required.
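The Event Sourcing point above (log every Action, replay to rebuild) can be sketched minimally. This is a hypothetical toy, not OData machinery: current state is simply whatever you get by replaying the action log over the initial data.

```python
# Minimal sketch of Event Sourcing: every accepted Action is appended to
# a log, and current state is a replay of that log over the initial data.

log = []

def discount(products, product_id, percentage):
    log.append(("Discount", product_id, percentage))   # record the command
    products[product_id]["Price"] *= (1 - percentage / 100)

def rebuild(initial):
    """Replay the log from the initial state to reconstruct current state."""
    state = {k: dict(v) for k, v in initial.items()}
    for action, pid, pct in log:
        if action == "Discount":
            state[pid]["Price"] *= (1 - pct / 100)
    return state

initial = {15: {"Name": "Marmite", "Price": 5.00}}
products = {k: dict(v) for k, v in initial.items()}
discount(products, 15, 15)

# Replaying the log reproduces the live state exactly.
assert rebuild(initial) == products
```

Note what the log captures: "user discounted Marmite by 15%", not "user updated Marmite", which is exactly the semantic gain Alex describes.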

Some of you may be thinking you can achieve many of these goals by having a more granular model that makes things like “Discount” a resource, and that would be true. However for most people using OData that way of thinking is foreign and more importantly the EDM foundations of OData get in the way a little too. So for me Actions seems like the right approach in OData.

I love this.
But what do you think?

DJ Adams (@qmacro) described Making OData from SAP accessible [to the Sesame Data Browser] with the ICM's help in a 1/31/2012 post to the SAP Community Network blog:

I'm totally enamoured by the power and potential of SAP's NetWeaver Gateway, and all it has to offer with its REST-informed data-centric consumption model. One of the tools I've been looking at in exploring the services is the Sesame Data Browser, a Silverlight-based application that runs inside the browser or on the desktop, and lets you explore OData resources.

One of the challenges in getting the Data Browser to consume OData resources exposed by NetWeaver Gateway (get a trial version, available from the Gateway home page on SDN) was serving a couple of XML-based domain access directive files as described in "Making a Service Available Across Domain Boundaries" - namely clientaccesspolicy.xml and crossdomain.xml, both needing to be served from the root of the domain/port-based URL of the Gateway system. In other words, the NetWeaver stack needed to serve requests for these two resources:

http://<gateway-host>:<port>/clientaccesspolicy.xml
http://<gateway-host>:<port>/crossdomain.xml
Without these files, the Data Browser will show you this sort of error:

A SecurityException has been encountered while opening the connection.
Please try to open the connection with Sesame installed on the desktop.
If you are the owner of the OData feed, try to add a clientaccesspolicy.xml file on the server.
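For reference, a typical permissive clientaccesspolicy.xml, per the Silverlight cross-domain access documentation, looks like the following (this is the generic documented format, not the exact file used here; tighten the domain and resource paths for production):

```xml
<?xml version="1.0" encoding="utf-8"?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers="*">
        <domain uri="*"/>
      </allow-from>
      <grant-to>
        <resource path="/" include-subpaths="true"/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>
```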

So, how to make these two cross domain access files available, and specifically from the root? There have been some thoughts on this already, using a default service on the ICF's default host definition, or even dynamically loading the XML as a file into the server cache (see the ABAP program in this thread in the RIA dev forum).

But a conversation on Twitter last night about BSPs, raw ICF and even the ICM reminded me that the ICM is a powerful engine that is often overlooked and underloved. The ICM (Internet Communication Manager) is the collection of core HTTP/SMTP/plugin services that sits underneath the ICF and handles the actual HTTP traffic below the level of the ICF's ABAP layer. In the style of Apache handlers, there is a series of handlers that let the ICM deal with plenty of HTTP serving situations: Logging, Authentication, Server Cache, Administration, Modification, File Access, Redirect, as well as the "ABAP" handler we know as the ICF layer.

Could the humble ICM help with serving these two XML resources? Of course it could!

The File Access handler is what we recognise from the level 2 trace info in the dev_icm tracefile as HttpFileAccessHandler. You all read the verbose traces from the ICM with your morning coffee, right? Just kidding. Anyway, the File Access handler makes its features available to us in the form of the icm/HTTP/file_access_<xx> profile parameters. It allows us to point the ICM at a directory on the filesystem and have it serve files directly, if a URL is matched. Note that this File Access handler is invoked, and given a chance to respond, before we even get to the ABAP handler's ICF level.

With a couple of these file_access parameters, we can serve static clientaccesspolicy.xml and crossdomain.xml files straight from the filesystem, matched at root. Here's what I have in my /usr/sap/NPL/SYS/profile/NPL_DVEBMGS42_nplhost parameter file:

icm/HTTP/file_access_1 = PREFIX=/clientaccesspolicy.xml, DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=clientaccesspolicy.xml
icm/HTTP/file_access_2 = PREFIX=/crossdomain.xml, DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=crossdomain.xml

(I already have file_access_0 specifying something else not relevant here).

What are these parameters saying? Well the PREFIX specifies the relative URL to match, the DOCROOT specifies the directory that the ICM is to serve files from in response to requests matching the PREFIX, and DIRINDEX is a file to serve when the 'index' is requested. Usually the PREFIX is used to specify a directory, or a relative URL representing a 'container', so the DIRINDEX value is what's served when there's a request for exactly that container. The upshot is that the relevant file is served for the right relative resource. The files are in directory /usr/sap/NPL/DVEBMGS42/qmacro/.
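The dispatch behaviour these parameters produce can be modelled in a few lines. This is a hypothetical sketch of the matching logic only (the real handler is native code inside the ICM): each rule maps a PREFIX to a DOCROOT and DIRINDEX, and a matching request is answered from disk before it ever reaches the ABAP/ICF layer.

```python
# Toy model of the ICM File Access handler's dispatch. Each
# icm/HTTP/file_access_<n> parameter contributes a (PREFIX, DOCROOT,
# DIRINDEX) rule; a request matching a PREFIX is served from the
# filesystem and never climbs up to the ABAP handler. Paths are the
# ones from this post, used purely for illustration.

RULES = [
    ("/clientaccesspolicy.xml", "/usr/sap/NPL/DVEBMGS42/qmacro", "clientaccesspolicy.xml"),
    ("/crossdomain.xml",        "/usr/sap/NPL/DVEBMGS42/qmacro", "crossdomain.xml"),
]

def resolve(url: str):
    """Return the file a matching rule would serve, or None (fall through to ICF)."""
    for prefix, docroot, dirindex in RULES:
        if url == prefix or url.startswith(prefix + "/"):
            return f"{docroot}/{dirindex}"
    return None  # no match: the request continues up to the ABAP handler

assert resolve("/clientaccesspolicy.xml").endswith("clientaccesspolicy.xml")
assert resolve("/sap/opu/odata/anything") is None
```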

While we're at it, we might as well specify a similar File Access handler parameter to serve the favicon, not least because that will prevent those pesky warnings about not being able to serve requests for that resource, if you don't have one already:

icm/HTTP/file_access_3 = PREFIX=/favicon.ico, DOCROOT=$(DIR_INSTANCE)/qmacro, DIRINDEX=favicon.ico

The upshot of all this is that the static XML resources are served directly by the ICM, without the request even having to permeate up as far as the ABAP stack:

[Thr 139708142688000] Handler 5: HttpFileAccessHandler matches url: /clientaccesspolicy.xml
[Thr 139708142688000] HttpSubHandlerCall: Call Handler: HttpFileAccessHandler (1089830/1088cf0), task=TASK_REQUEST(1), header_len=407
[Thr 139708142688000] HttpFileAccessHandler: access file/dir: /usr/sap/NPL/DVEBMGS42/qmacro
[Thr 139708142688000] HttpFileAccessHandler: file /usr/sap/NPL/DVEBMGS42/qmacro/clientaccesspolicy.xml modified: -1/1326386676
[Thr 139708142688000] HttpSubHandlerItDeactivate: handler 4: HttpFileAccessHandler
[Thr 139708142688000] HttpSubHandlerClose: Call Handler: HttpFileAccessHandler (1089830/1088cf0), task=TASK_CLOSE(3)

and also that the browser-based Sesame Data Browser can access your Gateway OData resources successfully:



If you're interested in learning more about the Internet Communication Manager (ICM) and the Internet Communication Framework (ICF), you might be interested in my Omniversity of Manchester course:

Web Programming with SAP's Internet Communication Framework

Which is currently running in March (3rd and 4th) and May (9th and 10th) in Manchester.

• Johan Bollen and Huina Mao describe Twitter Mood as a Stock Market Predictor in an article for the IEEE’s Computer magazine’s October 2011 issue (missed when published). From the abstract:

It has often been said that stock markets are driven by "fear and greed" — that is, by psychological as well as financial factors. The tremendous volatility of stock markets across the globe in recent years underscores the need to better understand the role that emotions play in shaping stock prices and other economic indices. READ FULL ARTICLE (pdf) »

This article supplements Affect detection in tweets: The case for automatic affect detection by Scott Counts, Munmun de Choudhury and Michael Gamon of Microsoft Research and my Twitter Sentiment Analysis: A Brief Bibliography of 11/26/2011 and New Features Added to My Microsoft Codename “Social Analytics” WinForms Client Sample App of 11/21/2011.

<Return to section navigation list>

Windows Azure Access Control, Service Bus and Workflow

•• Wictor Wilén (@wictor) posted a Visual guide to Azure Access Controls Services authentication with SharePoint 2010 - part 1 on 2/1/2012:

A year and a half ago I posted the Visual guide to Windows Live ID authentication with SharePoint 2010 series, a post that got a tremendous amount of hits (and still does) and tons of comments (with new ones still coming in). It showed a quite cumbersome way to Live ID-enable your SharePoint 2010 Web Applications using the Microsoft Service Manager, MSM (which sometimes works and sometimes doesn't). Although it did work, it is not the best way to enable Live ID authentication on your SharePoint 2010 web site. The MSM required you to first test in their INT environment and get approval before putting it into production, and you had to follow a set of guidelines on how to use Live ID logos etc., not to mention all the manual configuration.

Microsoft has a service in its Windows Azure offering called Access Control Services. This is essentially an Identity Federation Provider living in Windows Azure. This IP allows you to federate not only Live ID authentication but also Google ID, Facebook ID, etc. In this post, and subsequent ones, I'll present a visual guide on how to configure SharePoint 2010 to use Windows Azure Access Control Services, ACS, to handle your authentication.

Configuring Azure ACS

First of all, let's get acquainted with Windows Azure Access Control Services. Before you start you need to have a Windows Azure subscription. Unfortunately this isn't free, but if you have an MSDN Subscription you can take advantage of the MSDN Azure benefits. Once you have your subscription, head on over to the Windows Azure Management Portal and sign in with your Live ID.

Windows Azure Portal

On the left-hand side is the navigation; select Service Bus, Access Control & Caching (this is the part that was/is called Azure AppFabric), #1 in the image below. This will load all the AppFabric Services, you will get a new navigation tree on the left-hand side, and the Ribbon menu will update. To create your Service Namespace, which is like a "container" for the AppFabric services, click on the New button in the Ribbon (#2).


This will bring up a dialog where you create your new namespace. First of all, select the services you want in this namespace; for this authentication sample we only need Access Control (#1). Secondly, specify a unique namespace for your Service Namespace (#2). After that, select an appropriate Region (#3) and, optionally, a Subscription if you have multiple ones. Once you're satisfied, click Create Namespace (#4) to start the creation.

New Service Namespace

The creation will take a couple of minutes, so now it's a good time to grab that coffee you've all been waiting for! As always when dealing with AuthN, coffee breaks are good; that's what my good ol' buddy Spence is always nagging about. Wait until the service namespace has the status of Active before proceeding further.


Once it has the Active status, select your newly created Service Namespace (#1) and choose Access Control Service in the Ribbon menu (#2). This will open the Access Control Services administration. Note: you will navigate away from the Azure management portal and reuse the same browser window.

Configure it...

The Access Control Service administration site contains a lot of configuration options. You will see a left-hand navigation where you can set up everything from Identity Providers (IPs) and Trusted Parties to custom certificates, and get details on how to integrate this with your applications. The first thing we will do here is add a couple of identity providers; click on Identity Providers in the menu.

Access Control Services Management Portal

Identity Providers

The Identity Providers menu option shows all the Identity Providers that are currently available for this ACS Service Namespace. By default you will only see Windows Live ID. While Windows Live ID might work (if you can live with all the GUID-based identities), it's very convenient to add other IPs here. Click on Add to add a new IP.

Identity Providers

I like to add the Google Identity Provider, since all the user identities from the Google IP will use the Google login id (e-mail). Select Google amongst the preconfigured IPs and then click Next.

Add IP

The next page gives you some customizations options for the Google IP. Change anything you want here and click Save to continue.

Login Page

This should take you back to the IP page and you should now see both Windows Live ID and Google listed there.

The Relying Party

Next thing to do is to add a Relying Party Application, that is our SharePoint Web Application. Choose Relying party applications in the left hand menu. You will have no RP's configured by default so click on Add to create a new one.

Relying Parties

You will now see a form, and it is here that things must be entered correctly, otherwise you will not get the AuthN to work with SharePoint 2010. First of all, give your RP a nice and easy name (#1). Second is the realm (#2); if you remember from the Live ID visual guide, the Realm is important. Use a URI instead of a URL; it's easier to remember and always works. In this case I chose uri:visualauthn. Then we also need to fill in the Return URL (#3). The return URL must point to http://server/_trust/default.aspx when dealing with SharePoint 2010 (of course, replace with your server name; localhost also works in a test environment).

RP Settings

The next thing to configure is the tokens. First of all, SAML 1.1 must be used as the Token Format (#1); SAML 2.0 is the default in ACS, so make sure to change this. Leave Token encryption policy set to None (#2). Then finally an important piece: the Token lifetime. By default this is set to 600 seconds, and you need to increase this value. The reason is that SharePoint 2010 has the expected token lifetime configured as 600 seconds, and when SharePoint validates the token, which happens after it's been issued by ACS, it will fall outside the lifetime. So you have two options here: lower the SharePoint lifetime or increase it in ACS. In this case I've done the latter and set it to 700 seconds.

Token Settings

The rest of the configuration is left intact. If you like you can uncheck Windows Live ID if you do not want Live ID users to sign in with this RP. Click Save when you're done.


You should now see your newly created Relying Party.

The RP

Some rules!

We now have two IPs connected to the RP. Each of the IPs has a set of outgoing claims to our RP, and we need to make sure that the claims received from the IPs are passed through to SharePoint as outgoing claims from the RP. This is done through Rule Groups. Select Rule groups in the left-hand menu. You will see a Rule Group called "Default Rule Group for Visual AuthN"; this group was automatically created for us when we created the RP. Now click on the rule group to create the actual rules.

Rule Groups

Note that there are no rules by default. To create the default rules, just click on Generate to create them.


First we need to select which IPs to generate the rules for; make sure both Live ID and Google (in this case) are selected, then click the Generate button.

Generate Rules

Now ACS will generate the default rules for all selected IP's. Click Save to complete the rules setup.

The Rules

I will in follow up posts on this one, show you how to fiddle a bit with these rules. But for now we're just using all the default settings.


The next thing to do is create a certificate that we will use for Token Signing. The Management Portal makes it very easy to create a self-signed certificate for testing and demo purposes. For production scenarios, either purchase an X.509 certificate or request one from your local Certification Authority (CA), for instance AD Certificate Services. Just make sure it's a certificate that can sign and encrypt the payload. Navigate to Certificates and Keys in the ACS Management Portal and click Add next to Token Signing certificates.

Certificates and Keys

First of all, make sure that the correct Relying Party is selected; it should be the one you just created. ACS allows you to have multiple RP's, so make an extra check.

Used for

If you want to create a self-signed certificate, take a look at the middle of the page. There is a small snippet that you can copy and paste to create your own certificate (if you have the MakeCert utility, which is part of the Windows SDK).


Copy and paste that into a command window or a PowerShell console. This will create your signing certificate and store it in the My store on the machine where you run the command.

MakeCert POSH

Now you need to export this certificate into two files (one with and one without the private key): one to upload to ACS and one to import into SharePoint 2010 later. You can export the certificates using the Certificates MMC snap-in (as in the previous visual Live ID guide) or use PowerShell, which will impress your colleagues more.

Export Cert POSH

The PowerShell I use to export the certificate to a password-protected Pfx file (with the key) and a Cer file (without the key) is the following:

$cert = @(dir cert: -Recurse | Where-Object { $_.subject -like "CN=visualauthn*" })[0]
$type = [System.Security.Cryptography.X509Certificates.X509ContentType]::Cert
$bytes = $cert.Export($type)
[System.IO.File]::WriteAllBytes("c:\visualauthn.cer", $bytes)
$type = [System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx
$pass = read-host "Password" -assecurestring
$bytes = $cert.Export($type, $pass)
[System.IO.File]::WriteAllBytes("c:\visualauthn.pfx", $bytes)

As you can see, I grab the certificate with the correct subject, use the .NET classes to export it, and save the bytes into files. Replace the exported filenames, and the subject on the first line, with your own when doing this for your service.

The Pfx file must be uploaded to ACS. You should still be on the Add Token Signing certificate page; choose to upload the Pfx certificate and enter the password you used when exporting it. Make sure that you choose to use this certificate as the primary certificate, and then click Save.

Upload cert

By now we're done with ACS. Let's head on over to configuring SharePoint.

Configuring SharePoint

Now it's time for the fun stuff - SharePoint. First of all, you need a Web Application that uses Claims Authentication. If your web app is in Classic mode, either create a new one or upgrade it from Classic to Claims.

Claims FTW

Trusted Root Authority

Now we need to make sure that the SharePoint farm trusts the certificate used by ACS to sign the tokens. This is done by uploading the other certificate file into SharePoint using PowerShell. The following code imports the .cer file:

# Load the SharePoint cmdlets ("asnp" is the alias for Add-PSSnapin)
Add-PSSnapin Microsoft.SharePoint.PowerShell
# Read the public-key (.cer) file and register it as a trusted root authority
$cert = Get-PfxCertificate "C:\visualauthn.cer"
New-SPTrustedRootAuthority -Certificate $cert -Name "Visual AuthN ACS"

Trusted Root Authority POSH

To verify that the certificate is imported as a trusted certificate in SharePoint, go to Central Administration > Security > Manage Trust. You should see the name of the trust there:

Ok, it's there

The Trusted Identity Provider

Next up is adding the ACS RP as a Trusted Token Issuer in SharePoint. Once again we'll do this using PowerShell. Here it is really important that you specify the exact realm entered when you created the RP in ACS (see line 1 in the POSH below). You also need the Sign In URL for your RP; modify line 2 below to match the URL for your Service Namespace. Next we define a claim mapping for the identity claim that we want to use, in this case the e-mail address. Finally we add the new trusted identity token issuer.

$realm = "uri:visualauthn"
$signinurl = ""  # the Sign In URL for your Service Namespace
$map1 = New-SPClaimTypeMapping `
    -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
    -IncomingClaimTypeDisplayName "Email" -SameAsIncoming
New-SPTrustedIdentityTokenIssuer -Name "Visual AuthN ACS" `
    -Description "ACS rocks!" `
    -Realm $realm `
    -ImportTrustCertificate $cert `
    -ClaimsMappings $map1 `
    -SignInUrl $signinurl `
    -IdentifierClaim $map1.InputClaimType

Magic POSH

Now it's time to modify our Web Application to use this ACS RP. Go to Central Administration and Web Application administration. Choose the Web Application you want to enable the RP for (remember that it has to be a Claims web app). Choose Authentication Providers from the Ribbon and select the correct Zone (normally Default). Then scroll down to Claims Authentication Types, check the Trusted Identity Provider checkbox, and select our own ACS Trusted Identity Provider. Once that's done, click Save.

Web App AuthN

One final thing before testing it all. I personally prefer to add a User Policy on the web application directly for one of the users that will log in through the ACS RP. You can of course log in using Windows AuthN and then set permissions inside the site if you prefer, but this is how I do it. Select the web application and click User Policy in the Ribbon. Click Add User, choose All Zones, enter the Google e-mail address of the user you will test with, and give the user Full Control on the web application. Make sure that you type the e-mail address correctly - in Claims mode SharePoint will by default validate anything that you type in (more on this in another post).

User Policy

With this policy in place all should be set to test drive it all.

Test it!

Now all we have to do is test it. Browse to the web application for which you added the Trusted Identity Provider. Once it loads, you will be presented with the default multiple-login page. The drop-down shows all the available AuthN providers for the web application. To use the ACS login, choose that provider in the list.


Once the provider is selected, you will be redirected to the ACS RP login page. In this case we see two possible providers to use - Live ID and Google. Click the Google button and you will be redirected, once again...

Login 2

...this time to the Google login page, which asks for your username and password. Enter the username (e-mail) that you used when creating the user policy for the web application and log in. In this case Google will ask for confirmation that you trust the ACS RP. Choose Allow and you'll be redirected back to the RP, which will seamlessly redirect you to SharePoint.

Login 3

...and voila! You're in! Take a look at the username in the upper right corner - it should be the e-mail address of the Google ID you used.



That was it - a visual guide to configuring federated authentication with Windows Azure Access Control Services and SharePoint 2010. It is that easy! Even though this article was quite lengthy, you can do it all in a couple of minutes (compare that to the way I previously showed)!

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

• AliceLaura updated on 2/3/2012 her License Link for Windows Azure and Windows HPC TechNet Wiki article of 1/31/2012:

This topic describes the steps to enable applications that are running on Windows Azure nodes to use a license server on an enterprise network. The Windows Azure nodes in this scenario are deployed as part of a Windows HPC Server 2008 R2 cluster, Service Pack 2 or later.

Note: These procedures are intended for proof of concept or evaluation purposes only. The connectivity enabled with these steps is not very stable. Additionally, connectivity is established by using a Beta version of Azure Connect.


A large percentage of HPC applications, MPI and otherwise, in use today are licensed, commercial applications which use a licensing server to enable users to share licenses within an Enterprise. This document describes a method of safely extending a license server's reach into a set of Azure compute nodes (Windows Azure compute instances that are joined to a Windows HPC cluster).

Most license servers work over IPv4. Azure Connect can be used to establish an encrypted IPv6 connection from an on-premises machine to the Azure Nodes. In the approach described in this topic, we create an unencrypted IPv4 tunnel inside the IPv6 tunnel for communication between the Azure Nodes and the license server (by way of a “junction box”). We set up a junction box on a standalone physical machine or on a virtual machine. Ideally, the junction box is not domain joined (to increase security by restricting Enterprise network access to boundary servers). We install the Azure Connector endpoint software on the junction box to establish the IPv6 connection to the Windows Azure nodes. We then create an IPv4 VPN server on the junction box. Each Azure Node will have the Azure Connect client (automatically with deployment from HPC) and the IPv4 client connection to the VPN server on the junction box (manual configuration step). Communications between the junction box and the license server occur over an IPv4 connection.

The following diagram illustrates the basic architecture of this solution:

Additional considerations:

  • The IPv4 tunnel is unencrypted. This is still secure because it exists “inside” the encrypted IPv6 tunnel created by Azure Connect. We do this to reduce compute overhead (encryption/decryption) on each end of the tunnel.
  • To reduce security attack surface, we strongly recommend the junction box is NOT domain joined, and the IPv4 VPN connects using non-domain credentials that are local to the junction box.
  • The setup steps done on the Azure compute nodes (initiating the IPv4 VPN connection) are not necessarily persistent across a node servicing by the Azure fabric controller. Thus, over the course of several days of a large deployment you’ll see nodes gradually losing connectivity with the license server because they’ve been serviced and the IPv4 connection was not re-started. To avoid this, you can define the IPv4 setup steps as a script and specify the script in the Node Template as part of the provisioning process (see Configure a Startup Script for Windows Azure Nodes ).
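For that last point, the re-dial steps can live in a small startup script referenced by the Node Template. The sketch below is a guess at what such a script might look like: the connection name, user name, and password are placeholders, and per the security note above the credentials should be local to the junction box, not domain credentials.

```shell
rem startvpn.cmd - hypothetical startup script for the Azure node template.
rem "LicenseVPN", "vpnuser", and "P@ssw0rd" are placeholders; substitute the
rem VPN connection and local (non-domain) junction-box credentials you set up.
rasdial "LicenseVPN" vpnuser P@ssw0rd
if errorlevel 1 echo %date% %time% VPN re-dial failed >> %TEMP%\startvpn.log
```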

The procedures in this topic assume that you have the following prerequisites:

  • An on-premises head node running Windows HPC Server 2008 R2 SP2 or later
  • Windows Azure nodes already deployed and operational (see Deploying Windows Azure Nodes in Windows HPC )
    Note: When you create the Azure Node template in the HPC Cluster Manager console, ensure that you enable Azure Connect. By enabling this option, the necessary Azure Connect setup is performed automatically at deployment.
  • Licensing server is operational
  • Junction box server is configured as follows:
    • Junction box server can be on a physical machine or on a VM on the head node, license server, or another server
    • Junction box is not joined to the domain
    • Junction box has reliable internet connectivity
    • Junction box has Internet Security and Acceleration (ISA) or other firewall client installed if required by your Enterprise network environment
    • IPv6 is enabled on the junction box NIC …

AliceLaura, who’s a technical writer with the Windows HPC team, continues with a step-by-step tutorial.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• The All-In-One Code Framework Team updated a Story Creator Sample Application for Windows Phone 7 (CSWP7AzureVideoStory) on 2/3/2012:


This sample solution is a story creator for Windows Phone 7. You can use it to create photo stories on your phone and send them to a Windows Azure service to encode them into videos. The Windows Azure service includes a REST service built with the WCF Web API, a simple HTML5 browser client that lets you see the encoded videos, and a native component that encodes the videos using WIC and Media Foundation.

While the individual pieces of technology are interesting on their own, the true power comes when the platforms are combined. We know most developers need to work with the combined platform rather than individual technologies, so we hope this sample solution will be helpful to you.

A VB.NET version is available as Story Creator Sample Application for Windows Phone 7 (VBWPAzureVideoStory) of 2/1/2012.

•• The All-In-One Code Framework Team posted a Change AppPool identity programmatically (CSAzureChangeAppPoolIdentity) project on 1/13/2012 (missed when published):


Most customers test their applications' connections to cloud entities like storage, SQL Azure, and AppFabric services via the compute emulator environment. If the customer's machine is behind a proxy that does not allow traffic from non-authenticated users, these connections fail. One workaround is to change the application identity. This cannot be done manually in the Azure scenario, since the app pool is created by Azure when it is actually running the service. Hence, I have written a sample that customers can use to change the AppPool identity programmatically.

•• Mike Wood (@mikewo) answered Why is Windows Azure Deployment Slow Compared to Heroku? in a 1/23/2012 post:

Earlier this month I got an email from a friend of mine who was trying out the tutorial on running Node.js on Windows Azure. He had gotten to the point where he was deploying the code to Azure using the PowerShell command and was surprised to see it take several minutes (about 10). Having used Heroku for some of his other work, he thought he might be doing something wrong, since deployments to Heroku take only a few seconds ("with typing," as he put it).

So, why does it take so long to deploy to Windows Azure when Heroku only takes seconds? It has to do with the differences between the two platforms. Both Heroku and Windows Azure are Platform as a Service (PaaS) providers, meaning they abstract a large portion of their infrastructure away from you so that all you need to focus on is the application code and data, but the two platforms have different levels of abstraction.

Let's start with Heroku. First off, I'm not a Heroku expert. What follows is my understanding of the Heroku platform and how it works, based on some conversations I've had, a presentation I attended by James Ward, and research from their site and other articles. If I get something wrong, please let me know in the comments. Also, cloud computing platforms advance and change at an amazing pace, so what's written in this post is accurate as of the time of its posting. If you are reading this six months after it's posted, the landscape may look a lot different.

The Heroku platform runs on Linux machines that have their resources partitioned by LXC, an open source resource-partitioning layer. This partitioning scheme breaks the machines up into what Heroku has termed "dynos". A dyno is a compute unit and the unit in which you are billed. You can think of dynos as "instances" of your application. They are completely isolated from one another to ensure that no data is shared between the different dynos that may be running on the same physical server. A dyno has a set of resources dedicated to it from the physical server, so it gets a slice of CPU cycles, the disk subsystem, network IO, etc.

When you deploy an application to Heroku it gets packaged into what is called a "slug" (you can upload pure code to Heroku and let the platform compile it for you, or you can upload compiled bits, but either way a slug gets created and this is the internal package that Heroku uses to deploy to dynos). When a slug is deployed to a dyno the Heroku Dyno Manifold (the controlling software of all deployments and resourcing in the Heroku environment) finds a physical server, or servers if you are deploying multiple instances, that has available dynos and it drops the slug into it/them. Basically, they are copying your slug (deployable code) onto a machine that is already running and calling a process to start up. This is obviously pretty quick because the time is mostly in copying the code and the process start up. This is why you see average deployment times in the seconds for Heroku.

Now let's look at how Windows Azure works. The deployment unit in Windows Azure is a package file, which contains your compiled code and/or files. This is basically a zip file, probably not too unlike the slug file used by Heroku. When you perform a deployment, the package file is uploaded and the Windows Azure Fabric Controllers (the equivalent of the Dyno Manifold in Heroku) inspect the service definition file included in the package to determine the compute resources the deployment needs. The Fabric Controllers then select a physical server, or servers if you are deploying multiple instances, with space for the deployment. A full virtual machine is brought up on the target physical servers and then your code is deployed to that VM. So, unlike the dynos in Heroku, the virtual machines in Windows Azure aren't already running and waiting for code (the host machines likely are). This is the case because you are assigned a virtual machine in Windows Azure rather than just a process, and you have a lot more control over that virtual machine. For example, you may run startup commands to configure the IIS server with something non-standard, or you may need to register third-party components before the system responds to traffic. The abstraction Windows Azure provides starts at the virtual machine layer, whereas for Heroku the abstraction is down to the process.

In short, you could compare the differences between Heroku and Windows Azure deployments by thinking of Heroku as copying code to an already running machine and starting a process vs. spinning up a full virtual machine from a stopped state and then copying the code. Obviously, one is going to take longer than the other. This is why my friend was seeing a deployment to Windows Azure taking about 10 minutes. Either way, this is still faster than you can procure a server for deployment in a non-cloud environment.
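As a back-of-the-envelope illustration of that difference, the two pipelines can be modeled as a sum of steps. All durations here are made-up round numbers for illustration, not measurements of either platform:

```python
# Illustrative (assumed) step durations in seconds - not measured values.
HEROKU_STEPS = {
    "copy slug to a running dyno": 5,
    "start the process": 5,
}
AZURE_STEPS = {
    "upload package to the fabric": 60,
    "provision and boot a full VM": 420,
    "deploy code onto the VM": 60,
    "run startup tasks": 60,
}

def total_seconds(steps):
    return sum(steps.values())

print(total_seconds(HEROKU_STEPS))  # -> 10 (seconds)
print(total_seconds(AZURE_STEPS))   # -> 600 (about 10 minutes)
```

The shape of the two sums, not the exact numbers, is the point: the VM provisioning step alone dwarfs everything Heroku does end to end.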

So, if you were just looking at deployment times between the platforms Heroku wins hands down, but just as with all choices in cloud computing you have to be aware of what the differences are in order to make the right decision for your application. The approach that Heroku has taken is on the higher end of the PaaS model where they are abstracting the idea of Virtual Machines and servers as a whole away from you. Windows Azure abstracts away the infrastructure, but still provides you a lot more control over the virtual machine your application code is running on. So, for our discussion here, we are trading speed for control. Just like if you decided to use a full on Infrastructure as a Service (IaaS) provider (like EC2 instances from Amazon which Heroku actually runs on) instead of a PaaS provider you are trading less configuration and infrastructure concerns in the PaaS world for a LOT more control over your environment and infrastructure in the IaaS world.

My friend said he spent a few hours trying to research why there was such a discrepancy in deployment times between the platforms. Hopefully this post will save others time in the future.

My (@rogerjenn) Uptime Report for my Live OakLeaf Systems Azure Table Services Sample Project: January 2012 of 2/3/2012 reports a second month with no downtime:

My live OakLeaf Systems Azure Table Services Sample Project demo runs two small Windows Azure Web role instances from Microsoft’s South Central US (San Antonio, TX) data center. Here’s its uptime report for January 2012:


Following is detailed Pingdom response time data for the month of January 2012:


Martin Tantow (@mtantow) described Pixar Ventures in Cloud Computing Animation in a 2/3/2012 post to the CloudTimes blog:

One of the best features of cloud computing is its ability to summon other compute-based business services. Cloud computing effectively allows enterprises to decide on acquisition and deployment cloud facilities. It provides them control over peak migration and requirement loads from consumers. This allows them to plan ahead the workload distribution requirements plus the cost of services needed to run a certain amount of workload.

In cases when the pre-set data storage capacity is met, the cloud infrastructure can easily re-compute and re-deploy workload limits to enable uninterrupted cloud services to consumers. This process is called “cloudbursting.”

It is too early to say that “cloudbursting” is already mainstream, because its adoption is still very new to some cloud vendors and suppliers. And although it is not yet very popular, many cloud developers are already putting their efforts together to create a new cloud business model through “cloudbursting.”

One of the few developers that have seen the potential of “cloudbursting” is Pixar. It recently announced the tie-up it closed with GreenButton and Microsoft to work on a new cloud service for the movie industry. This future cloud service software will be administered and managed by GreenButton while it will be run through Microsoft Azure.

GreenButton CEO, Scott Houston said the new cloud management service will help film makers and advertising agencies that are currently using Pixar’s RenderMan to continue utilizing the services at a much cheaper price. These services include Pixar’s compute resources for 3D and animation features. This move according to Houston will provide the film and advertising industry better access to technology at a very reasonable cost.

Pixar is just one of the businesses that would benefit from “cloudbursting.” There are, however, other vertical industries that can take advantage of this new application model that runs only via the cloud. “Cloudbursting” is a new cloud model that may boost businesses in 2012, but until GreenButton and Microsoft are ready with the framework, everyone in the animation industry will have to wait for its launch. [Emphasis added.]

Nathan Totten (@ntotten) described Windows Azure Toolkit for Social Games Updates in a 2/2/2012 post:

The Windows Azure Toolkit for Social Games makes it easier for developers to quickly build social and casual games using Windows Azure. Last week, we released version 1.2.2 of the toolkit. You can download the source here or the self-extracting package here. This version adds significant performance increases and improved stability, and now uses Autofac for dependency injection. Additionally, as part of this release, we have moved the toolkit to GitHub – this allows you to easily fork, clone, and contribute back to the toolkit in the same fashion as the open sourced Windows Azure SDKs.

For those who haven’t downloaded or used the social gaming toolkit yet, I encourage you to check it out. There are a lot of common patterns and tons of reusable code in the toolkit that apply to many scenarios besides the gaming space. With more and more applications relying on real-time communication, sharing, and feedback, the mechanisms used in games can be applied to many kinds of software.

The core features of this toolkit are:

  • Samples Games (Tic-Tac-Toe and Four in a Row)
  • Authentication with ACS (Access Control service)
  • JavaScript Tests
  • Leaderboard
  • Game Friends
  • User Profiles
  • Invites and Notifications
  • Tests for both server and client code
  • Reusable JavaScript libraries

You can read more about the social gaming toolkit on the project site’s wiki. We have documents that will help you get started using the toolkit and deploying the toolkit. Additionally, you will find blog posts about this release and others on my blog.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Jan Van der Haegen (@janvanderhaegen) described his LightSwitch twitter bot in a 2/2/2012 post:

Hey guys ‘n guls,

I was reading some sites about LightSwitch yesterday and realized how hard it was to keep track of all the different blogs, articles, code samples, …

Because I’m on Twitter a lot (yes, a LOT), I created a Twitter-bot account (@LightSwitchNews) that monitors interesting LightSwitch-related sources, and tweets if a new blog post is written, a new extension or code sample is added to the gallery, … The bot is created with “if this then that“, a website with the amazingly simple but effective concept that events on one platform (RSS, WordPress, GMail, GCalendar, whatever) can be configured to trigger an action on any other… (Really interesting, check it out!)

I created this user for my own purposes really (but feel free to follow if you find it interesting, of course); whenever I have some spare time I can just see what the bot has been tweeting and stay updated about what’s going on in the community. It’s my portal to the LightSwitch community…

If you know any interesting LightSwitch community members, or if you are one and my bot’s not tweeting about you, let me know (@janvanderhaegen) and I’ll add you!

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• IT Channel Insight reported Gartner Reveals Platform as a Service (PaaS) Forecast in a 2/3/2012 post:

Gartner, the world’s leading IT research and advisory company, has addressed the Platform as a service (PaaS) trend in the Gartner Special Report, “PaaS 2012: Tactical Risks and Strategic Rewards.”

PaaS is a commonly used reference to the layer of cloud technology architecture that contains all application infrastructure services, or ‘middleware’. It is the technology that intermediates between the underlying system infrastructure (operating systems, network, storage, and so forth) and the overlaying application software. PaaS facilitates the deployment of applications without the expense and complexity of purchasing and maintaining the underlying hardware, software and provisioning hosting capabilities. The facilities required to support the building and delivery of web applications and services are available entirely from the Internet. The technology services that form part of a full-scope, comprehensive PaaS include application development tools, database management systems, portal products, and business process management suites, among many others, all of which are offered as a service.

“With large and growing vendor investment in PaaS, the market is on the cusp of several years of strategic growth, leading to innovation and likely breakthroughs in technology and business use of all of cloud computing,” said Yefim Natis, Vice President and distinguished analyst at Gartner. “Users and vendors of enterprise IT software solutions that are not yet engaged with PaaS must begin building expertise in PaaS or face tough challenges from competitors in the coming years.”

In the previous year’s PaaS Special Report, Gartner analysts said that 2011 would be a pivotal year for the PaaS market. As Gartner predicted, the broad vendor acceptance and adoption in 2011 amounted to a sound industry endorsement of PaaS as an alternative to the traditional middleware deployment options. In 2012, the PaaS market is at its early stage of growth and, as such, does not yet have well-established industry leaders, best practices or dedicated standards.

“While there are clear risks associated with the use of services in the new and largely immature PaaS market, the risk of avoiding the PaaS market is equally high,” said Yefim Natis. “The right strategy for most mainstream IT organizations and software vendors is to begin building familiarity with the new cloud computing opportunities by adopting some PaaS services now, albeit with the understanding of their limitations and with the expectation of ongoing change in the market offerings and use patterns.”

Gartner anticipates that PaaS offerings will become widely available in late 2012 and by the end of 2013, all major software vendors will have competitive offerings in the market. By 2016, it is envisaged that competition among the PaaS vendors will produce new models, new standards and new software market leaders.

Mike Healey wrote Research: 2012 State of Cloud Computing and InformationWeek::Reports posted it on 2/3/2012:


The Cloud You Didn't Know You Had

Next time that annoying guy starts going on about how "the cloud is going to change everything," smack him upside the head. "Everything" has already changed, say the 511 business IT professionals, all from companies with 50 or more employees, responding to our InformationWeek 2012 State of Cloud Computing Survey. Adoption of public cloud services has been on a consistent upward pace for the past four years, since we began keeping track. One-third of 2012 respondents' organizations are already receiving services from a cloud provider, and an additional 40% are in the planning or evaluation stages. Just 27% say they won't consider it. In our 2008 cloud survey, people couldn't even agree on a definition--21% of 456 respondents from companies of all sizes said cloud was "pretty much a marketing term used haphazardly."

OK, so not everything has changed.

More than 500 IT pros weighed in on their use of public cloud services, and we can sum the results up in two words: blind leap. Just 28% assess the impact on their internal ­networks, even though 73% are using multiple providers. It's not too late to reverse the lemming migration. Here’s how.

Still, frustration with vendor hype aside, all types of public cloud services are gaining followers. So IT's got this down, right? Not so fast. We're seeing major gaps in how organizations are selecting, integrating and monitoring the services their employees depend on. The bulk of cloud initiatives come from the ground up and are reactive, in response to line-of-business requirements. IT rarely has an overarching vision of how it all fits together.

We expect the march to the public cloud to continue unabated, spurred by the siren song of lower costs, quicker implementation, and even less need for internal IT. Should we just fall in line and accept the inevitable?

Not so fast. Cloud computing is still very much a work in progress, wedged somewhere between CB radios and penicillin on the worldwide-usefulness scale. Providers' offers of lower initial cost and faster ramp up have lulled many organizations into a sloppy start, but you can get back on track. In this report, we'll lay out the critical steps every organization needs to take to make sure its cloud leap goes on more than just faith. (R4020212)

Survey Name InformationWeek 2012 State of Cloud Computing Survey
Survey Date December 2011
Region North America
Number of Respondents 511 at organizations with 50 or more employees
Purpose To determine the role of cloud computing in the enterprise
Methodology InformationWeek surveyed business technology decision-makers at North American companies with 50 or more employees. The survey was conducted online, and respondents were recruited via an email invitation containing an embedded link to the survey. The email invitation was sent to qualified InformationWeek subscribers.

Table of Contents

    3 Author's Bio
    4 Executive Summary
    6 Research Synopsis
    7 The Cloud You Didn’t Know You Had
    9 The Reality of SLAs
    10 Impact Assessment
    11 How to Make Cloud Soup
    13 A Healthy Helping of Worry
    15 Can You Go All In?
    15 Three Points on a Path
    18 The Big Leap
    19 Appendix
    29 Related Reports

    7 Figure 1: Identifying Cloud Impact on Internet-Facing Architecture
    8 Figure 2: Monitoring Cloud-Based App Performance
    9 Figure 3: Cloud SLAs
    11 Figure 4: Number of Cloud Providers Used
    12 Figure 5: Integrating Cloud Applications
    13 Figure 6: Cloud Provider Preference
    14 Figure 7: Cloud Services Concerns
    15 Figure 8: Weighing the Risk
    16 Figure 9: Future Degree of Cloud Use
    17 Figure 10: Use of Cloud Computing Services
    19 Figure 11: Cloud Providers in Use
    20 Figure 12: Planned Cloud Provider Use
    21 Figure 13: Replace or Fire a Cloud Provider?
    22 Figure 14: Greatest Performance Inhibitor
    23 Figure 15: Cloud-Based App Performance
    24 Figure 16: Change in Performance
    25 Figure 17: Job Title
    26 Figure 18: Company Revenue
    27 Figure 19: Industry
    28 Figure 20: Company Size

About the Author

Mike Healey is the president of Yeoman Technology Group, an engineering and research firm focusing on maximizing technology investments for organizations, and an InformationWeek Reports contributor. He has more than 23 years' experience in technology and software integration.
Prior to founding Yeoman, Mike served as the CTO of national network integrator GreenPages. He joined GreenPages as part of the acquisition of TENCorp, where he served as president for 14 years. Prior to founding TENCorp, Mike was an international project manager for Nixdorf Computer and a Notes consultant for Sandpoint Corp.
Mike has taught courses at MIT Lowell Institute and Northeastern University and has served on the Educational Board of Advisers for several schools and universities throughout New England. He has a BA in operations management from the University of Massachusetts Amherst and an MBA from Babson College.
He is a regular contributor for InformationWeek, focusing on the business challenges related to implementing technology. His work includes analysis of the SaaS market, green IT and operational readiness related to virtualized environments.

JP Morgenthal (@jpmorgenthal) asserted “Application development has been moving in the direction of platform abstraction” in an introduction to his Cloud Needs Application Architects to Understand IaaS post of 2/3/2012:

Application development has been moving in the direction of platform abstraction. That is, the need for developers to have detailed knowledge of the infrastructure that the application was being deployed on was becoming less important with the increasing sophistication of the application platform for which they were developing. Cloud computing is now reversing this course, at least in the short term.

Actually, the platform abstraction is a bit of a misnomer since the implementation resulted in operations struggling to tweak the infrastructure to meet performance requirements. Additionally, most applications typically had their own dedicated hardware allowing for specialization to meet the needs of the applications deployed on that hardware.

So, more accurately, cloud computing illustrates the flaws in the approach of pure platform abstraction and a ‘Chinese Wall’ between application development and operations as operations now has fewer tweaks at their disposal to make an application perform in a multi-tenancy environment. Hence, it is imperative that application architects begin to incorporate into their design the impacts of operating in the cloud into their architectures. Application architects must be able to understand how the application will perform given the environment that the application will be operating under.

Impacts that application architects will need to think about in this cloud world include:

  • Databases – running a highly-available database in the cloud is a daunting task; especially without direct control over the storage. Environments like Amazon offer database services that deliver greater performance than can be achieved if you put up your own database in their IaaS, but there are also pitfalls.
  • Software failover – applications can now implement failover far less expensively using commodity hardware. Hence, failover should now be developed into the application instead of relying on the application platform or hardware infrastructure. Given that application architects have not focused on this use case in many cases, it will require some education and experience before this can become common.
  • Virtual networking – virtual networks enable the application development team to take control over their own application’s networking infrastructure. Once again, the lack of experience here means that there are likely to be many misconfigurations that impact the performance and availability of the application in addition to enabling security flaws.
  • Instrumentation, logging and monitoring – these are areas that the application development teams have been pushing responsibility off onto the application platforms. However, without visibility beyond the hypervisor, it’s imperative that they incorporate this back into the applications or they may have significant issues troubleshooting or auditing their applications.
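The software-failover bullet above lends itself to a quick sketch: rather than counting on the platform or hardware, the application itself retries and fails over across replicas. Here is a minimal illustration in Python; the function name, endpoint names, and retry policy are all hypothetical, not any particular cloud's API:

```python
import time

def call_with_failover(endpoints, request_fn, retries_per_endpoint=2, backoff=0.1):
    """Try each replica in order, retrying transient failures before failing over.

    `endpoints` is an ordered list (primary first); `request_fn(endpoint)`
    performs the actual call and raises ConnectionError on transient failure.
    """
    last_error = None
    for endpoint in endpoints:
        for attempt in range(retries_per_endpoint):
            try:
                return request_fn(endpoint)
            except ConnectionError as err:  # transient: retry, then fail over
                last_error = err
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all replicas failed") from last_error
```

In practice the retry policy (which exceptions count as transient, how long to back off, when to mark a replica dead) is the part that needs real design attention, and it is exactly the education gap the author describes.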

As my famous Uncle Winthrop liked to say, “Now that I've given you a band saw, I need to teach you how to use it or you will just be wasting a lot of wood and in the worst case might lose a few fingers.”

Todd Hoff described Stuff The Internet Says On Scalability For February 3, 2012 in this week’s post to his High Scalability blog:

I'm only here for the HighScalability:

  • 762 billion: objects stored on S3; $1B/Quarter: Google spend on servers; 100 Petabytes: Storage for Facebook's photos and videos.
  • Quotable Quotes:
    • @knorth2: #IPO filing says #Facebook is "dependent on our ability to maintain and scale our technical infrastructure"
    • @debuggist: Scalability trumps politics.
    • @cagedether: Hype of #Hadoop is driving pressure on people to keep everything
    • @nanreh: My MongoDB t shirt has never helped me get laid. This is typical with #nosql databases.
    • @lusis: I kenna do it, Capt'n. IO is pegged, disk is saturated…I lost 3 good young men when the cache blew up!
    • Kenton Varda: Jeff Dean puts his pants on one leg at a time, but if he had more than two legs, you'd see that his approach is actually O(log n)
  • Once upon a time, manufacturing located near rivers for power. Likewise, software will be located next to storage, CPU, and analytics resources in a small cartel of clouds. That's the contention of Here Come the Cloud Cartels. This tributary system (pun intended) will be Amazon, Cisco Systems, Google, I.B.M., Microsoft, Oracle and a few competitors. Supposedly the benefit will be cheap computing, but when has a cartel ever led to cheap anything? [Emphasis added.]

William Vambenepe (@vambenepe) recommended Come for the PaaS Functional Model, stay for the Cloud Operational Model in a 2/2/2012 post:

The Functional Model of PaaS is nice, but the Operational Model matters more.

Let’s first define these terms.

The Functional Model is what the platform does for you. For example, in the case of AWS S3, it means storing objects and making them accessible via HTTP.

The Operational Model is how you consume the platform service. How you request it, how you manage it, how much it costs, basically the total sum of the responsibility you have to accept if you use the features in the Functional Model. In the case of S3, the Operational Model is made of an API/UI to manage it, a bill that comes every month, and a support channel which depends on the contract you bought.

The Operational Model is where the S (“service”) in “PaaS” takes over from the P (“platform”). The Operational Model is not always as glamorous as new runtime features. But it’s what makes Cloud Cloud. If a provider doesn’t offer the specific platform feature your application developers desire, you can work around it. Either by using a slightly-less optimal approach or by building the feature yourself on top of lower-level building blocks (as Netflix did with Cassandra on EC2 before DynamoDB was an option). But if your provider doesn’t offer an Operational Model that supports your processes and business requirements, then you’re getting a hipster’s app server, not a real PaaS. It doesn’t matter how easy it was to put together a proof-of-concept on top of that PaaS if using it in production is playing Russian roulette with your business.

If the Cloud Operational Model is so important, what defines it and what makes a good Operational Model? In short, the Operational Model must be able to integrate with the consumer’s key processes: the business processes, the development processes, the IT processes, the customer support processes, the compliance processes, etc.

To make things more concrete, here are some of the key aspects of the Operational Model.

Deployment / configuration / management

I won’t spend much time on this one, as it’s the most understood aspect. Most Clouds offer both a UI and an API to let you provision and control the artifacts (e.g. VMs, application containers, etc) via which you access the PaaS functional interface. But, while necessary, this API is only a piece of a complete operational interface.


Support

What happens when things go wrong? What support channels do you have access to? Every Cloud provider will show you a list of support options, but what’s really behind these options? And do they have the capability (technical and logistical) to handle all your issues? Do they have deep expertise in all the software components that make up their infrastructure (especially in PaaS) from top to bottom? Do they run their own datacenter or do they themselves rely on a customer support channel for any issue at that level?


SLAs

I personally think discussions around SLAs are overblown (it seems like people try to reduce the entire Cloud Operational Model to a provisioning API plus an SLA, which is comically simplistic). But SLAs are indeed part of the Operational Model.

Infrastructure change management

It’s very nice how, in a PaaS setting, the Cloud provider takes care of all change management tasks (including patching) for the infrastructure. But the fact that your Cloud provider and you agree on this doesn’t neutralize Murphy’s law any more than me wearing Michael Jordan sneakers neutralizes the law of gravity when I (try to) dunk.

In other words, if a patch or update is worth testing in a staging environment if you were to apply it on-premise, what makes you think that it’s less likely to cause a problem if it’s the Cloud provider who rolls it out? Sure, in most cases it will work just fine and you can sing the praise of “NoOps”. Until the day when things go wrong, your users are affected and you’re taken completely off-guard. Good luck debugging that problem, when you don’t even know that an infrastructure change is being rolled out and when it might not even have been rolled out uniformly across all instances of your application.

How is that handled in your provider’s Operational Model? Do you have visibility into the change schedule? Do you have the option to test your application on the new infrastructure or to at least influence in any way how and when the change gets rolled out to your instances?

Note: I’ve covered this in more details before and so has Chris Hoff.


Troubleshooting

Developers have assembled a panoply of diagnostic tools (memory/thread analysis, BTM, user experience, logging, tracing…) for the on-premise model. Many of these won’t work in PaaS settings because they require a console on the local machine, or an agent, or a specific port open, or a specific feature enabled in the runtime. But the need doesn’t go away. How does your PaaS Operational Model support that process?

Customer support

You’re a customer of your Cloud, but you have customers of your own and you have to support them. Do you have the tools to react to their issues involving your Cloud-deployed application? Can you link their service requests with the related actions and data exposed via your Cloud’s operational interface?

Security / compliance

Security is part of what a Cloud provider has to worry about. The problem is, it’s a very relative concept. The issue is not what security the Cloud provider needs, it’s what security its customers need. They have requirements. They have mandates. They have regulations and audits. In short, they have their own security processes. The key question, from their perspective, is not whether the provider’s security is “good”, but whether it accommodates their own security process. Which is why security is not a “trust us” black box (I don’t think anyone has coined “NoSec” yet, but it can’t be far behind “NoOps”) but an integral part of the Cloud Operational Model.

Business management

The oft-repeated mantra is that Cloud replaces capital expenses (CapEx) with operational expenses (OpEx). There’s a lot more to it than that, but it surely contributes a lot to OpEx, and that needs to be managed. How does the Cloud Operational Model support this? Are buyer-side roles clearly identified (who can create an account, who can deploy a service instance, who can manage a deployed instance, etc.) and do they map well to the organizational structure of the consumer organization? Can charges be segmented and attributed to various cost centers? Can quotas be set? Can consumption/cost projections be run?
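The cost-segmentation and quota questions above can be made concrete with a toy sketch. The usage-record shape, rates, and quota figures below are invented for illustration; a real provider's billing export would look different:

```python
from collections import defaultdict

def charges_by_cost_center(usage_records):
    """Aggregate raw usage records into per-cost-center spend."""
    totals = defaultdict(float)
    for rec in usage_records:
        totals[rec["cost_center"]] += rec["hours"] * rec["rate_per_hour"]
    return dict(totals)

def over_quota(totals, quotas):
    """Return cost centers whose spend exceeds their quota (no quota = unlimited)."""
    return [cc for cc, spent in totals.items() if spent > quotas.get(cc, float("inf"))]

records = [
    {"cost_center": "marketing", "hours": 100, "rate_per_hour": 0.12},
    {"cost_center": "engineering", "hours": 700, "rate_per_hour": 0.48},
    {"cost_center": "marketing", "hours": 50, "rate_per_hour": 0.12},
]
totals = charges_by_cost_center(records)
print(over_quota(totals, {"marketing": 100.0, "engineering": 250.0}))  # ['engineering']
```

The point is not the arithmetic but the plumbing: the Operational Model has to expose usage data at a granularity that lets the consumer attribute it to their own cost centers in the first place.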

We all (at least those of us who aren’t accountants) love a great story about how some employee used a credit card to get from the Cloud something that the normal corporate process would not allow (or at too high a cost). These are fun for a while, but it’s not sustainable. This doesn’t mean organizations will not be able to take advantage of the flexibility of Cloud, but they will only be able to do it if the Cloud Operational Model provides the needed support to meet the requirements of internal control processes.


Some of the ways in which the Cloud Operational Model materializes can be unexpected. They can seem old-fashioned. Let’s take Amazon Web Services (AWS) as an example. When they started, ownership of AWS resources was tied to an individual user’s Amazon account. That’s a big Operational Model no-no. They’ve moved past that point. As an illustration of how the Operational Model materializes, here are some of the features that are part of Amazon’s:

  • You can Fedex a drive and have Amazon load the data to S3.
  • You can optimize your costs for flexible workloads via spot instances.
  • The monitoring console (and API) will let you know ahead of time (when possible) which instances need to be rebooted and which will need to be terminated because they run on a soon-to-be-decommissioned server. Now you could argue that it’s a limitation of the AWS platform (lack of live migration), but that’s not the point here. Limitations exist, and the role of the Operational Model is to provide the tools to handle them in an acceptable way.
  • Amazon has a program to put customers in touch with qualified System Integrators.
  • You can use your Amazon support channel for questions related to some 3rd party software (though I don’t know what the depth of that support is).
  • To support your security and compliance requirements, AWS supports multi-factor authentication and has achieved some certifications and accreditations.
  • Instance status checks can help streamline your diagnostic flows.
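The instance status checks in the list above can feed directly into a diagnostic flow. The sketch below parses a response whose shape is modeled loosely on EC2's DescribeInstanceStatus output; the payload is a hardcoded sample, and in real use you would fetch it via the monitoring API:

```python
def instances_needing_attention(status_response):
    """List (instance ID, event code) pairs for instances with scheduled events,
    such as a pending reboot or retirement of the underlying host."""
    flagged = []
    for status in status_response["InstanceStatuses"]:
        for event in status.get("Events", []):
            flagged.append((status["InstanceId"], event["Code"]))
    return flagged

sample = {
    "InstanceStatuses": [
        {"InstanceId": "i-0001", "Events": [{"Code": "system-reboot"}]},
        {"InstanceId": "i-0002", "Events": []},
        {"InstanceId": "i-0003", "Events": [{"Code": "instance-retirement"}]},
    ]
}
print(instances_needing_attention(sample))
# [('i-0001', 'system-reboot'), ('i-0003', 'instance-retirement')]
```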

These Operational Model features don’t generate nearly as much discussion as new Functional Model features (“oh, look, a NoSQL AWS service!”). That’s OK. The Operational Model doesn’t seek the limelight.

Business applications are involved, in some form, in almost every activity taking place in a company. Those activities take many different forms, from a developer debugging an application to an executive examining operational expenses. The PaaS Operational Model must meet their needs.

Geva Perry (@gevaperry) emphasized The Importance of Customer Engagement For Cloud Companies in a 2/1/2012 post to his Thinking Out Cloud blog:

In July I wrote about the public beta launch of Totango and how it promised to help increase SaaS sales by better understanding customers, and especially their interaction with online channels, including the SaaS service itself.

Now, Totango has announced that it has analyzed and optimized the sales engagement or interaction with over one million prospects and customers of SaaS businesses. That’s a big number.

When looking at the research published by Totango, here are some important takeaways for SaaS sales:

Next best action and next best channel for each customer

If you can, use a zero touch (self service) or low touch (inside sales) model. Using an inside sales rep (phone and email) is exponentially cheaper than using classic enterprise sales (somebody who gets on an airplane and wines & dines customers). And using a self-service sales model is exponentially cheaper than inside sales. David Skok has published some of the math in his presentation on building a sales and marketing machine (see slides 18, 19, 20). As I know from my own experience with many companies, and as Joel York has pointed out, some B2B sales are too complex for zero touch or low touch selling, but generally modern buyers expect to be in charge and don’t always appreciate a call with a rep. The trick is to figure out which prospects would benefit from a sales call and which would actually be hurt by it. Big data from solutions like Totango can help: A/B testing batches of leads can help you figure out what is the best next action for each specific situation.

Know how your customers are using your service from day one

Not all prospects are created equal. Totango found that the single most important indicator of a prospect’s likelihood to sign up for a paid service is his or her activity level during the free trial period. So if you know who your most active trial users are you can direct your inside sales team’s attention to these prospects. Free trial users who are still active during day 3 of their trial were 4 times more likely to convert into paying users than the average customer. Active trial users who were contacted by a sales rep were 70% more likely to buy the paid service than those who weren’t (but of course sales reps are likely to choose the best looking prospects to begin with). Since a sales rep doesn’t have enough hours in the day (and is too expensive as noted above) to contact ALL trial users, it’s important to have each rep start their day by calling those prospects most likely to convert. This will increase sales effectiveness and sales efficiency: Zack Urlocker from Zendesk is quoted on the Totango website as saying that they were able to increase free to paid conversion by more than 30% using this approach.
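The "call your hottest trial users first" idea can be sketched as a simple scoring function. The field names and weights below are illustrative guesses, not Totango's actual model:

```python
from datetime import date

def prioritize_trials(trial_users, today):
    """Rank free-trial users for sales outreach: recent, active signups first."""
    def score(user):
        days_in_trial = (today - user["signup_date"]).days
        # Day-3 activity was the strongest conversion signal, so weight it heavily.
        day3_bonus = 10 if user["last_active_day"] >= 3 else 0
        return user["sessions"] + day3_bonus - days_in_trial
    return sorted(trial_users, key=score, reverse=True)

users = [
    {"name": "A", "signup_date": date(2012, 2, 1), "last_active_day": 4, "sessions": 12},
    {"name": "B", "signup_date": date(2012, 1, 20), "last_active_day": 1, "sessions": 2},
    {"name": "C", "signup_date": date(2012, 2, 2), "last_active_day": 3, "sessions": 5},
]
print([u["name"] for u in prioritize_trials(users, today=date(2012, 2, 3))])  # ['A', 'C', 'B']
```

A real system would learn the weights from conversion data (the A/B-tested batches mentioned earlier) rather than hardcode them.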

Focus on customer success

By definition, the all-important customer lifetime value metric in SaaS is determined not by the value of the initial sale, but by the value of ALL sales made by a customer over its lifetime, including expansion sales and subscription renewals. The key to successfully upselling an account is to know what value the customer has received from the solution to date. If you know that a customer has been successful, it’s an easy sale. If you call a customer and find out they discontinued using the service months ago, it’s an embarrassment. You should really receive an early warning indicator that an account is no longer using the service and proactively turn the customer around before they cancel the service. Totango’s research found that most cancellations are preceded by a period of non-use and that non-use is more prevalent than you might think: a full half of paid SaaS customers log in less than once a month or do not use their paid service at all. Another 19% use their paid service less than once a week.
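That early warning indicator is easy to prototype once last-login timestamps are logged per account. A hedged sketch, with a made-up idle threshold and data shape:

```python
from datetime import date, timedelta

def at_risk_accounts(accounts, today, max_idle_days=30):
    """Flag paid accounts idle for more than `max_idle_days`; a stretch of
    non-use tends to precede cancellation, so these warrant proactive outreach."""
    cutoff = today - timedelta(days=max_idle_days)
    return [acct["name"] for acct in accounts if acct["last_login"] < cutoff]

accounts = [
    {"name": "Acme", "last_login": date(2012, 1, 28)},
    {"name": "Globex", "last_login": date(2011, 12, 1)},
]
print(at_risk_accounts(accounts, today=date(2012, 2, 3)))  # ['Globex']
```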

In summary, it seems to me that knowing your customer is rapidly going from a “nice to have” to a “must have”. Those early adopters of customer engagement software will be able to:

  • Dramatically increase sales by analyzing customer data and deciding on the best next sales action for each customer
  • Exponentially lower sales costs by routing prospects to the lowest-cost sales channel

And those who don’t pay attention will very soon be at a very significant competitive disadvantage…
Check out this very cool infographic from Totango:

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.

<Return to section navigation list>

Cloud Security and Governance

Jay Heiser asserted Megaupload is world’s biggest hot potato in a 2/3/2012 post to his Gartner blog:

The dozens of petabytes of Megaupload data belonging to millions of Internet users are manifesting themselves as a giant hot potato, currently burning a cash-flow and PR hole into the bottom lines of several global hosting firms.

The Electronic Frontier Foundation has formally requested that this hot potato be allowed to fester indefinitely, announcing yesterday “EFF formally requested the preservation of the data seized when the U.S. government shut down and related sites in January of 2012, notifying the court and attorneys involved in the case that Megaupload’s innocent users deserve a fair process to control and retrieve their lawful material.”

I also agree that innocent users deserve a fair process, although it is difficult to envision what that could be. What I don’t agree with is the part about ‘data seized’. As far as I can tell, it’s still sitting on its original servers in multiple data centers belonging to Carpathia, Cogent, and some number of additional hosting firms. The DOJ did not seize it at all; they just took multiple steps to ensure that the service would be inaccessible:

  • They took possession of Mega’s domain names, making it impossible for customers to access it.
  • They froze Mega’s financial assets, making it impossible for them to pay the hosting providers.
  • They arrested Mega leadership on criminal charges, ensuring that they would be focused on staying out of jail, instead of figuring out how to restore their file storage services.

Mega’s staff are under arrest at worst, and unpaid and looking for work at best. Mega’s hosting firms are stuck with thousands of idle servers, mostly filled with the toxic digital waste of bootlegged movies and pornography. Carpathia has strongly suggested that they do not have administrative access to these servers (although they haven’t explicitly said so). It would be nice to think that any legal content would be provided to the 50,000,000 or so people to whom it belongs, but it’s difficult to envision the practicalities.

Without providing any public suggestion of how it should be done, in a letter to the DOJ on Feb 1, the EFF formally requested that the DOJ take possession of the poisonous potato. Described as a matter of fairness, with Constitutional overtones, this preservation step would presumably be a financial one, but not a physical one.

For the DOJ, theirs was a hugely visible act which immediately encouraged several Megaupload competitors to change their practices. It sent a clear message that ‘the USA will not tolerate Internet IP piracy.’ Given the huge level of citizen pushback on SOPA and PIPA, it’s easy to envision growing pressure to change US policy.

For the hosters, this digital hot potato represents an immediate loss of income, and a potential PR disaster. Just leaving the Mega servers in place represents an ongoing expense; actually turning them on and serving their content would represent an even bigger one. Coming up with a mechanism to allow ‘legitimate’ users to collect their data while excluding illegal content seems a practical and legal rat hole, with endless potential to attract lawyers from the DOJ, the EFF, foreign governments, and the entertainment industry. It isn’t difficult to envision that they would eventually be on the receiving end of some sort of class action lawsuit.

For the EFF, this is a PR gift, representing their biggest ever opportunity to play hero for millions of impacted Megausers. I don’t blame them for making hay in this sunshine. Cloud computing not only means that the criminals and innocent bystanders are sharing the same virtual premises, but the scale of cloud computing ensures an astounding amount of collateral damage. This isn’t the 1920s, and today’s digital G Men can’t shoot a bootlegger without also hitting an innocent bystander.

For the bootleggers and porn pushers, this probably represents no more than a minor setback.

For some number of individuals and small businesses, too naive to have understood the relative risks and benefits of the public cloud computing model, this probably represents a permanent loss. The EFF is actively soliciting the names and details from impacted users, and it will be interesting to see what data is provided on the number of individuals claiming that their only copy of their personal property is trapped in Megalimbo.

For me, this is an endlessly fascinating story, resulting in some of my best Gartner blog readership stats. Aside from sheer drama of the event, though, it raises important questions about the role of government within the Internet, the liabilities of a provisioning model that relies on a chain of providers, and whether the leverage of this computing model is creating monster sized services that are too big to allow to fail.

<Return to section navigation list>

Cloud Computing Events

Josh Holmes reported on PHP Benelux 2012 in a 2/3/2012 post:

Last weekend I was at PHP Benelux (@phpbenelux). This is the third year that they have run the conference but the first time that I’ve been able to make it, definitely an advantage of living in Ireland… It was a really fun conference for a number of reasons.

First, we had a PHP on Azure Hackathon where a number of PHP devs worked on getting a project up onto Azure. There were definitely some learnings around the setup and preparation side of the hackathon, but once people got set up, it was pretty good. In the three hours, setup problems and all, we actually had 3 people get a project up and running. I was fairly pleased with that as an outcome given the first-time nature of this exercise.

First of all, the people are fantastic. That includes everyone from the other speakers, to the attendees, to the conference organizers. To be fair, I’ve got a lot of friends who were there and it was great to just hang out with some of them. But I also met a ton of new people there. The speakers themselves range from international PHP celebrities, such as David Zülke and Derick Rethans, to local Belgian and Dutch speakers getting their first breaks. Microsoft’s own Craig Kitterman from the Azure product team was one of the speakers as well.

Second, the content, across the board, was top notch. I really enjoyed a number of the talks from Matthew Weier O’Phinney’s VIM talk to David Coallier’s closing keynote on taking PHP to the next level.

Third, the conference organization is on par with a number of pro conferences that I attend. The tremendous number of little touches, like the fact that they track your flights, pick you up from the airport, and arrange travel back for you as well, is what really puts it over the top. One of the things that I particularly liked is that at the end of the conference, in the closing bits, they called out each of the sponsors and talked about why that sponsor was important. This is something that not nearly enough conferences do. The sponsors invest quite a bit of resources, ranging from cash to people to many other kinds of support. It was great to hear how some of the sponsors who had a small amount of cash were deeply involved in other ways while some others simply wrote a check. Both are valid sponsorships and are needed, but it was really interesting to see how the different sponsors were involved. I also liked that at the end they called up all of the speakers; that line of speakers filled the stage. For anyone who thought this was a small conference, it’s amazing to see all of the bodies on stage that were involved, from the staff to the speakers, and see how many people it takes to pull it off.

Looking forward to next year…

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Tony Baer (@TonyBaer) described Big Moves in Big Data: EMC's Hadoop Strategy in a 2/3/2012 post to Dana Gardner’s Briefings Direct blog:

To date, Big Storage has been locked out of Big Data. It’s been all about direct-attached storage, for several reasons. First, Advanced SQL players have typically optimized architectures from data structure (using columnar), unique compression algorithms, and liberal usage of caching to juice response over hundreds of terabytes. For the NoSQL side, it’s been about cheap, cheap, cheap along the Internet data center model: have lots of commodity stuff and scale it out. Hadoop was engineered exactly for such an architecture; rather than speed, it was optimized for sheer linear scale.

Over the past year, most of the major platform players have planted their table stakes with Hadoop. Not surprisingly, IT household names are seeking to somehow tame Hadoop and make it safe for the enterprise.

Up ’til now, anybody with armies of the best software engineers that Internet firms could buy could brute-force their way to scaling out humongous clusters and, if necessary, invent their own technology, then share and harvest from the open source community at will. That is hardly a suitable scenario for the enterprise mainstream, so the common thread behind the diverse strategies of IBM, EMC, Microsoft, and Oracle toward Hadoop has been, not surprisingly, to make Hadoop more approachable.

What’s been conspicuously absent so far was a play from Big Optimized Storage. The conventional wisdom is that SAN or NAS are premium, architected systems whose costs might be prohibitive when you talk petabytes of data.

Similarly, so far there has been a different operating philosophy behind the first generation implementations from the NoSQL world that assumed that parts would fail, and that five nines service levels were overkill. And anyway, the design of Hadoop brute forced the solution: replicate to have three unique copies of the data distributed around the cluster, as hardware is cheap.

As Big Data gains traction in the enterprise, some of it will certainly fit this pattern of something being better than nothing, as the result is unique insights that would not otherwise be possible. For instance, if your running analysis of Facebook or Twitter goes down, it probably won’t take the business with it. But as enterprises adopt Hadoop – and as pioneers stretch Hadoop to new operational use cases such as what Facebook is doing with its messaging system – those concepts of mission-criticality are being revisited.

And so, ever since EMC announced last spring that its Greenplum unit would start supporting and bundling different versions of Hadoop, we’ve been waiting for the other shoe to drop: When would EMC infuse its Big Data play with its core DNA, storage?

Today, EMC announced that its Isilon networked storage system was adding native support for Apache Hadoop’s HDFS file system. There were some interesting nuances to the rollout.

Big vendors feeling their way
It’s interesting to see how IT household names are cautiously navigating their way into unfamiliar territory. EMC becomes the latest, after Oracle and Microsoft, to calibrate their Hadoop strategy in public.

Oracle announced its Big Data appliance last fall before it lined up its Hadoop distribution. Microsoft ditched its Dryad project built around its HPC Server. Now EMC has recalibrated its Hadoop strategy; when it was first unveiled last spring, the spotlight was on the MapR proprietary alternatives to the HDFS file system of Apache Hadoop. It’s interesting that vendors’ initial announcements have either been vague or have been tweaked as they’ve waded into the market. For EMC’s shift, more about that below.

For EMC, HDFS is the mainstream

MapR’s strategy (and IBM’s along with it, regarding GPFS) has prompted debate and concern in the Hadoop community about commercial vendors forking the technology. As we’ve ranted previously, Hadoop’s growth will be tied not only to the megaplatform vendors that support it, but also to the third-party tools and solutions ecosystem that grows around it.

For such a thing to happen, ISVs and consulting firms need to have a common target to write against, and having forked versions of Hadoop won’t exactly grow large partner communities.

Regarding EMC, the original strategy was two Greenplum Hadoop editions: a Community Edition with a free Apache distro and an Enterprise Edition that bundled MapR, both under the Greenplum HD branding umbrella. At first blush, it looked like EMC was going to earn the bulk of its money from the proprietary side of the Hadoop business.


What’s significant is that the new announcement of Isilon support pertains only to the open source HDFS side. More to the point, EMC is rebranding and subtly repositioning its Greenplum Hadoop offerings: Greenplum HD is the Apache HDFS edition with optional Isilon support, and Greenplum MR is the MapR version, niche-targeted at advanced Hadoop use cases that demand higher performance.

Coming atop recent announcements from Oracle and Microsoft, which have come down clearly on the side of OEM’ing Apache rather than anything limited or proprietary, this amounts to an unqualified endorsement of Apache Hadoop/HDFS as not only the formal but also the de facto standard.

This reflects emerging conventional wisdom that the enterprise mainstream is leery of lock-in to anything that smells proprietary, especially for technology where it is still on the learning curve. Other forks may emerge, but they will not be at the base file system layer. This leaves IBM and MapR pigeonholed – admittedly, there will be API compatibility, but clearly both are swimming upstream.

Central storage is the newest battleground

As noted earlier, Hadoop’s heritage has been the classic Internet data center scale-out model. The advantage is that, leveraging Hadoop’s highly linear scalability, organizations could expand their clusters easily by plugging in more commodity servers and disks. Pioneers and purists would scoff at the notion of an appliance approach because the model was always simply scaling out inexpensive, commodity hardware rather than paying premiums for big-vendor boxes.

In blunt terms, the choice is whether you pay now or pay later. As mentioned before, do-it-yourself compute clusters require sweat equity – you need engineers who know how to design, deploy, and operate them. The flip side is that many, arguably most, corporate IT organizations lack either the skills or the capital. There are several ways out of what might otherwise appear to be a Hobson’s choice:

  • Go to a cloud service provider that has already created the infrastructure, such as what Microsoft is offering with its Hadoop-on-Azure services;
  • Look for a happy, simpler medium such as Amazon’s Elastic MapReduce on its DynamoDB service;
  • Subscribe to SaaS providers that offer Hadoop applications (e.g., social network analysis, smart grid as a service) as a service;
  • Get a platform and have a systems integrator put it together for you (key to IBM’s BigInsights offering, and applicable to any SI that has a Hadoop practice)
  • Go to an appliance or engineered systems approach that puts Hadoop and/or its subsystems in a box, such as with Oracle Big Data Appliance or EMC’s Greenplum DCA. The systems engineering is mostly done for you, but the increments for growing the system can be much larger than simply adding a few x86 servers here or there (Greenplum HD DCA can scale in groups of 4 server modules). Entry or expansion costs are not necessarily cheap, but then again, you have to balance capital cost against labor.
  • Surround Hadoop infrastructure with solutions. This is not a mutually exclusive strategy; unless you’re Cloudera or Hortonworks, which make their business bundling and supporting the core Apache Hadoop platform, most of the household names will bundle frameworks, algorithms, and eventually solutions that in effect put Hadoop under the hood. For EMC, the strategy is its recent announcement of a Unified Analytics Platform (UAP) that provides collaborative development capabilities for Big Data applications. EMC is (or will be) hardly alone here.

With EMC’s new offering, the scale-up option tackles the next variable: storage. This is the natural progression of a market that will address many constituencies, and where there will be no single silver bullet that applies to all.

Jeff Barr (@jeffbarr) posted Amazon CloudFront: Looking Back, Looking Forward, Making Plans on 2/2/2012:

imageLooking Back
In 2011 we added a total of seven edge locations to Amazon CloudFront and Route 53. We also added lots of new features, as I documented last year.

imageLooking Forward
Our newest edge locations are in Milan, Italy, and Osaka, Japan. This brings our total worldwide location count to 26 (see the CloudFront page for a complete list). Each new edge location helps lower latency and improve performance for your end users.

Making Plans

We have additional locations in the pipeline for 2012 and beyond. Our planning process takes a number of factors into account, including notes from our sales team and discussions on the Amazon CloudFront forum. We also collect latency measurements from a number of points around the globe to our current set of locations and correlate them with broadband Internet penetration and existing Amazon CloudFront usage in the area.

I would also like to invite you to participate in the Amazon CloudFront Edge Location Survey. We are very interested in your suggestions for additional locations. We'd also like to learn a bit more about the type of content that you deliver to your customers. …

Barb Darrow (@gigabarb) asked Why is Amazon hiring like a drunken sailor? in a 2/1/2012 article for GigaOm’s Structure blog:

imageThe most striking thing about Amazon’s fourth-quarter and year-end numbers was that the company’s head count was up a whopping 67 percent to 56,200 employees, compared with 33,700 a year ago, according to Amazon’s new 8-K filing. Sixty-seven percent is a very big number — even for Amazon.

imageWhile most of the questions on Amazon’s earnings call on Tuesday night focused on the Kindle Fire business, Justin Post, an analyst with Bank of America Merrill Lynch, tried to get Amazon to drill down into that 67 percent head count growth, which — he pointed out — was “quite a bit higher than units or revenue growth.” But Amazon CFO Thomas Szkutak didn’t bite. “The majority of those increases are in our operations and customer service area . . . it’s in support of the growth,” Szkutak said.

imageGiven Amazon Web Services’ push into enterprise computing, smart money is that a good chunk of those workers are supporting AWS users, not selling or otherwise dealing with Kindles or book sales.

imageEarlier this week, Amazon announced new premium support options for EC2. The company added Amazon-fielded support for third-party software including Windows and Red Hat Linux operating systems and Apache and IIS web servers running on Amazon infrastructure.

According to the AWS blog post by AWS evangelist Jeff Barr:

If you have Gold or Platinum Premium Support, you can now ask questions related to a number of popular operating systems including Microsoft Windows, Ubuntu, Red Hat Linux, SuSE Linux, and the Amazon Linux AMI. You can ask us about system software including the Apache and IIS web servers, the Amazon SDKs, Sendmail, Postfix, and FTP. A team of AWS support engineers is ready to help with setup, configuration, and troubleshooting of these important infrastructure components.

As most in the enterprise IT world can attest, support engineers do not come cheap. And with a customer base as large as Amazon’s, it will need quite a few. The margins may be higher on sales of such enterprise services, but they also require a good deal more customer support, and customer expectations for that support are much higher. Enterprise IT companies like EMC, Oracle and IBM know this. They typically offer a range of support options, including on-site hand-holding if needed. It is unclear to some whether Amazon does.

Currently, Amazon Platinum tier service costs either $15,000 per month or 10 percent of total AWS usage for that period, whichever is higher. (The Amazon support pricing is posted here).
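The whichever-is-higher pricing quoted above is easy to model. Here is a minimal Python sketch using only the two figures cited in the article ($15,000 per month or 10 percent of AWS usage); the function name is illustrative, and actual AWS billing rules may differ:

```python
def platinum_support_fee(monthly_aws_usage: float) -> float:
    """Platinum tier: $15,000/month or 10% of AWS usage, whichever is higher."""
    return max(15_000.0, 0.10 * monthly_aws_usage)

# Below $150,000/month of usage, the flat $15,000 floor applies.
print(platinum_support_fee(100_000))  # 15000.0
# Above that crossover point, the 10 percent share dominates.
print(platinum_support_fee(500_000))  # 50000.0
```

The crossover at $150,000 of monthly usage is where the two formulas meet, which is why heavy AWS users effectively pay a percentage rather than a flat fee.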

There is a healthy debate about whether Amazon, which built its empire on the razor-thin margins of bookselling (and, some would say, of Infrastructure-as-a-Service offerings), really wants to enter the world of higher-margin enterprise software and services, which require a higher level of hand-holding and support than Amazon has offered in the past. This news about bulked-up support for AWS is a sign that it does intend to go there.

Related research and analysis from GigaOM Pro:
Subscriber content. Sign up for a free trial.

<Return to section navigation list>