Thursday, July 22, 2010

Windows Azure and Cloud Computing Posts for 7/22/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Windows Azure Platform Diagram   
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available for HTTP download at no charge from the book's Code Download page.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry (@WayneBerry) reported SQL Azure Supports Hierarchyid Data Type in a 7/22/2010 post to the SQL Azure Team blog:

While this might seem like a non-announcement, the hierarchyid data type has just been added as a supported data type in SQL Azure Service Update 3, announced here. The hierarchyid data type is used to manage hierarchical data and tables that have a hierarchical structure: for example, employee-manager relationships and threaded forum data.

SQL Azure also supports the hierarchyid functions as of Service Update 3. This seems obvious: you need these functions to interact with the hierarchyid data type. However, because SQL Azure doesn’t currently support the .NET CLR, these functions might appear to be unsupported. Even though they are implemented as CLR functions, they are supported.

These are the supported functions for hierarchyid:

  • GetAncestor
  • IsDescendantOf
  • ToString
  • GetDescendant
  • Parse
  • Write
  • GetLevel
  • Read
  • GetRoot
  • GetReparentedValue
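For readers new to the type, here is a hedged sketch of how these functions fit together; the table, names, and data are invented for illustration, and the same statements run on SQL Server 2008, which shares the type:

```sql
CREATE TABLE Org (
    Node hierarchyid PRIMARY KEY,
    EmployeeName nvarchar(50)
);

INSERT INTO Org VALUES
    (hierarchyid::GetRoot(),    N'CEO'),
    (hierarchyid::Parse('/1/'),   N'VP Sales'),
    (hierarchyid::Parse('/1/1/'), N'Sales Rep');

-- Who reports (directly or indirectly) to the VP of Sales?
SELECT Node.ToString() AS Path,
       EmployeeName,
       Node.GetLevel() AS Level
FROM Org
WHERE Node.IsDescendantOf(hierarchyid::Parse('/1/')) = 1;
```

Note that IsDescendantOf is inclusive, so the VP's own row comes back along with the rep's.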

Marcelo Lopez Ruiz explains OData and optimistic concurrency in this 7/22/2010 post:

As a short follow-up on HTTP and optimistic concurrency, I wanted to touch on how OData deals with concurrency.

The nice thing is that there is very little for me to write about. By and large, the OData protocol "piggy-backs" on the HTTP protocol and uses the same mechanism of entity tags and headers. All good.

The only difference is that OData also specifies the individual ETags for each entity when you ask for a feed of many entities at once. This means you don’t have to re-request each entity separately just to get its ETag when you want to update one of them. The Concurrency control and ETags section discusses this; for example, you can see how an ETag comes back on an ATOM entry in an m:etag attribute.

<feed ...namespace declarations...>
  <entry m:etag='W/"123"'>...</entry>
  <entry m:etag='W/"456"'>...</entry>
</feed>

This just makes things more efficient and eliminates round-trips, while still preserving all the HTTP goodness for general HTTP processors and caches.
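To make the round-trip saving concrete, here is a hedged sketch of the wire exchange: the client reads the m:etag straight off the feed entry and sends it back as If-Match on the update, with no intermediate GET. The host, URL, and payload are invented; OData at this point used the MERGE verb for partial updates:

```http
MERGE /service.svc/Products(123) HTTP/1.1
Host: example.org
If-Match: W/"123"
Content-Type: application/atom+xml

<entry>...changed properties only...</entry>
```

If someone else updated the entity in the meantime, the server answers 412 Precondition Failed instead of applying the change.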

See Microsoft XML guru Jean Paoli’s report on his keynote to O’Reilly Media’s OSCON 2010 conference, Interoperability Elements of a Cloud Platform Outlined at OSCON, posted to the Interoperability @ Microsoft blog on 7/22/2010, in the Cloud Computing Events section.

Vitkaras explains OData’s Data Services Expressions – Part 9 – Expansions in this 7/21/2010 post:

Series: This post is the ninth part of the Data Services Expressions Series, which describes expressions generated by WCF Data Services.

In the last post we looked at projections, which leaves us with just one last big feature, expansions.

The $expand query option

A query which targets a certain entity by default returns just the primitive and complex properties of that entity. Navigation properties are only represented as links in the response. Expansions allow clients to ask the server to return the entity or entities referred to by a specified navigation property along with the base entity of the query. As a sample, let’s take a look at this query:
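The query would be an ordinary GET against the categories entity set, something like this (the service root below is illustrative):

```
http://localhost/ODataDemoService.svc/Categories
```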


Such a query returns category entities. Each category will contain its ID, Name, and other primitive or complex properties in the payload. But the Products navigation property will only be represented in the payload as a link. If the client needs the products for each category as well, it would have to download the feed that link points to for each of the categories. Or it can use expansions and ask the server to include the Products in the response as well, like this:
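The expanded variant adds the $expand system query option (same illustrative service root):

```
http://localhost/ODataDemoService.svc/Categories?$expand=Products
```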


This query will return the same Categories as the previous one, but the Products navigation property will now also include all the product entities for each category in the response. This feature can be very useful for clients to minimize the number of requests issued against the server.

Expressing expansions in expressions

Now let’s take a look at how such a query is executed on the server. The simplest approach would be to assume that the server can ask for the value of the Products property and get back the product entities it needs. But this approach would be inefficient and rather complicated to implement for many providers.

As an example, let’s consider a provider which reads its data from a database. In the database, navigation properties are usually stored as foreign keys. A query which returns categories would simply return all the rows from the categories table. When the service then asked for all the products for a given category, the provider would have to issue a new query against the database (hitting different tables) to get that data. And this would be repeated for each category in the results. This could quickly become very expensive, not counting that your favorite DBA would not like your application at all.

A much better solution would be to ask the provider to get categories and their products in one query, so that it could issue a join. But how do we tell the provider that Products are needed as well? We could invent some new mechanism (a special method or so), but that would go against the idea that all queries are passed to the provider as LINQ queries.

The way this is resolved is by explicitly projecting the navigation property which needs to be expanded. So a simple example of the query above could look something like this:

categories.Select(category => new
{
    Category = category,
    Products = category.Products
});

You can try this with any existing LINQ provider and it should work as expected, that is retrieve the categories as well as all their products from the data store.

The above approach can’t be used directly by WCF Data Services. The main problem is that it requires generation of a new type for each set of expanded properties (the return type of the query). So instead a special type called ExpandedWrapper is used. It’s rather similar to the ProjectedWrapper we discussed in the previous post, and it serves a very similar purpose.

ExpandedWrapper is a generic type (in fact there are 12 such types) where the generic parameters specify the type of the entity being expanded and then types of each of the expanded properties. Each ExpandedWrapper then has a property ExpandedElement which will store the entity being expanded (the category in our sample) and a certain number of properties called ProjectedProperty0, ProjectedProperty1 and so on to store the expanded navigation properties. So in our sample above using ExpandedWrapper the query would look like:

categories.Select(category => new ExpandedWrapper<Category, IEnumerable<Product>>
{
    ExpandedElement = category,
    Description = "Products",
    ProjectedProperty0 = category.Products
});

I didn’t mention the property Description, which serves the same purpose as ProjectedWrapper.PropertyNameList: it stores a comma-delimited list of the names of the expanded navigation properties.
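Conceptually the wrapper is just a typed property bag. A sketch of its shape, for orientation only — the real class is internal to WCF Data Services, so everything here beyond the property names quoted above is illustrative:

```csharp
// Illustrative shape only -- the real type lives in WCF Data Services internals.
public sealed class ExpandedWrapper<TExpandedElement, TProperty0>
{
    public TExpandedElement ExpandedElement { get; set; }  // the entity being expanded
    public string Description { get; set; }                // comma-delimited expanded property names
    public TProperty0 ProjectedProperty0 { get; set; }     // first expanded navigation property
}
```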

The real expression

To be able to look at the real query generated by WCF Data Services, the simple reflection-based service we’ve used so far won’t do. The reflection provider (which is used if you just provide classes as the service definition) assumes exactly the thing we want to avoid: that a navigation property can be accessed at any time without consequences. For in-memory data this is a valid assumption, as all the data is readily available without issuing expensive queries. So the expression generated by the reflection provider for a simple expansion like the one in our sample will not try to tell the LINQ provider to fetch the expanded properties, since there’s no need. For a custom provider, WCF Data Services can’t make such an assumption, and so it will generate the expansion expression shown above.

So instead of the simple provider we’ve used so far, we will use a sample from the OData Provider Toolkit, which you can download here. From the downloaded archive, extract the Typed\RWNavProp folder and open the solution in it. Add the InterceptingProvider toolkit (as mentioned in our second post) to the solution and open the DSPResourceQueryProvider.cs file in the DataServiceProvider project. In it, find the method GetTypedQueryRootForResourceSet<TElement> and wrap the returned IQueryable in the intercepting query, something like this:

private System.Linq.IQueryable GetTypedQueryRootForResourceSet<TElement>(ResourceSet resourceSet)
{
    return Toolkit.InterceptingProvider.Intercept(
        // ...the original query root for the resource set goes here, as before...
        (expression) =>
        {
            // Set a breakpoint here (or write the expression to the console)
            // to inspect the query before handing it back unchanged.
            return expression;
        });
}

Now simply start the ODataDemoService and issue our sample query. You should see an expression like this:

System.Linq.Enumerable+<CastIterator>d__b1`1[ODataDemo.CategoryEntity].Select(p =>
new ExpandedWrapper`2() {
        ExpandedElement = p,
        Description = "Products",
        ProjectedProperty0 = p.Products})

This is exactly the expression we constructed above, so it works as it should. The text above doesn’t actually show the real type arguments of the ExpandedWrapper; it just shows that it’s a generic type with two parameters (that’s what the `2 means in CLR type names). The real type, if you look at it in the debugger, is ExpandedWrapper<CategoryEntity, IEnumerable<ProductEntity>>.

Note that unlike the ProjectedWrapper expressions, there’s no need for Convert operators in this one, since all the properties are strongly typed (ExpandedWrapper is a generic class).

The p.Products expression above is yet another standard property access expression as described in this post, and as such might differ based on the property definition you use in your provider.

During serialization, WCF Data Services will now recognize the results as ExpandedWrapper objects, access the ExpandedElement property, and use its value as the entity instance. Then, when it needs the value of a navigation property to expand, it will access the right ProjectedProperty property (based on the comma-delimited names in the Description property).

That’s it for a simple expansion. It gets much more complicated if several features are combined together. When both expansions and projections are combined the query will use a combination of ExpandedWrapper and ProjectedWrapper objects and it will get considerably more complex. Also using server driven paging will influence the expression tree a lot. But those are all topics for some future posts.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

John Fontana asserts Passwords [are] on the death watch list in this 7/21/2010 post about the Cloud Identity Summit 2010 to Ping Identity’s Cloud blog:


Password proliferation officially went under a death watch today, given little time to be run out of town as the Cloud Identity Summit kicked off amid the old-west justice of Keystone, Colo.

The deed likely won’t take the form of marauding IT pros, but you get the picture.

During the opening keynotes of the Cloud Identity Summit, Google’s Eric Sachs, product manager in the company’s security and CIO department, said it with one slide: Eliminate Passwords.

That’s an official company strategy for the search and online app giant. The company is already testing a new infrastructure for Google Apps accounts that will reduce passwords by allowing users to sign in to many more Google services with their Google Apps account credentials.

Sachs’s proclamation came after Ping CEO Andre Durand said “passwords are one of the weakest things that exist in the cloud today.”

He called on the industry leaders, end-users and vendors gathered at the conference to end password proliferation in order to help boost security in the cloud – a so-called Password Non-Proliferation Treaty.

Durand said it won’t be easy, and noted that a lot of work needs to be done on standards.

The whole notion of stemming password proliferation produced this zinger from Ping CTO Patrick Harding, who compared passwords to hamburgers and proclaimed, “If we don't get rid of passwords, the Cloud will need a colonoscopy in 5 years.”


But as the cry went out for reducing passwords, others wondered where the conversation now needs to go.

Anil Saldhana of Red Hat posted on his Twitter account: “Ok, I got that we need to eliminate passwords, should we talk about Levels of Assurance?”

The call-to-action is out there – what do you think?

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Eric Golpe (@ericgol) reported Initial Windows Azure Benefits Doubled for MSDN Subscribers in this 7/22/2010 post:

I just found out today that, effective immediately, MSDN subscribers at the Premium, Ultimate, and BizSpark levels are eligible for 16 months of Windows Azure platform benefits instead of 8. This increase demonstrates Microsoft’s commitment to building a strong ecosystem of developers on the Windows Azure platform.

If you are an MSDN subscriber or BizSpark member, I encourage you to sign up for your Windows Azure benefits if you have not done so already. There are even more pretty cool features coming that you will not want to miss. (Of course, I can't tell you what those are yet, so you will have to have a bit of trust.)

Some of the details of the new change, effective immediately, are:

  • Eligible MSDN subscribers can sign up for this offer once.
  • The first phase of this introductory offer lasts 8 months from the time of sign up, then the subscriber must renew their Windows Azure platform subscription once for another 8 months. After that, the subscriber will cancel this introductory account and sign up for the ongoing MSDN benefit based on their subscription level.
  • MSDN subscribers who have already signed up must choose to have their Azure subscription “auto renew” so that it does not end after 8 months. “Auto renew” is the default setting, so this will only need to be changed if the subscriber had changed it previously.
  • MSDN subscribers who elect to use more of the Windows Azure platform than is included in their monthly benefit will be charged at our standard rates.   

For more information, visit Windows Azure Platform Benefits for MSDN Subscribers.

Eric Golpe is a Microsoft Senior Applications Development Consultant specializing in Windows Azure, Microsoft Office SharePoint Server, Visual Studio Team Foundation Server, Dynamics CRM, and SQL Server.

My live OakLeaf Systems Azure Table Services Sample Project app runs under my MSDN Premium benefit.

Brian Swan (@brian_swan) announced Windows Azure Command Line Tools for PHP Available in Web Platform Installer in this 7/22/2010 post:

The Windows Azure Command Line Tools for PHP are now available in the Microsoft Web Platform Installer (Web PI). This announcement was made on the Interoperability Team blog as part of a post that outlines interoperability elements of a cloud platform. The entire post deserves a close read, but I’m initially most excited about this small piece of the announcement:

Available today is the latest version of the Windows Azure Command Line Tools for PHP in the Microsoft Web Platform Installer (Web PI). The Windows Azure Command Line Tools for PHP enable developers to use a simple command-line tool without an Integrated Development Environment to easily package and deploy new or existing PHP applications to Windows Azure. Microsoft Web PI is a free tool that makes it easy to get the latest components of the Microsoft Web Platform as well as install and run the most popular free web applications.

Installing and using these tools was fairly easy already (I wrote a post about using these tools a while back), but this makes installation and configuration even easier. Here’s what to do:

1. Download the Web Platform Installer here:

2. On the Developer Tools tab, click Customize under the Windows Azure Platform Tools.


3. Select Windows Azure Command-line Tools for PHP.


4. Click Install, and you will be ready to use the tools shortly.

Note that a prerequisite for installing these tools is the Windows Azure SDK, but the Web PI knows this and will automatically install this SDK for you. Also note that the command line tools are installed in the C:\Program Files\Windows Azure Platform Tools for PHP directory. For information about using these tools, see this post: Using the Windows Azure Command Line Tools for PHP.

Ceasar de la Torre explains How to set a default page to a Windows Azure Web Role App (Silverlight, ASP.NET, etc.) in a 7/22/2010 post:

This is a very easy tip, but useful.

As you may know, the Windows Azure management portal doesn’t offer a way to set many of the things you can configure through the IIS Management Console. But because most IIS 7.x configuration can also be set using XML configuration files, in most cases we don’t really need IIS Manager.

In this case, we can set a default page for a Windows Azure Web Role app (Silverlight, ASP.NET, etc.) by changing web.config. We just specify the default page within the system.webServer section:

                 <system.webServer>
                   <defaultDocument>
                     <files>
                       <clear />
                       <add value="MySilverlightInitialPage.aspx" />
                     </files>
                   </defaultDocument>
                 </system.webServer>

Then, your named Silverlight or ASP.NET page will be treated as the default document of the web site, and therefore will be served when the client requests the root URL.

Very easy!!

Bruce Kyle reports US Developers Get a Month Free of Windows Azure in Virtual Boot Camp in this post of 7/22/2010 to the US ISV Evangelism blog:

Developers in the United States are being offered a free month to develop their applications in Windows Azure. You also get no-cost phone, chat, and email support during and after the Windows Azure virtual boot camp through Microsoft Platform Ready for Windows Azure. No credit card is required.

To learn more, see Virtual Boot Camp USA. You'll find a link for an email where you can register for the free month.

About Microsoft Platform Ready

When you join Microsoft Platform Ready, formerly known as the Front Runner program, you can access one-on-one technical support by phone or e-mail from our developer experts, who can help get your applications in the cloud. Once your application is compatible, you'll get a range of marketing benefits to help you let your customers know that you're a Front Runner.

Microsoft Platform Ready is now available to partners worldwide.

About Windows Azure

Windows Azure is a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage web applications on the internet through Microsoft datacenters.

Also, see:

The Windows Azure Team posted a Real World Windows Azure: Interview with Johannes Schick, Chief Executive Officer, höltl Retail Solutions case study in Q and A style on 7/22/2010:

As part of the Real World Windows Azure series, we talked to Johannes Schick, CEO of höltl Retail Solutions, about using the Windows Azure platform to run the company's point-of-sale (POS) system. [URL added.] Here's what he had to say:

MSDN: What service does höltl Retail Solutions provide?

Schick: höltl Retail Solutions is a Microsoft Gold Certified Partner that offers innovative products for optimizing retail processes in the clothing and nonfood industries. The most popular of these is the POSFlow chain software solution for midsize retailers; more than 50,000 people throughout Europe use the POSFlow solution.

MSDN: What was the biggest challenge höltl Retail Solutions faced prior to adopting the Windows Azure platform?

Schick: POSFlow is popular among midsize retailers, and we wanted to make the solution viable for smaller retailers, too. However, because POSFlow is an on-premises solution, we had to send technicians out to customers' locations to set up, install, and configure the software, which could take up to three or four hours plus travel time. That wasn't a cost-effective or time-efficient model for smaller businesses.

MSDN: Can you describe the solution you built with the Windows Azure platform to help address your need for efficient deployment?

Schick: We migrated our existing solution to the Windows Azure platform using the Microsoft .NET Framework 3.5 and the Microsoft Visual Studio 2008 Team Suite development system. We are using the Windows Azure platform AppFabric Service Bus to bridge cloud, on-premises, and hosted deployments. For our relational database needs, we're using Microsoft SQL Azure. For the visual interface, we used the Microsoft Silverlight 3 browser plug-in.

The POSFlow solution takes advantage of the Windows Azure platform to deliver cloud-based POS services. …

Q&A continues in the usual format.

HPC in the Cloud reports SaaS Eclipses On-Premise Business Application Preference on 7/22/2010 by regurgitating a press release:

SAN FRANCISCO, July 22, 2010 -- SPI Research's Professional Services Business Applications Market Adoption report, based on responses from 244 professional service organizations, reveals that billable service organizations have increased their appetite for cloud-based (Software-as-a-Service) applications, overshadowing on-premise business solutions by a wide margin. The study found varied levels of application adoption:

    * Enterprise Resource Planning (ERP) is the most prevalent (94%)
    * Client Relationship Management (86%)
    * Remote Service Delivery (81%)
    * Professional Services Automation (56%)
    * Human Capital Management (55%)
    * Business Intelligence (43%)

The report gives PS executives and software application providers insight into the level of market adoption, integration, and satisfaction with core Professional Services business applications. SPI Research also examines which sectors of the PS industry and which applications are moving to the cloud, and how soon.

43% of the 244 participants represented embedded PS organizations within technology companies (Software, SaaS, Hardware and Networking). 57% represented independent professional service providers.

Market share leaders by category are: Intuit/QuickBooks (ERP); (CRM); Webex (Remote Service Delivery); NetSuite/OpenAir (PSA); ADP (HCM) and SAP/Business Objects (BI).

The 2010 Professional Services Business Applications Market Adoption report was written by Jeanne Urich and R. David Hofferberth, P.E.

The complete 46-page report, which contains 60 insightful charts and tables, is now available for $295.

To see the table of contents and purchase the report click:

About Service Performance Insight

Service Performance Insight focuses on the global service economy. We provide a unique depth of operating experience combined with unsurpassed analytic capability.


Greg Willis delivers an Aussie’s view of migration to Windows Azure in his SaaS software and moving to Azure post of 7/21/2010:

While launching the Windows Azure Platform into the Australian market over the last year, I have spoken to dozens of customers and partners about their initial use cases for using cloud platforms.

One part of the industry that has rapidly migrated to cloud deployment on Azure is SaaS ISVs. These were software companies already delivering their solutions ‘as a service’ across the internet, but typically managing their own hosting and carefully balancing when to make additional infrastructure investments to support, just in time, their growth into new customers and markets.

With the cloud as a deployment platform, these ISVs can now ‘pay as they grow’ in a much more granular fashion, with the additional benefit of building on a high-availability platform with no server hardware or operating system maintenance.

A good Australia-based example is Connect2Field. Connect2Field is a SaaS solution for a range of service-based businesses that need to send job information to their field staff on a daily basis.


Connect2Field Solution Overview

Steve Orenstein discusses Connect2Field’s Windows Azure migration experience and benefits in their recent Real World Azure interview.

In common with many of the customers and partners I have spoken to, this move to a cloud Platform as a Service (PaaS) has enabled Connect2Field to really focus their energy on developing their solution and business rather than maintaining and growing the application deployment platform.  For Connect2Field this focus has allowed them to start addressing their potential global market with a high-availability solution.

<Return to section navigation list> 

Windows Azure Infrastructure

Nick Eaton announced Microsoft escapes Apple assault with record Q4, FY revenue in this 7/22/2010 post to the SeattlePI blog:

Microsoft smashed Wall Street expectations Thursday by reporting record fourth-quarter revenue -- nearly $800 million more than was expected. And the software juggernaut announced record annual revenue for its 2010 fiscal year.

Q4 FY10 revenue was $16.04 billion -- a 22 percent increase from the year-ago period, when Microsoft reported its first-ever year-over-year revenue and earnings decrease. Analysts had expected Microsoft to post $15.27 billion in revenue for the quarter ended June 30. Net income was $4.52 billion, or 51 cents per share.

Earnings were fueled by continuing sales of Windows 7. Microsoft said it has sold more than 175 million Windows 7 licenses to date. The Windows and Windows Live division brought in $4.55 billion of revenue and $3.06 billion in operating income during the quarter.

"An outstanding quarter for the Windows division," said Matt Rosoff, an analyst with Kirkland-based Directions on Microsoft. "And I really think that's the reason Microsoft had a blowout quarter."

The Redmond company escaped an assault by Apple, which reported quarterly revenue of $15.7 billion on Tuesday. If Microsoft hadn't beaten that number, it would have been the first time Apple's revenue exceeded that of Microsoft.

Microsoft's revenue for its fiscal year 2010 was $62.48 billion, a 7 percent increase from fiscal 2009. The company's previous annual record was $60.42 billion, set in FY2008.

Net income for FY2010 was $18.76 billion, or $2.10 a share, up a whopping 18 percent from the year-ago period. The economic recession negatively affected Microsoft's earnings during FY2009.

"We saw strong sales execution across all of our businesses, particularly in the enterprise with Windows 7 and Office 2010," Kevin Turner, Microsoft's chief operating officer, said in a news release. "Our transition to cloud services is well under way with offerings like Windows Azure and our Business Productivity Online Services, and we look forward to continuing our product momentum this fall with the upcoming launches of Windows Phone 7 and Xbox Kinect." [Emphasis added.]

More information is on its way.

David Linthicum reported that The Biggest Question I Get: Is the Cloud Right for my Business? in this 7/22/2010 post to ebizQ’s Where SOA Meets Cloud blog:

Or, should the question be: is the cloud a good option for you, considering the nature of your business and the limited ducats you're able to spend? The cloud is bringing applications to smaller players who were once out of reach, and is quickly changing the playing field in terms of who can access and leverage enterprise-level cloud applications and infrastructure.

There are other things you need to consider as well, including how to sync data now stored in the cloud back into your on-premise systems. Most of those who move to the cloud will do so in segments, so integration is going to be an ongoing problem.

Also, make sure to keep things on your radar that typically are not, including security, privacy, and performance. Despite what the naysayers are spouting, these are typically easy problems to solve, but you need to do some advanced planning.

The best way to proceed is to first understand your existing limitations, and therefore the core needs of your existing IT solution. There is always something that's desired that has not been affordable in the past, such as CRM or calendar sharing. Now is a great time to reevaluate the affordability of those systems, given the new opportunities in the cloud.

Second, make sure to do your homework around the true ROI of using cloud computing for some of your IT needs. As we mentioned above, make sure to look at costs and benefits over a three to five year horizon. That's a good indicator of value. In certain cases, cloud computing systems could be more expensive than on-premise systems, despite the cloud computing hype crowd telling you otherwise. The truth is somewhere in the middle, and it depends upon your organization.

Finally, continue to look at cloud computing as part of the strategy. If no cost effective solutions exist today in the cloud, chances are a few will show up next year or the year after. This is about looking at the option of cloud computing as a quickly changing space, and cloud computing providers should get better and cheaper as time progresses.

Clearly, it's an evolution in thinking, technology, and the way we consume IT resources. There are no magic beans here, but perhaps some opportunities now or in the future.

Lori MacVittie (@lmacvittie) claims Those eight bits in the IP header aren’t doing much of anything these days, perhaps it’s time to put them to work in a preface to her Get Out Your Crayons. We Need to Color Us Some Bits post of 7/22/2010:

Back in the early days of bandwidth management, when quality of service and prioritization of traffic were on everyone’s minds because we were stuck with low-throughput connectivity, there was a brief discussion about the use of IP’s TOS (Type of Service) bits as a means to meet specific application performance needs.

I say brief because, well, it never really got anywhere. See, even though the creators of the IP specification looked into the future and provided a technical solution to traffic prioritization, they couldn’t have foreseen the organizational roadblocks to leveraging such a simple but effective method of managing traffic.

The biggest problem was that you could ensure that TOS bits were honored within the organization, but once those packets passed through the organization’s boundary of control, i.e. onto the Internet, there was no guarantee or requirement that any other organization honor those bits. And because packets flow through many, many different routers and switches along its very long yet ironically very short travels from datacenter to client, if just one fails to honor the bits then the packets go gray and prioritization is lost.
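Setting the bits was never the hard part; any socket API exposes the TOS byte directly. A minimal Python sketch (IPTOS_LOWDELAY is one of the original RFC 1349 values; whether anything downstream honors it is exactly the organizational problem described above):

```python
import socket

# IPTOS_LOWDELAY (0x10) is one of the original RFC 1349 TOS values;
# modern networks read the same byte as a DSCP code point instead.
IPTOS_LOWDELAY = 0x10

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_LOWDELAY)

# Every packet this socket emits now carries the marking -- until the
# first router on the path that ignores or rewrites the byte.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```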

Hence it was that bandwidth management moved up the stack, with queuing and rate shaping at the transport and application layers of the stack becoming the norm and the “coloring” of TOS bits fell into disuse, like childhood crayons set aside in favor of cool gel pens and clicky-mechanical pencils.


Interestingly enough, the need for prioritization and bandwidth management still exists and, in many cases, it could become of paramount importance to the successful implementation of a mature cloud.

It is easy to forget that when you look under the covers of a cloud computing environment that there still exist physical network connections that comprise that environment. Despite the magnitude of virtualization in use at all layers that abstracts the entire infrastructure from its physical implementations, that network is still there. It’s massive, it’s huge, and it’s a spider’s web of connectivity.


I am not the only one thinking about this, I’m fairly certain. An InformationWeek Analytics survey last year included a concern rarely seen in cloud computing surveys: “speed to activate new services/expand capacity.” And folks are apparently somewhat concerned about this. Less so than other “problem” areas of cloud, but concerned enough that it made the chart.

The problem is that because everyone is still actually sharing a physical network connection all that traffic gets mixed up on the wire. At the physical layer, every packet is the same regardless of what payload it’s carrying. But the reality is that some packets are more “important” in the sense that some will be extremely time sensitive. A request to provision a service that’s under increasing load is more important than many application requests but both look the same to the routers and switches that get those packets from point A to point B within a cloud computing environment.

As cloud computing providers and enterprises get better at automating the provisioning and elastic scalability processes, the packets that make up requests for such operational tasks will become increasingly time sensitive and important to ensuring operational efficiency and success.

The problem, as it reared its ugly head before, is that all that traffic – customer and operational – is running over HTTP. REST, SOAP, whatever. The APIs that make it possible to provide “compute as a service” are almost universally implemented using a combination of HTTP and REST or SOAP-based architectural principles. And so are the applications running in the cloud. Everything is running over HTTP and thus, as with bandwidth management challenges in the past, it becomes increasingly difficult to distinguish traffic without inspecting its payload.

And payload inspection always adds latency. It may be only microseconds but it’s still latency. And if every router has to do it, well, microseconds eventually add up to seconds. And when we’re talking about ensuring the order of operations in a provisioning process, those seconds can make a big difference.


Sometimes the solution really is to go back to the beginning, to our roots, to the network.

It could be that the solution lies in out-of-band management networks. If not physically at least logically separated network channels that are by default prioritized and therefore will never get stuck in a traffic jam trying to get to location “C” because the application at location “A” is physically on the same network and heavily oversubscribed at the moment an important management request is trying to get to “C”. This would, however, add another layer of complexity to the management of not just the physical but the logical network. Complexity that is almost always translated into higher costs, which of course gets passed on to the customer.

It could be that DSCP (DiffServ, or Differentiated Services Code Point), what eventually became of TOS because it was basically unused, could become the solution precisely because a single provider “owns” the entire network. Because a provider has control over all the components comprising the underlying network infrastructure, it could enforce the honoring of DSCP bits across its network and thus prioritize traffic at the IP layer.

DiffServ is concerned with classifying packets as they enter the local network. This classification then applies to a Flow of traffic where a Flow is defined by 5 elements: Source IP address, Destination IP, Source port, Destination port and the transport protocol. A flow that has been classified or marked can then be acted upon by other QoS mechanisms. Multiple flows can therefore be dealt with in a multitude of ways depending on the requirements of each flow. Packets are first Classified according to their current DSCP. Then they are separated into queues where one queue may be routed via a marking mechanism and another queue may be examined more closely.

-- Quality of Service overview
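The quoted overview’s 5-tuple flow definition is easy to make concrete. The sketch below groups packets into flows so each flow can then be queued or marked as a unit; the dictionary field names are illustrative, not any router’s actual data structures:

```python
from collections import defaultdict

def flow_key(pkt):
    # A flow is identified by the five elements the overview names:
    # source IP, destination IP, source port, destination port, protocol.
    return (pkt["src_ip"], pkt["dst_ip"],
            pkt["src_port"], pkt["dst_port"], pkt["proto"])

def classify(packets):
    """Group packets into flows so each flow can be marked/queued as a unit."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[flow_key(pkt)].append(pkt)
    return flows

packets = [
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9",
     "src_port": 51000, "dst_port": 80, "proto": "tcp"},
    {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.9",
     "src_port": 51000, "dst_port": 80, "proto": "tcp"},
    {"src_ip": "10.0.0.2", "dst_ip": "10.0.0.9",
     "src_port": 52000, "dst_port": 443, "proto": "udp"},
]
flows = classify(packets)
print(len(flows))  # 2 distinct flows
```

Once packets are bucketed this way, a marking or queuing policy can be applied per flow rather than per packet.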

If a provider segments “cloud management” traffic from “normal” traffic by port, this solution would be fairly easy to implement. If not, well, then we’re going to need something else. It’s the concept behind DiffServ and TOS that’s important to leverage in such a solution – the recognition that some traffic must be prioritized to ensure delivery in a timely fashion.
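For what it’s worth, an application can already request a DSCP marking itself on platforms that expose the TOS byte through the sockets API; whether any device along the path honors it is, as noted above, the real problem. A minimal sketch in Python (the Expedited Forwarding code point, 46, is the standard value; the throwaway UDP socket is just for illustration):

```python
import socket

DSCP_EF = 46        # Expedited Forwarding per-hop behavior
tos = DSCP_EF << 2  # DSCP occupies the upper 6 bits of the old TOS byte

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Read the value back to confirm the kernel accepted it; on Linux this
# typically prints 0xb8.
print(hex(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)))
s.close()
```

Every datagram sent on that socket would then carry the EF marking in its IP header, ready to be honored (or ignored) hop by hop.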


What seems clear, to me at least, is that eventually we’re going to run into scenarios in which we need something akin to Operations Performance Management (OPM) to ensure that management and control messages are delivered in a timely fashion.

This is especially true as cloud computing matures and we start to see the dynamism inherent in infrastructure 2.0 components put into broader use. Enforcement of security and delivery policies must be real-time. They can’t be near-time; they must happen now. When we start relying on existing open standards like HTTP and messaging hubs and event-driven networking architectures we have to remember we get the good and the bad from those existing standards and implementations. We get the ease of use and integration, the flexibility, and the ubiquity of support, but we also get the problems that have plagued quality of service implementations forever: when everything is delivered via HTTP then everything looks like HTTP. Differentiation is nice, but we need a way to achieve it that doesn’t impede performance in a cloud computing environment.

In this case, it seems wise to look down the stack and return to a perhaps less sophisticated but absolutely more elegant and simple means of distinguishing not only traffic but precedence and type of service. DSCP or TOS or whatever we might decide to slap into those 8 bits in the IP header (perhaps there’s room for a new specification and use of those bits?) would be infinitely more scalable and easily supported in a cloud computing environment for distinguishing between the small subset of internal “traffic types” that need to be managed.

Boris Lublinsky’s Rearchitecturing Applications for the Cloud post to the InfoQ News blog of 7/21/2010 reviews an SOA World article by two InfoSys developers:

In their SOA World article, "Cloud Application Migration", Chetan Kothari and Ashok Kumar [Arumugam] discuss the challenges that many organizations are facing when trying to move to the cloud:

Is "cloud computing" the logical next step for me to successfully execute business strategy? If so, what should be my cloud strategy? Which applications are best suited to run on cloud? These are the questions we will discuss, attempt to answer, and where required, make suitable recommendations.

The authors of the article see many challenges migrating applications to the cloud, from "security, to SLA management, to regulations, to fear of vendor lock-in, to lack of any standards." In spite of that, they consider that one should take advantage of the cloud when there is a good opportunity, without necessarily trying to move everything to the cloud.

The authors encourage the transition to the cloud, noticing several benefits:

Migrating applications to the cloud to benefit from its elastic infrastructure services is a quick, cost-effective, and tactical approach to reap the benefits of the cloud. It offers a natural entry point to exploit the value of the cloud platform without any significant overhead costs. The migration will be straightforward, usually a simple re-hosting exercise, with minimal or no impact on application code. This minimizes any risks with migration while still keeping the costs low. However, it must be said that while this offers a cost-effective approach, it does not offer the cost advantage (while delivering services) against competitors who have true multi-tenant capabilities.

Their opinion is shared by ZapThink’s Jason Bloomberg, who considers that such a transition needs a Cloud Architecture:

The missing link between the business benefits that Cloud Computing promises and the products on the market, of course, is architecture... From the enterprise perspective... leveraging Clouds as part of the broader enterprise IT context is at the core of getting value from them. What is a best practice-based approach to leveraging Cloud-based resources in the context of the existing IT environment to address changing business needs? The answer to that question is the Cloud Architecture that is the missing link for organizations struggling to piece together a vendor-neutral Cloud strategy.

Kothari and Kumar consider materializing such an architecture a "daunting task":

Migrating applications to SaaS architecture and hosting it on a shared services model gives true multi-tenant cost advantage to an enterprise. It helps rationalize a portfolio by removing redundant applications offering similar services across geographies or lines of business in favor of a single multi-tenant application shared across all its users. However, enabling SaaS architecture on an existing application could be a daunting task ...

The necessity of rearchitecting applications to be moved to a cloud is reemphasized by Janakiram MSV in his interview with

The first step towards the Cloud is to start refactoring your applications for the loosely coupled architecture. There should not be any affinity between the web tier, application tier and the database tier. One of the key tenets of the Cloud is Elasticity, which is the ability to scale out and scale down on demand. You never know which tier of your applications demands to be scaled out. In one scenario, you may have to scale out the web tier to meet the ongoing traffic demand. If you see that the middle tier is becoming the bottleneck, you may have to add more application servers, and the same is the case with the data tier. Given this dynamic, on-demand nature of the Cloud, your applications should be designed to work seamlessly from a single server scenario to a clustered environment.

Both Bloomberg and MSV emphasize that moving to the cloud typically means that SOA and data persistence are the key elements for the cloud migration:

Respecting the SOA principles is a great step in your journey to the Cloud. The other key thing is the way data is persisted on the Cloud. There are new models like BLOBs, Queues and flexible entities...
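The queue model mentioned in that quote is what removes affinity between tiers: the web tier enqueues work and returns, and any number of workers drain the queue, so either tier can be scaled independently. A minimal sketch of the pattern using Python’s standard library as an in-memory stand-in for a cloud queue service:

```python
import queue
import threading

work = queue.Queue()  # stand-in for a cloud queue service
results = []

def worker():
    # Workers know nothing about the web tier; they just drain the queue.
    while True:
        job = work.get()
        if job is None:          # sentinel: shut this worker down
            break
        results.append(job * 2)  # stand-in for real processing
        work.task_done()

# "Scale out" the worker tier simply by starting more workers.
workers = [threading.Thread(target=worker) for _ in range(3)]
for t in workers:
    t.start()

for job in range(5):             # the "web tier" only enqueues
    work.put(job)
work.join()                      # wait until all jobs are processed

for t in workers:
    work.put(None)               # one sentinel per worker
for t in workers:
    t.join()

print(sorted(results))  # [0, 2, 4, 6, 8]
```

In a real cloud deployment the in-memory queue would be replaced by a durable queue service, but the shape of the decoupling is the same.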

Although everyone talks about moving to the cloud, the question of which specific cloud type is more appropriate for a given company is rarely brought up. It is also not immediately clear what kind of application rearchitecting is required for a successful cloud computing implementation.

You’ll probably find the "Cloud Application Migration" article, which proposes to answer the question "What are the real challenges that make organizations take a cautious, wait-and-watch approach to cloud adoption?", worth reading.

<Return to section navigation list> 

Windows Azure Platform Appliance 

No significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

Tanya Forsheit posted FAQ on the "BEST PRACTICES Act" - Part One to the Information Law Group blog on 7/22/2010:

Congressman Bobby Rush has introduced a new data privacy bill to Congress known as the “Building Effective Strategies to Promote Responsibility Accountability Choice Transparency Innovation Consumer Expectations and Safeguards” Act (a.k.a. “BEST PRACTICES Act” or “Act”). Congressman Rush has been active in the data security/privacy legislation space. In December of 2009, his “Data Accountability and Trust Act” (or “DATA Act”) passed the House of Representatives. While DATA focused more on data security and breach notice, the stated focus of the Act is as follows:

To foster transparency about the commercial use of personal information, provide consumers with meaningful choice about the collection, use, and disclosure of such information, and for other purposes.

This Act comes on the heels of the Boucher Bill, which also represents a comprehensive data privacy approach (for more information on the comments to the Boucher Bill you can look here and here).

We have put together a summary of the Act in “FAQ” format. In Part One we look at some of the key definitions, requirements concerning transparency, notice and individual choice, mandates around accuracy, access and dispute resolution, and finally data security and data minimization requirements under the Act. Part Two will focus on the “Safe Harbor” outlined in the Act, various exemptions for de-identified information and application and enforcement of the Act.

What kinds of entities does the Act apply to?

The Act applies to “covered entities,” which means any person engaged in interstate commerce that collects or stores data containing covered information or sensitive information. Covered entities do not include any divisions of Federal or state government or certain specified businesses that meet specified criteria (e.g., store fewer than 15,000 records; collect fewer than 12,000 records in a year; see the definition of “covered entity” for more detail).

Observations: Unlike some bills (e.g. the DATA Act) that limit jurisdiction to only those entities regulated by the FTC, the scope of entities regulated by the Act is broad and goes to the limits of Federal jurisdiction. Significantly, it does not appear that the Act makes the traditional distinction between data owner/controller and service provider/processor. As such, service providers may be subject to the Act as a result of collection or storage of covered/sensitive information on behalf of their customers.

What kinds of information does the Act regulate?

The Act regulates “covered information” and “sensitive information.”

“Covered information” includes such information elements as first name or initial and last name, postal address, email address, telephone/fax number, government issued identification numbers (e.g. tax ID, driver’s license number, etc.), financial account numbers, credit/debit card number, access codes/passwords, “unique persistent identifiers” used to collect, store or identify information about a specific individual or create a profile (e.g. customer numbers, IP addresses, unique pseudonym), and any information collected, stored, used or disclosed in connection with the foregoing information. Section (B) of the definition also lists a number of important exclusions related to business related information.

“Sensitive information” means information associated with covered information of an individual that relates directly to the individual’s medical history or health, race or ethnicity, religious beliefs/affiliations, sexual orientation/behavior, financial information (income, assets, liabilities, etc.), a person’s geolocation information, unique biometric information and social security number.

Observations: The definitions of information regulated under the Act go far beyond any U.S. definition of personally identifiable information. For example, the “traditional” definition of PII normally requires first name and last name combined with additional information such as financial account numbers. The definition of “covered information” in the Act does not require such a combination – each data element stands on its own and may not need to be tied to or identify a specific person. If I, as an individual, had an email address, it would appear to satisfy the definition of covered information even if my name was not associated with it. The definition of “sensitive information” echoes similar definitions under the EU Data Protection Directive and other laws based on an EU model. Interestingly, however, it also specifically includes geolocation information (which is becoming a larger privacy issue with the prevalence of mobile computing and smartphones). …

Tanya’s detailed FAQ continues with answers to:

  • How does the Act promote transparency about the commercial use of information?
  • How must the notice required under the Act be provided?
  • Is notice required for “in-person transactions”?
  • Are covered entities required to get consent from individuals for the collection and use of covered information?
  • Are covered entities required to get consent from individuals for the disclosure of covered information to third parties?
  • Are covered entities required to get consent from individuals for the collection, use or disclosure of sensitive information?
  • Does the Act put any limitations or restrictions on behavioral advertising or tracking an individual’s Internet browsing activities?
  • Are there any exceptions to the consent requirements of the Act?
  • Do covered entities have any obligation concerning the accuracy of information they collect, assemble or maintain?
  • Does the Act require the covered entity to provide individuals with access to covered information or sensitive information?
  • Is there any time frame by which a covered entity must respond to a permitted access, correction or amendment request?
  • Does the Act impose any data security requirements with respect to covered information or sensitive information?
  • Does the Act require covered entities to conduct any risk assessment with respect to its information handling practices?
  • Does the Act require any audits or assessments?
  • Does the Act limit how long a covered entity can retain covered/sensitive information?

Note that this bill was introduced to, not passed by, Congress. Check out the Recent Updates column for more coverage by the group of cloud-related security and privacy issues.

<Return to section navigation list> 

Cloud Computing Events

David Makogon will present Introduction to Windows Azure to the Washington DC DotNet Users Group (DCDNug) on 7/28/2010 6:30 PM EDT at 2400 N Street NW, Washington, DC 20037:

Join us for David Makogon’s Introduction to Windows Azure presentation.

6:30 PM - 6:45 PM Pizza! Beverages! Networking!
6:45 PM - 7:00 PM Sponsor's time
7:00 PM - 8:30 PM  Introduction to Windows Azure -  David Makogon
8:30 PM - 8:45 PM Raffle for some great giveaways!

Microsoft XML guru Jean Paoli reports on his keynote to O’Reilly Media’s OSCON 2010 conference in Interoperability Elements of a Cloud Platform Outlined at OSCON posted to the Interoperability @ Microsoft blog on 7/22/2010:

OSCON Keynote Jean Paoli

This week I’m in Portland, Oregon attending the O’Reilly Open Source Convention (OSCON). It’s exciting to see the great turnout as we look to this event as an opportunity to rub elbows with others and have some frank discussions about what we’re collectively doing to advance collaboration throughout the open source community. I even had the distinct pleasure of giving a keynote this morning at the conference.

My presentation, titled “Open Cloud, Open Data,” described how interoperability is an essential component of a cloud computing platform. I personally think it’s critical to acknowledge that the cloud is intrinsically about connectivity. Because of this, interoperability is really the key to successful connectivity.

We’re facing an inflection point in the industry, where the cloud is still in a nascent state, and we need to focus on removing the barriers to customer adoption and enhancing the value of cloud computing technologies. As a first step, we’ve outlined what we believe are the foundational elements of an open cloud platform.

They include:

  • Data Portability:
    How can I keep control over my data?
    Customers own their own data, whether stored on-premises or in the cloud. Therefore, cloud platforms should facilitate the movement of customers’ data in and out of the cloud.
  • Standards:
    What technology standards are important for cloud platforms?
    Cloud platforms should support commonly used industry standards so as to facilitate interoperability with other software and services that support the same standards. New standards may be developed where existing standards are insufficient for emerging cloud platform scenarios.
  • Ease of Migration and Deployment:
    Will your cloud platforms help me migrate my existing technology investments to the cloud and how do I use private clouds?
    Cloud platforms should provide a secure migration path that preserves existing investments and should enable the co-existence between on-premise software and cloud services. This will enable customers to run “customer clouds” and partners (including hosters) to run “partner clouds” as well as take advantage of public cloud platform services.
  • Developer Choice:
    How can I leverage my developers’ and IT professionals’ skills in the cloud?
    Cloud platforms should offer developers a choice of software development tools, languages and runtimes.

Through our ongoing engagement in standards and with industry organizations, open source developer communities, and customer and partner forums, we hope to gain additional insight that will help further shape these elements. We’ve also pulled together a set of related technical examples which can be accessed to support continued discussion with customers, partners and others across the industry.

Interoperability Elements of a Cloud Platform

Click image for full size version.

In addition, we continue to work with others in the industry to deliver resources and technical tools to bridge non-Microsoft languages — including PHP and Java — with Microsoft technologies. As a result, we have produced several useful open source tools and SDKs for developers, including the Windows Azure Command-line Tools for PHP, the Windows Azure Tools for Eclipse and the Windows Azure SDK for PHP and for Java. Most recently, Microsoft joined Zend Technologies Ltd., IBM Corp. and others for an open source, cloud interoperability project called Simple API for Cloud Application Services, which will allow developers to write basic cloud applications that work in all of the major cloud platforms.

Available today is the latest version of the Windows Azure Command Line Tools for PHP to the Microsoft Web Platform Installer (Web PI). The Windows Azure Command Line Tools for PHP enable developers to use a simple command-line tool without an Integrated Development Environment to easily package and deploy new or existing PHP applications to Windows Azure. Microsoft Web PI is a free tool that makes it easy to get the latest components of the Microsoft Web Platform as well as install and run the most popular free web applications.

On the data portability front, we’re also working with the open source community to support the Open Data Protocol (OData), a REST-based Web protocol for manipulating data across platforms ranging from mobile to server to cloud. You can read more about the recent projects we’ve sponsored (see OData interoperability with .NET, Java, PHP, iPhone and more) to support OData. I’m pleased to announce that we’ve just released a new version of the OData Client for Objective-C (for iOS & MacOS), with the source code posted on CodePlex, joining a growing list of already available open source OData implementations.
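Part of what makes OData approachable from any platform is that a query is just an HTTP GET with $-prefixed system query options such as $filter and $top. A small illustrative sketch of building such a URL; the service root is hypothetical:

```python
from urllib.parse import quote

def odata_query(service_root, entity_set, **options):
    """Build an OData query URL; options map to $-prefixed query options."""
    parts = ["$%s=%s" % (k, quote(str(v))) for k, v in options.items()]
    return "%s/%s?%s" % (service_root.rstrip("/"), entity_set, "&".join(parts))

url = odata_query("https://example.org/odata.svc", "Products",
                  filter="Price gt 20", top=10)
print(url)
# https://example.org/odata.svc/Products?$filter=Price%20gt%2020&$top=10
```

Any HTTP client on any platform can issue this GET and parse the Atom or JSON feed that comes back, which is the crux of OData’s portability story.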

Microsoft’s investment and participation in these projects is part of our ongoing commitment to openness, from the way we build products, collaborate with customers, and work with others in the industry. I’m excited by the work we’re doing, and equally eager to hear your thoughts on what we can collectively be doing to support interoperability in the cloud.

Jean Paoli is general manager for Interoperability Strategy at Microsoft. He wrote the Foreword to my Introducing Microsoft Office InfoPath 2003 title for Microsoft Press.

Krishnan Subramanian casts a jaundiced eye at OSCON Week: Microsoft And Interoperability in his 7/22/2010 post to the CloudAve blog:

Using Microsoft and Interoperability in the same sentence makes me chuckle every time. Well, partly it could be due to my open source bias and I am not denying that. Of late, Microsoft has been making half-hearted attempts to embrace open source. I use the term "half-hearted" here because they do show some serious willingness to embrace open source on one side but they also indulge in open source bashing and threats from the CEO level onwards. In short, I am of the opinion that Microsoft wants open source their way rather than the real open source way.

Having made my thought process clear about Microsoft's willingness to embrace the idea of "openness", I thought I would share my opinions on their interoperability/portability claims. Ever since they lost the first-mover advantage to Amazon in cloud computing, Microsoft has been touting openness and interoperability with their Azure platform. They preach interoperability almost in the same way Richard Stallman preached software freedom, with a zeal. Now, they have even come out with a website for interoperability standards in the cloud. Having lost the market share to Amazon, it is only natural that Microsoft sprinkles "open" and "interoperability" in their campaign. We have seen again and again how companies falling far behind the market leader embrace openness as a tool to fight back. I guess Microsoft's love for interoperability follows the same trend.

In fact, I am excited about Microsoft talking about openness and interoperability. As open source evangelists, many of us have long worked hard to make this happen. An interoperable world led by Microsoft is very good for the industry, especially the users. It will lead to increased innovation while empowering the users of cloud services with more freedom. However, I am still not happy with all the interoperability talk by Microsoft. Yes, I agree that we can now run Java, PHP and other open source applications on top of Windows Azure. But is it the true meaning of interoperability?

True interoperability is always two-way traffic. What is happening with Microsoft's cloud initiative is that we can take all sorts of applications from other platforms to Windows Azure but we cannot do the same with .NET applications running on Azure. Yes, we can port some of the .NET applications to certain versions of Windows server operating systems running on Amazon or other clouds. For me, it is not openness. It is not the right way to do interoperability. All Microsoft is doing is ensuring that web servers like Apache and database servers like MySQL run seamlessly on the Azure platform. It is not interoperability; it is plain old opportunism. Well, opportunism is not wrong. In fact, it is even necessary for success in the business world. However, I will seriously respect their interoperability claims only if they offer me an easy way to seamlessly port my .NET applications to the Linux platform. Maybe they should work even more closely with the folks at the Mono project to make this happen. When the day comes that I am able to move all my applications (whether based on open source frameworks or the .NET Framework) to any platform running on any provider's cloud, I will definitely agree with Microsoft's interoperability claims. Till then, it is just a tactic to get more users on to their service.

Having said that, I want to clearly state that I don't have any problem with Microsoft's closed approach. They have every right to take that approach and, as long as there is a market for it, it is even a smart way of doing business. My problem is not with their proprietary, closed way of doing technology. My problem is only with their claims about interoperability. Interoperability is never a one-way street.

The Windows Azure Platform Hub posted a workaround to a bug in the signup for Windows Azure Access Tickets at WPC 2010 in WPC Content and Access Tickets of 7/22/2010:

[Attention] If you received an access ticket for the Windows Azure Platform while at WPC and the login doesn’t contain a complete email address, please add to the login listed on the card.

Right now most of the Microsoft sales and marketing folks are off at MGX (the annual Microsoft sales meeting in Atlanta), but while we’re all cooped up in another convention center, I keep hearing from partners who are kicking off Windows Azure projects as a result of what they heard last week at WPC. As a result of this momentum I’ve posted PowerPoint presentations and links to the session videos that you may find interesting.

Dion Hinchcliffe’s Connecting the Dots Between the Cloud and Enterprise 2.0 post of 7/21/2010 discusses his presentation to O’Reilly Media’s OSCON conference:

Yesterday in Portland at OSCON’s Cloud Summit I spoke about major emerging trends in business, IT, and the Web. Specifically, I explored how Enterprise 2.0, Cloud Computing, and something known as Service-Oriented Architecture (SOA) have converged on top of the same “problem space” to become the essential fabric for how we solve the business problems in our organizations.

At first, none of these topics might seem mainstream to the lay businessperson. Nothing could be further from the truth, and most of us are impacted by this every day. For years the rate of improvement in information technology in the business world has been falling farther and farther behind the rest of the world. Application backlogs and unmet needs are common, while the centralized nature of most IT departments makes it clear that only so much is possible, even as the rate of technological change grows. I want to be clear that the many hard-working people in the IT trenches are not at fault; it’s largely due to the archaic model for how we apply technology to business.

The real purpose of my talk was to examine how much we’ve learned about how we use modern network technologies today to achieve business objectives. As an industry, we recognized the importance of interconnected systems just over a decade ago and Service-Oriented Architecture (SOA) was launched as a top-level business initiative in many organizations around the world. The goal was to systematically reap the benefits of easy interoperability between our business systems, turning our applications into reusable platforms, and drive innovation by fostering unintended consequences that create significant new business value. It largely didn’t happen for reasons that are much clearer now but weren’t then.

Pulling Together The Threads: Cloud Computing, Enterprise 2.0, and SOA

Cloud and E2.0: Connecting the Dots – OSCON Cloud Summit – 2010

While we were seeking the best way to realize SOA, and often having a very hard time of it, spending billions globally in the process, the Web sped ahead and began to discover many of the solutions to the challenges of opening and connecting our systems. This was true both technically and from a business perspective. Among many innovations, the Web went on to discover the power of Open APIs, which provided a successful model for large-scale SOA. The Web also became social and primarily user-generated, identifying powerful new models for driving both distribution and consumption that culminated in a global remaking of how we communicate. Along the way, it also became clear that social computing wasn’t just another communication paradigm, it was an entirely new way to think about how we relate to our business, data, and ourselves.

My premise is that the Web (and the full realization of it as a source of data, services, applications, and people, being referred to as “the cloud”) has become our Global SOA. It’s the world’s most effective example of SOA, with countless API providers, hundreds of millions of data creators (us), and an ecosystem of data that can be accessed and made sense of with tools such as search and analytics. This is in stark contrast to the enterprise today where, as I point out in my presentation, “Most of the vast repositories of data in enterprises is not accessible in any practical manner by most people.”

This is where Enterprise 2.0 comes in, with an emergent vision of a federated knowledge ecosystem that is fundamentally open and social (since that is the communication method we’ve essentially adopted globally today). Participation, openness, and self-service are some of the intrinsic elements of Enterprise 2.0 and why it’s a key part of the Social Business vision. It is now likely that social tools will ultimately be the dominant model for how we work together in the business world, and they are deeply affected by both cloud computing and SOA. You can view my slides above in Slideshare for a more detailed walk-through of my thinking and opinions on this topic.

View more presentations from Dion Hinchcliffe.

John Treadway’s Cloudy View from HostingCon post of 7/21/2010 analyzes cloud content at the HostingCon 2010 conference:

I spent a couple of days in Austin at HostingCon, meeting with a broad cross-section of the hosting community. Rackspace CTO John Engates and lots of other “Rackers” were there to promote OpenStack. Most of the other big mass-market shared hosters were there too – like The Planet, and others. Then there were lots of little guys: small hosting resellers, guys with a couple thousand feet of space inside a larger data center, etc.

A good 40% of the conference content was about cloud. But for people who have been in the cloud business for the past few years, it might have felt a lot like 2007: lots of very basic information being shared and discussed, and a whole bunch of people who don’t know or don’t want to know. I stopped by the cPanel booth in the expo. cPanel is the #1 hosting control panel for the shared hosting business, with a gazillion hosters using their stuff. I asked one of their executives whether they were going to make it easy for their customers to move to a cloud model. “Customers are asking us, but then we ask them what they mean by cloud and as soon as they can give us a straight answer maybe we’ll do that,” was his reply. Okay, that’s a failure to lead if I ever saw one.

Some of the guys who do have clouds, such as those offering vCloud Express, are seeing substantial uptake in this new (to them) market. Clearly we should expect a new wave of clouds to start appearing in the next few months. The average revenue per user (ARPU) of cloud is so much higher than shared hosting that they can’t let it pass them by. However, most of these guys are going to struggle to get there, given a general lack of capability to develop what they need to make this work (none of the “cloud stack” solutions on the market today is as plug & play as cPanel; they require a lot of knowledge, skill and investment to get running). Uh, opportunity calling?

I did learn a lot about the business models these guys are used to, which are somewhat different from what we’re all comfortable with in the cloud space – a flat fee per user / per module / per capability used, per month, is a good summary. Basically, you make money when the hosting guys are selling, not when they have servers that are ready but not being used. The $xxx / year / socket model won’t work for these guys.

Another big part of this market is the hosting reseller business – something for the cloud guys to consider. A little host called SingleHop actually ran a session about reselling their cloud, and ReliaCloud from MN was looking to do the same. How many ways can you slice and dice it? That brings me to a point about VMware and their VSPP program: it won’t fly for long in this market at current prices. There’s not enough margin left for the reseller business – which is a huge issue here.

So, that’s about it from HostingCon 2010.

Ping Identity’s Cloud Identity Summit’s final day is 7/22/2010, but John Fontana promises video archives of sessions will be available next week:


Lawrence Walsh posted Microsoft Cloud Strategy: New Channel Chief Outlines Transformation Plan to the Channel Insider blog on 7/13/2010:

Microsoft Channel Chief Jon Roskill provided solution providers at the Microsoft Worldwide Partner Conference with insights into the tools, training, support and programs available for transitioning to cloud-based businesses.

For much of the Microsoft Worldwide Partner Conference, executives from CEO Steve Ballmer down have talked about nothing but cloud computing and what Microsoft is doing to deliver Web-based applications and platforms. Answers for how Microsoft will work with partners came this morning from new channel chief Jon Roskill, who laid out specifics of the aid and support available to partners making the channel transformation.

“The transition that we’re going through now has never been clearer. With each transition, we’ve gained more opportunity. … We’re going to be successful in the transition to the cloud, and we’re going to do it together,” Roskill told thousands of Microsoft partners packed into the Verizon Center in Washington, D.C.

Roskill, who recently assumed the post of corporate vice president of worldwide channels from Allison Watson, gave partners a broad but clear overview of Microsoft’s vision for an enhanced channel network that supports partners in the cloud. That strategy has four pillars: business planning, branding, customer support tools and business enablement tools.

Microsoft is providing partners with business planning tools that will help develop solutions and strategies for capturing customers in different market segments (SMB to enterprise). It’s also launching Partner Profitability Modeler, an online tool for determining the financial position of new cloud computing opportunities and estimating three-year profit/loss, revenue and investment costs.

To accelerate cloud computing adoption, Microsoft is providing partners with its new Cloud Essentials Pack, a one-year subscription for training on Microsoft cloud solutions, technical and sales support, and licenses for internal use of cloud applications.

Microsoft is releasing a series of Web-based management tools through which partners can order and administer cloud products for their customers, as well as a sales dashboard through which they can monitor account activity.

Microsoft considers partner use of its current generation of applications—on-premises and cloud—essential to its cloud strategy. Roskill made special note of the necessity that partners use the latest versions of Microsoft products to both gain familiarity with them and demonstrate their commitment to customers.

“We want you to be running on our latest software. If we’re all running on the latest and greatest, you’re going to do a better job going out and evangelize,” Roskill said.

Microsoft is providing qualified partners with up to 250 internal-use licenses for BPOS and CRM Online, and will provide similar offerings for Windows Azure and Intune in the future. 

To demonstrate cloud competency, Microsoft has created the Cloud Partner Badge for use in branding and marketing solution provider businesses. It’s also removing “network” from the “Microsoft Partner Network” logo, giving partners clearer branding association.

Microsoft is developing training and support programs to help partners better position their businesses for reselling and supporting cloud products, such as BPOS.

“We believe this is going to allow partners to quickly and effectively position BPOS solutions as they’re going through trial and deployment phases, and help customers have a good experience out of the box,” he says.

For support, Microsoft is providing partners with up to 20 hours of free online support. Qualified Cloud Accelerate Partners will receive 40 to 160 hours of free telephone technical support for cloud implementations involving 500 to 2,500 seats.

Admittedly, this is old news but it’s a concise summary of the events at WPC 2010. You might also want to check out Microsoft's Cloud Showdown of 7/16/2010 by the Channel Insider staff.

Darryl K. Taft’s Microsoft Pumps Windows Azure as Top Cloud Choice for Developers post to eWEEK’s Application Development blog tackles WPC 2010 events from a .NET developer’s standpoint:

Microsoft’s Windows Azure is for developers. The company's “general purpose” cloud computing platform offers developers more flexibility and choice than any other at the developer tools level, Microsoft officials said.

WASHINGTON – Microsoft’s Windows Azure is for developers.

Well, that is what the head of Microsoft’s Windows Azure team said. Microsoft has positioned Windows Azure as the “general purpose” cloud platform. At the Microsoft Worldwide Partner Conference (WPC) here, key Microsoft officials delivered targeted messaging about the Windows Azure platform as the more mainstream, general-purpose cloud platform as compared to competing offerings such as Amazon Web Services, Google App Engine and Salesforce.com’s solutions, among others.

“At Microsoft we’re pulling platform as a service (PaaS) and infrastructure as a service (IaaS) together in Windows Azure,” said Bob Muglia, president of the Server and Tools Business at Microsoft. “Windows Azure is the world’s first general purpose cloud platform,” he added.

In an interview with eWEEK, Amitabh Srivastava, senior vice president of Microsoft’s Server and Cloud Division, explained Microsoft’s positioning versus cloud competitors such as Google or Amazon Web Services.

“Google is a platform as a service, but it’s only restricted to two languages – Python and Java. You have to fit in with the way they do things. We’re being general purpose. Amazon is an infrastructure as a service; they provide no tools support. How you develop your applications is your concern. You’re on your own. We support any language and multiple frameworks. We provide a rich ecosystem of technology or you can use open source software like MySQL or Apache. Our approach is we don’t put any shackles on the developer.”

In a separate meeting with eWEEK at WPC, Robert Wahbe, Microsoft’s corporate vice president of Server and Tools marketing, amplified the differences between Windows Azure and competitors, saying, “We allow you to use any language to build your apps, including native… You can use C and C++ to build apps for Windows Azure. You won’t find that in many other platforms. We’re the general purpose guys.”

Moreover, Wahbe said, Salesforce.com, another Microsoft cloud computing competitor, also is limited to a particular language in Java. “And their collaboration with VMware means they can allow developers to manage virtual machines, but I don’t think developers want to have to do that,” he said.

Meanwhile, Srivastava, who was part of the initial “Red Dog” development team that created the Windows Azure cloud computing platform, said from its inception Windows Azure targeted developers.

“When we were developing Azure from day one it was done for developers,” he said. “You have to allow developers to bring their skills, their current set of skills, to the cloud. So we said developers should get to choose the language they want to use. You can use any environment you want. You can use Visual Studio or you can do the entire development in Eclipse. You can’t pigeonhole developers into one or two languages or one or two frameworks. Just because our lineage is Windows Server doesn’t mean we will restrict you to using C# or a Microsoft language.”

In addition, to make things easier for developers, Srivastava said Microsoft has encapsulated the core development concepts for Windows Azure into a software development kit (SDK). That means “a developer, before taking an application to the cloud, can develop their application and run it on the PC.” This enables developers to test their apps and to single out bugs that might not otherwise be caught before the application is deployed. “You’re not going to catch all the bugs, but you’ll catch many or even most of them. Debugging in a cloud environment is hard. So we wanted to enable a developer to take their application to market as fast as they can.”

Srivastava said the technology that became the Windows Azure SDK started as an internal tool used by the Red Dog team to build and test applications for the cloud platform. It was known as Red Dog in a box, he said. “Red Dog in a box is the thing we used ourselves.”

However, the “Red Dog in a box” Srivastava speaks of in this context is not to be confused with the new Windows Azure appliance that Microsoft announced at the WPC. At the WPC event, Microsoft announced the Windows Azure platform appliance, the first turnkey cloud services platform for deployment in customer and service provider datacenters. Dell, eBay, Fujitsu and HP are early adopters of a limited production release of the appliance, Microsoft said.

The appliance is an enabler for developers, particularly for Microsoft partners with development expertise, Srivastava said. “The appliance allows partners to provide customers with their own clouds and to build their own value-add stack on top of it.”

In a statement on the company’s new cloud appliance, Microsoft’s Muglia said the software giant is “the first and only company that offers customers and partners a full range of cloud capabilities and the flexibility to deploy these services where and how they wish — whether that is with Microsoft, a service provider, in a customer datacenter or a combination of all three. Today’s introduction of the Windows Azure platform appliance ushers in a new era of cloud computing, and we are looking forward to working with our partners to bring all the benefits of the appliance to our customers and the business technology industry.”

Microsoft officials said the new Windows Azure platform appliance combines Windows Azure and Microsoft SQL Azure with Microsoft-specified hardware, enabling on-demand IT capacity and faster delivery of new applications. Large enterprises and service provider partners deploying the appliance in their datacenters will have the benefits of the cloud services that Microsoft offers today, while maintaining physical control of location, regulatory compliance and data. In addition, Muglia disclosed new details regarding Microsoft code name “Dallas,” an information service powered by the Windows Azure platform that provides developers and information workers access to third-party premium data sets and Web services.

Al Hilwa, program director for applications development software at market research firm IDC, said he believes Microsoft is moving ahead at an aggressive pace with its push into the cloud. Hilwa told eWEEK:

“Unlike with mobile, Microsoft is mobilizing much more quickly with cloud. The cloud market is IT based and enterprises move much more slowly than consumers so there is more time to build a deliberate strategy. Microsoft is targeting cloud on multiple fronts. On the one hand offering its existing Office applications to the cloud through a scalable partner model, on the other hand they are targeting application development either to traditional architectures or to new elastic cloud architectures with Azure. They are covering all the bases in a way that only they can because of their on-premises strength. The impact on their revenue in the long run is still to be determined, but they have definitely identified this area as one where failure is not an option.”

Another WPC 2010 background piece but the quotes are worth preserving.

Kevin McLaughlin delivers the last WPC 2010 backgrounder for OakLeaf’s coverage in page 2 of his Microsoft Unfurls Its Multi-Faceted Cloud Message post of 7/20/2010 to the ChannelWeb blog:

In addition to channel incentives, Microsoft (NSDQ:MSFT) also showed a willingness to give up some control over its Windows Azure platform to help address companies’ fears about putting sensitive data in the cloud. The Windows Azure appliance, slated to arrive later this year, lets companies run Azure from within their own data centers.

The Azure appliance is aimed at large enterprises, so it's not something VARs are going to be adding to their portfolios anytime soon. Still, the significance of Microsoft giving customers the freedom to run Azure any way they wish wasn't lost on solution providers. "I think Microsoft has actually figured out the cloud, and that wasn't really the case in the past few years," said Joseph Giegerich, managing partner with Gig Werks, a Yonkers, N.Y.-based solution provider. "Their message of choice is now clear with Azure on- and off-premise, and that's just as important as outlining the skills partners will need to make the transition."

But even as Microsoft goes all-in with cloud computing, there's a realization that not all partners are grasping the value proposition of the cloud, mainly because they're terrified of the changes it brings. As it has done at past WPCs, Microsoft offered gentle-yet-firm reminders that the cloud is an industry-wide shift, and not one that Microsoft is foisting upon partners of its own accord.

"The cloud does change and makes us reinvent our business models, yours and ours," Microsoft CEO Steve Ballmer said in his WPC keynote. "But it's a change that's inevitable. It's a change that allows us all to deliver new value."

Ric Opal, vice president of Peters & Associates, an Oakbrook Terrace, Ill.-based solution provider, says that despite the trepidation some partners feel about moving to the cloud, most realize that the path to future revenue isn't going to be the one they've traditionally followed. "As a partner, you shouldn't plan your business around the annuity model. You have to go get the services that are there," he said.

To assuage partners' fears, Microsoft is also helping them identify cloud opportunities. Microsoft's new Business Builder for Cloud Services helps partners find cloud deals in small and medium companies as well as in the public sector. The new Cloud Profitability Modeler business-planning tool helps solution providers develop a three-year forecast of revenue, profits and losses, and other metrics for building a cloud business.

Microsoft says these tools are helping demystify the cloud and defuse partners' fears. "Partners want to be all-in but need the road map," said Flinders. "Many partners are thinking about P&L as they figure out how to transition to the cloud. Our job is to support them, and help take some of the burden off them."

Cloud computing has been on the agenda at the past several Microsoft Worldwide Partner Conferences, but never has the message resonated so strongly with partners as it did at this year's event. The incentives, tools, and road maps are now in place for partners, and there's a definite sense that Microsoft's cloud vision is gaining momentum.

As this freight train chugs away from the station, it's getting close to now-or-never time for partners to jump on board. "Microsoft's cloud message is now as clear as it's going to get. Partners that retool themselves and can shift to the new model are going to have plenty of opportunities," said Giegerich.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

David Linthicum asserts “Rackspace's OpenStack could signal a new race to open up cloud computing technology” in a preface to his Rackspace's bold move to open the cloud post of 7/22/2010 to InfoWorld’s Cloud Computing blog:

Rackspace's recent OpenStack announcement is a strong, if familiar, open source play. Given Rackspace's place in the market, open-sourcing its cloud code provides strong differentiation from Amazon, which has become the de facto standard for storage and compute services in the cloud.

And with NASA contributing the code that runs its own Nebula cloud platform, OpenStack has the potential to capture the increased "rate of innovation" that can result from open-sourcing its code.

OpenStack will launch in two phases. The first offering, the Cloud Files-based OpenStack Object Storage, is currently available. The second piece, a compute-provisioning engine based on Cloud Servers and Nebula technologies, will be released later this year.

We've seen this method utilized with on-premises offerings for years. If you can't beat them, go open source. Here, appending the term "open source" to the cloud helps eliminate lock-in fears among cloud users.

But beware. Open portability means both cloud-to-cloud and cloud-to-on-premises. Though we have the promise of portability between compatible cloud platforms and among private clouds and traditional systems, we have no assurances that this portability will be viable in the future.

Cloud providers have been building solutions on open source offerings such as Xen and LAMP for years. Rackspace's move adds a new chapter to cloud computing's open source story -- one that will certainly draw some interest away from Amazon.

That said, questions remain regarding how such a move will impact IT organizations. Until those questions are answered, I remain hopeful, though with a measure of pragmatic skepticism.

Chris Czarnecki proposes Avoiding Vendor Lock-in In the Cloud in this 7/21/2010 post to the Learning Tree’s Cloud Computing blog:

One of the concerns organisations have of moving to the cloud is becoming locked in to a particular cloud vendor. It is refreshing to see a number of initiatives and projects that are looking to provide solutions that avoid vendor lock-in.

One project recently announced is OpenStack. Aimed at creating an open source cloud operating system, the project is based on code that Rackspace uses for its own cloud offering as well as the code that is the basis for the NASA Nebula cloud. Although at an early stage, the exciting prospect offered by OpenStack is for organisations to build their own private clouds and integrate them seamlessly with other clouds – maybe from partner organisations or other public cloud vendors, including Rackspace. The nearest offering to this so far has been the Eucalyptus cloud, which has been built with an interface compatible with Amazon’s; its professional version allows the integration of private clouds with the Amazon cloud. Being able to do such integration with a wide choice of providers and collaborating organisations is certainly an attractive proposition.

Another initiative that takes an alternative approach to vendor lock-in is the Unified Cloud Interface project. This approach aims to create an open and standardized cloud interface to unify all vendors’ cloud products. An example of it working with the Enomaly Elastic Compute Platform and Amazon EC2 is available online here.

Whilst teaching the Cloud Computing course for Learning Tree, I get to meet a lot of people adopting or considering adopting cloud computing. Along with security, vendor lock-in is a major concern. It is also one of the concerns I discuss in detail in a white paper I put together recently; you can get a copy here. To learn about the consequences of these new initiatives and what they may mean for your organisation, why not attend the Cloud Computing course or, if you are time constrained, the half-day overview.

Ignacio M. Llorente offers “Our Commitment to an Open Source Cloud Ecosystem” in his The OpenNebula Position on the OpenStack Announcement post of 7/22/2010:

As many of you know, a new open-source cloud platform, OpenStack, was recently announced. Here at OpenNebula, we think this is a very exciting development in the cloud community, and we're glad to see so many major players coalescing around an open-source solution. However, we have also been concerned by all the high-profile announcements and opinion pieces that describe OpenStack as the first initiative for the definition of an open architecture for IaaS cloud computing and a "real" open-source project, criticizing some existing open-source cloud projects as being "open-core" closed initiatives (in some cases conflating "open-core" with "having an Enterprise edition"), and pointing out their supposed lack of extensibility and inability to efficiently scale to manage tens of thousands of VMs. This is why we have decided to write this post: to clearly state our position and avoid misunderstandings, particularly with our growing community of users.

OpenNebula is and always will be 100% Apache-licensed Open-Source Software
OpenNebula was first established as a research project back in 2005, with its first public release in March 2008. We have a strong commitment to open source, OpenNebula being one of the few cloud management tools available under the Apache license. The Apache license allows any cloud and virtualization player to innovate using the technology without the obligation to contribute those innovations back to the open source community (although we encourage that this work be contributed back to the community). This is the case for many third-party commercial products that embed OpenNebula.

OpenNebula is NOT "Open Core"
C12G Labs is a new start-up that has been created to provide the professional integration, certification and technical support that many enterprise IT shops require for internal adoption, and to allow the OpenNebula project to not be tied exclusively to public financing (research grants, etc.), contributing to its long-term sustainability. Although C12G Labs does provide an Enterprise edition of OpenNebula, all software extensions and patches created by C12G (distributed in the Enterprise Edition of OpenNebula to support customers and partners) are fully contributed back to OpenNebula and its ecosystem under an OSI-compliant license.

So OpenNebula is NOT a feature- or performance-limited edition of the Enterprise version. C12G Labs contributes to the sustainability of the community edition and is committed to enlarging the OpenNebula community. C12G Labs dedicates a share of its own engineering resources to supporting and developing OpenNebula, and so to maintaining OpenNebula's position as the leading and most advanced open-source technology for building cloud infrastructures.

OpenNebula is an Open-Source Community

The OpenNebula technology has matured thanks to an active and engaged community of users and developers. OpenNebula development is driven by our community, to support the most-demanded features, and by the international research projects funding OpenNebula, to address the demanding requirements of several business and scientific use cases for cloud computing. We have also created the OpenNebula ecosystem, where related tools, extensions and plug-ins are available from and for the community.

OpenNebula is a Production-ready and Highly-scalable Technology
OpenNebula is an open-source project aimed at developing a production-ready cloud management tool for building any type of cloud deployment, in either scientific or business environments. OpenNebula releases are tested to assess their scalability and robustness in large-scale VM deployments and under stress conditions. Of course, you don't have to take our word for it: several users have reported excellent performance results managing tens of thousands of VMs. We have been encouraging some of these users to write on our blog about their experiences with OpenNebula. So far, you can read this recent blog post on how OpenNebula is being used at CERN, with more user-experience blog posts to follow soon.

OpenNebula is a Flexible and Extensible Toolkit
Because no two datacenters are the same, OpenNebula offers an open, flexible and extensible architecture, interfaces and components that fit into any existing data center, and enables integration with any product and service in the cloud and virtualization ecosystem and any management tool in the datacenter. OpenNebula is a framework: you can replace and adapt any component to work efficiently in any environment.

OpenNebula is Hypervisor Agnostic and Standards-based
OpenNebula provides an abstraction layer independent of the underlying services for security, virtualization, networking and storage, avoiding vendor lock-in and enabling interoperability. OpenNebula is not only built on standards, but has also provided reference implementations of open community specifications, such as the OGF Open Cloud Computing Interface (OCCI). OpenNebula additionally leverages the ecosystems being built around the most popular cloud interfaces: Amazon AWS, OGF OCCI and VMware vCloud.

OpenNebula Implements an Open Architecture Defined by Major Players in the Cloud Arena
OpenNebula is the result of many years of research and the interaction with some of the major players in the Cloud arena. This technology has been designed to address the requirements of business use cases from leading companies in the context of flagship international projects in cloud computing.

The main international project funding OpenNebula is RESERVOIR. OpenNebula is an implementation of the IaaS management layer of the RESERVOIR open architecture defined by its partners: IBM, Telefonica Investigacion y Desarrollo, University College London, Umeå University, SAP AG, Thales Services SAS, Sun Microsystems Germany, ElsagDatamat S.p.A., Universidad Complutense de Madrid, CETIC, Università della Svizzera italiana, Università degli Studi di Messina, and the European Chapter of the Open Grid Forum. The outcome of this collaboration is the unique functionality provided by OpenNebula.

OpenNebula will Continue Incorporating State-of-the-Art Features Demanded by Major Players
OpenNebula is used, together with other software components, in new international innovative projects in cloud computing. StratusLab, with the participation of the Centre National de la Recherche Scientifique, Universidad Complutense de Madrid, Greek Research and Technology Network S.A., SixSq Sárl, Telefonica Investigacion y Desarrollo and Trinity College Dublin, is aimed at bringing cloud and virtualization to grid computing infrastructures. BonFIRE, with the participation of Atos Origin, University of Edinburgh, SAP AG, Universitaet Stuttgart, Fraunhofer, the Interdisciplinary Institute for Broadband Technology, Universidad Complutense de Madrid, Fundacio Privada I2CAT, Hewlett-Packard Limited, The 451 Group Limited, Technische Universitaet Berlin, IT Innovation, and the Institut National de Recherche en Informatique et en Automatique, is aimed at designing, building and operating a multi-site cloud-based facility to support research across applications, services and systems, targeting the services research community on the Future Internet.

And many others, such as 4CaaSt, with the participation of UPM, 2nd Quadrant Limited, BonitaSoft, Bull SAS, Telefónica Investigación y Desarrollo, Ericsson GMBH, FlexiScale, France Telecom, Universitat St Gallen, ICCS/NTUA, Nokia Siemens Networks, SAP AG, Telecom Italia, UCM, Universitaet Stuttgart, UvT-EISS, and ZIB, aimed at creating an advanced PaaS cloud platform that supports the optimized and elastic hosting of Internet-scale multi-tier applications.

*   *   *

All that said, we'd like to reiterate that we strongly support initiatives like OpenStack. This open source initiative is fully aligned with our vision of what the cloud ecosystem should look like, and we will be happy to contribute to OpenStack with our significant track record in open source and scalable cloud computing management, and with an implementation of the open APIs that will be defined in the context of the OpenStack architecture. However, we felt that some of the buzz surrounding OpenStack unfairly characterized existing open source efforts, and felt it was necessary to reiterate our commitment to an open source cloud ecosystem.

Ignacio M. Llorente, on behalf of the OpenNebula project

Disclaimer: The above represents our position, and may not reflect the positions of any of the projects and organizations referenced in the post.

According to the OpenNebula site: “OpenNebula was first established as a research project back in 2005 by Ignacio M. Llorente and Rubén S. Montero, releasing the first version of the toolkit and continuing as an open source project in March 2008.”

Audrey Watters asserts that the Impact of OpenStack Project Goes Beyond the Cloud Industry Leaders in this 7/21/2010 post to the ReadWriteCloud blog:

Since the announcement of OpenStack crossed the wire on Monday, much of the emphasis has been on Rackspace's decision to open source its code and what this might mean for the other major (proprietary) cloud players. But 25 companies have signed on to the OpenStack organization, and the benefits of the open source project will reach far beyond the cloud service providers.

I had a chance today at OSCON to talk with Peder Ulander, CMO of Cloud.com, about what being part of the OpenStack initiative means to the startup. Cloud.com offers a software platform for building and managing both public and private cloud environments. It allows customers to deploy cloud services within their existing IT infrastructure, facilitating the move to the cloud, and its software has been open source all along, with modular offerings designed to meet the needs of its customers.

While the benefits, and even the inevitability, of cloud technology are clear to some, there are still major inroads to be made before adoption is ubiquitous. The fears of vendor lock-in - which is a major piece of what the OpenStack project addresses - are a big part of what's causing some companies to hesitate about the cloud. After all, despite the frequently heard assertion that cloud computing is a "disruptive" technology, Ulander says that companies don't necessarily want to adopt something "disruptive."

Ulander likens the adoption of the cloud to the adoption of Linux. Once major players like IBM "sanctioned" its use, Linux was able to make greater inroads in the enterprise. Similarly, with the backing of major players like Rackspace, Ulander contends that the OpenStack initiative will probably help cloud computing take a big leap toward legitimacy and acceptance.
It's too early to tell whether the open source project will be collaborative or competitive, but Cloud.com already sees the benefits of participating. The company is planning to adopt the OpenStack Object Storage platform into its CloudStack product, for example, allowing it to immediately add a new feature to its offerings.
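For context, OpenStack Object Storage exposes a simple HTTP API: objects live under account/container/object paths, and requests are authenticated with a token header. Here is a minimal sketch of how a client might build such a request; the endpoint, account name, and token below are hypothetical placeholders, not values from any real deployment:

```python
# Sketch of constructing a request against a Swift-style object storage API.
# The endpoint, account, container, and token values are hypothetical.

def object_url(endpoint: str, account: str, container: str, obj: str) -> str:
    """Build the /v1/{account}/{container}/{object} path used by the API."""
    return f"{endpoint}/v1/{account}/{container}/{obj}"

def auth_headers(token: str) -> dict:
    """Requests carry the auth token in the X-Auth-Token header."""
    return {"X-Auth-Token": token}

url = object_url("https://storage.example.com", "AUTH_demo", "backups", "db.tar.gz")
headers = auth_headers("hypothetical-token")
print(url)       # the object's full URL
print(headers)   # headers an HTTP PUT/GET would send
```

A real client would send an HTTP PUT with the object bytes to that URL to store it, and a GET to retrieve it; the sketch only shows how the addressing scheme composes.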

<Return to section navigation list> 
