Friday, February 20, 2009

Azure and Cloud Computing Posts for 2/16/2009+

Windows Azure, Azure Data Services, SQL Data Services and related cloud computing topics now appear in this weekly series.

Note: This post is updated daily or more frequently, depending on the availability of new articles.

• Update 2/18/2009: Several additions
•• Update 2/21/2009 11:30 AM PST: Several additions

Azure Blob, Table and Queue Services

•• Dan Vanderboom’s Windows Azure: Blobs and Blocks post of 2/21/2009 describes an extension to the StorageClient library that uses blocks (BlobContainerRest.PutLargeBlobImpl) when creating blobs smaller than 4 MB.
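For readers who haven’t worked with the blob REST API, here’s a minimal C# sketch of the idea behind block-based uploads; the uploadBlock and commitBlockList delegates are hypothetical stand-ins for the REST Put Block and Put Block List calls that Dan’s extension issues, and the 4 MB block size is only illustrative.

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Text;

    class BlockUploadSketch
    {
        const int BlockSize = 4 * 1024 * 1024; // illustrative block size

        // Split a source stream into blocks, stage each one, then commit the list.
        public static void UploadInBlocks(Stream source,
            Action<string, byte[]> uploadBlock,          // stands in for PUT ...?comp=block&blockid=...
            Action<IEnumerable<string>> commitBlockList) // stands in for PUT ...?comp=blocklist
        {
            var blockIds = new List<string>();
            var buffer = new byte[BlockSize];
            int read, sequence = 0;

            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Block IDs must be Base64-encoded and the same length within a blob.
                string blockId = Convert.ToBase64String(
                    Encoding.UTF8.GetBytes(sequence.ToString("d6")));
                var block = new byte[read];
                Array.Copy(buffer, block, read);

                uploadBlock(blockId, block); // stage the block
                blockIds.Add(blockId);
                sequence++;
            }

            commitBlockList(blockIds);       // commit the blob from its staged blocks
        }
    }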

•• Joe Gregorio of the Google App Engine Team and champion of the Atom and AtomPub protocols posted Back to the Future for Data Storage on 2/19/2009. Joe explains why relational databases aren’t well suited to scalability across a distributed system. He cites Michael Stonebraker’s paper, "One Size Fits All": An Idea Whose Time Has Come and Gone, which posits that common datastore use cases, such as data warehousing and stream processing, are not well served by a general-purpose RDBMS, and that abandoning the general-purpose RDBMS can yield a performance increase of one or two orders of magnitude.

Joe concludes:

It's an exciting time, and the takeaway here isn't to abandon the relational database, which is a very mature technology that works great in its domain, but instead to be willing to look outside the RDBMS box when looking for storage solutions.

And, of course, Google’s Bigtable is “outside the RDBMS box.”

mh415 asked How to mimic the RDBMS "auto increment" feature in Azure Tables? in the Windows Azure forum and MSFT evangelist Steve Marx replied:

With the functionality in Windows Azure tables today, the only thing you can do is query and then conditional write on a "last used index" entry, which will require a minimum of two round-trips to the storage service and retry logic.  The etag tracking you get from the ADO.NET client library should protect you from your race conditions; you'll just need to handle the retries.

So the algorithm looks like this:

  1. Query "last used index" value (stored in a table)
  2. Increment value
  3. Try to conditionally write new value, repeating steps 1-3 until successful
  4. Use the new index freely, knowing no other instance will be using it

In effect, you'll have created a lock by using the conditional write to detect races.
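A minimal C# sketch of that loop against the table service, using the ADO.NET Data Services client; the CounterEntity class and the "Counters" table name are assumptions for illustration, and the DataServiceContext is assumed to be wired up with Shared Key authentication already (for example, via the StorageClient sample library).

    using System;
    using System.Data.Services.Client; // ADO.NET Data Services client
    using System.Linq;

    // Hypothetical entity holding the "last used index" value.
    public class CounterEntity
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public DateTime Timestamp { get; set; } // returned by the table service
        public long LastUsedIndex { get; set; }
    }

    public static class IndexAllocator
    {
        public static long NextIndex(DataServiceContext ctx)
        {
            while (true)
            {
                // 1. Query the "last used index" entity; the context tracks its ETag.
                var counter = ctx.CreateQuery<CounterEntity>("Counters")
                    .Where(c => c.PartitionKey == "counters" && c.RowKey == "global")
                    .AsEnumerable()
                    .First();

                // 2. Increment the value locally.
                counter.LastUsedIndex++;

                try
                {
                    // 3. Conditional write: the client sends If-Match with the ETag,
                    //    so a concurrent update produces 412 Precondition Failed.
                    ctx.UpdateObject(counter);
                    ctx.SaveChanges();

                    // 4. No other instance can be using this index.
                    return counter.LastUsedIndex;
                }
                catch (DataServiceRequestException)
                {
                    // Lost the race; forget the stale entity and retry steps 1-3.
                    ctx.Detach(counter);
                }
            }
        }
    }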

Simon Davies’ Changing the Default SQL Server Instance For Windows Azure Development Storage post of 2/17/2009 shows you how to change the Azure Table storage instance from the default localhost\SQLEXPRESS instance to another default or named SQL Server instance.

Jon Udell advances Derik’s topics (see below) to Azure Services in his Using the Azure table store’s RESTful APIs from C# and IronPython post of 2/17/2009. Jon’s “general strategy” is:

    • Make a thin wrapper around the REST interface to the query service
    • Use the available query syntax to produce raw results
    • Capture the results in generic data structures
    • Refine the raw results using a dynamic language

Derik Whittaker’s Getting data from a REST service using C# of 2/15/2009 uses a helper method with HTTP GET to return data as plain old XML (POX), text string, or in JavaScript Object notation (JSON) format.
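Derik’s helper boils down to a single GET whose Accept header asks the service for XML, JSON, or plain text; here’s a minimal C# sketch along those lines (the method name and signature are mine, not Derik’s):

    using System.IO;
    using System.Net;

    static class RestGetHelper
    {
        // Issue an HTTP GET and return the raw response body as a string.
        // Pass "application/xml", "application/json", or "text/plain" as accept.
        public static string Get(string url, string accept)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "GET";
            request.Accept = accept;

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }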

Derik’s Posting data to a REST service using C# post of 2/14/2009 shows the code to POST the following to a generic REST service:

  1. someValue which is a string
  2. anotherValue which is a string
  3. finalValue which is an Int32
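A hedged C# sketch of that kind of POST; Derik’s post isn’t quoted here on how the body is encoded, so form-URL-encoding is an assumption, as is the helper’s name:

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    static class RestPostHelper
    {
        public static string Post(string url, string someValue, string anotherValue, int finalValue)
        {
            // Encode the three values as a form-URL-encoded body (an assumption).
            string body = string.Format(
                "someValue={0}&anotherValue={1}&finalValue={2}",
                Uri.EscapeDataString(someValue),
                Uri.EscapeDataString(anotherValue),
                finalValue);
            byte[] bytes = Encoding.UTF8.GetBytes(body);

            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            request.ContentLength = bytes.Length;

            using (var stream = request.GetRequestStream())
                stream.Write(bytes, 0, bytes.Length);

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }
    }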

SQL Data Services (SDS)

•• The .NET and SQL Services teams report they are Resuming New .NET Services and SQL Services Account Provisioning as of 11:12 AM PST 2/20/2009.

Was the outage in deploying Azure projects and services reported by Steve Marx in his RESOLVED: Windows Azure Outage Windows Azure forum post caused by the account provisioning hiatus?

Niraj Nagrani and Nigel Ellis are interviewed at PDC 2008 by Keith and Woody in Deep Fried Bytes’ Episode 26: Discovering Azure SQL Services podcast of 2/13/2009. The episode probably emphasizes SDS because Niraj is a senior product manager with the Microsoft SQL Server Technical Marketing team and Nigel is SQL Data Services Development Manager.

.NET Services: Access Control, Service Bus and Workflow

•• Maureen O’Gara’s Call for Cloud Security Guidelines Heard post of 2/20/2009 observes “Chief information security officers concerned about software services’ security in the cloud” and goes on to report:

Infosecurity Europe, which granted is a show but one takes things as one finds them, says it surveyed 470 organizations and found that 75% of them intend to reallocate or increase their budgets to secure cloud computing and software as a service in the next 12 months.

It also interviewed a panel of 20 chief information security officers (CISOs - a new ‘C') of large enterprises only to learn that they are concerned about the availability and security aspects of software services in the cloud. It said they were particularly concerned about the lack of standards for working in the cloud, SaaS and secure Internet access. All of them reportedly said they would welcome the development of guidelines.

•• David Pallman offers a two-part Introduction to Live Services:

Introduction to Live Services, Part 1: How Windows Live Fits Into Azure Cloud Computing, which “clarifies the intersection of Live Services and cloud computing.”

Introduction to Live Services, Part 2: A Guided Tour of Live Services is “a guided tour of the many capabilities in Live Services that are available to you.”

•• Wictor Wilén demonstrates Custom code with SharePoint Online and Windows Azure in this 2/20/2009 post, which describes a Worker Cloud Service for processing a SharePoint list with a SharePoint workflow in response to added documents.

Girish P’s Windows Azure bringing Cloud computing to the mainstream post of 2/19/2009 promotes Windows Live Services to “bring it all together” for developers and end users.

Folks giving presentations might want to borrow Girish’s better-than-average diagrams.

Update 2/21/2009: Girish’s diagrams originated on the How Does It Work? page of Microsoft’s main Azure Services Platform page.

Live Windows Azure Apps, Tools and Test Harnesses

•• Jason Young’s Azure – Performance, IoC, and Instances post of 2/19/2009 states that he’s “disappointed by [Azure’s] current reality” when it comes to performance. Azure is currently at the pre-beta (CTP) stage, and it’s unfair to judge the performance of even beta versions. My experience shows the Windows Azure OS to be reasonably performant; it’s Internet latency that appears to me to be the problem.

Check out my live Azure Table and Blob test harnesses, which include timing data for execution of storage operations without including time for HTTP request and response operations over my DSL connection.
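The timing approach is nothing fancy: start a Stopwatch immediately before the storage call and stop it immediately after, so the figure excludes the browser round trip. A sketch, with the storage operation itself represented by a delegate:

    using System;
    using System.Diagnostics;

    static class StorageTimer
    {
        // Times only the storage operation; the HTTP traffic between browser
        // and web role falls outside the measured span.
        public static TimeSpan Time(Action storageOperation)
        {
            var watch = Stopwatch.StartNew();
            storageOperation();   // e.g. a table query or blob upload
            watch.Stop();
            return watch.Elapsed;
        }
    }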

•• Steve Marx suggests: Try Windows Azure Now! (Almost) No Waiting as of 2/20/2009. Steve writes that he’s:

[H]appy to report that these days you can expect to receive a token within two business days of registering. [Emphasis Steve’s.]

• Reuven Cohen’s Testing The World Wide Cloud post of 2/18/2009 describes SOASTA’s new CloudTest Global Platform. Reuven writes:

SOASTA has unveiled an ambitious plan to utilize an interconnected series of regionalized cloud providers for global load and performance testing Web applications and networks. They're calling this new service the CloudTest Global Platform, which is commercially available today, and is said to enable companies of any size to simulate Web traffic and conditions by leveraging the elasticity and power of Cloud Computing.

This probably won’t help many of us until we get a few more Azure instances.

John O’Brien and Bronwen Zande gave a Windows Azure: A developers introduction to coding in the cloud presentation to the Queensland MSDN Users Group on 2/17/2009.

The session included a demonstration of a port of John’s earlier Silverlight DeepZoom sample to Windows Azure. You can download John’s source code for the session from a link on his Qld MSDN User Group – Windows Azure talk complete post of 2/17/2009. Be sure to read John’s comment regarding edits to the configuration files and his earlier Silverlight Deep Zoom Sample Code and Silverlight Deep Zoom Sample Code Part 2 posts to understand the session code, which includes a Worker Role and a Web service.

Click here for a live Azure demonstration with images about printmaking from the Australian government: http://www.printsandprintmaking.gov.au/RSSFeed.ashx?mode=3&page=0&index=1&itemtypeid=3. Click the Go button at the top of the page to load the images, wait for their thumbnails to appear in the carousel at the bottom of the page, and then click one of the thumbnails to manipulate it in the upper pane.

Tanzim Saquib’s Building applications for Windows Azure of 11/7/2008 is an early (missed) tutorial that shows you how to build a simple ToDo list application with Azure.

Azure Infrastructure

•• Chris Capossela describes five issues he believes will be front and center for business customers as they prepare for the evolution to cloud computing in his The Enterprise, the Cloud, and 5 Key Drivers for 2009 post of 2/21/2009 on GigaOm. The five issues he details are:

  1. Value
  2. Due diligence
  3. Timing
  4. The right tools
  5. Trust

Thanks to David Linthicum for the heads-up on Twitter.

•• James Urquhart’s Infrastructure 2.0 and the New Data Center Culture article of 2/20/2009 for SNS News Service explains how and why:

The number of people and skill sets required to run computing is an increasing burden on corporate IT. …

It takes real expertise to tend to the routers and switches that form the basis of a network infrastructure, but most of that expertise is applied through highly manual processes. …

So IT relies on “clerks” to get the network job done. …

Virtualization also enables levels of automation that were previously impractical with highly customized physical infrastructure. As the virtual infrastructure has to be completely controlled through a computer program, it has not taken long for IT operations to begin to drive out the manual tasks that were once required to provision, maintain, recover, and retire computer servers in the past. …

The tactical IT administrator is about to become another excellent example of the effects of automation – thanks in large part to Infrastructure 2.0.

In other words, IT clerks will soon be asking “Do you want fries with that?”

•• Kyle Gabhart offers 10 Steps to Successful Cloud Adoption when “Adopting your very own cloud” in this 2/20/2009 post. Kyle writes:

Adoption can be a scary process.  In your fear of doing something wrong, you may be tempted to buy a big, expensive consulting package and just have someone else handle everything.  You don’t need to do that.  Simply find a subject matter expert to serve as a mentor that can guide you through the process of pragmatically evaluating and possibly even adopting Cloud.  Along the way, make sure that this mentor is educating you and your team so that you are able to function effectively once this person has left the building.

•• Gregg Ness’s The Coming Cloud Computing Wars post of 2/20/2009 claims “Cloud Isn't Hype - It's a Vendor Struggle for Relevance” as he leads with:

The cloud computing meme continues to billow as Juniper and IBM announce a cloud management partnership [and] rumors swirl about heavy petting between VMware and shareholder/partner Cisco. A few months ago it seemed like every cloud discussion included Google and/or Amazon; now it appears that “the network infrastructure issue” has finally reared its head and ushered in networking and management leaders into the cloud conversation.

•• Christopher Hoff reviews the increasingly controversial "Above the Clouds: A Berkeley View of Cloud Computing" technical paper in his Berkeley RAD Lab Cloud Computing Paper: Above the Clouds or In the Sand? post of 2/19/2009. Chris’s critique concludes:

Given that it was described as a "view" of Cloud Computing and not the definitive work on the subject, I think perhaps the baby has been unfairly thrown out with the bath water even when balanced with the "danger" that the general public or press may treat it as gospel. …

That being said, I do have issues with the authors' definition of cloud computing as unnecessarily obtuse, their refusal to discuss the differences between the de facto SPI model and its variants is annoying and short-sighted, and their dismissal of private clouds as relevant is quite disturbing.  The notion that Cloud Computing must be "external" to an enterprise and use the Internet as a transport is simply delusional. …

However, I found the coverage of the business drivers, economic issues and the top 10 obstacles to be very good and that people unfamiliar with Cloud Computing would come away with a better understanding -- not necessarily complete -- of the topic.

Chris’s review is probably closer to my take on the paper than any other critique I’ve read so far.

•• Chris also expresses his views about What People REALLY Mean When They Say "THE Cloud" Is More Secure... on 2/20/2009. Chris writes:

Almost all of these references to "better security through Cloudistry" are drawn against examples of Software as a Service (SaaS) offerings.  SaaS is not THE Cloud to the exclusion of everything else.  Keep defining SaaS as THE Cloud and you're being intellectually dishonest (and ignorant.) …

I *love* the Cloud. I just don't trust it.  Sort of like why I don't give my wife the keys to my motorcycles. [Emphasis added.]

•• Chris Evans raises a question about the Storage Network Industry Association (SNIA) standardizing on AWS’s S3 API in his Cloud Computing: Common API Standards post of 2/19/2009. Chris writes:

What’s really needed, is to standardise on:

    • Security Model - users want consistency of security across cloud storage providers.  The security model needs to be consistent to provide ease of access, integration with technologies like Active Directory or LDAP.
    • Access Method - standardisation on the use of XML, REST, SOAP, FTP or other protocols to access storage.

Fortunately, Azure uses REST and SDS uses REST and SOAP protocols to access storage.

Dmitry Sotnikov points in his Gartner on Cloud and information control post of 2/18/2009 to a Gartner report entitled “Trusted SaaS Offerings for Secure Collaboration” and priced at US$195. Dmitry writes:

The report is really valuable for anyone either building clouds or cloud-related products, or considering to move sensitive data to a SaaS application.

The key areas they look into are:

  • List of typical SaaS applications which have high trust requirements.
  • Key security features which such applications should possess.
  • Transparency measures which cloud computing/SaaS providers need to implement.

Excellent report: short, to the point, and with material you can use while developing or evaluating SaaS application with trust requirements.

Simon Davies’ Dynamic Languages and Windows Azure post of 2/17/2009 discusses reflection and the Azure Development Fabric’s Code Access Security restrictions, as well as L Sharp, an implementation of Lisp created by Rob Blackwell at Active Web Solutions, which you can read about on his blog here and try at http://lsharp.cloudapp.net/default.aspx.

Paul Miller adds to the review traffic with his Digging into Berkeley's View of Cloud Computing post of 2/17/2009. “To understand more, [Paul] spent some time on the phone with two of the paper's authors this morning, [Armando Fox and Dave Patterson,] and the result has just been released as a podcast.”

Brenda Michelson posted Unintentional Cloud Watching -- Cloud Computing for Enterprise Architects on 2/17/2009. Brenda spent 19 years in corporate IT, most recently as Chief Enterprise Architect for L.L. Bean. Before L.L. Bean, over the span of 10 years, she provided development services for insurance, banking, a chip manufacturer, and a world leader in aircraft engine design and manufacturing. Brenda writes:

On "the morphing of boxes to platforms", what follows is a slide I created for last summer's ComputerWorld Data Center Directions conference.  I was asked to do a mini-presentation on server management, but as you can see, I started with a broader view of "boxes morphing to platforms" and then spoke of related management implications.

Dan’l Lewin, Microsoft’s Corporate VP of Strategic and Emerging Business Development, defines cloud computing with a 50,000-foot view of Azure in this three-minute BeetTV video (2/17/2009; thanks to Alin Irimie).

Kyle Gabhart says the Industry [Is] Buzzing with Interest Around Cloud Computing and contributes more buzz with the slides from his The Role of Cloud in the Modern Enterprise webinar of 2/16/2009. Kyle claims that Owning Hardware is soooooo 2008 in a post of 2/17/2009, citing Fortune magazine’s Tech Daily post "Goodbye hardware. Hello, services."

David Linthicum’s SOA needs to learn from the cloud, and the other way around post of 2/17/2009 warns that cloud hype will result in ignoring proper cloud architecture. David writes:

Cloud computing is indeed disruptive technology, and something that needs to be understood in the context of a holistic IT strategy, as well as understood, defined, and leveraged from domain to domain. The silliness that hurt the adoption in SOA is bound to infect cloud computing as well, if we let it happen. I urge you to get below the surface here quickly, else history will repeat itself.

Mike Kavis provided the backstory for David’s post (above) in his If SOA is Dead, Cloud Computing better start writing its will post of 2/12/2009.

Niraj Nagrani and Nigel Ellis are interviewed at PDC 2008 by Keith and Woody in Deep Fried Bytes’ Episode 26: Discovering Azure SQL Services podcast of 2/13/2009. There’s no explanation for the long delay in posting the interview.

Cloud Computing Events

•• Security Issues Receive Focus at IDC’s Cloud Computing Forum from OakLeaf contains excerpts of stories by reporters who attended San Francisco’s one-day IDC Cloud Computing Forum on Wednesday, 2/18/2009.

•• Cloud Computing Conference 2009 will take place 5/28 to 5/29/2009 at the ISEP Conference Center - Instituto Superior de Engenharia do Porto (Engineering Institute of Porto), Rua Dr. António Bernardino de Almeida, 431 P-4200-072 Porto, Portugal. Free registration is here and the agenda is here. Discussion topics are:

    • Digital Identity: How could Identity 2.0 be the backbone (driving force) of cloud computing
    • Future of telecommunications: How Cloud Computing will depend on telecommunication development and network quality
    • User data protection and confidentiality: Reputation and trust, how to use old experiences and well known examples as starting points
    • Interoperability: How will Cloud Computing platforms talk together, and how users will be able to move their data among Clouds
    • IT departments’ perspectives and integration: How we could already start to get benefits from Cloud Computing
    • User perspective: How Internet (the Cloud) will become our PC
    • Small companies and startups opportunities: How to become a Cloud Computing provider and how to use Cloud Computing to add (real) value to business.

The Cloud Computing Interoperability Forum will participate in an all-day workshop entitled "Strategies and Technologies for Cloud Computing Interoperability (SATCCI)" to be held in conjunction with the Object Management Group (OMG) March Technical Meeting on 3/23/2009 at the Hyatt Regency Crystal City, Arlington, VA.

According to his Joint CCIF / OMG Cloud Interoperability Workshop on March 23 in DC post of 2/19/2009, Reuven Cohen will present his thoughts on the creation of an open unified cloud interface and the opportunity for unification between existing IT and cloud-based infrastructures (a.k.a. hybrid computing).

OpSource’s SaaS Summit 2009 starts Wednesday, 3/11/2009, at the Westin St. Francis in San Francisco and continues through 3/13/2009. According to OpSource:

SaaS Summit 2009, the largest on-demand industry gathering in the world, is being held in San Francisco on March 11 – 13, 2009. This year’s agenda focuses on the opportunities emerging from the depths of the current economic downturn for SaaS, Web and Cloud computing companies.

Topics include:

      • Thriving, Not Just Surviving
      • SaaS Marketing in a Downturn
      • Cloud Confusion
      • Selling SaaS to the Enterprise
      • Funding the Cloud
      • Minimal Cost, Maximum Gain with Social Networking
      • SaaS Channels: Money Maker or Money Waster

OpSource recently received US$10 million funding from NTT.

2nd International Cloud Computing Conference & Expo will take place, 3/30-4/1/2009, at the Roosevelt Hotel in New York City, with more than 60 sponsors and exhibitors and over 1,500 estimated delegates from 27 different countries. The conference is gathering an array of cloud luminaries as presenters, including Amazon’s Werner Vogels as a keynoter. The Early Bird Price of US$1,695 ($300 saving) expires 2/20/2009.

2009.cloudviews.org promises to be an international conference with lively discussion and demonstrations of how these changes are already happening. Following are the proposed discussion topics:

    • Digital Identity - How could Identity 2.0 be the backbone (driving force) of Cloud Computing
    • Future of telecommunications - How Cloud Computing will depend on telecommunication development.
    • User data protection and confidentiality - Legal perspective.
    • Cloud Computing platforms interoperability and data management (mobility).
    • IT departments and cloud computing integration
    • User perspective - how Internet (the cloud) will become our PC
    • Small companies and startups opportunities - how to become a Cloud Computing provider and how to use cloud computing to add (real) value to business.

What’s missing are the venue and conference dates.

Update 2/21/2009: See the first entry of this topic (Cloud Computing Events).

It seems to me that cloud hype is overshadowed only by the number of conferences devoted to the topic.

Other Cloud Computing Platforms and Services

•• Jan Pabellon reports that Open Source Vendor SugarCRM Embraces Cloud Computing in a Big Way in his 2/22/2009 post from the Philippines. Jan writes:

Just recently SugarCRM launched a new way to meld Internet services and open source software by launching Cloud Services and Social Feeds. These new Cloud Connectors for SugarCRM allow for company and contact data residing in other cloud environments to be called and presented in SugarCRM. These services include such sites as LinkedIn, ZoomInfo and Crunchbase. The Sugar Feeds feature on the other hand provides a Facebook-like rolling set of notices and alerts based on activity within SugarCRM.

•• James Urquhart reports (along with many others) on 2/21/2009 that Ubuntu now has 'cloud computing inside', which probably would be a more accurate statement in the future tense.

•• Amazon Web Services has updated SimpleDB’s Select API with the Count(*) aggregate function and will now return the partial result set of entities retrieved during the five-minute time limit, as reported in this Announcing Count and Long Running Queries release note of 2/19/2009.

If SimpleDB can return Select Count(*) Where … aggregate values, why can’t SDS and Azure Tables?
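For comparison, here’s a hedged C# sketch of how the new Select features read from client code. SimpleDbClient, SelectResult, the domain name, and the filter are all hypothetical; a real client wraps the REST Select action and AWS request signing, which are omitted here.

    // Hypothetical wrapper over SimpleDB's Select action (request signing omitted).
    public interface ISimpleDbClient
    {
        SelectResult Select(string expression, string nextToken);
    }

    public class SelectResult
    {
        public long Count { get; set; }        // populated for count(*) queries
        public string NextToken { get; set; }  // null when the query has completed
    }

    public static class SimpleDbCount
    {
        public static long CountItems(ISimpleDbClient client)
        {
            long total = 0;
            string token = null;
            do
            {
                // A query that hits the time limit returns a partial count plus
                // a NextToken; keep calling Select until the token comes back null.
                var result = client.Select(
                    "select count(*) from mydomain where category = 'azure'", token);
                total += result.Count;
                token = result.NextToken;
            } while (token != null);
            return total;
        }
    }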

AWS’s New WSDL Coming Soon post of the same date announces “a new WSDL version which excludes the Query and QueryWithAttributes APIs” in favor of the Select API.

•• Thorsten von Eicken’s The Skinny on Cloud Lock-in post of 2/19/2009 describes his Lock-in Hypothesis: “The higher the cloud layer you operate in, the greater the lock-in,” which posits that vendor lock-in increases as you move from Infrastructure as a Service (IaaS, e.g. Amazon Web Services) to Platform as a Service (PaaS, e.g. Azure or Google App Engine) to Software as a Service (SaaS, e.g. Salesforce.com).

The most interesting element of the post was the results of a survey RightScale recently conducted asking their customers and prospects what concerned them most about lock-in:

What piqued my interest is RightScale’s confirmation of my conclusion that data lock-in is more important than vendor lock-in. However, I was surprised that concern for data lock-in outweighed single-vendor worries by 3.5:1. (Image courtesy of RightScale.)

•• Andrew Conry-Murray’s A Cloud Conservative post of 2/19/2009 reports that the Vanguard Group, Inc. chose a private cloud. Andy quotes Bob Yale, who runs technology operations for the company and is very concerned about client data in a public cloud:

"You read a lot about the providers and their security, but given the nature of our business and the criticality of our client data, we aren't comfortable that providers bring the same rigor around data protection as we do. We aren't ready to give up control of our data."

•• rPath, VMware and BlueLock will demonstrate how to maximize application portability and deployment options in virtual and cloud environments in a webinar on 2/25/2009, according to a press release of 2/19/2009. Erik Troan, founder and CTO, rPath; Wendy Perilli, director of product marketing, VMware; and Pat O’Day, CTO, BlueLock will deliver “Blending Clouds: Avoiding Lock-In and Realizing the Promise of Hybrid Compute Environments — Today,” a webinar and live multi-cloud demonstration. You can register for the event here.

Paul Miller podcasts a 40-minute conversation with Armando Fox and Dave Patterson of Berkeley’s RAD Lab about their Above the Clouds: A Berkeley View of Cloud Computing paper, which has gathered considerable notoriety among the clouderati. You can listen to the “cloudcast” here.

When I was a kid in Berkeley, RAD Lab meant the radiation laboratory, a nickname for the cyclotron at the top of Strawberry Canyon.

Krishnan Subramanian describes the San Diego Supercomputer Center’s new National Science Foundation grant in his Academic Research On Cloud Computing Gets Funded post of 2/18/2009. Krishnan quotes HPC Wire:

Researchers from the San Diego Supercomputer Center (SDSC) at the University of California, San Diego, have been awarded a two-year, $450,000 grant from the National Science Foundation (NSF) to explore new ways for academic researchers to manage extremely large data sets hosted on massive, Internet-based commercial computer clusters, or what have become known as computing "clouds."

John Foley wants more transparency from Amazon about its plans for Amazon Web Services (AWS), data center location and construction schedule, and granular income data for AWS. In his Amazon's Cloud Is Too Cloudy post of 2/18/2009, Foley writes:

So I was glad to see the interview with TechFlash, as it presented an opportunity to learn more about Amazon's groundbreaking IT service model. In the Q&A, Jassy talks about enterprise adoption of AWS, service level agreements, and how he and a group of buddies get together every month to scarf down chicken wings at the Wingdome.

But Jassy clams up when asked about the size of AWS and future plans. …

As I pointed out in a post a few days ago, Amazon is growing in influence in the IT industry, having struck agreements with IBM, Microsoft, Oracle, Red Hat, and Sun in the past 12 months. As Amazon's reach expands, its reticence becomes a bigger issue. How can IT pros, with confidence, turn over their IT workloads to a service provider that provides such limited visibility into its core operations?

Krishnan Subramanian’s Asterisk On The Clouds post of 2/17/2009 explains how the open-source PBX and telephony platform Asterisk can manage voice services from the cloud. He points to two articles that explain how to install and run Asterisk on Amazon EC2.

John Foley takes on VMware’s vCloud “initiative” in his VMware To Take Its Next Steps Into The Cloud post of 2/17/2009. John writes:

VMware’s work in private clouds entails bringing some of the capabilities of public cloud services, including self-service provisioning and usage-based billing, to the corporate data center. It’s also working on securing private clouds by isolating virtual machines in environments where multiple business units share IT resources (which describes most companies).

George Crump discusses performance issues with WebDAV, HTTP, NFS and CIFS in his Getting Data To The Cloud post of 2/17/2009 and points to his Web cast on Cloud Storage Infrastructures to learn more.

Reuven Cohen’s Describing the Cloud MetaVerse post of 2/17/2009 uses the term Cloud MetaVerse to describe the interrelationships among various Internet-connected technologies and platforms.

Reuven’s Red Hat Announces It's Kinda Interoperable, Sort Of, Maybe? post of the same date is skeptical of the capability of the

[R]eciprocal agreement with Microsoft to enable increased "interoperability" for the companies’ virtualization platforms. Both companies said that they would offer a joint virtualization validation/certification program that will provide coordinated technical support for their mutual server virtualization customers. …

Digging a little deep[er] it appears that Red Hat and Microsoft don't fully grasp what Interoperability actually is or, more to the point, who it benefits. But rather they seem to [be] taking advantage of the buzz that interoperability has enjoyed in 2009. So now rather th[a]n slapping a "cloud" logo on your product, you slap an interoperable logo on there too.

2 comments:

Dan Vanderboom said...

This is a great collection of resources, but there is surprisingly little content on SDS! Looking forward to announcements at MIX09.

Roger Jennings (--rj) said...

@Dan,

Me, too!

--rj