Monday, March 30, 2009

Examples of Scaling Relational Databases Up and Out

There’s considerable doubt about the capability of enterprise-grade relational database management systems (RDBMSs) to provide sufficient elasticity for cloud-scale services while supporting most features that data-oriented developers consider de rigueur, such as:

  • Immediate consistency
  • Rich data types
  • Views
  • Indexes
  • Stored procedures
  • Triggers
  • Explicit ACID transactions
  • Inner and outer joins
  • Referential integrity
  • Cell-level encryption
  • Full-text search
  • Horizontal and vertical partitioning

Microsoft’s mid-course correction to SQL Data Services (SDS, formerly SQL Server Data Services, SSDS) changed the table data model from Entity-Attribute-Value (EAV) with freeform attributes (a.k.a. flexible properties) and simple data types to fully relational with most of SQL Server 2008’s features. My A Mid-Course Correction for SQL Data Services post of 2/24/2009 and SQL Data Services Abandons REST for TDS API and Knocks My Socks Off post of 3/10/2009 describe the anticipated changes.

Updated 4/3/2009 with excerpts from Simon Munro’s SQL Data Services Does Not Scale post of 3/31/2009.


“Cloud RDBMS usage looks good,” according to James Hamilton, currently a vice president and distinguished engineer at Amazon Web Services and formerly a high-level architect for Microsoft’s data centers and SQL Server, as well as IBM’s DB2. Hamilton delivered the opening keynote, Cloud Computing Economies of Scale, at the Self Managing Database Systems workshop, part of the International Conference on Data Engineering held in Shanghai, China on 3/29/2009. His slides to accompany a closing panel titled Grand Challenges in Database Self-Management cite:

  • Microsoft’s Hotmail with more than 300 million users and more than 2 billion non-spam messages per day, with SQL Server on every back-end node of a 10,000-server farm.
  • Facebook with 1,800 MySQL copies.
  • Windows Live ID with 420 million IDs and well over 1 billion authentications per day.

However, Hamilton notes that all the complexity “is above” the RDBMS:

    • Partitioning & partition management done above RDBMS
    • Many of these workloads could run on simple ISAMs
    • Many new workloads are going non-relational
    • For many installations, the most mission critical data-intensive applications don’t involve RDBMS

Azure covers the above issues with Azure Table services, which use a “conventional” EAV data model similar to those of Amazon’s SimpleDB and Google’s App Engine.
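The EAV model these flexible-property stores share is easy to see in a few lines of code. Here’s an illustrative sketch in Python (not the actual Azure Table API): each entity is a fixed key pair plus an open-ended bag of typed properties, so two entities in the same table need not share a schema.

```python
# Illustrative sketch of an EAV-style table entity (hypothetical,
# not the Azure Table API): fixed keys, free-form typed properties.
class Entity:
    def __init__(self, partition_key, row_key, **properties):
        self.partition_key = partition_key  # groups entities for scale-out
        self.row_key = row_key              # unique within the partition
        self.properties = properties        # schema-free attribute/value pairs

# Two entities in the same "table" can carry different attributes:
e1 = Entity("customers", "001", name="Contoso", employees=50)
e2 = Entity("customers", "002", name="Fabrikam", founded="1998", public=True)

assert e1.properties.keys() != e2.properties.keys()
```

Queries in such a store address entities by key rather than by joining tables, which is what makes the model easy to partition and hard to use for relational workloads.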

Hamilton asks: “Why the RDBMS exodus?” and answers:

    • Failure to scale
    • Excess administrative complexity
    • Resource intensive due to monolithic delivery of un-needed features
    • Unpredictable response times
    • Opaque failure modes
    • Access patterns excessively random
    • Slow to evolve to new workload patterns

Azure and SDS incorporate many of the required features Hamilton describes in his On Designing and Deploying Internet-Scale Services paper for the 21st Large Installation System Administration Conference (LISA ’07) in November 2007.


Wai-Ming Mok, a former product-line manager at Sun Microsystems, posted Multi-tenancy @ Salesforce.com on 3/29/2009. Mok observes:

Salesforce.com supports over 55K customers, including Google and Dell. This feat is achieved by an ingenious group of Oracle database experts who have taken an enterprise class relational database and turned it into a multi-tenant system that runs customer relationship management for these many customers. BTW, this system supports close to 200M transactions each weekday, at less than 1/4 of a second response time.

Mok’s post continues:

On March 25, Craig Weissman, CTO of Salesforce.com, gave an illuminating presentation on the internal architecture at his company to a room full of attendees at the SDForum SAMsig. Some highlights:

    • There are 15 or so “pods”, each consisting of 50 or so servers, running a 4-way Oracle RAC and application (Java) and support servers. Each pod supports thousands of customers.
    • Each Oracle RAC database consists of static tables that store the data from thousands of customers all mixed together. One row of data belongs to one customer and the next row belongs to another, and the columns may have completely different types of data between the rows. Control and access to the data are managed by metadata. In essence, the Oracle database is transformed to a multi-tenant database.
    • Customer data in the columns are stored as human-readable strings. Some customers have requested certain data to be encrypted. Appropriate transformation functions are used to convert to the correct data types when the data are accessed.
    • Using Lucene, the data are all indexed.
    • Apex is a new language to enable customers to write data processing applications, like a new generation of 4GL. It resembles Java. Governors are deployed to prevent abuse of the system resources. [Emphasis added.]

The flexible properties of SDS’s original data model and Azure Tables have characteristics similar to those highlighted above.
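Weissman’s scheme — generic string columns whose real names and types live in per-tenant metadata — can be sketched roughly as follows. This is a hypothetical simplification in Python, not Salesforce’s actual code; the table, tenant, and slot names are invented for illustration.

```python
from datetime import date

# Hypothetical per-tenant metadata: maps a generic column slot to a
# field name and a conversion from the stored human-readable string.
METADATA = {
    ("tenant_a", "val0"): ("amount",  float),
    ("tenant_a", "val1"): ("closed",  date.fromisoformat),
    ("tenant_b", "val0"): ("company", str),  # same slot, different meaning
}

def decode(tenant, row):
    """Turn a generic string row into typed fields for one tenant."""
    out = {}
    for slot, raw in row.items():
        name, convert = METADATA[(tenant, slot)]
        out[name] = convert(raw)
    return out

# Rows from different tenants sit side by side in one physical table:
row_a = {"val0": "1250.50", "val1": "2009-03-25"}
row_b = {"val0": "Acme Corp"}

assert decode("tenant_a", row_a) == {"amount": 1250.5,
                                     "closed": date(2009, 3, 25)}
assert decode("tenant_b", row_b) == {"company": "Acme Corp"}
```

The conversion functions correspond to the “appropriate transformation functions” Mok mentions: the database only ever sees strings, and the metadata layer restores the types on access.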


Simon Munro argues that SQL Data Services Does Not Scale in this 3/31/2009 essay based on his “Comparing Azure Storage and SQL Data Services” presentation to the SQLBits conference in Manchester, UK. Simon writes:

The only (and suggested) way of getting scalability out of SDS is to scale out using partitions. Generally this is a really difficult thing for the relational model and SQL to do as a big part of the SQL model is the ability to have consistent data. Brewer’s CAP conjecture implies that SQL, by being Consistent and Available, has to forgo Partition Tolerance – and I believe that is the case with SDS. …

Anyone who has tried to build a solution based on partitioned databases will know that the architecture in the above case study is no simple achievement and takes a lot of work whereby an entire layer needs to exist to allow the system to first identify the correct partition for the data. In the same MIX09 presentation Nigel Ellis hints at efforts that will be made in terms of adding some partition support to SDS, which may, through the use of configuration, provide for better querying across partitions (although ACID may be more difficult).
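The “entire layer” Simon describes — resolving which partition holds a given row before any query can run — amounts to something like the following minimal sketch (with hypothetical database names; real systems also need rebalancing, replication, and directory lookups):

```python
import hashlib

# Hypothetical list of partition databases (shards).
PARTITIONS = ["sds-db-0", "sds-db-1", "sds-db-2", "sds-db-3"]

def partition_for(customer_id: str) -> str:
    """Route a partition key to one database. Every data access must
    pass through this lookup before a single SQL statement is issued."""
    digest = hashlib.md5(customer_id.encode()).digest()
    return PARTITIONS[digest[0] % len(PARTITIONS)]

# The same key always lands on the same database...
assert partition_for("customer-42") == partition_for("customer-42")
# ...but a query spanning customers spans databases, which is why
# cross-partition joins and ACID transactions become difficult.
```

Note that simple modulo hashing like this makes adding a fifth database expensive, since most keys re-map — one reason partition management is usually handled “above” the RDBMS, as Hamilton observes.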

It’s worth noting that the SDS team now refers to “departmental”-sized databases. For example, the First round of Questions and Answers post of 3/12/2009 to the SQL Data Services Team Blog says:

The database size will be capped. We are still evaluating what the cap will be, but the plan is to ensure that the allowed database size supports most, if not all, departmental and web application workloads. [Emphasis added.]

A video segment of the session might be available shortly here.


Did the SDS team make the right decision when they listened to customer feedback and abandoned the EAV data model with flexible properties for more traditional RDBMS features, Transact-SQL, and the Tabular DataStream (TDS) protocol? Only time will tell.

However, I’m betting that .NET developers, Azure’s target audience, will welcome the change despite having to roll their own support for RESTful Astoria-based data access.

Sunday, March 29, 2009

Windows Azure and Cloud Computing Posts for 3/23/2009+

Windows Azure, Azure Data Services, SQL Data Services and related cloud computing topics now appear in this weekly series.

• Update 3/28/2009 and 3/29/2009: More on the Manifesto, other additions
• Update 3/26/2009 and 3/27/2009: Cloud Computing Manifesto flap (see the Azure Infrastructure section), other additions

Note: This post is updated daily or more frequently, depending on the availability of new articles.

Azure Blob, Table and Queue Services

Mike Amundsen’s Importing SQL Data Services (SDS) Data into Azure Table Storage (ATS) article of 3/29/2009 shows you “the steps to export your data from the current SDS storage system and import that same data into the Azure Table Storage system.”

Steve Marx (@smarx) confirmed on 3/24/2009 in a reply to my Tweet: “[S]econdary indexes are still planned in Windows Azure tables.” I was concerned that SDS’s move to the relational model would redirect development resources from Azure tables to enabling more SDS features.

Phani Raju continues his series on the ADO.NET Data Services upgrade with ADO.NET Data Services Friendly Feeds, Mapping CLR Types of 3/20/2009, which includes “some samples of how to map your entity properties to Atom/custom markup in the atom:entry element.”

Mike Amundsen’s REST-like MOVE, COPY: "This one isn't easy." post of 3/19/2009 analyzes Roy T. Fielding’s approach to the problem:

… Fielding's approach is truly about representing state. Second, this commitment to keeping REST all about communicating state (the very thing that makes the REST style so effective over widely-distributed, heterogeneous networks) makes actions such as COPY and MOVE so much harder to model.
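Fielding’s point can be made concrete: with only state-transfer verbs, COPY and MOVE stop being single atomic operations and become client-side sequences of GET, PUT, and DELETE. A toy in-memory sketch (hypothetical, not Amundsen’s code), with a dictionary standing in for a server’s URI space:

```python
# A toy resource store standing in for an HTTP server's URI space.
store = {"/docs/a": "draft text"}

def copy(src, dst):
    # COPY = GET the representation, then PUT it at the new URI.
    store[dst] = store[src]

def move(src, dst):
    # MOVE = COPY + DELETE. Two steps, so not atomic: another client
    # could observe the resource at both URIs mid-operation.
    copy(src, dst)
    del store[src]

move("/docs/a", "/docs/b")
assert "/docs/a" not in store and store["/docs/b"] == "draft text"
```

The lack of atomicity in the two-step MOVE is exactly what makes these actions “so much harder to model” in a pure state-transfer style.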

SQL Data Services (SDS)

Jeff Currier’s SDS coding examples – Part 1 (C# & ADO.NET) post of 3/29/2009 shows an example of using ADO.NET’s SqlClient class to connect to an SDS user database with HTTPS, add a tbl_Person table and INSERT four rows with T-SQL. Jeff explains that user databases are “where your data resides.” SDS introduces the concept of a logical master database, which stores the metadata for all your user databases in a specified geolocation.

Jeff’s SDS Java JDBC examples post performs the same operations with Java 1.6, NetBeans as the editor, and the latest JDBC driver for SQL Server. Unless you’ve been granted access to the new SDS configuration, you can’t execute either code sample.

James Hamilton’s Cloud Computing Economies of Scale post of 3/28/2009 includes a summary of his keynote of the same name, presented at the Self Managing Database Systems workshop, which is part of the International Conference on Data Engineering in Shanghai. Much of James’s session applies far more to SDS than to Amazon EC2.

James’s follow-up Grand Challenges in Database Self-Management post of 3/29/2009 covers the workshop’s closing panel of the same name, which was tasked with “identify[ing] one substantial open problem related to self-managing databases - something that people interested in this area should be working on. Feel free to define "database" broadly.” James’s slides for the panel discussion, which are interesting in the context of the new SDS, are here.

Nigel Ellis provides a sample code snippet for Accessing SDS From PHP in this 3/27/2009 post and adds:

I’ve just returned from MIX2009 where I announced our new relational service in my What's New with SQL Data Services talk. You can watch the full video and download the slides here.

During my talk, I demonstrated running an application on Windows Azure using PHP to interact with a database hosted in our new relational SQL data service. I used the unified ODBC support built in to PHP to interact with SDS.

Jamie Thomson’s SQL Data Services and Entity-Attribute-Value models post of 3/12/2009 takes issue with Jeffrey Schwartz’s question and Niraj Nagrani’s answer in Jeff’s UPDATED: Microsoft Exec Explains SDS About-Face article for Visual Studio Magazine.

My More Confusion about Relational SQL Data Services and the Entity-Attribute-Value Data Model post of 3/28/2009 contends that SDS and its predecessor, SQL Server Data Services, were based on the EAV data model. Although it would be possible for the SDS team to implement an EAV version, Azure Tables use the EAV data model and will have much greater scale-out capabilities than SDS v1, which will be limited to “departmental”-size databases. See my post for more details.

Patric McElroy’s Mix ’09 Update #2 of 3/20/2009 reviews Nigel Ellis’ SDS session, What’s New in Microsoft SQL Data Services (MIX09-T06F), concentrating on audience interests as represented by end-of-session questions. I was surprised that no one asked about server-side encryption for SDS’s new relational tables (see How Much Customer Demand Does the SDS Team Need to Get Encryption in SDS V1?).

Lev Novik of the SDS Team explains Accessing the New Relational SDS with REST in this 3/20/2009 post. Of course, you’ll have to wait until May 2009 for a chance to join a limited, invitational SDS CTP or later for a public CTP.

.NET Services: Access Control, Service Bus and Workflow

Clemens Vasters writes a detailed essay on the “.NET Service Bus Naming system [a]s a forest of (theoretically) infinite-depth, federated naming trees” in his Azure: Microsoft .NET Service Bus post of 10/27/2008. Although his post is about five months old, it provides developers a great introduction to why the .NET Service Bus is an important adjunct to the Azure Services Platform.

Thanks to @DotNetServices for the heads up.

• Aaron Skonnard’s CLOUD COMPUTING: Building Distributed Applications with .NET Services article from MSDN Magazine’s May 2009 issue is available. It covers:

    • Understanding .NET Services
    • Configuring the .NET Access Control Service
    • Communicating Through the .NET Service Bus
    • Building a cloud workflow

His DevWeek 2009 – it’s a wrap post of 3/26/2009 includes a link to the demos from his WCF, REST, ADO.NET Data Services, .NET 4.0, and Dublin breakout sessions.

Mike Taulty climbs on the Azure bus with Windows Azure - .NET Services and the Service Bus of 3/27/2009. Mike writes:

[O]ne thing that I definitely picked up at DevWeek is that I need to take a bunch more interest in what falls into that category of Microsoft .NET Services, namely;

  • Access Control
  • Service Bus
  • Workflow Services

so that’s been pushed onto my stack marked “TO DO” ( along with a tonne of other stuff :-) ).

Mike’s posts have a history of technical sophistication and accessible tutoring. I’m looking forward to his Azure posts.

• Justin Smith explains “a new experimental client API (TokenClient) for interacting with the Access Control Service (ACS)” in his TokenClient (Mix) introduction post of 3/24/2009. Justin writes:

The purpose of this API is to simplify the developer interaction with the ACS Security Token Service. It still uses WS-Trust on the wire, but restricts the WS-Trust options to what I believe to be the bare minimum.

Scott Watermasysk points out in his .Net Service Bus Queues post of 3/25/2009 that Clemens Vasters announced at MIX09 that .NET Services will gain RESTful Service Bus Queues in a future release. Get more details in Clemens’ Connecting Applications across Networks with Microsoft .NET Services session starting at 00:36:18.

The .NET Services Team’s .NET Services March 2009 CTP Breaking Changes Pre-Announcement post of 3/20/2009 provides early warning of breaking changes to all three of the .NET Services March 2009 CTP’s features.

Vittorio Bertocci finally posts A visual tour of the .NET Access Control service, part 2: fun with scopes and issuers on 3/17/2009 after returning from Belgium’s TechDays. Vito’s first installment of the series was A visual tour of the .NET Access Control service via Azure Services Management Console of 1/08/2009.

Live Windows Azure Apps, Tools and Test Harnesses

David Pallman upgrades his Azure Storage Explorer with the following new features as described in his Azure Storage Explorer 2.0 Now Available post of 3/28/2009:

1. New WPF-based UI with Outlook-style navigation and more polish.
2. Support for multiple storage projects, and the ability to configure projects directly in the tool instead of forcing you to edit the configuration file.
3. Ability to view pictures in blob storage as images.

Binaries and source code are downloadable from CodePlex.

• Jon Udell’s elmcity+azure project is live in Azure at http://elmcity.cloudapp.net/ on 3/27/2009 displaying events for Keene, NH; Ann Arbor, MI; Huntington, WV; and Baltimore, MD. 

• Mike Amundsen promises to post examples of an SDS to Azure Utility (kinda) this weekend.

David Pallman’s Azure Application Monitor now on CodePlex post of 3/25/2009 describes his tracking application for Azure-hosted services that he posted to CodePlex. David writes:

Azure Application Monitor has 2 parts, a library and a monitoring application. You use the library to instrument your app to capture process information to cloud storage. You use the monitoring application to view that information.

Instrumenting your apps is very easy. Add a reference to the AzureMonitorLib assembly and call the AzureMonitor.Start static method in your initialization code, providing an application name and a role name. That will cause a background thread to run, periodically capturing process information and writing it to cloud storage.
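The Start-a-background-sampler pattern David describes is straightforward to sketch. The following is a generic Python illustration with hypothetical names (not the AzureMonitorLib API, which is .NET, and with an in-memory list standing in for cloud storage):

```python
import threading
import time

samples = []  # stands in for cloud table storage

def start_monitor(app_name, role_name, interval=0.05):
    """Spawn a daemon thread that periodically captures a sample,
    mimicking an AzureMonitor.Start-style call."""
    def loop():
        while True:
            samples.append({"app": app_name, "role": role_name,
                            "time": time.time()})
            time.sleep(interval)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t

start_monitor("MyApp", "WebRole")
time.sleep(0.2)            # let a few capture cycles run
assert len(samples) >= 2   # background thread has been sampling
```

Making the thread a daemon matters: the monitor must never keep the host process alive, and sampling failures should not take the instrumented application down with them.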

Gus Perez announces Windows Azure Tools and Live Framework Tools Refreshed on 3/20/2009. The refresh applies to the March 2009 CTP of Azure and Live Framework tools. Gus writes:

We’ve just refreshed the MIX releases of the Windows Azure Tools and the Live Framework Tools. The updated versions address an issue that would cause Visual Studio to close unexpectedly if run on specific language editions of Windows (e.g., German). In addition, the Windows Azure Tools refresh includes a fix that allows targeting a different instance of SQL Server other than SQL Server Express.

There’s no reason to upgrade to the new versions if you’re not affected by these issues but we felt they were worth addressing for those who have encountered them.

These updates address the issues described in Jim’s post and Danny’s post.

Azure Infrastructure

James Urquhart’s CCIF pulls out of the Open Cloud Manifesto post of 3/29/2009 observes:

In a post to the Cloud Computing Interoperability Forum …, the original organizers of that group--Reuven Cohen, Sam Charrington, Jesse Silver, and David Nielsen--have announced that the CCIF will no longer be a signatory of the controversial Open Cloud Manifesto to be presented Monday.

Gregg Ness makes the case for network automation in his Bringing Cloud Computing Down to Earth essay of 3/29/2009, which begins:

Whether you’re a small business considering cloud services or an enterprise contemplating public or private cloud services, it pays to understand some of the technical challenges and players likely to have a significant impact on the availability, security and costs of those services. Cloud computing is a game changer, and it may also pay to know who could win or lose as IT services are decoupled from specialized hardware in specific locations.

And concludes with:

As Cisco, Microsoft, VMware, Juniper, IBM and Sun place their bets in various forms of partnership or collaboration it seems clear that whoever offers the most dynamic infrastructure with the most effective security and greatest capacity will have a strategic advantage selling to large enterprises and service providers. That advantage could put incredible pressures on those who have yet to articulate and deliver on the new vision.

My experience with the three major players in the IaaS space shows Microsoft to have the most dynamic infrastructure. Whether Azure is “The Triple Play” with security and capacity remains to be seen.

Jimmy Blake urges caution when selecting an SaaS vendor in his Beware reading the Cloud menu right to left post of 3/29/2009 and quotes Salesforce’s CFO, Graham Smith:

Graham Smith predicted that the global economic crisis may result in lower prices for Software-as-a-Service applications. At the same time Smith predicted that traditional on-premise software vendors would need to cut more tha[n] Software-as-a-Service mode and that the economic crisis is actually driving some customers to adopt a cloud-based model.

James Urquhart’s Cloud Computing: What we learned from Manifestogate post of 3/28/2009 provides a reasoned critique of the “Open Cloud Computing Manifesto” (see below). James’s points are:

  1. It's an opinion piece, not a standards proposal.
  2. Those that have publicly stated that they won't sign have the most to lose.
  3. It's probably a bad idea to release even an industry opinion piece without public commentary.
  4. It's what follows that is important here.

James agrees with me when he says in his conclusion:

I'm not convinced that a top-down formal standards approach will do anything other than repeat the mixed success of the WS-* efforts to date.

Richard Waters brings the Open Cloud Computing Manifesto to the mainstream in his Tech rivals in cloud computing clash article of 3/27/2009 for the Financial Times. Waters’ lede:

Microsoft and Amazon.com have clashed with IBM and a group of other leading technology companies over an attempt to set some broad technology principles for the coming era of “cloud computing”.

The unusual public spat points to a deeper struggle under way between some of the world’s biggest technology concerns as they try to position themselves for what is expected to be the next big thing in the tech world.

George Reese hits the nail on the head in his The Varieties of Openness Worth Wanting in the Cloud post of 3/27/2009 to the O’Reilly Radar blog. Here’s George’s list:

  1. We want a clear path to business continuity planning.
  2. We want the freedom to use the tools that work for us, not the ones a vendor mandates.
  3. We want the freedom to use our data as we see fit.

which I think is a great start.

Jimmy Blake’s The cloud is the start of a journey, not the end post of 3/18/2009 makes the point that a “positive benefit of cloud computing” is:

If business processes are flawed, the move to Software-as-a-Service changes the focus from implementing the plumbing to solving the business problem.  Businesses have now recognised the necessity to move from being IT-led to being business-led, hence the popularity of business-focused IT frameworks like ITIL and COBIT.

• RSA, The Security Division of EMC, posted a reprint of its detailed The Role of Security in Trustworthy Cloud Computing white paper on 3/27/2009. The deck of the Overview reads:

Public cloud computing introduces new stakeholders into the security equation and loosens IT's control.

They aren’t kidding.

Ben Kepes’ Telcos and SaaS – An End to End SLA? post of 3/27/2009 posits:

[T]he telcos will start to leverage end users demands for end to end SLAs (and not just uptime guarantees). With web delivered services this isn’t possible – the web being, as it is, an amalgam of multitudes of different facets. However what telcos have is an existing network that could be leveraged by way of private connects and aggregations of cloud computing offerings. To borrow (and twist) a Microsoft term – telcos are perfectly positioned to go for a S+S play – software plus services but with one of the services being end to end delivery.

• Paul Miller writes in his Windows Azure explored, in conversation with Amitabh Srivastava post of 3/27/2009:

As part of my ongoing series of conversations with those shaping the future of the Cloud Computing landscape, I spoke with Amitabh Srivastava this week. Amitabh is a Corporate Vice President at Microsoft, with responsibility for Windows Azure.

The result has just been released as a podcast during which Amitabh discusses the background to Azure, the company's intention to move from Community Technology Preview to formal release this year, and the manner in which Azure will interact with company data centres and locally deployed applications.

• Tom Lounibos describes signing up cloud computing customer SOASTA as a supporter of the Open Cloud Computing Manifesto in his The Open Cloud Debate: A Customers Perspective post of 3/27/2009:

I for one fully support this Manifesto for an “Open Cloud Platform” and ask its authors to place my company’s name as one of its supporters, and they have agreed to do so. Which suggest that the people who are pushing this “Open” initiative are at least interested in how some “actual” customers of cloud computing may be thinking, while the companies who choose not to participate have sent a message to me (and others) that perhaps they are not.

• Reuven Cohen’s Introducing the Open Cloud Manifesto post of 3/27/2009 on the Cloud Computing Journal blog announces:

The first version of the manifesto will be published Monday, March 30th to be ratified by the greater cloud community.

This post follows Ruv’s Re: Microsoft Moving Toward an Open Process on Cloud Computing Interoperability post of 3/26/2009 with a “Their 2:28 AM pre-announcement of the manifesto was a complete surprise” subtitle. Ruv also re-released Repost: Cloud Neutrality on 3/27/2009.

Apparently Ruv forgot to have Microsoft sign the NDA.

You can read version 1.0.9 of the Manifesto now from a link on Geva Perry’s The Open Cloud Manifesto: Much Ado About Nothing.

• Graeme Thickins conducts An Interview with George Reese About His New Cloud Computing Book on 3/26/2009; O’Reilly Media’s Cloud Application Architectures is currently available as a Rough Cuts eBook. Reese gave the following answer to Graeme’s question “For what types of readers did you primarily write the book? What will they get from it that they can't get elsewhere?”

The book is for people tasked with making the move into the cloud and guiding them through that move. I start by establishing what the cloud means from my perspective and what its value is to an organization. The book covers how you evaluate what makes sense to move into the cloud and, once the decision is made, the security, availability, and disaster recovery planning necessary to operate at an enterprise level in the cloud.

Reese claims “a body of experience in putting transactional database applications into the Amazon Cloud.”

• Wally McClure is working on a live Azure service and explains some of the problems he encountered in More things that I have learned with Azure of 3/26/2009 and eTag error in Windows Azure of 3/25/2009.

• Joe Wilcox rings in with Cloud Manifesto: Is Microsoft Afraid of Rain? on 3/26/2009 and GigaOm’s Stacey Higginbotham contributes her $0.02 with Thunder in the Cloud Over Openness of the same date.

• Steven Martin’s Moving Toward an Open Process on Cloud Computing Interoperability post of 3/26/2009 takes on a purported secret Cloud Manifesto:

Very recently we were privately shown a copy of the document, warned that it was a secret, and told that it must be signed "as is," without modifications or additional input. …

To ensure that the work on such a project is open, transparent and complete, we feel strongly that any "manifesto" should be created, from its inception, through an open mechanism like a Wiki, for public debate and comment, all available through a Creative Commons license. After all, what we are really seeking are ideas that have been broadly developed, meet a test of open, logical review and reflect principles on which the broad community agrees. This would help avoid biases toward one technology over another, and expand the opportunities for innovation.

For more details on this tempest in a teacup, see my “Secret” Cloud Computing Manifesto Dustup post of 3/26/2009.

Dinesh Kulkarni (of LINQ to SQL fame) contributes .NET RIA Services Resources (for March 2009 Preview) with a data-sources emphasis on 3/26/2009.

• Nikhil Kothari posted .NET RIA Services MIX '09 Talk - Slides + Code on 3/19/2009. His article also includes a detailed overview of his demos for T41F - Building Data-Driven Applications in ASP.NET and Silverlight, which include Azure data sources. Here’s Nikhil’s definition of this new technology:

.NET RIA Services builds on Silverlight 3 on the client and ASP.NET on the server to simplify building n-Tier data-driven applications. First, .NET RIA Services provides a prescriptive CRUD-based pattern for authoring your application/domain/business logic, i.e. queries, operations, rules for authorization, validation etc. on the middle tier and makes that functionality easily consumable from your presentation tier in the form of bindable data. Second, on the presentation tier, we're providing new data controls such as a DataSource, enhanced DataGrid and DataForm/DataPager controls for mainline data/LOB scenarios. Third, we're also providing higher level building blocks such as authentication, cross-tier user state and the like.

It appears to me that .NET RIA Services provides a RAD approach to data-driven applications for Azure clients.

@Azure, which claims to be the "Official Azure Services Platform account - covering Windows Azure, SQL Data Services & .NET Services - news/links/info" on Twitter, said in a 3/25/2009 Tweet: "Windows Azure tokens now freely available, DM if you have issues and need a token." (Emphasis added.)

David Lemphers recommends Design [Azure Services] for Failure, Growth and Distribution! in this 3/24/2009 post. Strangely, Dave makes no mention of Azure in his post.

Steve Martin’s Windows Azure and Windows Server - Licensing Model post of 3/23/2009 clarifies the original SSDS licensing terms with the following:

We don’t envision something on our price list called “Windows Azure” that is sold for on-premises deployment. Some implementation details aren’t going to be practical for customers, such as our global data-center hardware design and large scale multi-tenancy features which are integral to Windows Azure and the Azure Services Platform. Why? We will continue to evolve Windows Server and System Center focusing significantly on technologies like virtualization, app and web server capabilities, single-pane management tools for managing on-premise and cloud in the same way, etc. which extend the enterprise data center in significant ways. We’ll continue to license Windows Server and System Center (and therefore the shared innovation derived from Windows Azure) to hosters through our SPLA program.

Mary Jo Foley confirms a well-known fact in her Microsoft: No on-premise Azure hosting for business users post of 3/23/2009. Microsoft has maintained since it announced SSDS at last year’s MIX 08 conference that there would be no on-premises version of the Azure Services Platform.

Steve Nagy delivers an easily readable description of The Azure Fabric Controller in this 3/23/2009 post, which links to his What Is: The Azure Fabric and the Development Fabric post of the same date.

Simon Davies explains How I got php running in Windows Azure with the March 2009 CTP in this 3/20/2009 post, which complements his earlier Dynamic Languages and Windows Azure post of 2/17/2009.

David Chappell updates and expands his original Microsoft-sponsored whitepaper about the Azure Services Platform in his Introducing Windows Azure post of 3/19/2009. The new version focuses entirely on Windows Azure and includes new details on March 2009 CTP features. (Thanks to Sam Gentile for the heads up.)

Cloud Computing Events

• Bernard Golden’s Cloud Computing Meets Washington post of 3/26/2009 reports on a Google-sponsored cloud computing panel marking the release of a new, Google-commissioned report, Envisioning the Cloud: The Next Computing Paradigm (see John Foley’s post about the report in the “Other Cloud Computing Platforms and Services” section.) Bernard writes:

After presenting the slides, during which I made some observations about cloud issues and opportunities, the floor was opened for questions. I would say that half of the questions revolved around data security and privacy. Many in the audience were familiar with current government laws and regulations relating to these issues, but have not yet begun to consider how cloud computing will impact them (Heyward commented that today's laws are based on a mid-80s computing environment).

This is a constant refrain at cloud-computing conferences targeted at government agencies. Bernard’s essay on the topic is a worthwhile read, but I’m concerned that his piece doesn’t contain a single reference to Google’s sponsorship of the report.

Gartner Says Worldwide Cloud Services Revenue Will Grow 21.3 Percent in 2009 in this 3/26/2009 press release for its three Gartner Outsourcing Summits to be held in Las Vegas, London and Tokyo:

Worldwide cloud services revenue is on pace to surpass $56.3 billion in 2009, a 21.3 percent increase from 2008 revenue of $46.4 billion, according to Gartner, Inc. The market is expected to reach $150.1 billion in 2013. …

Business processes delivered as cloud services are the largest segment of the overall cloud services market, accounting for 83 percent of the overall market in 2008. The segment, consisting of cloud-based advertising, e-commerce, human resources and payments processing, is forecast to grow 19.8 percent in 2009 to $46.6 billion, up from $38.9 billion in 2008. …

While much of the publicity for cloud computing currently centers on systems infrastructure delivered as a service, this is still an early-stage market. In 2008, such services accounted for only 5.5 percent of the overall cloud services market and are expected to account for 6 percent of the market in 2009. Infrastructure services revenue was $2.5 billion in 2008 and is forecast to reach $3.2 billion in 2009.

Additional information is available in the Gartner report “Forecast: Sizing the Cloud; Understanding the Opportunities in Cloud Services” (US$1,495.)

Mark Koenig and Bill McNee, researchers for Saugatuck Technology, attended the CloudForce Tour 2009 in New York City and released Salesforce Stakes Claim in the Cloud (requires registration) on 3/26/2009. The report analyzes Salesforce’s integration of social computing (Salesforce CRM for Twitter) with its enterprise business applications.

Reuven Cohen reviews the Strategies and Technologies for Cloud Computing Interoperability (SATCCI) workshop held in conjunction with the Object Management Group (OMG) March Technical Meeting at the Hyatt Regency Crystal City, Arlington, VA on March 23, 2009 (see Windows Azure and Cloud Computing Posts for 3/16/2009+.) Ruv writes:

NIST announced the creation of "Cloud Interoperability Profile" for federal-compliant cloud infrastructures with the goal of creating a standardized profile for cloud computing within the federal government. I'm told this is a big step for the agency.

and provides a brief review of other presentations and announcements at the workshop.

My Azure-Related Sessions at MIX 09 (Updated) post was updated 3/23/2009 with links to all Azure-related sessions.

Guy Burstein’s Download MIX09 Sessions and Watch Offline post of 3/20/2009 provides a formatted list of all MIX09 session and keynote videos that you can automatically download for future review by clicking a “Download Selected with Free Download Manager” menu command.

Other Cloud Computing Platforms and Services

•• Srini Chari posted a link on 3/28/2009 in the Cloud Computing Google Group to the March 2009 “Confronting the Data Center Crisis: A Cost - Benefit Analysis of the IBM Computing on Demand (CoD) Cloud Offering” white paper that IBM sponsored. The paper includes an overview of the benefits of cloud computing and details of the following three customer case studies:

    1. Top International Wealth Management Savings Company – New Capability
    2. Major New York Based Financial Conglomerate - Faster Time to Results
    3. Ingrain Rocks-Solution Provider for Petroleum E&P - New Business Model

•• Phil Rack’s Cloud Computing and Integration post of 3/28/2009 and earlier Cloud Computing and BI post discuss the applicability of Cloud Computing to business intelligence (BI) projects. Phil’s post discusses Mathematica’s service for EC2, but strangely omits any reference to SAS’s forthcoming US$70 million cloud computing facility, which “will provide the additional data-handling capacity needed to expand SAS’ OnDemand offerings and hosted solutions.” (Phil is an SAS consultant.)

Microsoft has been promoting SQL Reporting Services and SQL BI Services (sometimes called SQL Analysis Services) as future features of the Azure Services Platform since announcing SSDS at MIX 08.

Jason Ipock compares features of Microsoft Azure, Google App Engine, Amazon Web Services and Rackspace CloudSites, CloudFiles and Cloud Servers services in his Reaching For The Clouds post of 3/24/2009. Jason provides narrative descriptions and the most detailed tabular comparison I’ve seen so far.

Craig Balding asks Compliance as a Service: Does It Exist? in this 3/27/2009 post and invites readers to provide examples of CaaS services. One example is a credit card payment gateway service provider that Craig discusses in his What Does PCI Compliance in the Cloud Really Mean? post of 3/14/2009. In the earlier post, Craig takes on Mosso’s somewhat specious claim to have “PCI enabled” a Cloud Sites customer that needed to accept online credit card payments in return for goods (i.e., a merchant).

Alan Wilensky’s The Strategist: Mitigating Cloud Computing Client Services Risk via Trusted, Blind API Brokers - Part IV post of 3/27/2009 asks:

Other than the actual insurance underwriting and policy sales, is there a real business model here in operating the technical services pool of a blind trust API broker/ Data mirroring / continuity services for the insurance industry? How big ? [Alan’s emphasis.]

Here’s Alan’s answer:

Oh yes, oh my G-d yes. I am writing this series because I got far enough in my work for the last client, that I did see the foggy future in a way that mature analysts sometimes do.

How big? I believe that operating the Trusted Services Pool will be worth about 60 - 120 million annually when it hits it stride.

• Larry Dignan reports Amazon Web Services: No Open Cloud Manifesto for us on 3/27/2009, quoting an “Amazon spokeswoman:”

We just recently heard about the manifesto document.  Like other ideas on standards and practices, we’ll review this one, too. Ideas on openness and standards have been talked about for years in web services. And, we do believe standards will continue to evolve in the cloud computing space. But, what we’ve heard from customers thus far, customers who are really committed to using the cloud, is that the best way to illustrate openness and customer flexibility is by what you actually provide and deliver for them. …

In any event, we do believe that standards will continue to evolve and that establishing the right ones, based on a better understanding of what is needed, will best serve customers.

• Dava Baran’s IBM Getting SUN For Free - Massive Layoffs To Follow post of 3/26/2009 posits that:

[A]t $6.5 billion IBM is essentially getting SUN for FREE. Here is the analysis: prior to the takeover bid SUN had a market capitalization of about $3 billion, cash of $2 billion for a total of about $5 billion. IBM’s offer is $1.5 billion more. According to the source SUN’s campus in Santa Clara and Palo Alto are speculated to be worth about $1 billion (which IBM will most likely sell). So IBM is essentially paying $500 million for SUN’s customer list.

• Dion Hinchcliffe’s Cloud computing and the return of the platform wars analysis of 3/26/2009 credits Sun with providing the evidence of a coming cloud platform war:

Sun’s announcement last week that its new Cloud Compute Service would be API compatible at a storage level with Amazon’s popular S3 service is probably the first real evidence of the coming platform war in the cloud computing space. It’s a war that’s likely to be significant and protracted given the number of players that are lining up for a shot at what’s sizing up to be the next big development in the evolution of computing.

Here’s a classic Hinchcliffe illustration of cloud computing growth:

Dion concludes:

[T]he clouds are not waiting and it’s been the small and medium sized businesses that have had no resources to build their own world-class data centers that have been blazing the trail so far. Since software powers so much of the world today, one interesting notion is whether cloud computing will be the springboard that allows them to create agile 21st century businesses that eclipse older, traditional firms that can’t adapt. Combined with 2.0 business models, this is quite possible.

Alan Wilensky continues his series on underwriting business continuity risks with his Rating and Certifying the Cloud Hosting and Web Application Providers. Part III post of 3/25/2009. With regard to cloud host certifications, Alan writes:

Certifications are invasive, involving on site auditing and live tests that determine specific functionality. ISO, SAS 70, and SystTrust, are some of the current examples of certs that are currently in vogue for typical data center assurances. Unfortunately, none of these standards, as good as they are, really addresses all of the issues underwriters need to individually insure a client of a cloud host, SAAS or PAAS provider. In the case of PAAS start ups, it’s a messy process to accurately quantify risks when so much muscle and blood has been invested in cutting over incumbent processes - and the fact that for some reason, the PAAS providers, taken as a group, are some of the shakiest kids on the block.

and goes on to describe what areas insurance carriers want verified.

Alan Wilensky’s The Strategist: Underwriting Business Continuity in the Cloud. Part II. of 3/23/2009 expands on his earlier The Strategist: Certification Services for the Cloud - Reliability, Continuity, and Indemnification Against Outages post of 3/8/2009. In Part II, Alan writes:

Insuring business continuity was a game of physical premises insurance, which evolved into records and facilities, and now, today, optionally covers servers, workstations, software, and systems. It is a mishmash of offerings, and many industries have varying degrees of dependencies on internal IT infrastructure. The insurance products for Small and Medium businesses are semi-flexible, while mega enterprises have core needs that exceed what professional lines can provide, and instead rely on customized underwriting for the Fortune 1000.

Maureen O’Gara reports NetSuite Mimics Salesforce Cloud on 3/25/2009. She writes:

Salesforce.com wannabe NetSuite, Larry Ellison's other company, has bundled up a bunch of widgetry much like Salesforce's Force.com platform-as-a-service and dubbed it the SuiteCloud Ecosystem.

As one might infer from the name, the on-demand products, development tools and services are designed to let customers run their business operations in the cloud and let software developers build mission-critical apps on NetSuite's ERP/CRM/e-Commerce underpinnings, which means ISVs can embed NetSuite's feature set into their apps.

Apparently Larry finally groks the cloud.

John Foley’s Uncle Sam's Cloud Computing Dilemma post of 3/24/2009 has the following lede:

Federal agencies are under pressure to deploy cost-effective IT systems quickly, and cloud computing is one of the solutions favored by the Obama Administration. Yet, would-be cloud users in government will have to navigate a thicket of security requirements and other guidelines, warns one expert.

In a slide presentation shared with attendees at a cloud interoperability workshop yesterday in Arlington, Va., John Curran, CTO and COO of ServerVault, tackled the question of what cloud vendors could do to let federal agencies use cloud services while complying with federal IT policies. …

You can get more detail on those requirements from Curran's downloadable presentation here.

Michael Hickins contends that Cloud Computing FUD Muddies SaaS Waters in this 3/24/2009 post:

Vendors adopting cloud computing strategies are disparaging SaaS in a manner that could jeopardize their entire service-based business model just as it's starting to gain larger acceptance in the market.

He then goes on to criticize “the FUD-ful aspects of the statement made by Don Klaiss, CEO of open-source enterprise resource planning (ERP) vendor Compiere, when it announced that its product will be available on Amazon … cloud platform.”

John Foley’s Report: Cloud Computing Could Be Bigger Than The Web post of 3/20/2009 links to the Google-sponsored "Envisioning The Cloud: The Next Computing Paradigm" white paper by Marketspace’s Jeffrey Rayport and Andrew Haywood. John writes:

[The authors] define the cloud in its broadest sense as including everything from Hotmail and Facebook to Amazon Web Services and Google App Engine. They present cloud computing in big, bold terms, arguing that it's in the national interests of the United States to adopt pro-cloud policies. "It's high time to ensure that the cloud's promise as an opportunity for U.S. wealth generation, job creation, and business and technology leadership does not pass our country by."

Kevin L. Jackson points out Booz Allen Hamilton‘s entrance as a “cloud transition guidance provider” in his Booz Allen Hamilton Lays Out Path To Cloud post of 3/23/2009. He links to BAH Principal Rod Fontecilla’s "Cloud Computing: A Transition Methodology" article, which proposes a “two phased approach” to cloud computing.

Dimitry Sotkinov’s Gartner Highlights 5 Cloud Start-Ups post of 3/23/2009 contains a link to Gartner’s “Cool Vendors in Cloud Computing Management and Professional Services, 2009” paper, which covers:

    • Appirio – which provides professional services and integration solutions based on salesforce.com for various web 2.0 services, including Google Apps, Facebook, and Amazon. They are doing relatively well, and their 2,000 customers include Japan Post Office, Qualcomm, Genentech, and Author.
    • CohesiveFT – a cross-platform virtual appliance packaging solution for VMware, Xen, Parallels, and Amazon EC2. They also have a product called VPN-Cubed which allows companies to set up secure networking between their servers across the clouds.
    • Hyperic – application and infrastructure monitoring solution for Amazon EC2 (I wrote about them in my notes from Cloud Computing Expo last year.)
    • RightScale – cross-cloud automation engine and a set of application templates.
    • Ylastic – browser-based Amazon-management interface for mobile phones and other devices.

Dave Graham says IDC analyst Frank Gens’ “Clouds and Beyond: Positioning for the Next 20 Years in Enterprise IT … “was definitely the highlight of the morning general sessions for me” in his IDC Directions – Boston 2009: The Future is Cloud(y) – Part 1 post of 3/20/2009. Dave also reviews three other non-cloud sessions.

Paul Miller asks Can we draw a map of the Cloud ? on 3/20/2009 and points to Troy Angrignon’s post of 3/16/2009 on SandHill.com, in which he shares his list of “cloud computing giants” and his barely readable (in full-screen mode) Cloud Computing Ecosystem Map v1.0.

Paulo Calçada claims that “EaaS - Everything as a Service, doesn’t exist and it won’t need to. We already have: IaaS - Infrastructure as a Service, PaaS - Platform as a Service, and finally, SaaS - Software as a Service” in his EaaS: Everything as a Service - The next big buzzword? post of 3/20/2009. Paulo and his cohorts man the CloudViews.org blog that supports the Cloud Computing Conference 2009 being held on May 28 – 29, 2009 in Porto, Portugal.

LINQ and Entity Framework Posts for 3/23/2009+

• Updates: 3/28/2009 and 3/29/2009

Note: This post is updated daily or more frequently, depending on the availability of new articles.

Entity Framework and Entity Data Model (EF/EDM)

Julie Lerman takes on many-to-many associations again in her Inserting Many to Many Relationships in EF with or without a Join Entity post of 3/29/2009.

Binary Bob’s Disconnected Clients, Changed data and Entity Framework post of 3/25/2009 discusses the use of EF with Silverlight apps. Bob writes:

Entity Framework is here to stay as Microsoft is steering future development towards its use. But if you use Entity Framework in your Silverlight apps, you may bump up against some hurdles trying to persist changes made to the Entities on the client back to the store.

and decides to use .NET RIA Services to make the connection. Bob’s More than one way to skin an EDM query post of 3/22/2009 provides a diagram that helps you “to conceptualize the whole framework and understand the pieces of the stack, what they offer and how they interrelate.”

Lynn Eriksen asks Ado.Net Data Services – Are you experienced? on 3/24/2009 and proceeds to share his experiences when deploying his first EF application.

Alex James’ Index of Tips of 3/25/2009 has links to the first nine of his useful tips for working with Entity Framework v1 and LINQ to Entities.

Jeff Derstadt explains in his Self-Tracking Entities in the Entity Framework post of 3/23/2009 how self-tracking entities will work in EF v2. Jeff describes them:

Self-tracking entities know how to do their own change tracking regardless of which tier those changes are made on. As an architecture, self-tracking entities falls between DTOs and DataSets and includes some of the benefits of each.

Simon Segal reports on his Entity Framework Profiler – Progress as of 3/22/2009. His earlier A Profiler for the Entity Framework – Proposed post of 3/21/2009 explains:

The .ToTraceString() method and the SQL Server Profiler are just too disconnected to provide me with a solution to analysing and logging the SQL Query output produced by my Entity Framework code. To change that, I am currently working on a profiler for the Entity Framework. Of course the inspiration to start on this project came from watching what Ayende was doing with NHProf (which is incredible), however I should point out that this project does not seek in any way nor set out to accomplish the same goals; it is my attempt at building myself a usable tool for a job, and if it can help anyone else then that’s great too.

Joel Reyes tackles READERS COMMENTS: performance of EF vs. LINQ to SQL in this generalized post of 3/20/2009 that begins:

Original Comment: “EF ObjectQueries seem to create some pretty poorly performing queries compared to LINQ in my experience. 2. If you are saying Entity SQL performs really well, what are you comparing this to, ObjectQueries or other data access technologies?”

and ends:

All that said, we have been continuously investing in improving the SQL generated by EF, by improving the query trees we generate and by improving the provider we own: the one that generates T-SQL for SQL Server. We have also invested substantially in fixing LINQ to SQL bugs and in improving things like how both technologies handle string parameters. Hopefully, the fruits of these investments will be apparent in the versions that we will release with .NET 4.0 and Visual Studio 2010.

In fact on April 08th we will have a webcast exploring the new additions and improvements made to EF in .NET 4.0. Stay tuned!

Alex James’ Foreign Keys in the Entity Framework post of 3/16/2009 notes the controversy surrounding exposing FK values in the Entity Data Model v2 but decides:

Those who want FKs in their Entities should be able to do so, so they can gain all the benefits that having FKs in your Entities undoubtedly provide. On the other hand customers who are concerned that having FKs in their Entities in some way pollutes their model can continue to use the type of associations we had in .NET 3.5 SP1.

Hoorah! I’ve been clamoring for optional surfacing of FK values since the initial EF v1 preview.

LINQ to SQL

Chris Rock’s Implicit conversion from data type sql_variant to uniqueidentifier is not allowed. post of 3/23/2009 describes how to recover from the dreaded "Implicit conversion from data type sql_variant to uniqueidentifier is not allowed. Use the CONVERT function to run this query" exception.

Sergey Zwezdin explains the importance of applying the DataServiceKey attribute when writing the constructor for an entity’s primary key(s) in his ADO.NET Data Services and LINQ-to-SQL and ADO.NET Data Services и LINQ to SQL: continuation posts of 3/23/2009 and 3/24/2009.
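As a minimal sketch of the technique Sergey discusses (the entity and property names here are hypothetical), the DataServiceKey attribute from System.Data.Services.Common tells the Astoria runtime which property to treat as an entity’s key when the type doesn’t follow the default Id/TypeNameID naming convention:

```csharp
using System.Data.Services.Common;

// Hypothetical LINQ to SQL entity exposed through ADO.NET Data Services.
// Without [DataServiceKey], the runtime cannot infer the key for types
// whose key property isn't named Id, ID, or <TypeName>ID.
[DataServiceKey("CustomerID")]
public partial class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
}
```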

Damien Guard starts a LINQ to SQL tips and tricks series with LINQ to SQL tips and tricks #1 of 3/16/2009: Loading a delay-loaded property. Damien writes:

LINQ to SQL lets you specify that a property is delay-loaded meaning that it is not normally retrieved as part of normal query operations against that entity. This is particularly useful for binary and large text fields such as a photo property on an employee object that is rarely used and would cause a large amount of memory to be consumed on the client not to mention traffic between the SQL and application.

Other topics include “Multiple entity types from a single stored procedure” and “Intercepting create, update and delete operations.”

Thanks to Damien for the heads up on Twitter.
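As a rough sketch of the mechanism Damien describes (the table and member names are hypothetical), LINQ to SQL maps a delay-loaded column through a System.Data.Linq.Link&lt;T&gt; storage field:

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Employees")]
public partial class Employee
{
    // The Link<T> storage field is what makes the column delay-loaded:
    // the Photo value is fetched from the database only when the
    // property is first read.
    private Link<Binary> _Photo;

    [Column(Name = "EmployeeID", IsPrimaryKey = true)]
    public int EmployeeID { get; set; }

    [Column(Storage = "_Photo", DbType = "Image", UpdateCheck = UpdateCheck.Never)]
    public Binary Photo
    {
        get { return _Photo.Value; }
        set { _Photo.Value = value; }
    }
}
```

See Damien’s post for the details of forcing such a property to load eagerly when you do want it up front.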

LINQ to Objects, LINQ to XML, et al.

• Bart de Smet’s ExceLINQ – Not Your Typical LINQ Provider post of 3/28/2009 describes his LINQ to Excel provider that generates an Excel chart from worksheet values; his post includes pseudocode for the provider.

You can download the prototype here.

• Arjan Einbu describes an extension method for the TextReader class in his Parsing textfiles with LINQ (or LINQ-to-TextReader) post of 3/27/2009.

• Jacob Proffitt describes his work on a LINQ provider for BlogEngine.net in his Multiple Blog Data post of 3/27/2009. Jacob created a new beta release at the project’s homepage (CodePlex) for BlogEngine.net users.

Michaël Hompus’ Generate integer lists using LINQ post of 3/25/2009 shows you how to use LINQ’s Enumerable.Range() method to return an IEnumerable<int> collection containing the corresponding range of values.
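For reference, a minimal example of the method in question (the variable names are mine, not Michaël’s):

```csharp
using System;
using System.Linq;

class RangeDemo
{
    static void Main()
    {
        // Enumerable.Range(start, count) yields an IEnumerable<int>
        // containing count sequential integers beginning at start.
        var ids = Enumerable.Range(5, 4); // yields 5, 6, 7, 8

        foreach (var id in ids)
            Console.Write(id + " ");
    }
}
```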

Alexandra Rusina, a Microsoft programming writer, explains How to use LINQ methods to compare objects of custom types in this 3/25/2009 post to the C# Frequently Asked Questions blog.

Charlie Calvert’s Essential LINQ Published post of 3/21/2009 describes his pleasure in receiving “a box full of the first copies of his most recent book.”

I know the feeling.

Eric White’s Announcing the Release of PowerTools for Open XML V1.1 post of 3/19/2009:

PowerTools for Open XML is an open source project on CodePlex that makes it easy to create and modify Open XML documents using PowerShell scripts. I introduced the PowerTools for Open XML in June 2008 in the post, Automated Processing of Open XML Documents using PowerShell.

ADO.NET Data Services (Astoria)

• Rob Fonseca-Ensor’s Use T4 code generation to add INotifyPropertyChanged to ADO.Net Data Services (Astoria) Entities post of 3/26/2009 begins:

The current version of ADO.Net Data Services (Astoria) for Silverlight generates some extremely Plain Old CLR Objects. As Shawn Wildermuth notes, you don’t even get INotifyPropertyChanged support. You might need this if you want to databind directly to your Astoria entities, but if you’re doing serious Silverlight work, it pays to wrap your entities up in a ViewModel. Even if you’re doing this, having INotifyPropertyChanged support can really help - for example, instead of saving every single entity on your model, you only need to fire the ones that change.

Rob provides a sample T4 template that solves the problem.
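For readers unfamiliar with the pattern, here’s a minimal sketch of what such a T4 template would emit into each generated partial class (the entity and property names are hypothetical; Rob’s actual template output may differ):

```csharp
using System.ComponentModel;

// Hand-written illustration of the INotifyPropertyChanged pattern that
// code generation would add to each Astoria entity.
public partial class Product : INotifyPropertyChanged
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return; // raise only on real changes
            _name = value;
            OnPropertyChanged("Name");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```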

• Phani Raju continues his series on the ADO.NET Data Services upgrade with ADO.NET Data Services Friendly Feeds, Mapping EDM Types – I of 3/28/2009, which provides “some samples of how to map your entity properties to Atom/custom markup in the atom:entry element.”

Phani’s ADO.NET Data Services Friendly Feeds, Mapping CLR Types of 3/20/2009 includes “some samples of how to map your entity properties to Atom/custom markup in the atom:entry element.”

Sergey Zwezdin’s Debugging ADO.NET Data Services post of 3/29/2009 explains how to use the InitializeService method to write logs for Request Error messages that tell you to “See server logs for more details.”
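A common first step along the lines Sergey describes is enabling verbose errors in the service’s InitializeService method, which replaces the generic error message with exception details in the response (the service and context class names below are hypothetical):

```csharp
using System.Data.Services;

// Hypothetical Astoria service over a hypothetical NorthwindEntities context.
public class MyDataService : DataService<NorthwindEntities>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

        // Return exception details in the response body instead of the
        // generic "Request Error ... See server logs for more details."
        config.UseVerboseErrors = true;
    }
}
```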

ASP.NET Dynamic Data (DD)

Steve Naughton’s Cascading Filters – Dynamic Data Futures (Futures on Codeplex) post of 3/27/2009 is the second in the DD Cascading Filters series that began with:

Steve Naughton’s Cascading Filters – for Dynamic Data v1.0 appeared 3/26/2009. Steve writes:

This article is a follow-on from my previous article Cascading or Dependant Field Templates for ASP.Net 4.0 Preview and is the start of a series of three articles which describe Cascading Filters for:

  1. Dynamic Data (v1.0) .Net 3.5 SP1
  2. Dynamic Data Futures (Futures on Codeplex)
  3. Dynamic Data Preview 3 (Preview 3 on Codeplex)

This first version is pretty similar to the previous article on cascading FieldTemplates; here we will adapt the CascadingFieldTemplate class to facilitate our need for cascading filters.

Dave Ebbo explains Using a DomainService in ASP.NET and Dynamic Data in this 3/24/2009 post. Dave writes:

One of the big things that I discussed in my MIX talk is the new DomainDataSource control.  It is currently available in Preview form as part of ASP.NET Dynamic Data 4.0 Preview 3.  This can be confusing, because even though Dynamic Data makes use of DomainDataSource, DomainDataSource is absolutely not tied to Dynamic Data, and is fully usable in ‘regular’ aspx pages.

SQL Data Services (SDS) and Cloud Computing

This topic moved on 1/3/2009 to Windows Azure and Cloud Computing Posts for 1/5/2009+.

Miscellaneous (WPF, WCF, MVC, Silverlight, etc.)

Rob Conery announces that the MVC Storefront’s project name has changed to “Kona” in his Kona Screencast 1: It’s Baaaaack! post of 3/27/2009. Rob writes:

This may be a really dumb thing I’ve just done – but bear with me because I think it makes sense. I knew this point was going to come when I first started the MVC Storefront – the point where I’m no longer building “an MVC sample app” and I change directions to build a flexible, pluggable Open Source community application. So I decided to mark the occasion by changing the application name from MVC Storefront to Kona. This screencast is all about that change and what’s become of the MVC Storefront.

His post includes a link to a 40 MB Kona 1 Webcast.

Dinesh Kulkarni’s .NET RIA Services Resources (for March 2009 Preview) post of 3/26/2009 also includes a “Common questions and quick answers” section about .NET RIA Services.

Saturday, March 28, 2009

More Confusion about Relational SQL Data Services and the Entity-Attribute-Value Data Model

Jamie Thomson’s SQL Data Services and Entity-Attribute-Value models post takes issue with Jeffrey Schwartz’s question and Niraj Nagrani’s answer in Jeff’s UPDATED: Microsoft Exec Explains SDS About-Face article for Visual Studio Magazine.

Q. Are you basically not going to be offering SDS with the EAV tables any more?

A. We are looking into our future roadmap to make sure that Astoria [ADO.NET Data Services] can be leveraged on top of SDS and Entity Data Model continues to exist, and we will continue to provide for that through Astoria. We will continue to work with the Astoria framework and figure out how SDS can support that.

Obviously, the answer was to a question that appears not to have been asked. I can’t think of any reason that relational SDS couldn’t support the server-side pieces of ADO.NET Data Services (Astoria) v1.

Jamie contends:

Firstly, whilst the current incarnation of SDS is (underneath the ACE model abstraction) built upon an EAV model (which I think is what the interviewer was alluding to) I don’t believe it’s true to say that the ACE model with which a user interacts is an EAV model in itself.

Secondly, it is absolutely not the case that it won’t be possible to host EAV models on the future incarnation of SDS. That version of SDS will be built upon SQL Server 2008 which contains the new sparse column feature – a perfect storage mechanism for EAV models. Moreover, it’s more than possible to build EAV models on a traditional relational database (I’ve done it myself) and if you want to know how to do it then go and read Arnie Rowland’s excellent treatise on the subject at Through the Looking Glass: Elegant -or Not?.

My take is that SDS (and its predecessor, SQL Server Data Services) were based on the EAV data model.

Although it would be possible for the SDS team to implement an EAV version, there’s little need: Azure Tables use the EAV data model and will have much greater scale-out capabilities than SDS v1, which will be limited to “departmental”-size databases.

Undoubtedly, SDS will carry a surcharge over plain-vanilla (but fully elastic) Azure EAV Tables, like the SQL Server option for Windows Server running on Amazon EC2. So implementing EAV on SDS would carry a significant price penalty over native EAV with Azure Tables.

“Secret” Cloud Computing Manifesto Dustup

Update 3/28/2009: James Urquhart’s Cloud Computing: What we learned from Manifestogate post of 3/28/2009 provides a reasoned critique of the “Open Cloud Computing Manifesto” (see below). James’ points are:

  1. It's an opinion piece, not a standards proposal.
  2. Those that have publicly stated that they won't sign have the most to lose.
  3. It's probably a bad idea to release even an industry opinion piece without public commentary.
  4. It's what follows that is important here.

James agrees with me when he says in his conclusion:

I'm not convinced that a top-down formal standards approach will do anything other than repeat the mixed success of the WS-* efforts to date.

You can read version 1.0.9 of the Manifesto now from a link on Geva Perry’s The Open Cloud Manifesto: Much Ado About Nothing.

Update 3/27/2009: George Reese hits the nail on the head in his The Varieties of Openness Worth Wanting in the Cloud post of 3/27/2009 to the O’Reilly Radar blog. Here’s George’s list:

  1. We want a clear path to business continuity planning.
  2. We want the freedom to use the tools that work for us, not the ones a vendor mandates.
  3. We want the freedom to use our data as we see fit.

which I think is a great start. Tim O’Reilly says it’s “What Stallman should have been talking 10 yrs ago.”

Update 3/27/2009: Reuven Cohen’s Introducing the Open Cloud Manifesto post of 3/27/2009 on the Cloud Computing Journal blog announces:

The first version of the manifesto will be published Monday, March 30th to be ratified by the greater cloud community.

This post follows Ruv’s Re: Microsoft Moving Toward an Open Process on Cloud Computing Interoperability post of 3/26/2009 with a “Their 2:28 AM pre-announcement of the manifesto was a complete surprise” subtitle.

Apparently Ruv forgot to have Microsoft sign the NDA.

Update 3/27/2009: Larry Dignan reports Amazon Web Services: No Open Cloud Manifesto for us on 3/27/2009, quoting an “Amazon spokeswoman:”

We just recently heard about the manifesto document.  Like other ideas on standards and practices, we’ll review this one, too. Ideas on openness and standards have been talked about for years in web services. And, we do believe standards will continue to evolve in the cloud computing space. But, what we’ve heard from customers thus far, customers who are really committed to using the cloud, is that the best way to illustrate openness and customer flexibility is by what you actually provide and deliver for them. …

In any event, we do believe that standards will continue to evolve and that establishing the right ones, based on a better understanding of what is needed, will best serve customers.

Mary Jo Foley seconded Larry’s conclusion in her Amazon not joining the Open Cloud Manifesto, either post of the same date.

Steven Martin’s Moving Toward an Open Process on Cloud Computing Interoperability post of 3/26/2009 takes on an unidentified “secret Cloud Manifesto”:

Very recently we were privately shown a copy of the document, warned that it was a secret, and told that it must be signed "as is," without modifications or additional input. …

To ensure that the work on such a project is open, transparent and complete, we feel strongly that any "manifesto" should be created, from its inception, through an open mechanism like a Wiki, for public debate and comment, all available through a Creative Commons license. After all, what we are really seeking are ideas that have been broadly developed, meet a test of open, logical review and reflect principles on which the broad community agrees. This would help avoid biases toward one technology over another, and expand the opportunities for innovation.

In response to Steve’s request, Sam Johnston announced a new Manifesto in the Cloud Computing Community wiki (see his comment at the end of Todd Bishop’s Microsoft criticizes secret drafting of cloud-computing manifesto post of the same date). The Manifesto lays out 10 rather innocuous “motherhood and apple pie” principles and notes that “It is complementary to the Cloud Computing Bill of Rights, which describes the rights of cloud computing users.”

The flap also caused eWeek’s Darryl Taft to chime in with a timely Microsoft Calls for Open Cloud Standards article that begins with “If Microsoft is the pot, what color is the kettle?” and concludes:

I know [Microsoft’s] Ramji and Cooney are sincere about their interaction with open source. And Martin, a straight-shooting Texan, is just as sincere -- if not more -- about the issue of openness and interoperability. All three come from Java or open-source backgrounds.

So, rather than seeing the griping of a company grasping for shortcuts because it's playing catch-up, perhaps what we're seeing is simply a different Microsoft.

Techmeme picked up the story later in the morning with links to The Register, Silicon Alley Insider, eWeek and OakLeaf Systems.

Sam credits Reuven Cohen, a cloud-computing thought leader and founder of the Cloud Computing Interoperability Forum, as the source of the “secret manifesto.” This turns out to be the “Open Cloud Manifesto,” as described in Ruv’s Introducing the Open Cloud Manifesto thread in the CCIF Google group, which leads off with:

Over the last few weeks I have been working closely with several of the largest technology companies and organizations helping to co-author the Open Cloud Manifesto. [Emphasis added.]

Later Thursday morning, @Ruv and @Beaker (Christofer Hoff) fired off a Tweet volley about the issue and David Linthicum posted a 15-minute How to make cloud computing interoperability work podcast.

Ruv writes about Sam’s wiki in a comment:

We [presumably Ruv and Sam] have been working on this document for weeks and it was originally supposed to be announced today, but got pushed off until Monday to make time for some last minute alterations. I am not sure what his [presumably Sam’s] intentions are. I will say his timing does seem rather suspect.

After the dust settles, I hope these folks don’t end up proposing a mess like the WS-* “standards.”