Wednesday, October 28, 2009

Deploying Production Projects to the Cloud with the New Windows Azure Portal

The Windows Azure team sent the following message to all Windows Azure account holders on 10/9/2009:

Windows Azure CTP participant,

You are receiving this mail because you have an application or storage account in Windows Azure in the “USA - Northwest” region.  Windows Azure production applications and storage will no longer be supported in the ‘USA-Northwest’ region.  We will be deleting all Windows Azure applications and storage accounts in the “USA - Northwest” region on October 31st.

To move your application/storage, first delete the project using the “Delete Service” button.  Then recreate it, choosing the “USA - Southwest” region.  (It may take a few minutes for your previous application and storage account names to become available again.)


Note that deleting your storage account will destroy all of the data stored in that account.  Copy any data you wish to preserve first.

If you would like help migrating your project or have any other concerns, please reply to this mail.

- The Windows Azure Team

I started migrating these sample applications from Cloud Computing with the Windows Azure Platform on 10/26/2009:

This tutorial supplements Chapter 2’s “Deploying Applications and Services to the Azure Cloud” section, which starts on page 42, and Chapter 4’s “Creating Storage Accounts” section, which starts on page 64.

Following is the step-by-step process for using the renovated Windows Azure portal to re-create Windows Azure apps in Microsoft’s recently activated South Central US (San Antonio, TX) data center. The process is based on the Azure Table Services Sample Project and the Windows Azure SDK/Tools for Visual Studio July 2009 CTPs.

Delete the Old and Create a New Hosted Service

1. Navigate to the portal’s Provisioning page at https://windows.azure.com/Cloud/Provisioning/Default.aspx and sign in with your Windows Live ID to open the My Projects page:

The SQL Server Data Services (SSDS) team assigned my PDC08 CTP account prior to PDC 2008. Accounts are also known as projects and subscriptions.

2. Click the Project Name item to open the list of Hosted Service and Storage Accounts. You must delete a service in the original data center (North West US, for this example) before you can create a service with the same DNS name in the new data center (South Central US). If a Hosted Service instance is running, Suspend it and then Delete the instance. Finally, click the Delete Service button to delete the Hosted or Storage Service:

3. Click the New Service link to open the Project | Create a New Service page:

4. Click the Hosted Services link to open the Create a Service | Service Properties page. Type an arbitrary Service Label and a brief Service Description:

5. Click Next to open the Create a Service | Hosted Service page. Type a valid third-level DNS name in the Service Name text box and click Check Availability. When moving a service, use the same name as the original version. (You might need to wait a few minutes for the service deletion to propagate through the data center.)

6. You’ll probably want all (or at least most) of your Hosted Service and Storage Accounts to share the same affinity group, so select the Yes, This Service Is Related to Some of My Other Hosted Service or Storage Accounts option button.

7. If this is the first service or account in the Affinity Group, click the Create a New Affinity Group radio button, type a name for the new Affinity Group in the text box (USA-SouthCentral for this example), and select a data center location in the Region list:

If you’ve already created a new Affinity Group, select the Use Existing Affinity Group radio button and select the new Affinity Group in the list.

8. Click Create to create the new hosted service, oakleaf1host at oakleaf.cloudapp.net for this example:

Delete, Recreate and Test the Storage Account for the Project

9. It’s a good practice to test your project by running it in the Development Fabric with Cloud Storage, so back up the Storage Account’s data and then delete the Account in the original data center. OakLeaf sample applications include code to generate required data.

The following steps create a new Storage Account in the Affinity Group you created in step 7.

10. Create a new storage account by clicking the New Service link to open the Project | Create a New Service page (see step 3’s screen capture) and click Storage Account to open the Create a Service | Service Properties page.

11. Type an arbitrary name, oakleaf1store1 for this example, in the Service Label text box and a brief Service Description:

12. Click Next to open the Create a Service | Storage Account page. Type and test the third-level DNS name for the account, which usually (but not necessarily) is the same as that for the service.

13. Select the Yes, the Service Is Related … and Use Existing Affinity Group radio buttons and select the Affinity group you created in step 7:

 

14. Click Create to create the new storage service:

15. Copy the Primary Access Key to the clipboard, open the project’s ServiceConfiguration.cscfg file, and replace the previous AccountSharedKey value with the key you just copied:
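For reference, here’s a hedged sketch of the relevant ServiceConfiguration.cscfg fragment. Only the AccountSharedKey setting is confirmed by the step above; the serviceName, role name, and the other setting names shown are typical of the July 2009 CTP samples and may differ in your project:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="SampleWebCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- Paste the Primary Access Key you copied from the portal here -->
      <Setting name="AccountName" value="oakleaf1store" />
      <Setting name="AccountSharedKey" value="[Primary Access Key from the portal]" />
      <Setting name="TableStorageEndpoint" value="http://table.core.windows.net" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>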

16. Press F5 to run the project in the Development Fabric with Cloud Storage. The oakleaf.cloudapp.net project opens with an empty Customers table, so the Customers DataGrid isn’t visible:

17. Click the Create Customers button to populate and display the paged DataGrid with the first 12 of 91 customer entities in about 1/4 second:

You can learn more about the project by clicking the here link to navigate to the Azure Storage Services Test Harness: Table Services 1 – Introduction and Overview post of 11/18/2008, which includes links to the remaining six parts of the Table Services Test Harness series:

Deploy the Hosted Service from the Development Fabric to the Production Fabric

18. Open the project (\WROX\Azure\Chapter04\SampleWebCloudService\SampleWebCloudService.sln for this example) in Visual Studio 2008/2010, right-click the Cloud Service node and choose Publish to generate the ServiceName.cspkg Service Package file in the \ProjectFolder\bin\Debug\Publish folder, which also contains the ServiceConfiguration.cscfg file:

19. If the Azure Services portal isn’t open, right-click the Cloud Service node and choose Browse to Portal to open the My Projects page. Click the Project Name to open the services list, and then click the link to the hosted service you added (oakleaf1host for this example) to open its Hosted Service Name page:

20. If you want to bypass the Staging Deployment process, click Deploy to open the Service Name | Production Deployment page. Otherwise, click the bar at the right to display the Staging button.

21. Click the Application Package section’s Browse button, navigate to and select the application’s Service Package file and click Open to add the filename to the Select a File text box. Do the same for the ServiceConfiguration.cscfg file, type a name for the deployment, and click Deploy.

22. After a brief interval required to allocate a new virtual machine instance, the WebRole status changes to Initiated and a Run button replaces the Deploy button.

23. Click the Run button, which changes the status to Initializing. After a few minutes to initialize the new instance, the WebRole status changes to Started, and Upgrade, Suspend, Configure, and Delete (disabled) buttons appear:

Use the Deployment ID to identify your project when requesting deployment assistance from an Azure Team member in the Windows Azure forum.

24. Click the Web Site URL link to run the project from the Windows Azure cloud.

Tuesday, October 27, 2009

Amazon Attempts to Preempt PDC 2009 Release of SQL Azure with MySQL 5.1 Relational Database Service

For the second year in a row, Amazon Web Services (AWS) announces new and improved services a few weeks before Microsoft’s Professional Developers Conference (PDC). Last year it was Amazon Web Services Announces SLA Plus Windows Server and SQL Server Betas for EC2, which I reported on 10/23/2008.

This year, it’s Amazon Relational Database Service (Amazon RDS) Beta, announced on 10/27/2009, which delivers pre-configured MySQL 5.1 instances with up to 68 GB of memory and 26 ECUs (8 virtual cores with 3.25 ECUs each) servicing up to 1 TB of data storage. One Elastic Compute Unit (ECU) provides the equivalent CPU capacity of a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. A summary of RDS beta pricing is here.

Stay tuned for a comparison of AWS RDS and SQL Azure pricing.

Werner Vogels’ Expanding the Cloud: The Amazon Relational Database Service (RDS) post of the same date contrasts AWS’ three approaches to cloud-based databases:

AWS customers now have three database solutions available:

  • Amazon RDS for when the application requires a relational database but you want to reduce the time you spend on database management, Amazon RDS automates common administrative tasks to reduce your complexity and total cost of ownership. Amazon RDS allows you to manage your database compute and storage resources with a simple API call, and only pay for the infrastructure resources they actually consume.
  • Amazon EC2- Relational Database AMIs for when the application require the use of a particular relational database and/or when the customer wants to exert complete administrative control over their database. An Amazon EC2 instance can be used to run a database, and the data can be stored within an Amazon Elastic Block Store (Amazon EBS) volume. Amazon EBS is a fast and reliable persistent storage feature of Amazon EC2. Available AMIs include IBM DB2, Microsoft SQL Server, MySQL, Oracle, PostgreSQL, Sybase, and Vertica.
  • Amazon SimpleDB for applications that do not require a relational model, and that principally demand index and query capabilities. Amazon SimpleDB eliminates the administrative overhead of running a highly-available production database, and is unbound by the strict requirements of a RDBMS. With Amazon SimpleDB, you store and query data items via simple web services requests, and Amazon SimpleDB does the rest. In addition to handling infrastructure provisioning, software installation and maintenance, Amazon SimpleDB automatically indexes your data, creates geo-redundant replicas of the data to ensure high availability, and performs database tuning on customers' behalf. Amazon SimpleDB also provides no-touch scaling. There is no need to anticipate and respond to changes in request load or database utilization; the service simply responds to traffic as it comes and goes, charging only for the resources consumed.

Jeff Barr’s Introducing Amazon RDS - The Amazon Relational Database Service post of the same date to the AWS Blog digs into the details of creating and populating an RDS instance:

… Using the RDS APIs or the command-line tools, you can access the full capabilities of a complete, self-contained MySQL 5.1 database instance in a matter of minutes. You can scale the processing power and storage space as needed with a single API call and you can initiate fully consistent database snapshots at any time.

Much of what you already know about building applications with MySQL will still apply. Your code and your queries will work as expected; you can even import a dump file produced by mysqldump to get started.

Amazon RDS is really easy to use. I'll illustrate the most important steps using the command-line tools, but keep in mind that you can also do everything shown here using the APIs.

The first step is to create a database instance. …

Jeff then goes on to demonstrate how to:

  • Create a database named mydb with room for up to 20 GB of data
  • Check on the status of your new database at any time
  • Edit the database's security group so that it allows inbound connections, enable connections from any (or all) of your EC2 security groups, or enable connections from a particular IP address or address range using CIDR notation
  • Expand your instance’s storage space immediately or during the instance’s maintenance window
  • Set up a two-hour backup window and a retention period for automated backups, and create a database snapshot at any time

James Hamilton’s Relational Database Service, More Memory, and Lower Prices provides an overview of AWS RDS, which begins:

I’ve worked on or around relational database systems for more than 20 years. And, although I freely admit that perfectly good applications can, and often are, written without using a relational database system, it’s simply amazing how many of the world’s commercial applications depend upon them. Relational database offerings continue to be the dominant storage choice for applications with a need for structured storage.

There are many alternatives, some of which are very good. ISAMs like Berkeley DB. Simple key value stores. Distributed Hash Tables. There are many excellent alternatives and, for many workloads, they are very good choices. There is even a movement called Nosql aimed at advancing non-relational choices. And yet, after 35 to 40 years depending upon how you count, relational systems remain the dominant structured storage choice for new applications.

Understanding the importance of relational DBs and believing a big part of the server-side computing world is going to end up in the cloud, I’m excited to see the announcement last night of the Amazon Relational Database Service.

He concludes:

AWS also announced last night the EC2 High Memory Instance, with over 64GB of memory. This instance type is ideal for large main memory workloads and will be particularly useful for high-scale database work. Databases just love memory. …

AWS EC2 On-Demand instance prices were reduced by up to 15%.

Lower prices, more memory, and a fully managed, easy to use relational database offering.

James is a Vice President and Distinguished Engineer on the Amazon Web Services team where he is focused on infrastructure efficiency, reliability, and scaling.

RightScale’s Amazon launches Relational Database Service and larger server sizes post of 10/26/2009 takes another tack when describing RDS capabilities:

… With the Relational Database Service AWS fulfills a long standing request from a large number of its users, namely to provide a full relational database as a service. What Amazon is introducing today is slightly different than what most people might have expected, it’s really MySQL5.1 as a service. The RDS product page has the low-down on how it works, but the short is that with an API call you can launch a MySQL5.1 server that is fully managed by AWS. You can’t SSH into the server, instead you point your MySQL client library (or command line tool) at the database across the network. Almost anything you can do via the MySQL network protocol you can do against your RDS instance. Pretty simple and the bottom line is that businesses that don’t want to manage MySQL themselves can outsource much of that to AWS. For background on RDS I’d also recommend reading Jeff Barr’s write-up and Werner’s blog which recaps the data storage options on AWS.

What AWS does is keep your RDS instance backed up and running, plus give you automation to up-size (and down-size). You can create snapshot backups on-demand from which you can launch other RDS instances and AWS automatically performs a nightly backup and keeps transaction logs that allow you to do a point-in-time restore. …

One of the current shortcomings of RDS is the lack of replication. This means you’re dependent on one server and it’s impossible to add slave MySQL servers to an RDS instance in order to increase read performance. It’s also impossible to use MySQL replication to replicate from a MySQL server located in your datacenter to an RDS instance. But replication is in the works according to the RDS product page. …

Note re “… and down-size”: According to RDS documentation, you can’t reduce the size of allocated data storage.

Alan Williamson offers his knowledgeable and balanced third-party view of RDS in his Amazon takes MySQL to the cloud; or have they? post of 10/27/2009:

Amazon have just announced a whole slew of updates to their Web Services platform, at a time I might add, that their stock is riding at an all time high of $124 per share; no better time to announce price cuts. They announced the following high level features:

  • New Amazon RDS service
  • Additional 2 EC2 instance types
  • Price reduction on EC2 per-hour prices

Amazon RDS service

RDS or, Relational Database Service, is basically a full managed MySQL 5.1 server within the Amazon infrastructure. Instead of spinning up an EC2 instance, installing MySQL and faffing with all the security, backup and tuning settings, you merely call the API endpoint CreateDBInstance, supply which type of instance you want and how much storage you want, and you are up and running.

Their pricing starts off at $0.11 per hour for the smallest instance, going the whole way up to $3.10 for their highest instance. You also get charged for the amount of storage you reserve at the usual $0.10 per GB[-month].

A number of big gains to be had here with Amazon RDS straight out of the gate. Namely, no longer do you have to wrestle with EBS to manage your persistence storage - no panics of data loss should the instance you had MySQL running suddenly dies. Secondly, you don't have to concern yourself with running the latest patched version of MySQL, as Amazon will keep the server binaries up-to-date for you.

They provide tools to easily and quickly take snapshots and create new instances, which is essentially wrapper tools around their existing EBS service and provide nightly backups for you automatically. Again, simply utilising their existing EBS service.

You do actually get a MySQL server, listening on port 3306 for all your connections from your app servers and all the tools you've built up to manage MySQL over the years. From an operational stand point, it’s business as usual.

But before you go terminating all your existing MySQL instances, allow some caution.

Amazon RDS, at the moment (although they plan to address this soon), is a one-singer-one-song setup. There is no real time replication and you are relying on a single Amazon EC2 instance not to fail. [Emphasis added.]

Some are probably not too worried about this, as they are probably sailing close to the wind without replication at the minute. However, how does a forced 2-hour downtime per week work out for your application? This is called the "Maintenance Window" and is an opportunity for Amazon to patch the server with the latest security and performance updates. They claim to only bring the server down as short as possible and you get to pick the time of day. [Emphasis added.]

That's going to sting, especially with no replication to take up the slack from the blackout. …

Jeff Barr says in his blog post: “A High Availability offering so that you can easily and cost-effectively provision synchronously replicated RDS instances in two different availability zones” is “planned for the coming months.”

Jeffrey Schwartz begins his Amazon Sets Stage for Cloud Battle With Microsoft article of 10/27/2009 for Redmond Developer News with:

In what could be an escalating war in the emerging arena of cloud-based computing services, Amazon today said it will let customers host relational data in its EC2 cloud service using the MySQL database. The company today also said that it plans to slash the costs of its EC2 service by as much as 15 percent.

The news comes just weeks before Microsoft is expected to make available its Azure cloud service at the annual Microsoft Professional Developers Conference in Los Angeles (see SQL Azure Is PDC Ready). Microsoft initially did not plan to offer a relational database service with Azure, but the company reversed course after earlier testers largely rejected the non-relational offering. Microsoft's SQL Azure Database will be part of the Azure offering (see Microsoft Revamps Cloud Database and Targeting Azure Storage). …

And concludes:

Roger Jennings, principal analyst at Oakleaf Systems and author of the book Cloud Computing with the Windows Azure Platform (Wrox), said Amazon clearly has taken a swipe at Microsoft. "It certainly appears that way," Jennings said in an interview.

The offering from Amazon includes support for larger databases of up to 1 terabyte, compared to just 10 gigabytes for SQL Azure, Jennings pointed out. "Larger SQL Azure databases must be sharded, but distributed queries and transaction[s] aren’t supported yet," he said in a follow-up email.

Jennings also noted that Amazon MySQL instances scale up resources with an hourly surcharge and scale out by sharding, if desired. "SQL Azure scales out by sharding but not up," he said. "SQL Azure instances have a fixed set of resources not disclosed publicly."

Monday, October 26, 2009

Installing the Windows Azure SDK and VS 2008 Tools from the Microsoft Web Platform Installer 2.0

The Microsoft Web Platform Installer (Web PI) 2.0 is now the preferred method for installing or upgrading to the current CTP or release version of the Windows Azure SDK and Windows Azure Tools for Visual Studio 2008 (the July 2009 CTP when this post was written).

Note: This process supersedes that described in the following section of Cloud Computing with the Windows Azure Platform’s Chapter 2:

  • “Creating and Running Projects in the Azure Development Platform,” p. 23

Following is the step-by-step method for installing or upgrading the Azure SDK and Tools with Web PI 2.0:

1. Open the Web PI 2.0 landing page (click image to open full-size screen capture):

2. Click the Download It Now button to start the download process and choose Run so you always receive the current SDK and Tools versions. When the download completes, the What’s New or Web Platform tab opens:

3. Click the Options button to open the Change Options page and mark the Developer Tools check box:

4. Click the OK button to add the Developer Tools tab and display the Install Windows Azure Tools for Microsoft Visual Studio 2008 July CTP (or later) page:

 

5. Click the Install button to display the EULA for the Tools, SDK, and Fast CGI for Internet Information Services (IIS) feature:

6. Click the I Accept button to begin installation of the Tools, SDK, and Fast CGI:

7. When installation completes, click Finish to dismiss the installer:

 

8. New installations require adding the DevelopmentStorageDb database to a local instance of SQL Server 2005 or 2008 Express (.\SQLEXPRESS). Choose [All] Programs, Windows Azure SDK (July 2009 CTP), right-click Development Storage, and choose Run as Administrator. If the following message appears, click Yes to add the Development Storage database:

9. Creating the database opens the Development Storage Initialization status dialog:

 

10. Click OK to dismiss the dialog.

If you’re updating an earlier CTP version, such as the May 2009 CTP, and receive a message similar to the following:

and you’re running Windows 7 or Windows Server 2008 R2, you probably need to remove the existing SDK and Tools instances with Control Panel’s Programs and Features applet and then reinstall the SDK and Tools with Web PI 2.0. For more information, see the Windows Azure forum’s Issue with VS Tools ( May CTP) thread or search for “Role instances did not start within the time allowed”.

Also, check to see if Wally McClure’s post in answer to his Role instances did not start within the time allowed thread is accessible. (It wasn’t on 10/26/2009.)

Sunday, October 25, 2009

LINQ and Entity Framework Posts for 10/19/2009+

Note: This post is updated weekly or more frequently, depending on the availability of new articles.

Entity Framework and Entity Data Model (EF/EDM)

Julie Lerman created a Screencast – What’s new in the Entity Data Model Designer in VS2010 and posted it on 10/23/2009. Here’s her description and a link to view it:

I have created a screencast showing some of the best new features of the new Entity Data Model Designer. This is using the new Beta 2 version of Visual Studio 2010.

The screencast is about 20 minutes long.

This is not designed to be an introduction to building models. Rather, it specifically highlights (and demonstrates) the improvements, so should be of great interest to those who have been using the designer or possibly looked at in the past and said “meh” and walked away.

In the video, I go over:

  • EDM Wizard naming the entities with proper plural and singular names and a few minor issues
  • Impact that Foreign Keys has on Association Mappings
  • Creating complex types
  • Mapping Insert/Update/Delete Stored procs to entities with Complex Types
  • Creating functions from Stored Procedures that return results which do not map to an entity
  • Quick look at Model First Design

There are more new goodies in the designer, just couldn’t cover them all in one shot!

You can also find the screencast at http://www.screencast.com/users/JulieLerman/folders/EF4

Gil Fink offers on 10/21/2009 his ADO.NET Entity Framework Session Slide Deck:

Two days ago I had a half day session at Microsoft [Israel] about Entity Framework. The agenda was as follows:

  • Entity Framework Introduction
  • Exploring the Entity Data Model
  • Querying and Manipulating Entity Data Models
  • Customizing Entity Data Models Examples
  • EF4

As I promised the slide deck and demos can be downloaded from here. I want to thank the attendees of the session and I hope you had great time like I did.

Also, I’m sorry for the late publishing but it was caused by technical problems to upload the files into the blog’s server. Instead I put it in my Skydrive.

P.S – in order to use the demos you need to restore the added database backup and replace all the connection strings in the demos to point to the database you restored.

Jeffrey Schwartz reviews Entity Framework’s speckled history in his EF4 Beta 2 Arrives with New Features article for 1105 Media’s Data Driver column of 10/21/2009:

The big news for .NET developers this week is the release of beta 2 of Visual Studio 2010 and the .NET Framework 4, which became generally available today (see VS2010 and .NET 4 Beta 2 Go Live). For developers of data-driven applications, beta 2 included some major improvements to the ADO.NET Entity Framework, Microsoft's new interface for providing access to apps using object-relational modeling (ORM) rather than programming directly against relational schema.

First released with the Visual Studio 2008 and the .NET Framework 3.5 SP1, the Entity Framework has become a polarizing issue among database developers. As covered extensively, the Entity Framework has been a godsend to many programmers who welcome the move to model-driven development.

However, it has also been much maligned by those who prefer the tried-and-true ADO.NET data access mechanism or those who feel that there are better object-relational mapping technologies than Microsoft has to offer, such as NHibernate or Spring.NET.

Nonetheless Microsoft's Entity Framework stands out in another unique way: it is part of Visual Studio and the .NET Framework. And like it or not, it's here to stay. …

Gil Fink’s Calling User-Defined Functions (UDFs) in Entity Framework post of 10/20/2009 describes his response to a forum question:

Yesterday I answered a question in Entity Framework forum in regard of how to use User-Defined Functions (UDFs) in Entity Framework. This post holds the answer.

The Problem

I have a UDF in my database that I want to use with Entity Framework. When you use the Entity Data Model Wizard you can surface the function in the SSDL. When you will try map the created function element to a FunctionImport you’ll get the following exception: “Error 2054: A FunctionImport is mapped to a storage function 'MyModel.Store.FunctionName' that can be composed. Only stored procedure functions may be mapped.”
So how can we use the UDF?

Gil provides the answer.
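Gil’s post has the definitive answer; as a rough, hedged sketch of one workaround that was commonly suggested at the time, you can skip the FunctionImport and call the SSDL-declared function directly from an Entity SQL query (the model, container, entity set, function, and column names below are all hypothetical):

using System;
using System.Data.Objects;

class UdfExample
{
    static void Main()
    {
        using (var context = new MyModelEntities())   // hypothetical generated ObjectContext
        {
            // Reference the composable store function by its SSDL namespace
            // (MyModel.Store) instead of mapping it to a FunctionImport.
            var query = context.CreateQuery<decimal>(
                "SELECT VALUE MyModel.Store.GetProductTotal(p.ProductID) " +
                "FROM MyModelEntities.Products AS p");

            foreach (decimal total in query)
                Console.WriteLine(total);
        }
    }
}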

Matthieu Mezil’s Entity Framework: the productivity way post of 10/20/2009 details gaining productivity with T4 templates:

One of the best points with Entity Framework is the developer productivity gain. If you associate EF with the T4 template code generation, this gain explodes. Imagine that we want a WCF service which exposes some data. For each entity type, you probably want a Get method which returns the entities, perhaps another Get which takes the entity id as parameter and which returns the entity with its relationships, a Add, a Update, perhaps a Delete.

EF allows an important productivity gain for the entities development and their use. However, in our case, the code to write is almost the same for each entity type. It means that it’s time to use the T4 template.

In all T4 samples I studied, this great template is used only for the entity generation. We will try here to go ahead. …

Kati Iceva’s and Noam Ben-Ami’s New features in Entity Framework impacting providers post of 10/22/2009 warns:

Providers for Entity Framework 3.5 will work unmodified against Entity Framework 4.0. Also, most of the features and improvements introduced in Entity Framework 4.0 will just work against an Entity Framework 3.5 provider. However, there are several features that require provider changes to be enabled. Below is a list of these features along with short description of the changes required to enable them.

New features in Entity Framework Runtime impacting providers

  • New EDM (Canonical) Functions
  • Date/Time functions
  • Math Functions
  • Aggregate functions
  • String functions

Considerations for Implementing New Features

Exposing provider functions in LINQ to Entities

  • Which functions to expose?
  • Overloads – Provide enough for pleasant user experience
  • Proxies Implementation
  • Translating String.StartsWith, String.EndsWith and String.Contains to LIKE in LINQ to Entities

New Features in Entity Designer Impacting Providers

  • Model First

Elisa Flasko posted a detailed UPDATED WIKI: Home item on 10/19/2009 to the ADO.NET Entity Framework & LINQ to Relational Data (a.k.a. AdoNetEfx) blog containing links in the following categories:

  • Entity Framework Toolkits & Extensions
  • Entity Framework Samples
  • LINQ to SQL Samples
  • Entity Framework Learning Tools

Thanks, Elisa!

LINQ to SQL

See the “LINQ to SQL Samples” category in Elisa Flasko’s post above.

Jim Wooley explains Setting DataContext Connection String in Data Tier in this 10/21/2009 post:

LINQ to SQL offers a quick mechanism to build a data tier in an n-Tier application. There’s a challenge when using the DBML designer in a Class Library. The designer stores the connection string in the Settings file. Although it appears that you can change it in the config file, any changes you make there will be ignored because they are actually retained in the generated Settings file.

While you could go into the DataContext’s .Designer file and change the location of the connection string, any changes you make there will be overwritten if you make changes to the DBML file. So what are you to do?

Jim continues with a detailed solution.
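A minimal sketch of the usual workaround (Jim’s post covers the details and some alternatives; the connection-string name and DataContext class below are hypothetical): read the connection string from the host application’s config file and pass it to the DataContext constructor instead of relying on the value baked into the class library’s Settings:

using System.Configuration;   // add a reference to System.Configuration.dll

public static class NorthwindDataContextFactory
{
    public static NorthwindDataContext Create()
    {
        // Pull the connection string from the host app's web.config/app.config
        // rather than from the class library's generated Settings file.
        string connectionString = ConfigurationManager
            .ConnectionStrings["NorthwindConnection"]   // hypothetical name
            .ConnectionString;

        // Designer-generated DataContexts expose a constructor that accepts
        // a connection string, which bypasses the Settings-based default.
        return new NorthwindDataContext(connectionString);
    }
}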

LINQ to Objects, LINQ to XML, et al.

Florent Clairambault’s LINQ : Join with multiple conditions of 10/21/2009 explains:

I wanted to do some simple joins in LINQ on multiple criteria. And coming from standard SQL it’s not as straightforward as I thought it would be. The little disturbing thing is that you lose auto-completion.

Florent continues with SQL and LINQ syntax examples.
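Florent’s post walks through the syntax; the standard pattern (a hedged sketch with made-up order and shipment collections) is to join on anonymous types whose property names and types match on both sides:

// orders and shipments are any in-memory or queryable collections.
var matches = from o in orders
              join s in shipments
                  // Both anonymous types must expose the same property
                  // names and types for the compiler to accept the join.
                  on new { o.CustomerId, Date = o.OrderDate }
                  equals new { s.CustomerId, Date = s.ShipDate }
              select new { o.OrderId, s.TrackingNumber };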

Beth Massi’s SDC 2009 Recap & Surprise post of 10/20/2009 begins:

SDC09 has been a blast. I just finished my last session on Building Office Business Applications with Visual Studio 2010 and I think it went well. We created an OBA for good old Northwind Traders. I migrated the 2008 code which is here on code gallery into VS 2010 and showed the new features of VS2010 that makes Office development easier focusing on RAD data binding (including WPF) and designers, new multi-project deployment, and SharePoint 2010 tools. …

I also did a talk on using Open XML and LINQ to XML to manipulate Office 2007 document formats.

And ends with an admission that October 20 is Beth’s birthday. Happy Birthday, Beth!

Deborah Kurata explains Populating a DataGridView from Xml Data in this 10/20/2009 post:

If you are using XML in a WinForms application you may find the need to display the XML data in a DataGridView.

Let's take this XML:

<states>
    <state name="California">
        <abbreviation>CA</abbreviation>
        <year>1850</year>
        <governor>Schwarzenegger</governor>
    </state>
    <state name="Wisconsin">
        <abbreviation>WI</abbreviation>
        <year>1848</year>
        <governor>Doyle</governor>
    </state>
</states>

Displaying XML in a DataGridView sounds easy, but if you just set the DataGridView DataSource to the XML data, you will get something like this:


Notice how it has lots of yuck in it: attribute details, node properties, and so on for every node in the XML. So how do you get something more like this:


The trick is to use anonymous types.

Deborah then shows you how to “use anonymous types.”
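Deborah’s post has the full explanation; a hedged sketch of the technique (element names follow the XML above, while the file name and grid variable are assumptions) looks something like this:

using System.Linq;
using System.Xml.Linq;

// Load the XML shown above and project each <state> element into an
// anonymous type with just the columns you want to display.
XElement states = XElement.Load("states.xml");   // or XElement.Parse(xmlString)

var stateList = (from state in states.Elements("state")
                 select new
                 {
                     Name = (string)state.Attribute("name"),
                     Abbreviation = (string)state.Element("abbreviation"),
                     Year = (int)state.Element("year"),
                     Governor = (string)state.Element("governor")
                 }).ToList();

// Binding the projected list produces clean columns instead of raw XML nodes.
dataGridView1.DataSource = stateList;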

ADO.NET Data Services (Astoria)

Alex James explains Using Data Services over SharePoint 2010 – Part 1 – Getting Started in this detailed 10/21/2009 post:

ADO.NET Data Services 1.5 and SharePoint 2010 allow developers to write applications against SharePoint Lists using the familiar Data Services Client programming model.

Setup

The SharePoint 2010 beta will become generally available in November.

In order to use Data Services with SharePoint 2010 Beta, ADO.NET Data Services 1.5 must be installed on your server *before* you install SharePoint 2010.

And then to program against a Data Services enabled SharePoint box you need either:

  • VS 2008 SP1 with the ADO.NET Data Services 1.5 installed (CTP 2 or higher) or
  • VS 2010 (Beta 2 or higher)

Where is your Data Service Endpoint?

Assuming you’ve got everything setup and working the first thing you need to know is where your Data Service endpoint is installed.

If your SharePoint site is http://mflasko-dev/ (thanks Mike) then your Data Services endpoint will be http://mflasko-dev/_vti_bin/listdata.svc.

If you open one of these endpoints up in Internet Explorer you’ll see something like this:


As you can see this is a standard Data Services Atom feed, listing all the Lists on the SharePoint site.

Cool.

Once you know where your Data Service is located, the next step is to write your Data Services Client application.

Alex continues with code for a sample Astoria client.
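As a hedged sketch of the client side (the service URI is Mike Flasko’s example from Alex’s post; the list name and entity shape are assumptions, and in practice you’d normally use Add Service Reference to generate typed list classes), the low-level DataServiceContext from the ADO.NET Data Services 1.5 client works like this:

using System;
using System.Data.Services.Client;    // ADO.NET Data Services 1.5 client
using System.Data.Services.Common;
using System.Linq;
using System.Net;

// Hypothetical shape of items in a SharePoint "Tasks" list.
[DataServiceKey("Id")]
public class TaskItem
{
    public int Id { get; set; }
    public string Title { get; set; }
}

class Program
{
    static void Main()
    {
        var context = new DataServiceContext(
            new Uri("http://mflasko-dev/_vti_bin/listdata.svc"));

        // SharePoint typically requires Windows authentication.
        context.Credentials = CredentialCache.DefaultCredentials;

        // Query the Tasks list (the entity-set name is an assumption).
        var tasks = context.CreateQuery<TaskItem>("Tasks").Take(10).ToList();

        foreach (var task in tasks)
            Console.WriteLine("{0}: {1}", task.Id, task.Title);
    }
}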

ASP.NET Dynamic Data (DD)

No significant new posts found this week.

Saturday, October 24, 2009

OakLeaf Blog Joins Technorati’s “Top 100 InfoTech” List

Despite the lowly 3738 authority reported for several days by the Technorati widget to the left, a visit to Technorati’s OakLeaf Systems Site Details page on 10/24/2009 revealed Technorati Authority: 511 and membership in the TOP 100 INFOTECH group for the OakLeaf Systems blog:

On the same day, Technorati ranked the OakLeaf blog #65 and included it as one of the Top 10 Fallers:

Hopefully, this downward spiral won’t continue.

I’ve sent a tweet to @technorati asking about the discrepancy between the three authority figures (38, 511 and 692). I’ll update this post when I receive a reply.

Windows Azure and Cloud Computing Posts for 10/21/2009+

Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.

•• Update 10/24 and 10/25/2009: David Linthicum: Cloud data integration issues; Aaron Skonnard: Building a cloud app podcast; Steve Nagy: Overcoming objections to developing with Azure; Kevin Jackson: AtomPub’s high overhead; James Hamilton: Networks are in my Way; Patric McElroy: SQL Azure performance and SLA; •• Bob Sutor: Who is the user for cloud computing?; and a few more. 
• Update 10/23/2009: David Lemphers: Designing high-performance Windows Azure Storage; Steve Marx: Wants help improving http://dev.windowsazure.com; Msdev.com: 36 Windows Azure videos for developers; Chris Hoff: Can we secure the cloud?; Lori MacVittie: The Cloud Is Not A Synonym For Cloud Computing; Bill Kallio: Setting Queue Visibility Timeouts; Thomas Claburn: Intel CIO predicts PHR to sell PCs; Ramaprasanna: Get started with Windows Azure quickly; and many more.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts, Databases, and DataHubs*”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page.
* Content for managing DataHubs will be added when Microsoft releases a CTP of the technology.

Off topic: OakLeaf Blog Joins Technorati’s “Top 100 InfoTech” List on 10/24/2009.

Azure Blob, Table and Queue Services

• Bill Kallio explains how to use Azure Queues and set visibility timeouts during polling in his Azure - Setting Visibility Timeout For Polling Queues post of 10/23/2009:

… This past week my team and I ran into an interesting issue with the Windows Azure StorageClient that I'd like to bring up in case others run into it in the future. Perhaps I can save someone some time. Here is some quick background:

The MessageQueue class is an abstract class included in the StorageClient sample project for interacting with Azure Storage Queues. Additionally, the QueueRest class inherits from the abstract MessageQueue class and provides the implementation used when working with Azure Storage Queues.

One of the key methods in the QueueRest class is GetMessage. This method pulls a message off of the queue, and optionally allows the developer to specify a visibility timeout for the message. The visibility timeout just lets the queue know how long this message should be unavailable to other processes requesting messages from the queue.

Additionally, the QueueRest class implements an easy-to-use toolset for polling a queue for messages. This works well with the Azure architecture, where you have a set of Worker Roles (you can think of them as Windows Services) waiting for work to arrive in the queue. When a message enters the queue, the worker will pick it up and process it. This architecture takes advantage of the scalability of Azure. Do you have a lot of work backing up in your queue? Just spawn more worker instances to process the work! …

Here is a fancy chart I spent hours on to better visualize this process:

Bill provides sample code and descriptions for setting the visibility timeout during polling.
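Bill’s post has the real code; here’s a hedged sketch of the polling pattern using the sample StorageClient classes the post describes (the account-setup helper, queue name, and exact method signatures are from memory of the July 2009 CTP sample and may differ):

using System.Threading;
using Microsoft.Samples.ServiceHosting.StorageClient;   // SDK sample library

// Inside a Worker Role's processing loop:
var account = StorageAccountInfo.GetDefaultQueueStorageAccountFromConfiguration();
QueueStorage queueStorage = QueueStorage.Create(account);
MessageQueue queue = queueStorage.GetQueue("workitems");   // hypothetical queue name
queue.CreateQueue();

while (true)
{
    // Hide the dequeued message from other workers for 300 seconds.
    Message message = queue.GetMessage(300);
    if (message != null)
    {
        // ... process the work item here ...
        queue.DeleteMessage(message);   // delete before the visibility timeout expires
    }
    else
    {
        Thread.Sleep(1000);   // back off when the queue is empty
    }
}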

David Lemphers’ Designing a High Performance Windows Azure [Storage] Service! post of 10/23/2009 recounts:

One thing that popped up this week [in a trip to the UK] was the problem case of:

“I have a service that has millions of users, and each of these users needs to store data in my service, do I just create a new storage account for each user?”

It’s a great question, and the answer is critical to designing a solution that exploits the design of our Storage Service.

So first of all, the best way to think about a storage service account, is not in terms of a “user” account, but more as a storage endpoint. What I mean by that is, creating a new storage account per user is similar to creating a new database server per user in RDMS type systems, rather than partitioning your data inside the data store.

So what design pattern do you apply to this kind of problem?

Firstly, the Storage Service has an optimum account size per service of ~10.

So say I’m building a system that needs to store photos for millions of customers. I would first ensure my front end had the ability to uniquely identify each customer, so I would use something like LiveID, forms-based auth, or pretty much anything where you can get a customer’s unique ID.

David then goes on to recommend how to set up backend storage endpoints such that:

[Your] application is not only partitioning the data at the blob container level for each user, but you[‘re] distributing the number of customers across a collection of accounts, which gives you the best performance design.
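Purely as an illustration of that recommendation (this is my sketch, not David’s code; the account names and hashing scheme are assumptions), you can map each customer deterministically to one of a small, fixed pool of storage accounts and then give each customer a blob container of their own:

using System;
using System.Security.Cryptography;
using System.Text;

static class StoragePartitioner
{
    // A small, fixed pool of storage accounts (~10, per the guidance above).
    private static readonly string[] Accounts =
    {
        "photos0", "photos1", "photos2", "photos3", "photos4",
        "photos5", "photos6", "photos7", "photos8", "photos9"
    };

    public static string GetAccountFor(string customerId)
    {
        // Use a stable hash (not string.GetHashCode, which can vary between
        // CLR versions) so a customer always maps to the same account.
        using (var md5 = MD5.Create())
        {
            byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(customerId));
            int bucket = hash[0] % Accounts.Length;
            return Accounts[bucket];
        }
    }

    public static string GetContainerFor(string customerId)
    {
        // One blob container per customer; container names must be lowercase.
        return "user-" + customerId.ToLowerInvariant();
    }
}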

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

•• Kevin Hoffman recommends using binary serialization in his Binary Serialization and Azure Web Applications post of 10/25/2009:

You might be thinking, pfft, I'm never going to need to use Binary Serialization...that's old school. And you might be right, but think about this: Azure Storage charges you by how much you're storing and some aspects of Azure also charge you based on the bandwidth consumed. Do you want to store/transmit a big-ass bloated pile of XML or do you want to store/transmit a condensed binary serialization of your object graph?

I'm using Blob and Queue storage for several things and I've actually got a couple of projects going right now where I'm using binary serialization for both Blobs and Queue messages. The problem shows up when you try and use the BinaryFormatter class' Serialize method. This method requires the Security privilege, which your code doesn't have when its running in the default Azure configuration.

Kevin shows you how to enable full trust with a minor change to your app’s service definition file. I share Kevin’s concern about the big-time bandwidth charges and performance hit resulting from wrapping data in the RESTful AtomPub XML format.
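Here’s a hedged sketch of the serialization half (the full-trust change Kevin describes is a one-line edit to ServiceDefinition.csdef, enableNativeCodeExecution="true" on the role element in the 2009 CTPs, if I recall correctly): round-trip the object graph through BinaryFormatter and store the resulting byte array in a blob or queue message:

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class WorkItem   // hypothetical payload type
{
    public int Id { get; set; }
    public string Payload { get; set; }
}

public static class BinarySerializer
{
    // Serialize the object graph to a compact byte array for blob/queue storage.
    public static byte[] ToBytes(WorkItem item)
    {
        var formatter = new BinaryFormatter();   // needs full trust in the 2009 CTPs
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, item);
            return stream.ToArray();
        }
    }

    // Deserialize the byte array back into the original object graph.
    public static WorkItem FromBytes(byte[] data)
    {
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream(data))
        {
            return (WorkItem)formatter.Deserialize(stream);
        }
    }
}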

David Linthicum warns that Data Integration is a Huge Issue for Cloud Computing Migration in his 10/24/2009 post to ebizQ’s Leveraging Information and Intelligence blog:

Loraine Lawson has done a great job in providing further analysis of Kevin Fogarty’s article cloud computing reality checks over at Computerworld.

Loraine points out some of the key issues of the article, including:

"Cloud platforms are all different, so migration, support, cost and capacity issues also will be different according to platform."

This is what I think surprises most people about integration between the enterprise and a cloud provider. The APIs, or other points of integration that the cloud computing providers offer are all very different in terms of functionality and quality of the service.

"You may find existing integration points break when you try to move one end point into the cloud - and odds are 10 to 1 your legacy systems are deeply integrated with many, many other on-premise systems."

Integration is almost always a complex problem. Indeed, I wrote a book on the topic, and if you think this is about linking your CRM system with a cloud provider and you're done, you have another thing coming. This is about understand the data you're integrating, including data latency, data governance, semantic mediation, etc.. In for a penny, in for a pound. …

Dave continues with an analysis of licensing and EULA issues.

Murty Eranki’s What type of performance is guaranteed with SQL Azure? thread of 10/22/2009 asked:

Does a database instance with 10 GB … not require compute power? What type of performance is guaranteed with SQL Azure?

Microsoft’s Patric McElroy replied on 10/24/2009 with the following details of current and future Azure SLAs:

We currently do not provide a performance SLA for SQL Azure although this is something that we are looking into. 

Although we run on "commodity hardware" in the data center, it is pretty powerful and the throughput and response rate for most customers has been very good.  As mentioned on another thread, we continue to refine our throttling mechanisms to make sure we are providing the highest level of performance while still being fair across tenants and protecting the health of the overall system.  This is a delicate balancing act and I'm sure we'll continue to refine this based on feedback from customers for some time to come.

Early adopters have also found that SQL Azure allows them to provide a data tier to their applications that leverages scale-out - of relational storage and query processing.  For certain applications, this can have dramatic effects as you harness the power of 10's of CPUs in parallel to service the needs of your app.  For an internal partner running on this same infrastructure they were able to take end user queries down from 10's of minutes to 10's of seconds - quite dramatic for that app.

• David Nichols presented Life on the Farm: Using SQL for Fun and Profit in Windows Live to The 3rd ACM SIGOPS International Workshop on Large Scale Distributed Systems and Middleware (LADIS 2009) on 10/11/2009. The slides for his presentation are here.

Dave’s first slide asks “What is Windows Live?” His answer:

• [It’s] Not:
– Bing – search
– Azure – cloud computing platform
– MSN – News and entertainment portal
• Windows Live is
– Mail (Hotmail)
– Instant messaging
– Photo and file sharing
– Calendar
– Social networking
– Blogging
– File sync

He continues with an analysis of how the Windows Live group addresses issues with using SQL Server for applications requiring high availability and elasticity. Many of these issues (and their solutions) are common to SADB.

Dave “is a software developer in the Windows Live group at Microsoft, working on large-scale storage systems. He came to Microsoft by the acquisition of PlaceWare, Inc. where he was a co-founder and principal architect. The PlaceWare web conferencing product became Microsoft Office Live Meeting.”

My illustrated SQL Azure October 2009 CTP Signup Clarification post of 10/22/2009 explains how to subscribe to the October CTP without copying your invitation code GUID somewhere.

Salvatore Genovese announces SplendidCRM for Microsoft Windows Azure in this 10/20/2009 press release, which includes a link to a live Azure demo:

SplendidCRM solutions for open-source use, has entered the cloud with its release of SplendidCRM 4.0 Community Edition. This release has been specifically updated to run in Microsoft's Windows Azure Platform (http://www.microsoft.com/azure/). SplendidCRM can be installed to run just the database in SQL Azure, with the web application running locally, or to run the database in SQL Azure and the web application in Windows Azure. With this new capability our customers will be able to minimize the cost while maximizing the reliability of their customer data. A live Azure demo is available at http://splendidcrm.cloudapp.net.

SplendidCRM continues to evolve with the incorporation of the Silverlight 3 Toolkit to replace the previous flash-based and hand-made charts. “We continue to pioneer the use of XAML-only graphics as a means to promote rapid application development”, noted Paul Rony, President and Founder of SplendidCRM. “Implementation of a dashlets model will allow each user to customize their home page experience with the new Silverlight charts.”

“In addition, the SplendidCRM query engine has been optimized to handle millions of records using our custom paging system. This custom paging also allows SplendidCRM to be very bandwidth efficient, which is especially important in the SQL Azure environment because of the bandwidth related charges.”

Microguru Corporation announces their free Community Edition license for the Gem Query Tool for SQL Azure:

Gem Query Tool is a lightweight SQL query tool specifically designed for use with SQL Azure, Microsoft's new database offering in the Cloud. Gem Query provides an intuitive user interface to connect to and work with SQL Azure databases. Gem Query Tool supports execution of any DDL and DML script supported by SQL Azure. To facilitate authoring of SQL queries, Gem Query Tool for SQL Azure displays tables and columns in your database.

Required Runtime: Microsoft .NET Framework 3.5 SP1

Gem Query Tool for SQL Azure - Community Edition License: Free to use for any legal purpose. This software may not be reverse engineered and may not be used as a basis for creating a similar product. The user assumes full responsibility when using this product. No warranties, explicit or implicit, are made with respect to this product.

Gem Query Tool - Main Window

<Return to section navigation list> 

.NET Services: Access Control, Service Bus and Workflow

No significant new posts on this topic today.

<Return to section navigation list> 

Live Windows Azure Apps, Tools and Test Harnesses

Aaron Skonnard contributed .NET Rocks’ show #492, Aaron Skonnard Builds a Real Cloud App, on 10/22/2009:

Aaron Skonnard talks about his experiences building a real application in the cloud.

• Thomas Claburn reports “Paul Otellini sees businesses moving to replace their aging hardware and promise in moving to a more distributed approach to healthcare” in his Web 2.0 Summit: Intel CEO Expects PC Sales Surge post of 10/22/2009 to InformationWeek’s Healthcare blog:

… In an interview with Web 2.0 Summit program chair John Battelle, Otellini said that PC sales are looking up. …

Asked what he made of Microsoft (NSDQ: MSFT)'s shift from loathing to loving cloud computing, Otellini said that from a hardware perspective, cloud computing isn't much of a change. "I like [Larry] Ellison's definition of the cloud," he said. "He said there's nothing new here. You still have servers, networks, and clients. What's different is the use model." …

Otellini also expressed confidence in the market for healthcare IT, noting that Intel has partnered with GE to focus on home healthcare. "Let's keep people [in need of medical treatment] at home longer," he said, noting that home care represents the lowest cost to society. "We're developing a family of devices to allow that," he said, citing video conferencing and intelligent medication systems as examples.

Healthcare, he said, needs to shift from a centralized model to a distributed one. …

Steve Marx asks What Makes a Great Developer Website? Help Me Improve http://dev.windowsazure.com! in this 10/22/2009 post to his Cloud Development blog:

Did you see the new http://windowsazure.com?  I think the site has a much cleaner look and makes it easier to find what you’re looking for.

http://dev.windowsazure.com takes you directly to the Windows Azure developer site, which is the place developers go for information about Windows Azure.

My responsibility is to play curator for that site.  I’m going to try to find the best content available and organize it in a way that developers can find the information they need as quickly and easily as possible.  I’d like your help doing that.

Three questions every product website should answer

When I’m investigating a new development technology, these are the questions I immediately ask (and the time I’m willing to spend answering each):

  1. What is this? (30 seconds)
  2. Is it useful to me? (5 minutes)
  3. How can I get started? (15 minutes)

I’d like http://dev.windowsazure.com to deliver great answers to these questions for a developer audience and then provide organized, deeper content for people who decide to invest their time learning the platform.

I’d like to see more live Windows Azure and SQL Azure demo applications along the lines of those produced by Jim Nakashima (Cloudy in Seattle) during Windows Azure’s early days.

The Msdev.com site offers 36 current and future Windows Azure Training Videos for developers. Future segments are scheduled for release in November and December 2009.

Robert Rowley, MD reports EHR use and Care Coordination improves health outcomes for Kaiser Permanente in this 10/22/2009 post to the PracticeFusion blog:

A recent report out of the Kaiser system shows how EHR use, combined with Care Coordination, improves chronic disease management. The 5-year project (published in the British Medical Journal) showed that specialists (nephrologists) can improve the health outcomes of high-risk kidney disease patients by using their EHR to identify patients at-risk for deterioration, and proactively consulting on these cases remotely, making recommendations to their primary care physicians for medication management or requesting referrals. Chronic kidney disease contributes to a significantly higher cost of health care, and good management is a PQRI quality metric.

Can this experience be generalized outside the Kaiser system? What lessons can be learned here? One of the healthcare-delivery system characteristics of Kaiser is that it is a very large, closed, multi-specialty group practice, with a finite well-defined patient population assigned to this group. Patients “belong” to the entire team – unsolicited consultation from specialists using EHR tools can occur without requiring specific patient permission for care from “outside” providers. Within the group, there is no accusation that such unsolicited consultation is simply “trolling for additional business,” which is the claim in an uncoordinated fee-for-service private setting outside their system.

Clearly, then, reproducing the Kaiser experiment of using EHR tools to proactively consult on at-risk patients remotely would be difficult to achieve outside a coordinated-network setting. Large group practices, and at-risk IPAs that are delegated to manage the care of their insurance-assigned patient population – in short, accountable care organizations (ACOs) – are about the only settings that come to mind where such interventions are feasible. …

Dr. Rowley’s EHRs Fare Well in Battle of QI Strategies post of the same date notes that electronic health records are effective Quality Improvement (QI) tools:

… The scientists [Mark Friedberg and colleagues from Brigham and Women’s Hospital and Blue Cross/Blue Shield of Massachusetts] looked at these QI techniques, among others: providing feedback to physicians regarding their performance, distributing reminders to patients and physicians about needed services, making interpreter services available, extending office hours to weekends and evenings, and using multifunctional EHRs.

Multifunctional EHRs were defined as those which included alerts and reminder systems, as well as decision support at the point of care.

The scientists found that practices which routinely used multifunctional EHRs scored significantly higher on 5 HEDIS measures, including screening for breast cancer, colorectal cancer and Chlamydia, and 2 in diabetes care. The improved performance ranged from 3.1% to 7.6%. …

• Alice Lipowicz asserts HHS faces hurdles on electronic exchange of medical lab results in this 10/20/2009 FederalComputerWeek post:

Federal health authorities will face several problems implementing the electronic exchange of patient lab results as part of an electronic health record (EHR) system, members of a federal advisory workgroup said at a meeting today.

The workgroup is a task force of the Health IT Policy Committee that advises the Health and Human Services Department’s (HHS) Office of the National Coordinator for Health information technology. HHS plans to release regulations later this year on how to distribute $19 billion in economic stimulus funding to doctors and hospitals that buy and "meaningfully use" certified EHR systems.

Roughly 8,000 hospital labs and 6,000 independent clinical labs perform three quarters of the lab testing in the United States and some of those facilities have installed interfaces that enable standardized electronic delivery of the results, Jonah Frohlich, deputy secretary of health information technology at California's Health and Human Services Agency, testified before the Information Exchange Workgroup.

Without interfaces in place, many lab results that could be sent electronically are scanned and faxed to physicians, he said. “While approximately one-quarter of physicians nationally have an EHR, many still receive faxed lab results that are either manually entered or scanned into the patient record. This is a limitation of both the lab and EHR industry,” Frohlich said. …

The HealthVault team announces the first issue of HealthVault News for You (consumers) in this 10/21/2009 post:

Each month we'll feature applications that can help you take charge of your health, highlight health tools and tips, and help you get the most out of your HealthVault experience. …

<Return to section navigation list> 

Windows Azure Infrastructure

•• Ping Li asserts The Future Is Big Data in the Cloud in this 10/25/2009 post to GigaOm:

While when it comes to cloud computing, no one has entirely sorted out what’s hype and what isn’t, nor exactly how it will be used by the enterprise, what is becoming increasingly clear is that Big Data is the future of IT. To that end, tackling Big Data will determine the winners and losers in the next wave of cloud computing innovation.

Data is everywhere (be it from users, applications or machines) and as we get propelled into the “Exabyte Era” (PDF), is growing exponentially; no vertical or industry is being spared. The result is that IT organizations everywhere are being forced to grapple with storing, managing and extracting value from every piece of it – as cheaply as possible. And so the race to cloud computing has begun.

This isn’t the first time IT architectures have been reinvented in order to remain competitive. The shift from mainframe to client-server was fueled by disruptive innovation in computing horsepower that enabled distributed microprocessing environments. The subsequent shift to web applications/web services during the last decade was enabled by the open networking of applications and services through the Internet buildout. While cloud computing will leverage these prior waves of technology – computing and networking – it will also embrace deep innovations in storage/data management to tackle Big Data. …

Ping Li is a partner with Accel.

•• Steve Nagy’s “Why Would I Use Windows Azure?” or “Developer Evolution” essay begins as follows:

Foreword: Apologies for the title, I’m still not sure (after completing the entry) what it should be called.

Why would I use the Windows Azure Platform? It’s a good question, one that I’ve had a lot of internal discussions on lately (with fellow consultants from Readify). For most it’s quite a paradigm shift and therefore difficult to grok. Many believe that you can only build Azure apps from scratch and that it would be a lot of work to convert an existing product to use the Azure platform. I don’t want to say this is a false statement, but Azure is pretty big, bigger than most people realise.

I Can’t Use Azure Because…

Most of us are only familiar with the Windows Azure component which allows you to host either services or web pages. Most custom applications require a data store, and for Microsoft developers this tends to be SQL Server. Unfortunately there is no one-for-one mapping to easily port your database over to the data stores in the cloud (Windows Azure Storage, SQL Azure, etc.). This means people feel resistance when they do consider the Azure services and write off the whole platform.

When SQL Azure is presented as an option, people tend to pick on the little features that are missing, expecting a direct cloud equivalent for their data. But some things just don’t make sense for SQL Azure. People lose context: SQL Azure is a service. It should be treated like any other service within your architecture. You don’t need to be implementing the next service related buzzword (SOA, SaaS, S+S, etc) to get the benefits of the abstractions provided by services. If you build your services in an autonomous fashion, SQL Azure will fit right in with your story.

Steve continues to deflate other arguments about cloud computing in general and Azure in particular.
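
To make Steve’s point about treating SQL Azure as just another service concrete, here is a minimal sketch (the server name, database, and credentials are placeholder values of my own, not anything from Steve’s post): an ADO.NET data layer reaches SQL Azure through an ordinary connection string, so an autonomous service can swap an on-premises SQL Server for SQL Azure without changing its code.

```csharp
using System;
using System.Data.SqlClient;

class SqlAzureAsAService
{
    static void Main()
    {
        // On-premises equivalent (placeholder values):
        //   "Server=MYSERVER;Database=Northwind;Integrated Security=True;"
        // SQL Azure version below; only the connection string changes.
        const string sqlAzure =
            "Server=tcp:myserver.database.windows.net;Database=Northwind;" +
            "User ID=myuser@myserver;Password=REPLACE_ME;" +
            "Trusted_Connection=False;Encrypt=True;";

        using (var connection = new SqlConnection(sqlAzure))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Customers", connection))
        {
            connection.Open();
            Console.WriteLine("Customers: {0}", command.ExecuteScalar());
        }
    }
}
```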

• Bob Sutor asks Who is the user for cloud computing? in this 10/24/2009 blog post, which opens:

I think many of the discussions of cloud computing focus too much on the implementation side and not enough on who the potential users are and what will be their needs. Many users don’t have or need a very precise definition of “cloud computing.” Indeed, I think that for many people it simply matters whether their applications and data live on their machines or devices, or if they are run through a browser or reside somewhere out on the network, respectively.

Following are abbreviations of Bob’s six initial categories:

    1. A user of a virtualized desktop on a thin or fat client
    2. A non-technical end user who accesses services through a browser or via applications such as disk backup to remote storage
    3. A “cloud choreographer” who strings together cloud-based services to implement business processes
    4. A service provider who needs to handle peak load demands
    5. A developer who employs dynamic resource allocation in clouds to speed application or solution creation
    6. An IT system administrator who does not build clouds but deploys onto them, probably in addition to traditional managed systems

Bob is Vice President for Open Source and Linux, IBM Software Group, IBM Corporation. He was very active in the development of SOAP and WS-* standards for XML Web services.

• Ramaprasanna offers on 10/23/2009 the shortest How to Get started with Windows Azure post yet:

It is getting simpler.

Firstly, you need to have a Live ID.

If you don’t have one get one here.

To get a Windows Azure token, all you need to do is to click here and complete the application. It would take approximately a week to get a token. 

Once you receive a token you can redeem it at windows.azure.com.

You can now use the Windows Azure deployment portal to deploy your Azure application in the Microsoft Windows Azure cloud.

The Cloud Computing Use Case Discussion Group posted the 42-page Version 2.0, Draft 2 of its Cloud Computing Use Cases white paper on 10/23/2009. Here’s the TOC:

Table of Contents
1 Introduction
2 Definitions and Taxonomy
   2.1 Definitions of Cloud Computing Concepts
   2.2 Taxonomy
   2.3 Relationships Between Standards and Taxonomies
   2.4 Application Programming Interfaces (APIs)
3 Use Case Scenarios
   3.1 End User to Cloud
   3.2 Enterprise to Cloud to End User
   3.3 Enterprise to Cloud
   3.4 Enterprise to Cloud to Enterprise
   3.5 Private Cloud
   3.6 Changing Cloud Vendors
   3.7 Hybrid Cloud
   3.8 Cross-Reference: Requirements and Use Cases
4 Customer Scenarios
   4.1 Customer Scenario: Payroll Processing in the Cloud
   4.2 Customer Scenario: Logistics and Project Management in the Cloud
   4.3 Customer Scenario: Central Government Services in the Cloud
   4.4 Customer Scenario: Local Government Services in a Hybrid Cloud
   4.5 Customer Scenario: Astronomic Data Processing
5 Developer Requirements
6 Conclusions and Recommendations

Following is a typical Use Case Scenario illustration for what appears to me to be a so-called hybrid cloud:

• Mary Hayes Weier defines Alternative IT in this 10/22/2009 InformationWeek post:

Cloud Computing. SaaS. They're such over-used marketing words that they've become the butt of jokes (Larry Ellison on YouTube, anyone?). But hopefully the hype machine hasn't generated too much noise to drown out the fact that there have been some significant, permanent changes in how CIOs view software. At InformationWeek, we call it Alternative IT.

And our hope is that Alternative IT doesn't become another shallow term that can mean just about anything. (For example, I recently got a press release in my inbox with "SaaS" in the headline, and it turned out to be a Web-based service for storing digital photos. I mean, where will it end?) But we know, from talking to the CIOs who spend billions of dollars a year on IT, that a grinding recession, paired with new choices in terms of online software, mobile computing, outsourcing, open source, and more, has opened the door to alternatives in IT.

In particular, CIOs are rethinking significant parts of their software strategies, considering alternatives to conventional licenses, maintenance, and fee structures, as well as alternatives to lengthy internal development cycles, complex customization, and long global rollouts and upgrades.

This isn't trendy, it's reality. In fact, Bill Louv, CIO at pharmaceutical company GlaxoSmithKline, bristles at the idea that his company is chasing the cloud trend. "The evolution here isn't, 'Gee, let's do something in the cloud or SaaS,'" Louv told me in an interview. "Our Lotus Notes platform was getting to end of life. The question came up innocently that, given we'll have to spend a lot of money here, is there something we can do that's smarter?" What he decided is to move all 115,000 employees worldwide to the online, monthly subscription Exchange and SharePoint offerings.

Jim Miller’s Avanade finds growing Enterprise enthusiasm for the Cloud (see below) analysis of 10/22/2009 begins:

I covered Avanade’s Global Cloud Computing Survey for CloudAve back in February, and took a closer look at the security concerns it highlighted in a related post for this blog.

Avanade re-commissioned the same research firm, Kelton Research, to undertake some follow-up work between 26 August and 11 September, and the responses from over 500 C-level executives show a healthy dose of pragmatism with respect to the Cloud and its associated hype.

In amongst the pragmatism, it was interesting (and pleasing) to see a “320% increase” in respondents planning some sort of deployment. Whilst it’s worth noting that this increase only took the ‘planning to use’ contingent up to the dizzying heights of some 10% of respondents, the figure was a more respectable 23% in the USA. In the same data, companies reporting ‘no plans’ to adopt Cloud tools had fallen globally from 54% to 37%. That’s interesting.

Also interesting was the relatively small impact of the economic situation upon Cloud adoption, with only 13% suggesting it had ‘helped’ adoption plans and 58% reporting ‘no effect.’ In my conversations with Nick Carr and others, there’s been an underlying presumption (on my part, as well as theirs) that cost-saving arguments with respect to Cloud Computing would prove persuasive and compelling. It would appear not. This would suggest, of course, that Enterprise adopters are taking to the Cloud for reasons other than the budget sheet… which is hopefully one more nail in the coffin for IDC’s recent ‘advice’.

B. Guptill and M. West deliver Saugatuck’s Microsoft’s Q409 Rollout: How Will This Impact IT Spending in 2010? Research Alert of 10/22/2009. The What Is Happening section concludes:

Few IT vendors would attempt to roll out such a broad range of significant changes in core business, organization and technology in such a short period of time. Saugatuck believes the impact could easily reach well beyond Microsoft, its offerings, and its partners. In the end, it may trigger a significant change in IT spending, and as a result, catalyze and accelerate major change in how IT is bought, used and paid for.

The scale of that investment may be great enough to tip user organizations toward a much more rapid move to Cloud-based IT, including desktop virtualization, investment in netbooks and other new form factors, and a rapid move to SaaS, Cloud infrastructure services, and related Cloud Computing.

For partners, that investment is almost certain to include more and faster moves to expand their business to include SaaS and Cloud, which will require change well beyond adding a new offering or line of business, to include new business organizations, relationships, business models, etc. …

Rich Miller reports Demand Remains Strong in Key Markets for data center real estate in this 10/21/2009 post:

Data center demand is outpacing supply across the United States and pricing remains strong in key markets, according to the latest analysis by commercial real estate specialist Grubb & Ellis.

In a presentation Tuesday at DataCenterDynamics Chicago, Jim Kerrigan of Grubb & Ellis said more than 20 megawatts of data center critical load was leased in the third quarter, the strongest activity thus far in 2009. Kerrigan, the director of the data center practice at Grubb & Ellis, said demand is outpacing supply by “three-fold.”

Chicago is Hottest Colo Market
Kerrigan said downtown Chicago is the hottest colocation market in the country, while northern Virginia is seeing the strongest activity in leasing of wholesale data center space.

Kerrigan noted that the Chicago market is really two markets, with strong demand and limited supply in downtown, while larger blocks of space are available in the suburban market due to new construction. Driven by strong demand from financial trading firms, data center occupancy in downtown Chicago is pushing 95 percent.

In northern Virginia, supply is limited through the remainder of 2009, but several new projects will come online in early 2010, including new data center space from Digital Realty Trust, Power Loft, CoreSite and IT Server. …

Lori MacVittie emphasizes The Cloud Is Not A Synonym For Cloud Computing in this 10/21/2009 post:

… Thanks to the nearly constant misapplication of the phrase “The Cloud” and the lack of agreement on a clear definition from technical quarters I must announce that “The Cloud” is no longer a synonym for “Cloud Computing”. It can’t be. Do not be misled into trying, it will only cause you heartache and headaches. The two no longer refer to the same thing (if they ever really did) and there should be no implied – or inferred - relationship between them. “The Cloud” has, unfortunately, devolved into little more than a trendy reference for any consumer-facing application delivered over the Internet.

Cloud computing, on the other hand, specifically speaks to an architectural model; a means of deploying applications that abstracts compute, storage, network, and application network resources in order to provide uniform, on-demand scalability and reliability of application delivery. …

Jay Fry compares Scientists v. Cowboys: How cloud computing looks from Europe in this 10/21/2009 post:

Is Europe following the U.S. on cloud computing...or vice versa?
While I was over in Berlin for a chunk of the summer, I had a chance to connect up with some of the discussions going on in Europe around cloud computing. It's true, high tech information these days knows no international boundaries. Articles that originally run in North American IT pubs are picked up wholesale by their European counterparts. New York Times articles run everywhere. Tweets fly across oceans. And a lot of technology news is read directly from the U.S. sources, websites, communities, and the like.

However, homegrown European publications are brimming with cloud computing, too. I found references to cloud in the Basel airport newsrack and the Berlin U-Bahn newsstands, all from local European information sources (and some of their reporters are excellent). European-based and -focused bloggers are taking on the topic as well; take a look at blogs like http://www.saasmania.com/ and http://www.nubeblog.com/. Even http://www.virtualization.info/, one of the best news sources on (you guessed it) virtualization, is run by Alessandro Perilli out of Italy. And, of course, there are big analyst contingents from the 451 Group (hello, William Fellows), Gartner, Forrester, and many others in various European enclaves. …

Marketwire discusses Avanade’s Global Study: Recession Has Little Impact on Cloud Computing Adoption; C-Level Executives, IT Decision Makers Report More Than 300 Percent Increase in Planned Use in this 10/21/2009 press release subtitled “Companies Choosing Hybrid Path to Cloud Adoption; U.S. Adoption Faster Than Global Counterparts”:

Cloud computing is no longer just a buzzword. A recent study commissioned by Avanade, a business technology services provider, shows a 320 percent increase over the past nine months in respondents reporting that they are testing or planning to implement cloud computing. This is the first data that indicates a global embrace of cloud computing in the enterprise.

The study also found that while companies are moving toward cloud computing, there is little support for cloud-only models (just 5 percent of respondents utilize only cloud computing). Rather, most companies are using a combination of cloud and internally owned systems, or a hybrid approach.

"For very large organizations, the hybrid approach is logical and prudent," said Tyson Hartman, global chief technology officer at Avanade. "No one is going to rip and replace decades of legacy systems and move them to the cloud, nor should they. Additionally, at this stage of cloud computing maturity, not every computing system is appropriate for the cloud." …

<Return to section navigation list> 

Cloud Security and Governance

Tim Green reports on a semi-annual security survey in his Trust the Cloud? Americans Say No Way article of 10/24/2009 for PC World’s Business Center:

Americans don't trust cloud storage for their confidential data, with identity theft ranking as their top security concern, according to a twice-yearly survey by network security consulting firm Unisys.

Asked what they felt about personal data being stored on third-parties' remote computers, 64% say they don't want their data kept by a third party, according to the latest installment of "Unisys Security Index: United States.” …

The U.S. Security Index is based on a telephone survey of 1,005 people 18 and older.

Jaikumar Vijayan’s Microsoft wants ISO security certification for its cloud services post of 10/23/2009 to ComputerWorld’s security section begins:

Microsoft Corp. wants to get its suite of hosted messaging and collaboration products certified to the ISO 27001 international information security standard in an effort to reassure customers about the security of its cloud computing services. The move comes at a time of broad and continuing doubts about the ability of cloud vendors in general to properly secure their services.

Google Inc., which has made no secret of its ambitions in the cloud computing arena, is currently working on getting its services certified to the government's Federal Information Security Management Act (FISMA) standards for much the same reason.

It's unclear how much value customers of either company will attach to the certifications, particularly because the specifications were not designed specifically to audit cloud computing environments. Even so, the external validation offered by the standards is likely to put both companies in a better position to sell to the U.S. government market.

Windows Azure already has ISO/IEC 27001:2005 certification. See my Microsoft Cloud Services Gain SAS 70 Attestation and ISO/IEC thread of 5/29/2009 in the Windows Azure forum regarding Windows Azure ISO/IEC 27001:2005 certification:

Charlie McNerney's Securing Microsoft’s Cloud Infrastructure post announces:

Independent, third-party validation of OSSC’s approach includes Microsoft’s cloud infrastructure achieving both SAS 70 Type I and Type II attestations and ISO/IEC 27001:2005 certification. We are proud to be one of the first major online service providers to achieve ISO 27001 certification for our infrastructure. We have also gone beyond the ISO standard, which includes some 150 security controls. We have developed 291 security controls to date to account for the unique challenges of the cloud infrastructure and what it takes to mitigate some of the risks involved [Emphasis added].

Charlie is GM, Business & Risk Management, Microsoft Global Foundation Services.

• Rafe Needleman writes in his Reporters' Roundtable: The Dangers of cloud computing post of 10/23/2009 to CNet News:

This week we are covering the dangers of cloud computing. Get it? With the major loss of consumer data for the Sidekick smartphone users -- the Sidekick is made by Danger, a Microsoft company -- the whole idea of "cloud" safety was brought front and center for consumers. Businesses, likewise, are wondering if they are exposed to similar risks when they put their apps and data in the cloud.

Can we trust the cloud?

Our guests to discuss this topic are Stephen Shankland, CNET Senior Writer and author of our Deep Tech blog, and our special expert guest is Christofer Hoff, author of the Rational Survivability Blog, which is about this very topic. Hoff is director of cloud and emerging solutions at Cisco, so has a vested interest in keeping the cloud safe and profitable. [Emphasis added.]

• John Pescatore asserts Risk Is Just Like Obscenity in his 10/23/2009 post from the Gartner Symposium in Orlando, FL:

Yesterday at our last security session at Gartner’s annual Symposium, I chaired a debate called “Is Government Regulation Required to Increase Cybersecurity?” The panelists were Gartner analysts French Caldwell, Paul Proctor and Earl Perkins. Basically, I was against government regulation and those three were for it.

Essentially, French felt regulation done right was needed and would increase cybersecurity. Earl said that capitalism had no conscience and regulation is always needed to inject that, security no different. Paul’s position was that regulation was needed to get management to pay attention.

My position is that regulation around cybersecurity can’t be done right, hasn’t and won’t inject security, and only causes management to pay attention to compliance not security. The difference is critical – government regulations can only work when something is stable enough for slow moving legislators to write regulations that can lead to some audit against some stable standard. Information technology is definitely not stable – software engineering is still an oxymoron. Most everyone agreed, and said that’s why the focus of legislation should be around “risk” not technology mandates.

I left the conference audience with my prediction: risk is pretty much like obscenity. It is impossible to define, but we all know it when we see it. But we all see it differently. Legislation around obscenity has a long torturous history of failing – especially where technology is involved. And technology is at the heart of the cybersecurity issue – that’s the cyber part.

My prediction is that any legislation in the next 5 years trying to mandate cybersecurity levels will be as completely ineffective and money wasting as the V-Chip legislation was in the US in trying to deal with inappropriate content over televisions. I’ve used this analogy before – back in 2001 when the browser industry was trying to claim the use of Platform for Privacy Preferences technology would solve web privacy issues, I wrote a Gartner research note “P3P Will Be the V-Chip of the Internet.” That proved to be pretty dead on.

Chris Hoff (@Beaker) asks Can We Secure Cloud Computing? Can We Afford Not To? in this 10/22/2009 “re-post from the Microsoft (Technet) blog I did as a lead up to my Cloudifornication presentation at Bluehat v9 [which I'll be posting after I deliver the revised edition tomorrow]”:

There have been many disruptive innovations in the history of modern computing, each of them in some way impacting how we create, interact with, deliver, and consume information. The platforms and mechanisms used to process, transport, and store our information likewise endure change, some in subtle ways and others profoundly.

Cloud computing is one such disruption whose impact is rippling across the many dimensions of our computing experience. Cloud – in its various forms and guises — represents the potential cauterization of wounds which run deep in IT; self-afflicted injuries of inflexibility, inefficiency, cost inequity, and poor responsiveness.

But cost savings, lessening the environmental footprint, and increased agility aren’t the only things cited as benefits. Some argue that cloud computing offers the potential for not only equaling what we have for security today, but bettering it. It’s an interesting argument, really, and one that deserves some attention.

To address it, it requires a shift in perspective relative to the status quo. …

Beaker’s Bluehat v9 Session was “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure,” described as follows:

What was in is now out.

This metaphor holds true not only as an accurate analysis of adoption trends of disruptive technology and innovation in the enterprise, but also parallels the amazing velocity of how our data centers are being re-perimeterized and quite literally turned inside out thanks to cloud computing and virtualization.

One of the really scary things that is happening with the massive convergence of virtualization and cloud computing is its effect on security models and the information they are designed to protect. Where and how our data is created, processed, accessed, stored, backed up and destroyed in what is sure to become massively overlaid cloud-based services – and by whom and using whose infrastructure – yields significant concerns related to security, privacy, compliance, and survivability.

Further, the "stacked turtle" problem becomes incredibly scary as the notion of nested clouds becomes reality: cloud SaaS providers depending on cloud IaaS providers which rely on cloud network providers. It's a house of, well, turtles.

We will show multiple cascading levels of failure associated with relying on cloud-on-cloud infrastructure and services, including exposing flawed assumptions and untested theories as they relate to security, privacy, and confidentiality in the cloud, with some unique attack vectors.

I’ll update this post when Chris makes his updated version available.

• Eric Chabrow reports NIST Suspends IT Lab Reorganization and asks Should Computer Security Division Become NIST's 11th Lab? in this 10/22/2009 post:

As the National Institute of Standards and Technology placed a hold on a proposed reorganization of its Information Technology Laboratory (ITL), critics of the plan proposed making the lab's Computer Security Division (CSD) a lab itself.

"We have received expressions of both support and concern from various stakeholders," IT Lab Director Cita Furlani said Thursday in testimony before the House Science and Technology Subcommittee on Technology and Innovation. "We are seriously considering this input and plan to re-evaluate how to ensure that our structure is as flexible and efficient as possible in meeting the many challenges and opportunities ahead."

Under the proposed reorganization, unveiled in August, the director of the lab's Computer Security Division would have been elevated to a position within the IT Lab director's office, serving as ITL's cybersecurity adviser. The proposal would have encouraged more multidisciplinary collaboration among NIST units in developing cybersecurity programs and guidance, a move some critics saw as weakening the CSD brand. …

• David Navetta’s Legal Implications of Cloud Computing -- Part Three (Relationships in the Cloud) article of 10/21/2009 on the Information Lawgroup site advises:

In the legal world, some take the position that Cloud is no different than “outsourcing”.    Unfortunately, making that comparison reveals a misunderstanding of the Cloud and its implications.  It is sort of like saying that running is no different than running shoes. Like “running,” outsourcing is a general term describing an activity. In this case the activity involves organizations offloading certain business processes to third parties. Cloud computing (like “running shoes”) is a “new” method for leveraging existing technologies (and technological improvements that have occurred in the past 20 years) that can be used by outsourcers to provide their services more effectively and cheaply (as running shoes represents a technology that can be used to achieve the activity of running more efficiently).  In other words, one can outsource utilizing a Cloud architecture provided by a third party, or by using a more traditional dedicated third party hosted technology solution. Both are different technologies or methods for achieving the same activity: outsourcing of business processes.

For lawyers analyzing outsourcing to the Cloud the question is whether the technology, operational aspects and various relationships of a given Cloud transaction create new legal issues or exacerbate known legal problems. To illuminate this question, this post explores the relationships that exist between organizations outsourcing in the Cloud (“Cloud Users”) and those providing services in the Cloud. Coincidentally (or maybe not so much) understanding these relationships is crucial for attorneys that need to address legal compliance risk and draft contracts to protect clients entering into the Cloud. …

David’s earlier installments in the Legal Implications of Cloud Computing series are linked from Part Three, but those links are broken.

Barbara Darrow lists (and expands on) these Top five IT channel lessons for the quarter in her 10/21/2009 post to the IT Knowledge Exchange:

    1. Fear the cloud
    2. While you’re at it, fear Google
    3. Watch Cisco like a hawk
    4. Keep your eye on M&A
    5. Don’t equate small with easy

<Return to section navigation list> 

Cloud Computing Events

•• James Hamilton recounts his presentation to the Stanford Clean Slate CTO Summit in this 10/24/2009 post:

I attended the Stanford Clean Slate CTO Summit last week. It was a great event organized by Guru Parulkar. … I presented Networks are in my Way. My basic premise is that networks are both expensive and poor power/performers. But, much more important, they are in the way of other even more important optimizations. Specifically, because most networks are heavily oversubscribed, the server workload placement problem ends up being seriously over-constrained. Server workloads need to be near storage, near app tiers, distant from redundant instances, near customer, and often on a specific subnet due to load balancer or VM migration restrictions. Getting networks out of the way so that servers can be even slightly better utilized will have considerably more impact than many direct gains achieved by optimizing networks.

Providing cheap 10Gbps to the host gets networks out of the way by enabling the hosting of many data intensive workloads such as data warehousing, analytics, commercial HPC, and MapReduce workloads. Simply providing more and cheaper bandwidth could potentially have more impact than many direct networking innovations. …
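
Hamilton’s over-constraint argument is easy to illustrate. The sketch below is a hypothetical toy filter of my own (not anything from his talk): each constraint that an oversubscribed network forces onto the scheduler, such as keeping a workload on the rack that holds its storage or on a particular subnet, shrinks the candidate set, and the combination can easily leave no feasible host.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical host record for illustration only.
class Host
{
    public string Name;
    public string Rack;       // storage locality: the data lives on a specific rack
    public string Subnet;     // load-balancer / VM-migration restriction
    public double FreeCores;
}

class PlacementSketch
{
    // Each predicate models one of the constraints Hamilton lists. With an
    // oversubscribed network, the rack and subnet filters cannot simply be relaxed.
    static List<Host> Candidates(IEnumerable<Host> hosts, string storageRack,
        string requiredSubnet, string replicaHost, double coresNeeded)
    {
        return hosts
            .Where(h => h.Rack == storageRack)        // near storage
            .Where(h => h.Subnet == requiredSubnet)   // subnet restriction
            .Where(h => h.Name != replicaHost)        // distant from the redundant instance
            .Where(h => h.FreeCores >= coresNeeded)   // capacity
            .ToList();
    }

    static void Main()
    {
        var hosts = new List<Host>
        {
            new Host { Name = "h1", Rack = "r1", Subnet = "10.0.1.0", FreeCores = 2 },
            new Host { Name = "h2", Rack = "r1", Subnet = "10.0.2.0", FreeCores = 8 },
            new Host { Name = "h3", Rack = "r2", Subnet = "10.0.1.0", FreeCores = 8 },
        };

        var fits = Candidates(hosts, "r1", "10.0.1.0", "h1", 4);
        Console.WriteLine(fits.Any()
            ? "Placed on " + fits[0].Name
            : "Over-constrained: no host satisfies every requirement");
    }
}
```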

David Pallman will join Aaron Skonnard, Simon Guest, Michael Stiefel, and Chris Pels as hosts of PDC 2009’s Birds of a Feather (BOF) @Lunch Cloud Computing Session on Wed 11/18 according to his Will Cloud Computing Change Your Life? at PDC post of 10/23/2009.

• Randy Bias “gave a short, 5-minute ‘lightning talk’ at Cloud Camp in the Clouds” according to his State of the Cloud – Cloud Camp in the Clouds post of 10/23/2009:

Here is a recreation of the talk including my original deck, which has my fonts (the perfectionist in me, I can’t help it), and a professional (sort of) quality voice-over.  I’d also like to thank the Slideshare folks whose audio synchronization tool was a cinch to use.

• Eric Norlin’s 19 Days to Getting Ahead of the Curve - Free Tix Here post of 10/23/2009 promotes the Defrag 2009 Conference being held on 11/11 to 11/12/2009 in Denver, CO:

Defrag is now officially less than three weeks away, and I’m starting to feel like a kid on Christmas eve.

But not so much the kid that I miss interesting developments - like this piece from the Gartner Symposium about IT falling “behind the curve.” Gartner argues that there’s a “fundamental mismatch” between what IT is good at and what is happening on the internet. The result is IT departments everywhere losing a grip on what’s actually going on. The “digital natives” are on iPhones, google, facebook, twitter, etc — whether you even KNOW it or not (forget liking). You no longer have 18 months to roll out a collaboration suite. People are routing around the steady pace of the IT department and “rolling their own.”

This is all really just a continuation of what happened when Salesforce.com first launched. IT people everywhere were simply STUNNED by the idea that Billy Joe down in business unit X had simply *bought himself* a license, uploaded sales data, and started using a service that no one had vetted OR approved. Guess what? That’s now everything. And it’s exploding. And it’s only gonna get worse.

The big question in the SaaS world for the last few years has been “how to keep control” - where control means compliance, security, auditing, etc. You can just feel that wave of compliance/audit/security sweeping toward all things in the social computing world (it may be the thing that pushes us into the trough). Look for it next year.

See, and that right there is why if you’re on the fence about Defrag, you should hop off and come join us. Because if you’re anywhere CLOSE to thinking about these issues (and how they intersect with so many other technology touch points), then you need to start seeing what’s coming next. …

• Joe McKendrick reports Anne Thomas Manes: SOA can be resurrected, here's how for ZDNet on 10/23/2009 from this year’s International SOA Symposium in Rotterdam:

…[T]he prevailing theme is “Next-Gen” SOA, in which we see service-orientation emerge from its bout with the skeptics to take a stronger role within the enterprise.

Thomas Erl, the conference organizer and prolific author on SOA topics, launched the event, noting that we are moving into a period of Next-Generation SOA, with the foundation of principles and practices being laid within many enterprises.

Next up was Anne Thomas Manes of Burton Group, who declared in a post at the beginning of the year that “SOA” — at least as we knew it — was “dead.” However, the second part of Anne’s post was “Long Live Services,” which is the theme that she picked up on in her keynote address.

“Business wasn’t really interested in buying something called ‘SOA,’” she declared, adding that in her own research, fewer than 10% of companies have seen significant business value in their efforts.

However, that is not to diminish the importance of service oriented architecture. “The average IT organization is in a mess,” she says. “The average business has 20 to 30 core capabilities. Why do they need 2,000 applications to support those 20-30 capabilities?” …

Joe and Anne are signatories of the SOA Manifesto, the almost-final version of which and its Guiding Principles appeared on 10/23/2009.

Barton George reviews Gartner’s Bittman: Private Cloud’s value is as Stepping Stone presentation at Gartner’s IT Symposium (see below) in this 10/23/2009 post:

Yesterday Gartner distinguished analyst Tom Bittman, who covers cloud computing and virtualization,  posted some thoughts and observations from the Gartner Symposium in Orlando.

Based on Tom’s observations, private cloud (however defined) seems to have captured the hearts and minds of IT. Before he began his talk on virtualization he did a quick poll asking how many in the audience considered private cloud computing to be a core strategy of theirs. 75% raised their hands. While not overly scientific, that’s a pretty big number.

… Tom sees private cloud’s value as a means to an end and concludes his post by saying

“The challenge with private cloud computing, of course, is to dispel the vendor hype and the IT protectionism that is hiding there, and to ensure the concept is being used in the right way – as a stepping-stone to public cloud… [italics mine]”

This is where I disagree. I believe that while private cloud can be a path to the public cloud, it can also be an end unto itself. Unfortunately (or fortunately) we will always have heterogeneous environments, and in the future that will mean a mixture of traditional IT, virtualized resources, private clouds and public clouds. In some cases workloads will migrate from virtualization out to the public cloud, but in other cases they will stop along the way and decide to stay.

IT will become more efficient and more agile as the cloud evolves, but there will be no Big Switch (see above illustration); IT will need to manage a portfolio of computing models.

Thomas Bittman’s Talk of Clouds (and Virtualization) in Orlando post of 10/22/2009 from Gartner’s IT Symposium emphasizes attendee preference for private over public clouds:

At this week’s Gartner Symposium in Orlando, there was a noticeable shift in the end user discussions regarding virtualization and cloud computing, and a few surprises:

1) In my presentation on server virtualization on Monday, before I started, I asked the audience how many of them considered private cloud computing to be a core strategy of theirs. 75% raised their hands (I expected maybe one-third). Clearly, everyone has a different idea of what private cloud computing means (or doesn’t), but the fact that so many people have glommed onto the term is very interesting. …

2) My one-on-ones shifted heavily from virtualization toward cloud computing and private cloud computing. I had 18 one-on-ones that discussed server virtualization, and 26 that discussed cloud and private cloud. …

Tom concludes:

The challenge with private cloud computing, of course, is to dispel the vendor hype and the IT protectionism that is hiding there, and to ensure the concept is being used in the right way – as a stepping-stone to public cloud, based on a timing window, the lack of a mature public cloud alternative and a good business case to invest internally.

Andrea Di Maio’s A Day in the Clouds post from the Gartner Symposium on 10/21/2009 begins:

I spent a good part of today at Gartner Symposium in Orlando talking to government clients about cloud computing, moderating a vendor panel and finally running an analyst-user roundtable, all on the topic of cloud computing in government.

Two main take-aways for me:

  • Government clients are confused as to (1) whether vendor offerings really meet their security requirements, (2) which workloads could be easily moved to a public cloud and (3) what’s the roadmap from public to private/community clouds.
  • Some vendors have great confidence that their existing solutions already meet most of the security requirements and claim that there is already a lot of government stuff in the cloud.

Therefore, either vendors are not effectively or convincingly marketing their success stories, or government clients use “security” as a blanket topic not to move to the cloud.

My current reading though – and I know this won’t be welcome by some – is that most of these conversations really are about alternative service delivery models.

Except in one or two cases, all conversations focused on “cloud computing can make me save money” rather than “I need scalability or elasticity”. …

Guy Rosen made a Presentation: Measuring the Clouds to an IGT workshop on 10/21/2009 and includes a link to a copy of his slides:

Following up on the cloud research I’ve been conducting and publishing here, yesterday I presented the topic at an IGT workshop. There was a lot of great discussion on the findings as well as ideas for new angles and fresh approaches to looking at the data. Thanks to IGT’s Avner Algom for hosting the session!

I’ve SlideShare’d the presentation below for my readers’ enjoyment. If anyone is interested in discussing feel free to reach out.

IGT is holding its primary cloud computing event of the year, IGT2009 – World Summit of Cloud Computing, on December 2-3. A ton of folks from the industry are attending and I’ll definitely be there. I look forward to meeting fellow cloud-ers. …

You can review his slides here.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

•• Steve Lesem contends that Salesforce and Box.net: SaaS and Cloud Storage will Transform Enterprise IT in this breathless post of 10/25/2009:

The announcement that Salesforce is integrating directly with cloud-storage Box.net is the tip of the iceberg when it comes to the future of the cloud:

Techcrunch explains what Box.net is thinking:

“CEO Aaron Levie says that this is the first step in Box.net's plan to give businesses a secure way to share their files across multiple services on the web. He says that many of the cloud services geared toward the enterprise don't work well together -- oftentimes you'll have to reupload the same content to multiple sites to share or edit it. Box.net wants to help unify these services by serving as the central hub for your uploaded files, which you can then access from these other web-based services. Levie hints that we'll be seeing more integrations with other services in the near future.”

What we are witnessing is the future of enterprise IT infrastructure. We have been talking about programmatic access through RESTful APIs for some time now. This move by Salesforce is an evolutionary step in how enterprise IT will manage its IT infrastructure - it will be a cross-cloud platform, with applications and open access to the storage cloud of your choice.

Security is not an issue, and the future is about cross-cloud collaboration.

I bet the Salesforce.com/Box.net integration won’t qualify as even an “evolutionary step,” let alone prove transformative. All cloud storage players offer more or less RESTful APIs, regardless of the overhead cost.
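
For readers who haven’t worked with these APIs, the sketch below shows the general shape of the RESTful blob access Steve and I are referring to; the endpoint and bearer token are hypothetical placeholders of my own, not Box.net’s, Windows Azure Storage’s, or any other vendor’s actual API.

```csharp
using System;
using System.IO;
using System.Net;

class RestStorageSketch
{
    static void Main()
    {
        // Hypothetical endpoint and token; real providers each define their
        // own URIs, versioning, and authentication schemes.
        const string fileUri = "https://storage.example.com/v1/files/12345";
        const string authToken = "REPLACE_WITH_REAL_TOKEN";

        var request = (HttpWebRequest)WebRequest.Create(fileUri);
        request.Method = "GET";
        request.Headers["Authorization"] = "Bearer " + authToken;

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine("HTTP {0}", (int)response.StatusCode);
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```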

• Bob Evans reports “Two offshore-services execs and a prominent American CIO plan to open IT-services centers across the U.S. as low-risk and price-competitive alternatives to offshore providers” in his New Tech Firm Hiring Hundreds In U.S. To Take On The World post of 10/22/2009 to InformationWeek’s Global CIO blog:

… Called Systems In Motion, the company's plan is to create a handful of U.S.-based centers offering world-class IT services at competitive prices and with less risk, fewer regulatory obstacles, and a level of flexibility and nimbleness that today's high-change global business environment requires [Systems In Motion link added].

The startup company expects to get up to speed quickly by utilizing leading-edge technologies and infrastructure such as Salesforce.com's Force.com, Amazon's Web Services, and Zephyr Cloud Platform to keep costs low and project cycles short, and to position itself on the front edge of transformative approaches that many enterprises are just beginning to fully comprehend. [Zephyr link added] …

• Jim Liddle reported that GigaSpaces finds a place in the Cloud on 10/23/2009:

[A] new report from analysts The 451 Group outlines the success to date  that GigaSpaces has had in the Cloud Sector. The report talks about how GigaSpaces now has 76 customers using its software on cloud-computing platforms. This is up from 25 on Amazon’s EC2 in February. GigaSpaces have moved forward their cloud strategy in recent weeks, announcing support for deployments on GoGrid and also recently announcing tighter integration with VMWare which enables GigaSpaces to dynamically manage and scale VMWare instances and enable them to participate in the scaling of GigaSpaces hosted applications.

GigaSpaces have a number of hybrid deployments in which their application stack is hosted in the cloud and the data or services are hosted on premise which have had some notable successes.

The GigaSpaces product provides a strong Cloud middleware stack which encompasses logic, data, services, and messages in memory, underpinned by real-time Service Level Agreement enforcement that functions at the application level, enabling the stack to scale up and out in real time based on SLAs set by the business. As everything is held in memory, it is faster than alternative ways of trying to build enterprise-scale applications in the cloud, and it has sophisticated sync services that let data be written asynchronously (or synchronously) to a DB or persistent store.
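
The SLA-driven scaling Jim describes can be sketched generically. The loop below is a hypothetical illustration of the idea (watch an application-level latency SLA and add an instance when it is breached); the threshold, metrics, and scaling step are my own stand-ins, not GigaSpaces’ API.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class SlaScalingSketch
{
    const double LatencySlaMs = 200.0;            // latency SLA set by the business (hypothetical)
    static readonly Random Rng = new Random();
    static int instances = 2;

    // Stand-in for real monitoring; a real middleware stack would sample
    // application-level metrics from each running instance.
    static List<double> SampleLatenciesMs()
    {
        return Enumerable.Range(0, 50)
                         .Select(_ => Rng.NextDouble() * 400.0 / instances)
                         .ToList();
    }

    static void Main()
    {
        for (int tick = 0; tick < 5; tick++)
        {
            var samples = SampleLatenciesMs();
            samples.Sort();
            double p95 = samples[47];             // rough 95th percentile of 50 samples

            Console.WriteLine("tick {0}: p95 = {1:F0} ms, {2} instance(s)", tick, p95, instances);

            if (p95 > LatencySlaMs)
            {
                instances++;                      // scale out when the SLA is breached
                Console.WriteLine("  SLA breached; scaling out to {0} instance(s)", instances);
            }
        }
    }
}
```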

• Tom Lounibos’s What Happens When Vendors Repackage Old Technology and Call it a Cloud Service? post of 10/23/2009 takes on HP and its LoadRunner product:

In an effort to remain relevant, some software vendors take marketing advantage of the newest hot technology fad at the expense of their own customers. Cloud computing (the newest hot trend) has definitely been defined and positioned by some traditional software and hardware vendor’s marketing organizations to meet their specific agenda, which usually means extending the life of an existing product or product line.  It has become a virtual “cloud rush” as to how many times “cloud computing” can be mentioned in product and marketing collateral. . . including collateral for products that were first developed back when Bill Clinton was president!

A great example of this “cloud rush” is Hewlett Packard with its LoadRunner product.  LoadRunner was developed in the early 90’s to help corporate development teams test client/server applications.  It became, over time, the de-facto standard testing tool for most enterprise companies and was priced accordingly.  Entry-level pricing began at $100,000 and if you needed to simulate thousands of users the cost skyrocketed into the millions of dollars very quickly.

Today, HP is attempting to “perfume the pig” so to speak, by repackaging LoadRunner into a new cloud-based service called Elastic Test. To the uneducated observer it simply looks like a new cloud service for testing web applications.  The problem is that it’s the same LoadRunner product built almost 20 years ago to test client/server applications and it carries a lot of technology baggage along with it. Subsequently, HP chooses to pass along a lot of this baggage in the form of costs back to its customer base.  For example, an entry-level HP virtual test will take weeks to develop and will have a “starting” cost of $45,000.  Hardly living up to cloud computing’s value proposition for on-demand services that provide ease, speed and affordability.

Tom is CEO of SOASTA, one of the first cloud-based testing services.

• HP claims a “Quicker Path To Infrastructure and Application Modernization” in its Infrastructure Services Executive Overview publication:

Too many applications. Too much customization. Underutilized server capacity. Operations costs that are spiraling upward. Increases in power and cooling costs.

And that's just the beginning of the challenges.

HP Adaptive Infrastructure (AI) Services offering can help you overcome these challenges with a new approach to outsourcing that speeds the delivery of next-generation infrastructure benefits. HP Adaptive Infrastructure Services offers a prebuilt infrastructure with a design based on HP Adaptive Infrastructure innovations and delivered through highly automated processes and procedures. You will realize quicker infrastructure and application modernization at a reduced risk and cost. Because all assets are owned and managed by HP, you will be able to convert the capital investment associated with an infrastructure buildout into an ongoing operating expense.

This sounds similar to hosting to me but doesn’t mention “cloud.”

Steve Lesem asks BMC's Tideway Acquisition: A Stairway to the Cloud? in this 10/21/2009 post:

BMC Software's announcement that it has entered into a definitive agreement to acquire privately-held Tideway Systems Limited (Tideway), a provider of IT discovery solutions, can be interpreted as an extension of BMC's commitment to cloud computing.

Here are two important statements in the press release:

1. BMC will deliver unmatched visibility into the data center and rapidly reduce the time and resources required to model, manage and maintain applications and services. This is critical for IT organizations that are transitioning applications and services to cloud computing environments.

2. With the acquisition of Tideway, BMC adds the industry's leading application discovery and dependency mapping capabilities to manage and maintain complex data center environments including distributed, virtual and mainframe IT platforms and further extends its leadership in business service management.

So let's see what this could mean. 

It gives BMC the critical capability to discover and map complex data environments which are both physical and cloud-based.

This acquisition also puts BMC in a strong position to build a cloud-based CMDB.  While that might not happen right away, it is clearly now a key capability if they decide to pursue it. It also allows them to build a federated CMDB - and manage the hybrid cloud - private and public - across enterprise and hosted data centers. 

The evolution towards cloud-based ITIL continues.

<Return to section navigation list>