Friday, November 19, 2010

Windows Azure and Cloud Computing Posts for 11/19/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available for HTTP download at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Adron Hall (@adronbh) posted Windows Azure Table Storage Part 1 on 11/19/2010:

This is the first part of a two-part series on building a basic working web app using ASP.NET MVC to create, update, delete, and access views of the data in a Windows Azure Table Storage Service.  The second part will be published next Monday.  With that stated, let’s kick off with some technical specifics about Windows Azure Table Storage.

The Windows Azure Table service provides structured storage in the form of tables.  The table storage account you set up within Windows Azure is globally unique.  Any number of tables can be created within a given account, with the requirement that each table has a unique name.

The table storage account is addressed through a unique URI such as http://&lt;accountname&gt;.table.core.windows.net.

Within each table the data is broken into collections called entities.  Entities are basically rows of data, similar to a row in a spreadsheet or a row in a database table.  Each entity has a required primary key and a set of properties.  A property is a name/typed-value pair, similar to a column.

Tables, Entities, and Properties

There are three core concepts to know when dealing with Windows Azure Tables: tables, entities, and properties.  For each of these core features of Windows Azure Table Storage it is important to be able to add, update, and delete the respective table, entity, or property.

Windows Azure Table Hierarchy:

  • Table – Similar to a spreadsheet or table in a relational database.
  • Entity – Similar to a row of data in a spreadsheet, relational database, or flat file.
  • Property – Similar to a cell in a spreadsheet or a column value in a relational database.

Each entity has the following system properties: a partition key, a row key, and a timestamp.  These properties are included with every entity and have reserved names.  The developer is responsible for supplying the partition key and row key on insert, while the timestamp is managed by the server and is read-only.

Three properties that are part of every entity:

  • Partition Key
  • Row Key
  • Timestamp
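
In C# code written against the StorageClient library that ships with the Windows Azure SDK, these three system properties come from the TableServiceEntity base class. Here is a minimal sketch of an entity (the class and property names are hypothetical, not from Adron's sample):

```csharp
using Microsoft.WindowsAzure.StorageClient;

// Sketch: a table entity inherits PartitionKey, RowKey and Timestamp
// from TableServiceEntity. Class and property names are hypothetical.
public class EmailListing : TableServiceEntity
{
    // Parameterless constructor is required for serialization.
    public EmailListing() { }

    public EmailListing(string lastName, string email)
    {
        PartitionKey = lastName; // supplied by the developer on insert
        RowKey = email;          // supplied by the developer; unique within the partition
        // Timestamp is maintained by the server and left untouched here.
    }

    public string FirstName { get; set; }
    public string Email { get; set; }
}
```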

Each table name must conform to the following rules: a name may contain only alphanumeric characters, may not begin with a numeric character, is case-insensitive, and must be between 3 and 63 characters long.
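
Those rules can be captured in a quick validation helper (a sketch of my own, not part of the SDK):

```csharp
using System.Text.RegularExpressions;

public static class TableNames
{
    // Alphanumeric only, no leading digit, 3 to 63 characters total.
    // Because names are case-insensitive, "Customers" and "customers"
    // refer to the same table.
    public static bool IsValid(string name)
    {
        return name != null &&
               Regex.IsMatch(name, "^[A-Za-z][A-Za-z0-9]{2,62}$");
    }
}
```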

Tables are split across many nodes for horizontal scaling, and the traffic to these nodes is load balanced.  The entities within a table are organized by partition.  A partition is a consecutive range of entities that share the same partition key value, the partition key being a unique identifier for the partition within a table.  The partition key forms the first part of the entity’s primary key and can be up to 1 KB in size.  It must be included in every insert, update, and delete operation.

The second part of the primary key is the row key property, which uniquely identifies an entity within its partition.  Together, the partition key and row key form the entity’s unique primary key; like the partition key, the row key is supplied by the developer on insert.

The Timestamp property is a DateTime value maintained by the server to record when the entity was last modified.  This value is used to provide optimistic concurrency for table storage and should not be inserted or updated by the client.

Each property name is case-sensitive and cannot exceed 255 characters.  The accepted practice is to give properties names that are similar to C# identifiers yet conform to the XML specification.  Examples include “streetName”, “car”, or “simpleValue”.

To learn more, check out the W3C’s XML specification, which provides additional information about properly formed XML as it relates to XML usage with Windows Azure Table Storage.

Coding for Windows Azure Tables

What I am going to show in this code sample is how to set up an ASP.NET MVC application around the business need of keeping an e-mail list for merges and other related needs.

I wrote the following user stories around this idea.

  1. The site user can add an e-mail with first and last name of the customer.
  2. The site user can view a listing of all the e-mail listings.
  3. The site user can delete a listing from the overall listings.
  4. The site user can update a listing from the overall listings.

This will provide basic, fully functional create, update, and delete operations against Windows Azure Table Storage.  Our first step is to create the necessary projects within Visual Studio 2010 to build the site with Windows Azure storage and deployment.

  • Right-click the Visual Studio 2010 shortcut and select Run as Administrator to launch Visual Studio 2010.
  • Click on File, then New, and finally Project.  The new project dialog will appear.
  • Select the Web Templates and then ASP.NET MVC 2 Empty Web Application.
  • Name the project EmailMergeManagement.  Click OK.
  • Now right click on the Solution and select Add and then New Project.  The new project dialog will appear again.
  • Select the Cloud Templates and then the Windows Azure Cloud Service.
  • Name the project EmailMergeManagementAzure.  Click OK.
  • When the New Cloud Service Project dialog comes up, just click OK without selecting anything.
  • Right click on the Roles Folder within the EmailMergeManagementAzure Project and select Add and then Web Role Project in Solution.
  • Select the project in the Associate with Role Project Dialog and click OK.

The Solution Explorer should have the following projects, folders, files, and roles set up.

Solution Explorer

  • Now create two controller classes, one called StorageController and one called HomeController.
  • Next, add a Storage and a Home directory in the Views directory.
  • Add a view called Index.aspx to each of those directories.
  • In the Index.aspx view in the Home directory, add the following HTML.

Adron continues with extensive C# sample code and concludes:

That’s it for part 1 of this two-part series.  I’ll have the next entry posted this coming Monday, so stay tuned for the final steps.  :)
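
Adron's C# sample code is elided above; for reference, the core insert flow against table storage with the 2010-era StorageClient library looks roughly like this (the table name, class, and member names are hypothetical, not taken from his sample):

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Sketch: a minimal entity plus the create-table-and-insert sequence
// a StorageController action might perform.
public class EmailListing : TableServiceEntity
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public static class EmailStorage
{
    public static void Insert(CloudStorageAccount account, EmailListing listing)
    {
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("EmailListings");

        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("EmailListings", listing);
        context.SaveChanges(); // issues the HTTP request(s) to the table service
    }
}
```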

Rinat Abdullin (@abdullin) observed that Lokad CQRS does not use Windows Azure Table Storage on 11/19/2010:

We’ve got a really interesting question in the Ask Lokad community today. It actually highlights an important architectural decision made in Lokad.CQRS.

We're taking a look at the implementation of "Sample05 - Web + Worker", and wondering whether it is possible to use table storage rather than blob storage as the backing store for ViewReader/ViewWriter.

It looks like blob storage and file storage are the two provided implementations, but perhaps we're missing something.

It seems like it would make sense to use Table Storage for many types of views (given Table Storage and the 100x cost factor).

Thoughts? Should we back off ViewReader/ViewWriter for now and go with CloudTable instead, or is there a vision behind ViewReader/ViewWriter that we should wait to see unfold?

Thanks Scott!

Lokad.CQRS explicitly does not support Table Storage, focusing on Windows Azure Blob Storage instead. This was a conscious choice based on experience, extreme performance scenarios, and the resulting complexity in the code. Table Storage brings more problems than benefits for the CQRS-in-the-cloud approach:

  • Lokad.CQRS solutions built with TableStorage model will be less portable (between various cloud platforms and on-premises deployments).
  • There is a strong theory behind this choice (Pat Helland's works, DDD thinking and their mapping to CQRS).
  • It's possible to do logical batch inserts to Blobs as well (just depends on how you define partition boundaries and processing).
  • Azure Blobs map directly to the CDN features, which makes them perfect candidate for CQRS Views and scaling.
  • Table storage does not support MD5 hashing, while Blob Storage does.
  • Even with Lokad.Cloud, you can't put more than 960 Kb into a row in Table Storage.
  • There are also a few more caveats with Table Storage, which Lokad R&D could probably explain better than I can.

Basically, I embedded into Lokad.CQRS some experience coming from the Lokad.Cloud framework. The latter stresses Windows Azure in many ways (researchers tend to do this often) and provides an invaluable source of information and feedback.

In short, Windows Azure Blob Storage with ViewReader and ViewWriter are the preferred option for CQRS view persistence (interfaces and classes will be refactored within the next 1-2 months). As for the command processing (write-side), there will probably be other approaches (since CQRS is inherently persistence-ignorant). Options will probably include:

  • NHibernate for SQL Azure (a la lightweight CQRS in Cloud) - production tested and available right now as a separate module in Lokad.CQRS (plugs directly into the transaction scopes);
  • Direct state persistence to Blob.
  • Event Sourcing.

In essence, the current release of Lokad.CQRS was mostly about the CQRS-opinionated open source service bus for Windows Azure (with some guidance and helpers). The second release will focus on building actual CQRS solutions, based on our current experience.

For more background on Command Query Responsibility Segregation (CQRS) see Zilvinas Saltys’ The Developer Day blog.


SQL Azure Database and Reporting

Steve Yi listed 7 Things You Need To Know about SQL Azure Reporting on 11/18/2010:

Microsoft SQL Azure Reporting is a cloud-based reporting platform that provides comprehensive reporting functionality for a variety of data sources. SQL Azure Reporting includes a complete set of tools for you to create, manage, and deliver reports, and APIs that enable developers to integrate or extend data and report processing in custom applications. 

Microsoft SQL Azure Reporting lets you use the familiar on-premises tools you’re comfortable with to develop and deploy operational reports to the cloud. There’s no need to manage or maintain a separate reporting infrastructure, which leads to the added benefit of lower costs (and less complexity). Your customers can easily access the reports from the developer portal, through a web browser, directly from the cloud reporting server, or embedded within your applications.

It’s Based on SQL Server Reporting Services

SQL Azure Reporting provides many of the features you know from SQL Server Reporting Services 2008 R2 to create your reports with tables, charts, maps, gauges, matrixes, and more using the familiar Business Intelligence Development Studio, and deploy them on-premises or in the cloud.  If you are looking for Business Intelligence Development Studio to build your reports, it comes as part of the free download of Microsoft SQL Server 2008 R2 Express with Advanced Services (download it here).

Report Execution Happens in the Cloud

SQL Azure Reporting is server-side reporting and the servers are located in the Windows Azure Platform data centers. When the report executes the query or generates the graphics for the charts, it all happens in Microsoft’s data center.

If you have used the Microsoft Report Viewer control to embed reports into a web application, you know that you can have local processing (.rdlc), where the report is generated on your web site, or remote processing (.rdl), where report generation is done on an on-premises SQL Server. With SQL Azure Reporting, the Report Viewer control operates in remote processing mode.
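
In code, that remote configuration looks roughly like the sketch below (the report server URL and report path are placeholders, not real endpoints):

```csharp
using System;
using Microsoft.Reporting.WebForms;

// Sketch: pointing the ASP.NET ReportViewer control at a remote report
// server, as SQL Azure Reporting requires. URL and path are placeholders.
public static class ReportViewerSetup
{
    public static void ConfigureRemote(ReportViewer viewer)
    {
        viewer.ProcessingMode = ProcessingMode.Remote;
        viewer.ServerReport.ReportServerUrl =
            new Uri("https://<your-service>.reporting.windows.net/ReportServer");
        viewer.ServerReport.ReportPath = "/MyReports/SalesSummary";
    }
}
```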

SQL Azure Reporting is Part of the Windows Azure Platform

Being part of the Windows Azure Platform means you get all the same tools you use for Windows Azure and SQL Azure, along with the scalability you expect. Benefits include quick provisioning in the data center and integration into the Azure Developer Portal.

The Data Source is SQL Azure

SQL Azure Reporting allows you to execute reports against SQL Azure databases in the cloud – that means the data source the report uses points to a SQL Azure database. If you are currently using shared data sources, you can simply redirect them to SQL Azure once you have uploaded your data.
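
Redirecting a data source typically comes down to swapping in a SQL Azure connection string; the general shape is shown below (server name, database, and credentials are placeholders):

```
Server=tcp:<servername>.database.windows.net;Database=<database>;
User ID=<login>@<servername>;Password=<password>;Trusted_Connection=False;Encrypt=True;
```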

Reports Are Exactly the Same Format

The reports you deploy to SQL Azure Reporting are exactly the same as the reports that you deploy to an on-premises SQL Server Reporting Services – all that is different is the data source. This means that you can leverage all the reports that you have already written for SQL Server Reporting Services and can directly deploy them to SQL Azure Reporting. It also means you continue using Business Intelligence Development Studio to author reports.

There is Nothing New to Download and Install

Since you are using the same tools to develop the reports as you used previously, and you are executing the reports in the cloud, there is nothing additional you need to install on your local machine. If you are looking for Business Intelligence Development Studio to build your reports, it comes as part of the free download of Microsoft SQL Server 2008 R2 Express with Advanced Services (download it here).

You Can Access SQL Azure Reporting With a Browser

From anywhere, you can enter your SQL Azure Reporting URL, login, and password to view the reports and download them as PDF or Excel files. You don’t need to deploy a web site with the embedded Report Viewer control to view the reports. Unlike an on-premises SQL Server, where you would have to open the firewall to allow this kind of access, SQL Azure Reporting comes Internet-ready.

Getting Started

View Introduction to SQL Azure Reporting by Nino Bice [and] Yi Liao as a pre-recorded video from PDC 2010.

To learn more about SQL Azure Reporting and how to sign up for the upcoming CTP, visit:

To learn more about authoring reports with SQL Server Reporting Services and Business Intelligence Development Studio, see: Working with Report Designer in Business Intelligence Development Studio.

Directions Magazine reported IDV Solutions Highlights SQL Azure in “District Mojo” Data Visualization in an 11/17/2010 press release:

IDV Solutions announced today the release of District Mojo, an interactive map and data visualization built on Visual Fusion 5.0. Visual Fusion is innovative business intelligence software that unites data sources in a web-based, visual context for better insight and understanding.

District Mojo opens up the world of U.S. Congressional Districts and the tradition of gerrymandering, the practice of creating oddly shaped voting districts for political advantage. Lampooned when it first happened in 1812, the creativity of politically motivated district drawing has resulted in some remarkable geographic shapes. District Mojo uses data visualization and interactivity to highlight the more interesting examples and rate every congressional district based on its “gerrymanderedness”. With the 2010 Census prompting states to redistrict in the near future, understanding this topic is important for all Americans.

District Mojo takes United States Congressional District data and puts it in the interactive context of charts, graphs, and a map. Data for this application is stored in SQL Azure, Microsoft’s self-managing, scalable, business-ready cloud storage. Visual Fusion 5.0 consumes many other data sources as well, such as SharePoint Lists, Excel Files, and ArcSDE content. Microsoft’s Bing Maps serves as the canvas for visualizing all the district information in the informative context of location.

Released on October 26, Visual Fusion 5.0 takes the product’s location intelligence and data visualization capabilities to a new level by adding complementary analytics out of the box. Now business users can see their data in the full context of location, time, and analytics, each context interacting with and amplifying the others. The new visualization, interaction, and application-building capabilities in this 5.0 release highlight the main value points Visual Fusion brings to the enterprise: enhanced insight, increased productivity, and business agility.

About IDV Solutions
IDV is a business intelligence software company committed to helping organizations gain more insight from data. By repeatedly solving key problems for customers in the Global 2000 and government, IDV and its products have earned a reputation for innovation, speed, and the highest quality user experience. For more information, please visit

Braulio Megias recently added SQL Azure: Filestream datatype through the blob storage to the Windows Azure Feature Voting Forum:

This would reduce costs for those who store files inside their SQL Server 2008 databases when migrating from on-premise scenarios.

I think it’s a great idea! Please vote it up.

For more information on SQL Server 2008’s FILESTREAM data type, see Lenni Lobel’s SQL Server 2008 FILESTREAM Part 1 of 3: Introducing FILESTREAM post of 11/18/2010 (et seq.)


Marketplace DataMarket and OData

Peter Laudati (@jrzyshr) covers WCF OData and SOAP connectivity to Android in his Connected Show #38 – WCF? Droid Does! podcast of 11/19/2010:

And, we’re back! It’s been two months since our last show, but the Connected Show is alive and well!  Episode #38 is available now!

In an increasingly diverse world of mobile devices, the .NET developer is likely to end up supporting multiple mobile platforms. In this episode, guest Roger Heim joins Peter to talk about connecting .NET & WCF Services with Android clients.

Also, Dmitry unknowingly joins the show with his take on the PDC 2010 announcements. Peter covers the (in)sanity of Silverlight v. HTML5.

Show Notes & Resources For Episode #38
Items Discussed During News & Banter

Silverlight Dead?! Uh… NOT!

Windows Azure At The PDC 2010

Items Discussed During Interview with Roger

Connect With Roger

Most WCF services use SOAP and WSDL. There is no native SOAP support in Android; however, 3rd-party libraries are available.  If you want to consume WCF SOAP-based services in Android, consider the following library:

WCF Data Services makes it very easy to build data-centric services on the .NET platform.  WCF Data Services uses the OData protocol on the wire, which can be consumed by Android too, via a 3rd-party library.

All About OData & AtomPub

Consuming OData (produced by WCF Data Services) From Android

  • Restlet – A RESTful framework for Android and other Java implementations that can consume OData services.  This provides a nice Java client-side proxy class you can use on Android to talk to an OData service.
  • Library To Add JSONP Support to WCF Data Services for “$format=json” – Roger says Restlet works, but can be heavy. Recommends having WCF Data Services return JSON for perf & Android development ease.  But you need this library to make it work.  This is similar to plain old RESTful WCF, but using WCF Data Services (as a tool) makes it easier/faster to build the services in the first place.
  • LINQPad – 3rd Party Tool That Really Helps When Developing and Testing WCF Data Services

Of course, you can also build RESTful services with WCF that return JSON on your own.  It seems that’s what others are recommending too.

What Others Are Saying About Consuming WCF From Android

Bruce Kyle [pictured below] invites developers to Explore Data in Walkthrough of Windows Azure DataMarket in an 11/19/2010 post to the US ISV Evangelism blog:

My colleague Lynn Langit has put together a walkthrough of Windows Azure DataMarket. In the walkthrough, she shows you how to get to the data viewer so that you can take a look at the dataset you are interested in before you decide to work with it in your application.  You can work with datasets on the site itself or use other methods to explore the data.  She starts by using the recently released PowerPivot for Excel 2010.

Lynn’s posting is at First Look - Windows Azure Data Market.

About Windows Azure DataMarket

DataMarket is a service that provides a single consistent marketplace and delivery channel for high quality information as cloud services. Content partners who collect data can publish it on DataMarket to increase its discoverability and achieve global reach with high availability. Data from databases, image files, reports and real-time feeds is provided in a consistent manner through internet standards. Users can easily discover, explore, subscribe and consume data from both trusted public domains and from premium commercial providers.

One opportunity: ISVs can now monetize aggregated data from their many customers so that each can compare its own data to the trends. Data can be hosted on Windows Azure, by your hosting provider, or on premises. You can provide a preview, offer free data, or charge for data.

For Application Developers

Application developers can use data feeds to create content rich solutions that provide up-to-date relevant information in the right context for end users. Developers can use built-in support for consumption of data feeds from DataMarket within Visual Studio or from any Web development tool that supports HTTP.

For End Users

End users who need data for business analysis and decision making can conveniently consume it directly in Microsoft Office applications such as Microsoft Excel and Microsoft BI tools (PowerPivot, SQL Server Reporting Services). Users can gain new insights into business performance and processes by bringing together disparate datasets in innovative ways.

Sudhir Hasbe announced a DataMarket Webcast: Enabling ISVs to build innovative applications across industry verticals on 11/19/2010:

We have set up a webcast for all ISVs interested in leveraging DataMarket to build the next great app in the world.  Join us for this webcast, where we will share how you can leverage DataMarket to build innovative apps. We will also share a few scenarios across industry verticals that you can go build.

Date: December 1, 8 am-9am PST

Session Abstract: This webcast will provide an overview of DataMarket and the tools and resources available in DataMarket to help ISVs build new and innovative applications, reduce time to market and cost. DataMarket also enables ISVs to access data as a service.

ISVs across industry verticals can leverage datasets from partners like D&B, Lexis Nexis, Stats, Weather Central, US Governments, UN, EEA to build innovative applications. DataMarket includes data across different categories like demographic, environmental, weather, sports, location based services etc.

Registration link: Partner:


Windows Azure AppFabric: Access Control and Service Bus

Ryan Dunn (@dunnry) announced posting of a 00:40:47 Cloud Cover Episode 32 - [Windows Azure] AppFabric Caching Channel 9 video segment on 11/19/2010:


Join Ryan and Steve (@smarx) each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow
In this episode:  

  • Wade Wegner (@wadewegner) rejoins us to talk about Windows Azure AppFabric Caching
  • We learn how to get started with the AppFabric Caching service
  • We discuss the various levels of caching and discover where AppFabric Caching fits

Show Links:

Windows Azure Diagnostics Monitor
Windows Azure Programming Model Whitepaper
Windows Azure StorageClient CloudBlob.DownloadToFile issue and workaround
Breaking Change in Windows Azure Guest OS 1.8 and 2.0
Windows Azure AppFabric Caching interview with Karan Anand
Windows Azure AppFabric LABS
Windows Azure AppFabric SDK v2.0
Caching Survey
Caching Demo

Are both Steve and Wade standing in a trench?

Myles Jeffery (@mjthinkscape) reminds Azure developers that Data Transfers Between Azure & SharePoint Online Are Billable in an 11/19/2010 post:

Microsoft's policy for Azure is to charge for data transfer from Azure to the outside world, but not between the various Azure services when they are hosted in the same data centre.

Now we have built several business solutions that involve Azure talking to SharePoint Online. We deploy our solutions to the same data centre which happens to be in Dublin covering the EMEA region.

That got me curious as to whether we would be billed for the data transfer between Azure and SharePoint Online. I raised a question on the Azure forums but didn't receive a definitive answer. However, one suggestion was to raise the question through the Microsoft Online Services Customer Portal.

I did this, and after a few email exchanges I got an answer, though not the one that I wanted!

"I have just heard back from the Azure Business desk, they have informed me that you will still be charged for data transfers between Share Point and Azure. They are in the same physical data center but the servers do not naturally speak to each other; for data transfer they are considered to be in different locations even though they share a building."


Windows Azure Virtual Network, Connect, and CDN

No significant articles today.


Live Windows Azure Apps, APIs, Tools and Test Harnesses

Thomas Martinsen (@thomasmartinsen, AKA XAMLGeek) explained how to set a Start page in Azure Silverlight projects in this 11/19/2010 post:

I have just completed a very exciting Silverlight project hosted in Windows Azure. One challenge I had during development was setting the start page in the web role project.

During development this of course wasn’t an issue – I just right-clicked the page I wanted as the start page and selected “Set as Start Page”.

When I deployed to Azure, the start page wasn’t recognized. To set the start page in Azure you need to do one of two things: 1) implement a start page called “Default.aspx”, or 2) specify the start page in the web.config file:
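
For option 2, the relevant web.config fragment is the IIS defaultDocument element (the page name below is a placeholder for your own start page):

```xml
<configuration>
  <system.webServer>
    <defaultDocument>
      <files>
        <clear />
        <!-- Replace MyStartPage.aspx with your own start page -->
        <add value="MyStartPage.aspx" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```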


Setting the defaultDocument value in the web.config fixed my issue.

MSDevCom posted  Journey to the Cloud ISV Webinar Series: The Cloud Services Opportunity on 11/18/2010:


According to an email to Microsoft Partners:

The cloud offers tremendous and exciting new opportunities. We've created a webinar that discusses the cloud services opportunity and why it is such an excellent opportunity for you to grow revenue, expand your reach and gain better flexibility, control and cost savings.

Plus you'll learn how Windows Azure lets you maximize your potential and address customer needs with its flexible scalability, secure environment and 99.9% uptime.

IBS Services answered What's in store for Misys' Bankfusion core banking system? on 11/17/2010:

Misys is touting its latest core banking software development, Bankfusion, as next-generation, built on new technology (Java) from the ground-up rather than being a re-write of a legacy mainframe-based solution. The system stems from a small Irish company, Trapedza, which was acquired by Misys in 2006 (IBS, October 2006, Misys acquires Trapedza). The company and its offering, Bankfusion, had been around for five years or so by then but failed to gain any clients. The original plans were to incorporate Bankfusion into Misys' old-timer, Bankmaster, but a year later Bankfusion emerged as a standalone and, furthermore, central system within Misys' core banking product range (IBS, April 2007, Misys unveils product strategy). Now the vendor is taking a step further and is bringing the system onto Microsoft's Windows Azure cloud, in line with its new strategy (IBS, September 2010, Misys predicts cloud and hot-housing future for delivering next-generation solutions).

Dermot Briody, Misys' sales director for Europe, claims this to be the only offering of this kind in the world. He emphasises that the offering is 'not just a demo, but is an actual product that is capable and developed'. Microsoft and Misys are now devising detailed plans on the geographies and types of financial institutions to focus on initially, as well as target numbers and figures. He cites start-ups in Western Europe as one example, as this category requires flexibility of cost of ownership, scalability from the volume perspective and fast time-to-market. However, he insists that the ultimate vision is 'any bank in any geography'. No figures on potential savings offered by this model versus the standard deployment have been drawn up. The first taker is also yet to be signed.

Misys and Microsoft have been working together on the development of mission critical applications for over a year, but the Bankfusion/Azure tie-up has progressed 'in a matter of months', says Karen Topping Cone, GM, worldwide financial services at Microsoft. 'The speed with which this initiative has developed is symbolic of what the cloud brings – agility, fast time-to-market and flexibility. Banks today need to move much faster than they ever have,' she states. 'Working with partners like Misys means that we can offer banks not just an environment but also pre-existing solutions for it, like Bankfusion. We define this whole concept as "IT as a Service".'

Furthermore, the introduction of Microsoft's Windows Azure private cloud in the next couple of years 'will break the traditional data centre configuration', believes Cone, as it will offer 'all the benefits of the cloud on premises, with the appliance, which is like a data centre in a box – literally a container of servers, technology and so on – and it comes with Bankfusion'. Microsoft foresees 'a significant uptake of the offering as the private cloud comes to market', states Joseph Pagano, MD, banking and capital markets at Microsoft. 'This set-up offers all the benefits of the Windows Azure cloud, but will run behind the firewall of the bank and the data can be placed where it needs to be placed for privacy, security and regulatory reasons.'

Cone declines to comment on Microsoft's potential cloud tie-ups with Misys' competitors, saying that 'Microsoft hasn't announced any similar initiatives with other core banking providers'.


Visual Studio LightSwitch

Darryl K. Taft (@darrylktaft) reported “With its new Visual Studio 2010 Feature Pack 2, Microsoft adds three new capabilities for developers: Silverlight app testing, test playback on Firefox and a coded UI test editor” in a deck for his Microsoft Delivers Visual Studio 2010 Feature Pack 2 article of 11/17/2010 for

Microsoft has released Visual Studio 2010 Feature Pack 2, which provides new testing functionality to the Visual Studio platform for applications built with Microsoft's Silverlight technology and more.

Brian Harry, a Microsoft technical fellow and product unit manager for Microsoft Team Foundation Server (TFS), said there are three major new features in Feature Pack 2: testing Silverlight applications, recorded test playback on Firefox and a coded UI test editor.

And in a Nov. 16 blog post, Amit Chatterjee, managing director of the Microsoft India Development Center (MSIDC) and general manager of the Visual Studio Test and Lab Management business at MSIDC, said, "Testing of Silverlight 4 applications has now become much easier with Visual Studio 2010 Feature Pack 2. From Microsoft Test Manager, you can now capture action recording of your manual tests of Silverlight 4  applications and fast forward it in future iterations of the test case. When a developer is creating a Silverlight 4 application, he needs to ensure that it is test-ready."

Meanwhile, in a Nov. 8 post on Visual Studio 2010 Feature Pack 2, Harry said:

"Now you can test your Silverlight apps as well as your other desktop applications. We've added support both for coded UI tests and for record and playback in Microsoft Test Runner (part of Microsoft Test Professional). You are able to record the execution of your Silverlight app and gather rich bug data (including action logs, video, environment info and more). Unfortunately, you can't get Intellitrace logs at this time. We've tested it on a range of Silverlight apps, including ones with custom controls and apps generated by LightSwitch. We are waiting on a few fixes for LightSwitch issues we discovered—they should be available in the next LightSwitch pre-release. [Emphasis added.]

"Perhaps the biggest limitation is that, for now, our Silverlight testing support only works for Silverlight 4.0 applications hosted in IE. In the future, we will add support for desktop Silverlight applications too. There are a few other restrictions that you can read about in the docs that accompany the release but all-in-all, it's a big step forward for testing Silverlight apps."

With the new recorded test playback support on Firefox, cross-browser testing with Mozilla Firefox is now enabled on Microsoft Test Manager and Visual Studio 2010. Any existing action recording or coded UI test can now be played back on the Mozilla Firefox browser, Chatterjee said.

With the existing Visual Studio testing tools, users can record and play back Web applications in Internet Explorer. Visual Studio 2010 Feature Pack 2 enables users to play back those recordings on Firefox as well.

"Among other things, this will enable you to create a set of tests once and use them to regression test on both IE and Firefox," Harry said. "Now you can make sure your changes don't break apps across multiple browsers."
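The record-once, replay-on-many-browsers idea behind this feature can be sketched independently of Microsoft Test Manager. The driver class and action strings below are invented stand-ins for illustration only, not the coded UI test or Test Manager APIs:

```python
# Minimal sketch of cross-browser playback: one recorded action list is
# replayed against multiple browser "drivers" sharing the same interface.
# StubDriver is hypothetical; real playback would drive IE or Firefox.
class StubDriver:
    def __init__(self, browser):
        self.browser = browser
        self.log = []  # actions actually performed in this browser

    def perform(self, action):
        self.log.append(f"{self.browser}:{action}")

# Record once...
recording = ["open /login", "type user", "click submit"]

# ...play back on each browser, collecting how many steps each executed.
results = {}
for browser in ("IE", "Firefox"):
    driver = StubDriver(browser)
    for action in recording:
        driver.perform(action)
    results[browser] = len(driver.log)

print(results)  # {'IE': 3, 'Firefox': 3}
```

The point of the abstraction is that the recording never mentions a browser; only the driver does, which is what makes a single test usable for regression testing on both IE and Firefox.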

The third new feature in the feature pack, the Coded UI Test Editor, simplifies the process of managing or modifying UI test files. The UI test files store the information about the controls interacted with during a test and the actions performed on them.

"The Coded UI Test Editor now makes the overall maintenance story for coded UI test simpler and easier," Chatterjee said. "When you double-click on a UI test file in a Test Project, it will launch the graphical editor. You can now perform most of your maintenance activities from the Coded UI Test Editor [renaming a method or control, updating the properties of a control or action]."

<Return to section navigation list> 

Windows Azure Infrastructure

Eric Knorr reported “A curious manifesto on Microsoft's site offers a powerful argument for cloud computing dominance” in a preface to his Microsoft loves the cloud -- and wants to rule it post of 11/19/2010 to the InfoWorld Tech Watch blog:

Microsoft loves the cloud -- and wants to rule it

If there's any doubt in your mind whether Microsoft is serious about cloud computing, have a look at this extraordinary document written by Microsoft's Corporate Strategy Group: The Economics of the Cloud.

The authors, Rolf Harms and Michael Yamartino, begin their summary with this dramatic statement: "Information technology is undergoing a seismic shift towards the cloud, a disruption we believe is as game-changing as the transition from mainframes to client/server."

Many people talk about the cloud as a return to the mainframe model, since the basic idea behind cloud computing is to centralize resources for economies of scale. Yet the authors go out of their way to dismiss the mainframe comparison, noting that the economies of scale for the cloud are greater than those for mainframes -- and that the cloud has more "modularity and agility" than client/server.

The heart of The Economics of the Cloud is a thorough examination of the economies of scale enjoyed by public cloud services, from the cost of power to the variability of demand. Then comes a point-by-point comparison between public cloud services and the so-called private cloud. The authors' conclusion is that pound for pound, the public cloud beats the private cloud's pants off.

And make no mistake, over the past few years, Microsoft has invested heavily in building out cloud infrastructure, not to mention reallocating development resources. If Microsoft has become absolutely convinced that the cloud is where the major action is, can there be any doubt that it intends to dominate cloud computing the same way it dominated the desktop?

Soon, we'll have a look at the beta of Microsoft's cloud-based Office 365, and let you know how that's coming along.

Microsoft’s white paper is receiving more than its share of press.

John Brodkin reported “HPC Server customers to run workloads on Azure says Microsoft” as a preface to his Microsoft turns Windows Azure into cloud supercomputer article of 11/19/2010 for

Windows HPC Server customers will soon be able to run high-performance computing workloads on Windows Azure, Microsoft is announcing at this week's SC10 supercomputing conference in New Orleans.

The first service pack for Windows HPC Server 2008 R2, due before the end of this year, will let customers connect their on-premises high performance computing systems to Windows Azure, giving them "on-demand scale and capacity for high performance computing applications," Microsoft said.

Microsoft is also providing an Azure resource for scientists that will not require an installation of Windows HPC Server. The service makes the National Center for Biotechnology Information's BLAST technology, which lets scientists search the human genome, available on Azure. At SC10, Microsoft said it will demonstrate the NCBI BLAST application on Windows Azure performing 100 billion comparisons of protein sequences.

The new Windows HPC Service Pack's integration with Azure, meanwhile, gives Microsoft what it believes is a key differentiator between itself and the likes of Amazon's Elastic Compute Cloud: the ability to run supercomputing workloads across both in-house software and over the Internet on the cloud service.

"There is no on-premise Amazon, there is no on-premise Google computing resource," says Bill Hilf, general manager of Microsoft's technical computing group. "It's one of the big advantages we have."

HPC software is "really just a job scheduler that knows how to break up work and distribute it across a bunch of other servers," Hilf says. Integrating Windows HPC Server with Azure lets a customer's data center "talk to the Windows Azure system," and spread work across the two, he says. This makes sense for workloads that have large, temporary spikes in calculations.

Essentially, Microsoft is taking the concept of "cloud-bursting," the ability to access cloud-based computing resources in an automated way when applications need extra processing power, and applying it to the HPC world.
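The bursting behavior described above can be modeled as a tiny scheduler that fills local cores first and spills overflow to on-demand cloud capacity. This is an illustrative sketch under our own assumptions; the class and method names are hypothetical and are not the Windows HPC Server job scheduler's API:

```python
from dataclasses import dataclass, field

@dataclass
class BurstScheduler:
    """Toy cloud-bursting model: jobs occupy local cores until the
    on-premises cluster is full, then overflow 'bursts' to the cloud."""
    local_cores: int
    jobs_local: list = field(default_factory=list)
    jobs_cloud: list = field(default_factory=list)

    def submit(self, job):
        # Prefer on-premises capacity; burst only when it is exhausted.
        if len(self.jobs_local) < self.local_cores:
            self.jobs_local.append(job)
            return "local"
        self.jobs_cloud.append(job)
        return "cloud"

sched = BurstScheduler(local_cores=2)
placements = [sched.submit(f"job{i}") for i in range(5)]
print(placements)  # ['local', 'local', 'cloud', 'cloud', 'cloud']
```

A temporary spike of five jobs against two local cores sends the excess three to the cloud, which is exactly the "large, temporary spikes in calculations" case the article describes.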

"This burst demand has been at the top of our HPC customers' requirements," Hilf says.

As for the NCBI BLAST announcement, Hilf notes that the code is in the public domain, but says running BLAST calculations on the Azure service will give scientists the ability to run gigantic database queries without investing in expensive hardware. In addition to porting BLAST to Azure, Microsoft built some Web-based user interfaces to make running calculations a bit easier, he says.

The cost of running BLAST on Azure will be the same as running any Azure workload, the price of which goes up as customers use more computing power. The 100 billion comparison BLAST workload, for example, was a query that took place on 4,000 cores over about six days for a price of less than $18,000.
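A quick back-of-envelope check on those quoted figures shows what they imply per core-hour. The inputs come from the article; the derived rate is only an implication of those numbers, not a published Azure price:

```python
# Quoted BLAST run: 4,000 cores for about six days at under $18,000.
cores = 4_000
days = 6
cost_usd = 18_000

core_hours = cores * days * 24   # total compute consumed
rate = cost_usd / core_hours     # implied cost per core-hour

print(core_hours, rate)  # 576000 core-hours at roughly $0.031 each
```

That is, the workload consumed 576,000 core-hours, so the quoted price works out to about three cents per core-hour, which is the kind of number a lab would compare against buying and operating its own cluster.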

While BLAST is the first HPC application Microsoft has offered on top of the Azure service, the vendor says more will come in the future. Even without such specific offerings, Microsoft says some customers have already begun running their own HPC workloads on the Azure cloud.

Simon Wardley (@swardley) discussed the “*aaS misconception” in his All in a word post of 11/19/2010:

In my previous post, I provided a more fully fledged version of the lifecycle curve that I use to discuss how activities change. I've spoken about this for many years but I thought I'd spend a little time focusing on a few nuances [Link added].

Today, I'll talk about the *aaS misconception - a pet hate of mine. The figure below shows the evolution of infrastructure through different stages.

Figure 1 - Lifecycle (click on image for higher resolution)

I'll note that service bureaus started back in the 1960s and we have a rich history of hosting companies which dates well before the current "cloud" phenomenon. This causes a great deal of confusion over who is and who isn't providing cloud.

The problem is the use of the *aaS terms such as Infrastructure as a Service. Infrastructure clouds aren't just about Infrastructure as a Service, they're about Infrastructure as a Utility Service.

Much of the confusion has been caused by the great renaming of utility computing to cloud, which is why I'm fairly consistent on the need to return to Parkhill's view of the world (The Challenge of the Computer Utility, 1966).

Cloud exists because infrastructure has become ubiquitous and well defined enough to support the volume operations needed for provision of a commodity through utility services. The commodity part of the equation is vital to understanding what is happening and it provides the distinction between a VDC (virtual data centre) and cloud environments.

If you're building an infrastructure cloud (whether public or private) then I'll assume you've got multi-tenancy, APIs for creating instances, utility billing and you are probably using some form of virtualisation. Now, if this is the case then you're part of the way there, so go check out your data centre.

IF :-

  • your data centre is full of racks or containers each with volumes of highly commoditised servers
  • you've stripped out almost all physical redundancy because frankly it's too expensive and only exists because of legacy architectural principles due to the high MTTR for replacement of equipment
  • you're working on the principle of volume operations and provision of standardised "good enough" components with defined sizes of virtual servers
  • the environment is heavily automated
  • you're working hard to drive even greater standardisation and cost efficiencies
  • you don't know where applications are running in your data centre and you don't care.
  • you don't care if a single server dies

... then you're treating infrastructure like a commodity and you're running a cloud.

The economies of scale you can make with your cloud will vary according to size, this is something you've come to accept. But when dealing with scale you should be looking at :-

  • operating not on the basis of servers but of racks or containers i.e. when enough of a rack is dead you pull it out and replace it with a new one
  • your TCO (incl hardware/software/people/power/building ...) for providing a standard virtual server is probably somewhere between $200 - $400 per annum and you're trying to make it less.
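Wardley's $200-$400 per annum TCO rule of thumb is easier to compare against utility pricing when converted to an hourly figure. The range comes from the post; the conversion is ours:

```python
# Convert the quoted annual TCO band for one standard virtual server
# ($200-$400 per annum) into a per-VM-hour figure.
hours_per_year = 365 * 24  # 8,760 hours

low, high = 200, 400
hourly = [round(tco / hours_per_year, 4) for tco in (low, high)]

print(hours_per_year, hourly)  # 8760 [0.0228, 0.0457]
```

At roughly two to five cents per VM-hour, the band sits in the same territory as early public-cloud utility rates, which is presumably the point: if your internal cost is much higher, the commodity providers will undercut you.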

Obviously, you might make compromises for reasons of short term educational barriers (i.e. to encourage adoption). Examples include: you might want the ability to know where an application is running or to move an application from one server to another or you might even have a highly resilient section to cope with many legacy systems that have developed with old architectural principles such as Scale-up and N+1. Whilst these are valuable short term measures and there will be many niche markets carved out based upon such capabilities, they incur costs and ultimately aren't needed.

Cost and variability are what you want to drive out of the system ... that's the whole point about a utility.

Anyway, rant over until next week.

Michael J. Miller posted More on Supercomputing: 10 Teraflops Coming? on 11/18/2010 to his blog:

While I wasn't able to attend the annual Supercomputing conference (known as SC10), I've been quite impressed by many of the exciting announcements from the show. I wrote about some of these yesterday, but there have been a number of other announcements in high-performance computing that I find quite exciting, ranging from new cloud-based HPC solutions to plans for 10 teraflop processors.

Microsoft announced that its Windows HPC Server has now surpassed a petaflop/sec (a quadrillion floating point operations per second), as part of the Tokyo Institute of Technology's Tsubame 2.0 supercomputer, ranked number 4 on the Top500 list. (Most of the systems at the top of the list run Linux or Unix.)

Microsoft also said it was releasing NCBI Basic Local Alignment Search Tool (BLAST) for Windows Azure, a tool designed to help scientists do biological research by sifting through large databases. This is the first specific high-performance computing (HPC) application the company has offered for Azure. More generally, Microsoft said that by the end of the year it will release Service Pack 1 for Windows HPC Server 2008 R2, which will let customers connect their on-premises high-performance computing systems to Windows Azure, its cloud-computing platform. Coupled with Amazon's announcement of GPU clusters, this should point the way for organizations using on-demand scale to augment internal capacity for high-performance computing applications.

While China has overtaken the U.S. at the top of the Top500 list, there are other projects under development, including several in the U.S. The Lawrence Livermore National Laboratory is working on a new system called Sequoia, aimed at delivering 20 petaflops of performance, based on IBM's BlueGene technology and scheduled for delivery next year and deployment in 2012. The Sequoia system is designed primarily for weapons science. Meanwhile, the National Center for Computational Sciences at Oak Ridge National Laboratory, which has the Jaguar system now in second place on the Top500 list, is apparently working on its own 20-petaflop system for 2012. Jaguar, which is based on 6-core AMD Opteron chips, has mostly been used to explore solutions to climate change and the development of new energy technologies.

Nvidia today made a big point of saying that the "greenest" computers on the Top500 run its GPUs. But more interestingly, Nvidia's chief scientist William Dally seems to have used his keynote to discuss an upcoming project called Echelon, designed to produce a thousand-core graphics chip that can handle the equivalent of 10 teraflops on a single chip, based in part on improvements in process technology and in part on a new memory scheme. This is part of a DARPA competition called Ubiquitous High Performance Computing that aims to produce "exascale" computing by 2018. Nvidia is said to be competing with Intel, Sandia National Labs, and MIT to produce such systems.
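Taking the reported Echelon figures at face value, the implied per-core throughput falls out of simple division. The figures are the article's; the arithmetic is just a sanity check:

```python
# Implied per-core throughput of the reported Echelon target:
# 10 teraflops spread across roughly a thousand cores on one chip.
chip_flops = 10e12  # 10 TFLOPS
cores = 1000

per_core = chip_flops / cores
print(per_core / 1e9)  # 10.0, i.e. ~10 GFLOPS per core
```

Ten double-digit-GFLOPS cores per chip-thousand is aggressive but not absurd for a GPU-style design, which is presumably why Dally leans on process technology and a new memory scheme to get there.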

I'm sure other countries and other groups are also working on different approaches. Any way you look at it, we appear to be heading into a period of big changes in how we look at big scale computing, with more GPU-style computing, more cores per chip, and more cloud features.

Dennis Pombriant added Further Illumination on 11/18/2010 to his earlier Microsoft Targets Back (and Front) Office article of 11/17/2010:

I don’t like ambiguity and there was some in yesterday’s post so let’s get to it. [Link added (see below).] Yesterday I wrote:

Microsoft is confidently offering replacement systems that have been the beneficiaries of significant investment over the last several years.  These systems also run on cloud infrastructure, though cloud does not necessarily mean multitenant.

Microsoft and others — with the notable exceptions of companies like NetSuite and — have decided to kick the can down the road with regard to multitenancy.  While multitenancy might have advantages, it is not advantageous enough yet to push the issue.  As a result, it may have to wait 10 more years — until the next wholesale replacement cycle — until multitenancy becomes more of a standard.

The “can” in this case is a metaphor referring to how vendors address the issue of single tenant vs. multi tenant cloud-based systems, and I thought the second paragraph did an acceptable job of illuminating the metaphor.

Not that long ago, cloud and multitenant went together, but a revolution in the last couple of years by major software vendors, including Oracle, Microsoft and Sage among others, has changed the complexion of the situation.  Many vendors have adopted a strategy that leverages a single code base that can be deployed either as single or multitenant.  Moreover, the single tenant versions can still be housed in a common, cloud-based datacenter to deliver cloud services that are almost indistinguishable to the user.  But no conventional vendors are pushing multitenancy as the wave of the future.  They are letting the customer decide.

It’s still true that you need to work with your vendor to establish the right balance of cloud services to go with your cloud infrastructure.  For instance, do you want to manage your system from afar or do you want your vendor to provide management services including configuration, backup and upgrades?  The choices are numerous.  So when I spoke of kicking the can down the road, it was about the choice of deployment—as in letting the customer decide the deployment approach—rather than saying that any vendor did not possess the ability to deploy in multitenant mode.

Clear, right?

The Channel 9 Coffeehouse Forum's Another Azure Confusion thread, started on 11/18/2010, illustrates the Windows Azure Team's failure to teach .NET developers the details of their Visual Studio MSDN subscription benefits. It also demonstrates some developers' understanding of how the Windows Azure Platform differs from Google App Engine [for Business].

Dennis Pombriant asserted Microsoft Targets Back (and Front) Office in an 11/17/2010 post to the Enterprise Irregulars blog:

I spent part of last week in the Seattle area at a small meeting Microsoft organized for analysts.  The purpose was to brief us on product positioning and plans and much of the meeting was covered by non-disclosure.  Consequently, I am at a loss for how much I can divulge in this setting, at least until things are announced.

Microsoft appears to be executing on a classic strategy of gaining footholds in key areas and expanding on them leveraging its low cost cloud infrastructure to tip the balance in favor of replacing ten-year-old ERP systems.

Much the same can be said of CRM.  Where first generation CRM systems are nearing end of life, Microsoft is confidently offering replacement systems that have been the beneficiaries of significant investment over the last several years.  These systems also run on cloud infrastructure, though cloud does not necessarily mean multi-tenant.

Microsoft and others—with the notable exceptions of companies like NetSuite and—have decided to kick the can down the road with regard to multi-tenancy.  While multi-tenancy might have advantages, it is not advantageous enough yet to push the issue.  As a result, it may have to wait ten more years—until the next wholesale replacement cycle—until multi-tenancy becomes more of a standard.

So add up three key factors — the market cycle, cloud computing and the market reach enabled by more than sixty billion dollars in revenues and the marketing budgets that implies — and a picture emerges of an imperative for change to lower cost systems that can easily drive the recovery in IT.

Of course business products aren’t sold in shrink-wrap at retail and success will depend on the performance of the partner ecosystem.  But the partners have been a solid part of Microsoft’s success all along.  Nonetheless, some attention to elevating their games will be essential if Microsoft expects to reach a new plateau in enterprise computing.  I think they know that.

No significant articles today.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA) and Hyper-V Cloud

Ellen Rubin asserted “Public clouds have all the benefits that have been written about extensively” as a preface to her Why Public Clouds Are Looking Hot (Again) post of 11/18/2010, which favors hybrid clouds:

Seems like it was only yesterday when industry pundits were backing away from public clouds in favor of the safer, more big-vendor-compliant “private clouds.” After Amazon shook things up with its new paradigm for computing and storage clouds in 2007, and started to gain traction (along with Rackspace and other cloud providers) in 2008 and 2009, 2010 has so far been in many ways a retreat from the forces of innovation and the emergence of much fear, uncertainty and doubt about the perils of the public cloud. But lately, I’m seeing the pendulum start to swing back in favor of public clouds, albeit with a twist.

Not surprisingly, private clouds look more familiar and comfortable to IT managers, big vendors and consulting/SI/service providers. They involve purchases of hardware, software and services through traditional enterprise procurement processes. They allow resources to stay behind the firewall under enterprise control. They fall within the existing legal, compliance and audit structures. With the addition of many flavors of “cloud in a box” offerings, they start to address the main issues that drove developers to the public clouds to begin with: self-service, provisioning on demand and the ability to get access to more scalable resources without requiring large upfront cap ex.

Public clouds have all the benefits that have been written about extensively (horizontal scaling, true on-demand capabilities, pure op ex, etc.). But for much of this year, the debate in the industry has been all about how worried everyone is about using public clouds (security, control, etc. etc.), and how uncertain they are about whether IaaS will really take off.


But there are some recent indications that the public cloud is hot again. A great study by Appirio speaks to growing industry comfort with public clouds and the likelihood that these will have a dominant place in IT infrastructure. At the Up2010 cloud event this week in San Francisco, Doug Hauger, GM of Microsoft’s Azure cloud, referred to this study extensively to make the point that public clouds are gaining credibility. James Staten of Forrester recently blogged about his predictions for 2011, including: “You will build a private cloud and it will fail.” His point is not to discredit private clouds as an approach but to remind companies beginning this process how incredibly hard it is to build a large, scalable, on-demand, multi-tenant cloud – even just for internal users.

Staten’s predictions make the case for how the cloud market has evolved in 2010, as enterprises planned their cloud strategies, implemented their pilots and defined their cloud architectures. Rather than seeing public clouds as “the other alternative” to private ones, enterprises and vendors have begun to view these as compatible strategies in a more sophisticated hybrid cloud model.

We’re huge fans of the hybrid model at CloudSwitch, and it’s great to see customers embracing public clouds as extensions of their private ones (as well as of their traditional virtualized data centers). The critical point about public clouds is that they allow testing, innovation and quick success or failure to happen in a low-cost way. This learning is imperative for the hybrid model, and public clouds are here now, today, working well and allowing enterprises to gain experience and log cloud mileage as they build out the rest of their cloud infrastructures. With CloudSwitch, these companies are now able to view the public cloud as a safe and seamless extension of their internal environment, in effect turning the public cloud into a “private” cloud as well.

<Return to section navigation list> 

Cloud Security and Governance

Kurt Mackie wrote Microsoft Document Outlines Its Cloud Security Infrastructure on 11/15/2010 for

Microsoft today announced a new white paper that explains the organizational and standards-based underpinnings of its cloud security efforts.

The paper, "Information Security Management System for Microsoft Cloud Infrastructure" (PDF), describes the standards Microsoft follows to address current and evolving cloud security threats. It also depicts the internal structures within Microsoft that handle broad cloud security and risk management issues.

This latest white paper is not a practical guide, but instead outlines some general principles. Its release follows two other Microsoft white paper publications designed to provide greater transparency about the company's cloud security efforts. Those earlier releases include "Securing Microsoft's Cloud Infrastructure" and "Microsoft Compliance Framework for Online Services."

The main notion from the newly released cloud infrastructure white paper is that Microsoft has a group within its Global Foundation Services organization that digs deep within standards, principally ISO/IEC 27001:2005. This ISO/IEC international standard describes security techniques and requirements for information security management systems. Microsoft uses ISO/IEC 27001:2005 as part of its Online Services Security and Compliance (OSSC) group's Information Security Management System (ISMS). [Link added.]

The OSSC's ISMS has three main programs, which cover information security management, risk management and information security policy. The group also coordinates various certifications, including SAS 70, Sarbanes-Oxley, the PCI Data Security Standard and the Federal Information Security Management Act. The OSSC's ISMS is validated by third parties, which aren't named in the white paper.

The new infrastructure white paper attempts to describe Microsoft's "recipe" for cloud computing, according to Mark Estberg, senior director of risk and compliance for Microsoft Global Foundation Services, in a blog post. Estberg is scheduled to speak with John Howie, senior director of Microsoft's Online Services security and compliance team, on Tuesday at the Cloud Security Alliance Congress in Orlando, Fla., where they will discuss Microsoft's best practices for the cloud.

The white paper admits that organizations may be deterred from adopting cloud computing by privacy and security concerns. It also states that cloud business models and regulations are generally new and in flux. But it hopes that ISMS will become an overall strategy for both Microsoft's customers and partners to adopt.

Another attempt to explain approaches used for cloud security is the 76-page white paper from the Cloud Security Alliance, titled "Security Guidance for Critical Areas of Focus in Cloud Computing V2.1" (PDF). If that weren't enough, ThinkStrategies Inc., a consulting company focusing on the cloud computing and software-as-a-service industry, has issued a position paper today on why the U.S.A. PATRIOT Act, which prescribes limitations on privacy and civil liberty protections guaranteed by the U.S. Constitution, should not constrain companies from using U.S. cloud-based customer relationship management systems.

Assuring cloud security to organizations has been an uphill task. A March survey by the Information Systems Audit and Control Association found that half of 1,800 U.S. IT professionals polled felt that security concerns outweighed the potential benefits of cloud computing.

I’m surprised that Microsoft’s updated security white paper didn’t get wider press coverage.

Neil MacDonald asserted Cloud Computing Will be More Secure in an 11/9/2010 post to his Gartner blog:

I presented a session exploring this provocative point of view at Gartner’s US Fall Symposium titled “Why Cloud Computing Will be More Secure Than What You Have Today”. This Wednesday afternoon presentation was a part of Gartner’s “Maverick Track” where presentations that challenge conventional wisdom are provided for clients. If you attended Symposium and weren’t able to make the session, all of the presentations are available online as well as (for the first time) videos of every session.

image Interestingly, on Thursday of that week at Symposium we had the chance to ask Steve Ballmer this question on stage during our mastermind interview session. Essentially, we asked him “Cloud Security – Oxymoron or Achievable?”. His answer – Achievable. You can see the longer version on the link enclosed. Essentially, his point was there was too much money at stake and that the market potential would spur innovative approaches to solving this problem.

I agree.

Here are two recent examples. Trend Micro has introduced a new technology called SecureCloud based on technology it had acquired from Identum. Basically, think of this as full drive encryption for the Cloud. By using an agent (kernel driver) loaded into each VM, all traffic written to and from the Cloud provider’s storage is automatically encrypted. This keeps the Cloud provider’s staff from directly seeing your data, but is transparent to your applications running at higher levels within the VM. Of course encryption alone means nothing without control of the keys. Here’s the really interesting part of their innovation – in phase I your keys are stored in Trend Micro’s data centers. In phase II, the keys can be stored in your own data center. I’ve blogged about this before: if the Cloud provider doesn’t have your keys, they don’t have your data.
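The trust model behind "if the Cloud provider doesn't have your keys, they don't have your data" can be demonstrated with a deliberately toy cipher. To be clear, this is NOT SecureCloud and NOT secure cryptography; it only illustrates who can read what when the key stays on-premises:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """TOY cipher for illustration only (do not use for real data):
    XORs data against a SHA-256-derived keystream. The same call
    encrypts and decrypts, since XOR is its own inverse."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

customer_key = b"held on-premises, never sent to the provider"
plaintext = b"patient records"

# The provider's storage only ever sees ciphertext...
stored_in_cloud = keystream_xor(customer_key, plaintext)
# ...and only the key holder can turn it back into data.
recovered = keystream_xor(customer_key, stored_in_cloud)
print(recovered == plaintext)  # True
```

Trend Micro's phase I vs. phase II distinction is entirely about where `customer_key` lives: in the vendor's data center (phase I) or in yours (phase II), with the latter keeping the provider cryptographically blind to your data.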

Microsoft is trialing a technology called the “Windows Azure Platform Appliance” (WAPA) which allows larger enterprises and service providers to become a part of the Microsoft Azure Cloud fabric, while maintaining compute and storage locally in an “appliance” (don’t let the name throw you, these are not toaster-sized appliances – think Winnebago! – with roughly 1,000 CPUs in the current version). My colleague Tom Bittman and I explore WAPA in detail in this recent research note for clients along with recommendations for when it should be considered. WAPA will help enterprises to address security concerns where data needs to be held locally for security and/or regulatory concerns. Microsoft is just an early example. There will be other cloud providers that offer a local appliance option over time.

There are many more examples.

The point is that innovation is alive and well and that most of the concerns enterprises have about the security of Cloud computing will be addressed over the next decade — many of them within the next few years – just as happened with the adoption of the Internet starting in 1994.

Neil MacDonald is a vice president, distinguished analyst and Gartner Fellow in Gartner Research. Mr. MacDonald is a member of Gartner's information security and privacy research team, focusing on operating system and application-level security strategies. Specific research areas include Windows security…

<Return to section navigation list> 

Cloud Computing Events

The HPC in the Cloud blog reported OpenStack Plans Next Two Cloud Platform Releases at First Public Design Summit, held 11/9 through 11/12/2010:

OpenStack, an open source cloud project with broad developer and commercial support, completed its first public Design Summit last week, which attracted more than 250 people from 90 companies and 14 countries to plan the next two releases, code-named ‘Bexar’ and ‘Cactus.’ Taking place at the Weston Centre in San Antonio, Texas, the four-day event was hosted by Rackspace Hosting, a founding member of the open source project.

The OpenStack Design Summit featured two separate tracks, one consisting of developer-led sessions to plan the next two code releases, and one for interested users and the partner ecosystem to discuss deployment and commercial opportunities. The Summit also featured an 'InstallFest,' where attendees were able to test and document the installation process on a live, on-site environment provided by Dell and powered by the company's PowerEdge C server line.

“From development, to documentation and deployment, last week’s OpenStack Design Summit enabled the OpenStack community to come together to learn and make the key decisions for the next two code releases,” said Jim Curry, chief stacker and general manager, OpenStack. “The themes for the week were how to execute on enterprise and service provider deployments, and the immense opportunity for the commercial ecosystem.”

Before the Summit broke into the technical and business tracks, several keynote speakers recapped the progress and vision for the OpenStack community, including Chris C. Kemp, CTO for IT at NASA; Jesse Andrews, co-founder of Anso Labs; Joe Tobolski, senior director-research, data & platforms R&D group at Accenture; and Mark Interrante, vice president of product at Rackspace. Additional speakers in the business track included Christian Reilly, manager of global systems engineering at Bechtel; David Lemphers, director, cloud computing and SaaS at PricewaterhouseCoopers; Andrew Shafer, vice president of engineering at Cloudscaling; and Alex Polvi, CEO and founder of Cloudkick.

Dozens of developers contributed to the first ‘Austin’ code release in October and proposed features for the next ‘Bexar’ release in Q1 2011, which were reviewed and mapped out at the Design Summit. Awards were given to developers and documentation writers who made significant contributions to the first release, including Vish Ishaya of Anso Labs, Jay Pipes of Rackspace and Alex Polvi of Cloudkick for the developer awards, and Stephen Milton of ISO Media, Anthony Young of Anso Labs and David Pravec for documentation.

About OpenStack

OpenStack is a large-scale open source cloud project and community established to drive industry standards, end cloud lock-in and speed the adoption of cloud technologies by service providers and enterprises. The project currently includes OpenStack Object Storage, a fully distributed object store, and OpenStack Compute, a scalable compute provisioning engine. OpenStack was founded by Rackspace® Hosting through its wholly owned subsidiary, OpenStack, LLC, and has the support of more than 35 technology industry leaders. For more information and to join the community, visit


No significant articles today.

Sift Media announced on 11/18/2010 the agenda and private/public-sector sessions for its Business Cloud Summit 2010, to be held 11/30/2010 at the Novotel London West hotel, Hammersmith, London, UK:

Private Sector:

Public Sector:

Bill Zack posted Ignition Showcase 11.15.10 on 11/15/2010, as you might expect, about the FireStarter scheduled for 12/2/2010 at the Microsoft campus and on the Internet:



Keynote starts December 2, 2010 at 9:00 Pacific time
  • Hear what’s coming next from Microsoft’s Scott Guthrie
  • Training, labs & swag
  • Online or in-person at Microsoft HQ, December 2nd 2010
  • It’s just like an extra day of PDC, dedicated to Silverlight

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Tim Anderson (@timanderson) summarized comments about his Java crisis and what it means for developers post of 11/13/2010 in a What you are saying about the Java crisis article of 11/19/2010:

A week or so ago I posted about the Java crisis and what it means for developers. The post attracted attention both here and later on The Guardian web site where it appeared as a technology blog. It was also picked up by Reddit prompting a discussion with over 500 posts.

So what are you saying? User LepoldVonRanke takes a pragmatic view:

I’d much rather have Java given a purpose and streamlined from a central authoritative body with a vision, than a community-run egg-laying, wool-growing, milk-giving super cow pig-sheep, that runs into ten directions at the same time, and therefore does not go anywhere. The Java ship needs a captain. Sun never got a good shot at it. There was always someone trying to wrestle control over Java away. With the Oracle bully as Uberfather, maybe Java has a place to go.

which echoes my suggestion that Java might technically be better off under more dictatorial control, unpalatable though that may be. User 9ren is sceptical:

Theoretically, the article is quite right that Java could advance faster under Oracle. It would be more proprietary, and of course more focussed on the kinds of business applications that bring in revenue for Oracle. It would be in Oracle’s interest; and the profit motive might even be a better spur than Sun had.

But – in practice – can they actual execute the engineering challenges?

Although Oracle has acquired many great software engineers (eg. from Sun, BEA Systems, many others), do they retain them? Does their organizational structure support them? And is Oracle known for attracting top engineering talent in general?

In its formation, Oracle had great software engineers (theirs was the very first commercial relational database, a feat many thought impossible). But that was 40 years ago, and now it’s a (very successful) sales-driven company.

There’s an important point from djhworld:

Java is hugely popular in the enterprise world, companies have invested millions and millions of pounds in the Java ecosystem and I don’t see that changing. Many companies still run Java 1.4.2 as their platform because it’s stable enough for them and would cost too much to upgrade.

The real business world goes at its own pace, whereas tech commentators tend to focus on the latest news and try to guess the future. It is a dangerous disconnect. Take no notice of us. Carry on coding.

On Reddit, some users focused on my assertion that the C# language was more advanced than Java. Is it? jeffcox111 comments:

I write in C# and Java professionally and I have to say I prefer C# hands down. Generics are very old news now in .Net. Take a look at type inference, lambdas, anonymous types, and most of all take a look at LINQ. These are all concepts that have been around for 3 years now in .Net and I hate living without them in Java. With .Net 5 on the horizon we are looking forward to better asynchronous calling/waiting and a bunch of other coolness. Java was good, but .Net is better these days.

and I liked this remark on LINQ:

I remember my first experience with LINQ after using C# for my final-year project (a visual web search engine). I asked a C# developer for some help on building a certain data structure and the guy sent me a pseudocode-looking stuff. I thanked him for the help and said that I’d look to find a way to code it and he said "WTF, I just gave you the code".

From there on I’ve never looked back.

Another discussion point is write once – run anywhere. Has it ever been real? Does it matter?

The company I work for has a large Java "shrinkwrap" app. It runs ok on Windows. It runs like shit on Mac, and it doesn’t run at all on Linux.

write once, run anywhere has always been a utopian pipe dream. And the consequence of this is that we now have yet another layer of crap that separates applications from the hardware.

says tonymt, though annannsi counters:

I’ve worked on a bunch of Java projects running on multiple unix based systems, windows and mac. GUI issues can be a pain to get correct, but its been fine in general. Non-GUI apps are basically there (its rare but I’ve hit bugs in the JVM specific to a particular platform)

Follow the links if you fancy more – I’ll leave the last word to A_Monkey:

I have a Java crisis every time I open eclipse.

Related posts:

  1. The Java crisis and what it means for developers
  2. IBM to harmonise its open source Java efforts with Oracle
  3. Oracle breaks, then mends Eclipse with new Java build

I chose VB.NET and later C# over Java when moving from VB6 and never looked back.

Alex Popescu posted Why We [StumbleUpon] Love HBase: An Interview with Ryan Rawson to his myNoSQL blog on 11/19/2010:

Why We Love HBase: An Interview with Ryan Rawson:

A must read interview with Ryan Rawson[1] about HBase:

It’s cost-effective, fast at data retrieval, and dependable. Instead of buying one or two very large computers, HBase runs on a network of smaller and cheaper machines. By storing data across these multiple smaller machines, we get better performance since we can always add more machines to improve data storage and retrieval as StumbleUpon’s data store grows. Plus, we can worry less about any one machine failure.

Our developers love working with HBase, since it uses really cool technology and involves working with “big data.” We’re working with something that most people can’t imagine or never get the chance to work with. This is the leading edge of data storage technology, and we really feel like we’re inventing the future in a lot of ways. The fact that Facebook decided to build their next generation of messaging technology on HBase is a validation of what we’re doing and how we’re doing it.

  1. Ryan Rawson, System Architect at StumbleUpon, HBase committer, @ryanobjc
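The storage model Rawson describes, growing capacity by adding machines rather than buying bigger ones, can be sketched with HBase-style contiguous row-key ranges. The toy Python example below is illustrative only (real HBase assigns regions through its META table and splits and rebalances them automatically); the class name and split points are invented for the sketch:

```python
import bisect

class ToyRegionMap:
    """Toy sketch of HBase-style partitioning: the sorted row-key space is
    split into contiguous ranges, each served by one machine."""

    def __init__(self, servers, split_points):
        # With N servers there are N-1 split points; the ranges are
        # (-inf, p0), [p0, p1), ..., [p_last, +inf).
        self.servers = servers
        self.split_points = sorted(split_points)

    def server_for(self, row_key: str) -> str:
        # bisect finds which contiguous range the key falls into.
        return self.servers[bisect.bisect_right(self.split_points, row_key)]

# Three cheap machines instead of one big one; grow by adding a split point.
regions = ToyRegionMap(["node-a", "node-b", "node-c"], ["g", "p"])
assert regions.server_for("apple") == "node-a"
assert regions.server_for("kiwi") == "node-b"
assert regions.server_for("zebra") == "node-c"
```

Because the ranges are contiguous and sorted, scans over adjacent keys stay on one machine, which is part of why this layout is fast for the retrieval patterns Rawson mentions.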

Robert Duffner posted Thought Leaders in the Cloud: Talking with Chandra Krintz, Associate Professor at UC Santa Barbara on 11/18/2010:

Chandra Krintz is an Associate Professor in the Computer Science Department at the University of California, Santa Barbara. She is also the director of the AppScale open source project. Chandra has received a number of awards, including the 2010 IBM X10 Innovation Award, the 2009-2010 IBM OCR Award, 2008-2009 UCSB Academic Senate Distinguished Teaching Award, and 2008 CRA-W Anita Borg Early Career (BECA) Award. You can learn more about Chandra at and

In this interview, we discuss:

  • An open source implementation of the Google App Engine known as AppScale
  • Growing interest in dynamic languages
  • Using the same technology and skills to develop for public and private clouds
  • Any plans to take AppScale in a commercial direction
  • Will cloud computing require new programming languages?
  • How AppScale is already growing beyond a pure implementation of GAE

Robert continues with a complete transcription of the interview.

Thomas Claburn [pictured below] asserted “Executives from Amazon, Salesforce.com, and VMware see freedom in the cloud” as a deck to his Web 2.0 Summit: 'Sacred' Cloud Computing Defies Lock-In article of 11/18/2010 for InformationWeek:

Clouds are amorphous. They don't have well-defined boundaries. So it is with cloud computing, though that didn't stop a few leading cloud computing vendors from trying to nail the concept down.

On Tuesday at the Web 2.0 Summit, O'Reilly Media founder and conference co-chair Tim O'Reilly plumbed the depths of the cloud with Paul Maritz, CEO of VMware, Marc Benioff, CEO of Salesforce.com, and Andy Jassy, senior VP of Amazon Web Services and Amazon Infrastructure.

It was a group of cloud believers, which explains Benioff's repeated use of the word "sacred" to describe cloud computing. "This is about democratization," said Benioff. "We make hardware and software too expensive."

O'Reilly was more concerned with the profane than the sacred, with the perils of lock-in. Where are we in the shift toward an Internet operating system, he asked, and who's going to control it?

Maritz suggested the battle over points of control -- the theme of this year's Web 2.0 Summit -- is a passing phase, something that the industry will grow out of, as it grew out of the 1960s notion that if you wrote code for IBM you had to use IBM hardware.

Jassy insisted the battle has already been decided, at least at Amazon Web Services. "When we started AWS, we heard very loudly from developers that they did not want to be locked in," he said. "...It's trivially easy to move away from our services if customers are not happy. We want to earn that business every day, every month."

O'Reilly observed that when someone develops a Force.com app, he or she is not going to be running it elsewhere. Benioff responded that customers have different needs and goals, and that not all software is the same. Nonetheless, he maintained that the cloud gives developers a choice of what they write in and where the code executes.

"Cloud computing to me is something that's democratic," said Benioff. Then he described Larry Ellison, CEO of Oracle, as the prophet of a false cloud for selling expensive hardware and promoting a vertical computing stack. He quoted Amazon CTO Werner Vogels as having said that if it's about more hardware, it's not about the cloud.

O'Reilly wondered if that means the hardware market will shrink. Maritz hedged, noting that while the cloud allows people to get more done with less infrastructure, democratization and greater efficiency could stimulate demand. Jassy took a stronger stand, insisting that the cloud will empower more startups and more innovation, leading to greater hardware demand.

Benioff insisted that everyone is moving to the cloud, just that there are some outliers. And echoing Maritz's observations, he dismissed the concept of points of control that served as the organizing theme of the conference.

"I don't believe in the points of control metaphor for this industry," he said, characterizing industry gate-keeping as transient. "I think it's post traumatic stress disorder from Microsoft."


Derrick Harris posted Nov. 18: What We're Reading About the Cloud on 11/18/2010 to GigaOm’s Structure blog:

News-wise, I’d say it was a bittersweet day in the world of web infrastructure and the cloud. The FCC moved to Terremark’s cloud, while Rackspace’s CEO discussed why users are not embracing the cloud; Juniper bought Blackwave, which doesn’t bode well for CDNs; NetApp had a good Q3 but a questionable forecast; and Gear6’s fall means fast flash NAS arrays for Violin Memory.

Seven Reasons Why Companies Are Sitting Out Cloud Computing (From BusinessInsider) Six of these reasons are from Rackspace CEO John Engates, but Rackspace has some cloud-definition issues of its own. With a cloud business and a hosting business, it’s best to draw a clear line.

Juniper Networks on an Acquisition Roll: Takes Blackwave Assets for Video Storage (From SiliconANGLE) Juniper has made a couple of CDN-like moves lately, which means service providers can start looking a lot more like CDNs. Good for them, bad for Akamai et al.

FCC Selects Terremark’s Enterprise Cloud (From Terremark) Terremark looks like the go-to cloud provider for large federal agencies, but its Enterprise Cloud is not pay-per-use. I wonder if AWS and others will have better luck for smaller projects.

NetApp: Strong Second Quarter, but Outlook Disappoints (From ZDNet) NetApp can’t keep up its blistering growth forever, so a slowdown probably should be expected. What’s more, it might have to go shopping to remain competitive, which will take its toll at some point.

Violin Uses Gear6 Tech to Make Super-fast Arrays (From The Register) You have to give Violin credit for spotting the valuable Gear6 IP. Flash-memory for storage arrays could mean big money; a memcached solution, though interesting, not so much.

For more cloud-related news analysis and research, visit GigaOM Pro.

Image courtesy of Salcan.

Alex Popescu reported ZooKeeper Promoted to Apache Top Level Project in a 11/18/2010 post to his myNoSQL blog:

ZooKeeper, the centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services, that started under the Hadoop umbrella, has been promoted to an Apache Top Level Project, according to the ☞ report sent out by Doug Cutting.

In case you are wondering what this means: simply put, it is proof of the project’s maturity and of its community’s ability to ensure the project’s future. On the other hand, if the Hadoop, HBase, and ZooKeeper communities do not coordinate their efforts, it could mean more work for users to match and test compatible versions when running ZooKeeper together with Hadoop and HBase.
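For readers wondering what ZooKeeper actually stores: its data model is a hierarchy of small "znodes," each addressed by a slash-delimited path and carrying a data blob plus a version number. The in-memory Python sketch below is illustrative only; real ZooKeeper adds replication, watches, ephemeral nodes, and strict ordering guarantees on top of this model.

```python
class ToyZNodeTree:
    """Minimal in-memory sketch of ZooKeeper's znode hierarchy."""

    def __init__(self):
        self.nodes = {"/": (b"", 0)}  # path -> (data, version)

    def create(self, path: str, data: bytes) -> None:
        # Like ZooKeeper, a znode can only be created under an existing parent.
        parent = path.rsplit("/", 1)[0] or "/"
        if parent not in self.nodes:
            raise KeyError("parent znode missing: " + parent)
        self.nodes[path] = (data, 0)

    def set_data(self, path: str, data: bytes) -> int:
        # Each update bumps the znode's version, as ZooKeeper's stat does.
        _, version = self.nodes[path]
        self.nodes[path] = (data, version + 1)
        return version + 1

    def get_children(self, path: str):
        return sorted(p.rsplit("/", 1)[1] for p in self.nodes
                      if p != "/" and (p.rsplit("/", 1)[0] or "/") == path)

# Typical use: publish cluster configuration under a well-known path.
zk = ToyZNodeTree()
zk.create("/config", b"")
zk.create("/config/db_host", b"10.0.0.5")
assert zk.get_children("/config") == ["db_host"]
assert zk.set_data("/config/db_host", b"10.0.0.6") == 1
```

Hadoop and HBase both lean on this model for coordination, which is why version compatibility across the three projects matters to users.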

<Return to section navigation list> 

