Monday, January 24, 2011

Windows Azure and Cloud Computing Posts for 1/24/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display it as a single article, then navigate to the section you want.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi posted an Interview with Tom Naylor, CTO of Advanced Telemetry to the SQL Azure Team blog on 1/24/2011:

Real World SQL Azure: Interview with Tom Naylor, Founder and Chief Technology Officer, Advanced Telemetry

As part of the Real World SQL Azure series, we talked to Tom Naylor, Founder and Chief Technology Officer at Advanced Telemetry about changing the fortunes of his startup company by moving to Windows Azure and using Microsoft SQL Azure data storage and Windows Azure AppFabric connectivity services. Here’s what he had to say:

MSDN: Can you tell us about Advanced Telemetry and the services you offer?

Naylor: Our product is highly customizable telemetry software—a remote monitoring and control framework that can be used for many applications in different business scenarios and different vertical markets. We began with a single product and a local infrastructure, delivering an end-to-end solution under a brand name, EcoView, which brings the benefits of energy management to owners of medium and small buildings.

MSDN: As a startup, what were the biggest challenges that Advanced Telemetry faced prior to moving to Windows Azure?

Naylor: Originally, we wanted to leverage our core telemetry infrastructure for a number of applications. But we needed immediate and direct access to the market, which was why we developed EcoView. We got stuck there because we didn’t have the resources to support the physical infrastructure and rack space required to store our growing amount of customer data. I wanted to back out of the direct market approach and deliver our product to new verticals through OEM licensees, but first we needed to get away from the unavoidable link between adding new customers and buying more servers and rack space.

MSDN: Why did you want to change to an OEM model? 

Naylor: The OEM model could be used to showcase our uniquely adaptable and extensible middleware. It would absolve us from directly marketing and supporting our product to customers, freeing up time and resources to build more features into our middleware. Also, with OEM licensees marketing our product in new verticals, we would gain a cost-effective way of entering untapped markets.  

MSDN: What role did Windows Azure play in helping you make that change?

Naylor: Windows Azure was absolutely key to making this strategic business move: it could be the most important technological shift that I have made during my career. To be attractive to potential OEM licensees, we had to move our infrastructure to a cloud-computing model where our application and data could reside on remotely hosted servers for which the OEMs would not be responsible. Otherwise, they would end up with the same problem we had, the reliance on physical hardware that just eats into any profitability you may gain from adding new customers.

MSDN: How has SQL Azure helped you grow your business?

Naylor: SQL Azure gave us scalable, cloud-based data storage for our relational customer configuration information and metadata that’s required by the application at run time. Before, with our physical infrastructure, we were forced to cull this data every six months because we couldn’t afford to keep paying for more servers. With SQL Azure, we don’t have to worry about culling customers’ data because we have a cost-effective online data storage solution to grow our business.

MSDN: What benefits have you gained from Windows Azure AppFabric connectivity services?

Naylor: Windows Azure AppFabric gave us cloud-based middleware services that our developers can use to build more flexibility into our core telemetry software, really driving our competitive advantage with customizable services and business logic tailored to individual OEMs, vertical markets, or customers—all without changing the core functionality of our middleware.

MSDN: Summing up, what are the overall benefits of moving to cloud computing?

Naylor: Windows Azure, SQL Azure, and Windows Azure AppFabric are business enabling technologies that we’re using as a new computing paradigm to build our business through OEMs. When we signed our first OEM license deal, we became instantly profitable for the first time. We’ve also reduced our IT infrastructure expenses by 75 percent and marketing costs by at least 80 percent. SQL Azure and Windows Azure AppFabric offered all the compute and data storage services that we needed to customize our telemetry software for OEMs wanting to offer the product in different vertical markets, helping to generate a new revenue stream for us. At the end of the day, Windows Azure changed our world.

Read the full story at: http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000009076.


Steve Yi reported the availability of a Liam Cavanagh on Data Sync video in a 1/24/2011 post to the SQL Azure Team blog:

Last week I was able to take some time with Liam Cavanagh, Sr. Program Manager of SQL Azure Data Sync. Data Sync is a synchronization service built on top of the Microsoft Sync Framework to provide bi-directional data synchronization and data management capabilities. This enables data to be easily shared across SQL Azure and with multiple data centers within an enterprise.

In this video we talk through how SQL Azure Data Sync provides the ability to scale in an elastic manner with no code, how enterprises can extend their reach to the cloud and how SQL Azure can provide geographic synchronization enabling remote offices and satellites.

Visit Steve’s blog to view the Silverlight video segment.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Ralph Squillace reported a Technical Note: Reference to excellent Windows Phone 7 crypto discussion on 1/24/2011:

Ian Miers has posted the code for doing “good” crypto for static data storage on the phone, but, more importantly, has written about it here. Understanding this code, the problems it poses, and the processes it implements is important to doing things with secure web services. More later.

Securing confidential data on cellphones is probably even more important than securing it on laptops and ordinary PCs.


See the “The OData Module for Drupal” topic of Craig Kitterman’s (@craigkitterman) Using Drupal on Windows Azure: Hands-On with 4 new Drupal Modules post of 1/24/2011 to the Interoperability @ Microsoft blog in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below.


Glenn Gailey explained Self-Hosting a WCF Data Service in the second post of 1/23/2011 to his new Writing…Data Services blog:

I’ve seen several customers looking for an example of how to directly host a WCF Data Service in a WCF application rather than using traditional ASP.NET hosting. While we discuss self-hosting WCF services in the topic Hosting the Data Service (WCF Data Services), there is currently no example of this kind of hosting. I do have a working code example that I have shared out on the forums, but I’m not sure that I want to put it in the core documentation because a) it’s overly simplistic, running in a console application, and b) it doesn’t address crucial issues of security and reliability, which is one of the main reasons to use ASP.NET and IIS for hosting. In fact, whenever you self-host a data service (either directly in WCF or in some other application), you need to make sure to consider the twin security requirements of authentication and authorization (unless it’s OK to run with wide-open access to your data).

Anyway, here’s sample code for a Northwind-based data service that is self-hosted in a console application:

using System;
using System.Data.Services;

namespace CustomHostedDataService
{
    // Self-hosted data service definition based on the Northwind
    // and using the Entity Framework provider.
    public class ConsoleService : DataService<NorthwindEntities>
    {
        public static void InitializeService(
           IDataServiceConfiguration config)
        {
            // Provide read-only access to all entries and feeds.
            config.SetEntitySetAccessRule(
               "*", EntitySetRights.AllRead);
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Type serviceType = typeof(ConsoleService);
            Uri baseAddress = new Uri("http://localhost:6000/");
            Uri[] baseAddresses = new Uri[] { baseAddress };

            // Create a new hosting instance for the Northwind
            // data service at the specified address.
            DataServiceHost host = new DataServiceHost(
               serviceType,
               baseAddresses);
            host.Open();

            // Keep the data service host open while the console is open.
            Console.WriteLine(
              "Navigate to the following URI to see the service.");
            Console.WriteLine(baseAddress);
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();

            // Close the host.
            host.Close();
        }
    }
}

Note: You need to run the application as an administrator to be able to create the host instance.

If you think that this example should be added to the hosting topic, please leave a comment.

Glenn writes documentation for WCF Data Services. He explains his future intentions in his  Writing…Data Services: The Inaugural Post post of 1/23/2011:

Welcome to my new blog, which (to be clever) I am calling “Writing…Data Services”—clever in the sense that I work at Microsoft writing the documentation that supports WCF Data Services and OData and Silverlight (and to a lesser degree Windows Phone 7). I have also written MSDN documentation for the ADO.NET Entity Framework, and, even earlier, SQL Server replication. Before I got to Microsoft, I was in a totally different industry (but that is another story).

I am currently supporting the following sets of MSDN documentation:

My Blogging Philosophy:

The reason that I am blogging at all is that my goal, as a writer, is to tailor content to a specific audience and delivery vehicle. In my years as a writer, I have noticed that while some content clearly belongs in core product reference documentation, other content is better suited for other channels. For example, some more narrative-style content (here’s how I did “____”) is often better in a blog format, or even as a video. I have already published some of this type of content on the WCF Data Services team blog, but there are some topics that don’t belong there either. Also, I need to have a place where I can muse “out loud” about things I am learning and thinking about and other things that I am working on. For example, my teammate Ralph Squillace and several others on my team are working on a cloud-based data service that exposes MSDN topics as an OData feed. Along with Ralph, I plan to blog some about this as the project unfolds. I also need somewhere to post issue-specific content that I would otherwise have to post over-and-over in the various forums that I support.
These are the reasons why I am starting this blog. [Emphasis added and Ralph Squillace link address corrected.]

As with any inaugural post, I finish by saying…
I hope that you a) find this blog, and b) find it interesting enough to visit again.

Cheers,

Glenn Gailey
WCF Data Services
User Education

Subscribed. I’m anxious to check out the MSDN OData feed.


Another DataMarket (@datamarket), based in Iceland and with goals similar to those of the Windows Azure MarketPlace DataMarket, launched on 1/24/2011. The home page claims to offer “100 million time series from the most important data providers, such as the UN, World Bank and Eurostat”:

[Screenshot: DataMarket home page]

Here’s a capture of per-capita meat consumption by the French visualization for the period 1961 through 2009:

[Screenshot: French per-capita meat consumption visualization, 1961–2009]

And here’s a tabular view of the above data:

[Screenshot: tabular view of the same data]

A quick check with Fiddler2 shows that the service uses a proprietary format. API access ranges from $99 for up to 500 requests per month to $799 for up to 100,000 per month. I’ve signed up for the free service.

DataMarket’s 13 thousand data sets, 100 million time series, 600 million facts blog post of 1/23/2011 announced the service’s commercial startup on 1/24/2011. The site has been operational since March 2010 with Icelandic data only. The blog contains many interesting posts about Big Data and Data-as-a-Service (DaaS).


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Windows Azure AppFabric Team announced Announcing the Windows Azure AppFabric CTP Forum in a 1/24/2011 post:

The AppFabric CTP services in our Labs environment preview the enhancements and capabilities that we plan to release in the near future.

In order to better serve our customers who are using the CTP AppFabric services we created a new forum: Windows Azure AppFabric CTP Forum. The new forum launched last week and has already attracted questions and views.

If you are using the AppFabric CTP services be sure to use this forum as a valuable resource to get help. Through the forum you will be able to get help from other AppFabric customers like you, as well as moderators from our team.

Questions related to the production version of AppFabric should be asked in the Windows Azure Platform Forum according to the specific scenario, and questions related to specific AppFabric services should go under the following:

[Screenshot: AppFabric forum categories]

If you haven’t started checking out Windows Azure AppFabric already make sure to take advantage of our free trial offer. Click the link [at the right] and get started!


Vittorio Bertocci (@vibronet) reported January 2011 Releases of the Identity Training Kit & the MSDN Online Identity Training Course in a 1/24/2011 post:

Today we are releasing an update to the Identity Developer Training Kit; we are also releasing the first MSDN-hosted Identity Developer Training Course, an online version of the kit where all the lab instructions and setup files are unbundled and individually accessible. Both are heavily cloud-flavored.


January 2011 Identity Developer Training Kit

Besides the usual WIF workshop videos & decks and the labs on WIF + ASP.NET, WIF+WCF, delegation, WIF & Silverlight integration and ACSv1, the new items in the January release are:

  • New lab on using the ACS December Labs release for federating with multiple business identity providers (i.e., ADFS 2.0), using the ACS management API, and more
  • The introductory lab on web identity providers, rules and HDR has been updated to the December Labs release of ACS
  • The Windows Azure Single Sign On lab and the WIF+WCF on Windows Azure lab have been updated to the Windows Azure SDK 1.3 and the new Silverlight portal
  • In addition to the update, the single, extra-long exercise in WIF+WCF on Windows Azure has been refactored into 3 different exercises
  • New slide deck on identity and the cloud, complete with speaker notes, ready for you to re-deliver (taken from this, but with all the ink transformed into PowerPoint animations ;-))
  • The setup of all labs has been revved to the latest version of the dependency checker
  • The resources list has been updated

Please continue to send us the bugs you find and we’ll queue them up for the next releases. Also, please let us know if there is a topic you would like us to cover in a lab! Although I can’t promise it will make it into the kit, it will certainly help in the prioritization process.

January 2011 Identity Developer Course on MSDN

Back at PDC09 we published our first online courses under http://channel9.msdn.com/learn, and the identity course was released in that wave. The community really appreciated the chance to directly access the instruction pages containing the solution to their problem at hand, instead of downloading entire kits.

Some of the courses are in the process of being migrated to MSDN, where they can enjoy improved discoverability and integration with the rest of the developer content, extra features such as PDF versions of the labs instructions and so on.

Today we are releasing our first MSDN version of the identity developer training course, which is fully aligned with the offline kit release described earlier. For a brief time the MSDN and Channel9 versions will coexist, but we will soon take the Channel9 one down and put proper redirection mechanisms in place.

Have fun with the latest kit & course, and remember: if you don’t get identity right, good luck reaping the benefits of the cloud.


Steve Plank (@plankytronixx) produced a Video Screencast: Complete setup details for federated identity access from on-premise AD to Office 365 on 1/24/2011:

How to set up IIS, ADFS 2.0, the Federation Config Tool and Office 365 Dirsync. The video starts with federated access and its setup. It shows how an existing user can use their AD credentials to log in to Office 365 entirely seamlessly; they hit CTRL-ALT-DEL and log in to the machine, then when they go to Office 365 they are not prompted for the password because they have already logged in.

Then in Dirsync I do the setup so that you can create users and modify their attributes in AD and they automagically appear in Office 365 as federated accounts that you can use your AD credentials to log in to.

I’ll do a whiteboard flow of the protocols and how they work very soon, I just wanted to get the demo-video in place because so many folks have asked for it. I use the “magic of video editing” to avoid the confusion of error messages and to shorten the install-times of various components.

Complete setup details for federated identity access from on-premise AD to Office 365, from Steve Plank on Vimeo.

[Video Screencast: AD to Office 365 setup]

If you are going to do this yourself – here are a couple of notes worth mentioning:

Federated Identity:

The “Admin” account you need to use for the PowerShell commands that set up the ADFS server and the Microsoft Federation Gateway in the cloud is not just any admin account. It needs to be the first admin account on the Office 365 system when the subscription is created.

There is a typo in the help files on the Microsoft Online portal (http://onlinehelp.microsoft.com/en-us/office365-enterprises/ff652560.aspx). The third PowerShell command says:

Convert-MSOLDomainToconverFederated –DomainName <domain>

The extra “conver” (highlighted in red in the original post) should be left out completely – so the command becomes:

Convert-MSOLDomainToFederated –DomainName <domain>

DirSync:

DirSync can only run on a 32-bit server. I used Windows Server 2008 SP2. You can’t install it on a domain controller, which means that if you are going to set up a test lab, you need a minimum of 2 machines: a DC (which you could also use for ADFS 2.0) and a member server for the DirSync tool.

In my test environment I had:

[Screenshot: test environment configuration]

Planky – GBR-257


David Kearns reported on 1/20/2011 Yahoo! steals a march on Facebook and Google by relying on Facebook and Google IDs (missed when published):

There was big news on the Internet identity front last week when Yahoo! announced they have now become an OpenID "relying party" for IDs from Google and Facebook. I thought it was a bold move on the part of the original Internet directory service, but more than one Facebook fanboy saw it as "Yahoo! Concedes Identity Race by Allowing Login with Facebook and Google OpenIDs."

What?

The whole point of OpenID is that authentication to one site (the OpenID Provider, or OP) is used as authentication for other sites (called Relying Parties or RPs). The tragedy of OpenID was that everyone wanted to be an OP and few wanted to be an RP.

The announcement from Yahoo! shows the first major breakdown of a walled garden where identity data was siloed by major Internet Web sites. Yahoo! has recognized that, given the value of access to its properties, it is in the company's interest to make authentication easier for users. Among the properties owned by the Sunnyvale firm are:

  • Associated Content -- AC is an online publisher and distributor of original content.
  • del.icio.us -- del.icio.us is a social bookmarking Web site that allows users to store and share bookmarks online.
  • Fire Eagle -- Fire Eagle is a location brokerage service created by Yahoo! Brickhouse.
  • Flickr -- Flickr is a popular photo sharing service that Yahoo! purchased on March 29, 2005.
  • FoxyTunes -- FoxyTunes is a browser extension that allows the user to control media players from the browser window.
  • Upcoming -- Upcoming is a social event calendar.
  • Wretch -- Wretch is a Taiwanese social networking service acquired by Yahoo!

There are also the various Yahoo! branded properties (Yahoo!Travel, Yahoo!RealEstate, Yahoo!Mail, etc.)

Facebook, which yet again had a major privacy scandal last week ("Facebook Update Exposes User Contact Info, Security Expert Says"), would appear to be a loser in this move. Once again, the Palo Alto Web tyro shows that it's all about monetizing its users and not about enhancing usability.

Bravo to Yahoo!

In other news, two entities that frequently grace these pages have now come together as Eve Maler (formerly with Sun and, most recently, PayPal, and noted speaker and writer on user-centric identity) has joined forces with Forrester Research as a principal analyst with a focus primarily on identity and access management. That's a win-win-win situation -- a win for Maler, for Forrester and for those of us who care about IdM and IAM.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Rob Gillen (@argodev) reported the Return of the Windows Azure GAC Viewer in a 1/24/2011 post:

I’m pleased to announce that the excellent utility – the Azure GAC Viewer – is once again online and available for general use. You can access it at http://gacviewer.cloudapp.net. This tool shows you a dynamically generated list of all of the assemblies present in the GAC for an Azure instance. Additionally, it also allows you to upload your project file (*.csproj or *.vbproj) to have the references scanned and let you know if there are any discrepancies between what you are using and what is available (by default) in Azure. You can then adjust your project file (copy-local=true) to ensure your application can run successfully.

[Screenshot: Windows Azure GAC Viewer]

If you are familiar with the tool, you may be thinking “Wait! You aren’t Wayne Berry, and besides, the URL has changed!” – and you would be correct on both counts. Wayne developed the tool and posted about it back in September of last year. Since that time, however, Wayne has accepted a position on the Windows Azure team and is unable to continue maintaining the site full time. As a gesture of kindness to the community, he has passed the source code to me and given me his blessing to re-launch the tool.

As it stands today, the tool is nearly exactly as Wayne developed it, with a few tweaks to have it use Guest OS 2.1 rather than 1.6. I’ve also added a contributors page to give credit to Wayne and to the organizations that are allowing me to maintain and keep the site online.

In the future, I hope to make the source code available on CodePlex as well as to add to the list of tools that live on the site. If you find any bugs in the current site or have ideas for future changes, please feel free to contact me.


Craig Kitterman (@craigkitterman) described Using Drupal on Windows Azure: Hands-On with 4 new Drupal Modules in a 1/24/2011 post to the Interoperability @ Microsoft blog:

Since the launch of Windows Azure a couple of years ago, we’ve been working on driving interoperability scenarios that enable various developers to harness the power of the Windows Azure cloud platform. In parallel, we’ve supported interoperability projects, in particular on PHP and Drupal, in which the focus is showing how to simply bridge different technologies, mash them up and ultimately offer new features and options to developers (azurephp.interoperabilitybridges.com).

Today, I’d like to show you the result of some hands-on work with Drupal on Windows Azure: we are announcing the availability of 4 new Drupal modules – Bing Maps, Windows Live ID, OData and the Silverlight PivotViewer – that can be used with Drupal running on Windows Azure. The modules are developed by Schakra and Mindtree.

To showcase this work, the new Drupal 7 was deployed on Windows Azure with the Windows Azure Companion:

[Screenshot: Drupal 7 deployed with the Windows Azure Companion]

Check out the Drupal & Windows Azure Companion tutorial

On top of this Drupal instance running on Windows Azure are deployed four new generic modules that allow Drupal administrators and developers to provide their users with new features:

The Bing Maps Module for Drupal provides easy & flexible embedding of Bing Maps in Drupal content types, such as a technical article or story.

[Screenshots: Bing Maps Module for Drupal]

The Windows Live ID Module for Drupal allows Drupal users to associate their Drupal accounts with their Windows Live IDs, and then to log in to Drupal with their Windows Live ID.

[Screenshots: Windows Live ID Module for Drupal]

The OData Module for Drupal allows developers to include data sources based on OData in Drupal content types. The generic module includes a basic OData query builder and renders data in a simple HTML Table. In this case, we are taking the Netflix OData catalog and using a simple visual query engine, generating a filtered query to display on our Drupal “Article” page.

[Screenshots: OData Module for Drupal]

The Silverlight PivotViewer Module for Drupal enables easy & flexible embedding of the Silverlight PivotViewer in Drupal content types, using a set of preconfigured data sources like OData producers or existing Pivot collections.

[Screenshot: Silverlight PivotViewer Module for Drupal]

In this example, we are using the wedding venues pivot collection exposed on http://beta.hitched.co.uk to render the interactive Silverlight PivotViewer of that collection with deep zoom image support and a complete visual query experience.

[Screenshot: Silverlight PivotViewer rendering the hitched.co.uk wedding venues collection]

These modules are independently developed and contributed by Schakra and Mindtree, with funding provided by Microsoft. The modules have all been made available on GitHub, and we hope to see them moved to the Drupal module gallery in the near future. As always I look forward to your comments and feedback.

Craig is a Senior Interoperability Evangelist, Microsoft


<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure

David Chou published a detailed Designing for Cloud-Optimized Architecture analysis to his Architecture + Strategy MSDN blog on 1/23/2011:

I wanted to take the opportunity to talk about cloud-optimized architecture – the implementation model – rather than the popular perception of cloud computing as simply a deployment model. This is because, while cloud platforms like Windows Azure can run a variety of workloads, including many legacy/existing on-premises software and application migration scenarios that can run on Windows Server, I think Windows Azure’s platform-as-a-service model offers a few additional distinct technical advantages when we design an architecture that is optimized (or targeted) for the cloud platform.

Cloud platforms differ from hosting providers

First off, the major cloud platforms (regardless of how we classify them as IaaS or PaaS), at the time of this writing, impose certain limitations or constraints in the environment, which make them different from existing on-premises server environments (saving the public/private cloud debate for another time), and different from outsourced hosting managed service providers. Just to cite a few (according to my own understanding, at the time of this writing):

  • Amazon Web Services
    • EC2 instances are inherently stateless; that is, their local storage is non-persistent and non-durable
    • Little or no control over infrastructure that are used beneath the EC2 instances (of course, the benefit is we don’t have to be concerned with them)
    • Requires systems administrators to configure and maintain OS environments for applications
  • Google App Engine
    • Non-VM/OS instance-aware platform abstraction, which further simplifies code deployment and scale, though it imposes some technical constraints (or requirements) as well. For example:
    • Stateless application model
    • Requires data de-normalization (although Hosted SQL will mitigate some concerns in this area)
    • If the application can't load into memory within 1 second, it might not load and return 500 error codes
    • No request can take more than 30 seconds to run, otherwise it is stopped
    • No file system access
  • Windows Azure Platform
    • Windows Azure instances are also inherently stateless – round-robin load balancer and non-persistent local storage
    • Also due to the need to abstract infrastructure complexities, little or no control for the underlying infrastructure is offered to applications
    • SQL Azure has individual DB sizing constraints due to its 3-replica synchronization architecture

Again, just based on my understanding, and really not trying to paint a “who’s better or worse” comparative perspective. The point is, these so-called “differences” exist because of many architectural and technical decisions and trade-offs to provide the abstractions from the underlying infrastructure. For example, the list above is representative of most common infrastructure approaches of using homogeneous, commodity hardware, and achieve performance through scale-out of the cloud environment (there’s another camp of vendors that are advocating big-machine and scale-up architectures that are more similar to existing on-premises workloads). Also, the list above may seem unfair to Google App Engine, but on the flip side of those constraints, App Engine is an environment that forces us to adopt distributed computing best practices, develop more efficient applications, have them operate in a highly abstracted cloud and can benefit from automatic scalability, without having to be concerned at all with the underlying infrastructure. Most importantly, the intention is to highlight that there are a few common themes across the list above – stateless application model, abstraction from infrastructure, etc.

Furthermore, if we take a cloud computing perspective, instead of trying to apply the traditional on-premises architecture principles, then these are not really “limitations”, but more like “requirements” for the new cloud computing development paradigm. That is, if we approach cloud computing not from a how to run or deploy a 3rd party/open-source/packaged or custom-written software perspective, but from a how to develop against the cloud platform perspective, then we may find more feasible and effective uses of cloud platforms than traditional software migration scenarios.

Windows Azure as an “application platform”

Fundamentally, this is about looking at Windows Azure as a cloud platform in its entirety; not just a hosting environment for Windows Server workloads (which works too, but the focus of this article is on the cloud-optimized architecture side of things). In fact, Windows Azure got its name because it is something a little different from Windows Server (at the time of this writing). And, technically, even though the Windows Azure guest VM OS is still Windows Server 2008 R2 Enterprise today, the application environment isn’t exactly the same as having your own Windows Server instances (even with the new VM Role option). And it is more about leveraging the entire Windows Azure platform, as opposed to building solely on top of the Windows Server platform.

For example, below is my own interpretation of the platform capabilities baked into Windows Azure platform, which includes SQL Azure and Windows Azure AppFabric also as first-class citizens of the Windows Azure platform; not just Windows Azure.

[Diagram: Windows Azure platform capabilities]

I prefer using this view because I think there is value to looking at the Windows Azure platform holistically. And instead of thinking first about its compute (or hosting) capabilities (where most people tend to focus), it’s actually more effective/feasible to think first from a data and storage perspective, as ultimately code and applications mostly follow data and storage.

For one thing, the data and storage features in Windows Azure platform are also a little different from having our own on-premises SQL Server or file storage systems (whether distributed or local to Windows Server file systems). The Windows Azure Storage services (Table, Blob, Queue, Drive, CDN, etc.) are highly distributed applications themselves that provide a near-infinitely-scalable storage that works transparently across an entire data center. Applications just use the storage services, without needing to worry about their technical implementation and up-keeping. For example, for traditional outsourced hosting providers that don’t yet have their own distributed application storage systems, we’d still have to figure out how to implement and deploy a highly scalable and reliable storage system when deploying our software. But of course, the Windows Azure Storage services require us to use new programming interfaces and models (REST-based API’s primarily), and thus the difference with existing on-premises Windows Server environments.
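As an illustrative aside (not part of David’s original post), storing content in the Blob service with the StorageClient library that ships with the Windows Azure SDK looks roughly like the sketch below; the account credentials, container name and blob name are hypothetical placeholders:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Illustrative sketch: persist content to Windows Azure Blob storage
// instead of the local, non-durable file system of a role instance.
public class BlobStorageExample
{
    public static void Main()
    {
        // Placeholder credentials - substitute your own storage account settings.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        // Containers play roughly the role of top-level folders.
        CloudBlobContainer container = blobClient.GetContainerReference("reports");
        container.CreateIfNotExist();

        // Upload and read back a blob; under the covers these are REST calls.
        CloudBlob blob = container.GetBlobReference("daily-summary.txt");
        blob.UploadText("report contents go here");
        Console.WriteLine(blob.DownloadText());
    }
}

The same CloudStorageAccount object also exposes CreateCloudQueueClient and CreateCloudTableClient for working with the Queue and Table services.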

SQL Azure, similarly, is not just a plethora of hosted SQL Server instances dedicated to customers/applications. SQL Azure is actually a multi-tenant environment where each SQL Server instance can be shared among multiple databases/clients, and for reliability and data integrity purposes, each database has 3 replicas on different nodes and has an intricate data replication strategy implemented. The Inside SQL Azure article is a very interesting read for anyone who wants to dig into more details in this area.

Besides, in most cases, a piece of software that runs in the cloud needs to interact with data (SQL or no-SQL) and/or storage in some manner. And because data and storage options in Windows Azure platform are a little different from their seeming counterparts in on-premises architectures, applications often require some changes as well (in addition to the differences in Windows Azure alone). However, if we look at these differences simply as requirements (what we have) in the cloud environment, instead of constraints/limits (what we don’t have) compared to on-premises environments, then it will take us down the path to build cloud-optimized applications, even though it might rule out a few application scenarios as well. And the benefit is that, by leveraging the platform components as they are, we don’t have to invest in the engineering efforts to architect and build and deploy highly reliable and scalable data management and storage systems (e.g., build and maintain your own implementations of Cassandra, MongoDB, CouchDB, MySQL, memcached, etc.) to support applications; we can just use them as native services in the platform.

The platform approach allows us to focus our efforts on designing and developing the application to meet business requirements and improve user experience, by abstracting away the technical infrastructure for data and storage services (and many other interesting ones in AppFabric such as Service Bus and Access Control), and system-level administration and management requirements. Plus, this approach aligns better with the primary benefits of cloud computing – agility and simplified development (less cost as a result).

Smaller pieces, loosely coupled

Building for the cloud platform means designing for cloud-optimized architectures. And because the cloud platforms are a little different from traditional on-premises server platforms, this results in a new developmental paradigm. I previously touched on this topic with my presentation at JavaOne 2010, then later on at Cloud Computing Expo 2010 Santa Clara; just adding some more thoughts here. To clarify, this approach is more relevant to the current class of “public cloud” platform providers such as ones identified earlier in this article, as they all employ the use of heterogeneous and commodity servers, and with one of the goals being to greatly simplify and automate deployment, scaling, and management tasks.

Fundamentally, cloud-optimized architecture is one that favors smaller and loosely coupled components in a highly distributed systems environment, more than the traditional monolithic, accomplish-more-within-the-same-memory-or-process-or-transaction-space application approach. This is not just because, from a cost perspective, running 1000 hours worth of processing in one VM is relatively the same as running one hour each in 1000 VM’s in cloud platforms (although the cost differential is far greater between 1 server and 1000 servers in an on-premises environment). But also, with a similar cost, that one unit of work can be accomplished in approximately one hour (in parallel), as opposed to ~1000 hours (sequentially). In addition, the resulting “smaller pieces, loosely coupled” architecture can scale more effectively and seamlessly than a traditional scale-up architecture (and usually costs less too). Thus, there are some distinct benefits we can gain, by architecting a solution for the cloud (lots of small units of work running on thousands of servers), as opposed to trying to do the same thing we do in on-premises environments (fewer larger transactions running on a few large servers in HA configurations).

I like using the LEGO analogy. From this perspective, the “small pieces, loosely coupled” fundamental design principle is sort of like building LEGO sets. To build bigger sets (from a scaling perspective), with LEGO we’d simply use more of the same pieces, as opposed to trying to use bigger pieces. And of course, the same pieces can allow us to scale down the solution as well (and not having to glue LEGO pieces together means they’re loosely coupled). ;-)

But this architecture also has some distinct impacts on the way we develop applications. For example, a set of distributed computing best practices emerges:

  • asynchronous processes (event-driven design)
  • parallelization
  • idempotent operations (handle duplicity)
  • de-normalized, partitioned data (sharding)
  • shared nothing architecture
  • fault-tolerance by redundancy and replication
  • etc.

Asynchronous, event-driven design – This approach advocates off-loading as much work from user requests as possible. For example, many applications just simply incur the work to validate/store the incoming data and record it as an occurrence of an event and return immediately. In essence it’s about divvying up the work that makes up one unit of work in a traditional monolithic architecture, as much as possible, so that each component only accomplishes what is minimally and logically required. Rest of the end-to-end business tasks and processes can then be off-loaded to other threads, which in cloud platforms, can be distributed processes that run on other servers. This results in a more even distribution of load and better utilization of system resources (plus improved perceived performance from a user’s perspective), thus enabling simpler scale-out scenarios as additional processing nodes and instances can be simply added to (or removed from) the overall architecture without any complicated management overhead. This is nothing new, of course; many applications that leverage Web-oriented architectures (WOA), such as Facebook, Twitter, etc., have applied this pattern for a long time in practice. Lastly, of course, this also aligns well to the common stateless “requirement” in the current class of cloud platforms.
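As a rough sketch of this off-loading pattern on Windows Azure (not from David’s post; the queue and message names below are hypothetical), a web role can record the event in a queue and return immediately, while a worker role drains the queue out-of-band:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Rough sketch of queue-based off-loading between a web role and a worker role.
public class OrderEvents
{
    private readonly CloudQueue queue;

    public OrderEvents(CloudStorageAccount account)
    {
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        queue = queueClient.GetQueueReference("order-events");  // hypothetical queue name
        queue.CreateIfNotExist();
    }

    // Web role: validate the request, record the event, return immediately.
    public void Publish(string orderId)
    {
        queue.AddMessage(new CloudQueueMessage(orderId));
    }

    // Worker role loop: do the heavy lifting asynchronously on other instances.
    public void ProcessNext()
    {
        CloudQueueMessage message = queue.GetMessage();
        if (message == null)
            return;  // nothing queued right now

        // ... fulfill the order, send notifications, update reporting data ...

        queue.DeleteMessage(message);  // delete only after successful processing
    }
}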

Parallelization – Once the architecture is running in smaller and loosely coupled pieces, we can leverage parallelization of processes to further improve the performance and throughput of the resulting system architecture. Again, this wasn’t so prevalent in traditional on-premises environments because creating 1000 additional threads on the same physical server doesn’t get us that much more performance boost when it is already bearing a lot of traffic (even on really big machines). But in cloud platforms, this can mean running the processes in 1000 additional servers, and for some processes this would result in very significant differences. Google’s Web search infrastructure is a great example of this pattern; it is publicized that each search query gets parallelized to the degree of ~500 distributed processes, then the individual results get pieced together by the search rank algorithms and presented to the user. But of course, this also aligns to the de-normalized data “requirement” in the current class of cloud platforms, as well as SQL Azure’s implementation that resulted in some sizing constraints and the consequent best practice of partitioning databases, because parallelized processes can map to database shards and try not to significantly increase the concurrency levels on individual databases, which can still degrade overall performance.
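A simple way to picture the fan-out with plain .NET 4 primitives (an illustrative sketch; the shard list and per-shard query delegate are hypothetical) is to run the same query against every partition in parallel and merge the partial results:

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Illustrative scatter/gather sketch: query every shard in parallel, then merge.
public static class FanOutQuery
{
    public static List<TResult> Run<TResult>(
        IEnumerable<string> shardConnectionStrings,
        Func<string, IEnumerable<TResult>> queryShard)
    {
        var partialResults = new ConcurrentBag<TResult>();

        Parallel.ForEach(shardConnectionStrings, connectionString =>
        {
            // Each shard is queried on its own thread (or, in a cloud-scale
            // design, by its own worker instance).
            foreach (TResult item in queryShard(connectionString))
                partialResults.Add(item);
        });

        return partialResults.ToList();
    }
}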

Idempotent operations – Now that we can run in a distributed but stateless environment, we need to make sure that the same request that gets routed to multiple servers doesn’t result in multiple logical transactions or business state changes. There are processes that can tolerate, or even prefer, duplicate transactions, such as ad clicks; but there are also processes that don’t want multiple requests to be handled as duplicates. But the stateless (and round-robin load-balancing in Windows Azure) nature of cloud platforms requires us to put more thought into scenarios such as when a user manages to send multiple submits from a shopping cart, as these requests would get routed to different servers (as opposed to stateful architectures where they’d get routed back to the same server with sticky sessions) and each server wouldn’t know about the existence of the process on the other server(s). There is no easy way around this, as the application ultimately needs to know how to handle conflicts due to concurrency. The most common approach is to implement some sort of transaction ID that uniquely identifies the unit of work (as opposed to simply relying on user context), then choose between last-writer or first-writer wins, or optimistic locking (though any form of locking would start to reduce the effectiveness of the overall architecture).
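One common way to get idempotency is to key each unit of work on a caller-supplied operation ID and let the first writer win. The sketch below is only illustrative: it uses an in-memory dictionary as a stand-in for durable storage with an atomic “insert if new” operation (for example, a table insert that rejects duplicate keys), and the type and member names are hypothetical.

using System;
using System.Collections.Concurrent;

// Illustrative first-writer-wins sketch keyed on a client-generated operation ID.
public class CheckoutService
{
    // Stand-in for durable storage that supports an atomic "insert if new";
    // in a real system this would be shared by all instances.
    private readonly ConcurrentDictionary<Guid, bool> processedOperations =
        new ConcurrentDictionary<Guid, bool>();

    // The operationId is generated once by the client (e.g., when the cart page
    // is rendered) and resent unchanged with every retry or duplicate submit.
    public bool SubmitOrder(Guid operationId, string cartContents)
    {
        // First writer wins: only the request that records the ID does the work.
        if (!processedOperations.TryAdd(operationId, true))
            return false;  // duplicate submit - already handled by another request

        // ... charge the card, create the order record, queue fulfillment ...
        return true;
    }
}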

De-normalized, partitioned data (sharding) – Many people perceive the sizing constraints in SQL Azure (currently at 50GB – also note it’s the DB size and not the actual file size which may contain other related content) as a major limitation in Windows Azure platform. However, if a project’s data can be de-normalized to a certain degree, and partitioned/sharded out, then it may fit well into SQL Azure and benefit from the simplicity, scalability, and reliability of the service. The resulting “smaller” databases actually can promote the use of parallelized processes, perform better (load more distributed than centralized), and improve overall reliability of the architecture (one DB failing is only a part of the overall architecture, for example).
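A minimal sketch of the partitioning idea (the shard connection strings and hashing scheme below are hypothetical) maps a partition key such as a customer ID to one of several smaller SQL Azure databases:

using System;

// Illustrative shard map: route each customer to one of several SQL Azure databases.
public static class ShardMap
{
    // Hypothetical connection strings, one per database shard.
    private static readonly string[] Shards =
    {
        "Server=tcp:server0.database.windows.net;Database=App_Shard0;...",
        "Server=tcp:server1.database.windows.net;Database=App_Shard1;...",
        "Server=tcp:server2.database.windows.net;Database=App_Shard2;..."
    };

    // A stable hash keeps a given customer's data in the same shard across requests.
    // (A production system would use a hash that is guaranteed stable across
    // runtime versions rather than String.GetHashCode.)
    public static string GetConnectionString(string customerId)
    {
        int bucket = (customerId.GetHashCode() & 0x7fffffff) % Shards.Length;
        return Shards[bucket];
    }
}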

Shared nothing architecture – This means a distributed computing architecture in which each node is independent and self-sufficient, and there is no single point of contention across the system. With data sharding and maintained in many distributed nodes, the application itself can and should be developed using shared-nothing principles. But of course, many applications need access to shared resources. It is then a matter of deciding whether a particular resource needs to be shared for read or write access, and different strategies can be implemented on top of a shared nothing architecture to facilitate them, but mostly as exceptions to the overall architecture.

Fault-tolerance by redundancy and replication – This is also the “design for failures” principle referred to by many cloud computing experts. Because of the use of commodity servers in these cloud platform environments, system failures are a common thing (hardware failures occur almost constantly in massive data centers) and we need to make sure we design the application to withstand system failures. Similar to the thoughts around idempotency above, designing for failures basically means allowing requests to be processed again; “try again,” essentially.
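In code, “try again” usually shows up as a small retry wrapper around any call that may hit a transient failure. This is an illustrative sketch (the attempt count and back-off delay are arbitrary), and it assumes the wrapped operation is idempotent as discussed above:

using System;
using System.Threading;

// Illustrative retry helper for transient failures; pair it with idempotent operations.
public static class Retry
{
    public static T Execute<T>(Func<T> operation, int maxAttempts, int delayMilliseconds)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;  // out of retries - surface the failure to the caller

                // Simple linear back-off before trying again on (possibly) another node.
                Thread.Sleep(delayMilliseconds * attempt);
            }
        }
    }
}

For example, the blob read from the earlier sketch could be wrapped as Retry.Execute(() => blob.DownloadText(), 3, 500).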

Lastly, each of the topic areas above is worthy of an individual article and detailed analysis, and lots of content is available on the Web that provides a lot more insight. The point here is, each of the principles above actually has some relationship with, and dependency on, the others. It is the combination of these principles that contributes to an effective distributed computing, and cloud-optimized, architecture.


Jason Kincaid described YC-Funded AppHarbor: A Heroku For .NET, Or “Azure Done Right” in a 1/24/2011 TechCrunch post:

image You may be noticing a trend: there are a lot of startups looking to mimic the easy-to-use development platform that made Heroku a hit with Ruby developers and offer a similar solution for use with other languages. In the last few weeks alone we’ve written about PHP Fog (which, as you’d guess, focuses on PHP) and dotCloud (which aims to support a variety of languages). And today we’ve got one more: AppHarbor, a ‘Heroku for .NET’. The company is funded by Y Combinator, and it’s launching today.

AppHarbor will be going up against Microsoft Azure, a platform that developers can use to deploy their code directly from Visual Studio. But co-founder Michael Friis says that Azure has a few issues. For one, it uses Microsoft’s own database system, which can lead to developer lock-in[*]. And it also doesn’t support Git, which many developers prefer to use for collaboration and code deployment.

Other features: AppHarbor has automated unit testing, which developers can run before any code gets deployed (this reduces the chance that they’ll carelessly deploy something that breaks their site). The service also says that it takes 15 seconds to deploy code, rather than the fifteen-minute wait seen on Azure.

Friis acknowledges that there are a few potential hurdles. For one, some .NET developers may be used to life without Git, so it may take some work to get them interested (Mercurial support, which many .NET developers already use, is on the way, so this may not be a big deal). There’s also going to be competition for the small team, which currently includes Friis, Rune Sørensen and Troels Thomsen.

AppHarbor is first to launch, but there will be others: Meerkatalyst and Moncai are both planning to tackle the same problem, and they won’t be the last.

* Use of SQL Azure is optional; Windows Azure includes RESTful table, blob, and queue APIs.

Unless AppHarbor offers more advantages than Jason describes, I’ll stick with Visual Studio 2010 and the Windows Azure Portal for deployment. I believe most enterprise developers will do the same.


Doug Rehnstrom continued his Windows Azure Training Series – Deploying a Windows Azure Application in a 1/24/2011 post to the Learning Tree blog:

This is the fifth in a series I’ve been writing on learning Microsoft Windows Azure. In this post, you’ll deploy the application created earlier; see Windows Azure Training Series – Creating Your First Azure Project.

Configuring an Azure Role

With your Square a Number Azure project open in Visual Web Developer (or Visual Studio), go to the Solution Explorer window. Remember, there will be two projects: the Web role project and the service project. Expand the Roles branch of the service project’s tree. Right-click on the Web role, in this case SquareANumber.Web, and select Properties.

The Properties window will open as shown below. You need to set the instance count and VM size. For this example, set the size to ExtraSmall and the count to 2. Then save your project.

In Solution Explorer, double-click the ServiceConfiguration.cscfg file and you will see where the instance count was set. Double-click on the ServiceDefinition.csdef file, and you will see where the instance size was set. That’s it!

Creating the Deployment Package

Again in Solution Explorer, right-click on the service project, not the Web role, and select Publish. The following window will open. Select the Create Service Package Only option, and then click OK.

The deployment package is created and the configuration file is copied into the same folder. A Windows Explorer instance will open at that path. Make note of where the folder is; you’ll need it in a minute. (You might want to copy the full path to the clipboard to make finding it easier.)

Creating a Windows Azure Hosted Service

To upload the deployment package you need to go to the Windows Azure Management portal. The easiest way to get there is to again right-click on the service project, and select Browse to Portal. You will need to log in using your Windows Live ID.

In the management portal, click on the New Hosted Service button at the top left. The following window will open. Fill it out similar to what is shown below. You’ll need to have a subscription before doing this. See the first article in this series, Windows Azure Training Series – Understanding Subscriptions and Users.

When filling out this form, you’ll need a URL prefix that is not already used. The URL prefix is used to determine the location of your Web application. You can deploy to staging or production. In a real project, you’d deploy to staging first for testing, but for this example just go directly to production. Click the Browse Locally button and find the package and configuration file you just created. Click the OK button and the files will be uploaded.

You’ll have to wait a few minutes for the instances to start. Once started, you should see two extra-small instances running. See the screenshot below.

When the management portal says it’s ready, test it in the browser. The URL will be “http://<your URL prefix>.cloudapp.net”. You’re spending money now, so don’t forget to Stop and then Delete the deployment when you are finished.

To learn more about Windows Azure, come to Learning Tree course 2602, Windows Azure Platform Introduction: Programming Cloud-Based Applications


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Paul Venezia asserted “As we outsource more and more to cloud services, those of us in IT will be reduced to pleading on the phone just like our users” as a preface to his Cloud computing makes users of us all post of 1/24/2011 to his InfoWorld DeepEnd blog:

Here's yet another reason cloud computing is not a new idea: For users, corporate computing has always been a cloud. Users have applications they rely on to do their jobs; they load data to crunch on; they interact digitally with coworkers, clients, and partners -- and all of it comes from this amorphous blob known as IT. Without this stuff, most of them would have nothing to do.

That obliviousness, for better or worse, defines the relationship between users and IT: From their desks, users look down the hall toward IT and see ... nothing. Night-vision goggles can't help, nor would a bridge allow them to cross the chasm and instantly discover what IT is about.

This distance from core technology extends to users' personal lives. They're using Gmail, streaming movies from Hulu or Netflix, and using sites like Flickr and Facebook to provide them with all kinds of services -- and they magically work. They're someone else's problems, and when it breaks, users get very angry.

In IT, we know exactly how the sausage is made. We too rely on data, applications, and communications tools to do our jobs, but we have the benefit of being able to see into the magical forest of the back end. We're also far likelier to fix technical problems on our own -- or we should be. An IT person would never be fazed by a dialog box that has a greyed-out Continue button and an empty check box, for instance.

The gulf between users and IT often leads to animosity. From the user's perspective, if a problem -- no matter how minute -- is preventing them from completing a task, the immediate assumption is that IT is incompetent and someone should get fired. From an IT perspective, the user is being an idiot who can't think clearly enough to tie a pair of shoes.

But as companies move more apps and services into the cloud, those of us in IT are going to become exactly like our users. As we shift from providing such services as email in-house to a cloud provider, we find that when things go wrong, not only do we have users at our throats, but we don't have any insight into the problem itself. We're at the mercy of the cloud provider, and all we can do is make angry phone calls and write angry forum posts and emails. We're destined to become squeezed in the middle between users and cloud services, in many cases without the power to fix anything.

Those of you moving into cloud services, be prepared to feel less confident about certain aspects of your job, and get used to the feeling of not having any part in problem solving and disaster prevention other than as a mineshaft canary. I suppose the upside is that we'll have a better understanding of how our users have felt all along.


The HPC in the Cloud blog claimed Brocade, Dell and VMware Demonstrate How to Build Open, Simplified Large-Scale Private Clouds on 1/24/2011:

Brocade (Nasdaq: BRCD) today announced a joint demonstration with Dell and VMware at West 2011 Conference and Exposition in San Diego, Calif., the largest event on the West Coast for communications, electronics, intelligence, information systems, military weapon systems, aviation and shipbuilding.

The joint demonstration at AFCEA West will feature a multi-vendor approach to systems design. Brocade, Dell and VMware have used commercial off-the-shelf products to engineer a sophisticated infrastructure model that can support thousands of VDI clients. By leveraging virtualization layers, such as VMware View™, replication from Dell, and Data Center Ethernet fabric switches from Brocade, customers gain the flexibility of an open platform that delivers higher-performance and lower capital expenditures. This promotes open competition and helps ensure the federal government receives the best value for its IT spending.

Highlighted in the VDI demonstration:

    * Brocade VDX 6720 Switches that create Ethernet fabrics, utilizing IEEE Data Center Bridging and TRILL
    * Use of Dell thin clients to protect sensitive information and reduce the “threat from within”
    * Dell Equalogic storage
    * Dell Replication Software Suite
    * Integration of VMware View VDI client desktop virtualization software
    * Virtualized VMware ESX vSphere servers on Dell platforms
    * Standards-based Layer 2-7 Performance Monitoring Tools by InMon

Please visit booth #1609 to see a full demonstration.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted Both are taken for granted but provide vital services without which you and your digital presence would be lost. In the case of DNS, that should be taken literally as a preface to her DNS is Like Your Mom post of 1/25/2011 to F5’s DevCentral blog.

Mom. She’s always there, isn’t she?


She kissed away your bumps and bruises. You treated her like Google before you had access to the web and, like Google, she came through every time you needed to write a report on butterflies or beetles or the pyramids at Giza. You asked her questions, she always had an answer. You didn’t spend as much time with her as you grew older (and discovered you knew way more than she did, didn’t you?) but when you needed money or life kicked you in the face, she was there for you, as always. Steady, reliable, good old mom.

You’d be lost without her, wouldn’t you?

Go ahead – give her a call, shoot her an e-mail, write on her Facebook wall, order some flowers. I’ll wait. 

Now that we’re ready, consider that there are some components of your infrastructure that are just as valuable to your organization’s digital presence as your Mom is to you. Unfortunately we also tend to take them for granted.

TAKEN FOR GRANTED
DNS is rarely mentioned these days except when it’s the target of an attack. Then we hear about it and for a few moments DNS is as it should be – a critical data center service. It’s like Mother’s Day, only without the dandelions posing as flowers and Hallmark cards.

But once the excitement over the attack is over, DNS goes back into the bowels of the data center and keeps chugging along, doing what it does without complaint, patiently waiting to be appreciated once again. 

Okay, okay. Enough of the guilt trip. But the truth is that DNS is often overlooked despite its importance to just about everything we do. It is the first point of contact with your customers, and it is the gatekeeper to your entire domain. Without it, customers can’t find you and if they can’t find you, you can’t do business. It’s the cornerstone, the foundation, the most critical service on the Internet. And yet it remains largely unprotected.

The reasons for that are many, but primarily it’s because DNS needs to interact with the public, with the unknown. Its purpose in the architecture of the Internet is to be the authoritative source of where a given service resides. Queries to the root servers happen on the order of millions of times a second. Other DNS services are similarly stressed.

There are two primary concerns for DNS:

  1. Load
  2. Authenticity

Load is a concern because it’s possible to “take out” a DNS server simply by overloading it with requests. It’s a denial-of-service attack on your entire organization that takes away the ability of clients to find you, making you all but invisible to the entire Internet. The second problem is more recent, but just as dangerous: authenticity. This issue speaks to the ability to “hijack” your DNS or poison the cache such that customers looking for your latest gadget or service or what have you end up listening to Justin Bieber instead.

Okay, maybe that’s too harsh an image. Maybe they redirect to a competitor’s site, or to a porn site, or something less horrifying than Bieber. You get the picture – it’s bad for you and your organization’s reputation when this happens.

Point is, your DNS services can be hijacked and the result is that your sites and applications are effectively lost to the general public. Customers can’t find you, remote or roaming employees can’t access business-critical applications, and you might find yourself associated with something you’d rather not have your organization tied to.

DON’T MAKE DAD ANGRY
Your DNS infrastructure is critical. DNS itself is an aging protocol, yes, but it does what it’s supposed to do and it does so in a scalable, non-disruptive way. But it does need some attention, particularly in the area of load and authenticity.

To help relieve some of the stress of increasing load and the potentially devastating impact of an overload from attack, consider the benefits of load balancing DNS. Load balancing DNS – much in the same way as load balancing web services and applications – provides a plethora of options in architecture and resource distribution that can address heavy load on DNS infrastructure. Cloud computing can play a role here, if the DNS services are virtualized in the architecture by an upstream application delivery solution. The strategy is to scale out DNS as needed to meet demand. That’s of particular importance if the upstream components (if there are any) are not capable of detecting and responding to a DNS-based DDoS attack. It’ll cost you a pretty penny to scale out your DNS farm to respond, but on the scales balancing the uptime of your entire digital presence against costs, well, even the business understands that trade-off.
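To make the scale-out idea concrete, here is a minimal sketch of round-robin DNS load balancing: a UDP forwarder that spreads incoming queries across a pool of back-end name servers. It is not how F5 or any particular vendor implements it; the back-end addresses and the listening port are placeholders.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;

// Minimal round-robin DNS load-balancer sketch: listen for UDP queries,
// forward each one to the next back-end DNS server in the pool, and relay
// the answer back to the original client. Addresses below are placeholders.
public class DnsRoundRobinForwarder {
    private static final String[] BACKENDS = { "10.0.0.11", "10.0.0.12", "10.0.0.13" };
    private static final int DNS_PORT = 53;
    private static final int LISTEN_PORT = 5353; // unprivileged port for testing

    public static void main(String[] args) throws Exception {
        DatagramSocket listenSocket = new DatagramSocket(LISTEN_PORT);
        DatagramSocket upstreamSocket = new DatagramSocket();
        upstreamSocket.setSoTimeout(2000); // give up on a back end that hangs
        byte[] buffer = new byte[512];     // classic UDP DNS message size
        int next = 0;

        while (true) {
            DatagramPacket query = new DatagramPacket(buffer, buffer.length);
            listenSocket.receive(query);
            InetSocketAddress client =
                new InetSocketAddress(query.getAddress(), query.getPort());

            // Pick the next back end round-robin and forward the raw query bytes.
            InetAddress backend = InetAddress.getByName(BACKENDS[next]);
            next = (next + 1) % BACKENDS.length;
            upstreamSocket.send(new DatagramPacket(
                query.getData(), query.getLength(), backend, DNS_PORT));

            // Wait for the answer and relay it to the client unchanged.
            byte[] answerBuf = new byte[512];
            DatagramPacket answer = new DatagramPacket(answerBuf, answerBuf.length);
            try {
                upstreamSocket.receive(answer);
            } catch (java.net.SocketTimeoutException e) {
                continue; // back end did not answer in time; drop this query
            }
            listenSocket.send(new DatagramPacket(
                answer.getData(), answer.getLength(),
                client.getAddress(), client.getPort()));
        }
    }
}
```

A production device would add health checks, TCP fallback and DDoS heuristics, but the sketch shows why DNS, being mostly small, stateless UDP exchanges, distributes across a pool so naturally.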

Don’t discount the impact of a dynamic data center on DNS. DNS wasn’t designed to be constantly updated and yet the nature of highly virtualized and cloud computing environments requires just that – rapid changes, frequently. That will have an impact on your DNS infrastructure, and can negatively impact the costs associated with managing IP addresses. This is the core of the economy of scale problem associated with the network in relation to cloud computing and virtualization. It’s why automation and orchestration – process – will become critical to the successful implementation of IT as a Service-based initiatives. Yet another facet of DNS that might have been thus far overlooked.

To address the Justin Bieber problem, i.e. hijacking or poisoning of a DNS cache, implement DNSSEC. It’s one of the few “new” standards/specifications relating to DNS to hit the wires (literally) and it’s a good one. DNSSEC, if you aren’t familiar, leverages the foundation of a public key infrastructure to sign records such that clients can be assured of authenticity. Because miscreants aren’t likely to have the proper keys and certificates with which to sign responses, hijacking your name services really can’t happen in a fully DNSSEC-compliant world.

Problem is, it’s not a fully DNSSEC-compliant world. Yet. Movement is slow, but the root servers are being protected with DNSSEC, and that means it’s time for organizations everywhere to start considering how to implement a similar solution in their own infrastructure. Because it’s quite possible that one day clients will reject any non-secured response from a DNS service. And if you think your mom feels bad when you don’t answer her calls (damn caller ID anyway), your DNS service will feel even badder. Or the business will, which is probably worse – because that’s like your dad getting on your case for not answering your phone when your mom calls.
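A quick way to see whether a zone is publishing DNSSEC material at all is to look for its DNSKEY records. The sketch below assumes the open source dnsjava library (org.xbill.DNS); it only checks that keys are published and does not perform full signature validation.

```java
import org.xbill.DNS.Lookup;
import org.xbill.DNS.Record;
import org.xbill.DNS.Type;

// Rough DNSSEC sanity check using the dnsjava library: does the zone publish
// DNSKEY records? Presence of keys is necessary for DNSSEC but does not by
// itself prove that responses are being validated end to end.
public class DnskeyCheck {
    public static void main(String[] args) throws Exception {
        String zone = args.length > 0 ? args[0] : "example.com."; // zone to test
        Lookup lookup = new Lookup(zone, Type.DNSKEY);
        Record[] records = lookup.run();

        if (lookup.getResult() == Lookup.SUCCESSFUL && records != null && records.length > 0) {
            System.out.println(zone + " publishes " + records.length + " DNSKEY record(s)");
            for (Record r : records) {
                System.out.println("  " + r);
            }
        } else {
            System.out.println(zone + " returned no DNSKEY records: " + lookup.getErrorString());
        }
    }
}
```

Full validation (following the RRSIG chain up to the signed root, or trusting the AD bit from a validating resolver) is the harder part, and it is exactly what DNSSEC-aware DNS services and appliances are meant to take off your plate.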

DNS CRITICAL to CLOUD-BASED STRATEGIES
As we continue to move forward and explore how IT can leverage cloud-based compute to extend and enhance our IT strategies, we find more and more that DNS is a critical component of those strategies.

Extending the data center to include external, cloud-deployed applications requires that customers be able to find them – which means DNS. When migrating applications between locations for any reason, DNS becomes a key player in the move – ensuring new and existing connections are properly directed to the right location at the right time. DNS is one of the core technologies required to implement a cloud bursting strategy. DNS is the foundation upon which dynamic and mobile compute can be leveraged both internally and externally to the data center.

DNS is key to cloud computing, whether it’s public, private, or hybrid.

DNS is old, yes. It’s outdated, yes. It is just like your mom. And like your mom, it should be treated with the respect it deserves given its critical role in just about every facet of your organization’s digital “life”.


<Return to section navigation list> 

Cloud Computing Events

The Windows Azure Team announced 2-Day Windows Azure Boot Camp Coming Soon to A City Near You! (if you live in the Midwest or Southwest) on 1/24/2011:


Looking for guidance and training to help you deliver solutions on the Windows Azure platform?  Then you'll want to sign up for one of the two-day Windows Azure Boot Camps taking place around the country now through May 2011.  Each one of these 20 regional boot camps will be filled with training, discussion, reviews of real scenarios, and hands-on labs taught by the region's best Windows Azure experts.

Click on [a] location to learn more and register:

Snacks and drinks will be provided and you will need to bring a computer loaded with the required software; details are included on the registration page for each event.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Nicole Hemsoth claimed "Lew Tucker discusses the datacenter of the future, sheds light on the 'many clouds' theory, and describes the perfect storm in computing that is leading to new paradigms in IT" in a deck for her Cisco’s Cloud CTO Clarifies Strategy, Describes Datacenters of the Future post of 1/24/2011:

Although Cisco has a viable stake in the future of cloud computing, its position has been difficult to pin down, despite the fact that its Unified Computing System (UCS) server architecture and network commitments present a solid chance for it to have an impact on the market.

Other than a few scattered announcements and the publicized positioning of Lew Tucker (of Sun and Salesforce fame) as Cloud CTO nearly six months ago, Cisco has been reluctant to announce a full-blown strategy around how it plans to stake its claim in the arena. The relative silence was broken this past week, when the company finally revealed its approach somewhat formally in a video interview with Tucker.

In something of a “coming out party” for Cisco’s cloud roadmap for the future, Lew Tucker chatted at length about what role the company might play in a space that is still shaking out its winners and losers on the cloud computing front.

At the beginning of his tenure, Tucker restated the value of the network as the heart of cloud—a fact that he claims is overlooked in all of the hype and excitement over cloud computing. In the strategy interview, however, he expands on the role of the network in securely delivering applications and gives us a glimpse into his view of the datacenter of the future.

A World of Many Clouds

When asked about the vendor shakeout that is inevitable as the cloud market matures in coming years, Tucker stated that instead of mega-providers staking a claim in all verticals, we will see the development of industry-specific clouds.

He notes that clouds will form around needs and communities; thus, for example, within the healthcare industry there will be a small throng of HIPAA-compliant clouds, as well as similarly fine-tuned offerings with a keen eye on the regulatory and security needs of government, financial services and others.

In light of this concept of specialized clouds, Tucker stated that some of Cisco’s enterprise-class customers are looking at what types of enterprise-class private clouds can be hosted by service providers now.

Despite this focus on “many clouds” serving disparate needs-based communities, Lew Tucker feels that in the future there is a “much larger cloud on the horizon” that is visible when we step back and look at the breadth of connected devices that are available at the present—a number that is sure to grow. From automobiles to sensors to mobile devices of all shapes and sizes, this complexity and range provides “the greatest example for why networking is so critical to the cloud” and how security is now an even more pressing issue.



Jim Duffy reported “Gartner says multivendor networks provide cost savings with no increase in complexity or unreliability” in a preface to his Gartner slams Cisco's single-vendor network vision post to NetworkWorld’s InfrastructureMgmt blog of 1/21/2011:

Businesses are better off deploying multivendor networks, no matter what Cisco and other large network vendors may tell you, according to a recent report from Gartner.

Vendors who want customers to put all their network eggs in one basket will argue that single-vendor, end-to-end networks yield operational consistency, higher service quality and better reliability. But that's just not true -- enterprises can significantly reduce their costs and simplify their operations by deploying gear from at least two vendors in their networks, according to a November 2010 report from Gartner.

Gartner compiled its data from "hundreds" of client interactions and in-depth interviews with nine public- and private-sector organizations ranging in size from 1,000 users to more than 10,000 employees in 1,000 locations.


In the report, Gartner takes specific aim at Cisco and its proclamations that an "end-to-end" Cisco network is simpler, cheaper and more reliable.

"The idea of a single-vendor network has been promoted by Cisco (just like strong vendors in other market areas) as a way to simplify operations, ensure reliability and lower the TCO [total cost of ownership] for a network infrastructure," state Gartner analysts Mark Fabbi and Debra Curtis in the report. "However, after interviewing various organizations that have introduced a second vendor into their Cisco infrastructures, it is clear that in most cases today there is no financial, operational or functional basis for this argument. The reality is that a single-vendor Cisco network isn't necessarily less complex, easier to manage or more reliable than a network with multiple vendors when implemented with best practices."

Gartner found that introducing a second networking vendor into an enterprise infrastructure will reduce total cost of ownership for most organizations by at least 15% to 25% over five years. And most organizations that introduced a second vendor reported a "lasting decrease" in network complexity "compared to an all-Cisco network," Gartner found.

"We did not encounter one example were [sic] operational cost savings would offset the equipment cost premium that Cisco generally charges," Fabbi and Curtis state in their report.


In every case reviewed by Gartner, the firm found that organizations did not require additional staff to manage a dual-vendor network compared with a Cisco network. Also, total initial capital costs and ongoing maintenance expenses were "clearly higher" in a Cisco-only network -- the interviewed organizations achieved capital cost savings of 30% to 50% less than competitive bids from Cisco; and savings on maintenance costs ranged from 40% to as much as 95% less than what was previously paid for Cisco's SMARTnet services for similar infrastructure and coverage, Gartner found.

"Sole-sourcing with any vendor will cost a minimum 20% premium, with potential savings generally reaching 30% to 50% or more of capital budgets when dealing with premium-priced vendors," Fabbi and Curtis state in the report. "Network architects and CIOs who don't re-evaluate long-held incumbent vendor decisions (with any vendor) on a periodic basis are not living up to fiduciary responsibilities to their organization."



James Staten asked Is The IaaS/PaaS Line Beginning To Blur? in a 1/24/2011 post to his Forrester Research blog:

Forrester’s survey and inquiry research shows that, when it comes to cloud computing choices, our enterprise customers are more interested in infrastructure-as-a-service (IaaS) than platform-as-a-service (PaaS), despite the fact that PaaS is simpler to use. Well, this line is beginning to blur thanks to new offerings from Amazon Web Services LLC and upstart Standing Cloud.

The concern about PaaS centers on lock-in, as developers and infrastructure and operations professionals fear that by writing to the PaaS layer’s services their application will lose portability (this concern has long been a middleware concern — PaaS or otherwise). As a result, IaaS platforms that let you control the deployment model down to middleware, OS and VM resource choice are more open and portable. The tradeoff, though, is that developer autonomy comes with a degree of complexity. As the figure below shows, there is a direct correlation between the degree of abstraction a cloud service provides and the skill set required by the customer. If your development skills are limited to scripting, web page design and form creation, most SaaS platforms provide the right abstraction for you to be productive. If you are a true coder with skills around Java, C# or other languages, PaaS offerings let you build more complex applications and integrations without having to manage middleware, OS or infrastructure configuration. The PaaS services take care of this. IaaS, however, requires you to know this stuff. As a result, cloud services have an inverse pyramid of potential customers. Despite the fact that IaaS is more appealing to enterprise customers, it is the hardest to use.

Figure 1: Abstraction Determines DevOps Suitability

So how do IaaS providers widen their market? Becoming a PaaS layer makes them easier to use, but then lock-in concerns arise. How can you maintain the control of IaaS without requiring your customer base to become proficient in traditional IT ops tasks? The answer is in providing tools that ease deployment by making best-practice infrastructure choices and setting up IaaS services for availability and scale for you.

This is precisely what AWS Elastic Beanstalk does for Tomcat-based applications today (more types will be supported as this offering matures). Standing Cloud provides similar value, letting you deploy myriad open source software solutions to IaaS, but goes a step further in letting you choose from multiple clouds.

Both are welcome improvements to the IaaS cloud computing market and achieve the stated goal of making IaaS more widely appealing, but they aren’t competitive approaches. Elastic Beanstalk is aimed at speeding the deployment of your application but lets you open the hood and revise AWS’ best-practice choices at any time. If the customer has the skills, or involves the IT ops team, you can optimize the configuration and performance and tweak the service choices or economics of your deployment. Standing Cloud is targeting small and medium businesses, and the service providers that serve them — customers who really shouldn’t (or shouldn’t want to) open the hood. They are going for almost SaaS-level simplicity.

These new offerings aren’t the only solutions that make IaaS simpler; RightScale, CloudKick and others have been doing this for a while. But those solutions have been more squarely targeted at DevOps or enterprise architect professionals — people with deeper design and infrastructure skills; the more traditional IaaS buyer.


Microsoft is attempting to blur the line between IaaS and PaaS as well. Its VM role on Windows Azure isn’t a pure IaaS play but does provide greater control over your deployment and less lock-in concern than the traditional app role on this PaaS.

As these markets continue to mature we will see further and further blurring of the lines here, which will open the market to more customers but also create confusion along the way. What’s important to focus on is the core of the offering, which doesn’t change through these efforts, and what value they deliver. Elastic Beanstalk is simpler on-ramping to IaaS. It doesn’t abstract the middleware or the cloud platform services. It just sets them up for you. Once it’s in place, you have to manage it. If you want that to be simpler, tools like RightScale and rPath still hold strong value. Don’t want to manage it or lack these skills? Consider hiring a firm with this expertise like Datapipe or CapGemini.

If you don’t want to manage the middleware or OS — you just want this to work — PaaS is more suited to your desires (as is keeping the app in-house).


Andrew Brust (@andrewbrust) asked and answered Will the Amazon Beanstalk Obscure the Azure Skies? on 1/21/2011:

Amazon Web Services (AWS) today announced the beta release of its “Elastic Beanstalk” Platform as a Service (PaaS) offering.  The platform is initially available to Java developers only, but it sounds pretty snazzy: you wrap your code up as a Java WAR file (a Web Application Archive), upload it and deploy it.  There are tools developers can integrate into Eclipse to do the upload, or you can use the AWS Management Console.  Wait a few minutes and then visit your app at http://myapp.elasticbeanstalk.com/ or something similar.
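To give a sense of how little application code the upload-a-WAR model demands, here is the kind of trivial servlet that might go into such a WAR. The class name and markup are illustrative only; nothing here is specific to Beanstalk beyond the fact that it targets a standard Tomcat servlet container.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A trivial servlet that could be packaged into a WAR file, along with a
// web.xml URL mapping, and uploaded to a Tomcat-based PaaS.
public class HelloBeanstalkServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<h1>Hello from a Java WAR running in the cloud</h1>");
        out.println("<p>Served at: " + request.getRequestURI() + "</p>");
        out.println("</body></html>");
    }
}
```

Packaged with a deployment descriptor that maps it to a URL pattern, the WAR is self-contained; the platform supplies Tomcat, the JVM, the load balancer and the scaling rules.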

[Photo: dead ivy vines clinging to a tree]

What’s most notable about the Beanstalk offering, as far as I can tell, is that Amazon doesn’t have a hard distinction between their PaaS and IaaS (Infrastructure as a Service) offerings.  Instead of being two separate things, they are simply opposite ends of a spectrum, such that a Beanstalk PaaS instance can be customized slightly or modified to the degree that it becomes much like an IaaS EC2 instance. 

So what does this mean for Microsoft’s Azure cloud platform?  Its claim to fame thus far has been that it’s a PaaS platform, with an IaaS option merely being phased in (in the form of the Virtual Machine Role offering, which is now in Beta).  That’s been the story and that’s why, for many customers and applications, Azure has looked like the right choice.  Is it all over now? 


Let’s take inventory of some important, relevant facts:

  • Microsoft has said all along that they and Amazon would eventually each get into the other’s territory.  That is, MS knew it would have an IaaS offering and MS knew that Amazon would have a PaaS offering.
  • Microsoft is working feverishly to round out its platform with things like RDP access, VPN connections, Extra Small Instances, and Azure Drive (making VHD files in Blob Storage available as mountable NTFS drives in Azure Role instances).
  • SQL Azure is a Database as a Service (DaaS) offering.  Although Amazon offers access to MySQL (via its RDS service), Oracle and SQL Server, these services are essentially hosted services built around on-premises products.  That’s not the same as a DaaS database that is automatically replicated and tuned.  Plus, AWS’ SQL Server offering includes the Express and Standard Editions only.  Moreover, SQL Azure offers automatic publishing of data in OData format (a minimal query sketch follows this list) and will soon offer cloud-based reporting too.
  • Perhaps most important: the fact that Azure treats its PaaS instances as distinct from its IaaS ones makes its architecture especially well suited to spinning up and tearing down machine instances.  By having apps able to execute on what is essentially a generic VM, Microsoft assures that developers will build apps that deploy seamlessly and with minimal risk of dependency on environment or software version specifics.  Arguably, that removes impediments to availability and smooth operational logistics.
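To illustrate the OData point above, consuming a table that SQL Azure publishes as an OData feed is just an HTTP GET with query options appended to the URL. In the sketch below the host, service and entity-set names are hypothetical; only the $top and $filter query syntax is standard OData.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Reads the first few rows of a hypothetical OData feed. The host, service
// and entity-set names are placeholders; $top and $filter are standard
// OData query options.
public class ODataFeedReader {
    public static void main(String[] args) throws Exception {
        String feed = "https://odata.example.com/MyService.svc/Customers"
                + "?$top=5&$filter=Country%20eq%20'US'";
        HttpURLConnection conn = (HttpURLConnection) new URL(feed).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/atom+xml"); // Atom is the default OData payload

        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // raw Atom/XML payload; parse as needed
        }
        in.close();
        conn.disconnect();
    }
}
```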

In any case, competition in this sphere is very important and helpful.  That’s a bit of a platitude, but it’s still true.  AWS offers virtual servers as a commodity; Azure offers an application execution and data processing environment.  Each offering has its place, and if Amazon and Microsoft both stretch from their home bases into the other’s territory, that rounds out each offering and keeps both companies honest and innovative.

It should also make each product increasingly more economical.  That’s good news for customers, and for their customers and investors.

Photo by flickr user Nate Steiner

Savio Rodrigues (@SavioRodrigues) asserted Amazon mooches as it bets on Java in the Cloud in a 1/21/2011 post:

Amazon’s selection of open source Apache Tomcat as the Java application server powering its entry into the Java Platform as a Service (PaaS) market came as little surprise to Java vendors and industry watchers. Amazon’s pricing strategy on the other hand will surely surprise some vendors and IT decision makers. Additionally, Amazon’s apparent lack of contributions to the Apache Tomcat project should be considered during Java PaaS selection decisions.

Betting on Java in the cloud
Amazon’s newly announced AWS Elastic Beanstalk beta cloud offering is being positioned as proof that Java is in fact alive and well. Sacha Labourey, CEO at CloudBees, a Java PaaS provider, writes: “This is great news as it reinforces the message that the future of Java is in the cloud, not on premises.” I’d adjust Labourey’s comment to read: “the future of Java is in the cloud, and on premises”.

Amazon’s Jeff Barr explains AWS Elastic Beanstalk as follows:

AWS Elastic Beanstalk will make it even easier for you to create, deploy, and operate web applications at any scale. You simply upload your code and we’ll take care of the rest. We’ll create and configure all of the AWS resources (Amazon EC2 instances, an Elastic Load Balancer, and an Auto Scaling Group) needed to run your application. Your application will be up and running on AWS within minutes.

When the de facto public cloud provider, Amazon, launches a Java-based PaaS offering, ahead of another language such as Ruby, that speaks volumes about Java’s future.

Amazon’s loss leader pricing for AWS Elastic Beanstalk
While AWS Elastic Beanstalk is seen as good news for Java, Amazon’s pricing strategy may not be welcome news for some Java vendors.

Amazon’s Barr mentions, almost in passing, “PS – I almost forgot! You can build and run Elastic Beanstalk applications at no charge beyond those for the AWS resources that you consume.”

Amazon has effectively set the price for the operating system, web server, Java runtime and application server software components of a public Java PaaS at $0.00/hr.

Aside from these software components, functionality to monitor a running environment and to proactively provision and scale resources up and down to meet service-level agreements would be considered a key element of a PaaS.

Amazon offers these capabilities through Amazon Elastic Load Balancing and Auto Scaling, the latter being a feature of the Amazon CloudWatch monitoring service.

Elastic Load Balancing costs $0.025 per hour per elastic load balancer, while Auto Scaling is available at no charge for an every-five-minute monitoring cycle frequency, or for $0.015 per instance hour if an every-one-minute monitoring cycle is required. When these costs are added into the picture, Amazon’s Java PaaS, excluding hardware, storage and bandwidth charges, costs as little as $0.04 per instance hour including one load balancer. Over a year, this setup would cost about $350.
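The arithmetic behind those figures is easy to verify; the sketch below simply restates the per-hour rates quoted above and annualizes them over 8,760 hours, excluding the EC2 instance, storage and bandwidth charges just as the article does.

```java
// Back-of-the-envelope check of the quoted Beanstalk "platform" cost:
// one Elastic Load Balancer plus one-minute Auto Scaling monitoring,
// excluding the EC2 instance, storage and bandwidth charges.
public class BeanstalkCostEstimate {
    public static void main(String[] args) {
        double elbPerHour = 0.025;        // Elastic Load Balancing, per load-balancer hour
        double autoScalePerHour = 0.015;  // one-minute monitoring cycle, per instance hour
        double hoursPerYear = 24 * 365;   // 8,760 hours

        double perHour = elbPerHour + autoScalePerHour; // $0.04 per instance hour
        double perYear = perHour * hoursPerYear;        // about $350 per year

        System.out.printf("Hourly: $%.3f  Annual: $%.2f%n", perHour, perYear);
    }
}
```

Running it prints an hourly rate of $0.040 and an annual total of about $350.40, which matches the figures above.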

Amazon’s loss leader pricing strategy poses a challenge for emerging PaaS providers to offer equivalent function at such a low price point. Emerging PaaS vendors will attempt to differentiate versus Amazon’s offering, thereby hoping to defend a higher price point for their offerings.

The price point could also impact established open source based Java providers that have grown based on a lower cost of acquisition value proposition. Enterprises drawn to these solutions for a departmental or less business critical application could become enamored with Amazon’s $350 per year price point. After years of telling IT buyers to make purchase decisions for certain projects based on acquisition cost alone, these open source vendors may have to face the stark reality of their buyers agreeing, and using Amazon’s PaaS as a negotiation tool.

Amazon taking more than it gives to open source?
What’s more troubling for customers is Amazon’s willingness to take seemingly an order of magnitude more from the open source commons than it contributes.

For instance, while relying on the adoption and brand awareness of Apache Tomcat, Amazon is not even a current sponsor of the Apache Software Foundation. Additionally, it appears Amazon is not an active contributor to the Apache Tomcat project.

Amazon is not duty bound to sponsor or contribute to Apache simply because it’s using Apache developed code. However, if Amazon’s Java PaaS is wildly successful, or even successful enough to lower the price point customers are willing to accept for a public Java PaaS, then vendors who fund Apache Tomcat development, and who must now compete with Amazon’s Java PaaS price point, will have to reconsider their investments in the Apache Tomcat project.

Declining vendor sponsored contributions to the Apache Tomcat project would be of concern to the many customers that utilize Apache Tomcat either directly or indirectly, and in a cloud environment or not. Amazon could choose to contribute resources into the Apache Tomcat project to offset declining contributions from existing vendors in the Apache Tomcat community. This would however add to Amazon’s cost structure for AWS Elastic Beanstalk, and thereby necessitate a price increase or for Amazon to accept lower profit margins.

Advice for IT decision makers
IT decision makers interested in deploying public cloud PaaS workloads should start considering Amazon’s AWS Elastic Beanstalk. However, do so while understanding that Amazon’s current pricing may not fully reflect the true costs of developing and delivering a Java PaaS to customers.  Amazon can only rely on the contributions of a community while competing with the main contributing vendors to that community for so long. Also, don’t be surprised if Java PaaS vendors, established or emerging, are unwilling to compete at Amazon’s price point, but would rather offer differentiated value.

Interesting times ahead, for IT buyers and vendors alike.

Savio is a product manager with IBM Canada.

“[E]xcluding hardware, storage and bandwidth charges” seems to me to exaggerate Savio’s “loss leader” pricing claim.


<Return to section navigation list> 
