Tuesday, May 31, 2011

Windows Azure and Cloud Computing Posts for 5/31/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 5/31/2011 12:00 Noon PST and later with articles marked by Michael Rys, Kenneth van Surksum, Herman Mehling, Chris Czarnecki, Debra Littlejohn Shinder, Klint Finley, Michael Stonebraker, Rick Cattell, Susan Mernit, Elisa Flasko and Ben Zimmerman.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

• Michael Rys (@SQLServerMike) asked “How do large-scale sites and applications remain SQL-based?” as a preface to his Scalable SQL article for the June 2011 issue of Communications of the ACM. From the Introduction:

One of the leading motivators for NoSQL innovation is the desire to achieve very high scalability to handle the vagaries of Internet-size workloads. Yet many big social Web sites (such as Facebook, MySpace, and Twitter) and many other Web sites and distributed tier 1 applications that require high scalability (such as e-commerce and banking) reportedly remain SQL-based for their core data stores and services.

The question is, how do they do it?

The main goal of the NoSQL/big data movement is to achieve agility. Among the variety of agility dimensions—such as model agility (ease and speed of changing data models), operational agility (ease and speed of changing operational aspects), and programming agility (ease and speed of application development)—one of the most important is the ability to quickly and seamlessly scale an application to accommodate large amounts of data, users, and connections. Scalable architectures are especially important for large distributed applications such as social networking sites, e-commerce Web sites, and point-of-sale/branch infrastructures for more traditional stores and enterprises where the scalability of the application is directly tied to the scalability and success of the business.

These applications have several scalability requirements:

  • Scalability in terms of user load. The application needs to be able to scale to a large number of users, potentially in the millions.
  • Scalability in terms of data load. The application must be able to scale to a large amount of data, whether produced by a few users or as the aggregate of many users.
  • Computational scalability. Operations on the data should be able to scale for both an increasing number of users and increasing data sizes.
  • Scale agility. In order to scale to increasing or decreasing application load, the architecture and operational environment should provide the ability to add or remove resources quickly, without application changes or impact on the availability of the application.

©2011 ACM  0001-0782/11/0600  $10.00

Michael (mrys@microsoft.com) is principal program manager on the SQL Server RDBMS team at Microsoft. He is responsible for the Beyond Relational Data and Services scenario that includes unstructured and semi-structured data management, search, Spatial, XML, and others.


Michael Stonebraker and Rick Cattell recommended that you “Partition data and operations, keep administration simple, do not assume one size fits all” as a preface to their 10 Rules for Scalable Performance in 'Simple Operation' Datastores article for the June 2011 issue of Communications of the ACM. From the Introduction:

The relational model of data was proposed in 1970 by Ted Codd as the best solution for the DBMS problems of the day—business data processing. Early relational systems included System R and Ingres, and almost all commercial relational DBMS (RDBMS) implementations today trace their roots to these two systems.


Michael Stonebraker (stonebraker@csail.mit.edu) is an adjunct professor in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, consultant and founder, Paradigm4, Inc., consultant and founder, Goby, Inc., and consultant and founder, VoltDB, Inc.

Rick Cattell (rick@cattell.net) is a database technology consultant at Cattell.Net and on the technical advisory board of Schooner Information Technologies.

©2011 ACM  0001-0782/11/0600  $10.00


Savio Rodrigues (@SavioRodrigues) described the Cost effectiveness of Amazon RDS pay-per-usage software pricing in a 5/27/2011 post; see the Other Cloud Computing Platforms and Services section below.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Glenn Gailey (@ggailey777) described New OData Tombstoning Behavior in Mango in a 5/31/2011 post:

As I mentioned in my previous post OData Updates in Windows Phone “Mango”, new methods have been added to the DataServiceState class that improve performance and functionality when storing client state. You can now serialize nested binding collections as well as any media resource streams that have not yet been sent to the data service. But what does this new behavior look like?

Storing state in the State dictionary of the PhoneApplicationService essentially involves serializing a DataServiceState object, including a typed DataServiceContext object, one or more DataServiceCollection<T> objects, and all the entity data referenced by these objects. To make this behavior more correct, the old SaveState method is replaced with a new static Serialize method. This new method returns, quite simply, a string that is the XML serialized representation of the stored objects. This works much better for storing in the state dictionary because the DataServiceState is able to explicitly serialize everything before it gets stored. Also, nested collections should now work (this was broken in the Windows Phone 7 version).

The following code, based on the quickstart Consuming a Windows Azure Data Service by using the OData Client, has been updated to use the new Mango tombstoning behavior, including the new static Serialize method to store application state on deactivation:

// Code to execute when the application is deactivated (sent to background). 
// This code will not execute when the application is closing. 
private void Application_Deactivated(object sender, DeactivatedEventArgs e) 
{ 
    if (App.ViewModel.IsDataLoaded) 
    { 
        // Store application state in the state dictionary.
        PhoneApplicationService.Current.State["ApplicationState"]
            = ViewModel.SaveState();
    } 
}

// Return a collection of key-value pairs to store in the application state. 
public Dictionary<string, string> SaveState() 
{ 
    if (App.ViewModel.IsDataLoaded) 
    { 
        Dictionary<string, string> state 
            = new Dictionary<string, string>();

        // Create a new dictionary to store binding collections. 
        var collections = new Dictionary<string, object>();

        // Add the current Titles binding collection. 
        collections["Titles"] = App.ViewModel.Titles;

        // Store the current context and binding collections 
        // in the view model state. 
        state.Add("DataServiceState", 
                DataServiceState.Serialize(_context, collections));

        state.Add("CurrentPage", CurrentPage.ToString()); 
        state.Add("TotalCount", TotalCount.ToString());

        return state; 
    } 
    else 
    { 
        return null; 
    } 
}

A new static Deserialize method on DataServiceState takes the stored serialization and returns a rehydrated DataServiceState instance, so the re-activation code now looks like this:

// Code to execute when the application is activated (brought to foreground). 
// This code will not execute when the application is first launched. 
private void Application_Activated(object sender, ActivatedEventArgs e) 
{ 
    // If data is not still loaded, try to get it from the state store. 
    if (!ViewModel.IsDataLoaded) 
    { 
        if (PhoneApplicationService.Current.State.ContainsKey("ApplicationState")) 
        { 
            // Get back the stored dictionary. 
            Dictionary<string, string> appState = 
                PhoneApplicationService.Current.State["ApplicationState"] 
                as Dictionary<string, string>;

            // Use the returned dictionary to restore 
            // the state of the data service. 
            App.ViewModel.RestoreState(appState); 
        } 
    } 
}

// Restores the view model state from the supplied state dictionary. 
public void RestoreState(IDictionary<string, string> appState) 
{ 
    // Create a dictionary to hold any stored binding collections. 
    Dictionary<string, object> collections;

    if (appState.ContainsKey("DataServiceState")) 
    { 
        // Deserialize the DataServiceState object. 
        DataServiceState state 
            = DataServiceState.Deserialize(appState["DataServiceState"]);

        // Restore the context and binding collections. 
        var context = state.Context as NetflixCatalog; 
        collections = state.RootCollections;

        // Get the binding collection of Title objects. 
        DataServiceCollection<Title> titles 
            = collections["Titles"] as DataServiceCollection<Title>;

        // Initialize the application with stored data. 
        App.ViewModel.LoadData(context, titles);

        // Restore other view model data. 
        _currentPage = Int32.Parse(appState["CurrentPage"]); 
        _totalCount = Int32.Parse(appState["TotalCount"]); 
    } 
}

Note that with multi-tasking and the new fast application switching functionality in Mango, it is possible that your application state will be maintained in a “dormant” state in memory when your app loses focus. This means that even though the Activated event is still raised, the application data is still there and is immediately redisplayed. This is much faster than tombstoning because you don’t have to deserialize and re-bind everything (and the bound images don’t have to be downloaded again—hurray!). For more information, see Execution Model Overview for Windows Phone in the Mango beta documentation.
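
Mango’s ActivatedEventArgs also exposes a new IsApplicationInstancePreserved property that tells you whether you are resuming from the dormant state or from a real tombstone. Here is a minimal sketch (mine, not from the quickstart) of how the Activated handler above might use it; it reuses the ViewModel and RestoreState names from the earlier snippets:

// Sketch only: skip deserialization when the app instance was kept dormant.
private void Application_Activated(object sender, ActivatedEventArgs e)
{
    if (e.IsApplicationInstancePreserved)
    {
        // Dormant resume: the object graph (and bound images) are still
        // in memory, so there is nothing to restore.
        return;
    }

    // Tombstoned: rehydrate from the state dictionary as shown above.
    if (!ViewModel.IsDataLoaded &&
        PhoneApplicationService.Current.State.ContainsKey("ApplicationState"))
    {
        var appState = PhoneApplicationService.Current.State["ApplicationState"]
            as Dictionary<string, string>;
        App.ViewModel.RestoreState(appState);
    }
}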


The MSDN Data Development Center posted an HTML version of David Chappell’s Introducing OData: Data Access for the Web, the cloud, mobile devices, and more white paper on 5/31/2011. From the introductory sections:

Describing OData

Our world is awash in data. Vast amounts exist today, and more is created every year. Yet data has value only if it can be used, and it can be used only if it can be accessed by applications and the people who use them.

Allowing this kind of broad access to data is the goal of the Open Data Protocol, commonly called just OData. This paper provides an introduction to OData, describing what it is and how it can be applied. The goal is to illustrate why OData is important and how your organization might use it.

The Problem: Accessing Diverse Data in a Common Way

There are many possible sources of data. Applications collect and maintain information in databases, organizations store data in the cloud, and many firms make a business out of selling data. And just as there are many data sources, there are many possible clients: Web browsers, apps on mobile devices, business intelligence (BI) tools, and more. How can this varied set of clients access these diverse data sources?

One solution is for every data source to define its own approach to exposing data. While this would work, it leads to some ugly problems. First, it requires every client to contain unique code for each data source it will access, a burden for the people who write those clients. Just as important, it requires the creators of each data source to specify and implement their own approach to getting at their data, making each one reinvent this wheel. And with custom solutions on both sides, there's no way to create an effective set of tools to make life easier for the people who build clients and data sources.

Thinking about some typical problems illustrates why this approach isn't the best solution. Suppose a Web application wishes to expose its data to apps on mobile phones, for instance. Without some common way to do this, the Web application must implement its own idiosyncratic approach, forcing every client app developer that needs its data to support this. Or think about the need to connect various BI tools with different data sources to answer business questions. If every data source exposes data in a different way, analyzing that data with various tools is hard -- an analyst can only hope that her favorite tool supports the data access mechanism she needs to get at a particular data source.

Defining a common approach makes much more sense. All that's needed is agreement on a way to model data and a protocol for accessing that data -- the implementations can differ. And given the Web-oriented world we live in, it would make sense to build this technology with existing Web standards as much as possible. This is exactly the approach taken by OData.

The Solution: What OData Provides

OData defines an abstract data model and a protocol that let any client access information exposed by any data source. Figure 1 shows some of the most important examples of clients and data sources, illustrating where OData fits in the picture.


Figure 1: Any OData client can access data provided by any OData data source.

As the figure illustrates, OData allows mixing and matching clients and data sources. Some of the most important examples of data sources that support OData today are:

  • Custom applications: Rather than creating its own mechanism to expose data, an application can instead use OData. Facebook, Netflix, and eBay all expose some of their information via OData today, as do a number of custom enterprise applications. To make this easier to do, OData libraries are available that let .NET Framework and Java applications act as data sources.
  • Cloud storage: OData is the built-in data access protocol for tables in Microsoft's Windows Azure, and it's supported for access to relational data in SQL Azure as well. Using available OData libraries, it's also possible to expose data from other cloud platforms, such as Amazon Web Services.
  • Content management software: For example, SharePoint 2010 and Webnodes both have built-in support for exposing information through OData.
  • Windows Azure Marketplace DataMarket: This cloud-based service for discovering, purchasing, and accessing commercially available datasets lets applications access those datasets through OData.

While it's possible to access an OData data source from an ordinary browser -- the protocol is based on HTTP -- client applications usually rely on a client library. As Figure 1 shows, the options supported today include:

  • Web browsers: JavaScript code running inside any popular Web browser, such as Internet Explorer or Firefox, can access an OData data source. An OData client library is available for Silverlight applications as well, and other rich Internet applications can also act as OData clients.
  • Mobile phones: OData client libraries are available today for Android, iOS (the operating system used by iPhones and iPads), and Windows Phone 7.
  • Business intelligence tools: Microsoft Excel provides a data analysis tool called PowerPivot that has built-in support for OData. Other desktop BI tools also support OData today, such as Tableau Software's Tableau Desktop.
  • Custom applications: Business logic running on servers can act as an OData client. Support is available today for code created using the .NET Framework, Java, PHP, and other technologies.

The fundamental idea is that any OData client can access any OData data source. Rather than creating unique ways to expose and access data, data sources and their clients can instead rely on the single solution that OData provides.

OData was originally created by Microsoft. Yet while several of the examples in Figure 1 use Microsoft technologies, OData isn't a Microsoft-only technology. In fact, Microsoft has included OData under its Open Specification Promise, guaranteeing the protocol's long-term availability for others. While much of today's OData support is provided by Microsoft, it's more accurate to view OData as a general purpose data access technology that can be used with many languages and many platforms.

How OData Works: Technology Basics

Providing a way for all kinds of clients to access all kinds of data is clearly a good thing. But what's needed to make the idea work? Figure 2 shows the fundamental components of the OData technology family.


Figure 2: An OData service exposes data via the OData data model, which clients access with an OData client library and the OData protocol.

The OData technology has four main parts:

  • The OData data model, which provides a generic way to organize and describe data. OData uses the Entity Data Model (EDM), the same approach that's used by Microsoft's Entity Framework (EF).
  • The OData protocol, which lets a client make requests to and get responses from an OData service. At bottom, the OData protocol is a set of RESTful interactions -- it's just HTTP. Those interactions include the usual create/read/update/delete (CRUD) operations, along with an OData-defined query language. Data sent by an OData service can be represented on the wire today either in the XML-based format defined by Atom/AtomPub or in JavaScript Object Notation (JSON).
  • OData client libraries, which make it easier to create software that accesses data via the OData protocol. Because OData relies on REST, using an OData-specific client library isn't strictly required. But most OData clients are applications, and so providing pre-built libraries for making OData requests and getting results makes life simpler for the developers who create those applications.
  • An OData service, which exposes an endpoint that allows access to data. This service implements the OData protocol, and it also uses the abstractions of the OData data model to translate data between its underlying form, which might be relational tables, SharePoint lists, or something else, into the format sent to the client.

Given this basic grasp of the OData technology, it's possible to get a better sense of how it can be used. The best way to do this is to look at some representative OData scenarios. …
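
To make the "it's just HTTP" point concrete, here is a small sketch (mine, not from Chappell's paper) of an OData query issued as a plain HTTP GET from C#. The endpoint URL and property names are assumptions borrowed from the Netflix catalog used in the OData quickstarts cited earlier in this post; any OData feed would work the same way:

// Sketch: an OData query is an HTTP GET whose query string uses the OData
// query options ($filter, $orderby, $top). The response is an AtomPub feed
// by default; services that support it also accept &$format=json.
using System;
using System.Net;

class ODataQuerySketch
{
    static void Main()
    {
        // Endpoint and property names (AverageRating, ReleaseYear) are assumed.
        string uri = "http://odata.netflix.com/Catalog/Titles" +
                     "?$filter=AverageRating gt 4&$orderby=ReleaseYear desc&$top=5";

        using (var client = new WebClient())
        {
            Console.WriteLine(client.DownloadString(uri)); // Atom feed of matching titles
        }
    }
}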


Klint Finley (@klintron) asked Is Microsoft's Future in Data-as-a-Service? in a 5/30/2011 post to the ReadWriteCloud blog:

image The "realist" view on Microsoft's future is that Windows and Microsoft Office licenses will continue to be the company's bread and butter, and that enterprise-focused cloud initiatives like Azure and Office 365 will supplement this growth. In this view Microsoft's struggles in mobile, the rapid growth of Apple and the proliferation of Linux aren't real threats to the company. After all, even though OSX and Linux are growing faster than Windows, Windows is still growing. And it's too early to write Microsoft out of the mobile game, before its partnership with Nokia comes to fruition and before it even releases its tablets. It's a reasonable view of where things are going.

Then there's the other vision, which we might call the Cassandra version.

In this vision, Microsoft loses out in the operating system wars, Office becomes less relevant in the market and Microsoft's mobile ambitions were doomed from the start. This view sees Office 365 losing out to Google Docs and Azure losing out to the plethora of alternatives. Bing might be Microsoft's only hope in this view, and even that's a long shot.

TechCrunch's MG Siegler has called Microsoft's Online Services "the worst Internet startup ever" because of its massive losses ($726 million last quarter).

Bing gained traction this year, but may already be slipping in popularity. And as Siegler wrote last year, "While Microsoft is monetizing Bing, they're also spending a huge amount advertising it to get the eyeballs they eventually monetize."

But what if Bing's real value is in its APIs and SDKs, not its consumer facing side?

The comments on our poll about Google's pending closure of several APIs include this one from STHayden:

At work we wanted to use the translation api but they don't allow storage in a db and don't have any commercial license. We went with bing because they had a commercial license we could pay for.

We've mentioned before that some new startups are taking an API-only approach to doing business. Microsoft obviously isn't going to get rid of its Bing-branded sites, but would it make sense for the company to focus on its Bing APIs as a revenue generator? It's an unproven model, with new APIs continually popping up, but it's an interesting possibility. Microsoft could become a big time data broker and provider of "plumbing" for applications and services that run on any number of platforms and devices.

We've covered some of Microsoft's other big data projects, but it's not clear yet how the company plans to turn big data into big money. Targeting developers could be just the ticket.

Microsoft does have one initiative along these lines already: Windows Azure DataMarket, which lets companies sell data through REST-based APIs.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Vittorio Bertocci (@vibronet) suggested that you Edit and Apply New WIF’s Config Settings in Your Windows Azure WebRole… Without Redeploying! in a 5/31/2011 post:

image In short: in this post I will show you how you can leverage the OnStart event of a WebRole to enable changing the WIF config settings even after deployment.

Since the very first time Hervey and I made the first foray in Windows Azure with WIF, all the way to the latest hands-on labs, books and whitepapers, one of the main challenges of using WIF in a WebRole has always been the impossibility of updating the settings in <microsoft.identityModel> without redeploying (or preparing in advance for a pool of alternative <service> elements fully known at deployment time).

Last Friday I was chatting with Wade about how to solve this very problem for some future deliverables in the toolkit, and it just came to me: why don’t we just leverage the WebRole lifecycle and use OnStart for setting the values we want even before WIF reads the web.config? All we need to do is create suitable <Setting> entries in the ServiceConfiguration.cscfg file, which can be modified without the need to redeploy, and use the events in WebRole.cs to ensure that our app picks up the new values. Simple!

I created a new WebRole, hooked it to a local SelfSTS, and started playing with ServiceDefinition.csdef, ServiceConfiguration.cscfg and WebRole.cs. I just wanted to make sure the idea works, hence I didn’t pour much care into writing clean (or exhaustive) code. Also, I totally ignored all the considerations about HTTPS, NLB session management and all those other things you learned you need to do in Windows Azure. None of those really interferes with the approach, hence for the sake of simplicity I left them all out.

First, I created <Setting> entries  in the .csdef for every WIF config parameter generated by the Add STS Reference you’d likely want to control:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="WindowsAzureProject5" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
<WebRole name="WebRole1">
<Runtime executionContext="elevated" />

<!--... stuff-->
<ConfigurationSettings>
<Setting name="audienceUri" />
<Setting name="issuer" />
<Setting name="realm" />
<Setting name="trustedIssuersThumbprint" />
<Setting name="trustedIssuerName" />
</ConfigurationSettings>
</WebRole>
</ServiceDefinition>

Yes, yes, having settings just for one issuer in the trusted issuers registry is not especially elegant; and adding a homeRealm would probably be useful. Some other time.

The important thing to notice here is the <Runtime executionContext="elevated" />. Without that, you won’t be able to save the modifications to the Web.config.

Then I added the same settings in the .cscfg, leaving all the values empty (for now).

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="WindowsAzureProject5" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">
<Role name="WebRole1">
<Instances count="1" />
<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
<Setting name="audienceUri" value="" />
<Setting name="issuer" value="" />
<Setting name="realm" value="" />
<Setting name="trustedIssuersThumbprint" value="" />
<Setting name="trustedIssuerName" value="" />
<!--...stuff-->
</ConfigurationSettings>
<!--...stuff-->
</Role>
</ServiceConfiguration>

Very straightforward. Then I went ahead and added to WebRole.cs  the code below:

using System;
using System.Xml.Linq;
using Microsoft.Web.Administration;          // ServerManager (reference Microsoft.Web.Administration.dll)
using Microsoft.WindowsAzure.ServiceRuntime; // RoleEnvironment, RoleEntryPoint

namespace WebRole1
{
    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            RoleEnvironment.Changing += RoleEnvironmentChanging;

            using (var server = new ServerManager())
            {
                var siteNameFromServiceModel = "Web";
                var siteName =
                    string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);

                // Load the deployed Web.config so the WIF settings can be overridden
                // before WIF ever reads <microsoft.identityModel>.
                string configFilePath = server.Sites[siteName].Applications[0].VirtualDirectories[0].PhysicalPath + "\\Web.config";
                XElement element = XElement.Load(configFilePath);

                string strSetting;

                if (!(String.IsNullOrEmpty(strSetting = RoleEnvironment.GetConfigurationSettingValue("audienceUri"))))
                    element.Element("microsoft.identityModel").Element("service").Element("audienceUris").Element("add").Attribute("value").Value = strSetting;
                if (!(String.IsNullOrEmpty(strSetting = RoleEnvironment.GetConfigurationSettingValue("issuer"))))
                    element.Element("microsoft.identityModel").Element("service").Element("federatedAuthentication").Element("wsFederation").Attribute("issuer").Value = strSetting;
                if (!(String.IsNullOrEmpty(strSetting = RoleEnvironment.GetConfigurationSettingValue("realm"))))
                    element.Element("microsoft.identityModel").Element("service").Element("federatedAuthentication").Element("wsFederation").Attribute("realm").Value = strSetting;

                if (!(String.IsNullOrEmpty(strSetting = RoleEnvironment.GetConfigurationSettingValue("trustedIssuersThumbprint"))))
                    element.Element("microsoft.identityModel").Element("service").Element("issuerNameRegistry").Element("trustedIssuers").Element("add").Attribute("thumbprint").Value = strSetting;
                if (!(String.IsNullOrEmpty(strSetting = RoleEnvironment.GetConfigurationSettingValue("trustedIssuerName"))))
                    element.Element("microsoft.identityModel").Element("service").Element("issuerNameRegistry").Element("trustedIssuers").Element("add").Attribute("name").Value = strSetting;

                // Requires <Runtime executionContext="elevated" /> in the .csdef.
                element.Save(configFilePath);
            }

            return base.OnStart();
        }

        private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
        {
            // Cancel the in-place change so the role instance restarts and
            // picks up the new Web.config values.
            e.Cancel = true;
        }
    }
}

Let’s look at what happens in the using block first. If you want to read good writeups on this technique I suggest this msdn entry or this really nice entry from Andy Cross.

When OnStart runs, the WebRole application itself hasn’t had a chance to do anything yet. What I want to do here is get my hands on the web.config file, override the WIF settings with all the non-empty values I find in ServiceConfiguration.cscfg, and save the file back even before WIF gets to read <microsoft.identityModel>.

What I do above with Linq to XML for modifying the WIF settings is pretty dirty, very brittle and definitely tied to the assumption that the config we’ll be working with is the one that comes out from a typical Add STS Reference run. I tried to use ConfigurationManager at first, but it complained that <microsoft.identityModel> has no schema, hence I just went the quicker, easier, more seductive “let’s just see if it works” route. But remember, for the one among you who caught the reference: the dark side is not stronger. No no no.

Aaanyway. The element.Save(configFilePath) is the line that will fail if you forgot to add the elevated directive in the csdef, you’re warned.

The RoleEnvironmentChanging handler hookup at the beginning of OnStart, and the handler itself, are meant to ensure that when you change the values in ServiceConfiguration.cscfg Windows Azure will properly restart the role. If you don’t add that, just changing the config will not drive changes in the WebRole behavior until a stop & restart occurs. Technically there are a few things you may try to do to get WIF to pick up the new settings in mid flight, but all those would entail changing the application code, and that’s exactly what I am trying to avoid with all this brouhaha.

BTW, you can thank Nick Harris for the RoleEnvironment.Changing trick.
Nick just joined the Windows Azure Evangelism team and he is already doing an awesome job.

That should be all. Now, try to ignore the impulse that would make you change the config before deploying, and publish the project in Windows Azure staging “as is”.


In a few minutes the instance is up and running, listening at a nice (and totally unpredictable) URL: http://eddb883659d04d0bbbb570f17c52ea01.cloudapp.net. What do you think will happen if I just navigate there?


That’s right. WIF is still configured for the address the application had in the environment formerly known as devfabric (now Windows Azure simulation environment), as described in the realm entry, hence SelfSTS (which behaves like the WIF STS template if there’s no wreply in the signin message) sends the token back there instead of http://eddb883659d04d0bbbb570f17c52ea01.cloudapp.net. Normally we’d be pretty stuck at this point, but thanks to the modification we made we can fix the situation.

All you need to do is navigate to the Windows Azure portal, select the deployment and hit the Configure button.


Here you can pick the Edit current configuration option to update the values inline. In this case, all you need to do is paste http://eddb883659d04d0bbbb570f17c52ea01.cloudapp.net into the audienceUri and realm settings, and hit OK.


You’ll see the portal updating the instance for a few moments. As soon as it reports the role as ready, navigate to its URL and, surprise surprise, this time the authentication flow ends up in the right place! Thanks to the SecurityTokenVisualizerControl (which you can find in all the latest ACS labs in the identity training kit), you can also see that the audienceURI has been changed as well.


I think that’s pretty cool.

Now, you may argue that this scenario is an artifact of how the WIF STS template handles things, and that if you had been dealing with an STS (like ACS) which keeps realm and return URLs well separated you could have solved the matter on the STS side. All true, but beside the point.

Here I used the staging & realm example because with its unknowable-until-it’s-too-late GUID in the URL it is (was?) the paradigmatic example of what can be challenging when using WIF with Windows Azure; but of course you can use the technique you saw here for pushing out any post-deployment changes, including pointing the WebRole to a different STS, updating certificate thumbprints as keys rollover takes place or any other setting you may want to modify.

Please use this technique with caution. I haven’t used it extensively yet, hence I am not 100% sure if there are gotchas just waiting to be found, but so far it seems to be solving the problem pretty nicely.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

imageNo significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Ryan Parsley (@ryanparsley) interviewed Steve Marx (@smarx, pictured below) for the latest Cloud Plumbing podcast about Windows Azure interoperability:

Steve Marx_Interview.mp3

Steve Marx, Tactical Strategist for Microsoft, talks about interoperability on Azure. Steve writes Python and Ruby apps that he hosts on Azure and says you can too. He explains why it makes sense for Microsoft to support developers of all languages... not just on the .Net Stack.

Links referenced in the show:

The music in the show, Have Mercy — Big Walter Horton, was provided by Mevio’s Music Alley.

Not sure why inter is crossed out above.


The Windows Azure Team announced a Content Update: New Guidance Available on Using SSL Certificates with Windows Azure in a 5/31/2011 post:

We have recently updated and reorganized content in Managing Certificates in Windows Azure to make it easier to find help on certificates. The most significant change is the addition of two new topics on using SSL certificates:

With these updates, you can now find complete end-to-end instructions for obtaining and configuring an SSL certificate on Windows Azure.
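
Those topics walk through the full process; as a rough sketch of where the pieces end up (the names below are illustrative, only the certificate-related elements are shown, and the thumbprint is a placeholder for your own certificate's value), the service definition declares the certificate and the HTTPS endpoint that uses it:

<!-- ServiceDefinition.csdef (fragment) -->
<WebRole name="WebRole1">
  <Sites>
    <Site name="Web">
      <Bindings>
        <Binding name="HttpsIn" endpointName="HttpsIn" />
      </Bindings>
    </Site>
  </Sites>
  <Endpoints>
    <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCertificate" />
  </Endpoints>
  <Certificates>
    <Certificate name="SslCertificate" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>

while the service configuration supplies the thumbprint of the certificate you uploaded to the hosted service:

<!-- ServiceConfiguration.cscfg (fragment) -->
<Role name="WebRole1">
  <Instances count="1" />
  <Certificates>
    <Certificate name="SslCertificate"
                 thumbprint="[thumbprint of the uploaded certificate]"
                 thumbprintAlgorithm="sha1" />
  </Certificates>
</Role>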


SDTimes reported DevArt’s New Free T4 Editor for Visual Studio with Intellisense, Syntax Highlighting, Outlining, and Code Formatting Support! in a 5/31/2011 post:

Devart today unveiled a new, powerful Visual Studio add-in for editing T4 templates with syntax highlighting, intellisense, code outlining, and all the features of a first-class text editor add-in for Visual Studio. It provides very high performance and makes creating T4 templates easier and faster. With this new add-in, Devart offers a fast and easy way to create and edit T4 templates with multilevel template inclusion, convenient template navigation, and rich code editing features.

Intellisense
Devart T4 editor provides comprehensive intellisense including all Visual Studio C# and Visual Basic intellisense features - tooltips, parameter info, and code completion - and additionally supports a completion list for template directives. T4 editor intellisense lists all available C# classes and members, even those that are in included template files and in referenced assemblies.

Syntax Highlighting
With highlighting of template directives and C# and Visual Basic code, you can easily distinguish text from function calls. Fonts and colors for templates can be customized as in any Visual Studio code editor.
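
To make that concrete, here is a minimal, hypothetical template of the kind the editor colors - directives, text blocks, <# #> control blocks and a <#+ #> class feature block each get their own highlighting (this sample is my own illustration, not a Devart one):

<#@ template language="C#" #>
<#@ output extension=".txt" #>
<#@ import namespace="System" #>
<#
    // Standard control block: ordinary C#, highlighted as code.
    var items = new[] { "Alpha", "Beta", "Gamma" };
#>
Report generated on <#= DateTime.Now #>
<# foreach (var item in items) { #>
  - <#= Shout(item) #>
<# } #>
<#+
    // Class feature block: helper members callable from the template.
    private string Shout(string s) { return s.ToUpperInvariant(); }
#>

These are the regions that the outlining, navigation and formatting features described below operate on.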

Goto
Devart T4 Editor allows you to navigate to definitions and declarations of objects and members if they are present in the template file or included files.

Include
Devart T4 Editor supports multilevel template inclusion. All classes from included templates are available in intellisense, and you can navigate to them with Go To menu commands.

Outlining
Devart T4 editor supports a fast and convenient code folding feature. You can hide or display T4 control blocks, which simplifies template understanding and editing.

Editor Customization
You can enable or disable intellisense, word wrapping, virtual whitespace, line numbers, etc. Fonts and colors for syntax highlighting can also be changed.

Indenting
Devart T4 editor provides customizable and intelligent indenting. You don't need to add spaces or tab characters manually.

Code Formatting
Devart T4 Editor allows you to format templates automatically.

Support for Large Templates
Devart T4 Editor quickly parses even large template files with lots of included files. It provides high performance when parsing and editing templates.

Pricing and Availability
T4 Editor is an absolutely free tool, but nevertheless you can receive full support. T4 Editor is available for immediate download at http://www.devart.com/t4-editor/download.html

You might need the DevArt T4 editor to customize LINQ to SQL for use with WP7’s local data storage. I wish I had it when I was working with LINQ to SQL a couple years ago.


Maarten Balliauw (@maartenballiauw) described Creating your own private NuGet feed: MyGet in a 5/31/2011 post:

Ever since NuGet came out, I’ve been thinking about leveraging it in a corporate environment. I've seen two NuGet server implementations appear on the Internet: the official NuGet gallery server and Phil Haack’s NuGet.Server package. As good as these both are, there’s one thing wrong with them: you can't be lazy! You have to do some stuff you don’t always want to do, namely: configure and deploy.

After discussing some ideas with my colleague Xavier Decoster, we decided it’s time to turn our heads into the cloud: we’re providing you NuGet-as-a-Service (NaaS)! Say hello to MyGet.

MyGet offers you the possibility to create your own, private, filtered NuGet feed for use in the Visual Studio Package Manager.

It can contain packages from the official NuGet feed as well as your private packages, hosted on MyGet. Want a sample? Add this feed to your Visual Studio package manager: http://www.myget.org/F/chucknorris

But wait, there’s more: we’re open sourcing this thing! Feel free to fork over at CodePlex and extend our "product". We've already covered some feature requests we would love to see, and Xavier has posted some more on his blog. In short: feel free to add your own most-wanted features, provide us with bugfixes (pretty sure there will be a lot since we hacked this together in a very short time). We're hosting on Windows Azure, which means you should get the Windows Azure SDK installed prior to contributing. Unless you feel that you can write code without locally debugging :-)

Chuck Norris Feed

Feel free to go ahead and create your private feed. Some ideas (more at Xavier's site):

  • A feed containing only the packages you or your company often use
  • A feed containing only your (open-source?) project and its dependencies
  • A feed containing just a few packages that you want to use for a certain project: tell your developers to just install them all

Bugs and feature requests? Feel free to post them as a comment below. Once we release the sources, I’ll kick your mailbox with a request to implement the stuff you proposed. Seems fair to me :-)


Derrick Harris (@derrickharris) reported Apache, Microsoft Take Baby Steps Toward Open Clouds in a 5/26/2011 post to Giga Om’s Structure blog:

The dream of open cloud computing took a couple of small steps forward in the past 24 hours with Apache promoting the Libcloud project to top-level status and with Microsoft releasing a new version of its software development kit for PHP applications on its Windows Azure platform. However, although both moves do constitute noteworthy advancements in the cause of cloud openness, they also highlight how differently we might come to define open in the cloud context.

Microsoft’s enhanced PHP support is a prime example of this new definition of open: the SDK is open source and does give non-.NET users access to more features of the Azure platform, but Azure itself is neither open source nor particularly open, in general. However, by giving developers what Microsoft calls “a ‘speed dial’ library to take full advantage of Windows Azure’s coolest features,” the company is trying to prove that it supports freedom of choice. PHP developers now can have their cake and eat it, too.

Microsoft has SDKs for Java and Ruby, as well, but they’re less full-featured than this latest version for PHP, which aims to give PHP developers an experience comparable to that of .NET developers within Windows Azure.

PHP happens to be a great place to start really eliminating the barriers for non-.NET development within Azure. Many Facebook apps are written in PHP, after all (including Hotel Peeps, which Microsoft counts as a customer) and it’s among the most-popular web-development languages overall. Microsoft certainly didn’t fail to notice that PHP, once relatively underserved in the Platform as a Service space, now has a dedicated PaaS in PHP Fog and support from others, including RightScale, Red Hat (OpenShift) and DotCloud.

The Libcloud case presents a different take on cloud openness. Libcloud gives developers an API to perform a set of standard actions — such as create, reboot, deploy and destroy — to cloud servers across a variety of providers. However, it’s written in Python (there’s also a less-capable Java version) and requires developers to work in Python, which does limit the scope of who’s likely to use Libcloud. It’s an open-source project that opens up developers’ abilities to manage resources across clouds, but it’s only open to the Python community.

Deltacloud, which Red Hat initiated and which is currently an Apache Incubator project, arguably takes a broader approach to cross-cloud management with its REST-based API that works with any application type. Both projects, however, fall short of the ultimate goal of interoperability standards among cloud providers, whereby users could actually move applications and data between clouds without first having to pull back in-house and reload it into its new home.

Whatever open actually means in cloud computing, it’s undeniable that we’re making progress. From open source software such as OpenStack to projects such as Libcloud to just multi-language support on PaaS platforms, we’re at least at a place where cloud users can be confident their applications can be moved from cloud to cloud, if need be, and where users might not even have to learn new APIs along the way. We might never have widely accepted cloud standards, but it looks like we’re coming along nicely on choice.

With that in mind, I’ll be talking at Structure 2011 next month with leaders from the OpenStack and Cloud Foundry projects, who have a lot more insight into how open source projects might ultimately change ideas around both interoperability and economics in the cloud.


<Return to section navigation list> 

Visual Studio LightSwitch

Brad Kingsley (@BradKingsley) pondered Deploying LightSwitch Applications in a 5/31/2011 post:

Hopefully this is because LightSwitch is still in beta (as of this writing on 5/31/11), but from our research and testing, it seems that the documentation in the following article is correct:

"Publishing a 3-tier application requires that you have administrative access to a server that is running IIS and is preconfigured for LightSwitch, and also that you have administrative access to a computer that is running SQL Server." (http://msdn.microsoft.com/en-us/library/ff872288.aspx)

That's a tremendous failure (if not addressed) of the product. It means that publishing a LightSwitch application to a shared host is not an option (though I've heard of possible manual configuration workarounds). It also means that in a corporate world you either need a trusted deployment team with elevated permissions, or you need to give your developers or QA people administrative access to servers (generally a no-no).

LightSwitch has a Publish Application Wizard that seems to leverage Web Deploy so hopefully the LightSwitch team will address this publishing challenge so that any host supporting standard Web Deploy services can accept and support LightSwitch applications.

Brad is founder, president and CEO of OrcsWeb - managed hosting solutions.


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Windows Azure is missing from the SD Times 100 2011 Cloud category:


Why is Google present and Microsoft not?

But Microsoft appears in the Tools & Frameworks list:


As well as the Database list:


and the Influencer and ALM & SCM lists.


• Chris Czarnecki asked What is Azure? in a 5/31/2011 post to the Learning Tree blog:

Last week I read an excellent article about Microsoft Azure and the death of the data centre [see post below]. The author of the article, Debra Littlejohn Shinder, hit on a point that I have realised as part of my consultancy activities for some time too – that is, Azure is one of the most misunderstood product offerings from Microsoft ever.

It is interesting to consider why this is so. Firstly, the materials that Microsoft produce to describe Azure are often confusing. Secondly, there is a degree of paranoia amongst organisations about handing over control of IT resources to a third party as well as the perceived threat to job security of administrators. Those that are vocal propagate the confusion. This is highlighted in a sample of the comments posted by readers of Debra Littlejohn Shinder’s article. I have summarised a few below:

  • Instead of hogging your server resources Azure will hog your bandwidth resources
  • In the event of problems, Microsoft will blame the ISP, the ISP will blame Microsoft, and no resolution found
  • Microsoft cannot keep HotMail running, what chance Azure
  • I would rather own my own assets and manage them and secure my own data
  • Clouds are unreliable, look how often Amazon AWS is down

All of these arguments can easily be dismissed when one has a thorough understanding of Cloud Computing and Microsoft Azure. The technology and business benefits and risks can then be considered and an informed decision made. Uninformed comments are harmful not primarily to Microsoft, or Cloud Computing in general, but to the organisations that the uninformed work for. These employees may be holding back their organisations from becoming more agile and responsive to their customers’ needs, missing business opportunities and losing ground to competitors. Equally, Cloud Computing is not a solution for every business. If you would like to learn more on Cloud Computing and its business and technical benefits, why not consider attending Learning Tree’s Cloud Computing course. If you would like to learn more about the details of Azure, Doug Rehnstrom has developed an excellent 4 day hands-on course that will provide you with the skills to use Azure to the maximum benefit of your organisation.


• Debra Littlejohn Shinder asked Does Azure mean the death of the datacenter or the rebirth of Windows? in a 5/25/2011 post (missed when posted):

It appears to many people, both inside and outside of the company, that Microsoft is putting most of its development efforts into Azure, its cloud-computing platform, rather than Windows Server. This makes sense as part of its oft-declared “all in with the cloud” philosophy, but in many ways Azure is still a mystery, especially to IT professionals.

Some see it — and cloud computing in general — as a threat to their jobs, as I discussed in a previous post: “IT Pros Are Not Feeling the Love from Microsoft.” Some, who apparently don’t understand what Azure is, are afraid it will subsume Windows Server, and I’ve even heard some predict that “the next release of Windows Server will be the last.”

There is no doubt that Microsoft is pouring resources into “the cloud thing.” That includes putting many of its best people to work on cloud-related projects. It was big news last summer when Mark Russinovich, of Sysinternals’ fame, was moved from the Windows Core Operating System Division to the Windows Azure team. The cloud OS team includes a number of key employees such as Dave Cutler, who is considered the father of Windows NT, and Yousef Khalidi, who was formerly a member of the Windows Core Architecture Group.

But does this focus on Azure really mean the death of the datacenter — and the Windows servers that empower it - or could it actually signal the rebirth of Windows as a fresh, new, more flexible foundation for both public cloud offerings and the private cloud-based, on-premise datacenters of the future?

Understanding Azure

Ralph Waldo Emerson once said, “To be great is to be misunderstood.” Perhaps Microsoft can take comfort in that, because based on my casual discussions with many in the IT industry, Azure is one of the most misunderstood products the company has produced (and that’s saying a lot). Microsoft’s own descriptions of Azure often leave you more confused than ever.

For instance, the video titled What Is Windows Azure that’s linked from their Windows Azure page talks about three components: the “fabric,” the storage service, and the “developer experience.” Huh? To confuse matters more, many of the papers you’ll find on Azure, such as Introducing Windows Azure by David Chappell & Associates, list its three parts as fabric, storage service, and compute service.

The most common point of puzzlement is over whether Azure is or isn’t an operating system. Mary Jo Foley, in her Guide for the Perplexed, called Azure the base operating system that used to be codenamed “Red Dog” and was designed by a team of operating system experts at Microsoft. But she goes on to say that it networks and manages the set of Windows Server 2008 machines that comprise the cloud. That leaves us wondering whether Azure is an OS on which applications run or an OS on which another operating system runs.

Microsoft’s own web sites and documents, in fact, rarely call Azure an operating system, but refer to it as a “platform.” That’s a word that is used in many different ways in the IT industry. We have hardware platforms such as x86/x64, RISC, and ARM. We have software platforms such as .NET and Java. We have mobile platforms such as BlackBerry, Android, and WinMo. And we have OS platforms such as Windows, Linux, Mac OS X, Solaris, and so forth.

So what sort of platform is Azure? According to MSDN, it’s “an Internet-scale cloud computing and services platform hosted in Microsoft datacenters” that comprises three developer services: Windows Azure, Windows Azure AppFabric, and SQL Azure. So there we have yet another, different list of Azure’s three components.

It’s important to note that although they differ in other respects, every one of these lists includes the “fabric” component. And that’s also the most mysterious of the components to those who are new to Azure. Mark Russinovich likens Azure to a “big computer in the sky” and explains that the Fabric Controller is analogous to the kernel of the operating system. He has a unique ability to take this very complex subject and make sense out of it, and his Channel 9 discussion of Azure, cloud OS, and PaaS is well worth watching if you want a better understanding of the Fabric Controller and what it does.

How it all fits together

If you’re familiar with the basics of cloud computing, you know there are three basic service models: Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS). Azure solutions exist for both the IaaS and the PaaS service models, with the former providing only compute, network, and storage services and the latter providing everything that the application code needs to run on.

This platform is not an operating system in the traditional sense; it has no OS interfaces like Control Panel, Server Manager, etc. on Windows Server. However, it does do some of the things, in the cloud, that conventional operating systems do, such as managing storage and devices and providing a run-time environment for applications, which is hosted by the Fabric Controller.

Where does Windows Server come in? You might remember that back in December 2009, Microsoft brought the Windows Server and Windows Azure teams together to form the Windows Server and Cloud Division within the Server and Tools Business that was led by Bob Muglia until the recent reorganization and is now under the wing of Satya Nadella. The fact that these two products are part of the same division should be a clue that they’re closely related. And in fact, if you take a look inside an instance of Windows Azure, you’ll find that they’re more closely related than you might have guessed.

So, after all the angst, we find that the operating system with which you interact in an Azure environment isn’t some brand-new, mysterious OS after all. It’s a virtualized version of Windows Server 2008 that has been preconfigured with a specific amount of resources (CPU, RAM, and storage) and delivered to you for a monthly fee. IT pros can breathe a sigh of relief; if you’re a Windows Server 2008 admin, you already know how to manage the OS that’s “visible” in Azure. Developers use the same tools and programming languages to create applications for Azure. The big difference is that the Fabric Controller manages the cloud environment, so applications must be structured with that in mind. For the nitty-gritty details about those differences, check out IaaS, PaaS and the Windows Azure Platform by Keith Pijanowski.


Kenneth van Surksum (@kennethvs) reported Microsoft releases beta of Assessment Planning Toolkit 6.0 in a 5/31/2011 post to the CloudComputing.info blog:

Microsoft has released a beta of the next version of its capacity planning tool, called Microsoft Assessment & Planning (MAP) Toolkit. This version is the follow-up to version 5.5, released at the beginning of this year, which introduced assessment for migration to the Windows Azure and SQL Azure platforms.

Version 6.0 will include assessment and planning for evaluating workloads for public and private cloud platforms, identifying the workload and estimating the infrastructure size and resources needed for both Windows Azure and Hyper-V Fast Track. Besides that, this version will include support for assessment of Microsoft’s Software as a Service (SaaS) offering for Office, called Office 365, enhanced VMware inventory, and Oracle schema discovery and reporting for migration to SQL Server. Readiness assessment for migration to Internet Explorer 9 is also included.


The beta review period will run through mid-July 2011; the beta is accessible through Microsoft Connect.


Todd Hoff described an Awesome List of Advanced Distributed Systems Papers in 5/31/2011 post to the High Scalability blog:

For his CS 525 Spring 2011 Advanced Distributed Systems class, Dr. Indranil Gupta has collected an incredible list of resources on distributed systems. His research group is also doing some interesting work.

The various topics include: Before there Were Clouds, Cloud Computing, P2P Systems, Basic Distributed Computing Concepts, Sensor Networks, Overlays and DHTs, Cloud Programming, Cloud Scheduling, Key-Value Stores, Storage, Sensor Net Routing, Geo-Distribution, P2P Apps, In-network processing, Epidemics, Probabilistic Membership Protocols, Distributed Monitoring and  Management, Publish-Subscribe/CDNs, Measurement Studies, Old Wine: Stale or Vintage?, In Byzantium, Cloud Pricing, Other Industrial Systems, Structure of Networks, Completing the Circle, Green Clouds, Distributed Debugging, Flash!, The Middle or the End?, Availability-Aware Systems, Design Methodologies, Handling Stress, Sources of unreliability in networks, Handling Stress, Selfish algorithms, Security, Economic Theory, The future of sensor nets?, The End-to-End Approach, Automatic Computing and Inference, Caching, Classical Algorithms, Topology and Naming, Practical theory perspectives, Modular Systems.

That's just the list of topics! For every topic there's the slide deck used to teach the class, a main list of papers and a second list of optional papers. So there's a lot to choose from. Happy reading! If any of the papers really stand out for you, please share.


Lee Badger, Tim Grance, Robert Patt-Corner and Jeff Voas coauthored an 84-page DRAFT Cloud Computing Synopsis and Recommendations: Recommendations of the National Institute of Standards and Technology (NIST Special Publication 800-146) in May 2011. From the Executive Summary:

Cloud computing allows computer users to conveniently rent access to fully featured applications, to software development and deployment environments, and to computing infrastructure assets such as network-accessible data storage and processing.

This document reviews the NIST-established definition of cloud computing, describes cloud computing benefits and open issues, presents an overview of major classes of cloud technology, and provides guidelines and recommendations on how organizations should consider the relative opportunities and risks of cloud computing. Cloud computing has been the subject of a great deal of commentary.

Attempts to describe cloud computing in general terms, however, have been problematic because cloud computing is not a single kind of system, but instead spans a spectrum of underlying technologies, configuration possibilities, service models, and deployment models. This document describes cloud systems and discusses their strengths and weaknesses.

Depending on an organization's requirements, different technologies and configurations are appropriate.

To understand which part of the spectrum of cloud systems is most appropriate for a given need, an organization should consider how clouds can be deployed (deployment models), what kinds of services can be provided to customers (service models), the economic opportunities and risks of using cloud services (economic considerations), the technical characteristics of cloud services such as performance and reliability (operational characteristics), typical terms of service (service level agreements), and the security opportunities and risks (security).

Deployment Models. A cloud computing system may be deployed privately or hosted on the premises of a cloud customer, may be shared among a limited number of trusted partners, may be hosted by a third party, or may be a publically accessible service, i.e., a public cloud. Depending on the kind of cloud deployment, the cloud may have limited private computing resources, or may have access to large quantities of remotely accessed resources. The different deployment models present a number of tradeoffs in how customers can control their resources, and the scale, cost, and availability of resources.

Service Models. A cloud can provide access to software applications such as email or office productivity tools (the Software as a Service, or SaaS, service model), or can provide a toolkit for customers to use to build and operate their own software (the Platform as a Service, or PaaS, service model), or can provide network access to traditional computing resources such as processing power and storage (the Infrastructure as a Service, or IaaS, service model). The different service models have different strengths and are suitable for different customers and business objectives. Generally, interoperability and portability of customer workloads is more achievable in the IaaS service model because the building blocks of IaaS offerings are relatively well-defined, e.g., network protocols, CPU instruction sets, legacy device interfaces.

Economic Considerations. In outsourced and public deployment models, cloud computing provides convenient rental of computing resources: users pay service charges while using a service but need not pay large up-front acquisition costs to build a computing infrastructure. The reduction of up-front costs reduces the risks for pilot projects and experimental efforts, thus reducing a barrier to organizational flexibility, or agility. In outsourced and public deployment models, cloud computing also can provide elasticity, that is, the ability for customers to quickly request, receive, and later release as many resources as needed. By using an elastic cloud, customers may be able to avoid excessive costs from overprovisioning, i.e., building enough capacity for peak demand and then not using the capacity in non-peak periods. Whether or not cloud computing reduces overall costs for an organization depends on a careful analysis of all the costs of operation, compliance, and security, including costs to migrate to and, if necessary, migrate from a cloud.
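As an editorial aside, the overprovisioning trade-off can be made concrete with a quick back-of-the-envelope calculation; every number below is hypothetical and is not drawn from the NIST draft:

    # Hypothetical numbers, purely to illustrate the elasticity argument above.
    HOURS_PER_YEAR = 8760
    peak_servers = 100                  # capacity needed only during peaks
    baseline_servers = 20               # capacity needed the rest of the time
    peak_hours = 0.10 * HOURS_PER_YEAR  # assume load peaks 10% of the year
    off_peak_hours = HOURS_PER_YEAR - peak_hours

    owned_cost_per_server_hour = 0.50   # amortized hardware + operations
    cloud_cost_per_server_hour = 0.75   # pay-per-use premium

    # Build for peak and keep it running all year:
    overprovisioned = peak_servers * HOURS_PER_YEAR * owned_cost_per_server_hour

    # Elastic: pay only for the capacity actually used each hour:
    elastic = (peak_servers * peak_hours + baseline_servers * off_peak_hours) * cloud_cost_per_server_hour

    print(f"Overprovisioned: ${overprovisioned:,.0f}")   # $438,000
    print(f"Elastic:         ${elastic:,.0f}")           # about $184,000

With these invented inputs the elastic approach costs less despite its higher hourly rate, but the comparison flips if the premium is larger or the load is steady, which is exactly the "careful analysis of all the costs" the NIST authors recommend.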

Operational Characteristics. Cloud computing favors applications that can be broken up into small independent parts. Cloud systems generally depend on networking and hence any limitations on networking, such as data import/export bottlenecks or service disruptions, reduce cloud utility, especially for applications that are not tolerant of disruptions.

Service Level Agreements (SLAs). Organizations should understand the terms of the SLA, their responsibilities, and those of the service provider, before using a cloud service.

Security. Organizations should be aware of the security issues that exist in cloud computing and of applicable NIST publications such as NIST Special Publication (SP) 800-53. As complex networked systems, clouds are affected by traditional computer and network security issues such as the needs to provide data confidentiality, data integrity, and system availability. By imposing uniform management practices, clouds may be able to improve on some security update and response issues. Clouds, however, also have potential to aggregate an unprecedented quantity and variety of customer data in cloud data centers. This potential vulnerability requires a high degree of confidence and transparency that cloud providers can keep customer data isolated and protected. Also, cloud users and administrators rely heavily on Web browsers, so browser security failures can lead to cloud security breaches. The privacy and security of cloud computing depend primarily on whether the cloud service provider has implemented robust security controls and a sound privacy policy desired by their customers, the visibility that customers have into its performance, and how well it is managed.

Inherently, the move to cloud computing is a business decision in which the business case should consider the relevant factors, including readiness of existing applications for cloud deployment, transition costs and life-cycle costs, maturity of service orientation in existing infrastructure, and other factors such as security and privacy requirements.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

James Staten asserted Getting private cloud right takes unconventional thinking in a 5/31/2011 post to his Forrester Research blog:

image Recent Forrester inquiries from enterprise infrastructure and operations (I&O) professionals show that there's still significant confusion between infrastructure-as-a-service (IaaS) private clouds and server virtualization environments. As a result, there are a lot of misperceptions about what it takes to get your private cloud investments right and drive adoption by your developers. The answers may surprise you; they may even be the opposite of what you're thinking.

image From speaking with Forrester clients who have deployed successful private clouds, we've found that your cloud should be smaller than you think, priced cheaper than the ROI math would justify and actively marketed internally - no, private clouds are not a Field of Dreams. Our latest report, "Q&A: How to Get Private Cloud Right" details this unconventional thinking and you may find that internal clouds are much easier than you think.

First and foremost, if you think the way you operate your server virtualization environment today is good enough to call a cloud, you are probably lying to yourself. Per the Forrester definition of cloud computing, your internal cloud must be:

  1. Highly standardized - meaning that the key operational procedures of your internal IaaS environment (provisioning, placement, patching, migration, parking and destroying) should all be documented and conducted the same way every time.
  2. Highly automated - and to make sure the above standardized procedures are done the same way every time, you need to take these tasks out of error-prone human hands and turn them over to automation software.
  3. Self-service to developers - We've found that many I&O pros are very much against this concept for fear that it will lead to chaos in the data center. But the reality is just the opposite because of 1 and 2. When you standardize what can be deployed into the cloud and how, you eliminate the risk of chaos. 
  4. Shared and metered - for your internal cloud to be cost effective and have a strong ROI you need it to be highly utilized - much more so than your traditional virtualization environment. And the way to get there is to share a single cloud among all departments inside your company. And the way to cost-justify the cloud is to at least track everyone's consumption, if not to charge back for it (a minimal consumption-tracking sketch follows this list).
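To make the "track everyone's consumption" point concrete, here is a minimal metering-and-chargeback sketch in Python; the departments, usage records and internal rate are invented for illustration and are not from the Forrester report:

    # Minimal consumption-tracking sketch; usage records and the rate are invented.
    from collections import defaultdict

    RATE_PER_VM_HOUR = 0.12   # hypothetical internal chargeback rate

    usage_records = [
        {"dept": "Marketing", "vm_hours": 1200},
        {"dept": "Finance",   "vm_hours": 300},
        {"dept": "Marketing", "vm_hours": 450},
    ]

    charges = defaultdict(float)
    for record in usage_records:
        charges[record["dept"]] += record["vm_hours"] * RATE_PER_VM_HOUR

    for dept, amount in sorted(charges.items()):
        print(f"{dept}: ${amount:,.2f}")   # Finance: $36.00, Marketing: $198.00

Even if you never send an invoice, this kind of per-department rollup is the "metered" evidence that justifies the shared cloud.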

Our survey data and discussions with clients show that only 6 percent of enterprise I&O shops operate their virtualized environments at this level of sophistication. So if you aren't here yet, you aren't alone.

There's much more to getting a private cloud right that is covered in the report. Forrester researcher Lauren Nelson and I will be leading a discussion on this important topic on June 9 at Forrester's IT Forum EMEA in Barcelona. We hope you will join us.

Forrester ForrSights surveys show that 29 percent of I&O shops have put a high or critical priority on building a private cloud this year. You can successfully deploy and operate a private cloud, whether you start with a cloud solution or build one yourself, but ignoring these truths about IaaS environments will keep success at bay.


James Downey (@james_downey) offered a Review of Visible Ops Private Cloud in a 5/31/2011 post:

image A small book of deep insight, Visible Ops Private Cloud tackles the why and how of moving enterprise IT from virtualization to private cloud. Private cloud—better tailored, less costly, more secure than public cloud—means automation, dynamic workload management, and a service approach to IT consumption.

image Throughout the book, this concept of service takes center stage. To gain the benefits of a private cloud, the authors argue, enterprise IT must offer to the business a service catalog of standardized offerings—bundles of compute power, memory, networking, and service—tailored and continuously adapted to align with business needs. From this catalog, users shall make one-touch orders, orders that through rules and automation lead to the dynamic provisioning of IT services, moving the business rapidly from idea to reality.

The private cloud in Visible Ops Private Cloud goes beyond technology; it represents a vision of extraordinary IT efficiency and value creation. Enterprise IT achieves all this through the discipline of process, the rigorous formulation, documentation, adaptation, and adherence to policies and procedures around service standardization and delivery.

The four practical steps promised by the book’s subtitle depict a journey to ever greater process discipline and efficiency. Authors Andi Mann, Kurt Milne, and Jeanne Morain effectively distill lessons learned from leading organizations, making the book remarkably concrete and comprehensive for its size.

However, as I consider the importance of the service catalog, I wonder whether the private cloud as described in Visible Ops Private Cloud produces the kind of services that business users should care about. As a step beyond virtualization, this private cloud produces servers—virtual, dynamic, and effectively managed servers. But users do not so much care about servers, nor even about IT workload management. Rather, they want solutions to their problems. An effective IT department understands the business well enough to analyze the problem, see its connection to business processes, and collaborate with users in the design of creative solutions.

So while I appreciate the efficiency gains that a private cloud might provide for the management of IT infrastructure, I’d be concerned that basing a service catalog around this infrastructure would mean speaking in a language that only IT understands. The private cloud, if it is to have business meaning, must be framed as a solution to specific business problems, rather than just efficient IT management.

That being said, this book’s focus on process and IT efficiency offers tremendous food for thought. Take this book as a guide to action, though keep in mind that the subtitle reads four practical steps, not four easy steps.

Strangely, James didn’t include a link to the book; here’s the Amazon page.


John Considine asserted “Hybrid clouds are achieving almost universal buy-in” as a deck for his Why Cloud Federation Requires Layer-2 Connectivity post of 5/31/2011:

Hybrid clouds are achieving almost universal buy-in as the way enterprises use the cloud. As we’ve described previously, the hybrid model federates internal and external resources so customers can choose the most appropriate match for workload requirements. The approach is already transforming enterprise computing, enabling a new generation of dynamic applications and deployments, such as:

  • Using multiple clouds for different applications to match business needs
  • Allocating components of an application to different environments (e.g., compute vs database tiers), whether internal or external (“application stretching”)
  • Moving an application to meet requirements at specific stages in its lifecycle, from early development through UAT, scale testing, pre-production and ultimately full production scenarios
  • Moving workloads closer to end users across geographic locations, including user groups within the enterprise, partners and external customers
  • Meeting peak demands efficiently in the cloud while the low steady-state is handled internally

While everybody’s talking about the hybrid cloud, making it work is another story. Enterprise deployment can require extensive reconfiguring to adapt a customer’s internal environment to a given cloud. The result, when it’s finally running, is a hybrid deployment limited to the customer’s internal infrastructure and one particular cloud, for one particular application.

Most cloud architectures are built with layer-3 (routing) topologies, where each cloud is a separate network with its own addressing scheme and set of attributes. This means that all address settings for applications deployed to the cloud have to be changed to those assigned by the cloud provider. It also means that applications and services running internally that need to interact with the cloud have to be updated to match the cloud provider’s requirements. The result is lots of re-configuring and re-architecting so the organization’s core network can communicate with the new external resources – exactly the opposite of the agile environment that cloud computing promises to deliver.
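To make that re-addressing burden concrete, here is a small, purely hypothetical Python sketch: an application's settings reference internal addresses, and under a routed layer-3 model every reference has to be rewritten with provider-assigned addresses (all addresses below are invented):

    # Hypothetical illustration of layer-3 re-addressing; all addresses are invented.
    internal_config = {
        "db_host":   "10.1.2.15",   # addresses used inside the corporate data center
        "ldap_host": "10.1.2.20",
    }

    # Addresses assigned by the cloud provider's routed (layer-3) network
    cloud_assigned = {"10.1.2.15": "172.31.44.7", "10.1.2.20": "172.31.44.9"}

    # Every setting that references an internal address must be rewritten per cloud
    layer3_config = {name: cloud_assigned[addr] for name, addr in internal_config.items()}
    print(layer3_config)   # {'db_host': '172.31.44.7', 'ldap_host': '172.31.44.9'}

    # With a layer-2 (bridged) extension, servers keep their original addresses,
    # so internal_config can be deployed to the cloud unmodified.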

In our discussions with enterprise customers and technology leaders, we’re now seeing a broad recognition that cloud federation requires layer-2 (bridging) connectivity. We’ve always believed that layer-2 is the right way to enable cloud federation. This week’s announcement of Cloud Bridge by Citrix is a confirmation that tight network integration is critical for successful cloud deployments.  Although it’s great to see others now starting down the path of better cloud networking, it is critical that enterprises realize that this level of network integration also requires heightened security for cloud deployments – remember that you are now blending the cloud networks with your internal networks.  This is why CloudSwitch has developed a comprehensive solution that not only provides full network control independent of what networking gear the cloud provider has chosen, but also secures and isolates customers’ data and communications completely through our Cloud Isolation Technology™.

In contrast to layer-3, layer-2 networking is location-independent, allowing the network in the cloud to become a direct extension of the network in the data center. It does this by preserving IP and MAC addresses so that all servers have the same addresses and routing protocols, wherever they physically run. Users can select where they want to run their applications, locally or in the cloud, without the need to reconfigure their settings for different environments.

Don’t Change Anything
imageCloudSwitch is unique in providing layer-2 connectivity between the data center and the cloud, with innovation that resolves previous addressing and security challenges. Our Cloud Isolation Technology automatically creates a layer-2 overlay network that encrypts and encapsulates the network traffic in the cloud as a seamless extension of the internal environment. The customer has full control over the cloud network and server addressing, even in clouds that don’t natively support this capability. No configuration changes are required. You don’t have to update router or firewall settings for every subnet or cloud deployment. You don’t have to change address settings, or keep up with changes in the cloud providers’ networks – everything “just works.”
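For readers who want a feel for what an encrypted layer-2 overlay does mechanically, here is a conceptual Python sketch of encrypting a raw Ethernet frame and wrapping it for transport over an ordinary tunnel. It illustrates the general technique only, not CloudSwitch's Cloud Isolation Technology; the tunnel header format is invented and the example assumes the third-party 'cryptography' package is installed:

    import struct
    from cryptography.fernet import Fernet   # assumption: 'cryptography' package available

    key = Fernet.generate_key()               # shared between data center and cloud endpoints
    cipher = Fernet(key)

    def encapsulate(eth_frame: bytes) -> bytes:
        """Encrypt a raw layer-2 frame and prepend a small (invented) tunnel header
        so it can be carried inside an ordinary UDP packet to the cloud."""
        encrypted = cipher.encrypt(eth_frame)
        return struct.pack("!HI", 2, len(encrypted)) + encrypted   # version, length

    def decapsulate(packet: bytes) -> bytes:
        """Strip the tunnel header and decrypt, recovering the original frame so the
        guest keeps its original MAC and IP addressing end to end."""
        _version, length = struct.unpack("!HI", packet[:6])
        return cipher.decrypt(packet[6:6 + length])

    # Toy Ethernet frame: destination MAC, source MAC, EtherType (IPv4), payload
    frame = bytes.fromhex("aabbccddeeff" "112233445566" "0800") + b"...ip packet..."
    assert decapsulate(encapsulate(frame)) == frame

Because the inner frame travels intact, whatever is inside it (IP addresses, ARP, broadcast traffic) behaves as if both endpoints sat on the same LAN, which is the property the preceding paragraph describes.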

While layer-2 connectivity is essential for full integration of the hybrid model, some companies and applications will still want to use layer-3 routing for their cloud deployments.  Some practical applications for layer-3 connectivity include:

  • Cloud-only networks – providing access to the tiers of an application running in a cloud-only network
  • Remote access to cloud resources – VPN services for remote developers or users, branch office integration with the cloud resources where different network settings are required
  • Protected networks – for cases where the enterprise wants to centrally control who can access a specific network (utilizing their core switches and routers)

Keep in mind though, that most of these layer-3 deployments have use for layer-2 connectivity in the background as well.  For the cloud-only networks, other network tiers in the same deployment can benefit from a layer-2 connection back to the primary data center for application and database tiers.  For remote access deployments, management, operation, and maintenance for the cloud resources is greatly simplified by having a layer-2 connection to the data center in addition to the layer-3 access for remote users.

The CloudSwitch recommendation, and the way we’ve architected our product, is to offer layer-2, with support for layer-3 as an option. Our customers can choose to interact with their servers in the cloud using an automated layer-2 connection, or use layer-3 to create specific rules and routing to match their application and even infrastructure design.  We believe that enterprises should have the freedom to create arbitrary networks and blend layer-2 and layer-3 deployments as they need, independent of the networking gear and topologies selected both by the cloud and their own IT departments.

Making Federation Work
For hybrid computing to succeed, the cloud needs to appear like a resource on the customer network, and an application running in the cloud needs to behave as if it’s running in the data center. The ability to federate these disparate environments by mapping the data center configuration to the cloud can only happen at layer-2 in the networking stack. With innovations that make the cloud a seamless, secure extension of the internal environment, CloudSwitch helps customers turn the hype around hybrid cloud into reality.

John is Co-Founder & CTO of Cloudswitch.


<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) reminded us all on 5/31/2011 that The State of Cloud Security

image …is still firewalls and SSL.

Cloud: The “revenge of (overlay) VPN and PKI”

/Sad Panda


<Return to section navigation list> 

Cloud Computing Events

• Susan Mernit wants programmers to Come out for Code for Oakland: June 4, 2011 Kaiser Center--and build apps for our town! according to this 5/20/2011 post (missed when posted):

imageVisit codeforoakland.org to sign up!

Join us on June 4 for Code for Oakland, the first ever Oakland hackathon/bar camp dedicated to building applications that meet the needs of our local community.

This event is being organized by Oakland Local, Urban Strategies Council, Innovative Oakland,  Code for America and a slew of others.

Not a software developer but have great ideas? You can help!

What’s going to happen? 

Visit codeforoakland.org to sign up!

The Knight Foundation and the FCC came to Oakland, CA in April 2011 to announce a major new tech competition called Apps for Communities that will award $100,000 in prizes to reward mobile and web-based applications that use government and public data to "deliver personalized, actionable information to people that are least likely to be online."

So, that’s where Code for Oakland comes in.  We’re pulling together a low-cost one-day Bar Camp that will bring government officials, developers, designers, and interested parties together for a day that will be devoted to looking at local datasets of use to people in Oakland, and that gives teams a chance to talk through, brainstorm and prototype their ideas before the competition closes on July 11.

Come win prize money for your ideas! Win prize money to support building an application to submit to Apps for Communities—over $2,500 in potential awards on the day.

If you are a coder, designer, developer, database guru, hacker then read about the bar camp event below...

If you are not a coder but represent local government, the nonprofit community or you are an Oakland resident and have ideas that you would love to see built into powerful applications to help your community, then we invite you to join our Community Listening Events.

What will happen at the bar camp event?
This will be a day of talking, brainstorming, planning. Optimally,  we hope to:

  • Discuss some of the local government data sets competition entrants might work with
  • Build connections between potential collaborators who want to build useful apps for local people and need to fill out their team/skills
  • Encourage the exchange of experiences, expertise and ideas between those involved in leading open government data initiatives around the East Bay and nearby
  • Prototype ideas: We hope there will be plenty of space for developers to hack on things — from refining core bits and pieces of technology to rapid prototyping of new ideas. …

Read the rest here.

Susan is the founder of Oakland Local. She is also a circuit rider for The Community Information Challenge, a program of The John S and James L Knight Foundation, and a consultant to non-profit and community organizations.


• Elisa Flasko (and Ben Zimmerman will present an MSDN Webcast: Using DataMarket to Explore the Power of Local Weather Data (Level 200) on 6/14/2011 at 8:00 AM PDT:

imageEvent ID: 1032487803
  • Language(s): English.
  • Product(s): Windows Azure.
  • Audience(s): Pro Dev/Programmer.

In this webcast, we explore how you can use local weather data to create compelling applications and revolutionize business intelligence. We focus on Weather Central's differentiating science, the ease of data integration offered through Windows Azure DataMarket, and several use case applications. Weather impacts every aspect of how we live, work, and play and will continue to be the focus of many next-generation technologies. Streamlined access to local weather information will empower product development ranging from consumer experiences like www.myweather.com to complex machine learning algorithms.
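For developers who want to experiment before the webcast, DataMarket datasets are exposed as OData feeds secured with HTTP Basic authentication against your account key. The sketch below is a guess at what such a query might look like; the feed URL, entity set and field names are placeholders, not the actual Weather Central offer:

    import requests  # assumption: the 'requests' package is available

    ACCOUNT_KEY = "<your DataMarket account key>"
    # Placeholder feed URL; the real dataset path comes from the offer's DataMarket page.
    FEED = "https://api.datamarket.azure.com/WeatherCentral/LocalWeather/v1/Forecast"

    resp = requests.get(
        FEED,
        params={"$filter": "PostalCode eq '98052'", "$top": "5"},
        headers={"Accept": "application/json"},
        auth=("", ACCOUNT_KEY),   # DataMarket uses HTTP Basic auth with the account key as password
    )
    resp.raise_for_status()
    for row in resp.json().get("d", {}).get("results", []):   # assumes OData v2 verbose-JSON shape
        print(row)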

Presenters: Ben Zimmerman, Director of Business Development, Weather Central, LP and Elisa Flasko, Program Manager, Microsoft Corporation

Ben Zimmerman is the director of Business Development for Weather Central and is an expert in weather technology. His primary focus is developing and implementing strategic solutions that enable companies to improve operations and increase profit with the integration of next-generation weather data and services. Ben's areas of expertise include renewable energy, public safety, telematics, GIS, consumer applications, and corporate operations. He holds a bachelor of science degree in Atmospheric and Oceanic Sciences from Iowa State University and has extensive experience with climate reanalysis, forecast modeling, radar interpretation, remote sensing, and data integration.

Register


Joe Panettieri listed Ingram Micro Cloud Summit: Eight Trends Worth Watching in a 5/31/2011 post to the TalkinCloud blog:

image The Second Annual Ingram Micro Cloud Summit is set to run June 1-2 in Phoenix. So what’s on tap for the summit — and what cloud-related surprises should VARs and managed services providers expect? Here are eight trends and themes that Talkin’ Cloud expects to emerge at the conference.

image 1. Managed Services Meets Cloud Computing: There’s a reason why Ingram Micro VP of Managed Services and Cloud Computing Renee Bergeron has such a distinct business title. She joined Ingram Micro in September 2010. And since that time, the lines between Ingram Micro Seismic (a managed services push) and Ingram Micro Cloud have blurred. My best guess: The line will ultimately fade away…

image2. Cloud Aggregation Update: Bergeron launched the Ingram Micro Cloud portal in November 2010. Fast forward to February 2011, and Bergeron positioned Ingram Micro as the distribution industry’s leading cloud aggregator. The idea is for Ingram to be the one-stop shop for VARs and MSPs seeking third-party cloud services. Early partners include Amazon.com, Microsoft, Rackspace and Salesforce.com.

Still, Ingram’s cloud aggregator strategy faces plenty of challenges. Can Ingram actually make money aggregating third-party cloud applications? A few weeks ago, Bergeron told me the cloud effort is a for-profit strategy… not a charity.

But there’s also the question of competition. Just about every major distributor has announced a cloud push and/or a cloud aggregator strategy. Recent examples include:

Like we said: Distributors are making lots of cloud noise. We’ll see if Ingram can continue to stand out from the crowd.

3. MSP Software: Ingram has a longstanding RMM (remote monitoring and management) software relationship with Nimsoft. To the best of my knowledge, Ingram also resells Level Platforms though it no longer hosts that platform for MSPs. Ingram also has a SaaS relationship with Kaseya in Australia. My best guess: Ingram will make at least two moves  this week with MSP software providers in North America…

4. The Countdown: I suspect Microsoft will release Office 365 to the masses within the next few weeks — before the Microsoft Worldwide Partner Conference (WPC) begins July 10 in Los Angeles. The SaaS suite — including everything from Exchange Online to SharePoint Online — will start at $6 per user per month. For VARs and MSPs, it’s time to get educated. And there will be plenty of Office 365 talk at Ingram Micro Cloud Summit.

5. A Cloud Bill of Rights: Some VARs and MSPs are calling on the industry to create a cloud bill of rights for the channel. Among the items mentioned to me, channel partners should have the right to…

  • Control end-customer billing and pricing for cloud services.
  • Maintain account control within cloud partner programs, blocking the cloud provider from directly contacting the VARs’ end-customers.
  • Control branding for third-party cloud services.
  • What else? I’ll be asking Ingram Micro Cloud Summit attendees for their thoughts.

6. The Business Model: Much like the managed services market before it, VARs and MSPs are asking how to set up pricing, sales force compensation and service level agreements for cloud solutions. I suspect IPED veteran Ryan Morris, now running Morris Management Partners, will be on hand to share some guidance.

7. Mergers and Acquisitions: Some Ingram cloud partners have been acquired. For instance, Oak Hill Capital Partners last week acquired Intermedia, the hosted Exchange specialist. Intermedia will be on hand at Ingram Micro Cloud Summit and the company remains committed to its channel partners. As I tour the conference I will certainly wonder: Who’s next on the M&A front?

8. Get Moving: The bottom line… There’s lots of cloud computing noise but the market is real. Our own Talkin’ Cloud 50 — which tracks the top VARs and MSPs navigating cloud computing — shows cloud computing revenues growing nearly 50 percent in 2010 vs. 2009. (The complete Talkin’ Cloud 50 report will debut on this site within days.) Savvy VARs and MSPs are already making their cloud bets. And we’ll get an update on those bets during the Ingram Micro Cloud Summit.

That’s all for now. I land soon in Phoenix for the summit. And Talkin’ Cloud will be blogging live throughout the conference.


Ralph Squillace reported Free Windows Azure training with Scott Klein in San Francisco, June 13-14 in a 5/30/2011 post:

image Hi all. If you're in the Bay Area and want to get up to speed on Windows Azure -- whether you want to learn how to use it or whether you want to validate your own approach! -- MVP Scott Klein of Blue Syntax Software (blog) (author and co-author of many books including Pro SQL Azure) is offering a free two-day hands-on training course in all of Windows Azure in downtown San Francisco on June 13-14 (registration and information here).

image722322222I'll also be presenting and discussing the forthcoming release of the Windows Azure AppFabric June CTP including the AppFabric Development Platform and show you how to get your distributed cloud-based applications up and running quickly. In addition, my colleague Brian Swan will also be there to discuss using PHP and Odata and Java in Windows Azure.

imageScott has tons of experience to help you understand Azure, its services, and get you started building applications over two days -- few could be better to learn from. I am really looking forward to it. If you're in the area and interested, please come.


1105 Media announced tracks and sessions for Visual Studio Live! at the Microsoft Redmond Campus on 10/17 through 10/21/2011:

Check out what we have planned for the FIVE action-packed days at Microsoft corporate headquarters in Redmond! (Fire up your scrollbar — it's a long list!)

TRACKS:

  • SILVERLIGHT/WPF
  • DEVELOPING SERVICES
  • VISUAL STUDIO 2010/.NET 4
  • WEB/HTML 5
  • DATA MANAGEMENT
  • LIGHTSWITCH
  • PROGRAMMING PRACTICES
  • CLOUD COMPUTING
  • MOBILE DEVELOPMENT

SESSIONS:

  • Microsoft Session; Silverlight Intense Intro - Billy Hollis; AppFabric, Workflow and WCF - The Next Generation Middleware - Ron Jacobs; If not IaaS, When Should I Use Azure VM Role? - Eric Boyd; Best Kept Secrets in Visual Studio 2010 and .NET 4.0 - Deborah Kurata
  • Microsoft Session; XAML: Achieving Your Moment Of Clarity - Miguel Castro; What's New in WCF 4 - Ido Flatow; What is Microsoft Marketplace DataMarket? - Michael Stiefel; The LINQ Programming Model - Marcel de Vries
  • Microsoft Session; Fundamental Design Principles for UI Developers - Billy Hollis; Creating Scalable State Full Services Using WCF and WF - Marcel de Vries; Deciding Between Relational Databases and Tables in the Cloud - Michael Stiefel; NoSQL – Beyond the Key-Value Store - Robert Green
  • HTML5 and Internet Explorer 9: Developer Overview - Ben Hoelting; Bind Anything to Anything in XAML - Rockford Lhotka; AppFabric Caching: How It Works and When You Should Use It - Jon Flanders; Microsoft Session; How to Take WCF Data Services to the Next Level - Rob Daigneau
  • HTML 5 and Your Web Sites - Robert Boedigheimer; Microsoft Session; Building Native Mobile Apps with HTML5 & jQuery - Jon Flanders; Azure Platform Overview - Vishwas Lele; ALM - Brian Randell
  • Styling Web Pages with CSS 3 - Robert Boedigheimer; What's new and cool in Silverlight 5 - Pete Brown; Getting Started with Windows Phone 7 - Scott Golightly; Building Azure Applications - Vishwas Lele; Visual Studio - Brian Randell
  • The Best of jQuery - Robert Boedigheimer; Silverlight, WCF RIA Services, and Your Business Objects - Deborah Kurata; Windows Azure and Windows Phone - Creating Great Apps - Scott Golightly; Building Compute-Intensive Apps in Azure - Vishwas Lele; Sponsored Session - Details TBA
  • ASP.NET MVC, Razor, and jQuery - The New Face of ASP.NET - Ido Flatow; Light up on Windows 7 with Silverlight and WPF - Pete Brown; Handling Offline data in Silverlight and Windows Phone 7 - John Papa; Building and Running the Windows Azure Developer Portal - Mullins; Microsoft Session
  • WebMatrix and Razor - Rachel Appel; Bringing the Silverlight PivotViewer to your Applications - Tony Champion; CSLA 4 for WP7, Android, and iOS - Rockford Lhotka; Microsoft Session; Design for Testability: Mocks, Stubs, Refactoring, and User Interfaces - Ben Day
  • Orchard - Rachel Appel; MVVM in Practice aka "Code Behind" - Free WPF - Tiberiu Covaci; Working with Data on Windows Phone 7 - Sergey Barskiy; Microsoft Session; Team Foundation Server 2010 Builds: Understand, Configure, and Customize - Ben Day
  • Busy Developer’s Guide to (ECMA/Java)Script - Ted Neward; Radically Advanced Templates for WPF and Silverlight - Billy Hollis; Advanced patterns with MVVM in Silverlight and WP7 - John Papa; Microsoft Session; LS 202 - Andrew Brust
  • Getting Started with ASP.NET MVC - Philip Japikse; Using MEF to Develop Composable Applications - Ben Hoelting; WP7 Instrumentation - How to Learn from your App - Tony Champion; So Many Choices, So Little Time: Understanding Your .NET 4.0 Data Access Options - Leonard Lobel; Static Analysis in .NET - Jason Bock
  • Test Driving ASP.NET MVC - Philip Japikse; Patterns for Parallel Programming - Tiberiu Covaci; Microsoft Session; Using Code First (Code Only) approach with Entity Framework - Sergey Barskiy; Modern .NET Development Practices and Principles - Jason Bock

Note: There are several Windows Azure AppFabric sessions in the Developing/Services column.

Session details will be available next week.

Register now to lock in your place at Visual Studio Live! Redmond - last year we SOLD OUT and we don't want you to miss any of the action!

Register Now

Save $300 with your SUPER Early Bird discount code NX6W!

The main Visual Studio Live! site is here.

Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

• Klint Finley (@klintron) reported Heroku Gets Node.js and More in New Beta Version in a 5/31/2011 post to the ReadWriteCloud blog:

image Heroku, the platform-as-a-service provider that Salesforce.com acquired last year, has added Node.js to its existing Ruby offering as part of its new public beta called Celadon Cedar. Other new features include consolidated logging, real-time dynotype monitoring and instant roll-backs.

image The first question asked at the press and analyst Q&A session at Dreamforce on Salesforce.com's acquisition of Heroku was how long it would be until Heroku/Salesforce.com had a Node.js PaaS. We now have our answer.

Heroku has been a popular choice among Ruby developers and its name has become practically synonymous with PaaS, but it faces increased competition in the PaaS marketplace with companies like RedHat and VMware bringing Ruby-capable PaaSes online.

There are also several existing Node.js PaaSes, including Nodejitsu, Nodester, NodeSocket and dotCloud (which recently acquired DuoStack).

Heroku experienced a stumble when it suffered from the Amazon Web Services outage while its major competitor Engine Yard managed to stay online.

In short, Heroku is in a strong position and the new beta brings several important new features, but the company has its work cut out for it.


Herman Mehling asserted “New integration platform as a service (iPaaS) offerings aim to relieve the pain of SaaS and cloud integration, which has been so onerous that many organizations have pulled the plug on SaaS projects” as a deck for his Can Nascent iPaaS Solve Cloud and SaaS Integration Problems? article of 5/31/2011 for DevX:

imageNew integration platform as a service (iPaaS) offerings aim to relieve the pain of SaaS and cloud integration, which has been so onerous that many organizations have pulled the plug on SaaS projects.

The number one cloud and SaaS challenge for many developers and organizations might just surprise you. It's not security, lack of standards, or even reliability, but... dramatic drum roll... integration.

Recently, Gartner did a study of companies transitioning to SaaS. The study found that many businesses were actually pulling their data back out of cloud-based applications, so Gartner asked why.

The research firm asked 270 executives "Why is your organization currently transitioning from a SaaS solution to an on-premises solution?" For 56 percent of respondents, the number one reason was the unexpectedly significant requirements of integration.

More than half of the people who tried moving their businesses to a cloud-based application and pulled back did so because integrating those applications with the rest of their business proved too challenging to be worthwhile.

Based on this apparent pain point, Gartner has predicted that at least 35 percent of all large and midsize organizations worldwide will be using one or more integration platform as a service (iPaaS) offerings by 2015.

Of course, concerns about the integration challenges of adopting SaaS are not new. Another survey, done in 2009 by Saugatuck Technology, asked executives about their concerns regarding SaaS deployment and use. Thirty-nine percent of respondents predictably cited "data security and privacy concerns," 32 percent chose "integrating SaaS with existing enterprise applications," and 27 percent chose "integrating SaaS data with existing enterprise data." …

Read more: Next Page: iPaaS Tools for Cloud and SaaS Integration Woes


Savio Rodrigues (@SavioRodrigues) described the Cost effectiveness of Amazon RDS pay-per-usage software pricing in a 5/27/2011 post:

image Established software vendors face a difficult balancing act between meeting customer demands for pay-per-usage cloud pricing models and guarding against revenue erosion on traditionally priced offerings. If Amazon’s price for Oracle Database on RDS becomes the norm for price discrimination between traditional and pay-per-usage licenses, IT buyers could find themselves paying over a 100 percent premium for the flexibility of pay-per-usage pricing.

image Note, I am only using Oracle as an example here because the pricing of Amazon RDS for Oracle Database is public. This post intends to make no judgments on Amazon or Oracle’s price points whatsoever.

imagePay-per-use software pricing limited to entry level product
Amazon RDS for Oracle Database offers two price models, “License Included” or “Bring Your Own License (BYOL)”. The License Included metric is fancy terminology for pay-per-usage, and includes the cost of the software, including Oracle Database, underlying hardware resources and Amazon RDS management.

Three editions of Oracle Database are offered by Amazon, Standard Edition One (SE1), Standard Edition (SE) and Enterprise Edition (EE), listed in order of lowest to highest functionality.

It’s important to note that pay-per-use pricing is only offered on the lowest function edition, namely, Oracle Database SE1. This should not be a surprise as Oracle, like other established vendors, is still experimenting with pay-per-usage pricing models. Customers can also run Standard Edition One using a BYOL model. This fact, along with Oracle’s list pricing, helps us do some quick and interesting calculations.

Oracle Database SE1 software price-per-hour ranges from $0.05 to $0.80
The License Included and BYOL prices both include the cost of the underlying hardware resources, OS and Amazon RDS management. The only difference between the two options is the price of the Oracle Database software license.

This allows us to calculate the per hour cost of Oracle Database Standard Edition One as follows:

The Oracle list price for Oracle Database SE1 is $5,800 plus 22 percent, or $1,276, for software update, support and maintenance. As with most enterprise software, customers could expect a discount of 25 to 85 percent. For lower priced software like Oracle Database SE1, let’s assume a 50 percent discount, although most customers buying Oracle software are encouraged to enter into Unlimited License Agreements (ULAs), which frequently offer discounts at the higher end of the spectrum.

All told, Oracle Database SE1 after a 50 percent discount would cost a customer $3,538 (($5,800 + $1,276) x 50%) for 1 year or $4,814 (($5,800 + $1,276 + $1,276 + $1,276) x 50%) for 3 years on a single socket quad core machine like this low end Dell server. Note that Oracle doesn’t use their typical processor core factor pricing methodology for products identified as Standard Edition or Standard Edition One as they are targeted at lower performance servers.

A single socket quad core machine would offer the performance of somewhere between the Amazon “Double Extra Large DB Instance” and the “Quadruple Extra Large DB Instance”.

Consider the long term costs of pay-per-usage
Using “Double Extra Large DB Instance” pricing, with our calculated cost of $0.40/hr for the Oracle Database SE1 software license on Amazon, we can calculate a 1 year cost of $3,504 and a 3 year cost of $10,512. These figures are 1 percent lower and 118 percent higher, respectively, than the cost of licensing Oracle Database SE1 through Oracle for on-premises deployment or a BYOL deployment on Amazon RDS.
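The arithmetic above is easy to reproduce. The sketch below simply recomputes the article's figures in Python; the $5,800 list price, 22 percent support uplift, 50 percent discount and $0.40/hr software premium are the assumptions stated above, not independent price data:

    HOURS_PER_YEAR = 24 * 365                        # 8,760 hours

    # On-premises / BYOL: Oracle Database SE1 list price plus 22% annual support, 50% discount
    license_list = 5800.0
    support_per_year = license_list * 0.22           # $1,276
    discount = 0.50
    byol_1yr = (license_list + support_per_year) * discount        # $3,538
    byol_3yr = (license_list + 3 * support_per_year) * discount    # $4,814

    # Amazon RDS "License Included": ~$0.40/hr software premium on a Double Extra Large DB Instance
    rds_sw_per_hour = 0.40
    rds_1yr = rds_sw_per_hour * HOURS_PER_YEAR                     # $3,504
    rds_3yr = 3 * rds_1yr                                          # $10,512

    print(f"1 year:  BYOL ${byol_1yr:,.0f} vs. RDS ${rds_1yr:,.0f} ({rds_1yr / byol_1yr - 1:+.0%})")
    print(f"3 years: BYOL ${byol_3yr:,.0f} vs. RDS ${rds_3yr:,.0f} ({rds_3yr / byol_3yr - 1:+.0%})")
    # Output: roughly -1% for one year, roughly +118% over three years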

There are obviously multiple caveats to consider, like the ability to get lower or higher discounts from Oracle, or comparing with the “Quadruple Extra Large DB Instance” price point.

A customer that is unable to get a 50 percent discount from Oracle could save licensing costs by using Amazon’s pay-per-usage offering for Oracle Database SE1. For instance, with only a 25 percent discount from Oracle, the customer could save up to 34 percent on a 1 year basis, but stands to pay an extra 46 percent on a 3 year basis.

Comparing the cost of Oracle Database SE1 using traditional licensing on premises with Amazon’s pricing through RDS, it appears that customers should look hard at Amazon’s pay-per-usage offering for up to a 1 year term, but stick with Oracle’s traditional pricing model if the software is going to be used for the typical 3 to 5 year period that companies like to amortize costs over.

The obvious rebuttal to the above calculations would be that a customer electing for a pay-per-usage model would not necessarily run for 24 hours a day for a full year. While this is true, buyers should understand the long term cost implications before making short term decisions.


John Biggs reported Apple’s Cloud Product Officially Official And It’s Called iCloud in a 5/31/2011 post to the CrunchGear blog:

image Pop over to iCloud.com today and you’ll see a doomed web page. The domain, which redirects to Xcerion’s CloudMe software, is sitting on some prime real estate, namely Apple’s new iCloud service.

In a short release, Apple confirmed the existence and name:

Apple® CEO Steve Jobs and a team of Apple executives will kick off the company’s annual Worldwide Developers Conference (WWDC) with a keynote address on Monday, June 6 at 10:00 a.m. At the keynote, Apple will unveil its next generation software – Lion, the eighth major release of Mac OS® X; iOS 5, the next version of Apple’s advanced mobile operating system which powers iPad®, iPhone® and iPod touch®; and iCloud®, Apple’s upcoming cloud services offering.

image We’ve been hearing about the potential cloud services for months now and it seems the stars have finally aligned. The MobileMe service recently received some considerable upgrades to improve performance and stability and there has been oodles of talk about a potential music service in the cloud similar to Rdio or Spotify. That we now know it’s called iCloud, officially, is just icing on the cake.

imageWhat will iCloud include? It will probably be a considerable revamp of the Me.com services including calendar and email syncing. As TUAW notes, many parts of MobileMe will probably be available for free leaving us to wonder what the rest of the service will include.

We’ve also discovered that Apple is signing partners to offer what amounts to a mirrored version of your iTunes database, a service that will be considerably improved over current “locker” models used by Amazon and Google. However, there are currently plenty of those cloud-based sharing services on offer, which suggests Apple may have a trick or two up its sleeve.

This would probably also replace the nearly useless iDisk offering currently available with MobileMe. With competitors like Dropbox, the old ways just won’t cut it.

We’ll be there live on Monday June 6 but until then get out your prophesying hats and start prophesying in comments!


Martin Tantow (@mtantow) reported Project Olympus a Cloud IaaS Solution by Citrix to be Open Source in a 5/30/2011 post to the Cloud Times blog:

image “Project Olympus” is going to be launched soon by Citrix. The platform aims to help businesses build infrastructure for private cloud computing behind their own firewalls, as well as to support public clouds run by service providers. The announcement was made at Citrix Synergy 2011, held last week in San Francisco. This is just one of the announcements by Citrix, a company known for its virtual desktop infrastructure (VDI).

In its tireless efforts, Citrix made additional enhancements to its existing portfolio to bring VDI to even small-to-medium enterprises; it also hopes to run VDI on various portable devices such as wireless laptops and netbooks, mobile phones and tablet PCs.

At the recent OpenStack Conference held in Santa Clara, California, Project Olympus was born. It is based on OpenStack, a collaboration of various vendors and service providers, Citrix among them, that have come together to build a common open source platform for cloud computing.

Project Olympus will operate together with Citrix XenServer and will also support other virtualization platforms such as Microsoft’s Hyper-V and VMware’s vSphere. These platforms, together with Citrix XenDesktop, bring everything together to create a strong virtualization platform.

Sameer Dholakia, Vice President of Product Marketing for Data Center and Cloud Computing at Citrix, said while briefing reporters at the conference, “We’re serious about open, about giving people choice and leveraging the investments they have already made and so they don’t get locked into the legacy server virtualization.”

Just before the start of the conference, Mark Templeton, Citrix CEO, announced that they have just completed their acquisition of Kaviza. Kaviza, according to him, is the maker of “VDI-in-a-box,” which is expected to deliver cloud-hosted desktops to small-to-medium enterprises. He said, “It’s simple and easy to install and yet has all the capabilities and user experience that Citrix is famous for with Xen Desktop,” and added, “complexity is optional.”

Another announcement Citrix made is a new upgrade to Citrix Receiver, its widespread software that delivers PC images, data and enterprise applications to end-user devices across the IT business. Citrix proudly announced that Receiver now supports 1,000 Mac and other PCs, 149 various mobile phones, 37 tablet PCs and 10 client desktops. In this announcement Citrix focused on the consumerization-of-IT trend, which allows employees to use their personal gadgets and devices for work, so IT departments need to find ways to make that work.

GoToMeeting from Citrix has also been improved with collaboration features that can be used during a meeting. A beta of GoToMeeting now includes HD Faces plus high-definition audio and video conferencing via XenDesktop.

Paul Burrin, a Citrix vice president, said during the reporters’ briefing, “We want telepresence, we think it’s a fantastic capability but we can’t afford these high-end systems.” He also mentioned that Citrix hopes to target SMBs with this software.

Last of the announcements is a preview of Citrix HDX technology for improved performance in media-rich VDI environments, including 3-D graphics and better audio-video in XenDesktop and XenApp, along with improved multitasking.


<Return to section navigation list>