Thursday, July 21, 2011

Windows Azure and Cloud Computing Posts for 7/20/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 7/21/2011 at 4:30 PM PDT with new articles marked by Beth Massi, Richard Seroter, Steve Marx, Jeff Barr and Joel Foreman.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate to.


Azure Blob, Drive, Table and Queue Services

Bruce Kyle recommended that you Analyze Huge DataSets with New MapReduce for Windows Azure in a 7/20/2011 post to the US ISV Evangelism blog:

An iterative MapReduce runtime for Windows Azure, code-named Daytona from Microsoft Research, allows laboratories, small groups, and individual researchers to use the power of the cloud to analyze data sets on gigabytes or terabytes of data and run large-scale machine learning algorithms on dozens or hundreds of compute cores.

Project Daytona on Windows Azure is now available, along with a deployment guide, developer and user documentation, and code samples for both data analysis algorithms and client applications. This community technical preview (CTP) release consists of a ZIP file that includes our Windows Azure cloud service installation package along with the documentation and sample code.

Project Daytona is designed to support a wide class of data analytics and machine-learning algorithms. It can scale to hundreds of server cores for analysis of distributed data. Project Daytona was developed as part of the eXtreme Computing Group’s Cloud Research Engagement Initiative.

The download is available at Project Daytona: Iterative MapReduce on Windows Azure.


<Return to section navigation list>

SQL Azure Database and Reporting

The SQL Azure Service Dashboard reported [SQL Azure Database] [North Central US] [Yellow] SQL Azure Portal is unavailable on 7/20/2011:

Jul 19 2011 10:48PM We are currently experiencing an outage preventing access to SQL Azure Portal.

Functionality of SQL Azure Databases are not impacted

Jul 20 2011 12:25AM Normal service availability is fully restored for SQL Azure Portal.


The same problem occurred at all Microsoft data centers.


<Return to section navigation list>

MarketPlace DataMarket and OData

No significant articles today.


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Richard Seroter (@rseroter) reported a Big Week of Releases: My Book and StreamInsight v1.2 on 7/21/2011:

This week, Packt Publishing released the book BizTalk 2010: Line of Business Systems Integration. As I mentioned in an earlier post, I contributed three chapters to this book covering integration with Dynamics CRM 2011, Windows Azure AppFabric and Salesforce.com. The lead author, Kent Weare, wrote a blog post announcing the release, and you can also find it on Amazon.com now. I hope you feel inclined to pick it up and find it useful. [Emphasis added.]

In other “neat stuff being released” news, the Microsoft StreamInsight team released version 1.2 of the software. They’ve already updated the product samples on CodePlex and the driver for LINQPad. I tried the download, the samples, and the LINQPad update this week and can attest that everything installs and works just fine. What’s cool and new?

  • Nested event types. You can now do more than just define “flat” event payloads. The SI team already put a blog post up on this. You can also read about it in the Product Documentation.
  • LINQ improvements. You can join multiple streams in a single LINQ statement, group by anonymous types, and more (see the sketch after this list).
  • New performance counters. PerfMon counters can be used to watch memory being used, how many queries are running, average latency and more.
  • Resiliency. The most important improvement. Now you can introduce checkpoints and provide some protection against event and state loss during outages.
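
To make the LINQ improvements more concrete, here is a minimal, hypothetical sketch of joining two streams in a single LINQ statement. The payload types, field names and stream setup below are my own assumptions for illustration, not code from the StreamInsight samples:

// Namespace: Microsoft.ComplexEventProcessing.Linq
// Hypothetical payload types, not taken from the product samples.
public class Trade { public string Symbol; public double Price; }
public class News { public string Symbol; public string Headline; }

// Streams would normally come from input adapters; Create() is used here
// as it would be in a query template definition.
CepStream<Trade> trades = CepStream<Trade>.Create("trades");
CepStream<News> news = CepStream<News>.Create("news");

// StreamInsight 1.2 lets you join multiple streams in one LINQ statement
// and project the result into an anonymous type.
var enriched = from t in trades
               join n in news
               on t.Symbol equals n.Symbol
               select new { t.Symbol, t.Price, n.Headline };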

Also, I might as well make it known that I’m building a full StreamInsight course for Pluralsight based on version 1.2. I’ll be covering all aspects of StreamInsight and even tossing in some version 1.2 and “Austin” tidbits. Look for this to hit your Pluralsight subscription within the next two months.


Justin Beckwith (@JustinBeckwith) described Using MSBuild to deploy your AppFabric Application in a 7/20/2011 post:

As the hosting of applications moves from our local staging environments to the cloud, one of the areas that needs to improve is the ability to include deployment in our automated build processes. Using the June CTP AppFabric bits, Visual Studio does an excellent job of enabling developers to design, build, and deploy AppFabric applications. However, the current tools do not provide a way to integrate these tools into a standard, repeatable build process. The goal of this post is to outline the steps necessary to integrate automated AppFabric deployment into your build process, and show off some of the REST API features we’ve built into the Application Manager.

Before we get started, let’s run through a list of tools I’m using for this sample:

Since the goal of this post is to use MSBuild to deploy our AppFabric Application, you’re going to need to register for an account over at our labs site. To request access to the CTP follow these steps:

  • Sign in to the AppFabric Management Portal at http://portal.appfabriclabs.com/.
  • Choose the entry titled “Applications” under the “AppFabric” node on the left side of the screen.
  • Click on the “Request Namespace” button on the toolbar on the top of the screen.
  • You will be asked to answer a few questions before you can request the namespace.
  • Your request will be in a “pending” state until it gets approved and you can start using the CTP capabilities.

Using the REST API

The AppFabric Application Manager provides a useful RESTful API to automate most tasks available in the Application Manager. We are going to take advantage of the application lifecycle methods (start, stop, deploy, etc.) to write our custom task. To help you get started with the API, we’ve put together a ResourceAccessor.cs class to abstract some of the calls we’re making using the AtomPub protocol. For example, to get the details for an application you would instantiate the class using your namespace and management key:

// create a new instance of the Application Manager REST API wrapper
ResourceAccessor appManagerAPI = new ResourceAccessor(this.Namespace, this.ManagementKey);
// get some details about our application
ApplicationResource ar = appManagerAPI.GetApplication("myApplicationName");

This sample assumes you have an existing account on http://portal.appfabriclabs.com, and that you have already created a namespace. To get the management key for your namespace, click on the ‘View’ button located in the properties panel on the right side of the portal, then copy the key to your clipboard.


For our purposes, we’re mostly interested in automating the shutdown, un-deployment, import, deployment, and restart of the application. For example, to start the application we can issue a SendCommand call:

// attempt to start the application
Log.LogMessage(MessageImportance.Normal, "Starting the application ...");
appManagerAPI.SendCommand(this.ApplicationName, LifecycleCommand.Start);

If you’re interested in automating other application commands, the samples we’ve included should give you a head start.
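
For instance, stopping and un-deploying an application before a new package is uploaded might look like the sketch below. Note that the LifecycleCommand member names Stop and Undeploy are my assumptions, based on the Start and Deploy values used elsewhere in this post:

// Stop the running application, then undeploy it so a new package can be uploaded.
// LifecycleCommand.Stop and LifecycleCommand.Undeploy are assumed member names.
appManagerAPI.SendCommand("myApplicationName", LifecycleCommand.Stop);
appManagerAPI.SendCommand("myApplicationName", LifecycleCommand.Undeploy);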

Building the MSBuild Task

Now that we’re comfortable with the REST API, it’s time to start working on our custom MSBuild Task. This is relatively easy, and very well documented:

http://msdn.microsoft.com/en-us/library/t9883dzc.aspx

We need to create a new .NET Class Library project for our custom task. For this sample, I chose to implement a class that inherits from ‘Task’, and overrides the Execute method:

public class AppManagerDeploy : Task
{
    public override bool Execute()
    {    
        ...
    }
}
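
The task will also need input properties that MSBuild can set from the project file shown later in this post. A hedged sketch of how they might be declared inside the AppManagerDeploy class, using the [Required] attribute from Microsoft.Build.Framework (the actual sample code on GitHub may differ):

// Inside the AppManagerDeploy class: values supplied from the *.csproj file
// via the attributes of the <AppManagerDeploy> element.
[Required]
public string Namespace { get; set; }

[Required]
public string ManagementKey { get; set; }

[Required]
public string ApplicationName { get; set; }

[Required]
public string PackagePath { get; set; }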

To deploy our custom package, the Execute method uses the REST API Wrapper to stop the running application, un-deploy the application, and then upload the new package:

/// <summary>
/// This is the main function that executes when creating a custom MSBuild Task. This 
/// function is responsible for uploading the given *.afpkg file to the Application 
/// Manager API.
/// </summary>
/// <returns></returns>
public override bool Execute()
{    
    // output debugging information to the MSBuild console
    ...

    // create a new instance of the Application Manager API
    ResourceAccessor appManagerAPI = new ResourceAccessor(this.Namespace, this.ManagementKey);

    // check to see if the requested application is in a valid state for the upload
    // operation (stopped, undeployed)
    ...

    // upload the given *.afpkg file to the Application Manager deployment service
    appManagerAPI.UploadPackage(this.ApplicationName, this.PackagePath);

    // attempt to deploy the application
    Log.LogMessage(MessageImportance.Normal, "Deploying the application ...");
    crResult = appManagerAPI.SendCommand(this.ApplicationName, LifecycleCommand.Deploy);

    // attempt to start the application
    Log.LogMessage(MessageImportance.Normal, "Starting the application ...");
    crResult = appManagerAPI.SendCommand(this.ApplicationName, LifecycleCommand.Start);

    Log.LogMessage(MessageImportance.High, "Deployment Complete!");

    return true;
}

For the full source, please visit our GitHub.

Attaching the MSBuild Task to the Azure AppFabric Application

With the custom MSBuild task complete, we can attach it to our application’s *.csproj file. I chose to use the Stock Ticker Application available in the June CTP Samples, and the modified version of this solution is available with the source code for this post. To modify the *.csproj file, you need to:

  • Open your AppFabric Application solution file (Ex. StockTickerApp.sln)
  • Right click on the AppFabric Project containing App.cs, and unload the project
  • Right click on the unloaded project and edit the *.csproj file
  • Scroll to the bottom of the *.csproj file, and add this target just above the </Project> tag:
<UsingTask TaskName="AppManagerDeploy" 
    AssemblyFile="C:\<Path to Samples>\AFDeployTask\bin\Debug\Microsoft.Samples.AppFabric.MSBuildTask.dll" />
<Target Name="DeployToAppFabric" AfterTargets="Build" Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
    <Message Text="Deploying Project to AppFabric Portal" Importance="high" />
    <AppManagerDeploy
        Namespace="justbe"
        ManagementKey="jmjMPi0GvG97U/eISgswcdt/K3zlrr+MyPS8+DQhlqk="
        ApplicationName="stockticker"
        PackagePath="$(MSBuildProjectDirectory)\bin\release\publish\StockTickerApp.afpkg" />
</Target>

If you choose to implement this task as part of your build and deployment strategy, you could register the Microsoft.Samples.AppFabric.MSBuildTask.dll assembly in the GAC to avoid referencing the path to the *.dll each time. The UsingTask command attaches the new assembly to the build, and ensures we can use the AppManagerDeploy task we just created. The AppManagerDeploy task accepts the following fields:

  • Namespace – The application namespace generated in the Azure portal
  • ManagementKey – The namespace Management key accessible in the Azure portal
  • ApplicationName – the name of the application in the AppFabric Application Manager
  • PackagePath – the relative path to the *.afpkg file generated during the build

This target is configured to execute only when the Release configuration is used. We now have two ways of executing our build with a deploy command: by building in Visual Studio using Release mode, or by issuing an MSBuild command at the command prompt. Since the point of the exercise is to create an MSBuild task for automated builds, let’s step through executing our build at the Visual Studio command prompt. First, navigate to the path where your application’s *.sln file is stored. Then execute the command to build your project:

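A typical invocation from the Visual Studio command prompt, assuming the Stock Ticker solution used in this post, would look something like this:

msbuild StockTickerApp.sln /p:Configuration=Release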

Keep in mind that this will likely take some time to execute. However, once the deployment is complete you should see a success message in the console.


After allowing the deployment task to execute, check out the Admin Log in the Application Manager to review all of the commands that were executed.


Other Examples

For other examples of using the Windows Azure AppFabric Application Manager REST API, be sure to check out our PowerShell sample in the June CTP. For other great resources on using Azure AppFabric, please visit our blog at http://blogs.msdn.com/b/appfabric/.

Justin is a Microsoft program manager.


Alan Smith posted Tutorial: Create Custom External Services in Azure AppFabric June CTP on 7/18/2011 (missed when posted):

Introduction

One of the first things I wanted to do after installing the Azure AppFabric June CTP was to create an AppFabric application that used the Bing Maps SOAP services. That was when I hit my first roadblock. There is currently no option in the AppFabric Application Designer to add a reference to an external service. After asking around in the forums I learned that I should build an external service to do this. There is currently very little documentation on creating external services for Azure AppFabric, so after a couple of days experimenting I managed to get a basic external service working. As this is something that many AppFabric developers will want to do I thought I’d share what I learned in the form of a tutorial.

In this tutorial I will run through the creation of an AppFabric external service that will provide a proxy for the Bing Maps Geocode service. I have kept the implementation as simple as possible; there is a lot more you can do with external services, such as providing a configuration setting for the Bing Maps key and customizing monitoring, but I will save this for future webcasts and tutorials. Also, bear in mind that as we are on the first CTP of these tools, things can, and will, change. One of my loudest shouts to the AppFabric development team is to make it easier to consume services outside the AppFabric application, so hopefully we will see some development there.

Hopefully this tutorial will act as a starting point for creating your own external services that can be consumed by AppFabric applications. If you have any questions or comments, feel free to ping me using the comments section on my blog.

Consuming External Services

At present consuming a service outside the AppFabric application can be done in one of two ways. The quick-and-dirty way is to use the traditional “Add Service Reference” option. The proper way is to create a custom external service.

Traditional “Add Service Reference”

Whilst it is possible to use the traditional “Add Service Reference” method to consume an external service, there are a number of drawbacks to this. When consumed from an ASP.NET service, the web.config is read by the application and configuration can be used for the client endpoints. However, when consumed from a stateless WCF service, configuration is not read, so client endpoint settings must be hardcoded. Either way, the application must be re-deployed if anything changes.

Another drawback is that the external service is not recognized by the AppFabric application model, and therefore will not appear on the application diagram. There will also be no monitoring on the calls made to the external service.

Create a Custom External Service

Creating a custom external service will allow the external service to be recognized by the AppFabric application model. This means it can be used in the composition of an application, and will use the AppFabric service reference infrastructure. The external service can provide monitoring data to show usage of the service and also expose configuration settings that can be set at development time and reconfigured when the application has been deployed.

This tutorial will focus on the creation of a custom external service.

Pre-Requisites

If you are going to run through the tutorial as I have written it, you will need to install the relevant Visual Studio components and register for the relevant services. You can follow the steps using a different service if you don’t want to use the Bing Maps services.

Install Windows Azure AppFabric June CTP Developer Tools

I recommend installing the June CTP tools on a separate virtual environment to avoid possible compatibility issues with past and future releases.

  • You can find it here.

Install Visual Studio 2010 SDK

The Visual Studio SDK is required for creating the Visual Studio package that will contain the external service.

  • The Visual Studio SDK is here.
  • The Visual Studio Service Pack 1 SDK is here.

Create a Bing Maps Developer Account

If you want to test against the Bing Maps SOAP services you will need a developer account. It’s free to apply for, and you will receive a key to call the Bing Maps Geocode service. If you don’t want to do this you can pick another external service to consume and adjust the tutorial accordingly.

  • There are details on registering for an account here:

Create an AppFabric Labs Account (Optional)

If you want to test your application “In the Cloud” and see the configuration options, you will need an AppFabric account.

  • You can register for an account here.

Create a Bing Maps External Service

The Bing Maps Geocode service provides search functionality that takes address details as search parameters and returns a list of addresses and coordinates as a result. An external service will be created that allows the Bing Maps Geocode service to be called from an AppFabric application using the AppFabric application model. The service will be represented on the application diagram and the service endpoint will be configurable in the development and hosting environments.

Create a new MEF Component

An MEF Component will be created to contain the external service. This can then be installed in Visual Studio for the service to appear in the AppFabric designer. It is possible to do more advanced things with the Visual Studio extension, but in the interests of the tutorial I’ll keep it as simple as possible.

  • In Visual Studio, create a new project, select Visual Studio Package in the Visual C# / Extensibility section, and name it BingMapsExternalService. (If you don’t see the Extensibility option, install the Visual Studio 2010 SDK.)
  • In the Select a Programming Language page, select Visual C# and generate a new key file.
  • Keep the default options for the next two pages.
  • In the Select Test Project options, clear both check boxes.

Visual Studio will create the project, and open the vsixmanifest file.

  • In the Content section of the vsixmanifest file, remove the VS Package content.
  • Add Content for an MEF Component and select the project name as the source.
  • Save and close the vsixmanifest file.

Add Required References

The extension project will need to hook into the internals of the AppFabric application model, as well as serialize the metadata for the external service properties.

Add references to the following assemblies:

  • Microsoft.ApplicationServer.ApplicationModel
  • Microsoft.VisualStudio.ApplicationServer
  • System.ComponentModel.Composition
  • System.Fabric
  • System.Runtime.Serialization

Add the Bing Maps Service Reference

The external service will return a client proxy for the Bing Maps Geocode service. Adding a service reference to the project will create the classes for this client proxy and the data contracts for the request and response.

Note: The client configuration that is added to the app.config file will not be used in the external service.

Add Required Classes

The external service requires three classes to be implemented. Each of these will derive from a class in the AppFabric application model and provide overrides of methods and properties.

  • Add a folder named BingMaps to the project.
  • Add classes with the following names to the folder:
  • GeocodeServiceExport
  • GeocodeServiceExportDefinition
  • GeocodeServiceMetadata

Implement the Service Export Definition Class

Alan continues with C# source code to implement his Service Export and related classes.

Testing a Bing Maps External Service

To test the external service a simple AppFabric application will be created with an ASP.NET service that makes a call to the Bing Maps Geocode service to search for a location. The results from the search will be displayed on the web page.

Install the Visual Studio Package

Before the external service can be used in the AppFabric designer it must be installed in Visual Studio.

  • Build the BingMapsExternalService project.
  • Navigate to the output folder for the extension project:  ..\BingMapsExternalService\BingMapsExternalService\bin\Debug
  • Double-click BingMapsExternalService.vsix and follow the instructions to install the MEF Component.
  • Close all instances of Visual Studio.

Create an AppFabric Application

A new application can now be created to test the external service.

  • Start Visual Studio and create a new AppFabric Application, named BingMapsExternalServiceTest.
  • Select Tools → Extension Manager and notice that the BingMapsExternalService extension appears in the list of installed extensions.
    Note: You can uninstall the extension with this dialog box.
  • Add an ASP.NET service named TestWeb.
  • Add a Bing Maps Geocode Service service named GeocodeTest.
    Note: The external service name and description is listed in the New Service dialog box.
  • In the Service References section of TestWeb, add a service reference to the GeocodeTest service, and change the name of the reference from Import1 to GeocodeTestImport. Note: The Uri property of the external service is configurable in the GeocodeTest properties window.
  • Right-click App.cs and select View Diagram.
    Note: The Geocode external service appears on the application diagram with a default icon.
  • Open ServiceReferences.g.cs.
    Note: This class provides two static methods to return client proxies for the Bing Maps Geocode service.

Add Code to Call the External Service

  • Implement the BodyContent section of Default.aspx as follows:
    Note: You can copy-paste the code [from the original post] to save time.
  • In Default.aspx.cs add a button click event for btnGeocode and implement it as follows, using your own Bing Maps key:
    Note: You will need to add the relevant using declarations [from the original post].
  • Note: The ServiceReferences class can now be used to return the client proxy for the Bing Maps Geocode service.
  • Build the solution; there will be errors…
  • In the TestWeb project, add references to System.ServiceModel and System.Runtime.Serialization.
  • Build the solution again and fix any remaining errors.

Test the Application in the AppFabric Emulator

The application can now be tested in the AppFabric Emulator.

  • Start the application without debugging.
  • When the web page appears, enter an address and click “Geocode Me!”
  • You can test the search by copy-pasting the coordinates into the search box for the Bing Maps website: http://www.bing.com/maps/

Test the Application in the AppFabric Hosting Environment

If you have an account for the AppFabric Labs portal you can test the application in the hosting environment. I’m assuming you know how to publish an application.

  • Publish, deploy and start the application in the AppFabric hosting environment.
  • Navigate to the GeocodeTest service and note the configuration for the service URI is available.
  • Test the application by navigating to the TestWeb endpoint.

Summary

Creating a custom external service for the AppFabric Application Designer requires a bit of work, but provides an implementation that follows the development model for AppFabric applications. The Bing Maps Geocode example used in this tutorial is one of the most basic implementations of an external service; the implementation could be improved to provide a configuration setting for the Bing Maps key, provide a custom icon, implement other Bing Maps services, and provide custom monitoring.

The CustomServices tutorial in the AppFabric samples provides some other examples of custom external services. The ReadMe document provides some explanation on creating custom external services.

If you have any questions or comments, feel free to ping me using the comments section on my blog.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Joel Foreman explained Increasing Developer Productivity with Windows Azure in a 7/21/2011 post to the Slalom Consulting Blog:

For the past two and a half years I have been building solutions on the Windows Azure Platform. It’s fun to reflect on how developing for Windows Azure has changed my day-to-day activities as a developer. One of the greatest areas of impact is how much less time I spend installing and configuring environments. Amazing to think that now all of that time gets re-purposed into the application I am working on.

It wasn’t that long ago that at the start of a project I would be working on the following…

  • Determining a hosting provider; or establishing communication channels within IT of an organization
  • Determining and communicating our infrastructure and software requirements for the project
  • Determining storage capacity requirements
  • Waiting on allocation of infrastructure and/or means of access
  • Installing and configuring software, from IIS and the .NET Framework to SQL Server and other dependencies
  • Determining and creating service accounts to be used
  • Determining public and private IPs, ports and firewall rules, and other networking requirements to enable connectivity

At this point I might be able to deploy something (manually) by copying over the files (manually) and access the solution from the internet. Yes! Then I would start to think about a number of other concerns…

  • Documenting the installation and configuration process of environments for the next time
  • Building out of additional environments that would be needed (TEST, UAT, PROD, etc.)
  • The needs and associated tasks around implementing SQL Server replication
  • How we will automate the deployment process or work with existing groups with their existing deployment processes
  • Thinking ahead to additional capacity needs and capacity planning

All of these tasks are vital, but take time and resources away from building the solution itself. Isn’t there an easier way?

Over the past two and a half years working with solutions targeting the Windows Azure Platform I have not had to spend my time in these areas. If you told me that I was starting on a new project right now, I could probably have all of this done within the hour. Windows Azure is ready when you are — no waiting. Once you have signed up for a Windows Azure subscription, it becomes a matter of clicking buttons in a Management Portal to provision new environments for services, new SQL Azure databases, new Windows Azure Storage accounts. In addition, the deployment process has been automated for me. I now have the capability to build my solution (from Visual Studio or a Build Server) into a package file, and Azure (or more specifically, the Fabric Controller) can take this package and its configuration and give me back a fresh VM instance (or instances!) with my application installed, configured and running.

What will I do with all this extra time? I guess I will just deliver more features in our solution and more value back to our clients. Do I miss having access to the underlying infrastructure? Not one bit.


• Steve Marx (@smarx) described A Minimal Windows Azure Storage Client Library in Perl in a 7/21/2011 post:

Earlier this week I wrote a minimal Windows Azure storage client library in Perl. You can find it on GitHub: https://github.com/smarx/waz-storage-perl. So far it only supports storing and retrieving blobs, and it hasn’t been well tested, but if you’re using Perl to interact with Windows Azure storage, this is probably a good starting point. (It’s also a reasonable piece of code to read if you just want to understand how to construct a signature for a storage REST API call.)

It’s been a long time since I’ve written any Perl code, so I welcome any feedback on the code (style, correctness, efficiency). My favorite Perl idiom from the signing code is @{[$req->header('Content-MD5')]}, which makes string interpolation work with the arrow operator.
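
For readers more comfortable with C#, the core of that signing logic boils down to an HMAC-SHA256 over a canonicalized string-to-sign. The helper below is a minimal illustration of that step only, not part of Steve’s library; building the full string-to-sign involves more canonicalization rules than shown here:

using System;
using System.Security.Cryptography;
using System.Text;

static class SharedKeySigner
{
    // Returns the value that goes into the Authorization header as
    // "SharedKey {accountName}:{signature}". The stringToSign must be built
    // from the request verb, headers and canonicalized resource as described
    // in the storage REST API documentation.
    public static string Sign(string stringToSign, string base64AccountKey)
    {
        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64AccountKey)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            return Convert.ToBase64String(hash);
        }
    }
}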


Jim O’Neil (@jimoneil) began his photo mosaics series with Photo Mosaics Part 1: Application Workflow on 7/21/2011:

Hopefully you’ve had a chance to grab the Photo Mosaics application and set it up in your own environment; if not, I’d encourage you to at least read the initial blog post introducing this series to get some context on what the application does.

Below you’ll see images captured from a PowerPoint presentation, with accompanying screen-by-screen narrative to walk you through the workflow of the application. You can flip through the slides at your leisure using the numbered links below the image [in the original post], or simply download the full presentation if you prefer.


This deck provides a walkthrough of the workflow of the Windows Forms client application and the Windows Azure cloud application described in my blog series. The source code for the application is also available for download.

The slides that follow build the architecture from the point of initiating a client request. Aspects of the architecture that are not actively involved in a given step will be greyed out allowing you to focus on the specific actions at hand.

In the first step of the process, the end-user runs the Windows Forms client application and selects an image from her local machine to be converted to a photo mosaic. It is not apparent from this slide, but when the client application starts up, it also hosts a WCF service endpoint which is exposed on the Windows Azure AppFabric Service Bus. You’ll see how that plays a role in the application a bit later on.

The end-user next selects an image library (stored in Windows Azure Storage). An image library contains the tiles used to construct the mosaic and is implemented as a blob container with each of the images stored as blobs within that container. The container is marked with an access policy to permit public access to the blobs, but enumeration of the containers and of the blobs must occur via an authorized request. That request is made via a WCF service housed in the ClientInterface Web Role in response to the end-user browsing for an image library.

The red arrow indicates a WCF service call, while the blue lines indicate data transfer.
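
A rough idea of what that authorized enumeration might look like with the Windows Azure storage client library (v1.x) is sketched below; the connection string and container name are placeholders, and the actual ClientInterface implementation may differ:

// Namespaces: Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient, System.Linq
// Enumerating containers and blobs requires an authenticated request,
// even though the individual blobs are publicly readable.
var account = CloudStorageAccount.Parse(storageConnectionString);   // placeholder
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("mylibrary");
List<string> imageUris = container.ListBlobs()
                                  .Select(item => item.Uri.AbsoluteUri)
                                  .ToList();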

Once a specific image library is selected and the list of image URIs has been returned, the end-user can peruse each image in the Tessera Preview section of the client user interface (“tessera,” by the way, is the term for an individual mosaic tile). The preview functionality accesses the blob images directly via an HTTP GET request by setting the ImageLocation of a PictureBox control to the image URI, for example http://azuremosaics.blob.core.windows.net/mylibrary/0001.jpg.

Here the purple line is meant to indicate a direct HTTP request.

Once the user has selected the image to be converted as well the image library containing the tiles and furthermore has specified the tile size (1 to 64 pixels square) and the number of segments (or slices) in which to divide the image when processing it, she submits the request via a WCF service call to the ClientInterface Web Role, passing the original image bytes and the other information.

The ClientInterface Web Role stores the original image in a blob container named imageinput, renaming that image to a GUID to uniquely identify it. It also places a message on a Windows Azure queue named imagerequest containing a pointer to the blob that it just stored as well as additional elements like the URI of the tile library container and the number of slices and tile size specified by the end-user.

I’ve introduced orange as the color for queues and messages that move in and out of them.
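
In code, that step might look roughly like the sketch below (storage client v1.x). The container and queue names come from the walkthrough; the pipe-delimited message format and variable names are assumptions for illustration:

// Namespaces: Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient
string jobId = Guid.NewGuid().ToString();

var account = CloudStorageAccount.Parse(storageConnectionString);   // placeholder
var blobClient = account.CreateCloudBlobClient();

// Store the original image under its new GUID name in the imageinput container.
var inputContainer = blobClient.GetContainerReference("imageinput");
var blob = inputContainer.GetBlobReference(jobId);
blob.UploadByteArray(originalImageBytes);

// Queue a request message pointing at the blob plus the job parameters.
var requestQueue = account.CreateCloudQueueClient().GetQueueReference("imagerequest");
requestQueue.AddMessage(new CloudQueueMessage(
    string.Format("{0}|{1}|{2}|{3}", blob.Uri, tileLibraryUri, sliceCount, tileSize)));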

In response to that same request from the end-user, the ClientInterface Web Role logs an entry for the new request in the Windows Azure table named jobs. Each job is identified by the GUID that the ClientInterface assigned to the request, the same GUID used to rename the image in blob storage. Additionally, the ClientInterface makes a call via the AppFabric Service Bus to invoke a notification service that is hosted by the client application (the self-hosted one I mentioned at the beginning of the walkthrough). This call informs the client application that a new request has been received by the service and prompts the client to display the assigned GUID in the notification area at the lower right of the main window.

Listening to the imagerequest queue is the JobController Worker Role. When a new message appears on that queue, the JobController has all of the information it needs to start the generation of a new mosaic, namely, the URI of the original image (in the imageinput container), the number of slices in which to divide the image, the tile size to use (1-64 pixels), and the URI of the container housing the images that will be the tesserae for the mosaic.
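
That listening is the standard worker role polling pattern; a simplified sketch is shown below (the sample’s JobController may structure this differently, and ProcessImageRequest is a hypothetical helper):

// Namespaces: Microsoft.WindowsAzure.StorageClient, System.Threading
// 'account' is a CloudStorageAccount as in the earlier sketch.
var requestQueue = account.CreateCloudQueueClient().GetQueueReference("imagerequest");

while (true)
{
    CloudQueueMessage msg = requestQueue.GetMessage();
    if (msg == null)
    {
        Thread.Sleep(TimeSpan.FromSeconds(5));   // back off while the queue is empty
        continue;
    }

    ProcessImageRequest(msg.AsString);           // split the image, enqueue slice requests
    requestQueue.DeleteMessage(msg);             // delete only after successful processing
}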

The JobController accesses the original image from the blob container (imageinput) and evenly splits that image into the requested number of slices, generating multiple images that it stores as individual blobs in the container named sliceinput. Each blob retains the original GUID naming convention with a suffix indicating the slice number. Additionally a message corresponding to each slice is added to the slicerequest queue.

A second Worker Role, the ImageProcessor, monitors the slicerequest queue and jumps into action when a new message appears there. A slice request message may represent an entire image or just a piece of one, so this is a great opportunity to scale the application by providing multiple instances of the ImageProcessor. If there are five instances running, and the initial request comes in with a slice count of five, the image can be processed in roughly 1/5th of the time.

Currently the number of slices requested is an end-user parameter, but it could just as easily be a value determined dynamically. For instance, the JobController could monitor the current load, the instance count, and the incoming image size to determine an optimal value – at that point in time - for the number of slices. Hypothetically speaking, it could also implement a business policy in which more slices are generated to provide a speedier result for premium subscribers, but a free version of the service always employs one slice.

The job of the ImageProcessor is to take the slice of the original image along with the URI of the desired tile library and the tile size and create the mosaic rendering of that slice. As you might expect, this is the most CPU intensive aspect of the application – another reason it’s a great candidate for scaling out.

The underlying algorithm isn’t really important from a cloud computing perspective, but since you’re probably interested, it goes something like this:

for each image in the image library 
   shrink the image to the desired pixel size (1 to 64 square) 
   calculate the average color of the resized image by adding up all 
     the R, G, and B values and dividing by the total number of pixels 

for each pixel in the original slice 
   calculate the Euclidean distance between the R, G, B components 
     of the pixel’s color and the average color of each tile image in the library 
   determine which tile images are ‘closest’ in color 
   pick a random tile image from those candidates 
   copy the tile image into the analogous location in the generated mosaic slice 
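
A hedged C# rendering of the two core calculations, average tile color and color distance, might look like this (System.Drawing types; the helper names are illustrative, not Jim’s actual code):

// Namespace: System.Drawing
// Average color of an already-resized tile image.
static Color AverageColor(Bitmap tile)
{
    long r = 0, g = 0, b = 0;
    int pixels = tile.Width * tile.Height;
    for (int x = 0; x < tile.Width; x++)
        for (int y = 0; y < tile.Height; y++)
        {
            Color c = tile.GetPixel(x, y);
            r += c.R; g += c.G; b += c.B;
        }
    return Color.FromArgb((int)(r / pixels), (int)(g / pixels), (int)(b / pixels));
}

// Squared Euclidean distance between two colors in RGB space; the tile with
// the smallest distance to a given pixel is a candidate for that position.
static int Distance(Color a, Color b)
{
    int dr = a.R - b.R, dg = a.G - b.G, db = a.B - b.B;
    return dr * dr + dg * dg + db * db;
}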

You’ll notice here that the ImageProcessor also writes to another Windows Azure table called status. In actuality, all of the Windows Azure roles write to this table (hence the * designation), but it makes the diagram look too much like a plate of spaghetti to indicate that! The current implementation has the ImageProcessor role adding a new entry to the table for each 10% of completion in the slice generation.

Once an image slice has been ‘mosaicized’, the ImageProcessor role writes the mosaic image to yet another blob container, sliceoutput, and adds a message to another queue, sliceresponse, to indicate it has finished its job. As you might expect by now, the message added to this queue includes the URI of the mosaic image in the sliceoutput container.

The JobController monitors the sliceresponse queue as well as the imagerequest queue, which I covered earlier. As each slice is completed, the JobController has to determine whether the entire image is now also complete. Since the slices may be processed in parallel by multiple ImageProcessor instances, the JobController cannot assume that the sliceresponse messages will arrive in any prescribed order.

What the JobController does then is query the sliceoutput container whenever a new message arrives in the sliceresponse queue. The JobController knows how many slices are part of the original job (that information was carried in the message), so it can determine if all of the processed images are now available in sliceoutput. If not, there’s nothing really for the JobController to do. If, however, the message that arrives corresponds to the completion of the final slice, then….

The JobController stitches together the final image from each of the processed slices and writes the new, completed image to a final blob container, imageoutput. It also signals that it’s finished the job by adding a message to the imageresponse queue.

At this point in time, the job is done and the only thing left to do is really housekeeping. Here the JobController responds to the message it just placed on the imageresponse queue. While this may seem redundant – why not just do all the subsequent processing at once, in the previous step? – it separates the task and provides a point of future extensibility.
What if, for instance, you wanted to introduce another step in the process or pass the image off to a separate workflow altogether? With the queue mechanism in place, it wouldn’t be hard for some other new Worker Role to monitor the queue for the completion message. In response to the completion message, the JobController does two things:

  1. updates the appropriate entry (keyed by GUID) in the job table to indicate that it’s completed that request, and
  2. uses the Service Bus to notify the client that the job is complete.

On the client end, the response is to update the status area in the bottom right of the main window and refresh the job listing with the link to the final image result (if the Job List window happens to be open).

Now that the image is complete it can be accessed via the client application, or actually by anyone knowing the URI of the image: it’s just an HTTP GET call to a blob in the imageoutput container (e.g., http://azuremosaics.blob.core.windows.net/imageoutput/{GUID}).

The final piece of the application to mention is functionality accessed via a secondary window in the client application, the Job List. This window is a master-detail display, using DataGridViews, of the jobs submitted by the current client along with the status messages that were recorded at various points in the workflow (I specifically called out the status data added by ImageProcessor, but there are other points as well). To display this data, the client application makes additional WCF calls to the ClientInterface Web role to request the jobs for the current client as well as to retrieve the status information for the currently selected job. Here the WCF service essentially implements a Repository pattern to abstract the entities in the two Windows Azure tables.
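
Under the covers, those WCF calls likely reduce to straightforward table queries. A hedged sketch using the storage client’s table support is shown below; the table name comes from the walkthrough, while the entity properties are assumptions:

// Namespaces: Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient, System.Linq
public class JobEntity : TableServiceEntity
{
    // Assumed properties; the actual entity in the sample may differ.
    public string Status { get; set; }
    public string ResultUri { get; set; }
}

// Query the jobs table for everything submitted by a given client.
var context = account.CreateCloudTableClient().GetDataServiceContext();
var jobs = context.CreateQuery<JobEntity>("jobs")
                  .Where(j => j.PartitionKey == clientId)
                  .ToList();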

Lastly, for completeness, here is the full workflow in glorious living color!

The workflow image appears to be missing on Jim’s post.


Patriek van Dorp (@pvandorp) described Hosting Java Enterprise Applications on Windows Azure in a 7/21/2011 post to his Cloudy Thoughts blog:

Last April 29th, Java Champion Bert Ertman did a great session at Techdays 2011, showing us how easy it is to deploy a Java EE application to Windows Azure with the new Windows Azure plugin for Eclipse. In his session Bert showed us how we can deploy a .WAR package to Windows Azure by including the Java Development Kit (JDK) and GlassFish in a new type of project in Eclipse provided to us by the Windows Azure plugin for Eclipse. Bert also showed us how we can use the Windows Azure SDK for Java Developers to store data into Windows Azure Storage, among other things. He was very open and honest about his findings in relation to Microsoft’s PDC 10 announcement that they were going to make Java a ‘first class citizen’ on Windows Azure.

Over the coming few months I’ll be drinking my coffee in the Cloud hosted by Microsoft, and I will tell you all about it. But before I dive into some hardcore Java development (which is completely new for me, so please be gentle on me), I want to point out some misconceptions in Bert’s story, which might stand in your way when considering Windows Azure as the Cloud platform to host your Java EE applications.

Absence of a Java Runtime and Application Server

The default VM images used to create Windows Azure Role instances are based on Windows Server 2008 or Windows Server 2008 R2. These images don’t have a Java runtime installed, let alone an Application Server such as Tomcat, Jetty or GlassFish. Bert states that he would expect some kind of ‘JavaRole’ with a Java Runtime and an Application Server installed, when he thinks of Java being a ‘first class citizen’ on Windows Azure.

Although he has a point here, I think we must keep in mind that, since there are so many flavors of Application Servers to choose from, it would be impossible for any Cloud Provider to please every Java Developer. Additionally, the Application Servers will need specific configuration varying from solution to solution. It would be very hard for a Cloud Provider to maintain multiple flavors of Application Servers in multiple versions, and providing the developer with a user-friendly way of configuring all of these Application Servers would be impractical.

Luckily, Windows Azure Roles provide a way of running tasks at startup time to execute scripts to configure or install applications and features on each instance. Bert explains how this is done in his session perfectly, but I will blog on this more often in upcoming posts. Once you have a startup script that works for a specific setup (e.g. JDK 1.6 + GlassFish 3 + JDBC), you can re-use that script in all your projects.

By allowing the Java Developer to choose, install and configure the Java runtime and Application Server herself, I think Microsoft provides the greatest flexibility possible for Java Developers to run their application on a platform they prefer. On top of that, Microsoft, in cooperation with Eclipse, provides a plugin that makes it easy to script, debug and deploy Java EE applications to Windows Azure.

Port Limitations

Bert also pointed out that a Windows Azure Role can define only 5 endpoints that can be reached from the outside world (so-called Input Endpoints), which would mean that you could only open up 5 ports per VM instance. When you install GlassFish, for instance, you need to open up 4 to 5 ports by default to make it work. There would be no endpoints left if you wanted to listen on other ports in your application.

Now this was true until March 2011, when the definition of endpoints was slightly altered. Microsoft’s David Makogon wrote a blog post on this subject explaining that a deployment can contain 5 Roles and 25 endpoints can be divided among those Roles. This means that you can have 5 endpoints per Role (which was the limit before March 2011) or you can have 25 endpoints on one Role. This leaves enough ports to use for administrative connections and remote debugging and such.

Statelessness

Another reason that Java would be no ‘first class citizen’ on Windows Azure, according to Bert, is the fact that VMs don’t keep state. Bert states that the Application Server uses state that is persisted on the actual server for a full-fledged Java EE application, like session information and cache. Now, I don’t know much about the Application Servers’ internal workings, so I’ve been asking around with some pretty experienced Java Architects here at Sogeti in the Netherlands.

My findings are that the Java world and the .NET world are not so far apart. In .NET we use session state and caching as well. We have various ways of dealing with load balancing, for instance, saving the session state in a SQL database, storing it in a cookie or using memcached for maintaining a distributed memory for caching purposes. My Java sources tell me that such mechanisms are also available in the Java world. They tell me that a commonly used pattern is to use Apache httpd and add and configure the JBoss mod_cluster module. This way, sticky sessions could be emulated even behind the round-robin load balancer that is automatically set up by Windows Azure. In my humble opinion, using sticky sessions is detrimental to the principle of load balancing, but it would be a way to keep the developer experience for Java Developers as it is on premises or in hosted scenarios, where sticky sessions can be configured by IT Professionals.

All this can be scripted and run from the startup.cmd, provided by the Windows Azure plugin for Eclipse.

So yes, I agree with Bert that, for moving applications into the Cloud, these applications need some modification. This is, however, no different for .NET applications. This is inherent to the difference between applications on premises and applications in the Cloud, regardless of the programming language that is used. So Java is, like .NET, Python, Ruby on Rails, PHP, C++ (Did I forget any? Sorry for that), still a ‘first class citizen’ on the Windows Azure platform.

SDK Feature Mismatch

At the time of this writing, there are two 3rd party SDKs available for communicating with Windows Azure Storage (Tables, Queues and Blobs) and for communicating with the Windows Azure AppFabric ServiceBus and Access Control Service. Both SDKs provide a Java based API that communicates with REST services offered by the Windows Azure platform. Currently, the Windows Azure SDK for Java Developers matches the Windows Azure SDK 1.2 for .NET (we’re currently on 1.4.1) and the Windows Azure AppFabric SDK for Java Developers matches the Windows Azure AppFabric April 2010 Update (there has been a CTP release in June 2011). So Bert is right when he says these 3rd party SDKs are somewhat outdated.

However, this is the case with all interoperability projects. Look at Hibernate, for instance. There is a .NET port of Hibernate called NHibernate, which is based on the .NET Framework 1.1/2.0 (we’re currently on 4.0/4.1). This doesn’t stop .NET Developers from using NHibernate (although Microsoft has recently released Entity Framework 4.1, which is a good competitor for NHibernate). Also, my Java sources didn’t seem to have any problem with writing their own framework that could connect with specific REST services offered by Windows Azure.

Apart from that, Microsoft’s Interoperability Team is very aware of the shortcomings of these 3rd party SDKs and is working very hard to continually improve the developer experience for Java Developers.

Conclusion

Bringing applications to a Cloud platform requires us to think differently. We need to revise our architecture and take into account aspects like cost, scalability, statelessness, load balancing, etc. to get the most out of the Cloud. I think Bert has perfectly shown us how we can package and deploy our Java EE application to Windows Azure. He was very clear on some of the aspects of bringing Java to Windows Azure that require extra attention, and I think that’s what we are doing for .NET Developers as well. I wrote this blog post to make clear to the Developer Community (Java and .NET) that the Java world and the .NET world are not so far apart and that we face the same challenges in regard to bringing our applications to the Cloud. I also think that Microsoft (contrary to what we’ve seen in the past), as a Cloud Provider, is doing a great job in making the Windows Azure platform as open as possible so that anyone can use it.


The Windows Azure Team (@WindowsAzure) announced Real World Windows Azure: Interview with Henning Volkmer, CEO of ThinPrint, Inc. on 7/21/2011:

As part of the Real World Windows Azure series, we talked to Henning Volkmer, CEO of ThinPrint, Inc., about using the Windows Azure platform to deliver its innovative print solution. Here’s what he had to say:

MSDN: Provide a quick overview of your company and the services you offer.

Volkmer: Cortado and ThinPrint specialize in offering innovative print solutions and leading mobile business applications for any cloud strategy. Our solutions include print management solutions for distributed network environments, solutions for the virtual desktop environment market, and confidential printing with various authentication methods. With our solutions, organizations are able to seamlessly integrate home offices, mobile employees and complete branch offices into an existing IT infrastructure. ThinPrint customers benefit from high-quality and fast printouts, reduced administration costs, optimized security and full control of the entire print environment, together resulting in significant cost savings.

MSDN: Who are your customers?

Volkmer: Cortado’s ThinPrint customers include some of the leading companies in all industries around the world. Cortado solutions are deployed among many small and mid-sized corporations and also help some of the world’s largest companies manage and optimize their print environments.

MSDN: What was the biggest challenge you faced prior to implementing on Windows Azure?

Volkmer: ThinPrint wanted a platform that would collect user data, analyze, transform and report it back to the users. A solution capable of reporting cost averages needs data from more than one user, and the ability to order supplies from within the application needed an easy connection to the Internet. ThinPrint determined that if all of these services had to run on premises at each customer, the setup effort would be far too great, and connecting customers with supply delivery companies such as Amazon would be almost impossible.

MSDN: Why did you decide to adopt Windows Azure?

Volkmer: As a long standing Microsoft Gold Certified Partner, it was logical to use a Microsoft solution to help scale our datacenter while guaranteeing high performance access from all over the world.

MSDN: Is your solution deployed and live?

Volkmer: Yes, Printer Dashboard was released in March 2011 and has seen a rapid adoption rate across the globe.

MSDN: Is your solution 100% in the cloud or is it a hybrid model?

Volkmer: Printer Dashboard is a 100% cloud solution. Everything from the intelligence to the user interface resides on Windows Azure and is fed with data from agents on the customers’ network.

MSDN: Can you describe the solution you built with Windows Azure and how it helped address your challenge?

Volkmer: Printer Dashboard is based on Silverlight and hosted by Windows Azure, which allows administrators to monitor printer environments in real time and from any location via a web browser. With just a single click, the user obtains information on the status of printers within the organization as well as printing volume, paper jams and consumables data such as toner and paper levels, regardless of printer location.

MSDN: What makes your solution unique?

Volkmer: Printer Dashboard captures the printing costs of all users and reports the average values back to the users. This offers objective information about the average cost of individual printer models. It also allows you to see whether you’re paying more or less for printing than the average user. Service documents detail the consumption of material such as toner, and offer early notifications when a replacement is due, which can significantly reduce downtime and avoid bottlenecks. Printer Dashboard also collects user feedback on printer error messages, explains those messages, and points to possible solutions, saving customers time and money.

In addition, materials such as toner, ink and paper can easily be ordered directly from within the application, further simplifying the process.

MSDN: What are the benefits of using Windows Azure?

Volkmer: As with any new project, we experienced a few challenges, but thanks to the excellent Microsoft support everything was resolved quickly and with great flexibility. We are also using the Windows Azure Application Monitoring Management Pack to keep on top of our application’s performance and availability. In case any problems are reported, we can easily log in, for example via Remote Desktop Services, to address the issue.

Printer Dashboard is a real cloud service - only the agent feeding data into the cloud runs locally on a PC or server - everything else happens in the Windows Azure environment. This guarantees availability with full redundancy 99.9% of the time and allows us to scale the solution to exact customer demand. This offers a great way for us to control operational cost and offer a cost-effective solution to our customers.

To read more Windows Azure customer success stories, visit: www.windowsazure.com/evidence


Mary Jo Foley (@maryjofoley) reported Microsoft delivers early build of Windows Azure toolkit for social-game developers in a 7/20/2011 post to her All About Microsoft blog on ZDNet:

Social gamers have been one of the first developer groups to make use of Microsoft’s Windows Azure cloud platform.

On July 19, Microsoft took a step to make it easier for social gamers to target Azure by releasing an alpha version of a Windows Azure Toolkit for Social Games.

The preview of the toolkit, available for download from Microsoft’s CodePlex, includes accelerators, libraries, developer tools, and samples that developers can use in their own .NET or HTML5 games. Other language support will be added in the future, according to a Microsoft Windows Azure blog post.

“The toolkit also enables unique capabilities for social gaming prerequisites, such as storing user profiles, maintaining leader boards, in-app purchasing and so forth,” according to information on the download page.

The toolkit preview includes a new proof-of-concept game called Tankster, built with HTML5, from Grant Skinner and his team.

The preview of the new toolkit is free. I’ve asked Microsoft officials whether it will remain free once it is released (and for a target release date). I’ll add that information once I hear back. The toolkit will remain free when it goes final, according to company officials, who declined to provide an ETA as to when that might happen.

The Windows Azure toolkit for social gaming is not meant to replace or supersede the Azure toolkit for Facebook, officials added, when I asked.

Microsoft made the announcement about the new toolkit at the Seattle Casual Connect conference.



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

• Beth Massi (@bethmassi) reported E-Book Available: Beginners Guide to Visual Studio LightSwitch on 7/21/2011:

imageVisual Studio LightSwitch is the simplest way to create professional quality, data-centric, business applications that can be deployed to the desktop or the cloud. Today MVP Kunal Chowdhury released an E-Book for the beginner wanting to get started building LightSwitch applications.

E-Book: Beginners Guide to Visual Studio LightSwitch

Kunal is another one of our LightSwitch community rock stars and is a huge contributor to the SilverlightShow.net website, a community portal dedicated to Microsoft Silverlight and Windows Phone 7 technologies. Thank you Kunal for bringing LightSwitch training and tutorials to this site!

http://www.silverlightshow.net/Learn/LightSwitch.aspx

And as always, you can find more from the LightSwitch Team here:

Enjoy!


The Visual Studio Lightswitch Team announced DevExpress Reporting Extension for LightSwitch in a 7/20/2011 post:

image222422222222DevExpress has added LightSwitch support to their award winning reporting solution, XtraReports. This LightSwitch extension lets you create and display client-side reports right in your LightSwitch applications!

Download XtraReports directly from DevExpress

imageXtraReports will also be available on the Visual Studio Gallery on Tuesday, July 26th, at which time you will be able to either install it directly from Visual Studio LightSwitch via the Extension Manager or download it from the Visual Studio Gallery.

The following tutorials provide step-by-step instructions on installing and using DevExpress XtraReports for LightSwitch.

For more details about XtraReports, see the DevExpress Reporting Blog and stay tuned for more extensions available from our partners.


Orville McDonald recorded a 00:01:39 Visual Studio LightSwitch - Make Your Apps Do More with 3rd-Party Extensions video presentation for the Microsoft Showcase on 7/14/2011 (missed when posted):

Extend the functionality of your LightSwitch application with third-party extensions from Microsoft partners to further customize your app to suit your needs.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Brian Loesgen described using Multiple Windows Live IDs and Windows Azure in a 7/21/2011 post:

imageI have multiple Windows Live IDs (WLID), which has caused some confusion when I try to use the Windows Azure portal as I may have already signed in using another WLID, one that is not linked to my Azure account. I know this won’t be a common problem out there, but I am sure other people are in the same situation, so I thought I’d do a blog post about how to make this easier.

Tactic 1: Co-admin

[Thanks to David Aiken for this simple and effective approach, this is the one I will use from here on in]

imageA portal update earlier this year made it possible to have multiple administrators for a Windows Azure account. All you have to do is add the second WLID as a co-admin, and you’re done. This is seamless, transparent and elegant.

Tactic 2: InPrivate Browsing

Open a new instance of Internet Explorer and enable InPrivate Browsing (Ctrl+Shift+P). When you go to the Azure portal you will be asked for a WLID (even if you are logged in outside that browser session with another WLID).

Tactic 3: Use a VM

I am a huge fan of VMs anyhow, and cringe when I install developer tools on my host/productivity machine. As these are separate machines, you can obviously use whichever WLID you want in either machine. However, even though I do development in a VM whenever possible, sometimes I won’t have a VM running and want to jump into the portal for something, in which case I will use Tactic 1.

Brian is a Principal Architect Evangelist with Microsoft, on the Azure ISV team and works from San Diego.


Dana Gardner asserted “Major trends around cloud, mobile, and SaaS are dramatically changing the requirements and benefits of application integration” as an introduction to his Cloud and SaaS Force a Rethinking of Integration and Middleware post of 7/21/2011 to his Dana Gardner Briefings Direct blog:

imageMajor trends around cloud, mobile, and software as a service (SaaS) are dramatically changing the requirements and benefits of application integration.
In many respects, the emphasis now on building hybrid business processes from a variety of far-flung sources forces a rethinking of integration and middleware. Integration capabilities themselves often need to be services in order to support a growing universe of internal and external constituent business process component services.

And increasingly, integration needs to be ingrained in applications services, with the costs and complexity hidden. This means that more people can exploit and leverage integration, without being integration technology experts. It means that the applications providers are also the integration providers. It also means the stand-alone integration technology supplier business -- and those that buy from it -- are facing a new reality.

Here to explore the new era of integration-as-a-service and what it means for the future is David Clarke, Director of Integration at SaaS ERP provider Workday. He is interviewed by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Workday is a sponsor of BriefingsDirect podcasts.]
Here are some excerpts:

Clarke: I remember when we originally became part of Workday several years ago, we were doing some sort of product planning and strategic thinking about how we were going to integrate the product lines and position them going forward. One of the things we had in our roadmap at the time was this idea of an [integration] appliance. So we said, "Look, we can envision the future, where all the integration is done in the cloud, but we frankly think it's like a long way off. We think that it's some years off."

... We thought the world wasn’t going to be ready soon enough to put the integration technology and stack in the cloud as well.

It just became clearer and clearer to us that there was an appetite and a willingness in our customer and prospect base to use this technology in the cloud.

Happily that turned out to have been incorrect. Over the course of the ensuing 12 months, it just became clearer and clearer to us that there was an appetite and a willingness in our customer and prospect base to use this technology in the cloud.

We never really went ahead with that appliance concept, it didn’t get productized. We never used it. We don’t need to use it. And now, as I have conversations with customers and with prospects, it just is not an issue.

In terms of it being any kind of philosophical or in principle difficulty or challenge, it has just gone away. It totally surprised me, because I expected it to happen, but thought it would take a lot longer to get to where it has got to already.

Gardner: We see that a “consumerization” of IT is taking place, where the end-users want IT in the enterprise to work as well and in the same manner as it does for their personal lives. How does that shift the thinking of an enterprise architect?

Clarke: Superficially, enterprise architects are under a lot of pressure to present technologies in ways that are more familiar to customers from their personal lives. The most specific example of that is the embrace of mobile technologies. This isn't a huge surprise. It's been a pretty consistent pattern over a number of years that workforce mobility is a major influence on product requirements.

Mobile devices

We've seen that very significant proportions of access to our system is via mobile devices. That informs our planning and our system architecture. We're invested heavily in mobile technologies -- iPad, Android, BlackBerry, and other clients. In my experience, that’s something that's new, with the customer enterprise architects. This is something they have to articulate, defend, and embrace.

Historically, they would have been more concerned with the core issues of scalability, reliability, and availability. Now, they've got more time to think about these things, because we as SaaS vendors have taken a lot of things that they used to do off of their plates.
Historically, a lot of time was spent by enterprise architects worrying about the scalability and reliability of the enterprise application deployments that they had, and now that’s gone away. They get a much higher service level agreement (SLA) than they ever managed to operate by themselves when they run their own systems.

So, while they have different and new things to think about because of the cloud and mobility, they also have more head space or latitude to do that, because we have taken some of the pain that they used to have away.

Gardner: I suppose that as implications pan out around these issues, there will be a shift in economics as well, whereby you would pay separately and perhaps on a capital and then operating basis for integration.

They also have more head space or latitude to do that, because we have taken some of the pain that they used to have away from them.

If integration by companies like Workday becomes part-and-parcel of the application services -- and you pay for it on an operating basis only -- how do traditional business models and economics around middleware and integration survive?

Clarke: I'd certainly hate to be out there trying to sell middleware offerings stand-alone right now, and clearly there have been visible consolidations in this space. I mentioned BEA earlier as being the standard bearer of the enterprise Java generation of middleware that’s been acquired by Oracle.

They are essentially part of the application stack, and I'm sure they still sell and license stand-alone middleware. Obviously, the Oracle solutions are all on-premise, so they're still doing on-premise stuff at that level. But, I would imagine that the economics of the BEA offering is folded very much into the economics of the Oracle application offering.

In the web services generation of middleware and integration, which essentially came after the enterprise Java tier, and then before the SOA tier, there was a pretty rapid commoditization. So, this phenomenon was already starting to happen, even before the cloud economics were fully in play.

Then, there was essentially an increased dependence or relevance of open source technologies -- Spring, JackBe, free stacks -- that enabled integration to happen. That commoditization was already starting to happen. …

Gardner continues the interview and concludes:

Configure your data set

You can very easily and rapidly and without programming configure your specific data set, so that it can be mapped into and out of your specific set of benefits providers, without needing to write any code or build a custom integration.
We've done that domain analysis in a variety of areas, including but not limited to benefits. We've done it for payroll and for certain kinds of financial categories as well. That's what's enabling us to do this in a scalable and repeatable way, because we don’t want to just give people a raw set of tools and say, "Here, use these to map anything to anything else." It's just not a good experience for the users.


Kenneth van Suksum (@kennethvs) reported Release: Microsoft Assessment and Planning Toolkit 6.0 in a 7/20/2011 post:

imageAfter releasing a public beta in May of this year, Microsoft has now released version 6.0 of its capacity planning tool, the Assessment and Planning Toolkit (MAP), the follow-up to version 5.5, released at the beginning of this year.

imageVersion 6 includes assessment and planning for evaluating workloads for public and private cloud platforms, identifying the workload and estimating the infrastructure size and resources needed for both Windows Azure and Hyper-V Fast Track. MAP 6.0 also provides an Office 365 client assessment, enhanced VMware inventory, and Oracle schema discovery and reporting.

imageVersion 6 can be downloaded from the Microsoft Download Center.


David Hardin asserted Designed for the Cloud–it is all about State! in a 7/19/2011 post:

imageI just got back from a proof of concept (POC) review. A POC is what we call the investigation stage where a developer or architect has a great idea and wants to prove it out in code; the review is the chance to share what was learned with peers. Needless to say we’ve had a lot of Azure POC’s in the last year or so! Everyone has a great idea for using Azure to reduce cost or derive some other benefit.

imageI keep seeing two reoccurring themes. The first is that most POC’s are great ideas, capable of providing real business value. The second is that most are poorly architected from a cloud computing perspective!

The poor architecture slips past even our most experienced architects and developers. Why?

Simple: cloud computing is a paradigm shift, and there is a lack of distributed system design knowledge. Cloud designs must address the effect of latency on data consistency, and too often the architecture suffers by simply applying existing relational database knowledge.

I confess that I’ve presented a poorly architected Azure POC. I’m learning though and here is what I’ve figured out.

State management is a critical aspect of distributed system design. The cloud requires partition tolerance, so according to Brewer’s CAP theorem you trade off either consistency or availability. Latency is what drives Brewer’s CAP theorem. Latency is the result of physical laws, such as the speed of light, which we aren’t going to circumvent without a major change in mankind’s understanding of physics. Simply put, your state management must account for latency.

Relational databases favor consistency and hence provide reduced availability. Distributed transactions, for example, only succeed if all systems are available and respond. The system’s overall availability is a combination of each individual system’s availability; the combination is less available than the least available individual system. We usually only consider an outage as impacting a system’s availability, but network latency between servers has the same effect; it just completes before your transaction times out.

The distributed transaction example is easy to understand, but the same thing applies to transactions within a single database; transactions reduce data availability. Ever had to add “WITH (NOLOCK)” to your queries? If so, you’ve experienced the tradeoff between consistency and availability. The tradeoff’s impact on data quality is easier to mitigate in a single database because of how little latency exists on a single server.
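To make that tradeoff concrete, here is a minimal sketch in Python using pyodbc; the connection string and the Orders table are illustrative assumptions, not anything from Hardin's post. The first query is the default, consistent read; the second uses NOLOCK and favors availability by accepting dirty reads:

```python
import pyodbc

# Illustrative connection string; server, database and credentials are
# placeholders (assumption, not from the original post).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=MyDb;UID=user;PWD=secret"
)
cursor = conn.cursor()

# Default read: fully consistent, but it blocks if another transaction holds
# locks on the rows being scanned -- consistency at the cost of availability.
cursor.execute("SELECT COUNT(*) FROM dbo.Orders WHERE Status = 'Pending'")
print("consistent count:", cursor.fetchone()[0])

# NOLOCK (read uncommitted): never waits on writers, so it stays available,
# but it may return uncommitted "dirty" rows -- availability at the cost of
# consistency.
cursor.execute(
    "SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK) WHERE Status = 'Pending'"
)
print("dirty-read count:", cursor.fetchone()[0])

conn.close()
```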

Key-value stores, such as Azure Storage, and other NoSQL stores are popular for cloud applications because they favor availability, but in doing so they give up ACID transactions. After gaining an understanding of the effect of latency on distributed systems, it is easy to see why the much-requested Azure Storage transaction and multiple-index features are challenging to implement.

Many smart people over in the SQL Azure and Azure Storage teams are working hard to simplify the issue but the reality is that good state management involves tradeoffs that are application dependent. Doug Terry, from Microsoft Research, explains it nicely using a Baseball analogy. Essentially there is a spectrum of consistency choices including: Strong Consistency, Eventual Consistency, Consistent Prefix, Bounded Inconsistency, Monotonic Reads, Read Your Writes, etc. If the application were baseball then the scorekeeper, umpire, sportswriter, and radio reporter all have different data consistency requirements to accomplish their jobs. Just think how an umpire needs Strong Consistency while the sportswriter can accept Eventual Consistency (BASE) as long as consistency is reached by the time he/she starts writing the article, i.e. Bounded Inconsistency.

The advice from early Azure adopters, based on lessons learned, is to use SQL Azure if its capabilities satisfy your application’s needs. With SQL Azure you’ll enjoy the strong consistency of transactions and the efficient query options of multiple indexes that you’re used to. Switch to Azure Storage when your application’s needs exceed SQL Azure’s capabilities; just understand how important your state management design becomes.

There is another approach, though, which offers you more state management control, called a Replicated State Machine, also known as a Fault-Tolerant State Machine. The approach hits a sweet spot in the consistency-versus-availability tradeoff and is designed for cloud computing. I suspect the approach will become very popular as more developers learn distributed system design. The paper titled “Replication Management using the State Machine Approach” gives a good introduction in the first 9 pages, then becomes harder to understand.

You’ll notice the paper makes many references to Leslie Lamport, from Microsoft Research. He is the inventor of the Paxos consensus algorithm and several Paxos variations. He first described the Paxos algorithm in his paper “The Part-Time Parliament” then again in “Paxos Made Simple”. To get a sense of what you can do with Paxos take a look at Microsoft Research’s Farsite project and Service Migration and Replication Technique (SMART).

You can tell from this research that this is the way to manage state in large, self-healing cloud services which scale horizontally without single points of failure. You can also tell that developing a replicated state machine from scratch is exceptionally complex. Luckily several teams at Microsoft have already developed them; such implementations are at the heart of our fabric controllers, Azure Storage, SQL Azure, AppFabric, etc.

If you want your code to directly leverage this form of state management, develop for the AppFabric Container. From the overview page these are the AppFabric Container features you utilize:

  • Scale-out and High Availability

      The AppFabric Container provides scale-out by allowing application components to be cloned and automatically distributed; for stateful components, the container provides scale-out and high availability using partitioning and replication mechanisms. The AppFabric Container shares the partitioning and replication mechanisms of SQL Azure.

  • State Management

      The AppFabric Container provides data and persistence management for application components hosted in the container.

You can get started at:

http://blogs.msdn.com/b/windowsazure/archive/2011/06/20/introducing-windows-azure-appfabric-applications.aspx

The programming model is surprisingly simple to use, especially the attribute based syntax, when you consider all that happens behind the scenes.

Anyhow, give the AppFabric team a warm round of applause for productizing this useful Microsoft research. I suspect you’ll really enjoy the simplified, architecturally sound state management that AppFabric surfaces as the team builds out its roadmap.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

CloudTimes reported Gartner: Why Enterprises Choose Private Cloud on 7/20/2011:

imageThe potential advantages of cloud computing are well documented. If designed and provisioned properly, cloud deployments can lower capital and operating costs, increase flexibility, and improve service levels. Private cloud deployments are particularly important because they increasingly represent an enterprise’s first step toward the ultimate goal of dynamically matching IT service demand with IT service supply; a concept Gartner termed “real-time infrastructure” in this report.

Private Cloud Computing Ramps Up in 2011

imageData center executives are showing a definite interest in pursuing a private cloud computing strategy through 2014, and client inquiries and polls are also showing that many early deployments are already in place. Feedback from data center executives is that 30% more enterprises plan to invest in private cloud computing in 2011.

Download this report and learn key questions and answers companies should know before deploying a private cloud and how Internap can deliver value at every stage of your IT organization’s evolution toward the cloud.


<Return to section navigation list>

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted We need to be careful that we do not repeat the era of “HTML programmers” with “cloud programmers” as an introduction to her Beware the Cloud Programmer warning of 7/20/2011 on F5’s DevCentral blog:

imageIf you’re old enough you might remember a time when your dad or brother worked on the family car themselves. They changed the oil, bled the brakes, changed the fluids and even replaced head gaskets when necessary. They’d tear apart the engine if need be to repair it; no mechanic necessary. But cars have become highly dependent on technology and today it’s hard to find anyone who hasn’t been specifically trained that works on their own car. Sure, an oil change or topping off the fluids might be feasible, but diagnosing and subsequently repairing a car today is simply not a task for everyone.

This is not necessarily because the core technology has changed – the engines still work the same way, the interaction between fuel-injectors and pistons and axles is no different, but the interfaces and interconnects between many of the various moving parts that make an engine go have changed, and changed dramatically. They’re computerized, they’re automated, they’re complicated.

This is the change we’re seeing in IT as a result of cloud computing, virtualization and automation. The core functions that deliver applications are still the same and based on the same protocols and principles, but the interfaces and increasingly the interconnects are rapidly evolving. They’re becoming more complicated.

MANAGEMENT COST OFFLOAD
The change in skills necessary to effectively deploy and manage emerging data center architectures drives one of the lesser spoken of benefits of public cloud computing: offloading the cost of managing components in this new and more complicated way. Most server admins and network operators do not have the development-oriented skills necessary to integrate systems in a way that promotes the loosely coupled, service-oriented collaboration necessary to fully liberate a data center and shift the operational management burden from people to technology.

Conversely, the developers with those very skills do not have the knowledge of the various data center network and application delivery network components necessary to implement the integration required to enable that collaboration.

Public cloud computing, with its infrastructure as a black box mentality, promises to alleviate the need to make operators into developers and vice-versa. It promises to lift the burden and pressure on IT to transform itself into a services-enabled system. And in that respect it succeeds. When you leverage infrastructure as a black box you only need to interact with the management framework, the constrained interfaces offered by the provider that allow you to configure and manage components as a service. You need not invest in training, in architecture, or in any of the costly endeavors necessary to achieve a more service-focused infrastructure.

The danger in this strategy is that it encourages investing in admins and operators who are well-versed in interfaces (APIs) and know little about the underlying technology.

HTML “PROGRAMMERS” and WEB 2.0
We saw this phenomenon in the early days of the web, when everything was more or less static HTML and there was very little architecture in the data center supporting the kind of rich interactive applications prevalent on the web today. There was a spate of HTML “programmers”: folks who understood markup language, but little more.

They understood the interface language, but nothing about how applications were assembled, how an application generated HTML, nor how that “code” was delivered to the client and subsequently rendered into something useful. It’s like being trained to run the diagnostic computers that interface with a car but not knowing how to do anything about the problems that might be reported.

The days of the HTML “programmers” were fleeting, because Web 2.0 and demand for highly interactive and personalized applications grew faster than the US national debt. A return to professionals who not only understood the interfaces but the underlying technological principles and practices was required, and the result has been a phenomenal explosion of interactive, interconnected and highly integrated web applications requiring an equally impressive infrastructure to deliver, secure and accelerate.

We are now in the days when we are seeing similar patterns in infrastructure, where it is expected that developers become operators through the use of interfaces (APIs) without necessarily needing any depth of knowledge regarding how the infrastructure is supposed to work.

NON-DISRUPTIVE ≠ NON-IMPACTFUL

image

Luckily, routers still route and switches still switch and load balancers still balance the load regardless of the interface used to manage them. Firewalls still deny or allow access, and identity and access management solutions still guard the gates to applications regardless of where they reside or on what platform.

But the interfaces to these services have evolved and are evolving; they’re becoming API-driven, and integration is a requirement for automation of the most complicated operational processes, the ones in which many components act in concert to provide the services necessary to deliver applications.

Like the modern mechanic, who uses computer diagnostics to interface with your car before he pulls out a single tool, it is important to remember that while interfaces change, in order to really tune up your data center infrastructure and the processes that leverage it, you need people who understand the technology. It doesn’t matter whether that infrastructure is “in the cloud” or “in the data center”, leveraging infrastructure services requires an understanding of how they work and how they impact the overall delivery process. Something as simple as choosing the wrong load balancing algorithm for your application can have a profound impact on its performance and availability; it can also cause the application to appear to misbehave when the interaction between load balancing services and applications is not well understood.

It’s a fine thing to be able to provision infrastructure services and indeed we must be able to do so if we are to realize IT as a Service, the liberation of the data center. But we should not forget that provisioning infrastructure is the easy part; the hard part is understanding the relationship between the various infrastructure components not only as they relate to one another, but to the application as well. It is as important, perhaps even more so, that operators and administrators and developers – whomever may be provisioning these services – understand the impact of that service on the broader delivery ecosystem. Non-disruptive does not mean non-impactful, after all.

An EFI [Electronic Fuel Injection] system requires several peripheral components in addition to the injector(s), in order to duplicate all the functions of a carburetor. A point worth noting during times of fuel metering repair is that early EFI systems are prone to diagnostic ambiguity.

-- Fuel Injection, Wikipedia

Changes to most of those peripheral components that impact EFI are non-disruptive, i.e. they don’t require changes to other components. But they are definitely impactful, as changes to any one of the peripheral components can and often do change the way in which the system delivers fuel to the engine. Too fast, too slow, too much air, not enough fuel. Any one of these minor, non-disruptive changes can have a major negative impact on how the car performs overall. The same is true in the data center; a non-disruptive change to any one of the delivery components may in fact be non-disruptive, but it also may have a major impact on the performance and availability of the applications it is delivering.

BEWARE ARCHITECTURAL AMBIGUITY
Public cloud computing lends itself to an “HTML programmer” mode of thinking, where those who may not have the underlying infrastructure knowledge are tasked with managing that infrastructure simply because it’s “easy”. Just as early EFI systems were prone to “diagnostic ambiguity”, so too are these early cloud computing and automated systems prone to “architectural ambiguity”.

Knowing you need a load balancing service is not the same as knowing what kind of load balancing service you need, and it is not the same as understanding its topological and architectural requirements and constraints.

The changes being wrought by cloud computing and IT as a Service are as profound as the explosion of web applications at the turn of the century. Cloud computing promises easy interfaces and management of infrastructure components and requires no investment whatsoever in the underlying technology. We need to be cautious that we do not run willy-nilly toward a rerun of the evolution of web applications, with “cloud programmers” whose key strength is in their understanding of interfaces instead of infrastructure. A long-term successful IT as a Service strategy will take into consideration that infrastructure services are a critical component to application deployment and delivery. Understanding how those services work themselves as well as how they interact with one another and with the applications they ultimately deliver, secure and accelerate is necessary in order to achieve the efficient and dynamic data center of the future.

A successful long term IT as a Service strategy includes public and private and hybrid cloud computing and certainly requires leveraging interfaces. But it also requires that components be integrated in a way that is architecturally and topologically sound to maintain a positive operational posture. It requires that those responsible for integrating and managing infrastructure services – regardless of where they may be deployed – understand not only how to interface with them but how they interact with other components.

The “cloud programmer” is likely only to understand the interface; they’re able to run the diagnostic computer, but can’t make heads or tails of the information it provides. To make sense of the diagnostics you’re still going to need a highly knowledgeable data center mechanic.


The Nubifer Team posted Strategies for Cloud Security on 7/20/2011:

imageSecurity continues to be the number one obstacle to cloud adoption. Yet, despite widespread security concerns, cloud computing is taking off. The question now is not “will my organization move to the cloud?” Rather, it is “when?”

In this article, Nubifer’s Research Team explores how to get started with cloud security. What are the bare essentials? How do you merge traditional controls with advanced technologies like DLP (Data Loss Prevention) and risk management? How will you convince auditors that your cloud projects are as secure as your on-premise ones?

Security Concerns Slowing Cloud Adoption

A recent Cloud Trends Report for 2011 found that the number of organizations that are prioritizing the move to cloud computing nearly doubled from 2009 (24%) to 2010 (44%). However, the study also found that cloud security is the number one obstacle to adoption. Of those surveyed, 26% cited security as their chief cloud concern, while 57% included security in their top three.

However, a recent study commissioned by CA Technologies learned that, despite all of the concerns about security, roughly 50% of those embracing the cloud fail to properly evaluate providers for security prior to deployments. The study, Security of Cloud Computing Users: A Study of Practitioners in the US & Europe, discovered that IT practitioners vary wildly in their assessment of who is responsible for securing sensitive data in the cloud and how to go about it.

According to Chad Collins, CEO of Nubifer Inc., many CIOs are projecting their own internal security weaknesses onto cloud providers. “When security is used as an excuse, often the fact is that CIOs want to avoid examining themselves. If you don’t have a handle on governance, risk management and regulatory compliance internally, you’ll expose just how lacking your security is if you try to move to the cloud.”

Determining a Cloud Security Plan

Even if many organizations lack the intestinal fortitude to scrutinize their own (possibly deficient) security practices, there are still plenty of valid cloud security fears. Transferring the responsibility of protecting sensitive data to a third party is hair-raising, especially in an industry that has to comply with regulations such as HIPAA, SOX or PCI DSS. Throw in hypervisor vulnerabilities, DDoS attacks, application-level malware and other problems, and the line between rationalizations and legitimate worries is blurred.

Cloud risks still involve many unknowns, so formulating a comprehensive cloud strategy is a must. But if you don’t have some sort of workable plan in place, will you be prepared to adapt and improvise as conditions change?

Your CFO or comptroller is your biggest risk for financial applications and data. Your head of HR needs to be properly managed to ensure that leaky personnel files don’t come back to haunt you. And, of course, the biggest risk of all is your CEO.

Attackers know this, which is why C-level executives are constantly targets of so-called “whaling attacks,” such as the CEO subpoena phishing scam.

Privileged users can also be the most difficult to secure, though, because they will often veto any security control they don’t like. After all, these are the bosses. Thus, it’s not going to be easy to put a blanket ban on riskier devices, such as smartphones or tablets, so you’d better have a Plan B. Instead of banning the devices, you can establish proper authentication, access control and identity enforcement to ensure that your privileged users are at least who they say they are.

A plan to protect your most privileged users has the added benefit of providing you with an overall cloud security roadmap. Are remote-user risks a concern? Your most privileged users will probably want remote access. How about data loss protection? Your privileged users have more rights to more data than anyone else. What about securing mobile devices? Your CEO probably has several of them.

Moving from Internal Controls to Third-Party Evaluation

As you move from evaluating yourself to evaluating potential cloud vendors, don’t forget to investigate how far cloud services have already spread into your organization. Has your sales team signed up for Salesforce.com? Are your project managers using Basecamp? Has HR invested in Taleo?

As name brand cloud/SaaS providers, Microsoft, Salesforce.com and Google all have solid reputations. Getting those projects to conform with internal security controls shouldn’t be an issue. You’ll want to vet others, though, and make sure they aren’t fly-by-night providers that don’t take the time to properly secure their environments.

After your internal controls are in place, get out of the data center business and start shifting resources into private clouds.

Finally, as licenses expire and as upgrade cycles hit, you’ll be in position to knowledgeably and safely begin scrutinizing the public cloud vendors you’ll begin to trust with your mission-critical resources.

Effective security involves policies, technology and operational controls. Yes, you can drill down – way down – within those three categories, but those are the general areas. “If you focus on the bookends when evaluating vendors, you should learn a lot about how they will handle your data,” Collins said.

Those bookends are governance on one end, or how will data be managed and secured; and auditing on the other end, or how do providers prove they’re doing everything they claim to be doing?

Following that advice will get you started. For more information on formulating a Cloud Security strategy visit Nubifer.com.


Ed Moyle described What the PCI virtualization guidance means for PCI compliance in the cloud in a 7/14/2011 post to SearchCloudSecurity.com:

imageThe recent guidance on virtualization issued by the PCI Security Standards Council comes as a bit of a mixed blessing for many organizations. On the one hand, most of the industry has been waiting with bated breath for PCI virtualization guidance since the Data Security Standard was first published. Questions like, “What does ‘one function per server’ mean for virtual environments?” have stymied end users and auditors alike, with little to go on in the form of official guidance. So on the positive side, the guidance answers many of those questions and adds much-needed detail to what was always a murky topic.

imageThis clarity comes at a cost though. The downside of the PCI virtualization guidance is it’s likely that some of the decisions and assumptions organizations have made in the past about these topics will turn out to be wrong. Some organizations, for example, may have made the assumption that a “guest VM” slice could be in scope without the hypervisor also being in scope, which we now know is incorrect. Organizations that were mistaken in this way will likely require additional spending or effort to get into compliance.

Cloud is one area where these issues are particularly pronounced. While the guidance has made it clear that compliance in the cloud is feasible (not something universally accepted previously), the council has also made it clear that PCI in the cloud is no pushover from a technical standpoint. Getting to compliance in the cloud involves the active participation of both the organization itself, as well as the cloud provider, and unfortunately, not every provider is willing to play ball.

PCI compliance in the cloud: A shared responsibility

One of the issues the PCI virtualization guidance clarifies is that PCI compliance in the cloud is a shared effort between customer and provider. According to the guidance:

“Additionally, as with any managed service, it is crucial that the hosted entity and provider clearly define and document the responsibilities assigned to each party for maintaining PCI DSS requirements and any other controls that could impact the security of cardholder data.”

This means that while organizations ultimately shoulder the burden for compliance, the cloud provider’s implementation is a key part of overall compliance. Both the provider and the hosted organization need to take action. Hosted entities need to document their processes and controls and make sure all of the controls are thoroughly addressed, either by themselves or the provider. Providers, on the other hand, may need to provide documentation (for example in the form of an explanation on how they meet controls), initiate auditing efforts to provide evidence of implementation, or modify their environment to make sure they meet controls.

This is where cloud providers feel the pinch most. Because the PCI virtualization guidance does categorically link together scope for the hypervisor and the guest image, whole environments may need to be changed en masse if the goal is to get them PCI compliant. Specifically, if you have a hypervisor supporting multiple tenants, and one or more of those guest OSes are in scope for PCI, so is the hypervisor. Because the hypervisor is in scope, anything not segregated from it is in scope as well. In many situations, this can mean the entire environment.

Some cloud providers, like those that specifically target the merchant community, are already planning how to assist customers in this regard. But it bears saying that this is by far not a guarantee: Not every provider will have this on their roadmap, and not every provider that does will do so for every environment. Some environments may be built around sets of security controls that -- let’s face it -- are just flat-out inappropriate for use in the cardholder data environment (CDE).

Options if the cloud provider won’t support PCI compliance

So the question arises about how firms should react in the event that their cloud provider can’t (or won’t) support PCI compliance, either because the provider’s controls are unacceptable and the provider is unwilling to change them, or because the audit/documentation effort is something the provider is unwilling to undertake. In other words, what do firms do if they find out their cloud environments can’t achieve compliance?

Obviously, the ideal situation is to find and use an environment certified to PCI, for example, one that has gone through an audit process and had controls validated to meet the standard. Ideally, your cloud provider already supports this or can rapidly accomplish it within an acceptable timeframe; but if not, it may be time to evaluate the marketplace to find one that is willing and able to support you. In fact, even the threat of taking your business elsewhere can provide pressure for a vendor to certify; one large customer might cause them to rethink their position on this point, as might a “critical mass” of smaller customers. Of course, security controls and audits aren’t free, particularly those required by PCI. An environment that has appropriate controls and that has been through the audit process is likely to cost more than one that has not. If you need to make a change and you haven’t budgeted for it, the hit here can be painful.

Another option is to catalogue the controls required by the PCI DSS in light of the virtualization guidance and compare that list with your environment. For any control that is both missing and the responsibility of your cloud provider, assess if you can meet the control without your vendor’s participation. DIY is not going to be possible with certain kinds of controls that are highly dependent on location (for example, physical security controls), but there could be some flexibility with respect to controls that have wide latitude for different implementations. For example, if your vendor doesn’t provide adequate logging at the platform level, you might choose to implement it yourself at the application level to meet the requirement without needing to rely on the provider.
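As a rough illustration of that DIY option, here is a minimal Python sketch of application-level audit logging; the event fields, log file name and format are assumptions for illustration only, and PCI DSS requirement 10 spells out what an actual audit trail must capture:

```python
import logging
from logging.handlers import RotatingFileHandler

# Application-level audit logger; the file name and record fields are
# illustrative assumptions, not a prescription from the DSS or the article.
handler = RotatingFileHandler("audit.log", maxBytes=10 * 1024 * 1024,
                              backupCount=10)
handler.setFormatter(logging.Formatter(
    "%(asctime)s user=%(user)s action=%(action)s "
    "resource=%(resource)s result=%(result)s"
))
audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(handler)

def log_access(user, action, resource, result):
    """Record an access event in the application's own audit trail."""
    audit.info("", extra={"user": user, "action": action,
                          "resource": resource, "result": result})

# Example: record a cardholder-data lookup performed by the application.
log_access("jsmith", "read", "customer/1234/card", "allowed")
```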

If you change providers or implement controls yourself but still require vendor participation to close gaps, you have a few options such as removing cardholder data from the environment through technologies like tokenization or encryption. Tokenization limits scope by replacing the PAN with an inert value while encryption (provided your cloud provider does not have access to the keys) can limit scope as well, per guidance issued by the council.
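For readers unfamiliar with the technique, here is a purely illustrative Python sketch of tokenization, replacing the PAN with an inert surrogate and keeping the mapping in a separate vault. It is a toy, not a compliant or production implementation, and the in-memory vault is an assumption made only to keep the example self-contained:

```python
import secrets

# Toy in-memory "vault"; a real tokenization service keeps this mapping in a
# hardened, separately scoped system (illustrative assumption only).
_vault = {}

def tokenize(pan: str) -> str:
    """Replace a primary account number with an inert, random token."""
    token = "tok_" + secrets.token_hex(16)
    _vault[token] = pan
    return token

def detokenize(token: str) -> str:
    """Recover the PAN; only the vault owner should ever be able to do this."""
    return _vault[token]

# The application and its cloud environment store and pass around only the
# token, which helps keep the real PAN out of PCI scope.
token = tokenize("4111111111111111")
print(token)             # e.g. tok_3f9c...
print(detokenize(token))
```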

Of course, any option takes time to implement. So consider in the short term what compensating controls you might be able to implement if your audit validation cycle is currently in progress. As you carve out your strategy for moving forward, implement those short-term compensating controls and have them at the ready. And remember: It’s much easier if you start this planning on your own rather than waiting for the issue to get flagged in your audit cycle; planning ahead and recognizing where you may have issues is time well spent.

Ed Moyle is a senior security strategist with Savvis as well as a founding partner of Security Curve.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com, another TechTarget publication.


<Return to section navigation list>

Cloud Computing Events

CloudTweaks reported VMworld® 2011, the Leading Virtualization and Cloud Computing Event of the Year will occur on 8/29 through 9/1/2011 at The Venetian hotel in Las Vegas NV:

imageWith more than 27,000 attendees expected worldwide, VMworld 2011 will spotlight VMware and the industry’s commitment to virtualization, cloud computing and the transformation to IT as a Service.

imageThis year’s VMworld theme, “Your Cloud, Own it,” emphasizes the powerful role attendees play in designing clouds that meet the specific business needs and demands of their companies – whether it’s private, hybrid or public cloud.

VMworld offers attendees informative deep-dive technical sessions and hands-on labs training, plus access to a broad set of technology and cloud partners gathering in Las Vegas and Copenhagen. Attendees will share and gain practical knowledge around virtualization best practices, building a private cloud, leveraging the public cloud, managing desktops as a service, virtualizing enterprise applications and more.

VMworld 2011 Registration
To register to attend VMworld 2011, please visit www.vmworld.com.

About VMware
VMware delivers virtualization and cloud infrastructure solutions that enable IT organizations to energize businesses of all sizes. With the industry leading virtualization platform – VMware vSphere® – customers rely on VMware to reduce capital and operating expenses, improve agility, ensure business continuity, strengthen security and go green. With 2010 revenues of $2.9 billion, more than 250,000 customers and 25,000 partners, VMware is the leader in virtualization which consistently ranks as a top priority among CIOs. VMware is headquartered in Silicon Valley with offices throughout the world and can be found online at www.vmware.com.

Source: VMware


The ISC Cloud’11 Conference on high-performance computing will be held 9/26 to 9/27/2011 at the Dorint Hotel in Mannheim, Germany. From HPC in the Cloud:

imageNowadays high performance computing (HPC) is increasingly moving into the mainstream. With commodity off-the-shelf hardware and software, and thousands of sophisticated applications optimized for parallel computers, every engineer and scientist today can perform complex computer simulations on HPC systems -- small and large.

However, the drawback in practice is that these systems are monolithic silos, application licenses are expensive and they are often either not fully utilized, or they are overloaded. Besides, there is a long procurement process and a need to justify the expenses including space, cooling, power, and management costs that go into setting up an HPC cluster.

With the rise of cloud computing, this scenario is changing. Clouds are of particular interest with the growing tendency to outsource HPC, increase business and research flexibility, reduce management overhead, and extend existing, limited HPC infrastructures. Clouds reduce the barrier for service providers to offer HPC services with minimum entry costs and infrastructure requirements. Clouds also allow service providers and users to experiment with novel services and reduce the risk of wasting resources.

Rather than having to rely on a corporate IT department to procure, install and wire HPC servers and services into the data center, there is the notion of self-service, where users access a cloud portal and make a request for servers with specific hardware or software characteristics, and have them provisioned automatically in a matter of minutes. When no longer needed, the underlying resources are put back into the cloud to service the next customer.

This idea of disposable computing dramatically reduces the barrier for research and development! Because of its utilitarian usage model, clouds will surely revolutionize how HPC is applied and make it genuinely mainstream.

The ISC Cloud'11 conference will help you to understand all the details of this massive trend. The conference will focus on compute and data-intensive applications, their resource needs in the cloud, and strategies on implementing and deploying cloud infrastructures. It will address members of the HPC community, especially decision makers in small, medium, and large enterprises and in research (chief executives, IT leaders, project managers, senior scientists, and so on). Speakers will be world-renowned experts in the field of HPC and cloud computing. They will, undoubtedly present solutions, guidelines, case studies, success stories, lessons learned and recommendations to all attendees.

The remarkable success of the first international ISC Cloud'10 Conference held last October in Frankfurt, Germany, has motivated ISC Events to continue this series and organize a similar cloud computing conference this year, with an even more profound focus on the use of clouds for HPC.

Following the recommendations of last year's participants, this year, we will be inviting more expert speakers with real end-user hands-on experiences reporting on advanced topics, thus providing all attendees insightful details. This conference is highly valuable for members of the HPC community who want to understand this massive trend and mainstream HPC. For sponsors, this year, we have an additional goody: table-top exhibition space!

The ISC Events team and I look forward to welcoming you. For more information visit http://www.isc-events.com/cloud11/.

Source: Wolfgang Gentzsch, ISC Cloud General Chair

Microsoft is a sponsor.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Jeff Barr (@jeffbarr) described Additional CloudWatch Metrics for Amazon SQS and Amazon SNS in a 7/21/2011 post:

imageI spent yesterday morning working in a coffee shop while waiting to have an informal discussion with a candidate for an open position. From my vantage point in the corner I was able to watch the shop's "processing pipeline" in action. There were three queues and three types of processing!

The customers were waiting to place an order, waiting to pay, or waiting for their coffee.

The employees functioned as order takers, cashiers, or baristas.

It was fascinating to watch this dynamically scaled system in action. Traffic ebbed and flowed over the course of the three hours that I spent in my cozy little corner. The line of people waiting to place an order grew from one person to twenty people in just a few minutes. When things got busy, the order taker advanced through the line, taking orders so that the barista(s) could get a head start. The number of baristas varied from one to three. I'm not sure what was happening behind the scenes, but it was clear that they could scale up, scale down, and reallocate processing resources (employees) in response to changing conditions.

You could implement a system like this using the Amazon Simple Queue Service. However, until now, there was no way to scale the amount of processing power up and down as the number of items in the queue varied.
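As a hedged sketch of that producer/consumer pattern, here is a minimal example using the boto3 SDK (which postdates this post); the queue name and message body are assumptions chosen to mirror the coffee-shop analogy:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="coffee-orders")["QueueUrl"]

# Order taker: enqueue each customer's order.
sqs.send_message(QueueUrl=queue_url, MessageBody="double espresso, table 7")

# Barista: poll for work, process it, then delete the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                           WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("making:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```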

We've added some additional Amazon CloudWatch metrics to make it easier to handle this particular case. The following metrics are now available for each SQS queue (all at 5 minute intervals):

  • NumberOfMessagesSent
  • SentMessageSize
  • NumberOfMessagesReceived
  • NumberOfEmptyReceives
  • NumberOfMessagesDeleted
  • ApproximateNumberOfMessagesVisible
  • ApproximateNumberOfMessagesNotVisible

We have also added the following metrics for each Amazon SNS topic, also at 5 minute intervals:

  • NumberOfMessagesPublished
  • NumberOfNotificationsDelivered
  • NumberOfNotificationsFailed
  • PublishSize

imageYou can create alarms on any of these metrics using the AWS Management Console and you can use them to drive Auto Scaling actions. You can scale up when ApproximateNumberOfMessagesVisible starts to grow too large for one of your SQS queues, and scale down once it returns to a more reasonable value. You can also watch NumberOfEmptyReceives to make sure that your application isn't spending too much of its time polling for new messages. A rapid increase in the value of ApproximateNumberOfMessagesNotVisible could indicate a possible bug in your code. Depending on your application, you could also watch NumberOfMessagesSent (SQS) or NumberOfMessagesPublished (SNS) to make sure that the application is still healthy. Here is how all of the pieces (an SQS queue, its metrics, CloudWatch, Auto Scaling, and so forth) fit together:

You can read more about these features in the newest version of the CloudWatch Developer Guide.
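To see how one of these metrics might drive a scaling action in practice, here is a minimal sketch using the boto3 SDK (which postdates this post); the queue name, threshold and scaling-policy ARN are illustrative assumptions rather than anything from the announcement:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the backlog of visible messages on the queue grows too large;
# the alarm action points at a hypothetical Auto Scaling scale-out policy.
cloudwatch.put_metric_alarm(
    AlarmName="coffee-orders-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "coffee-orders"}],
    Statistic="Average",
    Period=300,                  # these SQS metrics are emitted at 5-minute intervals
    EvaluationPeriods=2,
    Threshold=100,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:...:scalingPolicy:...:policyName/scale-out"],
)
```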


Matthew Weinberger (@MattNLM) reported VMware Q2 2011 Earnings Increase 37 Percent Over Q2 2010 in a 7/21/2011 post to the TalkinCloud blog:

imageVirtualization provider and Talkin’ Cloud Stock Index member VMware is seeing its cloud investments pay off as the company was able to report a 37 percent year-over-year boost in earnings for the second quarter of 2011 – that’s $921 million, for those keeping track at home.

imageThe company’s press release has the financial nitty-gritty. But here are a few standout statistics from that statement:

  • U.S. revenues for Q2 2011 grew 35 percent to $450 million from Q2 2010
  • License revenues for Q2 2011 were $465 million, an increase of 44 percent from the Q2 2010 as reported and an increase of 40 percent measured in constant currency
  • Service revenues, which include software maintenance and professional services, came in at $456 million, an increase of 30 percent from Q2 2010

imageFor the year, VMware is expecting annual revenues to be between $3.65 billion and $3.75 billion, which hovers around a 30 percent boost from 2010. That’s in spite of an expected slight drop in third-quarter revenues.

So what does VMware credit with its success? Well, despite the company’s investment in desktop virtualization, it’s the VMware vFabric 5 cloud and virtualization application platform, HP cloud partnerships and vSphere 5 cloud infrastructure solution unveiling that got all the ink. And that’s not to mention the hype VMware had for its recent SaaS acquisitions including presentation tool Sliderocket and IT management platform Shavlik.

Strong figures, to be sure — it’s going to be an interesting Talkin’ Cloud Stock Index this week. But VMware is facing stiff competition from the likes of Citrix, which recently acquired Cloud.com and its CloudStack platform to compete on the cloud infrastructure layer.

Read More About This Topic

Matthew Weinberger (@MattNLM) announced VMTurbo Launches Cloud Operations Manager in a 7/21/2011 post to the TalkinCloud blog:

imageVMTurbo, which specializes in addressing the optimization and control of the virtual data center, has announced the general availability of the VMTurbo Cloud Operations Manager, designed to orchestrate across entire layers of the cloud stack, from services to infrastructure. And IaaS provider 6fusion has already signed on to integrate it into its multiple data centers across the globe.

imageThe VMTurbo Cloud Operations Manager is based on the free VMTurbo Community Edition, which already provides infrastructure management, problem detection, performance reporting/alerting and capacity reporting/alerting. But here’s the fact sheet on what’s exclusive to this premium version:

  • Grouping support for standard groups (data centers, clusters, storage tiers, folders) as well as custom groups
  • Workload placement policies
  • Policy driven corrective action execution
  • Email/SNMP notification
  • Workload service levels
  • Resource analysis settings
  • Storage configuration settings
  • Active directory support
  • REST/Perl API

imageThe press release also affirmed VMTurbo Cloud Operations Manager is “geared to manage multiple virtual centers, multiple hypervisors and provides multi-tenancy customer-scoped views.” If that sounds like a good fit for managing your complex cloud infrastructure, VMTurbo Cloud Operations Manager is priced at $49 per socket or $9 per VM.

As far as that 6fusion integration, the metered cloud service provider promises VMTurbo Cloud Operations Manager will make it easier for customers and partners to manage and scale their virtualized infrastructures.

Read More About This Topic

Hector Gonzalez, Alon Halevy, Christian S. Jensen, Anno Langen, Jayant Madhavan, Rebecca Shapley and Warren Shen coauthored a Google Fusion Tables: Data Management, Integration and Collaboration in the Cloud paper released as a *.pdf file on 7/20/2011. From the Abstract:

Google Fusion Tables is a cloud-based service for data management and integration. Fusion Tables enables users to upload tabular data files (spreadsheets, CSV, KML), currently of up to 100MB. The system provides several ways of visualizing the data (e.g., charts, maps, and timelines) and the ability to filter and aggregate the data. It supports the integration of data from multiple sources by performing joins across tables that may belong to different users. Users can keep the data private, share it with a select set of collaborators, or make it public and thus crawlable by search engines.

The discussion feature of Fusion Tables allows collaborators to conduct detailed discussions of the data at the level of tables and individual rows, columns, and cells. This paper describes the inner workings of Fusion Tables, including the storage of data in the system and the tight integration with the Google Maps infrastructure.

image

Check out the Google Fusion Table API and sample code for a variety of languages (not including C# or VB) at Google Code.

Fusion Tables appear to be related/competitive to the Windows Azure Marketplace DataMarket and OData. Google Public Data Explorer might be based on Fusion Tables or a related technology. Unlike OData, the service and software to access it are proprietary to Google; see the Google Fusion Tables API Terms of Service.


Joe Panettieri reported SUSE Linux Gears Up for Cloud Strategy in a 7/20/2011 post to the TalkinCloud blog:

SUSE Linux, now owned by Attachmate, is preparing a cloud strategy. That’s good news for partners that have remained loyal to SUSE amid the transition from Novell to Attachmate. But SUSE will also face plenty of challenges in the cloud.

Nils Brauckmann, president and GM of Attachmate’s SUSE Business Unit, expects to announce a comprehensive cloud strategy in the next 60 to 90 days, according to ZDNet. In some ways SUSE is well suited for the cloud: Linux is a de facto standard for many cloud deployments, and SUSE has close relationships with IBM and VMware. In fact, VMware was rumored to be among the bidders for Novell’s SUSE business, but Attachmate eventually acquired all of Novell.

So far, the SUSE cloud effort has included support for Amazon Web Services, SUSE Studio to help software developers deliver services, and SUSE Manager for Linux management. SUSE also works closely with the KVM and Xen virtualization technologies.

Still, SUSE lacks an all-encompassing cloud rallying cry. More than a year ago, Microsoft CEO Steve Ballmer announced the company’s “all in” cloud initiative, and Microsoft updated that rallying cry during the Microsoft Worldwide Partner Conference 2011 (WPC11), held in July. Meanwhile, Red Hat has been evangelizing multiple cloud efforts, including OpenShift (platform as a service), and will pitch its cloud strategy to partners during its North America Partner Conference (Oct. 25-27, Miami).

SUSE’s cloud strategy will require a channel partner component, and SUSE is still working to decouple its partner program from Novell’s. As of this writing, SUSE’s partner database was still hosted on Novell’s web site.

It’s a safe bet that SUSE will pitch its cloud vision during the BrainShare conference (Oct. 10-14, Salt Lake City, Utah). Under Novell’s ownership, BrainShare had been a Novell-centric conference, but this year’s content will cover each of Attachmate’s business divisions: Novell, NetIQ and SUSE.


Derrick Harris (@derrickharris) posted OpenStack turns 1. What's next? to Giga Om’s Structure blog on 7/18/2011:

OpenStack, the open-source cloud-computing software project founded by Rackspace and NASA, celebrates its first birthday tomorrow. It has been a busy year for the project, which appears to have grown much faster than even its founders expected. A year in, OpenStack is still picking up steam and looks not only like an open-source alternative to Amazon Web Services and VMware vCloud in the public Infrastructure-as-a-Service space, but also like a democratizing force in the private-cloud software space.

Let’s take a look at what happened in the past year (at least what we covered) and what to expect in the year to come.

OpenStack year one

July

October

January

February

  • Feb. 3: OpenStack releases “Bexar” code and adds new corporate contributors, including Cisco.
  • Feb. 10: Rackspace buys Anso Labs, a software contractor that wrote Nova, the foundation of OpenStack Compute, for NASA’s Nebula cloud project.

March

April

May

July

Although the new code, contributors and ecosystem players came fast and furious, OpenStack wasn’t without controversy over its open-source practices. Some contributors were concerned about the amount of control that Rackspace maintains over the project, which led to changes in the voting and board-selection process. Still, momentum was overwhelmingly positive, with even the federal government reportedly looking seriously at OpenStack as a means of achieving one of its primary goals: cloud interoperability.

What’s next

According to OpenStack project leader Jonathan Bryce, the next year for OpenStack will likely be defined by the creation of a large ecosystem. That means more software vendors selling OpenStack-based products (he said Piston is only the first announced startup to get funding) as well as more implementations. Aside from public clouds built on OpenStack, Bryce also thinks there will be dozens of publicly announced private clouds built atop the OpenStack code. Ultimately, it’s a self-sustaining cycle: more users mean more software and services, which mean more users.

There’s going to be competition, he said, but that’s a good thing for the market because everyone will be pushing to make OpenStack itself better. The more appealing the OpenStack source code looks, the more potential business for Rackspace, Citrix, Piston, Dell, Internap and whoever else emerges as a commercial OpenStack entity.

If this comes to fruition, it’ll be a fun year to cover cloud computing and to watch whether OpenStack can actually deliver on its lofty goal of upending what has been, until now, a very proprietary marketplace.

Image courtesy of Flickr user chimothy27.


<Return to section navigation list>
