Friday, February 11, 2011

Windows Azure and Cloud Computing Posts for 2/10/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 2/12/2011 with a link to my SearchCloudComputing article marked

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Avkash Chauhan explained Uploading blobs using UploadFromStream to Azure Storage and using ResponseReceivedEventArgs to track the upload progress in a 2/10/2011 post:

While uploading streams of various sizes using UploadFromStream and using ResponseReceivedEventArgs to track the HTTP status code, you will see the following behavior:

1) For a blob whose size is less than (or equal to) 32 MB, the library will send it in one piece.

2) For a blob whose size is bigger than 32 MB, the library will chunk it into blocks (maximum size 4 MB) to send.

After spending some time I was able to get ResponseReceivedEventArgs to track how much data has been uploaded. The code snippet is below:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using System.Diagnostics;
using System.Data.Services.Common;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Net.Security;
using System.Threading;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                string accountSettings = "";
                CloudStorageAccount storageAccount = CloudStorageAccount.Parse(accountSettings);
                CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
                CloudBlobContainer container = blobStorage.GetContainerReference("test");
                container.CreateIfNotExist();

                string blobName = "testBlob";
                var blob = container.GetBlockBlobReference(blobName);
                MemoryStream randomDataForPut = RandomData(33 * 1024 * 1024);

                long totalBytes = 0;
                blobStorage.ResponseReceived += new EventHandler<ResponseReceivedEventArgs>((obj, responseReceivedEventArgs) =>
                {
                    if (responseReceivedEventArgs.RequestUri.ToString().Contains("comp=block&blockid"))
                    {
                        totalBytes += Int64.Parse(responseReceivedEventArgs.RequestHeaders["Content-Length"]);
                    }
                });

                blob.UploadFromStream(randomDataForPut);
            }
            catch (Exception e)
            {
                System.Diagnostics.Trace.TraceError(e.ToString());
            }
        }

        public static MemoryStream RandomData(long length)
        {
            var result = new MemoryStream();
            Random r = new Random();
            for (long i = 0; i < length; i++)
                result.WriteByte((byte)(r.Next(256)));
            result.Position = 0;
            return result;
        }
    }
}
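If you need different thresholds from the 32 MB single-request limit and 4 MB block size described above, the 1.x StorageClient library exposes tuning properties on CloudBlobClient. The sketch below is only a hint, not part of Avkash's sample: the property names are from the 1.x SDK as I recall them, and the 8 MB / 1 MB values are arbitrary examples, so verify them against your SDK version.

// Hypothetical tuning sketch (same Microsoft.WindowsAzure.StorageClient namespaces as above):
// adjusts the thresholds that drive the single-request vs. block-upload behavior.
public static CloudBlobClient CreateTunedBlobClient(string accountSettings)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(accountSettings);
    CloudBlobClient client = account.CreateCloudBlobClient();

    // Blobs at or below this size are sent with a single Put Blob request.
    client.SingleBlobUploadThresholdInBytes = 8 * 1024 * 1024;   // 8 MB instead of the default 32 MB

    // Block size used when the library falls back to Put Block / Put Block List.
    client.WriteBlockSizeInBytes = 1 * 1024 * 1024;              // 1 MB blocks instead of the default 4 MB

    return client;
}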


<Return to section navigation list> 

SQL Azure Database and Reporting

Mark Kromer (@mssqldude) described What Makes SQL Azure Compelling? in a 2/10/2011 post:

What makes the Cloud-based version of SQL Server, SQL Azure, compelling? Part 1 …

I am going to have to do this in parts because there are different IT & business roles that will see different value in SQL Azure based on the aspects of their jobs that are affected by a move from on-premises to Cloud. The change can be minimal in some cases or dramatic in others.

Let’s start with the primary role interacting with SQL Server databases: DBAs.

Compelling features:

  1. Quick & easy to provision a new database. By going to your Windows Azure portal, you can request a new database of varying max size, up to 50 GB and have a new database ready in minutes.
  2. Costs. You pay monthly, as if your database were a utility. If you average a 10 GB database size in SQL Azure, you pay that rate.
  3. Elasticity. Now, if you start growing to 20 GB and 30 GB, your database is just fine in SQL Azure. You now step-up to the price points for those larger database sizes.
  4. Easy to migrate. There are tools on CodePlex, and a new version of Data Sync is coming, that let you integrate and sync data (much like replication) seamlessly between the cloud and on-premises SQL Server.

Things to be aware of – differences from SQL Server:

  1. You do not have control of the system & instance levels of SQL Server in SQL Azure. This is not the same as standing up a box or a VM with SQL Server. You get a database that is part of PaaS (Platform as a Service) in a Microsoft data center.
  2. Backup & recovery is not the same. There is a database copy feature that you can schedule to create “snapshots” of the database that are copies. Backups are built-in through the fact that 3 replicas of your database are constantly maintained by the Azure infrastructure.
  3. High availability is through database replicas and the Azure infrastructure. You get the same replicas and the same infrastructure for each database. You do not set-up mirroring, log shipping, clustering, etc.
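To make the database copy feature in point 2 concrete, the copy is started with plain T-SQL against the server's master database. The following is a minimal ADO.NET sketch rather than anything from Mark's post; the server, database and credential names are placeholders.

using System.Data.SqlClient;

class SqlAzureDatabaseCopy
{
    static void Main()
    {
        // Placeholder connection string: connect to the master database of your SQL Azure server.
        const string masterConnection =
            "Server=tcp:yourserver.database.windows.net;Database=master;" +
            "User ID=yourlogin@yourserver;Password=yourpassword;Encrypt=True;";

        using (var connection = new SqlConnection(masterConnection))
        {
            connection.Open();

            // CREATE DATABASE ... AS COPY OF starts an asynchronous, transactionally
            // consistent copy of the source database.
            using (var command = new SqlCommand(
                "CREATE DATABASE MyDb_Copy AS COPY OF MyDb", connection))
            {
                command.ExecuteNonQuery();
            }

            // Copy progress can be monitored by querying sys.dm_database_copies in master.
        }
    }
}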

That is not everything there is to know, just a few pointers to get you started. I’ll move on to developers next in part 2 …


Liran Zelkha analyzed Database Isolation Levels And Their Effects on Performance and Scalability in a 2/10/2011 post to the High Scalability blog:

Some of us are not aware of the tremendous job databases perform, particularly their efforts to maintain the Isolation aspect of ACID. For example, some people believe that transactions are only related to data manipulation and not to queries, which is an incorrect assumption. Transaction Isolation is all about queries, and the consistency and completeness of the data retrieved by queries. This is how it works:

Isolation gives the querying user the feeling that he owns the database. It does not matter that hundreds or thousands of concurrent users work with the same database and the same schema (or even the same data). These other users can generate new data, modify existing data or perform any other action. The querying user must be able to get a complete, consistent picture of the data, unaffected by other users’ actions.

Let’s take the following scenario, which is based on an Orders table that has 1,000,000 rows, with a disk size of 20 GB:

  1. 8:00: UserA started a query “SELECT * FROM orders”, which queries all the rows of the table. In our scenario, this query usually takes approximately five minutes to complete, as the database must fully scan the table’s blocks from start to end and extract the rows. This is called a FULL TABLE SCAN query, and is not recommended from a performance perspective.
  2. 8:01: UserB updates the last row in the Orders table, and commits the change.
  3. 8:04: UserA’s query process arrives at the row modified by UserB. What will happen?

Any guess? Will UserA get the original row value or the new row value? The new row value is legitimate and committed, but it was updated after UserA’s query started.

The answer is not clear cut, and depends on the isolation level of the transaction. There are four isolation levels, as follows (see more information at http://en.wikipedia.org/wiki/Isolation_(database_systems)):

  1. READ UNCOMMITTED: UserA will see the change made by UserB. This isolation level is called dirty reads, which means that read data is not consistent with other parts of the table or the query, and may not yet have been committed. This isolation level ensures the quickest performance, as data is read directly from the table’s blocks with no further processing, verifications or any other validation. The process is quick and the data is as dirty as it can get.
  2. READ COMMITTED: UserA will not see the change made by UserB. This is because in the READ COMMITTED isolation level, the rows returned by a query are the rows that were committed when the query was started. The change made by UserB was not present when the query started, and therefore will not be included in the query result.
  3. REPEATABLE READ: UserA will not see the change made by UserB. This is because in the REPEATABLE READ isolation level, the rows returned by a query are the rows that were committed when the transaction was started. The change made by UserB was not present when the transaction was started, and therefore will not be included in the query result.
    This means that “All consistent reads within the same transaction read the snapshot established by the first read” (from MySQL documentation. See http://dev.mysql.com/doc/refman/5.1/en/innodb-consistent-read.html).
  4. SERIALIZABLE: This isolation level specifies that all transactions occur in a completely isolated fashion, meaning as if all transactions in the system were executed serially, one after the other. The DBMS can execute two or more transactions at the same time only if the illusion of serial execution can be maintained.
    In practice, SERIALIZABLE is similar to REPEATABLE READ, but uses a different implementation for each database engine. In Oracle, the REPEATABLE READ level is not supported and SERIALIZABLE provides the highest isolation level. This level is similar to REPEATABLE READ, but InnoDB implicitly converts all plain SELECT statements to “SELECT … LOCK IN SHARE MODE”.

The default isolation level in MySQL’s InnoDB is REPEATABLE READ, which provides a rather high isolation. Consider again this key sentence for the REPEATABLE READ isolation level: “All consistent reads within the same transaction read the snapshot established by the first read”.
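To make the levels concrete, here is a minimal ADO.NET sketch of how a client chooses the isolation level for a transaction. The connection string and table are placeholders, and it is written against SQL Server for consistency with the rest of this post; MySQL exposes the same levels through SET TRANSACTION ISOLATION LEVEL. Which rows UserA sees in the scenario above depends on the IsolationLevel value passed here.

using System;
using System.Data;
using System.Data.SqlClient;

class IsolationLevelDemo
{
    static void Main()
    {
        // Placeholder connection string.
        const string connectionString = "Server=.;Database=Sales;Integrated Security=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // UserA's long-running query: the isolation level chosen here decides whether
            // UserB's committed mid-query update becomes visible to the reader.
            using (SqlTransaction transaction =
                connection.BeginTransaction(IsolationLevel.RepeatableRead))
            {
                using (var command = new SqlCommand("SELECT * FROM Orders", connection, transaction))
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    long rowCount = 0;
                    while (reader.Read())
                    {
                        rowCount++;
                    }
                    Console.WriteLine("Rows read: {0}", rowCount);
                }

                // Read-only work, so committing simply ends the transaction.
                transaction.Commit();
            }
        }
    }
}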

Since old values of row data are required for current queries, databases use a special segment to store old row values and snapshots. MySQL calls this segment a Rollback Segment. Oracle once called it this as well, but now calls it an Undo Segment. The premise is the same.

During query execution, each row is examined and if it is found to be too new, an older version of this row is extracted from the rollback segment to comprise the query result. This examination‑lookup‑comprise action chain takes time to complete, resulting in a performance penalty. It also produces a snowball effect. Updates occur during a query, which makes that query slower so that it takes more time. During the time it takes to process the query, more updates come in, making query execution time even longer!

This is why a query that executes in 10 seconds in our testing environment may take a full 10 minutes to execute in a much stronger production environment.

You can read the entire post here - http://www.scalebase.com/isolation-levels-in-relational-databases/

According to Bob Beauchemin in a Transactions, isolation, and SQL Azure post of 2/11/2011:

[The] SQL Azure FAQ … says "snapshot isolation" is the default. Technically it's "read committed snapshot" (known also as "statement-level snapshot") that's the default, although the SQL Server snapshot isolation level (known as "transaction-level snapshot") is available and works as advertised.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Nick J. Trogh reported Dynamics CRM 2011 Developer Training Kit Available in a 2/11/2011 post:

On January 17 we launched Dynamics CRM Online 2011 worldwide, and today we’re making available the Dynamics CRM 2011 Developer Training Kit.  It’s a great collection of materials that allows .NET developers to learn the development features of Dynamics CRM and helps them build applications using Microsoft Dynamics CRM 2011 and CRM Online.

Download the training kit here: Dynamics CRM 2011 Developer Training Kit.

The training kit includes various resources such as

  • Presentations - Presentation decks in PowerPoint (.pptx) format that you can use to learn the concepts.
  • Videos - Video recordings of the presentation along with demos delivered by expert trainers.
  • Hands-on Labs - Hands-on labs with detailed instructions and source code that will walk you through the various development features.

No prior Dynamics CRM experience is required to go through this training kit. Familiarity with the .NET Framework, Microsoft Visual C#, Microsoft JScript, Microsoft SQL Server and general Web development is recommended.

What topics does this kit cover?

  • Introduction
  • Solutions
  • User Experience Extensibility
  • Visualizations and Dashboards
  • WCF Web Services
  • LINQ and OData
  • Plugins [Emphasis added.]
  • Processes
  • Client Programming
  • Silverlight
  • SharePoint & CRM
  • Windows Azure & CRM
  • Upgrading from CRM 4.0 to CRM 2011
  • Dynamics Marketplace

What’s Next?

  1. The hands on labs will be upgraded to RTM build as soon as CRM 2011 RTM is made generally available.
  2. The training kit will be published as a training course on MSDN for easy consumption.
  3. We’re looking to add new modules on accessing CRM Online from Java & PHP for non-.NET devs.

We look forward to seeing your apps on the Dynamics Marketplace.


Marcelo Lopez Ruiz (@mlrdev) offered a Short datajs walk-through in a 2/10/2011 post:

For today's post, I simply want to give you a walk-through on how to create a web application that uses datajs from scratch. We'll be doing the whole thing - server database, middle-tier model, OData service, and web pages, so now would be a good time to grab some coffee.

To begin, I'll create a new database with some sample data just from the command prompt, which is easier than explaining in a blog how to go through the IDE motions.

C:\>sqlcmd -E -S .\SQLEXPRESS
CREATE DATABASE DjsSimpleDb;
GO
USE DjsSimpleDb;
GO
CREATE TABLE Comments (
  CommentId INT IDENTITY(1, 1) PRIMARY KEY,
  Author NVARCHAR(128) NOT NULL DEFAULT '(unknown)',
  CommentText NVARCHAR(MAX) NOT NULL
);
GO
INSERT INTO Comments (Author, CommentText)
VALUES ('Marcelo', 'Life is like a chocolate value type held in a reference type.');
INSERT INTO Comments (Author, CommentText)
VALUES ('Marcelo', 'My last comment was pretty awful.');
INSERT INTO Comments (Author, CommentText)
VALUES ('Asad', 'See the kind of stuff I have to put up with?');
GO

Now, create a new web application project, DjsSimple.

Add a new ADO.NET Entity Data Model, and generate one from the DjsSimpleDb database. I'm naming the item DjsSimpleModel.edmx.

Next, I'm adding a WCF Data Service and naming it DjsSimpleService.svc, and touching it up as follows.

using System.Data.Services;
using System.Data.Services.Common;

namespace DjsSimple
{
    public class DjsSimpleService : DataService<DjsSimpleDbEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("Comments", EntitySetRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }
}

Great, now I will drag the datajs-0.0.1.js file to the Scripts folder under the DjsSimple folder in the Solution Explorer window.

Next, it's time to create our sample application. All we will do is show existing comments and allow comments to be posted. Here is all the code required for Default.aspx.

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true"
    CodeBehind="Default.aspx.cs" Inherits="DjsSimple._Default" %>
<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
<style>
.error-area { border: 1px solid red; }
</style>
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
<h2>Welcome to my comments app!</h2>
<div><button id="refreshCommentsButton">Refresh Comments</button></div>
<div><button id="addCommentButton">Add a new comment</button></div>
<div>Author: <br /><input id="authorBox" type="text" value="Me" /></div>
<div>Comment: <br /><input id="commentBox" type="text" value="My witty comment." /></div>
<div id="commentsArea"></div>
<script src="Scripts/jquery-1.4.1.js" type="text/javascript"></script>
<script src="Scripts/datajs-0.0.1.js" type="text/javascript"></script>
<script type="text/javascript">
OData.defaultError = function (err) {
    $("button").attr("disabled", false);
    $("#commentsArea").addClass("error-area").text(err.message);
};
function simpleHtmlEscape(value) {
if (!value) return value;
return value.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}
function refreshComments() {
    $("button").attr("disabled", true);
    OData.read("DjsSimpleService.svc/Comments", function (data) {
        $("button").attr("disabled", false);
var text = "Comments:";
        $.each(data.results, function () {
            text += "<br /> " +
                "<b>" + simpleHtmlEscape(this.Author) + "</b>: " +
                simpleHtmlEscape(this.CommentText);
        });
        $("#commentsArea").removeClass("error-area").html(text);
    });
}
function addComment() {
    $("button").attr("disabled", true);
var request = {
        method: "POST",
        requestUri: "DjsSimpleService.svc/Comments",
        data: { Author: $("#authorBox").val(), CommentText: $("#commentBox").val() }
    }
    OData.request(request, function (data) {
        $("#commentsArea").removeClass("error-area").html("Comment posted, refreshing...");
        refreshComments();
    });
}
$(function () {
    $("#refreshCommentsButton").click(function (ev) {
        refreshComments();
        ev.preventDefault();
    });
    $("#addCommentButton").click(function (ev) {
        addComment();
        ev.preventDefault();
    });
});
</script>
</asp:Content>

Let's look at this bit by bit. The HeaderContent area is used by the ASP.NET web site template to hook up content for the document head, so we'll include the CSS for error messages there. We can also include scripts here, but I've decided to put them at the bottom of the page, to allow it to render faster.

In the actual content, we start with some HTML that presents the user interface for the page. There's a button to refresh comments, and one to add comments, with an input box for the author name and comment text each. Finally, there's an area where we will display comments, or an error message, if desired.

After the user interface, we pull in jQuery and datajs, and then we start the code for this web page.

The first function we see is assigned to OData.defaultError, and simply sets up the error handler that is used unless we specify one for each operation. Many web pages have a simple error handling strategy, commonly displaying an error message and resetting some state (in our case, we make sure our buttons are enabled). OData.defaultError gives you a convenient place to define this handler.

Next we write up simpleHtmlEscape, which is used to exemplify a good practice: whenever you have something that is supposed to be plain text and you're assigning it to an innerHTML property (or using jQuery's .html() function), make sure to escape it so that the data can't be "poisoned" to include unwanted HTML tags or even scripts.

After this, we have two functions that make up the interesting functionality. The first one reads all comments, builds up an HTML string, and assigns it to the resulting area. Notice that we simply iterate over the results of the array. We could have used some form of templating, but I didn't want to introduce yet another technique to this sample.

The second function creates a request to POST a new comment to the service, and then kicks off a refresh. Note that we simply create an object that looks like what we'd like the server to see and assign it to the 'data' property of the request we build - datajs takes care of making sure things look right for the server.

Note that in both functions, we take care to disable the buttons while the operation is taking place, so the user knows that there's no need to keep clicking, and to avoid re-entrancy problems where we might fire multiple requests by accident.

Finally, we simply wire our functions to the buttons, and make sure that FORM postbacks don't fire for ASP.NET.

Hit F5 to start the browser on the default web page, look at the initial comments, add some more, and play around with the code. If you run into any problems, just drop me a comment here.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Eugenio Pace (@eugenio_pace) posted ACS as a Federation Provider–Home Realm Discovery Part 2 on 2/10/2011:

In my previous post, I had a question for all you: [Link added.]

What would happen if Adatum’s FP didn’t supply ACS with the whr parameter?

An[d] the answer is: ….. ACS will simply ask the user!

[Screenshot: ACS’s default identity provider selection page]

ACS has no way (besides the whr parameter) of knowing where to go next (unless you configured your app with only 1 Identity provider I guess).

You might say that this page is ugly, and my look & feel is lost and you would be absolutely right. However, the good news is that this is just the default page. You can actually query ACS for all configured Identity Providers and build a page that will send the right whr.
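As an illustration of that approach, ACS v2 exposes the configured identity providers as a JSON feed that a custom sign-in page (or server-side code) can consume; each entry carries a LoginUrl that already includes the right whr value. The sketch below is an assumption-laden example, not code from Eugenio's post: the namespace and realm are placeholders, and the feed URL format is my recollection of the endpoint shown on the ACS Application Integration page, so verify it there.

using System;
using System.Net;

class IdentityProviderListFetcher
{
    static void Main()
    {
        // Placeholders: your ACS namespace and the realm of the relying party application.
        const string acsNamespace = "your-namespace";
        const string relyingPartyRealm = "http://localhost/a-Order/";

        // Assumed format of the ACS v2 identity provider list feed.
        string feedUrl = string.Format(
            "https://{0}.accesscontrol.windows.net/v2/metadata/IdentityProviders.js" +
            "?protocol=wsfederation&realm={1}&version=1.0",
            acsNamespace,
            Uri.EscapeDataString(relyingPartyRealm));

        using (var client = new WebClient())
        {
            // Returns a JSON array; each entry includes the provider Name and a LoginUrl
            // that already carries the appropriate whr parameter.
            string json = client.DownloadString(feedUrl);
            Console.WriteLine(json);
        }
    }
}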

Now, any ideas on question #2? (reminder: who is responsible for issuing the claims a-Order needs: role, organization, etc?)


Microsoft’s San Antonio, Texas data center reported [AppFabric Access Control] [South Central US] [Yellow] We are currently investigating a potential problem impacting Windows Azure AppFabric via RSS on 2/11/2011:

Feb 11 2011 7:11PM We are currently investigating a potential problem impacting Windows Azure AppFabric.

No further data was available on the status as of 2/11/2011 at 3:30 PM PST.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Bruce Kyle posted Azure Email-Enables Lists, Low-Cost Storage for SharePoint to Channel9 on 2/11/2011:

vorApps for SharePoint Online delivers an email-enabled information management solution leveraging the power of Microsoft SharePoint Online.

Diane Gallagher of Vorsite Corp talks to Greg Oliver, ISV Architect Evangelist, about how vorApps was built on Windows Azure and how Office 365 customers can use it with Exchange Online and SharePoint Online.

vorApps captures knowledge so it can be searched within SharePoint. Workers can create ways to securely share their inboxes. Whether team members are in the next room or on the next continent, the powerful combination of automation and storage drives business efficiencies and builds a more agile organization.

About Vorsite

Vorsite is at the forefront of a new generation of cloud-based technologies, offering customers a full spectrum of cost-effective cloud solutions and services. Focusing on hosted email, collaboration, and cloud applications, Vorsite is uniquely positioned to help businesses streamline processes and reduce IT costs. At Vorsite we deliver the right solution that addresses each customer’s unique business needs. Paired with our strong commitment to customer satisfaction and our unique qualifications as a Microsoft Gold Certified Partner, Windows Intune Blackbelt, and Microsoft Online Services Accelerated Partner, we have the experience and the skillset to bring innovation and inspiration to every opportunity.

For more information: Vorsite Cloud Solutions.

Other ISV Videos

For other videos about independent software vendors (ISVs):


Jonathan Rozenblit suggested Azure developers Know If You’ve Got Your Cloud On in a 2/10/2011 post about the Grey Box:

image If you’re like me and are actively doing work with Windows Azure, whether you’re learning how to develop Cloud-based applications, actually building an application, or just demoing Windows Azure capabilities to various audiences, you probably have one or more deployments configured in the Windows Azure Developer Portal. You start up your deployment, do your thing, and then stop your deployment so don’t get billed when you’re not actively working.  Right?

If you remember to stop your deployment every time, consider yourself a rare breed! For the rest of us who don’t always remember to do it right away and then realize in the middle of the night that we forgot, there is hope!

During my most recent perusal of CodePlex, I came across a nifty little utility written by Windows Azure MVP Michael Wood (@mikewo) called GreyBox.  GreyBox is an application that sits in your system tray and visually indicates to you whether you have any deployments running in your Windows Azure subscription.

As Michael describes in his blog post, if the box is blue, you have one or more deployments running (a.k.a. consuming compute hours). If your deployments have all been stopped, the box is grey. Simple! Now if that wasn’t useful enough, GreyBox will also alert you, on intervals that you define in the configuration, that you have deployments running. Commit this to memory - “If the cube is grey, you’re OK. If the cube is blue, a bill is due”.  Thanks for that nice and simple summary, Brian Prince.
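GreyBox's actual source is on CodePlex; purely to illustrate how such a check can work, here is a rough sketch that polls the Windows Azure Service Management API for deployment status. This is not GreyBox's code, and the subscription ID, hosted service name and certificate path are placeholders.

using System;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Xml.Linq;

class DeploymentChecker
{
    static void Main()
    {
        // Placeholders for your subscription and hosted service.
        const string subscriptionId = "your-subscription-id";
        const string hostedServiceName = "your-hosted-service";

        var request = (HttpWebRequest)WebRequest.Create(string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices/{1}?embed-detail=true",
            subscriptionId, hostedServiceName));
        request.Headers.Add("x-ms-version", "2010-10-28");

        // The management certificate uploaded to your subscription.
        request.ClientCertificates.Add(
            new X509Certificate2(@"C:\certs\management.pfx", "password"));

        using (var response = request.GetResponse())
        {
            XDocument doc = XDocument.Load(response.GetResponseStream());
            XNamespace ns = "http://schemas.microsoft.com/windowsazure";

            // Any deployment whose status is "Running" is consuming compute hours.
            foreach (var status in doc.Descendants(ns + "Deployment").Elements(ns + "Status"))
            {
                Console.WriteLine("Deployment status: {0}", status.Value);
            }
        }
    }
}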

Give GreyBox a try and hopefully, it will remind you, like it reminds me, that your deployments are still running - before you get into bed! 

Do you have an interesting or funny story about forgetting to turn off your deployments? Share your story with the community.

This post also appears in Dev Pulse.


Avkash Chauhan said “No” to Share Point 2010 and Windows Azure in a 2/10/2011 post:

I have seen lots of requests for and interest in hosting SharePoint 2010 in Windows Azure, so I decided to write some information about this topic to help everyone. As of now, SharePoint 2010 cannot be hosted as a business solution on Windows Azure and it is not a supported product for Windows Azure.

After the release of the Windows Azure VM Role beta, there was a big interest in installing SharePoint 2010 in a VM Role and hosting it within Windows Azure. The current VM Role infrastructure is not technically capable of handling SharePoint’s requirements, i.e., proper integration with SQL Azure. High availability is missing due to the fact that the VM Role is stateless and, if the VM Role goes down, you will lose all of your data. So from a research perspective you can run SharePoint 2010 on Windows Azure; however, you can’t use this solution as a business offering. If you go further and try to move your data to a Cloud Drive and use the VM Role, then you can run only one instance, because Cloud Drives can connect to only one instance with read/write access. In theory, SharePoint 2010 can run on the Windows Azure VM Role for demonstration purposes; however, this will be a conceptual demonstration of the technology, not SharePoint 2010 as a business application.

In the future there may be more announcements in this regard. However, we do not have any timeline to tell you when and how SharePoint 2010 will be supported on Windows Azure.

Because of the foregoing, if you decide to use SharePoint 2010 on Windows Azure as a backup server in one of the following configurations, note that these configurations are also not supported on Windows Azure as of now:

  • Hot standby: A second data center that can provide availability within seconds or minutes.
  • Warm standby: A second data center that can provide availability within minutes or hours.
  • Cold standby: A second data center that can provide availability within hours or days.

Edited for clarity.


The Windows Azure Team reported Now Available: Library Topics on Using the Windows Azure Diagnostics Configuration File on 2/10/2011:

New technical content is now available that explains how to use the Windows Azure diagnostics configuration file in applications. This file must be used when your application contains startup tasks that perform diagnostics operations that are not included in the default operations, or when you use the VM role type, which does not support startup tasks or the OnStart method. You can define all of the same diagnostics settings that can be set by using the Windows Azure Diagnostics APIs.

See Using the Windows Azure Diagnostics Configuration File for information about creating and installing the diagnostics configuration file.

See Windows Azure Diagnostics Configuration Schema for an element-by-element breakdown of how the diagnostics configuration file is structured.
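For context, here is a minimal sketch of the kind of settings involved, expressed with the SDK 1.3 diagnostics API as it might appear in a role's OnStart method; the configuration file exists precisely for the cases where code like this cannot run. The counter and transfer intervals below are arbitrary examples, not values from the referenced topics.

using System;
using Microsoft.WindowsAzure.Diagnostics;

public static class DiagnosticsBootstrap
{
    public static void Configure()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Transfer basic logs to storage every minute, warnings and above only.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Warning;

        // Sample a performance counter every 30 seconds and transfer it every 5 minutes.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        // Start the diagnostic monitor with the storage connection string defined
        // by the diagnostics plug-in in the service configuration.
        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    }
}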


<Return to section navigation list> 

Visual Studio LightSwitch

Dan Moyer (@danmoyer) explained How to Interface LightSwitch to a Windows Workflow Part 1 in a 2/10/2011 post:

Overview

In my last blog posts I described how to connect to a generic WCF service from LightSwitch. In this blog post I will build on the previous postings to describe how to connect to a Workflow (WF) Service.

Like my earlier posts describing how to interface to a WCF service from LightSwitch, this will also be a multiple-part posting.

This first part will describe how to create a simple WF service. There is a lot of configuration to describe for even a simple WF service, so I want to limit this first post to just focus on getting the WF service ready for deployment and testing.

The second post will describe how to deploy the WF service, create a proxy to the service, and implement a simple Console application to test the service.

The third post will describe the code to write in LightSwitch to use the WF service.

Why Windows Workflow?

I think Windows Workflow adds value to a business application. You can use a workflow to define and implement business logic for a variety of business related problems such as order processing, document management and approval, defining how to flow data through a customer contact process.

What I like about workflow is it allows you to decouple your business logic from your application.

One advantage of decoupling is you can make changes in your workflow logic without the necessity of updating your application. As long as the interfaces to your workflow remain consistent, you can make internal changes to your workflow without needing to recompile, retest, and redeploy your business application.

Another advantage of using WF is you can have a library of WF implementations used by different applications in your organization. You may have ASP.NET applications currently using a WF library. If you can reuse the business logic in your WF in another application, that’s less code you need to write and test as you migrate to another application platform such as LightSwitch.

Technology changes, and LightSwitch will be version 1.0 when it goes RTM. Will the interfaces and internal implementations change for V2.0? Will some other product based on different tooling replace what you do in LightSwitch? By decoupling business logic, you can reduce the risk of future work.

Finally, a current issue with LightSwitch is that it is difficult to perform automated testing. Business logic tends to be hard to get correct due to multiple conditions and paths which need to be verified. It tends to change based on business needs. Business logic (rules / workflow) often needs persistence, tracking, and bookmarking to resume the logic.

Putting that logic into LightSwitch becomes problematic. You can build the business logic into a DLL and test the library via automated tests. But likely, that implementation will have dependencies which preclude usage in other application environments. WF is designed to solve these kinds of problems for business logic. And testing workflows in WF is arguably easier than testing equivalent ‘home grown’ logic hosted within LightSwitch.

Defining the Workflow

At first I thought it possible to create a workflow called from LightSwitch by creating a simple WF library. I quickly found that this can’t work. The problem is that LightSwitch, being a Silverlight-based application, only allows you to run a subset of the .NET 4.0 assemblies. I found it necessary to create a Workflow Service and call that service from LightSwitch—similar to calling a vanilla WCF service.

To start off, I wanted a very simple WF example. In future posts I plan to create a more complex workflow. But for this first example, I want something easy to understand which demonstrates how to wire up a WF service and use it from LightSwitch without getting hung up in workflow complexities.

I’ve been reading Bruce Bukovics book, Pro WF Windows Workflow in .NET 4, published by Apress (ISBN 978-1-4302-2721-2) to learn more about WF.

For this example, I’m using an example workflow described in his book which calculates shipping charges on an order. Since I’m using the AdventureWorks database, which contains an OrderHeader table, this is a reasonable starting point.

I’ll call the workflow CalculateShipping. The business rules are as follows:

If the ‘normal’ shipping method is specified, the following rules apply:

  • If the total order exceeds 75.00, the calculated shipping is 0 (free shipping).
  • Else the calculated shipping is the weight * 1.95.

If the shipping method is ‘express’, the following rules apply:

  • There is always a shipping charge for an express order
  • The shipping is calculated as weight * 3.5

There is a minimum shipping charge:

  • If the shipping charge is > 0 and less than 12.5, the shipping charge is 12.50.

Data provided to the workflow to calculate the shipping charge comes from the AdventureWorks OrderHeader table. OrderHeader.ShipMethod contains the data for the Shipping Method. In the database, the table is populated with ‘CARGO TRANSPORT 5’. For expediency, I implemented the workflow to process ‘CARGO TRANSPORT 4’ as ‘normal’ and ‘CARGO TRANSPORT 5’ as ‘express’

There is no Weight field in OrderHeader, so for expediency in demonstration, I’m using the data in the Freight column as ‘weight’

OrderHeader.TotalDue provides the data for “total order”

Implementing the CalculateShipping Workflow

The first step to implementing the workflow is to create a Visual Studio 2010 Workflow Services project and call it ServiceLibrary.

[Screenshot: creating the Workflow Services project]

First I delete the ReceiveRequest and SendResponse activities in the designer.

[Screenshot: designer after deleting the default activities]

Next, under the Messaging category in the Toolbox, I drag the ReceiveAndSendReply activity onto the design surface, inside the Sequential Service activity:

[Screenshot: ReceiveAndSendReply activity inside the Sequential Service activity]

Next I rename the xamlx file from Service1.xamlx to OrderProcessing.xamlx

Next I add three class files to the project:

1. OrderProcessingRequest.cs

2. OrderProcessingResponse.cs

3. CalcShipping.cs

This is the implementation for OrderProcessingRequest:

[Screenshot: OrderProcessingRequest class]

This is the implementation for OrderProcessingResponse:

[Screenshot: OrderProcessingResponse class]

As you can see, both of these classes are simple containers.
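The class definitions above appear only as screenshots in the original post, so the following is a guess at what the two containers might look like, reconstructed from how the workflow is configured later in this walkthrough; the property names and the DataContract attributes are assumptions.

using System.Runtime.Serialization;

// Hypothetical reconstruction of the request container.
[DataContract]
public class OrderProcessingRequest
{
    [DataMember] public decimal Weight { get; set; }      // fed from OrderHeader.Freight
    [DataMember] public decimal OrderTotal { get; set; }  // fed from OrderHeader.TotalDue
    [DataMember] public string ShipVia { get; set; }      // fed from OrderHeader.ShipMethod
}

// Hypothetical reconstruction of the response container.
[DataContract]
public class OrderProcessingResponse
{
    [DataMember] public decimal CalculatedShipping { get; set; }
}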

CalculateShipping implements the business rules described earlier for calculating the shipping rate.

The implementation for the CalculateShipping activity is:

[Screenshot: CalculateShipping activity implementation]

Key points of the implementation:

1. I want CalculateShipping to be a custom code activity which returns a decimal value of the calculated shipping rate. You can do this by deriving the class from CodeActivity<decimal> as seen in Line 22 of the source.

2. This custom activity has three input parameters: Weight, OrderTotal, and ShipVia (which is the Shipping Method). These parameters are defined as InArgument<decimal> as shown in lines 24 – 26 of the source.

3. The work for this custom activity is done in the overridden Execute() method, starting at line 33.

4. To get the value of the passed in argument in a workflow, you need to use the CodeActivityContext passed as a parameter to the Execute() method. For example, to get the value of the ShipVia argument, you use ShipVia.Get(context) as seen in line 39.

5. The implementation of the business rules is straight forward– a simple switch statement, and some calculations based on the Shipping Method, Weight, and OrderTotal.

6. When the CalculateShipping activity ends, it returns the calculated shipping amount that’s contained in the result variable, at line 67.

7. On line 37, the commented Debugger.Break() is one way to easily debug the service when you deploy it to IIS. When you uncomment this line and deploy the WF service to IIS, a dialog prompting you to attach a debugger to the IIS thread pops up when the client executes the workflow.
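Pulling the key points above together, here is a hedged reconstruction of what the CalculateShipping activity might look like. The original source appears only as a screenshot, so names and line numbers will not match. Note that the post describes all three inputs as InArgument<decimal>, but because the business rules compare the shipping method against strings such as ‘CARGO TRANSPORT 5’, this sketch uses a string argument for ShipVia.

using System.Activities;

// Approximate reconstruction of the CalculateShipping custom code activity.
public class CalculateShipping : CodeActivity<decimal>
{
    public InArgument<decimal> Weight { get; set; }
    public InArgument<decimal> OrderTotal { get; set; }
    public InArgument<string> ShipVia { get; set; }

    protected override decimal Execute(CodeActivityContext context)
    {
        // Uncomment to break into a debugger when the service runs under IIS.
        // System.Diagnostics.Debugger.Break();

        decimal weight = Weight.Get(context);
        decimal orderTotal = OrderTotal.Get(context);
        string shipVia = ShipVia.Get(context);

        decimal result;
        switch (shipVia)
        {
            case "CARGO TRANSPORT 4":              // treated as "normal" shipping
                result = orderTotal > 75.00m ? 0m : weight * 1.95m;
                break;
            case "CARGO TRANSPORT 5":              // treated as "express" shipping
                result = weight * 3.5m;            // express orders are never free
                break;
            default:
                result = weight * 1.95m;           // assumption: fall back to the normal rate
                break;
        }

        // Apply the minimum shipping charge.
        if (result > 0m && result < 12.50m)
        {
            result = 12.50m;
        }

        return result;
    }
}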

With these classes defined, you should get a clean compile.

Having built the CalculateShipping custom activity as part of the project, double-click the xamlx file again to view the designer and open the Toolbox. You should see CalculateShipping now included in the ServiceLibrary category of the Toolbox. Drag and drop the CalculateShipping activity onto the designer, followed by dragging and dropping an Assign activity, located in the Primitives category of the toolbox.

Your workflow should appear similar to the following:

[Screenshot: workflow with the CalculateShipping and Assign activities]

Now that you have the classes built and the activities laid out in the designer, you need to do more configuration!

In the lower left of the designer, click the Variables tab. You need to define variables which the workflow uses as it runs the activities contained in the Sequence activity. Define Request, Response, and CalculatedShipValue variables.

The Request variable is an OrderProcessingRequest type. It contains the data received from the client, in the Receive activity.

The Response variable is an OrderProcessingResponse type. It contains the data sent back to the client by the SendReplyToReceive activity.

The CalculatedShipValue variable is a decimal type. It holds the calculated result from the CalculateShipping custom activity.

The Assign activity added after the CalculateShipping custom activity assigns the value in CalculatedShipValue variable to the Response.CalculatedShipping.

After you have defined your workflow variables, your designer should look similar to the following:

[Screenshot: workflow variables]

Some key points to note when you’re defining the variables:

  • The Response variable needs to be New’d from the OrderProcessingResponse.
  • For the decimal type, you can browse to mscorlib [4.0.0.0], then System, then scroll down to Decimal. If there’s a faster/better way of doing this, let me know. It took me a while to figure that out, and it was a PITA to get it defined as a decimal type the first time around.

Ok, now that you have the variables defined for the workflow activities, you have to configure the activities.

In the designer, click on the outermost activity. In the property window, set the DisplayName property to Sequence. (This is just so we’re on the same page if you’re configuring the service as in the archive attached to this post.)

Next, click on the Receive activity.

Set the properties as follows:

[Screenshot: Receive activity properties]

Next, click the Content text box of the Receive activity to display the Content Definition dialog. In the Message Data field, enter the name of the variable containing the data received from the client—Request.

[Screenshot: Content Definition dialog for the Receive activity]

Next, click the CalculateShipping custom activity. The property dialog contains the input and output variables of the custom activity. You need to set the input variables to the value of the data contained in Request. And set the Result (the output variable for this activity) to the variable CalculatedShipValue:

[Screenshot: CalculateShipping activity properties]

In the screen shot above, when you click the button with three dots, the designer will pop up a dialog like the following where you can enter your expression. Note that the dialog does indicate the type, such as Decimal.

[Screenshot: expression editor dialog]

Next, click the Assign activity. The Assign activity is where you copy data from the CalculatedShipValue variable set by the workflow to the Response.CalculatedShipping variable.

[Screenshot: Assign activity properties]

Finally, click the SendReplyToReceive activity and configure the Request property to get its data from the Receive variable.

[Screenshot: SendReplyToReceive activity properties]

And click the text box next to Content to display the Content Definition dialog. Set the Message Data to use the Response variable:

[Screenshot: Content Definition dialog for the reply]

Next, open the Web.Config file and verify it appears as follows:

[Screenshot: Web.config serviceDebug setting]

The default setting for serviceDebug is false. I changed it to true in my configuration file so that the client would receive fault diagnostic data during testing.

One more configuration: close the designer and open it by double clicking the xamlx file. In the property window, make sure you have the ConfigurationName and Name properties set correctly:

[Screenshot: ConfigurationName and Name properties]

That completes the implementation and configuration of the CalculateShipping WF service.

The next blog post will describe how to deploy this service, create a test Console application and generate a proxy for use by the Console test application.

Once you have the WF service deployed and tested, you’ll be ready to start using it from LightSwitch!


Julie Lerman (@julilerman) reported Oracle’s Entity Framework Beta Released Today in a 2/11/2011 post:

image Just got this very nice note from Alex Keh at Oracle.

Hi Julie,

Since I know you have a special interest in Entity Framework, I wanted to let you know that Oracle released its EF beta today. It can be downloaded from OTN.

http://www.oracle.com/technetwork/topics/dotnet/whatsnew/index.html

Regards,

Alex

So, go get it, check it out, give them feedback!


Return to section navigation list> 

Windows Azure Infrastructure

Mary Jo Foley (@maryjofoley) asked What will the Nokia deal mean for Microsoft's other phone-maker partners? in a 2/11/2011 post to ZDNet’s All About Microsoft blog:

What does a multi-partner ecosystem look like when not all participants are deemed equal?

Giving Windows Phone a huge shot in the arm, the number one worldwide mobile phone vendor Nokia announced a sweeping partnership with Microsoft on February 11. (Yes, I was wrong. I thought Nokia would go Android, a move Nokia CEO Stephen Elop acknowledged today that he considered. Gobble, gobble.)

Nokia didn’t become just another Microsoft handset partner via today’s agreement, like HTC, LG, Samsung and Dell. According to the announcement, Nokia would is going to have direct input on the future of Windows Phone, influencing key areas like maps, imaging and the marketplace. From today’s Microsoft/Nokia announcement:

Nokia will help drive and define the future of Windows Phone. Nokia will contribute its expertise on hardware design, language support, and help bring Windows Phone to a larger range of price points, market segments and geographies.”

Microsoft reportedly is paying Nokia hundreds of millions of dollars to secure the deal. In exchange, Nokia has agreed to make Windows Phone its principal smartphone operating system. Nokia, in turn, becomes a key backer of Bing, adCenter, Office Mobile, Visual Studio, Silverlight and XNA.[*]

So if you’re HTC or Samsung, do you keep your eggs in the Windows Phone basket or put more in the Android one? (The smartphone market is now, for the most part, a three-horse race, with partner-free Apple being the third horse.) And what will this mean for Windows Phone customers, in terms of device choice?

The issue is already on people’s minds. Here’s a tweet from CNET’s Stephen Shankland, covering the Nokia-Microsoft press conference this morning:

Microsoft has made much of its decision with Windows Phone 7 to “lock down” the base platform, providing OEMs with less opportunity to customize. That has been seen by most company watchers, developers and customers as a plus and a way for Microsoft to avoid the problems that plagued Windows Mobile (and Android) — specifically too many designs with too little in common. But Microsoft is changing the rules for Nokia and allowing Nokia to customize the WP7 platform. Does that mean Microsoft is going to grant other OEMs the same concessions? (And if not, will that lead them to walk?)

Next week’s Mobile World Congress should be an interesting one. Wish I could be a fly on the wall in Microsoft’s meetings with its partners….

* Windows Azure, too, undoubtedly will benefit with increased connections to Nokia WP7 phones.


Sharon Chan chimed in with Nokia bets the company on Windows Phone 7 in a 2/11/2011 post to the Seattle Times’ Business/Technology blog:

Nokia will make Microsoft's Windows Phone 7 the primary operating system on the Finnish company's smartphones.

Today's announcement is a major boon for Microsoft, which has seen a slow start to sales of Windows Phone 7 since it launched in the fall. It also is a significant turning point for Nokia. While still the world's largest phone maker, the company has seen its share diminish as its operating system, Symbian, has fallen behind Google's platform, Android.

Microsoft and Nokia have a shared interest in competing against Apple and Google. The ties go deeper. Nokia's chief executive is Stephen Elop, the former president of Microsoft's Business division.

In an open letter written by Microsoft Chief Executive Steve Ballmer and Elop, the two said, "There are other mobile ecosystems. We will disrupt them."

Elop and Ballmer both spoke at a news briefing in London Friday that was broadcast on the Internet. "This partnership with Nokia will accelerate, dramatically accelerate, the development of a vibrant, strong Windows Phone ecosystem," Ballmer said.

The partnership brings together software from Microsoft and phones from Nokia. The two companies say they will combine services, such as Nokia Maps with Microsoft's search engine, Bing, and ad platform, AdCenter. Microsoft would provide software tools to developers building for Nokia phones. Nokia will also play a significant role in the future development of the Windows Phone operating system. The Nokia application marketplace will also be added to the Windows Phone app market.

Elop said he considered staying on Nokia's current development path with Symbian and a new system, Meego, but thinks Windows Phone 7 offered a faster path. He said Nokia also met with Google to discuss Android, but concluded, "we would have difficulty differentiating within that ecosystem," he said at the news conference.

Will Stofega, an analyst at research firm IDC, said it "did show it was a very careful and considered move" to go with Microsoft. "It shows to me he really did a lot of due diligence. I think it inspires confidence going forward."

Stofega said for the partnership to be successful, "They will need to be very, very aggressive in their time to market." Elop declined to say at the news conference when the first Nokia phone running Windows Phone 7 would be introduced.

Nokia will continue to support Symbian in its lower-end phones; Symbian is the market leader among mobile operating systems, installed on 1.1 billion phones. But the move does remove Symbian as a continuing competitive force on smartphones, a market now featuring Windows Phone 7, Apple's iPhone, Google Android and Research in Motion BlackBerry. Hewlett-Packard is also focused on building out the market for WebOS on mobile phones.

Nokia and Microsoft still need to finalize a definitive agreement.

Elop also announced a major reorganization, dividing the company into Smart Devices and Mobile Phones, and shuffled its senior managers.

Microsoft stock is down 27 cents in intraday trading, at $27.26 per share.

It appears to me that Steve has shed some weight. Good idea.


Tim Anderson (@timanderson) delivers a developer-oriented analysis in his MeeGo, Qt, and the new Nokia: developers express their doubts post of 2/11/2011:


What are the implications of the new partnership between Nokia and Microsoft for MeeGo, the device-oriented Linux project sponsored by Intel and Nokia? What about Qt, the application framework that unifies Symbian and MeeGo development?

Here is what Nokia says:

Under the new strategy, MeeGo becomes an open-source, mobile operating system project. MeeGo will place increased emphasis on longer-term market exploration of next-generation devices, platforms and user experiences. Nokia still plans to ship a MeeGo-related product later this year.

Nokia is retaining MeeGo but it has moved from centre-stage to become more niche and experimental.

The snag for developers is that there are no known plans to support Qt on Windows Phone. According to the letter to developers, Qt developers can look forward to targeting low-end Symbian devices and at least one solitary MeeGo phone:

Extending the scope of Qt further will be our first MeeGo-related open source device, which we plan to ship later this year. Though our plans for MeeGo have been adapted in light of our planned partnership with Microsoft, that device will be compatible with applications developed within the Qt framework and so give Qt developers a further device to target.

Reaction from developers so far is what you might expect:

By this announcement, I’m afraid you’ve lost many faithful people (developer and consumers) like myself, who’s been a Nokia user ever since I’ve started using cellphones..

and

Wow what can I say, nokia just flat out killed any enthusiasm I had to develop on nokia platforms, I never have and never will use a windows platform. You have just killed QT, even worse killed the most promising OS out there in Meego. Elop is the worst thing that has ever happened Nokia.

and

Weak on execution, you choose to flee. What a sad day in the history of a once proud and strong company.

Nokia could fix this by demanding Qt support for Windows Phone 7.

Related posts:

  1. Nokia Maemo, Intel Moblin gives way to MeeGo
  2. MeeGo NoGo: things look bad for the Intel/Nokia Linux project
  3. Nokia plus Windows Phone 7 – would that be a smart move?


Arik Hesseldahl (@ahess247, pictured below) delivers more background about Ripples in Microsoft’s Cloud as Amitabh Srivastava Leaves in a 2/9/2011 post to the D|All Things Digital blog’s New Enterprise section:

The ripple effects at Microsoft in the wake of the pending departure of Microsoft Server and Tools head Bob Muglia continued today. First Satya Nadella, as reported by BoomTown’s Kara Swisher, was promoted from head of the Bing search effort to the helm of STB.

Second, Amitabh Srivastava, head of Microsoft’s Azure cloud platform business, announced that he’s leaving the company. Srivastava, who joined Microsoft in 1997, was widely considered to be a possible successor to Muglia, but lost out to Nadella.

One of the few ever to be named a Distinguished Engineer at Microsoft–an honor now known as Technical Fellow–he was tapped, along with Brian Valentine, by then Windows chief Jim Allchin to take over the Windows engineering efforts in 2003 at a time when the operating system was being widely derided as plagued with security and other problems. Srivastava had his team draw up a map depicting how all the various pieces of the Windows source fit together. It was eight feet tall, 11 feet wide, and was described in a 2005 Wall Street Journal story as looking like a “haphazard train map with hundreds of tracks crisscrossing each other.”

Srivastava and Valentine are credited with the 2004 proposal to streamline how all those pieces functioned, a plan that would allow various features to be added or removed without disrupting the whole operating system. The idea was a partial response to the looming threat from Google, which that year had launched Gmail. The problem was their plan required throwing out a lot of legacy source code that had been in Windows for years, and starting fresh.

Srivastava’s changes included automating testing on features that had for years been done by hand. Code with too many bugs were sent to “code jail.” Over time, code flowing into what was to become Windows Vista improved.

We all know what happened with Vista–it too was widely panned. But the engineering processes put in place had a lot to do with the many improvements that appeared in Windows 7.

Srivastava then moved on to a new Microsoft project in 2006, code named Red Dog, now known as Azure, which launched in 2008. From this he pivoted to running the server and cloud division, overseeing Microsoft’s relationships with enterprise and data center customers.

People I’ve been talking to who tend to know a lot about the internal politics at Microsoft say this isn’t the last of the changes. Now that Muglia’s replacement has been announced, Nadella is going to want to name key members for his team, which means that those not tapped will probably choose to leave as well. The management shake-up at Microsoft is not over yet.


Dileep Bhandarkar described a Holistic Approach to Energy Efficient Datacenters in an 8/2/2010 post to the MS Datacenters blog (missed when published):

A little over three years ago, I joined Microsoft to lead the hardware engineering team that helped decide which servers Microsoft would purchase to run its online services. We had just brought our first Microsoft-designed datacenter online in Quincy and were planning to use the innovations there as the foundation for our continued efforts to innovate with our future datacenters. Before the launch of our Chicago datacenter, we had separate engineering teams: one team that designed each of our datacenters and another team that designed our servers. The deployment of containers at the Chicago datacenter marked the first collaboration between the two designs teams, while setting the tone for future design innovation in Microsoft datacenters.

As the teams began working together, our different perspectives on design brought to light a variety of questions. Approaches to datacenter and server designs are very different, with each group focusing on varying priorities. We found ourselves asking questions about things like container size, and whether the server-filled containers should be considered megaservers or their own individual datacenters. Or was a micro colocation approach a better alternative? Should we evaluate the server vendors based on their PUE claims? How prescriptive should we be with the power distribution and cooling approaches to be used?


View inside a Chicago datacenter container

After much discussion we decided to take a functional approach to answering these questions. We specified the external interface – physical dimensions, amount and type of power connection, temperature and flow rate of the water to be used for cooling, and the type of network connection. Instead of specifying exactly what we needed in the various datacenter components we were purchasing through vendors, we let the vendors design the products for us and were surprised at how different all of their proposals were. Our goal was to maximize the amount of compute capability per container at the lowest cost. We had already started to optimize the servers and specify the exact configurations with respect to processor type, memory size, and storage.

After first tackling the external interface, we then worked on optimizing the server and datacenter designs to operate more holistically. If the servers can operate reliably at higher temperatures, why not relax the cooling requirements for the datacenter? To test this, we ran a small experiment operating servers under a tent outside one of our datacenters, where we learned that the servers in the tent ran reliably. This experiment was followed by the development of the ITPAC, the next big step and focus for our Generation 4 modular datacenter work.

Our latest whitepaper, “A Holistic Approach to Energy Efficiency in Datacenters,” details the background and strategy behind our efforts to design servers and datacenters as a holistic, integrated system. As we like to say here at Microsoft, “the datacenter is the server!” We hope that by sharing our approach to energy efficiency in our datacenters we can be a part of the global efforts to improve energy efficiency in the datacenter industry around the world.

The paper can be found on GFS’ web site at www.globalfoundationservices.com on the Infrastructure page here.

Dileep is a Distinguished Engineer, Global Foundation Services, Microsoft


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Nicole Hemsoth reported DRC Energizes Smith-Waterman, Opens Door to On-Demand Service in a 2/9/2011 post to the HPC in the Cloud blog:

News emerged recently that might reshape how genomics researchers think about the speed and accuracy of gene sequencing analysis projects that rely on the Smith-Waterman algorithm.

Sunnyvale, California-based coprocessor company DRC Computer Corporation announced a world record-setting genetic sequencing analysis appliance that was benchmarked in the multi-trillion cell updates per second range—a figure that could have gone higher, according to DRC’s Roy Graham. Although similar claims to supremacy have been made in the past, the company states that this marks a 5x improvement over previously published results. [Link added.]

While it might be tempting to think this is just another acceleration story about toppling old benchmarks, this one does have something of a unique slant.

While one of their FPGAs delivers the equivalent performance of 1,000 cores, which is interesting in itself, the company argues there is a clear cloud computing angle: the FPGA-based Accelium board can be plugged into a standard x86 server via standard PCIe slots.

image DRC claims that the “time and cost to complete [gene sequence analysis] can be reduced by a factor of 20 using standard Intel-based servers installed with their own DRC Accelium processors running on Windows HPC Server 2008 R2.” They suggest that, in addition to slicing analysis time, the approach eliminates over 90% of “the computing cost, power, real estate and infrastructure required to obtain the results.” [Emphasis added.]

The beauty here, as they see it, is that standard commodity hardware can be significantly enhanced in a plug-and-play fashion, becoming cloud-enabled and accessible to a broader array of potential users than before.

DRC is pitching this solution as cloud-ready when built in a private cloud, which was the environment they chose for their benchmarking effort. All debates about the validity (or newness) of private clouds aside, there could be changes coming for life sciences companies who want to make use of Smith-Waterman but have been barred due to the high costs of running this hungry algorithm in-house.
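
For context on why the algorithm is so compute-hungry, here is a minimal C# sketch of the classic Smith-Waterman scoring recurrence with a linear gap penalty. The scoring constants and sequences are illustrative assumptions for the sketch, not DRC's implementation; the point is that the algorithm fills an (m+1) x (n+1) dynamic-programming matrix, so its cost grows with the product of the two sequence lengths.

using System;

class SmithWaterman
{
    // Illustrative scoring parameters (assumptions, not DRC's values).
    const int Match = 2, Mismatch = -1, Gap = -2;

    // Returns the best local-alignment score between sequences a and b.
    static int Score(string a, string b)
    {
        int m = a.Length, n = b.Length;
        var h = new int[m + 1, n + 1];   // DP matrix; row 0 and column 0 stay zero
        int best = 0;

        for (int i = 1; i <= m; i++)
        {
            for (int j = 1; j <= n; j++)
            {
                int diag = h[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? Match : Mismatch);
                int up = h[i - 1, j] + Gap;
                int left = h[i, j - 1] + Gap;
                h[i, j] = Math.Max(0, Math.Max(diag, Math.Max(up, left)));
                best = Math.Max(best, h[i, j]);
            }
        }
        return best;
    }

    static void Main()
    {
        Console.WriteLine(Score("ACACACTA", "AGCACACA")); // small example pair
    }
}

Hardware accelerators exploit the fact that cells along each anti-diagonal of this matrix can be computed in parallel, which is how a single FPGA board can stand in for hundreds of conventional cores.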

Roy Graham from DRC stated that the cloud value of the company’s announcement lies in the fact that many common sequencing services will eventually be cloud-based; right now, what they offer is a very high-volume, scalable and cost-effective platform. He claims that the company is currently in discussions with a number of cloud services companies and, at this point, what they’re looking for is a proof point.

DRC claims that due to the inherent parallelism of their reconfigurable coprocessors, such solutions are extremely scalable and adaptable to modern cloud computing environments where computing resources can be shared across multiple users and applications.

According to Steve Casselman, CEO of DRC Computer Corporation, there is definitely a future in the clouds for Smith-Waterman. During a conversation with HPC in the Cloud last week, he speculated on the concept of a “corporate biocloud” where users will be able to run Smith-Waterman on as much of the hardware as needed while at the same time running other processes in an on-demand format. This is what he calls an example of "acceleration on-demand," noting that there are several different algorithms ripe for this kind of capability.


 

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

image

No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

Database Trends and Applications (DBTA) Magazine announced by email a Using SQL to Explore Any Data on Hadoop in the Cloud streaming audio event on 2/17/2011 from 8:00 to 11:00 AM PST:

Join Database Trends and Applications (DBTA) for the third in an educational series of webcasts focused on managing and leveraging big data. In our February 17 webcast, DBTA will tackle the issue of exploring and analyzing data of any type via Hadoop in the cloud.

Until now, working with big data was restricted to specialized software developers and required deploying Hadoop and Hive on your own cluster. Now, Karmasphere and Amazon Web Services have teamed to provide a solution for data-intensive enterprises to leverage existing SQL skills to query and understand structured and unstructured data of any size in Hadoop.

Attend this live web event to learn how Amazon Elastic MapReduce, a hosted Hadoop web service, combined with Karmasphere Analyst provides a rapid onramp to Big Data using SQL. The presenters will demonstrate how you can explore your big data in the cloud from within a graphical SQL environment in five minutes, with virtually no learning curve. You will see how to discover or create schema from existing data, whether structured or unstructured, and then explore and visualize the results with SQL from your desktop.

Learn about customer implementations and how this solution delivers value to enterprises challenged with large, complex and quickly growing data. Join speakers from Amazon Web Services, Karmasphere and Database Trends and Applications on February 17 at 2pm Eastern, 11am Pacific time.

Register today to attend this free web event.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

SearchCloudComputing.com posted my (@rogerjenn) How much are free cloud computing services worth? article, my first as a paid contributor, to the TechTarget blog on 2/9/2011. From the RSS blurb:

image Free trials from cloud computing providers like Microsoft and Google are one way to lead new users toward cloud adoption. Our expert breaks down the value of each offering.

The article also covers Amazon Web Services’ Free Usage Tier. Reading the full article requires site registration.


David Linthicum warned “Though it may be initially dodgy, the tool will likely help enterprises make the necessary link between private and public services” as a preface to his VMware's cloud connector drives the hybrid cloud home post of 2/11/2011 to InfoWorld’s Cloud computing blog:

image Earlier this week, EMC VMware announced a free cloud connector plug-in and hosted services, a development that will set a well-defined path to the hybrid cloud for many enterprises. Last month I pointed out the compelling reasons to drive to the hybrid cloud, and most of those with private and/or public cloud offerings are already on that path.

image The VMware Cloud Connector, available sometime next month, provides a link between internal and external clouds that moves virtual machines between a hosted service (public clouds) and an organization's own internal systems (private clouds). In addition, the company announced that three service providers -- BlueLock, Colt, and Verizon -- are now offering VMware vCloud-certified hosting services.

image VMware says the connector should work with any hosting provider running vSphere and vCloud Director. It is guaranteed to work with any public hosting services proven compliant with the VMware vCloud Datacenter Services program, which assures that the provider has implemented the vCloud API and is praying to the OVF (Open Virtualization Format) gods.

I suspect that this connector software will be a bit dodgy out of the gate, considering the network bandwidth required and the other performance issues that are bound to arise. I would not move too quickly to this type of solution without a great deal of prototyping and testing before the production phase.

Still, this type of architecture should quickly become the focus of those moving into cloud computing. The idea of offering choice in deployment platform, whether public or private, and the ability to move between them as the business needs change are just too compelling to pass up. Moreover, wherever VMware leads, enterprises seem to quickly follow. I suspect this product will be no exception.


Bill Claybrook analyzed Microsoft, VMWare, Eucalyptus, and Red Hat in his Comparing four cloud computing strategies article of 2/10/2011 for TechTarget’s SearchCloudComputing blog:

Comparing cloud computing strategies

image When planning to invest in cloud computing technology, prospective adopters must examine factors like degree of lock-in, openness of application programming interfaces, service-level agreements, open standards support, scalability, licensing and pricing. What IT pros sometimes ignore, however, are the long-term strategies of cloud vendors.

image To understand how a cloud provider's plans can affect your environment, we will examine the strategies for four vendors: Microsoft, VMware, Eucalyptus and Red Hat. These four were chosen because they all have something in common -- private cloud offerings -- yet their cloud strategies are different enough to demonstrate the effects of choosing one over the other.

Microsoft's cloud strategy
Microsoft’s cloud approach involves two distinct sets of offerings:

  1. An on-premise enterprise private cloud (Hyper-V Cloud) built around Windows Server 2008 with Hyper-V and System Center (and its components)
  2. Windows Azure, Microsoft’s cloud platform

There is little interoperability between these two Microsoft cloud services, although the introduction of the Virtual Machine (VM) Role tool allows virtual machine images to be uploaded from Hyper-V Cloud to Azure.

Microsoft believes that Platform as a Service (PaaS) is the future of cloud computing, but it is not giving up on the enterprise cloud market. An important part of Microsoft’s strategy is to enhance its System Center components to be cloud-capable. Microsoft’s focus is its huge installed base; with more than 70% of servers running Windows, that installed base is one of the major reasons Microsoft is not really working to interoperate with other clouds.

Microsoft has positioned itself to be a private and public enabler for Infrastructure as a Service (IaaS), a private enabler and public provider for PaaS and a public provider for Software as a Service (SaaS).

VMware's cloud strategy
VMware’s initial cloud strategy was to be the leading cloud enabler, meaning it would provide the infrastructure and management components to build clouds. With the acquisition of SpringSource and Zimbra, however, VMware is trying to take on more public cloud attributes. The company's strategy includes signing up numerous partners -- telecom, hosting, and service providers -- to support its vCloud effort. VMware is promoting vSphere for IaaS (inside the firewall) and vCloud Express for public enablement of IaaS; setting up its VMforce platform for public PaaS and vFabric for private PaaS; and beginning to position Zimbra for SaaS partners.

VMware’s cloud strategy also includes using vCloud Director to turn a vSphere environment into an automated self-service cloud environment. The vCloud API will work across internal VMware environments and vCloud Express hosters, so users can treat public clouds as if they are a part of the VMware IT environment -- fully managed and secure.

Eucalyptus Systems' cloud strategy
Eucalyptus Systems presents itself as an open source software company; however, it markets and sells a commercial version of its product, Enterprise Edition 2.0, with features not contained in the open source version.

Eucalyptus Systems is a private enabler for IaaS; the basic premise is that a customer treats its data center as an extension of Amazon's cloud (or vice versa) and uses the same tools to migrate from one to the other. Eucalyptus also implements most of the Amazon EC2 API. Another component of the Eucalyptus strategy is to give users the ability to leverage the investments made in VMware virtualization software.

Red Hat's cloud strategy
Red Hat’s cloud strategy involves promoting interoperability and application portability. It includes a PaaS strategy around its JBoss middleware and other software that runs natively on Red Hat Enterprise Linux, including LAMP and Ruby. The goal is to offer the most programming languages, frameworks and platforms on the broadest range of clouds. JBoss cloud images will be available in Red Hat’s software, as well as Amazon EC2 and Windows Hyper-V.

Red Hat will continue to focus on its Cloud Foundations portfolio of software. This cloud stack is designed to run consistently across physical servers, private clouds and public clouds. Red Hat is positioned to be a private and public enabler for IaaS, and a private enabler and public provider for PaaS.

What you get with each cloud provider
If you choose one of these four cloud providers, what will your cloud environment be like in a few years? What happens if you select a vendor but soon want, or even need, to move clouds? That's why one of the most important factors to consider is whether your initial vendor's strategy will facilitate switching clouds, as open standards for cloud APIs are not expected to be ratified for at least three more years. Here's what else you can expect from each of these cloud vendors:

Microsoft provides extensive cloud offerings
Microsoft is almost exclusively focusing on its installed base. If you buy into Microsoft and its cloud strategy, you can almost forget about being able to interoperate with any other clouds, including public ones like Amazon EC2. And Microsoft’s cloud software only works with Windows systems, so no Linux (Hyper-V, however, supports Linux guest operating systems). The good part is that Microsoft will provide you with most of the cloud technology that you need, albeit at a relatively high cost.

With Microsoft Hyper-V Cloud, you can build private clouds. Microsoft and its Hyper-V Cloud partners (including Dell, Fujitsu Ltd., Hitachi Ltd., HP, IBM and NEC) are attempting to deliver a set of predefined, validated configurations for private cloud deployments. These deployments consist of compute, storage and networking resources, along with virtualization and management software.

Microsoft also offers the Azure platform, which is hosted in six Microsoft data centers scattered around the world. It provides three things:

  • Application hosting, or the ability to run x86 and x86-64 applications in Microsoft data centers.
  • The ability to store large amounts of data at scale, either structured or unstructured (a minimal storage sketch follows this list).
  • Automated service management, with integrated deployment, management and monitoring all rolled up in the Azure platform.
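
As a concrete illustration of the structured-data point above, here is a minimal C# sketch using the Windows Azure StorageClient library of that era to persist entities to table storage; the table name, entity shape and development-storage connection string are assumptions for the example, not part of any particular Azure service.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Entities stored in Azure tables derive from TableServiceEntity,
// which supplies PartitionKey, RowKey and Timestamp.
public class PageView : TableServiceEntity
{
    public string Url { get; set; }
    public DateTime ViewedAt { get; set; }
}

class TableExample
{
    static void Main()
    {
        // The connection string is a placeholder; substitute your own storage account.
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
        var tables = account.CreateCloudTableClient();
        tables.CreateTableIfNotExist("pageviews");

        var context = tables.GetDataServiceContext();
        context.AddObject("pageviews", new PageView
        {
            PartitionKey = "site1",
            RowKey = Guid.NewGuid().ToString(),
            Url = "/home",
            ViewedAt = DateTime.UtcNow
        });
        context.SaveChangesWithRetries();
    }
}

Blob storage covers the unstructured side through the same StorageClient library.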

There is also the Windows Azure Platform Alliance, a "cloud in a box" strategy Microsoft is creating with its hardware providers. The first wave of the alliance includes Fujitsu, Dell and HP. The intent is for these hardware partners to house Microsoft pre-configured systems as an appliance, which could then fit into any ISP. As a result, customers can go to their local ISP for a virtual cloud environment, all based on Microsoft’s cloud products. The Windows Azure Appliance gives customers that have been unwilling or unable to have their data hosted in Microsoft’s data centers the opportunity to take advantage of Azure’s scalability without having to move anything off-premise.

Microsoft is looking at a future where applications will be delivered primarily by PaaS. As a result, it is working to ensure that everything on Windows Server 2008 with Hyper-V will run on Windows Azure by the second half of 2011. It is possible to move VM images from Windows Server 2008 R2 (with Hyper-V) to Windows Azure using the VM Role tool, but not vice versa.

Even if you do not fully buy into Microsoft’s cloud strategy, you can use its Online SaaS service. Online SaaS includes a number of products, such as Business Productivity Online Suite (Exchange Online, SharePoint Online, Office Live Meeting, Office Communications Online), Exchange Hosted Services, Dynamics CRM Online, Office Web Apps (the Web version of Microsoft Office) and Office 365.

If you're interested in Microsoft’s cloud strategy but have already virtualized your Windows data center servers with VMware virtualization software, Microsoft will most likely try to get you to replace VMware with its own technology. A good plan would be to leverage your investment by using vCloud Director to create VMware clouds where there is VMware virtualization software. In conjunction, you can also use Azure to develop and deliver new applications. This creates a two-vendor cloud, but it's a solid approach at a lower cost.

VMware provides an aggressive strategy with long-term stability
Generally, you are in good stead with VMware. The company is trying to marry private clouds and public clouds with its vCloud API, which has a good chance of becoming the de facto standard for both cloud models. VMware has more than 25,000 partners, many of which are vCloud partners.

VMware believes that the transformation of IT infrastructure to a hybrid cloud model has already begun, and it is looking to dominate this change. Because the vCloud API is used in VMware-based private and public clouds, you can move images and data back and forth between clouds and easily establish a hybrid model.
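
To make the vCloud API point concrete, the sketch below shows the first step every vCloud client performs: authenticating against the REST endpoint and capturing the session token that later calls (listing organizations, uploading OVF packages, moving vApps) must present. The host name and credentials are placeholders, and the /api/v1.0/login path and x-vcloud-authorization header follow the vCloud API 1.0 conventions, so check your provider's documentation before relying on them.

using System;
using System.Net;
using System.Text;

class VCloudLogin
{
    static void Main()
    {
        // Placeholder endpoint and credentials; vCloud API 1.0 expects
        // HTTP Basic auth in the form user@organization:password.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://vcloud.example.com/api/v1.0/login");
        request.Method = "POST";
        string credentials = Convert.ToBase64String(
            Encoding.ASCII.GetBytes("admin@MyOrg:password"));
        request.Headers["Authorization"] = "Basic " + credentials;
        request.ContentLength = 0;

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // The session token comes back in a response header and is
            // attached to every subsequent request against the API.
            string token = response.Headers["x-vcloud-authorization"];
            Console.WriteLine("Session token: " + token);
        }
    }
}

Everything after the login, including OVF uploads and cross-cloud moves, rides on that same token-authenticated REST interface.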

VMware vCloud Express is the public cloud part of VMware’s cloud strategy. It allows VMware service provider partners to create cloud computing platforms with functionality and pricing advantages. Much like Amazon Web Services, it is positioned as a cheap and easy on-ramp to the cloud for customers that may later be talked into migrating to VMware-based enterprise cloud offerings.

While still pushing its bread-and-butter enterprise business, VMware is also moving ahead with other cloud services. Last year, the company collaborated with Salesforce.com on VMforce, a PaaS platform that will compete with Azure and other cloud platforms. VMforce allows developers to write Java applications and get them up and running very quickly inside Salesforce.com’s data centers.

VMware also offers vFabric, a version of VMware’s development platform that large enterprises or VMware service providers can use to create their own internal PaaS for Java developers. Many enterprises will find it tough to let go of their existing investments in applications, but many modern apps will be built for the cloud, not the internal data centers of yesterday. vFabric is being positioned so that enterprises can work with both.

If you have a Windows-installed base with VMware virtualization software, you can turn it into a cloud environment with VMware vCloud Director. However, you could also be caught in a situation where having VMware installed complicates your interactions with Microsoft’s cloud technologies. Adopting the two-cloud strategy suggested earlier makes sense: creating VMware clouds using vCloud Director, leveraging your VMware software investment and developing and delivering new applications with the Azure platform. As VMware starts to face more competition from Microsoft, however, expect VMware to enhance its focus on the Linux-installed base.

Eucalyptus provides interoperability with Amazon EC2 and VMware
Selecting Eucalyptus Systems makes sense for anyone with a Linux environment; if you want to move forward to Windows, you will have to wait a bit. Because Eucalyptus is designed to be API-compatible with Amazon’s EC2 platform, you will be able to move virtual images created using Eucalyptus Enterprise Edition 2.0 over to EC2 and run applications. Virtual images can also be downloaded from Amazon EC2 and run on your private Eucalyptus cloud platform.
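
Because the compatibility is at the API level, the same client code can target either cloud by changing only the endpoint. The sketch below uses the AWS SDK for .NET of that period pointed at a conventional Eucalyptus front end; the host name, port, credentials and exact SDK property names are assumptions that may differ between SDK and Eucalyptus versions.

using System;
using Amazon;
using Amazon.EC2;
using Amazon.EC2.Model;

class EucalyptusExample
{
    static void Main()
    {
        // Point the EC2 client at a Eucalyptus endpoint instead of Amazon;
        // the URL below is only an assumption for this sketch.
        var config = new AmazonEC2Config();
        config.ServiceURL = "http://cloud.example.com:8773/services/Eucalyptus";

        AmazonEC2 ec2 = AWSClientFactory.CreateAmazonEC2Client(
            "ACCESS-KEY", "SECRET-KEY", config);

        // List the machine images registered in the private cloud; the same
        // call against Amazon's public endpoint lists EC2 images instead.
        DescribeImagesResponse response = ec2.DescribeImages(new DescribeImagesRequest());
        foreach (Image image in response.DescribeImagesResult.Image)
        {
            Console.WriteLine(image.ImageId);
        }
    }
}

Switching ServiceURL back to Amazon's endpoint (and swapping credentials) is essentially the only code change a move between the private and public cloud requires, which is the portability argument Eucalyptus is making.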

Eucalyptus Systems’ support for VMware virtualized environments also allows a private Eucalyptus cloud to be created on top of a VMware virtualized environment, but you will have to use a third-party graphical user interface (GUI) if you want more than the Eucalyptus command line interface. And if you want to utilize automation management tools, you will have to get them from a company like Makara, newScale or enStratus.

Red Hat provides an open strategy for the Linux base
The overarching strategy behind Red Hat’s cloud offerings is to provide a consistent environment where users can run workloads in enterprise data centers or public clouds. For example, if capacity is exhausted in your data center, Red Hat software -- specifically MRG Grid -- can automatically schedule workloads on virtual machines in the Amazon EC2 cloud.

Red Hat’s primary audience is its installed base, but its reach and market share are expected to expand as the market transitions to cloud. This would come from developers building new applications on Red Hat’s public PaaS offerings and migrations from non-Red Hat platforms.

Red Hat designs its offerings to avoid lock-in with its cloud stack. If you have installed VMware virtualization software, you can keep it and expand with Red Hat cloud offerings. You can also use Red Hat’s migration tool, virt-v2v, to move workloads from VMware ESX to Red Hat KVM and back again.

You can also use Red Hat’s PaaS capability to develop JBoss-based applications using multiple development frameworks, ranging from Java to Spring to Ruby. Finally, the Red Hat-sponsored open source Deltacloud tool permits the enabling and managing of a heterogeneous cloud infrastructure, including Amazon EC2, GoGrid, OpenNebula and Rackspace.
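
Deltacloud works by putting a small REST server in front of each provider's native API, so a hedged C# sketch of listing instances looks like an ordinary HTTP call; the server address and credentials below are placeholders, and the /api/instances path follows Deltacloud's published REST API.

using System;
using System.IO;
using System.Net;

class DeltacloudExample
{
    static void Main()
    {
        // Placeholder Deltacloud server; the credentials are those of the
        // back-end provider (EC2, GoGrid, Rackspace, ...) it is configured for.
        var request = (HttpWebRequest)WebRequest.Create(
            "http://deltacloud.example.com:3001/api/instances");
        request.Accept = "application/xml";
        request.Credentials = new NetworkCredential("provider-user", "provider-secret");

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The same XML schema comes back no matter which cloud is behind it.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}

Swapping the back-end driver changes nothing in this client code, which is the kind of heterogeneous management the Deltacloud project is aiming for.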

These cloud computing strategies reflect only each vendor's best intentions today. If you have specific requirements for cloud computing, such as private IaaS and public PaaS, it would be wise to track each vendor's progress and make sure that they are adhering to their stated focus.

Bill is president of New River Marketing Research in Concord, Mass.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Klint Finley described Megastore: Google's Answer to NoSQL Databases briefly in a 2/10/2011 post to the ReadWriteCloud blog:

image Last month Google released a paper on its high availability datastore Megastore. Megastore "blends the scalability of a NoSQL datastore with the convenience of a traditional RDBMS in a novel way, and provides both strong consistency guarantees and high availability," the paper says. Megastore is the technology behind Google's High Replication Datastore, which we covered here previously.

image It's a short paper, only 12 pages long. But in case you want something quicker, here are two summaries:

For an even quicker overview, here are three key points:

  • It has been widely deployed for internal use at Google for several years.
  • It uses the Paxos algorithm to manage replication between data centers.
  • Although it offers an eventual consistency mode, it can instead prioritize consistency over performance. It offers three levels of read consistency: current, snapshot and inconsistent.

At the moment, this is only available to the public through App Engine. Perhaps Google will open source it eventually?


Chris Czarnecki asked Facebook Mobile Phones – A New Way of Accessing Cloud Computing? in a 2/10/2011 post to the Learning Tree blog:

image With the Mobile World Congress trade fair being held in Barcelona next week, after many rumours and denials about Facebook developing a mobile phone, the announcement has been made that there will not be a Facebook phone but rather Facebook phones made by many manufacturers!  The manufacturer INQ has announced two models, named the Cloud touch and the Cloud Q. This announcement got me thinking about a question that is raised every time I teach Learning Tree’s Cloud Computing course – are applications such as Facebook and Twitter examples of Cloud Computing?

image Without doubt they most definitely are as they display many of the key characteristics of Cloud Computing. The announcement of the Facebook phones has the potential to completely change the way we use mobile devices for communication. The primary usage of these devices to date has been and still is for voice and SMS communication together with email. With the Facebook phone the way we communicate with these devices changes. Now we have real-time chat based communication, personal and business status updates, location based interaction with friends as well as business. The possibilities seem endless.

image Behind the exciting release of the Facebook phones are a number of enabling technologies, chief among them Cloud Computing. The Facebook phones now offer a new interface to this form of Cloud Computing, which has the potential to completely change the way we communicate on a daily basis. This has a significant positive impact for business, opening new opportunities and markets. If you would like to know how Cloud Computing and platforms such as Facebook can be utilised by your organisation, why not consider attending Learning Tree’s Cloud Computing course. [Link added.]


Lydia Leong (@cloudpundit) described Gartner’s cloud computing surveys (for Gartner subscribers) in a 2/9/2011 post to her Cloud Pundit blog:

image My colleague Jim Browning, who is focused on SMB research, has just published a survey of mid-sized businesses in North America. If you’re a vendor client, I encourage you to check out his Survey Analysis: North American Midsize Businesses Cite Cloud Intentions.

Also, I want to draw your attention to some research that you might not have noticed before.

image Back in 2010, Gartner fielded a huge global survey — 7,300 respondents in the United States, Western Europe, and China — with the goal of looking at cloud adoption trends. Out of them, 650 correctly answered a bunch of screening questions about what constitutes cloud computing, and got to answer the rest of the survey.

The results show:

  • Why people called something cloud
  • Adoption patterns by type of service
  • Key drivers of adoption
  • Key stakeholders in driving adoption
  • Whose budget is it?
  • Key concerns
  • Influential factors when selecting a provider
  • Projected IT budget allocated to cloud
  • Impact on the IT organization

If you haven’t looked at this data, I’d highly encourage you to.

Gartner clients only, sorry. (US data has not been consolidated into a publishable form, i.e., turned from big tables of raw numbers into pretty graphs.)


Richard L. Santalesa (@RichNet) reported NIST Issues Two New Draft Cloud Computing Documents, A Call for Public Comment and a Cloud Wiki in a 2/7/2011 post to The InfoLaw Group blog:

image Last week the National Institute of Standards and Technology (NIST), an agency within the Department of Commerce, released for public comment two “new” draft documents centered on cloud computing. The first is a NIST-codified Definition of Cloud Computing (Draft SP 800-145), and the second document is what NIST calls “the first set of guidelines for managing security and privacy issues in cloud computing,” titled Guidelines on Security and Privacy in Public Cloud Computing (the "Guidelines", Draft SP 800-144). In conjunction with the release NIST has also unveiled a new NIST Cloud Computing Collaboration site, which includes various working group listservs and Wikis, to “enable two-way communication among the cloud community and NIST cloud research working groups.”

UPDATE:  Richard Santalesa was interviewed by DataGuidance for his thoughts on NIST's cloud computing drafts.  See, USA: NIST seeks public comment on revised cloud computing definition and guidelines, available here.

While both of the released draft documents are open for public comments, due no later than Feb. 28, 2011 (comments with suggested changes or enhancements to the Definition should be sent to 800-145comments@nist.gov; comments on the Guidelines should be sent to 800-144comments@nist.gov), SP 800-145 is essentially identical to NIST's existing Definition of Cloud Computing, Version 15, dated 10-7-09. However, rather than attempt to put the cart before the horse and issue a new or updated definition, NIST has wisely chosen the tactic of opening the now officially numbered definition up for comment ahead of any subsequent revision.

For their part, the Guidelines are the result of several years of active research that NIST has been part of in the area of cloud computing. At 60 pages in length, the Guidelines expressly recognize and start from the position that “Cloud computing can and does mean different things to different people.”

From there the Guidelines go on to provide a fairly robust overview of the inherent security and privacy issues raised by cloud computing, including a brief look at public cloud service agreements. The Guidelines also give a nod, whether by design or by accident, to the FTC's recent privacy framework (discussed here, here, here with the report itself here), in urging that “security and privacy must be considered from the initial planning stage at the start of the systems development life cycle ” - essentially adopting the “privacy by design” approach proposed by the FTC, which highlights our belief that the combined influence of the Commerce Dept's Greenpaper and the FTC's privacy framework will make significant headway this year in setting parameters in the ongoing privacy debate.

Some key recommendations by the Guidelines, for both federal departments and agencies and private sector public cloud initiatives, include:

  • Identifying when it is advisable and necessary to press for negotiation of offered cloud provider contracts and Service Level Agreements;
  • The importance of not overlooking any security and privacy issues raised by the client-side of cloud computing efforts;
  • Stressing accountability and monitoring throughout the cloud initiative, as well as communicating how and when cloud services are used;
  • A detailed review of the security downside of cloud computing; and
  • Compliance specifics, with necessary understanding of data location, data ownership, applicable laws and regulations, ramifications for e-discovery, etc.

A final noteworthy topic covered in the Guidelines is the issue of identity and access management, which also touches on the crucial issue of authentication.

The entire Guidelines are worth a quick reading, and, as always, feel free to contact me or any attorney at the InfoLawGroup to discuss the Guidelines or your own planned or ongoing cloud computing initiatives.

I’m on the NIST mailing list, which is very active.


<Return to section navigation list> 
