Friday, May 28, 2010

Windows Azure and Cloud Computing Posts for 5/27/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
• Updated 5/28/2010: Steve Marx explains how he coded and debugged his live Python Azure music demo (Swingify), described in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section, and Microsoft sends a Platform Ready message to Front Runner users.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

ETH Zurich’s Systems Group gave the performance and cost nod to SQL Azure in its End-To-End Performance Study of Cloud Services (SQL Azure, Amazon EC2 and S3, and Google App Engine) posted on 5/26/2010 to the High Scalability blog:

Cloud computing promises a number of advantages for the deployment of data-intensive applications. Most prominently, these include reducing cost with a pay-as-you-go pricing model and (virtually) unlimited throughput by adding servers if the workload increases. At the Systems Group, ETH Zurich, we did an extensive end-to-end performance study to compare the major cloud offerings regarding their ability to fulfill these promises and their implied cost.

The focus of the work is on transaction processing (i.e., read and update workloads), rather than analytics workloads. We used the TPC-W, a standardized benchmark simulating a Web-shop, as the baseline for our comparison. The TPC-W defines that users are simulated through emulated browsers (EB) and issue page requests, called web-interactions (WI), against the system. As a major modification to the benchmark, we constantly increase the load from 1 to 9000 simultaneous users to measure the scalability and cost variance of the system. Figure 1 shows an overview of the different combinations of services we tested in the benchmark.


Figure 1: Systems Under Test

The main results are shown in Figure 2 and Table 1 - 2 and are surprising in several ways. Most importantly, it seems that all major vendors have adopted a different architecture for their cloud services (e.g., master-slave replication, partitioning, distributed control and various combinations of it). As a result, the cost and performance of the services vary significantly depending on the workload. A detailed description of the architectures is provided in the paper.

Furthermore, only two architectures, the one implemented on top of Amazon S3 and MS Azure using SQL Azure as the database, were able to scale and sustain our maximum workload of 9000 EBs, resulting in over 1200 Web-interactions per second (WIPS). MySQL installed on EC2 and Amazon RDS are able to sustain a maximum load of approximately 3500 EBs. MySQL Replication performed similarly to MySQL standalone with EBS, so we left it off the picture. Figure 1 shows that the WIPS of Amazon’s SimpleDB grow up to about 3000 EBs and more than 200 WIPS. In fact, SimpleDB was already overloaded at about 1000 EBs and 128 WIPS in our experiments. At this point, all write requests to hot spots failed. Google AppEngine already dropped out at 500 emulated browsers with 49 WIPS. This is mainly due to Google’s transaction model not being built for such high write workloads. [Emphasis added.]

When implementing the benchmark, our policy was to always use the highest offered consistency guarantees, which come closest to the TPC-W requirements. Thus, in the case of AppEngine, we used the offered transaction model inside an entity group. However, it turned out that this is a big slow-down for overall performance. We are now in the process of re-running the experiment without transaction guarantees and are curious about the new performance results.


Figure 2: Comparison of Architectures [WIPS] …

Table 1 shows the total cost per web-interaction in milli-dollars for the alternative approaches and a varying load (EBs). Google AE is cheapest for low workloads (below 100 EBs), whereas Azure is cheapest for medium to large workloads (more than 100 EBs). The three MySQL variants (MySQL, MySQL/R, and RDS) have (almost) the same cost as Azure for medium workloads (EB=100 and EB=3000), but they are not able to sustain large workloads.


Table 1: Cost per WI [m$], Vary EB

The success of Google AE for small loads has two reasons.  First, Google AE is the only variant that has no fixed costs. There is only a negligible monthly fee to store the database. Second, at the time these experiments were carried out, Google gave a quota of six CPU hours per day for free.  That is, applications which are below or slightly above this daily quota are particularly cheap.

Azure and the MySQL variants win for medium and large workloads because all these approaches can amortize their fixed cost for these workloads. Azure SQL server has a fixed cost per month of USD 100 for a database of up to 10 GB, independent of the number of requests that need to be processed by the database.  For MySQL and MySQL/R, EC2 instances must be rented in order to keep the database online.  Likewise, RDS involves an hourly fixed fee so that the cost per WIPS decreases in a load situation.  It should be noted that network traffic is cheaper with Google than with both Amazon and Microsoft.  

Table 2 shows the total cost per day for the alternative approaches and a varying load (EBs). (A "-" indicates that the variant was not able to sustain the load.)  These results confirm the observations made previously:  Google wins for small workloads;  Azure wins for medium and large workloads.  All the other variants are somewhere in between.  The three MySQL variants come close to Azure in the range of workloads that they sustain. Azure and the three MySQL variants roughly share the same architectural principles (replication with master copy architectures). SimpleDB is an outlier in this experiment. With the current pricing scheme, SimpleDB is an exceptionally expensive service.  For a large number of EBs, the high cost of SimpleDB is particularly annoying because users must pay even though SimpleDB drops many requests and is not able to sustain the workload.


Table 2: Total Cost per Day [$], Vary EB

Turning to the S3 cost in Table 2, the total cost grows linearly with the workload. This behavior is exactly what one would expect from a pay-as-you-go model. For S3, the high cost is matched by high throughputs so that the high cost for S3 at high workloads is tolerable. This observation is in line with a good Cost/WI metric for S3 and high workloads (Table 1). Nevertheless, S3 is indeed more expensive than all the other approaches (except for SimpleDB) for most workloads. This phenomenon can be explained by Amazon's pricing model for EBS and S3. For instance, a write operation to S3 is a hundred times more expensive than a write operation to EBS, which is used in the MySQL variant. Amazon can justify this difference because S3 supports concurrent updates with an eventual consistency policy whereas EBS only supports a single writer (and reader) at a time.

In addition to the results presented here, the paper also compares the overload behavior and presents the different cost factors leading to these numbers. If you are interested in these results and additional information about the test setup, the paper will be presented at this year's SIGMOD conference and can also be downloaded here.

Be sure to read the comments for readers’ issues with the research methodology.

Alan Shimel claims “Complex legacy databases are just not built to scale in the cloud, Terracotta enables scalability” in his Databases Are The Bottleneck In The Cloud. Terracotta Is The Open Source Answer article (cum advertisement) of 5/27/2010 for NetworkWorld’s Open Source Fact and Fiction blog:

So you think you can just take that MySQL or Oracle database with all of that data that you have been using for 4 years or more and transfer it up to the cloud? Cloud don't work like that. But Terracotta does. Terracotta provides scale using open source.

In fact most public cloud infrastructure doesn't give you the ability to customize much in the way of database configurations. The databases available are rather rudimentary. On the other hand, keeping your database at your own data center is never going to give you the scalability and redundancy the cloud can offer you.

The answer, at least according to Terracotta, is caching. Over the past few years they have become the standard for elastic caching, Hibernate caching, and distributed caching. This allows your application and data instant scalability, outgrowing your database and even your own hardware limits.

I had a chance to speak with Mike Allen, head of product at Terracotta about this.  Terracotta was not originally an open source project or business when they launched in 2004. Recognizing that open source was a better distribution method, they open sourced their product in 2006 and that is when things started to take off for the company. This is a bit unusual, as most companies start open source and then go to sort of a dual license model.

Terracotta made another out-of-the-ordinary move when, in 2009, they "bought" an open source project/product called Ehcache. Ehcache was the brainchild of Greg Luck who, besides selling the IP to Terracotta, now works there. Ehcache was a de facto standard in Java enterprise environments. Its API was also the standard for Hibernate caching, which allows for elastic and distributed caching.

Allen says that complex databases are not going to be able to move up to public cloud providers anytime soon. The money put into their development to date and what it would cost to replace them with "no SQL" solutions like Cassandra are prohibitive. Therefore, using Terracotta's solutions is the only viable alternative for the foreseeable future.

Terracotta already has 100,000 deployments with over 250 paying customers. As the swing to the cloud accelerates, they anticipate that to rise dramatically.  This is one open source company poised to capitalize on the cloud.

Alan’s post appears to fall in the fiction category. SQL Azure’s performance ratings in the preceding post belie Alan’s assertion that “Complex legacy databases are just not built to scale in the cloud.” 

Wayne Walter Berry explains Testing Client Latency to SQL Azure in this 5/27/2010 post to the SQL Azure Team blog:

SQL Azure allows you to create your database in datacenters in North Central US, South Central US, North Europe, and Southeast Asia. Depending on your location and your network connectivity, you will get different network latencies between your location and each of the data centers.

Here is a quick way to test your Network latency with SQL Server Management Studio:

1) If you don’t have one already, create a SQL Azure server in one of the data centers via the SQL Azure Portal.

2) Open the firewall on that server for your IP Address.

3) Create a test database on the new server.

4) Connect to the server/database with SQL Server Management Studio 2008 R2. See our previous blog post for instructions.

5) Using a Query Window in SQL Server Management Studio, turn on Client Statistics. You can find the option on the Menu Bar | Query | Include Client Statistics, or on the toolbar (see image below.)

[Screenshot: the Include Client Statistics option in SQL Server Management Studio]

6) Now execute the query:

SELECT 1

7) The query will make a round trip to the data center and fill in the client statistics.

[Screenshot: the Client Statistics results pane in SQL Server Management Studio]

8) Execute the same query several times to get a good average against the data center.

9) If you are just using this server for testing, drop your server, choose another data center and repeat the process with a new query window.

Reading the Results

The first two sections (Query Profile Statistics and Network Statistics) are not interesting and should be very similar to mine in the image above. The third section, Time Statistics, is what we want to study.

Client processing time: The cumulative amount of time that the client spent executing code while the query was executed. Alternatively, it is the time between the first response packet and the last response packet.

Total execution time: The cumulative amount of time (in milliseconds) that the client spent processing while the query was executed, including the time that the client spent waiting for replies from the server as well as the time spent executing code.

Wait time on server replies: The cumulative amount of time (in milliseconds) that the client spent while it waited for the server to reply. Alternatively, the time between when the last request packet left the client and when the very first response packet returned from the server.

You want to find the data center with the lowest average Wait time on server replies; that data center has the least network latency and the best-performing network path for your location.
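If you would rather script the comparison than click through SSMS, the same measurement can be approximated in a few lines of ADO.NET. The sketch below is my own addition (not part of Wayne's post) and uses placeholder server and credential values; it times repeated SELECT 1 round trips, which roughly corresponds to the Wait time on server replies plus a little client overhead.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class SqlAzureLatencyTest
{
    static void Main()
    {
        // Placeholder connection values -- substitute your own SQL Azure server, user, and password.
        const string connectionString =
            "Server=tcp:YourServer.database.windows.net,1433;" +
            "Database=TestDB;User ID=YourUser@YourServer;Password=YourPassword;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();                      // one-time connection cost, not counted below
            const int runs = 10;
            long totalMilliseconds = 0;

            for (int i = 0; i < runs; i++)
            {
                var watch = Stopwatch.StartNew();
                using (var cmd = new SqlCommand("SELECT 1", conn))
                {
                    cmd.ExecuteScalar();      // one round trip to the data center
                }
                watch.Stop();
                totalMilliseconds += watch.ElapsedMilliseconds;
            }

            Console.WriteLine("Average round trip: {0} ms", totalMilliseconds / runs);
        }
    }
}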

If you are reading this before June 7th 2010, you have a chance to attend Henry Zhang’s talk at TechEd, called “COS13-INT: Database Performance in a Multi-tenant Environment”. This talk will cover this topic and more.

Brian Swan shows you How to Get the SQL Azure Session Tracing ID using PHP in this 5/27/2010 post:

The SQL Azure team recently posted a blog about SQL Azure and the Session Tracing ID. The short story about the Session Tracing ID is that it is a new property (a unique GUID) for connections to SQL Azure. The nice thing about it is that if you have a SQL Azure error, you can contact Azure Developer Support and they can use it to look-up the error and help figure out what caused it. (If you are just getting started with PHP and SQL Azure, see this post: Getting Started with PHP and SQL Azure.)

Getting the Session Tracing ID is easy with PHP…just execute the following query: SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO()). Here’s the PHP code for doing this:

$server = "tcp:YourServerID.database.windows.net,1433";
$user = "YourUserName@YourServerID";
$pass = "YourPassword";
$database = "DatabaseName";
$connectionoptions = array("Database" => $database, "UID" => $user, "PWD" => $pass);
$conn = sqlsrv_connect($server, $connectionoptions);

if($conn === false)
{
    die(print_r(sqlsrv_errors(), true));
}

$sql = "SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())";
$stmt = sqlsrv_query($conn, $sql);
$row = sqlsrv_fetch_array($stmt);
print_r($row);

Of course, the code above assumes you have the SQL Server Driver for PHP installed. And, if you are watching closely, you’ll notice that I didn’t have to include the “MultipleActiveResultSets”=> false in my connection options array…because SQL Azure now supports Multiple Active Result Sets (MARS).

The MSDN Library appears to have updated its Transact-SQL Reference (SQL Azure Database) topic recently:

Microsoft SQL Azure Database supports Transact-SQL grammar that you can use to query a database and to insert, update, and delete data in tables in a database. The topics in this section describe the Transact-SQL grammar support provided by SQL Azure. 

Important: The Transact-SQL Reference for SQL Azure is a subset of Transact-SQL for SQL Server.

This section provides a series of foundational topics for understanding and using the Transact-SQL grammar with SQL Azure. To view details about data types, functions, operators, statements, and more, you can browse through the table of contents in these sections or search for topics in the index.

Mafian911 gets the answers to his OData Service and NTLM problems in this thread on the Restlet Discuss forum:

Can anyone tell me how to access an OData service using NTLM security? I have crawled all over the web trying to find out how to do this, and the Tutorial site mentions something about a connector and throws out some source code, but I have no idea what to do with it. …

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Yavor Georgiev reports Updated WCF samples for Azure on 5/27/2010:

Our samples over at http://code.msdn.com/wcfazure had gotten pretty stale, so I just put out an update that gets everything working on Visual Studio 2010, Silverlight 4, and the latest Azure tools.

Yavor is a Program Manager for WCF

Vittorio Bertocci (a.k.a. Vibro) explains how to put Your FedAuth Cookies on a Diet: IsSessionMode=true in this 5/26/2010 post:

More occult goodness for your programming pleasure! The Session Mode is a great feature of WIF which is not known as widely as it should be.

Sometimes you will be in situations in which it is advisable to limit the size of the cookie you send around. WIF already takes steps to be parsimonious with the cookie size. By default, the cookie will contain just the layout defined by the SessionSecurityToken: more or less the minimal information required for being able to reconstruct the IClaimsPrincipal across requests (as opposed to a verbatim dump of the entire incoming bootstrap token, with its logorrheic XML syntax, key references & friends).

Let’s see if we can visualize the quantities at play here. If you take the FedAuth cookie generated from the default token issued from the default STS template implementation in the WIF SDK, the one with just name & role claims hardcoded in a SAML1.0 assertion, you get the following:

FedAuth
77u/PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz48U2VjdXJpdHlDb250ZXh0VG9rZW4… [roughly 2 KB of base64-encoded cookie data, split across the FedAuth and FedAuth1 cookies, omitted here for readability]

Slightly more than 2K. Not the nimblest thing you’ve ever seen, but not a cetacean cookie either. On the other hand we have just two claims here; what happens if we have more than those, or we have big claims such as base64’ed images or similar? Moreover: sometimes we do need to include the bootstrap token in the session, for example when we call a frontend which needs to invoke a backend acting as the caller.

Let’s pick this last case: keeping the same token we used above, let’s save it in the session (by adding saveBootstrapTokens="true" to the microsoft.identityModel/service element on the RP) and see how big a cookie we get:

[Screenshot: the much larger FedAuth cookies that result when the bootstrap token is saved in the session]

Vibro continues with more examples and concludes with a much shorter cookie when IsSessionMode = True.
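For reference, the switch Vibro is describing is normally flipped in the relying party's Global.asax. The handler below is a sketch from memory of the WIF 1.0 (Microsoft.IdentityModel) API rather than a copy of Vibro's code, so treat the exact member names as assumptions and check his post for the authoritative version.

using Microsoft.IdentityModel.Web;

// Global.asax.cs of the relying party; ASP.NET wires up module events
// that follow the ModuleName_EventName naming convention.
public class Global : System.Web.HttpApplication
{
    void WSFederationAuthenticationModule_SessionSecurityTokenCreated(
        object sender, SessionSecurityTokenCreatedEventArgs e)
    {
        // Keep the session token in a server-side cache ("session mode") so the
        // FedAuth cookies carry only a small reference instead of the whole token.
        e.SessionToken.IsSessionMode = true;
    }
}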

Michele Leroux Bustamante describes her WCF and the Access Control Service article for DevProConnections magazine’s June 2010 issue as providing “Custom components and code for securing REST-based WCF services.”

Unfortunately, the publisher outsources the online version to ZMags, who overuses Flash and makes the content difficult to navigate and read.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Steve Marx explains how he coded (and debugged) his live Windows Azure Swingify demo app (see below) with Python in this 00:33:08 Cloud Cover Episode 13 - Running Python - the Censored Edition Channel9 video of 5/28/2010:

Join Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow
In this special censored episode:  

  • We show you how to run Python in the cloud via a swingin' MP3 maker
  • We talk about how Steve debugged the Python application
  • Ryan and Steve join a boy band

Show Links:
SQL Azure Session ID Tracing
Windows Azure Guidance Part 2 - AuthN, AuthZ
Running MongoDb in Windows Azure
We Want Your Building Block Apps

Steve Marx created his live Windows Azure Swingify demo app on 5/27/2010:

Browse for and upload an *.MP3 file, click the Swingify! button and make the music swing:

[Screenshot: the Swingify upload page]

According to Steve:

This application is powered by Tristan's "The Swinger" application, which is built on the wonderful music APIs of The Echo Nest.

Steve Marx then put the whole thing into a Windows Azure application, which is what you see here.

Paul Lamere posted The Swinger and a collection of Swingified tracks to his Music Machine blog on 5/21/2010:

One of my favorite hacks at last weekend’s Music Hack Day is Tristan’s Swinger. The Swinger is a bit of Python code that takes any song and makes it swing. It does this by taking each beat and time-stretching the first half of each beat while time-shrinking the second half. It has quite a magical effect.

Swinger uses the new Dirac time-stretching capabilities of Echo Nest remix. Source code is available in the samples directory of remix.

I agree that the Jefferson Airplane’s Swingified White Rabbit is hypnotic, but the lead doesn’t really sound like the Grace Slick I remember from the Fillmore Auditorium days.

Steve Marx’s Making Songs Swing with Windows Azure, Python, and the Echo Nest API post of 5/27/2010 begins:

I’ve put together a sample application at http://swingify.cloudapp.net that lets you upload a song as an MP3 and produces a “swing time” version of it. It’s easier to explain by example, so here’s the Tetris theme song as converted by Swingify.

Background

The app makes use of the Echo Nest API and a sample developed by Tristan Jehan that converts any straight-time song to swing time by extending the first half of each beat and compressing the second half. I first saw the story over on the Music Machinery blog and then later in the week on Engadget.

I immediately wanted to try this with some songs of my own, and I thought others would want to do the same, so I thought I’d create a Windows Azure application to do this in the cloud.

How it Works

We covered this application on the latest episode of the Cloud Cover show on Channel 9 (to go live tomorrow morning – watch the teaser now). In short, the application consists of an ASP.NET MVC web role and a worker role that is mostly a thin wrapper around a Python script.

The ASP.NET MVC web role accepts an MP3 upload, stores the file in blob storage, and enqueues the name of the blob:

[HttpPost]
public ActionResult Create()
{
    var guid = Guid.NewGuid().ToString();
    var file = Request.Files[0];
    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var blob = account.CreateCloudBlobClient().GetContainerReference("incoming").GetBlobReference(guid);
    blob.UploadFromStream(file.InputStream);
    account.CreateCloudQueueClient().GetQueueReference("incoming").AddMessage(new CloudQueueMessage(guid));
    return RedirectToAction("Result", new { id = guid });
}

The worker role mounts a Windows Azure drive in OnStart(). Here I used the same tools and initialization code as I developed for my blog post “Serving Your Website From a Windows Azure Drive.” In OnStart():

var cache = RoleEnvironment.GetLocalResource("DriveCache");
CloudDrive.InitializeCache(cache.RootPath.TrimEnd('\\'), cache.MaximumSizeInMegabytes);

drive = CloudStorageAccount.FromConfigurationSetting("DataConnectionString")
    .CreateCloudDrive(RoleEnvironment.GetConfigurationSettingValue("DriveSnapshotUrl"));
drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);

Then there’s a simple loop in Run():

while (true)
{
    var msg = q.GetMessage(TimeSpan.FromMinutes(5));
    if (msg != null)
    {
        SwingifyBlob(msg.AsString);
        q.DeleteMessage(msg);
    }
    else
    {
        Thread.Sleep(TimeSpan.FromSeconds(5));
    }
}

Steve continues with code for the implementation of SwingifyBlob(), which calls out to python.exe on the mounted Windows Azure drive, and suggests “the Portable Python project, which seems like an easier (and better supported) way to make sure your Python distribution can actually run in Windows Azure.”
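To give a sense of what that thin wrapper can look like, here is a rough sketch of a SwingifyBlob-style method; this is my own illustration, not Steve's code, and the container names, local resource name, script name, and drive letter are all assumptions.

using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class Swingifier
{
    // blobName is the GUID the web role enqueued on the "incoming" queue.
    // Assumes SetConfigurationSettingPublisher was called during role startup.
    public void SwingifyBlob(string blobName)
    {
        var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        var blobs = account.CreateCloudBlobClient();

        // Pull the uploaded MP3 down to local scratch storage ("Scratch" is an assumed resource name).
        var scratch = RoleEnvironment.GetLocalResource("Scratch").RootPath;
        var inputPath = Path.Combine(scratch, blobName + ".mp3");
        var outputPath = Path.Combine(scratch, blobName + "-swing.mp3");
        blobs.GetContainerReference("incoming").GetBlobReference(blobName).DownloadToFile(inputPath);

        // Shell out to the Python interpreter on the mounted Windows Azure drive.
        var startInfo = new ProcessStartInfo
        {
            FileName = @"e:\python\python.exe",   // assumed mount point and layout
            Arguments = string.Format("swinger.py \"{0}\" \"{1}\"", inputPath, outputPath),
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (var process = Process.Start(startInfo))
        {
            process.WaitForExit();
        }

        // Publish the swingified result where the web role can link to it.
        var result = blobs.GetContainerReference("output").GetBlobReference(blobName + ".mp3");
        result.Properties.ContentType = "audio/mpeg";
        result.UploadFile(outputPath);
    }
}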

Tony Bailey suggests Intuit Developers – Learn about Windows Azure in this 5/27/2010 post to msdev.com:

This series of Web seminars is designed to quickly immerse you in the world of the Windows Azure Platform. You’ll learn what Azure is all about, including the fundamental concepts of cloud computing and Windows Azure. You’ll learn why you should target Windows Azure, and see the tangible business benefits you can gain by deploying your apps to the cloud.

Tony is a Senior Marketing Manager for Microsoft.

Microsoft Case Studies reported Real World Windows Azure: Interview with Markus Mikola, Partner at Sopima on 5/27/2010:

Software Firm Launches Business Contract Service with Lean Staff, Low Investment

Sopima, creator of an online solution for managing business contract life cycles, needed to minimize its capital investment to deliver a viable offering. It also wanted to offer an affordable monthly subscription service to gain new customers quickly. Using the Windows Azure™ platform, the company hosts its application in Microsoft® data centers, providing customers with fast response times and high scalability.

With the solution, Sopima has removed barriers that would have otherwise prohibited its entry into competitive markets. The company limited its investment in infrastructure and can focus on development rather than hardware administration. Sopima estimates that, without the Windows Azure platform, it would have had to hire additional full-time staff members at an annual cost of approximately U.S.$500,000. Its status as a Microsoft Partner will lend Sopima credibility in a competitive marketplace.

Situation

Sopima, a software development firm based in Helsinki, Finland, set out to simplify and streamline the processes of creating, managing, and storing business contracts for companies of all sizes.

Many companies manage hundreds to thousands of contracts each year for business arrangements with customers, clients, suppliers, and other external partners. With many stakeholders involved, including administrative assistants, sales associates, account managers, engineers, and legal representatives, collaboration through the contract creation process can be time-consuming and inefficient. The process requires close collaboration among individuals and departments, as well as with external business partners. Antti Makkonen, Research and Development Lead at Sopima, says, “Getting a contract signed can mean months and months of ‘back and forth’ between companies, often involving complex negotiations among legal teams. …

Eric Nelson answers “In the main, yes” to his Q&A: Can you develop for the Windows Azure Platform using Windows XP? post of 5/27/2010:

Longer answer:

The question is sparked by the requirements as stated on the Windows Azure SDK download page.

Namely:

Supported Operating Systems: Windows 7; Windows Vista; Windows Vista 64-bit Editions Service Pack 1; Windows Vista Business; Windows Vista Business 64-bit edition; Windows Vista Enterprise; Windows Vista Enterprise 64-bit edition; Windows Vista Home Premium; Windows Vista Home Premium 64-bit edition; Windows Vista Service Pack 1; Windows Vista Service Pack 2; Windows Vista Ultimate; Windows Vista Ultimate 64-bit edition

Notice there is no mention of Windows XP. However things are not quite that simple.

The Windows Azure Platform consists of three released technologies

Windows Azure

SQL Azure

Windows Azure platform AppFabric

The Windows Azure SDK is only for one of the three technologies, Windows Azure. What about SQL Azure and AppFabric? Well it turns out that you can develop for both of these technologies just fine with Windows XP:

SQL Azure development is really just SQL Server development with a few gotchas – and for local development you can simply use SQL Server 2008 R2 Express (other versions will also work).

AppFabric also has no local simulation environment and the SDK will install fine on Windows XP (SDK download)

Actually it is also possible to do Windows Azure development on Windows XP if you are willing to always work directly against the real Azure cloud running in Microsoft datacentres. However in practice this would be painful and time consuming, hence why the Windows Azure SDK installs a local simulation environment. Therefore if you want to develop for Windows Azure I would recommend you either upgrade from Windows XP to Windows 7 or… you use a virtual machine running Windows 7.

If this is a temporary requirement, then you could consider building a virtual machine using the Windows 7 Enterprise 90 day eval. Or you could download a pre-configured VHD – but I can’t quite find the link for a Windows 7 VHD. Pointers welcomed. Thanks.

“In the main …” reminds me of “In the long run …” about which Lord Keynes reminds “In the long run, we are all dead.”

The Microsoft Partner Network sent the following message by e-mail to Front Runner users on 5/27/2010:

 Front Runner

 

We are making it easier for you to get your software products compatible with the newest Microsoft technologies. On June 1st, Green Light is changing to Microsoft Platform Ready.

Microsoft Platform Ready, built on Azure, simplifies access to the information you need to develop, test and market your Windows based applications to millions of potential customers. [Emphasis added.]

In the coming months you will also find free integrated testing tools, starting with Windows Server 2008 R2, making app certification and Microsoft Partner competency attainment easier to manage.

Don't worry - All of your existing information will be migrated to the new site so you will not have to create new logins or re-profile your applications.

Simply login at www.microsoftplatformready.com on June 1st with your Live ID to access all of your benefits and experience the ease of Microsoft Platform Ready.

Thank you for your continued support of the Microsoft Platform. If you have any questions or comments please contact mprsupport@microsoft.com.

Microsoft’s Green Light program is the equivalent of Front Runner for non-US partners. Clicking the Green Light link leads to the Front Runner landing page. It appears to me that the Platform Ready team got their links mixed up.

Microsoft sent the same message on 5/28/2010 with Front Runner substituted for Green Light and different graphics.

You can learn more about Microsoft’s new partner competency offerings in the The Value of Earning a Microsoft® Competency white paper of May 2010 and at the Worldwide Partner Conference (WPC) 2010 on 7/11 to 7/15/2010.

Reuben Krippner added VIDEO: PRM PORTAL Step-by-Step Installation Guide for Windows Azure documentation on 5/26/2010 to his Partner Relationship Management (PRM) Accelerator for Microsoft Dynamics CRM project on Codeplex.

Step-by-Step Installation Video for Windows Azure. This video provides full guidance on how to deploy the Partner portal to a Windows Azure portal. For deployment on your own servers you can follow all the steps up to editing the web.config file in the web portal project. A separate video will be posted for setting up the portal on your own IIS Windows Server. Also note that this solution will work with Microsoft Dynamics CRM Online, On-Premise and Partner-Hosted!

For more about Reuben’s project, see Windows Azure and Cloud Computing Posts for 5/26/2010+.

Alex Williams’ Open API Madness: The Party Has Just Begun for Cloud Developers post of 5/26/2010 to the ReadWriteCloud blog begins:

It's like an API festival here at Gluecon. I tweeted that this afternoon. But it's not just Gluecon - APIs are one of the hottest topics in discussions about cloud computing.

In his presentation today at Gluecon, John Musser of Programmable Web illustrated how hot APIs have become and how they've matured.

Perhaps most illustrative is his "API Billionaire's Club." Members of the club include Google and Facebook with 5 billion API calls per day. Twitter has 3 billion per day. eBay has 8 billion per month. NPR gets 1.1 billion calls per month for its API-delivered stories. Salesforce.com gets 50% of its traffic through its API.

According to Musser, it took eight years to get to 1,000 APIs but just 18 months to get to 2,000. This year, the number of APIs is double what it was last year on a month-by-month basis.

Internet/platform as a service (PaaS) APIs are now number one. That's illustrative of the increased usage of services like Amazon S3 and all its competitors. Maps are the number three API, dropping from the number one spot last year. Social APIs are number two.

REST APIs are far surpassing SOAP.

[Chart: API growth statistics from John Musser's Gluecon presentation]

There's a real energy here at Gluecon around the discussions about APIs. The room was packed for the presentations on the topic.

We'll pour more into the topic in later posts.

<Return to section navigation list> 

Windows Azure Infrastructure

Chris Czarnecki asks Does Working Effectively with Azure Require New Skills? in this 5/27/2010 post to the Learning Tree International blog:

Recently I wrote a post that discussed developing applications for Microsoft’s Azure. This article was stimulated by an interview with Steve Ballmer. My views drew a strong healthy response from Microsoft developers, insisting ASP.NET applications can be moved directly to the Azure cloud.

Related to my post, an article on InformationWeek quoted Bob Muglia, president of Microsoft’s Server and Tools division, as saying “There are few people in the world who can write cloud applications. Our job is to enable everybody to be able to do it.”

I think the quote raises a number of interesting points. Companies will move existing applications to the cloud, or develop new cloud-based applications, for a variety of reasons. Cost savings in infrastructure are a primary motivation of course. But scalability, reliability, rich media, and reduced administration are also potentially equally or more significant. It is when the aspects of scalability, reliability, and rich media are considered that new skills are required by developers to enable them to maximize the benefits the cloud provides. With Azure, Microsoft have provided a rich environment for developing cloud-based applications, much of which they have announced will find its way into their next generation of Windows Server and System Center management software. This raises the exciting prospect of running cloud applications on private networks or on the public Azure cloud, together with hybrids integrating the two.

Developers for Azure can utilize their existing .NET development skills. However, the Azure platform features, application architecture, and libraries for building true cloud applications are the areas in which developers require new skills. Microsoft have provided the tools; developers now need to know what these are and how to apply them. That’s where the training need for Azure arises, and effective training courses can provide a kick start to developers wanting to exploit the cloud. These new skills, combined with developers’ existing knowledge base, open up a wide range of new business and technical opportunities.

Brenda Michelson’s @ Forrester IT Forum: James Staten, How much of your future is in the Cloud? covers James’ Keynote Speech: How Much Of Your Future Will Be In The Cloud? Strategies For Embracing Cloud Computing Services:

Cloud computing has shifted from being a question of “if” to one of “when” and “where” in your IT future and portfolio. Is it best to stick with SaaS, or should you be deploying new services directly to the public clouds like Amazon EC2 or Windows Azure? What applications are candidates for the cloud, and which should remain in-house? And for how long? This session will explore the enterprise uses of cloud computing thus far and synthesize the thinking across Forrester on this issue to present you with a road map and a strategy for embracing the cloud that benefits both your business and the IT function. Cloud can be a catalyst for the IT-to-BT transition so long as you harness it effectively.

Session attendees can expect to learn:

  • How to tell a true cloud solution and its relative maturity from simple cloud washing.
  • The truth behind the economics of cloud computing.
  • The best places to start and strategies to build your own path to cloud efficiency.

Prior to the conference, James wrote a positioning/discussion piece, which is published on ZDNet.  From what I saw on Twitter, the most controversial idea was the “Pay per use or metered consumption” requirement to be considered cloud computing.

James opens with “Cloud computing isn’t an if, it’s a when and a how”.  He says it won’t change your entire world (Nick Carr), nor is it complete hype (Ellison).  But, you will use it, situationally.

Definition: A standardized IT capability (services, software or infrastructure) delivered in a pay-per-use and self-service way.  On the pay-per-use, you should be able to go to zero.  That’s the economic power, according to Staten.  On self-service, the request is immediately processed, in an automated manner.  [This eliminates all the “managed service” cloud players].

Brenda continues liveblogging James’ keynote.

The Microsoft Partner Network opened their new Windows Azure Platform Partner Hub on 5/13/2010 and added a link to the OakLeaf blog on 5/17/2010:


OakLeaf's Windows Azure-pedia

Monday, May 17, 2010

The OakLeaf crew is helping the world stay on top of the Windows Azure Platform.  Check out this evolving blog post. http://oakleafblog.blogspot.com/2010/05/windows-azure-and-cloud-computing-posts_17.html.

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie asserts IT organizations that fail to provide guidance for and governance over public cloud computing usage will be unhappy with the results… in her Why IT Needs to Take Control of Public Cloud Computing post of 5/27/2010:

While it is highly unlikely that business users will “control their own destiny” by provisioning servers in cloud computing environments, that doesn’t mean they won’t be involved. In fact it’s likely that IaaS (Infrastructure as a Service) cloud computing environments will be leveraged by business users to avoid the hassles they perceive (and oft times actually do) exist in their quest to deploy a given business application. It’s just that they won’t themselves be pushing the buttons.

There have been many experts who have expounded upon the ways in which cloud computing is forcing a shift within IT and the way in which assets are provisioned, acquired, and managed. One of those shifts is likely to also occur “outside” of IT with external IT-focused services, such as system integrators like CSC and EDS (now HP Enterprise Services).

ROBBING PETER to PAY PAUL

The use of SaaS by business users is a foregone conclusion. It makes sense. Unfortunately SaaS is generally available only for highly commoditized business functions. That means more niche applications are unlikely to be offered as SaaS because of the diseconomy-of-scale factors involved with such a small market. But that does not mean that businesses aren’t going to acquire and utilize those applications. On the contrary, it is just this market that is ripe for Paul the SI to leverage.

For example, assume a business unit needed application X, but application X is very specific to their industry and not offered as SaaS by any provider today – and is unlikely to be offered as such in the future due to its limited addressable market. But IT is overburdened with other projects and may not have the time – or resources – available until some “later” point in time. A savvy SI at this point would recognize the potential of marrying IaaS with this niche-market software and essentially turning it into a SaaS-style, IaaS-deployed solution. An even savvier SI will have already partnered with a select group of cloud computing providers to enable this type of scenario to happen even more seamlessly. There are quite a few systems integrators that are already invested in cloud computing, so the ones that aren’t will be at a distinct disadvantage if they don’t have preferred partners and can’t provide potential customers with details that will assuage any residual concerns regarding security and transparency.

Similarly, a savvy IT org will recognize the same potential and consider whether or not they can support the business initiative themselves or get behind the use of public cloud computing as an option under the right circumstances. IT needs to understand what types of applications can and cannot be deployed in a public cloud computing environment and provide that guidance to business units. An even savvier IT org might refuse to locally deploy applications that are well-suited to a public IaaS deployment and reserve IT resources for applications that simply aren’t suited to public deployment. IT needs to provide governance and guidance for its business customers. IT needs to be, as Jay Fry put it so well in a recent post on this subject, “a trusted advisor.”

“So what things would IT need to be able to do in order to help business users make the best IT sourcing choices, regardless of what the final answer is? They’d need to do less of what they’ve typically done – manually making sure the low-level components are working the way they are supposed to – and become more of a trusted adviser to the business.”

[From] Thinking about IT as a supply chain creates new management challenges [by] Jay Fry (formerly VP of Marketing for Cassatt, now with CA)

IT needs to be aware that it may be advantageous to use IaaS as a deployment environment for applications acquired by business units when it’s not possible or necessary to deploy locally. Because if Peter the CIO doesn’t, Paul the SI will.  

<Return to section navigation list> 

Cloud Computing Events

tbTechNet announced Windows Azure Virtual Boot Camp V June 1st – June 7th 2010 in a 5/27/2010 post:


Announcing… Virtual Boot Camp V !

Learn Windows Azure at your own pace, in your own time and without travel headaches.

A Windows Azure one-week pass is provided so you can put Windows Azure and SQL Azure through their paces.

NO credit card required.

You can start the Boot Camp any time between June 1st and June 7th and work at your own pace.

The Windows Azure virtual boot camp pass is valid from 5am USA PST June 1st through 6pm USA PST June 7th

Follow these steps:

  1. Request a Windows Azure One Week Pass here

  2. Sign in to the Windows Azure Developer Portal and use the pass to access your Windows Azure account.

  3. Please note: your Windows Azure application will automatically de-provision at the end of the virtual boot camp on June 7th

    1. Since you will have a local copy of your application, you will be able to publish your application package on to Windows Azure after the virtual boot camp using a Developer Accelerator Offer to test and dev on Windows Azure. See the Azure Offers here

  4. For USA developers, no-cost phone and email support during and after the Windows Azure virtual boot camp with the Front Runner for Windows Azure program

  5. For non-USA developers - sign up for Green Light at https://www.isvappcompat.com/global

  6. Startups - get low cost development tools and production licenses with BizSpark - join here

  7. Get the Tools

    1. To get started on the boot camp, download and install these tools:

    2. Download Microsoft Web Platform Installer

    3. Download Windows Azure Tools for Microsoft Visual Studio

  8. Learn about Azure

    1. Learn how to put up a simple application on to Windows Azure

    2. Learn about PHP on Windows Azure

    3. Take the Windows Azure virtual lab

    4. Read about Developing a Windows Azure Application

    5. View the series of Web seminars designed to quickly immerse you in the world of the Windows Azure Platform

    6. Why Windows Azure - learn why Azure is a great cloud computing platform with these fun videos

  9. Dig Deeper into Windows Azure

    1. Download the Windows Azure Platform Training Kit

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Paul Krill claims “Business-oriented enhancements lead to increased developer interest, though cloud caution still rules” in the preface to his Google App Engine gains developer interest in battle with EC2, Azure NetworkWorld article of 5/27/2010:

While the Google App Engine cloud platform has trailed Amazon and Microsoft clouds in usage, it is nonetheless gaining traction among developers. That interest was bolstered by Google's recent extension to its cloud, dubbed Google App Engine for Business, which is intended to make the cloud more palatable to enterprises by adding components such as service-level agreements and a business-scale management console.

Built for hosting Web applications, App Engine services more than 500,000 daily page views, but App Engine's 8.2 percent usage rate, based on a Forrester Research survey of developers in late 2009, trails far behind Amazon.com's Elastic Compute Cloud (EC2), which has nearly a 41 percent share. Microsoft's newer Windows Azure cloud service edges out App Engine, taking a 10.2 percent share. Forrester surveyed 1,200 developers, but only about 50 of them were actually deploying to the cloud. [Emphasis added.]

Developer Mike Koss, launch director at Startpad.org, which hosts software development companies, is one of those using App Engine. "[The service is] for developers who want to write pure JavaScript programs and not have to manage their own cloud; they can write their app completely in JavaScript," Koss says. He adds that he likes cloud capabilities for data backup and availability.

Restraints on App Engine separate it in a good way from Amazon.com's cloud, Koss says: "App Engine abstracts away a lot of the details that developers need to understand to build scalable apps and you're a little bit more constrained on App Engine, so you kind of can't get into trouble like you can with an EC2." Amazon gives users a virtual box in which they are responsible for their own OS and security patches, whereas App Engine is abstracted at a higher level, he notes.

But not everyone believes App Engine is ready for prime time. "I think it's got a ways to go," says Pete Richards, systems administrator at Homeless Prenatal Program. "The data store technology for it is not very open, so I really don't know about getting information in and out of that," he notes, referring to data access methods deployed in App Engine. Still, "it's a promising platform," Richards says.

Cloud computing "is in the middle of something of a hype cycle," says Randy Guck, chief architect at Quest Software. But he thinks the cloud hype might be less than the hype a decade ago for SaaS (software as a service), something his company is now looking at developing using a cloud platform. "Right now, we're Microsoft-centric, so we're looking at Azure," Guck says, but he notes that Quest may have a role for App Engine in the future.

The question of whether the cloud is really ready for enterprise usage remains a key one for developers. As the Forrester study found, few are willing to commit now. InfoWorld's interviews echoed that caution. For example, Ryan Freng, a Web developer from the University of Wisconsin at Madison, says cloud computing is interesting but not something he would use anytime soon. "Right now, it's important that we maintain all our data and that we don't send it to the cloud," Freng says.

Paul continues his analysis on page 2.

<Return to section navigation list> 

Wednesday, May 26, 2010

Windows Azure and Cloud Computing Posts for 5/26/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry explains SQL Azure and Session Tracing ID in this 5/26/2010 post to the SQL Azure Team blog:

If you have been paying close attention, you will have noted that SQL Server Management Studio 2008 R2 has added a new property for connections to SQL Azure -- the Session Tracing ID.

A session tracing identifier is a unique GUID that is generated for every connection to SQL Azure. On the server side, the SQL Azure team tracks and logs all connections by the Session Tracing Id and any errors that arise from that connection. In other words, if you know your session identifier and have an error, Azure Developer Support can look-up the error in an attempt to determine what caused it.

SQL Server Management Studio

In SQL Server Management Studio, you can get your session tracing identifier in the properties window for the connection.

[Screenshot: the Session Tracing Id property in the SSMS connection properties window]

Transact-SQL

You can also ask for your Session Tracing ID directly in Transact-SQL using this query:

SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())
C#

Alternatively, you can use this C# code:

using (SqlConnection conn = new SqlConnection(…))
{
    // Grab the Session Tracing ID from the new connection
    using (SqlCommand cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandText = "SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())";
        Guid sessionId = new Guid(cmd.ExecuteScalar().ToString());
    }
}

It is important to note that the Session Tracing ID is per connection to the server, and ADO.NET pools connections on the client side. This means that some instances of SqlConnection will have the same Session Tracing ID, since the connection they represent is recycled from the connection pool.
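As a small aside of my own (not part of Wayne's post), if you are debugging and want to be sure the next connection gets a fresh server session, and therefore a fresh Session Tracing ID, one option is to clear the pool before reconnecting:

using System;
using System.Data.SqlClient;

class FreshSessionTracingId
{
    static void Main()
    {
        // Placeholder connection string -- same format as the SQL Azure examples above.
        const string connectionString =
            "Server=tcp:YourServer.database.windows.net,1433;" +
            "Database=TestDB;User ID=YourUser@YourServer;Password=YourPassword;Encrypt=True;";

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // Mark the pooled physical connections for disposal so the next Open()
            // creates a brand-new connection (and server session) instead of reusing one.
            SqlConnection.ClearPool(conn);
        }

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())", conn))
        {
            conn.Open();
            Console.WriteLine("New Session Tracing ID: {0}", cmd.ExecuteScalar());
        }
    }
}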

Summary

If you have the Session Tracing ID, along with the server name and the approximate time when calling Azure Developer Support you can expedite the debugging process and save yourself valuable time. Do you have questions, concerns, comments? Post them below and we will try to address them.

Azret Botash continues his OData-XPO series with OData WCF Data Service Provider for XPO - Part 1 of 5/26/2010:

In the previous post, we introduced a WCF Data Service Provider for XPO. Let’s now look at how it works under the hood.

For a basic read-only data service provider we only need to implement two interfaces:

  • IDataServiceMetadataProvider : Provides the metadata information about your entities. The data service will query this for your resource types, resource sets etc…
  • IDataServiceQueryProvider : Provides access to the actual entity objects and their properties. Most importantly, it is responsible for the IQueryable on which data operations like $filter, $orderby, $skip are performed.

Custom provider implementations are picked up via IServiceProvider.GetService.
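To make the plumbing concrete, here is a minimal, hypothetical sketch of how a data service hands those two providers to the WCF Data Services runtime through IServiceProvider.GetService; the class and field names are mine, not Azret's, and the XPO-backed implementations themselves are in the source download linked below.

using System;
using System.Data.Services;
using System.Data.Services.Providers;

// Hypothetical host class; the metadata and query providers would be the
// XPO-backed implementations described in the post.
public class XpoDataService : DataService<object>, IServiceProvider
{
    private IDataServiceMetadataProvider metadataProvider;   // resource sets, types, ...
    private IDataServiceQueryProvider queryProvider;         // exposes the IQueryable

    public object GetService(Type serviceType)
    {
        // The runtime asks the service for these two interfaces; returning our own
        // implementations switches it into custom-provider mode.
        if (serviceType == typeof(IDataServiceMetadataProvider))
            return metadataProvider;
        if (serviceType == typeof(IDataServiceQueryProvider))
            return queryProvider;
        return null;   // anything else falls back to the default behavior
    }

    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);   // read-only, as in Part 1
    }
}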

Azret continues with sample code and concludes:

What’s next?
  • Get the WCF Data Service Provider Source.
  • Learn about implementing custom data service providers from Alex. (I will only cover XPO related details.)

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Kim Cameron wrote in his Interview on Identity and the Cloud post of 5/24/2010 to the IdentityBlog:

I just came across a Channel 9 interview Matt Deacon did with me at the Architect Insight Conference in London a couple of weeks ago.  It followed a presentation I gave on the importance of identity in cloud computing.   Matt keeps my explanation almost… comprehensible - readers may therefore find it of special interest.  Video is here.


In addition, here are my presentation slides and video.

Following is Channel9’s abstract:

"The Internet was born without an identity". -Kim Cameron.
With the growing interest in "cloud computing", the subject of Identity is moving into the limelight. Kim Cameron is a legend in the identity architecture and engineering space. He is currently the chief architect for Microsoft's identity platform and a key contributor to the field at large.
For more info on Architect Insight 2010, including presentation slides and videos go to www.microsoft.com/uk/aic2010

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

msdev.com is moving to Windows Azure according to this news item of 5/26/2010:

msdev is moving into the cloud with the Windows Azure Platform. We want to share our experience with you and will be uploading a series of videos documenting our move.

You can find more information about the move to Azure below: -

What's Happening?
We're moving all of our web properties from running in a hosting environment on physical servers to the Windows Azure Platform. In doing so, we hope to: -
  • Reduce monthly expenses for hosting and bandwidth
  • Improve streaming and downloading experiences to our end users
The project has been divided into 3 main releases as follows: -
  • March: Migrate all msdev content, approximately 60 GB worth: Release 1
  • April: Migrate Partner site and Channel Development site: Release 2
  • May: Migrate the MSDev site and Admin Tools: Release 3
Release One: Migration of 60+ GB of msdev training content into the cloud
This will allow the incorporation of the Content Delivery Network (CDN) features of Windows Azure. This first release was completed in March 2010. The major items to address in this first release were as follows:
  • Physically moving the existing training content from their current location to Windows Azure Blob storage.
  • Updating the URLs stored in the database to point to the new location
  • Changing the way content producers provide video content to the site administrator
  • Updating the tools that allow an administrator to associate a training video with a training event
Release Two: Migration of two web properties – the Partner and Channel Development Websites
A team consisting of individuals with deep knowledge of these properties and knowledge of the Windows Azure Platform has been assembled to work on this effort. These two properties will be live on the Windows Azure Platform in April 2010.
Release Three: Migration of the msdev site and associated admin tools
This migration will occur in May 2010. With the completion of this third release, the entire suite of properties will be running on the Windows Azure Platform.
Coming Soon!
Look out for a series of videos documenting our move to the Windows Azure Platform.

Lori MacVittie cautions “Just when you thought the misconceptions regarding cloud computing couldn’t get any worse…they do” in her And That, Young Cloudwalker, Is Why You Fail post to F5’s DevCentral blog:

We have, in general, moved past the question “what is cloud” and on to “what do I need to do to move an application to the cloud?” But the answer to “what is cloud” appears not to have reached consensus, and thus advice on how to move an application into the cloud might be based on an understanding of cloud that is less than (or not at all) accurate. The problem is exacerbated by the reality that there are several types or models of cloud: SaaS, PaaS, and IaaS. Each one has different foci that impact the type of application you can deploy in its environment. For example, moving an application to SaaS doesn’t make much sense at all because SaaS is an application; what you’re doing is moving data, not the application. That makes it difficult to talk in generalities about “the cloud”. 

This lack of consensus results in advice based on assumptions regarding cloud that may or may not be accurate. This is just another reason why it is important for any new technological concept to have common, agreed upon definitions. Because when you don’t, you end up with advice that’s not just inaccurate, it’s downright wrong. 

CLOUD ONLY SUPPORTS WHAT KIND of APPLICATIONS?

Consider this post offering up a “Practical Top Ten Checklist” for migrating applications to “cloud.” It starts off with an implied assumption that cloud apparently only supports web applications:

1. Is your app a web app? It sounds basic, but before you migrate to the web, you need to make sure that your application is a web application. Today, there are simple tools that can easily convert it, but make sure to convert it.

First and foremost, “cloud” is not a synonym for “the Internet” let alone “the web.” The “web” refers to the collective existence of applications and sites that are delivered via HTTP. The Internet is an interconnection of networks that allow people to transfer data between applications. The “cloud” is a broad term for an application deployment model that is elastic, scalable, and based on the idea of pay-per-use. None of these are interchangeable, and none of  them mean the same thing.  The wrongness in this advice is the implied assertion that “cloud” is the same as the “web” and thus a “cloud application” must be the same as a “web application.” This is simply untrue.

A 'cloud' application need not be a web application.  Using on-demand servers from infrastructure-as-a-service (IaaS) providers like Rackspace, Amazon, and GoGrid, you can operate almost any application that can be delivered from a traditional data center server.  This includes client/server architecture applications, non-GUI applications, and even desktop applications if you use Citrix, VNC or other desktop sharing software.  Web applications have obvious advantages in this environment, but it is by no means a requirement.

-- David J. Jilk, CEO Standing Cloud

As David points out, web applications have obvious advantages – including being deployable in a much broader set of cloud computing models than traditional client/server applications – but there is no requirement that applications being deployed in “a cloud” be web applications. The goodness of virtualization means that applications can be “packaged up” in a virtual machine and run virtually (sorry, really) anywhere that the virtual machine can be deployed: your data center, your neighbor’s house, a cloud, your laptop, wherever. Location isn’t important and the type of application is only important with regards to how you access that application. You may need policies that permit and properly route the application traffic applied in the cloud computing provider’s network infrastructure, but a port is a port is a port and for the most part routers and switches don’t care whether it’s bare nekkid TCP or HTTP or UDP or IMAP or POP3 or – well, you get the picture.

This erroneous conclusion might have been reached based on the fact that many cloud-based applications have web-based user interfaces. But when you dig down, under the hood, the bulk of what they actually “do” in terms of functionality is not web-based at all. Take “webmail” for example. That’s a misnomer in that the applications are mail servers; they use SMTP and POP3/IMAP to exchange data. If you take away the web interface the application still works and it could be replaced with a fat client and, in fact, solutions like Gmail allow for traditional client-access to e-mail via those protocols. What’s being scaled in Google’s cloud computing environment is a mix of SMTP, POP3, IMAP (and their secured equivalents) as well as HTTP. Only one of those is a “web” application, the rest are based on Internet standard protocols that have nothing to do with HTTP. 

REDUX: “THE CLOUD” can support virtually any application. Web applications are ideally suited to cloud, but other types of client/server applications will also benefit from being deployed in a cloud computing environment. Old skool COBOL applications running on mainframes are, of course, an exception to the rule. For now.

Lori continues with a “SCALABILITY and REDUNDANCY are WHOSE RESPONSIBILITY??” topic and concludes:

REDUX: ONE of “THE CLOUD”s fundamental attributes is that it enables elastic scalability and, through such scalability implementations, a measure of redundancy. Nothing special is necessarily required of the application to benefit from cloud-enabled scalability; it is what it is. Scalability is about protocols and routing, to distill it down to its simplest definition, and as long as it’s based on TCP or UDP (which means pretty much everything these days) you can be confident you can scale it in the cloud.

D. Johnson’s Different Types of Cloud ERP post of 5/26/2010 to the ERPCloudNews blog explains:

Cloud Infrastructure and its impact on Hosting and SaaS

Cloud technology enables SaaS and powerful new forms of hosting that can reduce the cost of service delivery. Note that cloud does not equal SaaS and cloud is not mutually exclusive from hosting.

How much cloud do you need?

Customers can purchase services with different amounts of “cloud” in the service delivery stack. Assume that we have four distinct layers of delivery: cloud infrastructure (hardware resources for the cloud), cloud platform (operating system resources for the cloud), cloud applications (application resources built for the cloud), and client resources (user interface to the cloud). This distinction helps us illustrate the way cloud services are offered in the diagram below.

The Cloud Stack

Cloud Delivery Options

In this simplified diagram, we show three types of cloud services:

  • Cloud Infrastructure (for example: Amazon, GoGrid) delivers a cloud infrastructure where you install and maintain a platform and an application.
  • Cloud Platform (for example: Windows Azure) delivers a cloud platform where you install and maintain your applications without worrying about the operating environment.
  • Cloud Application (for example: Salesforce.com) delivers a complete application; all you maintain is your client access program, which is frequently a browser.
SaaS ERP and Cloud Models

Even legacy ERP vendors are moving to cloud technologies to offer software as a service to their customers. When vendors offer SaaS, the customer is only responsible for maintaining their client device (usually just a browser).

Vendors can offer SaaS utilizing all three cloud infrastructures above. Some vendors such as Acumatica offer all three types of services.

  • Offering SaaS using a cloud application is straightforward. In this case the vendor builds an application which is tightly integrated with infrastructure and hardware so that the three components cannot be separated.
  • Offering SaaS using a cloud platform means that the vendor must manage the application layer separately from the platform layer. This architecture gives the vendor the flexibility to move the application to a separate cloud platform provider.
  • Offering SaaS using a cloud infrastructure is similar to a managed hosting scenario. In this case the vendor installs and manages both an operating system and their application on top of a multi-tenant hardware infrastructure. This technique provides maximum flexibility, but may increase overhead slightly.
Comparing SaaS Offering Options
  • SaaS using a Cloud Application
    Maximizes efficiencies for “cookie cutter” applications
    Vendor lock-in, customer does not have option to move application to a different provider
  • SaaS using a Cloud Platform
    Mix of flexibility and savings
    Coordination challenges – vendor manages the application while a service provider manages infrastructure
  • SaaS using a Cloud Infrastructure
    Maximizes flexibility to switch providers or move on-premise
    Some would argue this is nothing more than a hosted service with a slightly lower pricing structure
  • Multi-tenant applications
    Multi-tenant applications can be deployed in any scenario to reduce overhead associated with upgrading multiple customers and maintaining different versions of software. This implies that multi-tenancy reduces the flexibility to run an old version of software and limits customization and integration potential. Multi-tenant options should be priced lower to offset the loss of flexibility.
** Recommendation **

For a complex application such as enterprise resource planning (ERP), we advise selecting a vendor that can provide flexibility. ERP systems are not like CRM, email, or other cookie-cutter applications. Your ERP application needs to grow and change as your business changes.

Key questions that you need to ask:
1. Do you need significant customizations and interfaces with on-premise systems?
2. Will you need to move your ERP architecture on-premise in the future?
3. Do you need to own your operating environment and the location of your data?
4. Do you prefer to own software instead of renting it?

If you answered “yes” or “maybe” to any of these questions, you should consider the Cloud Platform or Cloud Infrastructure options. These options provide maximum flexibility as well as the option to own your software.

If you answered “no” to these questions, then a cloud application may provide price benefits that offset the vendor lock-in issues. Be careful that the price that the vendor quotes in year 1 is not going to change significantly in the future when it may be difficult to leave the platform.

Reuben Krippner updated his PRM Accelerator (R2) for Dynamics CRM 4.0 project on 5/25/2010 and changed the CodePlex license from the usual Microsoft Public License (Ms-PL):

The Partner Relationship Management (PRM) Accelerator allows businesses to use Microsoft Dynamics CRM to distribute sales leads and centrally manage sales opportunities across channel partners. It provides pre-built extensions to the Microsoft Dynamics CRM sales force automation functionality, including new data entities, workflow and reports. Using the PRM Accelerator, companies can jointly manage sales processes with their channel partners through a centralized Web portal, as well as extend this integration to automate additional business processes.

The accelerator installation package contains all source code, customizations, workflows and documentation.

Please note that this is R2 of the PRM Accelerator - there are a number of new capabilities which are detailed in the documentation.

Please also review the new License agreement before working with the accelerator.

According to Reuben’s re-tweet, CRMXLR8 offers “optional use of #Azure to run your partner portal; can also run your portal on-prem[ises] if you like!”

<Return to section navigation list> 

Windows Azure Infrastructure

Jonathan Feldman’s Cloud ROI: Calculating Costs, Benefits, Returns research report for InformationWeek::Analytics is available for download as of 5/25/2010:

The decision on whether to outsource a given IT function must be based on a grounded discussion about data loss risk, lock-in and availability, total budget picture, reasonable investment life spans, and an ability to admit that sometimes, good enough is all you need.

Cloud ROI: Calculating Costs, Benefits, Returns
Think that sneaking feeling of irrelevance is just your imagination? Maybe, maybe not. Our April 2010 InformationWeek Analytics Cloud ROI Survey gave a sense of how nearly 400 business technology professionals see the financial picture shaking out for public cloud services. One interesting finding: IT is more confident that business units will consult them on cloud decisions than our data suggests they should be.

Fact is, outsourcing of all types is seen by business leaders as a way to get new projects up fast and with minimal muss, fuss and capital expenditures. That goes double for cloud services. But when you look forward three or five years, the cost picture gets murkier. When a provider perceives that you’re locked in, it can raise rates, and you might not save a red cent on management in the long term. In fact, a breach at a provider site could cost you a fortune—something that’s rarely factored into ROI projections.

In our survey, we asked who is playing the Dr. No role in cloud. We also examined elasticity and efficiency. Premises systems—at least ones that IT professionals construct—are always overbuilt in some way, shape or form. We all learned the hard way that you’d better build in extra, since the cost of downtime to add more can be significant. Since redundancy creates cost, we asked about these capacity practices, flexibility requirements, key factors in choosing business systems, and how respondents evaluate ROI for these assets.

Your answers showed us that adopting organizations aren’t nearly as out to lunch as cloud naysayers think. In this report, we’ll analyze the current ROI picture and discuss what IT planners should consider before putting cloud services into production, to ensure that the fiscal picture stays clear. (May 2010)

  • Survey Name: InformationWeek Analytics Cloud ROI Survey
  • Survey Date: April 2010
  • Region: North America
  • Number of Respondents: 393

Download

Daniel Robinson reported “Ryan O'Hara, Microsoft's senior director for System Center, talks to V3.co.uk about bringing cloud resources under the control of existing management tools” in his Microsoft to meld cloud and on-premise management article of 5/26/2010 for the V3.co.uk site:

Microsoft's cloud computing strategy has so far delivered infrastructure and developer tools, but the company is now looking to add cloud support into its management platform to enable businesses to control workloads both on-premise and in the cloud from a single console.

Microsoft's System Center portfolio has focused on catching up with virtualisation leader VMware on delivering tools that can manage both virtual and physical machines on-premise, according to Ryan O'Hara, senior director of System Center product management at Microsoft.

"Heretofore we've been investing in physical-to-virtual conversion integrated into a single admin experience, and moving from infrastructure to applications and service-level management," he said.

Microsoft is now looking at a third dimension, that of enabling customers to extend workloads from their own on-premise infrastructure out to a public cloud, while keeping the same level of management oversight.

"We think that on-premise architecture will be private cloud-based architecture, and this is one we're investing deeply in with Virtual Machine Manager and Operations Manager to enable these private clouds," said O'Hara.

Meanwhile, the public cloud element might turn out to be a hosted cloud, an infrastructure-as-a-service, a platform-as-a-service or a Microsoft cloud like Azure.

The challenge is to extend the System Center experience to cover both of these with consistency, according to O'Hara. He believes this is where Microsoft has the chance to create some real differentiation in cloud services, at least from an enterprise viewpoint.

"I think, as we extend cross these three boundaries, it puts System Center and Microsoft into not just an industry leading position, but a position of singularity. I don't think there is another vendor who will be able to accomplish that kind of experience across all three dimensions," he said.

This is territory that VMware is also exploring with vSphere and vCloud, and the company signaled last year that it planned to give customers the ability to move application workloads seamlessly between internal and external clouds.

Daniel continues his story on page 2 and concludes:

Microsoft is also spinning back the expertise it has gained from Azure and its compute fabric management capabilities into Windows Server and its on-premise infrastructure, according to O'Hara. Announcements around this are expected in the next couple of months.

The next generation of Virtual Machine Manager and Operations Manager, which Microsoft has dubbed its 'VNext' releases, are due in 2011 and will have "even more robust investments incorporating cloud scenarios", O'Hara said.

Ryan Kim’s Survey: Bay Area more tech and cloud savvy article of 5/26/2010 for the San Francisco Chronicle’s The Tech Chronicles blog reports:

We here in the Bay Area are a tech-savvy lot, down with cloud computing (when we understand it) and emerging technologies.

That's the upshot of a survey by Penn Schoen Berland, a market research and consulting firm that is opening an office in San Francisco. Not necessarily ground-breaking stuff considering we're in Silicon Valley, but it's still interesting to see how we stand compared to the rest of the country.

According to the survey, Bay Area residents are more excited by technology (78 percent of Bay Area respondents vs. 67 percent for the U.S.) and are more involved in technology innovations (60 percent Bay Area compared to 50 percent for the U.S.).

While only 18 percent of Americans can accurately define the cloud and cloud computing, 23 percent of Bay Area residents know what it is. Bay Area residents are more interested in using the cloud for things like applications (81 percent Bay Area vs. 65 percent U.S.), backing up computer or phone data (72 percent Bay Area vs. 64 percent U.S.) and online document collaboration (66 percent Bay Area vs. 51 percent U.S.).

When it comes to new technology, 60 percent of Bay Area respondents said they like to have the latest and greatest, compared to 50 percent for the rest of the country. People here also want to be involved in making new tech (57 percent Bay Area vs. 49 percent U.S.).

Bay Area residents are significantly more likely to use Facebook, Firefox, Gmail, iTunes, Google Chrome, Google Docs and LinkedIn than others around the country. We also really like Microsoft Office, but we're less likely to use Microsoft's Internet Explorer browser.

Penn Schoen Berland’s findings are certainly no surprise.

John Soat asks Cloud 2.0: Are We There Yet? in this 5/25/2010 post to Information Week’s Plug Into the Cloud blog (sponsored by Microsoft):

Cloud computing is still part hype, part reality. Broken down into its constituent parts (SaaS, PaaS, IaaS) it is a pragmatic strategy with a history of success. But the concept of “the cloud” still has some executives scratching their heads over what’s real, what’s exactly new about it, and how it fits into their IT plans. Is it time to move to Cloud 2.0?

Jeffrey Kaplan, managing director of ThinkStrategies, a cloud consultancy, has written an interesting column for InformationWeek’s Global CIO blog about the evolution of cloud computing and how it is poised to enter the “2.0” stage. The ubiquitous “2.0” designation passed from software development to popular culture several years ago, and it is used to express a significant advancement or shift in direction. Because of its ubiquity, the 2.0 moniker has lost some of its specificity.

Right now, public cloud computing is dominated by the “XaaS” models: software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS). Some organizations are experimenting with private clouds, which import the public cloud’s capabilities around resource scalability and dynamic flexibility into proprietary data centers.

To me, Cloud 2.0 is exemplified in one word: hybrid. Cloud computing will offer its most compelling advantages when organizations can combine private clouds and public clouds (XaaS) in IT architectures that stretch the definition of flexibility and agility.

An example of that Cloud 2.0 direction is Microsoft’s Azure strategy. Azure is Microsoft’s platform-as-a-service offering, capable of supporting online demand for dynamic processing and services. Microsoft is also building similar automation and management capabilities into its Windows Server technology, which should enable the development of private clouds and their integration with the public cloud, specifically in this instance Azure. [Emphasis added.]

It might be a little soon to jump to the Cloud 2.0 designation just yet. But as development continues, it’s not that far away, and it’s not too soon to start figuring it into your IT strategy.

Ellen Rubin continues the public-vs-private-cloud controversy in her Private Clouds: Old Wine in a New Bottle post of 5/25/2010, which describes “The Need for Internal Private Cloud”:

I recently read a Bank of America Merrill Lynch report about cloud computing, and they described private clouds as "old wine in a new bottle." I think they nailed it!

The report points out that a typical private cloud set-up looks much the same as the infrastructure components currently found in a corporate data center, with virtualization added to the mix. While the virtualization provides somewhat better server utilization, the elasticity and efficiency available in the public cloud have private clouds beat by a mile.

In short, the term "private cloud" is usually just a buzzword for virtualized internal environments that have been around for years. By replicating existing data center architectures, they also recreate the same cost and maintenance issues that cloud computing aims to alleviate.

Despite their limitations, there is still a lot of industry talk about creating internal private clouds using equipment running inside a company’s data center. So why do people consider building private clouds anyway?

To answer this question, you have to step back and examine some of the fundamental reasons why people are looking to cloud computing:

  1. The current infrastructure is not flexible enough to meet business needs
  2. Users of IT services have to wait too long to get access to additional computing resources
  3. CFOs and CIOs are tightening budgets, and they prefer operational expenses (tied directly to business performance) vs. capital expenses (allocated to business units)

In every case, the public cloud option outperforms the private cloud. Let’s examine each point:

  1. Flexibility – the ability to access essentially unlimited computing resources as you need them provides the ultimate level of flexibility. The scale of a public cloud like Amazon’s EC2 cannot possibly be replicated by a single enterprise. And that’s just one cloud – there are many others, allowing you to choose a range of providers according to your needs.
  2. Timeframes – to gain immediate access to public cloud compute resources, you only need an active account (and of course the appropriate corporate credentials). With a private cloud, users have to wait until the IT department completes the build out of the private cloud infrastructure. They are essentially subject to the same procurement and deployment challenges that had them looking at the public cloud in the first place.
  3. Budgets – everyone knows that the economic environment has brought a new level of scrutiny on expenses. In particular, capital budgets have been slashed. Approving millions of dollars (at least) to acquire, maintain and scale a private cloud sufficient for enterprise needs is becoming harder and harder to justify — especially when the "pay as you go" approach of public clouds is much more cost-effective.

There are many legitimate concerns that people have with the public cloud, including security, application migration and vendor lock-in. It is for these reasons and more that we created CloudSwitch. We’ve eliminated these previous barriers, so enterprises can take immediate advantage of the elasticity and economies of scale available in multi-tenant public clouds. Our technology is available now, and combines end-to-end security with point-and-click simplicity to revolutionize the way organizations deploy and manage their applications in public clouds.

Sir Isaac Newton may not have dreamed about clouds, but his first Law of Motion, "a body at rest tends to stay at rest", has been a good harbinger of cloud adoption until now. It is fair to expect that people will grasp for private clouds simply because it’s more comfortable (it’s the status quo). However, the rationale for public cloud adoption is so compelling that a majority of organizations will choose to embrace the likes of Amazon, Terremark, and other clouds. As adoption increases, private clouds will be used only for select applications, thus requiring far fewer resources than they currently demand. We’re also seeing the emergence of “hybrid” clouds that allow customers to toggle compute workloads between private and public clouds on an as-needed basis.

In the end, we will have new wine and it will be in a new bottle. With CloudSwitch technology, 2010 is shaping up to be a great vintage.

Rory Maher, CFA claims in his THE MICROSOFT INVESTOR: Cloud Computing Will Be A $100 Billion Market post of 5/18/2010 to the TBI Research blog:

Cloud Computing Is A $100 Billion Market (Merrill Lynch)
Merrill Lynch analyst Kash Rangan believes the addressable market for cloud computing is $100 Billion (that's about twice Microsoft's annual revenue).  This is broken down between applications ($48 Billion), platform ($26 Billion), and infrastructure ($35 Billion).  Along with Google and Salesforce.com, Microsoft is one of the few companies positioned well across all segments of the industry.  Why is this important?  "Azure, while slow to take off could accelerate revenue and profit growth by optimizing customer experience and generating cross-sell of services."  This is true, but the bigger story is whether Microsoft can gain enough traction in cloud computing to offset losses in share by its Windows franchise.  At this early stage it is not looking like this will be the case.

and reports:

Microsoft Presents At JP Morgan Conference: Tech Spend Encouraging, But Cloud A Risk (JP Morgan)
Stephen Elop, President of Microsoft Business Division (MBD), presented at JP Morgan's TMT conference yesterday.  Analyst John DiFucci had the following takeaways:

  • Elop was cautious but did indicate he was seeing early signs of an increase in business spending.
  • The company is rolling out cloud-like features to its products in order to fend off competitors, but cloud products would likely decrease overall profit margins.
  • Office 2010 is getting off to a strong start.  Elop noted "there were 8.6 million beta downloads of Office 2010, or three times the number of beta downloads seen with Office 2007."

<Return to section navigation list> 

Cloud Security and Governance

See Chris Hoff’s (@Beaker) “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure” Gluecon keynote in the Cloud Computing Events section below.

Lydia Leong analyzes Shifting the software optimization burden in her 5/26/2010 post to her CloudPundit: Massive-Scale Computing (not Gartner) blog:

Historically, software vendors haven’t had to care too much about exactly how their software performed. Enterprise IT managers are all too familiar with the experience of buying commercial software packages and/or working with integrators in order to deliver software solutions that have turned out to consume far more hardware than was originally projected (and thus caused the overall project to cost more than anticipated). Indeed, many integrators simply don’t have anyone on hand that’s really a decent architect, and lack the experience on the operations side to accurately gauge what’s needed and how it should be configured in the first place.

Software vendors needed to fix performance issues so severe that they were making the software unusable, but they did not especially care whether a reasonably efficient piece of software was 10% or even 20% more efficient, and given how underutilized enterprise data centers typically are, enterprises didn’t necessarily care, either. It was cheaper and easier to simply throw hardware at the problem rather than to worry about either performance optimization in software, or proper hardware architecture and tuning.

Software as a service turns that equation around sharply, whether multi-tenant or hosted single-tenant. Now, the SaaS vendor is responsible for the operational costs, and therefore the SaaS vendor is incentivized to pay attention to performance, since it directly affects their own costs.

Since traditional ISVs are increasingly offering their software in a SaaS model (usually via a single-tenant hosted solution), this trend is good even for those who are running software in their own internal data centers — performance optimizations prioritized for the hosted side of the business should make their way into the main branch as well.

I am not, by the way, a believer that multi-tenant SaaS is inherently significantly superior to single-tenant, from a total cost of ownership, and total value of opportunity, perspective. Theoretically, with multi-tenancy, you can get better capacity utilization, lower operational costs, and so forth. But multi-tenant SaaS can be extremely expensive to develop. Furthermore, a retrofit of a single-tenant solution into a multi-tenant one is a software project burdened with both incredible risk and cost, in many cases, and it diverts resources that could otherwise be used to improve the software’s core value proposition. As a result, there is, and will continue to be, a significant market for infrastructure solutions that can help regular ISVs offer a SaaS model in a cost-effective way without having to significantly retool their software.

<Return to section navigation list> 

Cloud Computing Events

The Glue 2010 conference (#gluecon) at the Omni Interlocken Resort in Broomfield (near Denver), CO is off to a roaring start on 5/26/2010 with controversial keynotes by database visionaries:

Hopefully, Gluecon or others have recorded the sessions and will make the audio and/or video content available to the public. If you have a link to recorded content, please leave a copy in a comment.

Michael Stonebraker will present a Webinar - Mike Stonebraker on SQL "Urban Myths" on 6/3/2010 at 1:00 PM - 2:00 PM PDT (advance registration required.) His Errors in Database Systems, Eventual Consistency, and the CAP Theorem post to the Communications of the ACM blog appears to be the foundation of his Gluecon 2010 keynote. (Be sure to read the comments.)

Chris Hoff (@Beaker) presented his “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure” keynote at 11:40 to 12:10 MDT. Here’s a link to an earlier 00:58:36 Cloudincarnation from TechNet’s Security TechCenter: BlueHat v9: Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure:

Where and how our data is created, processed, accessed, stored, backed up and destroyed in what are sure to become massively overlaid cloud-based services - and by whom and using whose infrastructure - yields significant concerns related to security, privacy, compliance, and survivability. This presentation shows multiple cascading levels of failure associated with relying on cloud-on-cloud infrastructure and services, including exposing flawed assumptions and untested theories as they relate to security, privacy, and confidentiality in the cloud, with some unique attack vectors.

Presented by Chris Hoff, Director of Cloud and Virtualization Solutions, Cisco

[Photos: Eric Brewer, Michael Stonebraker, Chris Hoff]

See also BlueHat v9: Interview with Katie Moussouris and Chris Hoff:

Chris Hoff is Director of Cloud and Virtualization Solutions, Data Center Solutions at Cisco Systems. He has over fifteen years of experience in high-profile global roles in network and information security architecture, engineering, operations and management with a passion for virtualization and all things cloud.

Presented by Katie Moussouris, Senior Security Strategist, Security Development Lifecycle, Microsoft and Chris Hoff, Director of Cloud and Virtualization Solutions, Cisco

The Voices of Innovation Blog posted this Engagement in Washington: Brad Smith on Cloud Computing article about the Gov 2.0 Expo on 5/26/2010:

This morning, Microsoft Senior Vice President and General Counsel Brad Smith gave a keynote speech, "New Opportunities and Responsibilities in the Cloud," at the Gov 2.0 Expo in Washington, DC. Voices for Innovation has been covering cloud computing policy and business opportunities for several months, and earlier this year, Smith spoke with VFI after delivering a speech at the Brookings Institution. You can view that video at this link.

We were going to recap Smith's speech at Gov 2.0, but Smith himself posted a blog, "Unlocking the Promise of the Cloud in Government," on the Microsoft on the Issues blog. We have re-posted this brief essay below. One significant takeaway: engagement. Smith writes, "Microsoft welcomes governments and citizens alike to participate in shaping a responsible approach to the cloud." That's what VFI is all about. VFI members are on the front lines of technology, developing and implementing innovative solutions. You should bring your expertise to discussions when the opportunity arises. Now, from Brad Smith...

Unlocking the Promise of the Cloud in Government

By Brad Smith
Senior Vice President and General Counsel

Over the past few months, starting with my January speech at the Brookings Institution in Washington, D.C., I’ve talked a lot about the great potential for cloud computing to increase the efficiency and productivity of governments, businesses and individual consumers. To realize those benefits, we need to establish regulatory and industry protections that give computer users confidence in the privacy and security of cloud data.

Today, I returned to Washington to continue the discussion as one of the plenary speakers at the Gov 2.0 Expo 2010.

As I shared during my presentation, we are constantly seeing powerful new evidence of the value of cloud computing.

Today, for example, we announced that the University of Arizona chose Microsoft’s cloud platform to facilitate communications and collaboration among the school’s 18,000 faculty and staff.   After initially looking at various supposedly “free” online services, the institution selected Microsoft’s Business Productivity Online Suite to update its aging e-mail system and to provide new calendaring and collaboration tools.  U. of A. officials concluded that, as a research university that conducts $530 million in research annually, it needed the enterprise-level security and privacy protections that BPOS could provide, but which the alternative services could not match.

I also talked about how cloud computing offers governments new opportunities to provide more value from publicly available data. The city of Miami, for instance, is using Microsoft’s Windows Azure cloud platform for Miami311, an online service that allows citizens to map some 4,500 non-emergency issues in progress.  This capability has enabled the city to transform what had essentially been a difficult-to-use list of outstanding service requests into a visual map that shows citizens each and every “ticket” in progress in their own neighborhood and in other parts of the city.

Stories like these are increasingly common.  Across the United States, at the state and local level, Microsoft is provisioning 1.4 million seats of hosted services, giving customers the option of cloud services. 

At Microsoft, we see how open government relies heavily on transparency, particularly around the sharing of information. This means not only making data sets available to citizens, but making the information useful.  If we want to engage citizens, then the cloud can play a role in bringing government information to life in ways that citizens can use in their daily activities.

But with new opportunities come new challenges.  The world needs a safe and open cloud with protection from thieves and hackers that will deliver on the promise of open government.  According to a recent survey conducted by Microsoft, more than 90 percent of Americans already are using some form of cloud computing. But the same survey found that more than 75 percent of senior business leaders believe that safety, security and privacy are top potential risks of cloud computing, and more than 90 percent of the general population is concerned about the security and privacy of personal data.

Given the enormous potential benefits, cloud computing is clearly the next frontier for our industry.  But it will not arrive automatically.   Unlocking the potential of the cloud will require better infrastructure to increase access.  We will need to adapt long-standing relationships between customers and online companies around how information will be used and protected.  And we will need to address new security threats and questions about data sovereignty.

The more open government we all seek depends, in part, on a new conversation within the technology industry, working in partnership with governments around the world.  Modernizing security and privacy laws is critical, and broad agreement is needed on security and privacy tools that will help protect citizens.  We need greater collaboration among governments to foster consistency and predictability.  Microsoft welcomes governments and citizens alike to participate in shaping a responsible approach to the cloud.

***

You can follow VFI on Twitter at http://twitter.com/vfiorg.

Mike Erickson posted the Agenda - Azure Boot Camp SLC on 5/25/2010:

As I stated in my last post, we are holding a day of Windows Azure training and hands-on labs in Salt Lake City on June 11th. Here is the link to register:
Register for Salt Lake City Azure Boot Camp
And here is the agenda:

  • Welcome and Introduction to Windows Azure
  • Lab: "Hello, Cloud"
  • Presentation: Windows Azure Hosting
  • Presentation: Windows Azure Storage
  • Lab: Hosting and Storage
  • Presentation: SQL Azure
  • Lab: SQL Azure
  • Presentation: AppFabric
  • Wrap-up

The day will be a pretty even split of presentations and actual hands-on work. You will be given 2 days of access to the Windows Azure platform allowing you to work through the labs and have a little more time to explore yourself.

Please let me know if you have any questions (mike.erickson@neudesic.com) - I'm hoping to see the event fill up!

My List of 34 Cloud-Related Sessions at the Microsoft Worldwide Partner Conference (WPC) 2010 presents the results of a search for sessions returned by WPC 2010’s Session Catalog for Track = Cloud Services (19) and Key Word = Azure (15).

Scott Bekker claims “With a little more than a month until the Microsoft Worldwide Partner Conference in Washington, D.C., session descriptions are beginning to appear in earnest on the Microsoft Partner Network Portal” in a preface to his 11 Things to Know About... WPC Sessions article of 5/26/2010 for the Redmond Channel Partner Online blog:

With a little more than a month until the Microsoft Worldwide Partner Conference (WPC) in Washington, D.C., session descriptions are beginning to appear in earnest on the Microsoft Partner Network (MPN) Portal. We've scoured the available listings for some highlights.

Steve Ballmer Keynote. The CEO is confirmed for his usual keynote. Even when Ballmer doesn't have news, partners tell us they draw energy from Ballmer's WPC speeches. This year, he probably will have news about what Microsoft being "all in" on the cloud means to partners.

Kevin Turner Keynote. Partners are in the COO's portfolio, so it's always crucial to hear what he has to say. At the least, he's usually entertaining in his unbridled competitiveness.

Allison Watson Keynotes. Worldwide Partner Group CVP Watson will play her usual role, introducing the big keynotes each day and giving her own. Expect a lot of detail about cloud programs and MPN transition specifics.

Cloud Sales. If you're interested in making money with the cloud Microsoft-style, sessions galore await at the WPC. A few that caught our attention: "Best Practices: Selling Cloud-Based Solutions to a Customer" and "Better Together: The Next Generations of Microsoft Online Services + Microsoft Office 2010."

A Wide Lens on the Sky. For those with a more philosophical bent, there's "Cloud as Reality: The Upcoming Decade of the Cloud and the Windows Azure Platform."

Geeking Out in the Cloud. If you want to drill down, there are sessions like this one for ISVs: "Building a Multi-Tenant SaaS Application with Microsoft SQL Azure and Windows Azure AppFabric."

Vertical Clouds. Many sessions are geared toward channeling cloud computing into verticals, such as, "Education Track: Cloud Computing and How This Fits into the Academic Customer Paradigm."

Other Verticals. If a session like "Driving Revenue with Innovative Solutions in Manufacturing Industries" doesn't float your boat, there's probably something equally specific in your area that will.

Market Research. Leverage Microsoft's ample resources in sessions like, "FY11 Small Business Conversations."

"Capture the Windows 7 Opportunity." Judging by the adoption curve, you'll want to execute on anything you learn from this session PDQ.

"Click, Try, Buy! A Partner's Guide to Driving Customer Demand Generation with Microsoft Dynamics CRM!" How could you pass up a session with that many exclamation points?

Rethinking an SMB Competency

One of the new competencies of the Microsoft Partner Network that was supposed to go live in May was the Small Business Competency. Hold the music. Eric Ligman, global partner experience lead for Microsoft, wrote the following on the SMB Community Blog on May 6:

"In the small business segment, we are 'doubling-down' on the [Small Business Specialist Community (SBSC)] designation by making it our lead MPN offering for Partners serving the needs of small business. We are postponing the launch of the Small Business Competency and Small Business Advanced Competency in the upcoming year to further evaluate the need to have a separate offering outside of SBSC in the small business segment."

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

David Linthicum claims “Perhaps it's time these two 800-pound cloud computing gorillas got engaged” and asks Is a Salesforce.com and Google marriage in the works? in this 5/25/2010 post to InfoWorld’s Cloud Computing blog:

InfoWorld's editor Eric Knorr asked the question on everyone's mind: Will Google buy Salesforce next? He makes a good point in his blog post:

“Google I/O last week provided yet another indication of the two companies' converging interests. Just as VMware and Salesforce struck an alliance last month to enable Java applications to run on the Force.com cloud development platform, VMware and Google announced a similar arrangement for the Google App Engine platform.”

Google and Salesforce chase the same market and, thus, could provide strong channels for each other. Though Salesforce's Force.com and Google's App Engine overlap a bit, they can be combined fairly easily, and the hybrid product would be compelling to developers looking to get the best bang for the line of code.

A few more reasons:

  • Google and Salesforce have well-defined points of integration established from an existing agreement. Thus, creating additional bindings for combined offerings shouldn't be much of a learning effort.
  • Salesforce won't have its market dominance forever, which investors and maybe even the executives at Salesforce.com should understand. I suspect they'll want to sell at the top of the market, which is now.
  • What would you say to a free version of Salesforce.com driven by ad revenue? I have a feeling the market would be there to support this, and it would provide a great upsell opportunity into the subscription service.

William Vambenepe (@vambenepe) wrote in his Dear Cloud API, your fault line is showing post of 5/25/2010:

Most APIs are like hospital gowns. They seem to provide good coverage, until you turn around.

I am talking about the dreadful state of fault reporting in remote APIs, from Twitter to Cloud interfaces. They are badly described in the interface documentation and the implementations often don’t even conform to what little is documented.

If, when reading a specification, you get the impression that the “normal” part of the specification is the result of hours of whiteboard debate but that the section that describes the faults is a stream-of-consciousness late-night dump that no-one reviewed, well… you’re most likely right. And this is not only the case for standard-by-committee kind of specifications. Even when the specification is written to match the behavior of an existing implementation, error handling is often incorrectly and incompletely described. In part because developers may not even know what their application returns in all error conditions.

After learning the lessons of SOAP-RPC, programmers are now more willing to acknowledge and understand the on-the-wire messages received and produced. But when it comes to faults, there is still a tendency to throw their hands in the air, write to the application log and then let the stack do whatever it does when an unhandled exception occurs, on-the-wire compliance be damned. If that means sending an HTML error message in response to a request for a JSON payload, so be it. After all, it’s just a fault.

But even if fault messages may only represent 0.001% of the messages your application sends, they still represent 85% of those that the client-side developers will look at.

Client developers can’t even reverse-engineer the fault behavior by hitting a reference implementation (whether official or de-facto) the way they do with regular messages. That’s because while you can generate response messages for any successful request, you don’t know what error conditions to simulate. You can’t tell your Cloud provider “please bring down your user account database for five minutes so I can see what faults you really send me when that happens”. Also, when testing against a live application you may get a different fault behavior depending on the time of day. A late-night coder (or a daytime coder in another time zone) might never see the various faults emitted when the application (like Twitter) is over capacity. And yet these will be quite common at peak time (when the coder is busy with his day job… or sleeping).

All these reasons make it even more important to carefully (and accurately) document fault behavior.

The move to REST makes matters even worse, in part because it removes SOAP faults. There’s nothing magical about SOAP faults, but at least they force you to think about providing an information payload inside your fault message. Many REST APIs replace that with HTTP error codes, often accompanied by a one-line description with a sometimes unclear relationship to the semantics of the application. Either it’s a standard error code, which by definition is very generic, or it’s an application-defined code, at which point it most likely overlaps with one or more standard codes and you don’t know when you should expect one or the other. Either way, there is too much faith put in the HTTP code versus the payload of the error. Let’s be realistic. There are very few things most applications can do automatically in response to a fault. Mainly:

  • Ask the user to re-enter credentials (if it’s an authentication/permission issue)
  • Retry (immediately or after some time)
  • Report a problem and fail

So make sure that your HTTP errors support this simple decision tree. Beyond that point, listing a panoply of application-specific error codes looks like an attempt to look “RESTful” by overdoing it. In most cases, application-specific error codes are too detailed for most automated processing and not detailed enough to help the developer understand and correct the issue. I am not against using them but what matters most is the payload data that comes along.
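
As a rough illustration of that decision tree, here is a hedged C# sketch of a client-side handler that maps HTTP status codes onto the three reactions above; the status-code groupings and the five-second backoff are assumptions chosen for the example, not a policy prescribed by any particular API:

```csharp
using System;
using System.Net;
using System.Threading;

static class FaultPolicy
{
    // Map an HTTP status code onto the only three reactions most clients
    // can automate: re-authenticate, retry, or report and fail.
    public static void Handle(HttpStatusCode status, Action reauthenticate,
                              Action retry, Action<string> report)
    {
        switch (status)
        {
            case HttpStatusCode.Unauthorized:        // 401: credentials rejected
            case HttpStatusCode.Forbidden:           // 403: permission issue
                reauthenticate();
                break;

            case HttpStatusCode.RequestTimeout:      // 408
            case HttpStatusCode.ServiceUnavailable:  // 503 (e.g. over capacity)
            case (HttpStatusCode)429:                // 429 Too Many Requests
                Thread.Sleep(TimeSpan.FromSeconds(5)); // naive backoff; tune per API
                retry();
                break;

            default:
                report("Unrecoverable fault: HTTP " + (int)status);
                break;
        }
    }
}
```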

On that aspect, implementations generally fail in one of two extremes. Some of them tell you nothing. For example, the payload is a string that just repeats what the documentation says about the error code. Others dump the kitchen sink on you and you get a full stack trace of where the error occurred in the server implementation. The former is justified as a security precaution. The latter as a way to help you debug. More likely, they both just reflect laziness.

In the ideal world, you’d get a detailed error payload telling you exactly which of the input parameters the application choked on and why. Not just vague words like “invalid”. Is parameter “foo” invalid for syntactical reasons? Is it invalid because it is inconsistent with another parameter value in the request? Is it invalid because it doesn’t match the state on the server side? Realistically, implementations often can’t spend too many CPU cycles analyzing errors and generating such detailed reports. That’s fine, but then they can include a link to a wiki or knowledge base where more details are available about the error, its common causes and the workarounds.
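
Continuing that thought, here is a hedged sketch of the kind of fault payload shape that would answer those questions; every property name is invented for illustration and the exact fields would vary by API:

```csharp
using System;

// Illustrative shape for a fault payload: enough detail to say which input
// was rejected and why, plus a pointer to longer-form documentation.
// All property names here are invented for the sake of the example.
public class FaultInfo
{
    public int HttpStatus { get; set; }         // the status code actually sent
    public string ApplicationCode { get; set; } // optional app-specific error code
    public string Parameter { get; set; }       // which input the server choked on
    public string Reason { get; set; }          // e.g. "malformed", "inconsistent with bar", "conflicts with server state"
    public string Message { get; set; }         // one human-readable sentence
    public Uri MoreInfo { get; set; }           // link to a wiki / knowledge-base entry
}
```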

Your API should document all messages accurately and comprehensively. Faults are messages too.

<Return to section navigation list>