Friday, May 28, 2010

Windows Azure and Cloud Computing Posts for 5/27/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
• Updated 5/28/2010: Steve Marx explains how he coded and debugged his live Python Azure music demo (Swingify), described in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section, and Microsoft sends its Platform Ready message to Front Runner users.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

ETH Zurich’s Systems Group gave the performance and cost nod to SQL Azure in its End-To-End Performance Study of Cloud Services (SQL Azure, Amazon EC2 and S3, and Google App Engine) posted on 5/26/2010 to the High Scalability blog:

Cloud computing promises a number of advantages for the deployment of data-intensive applications. Most prominently, these include reducing cost with a pay-as-you-go pricing model and (virtually) unlimited throughput by adding servers if the workload increases. At the Systems Group, ETH Zurich, we did an extensive end-to-end performance study to compare the major cloud offerings regarding their ability to fulfill these promises and their implied cost.

The focus of the work is on transaction processing (i.e., read and update workloads) rather than analytic workloads. We used TPC-W, a standardized benchmark simulating a Web shop, as the baseline for our comparison. TPC-W specifies that users are simulated by emulated browsers (EBs) that issue page requests, called web interactions (WIs), against the system. As a major modification to the benchmark, we constantly increase the load from 1 to 9,000 simultaneous users to measure the scalability and cost variance of the system. Figure 1 shows an overview of the different combinations of services we tested in the benchmark.


Figure 1: Systems Under Test

The main results are shown in Figure 2 and Tables 1 and 2, and are surprising in several ways. Most importantly, it seems that all major vendors have adopted different architectures for their cloud services (e.g., master-slave replication, partitioning, distributed control, and various combinations thereof). As a result, the cost and performance of the services vary significantly depending on the workload. A detailed description of the architectures is provided in the paper.

Furthermore, only two architectures, the one implemented on top of Amazon S3 and MS Azure using SQL Azure as the database, were able to scale and sustain our maximum workload of 9,000 EBs, resulting in over 1,200 web interactions per second (WIPS). MySQL installed on EC2 and Amazon RDS were able to sustain a maximum load of approximately 3,500 EBs. MySQL Replication performed similarly to standalone MySQL with EBS, so we left it out of the figure. Figure 2 shows that the WIPS of Amazon’s SimpleDB grow up to about 3,000 EBs and more than 200 WIPS. In fact, SimpleDB was already overloaded at about 1,000 EBs and 128 WIPS in our experiments. At this point, all write requests to hot spots failed. Google App Engine already dropped out at 500 emulated browsers with 49 WIPS, mainly because Google’s transaction model was not built for such high write workloads. [Emphasis added.]

When implementing the benchmark, our policy was always to use the highest offered consistency guarantees that come closest to the TPC-W requirements. Thus, in the case of App Engine, we used the transaction model offered inside an entity group. However, it turned out that this significantly slows down overall performance. We are now re-running the experiment without transaction guarantees and are curious about the new performance results.


Figure 2: Comparison of Architectures [WIPS] …

Table 1 shows the total cost per web interaction in milli-dollars for the alternative approaches under varying load (EBs). Google AE is cheapest for low workloads (below 100 EBs), whereas Azure is cheapest for medium to large workloads (more than 100 EBs). The three MySQL variants (MySQL, MySQL/R, and RDS) have (almost) the same cost as Azure for medium workloads (EB=100 and EB=3000), but they are not able to sustain large workloads.


Table 1: Cost per WI [m$], Vary EB

The success of Google AE for small loads has two causes. First, Google AE is the only variant that has no fixed costs. There is only a negligible monthly fee to store the database. Second, at the time these experiments were carried out, Google gave a quota of six CPU hours per day for free. That is, applications that are below or only slightly above this daily quota are particularly cheap.

Azure and the MySQL variants win for medium and large workloads because all these approaches can amortize their fixed costs at these workloads. SQL Azure has a fixed cost of USD 100 per month for a database of up to 10 GB, independent of the number of requests the database must process. For MySQL and MySQL/R, EC2 instances must be rented in order to keep the database online. Likewise, RDS involves an hourly fixed fee, so the cost per WI decreases as the load increases. It should be noted that network traffic is cheaper with Google than with either Amazon or Microsoft.
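The amortization argument is simple arithmetic. The sketch below uses illustrative numbers (not the paper's) to show why a fixed fee, spread over more web interactions, drives the cost per WI down as load rises, while a purely pay-per-use service stays flat:

```python
def cost_per_wi_millidollars(fixed_cost_per_day, variable_cost_per_wi, wips):
    """Total cost per web interaction in milli-dollars: a fixed daily fee
    amortized over the day's interactions, plus a per-interaction charge."""
    wi_per_day = wips * 24 * 3600
    return fixed_cost_per_day / wi_per_day * 1000 + variable_cost_per_wi

# A service with a fixed fee (roughly $100/month, i.e. about $3.30/day)
# is costlier per interaction at 1 WIPS than at 1,200 WIPS ...
fixed = [cost_per_wi_millidollars(3.3, 0.01, w) for w in (1, 100, 1200)]
# ... while a service with no fixed fee costs the same at any load.
flat = [cost_per_wi_millidollars(0.0, 0.05, w) for w in (1, 100, 1200)]
print(fixed, flat)
```

Between 1 and 1,200 WIPS the fixed component shrinks by three orders of magnitude; that is the amortization pattern the tables show for Azure versus Google AE.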

Table 2 shows the total cost per day for the alternative approaches under varying load (EBs). (A "-" indicates that the variant was not able to sustain the load.) These results confirm the observations made previously: Google wins for small workloads; Azure wins for medium and large workloads. All the other variants are somewhere in between. The three MySQL variants come close to Azure in the range of workloads they can sustain; Azure and the three MySQL variants roughly share the same architectural principles (replication with master-copy architectures). SimpleDB is an outlier in this experiment: with the current pricing scheme, it is an exceptionally expensive service. For a large number of EBs, the high cost of SimpleDB is particularly annoying because users must pay even though SimpleDB drops many requests and is not able to sustain the workload.


Table 2: Total Cost per Day [$], Vary EB

Turning to the S3 cost in Table 2, the total cost grows linearly with the workload. This is exactly the behavior one would expect from a pay-as-you-go model. For S3, the high cost is matched by high throughput, so the high cost of S3 at high workloads is tolerable. This observation is in line with S3's good cost-per-WI metric at high workloads (Table 1). Nevertheless, S3 is indeed more expensive than all the other approaches (except SimpleDB) for most workloads. This phenomenon can be explained by Amazon's pricing model for EBS and S3. For instance, a write operation to S3 is a hundred times more expensive than a write operation to EBS, which is used in the MySQL variant. Amazon can justify this difference because S3 supports concurrent updates with an eventual-consistency policy, whereas EBS supports only a single writer (and reader) at a time.

In addition to the results presented here, the paper also compares overload behavior and presents the different cost factors behind these numbers. If you are interested in these results and additional information about the test setup, the paper will be presented at this year's SIGMOD conference and can also be downloaded here.

Be sure to read the comments for readers’ issues with the research methodology.

Alan Shimel claims “Complex legacy databases are just not built to scale in the cloud, Terracotta enables scalability” in his Databases Are The Bottleneck In The Cloud. Terracotta Is The Open Source Answer article (cum advertisement) of 5/27/2010 for NetworkWorld’s Open Source Fact and Fiction blog:

So you think you can just take that MySQL or Oracle database with all of that data that you have been using for 4 years or more and transfer it up to the cloud? Cloud don't work like that. But Terracotta does. Terracotta provides scale using open source.

In fact most public cloud infrastructure doesn't give you the ability to customize much in the way of database configurations. The databases available are rather rudimentary. On the other hand, keeping your database at your own data center is never going to give you the scalability and redundancy the cloud can offer you.

The answer, at least according to Terracotta, is caching. Over the past few years the company has become the standard for elastic, Hibernate, and distributed caching. This gives your application and data instant scalability, outgrowing your database and even your own hardware limits.

I had a chance to speak with Mike Allen, head of product at Terracotta, about this. Terracotta was not originally an open source project or business when it launched in 2004. Recognizing that open source was a better distribution method, the company open-sourced its product in 2006, and that is when things started to take off. This is a bit unusual, as most companies start open source and then move to a sort of dual-license model.

Terracotta made another out-of-the-ordinary move in 2009, when it "bought" an open source project/product called Ehcache. Ehcache was the brainchild of Greg Luck, who, besides selling the IP to Terracotta, now works there. Ehcache was a de facto standard in Java enterprise environments, and its API was also the standard for Hibernate, which allows for elastic and distributed caching.

Allen says that complex databases are not going to be able to move to public cloud providers anytime soon. The money put into their development to date, and what it would cost to replace them with "NoSQL" solutions like Cassandra, is prohibitive. Therefore, using Terracotta's solutions is the only viable alternative for the foreseeable future.

Terracotta already has 100,000 deployments and more than 250 paying customers. As the swing to the cloud accelerates, the company anticipates that number will rise dramatically. This is one open source company poised to capitalize on the cloud.

Alan’s post appears to fall in the fiction category. SQL Azure’s performance ratings in the preceding post belie Alan’s assertion that “Complex legacy databases are just not built to scale in the cloud.” 

Wayne Walter Berry explains Testing Client Latency to SQL Azure in this 5/27/2010 post to the SQL Azure Team blog:

SQL Azure allows you to create your database in data centers in North Central US, South Central US, North Europe, and Southeast Asia. Depending on your location and your network connectivity, you will see different network latencies between your location and each of the data centers.

Here is a quick way to test your Network latency with SQL Server Management Studio:

1) If you don’t have one already, create a SQL Azure server in one of the data centers via the SQL Azure Portal.

2) Open the firewall on that server for your IP Address.

3) Create a test database on the new server.

4) Connect to the server/database with SQL Server Management Studio 2008 R2. See our previous blog post for instructions.

5) Using a Query Window in SQL Server Management Studio, turn on Client Statistics. You can find the option on the menu bar (Query | Include Client Statistics) or on the toolbar (see image below).

[Screenshot: the Include Client Statistics option]

6) Now execute the query:

SELECT 1

7) The query will make a round trip to the data center and fill in the client statistics.

[Screenshot: the Client Statistics results]

8) Execute the same query several times to get a good average against the data center.

9) If you are just using this server for testing, drop your server, choose another data center and repeat the process with a new query window.

Reading the Results

The first two sections (Query Profile Statistics and Network Statistics) are not interesting and should be very similar to mine in the image above. The third section, Time Statistics, is what we want to study.

Client processing time: The cumulative amount of time that the client spent executing code while the query was executed. Alternatively, the time between the first response packet and the last response packet.

Total execution time: The cumulative amount of time (in milliseconds) that the client spent processing while the query was executed, including the time that the client spent waiting for replies from the server as well as the time spent executing code.

Wait time on server replies: The cumulative amount of time (in milliseconds) that the client spent waiting for the server to reply. Alternatively, the time between when the last request packet left the client and when the very first response packet returned from the server.

You want to find the data center with the lowest average Wait time on server replies; it has the least network latency and will give you the best network performance for your location.
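The measure-several-times-and-average procedure can also be scripted. The sketch below is a hypothetical helper, not from Berry's post: it assumes you supply, for each data center, a callable that executes SELECT 1 over an already open connection (e.g. via an ODBC driver; connection code omitted), and it times and ranks the servers:

```python
import time

def measure_wait(run_query, samples=5):
    """Average round-trip time (ms) for a trivial query: execute the same
    query several times and average, per step 8 above."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        run_query()                      # e.g. cursor.execute("SELECT 1")
        times.append((time.perf_counter() - start) * 1000.0)
    return sum(times) / len(times)

def closest_datacenter(servers, samples=5):
    """Pick the server (name -> run_query callable) with the lowest
    average round-trip time."""
    return min(servers, key=lambda name: measure_wait(servers[name], samples))
```

Note this measures total round-trip time from the client side, a reasonable stand-in for the Wait time on server replies statistic when the query itself is trivial.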

If you are reading this before June 7, 2010, you have a chance to attend Henry Zhang's TechEd talk, "COS13-INT: Database Performance in a Multi-tenant Environment." It will cover this topic and more.

Brian Swan shows you How to Get the SQL Azure Session Tracing ID using PHP in this 5/27/2010 post:

The SQL Azure team recently posted a blog about SQL Azure and the Session Tracing ID. The short story is that the Session Tracing ID is a new property (a unique GUID) for connections to SQL Azure. The nice thing about it is that if you hit a SQL Azure error, you can contact Azure Developer Support, and they can use the ID to look up the error and help figure out what caused it. (If you are just getting started with PHP and SQL Azure, see this post: Getting Started with PHP and SQL Azure.)

Getting the Session Tracing ID is easy with PHP: just execute the query SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO()). Here’s the PHP code for doing this:

// Connection details for your SQL Azure server (placeholders).
$server = "tcp:YourServerID.database.windows.net,1433";
$user = "YourUserName@YourServerID";
$pass = "YourPassword";
$database = "DatabaseName";
$connectionoptions = array("Database" => $database, "UID" => $user, "PWD" => $pass);
$conn = sqlsrv_connect($server, $connectionoptions);

if($conn === false)
{
    die(print_r(sqlsrv_errors(), true));
}

// CONTEXT_INFO() holds the session tracing GUID; convert it to a readable string.
$sql = "SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())";
$stmt = sqlsrv_query($conn, $sql);
$row = sqlsrv_fetch_array($stmt);
print_r($row);

Of course, the code above assumes you have the SQL Server Driver for PHP installed. And if you are watching closely, you’ll notice that I didn’t have to include "MultipleActiveResultSets" => false in my connection options array, because SQL Azure now supports Multiple Active Result Sets (MARS).

The MSDN Library appears to have updated its Transact-SQL Reference (SQL Azure Database) topic recently:

Microsoft SQL Azure Database supports Transact-SQL grammar that you can use to query a database and to insert, update, and delete data in tables in a database. The topics in this section describe the Transact-SQL grammar support provided by SQL Azure. 

Important: The Transact-SQL Reference for SQL Azure is a subset of Transact-SQL for SQL Server.

This section provides a series of foundational topics for understanding and using the Transact-SQL grammar with SQL Azure. To view details about data types, functions, operators, statements, and more, you can browse through the table of contents in these sections or search for topics in the index.

Mafian911 gets the answers to his OData Service and NTLM problems in this thread on the Restlet Discuss forum:

Can anyone tell me how to access an OData service using NTLM security? I have crawled all over the web trying to find out how to do this, and the Tutorial site mentions something about a connector and throws out some source code, but I have no idea what to do with it. …

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Yavor Georgiev reports Updated WCF samples for Azure on 5/27/2010:

Our samples over at http://code.msdn.com/wcfazure had gotten pretty stale, so I just put out an update that gets everything working on Visual Studio 2010, Silverlight 4, and the latest Azure tools.

Yavor is a Program Manager for WCF.

Vittorio Bertocci (a.k.a. Vibro) explains how to put Your FedAuth Cookies on a Diet: IsSessionMode=true in this 5/26/2010 post:

More occult goodness for your programming pleasure! Session Mode is a great WIF feature that is not as widely known as it should be.

Sometimes you will be in situations in which it is advisable to limit the size of the cookie you send around. WIF already takes steps to be parsimonious with cookie size. By default, the cookie will contain just the layout defined by the SessionSecurityToken: more or less the minimal information required to reconstruct the IClaimsPrincipal across requests (as opposed to a verbatim dump of the entire incoming bootstrap token, with its logorrheic XML syntax, key references & friends).

Let’s see if we can visualize the quantities at play here. If you take the FedAuth cookie generated from the default token issued from the default STS template implementation in the WIF SDK, the one with just name & role claims hardcoded in a SAML1.0 assertion, you get the following:

FedAuth
77u/PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz48U2VjdXJpdHlD b250ZXh0VG9rZW4gcDE6SWQ9Il83NTNmMzFiYi01N2QxLTQ2YzAtOWY5ZS02MTNj YTBhY2VmYWQtQkQzN0YzRTdGQUJCMzg5NTYzMEExNDUzQkEyQTlCOEUiIHhtbG 5zOnAxPSJodHRwOi8vZG9jcy5vYXNpcy1vcGVuLm9yZy93c3MvMjAwNC8wMS9vYX Npcy0yMDA0MDEtd3NzLXdzc2VjdXJpdHktdXRpbGl0eS0xLjAueHNkIiB4bWxucz0iaH R0cDovL2RvY3Mub2FzaXMtb3Blbi5vcmcvd3Mtc3gvd3Mtc2VjdXJlY29udmVyc2F0aW 9uLzIwMDUxMiIPElkZW50aWZpZXIXJuOnV1aWQ6OWQ2MzE5YmYtZTg3MC00Yz Q4LWIxNmYtNWU1MjhhYzVmMjU5PC9JZGVudGlmaWVyPjxJbnN0YW5jZT51cm46dXVpZD
o3NjdmNjBmZC1jYzZmLTQ2ZWEtYjI3OC0zZGQ2MmIxYTg5NjE8L0luc3RhbmNlPjxDb2
9raWUgeG1sbnM9Imh0dHA6Ly9zY2hlbWFzLm1pY3Jvc29mdC5jb20vd3MvMjAwNi8wNS
VFBQUFOQ01uZDhCRmRFUmpIb0F3RS9DbCtzQkFBQUFxYzlGQWs2YlBVRzVHY0JP bUJKTWJRQUFBQUFDQUFBQUFBQURaZ0FBd0FBQUFCQUFBQUROZndoSGJsZDJrbU g3UWcvUENPQlFBQUFBQUFTQUFBQ2dBQUFBRUFBQUFJQVF2ZCt1bGNnOFIyRXZS MytjUllHWUFnQUExNTYvR0ovLzNNNVd0Y1RNT243N0pDYlFpTHkzdkRnTjVTbzBCd XIzaVlTaUUxUnFSWjJHWVJaRGQ1UWc1YktlS0JGTjhDZi82VXhHbG1SY2Z5eU5oejlNM lY3WUFTV1lvWDl6NTZ0cnpORnVJbk9kWUJaYXhaZFF4SGs5MHZEakd1cWN1ZEdCU2 NjSGJFbmNuTHVDT01HTWJ3TEhOdzhJbEwwcDM0TlYvRS9CbGRUWWZDUkViVWd2 cU5xS3NJV2locnZHbzZYMzA1ajBMWVdqSDY0bnI4bENiU1ZiTnJEVHhJNGsvTGhOan ovZExNN3c3YkkyNGdTWHhEMXFyaEpsZDZIRVFtWTkybVJUY0Z2eGFPamlpbm1lSEN mWDJXbFB1anZEMldvcW5pb0tNZ0c4K0laL0REMlhQVjBsRU5USjlwK0R4RXdwU3htW jJCR1U1eGs3MlNZYjIxc2ovQXdNNmZGc1dacWEyUlJYK2FEZkozVzN6WUJlV1N5U3dv eSt1MjFNRUxiaDVJaTFRamJTVUxaa3IyTG1OenA4ZkpzMC94ZWNReHA0c1ovbnpsT2x CVng1ZVlHMEV5MDBVMHZDQ0poVDBHeEU3Y3JtbXFiTm00UDg1di8rSWkzNGQ2Qjh TVWkwTjFrL001aFpiRGFaejg0a2wxcXF5SzRLcmQ4eXdoT1ZtZGFsUnNpWUFUSzdTdkd xRFNxdlBYRjN2cGJ6d0d4Y0NLeGFReTdUY2hkeFNNakNEdUdLcmExNGY1U00vZUszcH JCTDlxNSsxaXVRcXpXK1JQWlIvVEMyVTdjdjRNTGhwaEhsT1FFVVlOTzYyYWljQXppQ3B qODRrOThHUW5EYWJsdlp2Rm1aaFg1TE5WUkt3QjNpZUxreGFsaiswVmJSejZoQnpTM 2JxQTB3ZHNHakpLS3Q4VjQzNXZuN2RjaVVNWk9mVlpTcWxOd1N2WnBzdHZBSTVVe XUvbVRKWit0bnM5M0ZBaXVxRHBJOXdOV3MyeE5LNXhjUDNyNms5TENEL1lHdkdhb UdDWWVPWXpjcnA1ei82b2g2K2ZSRThBSXVEOWNURHdsV2VYUVQyM3pZVU14aEFN OGtzQUttU1kyQmVmaGJM  FedAuth1
U1ZBVnJFbTJ5SnhmaGtLQlQzbnJTM0pYaXNMMUx5SmFHWUxLQXlXejEwMGRoUUF BQUQ4a2l4K3Q4V0EyaVFZVkVDeGdPdk85VUVxaXc9PTwvQ29va2llPjwvU2VjdXJpdHlD b250ZXh0VG9rZW4

Slightly more than 2K. Not the nimblest thing you’ve ever seen, but not a cetacean cookie either. On the other hand, we have just two claims here; what happens if we have more than those, or big claims such as base64’d images? Moreover, sometimes we do need to include the bootstrap token in the session, for example when we call a frontend that needs to invoke a backend acting as the caller.

Let’s pick this last case: keeping the same token we used above, let’s save it in the session (by adding saveBootstrapTokens="true" to the microsoft.identityModel/service element on the RP) and see how big a cookie we get:

[Screenshot: the much larger FedAuth cookie produced with the bootstrap token saved]

Vibro continues with more examples and concludes with a much shorter cookie when IsSessionMode = True.

Michelle Leroux Bustamante describes her WCF and the Access Control Service article for DevProConnections magazine’s June 2010 issue as providing “Custom components and code for securing REST-based WCF services.”

Unfortunately, the publisher outsources the online version to ZMags, who overuses Flash and makes the content difficult to navigate and read.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Steve Marx explains how he coded (and debugged) his live Windows Azure Swingify demo app (see below) with Python in this 00:33:08 Cloud Cover Episode 13 - Running Python - the Censored Edition Channel9 video of 5/28/2010:

Join Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow.
In this special censored episode:  

  • We show you how to run Python in the cloud via a swingin' MP3 maker
  • We talk about how Steve debugged the Python application
  • Ryan and Steve join a boy band

Show Links:
SQL Azure Session ID Tracing
Windows Azure Guidance Part 2 - AuthN, AuthZ
Running MongoDb in Windows Azure
We Want Your Building Block Apps

Steve Marx created his live Windows Azure Swingify demo app on 5/27/2010:

Browse for and upload an *.MP3 file, click the Swingify! button and make the music swing:

[Screenshot: the Swingify upload page]

According to Steve:

This application is powered by Tristan's "The Swinger" application, which is built on the wonderful music APIs of The Echo Nest.

Steve Marx then put the whole thing into a Windows Azure application, which is what you see here.

Paul Lamere posted The Swinger and a collection of Swingified tracks to his Music Machine blog on 5/21/2010:

One of my favorite hacks at last weekend’s Music Hack Day is Tristan’s Swinger. The Swinger is a bit of Python code that takes any song and makes it swing. It does this by taking each beat and time-stretching the first half while time-shrinking the second half. It has quite a magical effect.

Swinger uses the new Dirac time-stretching capabilities of Echo Nest remix. Source code is available in the samples directory of remix.
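For illustration, the beat transform is easy to sketch in plain Python. The code below is not Tristan's Swinger: it substitutes naive linear-interpolation resampling for remix's Dirac time-stretching and works on bare sample lists rather than audio, but the swing idea (stretch the first half of each beat, shrink the second) is the same:

```python
def resample(segment, new_len):
    """Naive linear-interpolation time-stretch of a list of samples."""
    if new_len <= 0 or not segment:
        return []
    out = []
    for i in range(new_len):
        pos = i * (len(segment) - 1) / max(new_len - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(segment) - 1)
        frac = pos - lo
        out.append(segment[lo] * (1 - frac) + segment[hi] * frac)
    return out

def swingify_beat(beat, swing=2/3):
    """Stretch the first half of a beat to `swing` of its duration and
    shrink the second half to fill the rest -- the triplet feel."""
    half = len(beat) // 2
    first = resample(beat[:half], round(len(beat) * swing))
    return first + resample(beat[half:], len(beat) - len(first))

def swingify(samples, beat_boundaries):
    """Apply the transform beat by beat; in the real Swinger the beat
    boundaries come from Echo Nest analysis."""
    out = []
    for start, end in beat_boundaries:
        out.extend(swingify_beat(samples[start:end]))
    return out
```

Each beat keeps its original length, so the song's tempo is unchanged; only the internal timing of each beat is skewed.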

I agree that Jefferson Airplane’s Swingified White Rabbit is hypnotic, but the lead doesn’t really sound like the Grace Slick I remember from the Fillmore Auditorium days.

Steve Marx’s Making Songs Swing with Windows Azure, Python, and the Echo Nest API post of 5/27/2010 begins:

I’ve put together a sample application at http://swingify.cloudapp.net that lets you upload a song as an MP3 and produces a “swing time” version of it. It’s easier to explain by example, so here’s the Tetris theme song as converted by Swingify.

Background

The app makes use of the Echo Nest API and a sample developed by Tristan Jehan that converts any straight-time song to swing time by extending the first half of each beat and compressing the second half. I first saw the story over on the Music Machinery blog and later in the week on Engadget.

I immediately wanted to try this with some songs of my own, and I thought others would want to do the same, so I thought I’d create a Windows Azure application to do this in the cloud.

How it Works

We covered this application on the latest episode of the Cloud Cover show on Channel 9 (to go live tomorrow morning – watch the teaser now). In short, the application consists of an ASP.NET MVC web role and a worker role that is mostly a thin wrapper around a Python script.

The ASP.NET MVC web role accepts an MP3 upload, stores the file in blob storage, and enqueues the name of the blob:

[HttpPost]
public ActionResult Create()
{
    var guid = Guid.NewGuid().ToString();
    var file = Request.Files[0];
    var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var blob = account.CreateCloudBlobClient().GetContainerReference("incoming").GetBlobReference(guid);
    blob.UploadFromStream(file.InputStream);
    account.CreateCloudQueueClient().GetQueueReference("incoming").AddMessage(new CloudQueueMessage(guid));
    return RedirectToAction("Result", new { id = guid });
}

The worker role mounts a Windows Azure drive in OnStart(). Here I used the same tools and initialization code as I developed for my blog post “Serving Your Website From a Windows Azure Drive.” In OnStart():

var cache = RoleEnvironment.GetLocalResource("DriveCache");
CloudDrive.InitializeCache(cache.RootPath.TrimEnd('\\'), cache.MaximumSizeInMegabytes);

drive = CloudStorageAccount.FromConfigurationSetting("DataConnectionString")
    .CreateCloudDrive(RoleEnvironment.GetConfigurationSettingValue("DriveSnapshotUrl"));
drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);

Then there’s a simple loop in Run():

while (true)
{
    var msg = q.GetMessage(TimeSpan.FromMinutes(5));
    if (msg != null)
    {
        SwingifyBlob(msg.AsString);
        q.DeleteMessage(msg);
    }
    else
    {
        Thread.Sleep(TimeSpan.FromSeconds(5));
    }
}
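The Run() loop above is the standard queue-worker pattern: GetMessage hides the message for the visibility timeout (five minutes here), DeleteMessage removes it only after successful processing, and a message whose worker crashes simply becomes visible again for another worker to retry. A toy in-memory model of those semantics (illustrative only, not the Azure SDK):

```python
import time

class ToyQueue:
    """Mimics Azure queue visibility-timeout semantics in memory."""
    def __init__(self):
        self._msgs = {}       # message id -> [visible_at, body]
        self._next_id = 0

    def add(self, body):
        self._msgs[self._next_id] = [0.0, body]
        self._next_id += 1

    def get(self, visibility_timeout):
        """Return (id, body) of a visible message and hide it, or None."""
        now = time.monotonic()
        for mid, rec in self._msgs.items():
            if rec[0] <= now:
                rec[0] = now + visibility_timeout   # hidden until timeout
                return mid, rec[1]
        return None

    def delete(self, mid):
        """Remove a processed message for good."""
        self._msgs.pop(mid, None)
```

A worker that gets a message but dies before delete() leaves it to reappear after the timeout, which is why the five-minute window in Steve's loop should comfortably exceed the longest expected Swingify run.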

Steve continues with code for the implementation of SwingifyBlob(), which calls out to python.exe on the mounted Windows Azure drive and suggests running “the Portable Python project, which seems like an easier (and better supported) way to make sure your Python distribution can actually run in Windows Azure.”

Tony Bailey suggests Intuit Developers – Learn about Windows Azure in this 5/27/2010 post to msdev.com:

This series of Web seminars is designed to quickly immerse you in the world of the Windows Azure Platform. You’ll learn what Azure is all about, including the fundamental concepts of cloud computing and Windows Azure. You’ll learn why you should target Windows Azure, and see the tangible business benefits you can gain by deploying your apps to the cloud.

Tony is a Senior Marketing Manager for Microsoft.

Microsoft Case Studies reported Real World Windows Azure: Interview with Markus Mikola, Partner at Sopima on 5/27/2010:

Software Firm Launches Business Contract Service with Lean Staff, Low Investment

Sopima, creator of an online solution for managing business contract life cycles, needed to minimize its capital investment to deliver a viable offering. It also wanted to offer an affordable monthly subscription service to gain new customers quickly. Using the Windows Azure™ platform, the company hosts its application in Microsoft® data centers, providing customers with fast response times and high scalability.

With the solution, Sopima has removed barriers that would have otherwise prohibited its entry into competitive markets. The company limited its investment in infrastructure and can focus on development rather than hardware administration. Sopima estimates that, without the Windows Azure platform, it would have had to hire additional full-time staff members at an annual cost of approximately U.S.$500,000. Its status as a Microsoft Partner will lend Sopima credibility in a competitive marketplace.

Situation

Sopima, a software development firm based in Helsinki, Finland, set out to simplify and streamline the processes of creating, managing, and storing business contracts for companies of all sizes.

Many companies manage hundreds to thousands of contracts each year for business arrangements with customers, clients, suppliers, and other external partners. With many stakeholders involved, including administrative assistants, sales associates, account managers, engineers, and legal representatives, collaboration through the contract creation process can be time-consuming and inefficient. The process requires close collaboration among individuals and departments, as well as with external business partners. Antti Makkonen, Research and Development Lead at Sopima, says, “Getting a contract signed can mean months and months of ‘back and forth’ between companies, often involving complex negotiations among legal teams. …

Eric Nelson answers “In the main, yes” to his Q&A: Can you develop for the Windows Azure Platform using Windows XP? post of 5/27/2010:

Longer answer:

The question is sparked by the requirements as stated on the Windows Azure SDK download page.

Namely:

Supported Operating Systems: Windows 7; Windows Vista; Windows Vista 64-bit Editions Service Pack 1; Windows Vista Business; Windows Vista Business 64-bit edition; Windows Vista Enterprise; Windows Vista Enterprise 64-bit edition; Windows Vista Home Premium; Windows Vista Home Premium 64-bit edition; Windows Vista Service Pack 1; Windows Vista Service Pack 2; Windows Vista Ultimate; Windows Vista Ultimate 64-bit edition

Notice there is no mention of Windows XP. However, things are not quite that simple.

The Windows Azure Platform consists of three released technologies:

  • Windows Azure
  • SQL Azure
  • Windows Azure platform AppFabric

The Windows Azure SDK is only for one of the three technologies, Windows Azure. What about SQL Azure and AppFabric? Well, it turns out that you can develop for both of these technologies just fine on Windows XP:

SQL Azure development is really just SQL Server development with a few gotchas – and for local development you can simply use SQL Server 2008 R2 Express (other versions will also work).

AppFabric likewise has no local simulation environment, and its SDK will install fine on Windows XP (SDK download).

Actually, it is also possible to do Windows Azure development on Windows XP if you are willing to always work directly against the real Azure cloud running in Microsoft datacentres. In practice, however, this would be painful and time-consuming, which is why the Windows Azure SDK installs a local simulation environment. Therefore, if you want to develop for Windows Azure, I recommend you either upgrade from Windows XP to Windows 7 or use a virtual machine running Windows 7.

If this is a temporary requirement, then you could consider building a virtual machine using the Windows 7 Enterprise 90 day eval. Or you could download a pre-configured VHD – but I can’t quite find the link for a Windows 7 VHD. Pointers welcomed. Thanks.

“In the main …” reminds me of “In the long run …,” about which Lord Keynes reminds us: “In the long run, we are all dead.”

The Microsoft Partner Network sent the following message by e-mail to Front Runner users on 5/27/2010:

 Front Runner

 

We are making it easier for you to get your software products compatible with the newest Microsoft technologies. On June 1st, Green Light is changing to Microsoft Platform Ready.

Microsoft Platform Ready, built on Azure, simplifies access to the information you need to develop, test, and market your Windows-based applications to millions of potential customers. [Emphasis added.]

In the coming months you will also find free integrated testing tools, starting with Windows Server 2008 R2, making app certification and Microsoft Partner competency attainment easier to manage.

Don't worry - All of your existing information will be migrated to the new site so you will not have to create new logins or re-profile your applications.

Simply login at www.microsoftplatformready.com on June 1st with your Live ID to access all of your benefits and experience the ease of Microsoft Platform Ready.

Thank you for your continued support of the Microsoft Platform. If you have any questions or comments please contact mprsupport@microsoft.com.

Microsoft’s Green Light program is the equivalent of Front Runner for non-US partners. Clicking the Green Light link leads to the Front Runner landing page. It appears to me that the Platform Ready team got their links mixed up.

Microsoft sent the same message on 5/28/2010 with Front Runner substituted for Green Light and different graphics.

You can learn more about Microsoft’s new partner competency offerings in The Value of Earning a Microsoft® Competency white paper of May 2010 and at the Worldwide Partner Conference (WPC) 2010 on 7/11 to 7/15/2010.

Reuben Krippner added VIDEO: PRM PORTAL Step-by-Step Installation Guide for Windows Azure documentation on 5/26/2010 to his Partner Relationship Management (PRM) Accelerator for Microsoft Dynamics CRM project on Codeplex.

Step-by-Step Installation Video for Windows Azure. This video provides full guidance on how to deploy the Partner portal to a Windows Azure portal. For deployment on your own servers you can follow all the steps up to editing the web.config file in the web portal project. A separate video will be posted for setting up the portal on your own IIS Windows Server. Also note that this solution will work with Microsoft Dynamics CRM Online, On-Premise and Partner-Hosted!

For more about Reuben’s project, see Windows Azure and Cloud Computing Posts for 5/26/2010+.

Alex Williams’ Open API Madness: The Party Has Just Begun for Cloud Developers post of 5/26/2010 to the ReadWriteCloud blog begins:

It's like an API festival here at Gluecon. I tweeted that this afternoon. But it's not just Gluecon - APIs are one of the hottest topics in discussions about cloud computing.

In his presentation today at Gluecon, John Musser of Programmable Web illustrated how hot APIs have become and how they've matured.

Perhaps most illustrative is his "API Billionaire's Club." Members of the club include Google and Facebook with 5 billion API calls per day. Twitter has 3 billion per day. eBay has 8 billion per month. NPR gets 1.1 billion calls per month for its API-delivered stories. Salesforce.com gets 50% of its traffic through its API.

According to Musser, it took eight years to get to 1,000 APIs but just 18 months to get to 2,000. This year, the number of APIs is double what it was last year on a month-per-month basis.

Internet/platform as a service (PaaS) APIs are now number one. That's illustrative of the increased usage of services like Amazon S3 and all its competitors. Maps are the number three API, dropping from the number one spot last year. Social APIs are number two.

REST APIs are far surpassing SOAP.
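One reason for that trend is easy to see when the two styles sit side by side. A hedged illustration (the endpoint, resource, and operation names below are invented for the example): the same query expressed as a REST URL versus a minimal SOAP envelope:

```python
from urllib.parse import urlencode

# REST: the request is just a URL -- easy to construct, cache, and debug.
rest_request = "http://api.example.com/photos?" + urlencode(
    {"tag": "cloud", "per_page": 10})

# SOAP: the equivalent call wraps the parameters in an XML envelope
# that must be POSTed to a single service endpoint.
soap_request = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPhotos xmlns="http://api.example.com/">
      <tag>cloud</tag>
      <perPage>10</perPage>
    </GetPhotos>
  </soap:Body>
</soap:Envelope>"""
```

The envelope's extra ceremony (and the tooling it demands) is a large part of why Programmable Web's numbers skew so heavily toward REST.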


There's a real energy here at Gluecon around the discussions about APIs. The room was packed for the presentations on the topic.

We'll pour more into the topic in later posts.

<Return to section navigation list> 

Windows Azure Infrastructure

Chris Czarnecki asks Does Working Effectively with Azure Require New Skills? in this 5/27/2010 post to the Learning Tree International blog:

Recently I wrote a post that discussed developing applications for Microsoft’s Azure. The article was stimulated by an interview with Steve Ballmer. My views drew a strong, healthy response from Microsoft developers insisting that ASP.NET applications can be moved directly to the Azure cloud.

Related to my post, an article on Information Week quoted Bob Muglia, president of Microsoft’s server and tools division, as saying “There are few people in the world who can write cloud applications. Our job is to enable everybody to be able to do it.”

I think the quote raises a number of interesting points. Companies will move existing applications to the cloud, or develop new cloud-based applications, for a variety of reasons. Cost savings in infrastructure are a primary motivation of course. But scalability, reliability, rich media, and reduced administration are also potentially equally or more significant. It is when the aspects of scalability, reliability, and rich media are considered that developers require new skills to maximize the benefits the cloud provides. With Azure, Microsoft have provided a rich environment for developing cloud-based applications, much of which, they have announced, will find its way into their next generation of Windows Server and System Center management software. This raises the exciting prospect of running cloud applications on private networks or on the public Azure cloud, together with hybrids integrating the two.

Developers for Azure can utilize their existing .NET development skills. However, the Azure platform features, application architecture, and libraries needed to build true cloud applications are the areas in which developers require new skills. Microsoft have provided the tools; developers now need to know what these are and how to apply them. That’s where the training need for Azure arises, and effective training courses can provide a kick start to developers wanting to exploit the cloud. These new skills, combined with developers’ existing knowledge base, open up a wide range of new business and technical opportunities.

Brenda Michelson’s @ Forrester IT Forum: James Staten, How much of your future is in the Cloud? covers James’ Keynote Speech: How Much Of Your Future Will Be In The Cloud? Strategies For Embracing Cloud Computing Services:

Cloud computing has shifted from being a question of “if” to one of “when” and “where” in your IT future and portfolio. Is it best to stick with SaaS, or should you be deploying new services directly to the public clouds like Amazon EC2 or Windows Azure? What applications are candidates for the cloud, and which should remain in-house? And for how long? This session will explore the enterprise uses of cloud computing thus far and synthesize the thinking across Forrester on this issue to present you with a road map and a strategy for embracing the cloud that benefits both your business and the IT function. Cloud can be a catalyst for the IT-to-BT transition so long as you harness it effectively.

Session attendees can expect to learn:

  • How to tell a true cloud solution and its relative maturity from simple cloud washing.
  • The truth behind the economics of cloud computing.
  • The best places to start and strategies to build your own path to cloud efficiency.

Prior to the conference, James wrote a positioning/discussion piece, which is published on ZDNet. From what I saw on Twitter, the most controversial idea was the “Pay per use or metered consumption” requirement to be considered cloud computing.

James opens with “Cloud computing isn’t an if, it’s a when and a how”.  He says it won’t change your entire world (Nick Carr), nor is it complete hype (Ellison).  But, you will use it, situationally.

Definition: A standardized IT capability (services, software or infrastructure) delivered in a pay-per-use and self-service way.  On the pay-per-use, you should be able to go to zero.  That’s the economic power, according to Staten.  On self-service, the request is immediately processed, in an automated manner.  [This eliminates all the “managed service” cloud players].
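Staten’s “able to go to zero” point is easy to make concrete. A toy model (the rates below are invented for illustration, not any provider’s actual prices) comparing metered consumption with a fixed monthly commitment:

```python
def metered_cost(instance_hours, rate_per_hour=0.12):
    """Pay-per-use: the bill tracks consumption, so zero usage costs zero."""
    return instance_hours * rate_per_hour

def fixed_cost(servers, monthly_rate=700.0):
    """Owned or managed capacity: the bill is flat whether or not it's used."""
    return servers * monthly_rate

# An idle month costs nothing under metering, but the full rate otherwise.
assert metered_cost(0) == 0.0
assert fixed_cost(2) == 1400.0
```

This is why Staten excludes "managed service" offerings with fixed minimums: if the bill cannot reach zero when usage does, the economic argument for calling it cloud computing falls away.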

Brenda continues liveblogging James’ keynote.

The Microsoft Partner Network opened their new Windows Azure Platform Partner Hub on 5/13/2010 and added a link to the OakLeaf blog on 5/17/2010:


OakLeaf's Windows Azure-pedia

Monday, May 17, 2010

The OakLeaf crew is helping the world stay on top of the Windows Azure Platform.  Check out this evolving blog post. http://oakleafblog.blogspot.com/2010/05/windows-azure-and-cloud-computing-posts_17.html.

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie asserts IT organizations that fail to provide guidance for and governance over public cloud computing usage will be unhappy with the results… in her Why IT Needs to Take Control of Public Cloud Computing post of 5/27/2010:

While it is highly unlikely that business users will “control their own destiny” by provisioning servers in cloud computing environments, that doesn’t mean they won’t be involved. In fact, it’s likely that IaaS (Infrastructure as a Service) cloud computing environments will be leveraged by business users to avoid the hassles they perceive (and oft times actually do) exist in their quest to deploy a given business application. It’s just that they won’t themselves be pushing the buttons.

Many experts have expounded upon the ways in which cloud computing is forcing a shift within IT and in the way assets are provisioned, acquired, and managed. One of those shifts is likely to also occur “outside” of IT, with external IT-focused services such as systems integrators like CSC and HP Enterprise Services (formerly EDS).

ROBBING PETER to PAY PAUL

The use of SaaS by business users is a foregone conclusion. It makes sense. Unfortunately, SaaS is generally available only for highly commoditized business functions. That means more niche applications are unlikely to be offered as SaaS because of the diseconomy-of-scale factors involved with such a small market. But that does not mean that businesses aren’t going to acquire and utilize those applications. On the contrary, it is just this market that is ripe for Paul the SI to leverage.

For example, assume a business unit needs application X, but application X is very specific to its industry and not offered as SaaS by any provider today – and is unlikely to be offered as such in the future due to its limited addressable market. But IT is overburdened with other projects and may not have the time – or resources – available until some “later” point in time. A savvy SI at this point would recognize the potential of marrying IaaS with this niche-market software and essentially turning it into a SaaS-style, IaaS-deployed solution. An even savvier SI will have already partnered with a select group of cloud computing providers to enable this type of scenario to happen even more seamlessly. There are quite a few systems integrators already invested in cloud computing, so the ones that aren’t will be at a distinct disadvantage if they lack preferred partners and can’t provide potential customers with details that will assuage any residual concerns regarding security and transparency.

Similarly, a savvy IT org will recognize the same potential and consider whether or not it can support the business initiative itself or get behind the use of public cloud computing as an option under the right circumstances. IT needs to understand what types of applications can and cannot be deployed in a public cloud computing environment and provide that guidance to business units. An even savvier IT org might refuse to locally deploy applications that are well-suited to a public IaaS deployment and reserve IT resources for applications that simply aren’t suited to public deployment. IT needs to provide governance and guidance for its business customers. IT needs to be, as Jay Fry put it so well in a recent post on this subject, “a trusted advisor.”

“So what things would IT need to be able to do in order to help business users make the best IT sourcing choices, regardless of what the final answer is? They’d need to do less of what they’ve typically done – manually making sure the low-level components are working the way that are supposed to – and become more of a trusted adviser to the business.”

[From] Thinking about IT as a supply chain creates new management challenges [by] Jay Fry (formerly VP of Marketing for Cassatt, now with CA)

IT needs to be aware that it may be advantageous to use IaaS as a deployment environment for applications acquired by business units when it’s not possible or necessary to deploy locally. Because if Peter the CIO doesn’t, Paul the SI will.  

<Return to section navigation list> 

Cloud Computing Events

tbTechNet announced Windows Azure Virtual Boot Camp V June 1st – June 7th 2010 in a 5/27/2010 post:


Announcing… Virtual Boot Camp V !

Learn Windows Azure at your own pace, in your own time and without travel headaches.

A Windows Azure one-week pass is provided so you can put Windows Azure and SQL Azure through their paces.

NO credit card required.

You can start the Boot Camp any time between June 1st and June 7th and work at your own pace.

The Windows Azure virtual boot camp pass is valid from 5am USA PST June 1st through 6pm USA PST June 7th.

Follow these steps:

  1. Request a Windows Azure One Week Pass here

  2. Sign in to the Windows Azure Developer Portal and use the pass to access your Windows Azure account.

  3. Please note: your Windows Azure application will automatically de-provision at the end of the virtual boot camp on June 7th

    1. Since you will have a local copy of your application, you will be able to publish your application package on to Windows Azure after the virtual boot camp using a Developer Accelerator Offer to test and dev on Windows Azure. See the Azure Offers here

  4. For USA developers, no-cost phone and email support is available during and after the Windows Azure virtual boot camp through the Front Runner for Windows Azure program

  5. For non-USA developers - sign up for Green Light at https://www.isvappcompat.com/global

  6. Startups - get low cost development tools and production licenses with BizSpark - join here

  7. Get the Tools

    1. To get started on the boot camp, download and install these tools:

    2. Download Microsoft Web Platform Installer

    3. Download Windows Azure Tools for Microsoft Visual Studio

  8. Learn about Azure

    1. Learn how to put up a simple application on to Windows Azure

    2. Learn about PHP on Windows Azure

    3. Take the Windows Azure virtual lab

    4. Read about Developing a Windows Azure Application

    5. View the series of Web seminars designed to quickly immerse you in the world of the Windows Azure Platform

    6. Why Windows Azure - learn why Azure is a great cloud computing platform with these fun videos

  9. Dig Deeper into Windows Azure

    1. Download the Windows Azure Platform Training Kit

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Paul Krill claims “Business-oriented enhancements lead to increased developer interest, though cloud caution still rules” in the preface to his Google App Engine gains developer interest in battle with EC2, Azure NetworkWorld article of 5/27/2010:

While the Google App Engine cloud platform has trailed Amazon and Microsoft clouds in usage, it is nonetheless gaining traction among developers. That interest was bolstered by Google's recent extension to its cloud, dubbed Google App Engine for Business, which is intended to make the cloud more palatable to enterprises by adding components such as service-level agreements and a business-scale management console.

Built for hosting Web applications, App Engine services more than 500,000 daily page views, but App Engine's 8.2 percent usage rate, based on a Forrester Research survey of developers in late 2009, trails far behind Amazon.com's Elastic Compute Cloud (EC2), which has nearly a 41 percent share. Microsoft's newer Windows Azure cloud service edges out App Engine, taking a 10.2 percent share. Forrester surveyed 1,200 developers, but only about 50 of them were actually deploying to the cloud. [Emphasis added.]

Developer Mike Koss, launch director at Startpad.org, which hosts software development companies, is one of those using App Engine. "[The service is] for developers who want to write pure JavaScript programs and not have to manage their own cloud; they can write their app completely in JavaScript," Koss says. He adds that he likes cloud capabilities for data backup and availability.

Restraints on App Engine separate it in a good way from Amazon.com's cloud, Koss says: "App Engine abstracts away a lot of the details that developers need to understand to build scalable apps and you're a little bit more constrained on App Engine, so you kind of can't get into trouble like you can with an EC2." Amazon gives users a virtual box in which they are responsible for their own OS and security patches, whereas App Engine is abstracted at a higher level, he notes.
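Koss’s point about abstraction levels can be sketched in code. An App Engine Python application is, at bottom, just a request handler the platform invokes — you deploy the function and Google owns the OS, the patches, and the scaling, whereas on EC2 you would administer the whole virtual machine beneath it. A minimal, hypothetical sketch (a plain WSGI callable exercised with the standard library’s wsgiref, not actual App Engine SDK code):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """All the developer writes: request in, response out.

    On App Engine the platform supplies the server, OS patching, and
    scaling around this callable; on EC2 the developer would also own
    the VM, its OS, and its security patches."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from a platform-managed runtime\n"]

# Exercise the app locally with a synthetic WSGI environment.
environ = {}
setup_testing_defaults(environ)
status_seen = []
body = b"".join(app(environ, lambda status, headers: status_seen.append(status)))
```

The constraint cuts both ways, as Koss notes: you can’t shell into the box and misconfigure it, but you also can’t drop below the abstraction when you need to.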

But not everyone believes App Engine is ready for prime time. "I think it's got a ways to go," says Pete Richards, systems administrator at Homeless Prenatal Program. "The data store technology for it is not very open, so I really don't know about getting information in and out of that," he notes, referring to data access methods deployed in App Engine. Still, "it's a promising platform," Richards says.

Cloud computing "is in the middle of something of a hype cycle," says Randy Guck, chief architect at Quest Software. But he thinks the cloud hype might be less than the hype a decade ago for SaaS (software as a service), something his company is now looking at developing using a cloud platform. "Right now, we're Microsoft-centric, so we're looking at Azure," Guck says, but he notes that Quest may have a role for App Engine in the future.

The question of whether the cloud is really ready for enterprise usage remains a key one for developers. As the Forrester study found, few are willing to commit now. InfoWorld's interviews echoed that caution. For example, Ryan Freng, a Web developer from the University of Wisconsin at Madison, says cloud computing is interesting but not something he would use anytime soon. "Right now, it's important that we maintain all our data and that we don't send it to the cloud," Freng says.

Paul continues his analysis on page 2.

<Return to section navigation list> 
