Sunday, June 20, 2010

Windows Azure and Cloud Computing Posts for 6/18/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Added items marked • on 6/19/2010.
Added items marked •• on 6/20/2010.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

•• Clemens Vasters delivered his Azure Platform Compute & Storage session at the Norwegian Developers Conference (NDC) 2010 on 6/16/2010 and the Webcast is now available:

At the heart of Windows Azure are its compute and management capabilities, which are foundational not only for ISV and enterprise solutions, but also for the Windows Azure platform components themselves. In this session Clemens will introduce Windows Azure’s notion of service and configuration models and the deployment and upgrade mechanisms, as well as the diagnostics and management capabilities. You will furthermore learn about the various storage capabilities, including the relational SQL Azure database service. Clemens will also show how Windows Azure’s different roles provide an architectural guidance framework for building scalable apps.

Anders Norås: Be prepared for bandwidth issues. @anoras tweeted “Massive demand for #ndc2010 videos at the moment. We're working on increasing the the bandwidth for http://bit.ly/a8xrKU Stay patient” on 6/20/2010.

Swedes are fond of asking “How many Norwegians does it take to … ?”

Wayne Walter Berry explains Writing BLOBs from SQL Azure to Windows Azure Storage in this 6/18/2010 post:

One of the more interesting things that we can do with the SqlStream class introduced in this blog post is to write to Windows Azure Storage from SQL Azure. Windows Azure Storage provides persistent, durable storage in the cloud. To access the storage services, you must have a storage account, created through the Azure portal.

Imagine that our company mandates that images of its products be stored in the database along with other product data such as title, price, and description. One way to serve those images up on a web page is to retrieve them from the database and stream them to the browser.

However, a better idea might be to read them from the database on the first request, and write them to Windows Azure Storage where they are served up on every request. You would get the data integrity of keeping your images with your other product data in SQL Azure and the performance benefit of streaming a static file from Windows Azure Storage. This is especially beneficial if we can take advantage of the Windows Azure Content Delivery Network.

The following code sample implements this scenario using the Adventure Works database, which stores the product thumbnails as varbinary(max). The code is designed to do these things:

  • Runs from the Windows Azure platform.
  • Serves images in response to a request to an ASP.NET page, using a product ID in the query string.
  • Reads the image from the Products table in the Adventure Works database and writes the thumbnail image to Windows Azure Storage if it doesn’t already exist.
  • Redirects the browser to Windows Azure Storage to serve the image.
  • Uses streaming so that the whole image is not loaded into memory.

When the SQL Azure database, the Windows Azure Web Role, and the Windows Azure Storage container are in the same data center, there is no cost to transfer the data between SQL Azure and Windows Azure Storage. Note that this is not a direct transfer from SQL Azure to Windows Azure Storage; the code reads the image from SQL Azure and writes it to Windows Azure Storage. When you run the code, the data passes through your Windows Azure role. …

Wayne continues with the sample source code. …

… Using the Code

This code is quite general and can be used in other scenarios. One interesting idea is to run it from a locally hosted IIS server using a local SQL Server database. The code would read your local SQL Server and push the images in your database to Windows Azure Storage, where they could be served from the Windows Azure CDN.

The first request is going to take longer than all the other requests, since it has to read from SQL Azure and write to Windows Azure Storage. Another idea is that you could preload Windows Azure Storage so that the first request does not take as long to respond. If you are uploading images in your Windows Azure Web Role, you could write to Windows Azure Storage after the upload. This would populate the image data in two places: SQL Azure (alongside the product data) and the image container in Windows Azure Storage.
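
The pattern Wayne describes is straightforward to sketch with the StorageClient library that ships with the Windows Azure SDK. The following fragment is a minimal illustration of the read-then-upload step only; it is not Wayne’s actual sample, and the connection strings, container name, productId variable, and ASP.NET page context are assumptions (the table and column names follow the AdventureWorksLT schema):

// Read the product thumbnail from SQL Azure and stream it into blob storage.
CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
CloudBlobContainer container =
    account.CreateCloudBlobClient().GetContainerReference("productthumbs");
container.CreateIfNotExist();

CloudBlob blob = container.GetBlobReference(productId + ".gif");

using (SqlConnection conn = new SqlConnection(sqlAzureConnectionString))
using (SqlCommand cmd = new SqlCommand(
    "SELECT ThumbNailPhoto FROM SalesLT.Product WHERE ProductID = @id", conn))
{
    cmd.Parameters.AddWithValue("@id", productId);
    conn.Open();

    // SequentialAccess lets us read the varbinary(max) column in chunks
    using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
    {
        if (reader.Read())
        {
            blob.Properties.ContentType = "image/gif";
            using (BlobStream blobStream = blob.OpenWrite())
            {
                byte[] buffer = new byte[8040];
                long offset = 0, bytesRead;
                while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                {
                    blobStream.Write(buffer, 0, (int)bytesRead);
                    offset += bytesRead;
                }
            }
        }
    }
}

// Finally, redirect the browser to the blob so later requests never touch SQL Azure
Response.Redirect(blob.Uri.AbsoluteUri);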

Sergey Abakumoff’s Data Dynamics Reports: Show the Data from the Windows Azure Table Storage post shows you how to use Data Dynamics Reports’ Object Data Provider to create reports from Azure tables:

Some time ago I tested Data Dynamics Reports on the Windows Azure platform. I had never dealt with Windows Azure before and was happy to learn something new. In particular I've learned the basics of Windows Azure Storage Services. There are 3 storage types in Windows Azure: blobs, queues and tables. I thought about the scenario where a developer wants to visualize the data from Windows Azure Table Storage using Data Dynamics Reports. I've found that Data Dynamics Reports provides developers with all the ammunition needed to achieve that goal, so I thought it would be worth writing about.

Sergey continues with a brief description of Azure tables and then a modification of Jim Nakashima’s "Windows Azure Walkthrough: Table Storage", aligned with the changes in Windows Azure SDK v1.1 and with added steps to show the data using Data Dynamics Reports.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

•• Gil Fink posted Consuming OData Feed using Microsoft PowerPivot on 6/20/2010 (requires Office 2010):

PowerPivot is a data analysis add-in for Excel that brings additional computational power to it. It also helps “to create compelling self-service BI solutions, facilitates sharing and collaboration on user-generated BI solutions” (taken from the PowerPivot site). This post will help you understand how to consume an OData feed from PowerPivot in order to use the data it exposes.

The OData Feed

Exposed OData feeds can be found on the http://www.odata.org/ site. Under the Producers tab you will find OData producers which expose their data using the OData protocol. In my example I’ll use the Netflix feed that can be found here: http://odata.netflix.com/Catalog/.

Consuming Netflix OData Feed using PowerPivot

In order to use PowerPivot for Excel you first need to download it. There are some prerequisites, including having Office 2010 installed on your machine. After installing PowerPivot, open Excel, go to the PowerPivot tab and open the PowerPivot window. One of the options in the PowerPivot window is to get data from data feeds (you will see the OData protocol logo beside it):

PowerPivot Window
Use the From Data Feeds menu item in order to open the Table Import Wizard:

Table Import Wizard
In the Data Feed Url textbox, enter the relevant OData feed URL (in my case the Netflix feed URL). Press Next to import the data definition, and then you will see the entity sets the feed exposes as tables:

Data Feed Tables
Choose the tables you want to import and press Finish to end the process and import the tables. This can take some time depending on the size of the exposed data set.

I chose the Genres set and this is the result:

Imported Results

Now I can use the PowerPivot’s features to build my relevant report or to manipulate the data I’ve received.

Summary

Let’s sum up: I showed how to consume an OData feed using Microsoft’s PowerPivot. The OData ecosystem is growing very fast and, as you can see from this example, its consumers can also be BI tools like PowerPivot and many others. This is one of the reasons to start learning this protocol now.

•• Kevin Kline posted Live! TechNet Radio: Microsoft Cloud Services – SQL Azure on 6/18/2010:

Just wanted to let you know that a TechNet Radio episode and interview I did about cloud computing is now live on TechNet Edge. It was the featured spot on Thursday, June 3rd and is also featured on the TechNet homepage.

I’ve been trying to wear more of an analyst’s hat these days, so this webcast has a lot of my “deep thinking” on issues related to cloud computing – hopefully at a higher level of quality than Jack Handy.

A salient point that I think many analysts are overlooking is the changing nature of data as it exists in the cloud.  For decades, data has primarily been about people (and their activities) for consumption by other people.  The cloud is enabling a major shift in data generation and consumption where data is produced by machines for consumption by other machines.  We’ll soon be looking at situations, now rather rare, in which sensors are extremely commonplace.  These sensors, whether they be in traffic signals or high-end medical devices, will create enormous amounts of data far more frequently than ever before, loading that data directly into cloud databases.  The cloud databases will consume and process the data, and automated analysis (made all the easier through features like StreamInsight in SQL Server 2008 R2) will flag important findings for review by a real live human being.  Check out the interview for several real-world examples being played out even as we speak.

Here is a direct link: http://edge.technet.com/Media/TechNet-Radio-Microsoft-SQL-Azure-Growing-Opportunities-for-Data-in-the-Cloud/

Or, if brevity is your thing and you prefer a surrogate key over a natural key: http://bit.ly/cdLTeP

Perhaps I can persuade you to blog, tweet, or place a link to it in your Facebook or team newsletter?  Maybe with a few deep thoughts? Please?  Pretty please?

How about the above, Kevin?

•• Jason Rowe posted Odata Osht! and some examples of corresponding LINQ queries on 6/19/2010:

My head is spinning from looking at OData tonight. I keep going back and forth on whether this is the coolest thing in the world or just a way to browse data via the URI. Either way it's fun. Below are my notes and examples showing some limitations and features of OData.

The first area where I had some misunderstandings was working with collections. Take a look at this question on StackOverflow: "use linq to query nested odata collection". It pretty much sums up the issue I was stuck on for a while: how do relationships work, and how can I query collections? This blog post helped me start to understand how keys and collections work.

Then, I ran across this post and I'm back to being sold on oData. The ease of filtering the data makes for some nice queries. Filters also support functions if needed. Thanks to this tip for pointing that out.

Here is a set of examples for the Odata Netflix feed. I've listed the URI with the corresponding C# expression below.

http://odata.netflix.com/Catalog/Titles?$filter=Name eq 'The Name of The Rose'

(from t in Titles where t.Name == "The Name of The Rose" select t)

http://odata.netflix.com/Catalog/Titles?$filter=AverageRating lt 2 and Instant/Available eq true

(from t in Titles where t.AverageRating < 2 && t.Instant.Available == true
select t)

http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear le 1989 and ReleaseYear ge 1980 and AverageRating gt 4&$expand=Awards

(from t in Titles.Expand("Awards") where t.ReleaseYear <= 1989 && t.ReleaseYear >= 1980 && t.AverageRating > 4 select t)

Jason continues with more live queries and LINQ examples. IE 7+ users must have Feeds view turned off to view the OData query results in AtomPub format.
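
To run Jason’s queries from C# code, you need a DataServiceContext over the Netflix Catalog feed. A minimal setup sketch follows; the container class name (NetflixCatalog) is an assumption based on what “Add Service Reference” typically generates for this endpoint:

var catalog = new NetflixCatalog(new Uri("http://odata.netflix.com/Catalog/"));

var titles = from t in catalog.Titles
             where t.AverageRating < 2 && t.Instant.Available == true
             select t;

// The query is translated to the $filter URI and sent when it is enumerated
foreach (var title in titles)
    Console.WriteLine(title.Name);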

•• Damien White posted ruby_odata v.0.0.6 on 6/17/2010 to RubyForge:

An OData Client Library for Ruby. It allows Ruby applications to consume OData services, such as Netflix’s implementation of OData services.

•• Matt Milner presents PluralSight’s two-hour OData Introduction On Demand! course (requires subscription or sign-up for a free guest pass):

Many services today expose data. Unfortunately, each service has unique data formats and APIs. OData provides a set of conventions and a protocol for reading and writing data across the web using open web standards. This course will give you an understanding of what OData is and how to build both clients and services.

Bart Robertson explained newly updated SQL Azure Connection Retry technology in this 6/19/2010 post:

imageThis is a summary of a relatively long thread from the Microsoft internal SQL Azure discussion alias.  I basically stole some ideas and put them together in an implementation that works well for our solution.

Our solution
As I mentioned in my previous post, we released the Social eXperience Platform (SXP) in April of this year.  We heavily instrumented our code as Windows Azure operations was new to us (Design for Operations is a good practice regardless of the newness of the platform).  Our solution consists of a web service running as a web role on Windows Azure compute and a set of SQL Azure databases.  We are a multi-tenant solution and chose to use a SQL Azure database per tenant (more on that in a future post).  We have been live for 8 weeks and have delivered over 4 - 9s of uptime - on a version 1 release on new technology.  Other than the SQL retry code and caching our configuration data, every call to our service is at least one round trip to SQL Azure.  We intentionally chose not to build in redundancy at the aplication level, but to rely on the platform.  So far, we're extremely pleased with Windows Azure and SQL Azure, but our goal is 100% availability, so we keep tweaking. 

The problem
One of the things that SQL Azure does to deliver high availability is that it sometimes closes connections. SQL Azure does some pretty cool stuff under the covers to minimize the impact, but this is a key difference between SQL Azure development and SQL Server development.

When a connection gets closed, the vast majority of the time, closing and opening the connection object solves the problem.  Sometimes, you have to wait a few hundred ms to a few seconds.  Very seldom does the retry totally fail.

These retries seem to come in waves and very seldom affect more than one of our databases at a time.  This makes sense because SQL Azure spreads databases independently.  The odds of having multiple databases on the same virtual or physical hardware are remote – unless you have hundreds or thousands of databases.

We know all of this because we heavily instrumented our code.  Design for operations is very important, particularly when the platform is new to you.

A solution
Your typical ADO.NET data access code looks something like this (note that if you’re using LINQ, Entity Framework, or other higher-level data libraries, something like this is likely happening under the covers):

try
{
    using (SqlConnection conn = new SqlConnection(sqlConnectionString))
    {
        using (SqlCommand cmd = new SqlCommand(sqlStatement, conn))
        {
            conn.Open();

            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                }
            }
        }
    }
}

catch (SqlException ex)
{
  SxpLog.WriteSqlException(ex, "some", "other", "data");
}

One approach is to put retry logic around the entire try – catch block.  A simple for loop with a sleep does the trick.  For reads, this works just fine, but for writes, you have to make sure that the call really failed and deal with it accordingly.  That is much more difficult to generalize.  You also have to repeat the logic unless you're using Lambda expressions or something similar.
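
A minimal sketch of that approach, generalized with a lambda so the loop isn’t repeated around every data access block (the retry count and sleep values are illustrative; as Bart notes, this is only safe as-is for reads):

public static void ExecuteWithRetry(Action action, int maxRetries, int sleepMs)
{
    for (int attempt = 0; ; attempt++)
    {
        try
        {
            action();
            return;
        }
        catch (SqlException)
        {
            if (attempt >= maxRetries)
                throw;                                  // out of retries; let the caller decide

            // back off a little more on each subsequent retry
            System.Threading.Thread.Sleep(sleepMs * (attempt + 1));
        }
    }
}

// Usage: wrap the read shown above
ExecuteWithRetry(() =>
{
    using (SqlConnection conn = new SqlConnection(sqlConnectionString))
    using (SqlCommand cmd = new SqlCommand(sqlStatement, conn))
    {
        conn.Open();
        using (SqlDataReader dr = cmd.ExecuteReader())
        {
            while (dr.Read()) { /* process rows */ }
        }
    }
}, 3, 100);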

Another solution
What we’ve noticed is that all of our failures are due to connection issues.  conn.Open() doesn’t fail, but rather the first SQL statement we try to execute fails.  Remember that our “client” is running in the same Azure data center as our database.  Remote connections could cause the command execute or the results processing to fail.  Our queries are very small and very fast (by design), so I doubt we will ever see many failures outside of connect.  Larger queries could fail.  Large, remote queries seem the most likely to fail in the execute or results processing.  Adjust your approach accordingly.

Something else we wanted to do is get the SQL Azure context information.  This information helps the SQL Azure team debug any issues you run into.  There is a great blog post on that here.

We think we came up with an elegant solution to both issues.  Here’s how the generic ADO.NET code is affected:

string sqlContext = string.Empty;

try
{
    using (SqlConnection conn = new SqlConnection(sqlConnectionString))
    {
        using (SqlCommand cmd = new SqlCommand(sqlStatement, conn))
        {
            sqlContext = GetSqlContextInfo(conn);

            using (SqlDataReader dr = cmd.ExecuteReader())
            {
                while (dr.Read())
                {
                }
            }
        }
    }
}

catch (SqlException ex)
{
  SxpLog.WriteSqlException(ex, sqlContext, "some", "other", "data");
}

As you can see, we made 3 code changes and kept the generalization.  Our first line of code declares the sqlContext variable.  We do this outside the try block for scoping.  The second change is the replacement of conn.Open() with sqlContext = GetSqlContextInfo(conn).  GetSqlContextInfo encapsulates our retry logic, opens the connection, and retrieves our SQL context from the server.  This actually works out well because conn.Open() doesn’t fail – we have to issue a command execute in order to determine if the connection is valid.  Rather than "waste" a roundtrip, we do something useful.  The 3rd change is that we added the sqlContext to our exception log.  Again, this will help the SQL Azure team when debugging.

GetSqlContextInfo
public string GetSqlContextInfo(SqlConnection conn)
{
    string sqlContext = string.Empty;

    // Omitted the code to read these from configuration
    int sqlMaxRetries = 4;
    int sqlRetrySleep = 100;
    int sqlMaxSleep = 5000;
    int sqlMinSleep = 10;

    // start a timer
    TimeSpan ts;
    DateTime dt = DateTime.UtcNow;

    for (int retryCount = 0; retryCount <= sqlMaxRetries; retryCount++)
    {
        try
        {
            conn.Open();

            // get the SQL Context and validate the connection is still valid
            using (SqlCommand cmd = new SqlCommand("SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())", conn))
            {
                sqlContext = cmd.ExecuteScalar().ToString();
            }

            ts = DateTime.UtcNow - dt;

            // log opens that take too long
            if (ts.TotalMilliseconds >= 90)
            {
                SxpLog.WriteKnownEvent(8001, ts.TotalMilliseconds, "Connect", conn.DataSource, conn.Database, conn.WorkstationId, sqlContext);
            }

            break;
        }

        catch (SqlException ex)
        {
            if (retryCount < sqlMaxRetries)
            {
                conn.Close();

                SxpLog.WriteSqlRetry(5902, ex, conn.DataSource, conn.Database, conn.WorkstationId, "Connect");

                // don't sleep on the first retry
                // Most SQL Azure retries work on the first retry with no sleep
                if (retryCount > 0)
                {
                    // wait longer between each retry
                    int sleep = (retryCount + 1) * (retryCount + 1) * sqlRetrySleep;   // squared back-off (note that ^ is XOR in C#, not exponentiation)

                    // limit to the min and max retry values
                    if (sleep > sqlMaxSleep)
                    {
                        sleep = sqlMaxSleep;
                    }
                    else if (sleep < sqlMinSleep)
                    {
                        sleep = sqlMinSleep;
                    }

                    // sleep
                    System.Threading.Thread.Sleep(sleep);
                }
            }
            else
            {
                // Log the exception
                SxpLog.WriteSqlException(ex, conn.DataSource, conn.Database, conn.WorkstationId, "Connect");

                // we thought about rethrowing the exception, but chose not to
                // this will give us one more chance to execute the request - we might get lucky ...
            }
        }
    }

    // we log this value and null might cause an issue
    if (sqlContext == null)
    {
        sqlContext = string.Empty;
    }

    return sqlContext;
}

Notice that we don't sleep on our first retry and we sleep longer on each subsequent retry.  This gives a good balance between recovering quickly and giving SQL Azure time to re-open the connection.

We debated re-throwing the exception on retry failure, but decided not to.  Our next statement is a command execute and it will fail if the connection isn’t ready.  There’s a chance SQL Azure might recover between our last retry and our command execute.  We don’t bother checking the return value as we want to take that last shot at success.

We’ve had this change in production for a couple of days now and, so far, so good.  It's definitely an improvement over our previous logic, but only time will tell if it needs to be tweaked some more.  Again, this might not be 100% right for your solution, particularly if you have remote clients and large, long-running queries, but hopefully it gives you some ideas.  If you run into any bugs, please let me know and I’ll send you a Dove bar.

See Wayne Walter Berry’s Writing BLOBs from SQL Azure to Windows Azure Storage post in the Azure Blob, Drive, Table and Queue Services section above.

The Dxjo.net site announced new features, including full support for SQL Azure, in LINQPad v.2.10.1 on 6/18/2010:

Note: A version of LINQPad is now available for .NET Framework 4.0. This has an identical feature set to LINQPad 2, but lets you query in C# 4 and VB 9 and access Framework 4.0 features. LINQPad 4 is a separate product and is updated in parallel to LINQPad 2.

Version 2.10
  • LINQPad exposes a new extensibility model that lets you offer first-class support for third-party ORMs and other querying sources. Visit http://www.linqpad.net/Extensibility.aspx for information on writing a driver.
  • You can now query WCF Data Services as easily as a database. Click ‘Add Connection’ and then click the ‘Data Services’ radio button.
  • SQL Azure is now fully supported. Just click the ‘Azure’ radio button in the connection dialog.
  • A single-click plugin is available to query SQLite and MySQL Enterprise/Community servers via the IQToolkit. To download, click ‘Add Connection’, and then click ‘View more drivers’.
  • LINQPad now lets you specify an arbitrary provider and connection string when connecting to an Entity Framework EDM. This means you can use third-party EF providers that support other databases such as Oracle or Sqlite.
  • SQL CE databases larger than 256MB are now supported via an option on the Connection dialog.
  • There’s now a button to the right of ‘Results’ that exports results into Excel or Word – with or without formatting.
  • With queries that Dump multiple result sets or values, LINQPad now displays data in the output window as soon as it becomes available.
  • You can now define extension methods in queries of type ‘C# Program’. Just write a static class below the Main method with the desired extension methods (see the sketch after this list).
  • There’s a new ‘Find All’ option on the Edit menu that searches all queries and samples.
  • You can inject hyperlinks into the output by instantiating the new Hyperlinq class. You can populate a Hyperlinq with either an ordinary URL or a query that will execute in another window when clicked.
  • You can create additional DataContexts of the same schema by instantiating the UserQuery class, passing in a new connection or connection string.
  • There’s now a ‘Close all connections’ option on the Connection context menu. This is handy when you want to drop a database.
  • LINQPad supports SSL connections to SQL Server via a new checkbox on the connection dialog.
  • On the Edit menu, there’s a new ‘Execute Shell Command’ option which inserts a call to a helper method that lets you easily execute operating system commands. The output is returned as an array of lines, and is written to LINQPad’s output window.
  • In Edit | Preferences, there’s a new Advanced tab with more customization options, including options for editor tabs and column widths.
  • Performance has improved considerably with very large schemas (more than 1000 tables).
  • The System.Transactions assembly + namespace is now automatically imported.
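
As an illustration of the ‘C# Program’ extension-method feature mentioned in the list above, a trivial LINQPad query might look like this (my own example, not taken from the release notes; Dump() is LINQPad’s built-in output method):

void Main()
{
    "sql azure".Shout().Dump();    // SQL AZURE!
}

// A static class below Main can hold extension methods used by the query
static class StringExtensions
{
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}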

Download

The Windows Azure Team added a Common Tasks for Dallas topic to MSDN’s “Introducing Dallas” topic on 6/18/2010:

This section describes how to perform common tasks in Microsoft Codename “Dallas.”

Signing Up for a Dallas Account

To sign up as a data provider, email dallasbd@microsoft.com to begin the process. To sign up as a developer, follow the latest instructions at the “Get Started With Dallas” section of the “Dallas” developer page at http://www.microsoft.com/windowsazure/developers/dallas/

Managing Account Keys

To access the page that manages account keys, log in to the developer portal at http://www.sqlazureservices.com, click the tab labeled “Home,” and click the link labeled “Manage” that is next to the default account key. From here, more account keys can be added to the account, existing account keys can be removed from the account, and Access Control Service (ACS) functionality can be enabled or disabled. Activating the ACS option for an account key enables integration with Active Directory Federation Server v2 (AD FS v2) and allows others to access “Dallas” on the account holder’s behalf.

For more information on ACS integration visit the “Access Control Service (ACS) Developer Quickstart for “Dallas”” page at https://www.sqlazureservices.com/DallasACSWalkthrough.aspx while logged in to the developer portal.

Multiple keys can be associated with one account

Viewing the Access Report

To view the service activity associated with the account’s access key(s), log in to the developer portal at http://www.sqlazureservices.com, and click the tab labeled “Access Report.” The “Access” column will show the number of times the various account keys were used to access the listed datasets on each day that falls within the report’s date range. The report’s date range can be modified by changing the “Start Date” and “End Date” values at the top of the page and clicking “Refresh.” If a dataset was not accessed by any of the account’s access keys on a particular day, it will not be listed on the report.

For more information on access reports, see Lesson 4: Generating an Access Report.

Subscribing to and Unsubscribing from a Dataset

To subscribe to a dataset, log in to the developer portal at http://www.sqlazureservices.com, click the tab labeled “Catalog,” and click the link labeled “Subscribe” that is next to the dataset of interest.

To unsubscribe from a dataset, log in to the developer portal at http://www.sqlazureservices.com, click the tab labeled “Subscription,” and click the link labeled “Unsubscribe” that is next to that dataset.

For more information about dataset subscriptions, see the “Exploring Data Sets and Subscriptions in the Developer Portal” section in Lesson 2: Exploring the Developer Portal.

Browsing Subscriptions

To see the dataset subscriptions that are currently active for a developer account, log on to the developer portal at http://www.sqlazureservices.com and then click “Subscriptions.” On this page, the date and cost of the subscriptions are viewable, as are the subscribed datasets’ Service Explorer links.

Previewing a Dataset with the Service Explorer

For information about how to use Service Explorer, see Lesson 3: Exploring the Service Explorer.

See Also: Concepts

Frequently Asked Questions

Frans Senden posted Adding/Creating data to OData/WCF Data Service batch explained on 6/18/2010:

I want to show you how to “upload” data to an OData service, that is, make an HTTP POST from the client to your service. For this example I started a new project by adding an AdventureWorks Entity Model and configuring it as a WCF Data Service. You can read how to do that at MSDN. (Don’t worry that the walkthrough is about the Northwind DB; instead choose the AdventureWorks DB. If you have not yet downloaded it, check this!)

Frans shows the source code for the service and continues:

Once we’ve setup our service, it’s time for a client that makes some calls to it. First add a new project to the solution (ConsoleApplication for now) and make a ServiceReference to the service.

In this example we want to add 3 new products and persist them in the Product table in the database. We don’t have to worry about that last part, since the Entity Framework will handle it all. …

Client source code omitted here for brevity. …

A short walkthrough of what is happening here.
First, note that instead of setting up the context against “http://localhost:51824/AdventureService.svc” we use “http://ipv4.fiddler:51824/AdventureService.svc”. This is done so that later on we can monitor our calls in Fiddler.

When the application starts, a context for our service is set up, which will track all changes. In the end, if we agree with those changes, we can call context.SaveChanges();

By calling SaveChanges() requests are finally made to store the products. Let’s take a look in Fiddler:

fiddler products add requests

As you can see, there are 3 requests made to the service. Note that these result from calling SaveChanges() only once. …
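
A sketch of what the omitted client code typically looks like is shown below. The context and entity type names are assumptions based on a generated AdventureWorks service reference, and the entity set name passed to AddObject must match whatever your service exposes:

var context = new AdventureWorksEntities(
    new Uri("http://ipv4.fiddler:51824/AdventureService.svc"));

for (int i = 1; i <= 3; i++)
{
    var product = new Product
    {
        Name = "Sample product " + i,
        ProductNumber = "SAMPLE-" + i,
        ListPrice = 9.99m * i
        // ...plus whatever non-nullable columns your model requires
    };

    context.AddObject("Products", product);   // track the new entity for insert
}

// One call; as the Fiddler trace shows, the client library issues one HTTP POST per tracked insert
context.SaveChanges();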

Valery Mizonov posted a detailed Best Practices for Building Reliable SQL Azure Database Client Applications tutorial and a link to its source code to the Sqlcat.com site on 6/17/2010:

The recent technical engagement with a customer building an impressively large data warehouse on SQL Azure has been instrumental in enabling us to capture some interesting lessons learned and produce the following best practices for the worldwide community. We are confident that our learnings can be of use to many .NET developers working with SQL Azure.

The Challenge

The developers who already had the opportunity to start working with our cloud-based relational database service, widely known as SQL Azure, may know that SQL Azure introduced some specific techniques and approaches to implementing data access service layer in the applications leveraging the SQL Azure infrastructure.

One of the important considerations is how client connections are handled. SQL Azure brings in some unique behavioral attributes which can manifest themselves when a client is establishing connections to a SQL Azure database or running queries against it. Database connections can be throttled internally by the SQL Azure fabric for several reasons, such as excessive resource usage, long-running transactions, nodes being taken offline for upgrade, and other possible failover and load-balancing actions, leading to termination of a client session or a temporary inability to establish new connections. Database connections may also be dropped for a variety of reasons related to network connectivity between the client and the distant Microsoft data centers: quality of network, intermittent network faults in the client’s LAN or WAN infrastructure, and other transient technical reasons.

The behavior in question was well discussed in an article posted by the SQL Azure team back in May 2010. It articulates the need for implementing retry logic in the client code to provide reliable connectivity to the SQL Azure databases. What appears to have been missing in the overall equation until now is a generic, reusable, professional-quality implementation for .NET developers, to avoid having to repetitively write their own custom retry mechanisms. To this end, we have created a solution bridging the gap between the great guidance provided by the SQL Azure team and the much-needed framework for managing client connectivity to SQL Azure databases.

The Solution

At a glance, our solution was implemented as a fully reusable framework for dealing with connection loss and failed SQL commands due to terminated sessions between a client application and a SQL Azure database. The underlying technical implementation:

  • Supports flexible choice of retry policies (fixed interval, progressive interval, random exponential interval);
  • Supports separate retry policies for SQL connections and SQL commands;
  • Supports retry callbacks to notify the user code whenever a retry condition is encountered;
  • Enables you to specify retry settings directly in the connection strings for even better flexibility;
  • Provides extension methods to support retry capabilities directly in SqlConnection and SqlCommand objects;
  • Available as fully documented, unit-tested source code ready to be plugged into any .NET solution with no major customization required.

The next sections drill down into specific implementation details and are intended to help the developers understand when and how they should make use of the SQL Azure connectivity framework referenced above.

Valery continues with detailed recommendations and source code for the following topics:

  • Understanding Retry Conditions
  • Technical Implementation
  • SqlAzureConnection Class Constructors, Properties and Methods (Core Logic)
  • RetryPolicy Class Constructors, Properties/Delegates and Methods
  • SqlConnection Extension Methods
  • SqlCommand Extension Methods
  • Usage Patterns
  • Opening SQL Azure Database Connections Reliably
  • Executing Queries Against SQL Azure Database Reliably

and concludes:

The traditional way of handling SQL connections and SQL commands has changed with the introduction of SQL Azure. The underlying fabric managing SQL Azure nodes unlocks some powerful capabilities enabling virtually unlimited scale. It however comes with specific elements of behavior which need to be fully understood by the client applications accessing the SQL Azure databases. This includes the need for handling transient connectivity exceptions by ensuring that the client code is able to sustain and behave reliably in the event of SQL Azure database connections being throttled by the Resource Manager. There are also other conditions which need to be accounted for. Consequently, having a robust retry mechanism in the SQL Azure client applications is the key.

We aimed to provide the community with validated best practices to help .NET developers build a reliable data access layer taking into account these specific behavioral attributes of our cloud-based database infrastructure. Our best practices have been reflected in the form of a reusable framework which developers can easily plug in and adopt in their solutions.

To access the source code, please see below for a link to the complete Visual Studio 2010 solution available from the MSDN Code Gallery. Please share your feedback to help improve the community sample: Microsoft.SQL.CAT.BestPractices.SqlAzure.Framework.zip

For more information on the related topic, please visit the following resources:

Be sure to review Bart Robertson’s description of newly updated SQL Azure Connection Retry technology (above) when working your way through Valery’s RetryPolicy class topics.

Kip Kniskern describes Messenger Connect’s new support for OAuth WRAP, ActivityStrea.ms, PortableContacts and OData in his Welcome “Windows Live for Developers” post of 6/16/2010:

Angus Logan has his first post up on a new Windows Team blog, “Windows Live for Developers”.  Angus is the Senior Technical Product Manager for Messenger Connect (and before that, the same title for Windows Live Platform), which itself is an upcoming single API promising “a new way for partners and developers to connect with Messenger”.

Ori Amiga posted on the Inside Windows Live blog back in April about Messenger Connect, as John Richards and Angus were announcing Messenger Connect at The Next Web Conference in Amsterdam.  Ori explained the basics:

“Messenger Connect brings the individual APIs we’ve had for a long time (Windows Live ID, Contacts API, Messenger Web Toolkit, etc.) together in a single API that's based on industry standards and specifications (OAuth WRAP, ActivityStrea.ms, PortableContacts, OData) and adds a number of new scenarios.

The new Messenger Connect provides our developer partners with three big things:

  • Instantly create a user profile and social graph: Messenger user profile and social graph information allows our shared customers to easily sign in and access their friends list and profile information. This allows our partners to more rapidly personalize their experiences, provides a ready-made social graph for customers to interact with, and provides a channel to easily invite additional friends to join in.
  • Drive engagement directly through chat and indirectly through social distribution: By enabling both real-time instant messaging conversations (chat) and feed-based sharing options for customers on their site, developers can drive additional engagement and usage of their experiences by connecting to the over 320 million Messenger customers worldwide.
  • Designing for easy integration in your technical environment: We are delivering an API service that will expose a RESTful interface, and we’ll wrap those in a range of libraries (including JavaScript, .NET, and others). Websites and apps will be able to choose the right integration type for their specific scenario. Some websites prefer to keep everything at the presentation tier, and use JavaScript libraries when the user is present. Others may prefer to do server-side integration, so they can call the RESTful endpoints from back-end processes. We're aiming to provide the same set of capabilities across the API service and the libraries that we offer.”

Looks like we’re about to hear a lot more about Messenger Connect soon.  We added Windows Live for Developers to our “Blogs We Like” section, and you can subscribe via rss.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

•• Sam Vanhoutte described Hosting workflow services in Windows Azure in this 6/17/2010 post:

Since the latest release of the Windows Azure Development Kit (June), Azure provides support for .NET 4 applications, which I believe is a really important step from an adoption point of view.

The first thing we wanted to try was to host a XAMLX Workflow Service in a web role on Azure.

Example workflow service

I created a standard Workflow Service that accepts two parameters on one operation (Multiply) and returns the result of the multiplication to the client. This service was called Calc.xamlx.

These are the steps I followed to make my service available. I added the exceptions, because that’s typically what users will search for.

Enable visualization of errors

The standard behavior of web roles is to hide exceptions from users browsing to a web page. Therefore, it is advisable to add the following in the system.web section of the web.config:

<customErrors mode="Off"/>

Configure handler for Workflow Services

The typical error one would get, when just adding the Workflow Service to your web role and trying to browse it, is the following:

The requested url '/calc.xamlx' hosts a XAML document with root element type 'System.ServiceModel.Activities.WorkflowService'; but no http handler is configured for this root element type. Please check the configuration file and make sure that the 'system.xaml.hosting/httpHandlers' section exists and a http handler is configured for root element type 'System.ServiceModel.Activities.WorkflowService'.

We need to specify the correct HTTP handler that needs to be used for XAMLX files. Therefore, we link the workflow services and activities to the correct Workflow Service ServiceModel handler. …

Sam continues with the web.config additions for the handler.

Enabling metadata exposure of the workflow service

To enable consumers of our workflow service to generate their proxy classes, we want to expose the WSDL metadata of the service by providing the following configuration section in the web.config.

Notice the simplicity of the configuration compared to WCF 3.5. Making use of the default behaviors allows us to specify only what we want to override. …

Web.config additions omitted for brevity.

Testing and publishing

After successfully testing locally in the Azure Development Fabric, I uploaded and deployed the package, using the new Tools in Visual Studio to my Azure test account.
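
Once deployed, the workflow service can be consumed like any other WCF service. A minimal client sketch follows; the contract below is an assumption that mirrors the Multiply operation described above (the real contract and operation names come from the Receive activity’s ServiceContractName and OperationName), and the URL is a placeholder:

[ServiceContract]
public interface ICalcService
{
    [OperationContract]
    int Multiply(int value1, int value2);
}

// .NET 4 maps http endpoints to basicHttpBinding by default
var factory = new ChannelFactory<ICalcService>(
    new BasicHttpBinding(),
    new EndpointAddress("http://yourapp.cloudapp.net/Calc.xamlx"));

ICalcService proxy = factory.CreateChannel();
Console.WriteLine(proxy.Multiply(6, 7));   // 42
((IClientChannel)proxy).Close();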

Ron Jacobs posted CancellationScope Activity Updated for .NET 4 RTM for workflow on 6/18/2010:

Nearly a year ago I created a CancellationScope Activity sample and posted it on CodeGallery for .NET 4 Beta 1. I went back yesterday and updated it for .NET 4 RTM – the sample has some interesting points to demonstrate.

It shows how the CancellationScope Activity works but it also shows

  1. How to create a custom activity that causes the workflow to go idle for testing purposes
  2. How to create a unit test that verifies the behavior of a console application by simply verifying the text output to the console.

You can download the sample from http://code.msdn.microsoft.com/wf4CancellationScope

CancellationScope Activity

How do you cancel work in your application? Many middle-tier components and services rely on transactions to handle cancellation for them. This makes a great deal of sense because transactional programming is well understood. However, there are many times when you must cancel work that cannot be done under a transaction. Cancellation can be very difficult because you have to track the work that has been done to know how to cancel it. WF4 also includes compensation activities that can be used for cleanup work.

Windows Workflow Foundation 4 helps you with this by providing a CancellationScope activity.
In today’s example I’m going to show you how you can use a cancellation scope to manage cleanup work.

How does cancellation happen?

Before we consider how the CancellationScope works, let’s think about how cancellation happens in a workflow. There are two ways it can happen: from inside the workflow or from the outside. Child workflow activities are scheduled by their parent activity (such as a Sequence, Parallel, Flowchart or some custom activity). The parent activity can decide to cancel child activities for any reason. For example, a Parallel activity with three child branches will cancel the remaining child branches whenever it completes a branch and the CompletionCondition expression evaluates to true.

The workflow can also be cancelled from the outside by the host application by calling the Cancel method on the WorkflowInstance.

How does the cancellation scope work?
It’s very simple really: the CancellationScope has a Body. This is where you do the work that you might have to cancel later. The CancelHandler contains the activities you want to run if the activity is canceled.
Does an unhandled exception cause the cancellation handler to be invoked?
No it does not. Unhandled exceptions cause the workflow to abort, cancellation handlers are not invoked.
What if my cancellation handler throws an unhandled exception?
If you have an unhandled exception in the cancellation handler the workflow will immediately terminate with the exception. This is similar to throwing an exception from within a catch block.
Show me a sample

I created some sample code for this blog post that cancels the workflow from the outside. In this example you can observe the behavior of nested cancellation.
When it starts you enter a number between 1 and 3 to determine the number of nested levels you want to call. When you get to the limit of that level the workflow will call a custom activity I created called GoIdle that simply sets a bookmark. This will cause the workflow to become idle and the WorkflowInstance.OnIdle handler will be invoked, handing control back to the host.

In program.cs I’ve created two AutoResetEvents, one that will be signaled when the workflow goes idle and another that will be signaled when the workflow completes or terminates with an exception. This is a little unusual but I know that my workflow will go idle when it matches one of the nesting levels 1,2 or 3.
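
The host pattern Ron describes looks roughly like the sketch below (the names are illustrative, not the actual program.cs): run the workflow, wait on the AutoResetEvents signaled from the Idle and Completed callbacks, then cancel from the outside.

static void Main()
{
    var idleEvent = new AutoResetEvent(false);
    var completedEvent = new AutoResetEvent(false);

    var app = new WorkflowApplication(new CancellationScopeSample());

    app.Idle = e => idleEvent.Set();            // GoIdle's bookmark brings us here
    app.Completed = e => completedEvent.Set();
    app.Aborted = e => completedEvent.Set();

    app.Run();

    idleEvent.WaitOne();       // wait until the workflow hits the bookmark
    app.Cancel();              // cancel from the outside; the CancelHandlers run

    completedEvent.WaitOne();  // wait for cancellation (or termination) to finish
}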

Try It Out
Download the sample code from the Downloads tab. (This sample requires Visual Studio 2010 / .NET 4)
  1. Explore the sample, check out program.cs and CancellationScope.xaml
  2. Press Ctrl+F5 to compile and run the sample
  3. When it prompts you for a level try level 1 or 2. You will see that cancellation happens in reverse order of invocation (level 2 cancels before level 1).
  4. To see what happens when you have an unhandled exception thrown in the cancel handler, try level 3.
Test It Out

This sample also includes a unit test project CancelScope.Tests that you can study to see how we tested this sample application.

Posted under AppFabric because the former .NET Services was intended to include workflow features (but didn’t).

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Sinclair Schuller, CEO of Apprenda, explains in this recent 00:03:09 Windows Azure – Sinclair Schuller YouTube video why his company adopted Windows Azure:

Sinclair Schuller, CEO – Apprenda (Albany, NY). Apprenda offers [a] .NET application server for companies who are building applications to be delivered in the cloud. Windows Azure provides a reliable and scalable simple storage solution and thus it was a no-brainer for Apprenda to go with Windows Azure.

•• The Windows Azure Team presents seven live recordings of an all-day Windows Azure Firestarter event held on the Microsoft campus at Redmond, WA:

The cloud is everywhere and here at Microsoft we're flying high with our cloud computing release, Windows Azure. As most of you saw at the Professional Developers Conference, the reaction to Windows Azure has been nothing short of "wow" – and based on your feedback, we organized and recorded this special, all-day Windows Azure Firestarter event to help you take full advantage of the cloud. Maybe you've already watched a webcast, attended a recent MSDN Event on the topic, or done your own digging on Azure. Well, here's your chance to go even deeper. We start by revealing Microsoft's strategic vision for the cloud, and then offer an end-to-end view of the Windows Azure platform from a developer's perspective. We also talk about migrating your data and existing applications (regardless of platform) onto the cloud. We finish up with an open panel and lots of time to ask questions. This video is a recording of a live event.

These episodes showed up on almost all warez sites over the weekend. Get your original links here.

Suprotim Agarwal reported the Windows Azure Platform Training Kit Update on 6/18/2010:

Microsoft recently released the latest update to its FREE Windows Azure Platform Training Kit.

Quoted from the site “The Azure Services Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform including: Windows Azure, SQL Azure and AppFabric. The June release includes new and updated labs for Visual Studio 2010.”

Here is what is new in the kit:

  • Introduction to Windows Azure - VS2010 version
  • Introduction To SQL Azure - VS2010 version
  • Introduction to the Windows Azure Platform AppFabric Service Bus - VS2010 version
  • Introduction to Dallas - VS2010 version
  • Introduction to the Windows Azure Platform AppFabric Access Control Service - VS2010 version
  • Web Services and Identity in the Cloud
  • Exploring Windows Azure Storage VS2010 version + new Exercise: “Working with Drives”
  • Windows Azure Deployment VS2010 version + new Exercise: “Securing Windows Azure with SSL”
  • Minor fixes to presentations – mainly timelines, pricing, new features etc.

This training kit also contains Hands On Labs, Presentations and Videos, Demos, Samples and Tools.

Download the Windows Azure Platform Training Kit.

Larry Carvalho claims “Cloud computing hype being caused by cloud washing of existing solutions” in his Multi-Tenancy – Crucial Requirement for Cloud Computing Service Providers post of 6/17/2010:

A lot of vendors are announcing “cloud washed” solutions, which just adds to the hype around this nascent technology. It also adds to customer cynicism about what really makes a cloud solution. I see an increasing number of ISVs say they have a cloud-enabled solution. Sometimes it is just a press release that they have “announced” a cloud solution in partnership with a hosting provider. Under the covers, this means nothing more than the capability to host their application in the hosting company’s data center. This is the kind of solution that customers have been using for years, and it is fueling the skepticism about cloud computing among customers.

There are a number of characteristics that make up a cloud solution, which you can get from the NIST site. However, when your providers hype up agility, flexibility, security and other cloud computing benefits, ask them one question: what is your multi-tenancy capability? Multi-tenancy can vary from the single-instance/single-tenant model to multiple tenants sharing storage, applications and servers.

Business agility and cost savings can truly be gained from multi-tenancy. It is not easy to re-write applications to be multi-tenant capable. Cloud washing existing applications is not helping customers understand the value of cloud computing. Read between the lines when you are pitched a solution; your “hype-meter” should start frantically beeping if the offering is not multi-tenant capable!

Microsoft Program Manager Abu Obeida Bakhach wants you to Voice your opinion on .NET interoperability with Windows Communication Foundation (WCF), according to this 6/17/2010 post:


Who says Microsoft doesn’t listen? Here is a chance to voice your opinion and make .NET more interoperable and easier to use when dealing with other languages/runtimes through Web Services (WS-*).

The .NET Windows Communication Foundation (WCF) team is planning its next set of features and wants to hear about developers’ experiences. For that purpose, they have provided a quick, to-the-point survey for you to give developer-to-developer feedback.

Achieving interoperability between platforms should be easy and straightforward, right? We know it’s not always the case. So, go ahead and provide your feedback today on what is keeping you awake at night and what would make you happy.  We are early in the product cycle, but need your feedback by July 15th to truly make the impact we all want.

The survey is right here: http://mymfe.microsoft.com/WCF/Feedback.aspx?formID=283

If you have any questions on the survey, please contact Abu Obeida Bakhach, Interoperability Program Manager at abu.obeida@microsoft.com.

Ron Jacobs points to endpoint.tv - How to Do API Key Verification with a WCF WebHttp (REST) Service in this 6/17/2010 post:

In .NET 3.5, we created the REST Starter Kit as a way to get you up and running with RESTful services. Now that .NET 4 is out, people are asking how to do things like API Key Verification in .NET 4. In this episode, I'll walk you through a sample based on my blog post on the subject.
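
One common way to implement this (a sketch of the general approach, not necessarily Ron's exact sample) is a ServiceAuthorizationManager that validates an "apikey" query-string parameter against a known list; the manager is then registered via the serviceAuthorization behavior's serviceAuthorizationManagerType attribute:

public class ApiKeyAuthorizationManager : ServiceAuthorizationManager
{
    static readonly HashSet<string> ValidKeys = new HashSet<string>
    {
        "00000000-0000-0000-0000-000000000000"   // placeholder key
    };

    protected override bool CheckAccessCore(OperationContext operationContext)
    {
        // The full request URI (including the query string) is available on the Via property
        Uri via = operationContext.IncomingMessageProperties.Via;
        string key = System.Web.HttpUtility.ParseQueryString(via.Query)["apikey"];
        return key != null && ValidKeys.Contains(key);
    }
}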

Return to section navigation list> 

Windows Azure Infrastructure

Tim Clark and Margaret Dawson assert “Integration enables business process automation across the supply chain, demand chain, general operations, and more” in their Ten IT and Business Benefits of Cloud-Based Integration essay of 6/19/2010:

With today's global and distributed commerce, organizations of all sizes are having to collaborate and exchange information with a growing ecosystem of divisions, partners and customers. Most companies want to communicate electronically and in real time, but beyond email, managing the exchange of data, messages and documents can be challenging and expensive.

Traditional EDI, networks or point-to-point integration systems are not providing the interoperability, agility and real-time information exchange businesses need to compete. In addition, companies need to do more than merely exchange data; they need to integrate complete business processes, such as procurement, supply chain management, eCommerce, benefit claims processing, or logistics, to name just a few.

As with other technologies, integration solutions are moving to the cloud in order to provide this increased flexibility and complexity. Today, there are an increasing number of technology vendors giving customers a choice of traditional on-premise integration - where the company manages the connections, mapping and business processes itself - or cloud-based products with strong self-service or managed service support.

While the cloud may not be appropriate for every company or solution, it is an ideal platform for integration, as it enables seamless interaction and collaboration across communities and systems. From clear economic benefits to increased IT agility to real business impact, a cloud-based integration solution brings value across the IT and business aspects of the organization. Below we've outlined the top 10 IT and business benefits of conducting multi-enterprise integration in the cloud.

  1. Improved partner and customer relations and retention
  2. Increased revenue and margin
  3. Improved order accuracy
  4. Faster time-to-market
  5. Greater competitive advantage
  6. Reduced costs and capital expenditures (CapEx)
  7. Increased operational efficiencies and reduced manual processes, allowing ways to move headcount to more strategic projects.
  8. Extended investments in legacy applications and systems
  9. Aligning IT with business goals
  10. Scalability and flexibility …

Tim and Margaret then expand on each of these points with real-world examples.

Ryan Dunn and Steve Marx delivered Cloud Cover Episode 15 - Certificates and SSL, a 00:29:08 Channel9 video segment, on 6/18/2010:

Join Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow
In this episode:  

  • Catch up with the announcements made at TechEd North America.
  • Learn how certificates work in Windows Azure and how to enable SSL.
  • Discover a tip on uploading public key certificates to Windows Azure. 

Show Links:

Panagiotis Kefalidis recounts My great experience with Windows Azure customer support in this 6/18/2010 post:

Once again, Microsoft proved that it values its customers, whether big enterprises or small startups. We’re a small privately held business and I personally have a major role in it as I’m one of the founders. Recently, I’ve been giving some pretty nice presentations and a bunch of sessions for Microsoft Hellas about Windows Azure and cloud computing in general.

I was using the CTP account(s) I’ve had since PDC 08 and I had a lot of services running there from time to time, all for demo purposes. But with the 2nd commercial launch wave, Greece was included and I had to upgrade my subscription and start paying for it. I was OK with that, because an MSDN Premium subscription includes 750 hours/month, SQL Azure databases and other stuff for free. I went through the upgrade process from CTP to paid, everything went smoothly, and there I was waiting for my CTP account to switch to read-only mode and eventually “fade away”. During that process, I made a small mistake. I miscalculated my running instances. I actually missed some. That turned out to be a mistake that would cost me some serious money for showcase/marketing/demo projects running on Windows Azure.

About two weeks ago, I had an epiphany during the day: “Oh, crap… Did I turn that project off? How many instances do I have running?” I logged on to the billing portal and, sadly for me, I had been charged for about 4,500 hours because of the forgotten instances and my miscalculation. You see, I’d done a demo about switching between instance sizes and I had some instances running as large VMs. That’s four (4) times the price per hour.
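For context, Windows Azure compute charges accrue per deployed instance-hour, with larger sizes billed as multiples of a Small instance (a Large instance counts as 4x, as the post notes). The sketch below is only a back-of-the-envelope illustration of how forgotten demo deployments add up; the instance mix, sizes and durations are invented, not Panagiotis’s actual usage.

    # Back-of-the-envelope estimate of billable compute hours for forgotten
    # Windows Azure deployments. Illustrative only -- the instance list and
    # durations are assumptions, not actual billing data.

    # Relative compute-hour multipliers (Small = 1x, Large = 4x, as noted above).
    SIZE_MULTIPLIER = {"small": 1, "medium": 2, "large": 4, "extralarge": 8}

    # Hypothetical forgotten deployments: (role size, instance count, hours left running).
    forgotten = [
        ("large", 2, 24 * 14),   # a demo scaled up to 2 Large instances for two weeks
        ("small", 3, 24 * 14),   # plus 3 Small instances for the same period
    ]

    billable_hours = sum(SIZE_MULTIPLIER[size] * count * hours
                         for size, count, hours in forgotten)

    print(f"Billable small-instance hours: {billable_hours}")
    # 2*4*336 + 3*1*336 = 3696 -- forgotten instances add up fast.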

It was clearly my mistake and I had to pay for it (literally!). But then I tweeted about my bad luck to help others avoid the same mistake (the very thing I had been warning my clients about all this time), and some people from Microsoft got interested in my situation. I explained what happened and we ended up with a pretty good deal just 3 days after I tweeted. But that was an exception, so certainly DON’T count on it.

Bottom line: be careful and plan correctly. Mistakes do happen, but the more careful we are, the rarer they will be.

* I want to publicly say thank you to anyone who was involved in this and helped me sort things out so quickly.

Charlotte Dunlap offers an “analysis of the software giant's Azure cloud computing platform” as she explains Why Microsoft Windows Azure is a threat to Google in this 6/17/2010 article for Forbes.com:

image Microsoft’s new features, announced last week for its cloud computing platform, Azure, give developers the option to build applications in the cloud via Microsoft’s data centers rather than on a company’s own servers. Giving customers this sort of flexibility in IT and software development is important to customers looking for ways to reduce costs, bandwidth demands and management responsibilities. Flexibility in delivery models is also Microsoft’s key differentiator over cloud giant Google.

Cloud computing has become a key piece of an enterprise's IT strategy, typically used in a hybrid (cloud plus on-premise) model of computing that offers customers the best of both worlds: the ability to keep their data on-premise, while leveraging the cloud's accelerated software development speeds and lower costs by eliminating the need to invest in ongoing on-premise hardware and software. A common example of hybrid is being able to develop applications and test them in the cloud before releasing them onto internal networks.

This scenario gives Microsoft … a major advantage over cloud-only hosted service providers Google … and Amazon, one that creates great opportunities for Microsoft's broader partner ecosystem. Developers can use the same development tools, frameworks and execution environment for either cloud or on-premise applications. Developers can build a single application that leverages the cloud's scalability for transactional processing while supporting the security of on-premise data storage.

Microsoft has created a comprehensive services platform from a software developer's perspective. Users of Microsoft's technology, including .NET 4, Visual Studio and IntelliTrace, can tap hosted services to build applications that scale, addressing the problematic issue of traffic spikes so that customers don't need to invest in additional on-premise servers. Microsoft says it is mulling features in Windows Server or SQL Server that are not yet in Windows Azure and could be delivered through the cloud.

One piece of technology not included in Windows Azure is BizTalk Server. Microsoft chose to include the less robust service bus capabilities found in AppFabric instead. BizTalk is an essential ESB tool in Microsoft's portfolio that would have allowed customers to build integration-centric solutions in the cloud, taking advantage of the company's mature, on-premise integration customer base.

Recapping the Azure announcements: an updated version of Windows Azure Software Development Kit; support for .NET Framework 4; support for Visual Studio 2010's IntelliTrace feature; under SQL Azure, Microsoft now offers developers 50 GB (from 10 GB) of database storage; and Windows Azure Content Delivery Network is also now in production.

Yes, Microsoft is late to the party, as usual, but I expect it will have a major impact on the cloud computing's competitive landscape within the year. Since its beta launch in November 2008, the company says it has acquired 10,000 customers, a startling following considering Microsoft's lack of experience in the services arena. The company's success will come because cloud computing is making more sense to enterprises as a complementary form factor, and this latest distribution method is a natural progression for Microsoft's traditional development platform.

Charlotte Dunlap is an independent application infrastructure and security analyst.

<Return to section navigation list> 

Cloud Security and Governance

If you missed the June 15th webcast, “Retailers: Avoid PCI Burnout with a Security-focused Approach,” featuring Gartner VP and Distinguished Analyst Avivah Litan and Tripwire PCI Evangelist Ed Rarick, you can still catch the recording:

image In this presentation, Avivah and Ed discussed why retailers carry a heavy PCI load, their typical PCI approach, and a security-focused approach that could lighten that load.

The recorded webcast is now available on-demand at http://www.tripwire.com/register/?resourceId=19009. Please tune in to this webcast and share it with a colleague. …

<Return to section navigation list> 

Cloud Computing Events

•• salesforce.com announced that Cloudforce 2010 takes place 6/22/2010 at the San Jose Convention Center, 150 West San Carlos St., San Jose, CA 95113:

image The next generation of cloud computing is here, and it’s changing everything! It’s mobile, collaborative, and social, and it’s available today with Cloud 2. Find out how at this landmark industry event with the cloud leaders whose mobile, collaborative, and social innovations are driving the Cloud 2 revolution.

Join salesforce.com Chairman and CEO Marc Benioff, industry luminaries from CA, VMware, and BMC, and thousands of your peers to see how the next generation of cloud computing can transform your entire business. Choose from 18 targeted breakout sessions for sales, customer service, developers, IT executives, ISVs, and entrepreneurs. And test drive 50+ live demos of cutting-edge apps.

Lunch is included. And you can cap off your day with 1:1 networking with the industry’s top cloud computing experts, salesforce.com customers, and partners in our Cloud Expo.
Space is limited and there’s only a week to go, so register now for this free event.

Eucalyptus Systems, newScale, Inc. and rPath, Inc. will hold a Focus on IT Agility webinar on 6/24/2010 at 9:00 AM PDT:

Enterprise IT is under pressure to transform from bottleneck to business enabler. The rise of public cloud services such as Amazon EC2 has provided a clear example of what enterprise IT is expected to become: a simple, self-service, on-demand infrastructure provider.

IT organizations that fail to make this transformation will watch in vain as rogue workloads follow the path of least resistance to the public cloud.

Is hybrid cloud the answer? Attend this complimentary webinar to find out.

Topic: Focus on IT Agility
Date: Thursday, June 24, 2010
Time: 9:00 am PT / 12:00 noon ET / 5:00 pm UK
Click here to convert the Webinar time to your local time zone
Duration: 1 hour
Featured Presenters:

  • Cameron Haight, Gartner
  • Rich Wolski, Founder & CTO, Eucalyptus Systems
  • Scott Hammond, newScale, Inc.
  • Rodrigo Flores, Founder & CTO, newScale, Inc.
  • Erik Troan, Founder & CTO, rPath, Inc.

Join this webinar to learn how enterprise IT organizations can make this transformation, becoming on-demand service providers by combining a catalog of standard offerings available through self-service with automation and private cloud infrastructure to achieve true IT agility.

Don’t miss this opportunity to learn about:

  • The new tool chain for IT agility: self-service, automation and elasticity
  • Frameworks for transformation: real-world best practices for success
  • Case study examples of leading-edge enterprise IT organizations
  • Hybrid compute infrastructures: how to blend physical, virtual and cloud

Register Today!

Elizabeth White reported Microsoft’s Bart Robertson to Present at Cloud Expo Silicon Valley on 6/19/2010:

image The real-world experiences based on Microsoft's multi-tenant Windows Azure-based social media web services used by www.microsoft.com/Showcase will be discussed in this case study.

In his session at the 7th International Cloud Expo, Bart Robertson, an architect with Microsoft Corporation, will discuss the differences and lessons learned in the following categories: Deployment, Monitoring, Availability, Scalability, Costs & Return on Investment, and Future Plans.

Bart Robertson is an architect with Microsoft Corporation, where he is responsible for architecting and implementing highly scalable, on-premise and cloud-based software plus services solutions. During his 18-year career at Microsoft, he has held leadership and management roles in IT, product development, sales, marketing, and services. Robertson has extensive experience in software, high-tech, and retail, with a heavy focus on high-volume, low-latency systems. He blogs at http://blogs.msdn.com/bartr.

The growth and success of Cloud Computing will be on display at the upcoming Cloud Expo conferences and exhibitions in Prague June 21-22 and Santa Clara November 1-4.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Audrey Watters reported U.S. Department of Energy Asks, Is Cloud Computing Fast Enough for Science? on 6/19/2010 for the ReadWriteCloud blog:

doe_logo.jpgWith cloud computing gaining acceptance in the business world, the U.S. Department of Energy wants to know if cloud computing can also meet the needs of scientific computing. The National Energy Research Scientific Computing Center (NERSC) has launched Magellan, a cloud computing testbed to explore this question, with facilities that will test the effectiveness of cloud computing for scientific projects.

Magellan is built to meet the special requirements of scientific computing using technology and tool sets unavailable in commercial clouds, including high-bin processors, high-bandwidth parallel file systems, high-capacity data archives, and pre-installed scientific applications and libraries.

But will cloud computing be fast enough?

According to a report in Federal Computing Week, preliminary results suggest that commercially available clouds operating Message Passing Interface (MPI) applications such as weather calculations suffer in performance. "For the more traditional MPI applications there were significant slowdowns, over a factor of 10," said Kathy Yelick, NERSC division director.

Performance isn't the only factor the Magellan cloud is testing. It's also looking at cloud computing's efficiency and its suitability for various research projects. Magellan is a two-year $32 million project involving about 3000 NERSC scientists. The goal, in part, is to help inform the DOE, scientists, and the tech industry what is needed in terms of configuration and management of scientific cloud computing resources.

Royal Pingdom posted Exploring the software behind Facebook, the world’s largest site on 6/19/2010:

At the scale that Facebook operates, a lot of traditional approaches to serving web content break down or simply aren’t practical. The challenge for Facebook’s engineers has been to keep the site up and running smoothly in spite of handling close to half a billion active users. This article takes a look at some of the software and techniques they use to accomplish that.

Facebook’s scaling challenge

Before we get into the details, here are a few factoids to give you an idea of the scaling challenge that Facebook has to deal with:

  • Facebook serves 570 billion page views per month (according to Google Ad Planner).
  • There are more photos on Facebook than all other photo sites combined (including sites like Flickr).
  • More than 3 billion photos are uploaded every month.
  • Facebook’s systems serve 1.2 million photos per second. This doesn’t include the images served by Facebook’s CDN.
  • More than 25 billion pieces of content (status updates, comments, etc) are shared every month.
  • Facebook has more than 30,000 servers (and this number is from last year!)
Software that helps Facebook scale

In some ways Facebook is still a LAMP site (kind of), but it has had to change and extend its operation to incorporate a lot of other elements and services, and modify the approach to existing ones.

For example:

  • Facebook still uses PHP, but it has built a compiler for it so it can be turned into native code on its web servers, thus boosting performance.
  • Facebook uses Linux, but has optimized it for its own purposes (especially in terms of network throughput).
  • Facebook uses MySQL, but primarily as a key-value persistent storage, moving joins and logic onto the web servers since optimizations are easier to perform there (on the “other side” of the Memcached layer).

Then there are the custom-written systems, like Haystack, a highly scalable object store used to serve Facebook’s immense amount of photos, or Scribe, a logging system that can operate at the scale of Facebook (which is far from trivial).

But enough of that. Let’s present (some of) the software that Facebook uses to provide us all with the world’s largest social network site.

Memcached

Memcached is by now one of the most famous pieces of software on the internet. It’s a distributed memory caching system which Facebook (and a ton of other sites) use as a caching layer between the web servers and MySQL servers (since database access is relatively slow). Through the years, Facebook has made a ton of optimizations to Memcached and the surrounding software (like optimizing the network stack).

Facebook runs thousands of Memcached servers with tens of terabytes of cached data at any one point in time. It is likely the world’s largest Memcached installation.
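The layering described above is the classic cache-aside pattern: try memcached first, fall back to MySQL, then repopulate the cache. Below is a minimal sketch of that pattern, assuming the python-memcached and PyMySQL client libraries; the host addresses, key scheme, schema and TTL are illustrative assumptions, not Facebook’s implementation.

    # Minimal cache-aside sketch of the web tier / memcached / MySQL layering
    # described above. Not Facebook's code -- hosts, key scheme and TTL are
    # illustrative assumptions.
    import json
    import memcache   # python-memcached client
    import pymysql

    mc = memcache.Client(["10.0.0.1:11211"], debug=0)
    db = pymysql.connect(host="10.0.0.2", user="app", password="secret", db="social")

    def get_user(user_id):
        key = "user:%d" % user_id
        cached = mc.get(key)                 # 1. try the cache first
        if cached is not None:
            return json.loads(cached)

        with db.cursor(pymysql.cursors.DictCursor) as cur:   # 2. fall back to MySQL,
            cur.execute("SELECT id, name, email FROM users WHERE id=%s", (user_id,))
            row = cur.fetchone()             #    used here as simple keyed storage

        if row is not None:
            mc.set(key, json.dumps(row), time=300)   # 3. repopulate the cache (5 min TTL)
        return row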

HipHop for PHP

PHP, being a scripting language, is relatively slow when compared to code that runs natively on a server. HipHop converts PHP into C++ code which can then be compiled for better performance. This has allowed Facebook to get much more out of its web servers since Facebook relies heavily on PHP to serve content.

A small team of engineers (initially just three of them) at Facebook spent 18 months developing HipHop, and it is now live in production.

Haystack

Haystack is Facebook’s high-performance photo storage/retrieval system (strictly speaking, Haystack is an object store, so it doesn’t necessarily have to store photos). It has a ton of work to do; there are more than 20 billion uploaded photos on Facebook, and each one is saved in four different resolutions, resulting in more than 80 billion photos.

And it’s not just about being able to handle billions of photos, performance is critical. As we mentioned previously, Facebook serves around 1.2 million photos per second, a number which doesn’t include images served by Facebook’s CDN. That’s a staggering number.

BigPipe

BigPipe is a dynamic web page serving system that Facebook has developed. Facebook uses it to serve each web page in sections (called “pagelets”) for optimal performance.

For example, the chat window is retrieved separately, the news feed is retrieved separately, and so on. These pagelets can be retrieved in parallel, which is where the performance gain comes in, and it also gives users a site that works even if some part of it would be deactivated or broken.
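To make the pagelet idea concrete, here is a small sketch of BigPipe-style streaming: flush the page skeleton immediately, then emit each section as soon as it is ready for client-side code to inject. This is only an illustration of the technique; the pagelet names, markup and the injectPagelet() client function are invented, and Facebook’s actual BigPipe implementation differs.

    # Illustrative pagelet-style streaming (the BigPipe idea, not Facebook's code).
    # Pagelet names, markup and the render functions are invented for the sketch.
    import json
    import time

    def render_chat():        # stand-ins for independently generated page sections
        time.sleep(0.05)
        return "<ul id='chat'>...</ul>"

    def render_news_feed():
        time.sleep(0.20)
        return "<div id='feed'>...</div>"

    def render_page():
        # 1. Flush the skeleton right away so the browser can start rendering.
        yield "<html><body><div id='chat-slot'></div><div id='feed-slot'></div>"

        # 2. Stream each pagelet as soon as it is ready; client-side JS would
        #    inject the payload into the matching placeholder div.
        for slot, render in (("chat-slot", render_chat), ("feed-slot", render_news_feed)):
            payload = json.dumps({"slot": slot, "html": render()})
            yield "<script>injectPagelet(%s);</script>" % payload

        yield "</body></html>"

    if __name__ == "__main__":
        for chunk in render_page():
            print(chunk)   # a real server would flush each chunk to the socket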

Cassandra

Cassandra is a distributed storage system with no single point of failure. It’s one of the poster children for the NoSQL movement and has been made open source (it’s even become an Apache project). Facebook uses it for its Inbox search.

Other than Facebook, a number of other services use it, for example Digg. We’re even considering some uses for it here at Pingdom.
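The “no single point of failure” property comes from distributing and replicating keys across a ring of nodes. The sketch below shows the consistent-hashing-with-replication idea in a deliberately simplified form; real Cassandra adds partitioners, replication strategies and tunable consistency, and the node addresses and replication factor here are assumptions.

    # Simplified sketch of consistent-hashing key placement with replication --
    # the basic idea behind Cassandra-style "no single point of failure" storage.
    import bisect
    import hashlib

    NODES = ["10.0.1.1", "10.0.1.2", "10.0.1.3", "10.0.1.4"]
    REPLICATION_FACTOR = 3

    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    # Place each node on a ring of hash values.
    ring = sorted((_hash(node), node) for node in NODES)
    tokens = [token for token, _ in ring]

    def replicas_for(key):
        """Return the nodes responsible for a key: the first node clockwise
        from the key's hash, plus the next REPLICATION_FACTOR - 1 nodes."""
        start = bisect.bisect(tokens, _hash(key)) % len(ring)
        return [ring[(start + i) % len(ring)][1] for i in range(REPLICATION_FACTOR)]

    print(replicas_for("inbox:12345"))   # any one of these nodes can fail and
                                         # the key is still served by the others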

Scribe

Scribe is a flexible logging system that Facebook uses for a multitude of purposes internally. It’s been built to be able to handle logging at the scale of Facebook, and automatically handles new logging categories as they show up (Facebook has hundreds).

Hadoop and Hive

Hadoop is an open source map-reduce implementation that makes it possible to perform calculations on massive amounts of data. Facebook uses this for data analysis (and as we all know, Facebook has massive amounts of data). Hive originated from within Facebook, and makes it possible to use SQL queries against Hadoop, making it easier for non-programmers to use.

Both Hadoop and Hive are open source (Apache projects) and are used by a number of big services, for example Yahoo and Twitter.

Thrift

Facebook uses several different languages for its different services. PHP is used for the front-end, Erlang is used for Chat, Java and C++ are also used in several places (and perhaps other languages as well). Thrift is an internally developed cross-language framework that ties all of these different languages together, making it possible for them to talk to each other. This has made it much easier for Facebook to keep up its cross-language development.

Facebook has made Thrift open source and support for even more languages has been added.

Varnish

Varnish is an HTTP accelerator which can act as a load balancer and also cache content which can then be served lightning-fast.

Facebook uses Varnish to serve photos and profile pictures, handling billions of requests every day. Like almost everything Facebook uses, Varnish is open source.

James Hamilton comments on the preceding Royal Pingdom summary in his Facebook Software Use post of 6/20/2010:

image… The article was vague on memcached usage saying only “Terrabytes”. I’m pretty interested in memcached and Facebook is, by far, the largest user, so I periodically check their growth rate. They now have 28 terabytes of memcached data behind 800 servers. See Scaling memcached at Facebook for more detail.

The mammoth memcached fleet at Facebook has had me wondering for years how close the cache is to the entire data store. If you factor out photos and other large objects, how big is the entire remaining user database? Today the design is memcached insulating the fleet of database servers. What is the aggregate memory size of the memcached and database fleet? Would it be cheaper to store the entire database 2-way redundant in memory, with changes logged to support recovery in the event of a two-server loss?

Facebook is very close, if not already able, to store the entire data store (minus large objects) in memory, and within a factor of two of being able to store it in memory twice and have memcached be the primary copy, completely omitting the database tier. It would be a fun project.
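A quick back-of-the-envelope calculation, using only the 28 TB / 800 server figures cited above (everything about the database tier remains unknown):

    # Rough numbers for the question above, using only the figures James
    # Hamilton cites (28 TB of memcached data behind 800 servers).
    CACHE_TB = 28
    CACHE_SERVERS = 800

    per_server_gb = CACHE_TB * 1024 / CACHE_SERVERS
    print(f"~{per_server_gb:.0f} GB of cached data per memcached server")   # ~36 GB

    # Storing the (photo-free) data set 2-way redundant in RAM, as he
    # speculates, would need at least:
    print(f"~{2 * CACHE_TB} TB of aggregate memory, if the cache already "
          f"approximates the full data set")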

Mary Jo Foley contends Microsoft Research has its head in the clouds, too in this 6/18/2010 post to her All About Microsoft ZDNet blog:

image It’s not just the Microsoft product groups that are charging ahead with getting Microsoft “All In” with the cloud. Microsoft Research seems to be stepping up its work on a variety of cloud-related projects, and plans to share details about several of them during the 2010 Usenix Federated Conferences shows in Boston the week of June 21.

Here are just a few of the new projects from Microsoft Research site that are on the Usenix 2010 docket:

Stout: An adaptive interface to scalable cloud storage
Microsoft Research and UC San Diego

Stout provides distributed congestion control for client requests. Its goal is to improve the performance of applications that are being delivered as scalable multi-tier services deployed in large data centers. These kinds of services often get bogged down with delays and dropped requests under high workloads. Stout is designed to help these kinds of apps adapt to variations in storage-layer performance. “Under heavy workloads, Stout automatically batches the application’s requests together before sending them to the store, resulting in higher throughput and preventing queuing delay,” a white paper on the technology explains. …
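As a rough illustration of the batching technique the paper describes (not Stout’s actual algorithm), the sketch below queues requests and grows or shrinks its batching window based on observed store latency; the adaptation rule, thresholds and the send_batch() callable are assumptions.

    # Generic sketch of adaptive request batching. This is NOT Stout's
    # algorithm -- the adaptation rule and thresholds are illustrative.
    import time
    from collections import deque

    class AdaptiveBatcher:
        def __init__(self, send_batch, min_delay=0.001, max_delay=0.050):
            self.send_batch = send_batch        # callable that writes a list of requests
            self.delay = min_delay              # current batching window (seconds)
            self.min_delay, self.max_delay = min_delay, max_delay
            self.pending = deque()

        def submit(self, request):
            self.pending.append(request)

        def flush(self):
            if not self.pending:
                return
            batch = list(self.pending)
            self.pending.clear()

            start = time.monotonic()
            self.send_batch(batch)
            latency = time.monotonic() - start

            # Crude congestion control: back off (batch more) when the store is
            # slow, shrink the window (batch less) when it responds quickly.
            if latency > 2 * self.delay:
                self.delay = min(self.delay * 2, self.max_delay)
            else:
                self.delay = max(self.delay / 2, self.min_delay)

        def run_once(self):
            time.sleep(self.delay)   # wait one batching window, then flush
            self.flush()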

Mary Jo continues with descriptions of and links to the following projects:

  • The Utility Coprocessor: Massively Parallel Computation from the Coffee Shop
    Microsoft Research
  • Seawall: Performance Isolation for Cloud Datacenter Networks
    Microsoft Research and Cornell University researchers
  • CloudCmp: Shopping for a Cloud Made Easy
    Microsoft Research and Duke University researchers
  • Distributed Systems Meet Economics: Pricing in the Cloud
    Microsoft Research, Shanghai Jiao Tong University and Peking University

Aaron Mannenbach asks Still waiting for QuickBase to come back online? Try TrackVia: 99.99% Uptime Record for +5 Years on 6/16/2010:

It’s never a good situation when one of our competitors goes down – especially for +13 hours!

It’s not good for the customers who need access to their applications and data to do their jobs. It’s not good for the overall cloud industry. Just as ‘the cloud’ is becoming more widely accepted and adopted, a significant and painful outage from one of the larger cloud service providers – Intuit – occurs. The impact of having an outage of this length cannot be under-estimated. Certainly, many businesses have built mission critical applications that could not be accessed, essentially erasing more than a full day’s worth of productivity.

I don’t know what specifically drove QuickBase’s outage. What I do know is that it is critical to ask key questions of your cloud provider. Some of these include:

  • What has your provider done to minimize the likelihood of an outage?
  • How is your provider’s solution architected to give you comfort that you can quickly and securely access your data – whenever you need it?
  • What type of service level commitment does your provider deliver?

TrackVia has delivered continuous 99.99% availability to our 1000 customers for +5 years. The team has accomplished this feat through a number of critical factors:

  • Proven LAMP-stack components. TrackVia’s product suite is built on the most widely-used open-source web-based application stack in the world today — LAMP (Linux, Apache, MySQL, and Perl). This has enabled TrackVia to develop a solution with components that are well-documented, supported, scalable, and dependable.
  • Use of a truly relational back-end. Some of our competitors ‘emulate’ relational capabilities by placing table data into memory. This works well with a small number of records, but as you grow and scale, using ‘cache’ to emulate a relational construct will impact response time, resource utilization, and ultimately usability and reliability. TrackVia uses MySQL, the pre-eminent relational database for web-based businesses, as the back-end for storing its tables and data. By using MySQL, TrackVia is able to deliver true relational capabilities – that perform with speedy response times and extremely high levels of reliability.
  • Shard-based architecture. TrackVia uses a horizontally scalable, shard-based architecture. In plain English, this means TrackVia will be FAST and PERFORMANT whether you are working with 1K records or 10M records, and whether TrackVia is supporting 100 customers or 15K customers. To support additional load (whether users or records), TrackVia simply needs to add another application server or database server to the back end to continue delivering its services at speed and scale. The addition of these ‘nodes’ is done behind the scenes, with zero performance or downtime impact to our customers. (A minimal routing sketch follows this list.)
  • Redundant components at every layer of the delivery stack. Our operations team has ensured there are no single points of failure in TrackVia’s production stack. We have immediate fail-over elements at every layer of the delivery stack, including the firewalls, routers, switches, network connections, app servers, file servers, and database servers. In the event that any single component goes down, a back-up kicks in so that your service remains up.
  • SAS 70 Type II Hosting Facility. This is table stakes, but the fact is your data is stored on servers secured in their own separate cage, in a data center guarded by armed security officers and monitored 24×7. The data center has redundant coolers and diesel generators, has blast walls, and is surrounded by a dry moat.
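As a rough illustration of how a shard-based, horizontally scalable design can route customers to database nodes (this is not TrackVia’s implementation; the directory approach, DSNs and assignment rule are assumptions):

    # Minimal sketch of directory-based shard routing -- one common way a
    # "horizontally scalable, shard-based architecture" is implemented.
    # Illustrative only; the DSNs and assignment rule are assumptions.

    SHARDS = {
        "shard-1": "mysql://10.1.0.1/trackvia",
        "shard-2": "mysql://10.1.0.2/trackvia",
    }
    customer_directory = {}          # customer_id -> shard name
    shard_load = {name: 0 for name in SHARDS}

    def assign_customer(customer_id):
        """New customers land on the least-loaded shard; the mapping is kept
        in a small directory so shards can be added without rehashing."""
        shard = min(shard_load, key=shard_load.get)
        customer_directory[customer_id] = shard
        shard_load[shard] += 1
        return shard

    def dsn_for(customer_id):
        """Route a request to the database server holding this customer's tables."""
        return SHARDS[customer_directory[customer_id]]

    def add_shard(name, dsn):
        """Scaling out = registering one more database server; existing
        customers stay put, new ones land on the new shard."""
        SHARDS[name] = dsn
        shard_load[name] = 0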

TrackVia isn’t perfect. If we were, we’d be touting a 100% Uptime Record. In fact, several months ago we experienced performance problems that affected a subset of our customers. Our team proactively reached out to every affected customer, notified them of the impact, and promptly provided a service credit to each and every customer. That’s the type of commitment we strive to deliver to our customers, and that every user of the cloud should expect of their providers.

QuickBase Customers: While you’re waiting for your QuickBase apps to come back online, come check TrackVia out. We have a 14-day free trial and a super easy QuickBase migration path.

TrackVia’s Architecture:

Sydney Muntaneau reported Dabble DB Acquired by Twitter, TrackVia a Compelling Alternative on June 11, 2010:

Dabble DB was acquired by Twitter yesterday (http://dabbledb.com/), which is exciting news. Dabble DB customers have 60 days to migrate to another platform – not so exciting. Ouch.

While this is a painful development for Dabble DB customers, the good news is that free or low-cost migration offers are sprouting up like wildfire:

  • QuickBase – 3 months free!
  • Zoho – free migration
  • Caspio – 1st month value-package, free migration consultation, training, and support

We can’t be happier that the players in our space have stepped in to provide Dabble customers with a variety of soft-landing options. That’s great.

TrackVia is also an amazing option – worthy of serious consideration. Just as Dabble DB positioned itself as “software for people, not programmers”, TrackVia positions itself as a Cloud Database and Application Platform designed for Business People. Significantly, we also tout being secure and performant enough to be championed by Internal IT. We have a powerful, rich feature set that is intuitive and easy-to-use. We deliver 99.99% availability. Our proven LAMP-stack architecture ensures fast, scalable and reliable performance. We focus on data security with capabilities that include granular user access controls, change history, audit trails, and comprehensive click forensics. Our product pros are readily accessible via phone, email, and our forum.

This type of enterprise-class capability is non-trivial to build. It is more challenging to deliver. TrackVia has delivered – every day, every hour for 5+ years.

Dabble DB customers:

  • Come join over 1000 TrackVia customers that span a wide array of industry verticals.
  • We have value-based pricing plans (starting at only $99/mo) that meet virtually any budget. You can try us out with a no-obligation, 14-day free trial at www.trackvia.com.
  • Feel free to reach out with any questions you may have about TrackVia, the cloud database market, or the pros/cons of the migration alternatives you might be considering.

We’re all ears, and we’re ready to help.

I reviewed DabbleDB’s offering more than four years ago in my Dabble DB: The New Look in Web Databases post of 3/16/2006, which has some formatting problems. DabbleDB went live on 3/24/2006.

<Return to section navigation list> 
