Saturday, January 29, 2011

Windows Azure and Cloud Computing Posts for 1/29/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Matthew Sorvaag (@matthewsorvaag) described Migrating a SQL Server Database to SQL Azure with George Huey’s SQL Azure Migration Wizard on 1/29/2011:

I have had a number of interesting conversations with both clients and colleagues about the feasibility of Windows and SQL Azure and the whole “cloud” concept. Everyone always asks whether or not it is appropriate to use for our clients. I don’t believe there is a straight answer; it depends on your clients’ requirements around things such as where the data is hosted, how support can be obtained, backups, redundancy, SLAs, cost, etc.

Yesterday I had a similar conversation with another one of my colleagues who is concerned about the additional latency between Australia and the USA or Singapore, so we decided to initiate an experiment of migrating an existing application that already runs in a small local Australian public cloud to Windows and SQL Azure.

The aim of this experiment is not to prove that there is additional latency between Australia and the USA or Singapore (Southeast Asia), because there is, and it can easily be proven by tracing the network route to the destination, or performing a simple ping test. What we are trying to determine is whether or not the average user of a web based business application will actually notice the additional latency or not, and if the additional latency causes any issues with their day to day use of the application.

There are a number of simple code enhancements that could be made, such as using a Content Delivery Network and combining CSS and JS files, but these have not been applied for this test. File uploading will also be ignored, as Windows Azure does not allow you to write to disk by default.

In part one of this three-part series, we are going to be migrating the SQL Server 2008 R2 database schema and data to SQL Azure.

We are going to be deploying to both the USA and Singapore to determine whether or not there are any noticeable differences between these two geographically dispersed locations using different internet service providers, and whether or not we can use these locations for production and/or redundancy.

Generally the latency between Australia and Singapore is lower if you are with a quality ISP such as Internode; however, if you are with a budget ISP, your internet traffic will more than likely reach Singapore via the USA because it is both easier and more cost effective for the ISP.

We are going to be using SQL Azure Migration Wizard v3.5.2 (available from CodePlex) to migrate the SQL Server database from a local machine to SQL Azure.


Fig 1. Microsoft Azure Data Centre Locations

Getting Started with the SQL Server Database Migration
The steps that I followed to migrate the database are detailed below:

Confirm that you have access to SQL Azure and know the username and password – this is required to create a new SQL Azure database and migrate the content.

If you do not already have a SQL Azure Instance available, create a new instance and add your source IP address as an exception in the firewall rules, otherwise you will be blocked by the Azure firewalls and not be able to connect to SQL Azure.
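
If you want to double-check connectivity before running the wizard, a few lines of ADO.NET are enough. The sketch below is only illustrative; the server name, database and credentials are placeholders for your own values.

using System;
using System.Data.SqlClient;

class SqlAzureConnectivityCheck
{
    static void Main()
    {
        // Placeholder values - replace with your own SQL Azure server and credentials.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=master;" +
            "User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            Console.WriteLine("Connected to {0}", connection.DataSource);
        }
    }
}

If the firewall rule is missing or wrong, this is where you will see the connection fail, long before the wizard starts scripting objects.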

Run the SQL Azure Migration Wizard to Analyse and Migrate the database


Fig 2. Select the option to “Analyze and Migrate” your SQL Database.


Fig 3. Connect to the local SQL Server Database that you would like to migrate to SQL Azure.


Fig 4. Confirm that you have selected the correct source database and click “Next”.


Fig 5. Select the database objects that you would like to migrate to SQL Azure. In this case I am migrating everything, so I have selected “Script all database objects”. This option will script all database objects that are compatible with SQL Azure.


Fig 6. Confirming the database objects that are going to be scripted.


Fig 7. Confirm that you are ready to generate the SQL Script and BCP files that will be used to create your SQL Azure database.


Fig 8. The SQL Azure Migration Wizard has finished and generated two types of output files:

  1. A script file to create the SQL Azure database and
  2. BCP Files containing the data to be migrated – see the images below


Fig 9. The output BCP files in the file system


Fig 10. The generated SQL script that will be executed against the SQL Azure database.


Fig 11. Connect to SQL Azure to create the database and begin the data migration.


Fig 12. Select to “Create Database” option to create a new SQL Azure database. The script files above will be used to generate the new database.


Fig 13. Enter the name for the new database.


Fig 14. Confirm the creation of the SQL Azure database – this will initiate the database migration and data migration.


Fig 15. The SQL Azure Migration Wizard begins uploading BCP files and executing the scripts against SQL Azure.


Fig 16. The wizard has completed, but the “Next” button is not available – this appears to be the expected behaviour. This is because all the script files have already been executed and there is no further action required.

As a precautionary measure, you should scroll through the output window in Fig 16 to make sure that there is no red text. Red text indicates there was an issue and the script may have failed. If you find some red text in the output window, you should manually check the status of the highlighted items and make sure that they have been migrated successfully.

You can use SQL Server 2008 R2’s Management Studio to connect to SQL Azure and confirm the database status.
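
If you prefer to script that final check, the short sketch below (again with placeholder connection strings) simply compares user-table counts between the source database and the migrated SQL Azure database:

using System;
using System.Data.SqlClient;

class MigrationCheck
{
    static int CountUserTables(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COUNT(*) FROM sys.objects WHERE type = 'U'", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }

    static void Main()
    {
        // Placeholder connection strings - replace with your own source and SQL Azure values.
        string source = "Server=.;Database=MyDatabase;Integrated Security=True;";
        string target = "Server=tcp:yourserver.database.windows.net;Database=MyDatabase;" +
                        "User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;";

        Console.WriteLine("Source tables:    {0}", CountUserTables(source));
        Console.WriteLine("SQL Azure tables: {0}", CountUserTables(target));
    }
}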

In the next post we are going to look at what is involved in migrating the ASP.NET MVC 2 based web tier to Windows Azure.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Hosk described Dynamics CRM 2011 Discovery and Web Service Urls in a 1/29/2011 post to the Crm Business blog:

I am starting to write code for CRM 2011 and a problem I kept having was forgetting the darn WCF URLs to connect to CRM 2011.

Then I found this webpage which has them on. For some reason it’s always difficult to find them in the SDK. I have also created a text file with these URLs, plus my on-premise URLs, so I have them at hand. I thought I would create this post in case you are like me and keep forgetting to whack them into a text file.

Discovery and Web Service Urls

For CRM On-premises customers:
http://{server}/XRMServices/2011/Discovery.svc for the Discovery service endpoint
http://{server}/{OrgName}/XRMServices/2011/Organization.svc for the Organization Service endpoint (SOAP)

http://{server}/{OrgName}/XRMServices/2011/OrganizationData.svc for the Organization Data service endpoint (OData REST)

For CRM Online customers:

The following URLs should be used to access the discovery service (use the appropriate URL for your location):

https://dev.crm.dynamics.com/XRMServices/2011/Discovery.svc (North America)

https://dev.crm4.dynamics.com/XRMServices/2011/Discovery.svc (EMEA)

https://dev.crm5.dynamics.com/XRMServices/2011/Discovery.svc (APAC)

The following URLs should be used to access the Organization service (SOAP endpoint):

https://{Organization Name}.api.crm.dynamics.com/XrmServices/2011/Organization.svc (North America)
https://{Organization Name}.api.crm4.dynamics.com/XrmServices/2011/Organization.svc (EMEA)
https://{Organization Name}.api.crm5.dynamics.com/XrmServices/2011/Organization.svc (APAC)

Where {Organization Name} refers to the Organization that you specify in the URL when accessing the Web application. For example, for Contoso.crm.dynamics.com, the {Organization Name} is Contoso.

The following URLs should be used to access the Organization Data service (OData REST endpoint):

https://{Organization Name}.api.crm.dynamics.com/XrmServices/2011/OrganizationData.svc (North America)
https://{Organization Name}.api.crm4.dynamics.com/XrmServices/2011/OrganizationData.svc (EMEA)
https://{Organization Name}.api.crm5.dynamics.com/XrmServices/2011/OrganizationData.svc (APAC)
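
If you keep rebuilding these addresses by hand, a tiny helper like the hypothetical one below can format them from the organization name and the data-centre prefix ("crm" for North America, "crm4" for EMEA, "crm5" for APAC). The class and method names are my own and are not part of the CRM SDK.

// Hypothetical helper for composing the CRM Online 2011 endpoint URLs.
public static class CrmOnlineUrls
{
    // crmSuffix is "crm" (North America), "crm4" (EMEA) or "crm5" (APAC).
    public static string Discovery(string crmSuffix)
    {
        return string.Format(
            "https://dev.{0}.dynamics.com/XRMServices/2011/Discovery.svc", crmSuffix);
    }

    public static string Organization(string organizationName, string crmSuffix)
    {
        return string.Format(
            "https://{0}.api.{1}.dynamics.com/XrmServices/2011/Organization.svc",
            organizationName, crmSuffix);
    }

    public static string OrganizationData(string organizationName, string crmSuffix)
    {
        return string.Format(
            "https://{0}.api.{1}.dynamics.com/XrmServices/2011/OrganizationData.svc",
            organizationName, crmSuffix);
    }
}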


Mike Ormond posted OData and Windows Phone 7 on 1/28/2011:

Last week at the Tech Days Online Conference, I did a quick WP7 demo that showed a simple app connecting to the Twitpic OData feed and pulling back a set of images for a particular user. I’m a newbie to OData and I must admit I found the process of figuring out how to achieve this relatively simple task to be quite frustrating, primarily because there’s so much “stale” content out there. The WP7 OData story has evolved rapidly in a relatively short time and what worked 6 months ago simply doesn’t work today. I thought it might be worth a walk-through post so others new to OData can at least get something simple working…

Get the libraries


First up, you’ll want the OData client library for WP7 (also linked from the Developers page on the OData site along with many other client libraries). You can build this solution and grab the generated WP7 binaries. Or, perhaps easier (as you’re going to need it anyway), you can just grab the binaries themselves (check the Other Available Downloads section). I say you’ll need this anyway as you’re going to want the code gen tool – more of this in a mo.

So, in summary, get the ODataClient_BinariesAndCodeGenToolForWinPhone (direct download link) and optionally the ODataNetFx4_SL4_WinPhone7_Client (which contains the source). There’s also a sample app you can download ODataClient_WinPhone7SampleApp if you’re so inclined. Unblock and unzip to a suitable location.

Generate a proxy

Now you’ve got the libraries etc, we can move on to creating a proxy for the OData service. I’m going to use the Twitpic service – its OData feed is at http://odata.twitpic.com. You can hit this in a browser to get an idea of the “shape” of the feed (turn off feed reading view in IE: Internet Options –> Content –> Feeds and Web Slices).

My first mistake was assuming that I could use the standard .NET 4.0 DataSvcUtil tool (which you’ll find in \Windows\Microsoft.NET\Framework\v4.0.30319) to build the OData proxy. Instead you have to use the tool that comes as part of the ODataClient_BinariesAndCodeGenToolForWinPhone download.

Open a command prompt in the folder where you unzipped the file – you’ll find a WP7-specific version of DataSvcUtil in there. In order to generate the proxy classes, run the following command:

DataSvcUtil /out:"twitpic.cs" /uri:"http://odata.twitpic.com" /Version:2.0 /DataServiceCollection

This will generate a file called twitpic.cs containing the necessary data service proxy classes which we’ll include in our WP7 app.

Create a Windows Phone application

In Visual Studio, create a new Windows Phone Application.


Add a reference to the OData client library assembly you downloaded earlier.


Add the twitpic.cs file we created earlier to the project (in Visual Studio – Add Existing Item – Shift+Alt+A)


Add some simple UI

Let’s start with something super-simple to check things are working. We’ll need some UI. In MainPage.xaml, modify the ContentPanel Grid XAML to be as follows:

<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="*" />
        <ColumnDefinition Width="Auto" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
        <RowDefinition Height="*" />
        <RowDefinition Height="Auto" />
    </Grid.RowDefinitions>
    <ListBox Grid.ColumnSpan="2"
             ItemsSource="{Binding}">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding ShortId}" />
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>
    <TextBox Name="textBox1" Text="ZDNet" Grid.Row="1" />
    <Button Content="Go"
            Grid.Column="1" Grid.Row="1"
            Name="button1"
            Click="button1_Click" />
</Grid>

This adds a ListBox with a simple ItemTemplate that displays the ShortId of each item retrieved by our query. We can inspect the structure of the response either by looking through the twitpic.cs proxy code or by exploring the OData feed itself. For the query we’ll use the contents of the TextBox (initially set to a known Twitpic username) as the sole parameter. The query will be initiated by pressing the Button.

[Sorry, I haven’t renamed anything and I’ve embedded templates in controls – I am a bad, bad person]

And a few lines of code

In the code-behind (MainPage.xaml.cs) we can query the feed and bind it to the Listbox in just a few lines of code (if you cut and paste this you’ll have to resolve a couple of namespaces):

private void button1_Click(object sender, System.Windows.RoutedEventArgs e)
{
    DataServiceCollection<TwitpicOData.Model.Entities.Image> images;

    TwitpicData ctx =
        new TwitpicData(new Uri("http://odata.twitpic.com"));
    string uriString =
        string.Format("/Users('{0}')/Images?$top=20", textBox1.Text);
    Uri queryUri =
        new Uri(uriString, UriKind.Relative);
    images =
        new DataServiceCollection<TwitpicOData.Model.Entities.Image>(ctx);
    images.LoadAsync(queryUri);
    this.DataContext = images;
}

The DataServiceCollection is an ObservableCollection with some additional functionality specifically for OData. TwitpicData is a DataServiceContext from the generated proxy we can use to query against our OData feed.

Unfortunately the LINQ query syntax isn’t available to you in the WP7 client libraries at this time, so you need to generate the query URL yourself. Not too difficult in our case: we create a relative URI that points to

http://odata.twitpic.com/Users('zdnet')/Images?$top=20

In other words (working from the far right), get the first 20 images for the user ‘zdnet’.

Then we create our DataServiceCollection (passing our DataServiceContext) and request it to do asynchronous load with a call to LoadAsync() passing the query URI we want to execute. LoadAsync() will automagically populate our collection when the query completes (and there’s a LoadCompleted event you can hook into as well in case you need to take action).
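
If you want to react when the load finishes (for example, to report a failure instead of silently binding an empty collection), you can hook the LoadCompleted event just before the LoadAsync() call in the click handler above. A minimal sketch:

images.LoadCompleted += (s, args) =>
{
    if (args.Error != null)
    {
        // Surface the failure rather than leaving the ListBox empty.
        MessageBox.Show(args.Error.Message);
    }
};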

Finally, set the DataContext on the page to our collection and the binding in the ListBox ItemTemplate will pick up on the Image’s ShortId property.

Tada!

This gives us something that at least executes a query against the OData feed and binds the results to the ListBox. Run it, and you should see something like this (click the button to execute the query).

Not very exciting but it works. In the next post we’ll actually get some images to display (which isn’t quite as straightforward as it could be…)



Anthony Adame contributed OData for IQToolkit to CodePlex on 1/28/2011:

Project Description
OData for IQToolkit converts OData expressions into usable expressions for IQToolkit providers.

Plug in an IQToolkit provider, add a data mapping and a basic POCO class to expose your data.

The project includes examples of:

  • SQL Server and Oracle implementations.
  • DataAnnotations based validation.

This project builds off of the following resources:


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Paolo Salvatori posted How to use Duplex MEP to communicate with BizTalk from a .NET application or a WF workflow running inside AppFabric Part 3 in a 1/25/2010 post to the AppFabricCAT blog (missed when published):

Introduction

In the first article of the series we discussed how to exchange messages with an orchestration via a two-way WCF Receive Location using the Duplex Message Exchange Pattern. This form of bi-directional communication is characterized by the ability of both the service and the client to send messages to each other independently either using one-way or request/reply messaging. In a service-oriented architecture or a service bus composed of multiple, heterogeneous systems, interactions between autonomous client and service applications are asynchronous and loosely-coupled. All communications require published and discoverable service contracts, well-known data formats and a shared communication infrastructure.

In the second part of the article we saw how to implement an asynchronous communication between a client application and a WCF Workflow Service running within IIS AppFabric Hosting Services using the Durable Duplex Correlation provided by WF 4.0. In addition, we discussed how to create a custom Activity for extending AppFabric Tracking with user-defined events and how to exploit the XML-based data transformation capabilities provided by the new BizTalk Server Mapper directly in a WF project thanks to the new Mapper Activity contained in AppFabric Connect.

In the final article of the series, we’ll examine how to implement an asynchronous communication between a WCF Workflow Service and an Orchestration using WS-Addressing and Content-Based Correlation.

Before explaining the architecture of the demo, let me briefly introduce and discuss some of the techniques that I used to implement my solution.

Correlation in WF 4.0

If you are a WF or a BizTalk developer, you are surely familiar with the concept of correlation. Typically, at runtime workflows or orchestrations have multiple instances executing simultaneously. Therefore, when a workflow service implements an asynchronous communication pattern to exchange messages with other services, correlation provides the mechanism to ensure that messages are sent to the appropriate workflow instance. Correlation enables relating workflow service messages to each other or to the application instance state, such as a reply to an initial request, or a particular order ID to the persisted state of an order-processing workflow. Workflow Foundation 4.0 provides 2 different categories of correlation called, respectively, Protocol-Based Correlation and Content-Based Correlation. Protocol-based correlations use data provided by the message delivery infrastructure to provide the mapping between messages. Messages that are correlated using protocol-based correlation are related to each other using an object in memory, such as a RequestContext, or by a token provided by the transport protocol. Content-based correlations relate messages to each other using application-specified data. Messages that are correlated using content-based correlation are related to each other by some application-defined data in the message, such as a customer number.

Protocol-Based Correlation

Protocol-based correlation uses the transport mechanism to relate messages to each other and the appropriate workflow instance. Some system-provided protocol correlation mechanisms include Request-Reply correlation and Context-Based correlation. A Request-Reply correlation is used to correlate a single pair of messaging activities to form a two-way synchronous inbound or outbound operation, such as a Send paired with a ReceiveReply, or a Receive paired with a SendReply. The Visual Studio 2010 Workflow Designer also provides a set of activity templates to quickly implement this pattern. A context-based correlation is based on the context exchange mechanism described in the .NET Context Exchange Protocol Specification. To use context-based correlation, a context-based binding such as BasicHttpContextBinding, WSHttpContextBinding or NetTcpContextBinding must be used on the endpoint.

For more information about protocol correlation, see the following topics on MSDN:

For more information about using the Visual Studio 2010 Workflow Designer activity templates, see Messaging Activities. For sample code, see the Durable Duplex and NetContextExchangeCorrelation samples.

Content-Based Correlation

Content-based correlation uses data in the message to associate it to a particular workflow instance. Unlike protocol-based correlation, content-based correlation requires the application developer to explicitly state where this data can be found in each related message. Activities that use content-based correlation specify this message data by using a MessageQuerySet. Content-based correlation is useful when communicating with services that do not use one of the context bindings such as BasicHttpContextBinding. For more information about content-based correlation, see Content Based Correlation. For sample code, see the Content-Based Correlation and Correlated Calculator samples.

A content-based correlation takes data from the incoming message and maps it to an existing instance. This kind of correlation can be used in the following 2 scenarios:

  • when a WCF workflow service has multiple methods that are accessed by a single client and a piece of data in the exchanged messages identifies the desired instance;

  • when a WCF workflow service submits a request to a downstream service and asynchronously waits for a response that could potentially arrive after some minutes, hours or days.

In my demo I used 2 different types of correlation: the Durable Duplex Correlation to realize an asynchronous message exchange between the client application and the WCF workflow service, and the Content-Based Correlation to implement an asynchronous communication between the WF workflow service and the underlying BizTalk orchestration.

Architecture of the Demo

The following picture depicts the architecture of the demo. The idea behind the application is quite simple: a Windows Forms application submits a question to a WCF workflow service hosted in IIS AppFabric and asynchronously waits for the related answer. The AsyncMagic8Ball WCF workflow service uses the Mapper activity to transform the incoming request in a format suitable to be consumed by the underlying BizTalk application. For more information on how to use the Mapper activity in a WF workflow to implement message transformation, please refer to the second part of this series. Next, the WCF workflow service sends the request message to BizTalk Server via a one-way WCF-NetTcp Receive Location. In particular, the client endpoint used by the WCF workflow service to transmit the request to BizTalk is configured to use a custom message inspector called ReplyToMessageInspector. At runtime, this component assigns the URL where the WCF workflow service is asynchronously waiting for the response to the ReplyTo header of the outgoing request message. Once received, the request message is read and processed by a new instance of the AsyncMagic8Ball orchestration that returns a response message containing one of 20 standardized answers. In particular, the orchestration copies the request ID from the request to the response message. This piece of information will be used by the WF runtime to correlate the response message back to the appropriate instance of the AsyncMagic8Ball WCF workflow service. Then the orchestration reads the callback URL from the WCF.ReplyToAddress context property and assigns its value to the Address property of the dynamic send port used to return the response to the appropriate instance of the WCF workflow service. Upon receiving the response message from BizTalk, the WCF workflow service applies another map using the Mapper activity and returns the resulting message to the client application.

Message Flow

  1. The Windows Forms Client Application enables a user to specify a question and a delay in seconds. When the user presses the Ask button, a new request message containing the question and the delay is created and sent to the WCF workflow service. Before sending the first message, the client application creates and opens a service host to expose a callback endpoint that the workflow can invoke to return the response message. In particular, the binding used to expose this callback contract is the NetTcpBinding, whereas the binding used to send the request message to the WCF workflow service is the NetTcpContextBinding. We will expand on this point in the next sections when we analyze the client-side code (a minimal sketch of such a callback host appears after this list).

  2. The WCF workflow service receives the request message of type WFRequest using the Receive activity of a ReceiveAndSendReply composed activity. In particular, the Receive activity is used to initialize 2 distinct correlation handles:

    • The callbackHandle is configured to use the Durable Duplex Correlation and is used to correlate the incoming request with the response returned to the client application.

    • The correlationHandle is instead configured to use the Content-Based Correlation and is used to correlate the outgoing request with the response returned by BizTalk Server.

  3. The WCF workflow service uses the CustomTrackingActivity to keep track of individual processing steps and uses an instance of the Mapper activity to transform the WFRequest object into an instance of the BizTalkRequest class. See the second part of this series for more information on the CustomTrackingActivity  and Mapper activities.

  4. The WCF workflow service uses a composed SendAndReceiveReply activity to send the BizTalkRequest message to the WCF-NetTcp receive location exposed by the BizTalk application. The ReplyToMessageInspector assigns the URL where the WCF workflow service is asynchronously waiting for the response to the ReplyTo header of the outgoing request message.

  5. A one-way WCF-NetTcp receive location receives the request message and the XmlReceive pipeline promotes the MessageType context property.

  6. The Message Agent submits the request message to the MessageBox (BizTalkMsgBoxDb).

  7. A new instance of the AsyncMagic8Ball orchestration receives the request message via a one-way logical port and uses a custom helper component called XPathHelper to read the value of the Question and Delay elements from the inbound message.

  8. The AsyncMagic8Ball orchestration invokes the SetResponse static method exposed by the ResponseHelper class to build the response message containing the answer to this question contained in the request message. Then it copies the request ID from the request to the response message, reads the callback URL from the WCF.ReplyToAddress context property and assigns its value to the Address property of the dynamic send port used to return the response to the appropriate instance of the WCF workflow service. The response message is finally published to the MessageBox (BizTalkMsgBoxDb) by the Message Agent.

  9. The response message is retrieved by a one-way Dynamic Send Port.

  10. The PassThruTransmit send pipeline is executed by the Dynamic Send Port.

  11. The response message is returned to the WCF workflow service.

  12. The WCF workflow service uses a Mapper activity to transform the BizTalkResponse object into an instance of the WFRequest class.

  13. The WCF workflow service uses a Send activity to send back the response message to the client application. The Send activity is configured to use the callback correlation that contains the URI of the callback endpoint exposed by the client application.
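
For reference, here is a minimal sketch of what opening the callback host described in step 1 might look like. The operation name, implementation, port and address below are placeholders of my own; only the callback contract name comes from the service configuration shown later, and the actual client code is examined in the second article of the series.

#region Using Directives
using System;
using System.Diagnostics;
using System.ServiceModel;
#endregion

// Hypothetical callback contract: the operation name and signature are placeholders.
[ServiceContract]
public interface IMagic8BallWFCallback
{
    [OperationContract(IsOneWay = true)]
    void ReceiveResponse(string answer);
}

public class CallbackService : IMagic8BallWFCallback
{
    public void ReceiveResponse(string answer)
    {
        Debug.WriteLine("Answer received: " + answer);
    }
}

public static class CallbackHostFactory
{
    // Opened by the client application before it sends the first request message.
    public static ServiceHost OpenCallbackHost()
    {
        var host = new ServiceHost(typeof(CallbackService));
        host.AddServiceEndpoint(typeof(IMagic8BallWFCallback),
                                new NetTcpBinding(SecurityMode.Transport),
                                "net.tcp://localhost:9191/Magic8BallCallback");
        host.Open();
        return host;
    }
}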

Client Code

Please refer to the second article of this series for a full explanation of the code used by the client application to invoke the WCF workflow service using the Durable Duplex communication pattern.

The AsyncMagic8Ball WCF Workflow Service

In this section I will focus my attention on how the AsyncMagic8Ball WCF workflow service exchanges messages with both the client application and the BizTalk orchestration. When I created the WCF Workflow Service, the initial workflow just contained a Sequence activity with a Receive activity followed by a SendReply activity as shown in the following illustration.

I selected the Sequential activity and clicked the Variables button to display the corresponding editor. Next, I created a variable for each message to exchange with the client and BizTalk application and then I created two CorrelationHandle variables called respectively callbackHandle and correlationHandle.  The first of these two variables is used to implement the Durable Duplex Correlation with the client application, whereas the second one is configured to use the Content-Based Correlation and holds the Id of the incoming request. This information is copied into the request sent to BizTalk and is contained in the corresponding response message. Upon receiving a response from BizTalk Server, the WF runtime uses this correlation handle to identify the appropriate instance of the AsyncMagic8Ball WCF workflow service to pass the message to. In order to expose a NetTcpContextBinding endpoint I configured the Receive activity as shown in the following picture:

In particular, I used the ServiceContractName property of the Receive activity to specify the target namespace and the contract interface of the service endpoint and I used the Action property to specify the action header of the request message. To initialize the callback correlation handle, I selected the Receive activity and then I clicked the ellipsis button next to the (Collection) text for the CorrelationInitializers property in the property grid for the Add Correlation Initializers dialog box to appear. As shown in the picture below, I specified callbackHandle as the correlation handle and I selected Callback correlation initializer as the correlation initializer.

Before invoking the downstream BizTalk application, the WCF workflow service immediately returns an ACK message to the caller. Therefore, I configured the SendReply activity, bound to the initial Receive activity, to return a WFAck message, as shown in the picture below.

As you can notice, the workflow uses a CustomTrackingActivity to emit a user-defined event. This pattern is used throughout the workflow. Custom tracking records generated at runtime by the WCF workflow service can be analyzed using the AppFabric Dashboard. For more information on the CustomTrackingActivity and how to use the AppFabric Dashboard to monitor the runtime behavior of the WCF workflow service, please read the previous article of this series.

There are two options to invoke the WCF-NetTcp receive location exposed by the BizTalk application:

  • The first possibility is to generate a custom WCF proxy activity using Add Service Reference and use this activity to invoke the underlying WCF receive location in a synchronous way. For more information on this technique, please refer to the previous article of this series.

  • The second alternative is using the messaging activities provided out-of-the-box by WF and the Content-Based Correlation to implement an asynchronous communication between the WCF workflow service and the underlying orchestration. In this article, we examine this approach.

The following picture depicts the central part of the AsyncMagic8Ball WCF workflow service.

In a nutshell, this section of the workflow executes the following actions:

  • Uses a SendAndReceiveReply activity to send the request message to the one-way WCF-NetTcp receive location exposed by the BizTalk application.
  • Tracks a user-defined event using the CustomTrackingActivity.
  • Uses a Receive activity to receive the response back from BizTalk.

The following figure shows how I configured the Send activity to transmit the request message to BizTalk.

As highlighted above, I used the ServiceContractName property to specify the target namespace and the contract interface utilized by the client endpoint and the EndpointConfigurationName property to define the name of the client endpoint used to transmit the request message to BizTalk. The bizTalkAsyncNetTcpBinding endpoint is defined in the configuration/system.serviceModel/client section of the web.config (the configuration file is shown later in the article). In particular, this endpoint is configured to use a custom message inspector that assigns the callback address of the workflow service to the ReplyTo header of request messages.

Next, I initialized the correlation handle used to implement the content-based correlation. To accomplish this task, I selected the Send activity and then I clicked the ellipsis button next to the (Collection) text for the CorrelationInitializers property in the property grid for the Add Correlation Initializers dialog box to appear. On the left panel of the dialog, I specified correlationHandle as correlation handle and then I selected Query correlation initializer as correlation initializer. Finally, I specified the correlation key by selecting the Id element from the data contract of the BizTalkRequest message.

To complete the configuration of the content-based correlation, I selected the Receive activity and I specified the correlationHandle as value for the CorrelatesWith property. The latter defines the correlation handle that is used to route the message to the appropriate workflow instance, whereas the CorrelatesOn property sets the MessageQuerySet used to query the message to extract correlation data.

To specify the correlation key on the response message, I clicked the ellipsis button next to the (Collection) text for the CorrelatesOn property to open the Add Correlation Initializers dialog box.  Then, as shown in the picture below, I selected the Id property from the data contract of the BizTalkResponse message.

The last part of the WCF workflow invokes the callback endpoint exposed by the client application to return the response to the initial request. In particular, the latter contains the Id of the original request, and this allows the client application to correlate the response to the corresponding request, especially when the client has multiple in-flight requests. For more details on how the client handles the callback, please refer to the second part of this series.
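
Because every response carries the Id of its originating request, the client only needs a small lookup keyed by that Id to pair answers with questions when several requests are in flight. The sketch below is purely illustrative; the type and member names are hypothetical and do not come from the sample code.

#region Using Directives
using System.Collections.Generic;
using System.Diagnostics;
#endregion

// Hypothetical client-side bookkeeping: pending questions indexed by request Id.
public class PendingRequestTracker
{
    private readonly Dictionary<string, string> pendingRequests =
        new Dictionary<string, string>();

    public void TrackRequest(string requestId, string question)
    {
        pendingRequests[requestId] = question;
    }

    public void OnResponseReceived(string requestId, string answer)
    {
        string question;
        if (pendingRequests.TryGetValue(requestId, out question))
        {
            pendingRequests.Remove(requestId);
            Trace.WriteLine(string.Format("{0} -> {1}", question, answer));
        }
    }
}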

This portion of the workflow performs just 2 steps:

  • Uses a Send activity  to send the response message back to the caller. This activity is configured to use the callback handle correlation.

  • Tracks a user-defined event using the CustomTrackingActivity.

The following figure shows how I configured the Send activity used to transmit the response message back to the caller.

As highlighted above, I assigned the callbackHandle, previously initialized, to the CorrelatesWith property. Then I properly set the other properties like OperationName, Action, and ServiceContractName to match the characteristics of the callback service endpoint exposed by the client application.

The ReplyToMessageInspector component

In order to transparently add the ReplyTo header to outgoing request messages, I created a custom message inspector. For your convenience, I include below the code of this component along with the code of the endpoint behavior used to register this extension at runtime.

ReplyToBehaviorExtensionElement  Class

#region Using Directives
using System;
using System.Configuration;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Configuration;
#endregion

namespace Microsoft.AppFabric.CAT.Samples.DuplexMEP.ReplyToHelper
{
    /// <summary>
    /// This ReplyToBehaviorExtensionElement inherits from the BehaviorExtensionElement base class
    /// and allows you to register the ReplyToEndpointBehavior in the configuration file.
    /// </summary>
    public class ReplyToBehaviorExtensionElement : BehaviorExtensionElement
    {
        #region Private Constants
        private const string AddressProperty = "address";
        private const string EnabledProperty = "enabled";
        private const string TraceEnabledProperty = "traceEnabled";
        #endregion

        #region Protected Methods
        protected override object CreateBehavior()
        {
            return new ReplyToEndpointBehavior(this.Enabled,
                                               this.TraceEnabled,
                                               this.Address);
        }
        #endregion

        #region Public Methods
        public override Type BehaviorType
        {
            get
            {
                return typeof(ReplyToEndpointBehavior);
            }
        }
        #endregion

        #region Configuration Properties
        [ConfigurationProperty(EnabledProperty, DefaultValue = true)]
        public bool Enabled
        {
            get
            {
                return (bool)base[EnabledProperty];
            }
            set
            {
                base[EnabledProperty] = value;
            }
        }

        [ConfigurationProperty(TraceEnabledProperty, DefaultValue = false)]
        public bool TraceEnabled
        {
            get
            {
                return (bool)base[TraceEnabledProperty];
            }
            set
            {
                base[TraceEnabledProperty] = value;
            }
        }

        [ConfigurationProperty(AddressProperty,
                               DefaultValue = "http://www.w3.org/2005/08/addressing/anonymous")]
        public string Address
        {
            get
            {
                return (string)base[AddressProperty];
            }
            set
            {
                base[AddressProperty] = value;
            }
        }
        #endregion
    }
}

ReplyToEndpointBehavior class

#region Using Directives
using System;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
#endregion

namespace Microsoft.AppFabric.CAT.Samples.DuplexMEP.ReplyToHelper
{
    /// <summary>
    /// The ReplyToEndpointBehavior class implements the IEndpointBehavior interface 
    /// and adds the ReplyToMessageInspector to the client runtime. 
    /// </summary>
    public class ReplyToEndpointBehavior : IEndpointBehavior
    {
        #region Private Fields
        private bool enabled = true;
        private bool traceEnabled = false;
        private string address = null;
        #endregion

        #region Public Constructors
        public ReplyToEndpointBehavior(bool enabled,
                                       bool traceEnabled,
                                       string address)
        {
            this.enabled = enabled;
            this.traceEnabled = traceEnabled;
            this.address = address;
        }
        #endregion

        #region IEndpointBehavior Members
        public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
        {
            return;
        }

        public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
        {
            clientRuntime.MessageInspectors.Add(new ReplyToMessageInspector(enabled, traceEnabled, address));
        }

        public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
        {
            return;
        }

        public void Validate(ServiceEndpoint endpoint)
        {
            return;
        }
        #endregion
    }
}

ReplyToMessageInspector class

#region Using Directives
using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
#endregion

namespace Microsoft.AppFabric.CAT.Samples.DuplexMEP.ReplyToHelper
{
    /// <summary>
    /// The ReplyToMessageInspector adds the ReplyTo header to outgoing messages.
    /// </summary>
    public class ReplyToMessageInspector : IDispatchMessageInspector, IClientMessageInspector
    {
        #region Private Constants
        private const string ReplyToFormat = "[ReplyToMessageInspector] ReplyTo header set to {0}.";
        #endregion

        #region Private Fields
        private bool enabled = true;
        private bool traceEnabled = false;
        private string address = null;
        #endregion

        #region Public Constructors
        public ReplyToMessageInspector(bool enabled,
                                       bool traceEnabled,
                                       string address)
        {
            this.enabled = enabled;
            this.traceEnabled = traceEnabled;
            this.address = address;
        }
        #endregion

        #region IDispatchMessageInspector Members
        public object AfterReceiveRequest(ref Message request,
                                          IClientChannel channel,
                                          InstanceContext instanceContext)
        {
            throw new NotImplementedException();
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            throw new NotImplementedException();
        }
        #endregion

        #region IClientMessageInspector Members
        public void AfterReceiveReply(ref Message reply, object correlationState)
        {
            return;
        }

        public object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            if (enabled &&
                request != null &&
                !string.IsNullOrEmpty(address))
            {
                request.Headers.ReplyTo = new EndpointAddress(address);
                if (traceEnabled)
                {
                    Trace.WriteLine(string.Format(ReplyToFormat, address));
                    Trace.WriteLine(new string('-', 100));
                }
            }
            return request;
        }
        #endregion
    }
}

The following listing shows an excerpt from the configuration file used by the AsyncMagic8Ball service and by the SyncMagic8Ball service introduced in the second part of this article. You can find the original configuration file in the companion code for this article. Note in particular the definition of the ReplyToBehaviorExtensionElement component in the configuration/system.serviceModel/extensions section.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  ...
  <system.serviceModel>
    ...
    <bindings>
      <netTcpBinding>
        <binding name="netTcpBinding">
          <security mode="Transport">
            <transport protectionLevel="None" />
          </security>
        </binding>
      </netTcpBinding>
      <netTcpContextBinding>
        <binding name="netTcpContextBinding">
          <security mode="Transport">
            <transport protectionLevel="None" />
          </security>
        </binding>
      </netTcpContextBinding>
    </bindings>
    <client>
      <endpoint address="net.tcp://localhost:7171/Magic8BallBizTalk/Sync"
                binding="netTcpBinding"
                bindingConfiguration="netTcpBinding"
                contract="Magic8Ball"
                name="bizTalkSyncNetTcpBinding"/>
      <endpoint address="net.tcp://localhost:7172/Magic8BallBizTalk/Async"
                binding="netTcpBinding"
                behaviorConfiguration="replyToBehavior"
                bindingConfiguration="netTcpBinding"
                contract="Magic8Ball"
                name="bizTalkAsyncNetTcpBinding"/>
    </client>
    <services>
      <service name="SyncMagic8Ball">
        <endpoint address=""
                  binding="basicHttpContextBinding"
                  contract="IMagic8BallWF"
                  name="basicHttpBinding_SyncMagic8Ball" />
        <endpoint address=""
                  binding="netTcpContextBinding"
                  bindingConfiguration="netTcpContextBinding"
                  contract="IMagic8BallWF"
                  name="netTcpBinding_SyncMagic8Ball" />
      </service>
      <service name="AsyncMagic8Ball">
        <endpoint address=""
                  binding="basicHttpBinding"
                  contract="IMagic8BallWF"
                  name="basicHttpBinding_AsyncMagic8Ball" />
        <endpoint address=""
                  binding="netTcpContextBinding"
                  bindingConfiguration="netTcpContextBinding"
                  contract="IMagic8BallWF"
                  name="netTcpBinding_AsyncMagic8Ball" />
        <endpoint address=""
                  binding="netTcpContextBinding"
                  bindingConfiguration="netTcpContextBinding"
                  contract="IMagic8BallWFCallback"
                  name="netTcpBinding_AsyncMagic8BallCallback" />
      </service>
    </services>
    <behaviors>
      <endpointBehaviors>
        <!-- This behavior configuration is adopted by the client endpoint used by the Send Activity 
             that transmits the request message to the AsyncMagic8Ball orchestration. -->
        <behavior name="replyToBehavior">
          <replyTo address="net.tcp://localhost/Magic8BallWF/AsyncMagic8Ball.xamlx"
                   enabled="true"
                   traceEnabled="true" />
        </behavior>
      </endpointBehaviors>
      ...
    </behaviors>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
    <extensions>
      <behaviorExtensions>
        <!-- It's necessary to register the custom behavior extension element -->
        <add name="replyTo"
             type="Microsoft.AppFabric.CAT.Samples.DuplexMEP.ReplyToHelper.ReplyToBehaviorExtensionElement,                    Microsoft.AppFabric.CAT.Samples.DuplexMEP.ReplyToHelper,                    Version=1.0.0.0, Culture=neutral,                    PublicKeyToken=80577993de400321" />
      </behaviorExtensions>
    </extensions>
  </system.serviceModel>
  ...
</configuration>
The AsyncMagic8Ball Orchestration

The following picture shows the structure of the AsyncMagic8Ball orchestration.

The orchestration uses a one-way logical port to receive the inbound request message and a dynamic send port to return the corresponding response message. The Trace Request Expression Shape contains the following code to extract the information from the request message. The namespaces of the LogHelper and XPathHelper static classes have been eliminated for ease of reading.

LogHelper.WriteLine(System.String.Format("[SyncMagic8Ball] Transport: {0}",
                                         RequestMessage(BTS.InboundTransportType)));
id = XPathHelper.GetValue(RequestMessage, 0, "Id Element XPath Expression");
if (!System.Int32.TryParse(XPathHelper.GetValue(RequestMessage, 0, "Delay Element XPath Expression"),
                           out delayInSeconds))
{
    delayInSeconds = 0;
}
LogHelper.WriteLine(System.String.Format("[SyncMagic8Ball] Id: {0}", id));
LogHelper.WriteLine(System.String.Format("[SyncMagic8Ball] Question: {0}",
                                         XPathHelper.GetValue(RequestMessage,
                                                              0,
                                                              "Question Element XPath Expression")));
LogHelper.WriteLine(System.String.Format("[SyncMagic8Ball] Delay: {0}", delayInSeconds));

You can use DebugView, as shown in the picture below, to monitor the trace produced by the orchestration and helper components.

Note

My LogHelper class traces messages to the standard output using the capability supplied by the Trace class. This component is primarily intended to be used for debugging a BizTalk application in a test environment, rather than to be used in a production environment. If you are looking for a tracing framework which combines the high performance and flexibility provided by the Event Tracing for Windows (ETW) infrastructure, you can read the following whitepaper by Valery Mizonov:

The value of the Delay Shape is defined as follows:

new System.TimeSpan(0, 0, delayInSeconds);

Therefore, the orchestration waits for the time interval in seconds specified in the request message before returning the response message to the caller. Finally, the code below shows how the response message is set. As you can see, the code specifies the values for the context properties exposed by the WCF Adapter. The penultimate line of code reads the callback URL from the WCF.ReplyToAddress context property and assigns its value to the Address property of the dynamic send port used to return the response to the appropriate instance of the AsyncMagic8Ball workflow service, whereas the last line of code specifies the WCF-NetTcp adapter to send the response back to the initial caller.

ResponseMessage = null;
Microsoft.AppFabric.CAT.Samples.DuplexMEP.Helpers.ResponseHelper.SetResponse(ResponseMessage, id);
ResponseMessage(WCF.Action) = "AskQuestionResponse";
ResponseMessage(WCF.SecurityMode) = "Transport";
ResponseMessage(WCF.TransportClientCredentialType) = "Windows";
Magic8BallOutPort(Microsoft.XLANGs.BaseTypes.Address) = RequestMessage(WCF.ReplyToAddress);
Magic8BallOutPort(Microsoft.XLANGs.BaseTypes.TransportType) = "WCF-NetTcp";
Testing the Application

To test the application, you can proceed as follows:

  • Make sure to start the DuplexMEP BizTalk application.
  • Open a new instance of the Client Application, as indicated in the picture below.
  • Enter an existential question like “Why am I here?”, “What’s the meaning of life?” or “Will the world end in 2012?” in the Question textbox.
  • Select the NetTcpEndpointAsyncWF endpoint in the Endpoint drop-down list.
  • Specify a Delay in seconds in the corresponding textbox.
  • Press the Ask button.

Now, if you press the Ask button multiple times in a row, you can easily notice that the client application is called back asynchronously by the WCF workflow service, which in turn invokes the underlying AsyncMagic8Ball orchestration in an asynchronous manner. Therefore, the client application doesn’t need to wait for the response to a previous question before posing a new request.

Make some calls and then open the AppFabric Dashboard. This page is composed of three detailed metrics sections: Persisted WF Instances, WCF Call History, and WF Instance History. These sections display monitoring and tracking metrics for instances of .NET Framework 4 WCF and WF services. Let’s focus our attention on the WF Instance History section, highlighted in red in the figure below. The latter displays historical statistics derived from tracked workflow instance events stored in one or more monitoring databases. It can draw data from several monitoring databases, if the server or farm uses more than one monitoring database for services deployed at the selected scope.

If you click the Completions link you can review the instances of the AsyncMagic8Ball service that completed in the selected period of time. You can use the Query control on the Tracked WF Instances Page to run a simple query and restrict the number of rows displayed in the grid below.

Finally, you can right-click one of the completed WF instances and select View Tracked Events to access the Tracked Events Page where you can examine events generated by WCF and WF services. On this page you can group events by Event Type, as shown in the figure below, and analyze the user-defined events emitted by the current WCF instance using the CustomTrackingActivity that we saw at the beginning of this article.

In particular, you can quickly investigate the details of a selected event in the Details pane, as highlighted in red in the figure above.

Conclusions

In the final article of this three-part series we have seen how to implement an asynchronous communication between a WCF Workflow Service and an Orchestration using WS-Addressing and Content-Based Correlation, which probably represent the safest and most reliable way to correlate across truly disparate and disconnected applications in an asynchronous manner. As we observed in the introduction of the present article, the use of asynchronous communication patterns can dramatically improve the scalability and performance of a distributed application platform where multiple systems exchange messages using WCF. In the first part of this series we examined how to implement an asynchronous message exchange between a .NET application and an orchestration via a two-way WCF receive location using the Duplex Message Exchange Pattern, whereas in the second part we saw how to realize an asynchronous communication between a client application and a WCF Workflow Service running within IIS AppFabric Hosting Services using the Durable Duplex Correlation. In the second article we also saw how to create a custom activity to emit user-defined events, how to use the AppFabric Dashboard to monitor custom tracking events generated by WF services, and finally how to exploit the Mapper activity provided by AppFabric Connect to implement message transformations in a WCF workflow service. I hope this article can provide you with useful ideas on how to improve the scalability and flexibility of your AppFabric/BizTalk applications. Here you can download the companion code for this article. As always, your feedback is more than welcome!

Reviewed by Christian Martinez. Thanks mate!


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Vishwas Lele (@vlele) posted Introducing Netizen – a Windows Azure + Windows Phone 7 Application on 1/29/2011:

Netizen (http://netizen.cloudapp.net/) is a Windows Phone 7 application (available in the marketplace for free) that brings the voting record of your congressional representative to your fingertips, directly from your mobile phone. Simply select the Member of Congress you want to follow and "flick through" their voting record. Information about each member’s voting record is stored in Windows Azure Storage and is updated daily.

But don’t just follow how your representative is voting in congress, make your voice heard. By clicking on the "Like It" button on the mobile application, you can influence your friends and neighbors about the bill through the power of social networking. For each bill, Netizen automatically provisions a Facebook page[1] dedicated to your member of congress. This page acts almost as a virtual ballot for a bill as well as a community hub where fellow constituents can gather to express their support. Pasted below are some screenshots from the application:

Not using a mobile device? No problem! Simply point your browser to the addresses listed below.

To get a list of representatives for the state of Virginia, use this address:

http://netizen.cloudapp.net/NetizenService.svc/reps/VIRGINIA/atom

To view the recent votes cast by the representative from Virginia 11th district, use this address:

http://netizen.cloudapp.net/NetizenService.svc/vote/VIRGINIA-11th/atom

To view the Facebook page for the Bill H.R.4853 and Virginia 8th District, use this address:

http://netizen.cloudapp.net/H.R.359/VIRGINIA-8TH/

Screenshot #1 – Select state


Screenshot #2 – Select Representative


Screenshot #3 – “Flick through” the most recent roll call votes


Screenshot #4 – “Virtual ballot” – Dynamically generated Facebook page


Technical Details

Netizen combines the power of Windows Phone 7 & Windows Azure. Here is how the application works.

WP7 App

The Windows Phone 7 app is designed to have a small footprint. As a result, the bulk of the functionality is located inside the Azure-hosted service. Here are some of the key highlights:

1) The main screen (display of votes) is based on Pivot Control. Pivot Control provides a quick way to manage views. Each view (PivotItem) displays the details associated with a vote. Pivot control makes it really convenient for switching between different vote views. The Pivot control employs lazy loading to conserve resources. In addition, in order to limit the resources used, we only display the ten most recent votes.

2) Some additional performance-related items include:

  • Invoking garbage collection explicitly after rebinding the pivot control.
  • Storing the state in the isolated storage in order to make application resumption quicker.
  • Doing all the data transformation on the server, again in order to reduce the processing foot-print on WP7.
  • Keeping a close watch on the overall size of the XAP file – the current version is under 150K.

The following link provides great performance related tips:

http://msdn.microsoft.com/en-us/library/ff967560(v=VS.92).aspx

3) Since it is important to support a disconnected mode of operation, we always store the most recent data obtained from the Azure service in isolated storage, as sketched below. Another possibility is the use of Sync Framework 4.0.
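
A minimal sketch of that caching approach follows; the VoteCache class and file name are placeholders of mine, while IsolatedStorageFile and DataContractSerializer are the standard WP7 APIs.

using System.Collections.ObjectModel;
using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;

public static class VoteCache
{
    private const string FileName = "votes.xml"; // placeholder file name

    public static void Save(ObservableCollection<VoteDetails> votes)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile(FileName))
        {
            new DataContractSerializer(typeof(ObservableCollection<VoteDetails>))
                .WriteObject(stream, votes);
        }
    }

    public static ObservableCollection<VoteDetails> Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (!store.FileExists(FileName))
            {
                return null;
            }
            using (var stream = store.OpenFile(FileName, FileMode.Open))
            {
                return (ObservableCollection<VoteDetails>)
                    new DataContractSerializer(typeof(ObservableCollection<VoteDetails>))
                        .ReadObject(stream);
            }
        }
    }
}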

4) We experimented with embedding the browser control for web-based logins. It is possible for the web page to call back into the Silverlight code.
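
A rough illustration of that callback path, assuming a WebBrowser control named loginBrowser (the login URL is a placeholder): the hosted page calls window.external.notify(...) and the Silverlight code handles the ScriptNotify event.

    // Host a web-based login page; the page signals back via window.external.notify(...).
    loginBrowser.IsScriptEnabled = true;
    loginBrowser.ScriptNotify += (sender, e) =>
    {
        // e.Value is the string the page passed to window.external.notify().
        string result = e.Value;
        // ... e.g. store the returned token and dismiss the login view ...
    };
    loginBrowser.Navigate(new Uri("https://example.com/login", UriKind.Absolute));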

5) All the Netizen Azure service methods are based on REST. This made it quite easy to use the WebClient class to access them. A better alternative would have been to use Reactive Extensions, which would have made the code a bit cleaner.
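
A minimal sketch of calling one of the REST endpoints with WebClient (the URL is one of the addresses listed earlier; ProcessFeed is a placeholder for the SyndicationFeed parsing shown in item 6, and error handling is kept to a single check):

    // Requires: using System; using System.Net;
    var client = new WebClient();
    client.DownloadStringCompleted += (sender, e) =>
    {
        if (e.Error == null)
        {
            ProcessFeed(e.Result);   // placeholder: parse the ATOM feed as shown below
        }
    };
    client.DownloadStringAsync(
        new Uri("http://netizen.cloudapp.net/NetizenService.svc/vote/VIRGINIA-11th/atom"));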

6) One gotcha we ran into stemmed from our decision to use SyndicationFeed as the data format, which we chose so that users could access the voting records via their favorite feed readers (in addition to the WP7 app). In order to process the returned SyndicationFeed type, we had to explicitly add a reference to System.ServiceModel.Syndication. Here is the relevant code:

    // Parse the ATOM feed returned by the service and pull out the strongly typed vote entries.
    // (Requires a reference to System.ServiceModel.Syndication, as noted above.)
    XmlReader reader = XmlReader.Create(new StringReader(xmlContent));
    SyndicationFeed sFeed = SyndicationFeed.Load(reader);
    var entries = from en in sFeed.Items.Take(MaxNumberOfItemsPerFeed)
                  select en;
    foreach (var en in entries)
    {
        var x = en;
        SyndicationContent sc = x.Content;
        XmlSyndicationContent xsc;
        if (x.Content.Type == "text/xml" || x.Content.Type == "xml")
        {
            xsc = (XmlSyndicationContent)x.Content;
            VoteDetails vds = xsc.ReadContent<VoteDetails>();
            Votes.Add(vds);
        }
    }

Where the class VoteDetails is defined as shown below:

    [DataContract(Namespace = "http://schemas.datacontract.org/2004/07/Netizen.Service.Entities")]
    public class VoteDetails
    {
        [DataMember]
        public string BillID { get; set; }
        [DataMember]
        public string Date { get; set; }
        [DataMember]
        public string Democratic_AYEs { get; set; }
        [DataMember]
        public string Democratic_NAYs { get; set; }
        [DataMember]
        public string Description { get; set; }
        [DataMember]
        public string Question { get; set; }
        [DataMember]
        public string Republican_AYEs { get; set; }
        [DataMember]
        public string Republican_NAYs { get; set; }
        [DataMember]
        public string Result { get; set; }
        [DataMember]
        public string Roll { get; set; }
        [DataMember]
        public string Vote { get; set; }
    }
Azure Service

As stated earlier, the bulk of the processing resides within the Azure-based service.

This includes the following:

• A WCF REST service hosted on Windows Azure (x-small instance) that exposes the voting record as an RSS/ATOM feed. Here is what the WCF contract looks like:

    namespace Netizen.Contract
    {
        [ServiceContract]
        [ServiceKnownType(typeof(Atom10FeedFormatter))]
        [ServiceKnownType(typeof(Rss20FeedFormatter))]
        public interface INetizenService
        {
            [OperationContract]
            [WebGet(UriTemplate = "rep/{zip}-{zipFour}")]
            string GetRepresentative(string zip, string zipFour);

            [OperationContract]
            [WebGet(UriTemplate = "reps/{state}/{feedType}")]
            SyndicationFeedFormatter GetRepresentatives(string state, string feedType);

            [OperationContract]
            [WebGet(UriTemplate = "vote/{repID}/{feedType}")]
            SyndicationFeedFormatter GetVotes(string repID, string feedType);

            [OperationContract]
            [WebInvoke(UriTemplate = "subscribe/{state}/{district}/{notificationUri}")]
            void SetSubscription(string state, string district, string notificationUri);
        }
    }

• All the data is stored in Azure Tables. A batch program obtains the congressional voting record from the Office of the Clerk of the U.S. House of Representatives and stores it in Azure Tables. The data is transformed appropriately to make sure that Azure Table queries are efficient.
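
As a hedged illustration of what such a transformed entity might look like with the Windows Azure storage client library (the class name, properties and key layout below are assumptions for the sketch, not the actual Netizen schema), partitioning by representative and using a descending-date row key keeps the "latest votes for a district" query to a single, well-ordered partition:

    using System;
    using Microsoft.WindowsAzure.StorageClient;

    // One row per roll-call vote, partitioned by representative so that
    // "the most recent votes for VIRGINIA-11th" is a cheap single-partition query.
    public class VoteEntity : TableServiceEntity
    {
        public VoteEntity(string repId, DateTime voteDate, string roll)
        {
            PartitionKey = repId;   // e.g. "VIRGINIA-11th"
            // Invert the ticks so the newest votes sort (and page) first.
            RowKey = string.Format("{0:D19}_{1}",
                DateTime.MaxValue.Ticks - voteDate.Ticks, roll);
        }

        public VoteEntity() { }   // parameterless constructor required for deserialization

        public string BillID { get; set; }
        public string Question { get; set; }
        public string Result { get; set; }
    }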

• Since the batch job runs once a day, it did not make sense to have a dedicated worker role just for loading data. Instead, we rely on the concept of a “dynamic worker” within the web role. We launch the console-based data loader executable using startup tasks.
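
The post mentions both the “dynamic worker” idea and startup tasks without spelling out the wiring; as a minimal sketch of the dynamic-worker half, a web role can launch a bundled console loader itself, roughly like this (the file name, path and scheduling are illustrative assumptions):

    using System;
    using System.Diagnostics;
    using System.IO;

    // Launch the console data loader from inside the web role instead of
    // paying for a dedicated worker role that would sit idle most of the day.
    public static void RunDataLoader()
    {
        var loaderPath = Path.Combine(
            Environment.GetEnvironmentVariable("RoleRoot") ?? ".",
            @"approot\bin\DataLoader.exe");   // illustrative path and file name

        using (var loader = Process.Start(new ProcessStartInfo
        {
            FileName = loaderPath,
            UseShellExecute = false,
            CreateNoWindow = true
        }))
        {
            loader.WaitForExit();   // the daily load is short, so block until it finishes
        }
    }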

• An MVC 2-based web application hosted on Windows Azure (co-located within the same worker) is used to dynamically generate Open Graph API pages (which serve as Facebook Pages).

http://netizen.cloudapp.net/H.R.359/VIRGINIA-8TH/


[1] A page hosted on our site that includes Open Graph tags, making it equivalent to a Facebook Page

Vishwas is a Chief Technology Officer (.NET Technologies) at Applied Information Sciences, Inc.


The Windows Azure Team recommended that you Catch the Latest Cloud Cover Episode with Steve and Wade To Learn about Windows Azure Startup Tasks in a 1/28/2011 post:

If you're looking for a great introduction to Windows Azure startup tasks, don't miss the latest episode of Cloud Cover on Channel 9 with Steve Marx and new co-host Windows Azure Technical Evangelist Wade Wegner.  In this episode, Steve and Wade will walk through how to set up startup tasks on Windows Azure and share some of their learnings, tips and tricks.


Click here to watch this video on Channel 9 if you're having trouble watching it here.

For more information, you should also check out Steve's introductory blog post about startup tasks and Wade's blog post about running Expression Encoder in Windows Azure.
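
For readers who want a concrete picture before watching, a startup task is declared in the role's ServiceDefinition.csdef; a minimal, hedged example (the role name and command file are placeholders) looks roughly like this:

    <WebRole name="MyWebRole">
      <Startup>
        <!-- Runs install.cmd with elevated rights before the role starts handling traffic. -->
        <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
    </WebRole>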


Eric Nelson (@ericnel) posted A little gem from MPN–FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform to the UK ISV Evangelism blog on 1/27/2011:

I know a lot of technical people who work at partners (ISVs, System Integrators etc.).

I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find some deep and high quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc.

I am one of those people :-)

Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, needs to be a member of MPN – which is free to join).

This is first-class stuff – and represents about 4 hours of material, which is really 8 if you stop and ponder :)

Course Structure

The course is divided into eight modules.  Each module explores a different factor that needs to be considered as part of the migration process.

  • Module 1:  Introduction: 
    • This section provides an introduction to the training course, highlighting the values of the Windows Azure Platform for developers.
  • Module 2:  Dynamic Environment:
    • This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.
  • Module 3:  Local State:
    • This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.
  • Module 4:  Latency and Timeouts:
    • This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio.
  • Module 5:  Transactions and Bandwidth:
    • This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transactions and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.
  • Module 6:  Authentication and Authorization:
    • This session details authentication and authorization protocols within the Windows Azure Platform, covering web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation.
  • Module 7:  Data Sensitivity:
    • This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud.
  • Module 8:  Summary
    • Provides an overall review of the course.


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@bethmassi) warned Visual Studio 2010 SP1 Beta is incompatible with LightSwitch Beta 1 in a 1/28/2011 post to the Visual Studio LightSwitch Team Blog:

Last month Jason Zander announced the availability of Visual Studio SP1 Beta. If you’re working with LightSwitch Beta 1, do not install the Visual Studio SP1 Beta, as these Beta releases are not compatible. If you want to install LightSwitch Beta 1 into Visual Studio 2010 Pro or Ultimate, please be aware that LightSwitch Beta 1 will only work with Visual Studio 2010 RTM (the released version). If you install just LightSwitch Beta 1, please do not install the SP1 Beta after that.

If you’ve already installed SP1 Beta, then to remove it, go to Add/Remove Programs and uninstall it there. If you are still having problems, perform a Repair on LightSwitch Beta 1 and you should be up and running again.

Sorry for the inconvenience but these are Beta releases! ;-)

Better a late warning than no warning.


<Return to section navigation list> 

Windows Azure Infrastructure

Rajani Baburajan reported Microsoft Unveils Cloud Computing Support Policy for Windows Azure, Office 365 in a 1/28/2011 post to TMCNet’s InfoTech Spotlight blog:

Microsoft has unveiled the new cloud computing support policy for Windows Azure and Office 365, according to a report from Network World.

According to the new Online Services policy, cloud customers will be given a minimum of 12 months' notice before Microsoft discontinues an online service or makes any "disruptive" changes or upgrades, the report said.

Microsoft’s efforts are intended to standardize the support life cycle for cloud-based software.

The one-year termination notice is more important for cloud software than for on-premise products, the report said. Customers who buy Microsoft's packaged software can continue to use it indefinitely, even after Microsoft drops support. If Microsoft stops providing an online service, customers will be left with no option but to buy another software solution.

Following the implementation of this policy, Windows Azure and Office 365 customers can plan for changes that would eliminate services, take them temporarily offline or require an overhaul of management practices, the report said.

Microsoft program manager David Carrington calls this concept “disruptive change,” which refers to changes that require significant action, whether in the form of administrator intervention, substantial changes to the user experience, data migration or required updates to client software.

Under the new policy, Microsoft is committed to providing its customers a minimum of 12 months of prior notification before implementing potentially disruptive changes that may result in a service interruption, Carrington added.

Additionally, Microsoft announced it would provide 12 months' notice before terminating any "Business and Developer-oriented Online Service." The company is also committed to preserving customer data for at least 30 days in renewals or migrations that involve customers moving off a service.

Some “less drastic changes” would also receive a year's notice, an example being a "required upgrade to Microsoft Outlook to ensure continued functionality with Microsoft Exchange Hosted Services prior to the change actually occurring with the cloud-based service," according to Carrington.

"If customers have deployed on-premises software that is connecting to a Microsoft Online Service, they may need to implement changes to their on-premises software for it to remain operable with the Online Service, but the timeframes of Mainstream Support and Extended Support for the on-premises software remain intact and unchanged."

In this case, the policy applies to "regular maintenance and service updates," and not security problems, which are fixed as soon as possible, Carrington added.

Microsoft is competing with rival Google Apps by offering a more stable and predictable experience for cloud software customers.

Microsoft's general support policy promises at least 10 years of support for Business and Developer products, including five years of Mainstream Support.

As the adoption of cloud services among small and mid-size businesses (SMBs) continues to rise, Microsoft’s cloud solutions are offering enormous market opportunities for hosting and communications service providers.

Microsoft said that it’s helping service providers take advantage of those opportunities via its software, services and programs, enabling them to become trusted advisors and full-service IT providers to businesses.

Rajani is a contributing editor for TMCnet.


Kevin McLaughlin (@kmclaughlincrn) quoted Microsoft Exec: Tablets May Be Temporary, Cloud Will Last in a 1/28/2011 post to Computer Reseller News:

Microsoft (NSDQ:MSFT) has some impressive Windows 7 tablets coming to market from OEM partners, but to say the software giant has an uphill climb in this market would be an understatement. Its lack of urgency was underscored on Friday when an executive dismissed the notion that Apple (NSDQ:AAPL) has cornered the tablet market.

"Devices are going to go and come," Jean-Philippe Courtois, president of Microsoft International, told Reuters at the World Economic Forum in Davos, Switzerland.

Courtois said cloud computing will be a more important aspect of IT infrastructure going forward. "This is a deep transformation of the scenario in IT over the last decade," Courtois told Reuters.


It makes sense that Courtois would feel this way since Microsoft is spending billions of dollars to build data centers to deliver its cloud services and support its Windows Azure infrastructure-as-a-service platform. However, his comments are somewhat odd given that the emergence of tablets has come in large part from an expansion of cloud and virtualization infrastructure.

"Microsoft has made a huge bet on cloud computing so it is natural for them to see it as the most important thing. The problem I see is that the tablet and the cloud are interconnected tightly -- one doesn't work well without the other," said Clinton Fitch, a Dallas-based Microsoft Windows Mobile MVP (Most Valuable Professional).

Microsoft, whose initial tablet strategy failed to take hold a decade ago, can't be happy about the way the market has embraced Apple's iPad. Microsoft's strategy of shrinking the Windows environment down to the smaller form factor didn't work then and, in the opinion of many partners, isn't likely to threaten the iPad and Android tablets.

Without reading too much into Courtois' comments, it seems that Microsoft believes that real money lies in the returns it expects to reap over the next several years from its massive cloud expenditure. "This gives me the impression that while Microsoft is working hard to get into the tablet market, they see it as a battle that will be difficult to win with the iPad so well established and the Galaxy Tab making big gains," said Fitch.

Allen Nogee, an analyst with In-Stat in Scottsdale, Ariz., has a similar impression. "There is no question that Microsoft has had a difficult time in positioning themselves [in the tablet space]," said Nogee. "It sounds like they are moving on, hoping that their work in cloud will eventually pay off."

However, Chris De Herrera, a Los Angeles-based Windows Mobile MVP and editor of the Pocket PC FAQ, still thinks there's time for Microsoft's Windows-tablets-strategy to win out, particularly because current tablets haven't been optimized for the Web or cloud computing.

"In fact, no changes have been made to the HTML standard, or how we use cloud computing, that are specific to tablets," he said.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Stephen O’Grady (@sogrady) posted Public vs Private Cloud Adoption: The Economics of Accessibility and Sustainability to his tecosystems (Redmonk) blog on 1/28/2011:

Attempts to address questions of private vs public cloud adoption necessarily involve detailed examinations of the economics. Obscured in these discussions, however, is the influence time has on perceptions of the economic costs and the theoretical return. Current return versus future costs is an important equation, and how it is parsed depends heavily on the availability of local resources.

The fundamental value proposition of the public cloud is accessible economics, as we discussed with Marten Mickos recently. As Infochimps’ Flip Kromer puts it, “EC2 means anyone with a $10 bill can rent a 10-machine cluster with 1TB of distributed storage for 8 hours.” The pricing established by the original market entrant, Amazon, has served two purposes. First, it has made resources available to even individual developers, resources that would have been out of reach absent the pay-per-use model. Second, and perhaps more importantly, Amazon imposed a ceiling on cloud pricing; a ceiling which it continues to lower as it is able to leverage larger economies of scale. Whatever the larger ambitions of other systems vendors with respect to the cloud market, Amazon defined the context in which they all must compete, at a price point sufficiently low that the majority have chosen not to.

Developers have flocked to Amazon’s platform [coverage] not because it is the lowest cost option, but rather because it is the most accessible. From an economic perspective, they’re trading up front capital expenses for potentially higher ongoing operational costs. This trade is most attractive when resources – hardware and people – are few. When a developer can’t outsource the task of hardware acquisition and setup, and wouldn’t have the hosting facilities even if they could, the cloud is a compelling alternative.

It is less apparent, however, to larger entities that the public cloud economics compare favorably to their internal operational costs. Setting aside the question of whether their conclusions are sound, discussions with those implementing private clouds as an alternative to public implementations focus on the sustainable aspect of the economics. The accessibility of public cloud services is of less value to larger institutions, both because their expectations in terms of deployment speed are very modest and because they are typically not resource poor in either available hardware or IT staff.

Instead, larger enterprises focus on the operational margins of public cloud. Amazon’s pricepoint has historically commanded a margin above the cost of traditional hosting suppliers [coverage]. Given that larger enterprises generally argue that they can deliver infrastructure at a lower cost than traditional suppliers, the economics tilt even more strongly against large scale public cloud implementations.

The benefits of the public cloud remain of interest, however. What enterprise would not want a more elastic, easily provisioned infrastructure? Private cloud is the inevitable compromise. Promising feature benefits similar to those available on public infrastructure but with what is perceived by enterprises to be a more sustainable economic model, the private cloud is an increasingly attractive proposition for large enterprises.

Supporters of one or the other approaches may question the substance of the above characterizations, but these are the behaviors we have observed repeatedly. If you’re selling private cloud solutions, then, you’d do well to understand the appeal of accessible economics. Conversely, public cloud vendors may want to more clearly articulate the sustainability of their economics over time.

Disclosure: Amazon is not a RedMonk client.

See the related item, Jo Maitland (@JoMaitlandTT) asked Cloud computing costs how much? in a 1/28/2011 post to SearchCloudComputing.com’s Cloud Computing News blog, in the Other Cloud Computing Platforms and Services section near the end of this post.


<Return to section navigation list> 

Cloud Security and Governance


No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

No significant articles today.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) claimed Amazon S3 - Bigger and Busier Than Ever in a 1/28/2011 post to the Amazon Web Services blog:

The number of objects stored in Amazon S3 continues to grow:

Here are the stats, measured at the end of the fourth quarter of each year:

  • 2006 - 2.9 billion objects
  • 2007 - 14 billion objects
  • 2008 - 40 billion objects
  • 2009 - 102 billion objects
  • 2010 - 262 billion objects

The peak request rate for S3 is now in excess of 200,000 requests per second.

If you want to work on game-changing, world-scale services like this, you should think about applying for one of the open positions on the S3 team:

Note: The original graph included an extraneous (and somewhat confusing) data point for Q3 of 2010. I have updated the graph for clarity.


Alex Williams (@alexwilliams) explained Why the Telcos Will Go On a Spending Spree in a 1/28/2011 post to the ReadWriteCloud blog:

The news came yesterday that Verizon is buying Terremark, a cloud services provider. The deal is worth a reported $1.4 billion.

We've seen a lot of talk about possible acquisition targets since the news broke. But it feels more interesting to explore why the market conditions are right for telecommunications company executives to continue spending their mergers-and-acquisitions budgets on infrastructure providers and cloud management companies.

Chuck Hollis is vice president of global marketing and chief technology officer at EMC Corporation. It's clear to him that the acquisition is evidence of the move toward providing information technology as a service more than anything else:

If you believe in the secular trend that -- over time -- more IT will be delivered as a service vs. consumed in a traditional fashion, you quickly realize that telcos can have a compelling position.

They've got lots and lots of pipe. They know how to deliver a related form of service -- communications. They know how to price their offerings and bill for them.

Their strategic motivations are usually clear as well. More ordinary network services are quickly becoming commoditized. There's only so much content you can sell people. And, before long, you go looking for the next big market to attack.
Indeed, early on, many people thought that IT-as-a-service would go to the telco carriers, and that would be that.

Stacey Higginbotham of GigaOm writes that telecommunications companies see the ability to manage customers' networks and their infrastructure.

That's a new play that correlates to the deeper interest that the telecommunications companies have in extending the data they offer through APIs.

To offer the computing, the network and also data capabilities means telecommunications providers need to invest in cloud management technologies and new routing capabilities for data services. We'll see how these kinds of companies are viewed as telecommunications executives target more companies for acquisition in the year ahead.


Lisa Pierce posted Verizon-Terremark a Win-Win to Saugatuck Technology’s Lens 360 blog on 1/28/2011:

Yesterday Verizon and Terremark announced a definitive agreement under which Verizon will acquire and operate Terremark as a wholly owned subsidiary under the leadership of Terremark’s current management team.

This announcement is a win-win for both companies, and is external confirmation that the Cloud Computing market is a key propellant of future growth in the information technology sector (see “Key SaaS, PaaS, and IaaS Trends Through 2015 – Business Transformation Via the Cloud”, 834SSR, 17Jan2011).  

At a megatrends level, the Terremark acquisition is clear-cut evidence that while Verizon appreciates the strategic value of IaaS, it is also objective enough to know when to go outside to acquire the necessary competency (vs. attempting to develop it organically). This is not the first time Verizon has acquired strategically critical assets—its acquisition of MCI Worldcom, which propelled it from a tier 3/4 enterprise WAN services provider to tier 1, is one example; its acquisition of Cybertrust, which helped create much of the foundation necessary to fill a long-gaping hole in the company’s security portfolio, is another.

As Verizon has recently demonstrated through these acquisitions, and its aggressive stance against AT&T Mobility in the rollout of LTE and in competition for Apple iPhone and iPad users, it has both the will and the financial resources to make the level of commitments necessary to propel it into the role of a major contender, with a strong chance of becoming a key global market leader across several high-growth, high-margin technology services sectors.

Unlike any of its major competitors, Verizon is the only one headquartered in a city with true international roots - New York, one famous for its level of drive and competitive spirit.  I’ve expressed concern before about how the choice of headquarters/senior leaders’ location can drive corporate culture.  Now Verizon’s major carrier rivals, headquartered in other parts of the US, can begin to fully appreciate how much a difference in culture can impact the willingness and ability to seize the moment, and by so doing, to shape the trajectory of future revenues.

Saugatuck projects that by YE 2014, 50 percent or more of NEW enterprise IT spend will be Cloud-based or Hybrid (i.e., traditional on-premises combined with Cloud-based). And, by YE 2014, 65 percent or more of NEW enterprise IT workloads will be Cloud-based or Hybrid. The Terremark acquisition highlights Verizon’s strategy to aggressively pursue the Public Cloud and Hosted Private Cloud segments of the Cloud Computing market.

Verizon’s proposed acquisition of Terremark is good for Terremark because:

  • Unlike many mega-carriers that acquire a company and then essentially destroy it via ‘integration’, Verizon plans to continue to allow Terremark to operate as a separate Business Unit, complete with its name and current management team.  This will allow Terremark to focus on its core competencies. 
  • Verizon’s global WAN of high-speed, low-latency switched Ethernet services, and far-reaching MPLS ports, can provide Terremark with a very competitive bundled (network+cloud) price.
  • Verizon’s extensive domestic and international central office network can also provide Terremark with as many new data center locations as it could possibly need.

In addition to the strategic competitive benefits discussed above, the Terremark acquisition is good for Verizon because:

  • It further legitimizes Verizon’s cloud ambitions and introduces Verizon to new customers, both at the company and department level.  Previous Saugatuck research shows that data center managers at medium and large businesses rarely consider traditional telecommunications carriers as likely providers of Cloud IT solutions (see “To Compete, Telcos Need to Shape Cloud IT Sales Strategies”, MKT798, 22Oct2010)
  • It deepens or extends relationships with important Terremark partners, like CSC and VMWare, to Verizon.
  • It can help to solidify and deepen relationships both companies have with common customers, such as the US GSA. 

Lisa is a is a Strategy Consultant and Associate Research Analyst for Saugatuck Technology. Her area of focus is cloud-based services offered by telecommunications providers.


Jo Maitland (@JoMaitlandTT) asked Cloud computing costs how much? in a 1/28/2011 post to SearchCloudComputing.com’s Cloud Computing News blog:

    Weekly cloud computing update

    image Say WHAT? Private cloud systems like vBlock from EMC/Cisco/VMware and IBM CloudBurst can cost anywhere from $250,000 to a cool million, depending on how much software is included. Sounds like highway robbery. It has to be cheaper to build a commodity cloud like Amazon Web Services, sourcing components directly, right?

    image Couldn’t you just buy servers from Quanta Computer in Taiwan, or one of the many other original design manufacturers that sell in bulk to Google, Facebook, Amazon, Dell, HP etc.; load up on open source Xen hypervisor technology and open source monitoring and management software from Nagios, OpenTSDB or Zabbix; and Bob’s your uncle, you’ve got a cheap private cloud?

    Well, yes and no. As always, open source software needs a lot of customization to really make it work at scale. So even though the acquisition costs are much less, it would probably be painful to support. Just look at how fast Amazon is hiring engineers to support its cloud services; the company’s job page has over 200 openings in North America alone.

    Then there’s Moore’s Law, which continues to drive down the cost of hardware. Industry experts say that big names like HP, Dell and IBM are selling hardware at ridiculously low prices -- less than $3,000 for a current-generation dual-socket box with 16 GB of RAM, a battery-backed array controller, 146 GB SAS disks and out-of-band management.

    So why the exorbitant mark-up on the "cloud in a box" appliances? (And I haven’t even mentioned Oracle’s much-hyped Exalogic, which lists for just over a million). Integration work, maybe? That takes some time, but how well integrated are vBlock and CloudBurst, anyway? Do they work with everything else in your IT shop?

    Could it be that the vendors are just throwing a number out there and seeing if anyone is crazy enough to pay? Is it the notion of seeing what the market will bear and adjusting accordingly? In other words, the price has nothing to do with actual system costs. Cloud buyers, beware.

    Jo is the Senior Executive Editor at SearchCloudComputing.com.

    Full disclosure: I am a paid contributor to SearchCloudComputing.com.


    Yury Izrailevsky explained the current state of NoSQL at Netflix in a 1/28/2011 post to the Netflix Tech Blog:

    This is Yury Izrailevsky, Director of Cloud and Systems Infrastructure here at Netflix. As Netflix moved into the cloud, we needed to find the appropriate mechanisms to persist and query data within our highly distributed infrastructure. Our goal is to build fast, fault tolerant systems at Internet scale. We realized that in order to achieve this goal, we needed to move beyond the constraints of the traditional relational model.

    In the distributed world governed by Eric Brewer’s CAP theorem , high availability (a.k.a. better customer experience) usually trumps strong consistency. There is little room for vertical scalability or single points of failure. And while it is not easy to re-architect your systems to not run join queries, or not rely on read-after-write consistency (hey, just cache the value in your app!), we have found ourselves braving the new frontier of NoSQL distributed databases.

    Our cloud-based infrastructure has many different use cases requiring structured storage access. Netflix is all about using the right tool for the job. In this post, I’d like to touch on the reasons behind our choice of three such NoSQL tools: SimpleDB, Hadoop/HBase and Cassandra.

    Amazon SimpleDB was a natural choice for a number of our use cases as we moved into AWS cloud. SimpleDB is highly durable, with writes automatically replicated across availability zones within a region. It also features some really handy query and data format features beyond a simple key/value interface, such as multiple attributes per row key, batch operations, consistent reads, etc. Besides, SimpleDB is a hosted solution, administered by our friends at AWS. We love it when others do undifferentiated heavy lifting for us; after all, this was one of the reasons we moved to the cloud in the first place. If you are accustomed to other AWS products and services, using SimpleDB is… well, simple – same AWS account, familiar interfaces, APIs, integrated support and billing, etc.

    image For our systems based on Hadoop, Apache HBase is a convenient, high-performance column-oriented distributed database solution. With its dynamic partitioning model, HBase makes it really easy to grow your cluster and re-distribute load across nodes at runtime, which is great for managing our ever-growing data volume needs and avoiding hot spots. Built-in support for data compression, range queries spanning multiple nodes, and even native support for distributed counters make it an attractive alternative for many of our use cases. HBase’s strong consistency model can also be handy, although it comes with some availability trade offs. Perhaps the biggest utility comes from being able to combine real-time HBase queries with batch map-reduce Hadoop jobs, using HDFS as a shared storage platform.

    Last but not least, I want to talk about our use of Cassandra. Distributed under the Apache license, Cassandra is an open source NoSQL database that is all about flexibility, scalability and performance. DataStax, a company that professionally supports Cassandra, has been great at helping us quickly learn and operate the system. Unlike a distributed database solution using e.g. MySQL or even SimpleDB, Cassandra (like HBase) can scale horizontally and dynamically by adding more servers, without the need to re-shard – or reboot, for that matter. In fact, Cassandra seeks to avoid vertical scalability limits and bottlenecks of any sort: there are no dedicated name nodes (all cluster nodes can serve as such), no practical architectural limitations on data sizes, row/column counts, etc. Performance is strong, especially for the write throughput.

    Cassandra’s extremely flexible data model deserves a special mention. The sparse two-dimensional “super-column family” architecture allows for rich data model representations (and better performance) beyond just a simple key-value look up. And there are no underlying storage format requirements like HDFS; all you need is a file system. Some of the most attractive features of Cassandra are its uniquely flexible consistency and replication models. Applications can determine at call level what consistency level to use for reads and writes (single, quorum or all replicas). This, combined with customizable replication factor, and special support to determine which cluster nodes to designate as replicas, makes it particularly well suited for cross-datacenter and cross-regional deployments. In effect, a single global Cassandra cluster can simultaneously service applications and asynchronously replicate data across multiple geographic locations.

    The reason we use multiple NoSQL solutions is that each one is best suited for a specific set of use cases. For example, HBase is naturally integrated with the Hadoop platform, whereas Cassandra is best for cross-regional deployments and scaling with no single points of failure. Adopting the non-relational model in general is not easy, and Netflix has been paying a steep pioneer tax while integrating these rapidly evolving and still maturing NoSQL products. There is a learning curve and an operational overhead. Still, the scalability, availability and performance advantages of the NoSQL persistence model are evident and are paying for themselves already, and will be central to our long-term cloud strategy.

    Building the leading global content streaming platform is a huge challenge. NoSQL is just one example of an exciting technology area that we aggressively leverage (and in the case of open source projects, contribute back to). Our goal is infinite scale. It takes no less than a superstar team to make it a reality. For those technology superstars out there: Netflix is hiring (http://jobs.netflix.com).

    Adrian Cockcroft, a Netflix Cloud Architect, added the following in a comment:

    In other forums, there has already been quite a lot of more detailed information posted about Netflix use of NoSQL and cloud architectures in general. For example check out this SlideShare presentation : Netflix's Transition to High-Availability Storage http://slidesha.re/9Hn9X5

    There are videos on the QConSF conference web site.

    We have a lot of production experience with SimpleDB, and are at an earlier stage of development on other NoSQL platforms, so, yes, this is a call to recruit developers, we have a lot still to do...

    MongoDB is in use at Netflix for a non-customer-facing project that needed a good way to integrate many data feeds in JSON and XML, as it is good for querying structured documents.


    The Geek and Poke Blog suggests Leveraging the NoSQL Boom in its How to Write a CV cartoon strip of 1/28/2011:



    Derrick Harris listed 5 Cloud Software Vendors Dell Should Buy in a 1/28/2011 post to GigaOm’s Structure blog:

    Michael Dell is at The World Economic Forum this week talking about Dell having acquisition plans in “software, data centers, cloud computing, storage and virtualization,” which has speculators venturing guesses as to what’s on its shopping list. Timothy Prickett Morgan gave his thoughts in The Register, dropping companies from Brocade to Cray to Rackspace as possibilities, but I don’t think Dell will make a system-centric play this time around. There are two trends right now – cloud computing and big data – that are dependent on software and services, and I think Dell gets this, if only because the company knows it doesn’t want to go blow-to-blow with IBM, HP and Cisco on high-end systems. It has already shown as much with its recent purchases of Scalent and Boomi.

    Here are the companies I think Dell should consider buying this time around. They’re not huge companies by any stretch of the imagination, but they would provide very relevant software products for advancing Dell’s mission of adding value to the growing number of servers it’s selling:

    Aster Data Systems

    image Thus far, Dell has about the same in-house big data prowess as does HP, which is to say none at all. But Dell does resell Aster Data Systems’ nCluster massively parallel analytic database as a part of the Dell Cloud Solution for Data Analytics. That’s why I think Aster Data would be a natural fit for Dell: It already knows the product and the business, and it lets Dell keep selling commodity boxes while letting the software do the work. Dell pushes openness in terms of hardware choice, so if it wants to get into database space, buying a company with an appliance business might not make too much sense. Aster Data won’t come cheap, with a rumored valuation easily north of $100 million, but it should cost less than the $300 million-plus EMC reportedly paid for Greenplum, and certainly less than the $1.7 billion IBM paid for Netezza.

    Joyent

    image Joyent would let Dell kill three birds with one stone, as it encompasses software, cloud computing and data centers. Furthermore, as with Aster Data, Dell already has an OEM deal with Joyent through which it resells Joyent’s SmartDataCenter software as the Dell Cloud Solution for Web Applications. As I’ve written before, Dell has formed a fairly holistic portfolio of cloud offerings, of which Joyent is a key part, so closing the loop and bringing that software in-house makes sense. It also would be good for Joyent, which would have a larger channel and sales team through which to sell its software. Of course, Joyent’s business also extends into cloud hosting, which would get Dell into the service-provider business, as some have speculated it wants to do, without buying Rackspace (which could be a complex integration) or relying on the Windows Azure Appliance.

    DynamicOps

    image DynamicOps presents a similar situation as both Aster Data and Joyent, because Dell also has an OEM deal with it, although DynamicOps’ deal with Dell definitely is more limited in scope. Presently, its cloud-management software provides the self-service capability for Dell’s Virtual Integrated System software package, which is Dell’s attempt to give customers the converged infrastructure experience of managing computing, storage and networking from one place without forcing them to buy expensive vertically integrated systems such as Cisco’s UCS or HP’s BladeSystem Matrix. DynamicOps also sells virtualization management software, which would give Dell customers that aren’t ready for the cloud a more down-to-earth option.

    Univa

    image Univa could be a good choice, especially if Dell wants to provide its Data Center Solutions customers, who buy large quantities of customized hyperscale servers from Dell, with tools to manage their scale-out data centers and clusters. Univa is a newly technology-rich company thanks to its forking of the Sun Grid Engine software, and it already has an Austin, Texas office as a result of its purchase of United Devices a few years ago. There are other options in this space – Platform Computing (which Morgan suggested) and Adaptive Computing – come to mind, but I think Univa’s Austin roots and relatively low price will make it the most-appealing choice of the three HPC vendors that have expanded into the cloud-data-center-management space.

    Appistry

    image As with the other four suggestions, Appistry is another software company that’s a perfect complement for Dell’s scale-out-focused Data Center Solutions group. Appistry’s CloudIQ Platform is all about achieving high application performance across a distributed set of commodity servers, and it already has established a fairly strong customer base across the intelligence and defense industries. The companies already have partnered, in fact, on a petabyte-scale Private Storage Cloud that combines Appistry’s CloudIQ Storage software with Dell hardware. CloudIQ Storage would give Dell a differentiating story for customers, as it focuses on not just on scaling out, but also on placing data near computing logic to ensure that storage doesn’t slow application performance as the numbers of servers grows.

    Image courtesy of Flickr user tanakawho.



    <Return to section navigation list> 
