Wednesday, February 16, 2011

Windows Azure and Cloud Computing Posts for 2/15/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate to.


Azure Blob, Drive, Table and Queue Services

Dinesh Haridas warned about using Windows Azure Drives with Full IIS in SDK 1.3 in a 2/15/2011 post to the Windows Azure Storage Team blog:

With the Windows Azure SDK 1.3 it is now possible to run web roles with Full IIS. If you are unfamiliar with Full IIS, you might want to look at this blog post, which captures the differences between Full IIS and Hosted Web Core (HWC), the only option available with prior SDKs. Additionally, you may find this blog post, which drills into the impact on the storage configuration setting publisher, useful too. The content from those blogs provides context for the rest of this article.

In this post we’ll discuss coding patterns for Windows Azure Drive APIs recommended for use with Full IIS. All of these guidelines should work for Hosted Web Core as well. In this context we’ll also discuss one known issue with .NET 4.0. In addition, we call out issues that have surfaced with SDK 1.3 and workarounds for them.

Perform Drive Initialization in the Global.asax file

In Full IIS, the OnStart() method in the web role and Page_Load() run in different processes. Consequently, a drive letter saved to a global variable in the OnStart() method is unavailable to the Page_Load() method. To address this, we recommend that applications perform all drive initialization actions, including setting up the cache and creating and mounting the drives, in the Application_Start() method of the Global.asax file. This approach is also suitable for Hosted Web Core.
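
As a rough C# sketch (not code from the original post), drive initialization in Application_Start() might look like the following; the "DataConnectionString" setting, the "AzureDriveCache" local resource, and the container/blob names are assumed, not taken from the article:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class Global : System.Web.HttpApplication
{
    // Drive letter consumed later by Page_Load(); both run in the same process.
    public static string DriveLetter;

    protected void Application_Start(object sender, EventArgs e)
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));

        // Initialize the drive cache from a local resource declared in the service definition.
        LocalResource cache = RoleEnvironment.GetLocalResource("AzureDriveCache");
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        // Create the backing page blob container, create the drive if needed, then mount it.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        blobClient.GetContainerReference("drives").CreateIfNotExist();
        CloudDrive drive = account.CreateCloudDrive(
            blobClient.GetContainerReference("drives")
                      .GetPageBlobReference("mydrive.vhd")
                      .Uri.ToString());
        try { drive.Create(64); } catch (CloudDriveException) { /* drive already exists */ }
        DriveLetter = drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
    }
}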

Caveats for .Net 4.0

With .NET 4.0 there is a known issue that will cause all Storage Client API calls for Blobs, Tables and Queues to fail if they are invoked in Application_Start(). The exception surfaced in this case is “HttpException (0x80004005): Response is not available in this context”, thrown from Global.Application_Start on .NET 4.0.

This issue will be addressed in an upcoming service pack for .NET 4.0. Until then, one way to mitigate this issue is to move all Windows Azure Blob, Table and Queue API calls to some other location, such as the OnStart() method or another method in the Global.asax file, depending on when the calls need to be executed. It should be noted that the OnStart() method always executes before Application_Start().
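
For example (a minimal sketch, not from the original post), the storage calls could live in the web role’s OnStart(); the setting and container names here are hypothetical:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Storage Client calls are safe here; OnStart() runs before Application_Start().
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        account.CreateCloudBlobClient()
               .GetContainerReference("setup")
               .CreateIfNotExist();
        return base.OnStart();
    }
}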

You might also choose to stay with IIS Hosted Web Core until the issue is addressed in the next .NET 4.0 service pack. You can disable Full IIS in an existing project by commenting out the Sites element in the ServiceDefinition.csdef file.

In the Storage Emulator with Full IIS, Drives are not visible across user contexts

In the development environment, a drive that is mounted in one user context is not visible in another user context, and the OnStart() and Application_Start() methods do not run in the same user context. For example, if a drive is mounted in the OnStart() method and the drive letter is passed to the IIS process through some IPC means, the simulated drive will be unavailable in the IIS process when running in the development environment.

When running in the cloud, mounted drives are visible across the entire role instance. Note that with IIS HWC, Page_Load() and OnStart() always run in the same process, and consequently this is not an issue.

Storage Emulator workarounds for Full IIS & SDK 1.3

There are a few issues that have surfaced with SDK 1.3 and Full IIS that we’ll cover in this section. It should be noted that none of these issues occur with IIS Hosted Web Core.

  1. When running in the Windows Azure Emulator, ERROR_UNSUPPORTED_OS is returned from CloudDrive.InitializeCache() when called from Application_Start().
    To work around this issue, set the environment variable AZURE_DRIVE_DEV_PATH to a suitable directory on your system. This directory is used by the Azure Storage Emulator to store Windows Azure Drives.
    To do this, right click on “My Computer”, choose Properties, then Advanced System Settings, then Environment Variables, click “New…” under “System Variables”, name the variable AZURE_DRIVE_DEV_PATH, and set the value to a directory on your machine (e.g. “C:\AzureDriveDevPath”). You should create this path yourself so that any process you create can access it. To propagate this variable to the appropriate processes, you need to reboot your machine.
  2. When running with Full IIS on the Windows Azure Emulator on x86 (not x64), the following error is surfaced “Could not load file or assembly ‘mswacdmi.dll’ or one of its dependencies.”
    The workaround is to add the path C:\Program Files\Windows Azure SDK\v1.3\bin\runtimes\storage\simulation\x86 to your system path environment variable. After adding this path, reboot your machine.
  3. When running with Full IIS on the Windows Azure Emulator and calling CloudDrive.Snapshot(), the following error is surfaced: ‘Unknown Error HRESULT=80004005’.
    This issue crops up because the Storage Emulator uses ‘backup’ semantics to make snapshots, but Full IIS runs as the ‘NETWORK SERVICE’ user, which does not have ‘backup’ rights.
    To work around this problem, you can add ‘NETWORK SERVICE’ to your ‘Backup Operators’ group. To do so, right-click on “Computer” or “My Computer” and choose “Manage”. Go to “Local Users and Groups”, click on “Groups” and then double-click on “Backup Operators”. Push “Add”, then “Locations”, select your computer at the top, and push “OK”. Now type ‘NETWORK SERVICE’ without the quotes into the box, and push “OK”. Finally, push “OK” again. You should remove ‘NETWORK SERVICE’ from ‘Backup Operators’ when you no longer need this workaround.


Doug Rehnstrom posted Windows Azure Training Series – Understanding Azure Storage to the Learning Tree blog on 2/15/2011:

Windows Azure Storage Choices

There are a couple of common ways of storing data when using Microsoft Windows Azure. One is SQL Azure, which is a cloud-based version of Microsoft SQL Server. The other is Azure storage. SQL Azure will be familiar to those who already understand relational databases. It may also make moving an application to the cloud easier, if that application already uses SQL Server.

Azure storage has some advantages as well. First, it is inexpensive. Azure storage costs about 15 cents per gigabyte per month, compared to $10 per gigabyte per month for SQL Azure. It can also be very large. Depending on your instance size, it can be up to 2 terabytes. It is also cross-platform, and accessed using standard internet requests.

Types of Windows Azure Storage

There are four types of Azure storage: blob storage, table storage, queue storage and Azure drives.

Blob storage is used to store binary data. This could be pictures, videos, or any other binary files.

Table storage is used to store structured data. It is similar to a database, but not relational. Table storage is a convenient way of saving business entities in an object-oriented program. In many ways it is simpler than relational storage.

Queue storage provides a simple messaging system that allows different Azure roles to communicate. For example, a user may request a report to be run using an application running in a Web role. That request could be sent to an Azure queue. Later a worker role can process the request, and then email the completed report to the user.
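
As a rough sketch of that pattern (the queue name, message format and helper class are invented for illustration), using the StorageClient library in C#:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class ReportQueue
{
    private static CloudQueue GetQueue()
    {
        // Development storage here; use CloudStorageAccount.Parse(...) for a real account.
        var account = CloudStorageAccount.DevelopmentStorageAccount;
        var queue = account.CreateCloudQueueClient().GetQueueReference("report-requests");
        queue.CreateIfNotExist();
        return queue;
    }

    // Called from the web role when the user asks for a report.
    public static void RequestReport(int customerId)
    {
        GetQueue().AddMessage(new CloudQueueMessage("report|" + customerId));
    }

    // Called from the worker role's polling loop.
    public static void ProcessNextRequest()
    {
        CloudQueue queue = GetQueue();
        CloudQueueMessage msg = queue.GetMessage();
        if (msg != null)
        {
            // ... run the report and e-mail it to the user ...
            queue.DeleteMessage(msg);
        }
    }
}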

Azure drives allow storage to be accessed using standard NTFS APIs. This could be particularly useful if you have an application that already writes to a hard disk and you want to migrate it to the cloud.

Accessing Azure Storage

Azure storage can be accessed using a REST-based API via HTTP. This means storage can be used from any application, whether it is written in .NET, Java, Python, JavaScript or something else.

If you’re using .NET though, accessing storage is made easier using the Azure SDK. If you don’t already have it, go to this link, http://www.microsoft.com/windowsazure/windowsazure/, and then click on the “Get tools and SDK” button. You might also like to read this article, Windows Azure Training Series – Setting up a Development Environment for Free.

Once you have the SDK installed, set a reference to Microsoft.WindowsAzure.StorageClient.dll, and you’re ready to go.
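
A first blob write/read with the StorageClient library looks roughly like this C# sketch (it uses the local development storage account; the container and blob names are arbitrary):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class BlobHelloWorld
{
    static void Main()
    {
        // Development storage; swap in CloudStorageAccount.Parse("...") for a real account.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        CloudBlobClient client = account.CreateCloudBlobClient();
        CloudBlobContainer container = client.GetContainerReference("documents");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference("hello.txt");
        blob.UploadText("Hello from Windows Azure storage");

        string roundTripped = blob.DownloadText();  // "Hello from Windows Azure storage"
    }
}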

In later posts, we’ll take a look at some code to write to Azure storage. In the meantime, you might like to read the prior posts in this series.

You might also like to come to Learning Tree course 2602, Windows Azure Platform Introduction: Programming Cloud-Based Applications.


<Return to section navigation list> 

SQL Azure Database and Reporting

Dhananjay Kumar posted WCF Data Service with SQL Azure Tutorial to the DotNetSpark blog on 2/15/2011:

In this tutorial I will describe exposing a cloud (SQL Azure) database as a WCF Data Service.

There are two main steps involved:

  1. Creating a local database and migrating it to SQL Azure
  2. Exposing the SQL Azure database as a WCF Data Service

Step 1: Creating a local database and migrating it to SQL Azure
Creating the Database
The first step is to create a database. We are going to use a School database. The script for a sample School database can be copied from here.
Generate a Database Script for SQL Azure
Right-click on the School database and select Tasks. From Tasks, select Generate Script.

WCFDataserviceSQlAzure1.gif
From the pop-up, select the Set Scripting option.
WCFDataserviceSQlAzure2.gif
Give the file name by selecting the Save to file option.
The main thing to notice here is that we need to change an advanced setting. For that, click on Advanced options.
WCFDataserviceSQlAzure3.gif
In the types of data to script setting, select the option to script both schema and data.
WCFDataserviceSQlAzure4.gif
After that, click Next and then Finish. You will see that a SQL file has been created; we will be using this script to migrate our in-house School database to SQL Azure.
Create the School Database in SQL Azure
Log in to the SQL Azure portal with your Live ID credentials.
https://sql.azure.com/
Click on the SQL Azure tab. You will see the project that you created for yourself.
WCFDataserviceSQlAzure5.gif
Click on the project. In my case the project name is debugmode. After clicking on the project, you will see all the databases created in your SQL Azure account.
WCFDataserviceSQlAzure6.gif
Here in my account there are two databases already created: the master and student databases. The master database is the default database created by SQL Azure for you.
Click on Create Database.
WCFDataserviceSQlAzure7.gif
Give the name of your database. Select Web as the edition and specify the maximum size of the database.
WCFDataserviceSQlAzure8.gif
You can also select Business as the edition.
After that, click Create. You can see on the Databases tab that the Demo1 database has been created.
WCFDataserviceSQlAzure9.gif
Run the Script in SQL Azure
Open SQL Server Management Studio.
WCFDataserviceSQlAzure10.gif
You will get the Connect to Server dialog box. Click Cancel on that.
WCFDataserviceSQlAzure11.gif
After canceling the dialog box, click New Query at the top left.
WCFDataserviceSQlAzure12.gif
On clicking New Query, you will get the Connect to Server dialog box again.
WCFDataserviceSQlAzure13.gif
Here you need to provide the server name and login credentials of your SQL Azure account.
To find the database server name of your SQL Azure account, log in to the Windows Azure portal with your Live ID credentials and then click on the SQL Azure tab.
WCFDataserviceSQlAzure14.gif
You will get the server name in the form of
abc.database.windows.net, where abc is the name of your SQL Azure server. We need to provide this server name in the local SQL Server Management Studio.
WCFDataserviceSQlAzure15.gif
Make sure to select SQL Server Authentication and provide the login user name and password of your SQL Azure database portal.
Before clicking Connect, click on Options.
WCFDataserviceSQlAzure16.gif
From Options, select the School database.
WCFDataserviceSQlAzure17.gif
Run the Script
Once you have successfully connected to the School database in SQL Azure, copy the script and run it as shown below.
WCFDataserviceSQlAzure18.gif
After the script runs successfully, run the command below and all the table names will be listed.
WCFDataserviceSQlAzure19.gif
In this way you have successfully migrated the database to SQL Azure.
Step 2: Exposing the SQL Azure Database as a WCF Data Service
Create a Web Application
Create a new project and select the ASP.NET Web Application project template from the Web tab. Give a meaningful name to the web application.
WCFDataserviceSQlAzure20.gif
Create a Data Model
We can create a data model, which can be exposed as a WCF Data Service, in three ways:
  1. Using an ADO.NET Entity Data Model.
  2. Using a LINQ to SQL class.
  3. Using a custom data model.
For our purpose, I am going to use an ADO.NET Entity Data Model to create the data model. To create the entity model:
  1. Right click on web application and add a new item
  2. Select ADO.Net Entity model from Data tab.
    WCFDataserviceSQlAzure21.gif
  3. Since we already have tables in the SQL Azure database, we are going to choose the option to generate the model from the database.
    WCFDataserviceSQlAzure22.gif
  4. Choose a new connection.
    WCFDataserviceSQlAzure23.gif
    After clicking on New Connection, this is the most important step; we need to take extra care here.
    Provide the information as below.
    WCFDataserviceSQlAzure24.gif
    Click on Test Connection to verify that the connection can be established. After that you will be prompted that the connection string contains sensitive data and asked whether you want to keep it in the configuration file or manage it through your program. Which to use is your choice.
    WCFDataserviceSQlAzure25.gif
    After selecting your option, click on the Next button; you will then see all the tables, views and stored procedures available as part of a data model for the WCF Data Service.
    WCFDataserviceSQlAzure26.gif
  5. Select the tables, views and stored procedures from the database that you want to make part of your data model.
Creating WCF Data Service
  1. Right click on Web Application project and add a new item.
  2. Select WCF Data Service from Web tab. Give any meaningful name. I am leaving the default name here.
    WCFDataserviceSQlAzure27.gif
  3. After adding the WCF Data Service, we can see that a service file with the extension .svc has been added to Solution Explorer.
The first thing to do is to provide a data source name. To do that, uncomment the first commented line and give the data source name. In our case this is the name of the entity model we created in the previous step: SchoolEntities.
WCFDataserviceSQlAzure28.gif
Now we need to set access rules for the entities or entity sets. Since we have only one table, we can either use the name of the table explicitly or, if we want to set the same access rule for all the tables in the data model or data source, we can use *.
WCFDataserviceSQlAzure29.gif
So we are setting an access rule that allows all operations on the entities in the data source.
So finally the svc file looks like:
WCFDataserviceSQlAzure30.gif
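
Since the screenshot is not reproduced here, the finished .svc code-behind typically ends up looking something like the following C# sketch; the class name is whatever Visual Studio generated for your service (an assumption), and SchoolEntities is the entity model created earlier:

using System.Data.Services;
using System.Data.Services.Common;

public class WcfDataService1 : DataService<SchoolEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Allow all operations on every entity set in the data source ("*").
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}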
Run the WCF Data Service
Just press F5 to run the WCF Data Service. The Data Service will be hosted in the default ASP.NET development server.
When it runs, you will see all the tables listed there.
WCFDataserviceSQlAzure31.gif
Append People to the URL and you will get all the records from the People table.
WCFDataserviceSQlAzure32.gif
Note: If your browser is not showing the expected result, make sure the browser's feed reading view is turned off. To do that in IE, go to the Tools menu, select Internet Options and then the Content tab.
WCFDataserviceSQlAzure33.gif
WCFDataserviceSQlAzure34.gif
So we have exposed data from the cloud using a WCF Data Service. Now any OData client can consume the data from the cloud by consuming the WCF Data Service.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Beth Massi (@bethmassi) posted  Fun with OData and Windows Phone 7 on 2/16/2011:

Tonight I’m speaking in San Francisco on one of my favorite topics, OData. Here’s the info: Creating and Consuming OData Services for Business Applications. (By the way, the Open Data Protocol (OData) is a protocol for querying and updating data over the web. If you’re not familiar with OData I encourage you to check out www.odata.org and come to my talk tonight!)

Since I’ve done this talk a couple of times I thought it would be a good idea to add some new demos. One that I thought would be fun would be using OData in a Windows Phone 7 application. It turns out that it’s actually pretty easy to do once you have all the right tools and libraries. Here’s what you’ll need:

Get the Tools

Windows Phone Developer Tools RTW – This includes Visual Studio 2010 Express but if you already have VS 2010 Pro or higher the Windows developer tools will just integrate into those versions. It also gives you Expression Blend, XNA Game Studio, and a nifty phone emulator and deployment tools.

Visual Basic for Windows Phone Developer Tools RTW – This allows you to develop Windows Phone 7 apps using Visual Basic. It’s a really light-weight install and includes the project templates you need to build phone apps with VB.

OData Client Library for Windows Phone 7 – You can grab just the binaries and client proxy generator in the ODataClient_BinariesAndCodeGenToolForWinPhone.zip. You’ll need to add a reference to the Windows Phone 7 OData client library in your phone projects to use OData.

Create the Project

First thing to do is fire up Visual Studio, File –> New Project and select Silverlight for Windows Phone –> Windows Phone Application:

image

This sets up the project files and opens up the designer with a design view on the left and the XAML view on the right. For this example let’s create an application that browses the public Netflix catalog here: http://odata.netflix.com/v1/Catalog/ (By the way there are a lot of OData producers and the list is growing. Check them all out here: http://www.odata.org/producers)

Create the Client Proxy and Add the Assembly Reference

I’ve written a lot about OData in the past but I’ve always used clients that take advantage of the full .NET framework like Console apps, WPF apps and Excel add-ins. When you create these projects it’s easy to just add a service reference to the OData service and the client proxy and client assemblies are automatically added to your project. Unfortunately these steps are manual in a Windows Phone 7 project, but it’s not too bad. Here are the steps:

1. Extract the ODataClient_BinariesAndCodeGenToolForWinPhone.zip and unblock the files (Right-click –> Properties –> click the “Unblock” button on the General tab).

image

2. Use the DataSvcUtil.exe contained in here to generate the client proxy. This will generate the client-side classes based on the OData service. So for my example, I’ll open up a command prompt and generate my Netflix classes in the file “NetflixModel.vb” like so:

>datasvcutil /uri:http://odata.netflix.com/v1/Catalog/ /out:.\NetflixModel.vb /Version:2.0 /DataServiceCollection /language:VB

3. Copy the output file to your Windows Phone 7 Project. You can just copy the file from Windows explorer and paste it directly into the project in Visual Studio (I love that feature :-)).

4. In your Windows Phone 7 project, add a reference to the System.Data.Services.Client assembly also contained in the zip you extracted in step 1.

image

Build the Application

Okay now that we have all the pieces in place let’s add some basic UI and some code to call the Netflix OData service. What I’ll do is provide a simple Listbox that lists titles that fall into a genre that the user can type in a textbox. The Title class has a Name property and that’s what we’ll databind our listbox to. Here’s the XAML for my UI contained in the MainPage.xaml:

  <!--ContentPanel - place additional content here-->
        <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="Auto" />
            </Grid.ColumnDefinitions>
            <Grid.RowDefinitions>
                <RowDefinition Height="*" />
                <RowDefinition Height="Auto" />
            </Grid.RowDefinitions>
            <ListBox Grid.ColumnSpan="2" ItemsSource="{Binding}">
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <TextBlock Text="{Binding Name}" />
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
            <TextBox Name="textBox1" Text="Adventures" Grid.Row="1" />
            <Button Content="Go" Grid.Column="1" Grid.Row="1" Name="button1" />
        </Grid>

image

Titles and Genres participate in a many-to-many relationship so we need to query the Genre based on its name and include the Titles for that Genre. When the user clicks the Go button, that’s when I’m going to execute the query. A query against an OData service is just an HTTP GET, so we need to construct the right URI to the service. When working with the OData client in the full .NET Framework, you have the ability to write LINQ queries and the client library will translate them to HTTP calls. Unfortunately this isn’t supported on the phone yet. (See this announcement on the team blog for details.)

Fortunately it’s pretty easy to construct the URIs.  If you open up your favorite browser to http://odata.netflix.com/v1/Catalog/Genres that will execute a query that returns all the Genres in the Netflix catalog. In order to pull up a specific Genre called “Adventures” we just use http://odata.netflix.com/v1/Catalog/Genres('Adventures') and in order to get the Titles returned for this Genre we use the $expand syntax: http://odata.netflix.com/v1/Catalog/Genres('Adventures')?$expand=Titles (Take a look at the URI conventions for all the operations that are supported.)

Once you have the URI you can call the LoadAsync method to fetch the data. For my application, in order to get an ordered list of titles into the ListBox I execute an in-memory LINQ query over the results to manipulate them further. So here’s the code for the MainPage:

Imports System.Data.Services.Client
Imports WindowsPhoneApplication1.NetflixCatalog.Model

Partial Public Class MainPage
    Inherits PhoneApplicationPage

    Dim WithEvents ctx As New NetflixCatalog.Model.NetflixCatalog(New Uri("http://odata.netflix.com/v1/Catalog"))
    Dim WithEvents genres As New DataServiceCollection(Of Genre)(ctx)
    Dim titles As IEnumerable(Of Title)

    Public Sub New()
        InitializeComponent()
    End Sub

    Private Sub button1_Click(sender As System.Object,
                              e As System.Windows.RoutedEventArgs) Handles button1.Click

        genres.Clear()
        Dim url = String.Format("/Genres('{0}')?$expand=Titles", Me.textBox1.Text)

        Dim uri = New Uri(url, UriKind.Relative)
        genres.LoadAsync(uri)
    End Sub

    Private Sub genres_LoadCompleted(sender As Object,
                                     e As LoadCompletedEventArgs) Handles genres.LoadCompleted
        If genres.Any Then
            titles = From g In genres.ToList()
                          From t In g.Titles
                          Select t Distinct
                          Order By t.Name

        Else
            titles = New List(Of Title) From {New Title With {.Name = "No titles found in that genre."}}
        End If

        Me.DataContext = titles
    End Sub
End Class
Run it!

Okay hit F5 and watch the phone emulator fire up. Type in a genre and hit the Go button to see the results loaded from the Netflix Catalog OData service. Sweet!

image

Tips & Tricks

Here’s a couple tips for an easier time when building Windows Phone 7 apps that consume OData services.

Visualize your OData - If you don’t know the schema of the OData service you’re working with, it may be kind of hard to visualize the relations between entities. I recommend installing the OData Protocol Visualizer extension, then adding a console application to your solution in Visual Studio, adding a service reference to the OData service to generate the proxy classes, and then right-clicking on the service reference and selecting “Show in Diagram”.

image

Using LINQ - The System.Data.Services.Client library for the full .NET Framework has the ability to take your LINQ queries and translate them to HTTP calls. If you’re more comfortable with LINQ (like me) you can use the same console application to see how the queries are translated. You can either view the HTTP call by putting a debugger breakpoint on your LINQ queries or you can install Fiddler (which I highly recommend) to see all the HTTP traffic.

So I hope this helps in getting you started with OData on Windows Phone 7. OData is fun and easy and it’s a great way to exchange data over the web in a standard way. Now I just need to brush up on my design “skillz” and see if I can make a prettier looking WP7 app ;-)


Bruce Kyle recommended that you Expose Data as OData Through Web Services in a 2/15/2011 post to the US ISV Evangelism blog:

The WCF Data Services Toolkit has recently been released to make it easier to expose arbitrary data sources as OData services.

Whether you want to wrap OData around an existing API (SOAP/REST/etc.), mash up SQL Azure and Windows Azure Table storage, re-structure the shape of a legacy database, or expose any other data store you can come up with, the WCF Data Services Toolkit will help you out.


S. Burges updated the Open Data Protocol - .NET/Silverlight/WP7 Libraries CodePlex project on 2/11/2011 (missed when published):

The Open Data Protocol (OData) is a Web protocol for querying and updating data that provides a way to unlock your data and free it from silos that exist in applications today. OData does this by applying and building upon Web technologies such as HTTP, Atom Publishing Protocol (AtomPub) and JSON to provide access to information from a variety of applications, services, and stores. The protocol emerged from experiences implementing AtomPub clients and servers in a variety of products over the past several years. OData is being used to expose and access information from a variety of sources including, but not limited to, relational databases, file systems, content management systems and traditional Web sites.

Project Description
This is an open source release of the .NET, Silverlight, and Windows Phone 7 client libraries for the Open Data Protocol (OData). For more information on OData, see http://www.odata.org.

Latest Build and Tools for Windows Phone 7
To download the latest build of the Windows Phone 7 library and the Visual Studio 2010 tools, visit the downloads page and select the ODataClient_BinariesAndCodeGenToolForWinPhone.zip download or use the direct link.

  • OData .NET 4, SL 4, WinPhone7 Client Source Code
  • Rating: No reviews yet
  • Downloads: 3852
  • Change Set: 9f6b1932eb44
  • Released: Oct 27 2010
  • Updated: Feb 11 2011 by sburges
  • Dev status: Stable

Release Notes
  • ODataNetFx4_SL4_WinPhone7_Client.zip - contains a Visual Studio 2010 solution with the source code for the OData .NET 4, Silverlight 4 and Windows Phone 7 Client library. To use, unzip the file locally and open the solution in Visual Studio 2010.
  • ODataClient_BinariesAndCodeGenToolForWinPhone.zip - contains just the OData client assemblies and code generation tools for use on Windows Phone 7. These libraries are permitted for use in production applications and thus can be used to build applications that are submitted to the Windows Phone application marketplace.
  • ODataClient_WinPhone7SampleApp.zip - contains a sample Windows Phone 7 application which uses the OData client library for Windows Phone 7. Before opening this sample ensure you have the Windows Phone Developer Tools installed. After unzipping the sample project to your local machine, you may need to unblock the Binaries\System.Data.Services.Client.dll file before you can build. Instructions to unblock a file can be found here: http://msdn.microsoft.com/en-us/library/ee890038(VS.100).aspx

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Eugenio Pace (@eugenio_pace) described Single Sign Out–WebSSO in a 2/16/2011 post:

While reviewing all the existing samples we’ve noticed that our implementation of Single Sign Out was kind of… weak. It wasn’t really fully implemented and it wasn’t very clear what was happening either (or what should happen).

We’ve fixed all that now in scenario 1: WebSSO. Things get more complicated when more than one STS is in the picture, and even more so when the identity provider uses other protocols (for example, all our scenarios using ACS and Google or ACS and LiveID). But for WebSSO, things are more or less straightforward.

WebSSO scenario recap:

If you remember from previous posts or the book, in our first chapter we had Adatum with two applications: a-Order and a-Expense. We wanted Adatum employees to log in to one or the other seamlessly:

image

  1. John opens the browser on his desktop (he has already been authenticated with AD)
  2. Opens a-Order home page
  3. Gets redirected to the IdP. He’s authenticated (e.g. Kerberos ticket)
  4. A token is given to him. The token is posted back to a-Order. He’s in.

Sometime later, he browses a-Expense.

  1. No session with a-Expense yet, he’s redirected to IdP
  2. He’s already authenticated, gets a new token for a-Expense.
  3. Voila

So far, so good. Nothing really new. What happens now if John wants to sign off? We don’t want our system to be a roach motel: once in, never out… What happens is actually pretty straightforward, but there are some subtle considerations. When John signs off, he should sign off from all relying parties and the IdP.

This diagram illustrates the process:

clip_image002

John clicks on “Logout” (on any of the relying parties: a-Order in our example). This results in a wa=wsignout1.0 message being sent to the IdP. The IdP cleans up its own session with the user and then (here comes the tricky part) returns a page to the user with a list of image tags (HTML image tags), one image tag for each RP the IdP has issued a security token for. The src URL for each of these images will actually be something like:

src=http://localhost/a-Expense/?wa=wsignoutcleanup1.0

src=http://localhost/a-Order/?wa=wsignoutcleanup1.0

With these tags, the browser will attempt to get the images from these URLs (which happen to be located in each RP: a-Order and a-Expense), and in fact you will see something like this:

image

The HTML:

image

Where do these green check images come from? They are nowhere in a-Order or in a-Expense… you could spend hours looking for the PNG, JPG or GIF and you would never find it, because it is very well concealed. Can you guess where it comes from?

Hint: page 115 of Vittorio’s excellent book.

Bonus question: how does the IdP know all the applications the user has accessed?

By the way, all this is now working in the new updated samples, which we will post to our CodePlex site very soon. [Emphasis added.]


Alex Koval posted an Introduction to Azure AppFabric to the Code Project on 2/15/2011:

The Azure platform is being rapidly upgraded as new features are rolled into it. In this blog article I would like to review how Azure AppFabric applies to the existing problems of enterprise software companies. The article includes:

  1. Overview of existing problems of enterprise companies when developing web distributed apps.
  2. Overview of Azure AppFabric
  3. Detailed review of Access Control Service
  4. Detailed review of Azure Service Bus.
Existing problems of enterprise software companies

Software applications are not new in today’s world; there are tons of good-quality software solutions created by generations of developers. Today we create distributed applications centered on integration with existing software systems and components over various platforms, protocols, and standards. We see at least the following problems:

  1. Connectivity challenges. Software applications have low interoperability because they often exist in different networks, on different platforms, etc. How can your client connect to a service endpoint if the service is in a different network behind a firewall?
  2. Authentication challenges. Most systems of the past came with their own security model. In order to work with such systems the user has to maintain various sets of credentials. Maintaining an increasing number of logins presents a threat, as it increases the chance of compromising the security credentials.
  3. Authorization challenges. The dispersed security landscape presents a problem, as it is now increasingly difficult to administer user authorization.

Azure AppFabric presents possible solutions to the above-mentioned problems. The article below reviews AppFabric with a concentration on the Access Control Service and the Service Bus.

Overview of Windows Azure AppFabric

Windows Azure AppFabric presents a set of middleware services. The middleware services aim to increase interoperability between the components of your software solution. The services of Azure AppFabric are presented in Fig. 1. The pricing for Azure AppFabric is reasonably cheap and can be looked up at http://www.microsoft.com/windowsazure/pricing/.

Services provided by Azure AppFabric

Fig.1 Services provided by Azure AppFabric

Service Bus Service – provides connectivity between services and their consumers. The Service Bus Service is commercially available.

Access Control Service – allows you to decouple security management from application logic. The Access Control Service is commercially available.

AppFabric Caching Service – allows centralized caching of your application data. The AppFabric Caching Service has been available as a CTP since October 2010.

Integration Services – “AppFabric Connect” – is your BizTalk-like service in the cloud. The service will be available sometime in 2011.

Composite App and AppFabric Scale Out Infrastructure. Later this year (2011) Azure AppFabric will introduce the ability to define a Composite App and upload the definitions to the AppFabric Container. The Composite App represents your distributed application with cloud- and on-premises-based services. AppFabric provides an environment to host your Composite App through the AppFabric Container’s multi-tenant host and Composition Runtime. The AppFabric Container is responsible for the lifecycle of your Composite App.

AppFabric Access Control Service

Let us see how the AppFabric Access Control Service solves the problem of dealing with multiple identity providers. The idea is to decouple the identity management logic from the application logic, so that identity management is not a concern of the new application. To do so, AppFabric uses the claims-based security pattern.

Let us take a look at the concept of the claims-based security pattern, which you have to use in order to get a beer at Sloppy Joe’s in Key West (inspired by Vittorio Bertocci).

Claims Based Security in Key West

Fig. 2 Claims Based Security in Key West

  1. User submits the request for the Drivers License
  2. User receives the Drivers License authorized by the State of Florida
  3. User presents the Drivers License to the bartender of Sloppy Joe’s
  4. User gets his beer

Now let us look at what occurs in the AppFabric. First we need to get clear on the terminology:

  • Relying Party – a service with application logic which expects a security token and relies on the Issuer to generate such a token.
  • Issuer – a service which is responsible for evaluating the user’s credentials and generating a token which contains a set of Claims.
  • Claim – an attribute of the user.
  • Security Token – a set of Claims signed by the Issuer.

Having defined the terminology, I would like to proceed with the explanation of AppFabric’s Access Control Service. The interaction between the Client, the Issuer, and the Relying Party is described in Fig. 3 below.

Interactions in the AppFabric Access Control Service

Fig.3 Interactions in the AppFabric Access Control Service

According to Fig. 3, the Client obtains the Security Token from the Security Token Service (STS), which in turn accesses the Identity Store. Once the client has the token, he or she can submit the Security Token as a part of the request to the Relying Party. The identity layer of the Relying Party validates the token and extracts the claims from it. If the Security Token is valid and the claim set allows access to the application, the user gets the requested data.
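
As an illustration of that last step (assuming the relying party uses Windows Identity Foundation as its identity layer; this sketch is not part of the original article), the validated claims end up on the current principal and can be read like this in C#:

using System.Threading;
using Microsoft.IdentityModel.Claims;   // Windows Identity Foundation

public static class ClaimsInspector
{
    // Call from a page or controller after the identity layer has validated the incoming token.
    public static string GetClaimValue(string claimType)
    {
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null) return null;

        var identity = (IClaimsIdentity)principal.Identity;
        foreach (Claim claim in identity.Claims)
        {
            if (claim.ClaimType == claimType)
                return claim.Value;   // e.g. the user's name or role
        }
        return null;
    }
}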

When you create a Service Namespace, Azure AppFabric provides the following built-in service endpoints (Fig. 4):

  • STS Endpoint
  • Management Endpoint
  • Management STS Endpoint

Endpoints created with Azure AppFabric Service Namespace

Fig.4 Endpoints created with Azure AppFabric Service Namespace

As seen in Fig. 4, besides the STS, the STS Management Service and the STS for the Management Service are created. All endpoints expose RESTful services that can be accessed by clients on various platforms, including Java, PHP, etc. The STS is configured through the Management Service and can be configured to use other Identity Providers, for example Active Directory (through ADFS v2). As seen in Fig. 5, the STS can federate over existing Identity Providers: Active Directory, Facebook, Google, etc.

ACS with federated Identity Providers

Fig.5 ACS with federated Identity Providers

As we can see, the Access Control Service provides several key benefits:

1)      Security is no longer a concern of the application

2)      Existing Identity Providers are re-used

3)      Cross-platform interoperability via RESTful services

To see how Access Control works:

1)      Download the Windows Azure Training Kit (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en)

2)      Go through the IntroAppFabricAccessControl Lab.

Service Bus Service

The Azure Service Bus allows bridging the networks and firewalls which may exist between the client and the server. You may imagine a common situation with the client and the service located in different networks behind firewalls. Let’s say the firewalls have only port 80 opened for outbound traffic. Let us take a look at how the Azure Service Bus works.

As described in Fig. 6, the following steps take place:

1)      The Service initiates a connection with the Relay Service via port 80 outbound.

2)      The Client initiates a connection with the Relay Service via port 80 outbound.

3)      The Client sends messages to the Relay Service and the Relay Service forwards the messages to the Service.

Interactions within Azure Service Bus

Fig.6 Interactions within Azure Service Bus

Generally speaking, I have just described the Service Remoting scenario; however, there are more scenarios with the Azure Service Bus, including Eventing and Protocol Tunneling.

Azure Service Bus scenarios

Fig.7 Azure Service Bus scenarios

With the Eventing scenario, you may subscribe multiple services to the client’s events. Such a configuration allows multicasting your messages. For each subscriber, AppFabric creates a FIFO message buffer to store the client messages. Once the service connects, it will read the messages. The Protocol Tunneling scenario assumes a situation in which you can re-use the opened ports to communicate between the client and a server.
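
For the Service Remoting case, exposing an on-premises WCF service through the relay looks roughly like the following C# sketch (the namespace, contract and path are hypothetical, and the shared-secret credential setup via a TransportClientEndpointBehavior is omitted):

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;   // from the Windows Azure AppFabric SDK

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class RelayHost
{
    static void Main()
    {
        // Resolves to sb://your-namespace.servicebus.windows.net/EchoService
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "your-namespace", "EchoService");

        var host = new ServiceHost(typeof(EchoService));
        // NetTcpRelayBinding makes an outbound connection to the relay,
        // so no inbound firewall ports need to be opened on premises.
        host.AddServiceEndpoint(typeof(IEchoContract), new NetTcpRelayBinding(), address);

        host.Open();
        Console.WriteLine("Listening on the relay. Press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}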

To see how Service Bus works:

  1. Download the Windows Azure Training Kit (http://www.microsoft.com/downloads/en/details.aspx?FamilyID=413e88f8-5966-4a83-b309-53b7b77edf78&displaylang=en)
  2. Go through the IntroServiceBus2010Part1 and IntroServiceBus2010Part2 Labs
Conclusion

Windows Azure AppFabric is a viable enterprise software solution which encompasses best practices for security, caching, connectivity, and integration. Through AppFabric the enterprise can decouple security management. Such an approach allows re-using existing Identity Providers and concentrating on developing the new application. Through the AppFabric Service Bus the enterprise can re-use existing services. Although some services of Azure AppFabric are commercially available, the middleware services are still work in progress, which presents a risk of future changes.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Tony Bailey (a.k.a. tbtechnet) explained Free to Paying [Azure Accounts]–Yes You Can [Migrate] on 2/16/2011:

A common question we get is how to move applications, data and databases that have been created using a Windows Azure free pass to a regular, paying Azure account.

It can be done.

  1. Sign up for a Windows Azure Developer offer here: http://www.microsoft.com/windowsazure/offers/
  2. If you have created a Windows Azure application and also a SQL Azure database, migrate your SQL Azure database using the SQL Azure Migration Wizard:
    • http://sqlazuremw.codeplex.com/
    • http://sqlazuremw.codeplex.com/releases/view/32334#DownloadId=86938
    • On page 10 of the document: if you want to go from SQL Azure to SQL Azure, you need to have the “Analyze and Migrate” option selected. (Check “Analyze and Migrate”.)
  3. If your application uses a Windows Azure storage account and you need to preserve the data, move the data using an Azure storage management utility such as Azure Storage Explorer:
    • http://azurestorageexplorer.codeplex.com/
    • Note: the data transfer itself generates additional bandwidth charges (for the download/upload).


The Windows Azure Team recommended that you Check Out Interview with Windows Azure MVP, Cory Fowler, On Port 25 Blog! in a 2/16/2011 post:

A great interview with Cory Fowler, Windows Azure expert and one of our Windows Azure MVPs [pictured at right], was just posted to the Port 25 blog. Check it out to learn more about Cory, find out what he's been working on and his views on Windows Azure and Open Source. He's also a contributor to their blog, so be sure to also check out his first post, "Installing PHP on Windows Azure leveraging Full IIS Support: Part 1". The Port 25 blog is Canada's home for communications from the Open Source Community at Microsoft.


Avkash Chauhan described Decrypting Windows Azure Package (CSPKG) in Windows Azure SDK in a 2/15/2011 post:

When you build your Windows Azure application, the final product is a CSPKG file, which is uploaded to the Windows Azure portal along with ServiceConfiguration.cscfg. The CSPKG file is actually a ZIP file which contains your whole solution along with the configuration needed to deploy your solution in Windows Azure.

If you look inside the CSPKG file you will see a file with the extension CSSX, named Your_Role_Name_GUID.cssx. This file is also a ZIP file; however, it is encrypted by the MSBuild process. You can decrypt this file if you wish to do so by using either of the methods described below:

Option 1: Modifying MSBuild properties for "CorePublish" target and CPACK command (Applicable to Windows Azure SDK 1.3)

  1. Go to C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0
  2. Open Microsoft.CloudService.targets in any text editor
  3. Look for the target named CorePublish and set the NoEncryptPackage property to "true" as below, inside the CSPack command within the target

NoEncryptPackage="true"

Completed "CorePublish" target should look like as below:

<Target
    Name="CorePublish"
    DependsOnTargets="$(CorePublishDependsOn)">
  <Message Text="CorePublish: PackageWebRole = $(PackageWebRole)" />
  <Message Text="Publishing starting..." />
  <Message Text="RolePlugins       is @(RoleProperties->'%(RolePlugins)')" />
  <Message Text="Publishing to '$(OutDir)Publish'" />
  <MakeDir Directories=" $(OutDir)Publish " />
  <Message Text="ServiceDefinitionCopy is @(ServiceDefinitionCopy)" />
  <Message Text="ServiceConfigurationCopy is @(ServiceConfigurationCopy)" />
  <Message Text="Roles is @(Roles)" />
  <CSPack
    ServiceDefinitionFile="@(ServiceDefinitionCopy)"
    Output="$(OutDir)Publish\$(ProjectName).cspkg"
    PackRoles="@(Roles)"
    SiteMapping="@(SiteMapping)"
    RoleProperties="@(RoleProperties)"
    CopyOnly="false"
    NoEncryptPackage="true"
    >
  </CSPack>
  <!-- Copy service configuration to output directory -->
  <Message Text="Copying the service configuration file." />
  <Copy SourceFiles="@(ServiceConfigurationCopy)" DestinationFolder="$(OutDir)Publish" />
  <Message Text="DiagnosticsFilesCreated is @(DiagnosticsFilesCreated)" />
  <Delete Files="@(DiagnosticsFilesCreated)" ContinueOnError="true" />
  <Message Text="Publishing process has completed."/>
</Target>

Option 2: Setting Up Environment Variable (Applicable to all Windows Azure SDK)

You can set the environment variable _CSPACK_FORCE_NOENCRYPT_ to "true", which will force the build system to generate an unencrypted package. This method applies to Windows Azure SDK 1.2 and 1.3.


The Windows Azure Download Center began offering a SharePoint and Azure Development Primer on 2/15/2011:

Overview

The February 2011 release of the SharePoint and Azure Development Kit is essentially a developer primer to help you ramp up on different ways to integrate SharePoint and Azure. The kit contains four modules that include PowerPoint decks, hands-on labs and source code covering these areas:

  1. Introduction to Windows Azure Development
  2. Introduction to SharePoint Development
  3. Getting Started with SharePoint and Azure
  4. WCF, Windows Azure and SharePoint
System Requirements
  • Supported Operating Systems: Windows 7; Windows Server 2008; Windows Server 2008 R2

To complete the modules in the primer, you must have the following items installed or configured:

  • Windows Server 2008 R2 or Windows 7
  • SharePoint Server 2010
  • Visual Studio 2010 Azure Tools and SDK
  • Silverlight Tools and SDK
  • SQL Server 2008 R2
  • Web Browser
  • Azure developer account


James Staten (@Staten7) answered Which applications should I move to the cloud? in a 5/15/2011 post to his Forrester Research blog:

Forrester took over a thousand inquiries from clients on cloud computing in 2010, and one of the common themes that kept coming up was which applications they should plan to migrate to Infrastructure-as-a-Service (IaaS) cloud platforms. The answer: Wrong question.

What enterprises should really be thinking about is how they can take advantage of the economic model presented by cloud platforms with new applications. In fact, the majority of applications we find running on the leading cloud platforms aren't ones that migrated from the data center but were built for the cloud.

A lot of the interest in migrating applications to cloud platforms stems from the belief that clouds are cheaper and therefore moving services to them is a good cost savings tactic. And sure, public clouds bring economies of scale shared across multiple customers that are thus unachievable by nearly any enterprise. But those cost savings aren't simply passed down. Each public cloud is in the profit-making business and thus shares in the cost savings through margin capture.

For enterprises to make the most of a public cloud platform they need to ensure their applications match the economic model presented by public clouds. Otherwise the cloud may actually cost you more. In our series of reports, "Justify Your Cloud Investment" we detail the sweet spot uses of public cloud platforms that fit these new economics and can help guide you towards these cost advantages.

But job one when building a strategy for IaaS success needs to focus on new applications and services which can be built to take advantage of these economics best. We all know that clouds deliver agility, letting your developers gain access to resources and build new services faster and at lower cost. More abstracted services such as Platforms as a Service (PaaS) and discrete cloud services that can be integrated with custom code running on IaaS and PaaS can speed up time to market even more. But understanding the cost model and mapping that to the revenue model associated with the services you are building is key to making the most of these investments. This is how NetFlix, the Associated Press, Pathwork Diagnostics, NVoicePay and hundreds of other companies are improving their profitability by building anew for the cloud. They are taking payment before spending to fire up certain services; splitting services between pay-per-use and subscription platforms based on which give the right cost advantage to what parts of the application; and spinning up services on demand and rapidly turning them off when not needed.

In many cases the new services being created in the cloud are directly tied to revenue generation - delivering value in new ways, accelerating business insight so costs can more quickly be identified and taken out, or finding vastly cheaper ways to do what has been done before. The reason the business (and not IT) does this is because it understands how revenue is generated and how the costs of the business impact the profitability of the company. We in IT often have no clue how the actions we take, the applications we build and how we operate them affect the profitability of our companies products and services or the business bottom line. Sure we may know what percent of company spend goes to IT but do we know the cost breakdown for our top 3 services or products and what we contribute to them? If the business came to us with an idea for a new service do we honestly believe we could advise them on how that service could be built most cost effectively? How many of you could propose a new service to the business and show the profit impact of that investment? Could you explain the profit impact of doing it in-house versus in the cloud?

If you don't understand how the economics differ between public clouds and in-house deployment you can't have this conversation. And if you can't, you might just be asked to help far less in the future.

At Forrester's Enterprise Architecture Forum in San Francisco this week, I'll be leading an interactive session on cloud economics where we will discuss the tools cloud platforms provide for affecting service profitability and how you can apply them to your business. If you are putting applications in the cloud, bring those stories so we can discuss them and make them better. I look forward to seeing you there.


Stephane Boss claimed You're Never Boxed in if You're in the Cloud in a 2/15/2011 post to the Partner Side Up blog:

This is a snapshot of the Industry Partner Communiqué I send out every month. Every month I focus on a specific topic among: Cloud, CRM and Application Platform and once in a while I cover a related topic such as Microsoft Dynamics in the industry (to be released in Feb.).

The goal of this communiqué is to give a snapshot of what I think are big opportunities for our customers and partner ecosystem to embrace. All content is public and everyone has access to it, even our dear friend competitors.
You can “easily” subscribe (search for Microsoft Industry Partner Communiqué) or read related stories. Feedback is always welcome and I do respond to all requests and questions by email.

For the cloud Communiqué, I have interviewed several industry subject matter experts including Marty Ramos, Technology Strategy Manager for Worldwide Manufacturing & Resources at Microsoft.

Marty Ramos: “For online retailers, one of the most intriguing advantages offered by Windows Azure is the inherent flexibility it offers.

image While all industries have some fluctuation in customer demand, what other sector actually has a day-Black Friday- pinpointed as date on which its companies (historically) stop operating at a loss? Because retailers face such seasonally fluctuating demands, stores need to size their data centers to meet the greatest possible need. But what about the rest of the time? Who wants to pay for unused capacity the other ten months of the year?

With Windows Azure, retailers can enjoy the benefits of a robust data center, while only paying for the resources that are needed and used.

In addition, this flexibility offers compelling risk management advantages to stores introducing prototype applications. For example, when a retailer debuts a new mobile application, the "app" could be a hit, or it could be a bust. But the retailer needs to size its data center before it has any real idea how strong demand will prove to be. If the application exceeds anticipated demand (maybe even goes viral), the retailer must quickly backpedal and resize its whole data center to handle demand. On the other hand, demand could be much less than anticipated, so stores want to avoid over-investing in infrastructure to support an application that might die on the vine.

Because Windows Azure provides access to an infinitely big data center, if that application takes off, a retailer using Windows Azure can resize its available data center through a simple configuration option, and immediately bring up more servers as they are needed. Windows Azure minimizes the investment risk from the infrastructure point of view and also eliminates the opportunity risk of not being able to support a sudden surge in demand.

Watch a video about what you can do with cloud power, read how cloud power is tailor-made for retailers, or talk to an expert at the Microsoft Retail Experience Center.”

Thanks, and don’t forget to read the Industry Partner Communiqué to learn more about what the cloud has to offer in specific industries.


Gaston Hillar described Multicore Programming Possibilities with Windows Azure SDK 1.3 in a 2/11/2011 post to Dr. Dobbs Journal (missed when published):

You can use your parallelized algorithms that exploit modern multicore microprocessors in Windows Azure. Since the release of the Windows Azure SDK v1.2, you can use .NET Framework 4 as the target framework for any of your cloud-targeted projects. The newest Windows Azure SDK version is 1.3 and you can take advantage of Parallel Extensions with this SDK.

When Microsoft launched Windows Azure almost a year ago, you had to use .NET Framework 3.5 or earlier versions as the target framework for any of your cloud-targeted projects. Remember that Microsoft launched Windows Azure before Visual Studio 2010 RTM. Thus, you will find many Windows Azure sample applications that use .NET Framework 3.5 as the target framework. You can change the target framework for an existing cloud-targeted project from previous .NET Framework versions to .NET Framework 4, and you will have access to Parallel Extensions.

If you right-click on an existing cloud-targeted project in the Solution Explorer, in Visual Studio 2010, and then you go to the Application page, you will see .NET Framework 4 as one of the available options in the Target framework dropdown list.

.NET Framework 4 is available as a Target framework for your cloud-targeted projects with Windows Azure SDK 1.3 installed.

The entry point for a Windows Azure Worker Role is the Run() method and its already famous infinite loop that processes messages in the queue and calls the Thread.Sleep method. You can reduce the time required to process compute-bound messages by using Parallel Extensions to take advantage of the available cores. Then, you can select the target VM (short for Virtual Machine) size according to the desired response time for each request and your budget. Windows Azure allows you to scale from 1 to 8 cores. You can use your Parallel Extensions knowledge to design and code a Worker Role that scales as the VM size increases. Each worker can run code optimized to take advantage of multicore. However, one of the problems with the Windows Azure Platform is that the ExtraLarge VM size is twice the price of the Large VM size. The former provides 8 cores and the latter 4 cores. You shouldn't expect your algorithm to achieve linear speedup when you change the VM size from Large to ExtraLarge. You can check the different VM sizes in the "How to Configure Virtual Machine Sizes" MSDN documentation article. The lowest-priced ExtraSmall VM size shares its CPU cores with others and is still in beta.
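
A rough C# sketch of that pattern (the queue name and the ProcessMessage method are hypothetical, not taken from the article):

using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("work");
        queue.CreateIfNotExist();

        while (true)
        {
            // Grab a batch and fan the compute-bound work out across all available cores.
            var messages = queue.GetMessages(32).ToList();
            if (messages.Count > 0)
            {
                Parallel.ForEach(messages, message =>
                {
                    ProcessMessage(message);      // compute-bound work
                    queue.DeleteMessage(message);
                });
            }
            else
            {
                Thread.Sleep(10000);
            }
        }
    }

    private void ProcessMessage(CloudQueueMessage message)
    {
        // CPU-intensive processing goes here.
    }
}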

I received dozens of e-mails with similar questions about the relationship between Windows Azure and Parallel Extensions. Developers and architects have many doubts about the usage of Parallel Extensions with Azure projects. You can use Parallel Extensions to create efficient algorithms that scale as the number of cores available in the target VM increases. If you need a better response time in a compute-intensive algorithm prepared to take advantage of multicore, you just need to select a target VM with more cores. Of course, you also have to pay more money for the new VM size. Your code should be as efficient as possible because you want to take full advantage of the cores that you’re paying for. If you create an efficient service or building-block component that scales with the VM size, you can submit it to the new Windows Azure Marketplace.

The Windows Azure VM Role is still in Beta. However, this new role provides an excellent opportunity to move a server image to the cloud. Applications that already take advantage of multicore on Windows Server 2008 R2 can run in the VM Role with the desired number of cores. If you don't have an eight-core CPU and you want to test scalability for an existing algorithm on Windows Server 2008 R2, you can create and deploy a VM Role, select the desired VM size, and test scalability from 1 to 8 cores. The VM Role is very helpful for developers who want to test the scalability of parallelized algorithms and who want a simpler way to move existing Windows applications to the cloud.

I do believe multicore programming is very important for Windows Azure and for any cloud-targeted project. Each VM size provides a different number of cores, and you pay for those cores. Many services will require multicore optimization, and there is a great opportunity for Parallel Extensions to become extremely popular in Windows Azure projects.

Dennis Gannon and Dan Reed provide an excellent overview of the relationship between parallelism and the cloud in "Parallelism and the Cloud."

Amit Chatterjee wrote a very interesting article "Parallelism: In the Cloud, Cluster and Client." The article promotes Microsoft tools and languages. However, Amit's post provides an interesting explanation of the challenges in parallel computing and the complexity of modern IT solutions.


<Return to section navigation list> 

Visual Studio LightSwitch

Julia Kornich continued the CTP series with an EF Feature CTP5: Code First and WinForms Databinding post of 2/16/2011:

In December we released ADO.NET Entity Framework Feature Community Technology Preview 5 (CTP5). In addition to the Code First approach this CTP also contains a preview of a new API that provides a more productive surface for working with the Entity Framework. This API is based on the DbContext class and can be used with the Code First, Database First, and Model First approaches.

This post provides an introduction to creating your model using Code First development and then using the types defined in the model as data sources in a “master-detail” Windows Forms (WinForms) application.

In this walkthrough, the model defines two types that participate in a one-to-many relationship: Category (principal\master) and Product (dependent\detail). Then, the Visual Studio tools are used to bind the types defined in the model to WinForms controls. The WinForms data-binding facilities enable navigation between related objects: selecting rows in the master view causes the detail view to update with the corresponding child data. Note that the data-binding process does not depend on which approach is used to define the model (Code First, Database First, or Model First).

In this walkthrough the default code-first conventions are used to map your .NET types to a database schema and create the database the first time the application runs. You can override the default code-first conventions by using Data Annotations or the Code First Fluent API. For more information see: EF Feature CTP5: Code First Walkthrough (section 9 - Data Annotations) and EF Feature CTP5: Fluent API Samples

Install EF CTP5

1. If you haven’t already done so, install Entity Framework Feature CTP5.

Create a solution and a class library project to which the model will be added

1. Open Visual Studio 2010.

2. From the menu, select File -> New -> Project… .

3. Select “Visual C#” from the left menu and then select “Class Library” template.

4. Enter CodeFirstModel as the project name and CodeFirstWithWinForms as the solution name. Note that to specify different names for the project and the solution, you must check the “Create directory for solution” option (located in the bottom right corner of the New Project dialog).

5. Select OK.

Create a simple model

When using code-first development you usually begin by writing .NET classes that define your domain model. The classes do not need to derive from any base classes or implement any interfaces. In this section you will define your model using C# code.

1. Remove the default source code file that was added to the CodeFirstModel project (Class1.cs).

2. Add a reference to the EntityFramework assembly. To add the reference:

    1.1. Press the right mouse button on the CodeFirstModel project and select Add Reference… .

    1.2. Select the “.NET” tab.

    1.3. Select EntityFramework from the list.

    1.4. Click OK.

3. Add a new class to the CodeFirstModel. Enter Category for the class name.

4. Implement the Category class as follows:

Note: The Products property is of type ObservableListSource<T>. If we just wanted to facilitate two-way data binding in Windows Forms, we could have made the property of type BindingList<T>, but that would not support sorting. The ObservableListSource<T> class enables sorting; it will be implemented and explained later in this walkthrough.

using System.ComponentModel;

public class Category
{
    public int CategoryId { get; set; }
    public string Name { get; set; }

    public virtual ObservableListSource<Product> Products { get { return _products; } }

    private readonly ObservableListSource<Product> _products =
        new ObservableListSource<Product>();
}

5. Add another new class to the project. Enter Product for the class name. Replace the Product class definition with the code below.

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public virtual Category Category { get; set; }
    public int CategoryId { get; set; }
}

The Products property on the Category class and Category property on the Product class are navigation properties. Navigation properties in the Entity Framework provide a way to navigate an association\relationship between two entity types, returning either a reference to an object, if the multiplicity is either one or zero-or-one, or a collection if the multiplicity is many. 

The Entity Framework gives you an option of loading related entities from the database automatically whenever you access a navigation property. With this type of loading (called lazy loading), be aware that each navigation property that you access results in a separate query executing against the database if the entity is not already in the context.

When using POCO entity types, lazy loading is achieved by creating instances of derived proxy types during runtime and then overriding virtual properties to add the loading hook. To get lazy loading of related objects, you must declare navigation property getters as public, virtual (Overridable in Visual Basic), and not sealed (NotOverridable in Visual Basic). In the code above, the Category.Products and Product.Category navigation properties are virtual.
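
As a quick illustration (a sketch only, assuming the ProductContext class defined later in this walkthrough and a using System.Linq directive), accessing a navigation property on a tracked entity triggers a separate query:

using (var context = new ProductContext())
{
    var category = context.Categories.First();   // first query: loads one Category
    int count = category.Products.Count;         // second query: lazy loading fetches its Products here
}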

6. Add a new class called ObservableListSource to the project. This class enables two-way data binding as well as sorting. The class extends ObservableCollection<T> and adds an explicit implementation of IListSource. The GetList() method of IListSource is implemented to return an IBindingList implementation that stays in sync with the ObservableCollection. The IBindingList implementation generated by ToBindingList supports sorting.

Implement the ObservableListSource<T> class as follows:

using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Data.Entity;
using System.Diagnostics.CodeAnalysis;

public class ObservableListSource<T> : ObservableCollection<T>, IListSource
    where T : class
{
    private IBindingList _bindingList;

    bool IListSource.ContainsListCollection
    {
        get { return false; }
    }

    IList IListSource.GetList()
    {
        return _bindingList ?? (_bindingList = this.ToBindingList());
    }
}

Create a derived context

In this step we will define a context that derives from System.Data.Entity.DbContext and exposes a DbSet<TEntity> for each class in the model. The context class manages the entity objects during runtime, which includes retrieval of objects from a database, change tracking, and persistence to the database. A DbSet<TEntity> represents the collection of all entities in the context of a given type.

1. Add a new class to the CodeFirstModel. Enter ProductContext for the class name.

2. Implement the class definition as follows:

using System.Data.Entity;
using System.Data.Entity.Database;
using System.Data.Entity.Infrastructure;

public class ProductContext : DbContext
{
    public DbSet<Category> Categories { get; set; }
    public DbSet<Product> Products { get; set; }
}

3. Build the project.

In the code above we use a “convention over configuration” approach: you rely on common mapping conventions instead of explicitly configuring the mapping. For example, if a class has a property named Id, or named after the class name followed by Id (in any combination of upper and lower case), the Entity Framework will treat that property as the primary key by convention. This approach works in most common database mapping scenarios, but the Entity Framework provides ways for you to override these conventions. For example, if you want to explicitly mark a property as the primary key, you can use the [Key] data annotation. For more information about mapping conventions, see the following blog: Conventions for Code-First.
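
As a quick sketch of that override (the [Key] attribute comes from the System.ComponentModel.DataAnnotations namespace; the Supplier class below is hypothetical and not part of this walkthrough):

using System.ComponentModel.DataAnnotations;

public class Supplier
{
    [Key]
    public int SupplierCode { get; set; }   // not matched by the Id / <ClassName>Id convention, so mark it explicitly
    public string Name { get; set; }
}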

Create a Windows Forms application

In this step we will add a new Windows Forms application to the CodeFirstWithWinForms solution.

1. Add a new Windows Forms application to the CodeFirstWithWinForms solution.

    1.1. Press the right mouse button on the CodeFirstWithWinForms solution and select Add -> New Project… .

    1.2. Select “Windows Forms Application” template. Leave the default name (WindowsFormsApplication1).

    1.3. Click OK.

2. Add a reference to the CodeFirstModel class library project. That is where our model and the derived context are defined.

    1.1. Press the right mouse button on the WindowsFormsApplication1 project and select Add Reference… .

    1.2. Select the “Projects” tab.

    1.3. Select CodeFirstModel from the list.

    1.4. Click OK.

3. Add a reference to the EntityFramework assembly.

4. Add the classes that are defined in the model as data sources for this Windows Forms application.

    1.1. From the main menu, select Data -> Add New Data Sources… .

    1.2. Select Objects and click Next.

    1.3. In the “What objects do you want to bind to” list, select Category. There is no need to select the Product data source, because we can get to it through the Products property on the Category data source.

    1.4. Click Finish.

5. Show the data sources (from the main menu, select Data -> Show Data Sources). By default the Data Sources panel is added on the left of the Visual Studio designer.

6. Select the Data Sources tab and press the pin icon, so the window does not auto hide. You may need to hit the refresh button if the window was already visible.

7. Select the Category data source and drag it on the form. By default, a new DataGridView (categoryDataGridView) and Navigation toolbar controls are added to the designer. These controls are bound to the BindingSource (categoryBindingSource) and Binding Navigator (categoryBindingNavigator) components that were created as well.

8. Edit the columns on the categoryDataGridView. We want to set the CategoryId column to read-only. The value for the CategoryId property is generated by the database after we save the data.

  1.1. Click the right mouse button on the DataGridView control and select Edit Columns… .

  1.2. Select the CategoryId column and set ReadOnly to True.

9. Select Products from under the Category data source and drag it on the form. The productDataGridView and productBindingSource are added to the form.

10. Edit the columns on the productDataGridView. We want to hide the CategoryId and Category columns and set ProductId to read-only. The value for the ProductId property is generated by the database after we save the data.

    1.1. Click the right mouse button on the DataGridView control and select Edit Columns… .

    1.2. Select the ProductId column and set ReadOnly to True.

    1.3. Select the CategoryId column and press the Remove button. Do the same with the Category column.

So far, we associated our DataGridView controls with BindingSource components in the designer. In the next section we will add code to the code-behind to set categoryBindingSource.DataSource to the collection of entities that are currently tracked by the DbContext. When we dragged and dropped Products from under Category, WinForms took care of setting the productBindingSource.DataSource property to categoryBindingSource and the productBindingSource.DataMember property to Products. Because of this binding, only the products that belong to the currently selected Category will be displayed in the productDataGridView.
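
For reference, the wiring the designer generates is roughly equivalent to the following two lines (control names taken from this walkthrough; you do not need to write this yourself):

// Designer-generated binding: the product grid shows the Products of the currently selected Category.
this.productBindingSource.DataSource = this.categoryBindingSource;
this.productBindingSource.DataMember = "Products";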

11. Enable the Save button on the Navigation toolbar by clicking the right mouse button and selecting Enabled.

12. Add the event handler for the save button by double-clicking on the button. This will add the event handler and bring you to the code behind for the form. The code for the categoryBindingNavigatorSaveItem_Click event handler will be added in the next section.

Add the code that handles data interaction

1. Implement the code-behind class (Form1.cs) as follows. The code comments explain what the code does.

using System;                      // these first three usings are part of the default Form1.cs template
using System.ComponentModel;
using System.Windows.Forms;
using System.Data.Entity;
using CodeFirstModel;
using System.Data.Entity.Database;

public partial class Form1 : Form
{
    ProductContext _context;

    public Form1()
    {
        InitializeComponent();
    }

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        _context = new ProductContext();

        // Call the Load method to get the data for the given DbSet from the database.
        // The data is materialized as entities. The entities are managed by
        // the DbContext instance.
        _context.Categories.Load();

        // Bind the categoryBindingSource.DataSource to
        // all the Unchanged, Modified and Added Category objects that
        // are currently tracked by the DbContext.
        // Note that we need to call ToBindingList() on the ObservableCollection<TEntity>
        // returned by the DbSet.Local property to get the BindingList<T>
        // in order to facilitate two-way binding in WinForms.
        this.categoryBindingSource.DataSource =
            _context.Categories.Local.ToBindingList();
    }

    private void categoryBindingNavigatorSaveItem_Click(object sender, EventArgs e)
    {
        this.Validate();

        // Currently, the Entity Framework doesn't mark the entities that are removed
        // from a navigation property (in our example the Products) as deleted in the context.
        // The following code uses LINQ to Objects against the Local collection
        // to find all products and marks any that do not have a Category reference as deleted.
        // The ToList call is required because otherwise the collection will be modified
        // by the Remove call while it is being enumerated.
        // In most other situations you can do LINQ to Objects directly against the
        // Local property without using ToList first.
        foreach (var product in _context.Products.Local.ToList())
        {
            if (product.Category == null)
            {
                _context.Products.Remove(product);
            }
        }

        // Save the changes to the database.
        this._context.SaveChanges();

        // Refresh the controls to show the values
        // that were generated by the database.
        this.categoryDataGridView.Refresh();
    }

    protected override void OnClosing(CancelEventArgs e)
    {
        base.OnClosing(e);
        this._context.Dispose();
    }
}

Test the application

When you run the application for the first time, the Entity Framework uses the default conventions to create the database on the localhost\SQLEXPRESS instance and names it after the fully qualified type name of the derived context (CodeFirstModel.ProductContext). On subsequent runs, unless the model changes, the existing database is used. You can change the default behavior by overriding the code-first default conventions. For more information, see EF Feature CTP5: Code First Walkthrough.
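
If you want different database-creation behavior during development, one option is to register a database initializer before the context is first used, for example at the start of Form1's OnLoad override. This is a hedged sketch against the CTP5 initializer API (the DbDatabase class and DropCreateDatabaseIfModelChanges<TContext> initializer in the System.Data.Entity.Database namespace that Form1.cs already imports); check the CTP5 documentation for the exact names in your build:

// Sketch only: drop and recreate the database whenever the model changes (development use only).
DbDatabase.SetInitializer(new DropCreateDatabaseIfModelChanges<ProductContext>());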

1. Set the WindowsFormsApplication1 project as a startup project.

    1.1. Click the right mouse button on the WindowsFormsApplication1 project and select “Set as StartUp project”.

2. Compile and run the application.

3. Enter a category name in the top grid and product names in the bottom grid.

4. Press the Save button to save the data to the database. After the call to DbContext’s SaveChanges(), the CategoryId and ProductId properties are populated with the database generated values.

Summary

In this post we demonstrated how to create a model using Code First development and then use the types defined in the model as data sources in a “master-detail” Windows Forms application.


Patrick Emmons posted How to Simplify App Development Using Microsoft Visual Studio LightSwitch to eWeek’s Application Development News blog on 2/15/2011:

From the abstract:

Microsoft Visual Studio LightSwitch is a rapid development environment that makes it easy for developers to create line-of-business applications. Although many developers don't see Microsoft Visual Studio LightSwitch as a useful tool, its cost-effectiveness and efficiency for both startup companies and prototyping are just a couple of its key benefits. Here, Knowledge Center contributor Patrick Emmons further explains why Microsoft Visual Studio LightSwitch can be very beneficial for developers to use in some circumstances.

Table of Contents:

  1. How to Simplify App Development Using Microsoft Visual Studio LightSwitch
  2. The Best of Both Worlds
  3. Efficiency vs. Maturation

Microsoft's Visual Studio Live! is a developer's conference that is all about development in the Visual Studio environment. Visual Studio Live! Orlando was held at the Hilton Walt Disney World Resort in November 2010. One of the big announcements from this conference was the launch date for Visual Studio LightSwitch.

Visual Studio LightSwitch is a rapid development environment that gives technical and somewhat technical people the ability to create lightweight, line-of-business applications. While many developers don't think Visual Studio LightSwitch will be useful for creating applications, I think it can be very beneficial to use in the right circumstances. Here are some reasons why.

Right-sized versus enterprise-ready

In recent years there has been a growing philosophy that everything needs to be enterprise-ready. The prevailing thought is that all solutions need to be scalable, flexible, "anything-able." While that is true for anything that really does need to be enterprise-ready, there are situations where enterprise-ready is too much.

Imagine you are a small startup. You are not focused on enterprise-ready. You are focused on getting through your first year. Alternately, you might be an established organization that is considering getting into a new line of business. Focusing on getting something up and running to let your employees share information in a cost-effective way would ensure that you are not risking valuable resources (that is, capital). In today's economy, capital budgets are limited (and nonexistent in some companies).

Read more: Next: The Best of Both Worlds >>


Reji Riyadh asked Not much happening in Lightswitch Dev Cen as compared to MVC, Silverlight or Sharepoint. Why?? in a Visual Studio LightSwitch - General (Beta1) forum thread:

Is LightSwitch a serious product? Why then is there very little activity in the Developer Center? Except for Beth Massi's occasional blogs, no one else in Microsoft seems to be interested. Why doesn't Somasegar or Scott Guthrie write about it and tell us how important this product will be for Microsoft? They normally cover every technology, especially in the developer tools.

Steve Hoag, a Microsoft moderator, replied:

Yes, Visual Studio LightSwitch is a serious product. Please keep in mind that this is Beta 1 of a brand new product, so the fact that there already is a Dev Center, team blog and forum devoted to it is a pretty good sign that Microsoft is very interested in its future.


<Return to section navigation list> 

Windows Azure Infrastructure

The Windows Azure Platform, Web Hosting and Web Services blog announced the availability of a Learning Plan for Azure and Dynamic Scaling on 2/15/2011:

This package will help Web VAPs [Value-Added Providers] understand the opportunity with developing on the Windows Azure Platform, and also review the technical aspects of deployment with a special concentration on dynamic scaling.

Windows Azure Platform for Web VAPs

Not a Microsoft partner? Join for free here


VAP appears to be a new acronym to cover partners who aren’t Value-Added Resellers (VARs), such as Independent Software Vendors (ISVs) who aren’t independent of Microsoft.


Nicole Hemsoth convened a CTO Panel: Are Public Clouds Ripe for Mission Critical Applications? and reported its conclusions in a 2/15/2011 post to the HPC in the Cloud blog:

This week we gathered the opinions of five technical leaders at cloud service companies to gauge their views on customer reception of the idea of placing mission-critical applications on public cloud resources. Chief Technical Officers from smaller public cloud-focused companies, including Stelligent, Hyperstratus, Appirio, Arcus Global, and Nube Technologies, weighed in on their sense of customer acceptance of putting core applications in the cloud.

Just as important as the initial question about viability is a secondary query—for those that did decide to send mission-critical apps to the public cloud, what was the driving factor?

A number of surveys have been conducted over the course of the past year to gauge general sentiments about placing business-critical or mission-critical applications in the cloud. More specifically, on a public cloud resource such as that offered by Amazon Web Services.

Although survey data varies according to the respondent base, the consensus seems to be that there is still quite a bit of hesitancy to place mission-critical applications in an environment where there is not a complete sense of control—not to mention concerns about data protection and location, compliance and regulatory risks, fear of lock-in…the list tends to go on.

One recent survey conducted by ESG Research found that of the 600 American and European IT professionals questioned, 42% said that public clouds would not enter into their business models in the next five years. Among the top reasons listed were, perhaps not surprisingly, data and privacy concerns (43%), loss of control (32%), existing investments in current infrastructure (also at 32%), the need to feel that the cloud ecosystem is mature before diving in (29%), and 28% responded that they were satisfied with their current infrastructure.

While conversations with enterprise IT leaders often follow this same trajectory in terms of response, the time seemed ripe to check in with technical leaders at a number of cloud services companies to see if their sense of customer concerns about placing mission-critical applications in the cloud matched the hesitance reflected in the survey data.

In addition to gauging their sense of the climate for mission-critical applications running on public cloud resources, we also asked a secondary question—“is it a ‘tough sell’ for customers to put business critical applications on such resources and when it is not, what is the motivating factor?”

To provide some depth to the issue of the viability of mission-critical applications for public clouds (and what does eventually tip the scale for some companies to make that decision) we gathered opinions from Lars Malmqvist, CTO and Director of Arcus Global Ltd.; Sonal Goyal, CTO/CEO Nube Technologies; Paul Duval, CTO at Stelligent; Glenn Weinstein, CTO at Appirio, and Bernard Golden from HyperStratus.

We'll start with sentiments from a company that has experience dealing with public sector clients, Arcus Global Ltd.

Lars Malmqvist serves as Director and CTO at Arcus Global Ltd., a company that deals specifically with the needs of public sector clients in the UK. The company supports pilots, migration, development and planning for cloud computing projects for large government organizations. This public sector focus made the company a natural choice for the question of whether or not the concerns outweigh the benefits for core applications on public cloud resources since governments everywhere are approaching the concept of clouds with caution.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds


No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) recommended Challenging the Firewall Data Center Dogma in her 2/15/2011 post to F5’s DevCentral blog:

Do you really need a firewall to secure web and application services? Some organizations would say no based on their experiences while others are sure to quail at the very thought of such an unnatural suggestion.


Firewalls are, in most organizations, the first line of defense for web and application services. This is true whether those services are offered to the public or only to off-site employees via secure remote access. The firewall is, and has been, the primary foundation around which most network security architectures are built.

We’ve spent years designing highly-available, redundant architectures that include the firewall. We’ve deployed them not only at “the edge” but moved them further and further into the data center in architectures that have commonly become known as “firewall sandwiches”. The reasons for this are simple – we want to protect those services that are critical to the business and the primary means by which we accomplish that task is by controlling access to them via often simple but powerful access control.

In later years we’ve come to rely upon additional intrusion detection systems such as IPS (Intrusion Prevention Systems) that are focused on sniffing out (sometimes literally) malicious attacks and attempts to circumvent security policies and stop them.

One of the core attacks against which such solutions protect services is a denial of service.

Unfortunately, it is increasingly the reality that the firewall is able neither to detect nor withstand such attacks, and ultimately such devices fail – often at a critical moment. The question then is what to do about it. The answer may be to simply remove the firewall from the critical data path for web services.

THAT’S UNNATURAL!

Just about anything is unnatural the first time you try it, but that doesn’t mean it isn’t going to work or that it’s necessarily wrong. One of my favorite fantasy series – David Eddings’ Belgariad – illustrates this concept quite nicely. A couple of armies need to move their ships up an escarpment to cross a particular piece of land to get where they need to be. Now usually fording – historically – involves manhandling ships across land. This is hard and takes a lot of time. No one looked forward to this process. In the story, someone is wise enough to put these extremely large ships on wheels and then leverage the power of entire herds of horses to move them over the land, thus improving performance of the process and saving a whole lot of resources. One of the kings is not all that sure he likes violating a precept that has always been akin to dogma – you ford ships by hand.


King Rhodar put on a perfectly straight face. “I’ll be the first to admit that it’s probably not nearly as good as moving them by hand, Anheg. I’m sure there are some rather profound philosophical reasons for all that sweating and grunting and cursing, but it is faster, wouldn’t you say? And we really ought to move right along with this.”

“It’s unnatural,” Anheg growled, still glaring at the two ships, which were already several hundred yards away.

Rhodar shrugged. “Anything’s unnatural the first time you try it.” 

-- “Enchanter’s End Game”, David Eddings (p 147)

Needless to say King Anheg eventually gave in and allowed his ships to be moved in this new, unnatural way, finding it to be more efficient and faster and ultimately it kept his men from rebelling against him for making them work so hard.

This same lesson can be applied to removing the firewall from the critical inbound data path of services. Sure, it sounds unnatural and perhaps it is if it’s the first time you’re trying it, but necessity is the mother of invention and seems to also help overcome the feeling that something shouldn’t be done because it hasn’t been done before. If you need convincing as to why you might consider such a tactic, consider a recent survey conducted by Arbor Networks showing an increasing failure rate of firewalls and IPS solutions due to attacks.

“Eighty-six percent of respondents indicated that they or their customers have placed stateful firewall and/or IPS devices in their IDCs. Nearly half of all respondents—a solid majority of those who actually have deployed these devices within their IDCs— experienced stateful firewall and/or IPS failure as a direct result of DDoS attacks during the survey period. Only 14 percent indicated that they follow the IDC BCP of enforcing access policy via stateless ACLs deployed on hardware-based routers/Layer 3 switches capable of handling millions of packets per second.”[emphasis added]

-- Network Infrastructure Security Report Volume VI, Arbor Networks, Feb 1 2011

That is a lot of failures, especially given that firewalls are a critical data center component and are almost certainly in the path of a business critical web or application service.

But it’s dogma; you simply must have a firewall in front of these services. Or do you?

BASIC FIREWALLING ISN’T ENOUGH

The reality is that you need firewall functionality – services - but you also need a lot more. You need to control access to services at the network layers but you also need to mitigate access and attacks occurring at the application layers. That means packet-based firewalls – even with their “deep packet inspection” capabilities – are not necessarily up to the task of protecting the services they’re supposed to be protecting. The Anonymous attacks taught us that attacks are now not only distributed from a client perspective, they’re also distributed from a service perspective; attacking not only the network but the application layers. That means every device between clients and servers must be capable of handling not only the increase in traffic but somehow detecting and preventing those attacks from successfully achieving their goal: denial of service.

During the anonymous attacks, discussions regarding what to do about traffic overwhelming firewalls resulted in what might be considered an “unnatural” solution: removal of the firewall. That’s because the firewall was actually part of the problem, not the solution, and removing it from the inbound data path resulted in a more streamlined (and efficient) route that enabled continuous availability of services despite ongoing attacks – without compromising security.

Yes, you heard that right. Some organizations are running sans firewall and finding that for inbound web services, at least, the streamlined path is maintaining a positive security posture while ensuring availability and performance. That doesn’t mean they are operating without those security services in place, it just means they’ve found that other components in the inbound data path are capable of providing those basic firewalling services without negatively impacting availability.

ATTACKS AREN’T the ONLY PROBLEM

It isn’t just attacks that are going to pose problems in the near future for firewalls and IPS components. The increase in attacks and attack surfaces are alarming, yes, but it’s that combined with an increase in traffic in general that’s pushing load on all data center components off the charts. Cisco recently shared the results of its latest Visual Networking Index Forecast:

“By 2015, Cisco says that mobile data traffic will grow to 6.3 exabytes of data or about 1 billion gigabytes of data per month. The report indicates that two-thirds of the mobile data traffic on carrier networks in 2015 will come from video services. This trend follows a similar trend in traditional broadband traffic growth.” 

Read more: http://news.cnet.com/8301-30686_3-20030291-266.html#ixzz1CtYWZPAk

Cisco’s report is obviously focused on service providers as they will bear the brunt of the increase in traffic (and in many cases they bear the majority of the impact from denial of service attacks) but that traffic is going somewhere, and somewhere is often your data center, accessing your services, increasing load on your data center infrastructure.

Load testing, to be sure, of an active architecture is important. It’s the only way to really determine what the real capacity for your data center will be and how it will respond under heavy load – and that includes the additional strain resulting from an attack. Cloud-based load testing services are available and can certainly be of assistance in performing such testing on live infrastructure. And yes, it  has to be live or it won’t find all the cracks and fissures in your architecture. It isn’t your lab environment, after all, that’s going to be under attack or stressed out by sudden surges in traffic. Perhaps no problems exist, but you really don’t want to find out there are when the pressure’s on and you have to make the decision in the heat of the moment. Try testing with your firewall, and without (assuming you have solutions capable of providing the security services required in the inbound data path). See if there is an impact (positive or negative) and then you’ll be better able to make a decision in the event it becomes necessary.

Putting firewalls in front of your Internet services has been dogma for a long, long time.  But are they up to the task?  It would appear in many cases they aren’t. When a solid majority of folks have found their sites down due to firewall failure, we may need to rethink the role of a firewall in securing services. That doesn’t mean we’ll come to a different conclusion, especially as only part of the architectural decisions made regarding data center security are dependent on technological considerations; other factors such as risk tolerance by the business are often the driving factor and play a much larger role in such decisions whether IT likes it or not. But it does mean that we should occasionally re-evaluate our data center strategies and consider whether traditional architectural dogma is still appropriate in today’s environment. Especially when that architectural dogma may be part of the problem.


<Return to section navigation list> 

Cloud Computing Events

Forrester Research is holding Forrester’s Enterprise Architecture Forum 2011 on 2/17 and 2/18/2011 at the Palace Hotel in San Francisco, CA:

Every business is a digital business. But the most successful businesses seamlessly couple information and technology with their continuously evolving business processes. This isn’t easy – in most organizations, processes are uncharted, information is unreliable, and technology has created stovepipes rather than integrated platforms for business execution.

Two roles are critical to break down the barriers between process, information, and technology: Enterprise Architects, to connect business goals to information, application and technology strategy, and Business Process professionals, to find the opportunities to streamline and improve their operations. These two roles can be synergistic when they work together – creating intelligent, globally consistent processes, enabling fast change to seize new opportunities. But they can also be destructive when uncoordinated.

Forrester’s Enterprise Architecture Forum 2011, through keynotes and four tracks of sessions, will provide an integrated understanding of business, information, and technology architecture – and the benefits possible when these domains are harnessed together. Attendees will explore four key areas:

Key Issues That EA Forum Will Answer
  • Connecting business process and architecture. Business architects in EA and in business areas, and business process professionals, must work together to improve and transform their businesses – but today they often work at cross-purposes. This ‘how to’ oriented track will feature sessions on linking business process and business architecture efforts. We’ll cover the methodologies these roles use to develop integrated programs for business and IT change.
  • Creating business-driven technology strategies. IT’s value comes from how it enables business models through technology. This track will cut through the theory to show how to create information, application and technology architecture & strategies which resonate with business leaders and position IT for greater value.
  • Key technology trends EAs should watch. Our top analysts will discuss the latest trends in the technologies – from Cloud-based services to pervasive BI to new collaboration capabilities – that will impact your organization in positive (or detrimental) ways, tell you what the impact could be, and how to prepare.
  • Learning best practices from the EA Award winners. Forrester, in conjunction with InfoWorld Magazine, has identified five leading EA organizations through our EA Awards program. This track will feature case studies from the EA Award winners, highlighting their journey and best practices that all EA organizations should adopt.

As noted in the James Staten (@Staten7) answered Which applications should I move to the cloud? in a 2/15/2011 post to his Forrester Research blog item in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section above:

At Forrester's Enterprise Architecture Forum in San Francisco this week, I'll be leading an interactive session on cloud economics where we will discuss the tools cloud platforms provide for affecting service profitability and how you can apply them to your business. If you are putting applications in the cloud, bring those stories so we can discuss them and make them better.

James’ Track C “Key Technology Trends That Will Change Your Business: How Application Design Can Power Cloud Economics” session will run from 4:15 to 5:00 PM on 2/17/2011.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Williams described Enomaly’s SpotCloud as A Service Similar to Google Adwords and Google AdSense - But for the Cloud in a 2/16/2011 post to the ReadWriteCloud:

Reuven Cohen [pictured at right] compares Enomaly's new Spotcloud service to Google Adwords and Google AdSense.

"Google built AdWords and Adsense into a clearinghouse for unused ad space," said Cohen, founder of Enomaly and the Spotcloud service. "This is a clearing house for unused computing."

In essence, Spotcloud is a marketplace that leverages supply and demand in the market to determine the price of the service. This is comparable to a spot market, where buyers and sellers make deals through an intermediary.

The Spotcloud service is primarily designed for edge-based capacity. Buyers can choose from regional or national providers. Sellers offer capacity on a transient basis, meaning it is available for a defined period of time. The minimum contract is one day, with the longest duration possible being 30 days.

Sellers that want to use the service have two options. They may use a free version of the Enomaly virtual machine technology to install on the cluster of machines that have the excess capacity. Alternatively, they can access the Enomaly API to offer capacity.

Spotcloud is built on Google App Engine. Cohen said he and his crew are a Python shop. Plus, for a service like Spotcloud, bandwidth is the issue more than anything else. [Emphasis added.]

Spotcloud shows that any data center can serve as a cloud service. It reminds us of Data Center Map, which we looked at last week.

For now, the services available through Spotcloud are not renewable, which means that customers will use Spotcloud on an ad-hoc basis. But the difference here is it shows that data centers and co-location facilities do not have to be left out. They can offer a service that can help make the most of what they have and offer it to a marketplace with global reach.


Dana Gardner asserted “HP has announced a comprehensive service that simplifies the process of designing and building data centers” in a preface to his Cloud Computing: HP Has Framework for One-Stop Data Center Transformation article of 2/15/2011:

As more companies look toward building or expanding data centers, HP has announced a comprehensive service that simplifies the process of designing and building data centers by offering design, construction and project management from a single vendor.

The new HP Critical Facilities Implementation service (CFI) enables clients to realize faster time-to-innovation and lower cost of ownership by providing a single integrator that delivers all the elements of a data center design-build project from start to finish. An extension of the HP Converged Infrastructure strategy, HP CFI is an architectural blueprint that allows clients to align and share pools of interoperable resources. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

A recent Gartner survey indicated that 46 percent of respondents reported that they will build one or more new data centers in the next two years, and 54 percent expected that they will need to expand an existing data center in that time frame.

“Constructing a data center is an enormous undertaking for any business, and taking an integrated approach with a single vendor will help maximize cost and efficiency, while reducing headaches,” said Dave Cappuccio, research vice president, Gartner. “As customers’ data center computing requirements add complexity to the design-build process, comprehensive solutions that provide clients with an end-to-end experience will allow them to realize their plans within the required timeframe and constraints.”

Extensive experience
Based on its experience in “greenfield” and retrofit construction, HP is delivering CFI for increased efficiency when designing and building data centers. The company draws on its experience in designing more than 50 million square feet of raised-floor data center space and its innovations in design engineering to create fully integrated facility and IT solutions.
Benefits of CFI include:

  • HP’s management of all of the elements of the design-build project and vision of integrating facilities development with IT strategy.
  • A customized data center implementation plan that is scalable and flexible enough to accommodate their existing and future data center needs.
  • Access to experience based on a track record of delivering successful customer projects around data center planning and design-build. These projects include the world’s first LEED-certified data center, the first LEED GOLD-certified data center, India’s first Uptime Institute Tier III-rated data center as well as more than 60 “greenfield” sites, including 100-megawatt facilities.

HP CFI is available through HP Critical Facilities Services. Pricing varies according to location and implementation. More information is available at www.hp.com/services/cfi.



Klint Finley described Enabling the Mobile Cloud: Appcelerator Partners with Engine Yard in a 2/15/2011 post to the ReadWriteCloud blog:

Appcelerator, the makers of the mobile development framework Titanium, announced a partnership with Ruby on Rails platform-as-a-service provider Engine Yard today. Engine Yard developers will be able to create cross-platform applications using Titanium while leaving their Rails backend environments virtually unchanged. Titanium developers will be able to take advantage of Engine Yard's scalable services for building backends.

According to the announcement, the two companies will "integrate and certify each other's technologies, jointly develop best practices, and create common architectural patterns to help developers build mobile applications using Appcelerator Titanium with Ruby on Rails backends developed and deployed on the Engine Yard platform."

Making data available across applications is important for developers, so an integrated mobile frontend and cloud backend makes a lot of sense. We've been looking at the ways that companies are preparing to bridge the worlds of mobile and cloud. See, for example, our coverage of Yahoo's use of Azure for building mobile apps.

This solution seems particularly valuable for developers building Web services with Rails and wanting to create mobile apps based on those services.

Titanium enables developers to build applications for Android, iOS, Linux, OSX and Windows using HTML, CSS and JavaScript. You can build your application once and have it run anywhere. It competes with other mobile development frameworks like PhoneGap.

Engine Yard competes with other PaaS providers like Heroku (which was recently acquired by Salesforce.com). The company is currently working with Amazon.com to bring Ruby to the new Elastic Beanstalk service from Amazon Web Services.


<Return to section navigation list> 
