Saturday, July 31, 2010

Windows Azure and Cloud Computing Posts for 7/29/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Update 7/31/2010: Added horizontal lines to separate articles within the following sections to make it easier for readers to skip to the next story. Also updated my Test Drive Project Houston CTP1 with SQL Azure tutorial and fixed two missing screen captures in the SQL Azure Database, Codename “Dallas” and OData section.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available as HTTP downloads at no charge from the book's Code Download page.

Azure Blob, Drive, Table and Queue Services

Yves Goeleven posted Introducing the NHibernate Azure Table Storage Driver to the Capgemini blog on 7/29/2010:

For some time now, I’ve been working, together with Koen Callens, on a driver for NHibernate on top of the azure table storage services api. Today, I’m proud to announce that the project has finally found a public home at: http://nhazuredriver.codeplex.com/

The primary goal of the NHibernate Azure Table Storage Driver is to ease the coding experience on top of the azure table storage services api and to benefit from the features offered by NHibernate such as identity mapping, caching, and the like. If you’ve ever dealt with handcrafting these requests or used the storage client, then you know what I mean…

But remember, azure table storage is not a relational database, so you cannot benefit from a lot of the features that you might have grown accustomed to. In the next few days, I’ll give you some more insight into how to use NHibernate on top of azure table storage.

If you’re interested in contributing to the project, just send me a mail…


Steve Marx tackles Smooth Streaming with Windows Azure Blobs and CDN with his usual aplomb on 7/29/2010:

In this post, I’ll show you how to use Windows Azure Blobs and the Windows Azure CDN to deliver Smooth Streaming video content to your users. For those who just want to try it out, head over to the Smooth Streaming with Windows Azure Blobs Uploader project on Code Gallery. The instructions there will get you going.

Understanding Smooth Streaming

Before we get into the details of how Smooth Streaming works on top of Windows Azure Blobs, it’s necessary to understand what Smooth Streaming actually is and how it works.

Smooth Streaming is Microsoft’s HTTP-based adaptive streaming protocol. As Alex Zambelli wrote in his excellent “Smooth Streaming Technical Overview”:

Adaptive streaming is a hybrid delivery method that acts like streaming but is based on HTTP progressive download. It's an advanced concept that uses HTTP rather than a new protocol.

In a typical adaptive streaming implementation, the video/audio source is cut into many short segments ("chunks") and encoded to the desired delivery format… The encoded chunks are hosted on a HTTP Web server. A client requests the chunks from the Web server in a linear fashion and downloads them using plain HTTP progressive download.

The "adaptive" part of the solution comes into play when the video/audio source is encoded at multiple bit rates, generating multiple chunks of various sizes for each 2-to-4-seconds of video. The client can now choose between chunks of different sizes. Because Web servers usually deliver data as fast as network bandwidth allows them to, the client can easily estimate user bandwidth and decide to download larger or smaller chunks ahead of time. The size of the playback/download buffer is fully customizable.

In other words, HTTP-based adaptive streaming is about taking a source video, encoding it into lots of small chunks at various bitrates, and then letting the client play back the most appropriate chunks (based on available bandwidth).
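
To make the “most appropriate chunk” idea concrete, here is a toy sketch of the kind of heuristic an adaptive client applies when picking the next chunk. This is only an illustration, not code from Steve’s post or from the actual Silverlight Smooth Streaming client, and the bitrates are hypothetical:

// Toy bitrate picker: choose the highest encoded bitrate that comfortably
// fits within the bandwidth measured while downloading previous chunks.
using System;
using System.Linq;

class BitratePicker
{
    // Hypothetical encoded bitrates, in bits per second.
    static readonly int[] Bitrates = { 350000, 700000, 1500000, 3000000 };

    static int PickBitrate(double measuredBandwidthBps)
    {
        return Bitrates.Where(b => b <= measuredBandwidthBps * 0.8) // keep ~20% headroom
                       .DefaultIfEmpty(Bitrates.Min())              // fall back to the lowest bitrate
                       .Max();
    }

    static void Main()
    {
        Console.WriteLine(PickBitrate(2000000)); // prints 1500000
    }
}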

If you’ve ever looked at IIS Smooth Streaming content, though, you’ll notice that there aren’t lots of tiny chunks. There are a few, fairly large video files. Alex explains this too:

IIS Smooth Streaming uses the MPEG-4 Part 14 (ISO/IEC 14496-12) file format as its disk (storage) and wire (transport) format. Specifically, the Smooth Streaming specification defines each chunk/GOP as an MPEG-4 Movie Fragment and stores it within a contiguous MP4 file for easy random access. One MP4 file is expected for each bit rate. When a client requests a specific source time segment from the IIS Web server, the server dynamically finds the appropriate Movie Fragment box within the contiguous MP4 file and sends it over the wire as a standalone file, thus ensuring full cacheability downstream.

Smooth Streaming files are quite literally all those little chunks concatenated together. I encourage you to read Alex’s entire article to understand the exact file format and wire format.

The key insight for our purpose is that to the client, Smooth Streaming content is just many small video chunks. The beauty of this model is that Smooth Streaming works great with CDNs and caches in between the client and the server. To the client, all that matters is that small chunks of video are being served from the appropriate URLs.

Using Windows Azure Blobs as a Smooth Streaming Host

Windows Azure Blobs can serve specific content at configurable URLs, which as we’ve seen is the only requirement to provide clients with a Smooth Streaming experience. To sweeten the deal, there’s built-in integration between Windows Azure Blobs and the Windows Azure CDN.

All that’s left for us to do is to figure out the set of URLs a Smooth Streaming client might request and store the appropriate video chunks at those URLs. There are two files that will help us do that:

  • The server manifest (*.ism) – This is a SMIL file that maps video and audio tracks to the file that contains them and the bitrates at which they were encoded.
  • The client manifest (*.ismc) – This is an XML file that specifies to the client which bitrates and timestamps are available. It also specifies the URL template clients should use to request chunks.

The combination of these two files tells us everything we need to know to extract the video chunks and store them in Windows Azure Blobs.
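
To make the URL template idea concrete, here’s a small illustration of how a chunk URL is built from the client manifest’s template. The template text shown is the typical Smooth Streaming form (the real text comes from your own .ismc file), and the base blob URL is an assumption for illustration, not from Steve’s post:

// Typical URL template found in a client manifest (*.ismc).
string urlTemplate = "QualityLevels({bitrate})/Fragments(video={start time})";

// Hypothetical base address where the content is stored.
string baseUrl = "http://myaccount.blob.core.windows.net/video/BigBuckBunny.ism";

// A client asking for the 1,500 kbps video chunk starting at timestamp 20000000
// (Smooth Streaming timestamps default to 100-nanosecond units) requests this URL:
string chunkUrl = baseUrl + "/" +
                  urlTemplate.Replace("{bitrate}", "1500000")
                             .Replace("{start time}", "20000000");
// -> .../BigBuckBunny.ism/QualityLevels(1500000)/Fragments(video=20000000)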

The Smooth Streaming with Windows Azure Blobs Uploader code first reads the server manifest and keeps track of the mapping of bitrate and content type (video or audio) to tracks within files. Then it reads the client manifest and generates all the permutations of bitrate, content type, and timestamp. For each of these, it looks up the appropriate track of the appropriate file, extracts that chunk from the file, and stores it in blob storage according to the URL template in the client manifest.

The code’s not too complicated, and you can find it in the Code Gallery project in SmoothStreamingAzure.cs.
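
Here’s a rough sketch of that flow. This is not the actual SmoothStreamingAzure.cs code; the manifest-parsing helpers (serverManifest, clientManifest, Mp4Reader) are illustrative placeholders, while the blob calls are real Windows Azure StorageClient APIs:

// Sketch only: walk the tracks named in the server manifest (*.ism), walk the
// chunk timestamps listed in the client manifest (*.ismc), extract each MPEG-4
// Movie Fragment from the source file, and upload it to the blob whose name
// matches the client manifest's URL template.
foreach (var track in serverManifest.Tracks)                          // hypothetical parsed *.ism
{
    foreach (var startTime in clientManifest.ChunkStartTimes(track))  // hypothetical parsed *.ismc
    {
        byte[] fragment = Mp4Reader.ExtractFragment(track.FileName, startTime); // placeholder helper

        string blobName = clientManifest.UrlTemplate
            .Replace("{bitrate}", track.Bitrate.ToString())
            .Replace("{start time}", startTime.ToString());

        // CloudBlobContainer.GetBlobReference and CloudBlob.UploadByteArray
        // are StorageClient calls; 'container' is the target blob container.
        container.GetBlobReference(blobName).UploadByteArray(fragment);
    }
}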

Prior Work

After I patted myself on the back for coming up with this brilliant scheme, it was pointed out to me that Alden Torres blogged about this back in December 2009. He used a tool on Codeplex called MP4 Explorer, which has a feature that allows uploading to blob storage. That tool reads the source MP4 files themselves and derives the chunks from there (as opposed to my approach, which reads the client manifest).

The two big reasons I decided to write my own code for this were that I wanted a command-line tool and that I wanted to upload the blobs in parallel. I was able to cut down the upload time for the Big Buck Bunny video from around three hours (as Alden mentions in his post) to around thirty minutes simply by doing the uploads in parallel.

Shortcomings of This Approach

To the client, there’s no difference between IIS Smooth Streaming (hosted by IIS Media Services) or Smooth Streaming with Windows Azure Blobs. However, to the content owner and to the server, there are significant differences:

  • With IIS on the server, scenarios that require server intelligence are possible (like real-time transcoding or encryption).
  • There are fewer files to manage with IIS (since it keeps all the content in a small number of files). This makes copying files around and renaming them much simpler.
  • As future features (like fast-forward and new targets like the iPad) come out, all you need to do is update IIS Media Services to get the new functionality. With a solution like the one described in this post, you’ll need to reprocess existing content.

Because the manifest formats for IIS Smooth Streaming are actively evolving, there’s no guarantee that my code will work correctly with future Smooth Streaming clients and content.

Specifically, there are a few features of IIS Smooth Streaming that my code doesn’t handle today:

  • Trick play (fast-forward and rewind). This is supported under IIS by extracting keyframes from the video. My code doesn’t support extracting these keyframes.
  • Live Smooth Streaming. Handling a live event (where the manifest is changing and the chunks include extra hints about future chunks) isn’t supported in my code.

The Windows Azure team is still committed to running full IIS Media Services within Windows Azure web roles in the future.

Get the Tool

If you’d like to host Smooth Streaming content in Windows Azure Blobs, please check out the Smooth Streaming with Windows Azure Blobs Uploader project on Code Gallery, where you can download the command-line tool as well as the full source code.


Yves Goeleven’s Getting started with NHibernate on Azure table Storage services post of 7/29/2010 to the Capgemini blog uses the NHibernate azure table storage driver from http://nhazuredriver.codeplex.com/:

In order to get NHibernate running on top of azure table storage, you first need an azure account obviously, or at least have installed the development fabric. I assume you’ve got that covered before attempting this tutorial.

Next up is to download and compile the NHibernate azure table storage driver from http://nhazuredriver.codeplex.com/. It already includes a test project that shows you how to get started. If you want to try it out, I suggest you continue from there.

Setting up the driver and its connection

First thing to do is to set up a SessionFactory that has been configured to use the driver and that has a connection to your azure storage account. The easiest way to do this is to use the Fluent NHibernate API, for which a configuration is included in the driver. This configuration connects to development storage by default, but you can pass it a connection string in any of the formats specified by MSDN:

var fluentConfiguration = Fluently.Configure().Database(
    AzureConfiguration.TableStorage
        .ProxyFactoryFactory(typeof(ProxyFactoryFactory).AssemblyQualifiedName)
        .ShowSql());
fluentConfiguration.Mappings(cfg => cfg.HbmMappings.AddFromAssemblyOf<NewsItem>());
sessionFactory = fluentConfiguration
    .ExposeConfiguration(cfg => nHibernateConfiguration = cfg)
    .BuildSessionFactory();

Note that I’ve exposed the internal NHibernate configuration; I will use it to tell NHibernate to create the schema in the table storage service. In reality, the underlying store doesn’t have a concept of schema; only the table name is registered.

using (var session = sessionFactory.OpenSession())
{
    var export = new SchemaExport(nHibernateConfiguration);
    export.Execute(true, true, false, session.Connection, null);
    session.Flush();
}

Mapping files

The azure storage environment does pose some restrictions on what you can specify in a mapping file as well:

  • The identifier must be a composite key which includes the fields RowKey and PartitionKey, both must be of type string (no exceptions)
  • All references between entities in different tables must be lazy loaded; join fetching (or any other relational setting for that matter) is not supported

A simple mapping file would look like:

<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                   assembly="NHibernate.Drivers.Azure.TableStorage.Tests"
                   namespace="NHibernate.Drivers.Azure.TableStorage.Tests.Domain">

  <class name="NewsItem" table="NewsItems">
    <composite-id>
      <key-property name="Id" column="RowKey" />
      <key-property name="Category" column="PartitionKey"/>
    </composite-id>
    <property name="Title" type="String" />
  </class>

</hibernate-mapping>
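
For reference, the entity class this mapping implies would look something like the sketch below; the actual test domain class in the driver’s test project may differ in detail. Because NHibernate treats a composite-id entity as its own identifier, Equals and GetHashCode should be overridden:

public class NewsItem
{
    public virtual string Id { get; set; }        // stored in the RowKey column
    public virtual string Category { get; set; }  // stored in the PartitionKey column
    public virtual string Title { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as NewsItem;
        return other != null && other.Id == Id && other.Category == Category;
    }

    public override int GetHashCode()
    {
        return string.Concat(Id, "|", Category).GetHashCode();
    }
}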

Persisting instances

Now we’re ready to go: save, update, get, load, list, delete, etc. are all operational. In order to test this quickly, you could run a PersistenceSpecification from the FluentNHibernate library.

using (var session = SessionFactory.OpenSession())
{
    new PersistenceSpecification<NewsItem>(session)
        .CheckProperty(c => c.Id, "1")
        .CheckProperty(c => c.Title, "Test Title")
        .CheckProperty(c => c.Category, "Test Category")
        .VerifyTheMappings();
}
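
Outside of FluentNHibernate’s verification helper, plain session calls work too. Here is a minimal sketch, assuming the NewsItem class and mapping shown earlier:

using (var session = sessionFactory.OpenSession())
{
    var item = new NewsItem { Id = "1", Category = "Azure", Title = "Hello table storage" };
    session.Save(item);
    session.Flush(); // pushes the insert to the NewsItems table

    // With a composite-id mapping, the entity instance itself acts as the identifier.
    var loaded = session.Get<NewsItem>(new NewsItem { Id = "1", Category = "Azure" });
}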

Some remarks

Please note that azure table storage DOES NOT support transactions, so all data you put in the store during testing must be removed before executing the next test. PersistenceSpecification does this by default, but in other tests you might have to do it yourself.

Also most NHibernate settings that rely on relational storage features, such as joins, batches, complex queries, etc… don’t work (yet). There is a lot of room for improvement, so any contributions are welcome…


Frederico Boerr shows you how to Create a WCF Data Service (OData) to share an Azure Table in this 7/29/2010 post to the Southworks blogs:

The Open Data Protocol (OData) is a new protocol for querying and updating data. Find in this site a list of sites that already support OData. Windows Azure Table Storage is one of them, but to use this endpoint, the storage key is needed.

Sharing an Azure Table is easy using a WCF Data Service and the Azure SDK.

1. First, create the class that will be used by the ADO.NET Data Service to create the service definition. All the IQueryable properties in this class will become a collection shared in the service.

Here is the class that we will use in our service, called AzureTableContext. The only collection that will be exposed is the Menu (public IQueryable<MenuItemRow> Menu).

namespace Sample.OData
{
    using System;
    using System.Configuration;
    using System.Data.Services.Common;
    using System.Linq;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    public class AzureTableContext
    {
        private readonly TableServiceContext tableContext;

        public AzureTableContext()
        {
            CloudStorageAccount.SetConfigurationSettingPublisher(
                (configName, configSetter) =>
                    configSetter(ConfigurationManager.AppSettings[configName].ToString()));

            var account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");

            this.tableContext = new TableServiceContext(account.TableEndpoint.ToString(), account.Credentials);
        }

        public IQueryable<MenuItemRow> Menu
        {
            get
            {
                return this.tableContext.CreateQuery<MenuItemRow>("Menu").AsTableServiceQuery();
            }
        }
    }

    [EntityPropertyMapping("Name", SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true)]
    public class MenuItemRow : TableServiceEntity
    {
        public string Name { get; set; }
        public string Description { get; set; }
        public DateTime CreatedOn { get; set; }
    }
}

The attribute EntityPropertyMapping has been added in order to have the name of the MenuItemRow displayed as Title when browsing the service from a web browser.

2.  The Cloud Storage Account configuration is read from the web.config. Make sure that the following setting is in your site’s configuration:

Web.config file configuration

<appSettings>
  <add key="DataConnectionString" value="UseDevelopmentStorage=true" />
</appSettings>

3. Right-click an existing website and select “Add”> “New item”. On the top-right corner’s textbox, type: “WCF Data Service”.


4. Add the following code to the class that has been auto-generated by the wizard.

namespace Sample.OData
{
    using System.Data.Services;
    using System.Data.Services.Common;

    [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class WcfDataService1 : DataService<AzureTableContext>
    {
        // This method is called only once to initialize service-wide policies.
        public static void InitializeService(DataServiceConfiguration config)
        {
            // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            config.UseVerboseErrors = true;
        }
    }
}

The ServiceBehavior attribute has been added to help debugging. It’s also very useful to set the UseVerboseErrors in the data service configuration to get better error messages.

The “*” in the SetEntitySetAccessRule and SetServiceOperationAccessRule configurations will allow querying all the entities.

5. Run the solution and browse to the service’s web page. The service definition is displayed.


Given we have exposed the Menu collection, we can browse it by adding Menu at the end of the service’s URL.

All the rows in the Menu table are displayed as an atom feed.
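
To consume the feed from .NET rather than a browser, the WCF Data Services client library works against the same URL. The sketch below is mine, not part of Frederico’s post; the service address is a placeholder and the client-side MenuItemRow simply mirrors the server class:

using System;
using System.Data.Services.Client;   // System.Data.Services.Client.dll
using System.Data.Services.Common;
using System.Linq;

class MenuClient
{
    // Client-side shape of the entries; mirrors the server's MenuItemRow.
    [DataServiceKey("PartitionKey", "RowKey")]
    public class MenuItemRow
    {
        public string PartitionKey { get; set; }
        public string RowKey { get; set; }
        public string Name { get; set; }
        public string Description { get; set; }
        public DateTime CreatedOn { get; set; }
    }

    static void Main()
    {
        // Placeholder address; substitute your site's actual service URL.
        var context = new DataServiceContext(new Uri("http://localhost:81/WcfDataService1.svc"));

        // Issues a GET against /Menu and materializes the Atom entries.
        var menu = context.CreateQuery<MenuItemRow>("Menu").ToList();

        foreach (var item in menu)
        {
            Console.WriteLine("{0}: {1}", item.Name, item.Description);
        }
    }
}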


Frederico Boerr explains Windows Azure Storage: TDD and mocks in this 7/23/2010 post:

Over the last few months, we have been working on a sample application for the Windows Azure Architecture Guide.

One of the challenges we wanted to tackle on the development side was developing the majority of the sample application following TDD practices.

This post shows how we mocked up Azure Storage tables by using an IAzureTable interface. Similar interfaces have been developed for queues (IAzureQueue) and blobs (IAzureBlobContainer).

Directly using WindowsAzure.StorageClient (from Windows Azure SDK)

When developing applications for Windows Azure, the most commonly used library for accessing Azure Storage is the Storage Client (Microsoft.WindowsAzure.StorageClient) that comes as part of the Windows Azure SDK.

As an example, let’s imagine we are developing the SurveyStore (this class is part of the Storage component in the diagram above). This class is called from the controllers and interacts with the persistence stores.


SurveyStore using WindowsAzure.StorageClient

public class SurveyStore : ISurveyStore
{
    private readonly CloudStorageAccount account;

    public SurveyStore(CloudStorageAccount account)
    {
        this.account = account;
    }

    public IEnumerable<Survey> GetSurveysByTenant(string tenant)
    {

        var cloudTableClient = new CloudTableClient(this.account.TableEndpoint.ToString(), this.account.Credentials);
        cloudTableClient.CreateTableIfNotExist("SurveysTable");

        TableServiceContext context = this.CreateContext();

        var query = (from s in context.CreateQuery<SurveyRow>("SurveysTable")
                           where s.PartitionKey == tenant
                           select s).AsTableServiceQuery();

        return query.Execute().Select(surveyRow => new Survey(surveyRow.SlugName)
                                                         {
                                                             Tenant = surveyRow.PartitionKey,
                                                             Title = surveyRow.Title,
                                                             CreatedOn = surveyRow.CreatedOn
                                                      }).ToList();
    }
}

Testing this implementation

If we want to write a test for this method, it would have to be a functional test because there is no way to mock up the calls to CloudTableClient and to TableServiceContext.

Every time we run the test, we have to:

1. Ensure the data we will query is exactly what we are expecting to get (2 calls to the real Surveys Table in the Azure storage)

2. Call GetSurveysByTenant (1 call to the real Surveys Table in the Azure storage)

3. Assert that we got what we were expecting from the store

[TestMethod]
public void GetSurveysByTenant()
{
    var expenseContext = AzureStorageHelper.GetContext();

    var expected = new Expense { Tenant = "Tenant",  (… initialize other properties …) };
    expected.Details.Add(new ExpenseItem { (… initialize all properties …) });
    AzureStorageHelper.DeleteExpenseAndItemsById(expenseContext, expected.Id);
    AzureStorageHelper.SaveExpense(expenseContext, expected);

    var store = new ExpenseStore();
    var expenses = store.GetSurveysByTenant("Tenant");

    Assert.AreEqual(1, expenses.Count());
    var actual = expenses.Single(e => e.Id == expected.Id);
    Assert.AreEqual("Tenant", actual.Tenant);

    (Assert other properties …)
}

Wrapping WindowsAzure.StorageClient with an IAzureTable

If our development is driven by TDD, or we just want to write unit tests for any class that has to interact with Windows Azure Storage, we find a problem with the implementation shown above.

Working with CloudTableClient or TableServiceContext will not allow us to write *unit* tests. Testing the previous implementation implied not only testing the SurveyStore class code, but also the code that accesses the Windows Azure table itself. The ONLY way to unit test the SurveyStore code is to write stubs and mocks for the code that accesses the Windows Azure table.

This implementation also follows a good object-oriented design: "Program to an ‘interface’, not an ‘implementation’" and provides all its advantages.

SurveyStore using IAzureTable

public class SurveyStore : ISurveyStore
{
    private readonly IAzureTable<SurveyRow> surveyTable;

    public SurveyStore(IAzureTable<SurveyRow> surveyTable)
    {
        this.surveyTable = surveyTable;
    }

    public IEnumerable<Survey> GetSurveysByTenant(string tenant)
    {
        var query = from s in this.surveyTable.Query
                           where s.PartitionKey == tenant
                           select s;

        return query.ToList().Select(surveyRow => new Survey(surveyRow.SlugName)
                                                      {
                                                          Tenant = surveyRow.PartitionKey,
                                                          Title = surveyRow.Title,
                                                          CreatedOn = surveyRow.CreatedOn
                                                      });
    }
}
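
The IAzureTable<SurveyRow> dependency is just a thin wrapper over the storage client. As a point of reference, here’s a minimal sketch of what such an interface might look like; the actual interface in the Windows Azure Architecture Guide sample may differ:

public interface IAzureTable<T> where T : TableServiceEntity
{
    IQueryable<T> Query { get; }

    void EnsureExist();        // create the underlying table if it doesn't exist
    void Add(T entity);
    void AddOrUpdate(T entity);
    void Delete(T entity);
}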

Testing this implementation

Testing this implementation is easier and lets us focus ONLY on one part at a time.

In this example, we are only testing that the method GetSurveysByTenant correctly copies the title from the row read from the IAzureTable (SurveysTable) to the returned survey.

By mocking the IAzureTable, we can setup what the Query property is going to return, so there is no need to interact with the Windows Azure Storage itself. Remember that in the previous implementation we had to make 3 calls to the Windows Azure Table. Here, we are making no calls to the Windows Azure Table.

[TestMethod]
public void GetSurveysByTenantReturnsTitle()
{
    var surveyRow = new SurveyRow { PartitionKey = "tenant", Title = "title" };
    var surveyRowsToReturn = new[] { surveyRow };
    var mock = new Mock<IAzureTable<SurveyRow>>();
    mock.SetupGet(t => t.Query).Returns(surveyRowsToReturn.AsQueryable());
    var store = new SurveyStore(mock.Object, default(IAzureTable<QuestionRow>));

    var actualSurveys = store.GetSurveysByTenant("tenant");

    Assert.AreEqual("title", actualSurveys.First().Title);
}

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Updated My Test Drive Project Houston CTP1 with SQL Azure tutorial by using the Northwind database (ww0h28tyc2.database.windows.net) published by the SQL Azure Houston team (@SQLHouston) on 7/31/2010 with read/write and EXEC permissions for Database: nwind, Login: nwind, Password: #SQLHouston. See Tweet1 and Tweet2:


This means you can test SQL Azure Houston without having a SQL Azure account.


Wayne Walter Berry (@WayneBerry) said I Miss You SQL Server Agent: Part 1 on 7/30/2010 in the first of a series of posts about creating SQL Agent-like features with a Windows Azure role:

Currently, SQL Azure doesn’t support running SQL Server Agent in the cloud. If you need SQL Server Agent-type functionality for SQL Azure, you can use a Windows Azure worker role and some custom code; I will show you how in this blog post series.

“Secret agent man, secret agent man
They've given you a number and taken away your name” – Johnny Rivers

SQL Server Agent

SQL Server Agent is a Microsoft Windows service that executes scheduled administrative tasks, which are called jobs. SQL Server Agent uses SQL Server to store job information. Jobs contain one or more job steps. Each step contains its own task, for example, backing up a database. SQL Server Agent can run a job on a schedule, in response to a specific event, or on demand. For example, if you want to back up all the company servers every weekday after hours, you can automate this task. Schedule the backup to run after 22:00 Monday through Friday; if the backup encounters a problem, SQL Server Agent can record the event and notify you. SQL Server Agent is installed with the on-premise Enterprise, Datacenter and Standard editions of SQL Server; however, it is not available in the Express or Compact editions.

Worker Roles

A worker role is basically a Windows service in the cloud that understands the concepts of starting, stopping, and configuration refreshes. When started, a well-designed worker role enters an endless loop that:

  • Checks for an event that signals processing.
  • Performs an action.
  • Sleeps until the event needs to be checked again.

They are purposefully abstract so that they can be used for multiple purposes.

Disclaimer: Because worker roles are purposefully abstract and SQL Server Agent is a well-developed, well-rounded product that has been around for years, I will not be replicating SQL Server Agent’s complete functionality with a worker role. Instead, what I will attempt to do is provide some simple code to convey the idea that you can accomplish (with some coding) many of the tasks that SQL Server Agent does from a Windows Azure worker role.

Getting Started

To get started, I create a Cloud project in Visual Studio, which includes a Worker role. You can read more about the basics of doing this here.


This generates me some sample code in a file called WorkerRole.cs that looks like this:

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");
    }
}

The first thing I want to do is write some code that will execute a stored procedure on my SQL Azure database; this would be the equivalent of running a Transact-SQL script in a step within a SQL Server Agent job.


My code looks like this:

protected void ExecuteTestJob()
{
    using (SqlConnection sqlConnection = new SqlConnection(
        ConfigurationManager.ConnectionStrings["AdventureWorksLTAZ2008R2"].
            ConnectionString))
    {
        try
        {
            // Open the connection
            sqlConnection.Open();

            SqlCommand sqlCommand = new SqlCommand(
                "spTest", sqlConnection);

            sqlCommand.CommandType =
                System.Data.CommandType.StoredProcedure;

            sqlCommand.ExecuteNonQuery();
        }
        catch (SqlException)
        {
            Trace.WriteLine("SqlException","Error");
            throw;
        }
    }
}

Typically, you will be executing stored procedures on SQL Azure that don’t return any data, using the SqlCommand.ExecuteNonQuery() method. There is nowhere for a returning result set to go, i.e. no output; this is similar to SQL Server Agent.
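
As a point of reference, here’s one way ExecuteTestJob might be wired into the worker role’s Run method. This is only a sketch: the fixed one-hour sleep is a placeholder for the real scheduling logic covered in Part 2, and the error handling is deliberately minimal ahead of Part 3.

public override void Run()
{
    while (true)
    {
        try
        {
            ExecuteTestJob();
        }
        catch (SqlException)
        {
            // Roughly the "report the failure" choice: log it here,
            // or rethrow to let the role recycle and restart.
            Trace.WriteLine("Scheduled job failed", "Error");
        }

        // Placeholder schedule: run the "job" once an hour.
        Thread.Sleep(TimeSpan.FromHours(1));
    }
}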

Handling Exceptions

Notice that the code above does a very poor job of handling exceptions, which is the first hurdle you need to clear when writing your own version of SQL Server Agent in a Windows Azure worker role. With SQL Server Agent you are given a logical choice: if the step succeeds, go to the next step; if the step fails, quit the job and report the failure. You can simulate these options in your worker role by using try/catch. In the third blog post in this series I will address some of the error handling issues.

One option you do not have with SQL Server Agent is for the agent itself to stop running on a failure. The same is true of a worker role: on an unhandled exception, the worker role is recycled and restarted; you can find out more about this here. We are going to use the worker role’s recycle-and-restart functionality in our error handling, covered in Part 3 of the blog series.

Summary

In part 2 of this blog series, I will cover how to execute the stored procedure at a specific time during the day. Do you have questions, concerns, comments? Post them below and we will try to address them.


Alex James (@adjames) proposes Enhancing OData Support for querying derived types in this 7/20/2010 post to the OData blog and asks readers to help him choose between two filter syntaxes to handle derived types:

Probably the most frequently requested addition to OData is better support for derived types in queries.

What does that mean exactly?

Background:

Today if you have a model like this:

EmployeeModel

Assuming you have a People set - as is usually but not always the case - and you try to expose it via a DataService here: http://server/service.svc/People it will fail.

This is because Data Services recognizes that the OData protocol doesn't provide rich enough support for querying derived types, so it fails early.

What is missing from OData?

(For the sake of brevity we'll use this ~ as short hand for http://server/service.svc from now on).

Given this setup all Employee objects should live in the same feed as - their base type - Person.

So if the Person feed is here: ~/People

Employee 41 will be found here: ~/People(41)

Great. So far so good.

What though if you want to retrieve the manager for Employee 41?

This: ~/People(41)/Manager won't work, because as far as the OData protocol is concerned the ~/People(41) uri segment is a Person, which as you can see from the model doesn't have a Manager property or navigation.

Not only that you can't reference derived properties or navigations in any query options either:

So something like this: ~/People/?$filter=Building eq '18' OR Firstname eq 'Bill'

Which is trying to find Employees in building 18 or People called Bill, won't work, again because OData sees the Segment type as Person and Building isn't defined on Person.

These two limitations really restrict the sorts of models you can expose in OData.

Requirements:

To remove these limitations we need two things:

  1. A way to change the url segment type, by filtering a base type feed to a derived type feed, so that derived navigations and properties can be accessed.
  2. A way to access derived properties and navigations in filters, projections, orderbys and expand query options without previously changing the url segment type.
Option 1 - !Employee:

We could add an ability to filter like this:

~/People!Employee

Which would filter the feed to contain only instances of employee or a subtype of employee and change the type of the segment from Person to Employee.

At which point you could either use a key selector directly:

~/People!Employee(41)

or you could access derived properties and navigations:

~/People!Employee(41)/Manager

you could also shape & filter the results using query options that now have access to the properties of Employee:

~/People!Employee/?$filter=Manager\Firstname eq 'Bill' OR Firstname eq 'Bill'

We'd also give you the ability to get to the derived type in a query option like this:

~/People/?$filter=!Employee\Manager\Firstname eq 'Bill' OR Firstname eq 'Bill'

You might be wondering why leaving the segment type unchanged and changing the type in the query option like this is useful.

If you compare the intent of these two urls, it should become clear:

(1): ~/People!Employee/?$filter=Manager\Firstname eq 'Bill' OR Firstname eq 'Bill'

(2): ~/People/?$filter=!Employee\Manager\Firstname eq 'Bill' OR Firstname eq 'Bill'

Uri (1) returns employees that are either called Bill or have a manager called Bill.
Uri (2) returns employees who have a manager called Bill or *people* who are called Bill.

Option 2 - OfType:

Another option is to add a system function called OfType(..), that takes the target type name as a parameter. And allow functions to be appended to URI segments and used in query options.

So something like this:

~/People/OfType('Employee')

would return just employees, and allow access to Employee properties etc.

Whereas this:

~/People/OfType('Employee')/?$filter=Manager\Firstname eq 'Bill'

would return just employees managed by Bill.

If you wanted to access by key, you would simple add a key in the normal way:

~/People/OfType('Employee')(41)

And now you can continue appending OData uri fragments in the normal way:

~/People/OfType('Employee')(41)/Manager
~/People/OfType('Employee')(41)/Building/$value

to access Employee 41's manager and raw ($value) Building respectively.

You could also use the function like this:

~/People/?$filter=OfType('Employee')/Manager/Firstname eq 'Bill'

This final example does feel a little weird, at least if you dig into the semantics, because it is actually a null propagating cast operator!

Possible Ambiguity?

If we allowed you to apply OfType(..) to uris that represent a single entry as well as uris that represent feeds, like this:

~/People(41)/OfType('Employee')

there is a chance for ambiguity.

Take this example:

~/Users('1')/Friends('2')

this is a valid OData uri today, that says get Person 1's friends and look for Person 2 in that collection.

All that needs to change to make this collide with OfType(..), is for the navigation property name to change from Friends to OfType.

So if we ever supported OfType on a segment identifying a single entry we would need a way of disambiguating.

Clearly OfType(..) is a 'system' function so the obvious choice would be to further embrace the $ prefix.

The rule could be: if there is any ambiguity, pick the user navigation property. Then if you really need to use the system OfType(..) function you need to prepend the $ prefix, like this:

~/Users('1')/$OfType('SuperUser')

Pros and Cons?

Option 1 has these strengths:

  • It is very simple.
  • It avoids quoting type names.
  • There are never any ugly double bracket combos i.e. ('type')(key).
  • It avoids possible ambiguity with navigation properties.

Option 2 on the other hand:

  • Is nicely aligned with the existing IsOf('TypeName') function which can be used today to narrow a set of results like this: ~/People/?$filter=IsOf('Employee')
  • Unlike the !Employee option, this approach is more widely applicable. You can simply think of OfType as a (system) function that we allow you to append to correctly typed URIs or inside query options, sort of like CLR extension methods. This approach would lay a foundation for a future world where custom functions can be appended to URIs too.
  • If OfType is simply a function, then creating a way to change the type of a single entry, rather than a feed, like this: ~/People(41)/OfType('Employee')
    is as simple as adding a new overload.

Perhaps we can get the benefits of both by thinking of !Employee as simply a short hand notation for OfType('Employee')?

Summary:

As you can see we've come up with two options to add better support for Derived types in OData queries.

Which do you prefer? And why? Perhaps you have another suggestion?

Whatever you think, please tell us all about it on the OData Mailing List.


Wayne Walter Berry (@WayneBerry) explains Programmatically Changing the Firewall Settings in this 7/29/2010 post to the SQL Azure Team blog:

SQL Azure has two types of access control: SQL Authentication logins and passwords and a server side firewall that restricts access by IP address. This blog post will discuss how to programmatically modify the firewall settings. For information about programmatically creating logins, see this blog post.

Firewall

SQL Azure maintains a firewall for the SQL Azure servers, preventing anyone from connecting to your server if you do not give their client IP address access. The most common way to do this is via the SQL Azure portal. If you have used SQL Azure, you have used the portal to create firewall rules; you can’t connect to SQL Azure until you have granted your client IP access. Below is a screen shot from the portal:

image

Programmatically Changing the Firewall Rules

SQL Azure allows you to change the firewall rules by executing Transact-SQL on the master database with a connection to SQL Azure. You can add a firewall rule with the system extended stored procedure: sp_set_firewall_rule. Here is an example of the Transact-SQL of creating a firewall rule for a single IP address:

exec sp_set_firewall_rule N'Wayne Berry','206.63.251.3','206.63.251.3'

Here is an example of enabling the firewall for Microsoft services and Windows Azure:

exec sp_set_firewall_rule N'MicrosoftServices','0.0.0.0','0.0.0.0'

Note that every firewall rule must have a unique name, and rule names are not case sensitive. You can get a list of firewall rules by querying the view sys.firewall_rules. Here is an example of the Transact-SQL:

select * from sys.firewall_rules

The output of this command executed on my SQL Azure server (see in the portal screen shot above) viewed in the SQL Server Management Studio looks like this:

image

You can also delete a firewall rule using the sp_delete_firewall_rule system extended stored procedure:

exec sp_delete_firewall_rule N'Wayne Berry'

You can read more about these firewall extended stored procedures here.

Security Considerations

Only the server-level principal login, while connected to the master database, can configure firewall settings for your SQL Azure server. This is the same login as the administrator login found in the SQL Azure portal.

Another thing to note is that you must have at least one firewall rule before you can connect to SQL Azure; you need that connection to execute sp_set_firewall_rule and the other extended stored procedures.

From the Command Line

You can execute Transact-SQL against SQL Azure from the Windows command line using sqlcmd.exe. More about how to use sqlcmd.exe can be found in this blog post. Since you can execute Transact-SQL against SQL Azure from the command line, you can execute the firewall command above against SQL Azure from the command line. Using the command line you can script your firewall rules, along with your database creation scripts (see this blog post), schema creation, and schema synchronization.

From Windows Azure

Windows Azure can execute Transact-SQL against SQL Azure using ADO.NET, which means that you can programmatically add firewall rules to SQL Azure from Windows Azure. One of the nice things about doing this via Windows Azure is that Windows Azure “knows” the caller’s client-side IP address.

One technique is to create a simple interface that allows anyone who calls a web page on your Windows Azure web role to gain access to your SQL Azure account by adding their IP address to the SQL Azure firewall rules. You would want to make sure that the caller was authenticated by Windows Azure, using an authentication method of your choice. This technique would allow PowerPivot or WinForms users to grant themselves direct access to SQL Azure by making a request to a web page. More about connecting to SQL Azure via PowerPivot here, and WinForms here.

Here is a little example code to get you started:

String clientIPAddress = Request.UserHostAddress;

using (SqlConnection sqlConnection = 
    new SqlConnection(ConfigurationManager.ConnectionStrings["SqlAzureMaster"].ConnectionString))
{
    sqlConnection.Open();

    using (SqlCommand sqlCommand =
        new SqlCommand("sp_set_firewall_rule", sqlConnection))
    {
        sqlCommand.CommandType = System.Data.CommandType.StoredProcedure;

        sqlCommand.Parameters.Add("@name", SqlDbType.NVarChar).Value 
            = clientIPAddress;
        sqlCommand.Parameters.Add("@start_ip_address", SqlDbType.VarChar).Value 
            = clientIPAddress;
        sqlCommand.Parameters.Add("@end_ip_address", SqlDbType.VarChar).Value 
            = clientIPAddress;

        sqlCommand.ExecuteNonQuery();
    }
}

See “Frederico Boerr shows you how to Create a WCF Data Service (OData) to share an Azure Table in this 7/29/2010 post to the Southworks blogs” in the Azure Blob, Drive, Table and Queue Services section above.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Net SQL Services Team announced Windows Azure AppFabric LABS August Release, Breaking Changes Announcement and Scheduled Maintenance on 7/29/2010:

The Windows Azure AppFabric LABS August release is expected on August 5, 2010 (Thursday). Users will have NO access to the AppFabric LABS portal and services during the scheduled maintenance downtime.

When:

  • START: August 5, 2010, 10am PST
  • END:  August 5, 2010, 6pm PST

Impact Alert:

  • LABS AppFabric Service Bus, Access Control and portal will be unavailable during this period.

Note that this is the LABS site that’s being updated.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Luke Chung’s Microsoft Access and Cloud Computing with SQL Azure Databases post 7/23/2010 contains links to two white papers about Windows Azure, SQL Azure and Access 2010:

We at FMS are very excited about cloud computing and started developing solutions using Microsoft Azure, including SQL Azure, well before it was released to the general public. I feel cloud computing represents the next big platform change in the software industry and the most significant transformation since the introduction of the Internet in the mid-1990's. It will transform the internal hardware, application hosting, and data storage business the same way electric companies eliminated most organizations' need to generate their own electricity.

While there's been lots of discussion of Azure with .NET and SQL Server, we also see lots of opportunities with Azure and the Microsoft Access/Excel/Office community. In fact, we're busily working on a way to integrate Access data and files with the cloud. Meanwhile, we'd like to share some tips and techniques for linking Access databases directly to tables in SQL Azure. This opens up huge new opportunities to create and deploy Access databases using a more robust, cheaper, and highly scalable platform that is enterprise quality.

I look forward to your feedback on two new papers:

Luke Chung is President of FMS, Inc., which is the world’s leading developer of products for Microsoft Access developers, and a top vendor of products for the SQL Server, Visual Studio .NET, and Visual Basic communities. FMS will introduce its EzUpData application, which runs on Windows Azure, shortly; read more at the EzUpData Web Site.

See also my Linking Microsoft Access 2010 Tables to a SQL Azure Database of 7/28/2010.


Bill Crow (@billcrow) reported the World's largest digital photo created with #ICE & hosted on #Azure using #Silverlight #DeepZoom.


English and Magyar (Hungarian) languages are currently supported.

How did the idea of making a record breaking panorama occur?

Panoramic photography is the new buzz, getting worldwide recognition as we speak. Audiences love its immersiveness—the way it transports you into places never visited or inaccessible by any means other than your computer screen. Gigapixel photographs are capable of displaying environments and artwork in unprecedented detail. The gigapixel photographs of two paintings (by Csontváry Kosztka Tivadar and Rippl-Rónai József; courtesy of the Hungarian National Gallery) recently displayed on our website resulted in 10,000 unique page views within a week—such is the magnetism of extreme detail. We are all about promoting this technology locally as well as internationally. We wanted to make a statement—a professionally challenging one with substantial entertainment value to boot. Hence the idea of making a high-definition spherical panorama.

About us

360world is a youthful and energetic team of computer engineers, photographers, environmental designers, graphic artists, film editors and cameramen. We provide panoramic solutions and develop 3D applications to display high-definition web content. Our expertise goes way beyond software: we also offer innovative touchscreen—or even hands-free—control schemes to enrich interactive media presentations.

Field-tested during the Museum Night of July 2010 by an audience over 1,000 strong, our virtual tour system proved to be a snap for fifth graders and pensioners alike. Our eldest visitor, a 73 year-old gentleman, needed no tutoring at all: using his common sense, he controlled the content without much ado. Gigapixel imagery is a relatively new addition to our portfolio. Having taken it up little over a year ago, we certainly developed a flair for it: it is all about careful planning and flawless execution. The actual taking of photographs requires a precision way beyond human capacity—that is where robotics come in.

Robotization

Some tasks involving gigapixel photography simply demand automation. Porsche interiors, stylish as they are, leave little to no space for humans—in cases like this, it is much easier and faster to operate the panoramic heads remotely. When taking gigapixel images, we invariably use robotic panorama heads which move with high (tenth-of-a-degree) precision. Designed by a friend of ours, engineer and photographer Emrich Traugott, our Bluetooth-capable robot is a feat of digital technology. The idea of using it for a world record attempt occurred during initial testing—and lingered until we started to make actual arrangements in late 2009.

Location arrangement

The observation tower of János-hegy, the highest vantage point of Budapest with a 360 degree panorama, was an obvious location. It also allowed us to take on previous world records in both the ’highest definition image’ and the ’largest spherical panorama’ category. When contacted, the Council of District XII informed us on the upcoming anniversary of the tower. We agreed to cooperate in commemorating the September 2010 event by setting up new world records—give them our best shot if you please.

The World Record starts with a high definition camera and a telephoto lens with an extremely narrow field of view. Sony’s A900 25MP camera seemed an obvious choice, and Sony was pleasantly eager to provide us one. Fitted with a 400mm Minolta lens and a 1.4X teleconverter, it provided a 2.4 degree horizontal FOV per frame.

Our estimate was that a 60 gigapixel image (with the necessary overlaps) would require 5040 individual photographs — 24 rows with 210 frames in each, taken with a 360 degree horizontal and a 60 degree vertical FOV. At 4 seconds per frame (during which the robotic head re-adjusts for the next shot), it was quite clear that the task would take a minimum of 6 hours to complete.

Six hours is an awfully long time in an outdoor environment. Weather changes. So do the angle and intensity of illumination. We had to come up with something to do it faster. As luck would have it, our friend Emrich had already made a ’beefed up’ panoramic head with stronger servos to handle two top-mounted cameras. We recalculated the net weight of equipment, matched it against the nominal capacity of the servos—it was risky, but Emrich kept his cool and convinced us to give it a go.

Then we needed another camera and another lens — both of those were provided by the generous Sony. So far so good: 360world was all set to go for the World Record.

World Record panoramas are invariably made in clear weather, preferably on days following the passing of a cold front with low humidity, maximum visibility and no anticyclonic clouds to play tricks with sunshine. With all the necessary equipment ready, all we needed was a perfect day like that—a surprisingly infrequent commodity. Spring was a mess of near-constant rain and flooding — ranks upon ranks of imperfect days well into May, when Emrich arrived from Germany to provide the necessary assistance in positioning the twin-camera setup.

We started early at the Tower on Day One. Weather was perfect: low humidity, no clouds in sight. Accordingly elevated, we finished setting up the hardware by 9PM and started the long exposure process without further ado.

We progressed by rows instead of columns—the latter would result in an obvious illumination difference between the first and the last columns, making it all but impossible to join them into a seamless, 360 degree panorama.

Two hours into the initial exposure run, misfortune struck in the form of a thunderstorm. Soon it was raining cats and dogs, canceling any hope of progress on Day One.

And so it went on the next day… and the day after that. 20,000 test pictures later, on Day Four, we finally got it right—that is, secured 40 Gigabytes of raw images from which to compile the largest panorama ever made: the view of Budapest from the Observation Tower of János Hill.

The Easy Way of Making Your Own Panorama!

The unique Panoramic mode of the Sony NEX-5/NEX-3 allows you to take breathtaking panoramas with an extremely wide field of view. It won’t get any easier than this: select it, press the shutter, and sweep the camera vertically or horizontally.


Bill Crow manages the Seadragon technology at Microsoft Live Labs.

xRez Studio claims it:

[H]as had a longstanding working relationship with both Microsoft Live Labs and Microsoft Research, having supplied them with content and feedback for their innovative and cutting-edge gigapixel viewing technologies such as HD View, Seadragon, and Silverlight Deep Zoom. In addition, Microsoft played a central role acting as the primary sponsor of our 2008 Yosemite Extreme Panoramic Imaging Project, which they have showcased on the Surface multi-touch unit, as well featuring in their Seadragon web showcase.

The xRez page contains a Seadragon example of a [presumably Swedish] tower clock taken from a single still with the 40 megapixel Hasselblad H3D.


Mark Rendle (@markrendle) reported posting an “#Azure-compatible (but not 100% working) fork of #RavenDB” to GitHub. RavenDB is “a linq enabled document database for .NET.” Here’s his Readme document for the fork:

IMPORTANT: This fork requires the Reactive Framework to be installed.

The Samples solution in this fork includes an Azure Cloud project with a Worker Role which you can deploy to a Hosted Service.

To get this working, I've had to implement my own TCP-based HTTP stack for the Raven server. There may be issues with this which have not turned up during my basic testing. If you have any problems, please use the Github Issues feature to report them.

Also, there is no authentication on this implementation. The Role is intended to be used internally as storage for a Web application running in the same deployment, although it is currently configured as externally facing for testing purposes.

Finally, remember to change the Storage connection strings on the CloudRaven project to point to your own storage.

Raven DB

This release contains the following:

  • /Server - The files required to run Raven in server / service mode. Execute /Server/RavenDB.exe /install to register and start the Raven service
  • /Web - The files required to run Raven under IIS. Create an IIS site in the /Web directory to start the Raven site.
  • /Client-3.5 - The files required to run the Raven client under .NET 3.5
  • /Client - The files required to run the Raven client under .NET 4.0 *** This is the recommended client to use ***
  • /ClientEmbedded - The files required to run the Raven client, in server or embedded mode. Reference the RavenClient.dll and create a DocumentStore, passing a URL or a directory.
  • /Bundles - Bundles that extend Raven in various ways
  • /Samples - The sample applications for Raven. Under each sample application folder there is a "Start Raven.cmd" file which starts Raven with all the data and indexes required to run the sample successfully.
  • /Raven.Smuggler.exe - The Import/Export utility for Raven

You can start the Raven service by executing /server/ravendb.exe; you can then visit http://localhost:8080 to look at the UI.

For any questions, please visit: http://groups.google.com/group/ravendb/

Raven's homepage: http://ravendb.net.

RavenDB has characteristics reminiscent of Windows Azure’s original SQL Server Data Services (SSDS), which used SQL Server in the cloud to host an entity-attribute-value (EAV) database.

You can get additional information about the RavenDB server from this live Windows Azure project: http://ravenworker.cloudapp.net:8080/raven/index.html.

Ayende Rahien has posted extensively about RavenDB. His Why Raven DB? post of 5/13/2010 explains his interest.


The Windows Azure Team posted a Real World Windows Azure: Interview with Arzeda Corp. case study to its blog on 7/29/2010:

As part of the Real World Windows Azure series, we talked to Alexandre Zanghellini, Cofounder at Arzeda, and Yih-En Andrew Ban, Project Leader and Scientist at Arzeda, about using the Windows Azure platform to power the company's compute-heavy enzyme-design process. Here's what they had to say:

MSDN: Tell us more about Arzeda and the services you offer.
Zanghellini:
Arzeda is a biotechnology firm that engineers custom-made enzymes for almost any chemical reaction. By designing these bioprocesses, our vision is to replace petroleum-based products and processes to contribute to a more sustainable environment.

MSDN: What was the biggest challenge Arzeda faced prior to implementing Windows Azure?
Zanghellini:
The methodology that we use to engineer enzymes requires extensive computing power. We have a small in-house Linux cluster, but it wasn't powerful enough for some of our computational calculations. The amount of processing power that we need for some calculations is on par with what you would find with a 250-core cluster. However, we only need that massive scalability a few days a month, so building an on-premises infrastructure to accommodate our processing needs was unrealistic at a cost in excess of U.S.$250,000.

MSDN: Can you describe the solution you built with Windows Azure and how it helped address your need for cost-effective scalability?
Ban:
We use Windows Azure compute and storage services. Scientists prepare jobs for computation, packaging them in XML messages and sending them to Queue storage in Windows Azure. Scientists then submit a job request and start compute instances-anywhere from tens to hundreds of instances-on Windows Azure, which picks up the jobs from the queues and processes them. We use Blob storage for data input and output files and Table storage to store job-state information. When the jobs are complete, scientists download the computations to on-premises computers that we use for data analysis. Using a sweeper process, we can automatically shut down the instances of Windows Azure as soon as all the jobs are finished processing.

MSDN: What makes your solution unique?
Zanghellini: Our engineering methodology integrates the power of chemical catalysis, the high selectivity of biological macromolecules, and the flexibility of computational design. With Windows Azure, we have the power we need to complete these compute-heavy processes and can scale up with just a few clicks. …

The interview continues with typical “Real World” Q&A.


Kevin Griffin offered a Word of Warning For People Using 3rd Party Reporting Tools in Azure on 7/12/2010 about Telerik, XtraReports Azure, and GrapeCity ActiveReports (missed when posted): 

DISCLAIMER: I’m simply sharing my experiences here.  I am not laying blame on any of the tool developers or the Azure team.  Frankly, I’m not sure who should be held accountable in these cases.

I’ve spent the last day working in Azure, trying to implement a 3rd party reporting tool into my projects.  For my first choice, I decided to use Telerik Reporting.  They’re a big part of the developer community, and I am willing to give them my client’s money because they do have good products.

Implementing a report should be pretty straightforward, and in most cases it is.  Walk through the designer, connect to a datasource, lay out the report, and you’re good to go!  Viewing the report should be equally straightforward: create a form (in this case a WebForm), drop in a ReportViewer control, and in the code-behind do some voodoo to wire the report up to the report viewer.

But really, our use cases call for returning the reports only in PDF or Excel format.  So the above, while it doesn’t work, isn’t necessary.  Telerik has a component for processing a report and dumping it directly to PDF or Excel (in addition to several other formats).  That code took five minutes to wire up.
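For readers who haven’t seen it, the wiring Kevin describes looks roughly like the sketch below. This assumes the Telerik.Reporting.Processing.ReportProcessor/RenderingResult API of the 2010-era releases; the MyReport class and the response handling are placeholders, not Kevin’s code:

```csharp
// Rough sketch: render a Telerik report directly to PDF bytes and return them from a WebForm.
// API names assumed from the Telerik Reporting releases of this period; verify against your version.
using Telerik.Reporting.Processing;

public partial class ReportDownload : System.Web.UI.Page
{
    protected void Page_Load(object sender, System.EventArgs e)
    {
        var report = new MyReport();    // hypothetical report class produced by the designer
        var processor = new ReportProcessor();

        // This rendering step is where the GDI+ dependency Kevin hit comes into play.
        RenderingResult result = processor.RenderReport("PDF", report, null);

        Response.Clear();
        Response.ContentType = result.MimeType;
        Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
        Response.BinaryWrite(result.DocumentBytes);
        Response.End();
    }
}
```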

I go into the Azure Development Fabric, and try to download my report.  The report generator ticks away for a few seconds and then BLAM, Out Of Memory Exception.

Excuse me!  The one problem I don’t have is a lack of memory, so I scoured the forums to find out what the deal is.  It turns out that Telerik relies on GDI+ to render reports in various formats.  Guess what Azure has poor support for?  You bet: GDI+.

That’s OK; Azure is only two years old.  You can’t expect the products to have a quick turnaround.  I’ll have to go look at other solutions.  How about DevExpress XtraReports?  A quick Google search for “XtraReports Azure” reveals that XtraReports runs into the same issue: its reliance on GDI+ makes it unusable in an Azure environment.

How about GrapeCity’s ActiveReports?  Supported, but with limitations.

  • Rtf, Excel and Text filters are not supported on Windows Azure.
  • When using the Pdf export filter, digital signatures are not supported.
  • The Pdf export filter cannot access System fonts, so you must create a Custom Font Factory to embed any necessary fonts for non-ASCII characters.

Holy snap!  So I can’t export to Excel (which is a requirement), and it looks as if, to create a report, I have to embed the fonts in it.  I don’t know whether that’s easy or hard; I’d need to test the product to see.  But the lack of Excel exporting knocks the product down.

Now, I have no problem with the fact that these providers don’t currently work in Azure.  If you have a code base built around a single technology, it’s not quick or cheap to turn around and make it use another.  However, I wasn’t able to find anything on the product sites saying that they do not support Azure.  Instead, I had to download the products and waste several hours trying to make them work in a way that was physically not possible.  Advertising unsupported features is as important as advertising supported features.

And what’s up, Azure team, with the lack of support for GDI+?  Isn’t the operating system supposed to be on par with Windows Server 2008?  I’m sure I could run my site in IIS on a WS2008 machine without issue.

So here’s my call:  If you’re working in Azure and using 3rd party tools for reporting, please tell me what you’re using, or whether there is something I’m completely glossing over.  If I find something that works and meets my simple requirements, then I’ll give them a shout out on my blog.

The thread about this problem in the Windows Azure forum is dated 11/25/2009. Eight months later, the problem appears to be unresolved. Be sure to read the responses from DevExpress, GrapeCity and Telerik in the comments.

<Return to section navigation list> 

Windows Azure Infrastructure

Mary Jo Foley’s Look who's on the Microsoft Azure team now post to her All About Microsoft ZDNet blog of 7/20/2010 reveals Mark Russinovich has moved from the Windows Core Operating System Division (COSD) to the Windows Azure Team:

Celebrated Windows expert Mark Russinovich has joined the Windows Azure team.

Microsoft Technical Fellow Russinovich is considered one of the foremost experts — inside or outside Microsoft — on the inner workings of Windows. He was the cofounder (in 1996) of Winternals Software  — a company which Microsoft acquired in 2006. He also cofounded Sysinternals.com, for which he’s written dozens of Windows utilities, including Filemon, Regmon, Process Explorer, Rootkit Revealer and more. Prior to that, Russinovich was a researcher at IBM’s TJ Watson Research Center, specializing in operating-system support for Web-server acceleration.

For the past three years, Russinovich has been working on the Windows Core Operating Systems Division (COSD) team. In that position, he spent a lot of time working on architectural best practices, which included the creation of an architectural “constitution” which outlines the layers in Windows (along with the functionality in each layer) and guidance on application-programming interface (API) design.

I found out about Russinovich’s new role via a tweet from Microsoft developer evangelist Matthijs Hoekstra. Hoekstra was one of many Microsoft employees attending the company’s internal TechReady conference in Seattle this week who managed to cram into Russinovich’s presentation there. Another tweet, from Softie Srinam Krishnan, revealed that Russinovich has joined the Azure team within the past month.

I’ve asked Microsoft for more about what Russinovich will be doing on Windows Azure, but have yet to hear back.

A number of operating-system heavyweights are already working on Windows Azure. The team — back when Azure was known by its “Red Dog” codename — originally comprised a number of long-time Windows experts from COSD and other parts of the company. The father of NT, Dave Cutler, was one of the Windows Azure founding members and is still working on virtualization and other components of the core Windows Azure operating system.

At the end of 2009, Microsoft folded the Azure team into the Server and Tools business and combined the teams. Going forward, Microsoft is “leading with the cloud,” meaning it is going to try to get customers and partners to adopt its cloud offerings rather than on-premises alternatives. But will cloud economics add up in both Microsoft’s and its customers’ favor?

Maybe Mark can solve the problems that Kevin Griffin encountered with GDI+ and third-party reporting apps running in Windows Azure. (See the “Kevin Griffin offered a Word of Warning For People Using 3rd Party Reporting Tools in Azure on 7/12/2010 about Telerik, XtraReports Azure, and GrapeCity ActiveReports” article in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section above.)


Joe McKendrick continues the SOA vs. Cloud Services controversy (see earlier related articles by David Linthicum and Joe in Windows Azure and Cloud Computing Posts for 7/28/2010+) with his Does it matter if it's SOA or cloud services? Not to the business post of 7/30/2010 to ZDNet’s Service Oriented blog:

What is the relationship between service-oriented architecture and cloud computing? Close siblings, or estranged relatives?  As discussed here at this blogsite, some analysts, such as David Mitchell Smith, say SOA isn’t necessarily cloud, while others such as Dave Linthicum say a service-oriented foundation is essential for cloud.

SOA-inspired best practices lead to profitable cloud consumption

About the same time, ZapThink’s Jason Bloomberg penned this insightful analysis of what is needed to make cloud function profitably, also pointing to the need for architecture. He observes that while IT executives are on board with the promise of cloud computing, they “find themselves lost in the complexities of deployment in a cloud environment.” This results in over-reliance on vendors, and you know where that leads.

Architecture is the missing ingredient needed to help cloud computing deliver to the business — but to date, “cloud architecture” has meant building or buying clouds, not leveraging them. Jason urges moving forward with “Enterprise Cloud Architecture” or “Cloud Consumption Architecture” to achieve business value.

For example, there are three main options for dealing with an existing application that is being considered for a cloud migration: 1) leave the existing app where it is, but extend it by adding new capabilities in the Cloud;  2) migrate the existing app to the cloud, eventually retiring the existing app altogether; or 3) expose the existing app as loosely coupled services, and compose them with cloud-based services that are either already available to you or that you’ve built or purchased for this purpose.

Which is the SOA-ish option?  The third one definitely has an SOA-ish ring to it. But the fact of the matter is all three options are part of an SOA approach, Jason points out. SOA practices lay the groundwork for dealing with such issues:

“If your organization has already gone through the rigors of SOA, establishing a governance framework and a business Services abstraction layer, then cloud consumption naturally follows from the best practices you have already been following. Is what you’re doing still SOA? It doesn’t matter.”

Jason doesn’t see a meaningful difference between SOA and cloud implementations, and ultimately, the business doesn’t either. Both are about delivering services in a uniform, robust, secure fashion where the enterprise needs them.  SOA-enabled services can come from outside or within; so can cloud services.


Ashlee Vance reported the response time of Windows Azure, as measured by Compuware’s CloudSleuth tool, was faster than Amazon Web Services and Google in her 7/29/2010 How Fast Can a Cloud Run? story for the NY Times’ Bits section:

The cloud has been put on notice: It’s being watched.

A new graphical tool from Compuware, CloudSleuth, has arrived, in beta form, to measure and display the speeds at which cloud computing services run. The CloudSleuth tool already tracks Amazon.com, Microsoft, Google and a couple of other cloud providers, and it demonstrates that response times do indeed vary from operator to operator. It displays these results on a map that gives people an idea of the worldwide performance of different data centers at a quick glance.

Imad Mouline, the chief technology officer at Compuware’s Internet monitoring division, Gomez, said he designed the cloud tracking system to provide customers with a way of judging the various services on the market.

“I thought, ‘Why don’t we put these guys to the test?’” he said.

Gomez has set up identical servers in data centers around the globe that request the same files from the various cloud computing systems. It then measures the time it takes to complete each request.

In addition, Gomez performs similar operations from people’s PCs.

All told, it can build a picture of how the cloud services perform on both high-speed corporate networks and slower home connections.
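As a rough illustration of the kind of probe described above (not Gomez’s actual methodology or code), a self-contained .NET timing check might look like this; the target URL is a placeholder:

```csharp
// Minimal sketch: time how long it takes to fetch a test file from a cloud-hosted endpoint.
// Real probes run from many locations on a schedule and aggregate the results.
using System;
using System.Diagnostics;
using System.Net;

class ResponseTimeProbe
{
    static void Main()
    {
        const string testUrl = "http://example.cloudapp.net/testfile.bin"; // hypothetical target

        var timer = Stopwatch.StartNew();
        var request = (HttpWebRequest)WebRequest.Create(testUrl);
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var stream = response.GetResponseStream())
        {
            var buffer = new byte[8192];
            while (stream.Read(buffer, 0, buffer.Length) > 0) { } // drain the full response body
        }
        timer.Stop();

        Console.WriteLine("Response completed in {0:0.00} seconds", timer.Elapsed.TotalSeconds);
    }
}
```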

Looking worldwide over the past 30 days, Microsoft’s relatively new Azure platform has performed the best, responding to requests in 6.46 seconds. OpSource ranks second at 6.61 seconds, followed by Google at 6.77 seconds and Amazon.com at 6.82 seconds. (The Amazon figure is for its East Coast data center. Its West Coast, European and Asian data centers had slower response times.)

The rankings were the same for the availability of the cloud systems, leaving Microsoft as the fastest and most reliable cloud provider.

Mr. Mouline stressed that the inner workings of cloud systems are complex because companies have all types of different applications running in the same data center. The interaction of all this software can pose performance challenges and makes the cloud services less responsive than the controlled, fine-tuned data centers that the likes of Amazon.com and Google run for their internal operations.

“Just because you run your application on Amazon’s EC2 or Google’s AppEngine, do not think you will get the same performance as Google.com or Amazon.com,” Mr. Mouline said. “That’s the first idea we need to make sure people get away from.

“The cloud is opaque,” Mr. Mouline added. “If you’re running an application in the cloud, you really don’t know what is going on at the infrastructure level.”

The physical location of data centers plays a huge role in response times. For example, someone in Washington D.C. requesting a file from Amazon.com’s East Coast data center should receive a response in less than a second, while the same task would take about 11 seconds to complete if the request came from California.

Cloud providers have tended to offer service level agreements that focused on the overall uptime of their services. Amazon.com says it will be up 99.95 percent of the time, for instance.

But Mr. Mouline urged customers to begin thinking about making response time demands as well.

“We have already seen huge improvements in performance,” Mr. Mouline said. “We know the cloud guys are watching this and trying to improve their scores.”

A quick test with CloudSleuth’s default main page set to all locations for the past 30 days on 7/29/2010 also designates Windows Azure as having the fastest response time:

image

Pingdom usually reports Windows Azure’s response time for the Southwest US (San Antonio) data center to be about 0.65 seconds (My Azure Single-Instance Uptime Report: OakLeaf Table Test Harness for June 2010 (99.65%) reported 0.660 seconds.)


Bruce Guptill and Lee Geischecker coauthored a Mobility and Disruption: From the Edges of the Cloud to the Center of the Enterprise Research Alert on 7/29/2010 for Saugatuck Technologies (site registration required):

What is Happening? While Saugatuck is well-known for its survey research and analysis, our work really centers on interaction and dialogue with those who buy, sell, implement and use IT, especially in user enterprise settings. And discussions with IT users, buyers, and providers as we ramp up to our second annual Cloud IT/Infrastructure-as-a-Service (IaaS) survey indicate that mobility is increasingly top-of-mind when it comes to enterprise IT.

Based on our current research, we think it reasonable to expect that by year-end 2012, mobility, directly and indirectly, will drive more than 25 percent of all enterprise IT investment, from devices to services to development to management (including security).

Therefore, mobility in that same time period has to rate as one of the IT world's more disruptive influences, driving and accelerating massive change in what is considered as enterprise IT, and the influence and roles of today's IT Master Brands.

Bruce and Lee continue with the usual “Why is it Happening?” and “Market Impact” topics.


James Downey “shares his first impressions of [the] Microsoft Windows Azure Platform” in a sponsored Windows Azure, the First Cloud OS post of 7/28/2010 to the Microsoft Dynamics CRM blog:

It’s funny that so much literature in IT trade magazines depicts Microsoft as the aging champion of on-premise software, struggling in vain to hold back the tide of younger, more nimble cloud players of the likes of Google and Salesforce.com. Perhaps a good story needs drama, and Microsoft fit the bill for a stock antagonist, ludicrously bound to the old order, tilting at windmills. But the plot now takes a surprise twist. Microsoft has leapt into the cloud, reinventing itself as a leader of innovation.

Windows Azure is the first OS for the cloud. All prior operating systems were built for servers. While virtualization broke the one-to-one connection binding servers to machines, it did not eliminate the concept of the server. Servers became virtual, could share a single physical machine, and could even hop from machine to machine. But even in the virtual world, a server is still a server. And these servers must be maintained, upgraded, and monitored in much the same way as they had been before they were virtualized. When an organization deploys applications in the virtual world, it deploys those applications to servers, on-premise or in the cloud.

In the new world of Azure, the individual server disappears into a cloud. Developers deploy applications to the cloud, not to servers. The developer does not even see a server, only the cloud. Azure as the cloud OS distributes, runs, and manages those applications. In the end, yes, they do run on virtual servers that live someplace on physical servers, but this is invisible to the developer, and certainly to the end user.

It’s true that other vendors offer platform-as-a-service. Salesforce.com promotes its Force.com as a platform. Indeed, Microsoft has a cloud offering of Dynamics CRM, which like Force.com is a platform for the rapid development of line-of-business applications. But neither Force.com nor Dynamics CRM is a generalized platform. While very flexible, they are intended for line-of-business applications, not just anything. You would not build a graphic design, word processing, or gaming application on either of these platforms.

Azure fits the category of platform-as-a-service, but it is the first generalized platform service in the cloud, the first platform in the cloud that supports anything your mind imagines.

Microsoft has just announced that it will support private clouds with Azure deployed through appliance hardware. With technology such as the Service Bus built into Azure, the capability exists for powerful integrations between private and public clouds. Going forward, Azure will give organizations a whole range of choices for public, private, and hybrid clouds. Enterprises will need these choices to deal with their ever-changing business requirements.

Azure, a dramatic innovation in cloud computing, makes Microsoft a leader in cloud technology.

Cheers,

James Downey

<Return to section navigation list> 

Windows Azure Platform Appliance 

CNBC’s Jon Fortt leads with “Microsoft’s cloud in a box” in a 00:01:35 MSFT Addresses Analysts segment of 7/29/2010 about Microsoft’s Financial Analysts Meeting (FAM) 2010:

Insight from Microsoft's analyst meeting, with CNBC's Jon Fortt.

<Return to section navigation list> 

Cloud Security and Governance

Adam Ely wrote High Noon: The Browser as Attack Vector as an August 2010 (!)  “Strategy: Browser Security” Cloud Computing Brief for InformationWeek::Analytics:

For years we've worried about insecure Web interfaces, yet initiatives to harden them have largely fallen flat. Well, time's up. Critical apps are delivered in a SaaS model, built to run with nothing more than Firefox or IE. All of our data, from Facebook updates to company secrets, is accessed and controlled via the browser. Now what are you going to do?

The browser has been a hot topic in security discussions for 10-plus years, since Web applications became popular during the first dot-com boom. Back then, concerns mainly focused on the applications themselves. But beginning with the Web 2.0 boom and accelerating with today’s popular SaaS model, new attack techniques are exploiting browser flaws and leading to the compromise of user applications, systems, networks and ultimately data.

The rise of these threats accompanied the use of new techniques, such as Ajax, and the extension and increased use of existing technologies like JavaScript and Flash. Attention to Web applications in turn drew into question the security of popular browsers. Attackers began to examine flaws and build exploits to trick users into visiting fake or compromised sites and opening malicious files.

The reality is, content and applications are now consumed from outside the company firewall and from remote systems. In our recent InformationWeek Analytics cloud surveys, SaaS providers like Salesforce.com and NetSuite are by far the top choice of respondents.

There’s no going backwards. Attackers have myriad ways to compromise users and systems and attempt to penetrate the internal network. IT organizations are left in the difficult position of trying to protect their organizations while being denied control over the application interface. Here’s what you need to know about browser security. (S1530810)

Download

<Return to section navigation list> 

Cloud Computing Events

Larry Dignan asks Will cloud computing economics add up for Microsoft? in this 7/29/2010 post to ZDNet’s Between the Lines blog about Microsoft’s Financial Analysts Meeting (FAM) 2010:

Microsoft at its financial analyst meeting made the case for being a cloud computing leader and argued that its economic prospects will improve as information technology shifts to an on-demand model.

The big question: Do you buy the argument that cloud computing will accelerate Microsoft’s earnings and revenue growth? Let’s face it: Every software vendor is talking cloud computing, but the economic theory is that it’s better to cannibalize your own business than allow some rival to do it. Few established software vendors have argued that the cloud will gussy up their financial metrics.

Microsoft CFO Peter Klein, however, made the case that cloud computing is going to be big business, improve the company’s gross margins, cut costs and bring in more customers. And, as noted by Altimeter partner Ray Wang, Klein did a decent job putting cloud computing in financial terms. It’s too early to put any hard data around Microsoft’s nascent Azure efforts, but Klein said that the software giant could grab a bigger piece of the enterprise IT pie. After all, the historical dividing lines between vendors—software, hardware, networking and storage—are melting away.

So how’s Microsoft going to harness the cloud for financial gain? Klein had a few big themes to ponder:

The cloud opens up a broader IT spending pie.
In cloud computing, software, hardware and services blend together. You don’t buy servers. You buy capacity. Klein said:

Obviously, the total global IT spend, which we’re going to address more of now in the cloud world, is much bigger.  And so obviously at a high level is an incredible opportunity for the cloud to have us address a much larger piece of the IT market.

Microsoft can sell to more users. Klein talked a lot about how cloud computing can allow Microsoft to be more of a player for midmarket companies that don’t have the resources to implement a SharePoint infrastructure. Klein said:

[The cloud] allows us to sell to more users. This could either be new customers that we don’t serve today or new users within existing customers today. So, this is greenfield incremental opportunity selling to users that today don’t have Microsoft products.

He continued:

And with business users, I want to start with the mid-market segment.  Historically, while the mid-market has been a very attractive market for us, it also poses some challenges.  Number one, it’s highly fragmented, and number two, there’s a little bit of a gap between the needs and desires in terms of IT capabilities that mid-market companies have and the resources and expertise they have to deliver those capabilities.  In some sense, you could say a mid-market company has the needs of enterprise IT but the capability of a much smaller business. The cloud really solves that problem by bringing cost-effective, easy-to-deploy technology solutions to mid-market customer in ways they can’t today.

Delivering software via the cloud will improve gross margins by lowering costs for Microsoft and customers. The automation of deployments, configuration and ongoing maintenance will cut costs for all parties. There are also hardware savings. Klein said:

Interestingly, the one thing I want to point out on this, the 10 percent savings is net of what an organization would pay us for the Azure service.  So, it’s the net of what they would pay more, in addition to the licenses and for the service, and on top of that there’s 10 percent savings.  And again, given the one instance that you’re running, given the automation, you reduce support costs.  It’s actually a good thing for not only the customer but for Microsoft.  And you add all that up and that’s a 30 percent savings, again, net of what Microsoft would make for the existing software license, plus gross margin on the service.

The cloud makes it easier for Microsoft to garner “competitive migrations”—essentially poaching customers from rivals. “All that revenue is brand new and incremental to our bottom line,” said Klein.

Microsoft can make a “pretty smooth” transition to the cloud, lower cost of goods sold (COGS), and improve gross margins from the 80 percent mark today, said Klein. For comparison’s sake, Salesforce.com’s gross margins are roughly 82 percent, and Oracle is projected to have margins in the 75 percent range (down from 80 percent due to its Sun hardware business). The pace of this transition remains to be seen, but Microsoft is confident. He said:

We spent a lot of time thinking, both from a sales and revenue perspective, and a sort of build-out perspective, how do we smooth out the impact of that.  And I think we’ve got — we’ve done some great work on the sales side for our customers in particular to provide a smooth transition from sort of our existing business models to the cloud model.

And from a margin perspective in terms of the COGS, I think, number one, we’re able to sort of leverage all the investments we make across the company in datacenter technology, which provides good scale for us and ability to smooth.  And we’ve done a lot of work to plan out how we think about datacenter build-out, how we get really smooth about how we just get it right in front of sort of the demand.  So, we’ve done a much better job in forecasting demand and then making out build plans on datacenter.  So, both from the revenue side, I think we’ve got some good licensing plans to make that smoother, and then from the COGS side — unless there’s sort of just dramatic spikes, which of course I think will cause COGS to go up.  I think we can smooth that out pretty well.

There were a few missing elements. Klein wasn’t specific about estimates for margin improvement or percentages for future growth, but it’s too early for that level of detail. Add it up, and Microsoft’s case for cloud economics was a lot more fleshed out than the details provided by other software companies.

More from Microsoft’s analyst meeting:

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

SearchCloudComputing.com reported Quest buys Surgient on 7/30/2010:

In another sign that private cloud computing is emerging as a profitable and more commonly used technology, Texas-based virtualization management vendor Quest Software has purchased Surgient, Inc., a provider of private cloud automation software. The deal is expected to close in the third quarter, giving Quest a fresh portfolio of tools for managing secure cloud infrastructures.

The announcement was paired with IDC research that shows how Quest, already serving 20,000 virtualization customers, could benefit from branching more into internal clouds. Demand for server automation tools will likely top $1 billion by 2014, according to IDC research. An IDC survey also shows that 73 percent of all businesses are evaluating, planning or have already implemented private cloud strategies.

<Return to section navigation list>