I’m in the process of finalizing my Azure Blob Test Harness project, which is a sequel to the earlier Azure Table Services Test Harness project. The project is an extension of Jim Nakashima’s early Windows Azure Walkthrough: Simple Blob Storage Sample of 10/29/2008.
Updated 1/4/2009: Firefox 3 eliminates the unrendered GridView cells and overlapping table rows but doesn’t display the FileUpload control correctly; error logs now include RequestId values.
In addition to enabling upload of local files to Azure blobs, the project demonstrates transfers of bitmap (.bmp) and .zip files of several sizes from my Windows Live SkyDrive account; you select the bitmap or .zip file from a dropdown list. The upscaled project adds display of content-type, content-encoding and content-length header data plus blob metadata values, such as UploadTime and CreateTime, and uses the RoleManager.WriteToLog() method to log successful uploads and exceptions.
Clicking a Link opens the .bmp in a separate window or displays a File Download dialog for .zip files.
Tests with the ASP.NET page running under IIS 7 showed that creating and opening blobs from large (up to 37 MB) .bmp and .zip files in my Windows Vista Ultimate SP1 file system or Windows Live SkyDrive usually worked as expected with both Development Storage and Azure (Cloud) Storage for blobs.
Unexpected Behavior in the Azure (Cloud) Fabric
I experienced a number of strange problems and periodic failures that don’t occur when the application is running in the Development Fabric. The following screen capture illustrates a missing Link cell value after adding a blob; in some cases, multiple Link cell values and/or Delete command links in the GridView control fail to render after adding a blob:
A malformed blob causes the error message at the bottom of the page. The blob, which should have been an instance of a 4096 x 3072 px bitmap in BMP format, has no properties other than a GUID and no content. It was created in the midst of a series of successful uploads of .bmp files ranging in size from 2.4 to 38 MB.
Here’s Chris Hay’s Blob Browser displaying the defective blob’s properties (or lack thereof):
Notice the 1/1/1601 Last Modified date. I have no idea why the application would create an empty blob in Azure storage. It didn’t create any empty blobs during extensive testing in the Development Fabric with Development Storage.
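The 1/1/1601 date is a telltale: it’s the epoch of the Windows FILETIME format, so a Last Modified of that date almost always means the underlying 64-bit timestamp is zero, i.e., the property was never set. A minimal Python sketch of the conversion (illustrative only, not the harness’s .NET code):

```python
from datetime import datetime, timedelta

FILETIME_EPOCH = datetime(1601, 1, 1)  # Windows FILETIME epoch

def filetime_to_datetime(filetime: int) -> datetime:
    """Convert a FILETIME value (100-ns ticks since 1601-01-01) to a datetime."""
    return FILETIME_EPOCH + timedelta(microseconds=filetime // 10)

# A zero (unset) timestamp renders as the 1/1/1601 date shown in Blob Browser.
print(filetime_to_datetime(0))                   # 1601-01-01 00:00:00
# Sanity check: the Unix epoch expressed in FILETIME ticks.
print(filetime_to_datetime(116444736000000000))  # 1970-01-01 00:00:00
```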
Note: The Upload and Create times identified as for the Development Fabric are in mixed mode (using Azure Blob Storage rather than Development Storage). In this case, upload time measures downloading a .bmp file from my SkyDrive account to the local machine, and create time measures uploading the .bmp file to Azure blob storage over a DSL line rated at 2582 kbps down and 440 kbps up.
The Upload times identified as for the Azure (Cloud) Fabric measure uploading the .bmp files from SkyDrive over Azure’s faster internet connection; Create times measure moving the byte array from memory to Azure blob storage.
Files with Upload times of less than 10 ms are for local files uploaded to the project running in the Development Fabric with Development Storage.
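The relationship between the file sizes and line speeds quoted above can be sanity-checked with simple arithmetic; a hedged Python sketch (the 38 MB size and kbps figures are the measurements reported above, and the estimate ignores protocol overhead):

```python
def transfer_seconds(size_mb: float, kbps: float) -> float:
    """Estimate transfer time: megabytes -> kilobits, divided by line speed."""
    return size_mb * 8 * 1000 / kbps  # ignores protocol overhead and latency

# The 38 MB bitmap over the DSL line measured above:
print(round(transfer_seconds(38, 440)))   # ~691 s to upload at 440 kbps
print(round(transfer_seconds(38, 2582)))  # ~118 s to download at 2582 kbps
```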
Cloud storage also distorts the location of table rows by overlapping them, as shown here after adding a new bitmap blob:
The inexplicable (503) Server Unavailable error occurs periodically (perhaps 20% of the time) when uploading the 4096 x 3072 px (38 MB) bitmap with the project running in the Azure Fabric but not in the Development Fabric.
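One workaround for an intermittent 503 of this kind is to wrap the upload call in a retry with exponential backoff. A minimal Python sketch of the pattern (the upload callable and exception type are placeholders, not the harness’s .NET code):

```python
import time

class ServerUnavailableError(Exception):
    """Placeholder for an HTTP (503) Server Unavailable response."""

def upload_with_retry(upload, max_attempts=4, base_delay=1.0):
    """Retry a flaky upload callable with exponential backoff between attempts."""
    for attempt in range(max_attempts):
        try:
            return upload()
        except ServerUnavailableError:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the 503 to the caller
            time.sleep(base_delay * 2 ** attempt)  # 1 s, 2 s, 4 s, ...
```

With a roughly 20% failure rate per attempt, four independent attempts would drop the expected failure rate to about 0.2^4 ≈ 0.16%.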
Update 1/4/2009: The unrendered GridView cells and overlapping table rows appear to be a side effect of IE 8.0 Beta 2; these problems don’t occur when using Firefox 3 as the browser. However, Firefox 3 doesn’t respect the Width property of the FileUpload control and doesn’t recognize the traditional Size property workaround.
Logging has its own set of problems, as reported in my How to Resolve Mystery Error Message in Service Tuning Operation? thread of 1/1/2009 in the Windows Azure forum.
The first is this spurious error message:
e55106:Two or more parameters specified are not compatible with each other.
Reference #: 90dfaa5c-9789-4c17-a16f-a5d8d8cac1aa / 08
Reference Time:1/1/2009 10:50:58 AM
The message appears if you save a configuration file without making a change to it; this is almost certainly a bug.
Second, there’s no means to save a custom log file name assigned in the Service Tuning Event Logs section’s Container Name text box. If you don’t use the default name <AppID>-staging or <AppID>-production, you must retype the custom name each time you copy the log.
Note: Saving the configuration file doesn’t save the custom event log name because event logs aren’t normally part of the configuration file. However, I added a LogContainerName setting to the project’s configuration files.
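For reference, a custom setting like LogContainerName lives in the ServiceConfiguration file as an ordinary Setting element (the container-name value shown here is hypothetical):

```xml
<ConfigurationSettings>
  <!-- Custom container name for copied event logs; value is an example -->
  <Setting name="LogContainerName" value="blobtest-logs" />
</ConfigurationSettings>
```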
Third, saving the event log to a blob is a one-shot, manual process that creates a new blob with log content since the last copy and then truncates the log. I don’t believe cloud admins will go for the manual copy method but would expect continuous logging behavior similar to that of Windows logs.
Here’s a screen capture of the log blob for a session with a single upload sandwiched between two malformed blob errors:
The semicolon-separated values format for blob metadata is optimized for export by LINQ to XML and import to Excel for analysis. However, I might later change the SCSV format to well-formed XML with elements named after the metadata values.
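The SCSV-to-XML conversion is mechanical; a hedged Python sketch of the idea (UploadTime and CreateTime are metadata names from the project, but the sample values and element layout are illustrative, not the harness’s actual log schema):

```python
import xml.etree.ElementTree as ET

def scsv_to_xml(line: str, root_tag: str = "Blob") -> ET.Element:
    """Turn 'Name=Value;Name=Value' metadata into elements named after the keys."""
    root = ET.Element(root_tag)
    for pair in filter(None, line.split(";")):
        name, _, value = pair.partition("=")
        ET.SubElement(root, name.strip()).text = value.strip()
    return root

elem = scsv_to_xml("UploadTime=00:00:09.46;CreateTime=00:00:02.73")
print(ET.tostring(elem, encoding="unicode"))
# <Blob><UploadTime>00:00:09.46</UploadTime><CreateTime>00:00:02.73</CreateTime></Blob>
```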
Update 1/4/2009: At the request of the Azure development team, I’ve added the value of the x-ms-request-id response header to error messages. The added data aids their issue tracking efforts.
I’ll update this post as solutions or workarounds, if any, are found with the current CTP for the problems reported here.