Sunday, December 16, 2012

Windows Azure and Cloud Computing Posts for 12/13/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

‡    Updated 12/16/2012 12:30 PM PST with new articles marked ‡.
••  Updated 12/15/2012 4:00 PM PST with new articles marked ••.
•    Updated 12/14/2012 1:00 PM PST with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, HDInsight and Media Services

‡ MingFei Yan posted Announcing Windows Azure Media Player Framework Preview for iOS on 12/13/2012:

I am pleased to announce that the Windows Azure Media Services team is releasing a Windows Azure Media Player Framework Preview for iOS. This new framework is released as open source through GitHub and licensed under Apache 2.0.

Screenshot: Azure Media Player Framework for iOS

What’s this framework for?

This framework enables developers to build native video applications on the iOS platform that consume secure HLS (HTTP Live Streaming) content from Windows Azure Media Services. In particular, the framework makes it easy for developers to integrate client-side advertisements. In the future we will also support various advertisement standards, such as VAST 3.0 and VMAP 1.0.

Architecture of this player framework

Architecture – Azure Media Player Framework Preview for iOS

1. JavaScript library:
  • Sequencer.js: used to maintain an internal playlist
  • Scheduler.js: used to schedule advertisement and main content
  • Adresolver.js: used to parse different advertisement standards; nothing is available yet, but we will be adding VAST and VMAP support very soon
  • Note: developers are not advised to change the existing JavaScript files, but are encouraged to write additional JavaScript files as plugins.
2. iOS sequencer wrapper:
  • The iOS sequencer wrapper bubbles up most of the functions implemented in the JavaScript library into the Objective-C layer. We use a hidden UIWebView for calling all JavaScript functions.
  • Note: Developers are not advised to modify these files and APIs should remain the same even with additional plugins.
3. iOS AVPlayerFramework:
  • As shown in the architecture diagram above, the AVPlayerFramework includes the iOS sequencer wrapper and is based on the native iOS AVPlayer library. It provides additional functionality on top of AVPlayer, such as advertisement insertion. The APIs in AVPlayerFramework are the ones developers should use directly when building their video applications.
4. Sample player:
  • A sample player also ships with our player framework as a developer reference. Here you can learn how to use the various functions we provide, such as pre-roll, mid-roll, and post-roll advertisement insertion. All the video sources come from Windows Azure Media Services. (Here is a blog post on how to produce HLS content with Windows Azure Media Services.)
Who should use this SDK?

Developers who want to consume HLS (HTTP Live Streaming) content from Windows Azure Media Services, and developers who want to enable client-side advertisement insertion in their iOS video applications.

System requirement

iOS 5 and above is supported.

Feature List

Advertisement insertion

  • Pre-roll, Mid-roll and Post-roll support
  • Ad Pod
  • Advertisement clips can be either MP4 or HTTP Live Streaming
  • Error Notification
  • Play-once or Sticky Ad


  • Seamless transition from Advertisement to Main Content and between advertisements
Question: How often will you refresh this framework?

Answer: We will refresh the code frequently once we have important features finished. We don’t have a rigid deadline, such as a specific day of the month, but our release cycle will be roughly monthly. Therefore, check the Readme often to see new features.

Question: If my video application is based on MPMoviePlayer, could I still use this framework?

Answer: Yes. Our iOS AVPlayerFramework is based on AVPlayer, and you could develop another player framework on top of the iOS sequencer wrapper that utilizes MPMoviePlayer. However, since MPMoviePlayer is a singleton, you won’t benefit from the seamless-switching performance we provide.

Question: I see you have JavaScript library there, could I build a web-based application on top of your JavaScript library?

Answer: Yes, you could, though an HTML5 video framework for browsers is something we are still planning; you could experiment with it if you can’t wait. A friendly note: if your main content or advertisement is in HLS format, your video can only be played back in Safari on a Mac or iOS device. For other browsers, you will need progressive-download content (such as H.264 or WebM).

Question: If I have feature requests, how should I contact you?

Answer: Please go to our feature-request UserVoice page.

Question: If I have questions or bugs, how should I contact you?


  1. Read our documentation carefully. We will be publishing it on both GitHub and MSDN.
  2. If you can’t find a clue, please post your question on Stack Overflow with the Azure tag; we will monitor the forum closely.
  3. If we aren’t able to solve the problem you posted, we will put it into the backlog and fix it according to how badly it impacts users. Meanwhile, if you helped us fix the problem, you could contribute code to our GitHub repository; here is the guidance:

The Media Services Team updated their Player Framework: an open source component of the Microsoft Media Platform on CodePlex with a link to Player Framework for Windows 8 on 11/8/2012 (missed when published):

Player Framework for Windows 8 - v1.0 now available (11/8/12)

Project Description
An open source, robust video player framework for Windows 8, HTML5, Silverlight, Windows Phone and other application platforms.

Video players can be incredibly difficult to build. When developers require support for adaptive streaming, closed captioning, advertising standards integration, DVR-style playback control, and other advanced features, the complexity of their video player grows exponentially. Over the last few years at Microsoft we have helped build some of the most advanced video applications on the Web including the browser-based experience for the Beijing and Vancouver Olympics with NBC Sports, the last three seasons of NBC's Sunday Night Football (including the 2012 Super Bowl), the CBS March Madness college basketball tournament, Wimbledon, and a number of other major, live events with millions of simultaneous users. As a part of those projects we have developed one of the most powerful video players on the planet. And we've decided to share it with everyone, for free.

The Microsoft Media Platform's Player Framework is an open source video player that we continue to develop and evolve. It is available for Silverlight, HTML5, Windows Phone, Xbox, and now, in our latest release, Windows 8 applications. And it's fully open source!

The Player Framework supports a long list of advanced features including:

In the future we plan to expand our platform support to include other popular mobile platforms, even the ones that aren't a part of the Microsoft family. ;)

Hopefully, support for Android phones and tablets will be forthcoming soon.

Dhananjay Kumar (@debug_mode) described how to correct an Error in installing Windows Azure Storage Client Library 2.0 in a 12/14/2012 post:

To start working with Windows Azure Storage Client Library 2.0, I created a project and tried adding a reference to the library using NuGet. When I tried installing, I got the following error message.

“Microsoft.Data.OData could not be resolved”


This error message says that you need to install Microsoft.Data.OData version 5.0.2 or greater before you can install Windows Azure Storage Client Library 2.0. To do so, search for Microsoft.Data.OData in the NuGet Package Manager dialog and install it.


After installing this, you should be able to install Windows Azure Storage Client Library version 2.0 via NuGet. I hope you find this post useful. Thanks for reading.

• Carl Nolan (@carl_nolan) described Hive and XML File Processing in a 12/13/2012 post:

When I put together the “Generics based Framework for .Net Hadoop MapReduce Job Submission” code, one of the goals was to support XML file processing. This was achieved by the creation of a modified Mahout document reader in which one can specify the XML node to be presented for processing. But what if one wants to process XML documents in Hive? Fortunately, Hive also supports document readers, enabling the same readers to be used as the basis of table definitions.

The process of enabling XML processing in Hive is relatively straightforward:

  • Create the table definition, specifying that the input format is XML, thus exposing the necessary XML elements as columns
  • Parse the XML column data using XPath expressions in SELECT statements
  • or define a view on the XML table that parses out the relevant XML elements, returning them as native types

The syntax for the xpath processing in Hive can be found at:

So, on to a simple example.

In the “SampleScripts” folder of the MapReduce Framework download there is a script that extracts Store information, in XML format, from the sample AdventureWorks database. A sample of the output is as follows:

<Root>
  <Store>
    <BusinessEntityID>292</BusinessEntityID>
    <Name>Next-Door Bike Store</Name>
    <SalesPersonID>279</SalesPersonID>
    <Demographics>
      <StoreSurvey xmlns="">
        <AnnualSales>800000</AnnualSales>
        <AnnualRevenue>80000</AnnualRevenue>
        <BankName>United Security</BankName>
        <BusinessType>BM</BusinessType>
        <YearOpened>1996</YearOpened>
        <Specialty>Mountain</Specialty>
        <SquareFeet>21000</SquareFeet>
        <Brands>2</Brands>
        <Internet>ISDN</Internet>
        <NumberEmployees>13</NumberEmployees>
      </StoreSurvey>
    </Demographics>
    <Modified>2008-10-13T11:15:07.497</Modified>
  </Store>
  ...
  <Store>
    <BusinessEntityID>374</BusinessEntityID>
    <Name>Immense Manufacturing Company</Name>
    <SalesPersonID>277</SalesPersonID>
    <Demographics>
      <StoreSurvey xmlns="">
        <AnnualSales>3000000</AnnualSales>
        <AnnualRevenue>300000</AnnualRevenue>
        <BankName>Guardian Bank</BankName>
        <BusinessType>OS</BusinessType>
        <YearOpened>1998</YearOpened>
        <Specialty>Touring</Specialty>
        <SquareFeet>76000</SquareFeet>
        <Brands>4+</Brands>
        <Internet>DSL</Internet>
        <NumberEmployees>73</NumberEmployees>
      </StoreSurvey>
    </Demographics>
    <Modified>2008-10-13T11:15:07.497</Modified>
  </Store>
</Root>

The record reader to be used is “XmlElementStreamingInputFormat”. This document reader uses a configuration element to define the XML node to be located, and then outputs, for each row, a single column consisting of the complete node contents.

Using this record reader a table can be defined consisting of a single XML column:

add JARS file:///C:/Users/Carl/Projects/MSDN.Hadoop.MapReduce/Release/msdn.hadoop.readers.jar;
set xmlinput.element=Store;

CREATE EXTERNAL TABLE StoresXml (storexml string)
STORED AS INPUTFORMAT 'org.apache.mahout.classifier.bayes.XmlElementStreamingInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION '/user/Carl/stores/demographics';

The INPUTFORMAT option allows for the definition of the required document reader. The OUTPUTFORMAT specified is the default for Hive. In this example I have defined an EXTERNAL table over the directory containing the extracted XML; independently copied to the Hadoop cluster.

The ADD JARS statement ensures the document reader is available for job execution. The SET statement configures the job such that the document reader knows what XML node to extract.

Once this table is defined, you can use it like any other Hive table. If you select from this table, you get each Store element as a row. XPath processing then allows you to extract the XML elements as native types.
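For readers who want to prototype the XPath extraction outside Hive, here is a minimal Python sketch using the standard library's ElementTree against a trimmed copy of the Store XML above. The paths mirror the Hive view's XPath expressions; this is purely an illustration, not part of the Hive workflow:

```python
import xml.etree.ElementTree as ET

# Trimmed version of one <Store> node from the sample extract above.
store_xml = """<Store>
  <BusinessEntityID>292</BusinessEntityID>
  <Name>Next-Door Bike Store</Name>
  <Demographics>
    <StoreSurvey>
      <AnnualSales>800000</AnnualSales>
      <BankName>United Security</BankName>
      <BusinessType>BM</BusinessType>
    </StoreSurvey>
  </Demographics>
</Store>"""

store = ET.fromstring(store_xml)

# Equivalent to xpath_int(storexml, '/Store/BusinessEntityID') in the Hive view.
business_entity_id = int(store.findtext("BusinessEntityID"))

# Equivalent to the xpath_string/xpath_double calls; no local-name() gymnastics
# are needed here because StoreSurvey carries no effective namespace.
survey = store.find("Demographics/StoreSurvey")
bank_name = survey.findtext("BankName")
annual_sales = float(survey.findtext("AnnualSales"))

print(business_entity_id, bank_name, annual_sales)
```

Note that ElementTree only supports a subset of XPath, which is why the `local-name()` predicates from the Hive view are dropped here.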

In addition to SELECT operations one also has the option of creating a VIEW that parses the XML and presents the data using native types:

CREATE VIEW Stores(BusinessEntityID, BusinessType, BankName, AnnualSales, AnnualRevenue) AS
SELECT
  xpath_int (storexml, '/Store/BusinessEntityID'),
  xpath_string (storexml, '/Store/Demographics/*[local-name()=\'StoreSurvey\']/*[local-name()=\'BusinessType\']'),
  xpath_string (storexml, '/Store/Demographics/*[local-name()=\'StoreSurvey\']/*[local-name()=\'BankName\']'),
  xpath_double (storexml, '/Store/Demographics/*[local-name()=\'StoreSurvey\']/*[local-name()=\'AnnualSales\']'),
  xpath_double (storexml, '/Store/Demographics/*[local-name()=\'StoreSurvey\']/*[local-name()=\'AnnualRevenue\']')
FROM StoresXml;

Using the Stores definition one can now process the XML data files through the normal Hive operations. Continuing with the same samples in the download one can now easily generate a revenue summary across the banks:

SELECT BusinessType, BankName, CAST(SUM(AnnualSales) AS INT) AS TotalSales FROM Stores
GROUP BY BusinessType, BankName;

As expected, under the covers the necessary MapReduce jobs are executed to aggregate the data, returning:

BM Guardian Bank 43200000
BM International Bank 43200000
BM International Security 43200000
BM Primary Bank & Reserve 42800000
BM Primary International 42200000
BM Reserve Security 42200000
BM United Security 42200000
BS Guardian Bank 88400000
BS International Bank 87400000
BS International Security 88400000
BS Primary Bank & Reserve 88400000
BS Primary International 87400000
BS Reserve Security 87400000
BS United Security 87400000
OS Guardian Bank 192000000
OS International Bank 186000000
OS International Security 186000000
OS Primary Bank & Reserve 186000000
OS Primary International 186000000
OS Reserve Security 186000000
OS United Security 186000000
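Conceptually, the GROUP BY that Hive compiles into MapReduce jobs is just a keyed sum. A rough Python sketch of the same aggregation (the rows here are invented stand-ins, not the real AdventureWorks data):

```python
from collections import defaultdict

# Hypothetical (BusinessType, BankName, AnnualSales) rows standing in for the Stores view.
rows = [
    ("BM", "United Security", 800000),
    ("OS", "Guardian Bank", 3000000),
    ("BM", "United Security", 500000),
]

# Equivalent of: SELECT BusinessType, BankName, SUM(AnnualSales) ... GROUP BY BusinessType, BankName
totals = defaultdict(int)
for business_type, bank_name, annual_sales in rows:
    totals[(business_type, bank_name)] += annual_sales

for (business_type, bank_name), total in sorted(totals.items()):
    print(business_type, bank_name, total)
```

In Hive, of course, the shuffle phase does the keying across the cluster; the sketch only shows the aggregation semantics.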

If you download the aforementioned code, the sample for the Hive execution can be found in the “SampleScripts” folder. The “DocumentInputReaders” folder also contains the XML document reader classes along with a usable JAR file.

Brian Benz (@bbenz) continued his series with a Using LucidWorks on Windows Azure (Part 2 of a multi-part MS Open Tech series) post of 12/13/2012 to the Interoperability @ Microsoft blog:

imageLucidWorks Search on Windows Azure delivers a high-performance search service based on Apache Lucene/Solr open source indexing and search technology. This service enables quick and easy provisioning of Lucene/Solr search functionality on Windows Azure without any need to manage and operate Lucene/Solr servers, and it supports pre-built connectors for various types of enterprise data, structured data, unstructured data and web sites.

In June, we shared an overview of the LucidWorks Search service for Windows Azure, and in our first post in this series we provided more detail on features and benefits. For this post, we’ll start with the main feature of LucidWorks – quickly creating a LucidWorks instance by selecting LucidWorks from the Azure Marketplace and adding it to an existing Azure Instance. It takes a few clicks and a few minutes.

Signing up

LucidWorks Search is listed under applications in the Windows Azure Marketplace. To set up a new instance of LucidWorks on Windows Azure, just click on the Learn More button:


That takes you to the LucidWorks Account Signup Page. From here, you select a plan, based on the type of storage being used and the number of documents to index. There are currently four plans available: Micro, which has no monthly fee, Small and Medium, which have pre-set fees, and Large, which is negotiated directly with LucidWorks based on several parameters. All of the account levels have fees for overages, and the option to move to the next tier is always available via the account page.

The plans are differentiated on document limits in indexes, the number of queries that can be performed per month, the frequency with which indexes are updated, and index targets. Index targets are the types of content that can be indexed: for a Micro plan, only websites can be indexed; for Small and larger plans, files, RDBMS, and XML content can also be indexed. For Large instances, ODBC data drivers can be used to make content available to indexes.


Once the plan is selected, enter your information, including Billing Information:


Once the payment is processed (or, in the case of Micro, no payment is required), a new instance is generated, you’re redirected to an account page, and you’re invited to start building collections!



In the next part of the series we’ll cover setting up collections in more detail; for now, let’s cover the account settings and configuration. Here’s the main screen for collections:


The first thing you see is the Access URL options. You can access your collections via Solr or REST API, and here’s where you get the predefined URL for either. When you drill down into the collections you see a status screen first:


This shows you the index size and stats about modification, queries per second, and updates per second, displayable by the last hour, day or week. This screen is also where you can see the most popular queries.

Data Sources

If you are managing external data sources, here’s where you configure them, via the Manage Data Sources button.


From here you can select a new data source from the drop-down. The list in this drop-down is current as of this writing and may change over time – check here for more information on currently supported data sources.


The Indexing Settings are the next thing to manage in your LucidWorks on Azure account. Here’s the Indexing UI:


Indexing Settings

De-duplication manages how duplicate documents are handled. (As we discussed in our first post, any individual item that is indexed and/or searched is called a document.) Off ignores duplicates, Tag identifies duplicates with a unique tag, and Overwrite replaces duplicate documents with new documents when they are indexed. Remember that de-duplication only applies to the indexes of data, not the data itself – only the indexed reference to the document is de-duplicated – so duplicates will still exist in the source data even if data in the indexes has been de-duplicated. Duplicates are determined based on key fields that you set in the fields editing UI.
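To make the three modes concrete, here is a rough Python sketch of how Off, Tag, and Overwrite could treat an incoming document whose key field matches one already in the index. The data model (a plain dict keyed on a `url` field) is invented for illustration; LucidWorks' internals will differ:

```python
def index_document(index, doc, mode, key_field="url"):
    """Add doc to an index (a dict keyed on the de-duplication key field)."""
    key = doc[key_field]
    if key not in index:
        index[key] = doc
    elif mode == "off":
        # De-duplication disabled: the duplicate is indexed as-is.
        index["{0}#{1}".format(key, len(index))] = doc
    elif mode == "tag":
        # Identify the duplicate with a unique tag, keeping the original.
        index[key + "#dup"] = dict(doc, duplicate_of=key)
    elif mode == "overwrite":
        # Replace the previously indexed document with the new one.
        index[key] = doc

index = {}
index_document(index, {"url": "http://a", "title": "v1"}, mode="overwrite")
index_document(index, {"url": "http://a", "title": "v2"}, mode="overwrite")
print(index["http://a"]["title"])
```

As the text notes, in every mode only the index entries are affected; the duplicate documents still exist in the source data.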

Default Field Type is used for setting the type of data for fields whose type LucidWorks cannot determine using its built-in algorithms.

Auto-commit and Auto-soft commit settings determine when the index will be updated. Max time is how long to wait before committing, and max docs is how many documents are collected before a commit. Soft commits are used for real time searching, while regular commits manage the disk-stored indexes.

Activities manage the configuration of indexes, suggested autocomplete entries, and user result click logging.

Full documentation of indexing settings can be found here.

Field Settings

Field Settings allow configuration of each field in the index. Fields displayed below are automatically defined by data extraction and have been indexed:


Field types defined by LucidWorks have been optimized for most types of content, and should not generally be changed. The other settings need to be configured once the index has run and defined your fields:


For example, a URL field would be a good candidate for de-duplication, and you may want to index it for autocomplete as well. You can also indicate on Field Settings whether you want to display URLs in search results. Here is full documentation of Field Settings.

Other Indexing Settings

Dynamic Fields are almost the same as fields, but are created or modified when the index is created. For example, adding a value before or after a field value, or adding one or more fields together to form a single value.

Field Types is where you add custom field types in addition to the default field types created by your LucidWorks installation.

Schedules is where you add and view schedules for indexing.


Querying Settings is where you can edit the configuration for how queries are conducted:


The Default Sort sets results to be sorted by relevance, date, or random.

There are four query parsers available out of the box for LucidWorks: a custom LucidWorks parser, as well as standard Lucene, dismax, and extended dismax. More information on the details of each parser is available here.

Unsupervised feedback resubmits the query using the top 5 results of the initial query to improve results.

This is also where you configure the rest of your more familiar query behavior, like where stop words will be used, auto complete, and other settings, the full details of which are here.

Next up: Creating custom Web site Search using LucidWorks.

In the next post in the series, we’ll demonstrate setting up a custom Web site that integrates LucidWorks Search, and the configuration settings we use to optimize search for that site. After that, in future posts we’ll discuss tips and tricks for working with specific types of data in LucidWorks.

Andy Cross (@AndyBareWeb) continued his HDInsight series with Bright HDInsight Part 5: Better status reporting in HDInsight on 12/13/2012:

Following on from earlier blogs in this series, in this blog I will show another way of getting telemetry out of HDInsight. We have previously tackled counters and logging to the correct context; now I will show an additional type of logging that is useful while a job is in process: reporting a job’s status as it runs.

This status is a transient record of the state of a Map/Reduce job – it is not persisted beyond the execution of a job and is volatile. Its volatility is evident in that any subsequent set of this status will overwrite the previous value. Consequently, any inspection of this value only reflects the state of the system at the moment it was inspected, and it may already have changed by the time the user gets access to the value it holds.

This does not make the status irrelevant; in fact, it gives a glimpse of the system state in process and allows the viewer to immediately determine a recent state. This is useful in its own right.

Controlling the Status

The status of a Map/Reduce job is reported at the Map, Combine and Reduce stages of execution, and the individual Task can report its status.

This status control is not available in the Hadoop C# SDK, but it is simple to achieve using Console.Error – see Part 2 of this series for why this stream is used!

To remain in sync with the general approach taken by the C# Hadoop SDK, we will wrap the call in an extension method on ContextBase, the common base class for Mapper, Combiner and Reducer. With this new method – which we will call SetStatus(string status) – we write a well-known format to the output that Hadoop picks up for its Streaming API, and Hadoop then uses this as the status of the job.

The format of the string is:
reporter:status:Hello, World!

This will set the executing task’s status to “Hello, World!”.
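The same stderr convention works from any Hadoop streaming task, not just the C# SDK. For instance, a hypothetical Python streaming mapper could report status like this (the helper name and return value are mine, added so the format is easy to inspect):

```python
import sys

def set_status(status_message):
    """Report task status to Hadoop streaming via the well-known stderr format."""
    line = "reporter:status:{0}".format(status_message)
    sys.stderr.write(line + "\n")
    return line  # returned only so the emitted format is easy to inspect

set_status("Hello, World!")
```

Hadoop scans the task's stderr for lines of this shape and uses the text after the second colon as the task status.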

Implementing the Extension method

The extension method is very simple:

public static class ContextBaseExtension
{
    public static void SetStatus(this ContextBase context, string statusMessage)
    {
        Console.Error.WriteLine("reporter:status:{0}", statusMessage);
    }
}

Once we have this method referenced and included via a using directive in our map/reduce/combine class, we can use the context parameter of our task (in my case, a Map) and set a status!

I chose a trivial and unrealistic sleep statement so that I can see the message easily in the portal:

context.SetStatus("Sleeping During Map!");



Viewing the output

To view this, you can enter the Hadoop portal and check the state of the running task. In this screenshot you can see that the status is set to “Sleeping During Map!” – and you can see why I used Thread.Sleep: to give me a few moments to log onto the RDP server and take the screenshot. I chose 3 minutes so I could also have time to get a coffee …. ;-)

Production use

A better example, and a very useful use case for this, is to report whether a Task is running successfully or whether it is encountering errors that can be recovered from. To achieve this, we will catch any recoverable exceptions, report that we are doing so, and then continue execution.

Notice that this is a very similar map to our previous example with Counters, but that we are also setting a state of our Task that is in process.

using System;
using Microsoft.Hadoop.MapReduce;

namespace Elastacloud.Hadoop.SampleDataMapReduceJob
{
    public class SampleMapper : MapperBase
    {
        public override void Map(string inputLine, MapperContext context)
        {
            try
            {
                context.IncrementCounter("Line Processed");
                var segments = inputLine.Split("\t".ToCharArray(), StringSplitOptions.RemoveEmptyEntries);
                context.IncrementCounter("country", segments[2], 1);
                context.EmitKeyValue(segments[2], inputLine);
                context.IncrementCounter("Text chars processed", inputLine.Length);
            }
            catch (IndexOutOfRangeException ex)
            {
                // We still allow other exceptions to throw and set an error state on the task, but this
                // exception type we are confident is due to the input not having >3 \t separated segments.
                context.IncrementCounter("Logged recoverable error", "Input Format Error", 1);
                context.Log(string.Format("Input Format Error on line {0} in {1} - {2} was {3}", inputLine, context.InputFilename,
                    context.InputPartitionId, ex.ToString()));
                context.SetStatus("Running with recoverable errors");
            }
        }
    }

    public static class ContextBaseExtension
    {
        public static void SetStatus(this ContextBase context, string statusMessage)
        {
            Console.Error.WriteLine("reporter:status:{0}", statusMessage);
        }
    }
}


Timothy Khouri described how to Greatly Increase the Performance of Azure Storage CloudBlobClient in a 12/10/2012 post:

Windows Azure Storage boasts some very impressive transactions-per-second and general throughput numbers. But in your own applications you may find that blob storage, tables and queues all perform much slower than you’d like. This post will teach you one simple trick that literally increased the throughput of my application over 50 times.

The fix is very simple, and only a few lines of code – but I’m not just going to give it away so easily. You need to understand why this is a “fix”. You need to understand what is happening under the hood when you are using anything to do with the Windows Azure API calls. And finally, you need to suffer a little pain like I did – so that’s the primary reason why I’m making you wait. :)

The Problem – Windows Azure uses a REST based API

At first glance, this may not seem like a throughput problem. In fact, if you’re a purist, you likely have already judged me a fool for the above statement. But hear me out on this one. If someone made a REST-based API, then it is very likely that a web-browser would be a client application that would consume this service. Now, what is one issue that web-browsers have by default when it comes to consuming web services from a single domain?

“Ah!” If you are a strong web developer – or you architect many web-based solutions, you probably have just figured out the issue and are no longer reading this blog post. However, for the sake of completeness, I will continue.

Seeing that uploading a stream to a blob is a semi-lengthy and IO-bound procedure, I thought to just bump up the number of threads. The performance increased only a little, and that led me to my next question.

Why is the CloudBlobClient slow even if I increase threads?

At first I assumed that I had simply hit the limit of throughput on an Azure Blob Container. I was getting about 10 blobs per second, and thought that I probably just needed to create more containers – “perhaps it’s a partitioning issue.”

This didn’t feel right because Azure Blobs are supposed to partition based on “container + blob-name”, and not just on container alone… but I was desperate. So, I created 10 containers and ran the test again. This time more threads, more containers… the result? Zero improvement. The throughput was the exact same.

Then it hit me. I decided to do a test that “shouldn’t” make a difference – but it’s one that I’ve done before in the past to prove that I’m not crazy (or in some cases, to prove that I am). I ran my console app program many times. The results were strange. One application was getting about 10 inserts per second – but 3 applications were getting 10 each. This means that my computer, my network and the Azure Storage Service was able to process far more than my one console application was doing!

This proved my hunch that “something” was throttling my application. But what could it be? My code was super simple:

while (true)
{
	// Create a random blob name.
	string blobName = string.Format("test-{0}.txt", Guid.NewGuid());

	// Get a reference to the blob storage system.
	var blobReference = blobContainer.GetBlockBlobReference(blobName);

	// Upload the word "hello" from a MemoryStream.
	using (var stream = new MemoryStream(Encoding.UTF8.GetBytes("hello")))
	{
		blobReference.UploadFromStream(stream);
	}

	// Increment my stat-counter.
	Interlocked.Increment(ref count);
}

That’s when it hit me! My code is simple because I’m relying on other people who wrote code, in this case the Windows Azure Storage team! They, in turn, are relying on other people who wrote code… in their case the .Net Framework team!

So you might ask, “What functionality are they using that is so significant to the performance of their API?” That question leads us to our final segment.

Putting it All Together – Getting More Throughput in Azure Storage

As was mentioned before, the Azure Storage system uses a REST (HTTP-based) API. As was also mentioned, the developers on the storage team used functionality that already existed in the .Net Framework to create web requests to call their API. That class – the WebRequest (or HttpWebRequest) class in particular was where our performance throttling was happening.

By default, a web browser – or in this case any .Net application that uses the System.Net.WebRequest class – will only allow up to 2 simultaneous connections per host domain.

So no matter how many threads I added in my application code, ultimately I was being funneled back into a 2-connection-maximum bottleneck. Once I proved that out, all I had to do was add this simple configuration bit to my App.config file:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
	<system.net>
		<connectionManagement>
			<add address="*" maxconnection="1000" />
		</connectionManagement>
	</system.net>
</configuration>

Now my application inserts 50 times more than it used to:
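A back-of-the-envelope model shows why the config change matters: when each upload takes roughly the same time, effective throughput is capped by the smaller of your thread count and the per-host connection limit. A quick Python sketch (the request latency is illustrative, not measured):

```python
def effective_throughput(threads, connection_limit, seconds_per_request):
    """Requests per second when concurrency is capped by the per-host connection limit."""
    concurrency = min(threads, connection_limit)
    return concurrency / seconds_per_request

# With the default .NET limit of 2 connections per host, adding threads does not help:
print(effective_throughput(threads=50, connection_limit=2, seconds_per_request=0.25))
# After raising maxconnection, the same 50 threads can all make progress:
print(effective_throughput(threads=50, connection_limit=1000, seconds_per_request=0.25))
```

With an assumed latency of a fifth of a second or so per upload, the capped case lands near the ~10 inserts per second observed above, and raising the limit moves the bottleneck back to the thread count.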


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Jim O’Neil (@jimoneil) continued his series with Practical Azure #5: Windows Azure SQL Database on 12/14/2012:

This, the latest installment of DevRadio Practical Azure, should have something everyone can – ahem – relate to… Join me for an overview of Windows Azure SQL Database (“the feature formerly known as SQL Azure”) to find out how it’s both the same and different from the SQL Server you know and love – from both a developer and a database admin perspective.


Download: MP3 | MP4 (iPod, Zune HD) | High Quality MP4 (iPad, PC) | Mid Quality MP4 (WP7, HTML5) | High Quality WMV (PC, Xbox, MCE)

And here are the Handy Links for this episode:

Sam Lester described Bulk Database Migration from On-Premise SQL Server to SQL Azure - Step 1: Bulk .bacpac Export in a 12/10/2012 post (missed when published):

I recently had a task to perform a bulk migration of hundreds of databases to Windows Azure SQL Database (formerly known as SQL Azure) from an instance of SQL Server 2012. If you need to migrate only a very small number of databases, the easiest way is likely through the SSMS export Data-Tier Application wizard. You can manually create a .bacpac file for each of the databases and move them to your blob storage. However, for hundreds of databases, this is not a nice solution. There are several ways to automate this task, but I decided to use TSQL scripting to build up the correct SQLPackage.exe command-line call, then use xp_cmdshell to execute it. Similarly, the script below can be modified to generate the appropriate command-line syntax and be executed outside of SQL Server if needed, since most SQL Server environments have xp_cmdshell disabled.

There are several different techniques for migrating an on-premise SQL Server database to SQL Database. My current technical preference is to "export" the database using Data-Tier Application functionality to create a self-contained .bacpac file (NOTE: .bacpac, not .dacpac) that can then be uploaded to blob storage and imported directly into your SQL Database environment. One benefit of using .bacpac export is that the process includes a validation step to verify that all objects in your on-premise database will be supported in SQL Database. Other techniques (scripting, Import/Export wizard, extracting .dacpacs, etc) do not perform this verification during the extraction phase and will result in a failed database creation in the SQL Database environment if you have unsupported objects (ex: a table without a clustered index). These other techniques give different benefits, such as the flexibility to manually modify scripts, etc., but for ease of use, I'm currently sticking with .bacpac export.

Task: Bulk Migration from On-Premise SQL Server to SQL Database - Step 1: Performing Bulk .bacpac Export through TSQL Scripting & SQLPackage.exe

Code Snippet

/*
Purpose: This script performs a bacpac export for all databases on a given server. If a database contains only objects fully supported in
SQL Database / SQL Azure, the bacpac export will succeed. If not, the errors/reasons for failure will be displayed in the result set.
After running the script, save the output to text to help determine which objects need to be modified in order for each DB to be ready to migrate.
Author: Sam Lester (MSFT)
Create Date: 12/1/12
Sample call for bacpac export:
sqlpackage.exe /Action:Export /SourceServerName:. /sdn:"DB_Foo" /tf:"c:\MyBacpacs\DB_Foo.bacpac"
*/

/* Create tables to hold the DB names to process */
create table files (files varchar(max))
create table publish (filename varchar(1000), cmd varchar(max))
create table output (output varchar(1000) null)
go

declare @filepath varchar(1000), @sqlpackagepath varchar(1000)
select @filepath = 'c:\MyBacpacs\'  /* path to export .bacpacs -- create this directory before running */
      ,@sqlpackagepath = 'C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\sqlpackage.exe'  -- Update this to the correct location for your DacFramework.msi install

-- Store the list of databases that we want to export. If you want a particular subset, add a WHERE clause to filter this list.
insert into files
select name from sys.databases where database_id > 4  -- skip the system DBs

/* Build the command line string that we'll later execute in the WHILE loop. In this example, the instance name is hardcoded
to the default instance (using . in /ssn:.). If you have a named instance, update this string to "/ssn:ServerName\InstanceName". */
insert into publish
select files, ' /Action:Export /sdn:' + files + ' /ssn:. /tf:' + @filepath + files + '.bacpac'
from files

DECLARE @counter int, @limit int, @rc int
SET @counter = (select count(*) from publish)
SET @limit = 0

-- Loop through and execute each command line to perform the export
WHILE (@limit < @counter)
BEGIN
    declare @txt varchar(1000)
    select @txt = 'cmd /c ""' + @sqlpackagepath + '"" ' + (select top 1 cmd from publish) + ' >> ' + @filepath + 'results_bacpac_export.txt'
    insert output select '----------------------------------------'
    insert output select top 1 cmd from publish
    insert output select '----------------------------------------'
    insert output exec @rc = master.dbo.xp_cmdshell @txt
    if (ISNULL(@rc, 0) = 0)
    begin
        insert output select '--- SUCCESSFUL EXPORT FOR DATABASE = ' + (select top 1 filename from publish)
    end
    delete top (1) from publish
    select @limit = @limit + 1
END

select * from output where output is not null

/* Cleanup
drop table files
drop table publish
drop table output
*/

The result of running the script is the following:

1. For databases that can be successfully migrated to SQL Database, a .bacpac file is created in the file location supplied in the script. These can then be moved to blob storage and imported.
2. For databases that cannot be successfully migrated to SQL Database, the output of the script provides the list of unsupported objects. Save this output to text to help assess your migration strategy.


Hope this helps,
Sam Lester (MSFT)

Standard disclaimer: As with any script/code, please run this in a test server to make sure it works for your specific version/edition/environment settings before tinkering in your production environment.

Adam Mahood reported a Windows Azure Import/Export Service and External References update in a 12/13/2012 post to the SQL Server Data Tools Team blog:

The Windows Azure Import/Export Service powered by the SQL Server 2012 Data-Tier Application Framework (DACFx V3) provides a cloud service for logical backup/restore and migration of Windows Azure SQL Databases. This functionality is available via an HTTP endpoint, as well as through the Windows Azure Management Portal.

We have recently released an update to the service that brings an enhancement to the level of validation carried out against a database during the Export operation. This improved level of validation ensures Exported BACPACs can be Imported (restored) to a new database in Azure. However, due to this improved validation, folks may see an increase in Export operation failures, particularly around invalid self-referencing external (three-part) names in object definitions. More details on the issue are below.

  1. You attempt to Export a Windows Azure SQL Database using the Import/Export Service via the HTTP Endpoint, or through the Windows Azure Management Portal.
  2. The Export operation fails with an error message similar to the following:

“Exception Microsoft.SqlServer.Management.Dac.Services.ServiceException: Error encountered during the service operation. Inner exception Microsoft.SqlServer.Dac.DacServicesException: Validation of the schema model for data package failed. Error SQL71562: Procedure: [dbo].[SampleProcedure] has an unresolved reference to object [MyDB].[dbo].[TestTable]. External references are not supported when creating a package from this platform.”


Cause: Improved validation blocks Exports of databases containing fully qualified three-part names in object definitions.


DACFx must block Export when object definitions (views, procedures, etc.) contain external references, as Azure SQL Database does not allow cross-database external references. This includes blocking Export for databases with three-part references to themselves: if these references were successfully Exported, Importing the resulting BACPAC into a database with a different name would always fail, as the three-part name references would no longer be self-referencing.


Resolution: Modify your database schema, removing all of the self-referencing three-part name references and reducing them to two-part names (for example, [MyDB].[dbo].[TestTable] becomes [dbo].[TestTable]).

There are many tools and mechanisms you can use to fix your schema and remove these external references. One option is to use SQL Server Data Tools (SSDT). In SSDT, you can create a database project from your Azure database, setting the target platform of the resulting project to “SQL Azure”. This enables Azure-specific validation of your schema, which will flag all three-part name/external references as errors. Once all of the external reference errors identified in the Error List have been remedied, you can publish your project back to your Azure database and resume usage of the Import/Export Service.
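Purely as an illustration of the rewrite involved (this is a hypothetical helper, not an SSDT feature, and no substitute for proper schema validation), a script could reduce self-referencing three-part names to two-part names like this:

```javascript
// Illustrative sketch only: rewrite self-referencing three-part names
// ([MyDB].[dbo].[TestTable]) to two-part names ([dbo].[TestTable]).
// A real fix should go through SSDT so the references are validated properly.
function stripSelfReferences(sql, dbName) {
  // Matches "[MyDB]." only when it immediately precedes a [schema].[object] reference.
  var pattern = new RegExp('\\[' + dbName + '\\]\\.(?=\\[[^\\]]+\\]\\.\\[[^\\]]+\\])', 'g');
  return sql.replace(pattern, '');
}

var proc = 'CREATE PROCEDURE [dbo].[SampleProcedure] AS SELECT * FROM [MyDB].[dbo].[TestTable]';
console.log(stripSelfReferences(proc, 'MyDB'));
// -> CREATE PROCEDURE [dbo].[SampleProcedure] AS SELECT * FROM [dbo].[TestTable]
```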

Chris Klug (@ZeroKoll) described A way to upload files to Windows Azure Mobile Services in a 12/13/2012 post:

Ok, so it is time for another Mobile Services post I believe. My previous posts on the subject have covered the basics as well as authentication for Mobile Services. But so far, I have only been doing the simplest of tasks, such as adding and reading data from a SQL Database. However, I have mentioned that Mobile Services is supposed to be sort of a layer on top of more of Microsoft’s cloud offerings, such as the Service Bus, storage, etc. So in this post, I want to demo how you can utilize Mobile Services to upload files to blob storage.

There are probably a lot of different ways to do this, but two stood out for me: the one I am about to describe, using public containers, as well as using shared access signatures (SAS). So before going about it “my way”, I am going to explain SAS, and why I don’t like it even though it might be a “cleaner” way to do it.

Blob storage access is limited by default, something that I like, which is why I won’t even bother talking about public containers. But if uploading files to a public location works for you, then that is easier than what I am about to talk about…

So…private containers… To be able to access private containers, you need to sign your requests to Azure. This signature requires a key, which should be kept private for obvious reasons. So including it in a client application like the ones using Mobile Services would be a massive security issue. The solution to this is that you can create a special key (SAS) that will make it possible to access blob storage for a limited time. The SAS is generated serverside and can then be handed to the client to give him/her access to upload files. More information can be found online at places like this.

Ok, so why don’t I like this? Well, I just find that it means that I have to task the client application with doing the actual upload. This means that if I ever change storage, or even the storage structure, I will have to update the client. Besides, in Win8 and WP8, the client is supposed to do one thing great, and not 100 things “so so”. So tasking my recipe app with communicating with blob storage just because I want a picture of the user for personalization seems a bit off. (No I am not building a recipe app, it was just an example…)

And besides, I would still need some form of serverside code to get me the SAS. It would have been a completely different thing if including blob access code in the client meant not needing serverside code, and thus saving money, but I still need it. So even if it is a “nicer” solution, it gives me no real added benefit beyond adding stuff to the client that I don’t believe should be there.

Anyhow, time to move forward with the solution instead of talking about why I don’t like the other solution.

So… I already have a Mobile Service up and running since before, so I will just skip that. I also have a Windows 8 App Store client since before, so I will keep using that. All I need to do is to add the code needed to select an image and then post it to my Mobile Service.

I decided to do this quick and dirty in code behind of my application, but that’s just to keep it simple… So I added the following XAML

<TextBlock x:Name="txtFileName" />
<Button Content="Select File" Click="SelectFile" />
<Button Content="Send File" Click="SendFile" />

And the following C#

private async void SelectFile(object sender, RoutedEventArgs e)
{
    var dlg = new FileOpenPicker();
    dlg.ViewMode = PickerViewMode.Thumbnail;
    dlg.FileTypeFilter.Add(".jpg"); // the picker requires at least one file type filter
    var file = await dlg.PickSingleFileAsync();

    if (file == null)
        return;

    _file = file;
    txtFileName.Text = _file.DisplayName;
}

private async void SendFile(object sender, RoutedEventArgs e)
{
    var msg = new ImageUpload { fileName = "Microsoft.jpg" };
    await msg.SetImageData(_file);
    await App.MobileService.GetTable<ImageUpload>().InsertAsync(msg);
    new MessageDialog("Done").ShowAsync();
}

As you can see, there are 2 event handlers. The first one does the file selection using a FileOpenPicker, which is basic stuff. The second one creates a new ImageUpload object, which I will cover in a little bit, and then uses the Mobile Services proxy to send it to the cloud. The real stuff is going on in the ImageUpload class though…

[DataTable(Name = "images")]
public class ImageUpload
{
    public int id { get; set; }
    public string fileName { get; set; }
    public string image { get; set; }

    public async Task SetImageData(StorageFile file)
    {
        var content = await FileIO.ReadBufferAsync(file);
        var bytes = content.ToArray();
        image = Convert.ToBase64String(bytes);
    }
}

Ok, so the “real stuff” is still pretty simple. All the ImageUpload class does, besides hold the values that are to be sent to the cloud, is to take the contents of the file and convert it to a Base64 encoded string. That way, I can push my file to the cloud as just a string, which Mobile Services already supports.
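For illustration, here is the same Base64 round trip sketched in Node.js, which is what the server-side script will effectively undo later (a standalone sketch, not Mobile Services code):

```javascript
// Sketch of the Base64 round trip: the client sends file bytes as a Base64
// string, and the server script later turns that string back into raw bytes.
var original = new Buffer([0xFF, 0xD8, 0xFF, 0xE0]); // e.g. the first bytes of a JPEG

// What the client effectively does (Convert.ToBase64String in C#):
var asString = original.toString('base64');

// What the server script does before uploading to blob storage:
var decoded = new Buffer(asString, 'base64');

console.log(asString);                 // "/9j/4A=="
console.log(decoded.equals(original)); // true
```

Note that Base64 inflates the payload by roughly a third, which is the price of pushing binary data through a string column.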

So now that the ImageUpload class has been created and pushed to the cloud, what happens there? Well, there are a couple of things that have to happen. First of all, the image I am uploading has to be converted from a Base64 encoded string back into the actual image, and then that image has to be sent to blob storage. But let’s just take it one step at a time.

The first thing is to create a new table in my Mobile Service called “images”. Next I need to create an insert script, which is where all the stuff will be happening.

The first part of the script looks like this

function insert(item, user, request) {
    var azure = require('azure');
    var blobService = azure.createBlobService('DefaultEndpointsProtocol=https;AccountName=XXXXX;AccountKey=YYYYY');

    createContainerIfNotExists(blobService, function(error) {
        if (error) {
            request.respond(statusCodes.INTERNAL_SERVER_ERROR);
            return;
        }
        uploadFile(blobService, item.image, item.fileName, function(error) {
            if (error) {
                request.respond(statusCodes.INTERNAL_SERVER_ERROR);
                return;
            }
            delete item.image;
            request.execute();
        });
    });
}

Ok, so what is happening in there? Well, the first thing that happens is that the script gets a reference to the Node.js module called “azure”. This is used for accessing Azure resources…duh… Next that module is used to create a proxy client for my blob storage.

This proxy is then used to create the container if it doesn’t exist. If that method fails, the script returns an HTTP 500. If not, it uploads the file using another helper method. And once again, if that fails, it return an HTTP 500. Otherwise, it removes the Image property so that it isn’t stored in the table, and then executes the request, inserting the rest of the entity properties into the table.

That part isn’t very complicated…so let’s look at the helper methods. First up is the createContainerIfNotExists.

It takes a blob storage proxy as a parameter and uses it to ensure that the target container exists using a publicAccessLevel of “blob”.

function createContainerIfNotExists(blobService, callback) {
    console.log('creating container if needed');
    blobService.createContainerIfNotExists('democontainer', { publicAccessLevel: 'blob' }, function(error) {
        console.log('container created');
        callback(error);
    });
}
As you can see, I am doing quite a bit of logging as well. This helps when something goes wrong…

The next helper is the uploadFile method. It looks like this

function uploadFile(blobService, file, filename, callback) {
    console.log('uploading file');
    var fileBuffer = new Buffer(file, 'base64');
    blobService.createBlockBlobFromStream('democontainer'
        , filename
        , new ReadableStreamBuffer(fileBuffer)
        , fileBuffer.length
        , { contentTypeHeader: 'image/jpg' }
        , function(error) {
            console.log('file uploaded');
            callback(error);
        });
}

It basically just forwards the file information to the blob storage proxy’s createBlockBlobFromStream method. However, there are a few interesting bits in here. First of all, it takes the “file”, which is really the Base64 encoded string, and puts it inside a Buffer, which is told that the content is Base64 encoded. So now I have my file content as a Buffer instead of a string, which is a good start. However, the method I am calling is called createBlockBlobFromStream. This means that it requires a Stream object, not a Buffer. Unfortunately, this is not .NET, so there isn’t some neat implicit cast or extension method that solves this. And I couldn’t even find an implementation of Stream, which is an abstract base class, that wraps a Buffer. So after collecting some tips and code snippets from around the web, I built my own. It isn’t actually that complicated, but it is quite a few lines of code…

var stream = require('stream');
var util = require('util');

var ReadableStreamBuffer = function(fileBuffer) {
    var that = this;
    stream.Stream.call(this);
    this.readable = true;
    this.writable = false;

    var frequency = 50;
    var chunkSize = 1024;
    var size = fileBuffer.length;
    var position = 0;

    var buffer = new Buffer(fileBuffer.length);
    fileBuffer.copy(buffer);

    var sendData = function() {
        if (size === 0) {
            that.emit("end");
            that.destroy();
            return;
        }

        var amount = Math.min(chunkSize, size);
        var chunk = new Buffer(amount);
        buffer.copy(chunk, 0, position, position + amount);
        position += amount;
        size -= amount;

        that.emit("data", chunk);
    };

    this.size = function() {
        return size;
    };

    this.maxSize = function() {
        return buffer.length;
    };

    this.pause = function() {
        if (sendData && sendData.interval) {
            clearInterval(sendData.interval);
            delete sendData.interval;
        }
    };

    this.resume = function() {
        if (sendData && !sendData.interval) {
            sendData.interval = setInterval(sendData, frequency);
        }
    };

    this.destroy = function() {
        if (sendData && sendData.interval) {
            clearInterval(sendData.interval);
        }
        sendData = null;
        that.readable = false;
    };

    this.setEncoding = function(_encoding) {
    };

    this.resume();
};
util.inherits(ReadableStreamBuffer, stream.Stream);

Ok, if you are interested, I suggest you look through that piece of code. If not, I will give you a quick rundown of what it does.

It basically takes a Buffer, which is pretty much like a byte[]. It wraps that, and keeps track of the current position inside it. It then uses a timer to emit a chunk of bytes to the base class every so often, as long as there is data to push.

That’s actually all there is to it!
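The chunking arithmetic inside sendData can be sanity-checked on its own. Here is a small synchronous sketch (standalone, not part of the stream class) that splits a buffer into 1024-byte chunks using the same Math.min logic and then reassembles it:

```javascript
// Synchronous sketch of the chunking logic used by ReadableStreamBuffer:
// take Math.min(chunkSize, remaining) bytes at a time until nothing is left.
function chunkBuffer(buffer, chunkSize) {
  var chunks = [];
  var position = 0;
  var size = buffer.length;
  while (size > 0) {
    var amount = Math.min(chunkSize, size);
    var chunk = new Buffer(amount);
    buffer.copy(chunk, 0, position, position + amount);
    position += amount;
    size -= amount;
    chunks.push(chunk);
  }
  return chunks;
}

var data = new Buffer(2500); // e.g. a 2500-byte file
data.fill(7);
var chunks = chunkBuffer(data, 1024);
console.log(chunks.length);                      // 3 (1024 + 1024 + 452 bytes)
console.log(Buffer.concat(chunks).equals(data)); // true
```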

Barbara Darrow (@gigabarb) reported SendGrid adds Parse, Stackmob, Azure integrations in a 12/13/2012 post to GigaOm’s Cloud blog:

SendGrid keeps moving toward ubiquity. The company, which brings e-mail delivery to popular applications like foursquare, Pinterest, and Airbnb, now integrates with the Parse, Stackmob and Windows Azure mobile-backend-as-a-service (MbaaS) options. That should make it easier for more mobile developers to build email delivery and alerts into their applications without having to sweat the details of their infrastructure. Last week SendGrid announced tie-ins to the popular Twilio APIs that enable SMS text and voice integration into mobile apps.

This is a big deal because most mobile app users expect to communicate via their apps. What’s the good of foursquare if you can’t alert the world that you ousted Joe Schmoe as mayor of your Dunkin Donuts? The new MbaaS integrations are all available now, according to Boulder, Colo.-based SendGrid.

One of the key advantages of SendGrid, developers say, is it lessens their overall reliance on Amazon Web Services for capabilities above and beyond basic compute and storage functionality.

That’s increasingly important for developers who don’t want to be overly reliant on a particular vendor’s cloud. SendGrid’s biggest rival is Amazon Simple Email Service (SES), and by using SendGrid developers can distinguish between their infrastructure provider and their mail service provider. “That’s key because it can take you six months to migrate an app off of Amazon if you need to,” said Mark Geene, CEO of Cloud Elements, a Denver-area systems integrator specializing in cloud development projects.

SendGrid CEO Jim Franklin says he hears that sort of thing a lot. “One of the strengths of SendGrid is it’s easy-on, easy-off. We make it very easy contractually and technically to sign up and to leave,” he said.

Full disclosure: I’m a GigaOm registered analyst.


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

Don Pattee posted Announcing the release of HPC Pack 2012 to the Windows HPC Team blog on 12/14/2012:

Announcing the immediate availability of the next major version of Windows HPC software: HPC Pack 2012

The fourth 'major version' of the HPC software brings improved performance and reliability to the HPC platform, simplifies the installation experience, and adds some great new features such as:

  • support for Windows Server 2012 (head nodes, compute nodes, broker nodes, and unmanaged server nodes) and Windows 8 (as clients and workstation nodes)
  • the ability to customize the tuning of MPI operations for improved performance on your hardware
  • the ability to increase cluster robustness by adding more than 2 head nodes in a failover cluster group
  • the ability to install a head node on a Windows Azure VM, so an entire cluster can run 'in the cloud'
  • all features previously split between the 'Express,' 'Enterprise,' and 'Workstation' or 'Cycle Harvesting' editions now available through a single installation
  • additional features are described in the What’s New in HPC Pack 2012 document

If you are using the Windows Azure HPC Scheduler SDK, this update comes with the same reliability and performance improvements that the HPC Pack is getting, and also adds compatibility with the Windows Azure SDK October 2012 / v1.8 release.

Before installing either, take a look at the What’s New in HPC Pack 2012 document [see post below] and Release Notes over on our Windows HPC Technical Library.

More details and additional documentation will arrive in January; in the meantime, head over to the Windows HPC Discussion forums if you have any questions or comments. We'll be happy to hear from you!

Note: The single HPC Pack 2012 download contains all the features previously separated in the Express, Workstation, Cycle Harvesting, and Enterprise editions. Standalone 'Client Utilities' and 'MS-MPI' packages have also been updated for HPC Pack 2012.

The Windows High Performance Computing (HPC) Team described What's New in Microsoft HPC Pack 2012 in a 12/10/2012 article for the TechNet Library:

  • Updated: December 10, 2012
  • Applies To: Microsoft HPC Pack 2012

This document lists the new features and changes that are available in Microsoft® HPC Pack 2012.

In this topic:

  • Deployment
  • Windows Azure integration
  • Node management
  • Job scheduling
  • Runtimes and development


  • Operating system and software requirements have changed. HPC Pack 2012 has an updated set of requirements for operating system and other prerequisite software.

The following table shows the operating system requirements of HPC Pack 2012 for the different node roles.


HPC Pack 2012 also has updated the requirements for software in addition to the operating system as follows:

  • HPC Pack 2012 requires and supports Microsoft® SQL Server® 2008 R2 or SQL Server 2012 to manage the HPC cluster databases. HPC Pack 2012 no longer supports SQL Server 2008.
  • HPC Pack 2012 supports Microsoft® .NET Framework 4. SQL Server® 2008 R2 or SQL Server 2012 installation still requires .NET Framework 3.5.
  • HPC Pack now supports the Server Core installation option of the Windows Server 2012 operating system. HPC Pack 2012 supports the Server Core installation option of the Windows Server 2012 operating system for the following roles:
    • Compute node
    • WCF Broker node
    • Unmanaged server node
    HPC Services for Excel is not supported on the Server Core installation option of the Windows Server 2012 operating system.
  • The SQL permissions required for remote database setup have been reduced. You no longer need to be a member of the SQL Server sysadmin role to install HPC Pack 2012 with remote databases. Before you install HPC Pack 2012 with remote databases, ask the database administrator to run the SetupHpcDatabase.cmd script in the Setup folder or to manually perform or modify the tasks in the script. For details, see 1.3. Decide if you want to deploy your cluster with remote databases in the Getting Started guide.

Windows Azure integration

  • Nodes can be deployed in Windows Azure deployments in which Windows Azure Virtual Network is available. You can use HPC Pack 2012 to deploy nodes in Windows Azure deployments in which Windows Azure Virtual Network is available. Virtual Network securely extends your enterprise network to Windows Azure, which allows applications that run on Windows Azure nodes to access resources on the enterprise network. With Virtual Network, you can build traditional site-to-site virtual private networks (VPNs) to scale data centers, and create hybrid applications that span from an on-premises HPC cluster to Windows Azure. This feature provides applications with the means to access files in shared folders, and to access servers for license validation.
  • The number of proxy nodes is configurable. Cluster administrators can now configure the number of proxy nodes that a Windows Azure deployment uses. Proxy nodes facilitate communication between on-premises head nodes and nodes hosted in Windows Azure. You can specify that the deployment uses a fixed number of proxy nodes, or that the deployment uses a number of proxy nodes that varies with the size of the deployment. Proxy nodes use Medium size Azure worker roles.
  • An application VHD can be specified to provision Azure worker role instances. Cluster administrators now can specify an application virtual hard disk (VHD) that is automatically mounted when Windows Azure worker role instances are provisioned.
  • All Windows Azure burst communication that goes through TCP port 443 can now use HTTPS. If configured in this release, all communication through TCP port 443 between the on-premises cluster and Windows Azure burst node deployments can now use the HTTPS protocol. In previous releases, NetTcp communication was used for Service-Oriented Architecture (SOA), job scheduling, and file staging services to Windows Azure deployments. HTTPS communication is allowed in many enterprise environments in which non-HTTPS traffic on port 443 is blocked. By default, NetTcp communication continues to be configured for these services in HPC Pack 2012, to improve performance for some Windows Azure burst deployments.
  • An HPC Pack 2012 head node can be deployed on Windows Azure Virtual Machine. Using the Preview release of Windows Azure Virtual Machine, an administrator or independent software vendor (ISV) can create and run an HPC cluster and workload fully in Windows Azure with only minimal or no investment in on-premises infrastructure. The domain controller for the cluster can be either on-premises (if there is an existing enterprise domain) or in Windows Azure. You can use a single Windows Azure Virtual Machine for the head node, or multiple virtual machines for the head node, Microsoft SQL Server, and the Active Directory domain controller. You can add Windows Azure compute nodes to the cluster in the same way that you add Windows Azure nodes to an on-premises HPC cluster.
  • Windows Azure trace logs can be written to persistent storage. This release allows the Windows Azure trace logs that are generated on Windows Azure nodes to be written to persistent Windows Azure storage, instead of to local storage on the role instances. The trace logs facilitate troubleshooting of problems that can occur with “burst to Windows Azure” node deployments. Writing the trace logs to persistent storage is enabled by setting the AzureLoggingEnabled cluster property to True with the HPC PowerShell Set-HpcClusterProperty cmdlet. This affects all new deployments (not existing ones), and logs only Critical, Error, and Warning error messages to the WADLogsTable in the storage account associated with each deployment.

    Important: Logging of Windows Azure deployment activities uses table storage space and generates storage transactions on the storage account associated with each deployment. The storage space and the storage transactions will be billed to your account. Activity logging is generally enabled only when problems occur with the deployment and is used to aid in troubleshooting issues with the deployment. After you disable logging, the logs will not be automatically removed from Windows Azure storage. You may want to keep the logs for future reference by downloading them. The log entries can be cleaned up by removing the WADLogsTable from your storage account.

  • HPC Pack 2012 integrates with Windows Azure SDK 1.8. Creation of Windows Azure VM roles nodes by using Windows HPC Pack 2012 requires version 1.8 (October 2012) of the Windows Azure SDK for .NET x64.
    The Windows Azure HPC Scheduler that is compatible with HPC Pack 2012 requires both the Windows Azure HPC Scheduler SDK 1.8 (available for download from the Microsoft Download Center) and version 1.8 of the Windows Azure SDK for .NET x64.

Node management

  • New features are available for discovering and managing the power plan or power scheme setting for on-premises nodes. The Windows power plan or power scheme setting on the compute nodes in your HPC cluster can affect the performance of the cluster. To help identify and alleviate performance problems that result from power plan settings, you now can run the Active Power Scheme Report diagnostic test, which reports the power plan that is active on the nodes. You can run this diagnostic test on on-premises nodes to ensure that the active power plan is the desired one, or to verify that the power plan has not changed unexpectedly. Unexpected changes to the power plan might result from a group policy, for example. The desired power plan in most cases is High performance. You can find the Active Power Scheme Report diagnostic test in Diagnostics, under System, and then under System Configuration.
    Additionally, you can now add a new Power Scheme Setting node template task for on-premises nodes that changes the power plan of the nodes to any of the default power plans, or to a specific power plan that you or your organization has created. This new node template task can run when you deploy nodes from bare-metal, or when you add preconfigured nodes to the HPC cluster. Also, the node template task can run when you run maintenance on existing nodes. You can find the Power Scheme Setting node template task in the Maintenance section of compute node templates.

Job scheduling

  • Custom job and task properties can be changed at run time. The owner of a job or a cluster administrator now can change the values of custom-defined job and task properties at any time.
  • A job can depend on another job. You now can specify that a job is dependent on a parent job when you submit the job. When a job is dependent on a parent job, the HPC Job Scheduler Service considers the job for scheduling only when its parent jobs finish.
  • Jobs can be allocated to nodes based on their memory requirements. You can configure a job property, EstimatedProcessMemory, to estimate the maximum amount of memory that a process in the job will consume. The HPC Job Scheduler Service uses this value to help allocate the job efficiently to the cluster nodes that have the memory resources to perform the job. By default, job allocation does not take into account the memory requirements of the job.
  • Additional flexibility is provided for scheduling jobs on node groups. You now can define a value for a property, NodeGroupOp, to specify an operator that affects how the HPC Job Scheduler Service uses node groups to allocate resources to a job. This new flexibility is useful for scheduling jobs on hybrid clusters where some nodes are Windows Azure nodes, and some nodes are on-premises compute nodes, workstation nodes, or unmanaged server nodes. Of the on-premises nodes, some nodes can have different attributes that some applications require, such as memory or cores.
    The following table describes the values for the NodeGroupOp property:


  • You can specify exit codes other than 0 that indicate success for tasks. Jobs and tasks now have a ValidExitCodes property that you can use to specify a list of exit codes that indicate successful task completion. If no list is specified, then 0 is the only task exit code that indicates success. If you specify a value for the ValidExitCodes property for a job, the specified list of successful exit codes applies to all tasks within the job, unless you override that list by specifying a different value for the ValidExitCodes property of the task itself.
    This feature is especially useful for some commercial applications that do not always return 0 when they run successfully. The list of successful exit codes applies only to the task exit code itself. Exit codes raised during setup of the task are a separate matter. For example, if the file specified for the standard output of the task is not valid, the exit code is 3 for "file not found." Even if you include 3 in the list of values that you specify for the ValidExitCodes property of the task, the task still fails, because the failure occurs during task configuration and is not the result of the task starting and finishing.
  • The most recent output of a task is cached, rather than the start of the output. Starting with this release, HPC Pack caches the most recent 4000 characters per task, not the first 4000 characters.
  • The HoldUntil job property can be set without cluster administrator permissions, and can be set any time before the job runs. Prior to this release, you could only set the HoldUntil property if you were a cluster administrator, and could only set this property on jobs in the queued state. In this release, all users can set this property, and they can change this property for a job any time before the job starts running.
  • A job can run on a single node without reserving all of the resources on the node. You now can run a job on a single node without reserving all the cores of the node. For example, prior to this release, if you specified that a job should run on a minimum of 2 cores and a maximum of 4 cores, the job could run on 2 cores, each located on a different node. You can now specify that a job should run on a minimum of 2 cores and a maximum of 4 cores, but still must run on a single node. This feature provides more efficient use of cluster resources for jobs that must run on a single node, such as OpenMP applications.
  • You can specify whether dependent tasks run if a parent task fails. You can now set a new job property named FailDependentTasks to specify whether or not dependent tasks should continue if a parent task fails or is canceled. The property is set to false by default, and in that case all dependent tasks continue to run even if some of the parent tasks fail or get canceled. If you set this property to true, all dependent tasks fail upon the failure of any parent tasks.
  • Task-level preemption is now the default preemption behavior. In Queued scheduling mode, the default option for preemption behavior is now task-level immediate preemption, rather than job-level preemption. This new default behavior means that only as many tasks of low priority jobs are preempted as are needed to provide the resources required for the higher priority jobs, rather than preempting all of the tasks in the low priority jobs.
  • HPC Basic Profile Web Service is removed. The HPC Basic Profile Web Service was deprecated as of HPC Pack 2008 R2 with Service Pack 2 (SP2), and has been removed in this release. Instead, use the Web Service Interface, which is an HTTP web service that is based on the representational state transfer (REST) model. For information about the Web Service Interface, see Working with the Web Service Interface.
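Of the scheduling changes above, the ValidExitCodes rule is easy to model. The sketch below is illustrative only; the function and parameter names are invented and are not part of the HPC Pack API:

```javascript
// Illustrative model of ValidExitCodes: a task-level list overrides the
// job-level list; with no list at all, only exit code 0 means success.
// (Invented names; not the HPC Pack API.)
function taskSucceeded(exitCode, jobValidExitCodes, taskValidExitCodes) {
  var valid = taskValidExitCodes || jobValidExitCodes || [0];
  return valid.indexOf(exitCode) !== -1;
}
```

Note that, as described above, this applies only to the task's own exit code; a failure during task setup still fails the task regardless of the list.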

Runtimes and development

  • Easier monitoring of SOA jobs and sessions is available. You now can use HPC Cluster Manager or HPC Job Manager to view detailed information about the progress of SOA jobs and sessions, and to view message-level traces for SOA sessions. You can also export SOA traces and share them offline with support personnel.
  • The common data framework is extended to work on Windows Azure. This release extends support of the common data framework for service-oriented architecture (SOA) applications to Windows Azure burst deployments, using Windows Azure blob storage to stage data. Client applications can use the SOA common data API as before and expect the data to be available on Windows Azure compute nodes.
  • Collective operations in the Microsoft Message Passing Interface can now take advantage of hierarchical processor topologies. In modern HPC clusters, communication between Message Passing Interface (MPI) ranks located on the same nodes is up to an order of magnitude faster than the communication between MPI ranks on different nodes. Similarly, on Non-Uniform Memory Access (NUMA) hardware, communication within a socket happens significantly faster than communication across sockets on the same machine. In this release, MPI applications can use hierarchical collective operations to take advantage of hierarchical processor topologies, by minimizing the number of messages and bytes sent over the slower links where possible.
    This feature is enabled through the mpiexec MSMPI_HA_COLLECTIVE environment variable. For more details, see the mpiexec command-line Help.
    This feature applies to the following MPI collective operations:
    • MPI_Barrier
    • MPI_Bcast
    • MPI_Reduce
    • MPI_Allreduce
  • Microsoft MPI now offers a tuning framework to help configure the appropriate algorithmic selection for collective operations. In this release, Microsoft MPI can run basic collective performance benchmarks and optimize the specific algorithms used for the collectives based on the cluster configuration. This facility is exposed through mpiexec environment variables; for details, see the mpiexec command-line Help.
  • Microsoft MPI can use message compression to improve performance over the socket interconnect. In this release, Microsoft MPI can compress messages before sending them over the sockets channel. Compression potentially reduces the amount of time applications spend waiting for communication, in exchange for the additional processing time needed to perform the compression.
    Compression only applies to message traffic over the sockets channel. The performance impact of this feature depends on the configuration of the hardware for the HPC cluster and the entropy of the data being communicated between MPI ranks.
    This feature can be enabled through the mpiexec MSMPI_COMPRESSION_THRESHOLD environment variable.
  • Microsoft MPI offers enhanced diagnostics. To help troubleshoot errors, Microsoft MPI can capture the program state when MPI errors are encountered. The system can be configured to capture individual process core dumps at various levels of detail after an application terminates abnormally. This facility is controlled through the MSMPI_DUMP_MODE mpiexec environment variable.
  • Microsoft MPI offers enhanced affinity support. In this release, Microsoft MPI supports a richer set of options for process placement. The system is aware of the processor topology on the nodes and is able to configure process layout based on this information. Typical configurations include:
    • One process per NUMA node to support hybrid parallelism (MPI/OpenMP)
    • Sequential process allocation across physical cores, to maximize locality
    • Spread process allocation, to maximize aggregate bandwidth
    This facility is exposed by the mpiexec affinity and affinity_layout command-line parameters, or the MPIEXEC_AFFINITY environment variable.
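One way to picture the hierarchical collectives described above: each node reduces its local ranks first, so only one value per node has to cross the slower inter-node link. A minimal illustrative sketch (invented names; this is not MS-MPI code):

```javascript
// Two-level reduction over values grouped per node: the inner reduce is
// the fast intra-node step; the outer reduce is the slow inter-node step,
// which now carries one message per node instead of one per rank.
function hierarchicalSum(valuesByNode) {
  var perNode = valuesByNode.map(function (nodeValues) {
    return nodeValues.reduce(function (a, b) { return a + b; }, 0);
  });
  return perNode.reduce(function (a, b) { return a + b; }, 0);
}
```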

Mary Jo Foley (@maryjofoley) provides a less technical summary of Microsoft’s new HPC approach in her Microsoft's new high-performance-computing pack provides clustering in the cloud post of 12/14/2012 to ZDNet’s All About Microsoft blog:

Microsoft is making generally available its latest high-performance-computing software that provides support for Windows 8 and Windows Server 2012, among other features.

Read more.

No significant articles today for

<Return to section navigation list>

Windows Azure Service Bus, Caching, Access Control, Active Directory, Identity and Workflow

•• The Windows Azure Team sent the following e-mail message to all Windows Azure subscribers on 12/14/2012:

Windows Azure Storage
Effective December 12, 2012, we announced another price reduction for Windows Azure Storage—by as much as 28 percent. This follows our March 8, 2012, reduction of 12 percent, furthering our commitment to best overall value in the industry. We also added more value to our storage offerings in a number of ways. For more information, please read our Windows Azure blog or refer to the Storage section of Data Management on the Pricing Details webpage.

Windows Azure Active Directory
To help make identity management in the cloud accessible and available to every business and organization in the world, we now are offering two key features of Windows Azure Active Directory at no charge.

Please note that additional capabilities such as Windows Azure Active Directory Rights Management will be available as separately priced options. For more information, please refer to the Identity section of our Pricing Details webpage.

Windows Azure Media Encoding Reserved Units preview
Starting today, you can take advantage of improved media task processing throughput with the Media Services Encoding Reserved Units preview. Encoding Reserved Units allow you to process media tasks in parallel at a rate equal to the number of reserved units purchased. During preview, the original Media Services Encoding will remain free of charge. However, Encoding Reserved Units will be charged at the rate of $99 each per month, calculated on a daily basis using the highest number of reserved units that are provisioned in the account in the corresponding 24-hour period. For more information, please refer to the Media section of our Pricing Details webpage.
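The billing rule quoted above is simple arithmetic: each day accrues (that day's highest provisioned unit count) times ($99 divided by the days in the month). A quick illustrative calculation (invented names; not an Azure API):

```javascript
// Daily-accrual model of the stated $99/unit/month rate: dailyMaxUnits
// holds the highest number of reserved units provisioned on each day.
function encodingReservedUnitCharge(dailyMaxUnits, daysInMonth) {
  var perUnitPerDay = 99 / daysInMonth;
  return dailyMaxUnits.reduce(function (total, maxUnits) {
    return total + maxUnits * perUnitPerDay;
  }, 0);
}
```

For example, keeping 2 units provisioned for a full 30-day month accrues 2 × $99 = $198.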

Thank you,
Windows Azure Team

Nathan Totten (@ntotten) and Nick Harris (@cloudnick) produced Cloud Cover Episode 95 - Windows Azure Service Bus Fall Updates on 12/14/2012:

In this episode Nick and Nate are joined by Abhishek Lal – Program Manager on Windows Azure – who talks about the latest updates to the Windows Azure Service Bus. Abhishek demonstrates the improvements in SDK 1.8 including the WinRT messaging libraries that allow developers to use Service Bus directly from Windows Store apps.


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Sandrino di Mattia (@sandrinodm) described how to Deploy your legacy ASP.NET Web Site to Windows Azure Web Sites in a 12/12/2012 post:

This is a follow up post for an answer on StackOverflow.

There are plenty of tutorials on the official Windows Azure website about deploying an ASP.NET Web Application to Windows Azure Web Sites. You simply need to download the publish profile, import it in Visual Studio and click the Publish button. That’s pretty easy if you have a Web Application.

Before the release of Windows Azure Web Sites there were a few ways to deploy an ASP.NET Web Site to a Windows Azure Web Role. You could convert the Web Site to a Web Application (which I had to do for a customer, and this wasn’t fun), or you could play around with cspack.exe. Let’s take a look at how easy it is to leverage Windows Azure Web Sites to deploy your ASP.NET Web Site with Visual Studio.

Creating the Web Site

Creating a Web Site is pretty easy, simply go to the Portal and navigate to New > Compute > Web Site. There you get to choose a name for the Web Site and the region in which it should be hosted:

You could deploy the ASP.NET Web Site through a repository (GitHub, BitBucket, TFS, …) but let’s assume we want to do this from Visual Studio. In that case it’s important to create deployment credentials (if we don’t have any) and to find the FTP hostname for our Web Site. You’ll be able to do all of this on the dashboard:


Now that we have our Web Site, it’s time to deploy it. Right-click your Web Site and you’ll see a few options similar to those you would have on a Web Application:

Click the Publish Web Site button and you’ll see the following window:

By clicking the small button next to the Target Location you’ll be able to choose an FTP Server, and this is where we need to enter the information we found on the dashboard:

Fill in the FTP hostname, username and password (you get these after clicking Reset deployment credentials in the dashboard). Make sure you check the Passive Mode checkbox. Finally you’ll need to fill in the exact path to the website’s root directory: site/wwwroot

Now click Open and then click OK to start the deployment. You’ll be prompted to enter your credentials again, which you’ll need to do in order to proceed. Wait a few minutes for the deployment to complete and you’re finished:



<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Jason Gumpert (@jasongumpert) reported Microsoft Shares Updated Azure Plans for Dynamics NAV 2013, GP 2013, AX in a 12/14/2012 post to (requires free registration):

Microsoft has told partners that it is launching a TAP program for Dynamics NAV 2013 on Azure, with a release targeted in the first half of 2013. I.B.I.S. chief operating officer Dwight Specht elaborated on a recent ERP cloud strategy update from Microsoft on his blog.

The update from Microsoft covered several specific and more general points, including the fact that a TAP program for NAV 2013 in Azure is under way:

"NAV2013 is in a TAP program on Azure with several partners and will be the first cloud ERP delivery expected to release in Q1CY2013. This will run on Azure, but the IAAS portion of Azure (in other words, it will be using virtual SQL Servers and Application Servers). It is NOT running on SQL Azure."

The Q1 2013 date provided by Specht differs from recent reports that had been communicated to us regarding a June 2013 release date for an Azure-based Dynamics NAV solution. But the virtual server approach appears to be in line with past assumptions - hosting individual Dynamics NAV instances rather than any sort of new multi-tenant or Azure-native architecture.

Dynamics AX plans appear to remain unchanged in terms of cloud availability of the next major release in 2014 or 2015, though Specht clarifies a few points:

  • Microsoft plans to release the next major version of Dynamics AX as an Azure-based solution (or perhaps multiple specialized solutions) before the on-premise release
  • The Azure-based solutions will be multi-tenant "to be delivered exactly like CRM Online", including a Windows 8 native interface using HTML5 and Javascript
  • Microsoft still has plans for some of the key cloud-based ERP management tools they have talked about in past years, like support for a full development-test-production server stack (though Specht only mentions test and production), as well as control and better tools for managing upgrades.

Microsoft plans to support Dynamics GP 2013 on Azure, apparently in the same way as Dynamics NAV, Specht also reported. Microsoft plans to make the GP 2013 web client and web management console available in Azure as well. So far, the web management tools have been promoted for the efficiencies they will offer to hosting service providers who can take advantage of server savings with the multi-tenant database architecture of GP 2013.

• Capgemini (@Capgemini) posted A Close-up View of Microsoft [Windows] Azure Adoption, Business Decision-Makers are Driving Cloud Trends, “Capgemini’s view of the growth of public cloud as a vehicle for innovation” on 12/10/2012. From the summary:

Following the launch of our Business Cloud report, our Microsoft Azure Cloud report looks at the drivers for moving to the Public Cloud and in particular Microsoft Azure. Microsoft is one of our strategic partners and Azure is Microsoft’s Cloud platform.

Microsoft is the only vendor with the ability to deliver the same platform (Server, SQL and Azure) and solutions (Exchange, SharePoint, Lync and Dynamics CRM) across public cloud, private cloud and on premise.

Titled A Close-up View of Microsoft Azure Adoption, Business Decision-Makers are Driving Cloud Trends, the report unveils interesting facts about Azure and Cloud adoption by organizations. The most striking group of findings, according to the report, is about the level of interest in Azure, with more than three in five executives saying they had already evaluated Azure, and over half have made it part of their Cloud strategy.

Table of Contents:

Introduction 3
Key Trends 4
Rates of evaluation and adoption of Azure are high 5
The business is in the driving seat 9
Key drivers for selection of Azure 14
Azure addresses trust issues 14
Obstacles to Cloud adoption 15
Trusting data to the Cloud? 19
Azure addresses growing complexity 21
The role of the CIO 23
Cloud adoption strategy and drivers 24
Adoption strategy 25
Business and IT drivers 26
Conclusions 29
Appendix A. About the survey 30
Sample 30
Data collection 30

Capgemini appears to have missed the branding memo: The product name is Windows Azure, not Microsoft Azure. Viewing or downloading the report requires free registration.

Thanks to Gaurav Mantri (@gmantri) for the heads-up.

• Business Wire reported Garantia Data to Offer the First Redis Hosting Service on Windows Azure Cloud in a 12/13/2012 press release:

Garantia Data, a provider of innovative in-memory NoSQL cloud services, today announced the availability of its Redis Cloud and Memcached Cloud database hosting services on the Windows Azure cloud platform. Garantia Data’s services will provide thousands of developers who run their applications on Windows Azure with virtually infinite scalability, high availability, high-performance and zero-management in just one click.

Used by both enterprise developers and cutting-edge start-ups, Redis and Memcached are open source, RAM-based, key-value memory stores that provide significant value in a wide range of important use cases. Garantia Data’s Redis Cloud and Memcached Cloud are reliable and fully-automated services for running Redis and Memcached on the cloud – essentially freeing developers from dealing with nodes, clusters, scaling, data-persistence configuration and failure recovery.

“We are happy to be the first to offer the community a Redis architecture on Windows Azure,” said Ofer Bengal, CEO of Garantia Data. “We have seen great demand among .Net and Windows users for scalable, highly available and fully-automated services for Redis and Memcached. Our Redis Cloud and Memcached Cloud provide exactly the sort of functionality they need.”

"We're very excited to welcome Garantia Data to the Windows Azure ecosystem," said Rob Craft, Senior Director Cloud Strategy at Microsoft. "Services such as Redis Cloud and Memcached Cloud give customers the production, workload-ready services they can use today to solve real business problems on Windows Azure.”

Redis Cloud scales seamlessly and infinitely, so a Redis dataset can grow to any size while supporting all Redis commands. Memcached Cloud offers a storage engine and full replication capabilities to standard Memcached. Both provide true high-availability, including instant failover with no human intervention. In addition, they run a dataset on multiple CPUs and use advanced techniques to maximize performance for any dataset size.

Pricing and Availability

Garantia Data is currently offering its Redis Cloud and Memcached Cloud services free of charge to early adopters in the US-East and US-West Azure regions.

About Garantia Data

Garantia Data offers web companies fully automated cloud services for hosting their Redis and Memcached datasets in a reliable, scalable and fail-safe manner. The company’s Redis Cloud and Memcached Cloud services completely free developers from dealing with nodes, clusters, scaling, data-persistence configuration or failure recovery. They also provide high-availability and superior performance as measured by several benchmark tests. Privately held and backed by a group of super angels, Garantia Data’s team of software veterans have over 100 years aggregate experience in high-speed networking, application acceleration and cloud computing with a proven track record in transforming innovative ideas into viable and valuable products. For more information on the company visit: / twitter:

Dhananjay Kumar (@debug_mode) described Installing Windows Azure SDK on Visual Studio 2012 in a 12/12/2012 post:

In this post we will do a walkthrough of installing the Windows Azure SDK on Visual Studio 2012. If the Windows Azure SDK is not installed on your machine, then on launching Visual Studio 2012 and selecting the Cloud project template, you will get the option Get Windows Azure SDK for .NET. Click on that.

You will get an option to Download Windows Azure SDK for .NET

Next you will be navigated to the Windows Azure download page. Click on Install the SDK.

Choose Visual Studio 2012 to start installation

You will be prompted with a confirmation box. Choose Run in the confirmation box.

You now need to click the Install button to start installing.

Make sure all the prerequisites are installed and accept the terms and conditions.

The Windows Azure SDK will start installing. After successful installation, on launching Visual Studio 2012 and selecting the Cloud project tab, you will get the Windows Azure Cloud Service project template to start working with Windows Azure projects.

In this way you can install Windows Azure SDK on Visual Studio 2012. I hope you find this post useful.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Michael Washington (@ADefWebServer) described Retrieving The Current User In The LightSwitch HTML Client in an 11/16/2012 post:


In the Visual Studio LightSwitch HTML Client, extra steps are required to determine who the currently logged in user is.


The first step is to turn on authentication in the LightSwitch project.


We create a UserName field. Notice that it is Required. This is the thing that will cause a problem (demonstrated later).


We select the Entity (table), then we select Write Code, then the Inserting method.

We use the following code for the method:

        partial void PromiseOrdersSet_Inserting(PromiseOrders entity)
        {
            // Set the Username 
            entity.UserName = this.Application.User.Name;
        }

We also set the updating method:

        partial void PromiseOrdersSet_Updating(PromiseOrders entity)
        {
            // Set the Username 
            entity.UserName = this.Application.User.Name;
        }

We need to do this to protect the OData service points. Only setting this on the client side code (shown later) is not enough.

A user could access the OData service point directly and alter the value. Using the code above prevents this.


With the LightSwitch Silverlight Client that would be enough and it would just work.

However, with the LightSwitch HTML Client, the UserName is not populated when we run the application.


To fix this, we first switch to File View.


We add a page to the Server project using the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace LightSwitchApplication.Web
{
    public class GetUserName : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            using (var serverContext = LightSwitchApplication.Application.CreateContext())
            {
                context.Response.ContentType = "text/plain";
                // Write the name of the currently logged-in user to the response
                context.Response.Write(serverContext.Application.User.Name);
            }
        }
        public bool IsReusable
        {
            get { return false; }
        }
    }
}
(this code uses the new serverContext API)


We switch back to Logical View, select the Entity, Client (tab), Write Code, and then the created method.


We add the following code:

    $.ajax({
        type: 'post',
        data: {},
        url: '../web/GetUserName.ashx',
        success: function success(result) {
            entity.UserName = result;
        }
    });
We run the project:


The UserName is now retrieved.

LightSwitch Help Website Articles
LightSwitch Team HTML and JavaScript Articles
Download Code

The LightSwitch project is available at (you must have HTML Client Preview 2 or higher installed to run the code.)

Paul van Bladel (@paulbladel) described An isolated storage approach for client side caching for feeding a LightSwitch custom control in a 12/13/2012 post:


Silverlight has a very nice feature: isolated storage. This could be used for caching data client side. Unfortunately, this feature cannot be used in a normal LightSwitch grid, because the data binding of the LightSwitch grid is focused on the viewmodel and retrieves the data from the server. Nonetheless, client side caching is a useful technique when dealing with custom controls in LightSwitch (e.g. a treeview control).

What’s the big picture?

Imagine my application needs a very big data tree which is updated only overnight (via a process separate from my LightSwitch app). I want to be able to search the tree in a very comfortable (read: fast) way.

In that scenario, client side caching can become your friend. I’ll focus in this blog post solely on the client side caching part, and not on the integration with a treeview control. I’m aware that this makes things a bit abstract, but I’ll soon cover the treeview control integration in a follow-up post.

What kind of low-level client side caching support do we need?

Well, … in LightSwitch terms client side caching is about being able to serialize data in the form of an IEnumerable<IEntityObject>. Obviously, we need a “serialization technology” which will impose restrictions on how we will serialize the data. The solution I have in mind should be usable in a generic way, so, … it’s kind of IsolatedStorageEntityCollectionCache. Ok, let’s use that as class name :)

The most obvious choice for a serialization mechanism is the DataContractSerializer. This means we need a dedicated class describing the format of our serialized data. Of course, we want to be able to map the structure of our incoming data to the structure in the DataContract that is necessary for working with the DataContractSerializer.

For our treeview solution we have in mind, this would mean:

    public class CachedDepartment
    {
        public int Id { get; set; }
        public int? ParentId { get; set; }
        public string Name { get; set; }
    }

This boils down to requirement 1: we need to be able to map our incoming data (that we want to serialize) to a specific datacontract.

Our second requirement is more straightforward: we want to be able to specify a name for our serialized file (our app has potentially multiple client side caches).

Finally, we want our cache to be tunable in the sense that we can inject the algorithm used to tell us whether the cache has expired (e.g. specify that the cache expires after 2 hours, or any other algorithm).

This means we’ll instantiate our client side cache as follows:

_isolateStorage = new IsolatedStorageEntityCollectionCache<CachedDepartment>(
    "deps.xml",
    entityObject => new CachedDepartment
    {
        Id = (int)entityObject.Details.Properties["Id"].Value,
        ParentId = (int?)entityObject.Details.Properties["ParentId"].Value,
        Name = entityObject.Details.Properties["Name"].Value.ToString()
    },
    isolatedStorageFileCreationDateTimeOffset =>
    {
        bool result = false;
        if (isolatedStorageFileCreationDateTimeOffset >= DateTimeOffset.Now.AddHours(-2))
        {
            result = true;
        }
        return result;
    });

So, this is based on the following constructor:

public IsolatedStorageEntityCollectionCache(
            string isolatedStorageFileName,
            Func<IEntityObject, DestinationType> mappingSelector,
            Func<DateTimeOffset, bool> expiryDateCalculation)
{
            _mappingSelector = mappingSelector;
            _isolatedStorageFileName = isolatedStorageFileName;
            _expiryDateCalculation = expiryDateCalculation;
}

Ok, I admit, injecting lambda expressions in the constructor is a bit academic.

  • Our first parameter allows us to inject a mapping method (for mapping the data to be cached to a DataContract used during serialization).
  • The second parameter is our file name in the isolated storage and
  • The third parameter is again a Func lambda expression representing the algorithm for calculating the expiry date of our cache.

The full cache provider goes as follows:

using System;
using System.Collections.Generic;
using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;
using System.Linq;
using System.Reflection;
using Microsoft.LightSwitch;

namespace SilverlightClassLibrary
{
    public class IsolatedStorageEntityCollectionCache<DestinationType>
    {
        private Func<IEntityObject, DestinationType> _mappingSelector;
        private Func<DateTimeOffset, bool> _expiryDateCalculation;
        private string _isolatedStorageFileName;

        public IsolatedStorageEntityCollectionCache(
            string isolatedStorageFileName,
            Func<IEntityObject, DestinationType> mappingSelector,
            Func<DateTimeOffset, bool> expiryDateCalculation)
        {
            _mappingSelector = mappingSelector;
            _isolatedStorageFileName = isolatedStorageFileName;
            _expiryDateCalculation = expiryDateCalculation;
        }

        public IEnumerable<DestinationType> RetrieveFromIsolatedStorage()
        {
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(_isolatedStorageFileName, FileMode.Open, isf))
                {
                    DataContractSerializer serializer = new DataContractSerializer(typeof(List<DestinationType>));
                    IEnumerable<DestinationType> cachedCollection = (serializer.ReadObject(isfs)) as IEnumerable<DestinationType>;
                    return cachedCollection;
                }
            }
        }

        public IEnumerable<DestinationType> StoreInIsolatedStorageAndReturnData(IEnumerable<IEntityObject> data)
        {
            IEnumerable<DestinationType> cachedData = data.Select(_mappingSelector).ToList<DestinationType>();

            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(_isolatedStorageFileName, FileMode.OpenOrCreate, isf))
                {
                    DataContractSerializer serializer = new DataContractSerializer(typeof(List<DestinationType>), getKnownTypes<DestinationType>());
                    serializer.WriteObject(isfs, cachedData);
                    return cachedData;
                }
            }
        }

        public bool NonExpiredFileExistsInIsolatedStorage()
        {
            bool result = false;
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (isf.FileExists(_isolatedStorageFileName))
                {
                    DateTimeOffset creationDateExistingFile = GetIsolatedStorageFileDateTime();
                    result = _expiryDateCalculation(creationDateExistingFile);
                }
            }
            return result;
        }

        public void DeleteFileInIsolatedStorage()
        {
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (isf.FileExists(_isolatedStorageFileName))
                {
                    isf.DeleteFile(_isolatedStorageFileName);
                }
            }
        }

        private IEnumerable<Type> getKnownTypes<T>()
        {
            var knownTypes = new List<Type>();
            return knownTypes;
        }

        private DateTimeOffset GetIsolatedStorageFileDateTime()
        {
            DateTimeOffset result = DateTimeOffset.MinValue;
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (isf.FileExists(_isolatedStorageFileName))
                {
                    result = isf.GetCreationTime(_isolatedStorageFileName);
                }
            }
            return result;
        }

        public void IncreaseStorage(long spaceRequest)
        {
            using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
            {
                // the original body was truncated here; IncreaseQuotaTo is the
                // standard Silverlight call for growing the isolated storage quota
                isf.IncreaseQuotaTo(isf.Quota + spaceRequest);
            }
        }
    }
}
As said, I’ll focus in a next post on how to use this for a cached treeview control, but I’ll already give an introduction on how to use it here.

Since we are speaking about a custom control, the following code can be used in the code-behind of the custom control. We have already shown how to instantiate our IsolatedStorageEntityCollectionCache.

The next purpose is to use our cached data for binding them to our custom control:

if (_isolateStorage.NonExpiredFileExistsInIsolatedStorage())
{
    var result = _isolateStorage.RetrieveFromIsolatedStorage();

    _dataItemCollection = new DataItemCollection<DataItem>();
    // this _dataItemCollection is specific to our treeview implementation:
    // we need to massage the cached data into a structure optimized for binding to our treeview control
    this.DataContext = _dataItemCollection;
}
else
{
    _screen.Details.Dispatcher.BeginInvoke(() =>
    {
        // no valid cache: trigger a data load via the LightSwitch view model
    });
}

Basically, what we’re doing here is checking whether our cache contains data (under the cover, it is checked whether the cache has expired). If there is no non-expired cache, we’ll trigger a data load (in my case via the LightSwitch view model).

When our data comes in, we want to store these data in our cache:

public void DepartmentTree_Loaded()
{
    this.Dispatcher.BeginInvoke(() =>
    {
        IEnumerable<IEntityObject> departmentCollection = _screen.Details.Properties["DepartmentFlatTrees"].Value as IEnumerable<IEntityObject>;

        // store the freshly loaded data in the isolated storage cache,
        // then bind the massaged collection to the control
        this.DataContext = _dataItemCollection;
    });
}

Given the asynchronous nature of loading data, our cache infrastructure is distributed across two methods.


The above could be a useful base for client-side caching in LightSwitch for a dedicated custom control.

Joe Binder reported The Cosmopolitan shell and theme source code is released! in a 12/13/2012 post to the Visual Studio LightSwitch Team blog:

We’ve released the source code for the Cosmopolitan shell and theme, allowing you to use the default LightSwitch 2012 shell and theme as starting points for your own custom shells and themes.

You can download the source here.

Custom LightSwitch shells and themes are topics that crop up commonly on customer visits and in the forums, yet the complexity and cost of building a custom shell or theme from scratch can outweigh the benefit in some situations: often small, visual tweaks are all that’s desired from a custom shell or theme. We created the Cosmopolitan shell and theme with the intent that it could serve as a working, easily customizable sample. The code and XAML are structured to facilitate making incremental changes to our default shell.

With all of the LightSwitch-specific plumbing already present and working in the sample, building a custom shell or theme is now a largely Silverlight-intensive task, with most effort focused on the specific user interface and experience required of your shell.


If you’re interested in building a custom theme or shell with the Cosmopolitan source, you’ll need the following on your machine:

If you’ve never built a LightSwitch extension, you may also want to take a look at the official documentation on LightSwitch extensions.

Whoa, that’s a lot of code!

This is the complete source code for the shell and theme we ship in the box, so there is a lot of functionality packed into the project. The volume of XAML and code might be a bit overwhelming. For the most part, though, you can start by looking at the root XAML files for the shell and theme and drill in from there. The root shell and theme files are as follows:

  • ~/Presentation/Shells/CosmopolitanShell.xaml: This is the root user control that describes the shell layout and appearance. Want to move the command bar to the top of the shell and put the navigation menu on the side? Start here.
  • ~/Presentation/Themes/CosmopolitanTheme.xaml: This is the root resource dictionary that merges in all of the styles used by LightSwitch. While there are no styles defined in this file, you can follow the merged dictionaries to track down the style you want to change. We’ve tried to group related styles into well-named files, too.
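As a sketch of what an incremental tweak looks like, a custom theme can merge the shipped Cosmopolitan dictionaries wholesale and then override individual resources; the assembly path and resource key below are hypothetical stand-ins, not names confirmed by the sample:

```xml
<!-- MyCustomTheme.xaml: reuse the Cosmopolitan styles, then override selected resources.
     The Source path and the ScreenBackgroundBrush key are illustrative only. -->
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
  <ResourceDictionary.MergedDictionaries>
    <!-- pull in the stock theme so only the deltas live in this file -->
    <ResourceDictionary Source="/MyThemeExtension;component/Presentation/Themes/CosmopolitanTheme.xaml" />
  </ResourceDictionary.MergedDictionaries>

  <!-- a resource defined here wins over the same key in the merged dictionaries -->
  <SolidColorBrush x:Key="ScreenBackgroundBrush" Color="#FF1E1E1E" />
</ResourceDictionary>
```

Keeping overrides in a separate dictionary like this makes it easy to diff your changes against the shipped source when a new version of the sample is released.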

We’re putting the finishing touches on a multi-part blog series that will walk you through the process of customizing the Cosmopolitan shell and theme and include some tips-and-tricks for debugging and testing your custom shells and themes. So if all of the source code is still a bit much, stay tuned!

If you have any questions on the Cosmopolitan shell and theme source, head over to the LightSwitch Extensibility Forums.

No significant articles today


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Michael Washam (@MWashamMS) described New Windows Azure PowerShell Update – December 2012 on 12/14/2012:

The Windows Azure PowerShell team has just put out an update.

The first cmdlet I want to call out, because it is close to my IaaS-focused heart, is Add-AzureVhd.

If you have had the pleasure of uploading VHDs for IaaS before using CSUpload, you know the old tool was pretty cumbersome.

Now uploading VHDs for onboarding virtual machines is simple(r).

In its simplest form, you specify the local path to the VHD and the destination storage account URL:

Select-AzureSubscription 'mysubscription' 
Add-AzureVhd -LocalFilePath 'D:\VMStorage\SP2013VM1.vhd' -Destination ''

Once the upload has completed you can add the VHD to the disk repository by using the following command:

Add-AzureDisk -DiskName 'SP2013VM1OS' -MediaLocation '' -OS Windows

If you want to upload only a data disk, just omit -OS Windows.
This cmdlet also supports uploading differencing disks to patch VHDs already in storage. You can specify -BaseImageUriToPatch as the target VHD to apply the differencing disk to.

Once the disk is loaded, boot the virtual machine from it by specifying the disk name when configuring the VM.


If you prefer to provision from PowerShell:

New-AzureVMConfig -DiskName 'SP2013VM1OS' -InstanceSize Medium -Name 'SP2013VM1' | 
	Set-AzureSubnet -SubnetNames 'AppSubnet' | 
	New-AzureVM -ServiceName 'sp2013svc1' -VNETName 'HybridVNET' -AffinityGroup 'WestUSAG'

One potential regression I do want to call out in the IaaS space is a change to Get-AzureVMImage.
The code below formerly worked but now no longer returns a value:

# Previous functionality 
(Get-AzureVMImage)[1].ImageName     # formerly returned a value
Get-AzureVMImage | Select ImageName # formerly returned values

If your scripts did something similar, you will need to use

Get-AzureVMImage | ft ImageName 

or store the specific image in a variable for later use.

Another key set of additions to the Windows Azure PowerShell Cmdlets:

We finally have the ability to directly manage ServiceBus namespaces from the command line. From a dev-ops perspective, this one is HUGE.

  • New-AzureSBNamespace – Create a new Windows Azure ServiceBus namespace
  • Get-AzureSBLocation – Get the Windows Azure regions that may be used to create new Windows Azure ServiceBus namespaces
  • Get-AzureSBNamespace – Get information about existing Windows Azure ServiceBus namespaces
  • Remove-AzureSBNamespace – Delete a Windows Azure ServiceBus namespace and all associated objects

Complete list of the cmdlets in the December release:
(Note the Windows Azure SQL Database cmdlets made it back into the official release in November)


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Paul Miller (@PaulMiller) spun Hewlett Packard: a tale of many clouds in a 12/13/2012 post to his Clouds of Data blog:

Hewlett Packard used its Discover event in Frankfurt last week to reassert the company’s cloud credentials. Public, private, hybrid; HP is painting pictures that encompass them all, whilst seeking to protect hardware revenues and reassure conservative executives at some of its largest and most profitable customers. But HP has been here before, making bold claims and telling people what they wanted to hear about an HP cloud upon which enterprises could depend. This time, will the company deliver?

Earlier this year, satirical news site The Onion took a cruel but funny swipe at HP’s cloud pretensions. HP, the sketch suggested, had the answers, the technology, and a lot of cloud. The company has done — and continues to do — a lot right in this space, but it really did bring this derision upon itself. Mixed messaging, repeated announcements of amazing new cloud services that never quite saw the light of day, an endless stream of apparent strategy U-turns that must surely have left long-time HP executives as dizzy as those trying to understand their intentions. None of this helped HP.

But now, Windows Azure is apparently behind us. PalmOS (or whatever it’s called these days) is no longer a glue to bind hardware, peripherals, software and data together. Amazon is an inevitable piece of the whole. And at HP, the new story is one (more or less) of an OpenStack public cloud called HP Cloud (or HP Public Cloud), a VMware private cloud called Cloud System, and a professional services sell called Managed Cloud for Enterprise (which is messily spread across large swathes of HP’s dreadful website, with no obvious landing page to link here). [Emphasis added.]

A public cloud

The biggest cloud news out of Discover was probably the General Availability (at last) of HP’s OpenStack-powered public cloud offering. In keynotes and workshops, it was somewhat surprising to see the extent to which OpenStack and other enabling technologies were not mentioned. This was HP’s cloud, and the implication was clearly that HP know-how was what made it tick. HP hardware, HP software, HP cleverness. None of the ‘Intel Inside’ co-branding, Microsoft Diamond Sponsor loveliness or VMware strategic partner rhetoric for this open source project, it seems. But, more relevantly, also none of the recognition that other named open source projects like the various Linux distributions do receive from HP.

Given the rather raw state of some OpenStack components, HP engineers have been busy stitching pieces together, but I would have expected HP to be telling more of a story about portability, about interoperability, and about the breadth and depth of the OpenStack community that customers would be joining. That story wasn’t told, and you had to know where to look to find much mention of the elusive OpenStack at all.

One place, it must be said, where the company was far more forthcoming was in the private Coffee Talks arranged for us by the team at Ivy. In frank cloud discussions such as those with Christian Verstraete, Chris Purcell, Florian Otel and others, far more of the detail — and rationale — was laid on the table.

Pricing is competitive, and it will be interesting to see how HP moves forward here. HP’s public cloud makes plenty of sense for enterprise customers already using HP kit and services elsewhere. But will a startup or a non-customer choose the HP Cloud in preference to Amazon or Google or Rackspace?

They might, if the messaging is right. German cloud analyst René Büst asserted in Frankfurt that “the next Instagram would never choose to start and grow on the HP Cloud”, as Amazon has all the mind-share in the startup community. Does HP care enough about the world beyond existing enterprise accounts to accept René’s challenge and entice that next cool startup? Is it, frankly, worth their while when their entire selling and support machine is geared toward people in suits who value fancy lunches and a Christmas card far more than credit card sign-up and cost competitiveness?

A private cloud

HP’s private cloud offering has been around a little longer, but the company reiterated — and reinforced — messages originally delivered at the Las Vegas Discover a few months back; Cloud System supports ‘bursting’ of compute jobs from an enterprise’s own private cloud to external providers such as HP’s public cloud and Amazon. This is a capability that will become increasingly important as even the most conservative enterprise customers begin their gradual transition out of the data centre and into the cloud.

Whatever Amazon and Salesforce executives might say in public about “the false cloud” or the number of Fortune 100 companies happily doing something on their public cloud infrastructure, they and we know that this is going to be a long game. HP’s flagship customers will move. Eventually, they’ll move almost everything. But it will take a decade or more, and there’s plenty of time to sell a few more private clouds and an awful lot of servers and storage arrays before that day comes.

A recognition of Amazon

HP’s messaging no longer tries to persuade customers that it will always meet every one of their cloud needs. HP has products and solutions to offer, but it is recognising that it needs to fit into a complex mixed environment. The company also recognises that Amazon is an inevitable part of that environment, and that HP solutions need to augment and add value with respect to Amazon. Helping customers to use Amazon when it’s appropriate is a far more effective strategy, long term, than either denying Amazon’s existence or insisting that its solutions are not fit for enterprise consumption. Neither are true, and HP’s customers are smart enough to realise that.

The SLA is king, maybe

One area in which HP is trying to differentiate itself from Amazon is in terms of Service Level Agreements, and this should play well with an enterprise audience. Rather than necessarily worrying about what hardware cloud infrastructure runs on, or whether it’s located on-premise, in a known and audited off-premise location, or out there in the fuzziness of the unbounded public cloud, HP is telling a story that focuses far more upon level of service, level of resilience, etc. This makes a lot of sense.

I often don’t actually care whether data runs on my own machines or not. What I care about is whether or not my compliance and business requirements are being met. So instead of choosing public or private, off-premise or on, it makes a lot more sense to think about the business and compliance requirements that a particular solution helps me meet. One solution (on or off-premise) may be more secure, more robust, more disaster resilient, and it will come with an SLA (and a price tag) to reflect that. Another (again, on or off-premise) may be more suited to general crunching of less sensitive data. It’ll be more prone to failure, and cheaper. We tend to assume that our own data centre is the logical home of the former, and that the public cloud is a pretty cost-effective way to handle the latter. That’s not necessarily true, and that’s why it’s refreshing to at least begin to think in more nuanced terms.

Unfortunately, although HP execs planted these ideas during their keynotes, the follow-up material quickly fell back into public v private, dodgy commodity kit v HP ‘enterprise grade’ hardware, etc. And that’s a shame.

Gartner’s Lydia Leong takes a deeper look at HP’s latest SLAs, and suggests that they may not be living up to their own rhetoric either. There’s plenty of work still to do in this area, and an effective means of differentiating service and value propositions is long overdue.

Dell goes the other way

HP uses OpenStack for the company’s public cloud, and VMware sits beneath their private offerings. Speaking at Dell World this week, Michael Dell announced that his company is doing the exact opposite; Dell’s existing VMware-powered public cloud is to be joined by a private cloud offering powered by OpenStack.

The public and private offerings of HP and Dell certainly aren’t directly comparable, but it is interesting that the two companies have reached such superficially odd decisions. It even raises the prospect that a customer of HP’s private cloud may find it easier to move to Dell’s public cloud than to HP’s, and that a customer of Dell’s private cloud may find it easier to move workloads to HP’s public cloud than to stick with Dell. Odd at best, this should be raising eyebrows in both Round Rock and Palo Alto.

Will the Converged Cloud actually, you know, Converge?

HP has a lot to say about convergence, both in terms of their hardware business but also in the cloud. And yet, it can be surprisingly difficult to see how the public and private pieces of the HP cloud portfolio really fit together. More often than I’d have expected, HP staffers discussing either the public or private cloud offerings spoke as if theirs was the only cloud in HP-land. A slip of the tongue once, or perhaps twice, but this was repeated again and again and again in Frankfurt. The joined-up story, and the reality of customers starting in either HP Cloud or Cloud System before realising a need to embrace parts of the other doesn’t seem to be getting through on the ground.

HP is a big ship, with some smart people and some great technology. But if it doesn’t tell a single — compelling — story and back it up with an attractive business model, it’s toast.

I can’t remember who it was, but someone in Frankfurt remarked in passing that HP would come through its current troubles “because it had technical chops.” Sadly for HP, that is simply not true. You can have the best technology in the world. But without a defined (or creatable) market requirement, a viable business proposition, and some credible messaging, all of that amazing technology is just some very expensive scrap metal. And a fatal red stain, spreading across the balance sheet.

HP has the technical pieces. It has the people pieces. It has some of the business model pieces. It has parts of the compelling story. It’s time the company joined those together credibly, filled in the gaps, and stopped shooting itself in the foot.

At least starve The Onion of material, so its writers have to try a little harder next time.

Disclosure: acting on behalf of Hewlett Packard, Ivy Worldwide invited me to Discover and covered travel and expenses associated with the trip. There was no requirement that I write about HP, and no requirement that any coverage be favourable.

Image by Flickr user Jose Roberto V Moraes

Related articles


No significant articles today

<Return to section navigation list>

Cloud Security, Compliance and Governance

Christopher Budd (@ChristopherBudd) posted 2013 Predictions in Cloud Security to the Trend Cloud Security blog on 12/13/2012:

What are some things that we should be watching out for around Cloud Security in 2013?

This time every year, Raimund Genes, Trend Micro’s Chief Technology Officer (CTO), and his colleagues here take stock of what they’ve seen in the past year and what they think we can expect in the coming year. For 2013, in the area of cloud security, Raimund thinks that increasing adoption of the cloud poses specific risks for customers’ data and the Internet more broadly.

In terms of risks to customers of cloud-based services, Raimund outlines his belief that data protection should be the key area of focus, since IT Administrators may find their on-premises data protections don’t migrate effectively to the cloud. Administrators will need to evaluate cloud-based solutions fully and adapt their data protection strategies to account for the new risks this introduces.

Meanwhile, it’s not just IT Administrators who will move to the cloud for increased convenience and cost savings. Providers of cloud-based services will likely find their platforms being used even more to launch attacks, control botnets, and provide storage for stolen data. Cloud-based service providers should expect to spend more time and money fighting to defend their services from malicious use.

When you combine Raimund’s predictions around the cloud with his predictions around consumerization, it’s clear that threats have fully embraced the post-PC era. Take some time to familiarize yourself with the likely threats in the coming year by reading the report, seeing Raimund’s thoughts on his blog, and hearing him speak directly in his annual video podcast.

What do you think of these predictions? Will 2013 bring an increased focus on data risk?

<Return to section navigation list>

Cloud Computing Events

Paras Doshi (@paras_doshi, pictured below) reported on 12/10/2012 Azure PASS VC Next meeting: Kung Fu Migration to Windows Azure SQL Database to be held 12/19/2012 from 9:00 to 10:00 AM (rescheduled from 12/13/2012):

Speaker: Scott Klein, Technical Evangelist Microsoft

Summary: As cloud computing becomes more popular and cloud-based solutions become the norm rather than the fringe, the need to efficiently migrate your database is crucial. This demo-filled session will discuss the tips and tricks, methods and strategies for migrating your on-premises SQL Server databases to Windows Azure SQL Database, AKA SQL Azure. Focusing primarily on SQL Server Data Tools and the DAC Framework, this session will focus on how these tools can make you a kung-fu migration master.

About Scott: Scott Klein is a Corporate Technical Evangelist for Microsoft focusing on Windows Azure SQL Database (AKA SQL Azure) and related cloud-ready data services. His entire career has been built around SQL Server, working with SQL Server since the 4.2 days. Prior to Microsoft he was a SQL Server MVP for several years, then followed that up by being one of the first 4 SQL Azure MVPs. Scott is the author of over ½ dozen books for both WROX and APress, including Pro SQL Azure. He can be found talking about Windows Azure SQL Database and database scalability and performance at events large and small wherever he can get people to listen, such as SQL Saturday events, local SQL Server user groups, and TechEd.

Details at 

Download the calendar file:

How to Join Azure PASS VC’s?

If you want to stay updated on meeting announcements, please consider registering on PASS’s website and Joining our VC:

If you do not have a SQLPASS account:

a. Go to

b. Fill up the required information and register

Now, after successful login/registration, go to

a. Switch to the MyChapters section

b. Now, under virtual chapters, you will see a list of virtual chapters. Join the ones you are interested in!

my PASS my Chapter Azure VC

I look forward to seeing you at next Azure PASS VC’s meeting!

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) announced AWS Detailed Billing Reports on 12/13/2012:

At AWS we place a high value on the "Voice of the Customer." We do our best to listen and to learn, and to make sure that our plans line up with what we are hearing.


We've had a number of requests for better access to more detailed billing data. We took the first step earlier this year when we announced Programmatic Access to AWS Billing Data on this blog. That data, along with the AWS Billing Alerts, provided you with additional information about your AWS usage, along with a notification if your spending for the month exceeded a particular amount.

Today we are going a step further, providing you with new AWS Detailed Billing Reports. You now have access to new reports which include hourly line items. If you use a combination of On Demand and Reserved Instances, you will now be able to ensure that you have enough Reserved Instances to meet your capacity requirements for any given hour.
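The hourly line items make that kind of coverage check easy to script. As a sketch, with made-up usage numbers rather than an actual report format, the idea is to compare instance-hours per hour against the number of Reserved Instances owned and flag the hours that spilled over to On Demand:

```python
# Sketch: flag hours where running instances exceeded Reserved Instance
# coverage. The data below is made up; a real check would parse the
# hourly line items from the detailed billing report.

reserved_instances = 3

hourly_usage = {
    "2012-12-13 00:00": 2,
    "2012-12-13 01:00": 3,
    "2012-12-13 02:00": 5,  # exceeds RI coverage
    "2012-12-13 03:00": 4,  # exceeds RI coverage
}

# hours where some instances were billed at the On Demand rate
overflow = {
    hour: count - reserved_instances
    for hour, count in hourly_usage.items()
    if count > reserved_instances
}

for hour, extra in sorted(overflow.items()):
    print(f"{hour}: {extra} instance(s) billed On Demand")
```

If `overflow` is consistently non-empty, that is the signal to buy more Reserved Instances; if it is always empty with room to spare, you may own more RIs than you need.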

As part of this release, we are also making it easier for you to track and manage the costs associated with Reserved Instances when used in conjunction with AWS Consolidated Billing. This report provides an additional allocation model for linked accounts, with two key features -- RI Affinity and Unblended Rates:

  • With RI Affinity, the allocated benefit of the less expensive hourly rate for a Reserved Instance is now prioritized to the linked account that purchased the RI first.
  • Currently the consolidated bill uses a blended rate (the average of On Demand, Free Usage Tier, and Reserved Instance) when allocating costs to linked accounts. The detailed billing report will continue to include blended rate and cost information, but will now be supplemented with unblended rate and cost as additional columns.
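To make the blended/unblended distinction above concrete, here is a toy calculation with hypothetical hourly rates (not actual AWS pricing): a blended rate averages cost over every hour regardless of which linked account earned the RI discount, while unblended figures keep each account at the rate it was actually charged.

```python
# Toy illustration of blended vs. unblended cost allocation across
# two linked accounts on one consolidated bill. Rates are hypothetical.

on_demand_rate = 0.12  # $/hour, hypothetical
reserved_rate = 0.05   # $/hour, hypothetical

# hours of usage per linked account for one instance type;
# account_b owns a Reserved Instance, account_a runs On Demand
usage = {"account_a": 100, "account_b": 300}

unblended = {
    "account_a": usage["account_a"] * on_demand_rate,
    "account_b": usage["account_b"] * reserved_rate,
}

# blended: one average rate applied to every hour, whoever ran it
total_hours = sum(usage.values())
blended_rate = sum(unblended.values()) / total_hours
blended = {acct: hours * blended_rate for acct, hours in usage.items()}

print(unblended)
print(blended)
```

The totals match either way; what changes is how much of the RI discount lands on the account that bought the RI, which is exactly what the new unblended columns let you see.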

You can sign up for the Hourly reports on your AWS Billing Preferences page:

<Return to section navigation list>