Tuesday, April 30, 2013

Recurring HTTP 500/502 Errors with a Shared Windows Azure Website

On 4/25/2013 I created a new, free Windows Azure Website with WebMatrix and WordPress: Android MiniPCs and TVBoxes. While adding about 50 MB of content to it from the OakLeaf Systems blog, I exceeded the free Website quota, so I changed the deployment from Free to Shared. I then enabled Pingdom’s free website monitoring service for the site, which duplicates the monitoring of my live OakLeaf Systems Windows Azure Table Services Sample Project.

Two days later, Pingdom was reporting periodic HTTP 500 Internal Server and 502 Bad Gateway errors:


Following are Root Cause Analyses for the two errors shown in the above screen capture:



Here’s the Root Cause Analysis for an earlier HTTP 502 error:


Fortunately, the errors subsided on 4/29/2013, but I’m curious whether others have encountered this problem.

Saturday, April 27, 2013

Windows Azure and Cloud Computing Posts for 4/22/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

Top News this week: Scott Guthrie (@scottgu) announced Windows Azure: Improvements to Virtual Networks, Virtual Machines, Cloud Services and a new Ruby SDK on 4/26/2013. See the Windows Azure Infrastructure and DevOps section below.

• Updated 4/28/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, HDInsight and Media Services

• Richard Astbury (@richorama) contributed an AzureGraphStore .NET Client for Windows Azure Table Storage to GitHub on 4/28/2013:


An extension to the .NET SDK for Windows Azure Storage which adds a triple store abstraction over Table Storage.

In simple terms, it makes it easy to use Table Storage as a graph database.

Example Usage

Create a graph

var account = CloudStorageAccount.DevelopmentStorageAccount;
var graphClient = account.CreateCloudGraphClient();
var graph = graphClient.GetGraphReference("example");

Add triples to the graph

// insert subject, property and value directly:
graph.Put("Richard", "Loves", "Cheese");

// create a triple and insert it
var triple = new Triple("Richard", "Hates", "Marmite");
graph.Put(triple);

Query the graph

// query a single triple
var triple = graph.Get("Richard", "Loves", "Cheese").First();

// query using any combination of subject, property and value, i.e.
var triples = graph.Get(subject: "Richard");
var triples = graph.Get(property: "Loves");
var triples = graph.Get(value: "Cheese");
var triples = graph.Get(subject: "Richard", property: "Hates");
var triples = graph.Get(property: "Hates", value: "Marmite");

Delete triples from the graph:

graph.Delete("Richard", "Loves", "Cheese");

License: MIT
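AzureGraphStore’s actual schema isn’t shown above, but the general trick behind a triple store layered on a key-value table can be sketched as follows. This is an illustration only: the class, key layout, and separator are my own invention, not the library’s code. Each triple is written under three redundant keys so that any single bound term selects a partition, mirroring how Table Storage queries are cheap when the PartitionKey is known.

```python
SEP = "|"  # assumes subjects/properties/values don't contain "|"

def index_rows(s, p, o):
    """Three redundant (PartitionKey, RowKey) rows per triple, so any
    combination of bound terms can be answered with a prefix scan."""
    return [
        ("spo" + SEP + s, p + SEP + o),
        ("pos" + SEP + p, o + SEP + s),
        ("osp" + SEP + o, s + SEP + p),
    ]

class TinyTripleStore:
    """In-memory stand-in for a table of (PartitionKey, RowKey) rows."""

    def __init__(self):
        self.rows = set()

    def put(self, s, p, o):
        self.rows.update(index_rows(s, p, o))

    def get(self, s=None, p=None, o=None):
        """Return triples matching any combination of bound terms."""
        out = []
        for pk, rk in self.rows:
            if not pk.startswith("spo" + SEP):
                continue  # decode from one index; the others are copies
            subj = pk.split(SEP, 1)[1]
            prop, val = rk.split(SEP, 1)
            if (s is None or s == subj) and (p is None or p == prop) \
                    and (o is None or o == val):
                out.append((subj, prop, val))
        return out

    def delete(self, s, p, o):
        self.rows.difference_update(index_rows(s, p, o))
```

The point of the triple-key layout is that `Get(subject: …)`, `Get(property: …)`, and `Get(value: …)` all become efficient partition scans rather than full-table scans.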

Denny Lee (@dennylee) described Optimizing Joins running on HDInsight Hive on Azure at GFS in a 4/26/2013 post:


To analyze hardware utilization within their data centers, Microsoft’s Online Services Division – Global Foundation Services (GFS) is working with Hadoop / Hive via HDInsight on Azure.  A common scenario is to perform joins between the various tables of data.  This quick blog post provides a little context on how we managed to take a query from >2h to <10min, and the thinking behind it.


The join is a three-column join between a large fact table (~1.2B rows/day) and a smaller dimension table (~300K rows).  The size of a single day of compressed source files is ~4.2GB; decompressed is ~120GB.  When performing a regular join (in Hive parlance, a “common join”), the join managed to create ~230GB of intermediary files.  On a 4-node HDInsight on Azure cluster, taking a 1/6th sample of the large table for a single day of data, the query took 2h 24min.

SELECT colA, colB, … , colN
FROM FactTable f
LEFT OUTER JOIN DimensionTable d
ON d.colC = f.colC
AND d.colD = f.colD
AND d.colE = f.colE

Join Categories

Our options to improve join performance were noted in Join Strategies in Hive:


* sample data size (1/6 of the full daily dataset)


Base Query

As noted above, on just 1/6 of the data, the regular join above took 2h 24min.

Compressing the Intermediate Files and Output

As noted earlier, analysis determined that 230GB of intermediary files were generated.  Compressing the intermediate files (using the set commands below) improved the query performance (down to 1:24:38) and reduced the file bytes read and written.

set mapred.compress.map.output=true;
set mapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set hive.exec.compress.intermediate=true;

Note: currently HDInsight supports the Gzip and BZ2 codecs – we chose the Gzip codec to match the gzip-compressed source.

Configure Reducer Task Size

In the previous two queries, it was apparent that only one reducer was in operation, and increasing the number of reducers (up to a point) should improve query performance as well.  Configuring the number of reducers, as below, brought the query down to 0:21:39.

set hive.exec.reducers.bytes.per.reducer=25000000;
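For context on how this knob translates into parallelism: Hive estimates the reducer count roughly as input bytes divided by hive.exec.reducers.bytes.per.reducer, clamped to a maximum. A rough sketch with round numbers (the 1 GB figure is the historical default; check your cluster’s configuration):

```python
import math

def estimated_reducers(input_bytes, bytes_per_reducer, max_reducers=999):
    """Rough model of Hive's reducer estimate:
    ceil(input / bytes_per_reducer), clamped to [1, max_reducers]."""
    return min(max_reducers, max(1, math.ceil(input_bytes / bytes_per_reducer)))

four_gb = 4 * 1024 ** 3
print(estimated_reducers(four_gb, 1_000_000_000))  # historical 1 GB default: 5 reducers
print(estimated_reducers(four_gb, 25_000_000))     # 25 MB per reducer: 172 reducers
```

Dropping the per-reducer byte budget fans the same map output across far more reducers, which is exactly why the setting above helped.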

Full Dataset

While this improved performance, once we switched back to the full dataset, using the above configuration, it took 134 mappers and 182 reducers to complete the job in 2:01:56.   By increasing the number of nodes from four to ten, the query duration dropped down to 1:10:57.

Map Joins

The great thing about map joins is that it was designed for this type of situation – large tables joined to a small table.  The small table can be placed into memory / distributed cache.  By using the configuration below, we managed to take a query that took 1:10:57 down to 00:09:58.  Note that with map joins, there are no reducers because the join can be completed during the map phase with a lot less data movement.

set hive.auto.convert.join=true;
set hive.mapjoin.smalltable.filesize=50000000;

An important note: don’t forget the hive.mapjoin.smalltable.filesize setting.  By default it is 25MB, and in this case the smaller table was 43MB.  Because I had forgotten to set it to 50MB, all of my original map join tests had reverted to common joins.
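That pitfall is easy to catch up front with a trivial check before running the query (the 25 MB default is as stated in this post; verify it against your Hive version):

```python
def will_map_join(small_table_bytes, smalltable_filesize=25_000_000):
    """Mirrors Hive's auto-convert decision: the small side of the join
    must fit under hive.mapjoin.smalltable.filesize, or Hive silently
    falls back to a common join."""
    return small_table_bytes <= smalltable_filesize

mb = 1_000_000
assert not will_map_join(43 * mb)                        # 43 MB > 25 MB default: common join
assert will_map_join(43 * mb, smalltable_filesize=50 * mb)  # after raising the knob
```

The failure mode is silent, so a check like this (or inspecting the query plan for a map-join operator) is worth the few seconds it takes.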


By compressing intermediary / map output files and configuring the map join correctly (and adding some extra cores), we were able to take a join query that originally took >2h to complete and get it under 10min.  For this particular situation map joins were perfect, but it will be important for you to analyze your data first to see if you have any skews, whether the smaller table fits in memory, etc.

set mapred.compress.map.output=true;
set mapred.map.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;
set hive.exec.compress.intermediate=true;
set hive.auto.convert.join=true;
set hive.mapjoin.smalltable.filesize=50000000;


Other great references on Hive Map Joins include:


Many thanks to Kamen Penev, Pedro Urbina Escos, and Dilan Hewage.

Manu Cohen-Yashar (@ManuKahn) explained Uploading Large Files to Blob Storage in a 4/22/2013 post:

If you try to upload a large file (2 MB or larger) to blob storage, it is likely that you will get the following timeout exception: “StorageServerException : Operation could not be completed within the specified time.”

The solution is to do things in parallel.

Fortunately blob storage has a simple API for parallel upload.
blobClient.ParallelOperationThreadCount = 20;

To use it, you must also raise the maximum number of outgoing connections via ServicePointManager.DefaultConnectionLimit.
The following method will demonstrate that:

Code Snippet

public static void LoadLargeBlob(string storageAccountName, string storageAccountKey)
{
    ServicePointManager.DefaultConnectionLimit = 20;
    string storageConnectionString = string.Format(
        "DefaultEndpointsProtocol=https;AccountName={0};AccountKey={1}",
        storageAccountName, storageAccountKey);
    CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
    var blobClient = account.CreateCloudBlobClient();
    blobClient.ParallelOperationThreadCount = 20;
    // container names must be lowercase
    var container = blobClient.GetContainerReference("mycontainer");
    container.CreateIfNotExist();
    var blob = container.GetBlobReference("largeblob");
    blob.UploadFile("largefile");
}
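Under the hood, parallel blob upload works by splitting the file into blocks, uploading the blocks concurrently, and then committing the ordered block list. The pattern can be sketched as follows; `put_block` and `commit_block_list` are hypothetical stand-ins for the storage client’s operations, not the real API:

```python
import concurrent.futures

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks

def upload_blob_parallel(data, put_block, commit_block_list, threads=20):
    """Split `data` into blocks, upload them on a thread pool, then
    commit the ordered block list so the blob becomes visible atomically.
    `put_block(block_id, chunk)` and `commit_block_list(block_ids)` are
    hypothetical callables standing in for the storage client."""
    blocks = [(i, data[off:off + BLOCK_SIZE])
              for i, off in enumerate(range(0, len(data), BLOCK_SIZE))]
    with concurrent.futures.ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(put_block, i, chunk) for i, chunk in blocks]
        for f in futures:
            f.result()  # surface any per-block failure
    # blocks can finish in any order; the commit fixes the final order
    commit_block_list([i for i, _ in blocks])
```

Because the blob only appears once the block list is committed, a failed or partial upload never leaves a half-written blob visible; this is also why raising the connection limit matters, since each in-flight block consumes one outgoing connection.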

Hope this helps

Mingfei Yan (@mingfeiy) posted a Demo – how to create HLS and Smooth Streaming assets using dynamic packaging on 4/20/2013:

This blog post is a walk-through on how to create HLS and Smooth Streaming assets using dynamic packaging with Windows Azure Media Services (WAMS), by using the .NET SDK.

What is dynamic packaging?

Before talking about dynamic packaging, we have to mention the traditional way of doing things. If you want to deliver both HTTP Live Streaming and Smooth Streaming, you have to store both of them. You then stream HLS content to iOS devices and Smooth Streaming content to Windows 8, for instance. However, by using the dynamic packaging feature in WAMS, you only need to store an MP4 file in your storage, and we dynamically package the MP4 file into HLS or Smooth Streaming based on your client request. If the client needs an HLS stream, we package the MP4 into HLS on the fly and serve it out. In this case, you no longer need to store copies of both Smooth Streaming and HLS; hence we help you cut your storage cost at least in half. The diagram below demonstrates what I just described:

How dynamic packaging works in WAMS

How dynamic packaging works in WAMS


a. Besides having a Media Services account, you need to request at least one On-Demand Streaming Reserved Unit. You can do this in the portal by clicking on the Scale tab. For pricing details, please refer here.

Note that if you don’t have any on-demand streaming reserved units, you will still be able to access our origin server; however, the dynamic packaging feature won’t be available.  You could still compose Smooth Streaming and HLS URLs, but you won’t get back any content by accessing them.
On-demand Reserved Unit

b. Here is a sample WMV file, which you can download here.
c. This is the finished project if you find the following tutorial hard to follow.

1. Open Visual Studio 2012 and create a console application.
2. Add WindowsAzure.MediaServices through NuGet in your reference folder.
3. Add the following code to the Program class. You can retrieve your account key and account name from the portal. singleInputFilePath is where you store the source file, and outputPath is where we will place the file containing the final streaming URL information.

private static string accKey = "YOUR_ACCOUNT_KEY";
private static string accName = "YOUR_ACCOUNT_NAME";

private static CloudMediaContext context;
private static string singleInputFilePath = Path.GetFullPath(@"C:\tr\azure.wmv");
private static string outputPath = Path.GetFullPath(@"C:\tr");

4. Add in the following methods to upload asset and track your asset upload progress:

private static string CreateAssetAndUploadFile(CloudMediaContext context)
{
    var assetName = Path.GetFileNameWithoutExtension(singleInputFilePath);
    var inputAsset = context.Assets.Create(assetName, AssetCreationOptions.None);
    var assetFile = inputAsset.AssetFiles.Create(Path.GetFileName(singleInputFilePath));
    assetFile.UploadProgressChanged += new EventHandler<UploadProgressChangedEventArgs>(assetFile_UploadProgressChanged);
    assetFile.Upload(singleInputFilePath);
    return inputAsset.Id;
}

static void assetFile_UploadProgressChanged(object sender, UploadProgressChangedEventArgs e)
{
    Console.WriteLine(string.Format("{0}   Progress: {1:0}   Time: {2}", ((IAssetFile)sender).Name, e.Progress, DateTime.UtcNow.ToString(@"yyyy_M_d__hh_mm_ss")));
}

5. Add an encoding method which transcodes our WMV input file into MP4. Here are the various H.264 encoding presets available in Windows Azure Media Encoder.

private static IJob EncodeToMp4(CloudMediaContext context, string inputAssetId)
{
    var inputAsset = context.Assets.Where(a => a.Id == inputAssetId).FirstOrDefault();
    if (inputAsset == null) throw new ArgumentException("Could not find assetId: " + inputAssetId);
    var encodingPreset = "H264 Adaptive Bitrate MP4 Set SD 16x9";
    IJob job = context.Jobs.Create("Encoding " + inputAsset.Name + " to " + encodingPreset);
    IMediaProcessor latestWameMediaProcessor = (from p in context.MediaProcessors where p.Name == "Windows Azure Media Encoder" select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();
    ITask encodeTask = job.Tasks.AddNew("Encoding", latestWameMediaProcessor, encodingPreset, TaskOptions.None);
    encodeTask.InputAssets.Add(inputAsset);
    encodeTask.OutputAssets.AddNew(inputAsset.Name + " as " + encodingPreset, AssetCreationOptions.None);

    job.StateChanged += new EventHandler<JobStateChangedEventArgs>(JobStateChanged);
    job.Submit();

    return job;
}

static void JobStateChanged(object sender, JobStateChangedEventArgs e)
{
    Console.WriteLine(string.Format("{0}\n  State: {1}\n  Time: {2}\n\n", ((IJob)sender).Name, e.CurrentState, DateTime.UtcNow.ToString(@"yyyy_M_d__hh_mm_ss")));
}

6. The following steps differ from the traditional way of packaging assets. You will create a locator for your MP4, and just by appending either /manifest (for Smooth Streaming) or /manifest(format=m3u8-aapl) (for HLS), we will dynamically package your media assets into Smooth Streaming or HLS based on your URL request.

private static string GetDynamicStreamingUrl(CloudMediaContext context, string outputAssetId, LocatorType type)
{
    var daysForWhichStreamingUrlIsActive = 365;
    var outputAsset = context.Assets.Where(a => a.Id == outputAssetId).FirstOrDefault();
    var accessPolicy = context.AccessPolicies.Create(outputAsset.Name, TimeSpan.FromDays(daysForWhichStreamingUrlIsActive), AccessPermissions.Read | AccessPermissions.List);
    var assetFiles = outputAsset.AssetFiles.ToList();

    if (type == LocatorType.OnDemandOrigin)
    {
        var assetFile = assetFiles.Where(f => f.Name.ToLower().EndsWith(".ism")).FirstOrDefault();
        if (assetFile != null)
        {
            var locator = context.Locators.CreateLocator(LocatorType.OnDemandOrigin, outputAsset, accessPolicy);
            Uri smoothUri = new Uri(locator.Path + assetFile.Name + "/manifest");
            return smoothUri.ToString();
        }
    }

    if (type == LocatorType.Sas)
    {
        var mp4Files = assetFiles.Where(f => f.Name.ToLower().EndsWith(".mp4")).ToList();
        var assetFile = mp4Files.OrderBy(f => f.ContentFileSize).LastOrDefault(); // get largest file
        if (assetFile != null)
        {
            var locator = context.Locators.CreateLocator(LocatorType.Sas, outputAsset, accessPolicy);
            var mp4Uri = new UriBuilder(locator.Path);
            mp4Uri.Path += "/" + assetFile.Name;
            return mp4Uri.ToString();
        }
    }

    return string.Empty;
}
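The URL composition in that method is simple string arithmetic; as a standalone sketch (the locator path below is made up for illustration, and the suffix convention is the one described in this post):

```python
def streaming_urls(locator_path, ism_file_name):
    """Compose Smooth Streaming and HLS URLs from an origin locator path
    and the asset's .ism manifest file name, per the dynamic-packaging
    URL convention described above."""
    smooth = locator_path.rstrip("/") + "/" + ism_file_name + "/manifest"
    hls = smooth + "(format=m3u8-aapl)"
    return smooth, hls

smooth, hls = streaming_urls(
    "https://example.origin.mediaservices.windows.net/abc123/",  # hypothetical locator
    "azure.ism")
print(smooth)  # ends with /azure.ism/manifest
print(hls)     # ends with /manifest(format=m3u8-aapl)
```

The same origin locator thus serves both formats; only the suffix the client requests differs.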

7. Let’s add a utility method for writing the locator URLs into a file.

static void WriteToFile(string outFilePath, string fileContent)
{
    StreamWriter sr = File.CreateText(outFilePath);
    sr.WriteLine(fileContent);
    sr.Close();
}

The reason we want to write the locator URLs into a file is that in the portal you will only see the MP4 SAS URL, which points to storage. You won’t be able to grab the locator URLs unless you write them down.
8. Add the following lines to the main program to create the following workflow: upload WMV -> encode into MP4 -> create locators -> write the locator URLs for both Smooth Streaming and HLS into a file.

static void Main(string[] args)
{
    context = new CloudMediaContext(accName, accKey);
    string inputAssetId = CreateAssetAndUploadFile(context);
    IJob job = EncodeToMp4(context, inputAssetId);
    var mp4Asset = job.OutputMediaAssets.FirstOrDefault();
    string mp4StreamingUrl = GetDynamicStreamingUrl(context, mp4Asset.Id, LocatorType.Sas);
    string smoothStreamingUrl = GetDynamicStreamingUrl(context, mp4Asset.Id, LocatorType.OnDemandOrigin);
    string hlsStreamingUrl = smoothStreamingUrl + "(format=m3u8-aapl)";
    string content = "\n Mp4 Url:    \n" + mp4StreamingUrl + "\n Smooth Url: \n" + smoothStreamingUrl + "\n HLS Url:    \n" + hlsStreamingUrl;
    Console.WriteLine("\n Mp4 Url:    \n" + mp4StreamingUrl);
    Console.WriteLine("\n Smooth Url: \n" + smoothStreamingUrl);
    Console.WriteLine("\n HLS Url:    \n" + hlsStreamingUrl);

    string outFilePath = Path.GetFullPath(outputPath + @"\" + "StreamingUrl.txt");
    WriteToFile(outFilePath, content);
}

9. Now you can press F5 and run this program. Here is a screenshot of my console.
Console application
10. I have uploaded the whole project here; feel free to try it out yourself!

Additional resources and questions:

  • Technical blog: Dynamic Packaging and Encoding and Streaming Reserved Units by Nick Drouin
  • Ch9 video: Introduction to dynamic packaging by Nick Drouin
  • Question: What are supported input format and output format? Is MP4 the only input?

  • Answer: We support both the Smooth Streaming format and MP4 as input. For output, we generate Smooth Streaming and HLS v4. Please note that we don’t support encrypted content as source files, neither Storage Encryption nor Common Encryption.
  • Question: Could I use an existing Mp4 or Smooth Streaming file as input without encoding?

  • Answer: Yes. You can upload existing adaptive bitrate sets and validate them using the Media Packager. Here is an MSDN tutorial on validating your existing asset. Please check it out if you have questions.

The post Demo – how to create HLS and Smooth Streaming assets using dynamic packaging appeared first on Mingfei Yan.


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Carl Ots (@fincooper) described how to Use Mobile Services to easily schedule tasks in Windows Azure in a 4/23/2013 post:

If you want to schedule advanced CRON jobs to be run on Windows Azure, you should check out the Scheduler Add-on by Aditi. The add-on allows you to create custom scheduled tasks of any type using a simple REST API. Another way to schedule tasks in Windows Azure is to use Mobile Services.

Windows Azure Mobile Services is designed to be used as a turnkey backend for mobile applications, but it also features a super neat scheduled backend tasks capability.

With this feature, you can schedule any type of job, from a simple HTTP GET to more complicated ones. The job scripts are written in Node.js. This is by far the easiest way to schedule simple jobs in Windows Azure. Check out the video below to get started!

Carlos Figueira (@carlos_figueira) examined Large numbers and Azure Mobile Services in a 4/22/2013 post:

In any distributed system with different runtime environments, the way that data is represented in all nodes of the system, and transferred between them, can cause some problems if there is a mismatch between the nodes. In Azure Mobile Services we often have this issue - the server runtime runs on a JavaScript (or more precisely, node.js) engine, while the client can run in many different platforms (CLR managed code, Objective-C, Java, JavaScript, or any other client using the REST interface). With JavaScript - the mobile service runtime - there are two data types which usually cause problems: dates and numbers. Let's look at them in this post.

Date problems aren't exclusive to JavaScript - dealing with conversions between local time (what people would most like to see in their applications) and standard time (usually UTC, how we'd store data) has been a problem in many frameworks, including .NET, and many smart people have written about it. Beyond those framework-specific issues, the main consequence of Azure Mobile Services using JavaScript in the backend - where dates are represented as the number of milliseconds since the Unix zero date (1970-01-01T00:00:00.000 UTC) - is that dates with sub-millisecond precision are truncated, which is rarely a big problem.

Numbers, on the other hand, tend to cause problems in heterogeneous systems with JavaScript on one side and another language on the other. In JS, all numbers are represented as 64-bit (double-precision) floating point values. In the managed world, that means every number would be represented as a Double. But in managed code (and other strongly-typed languages), other numeric types exist and are often used (with good reason) in defining the data types used by the application. Integers (usually 32 and 64 bits, but also other sizes), single- and double-precision floating point numbers, and fixed-point (with fixed or arbitrary precision) numbers are represented by a large variety of types in different languages. That means there are many numbers which cannot be represented without loss of precision in JavaScript, so any time one of those numbers is sent from the client to the service (e.g., as part of an object being inserted into the database), when it’s sent back to the client its value will be different. For example, odd numbers beyond 2^53 cannot be represented as a 64-bit floating point value.
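You can see the 2^53 boundary directly in any language that uses IEEE 754 doubles, which is exactly the representation JavaScript uses for all numbers:

```python
# 2^53 is the last point where every integer is exactly representable
# as an IEEE 754 double; beyond it, odd integers get rounded away.
boundary = 2 ** 53

assert float(boundary) == boundary              # 2^53 itself is exact
assert float(boundary + 1) == float(boundary)   # 2^53 + 1 collapses onto 2^53
assert float(boundary + 2) == boundary + 2      # even neighbors still round-trip

print(int(float(boundary + 1)))  # 9007199254740992, not ...993
```

A 64-bit double has a 52-bit mantissa (plus an implicit leading bit), so above 2^53 consecutive integers are spaced more than 1 apart and every other value simply has no representation.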

So what do the client SDKs for Azure Mobile Services do when faced with numbers which can potentially be corrupted by going to the server? The answer depends on how the application is interacting with the SDK, or more specifically, the data types which are being stored into / retrieved from the backend service. In many languages there are two possible ways for an application to save data, so let’s look at how the SDK deals with numbers in each of them.

Typed mode

In the typed mode, we use “regular”, user-defined data types (e.g., User, Product, Order, TodoItem, etc.), and the SDK handles the serialization / deserialization of those types into / from the format used on the wire (JSON). The clients for managed platforms (Windows Store, Windows Phone, Desktop .NET 4.5) and Android both have this mode. JavaScript-based clients (where there really are no user-defined data types – and I’m not going here into the argument of prototypes versus real object-orientation) don’t have this mode (and it really doesn’t matter for this specific post, since there’s no difference in number representation between the JavaScript on the client and on the server). The iOS client SDK also doesn’t have it, since there’s no widely-used, generic serialization mechanism to translate between Objective-C @interfaces and JSON.

In the typed mode, the SDK does a lot of data manipulation under the covers, so it was coded in a way that, if data loss were to happen, an exception is thrown to the user. The idea is that the developer is trusting the SDK with its data, and we don’t want to corrupt it without warning the user. Let’s take the code below.

public sealed partial class MainPage : Page
{
    public static MobileServiceClient MobileService = new MobileServiceClient(
        "https://YOUR_SERVICE_HERE.azure-mobile.net/",
        "YOUR_APPLICATION_KEY" // elided in the original listing
    );

    public MainPage()
    {
        this.InitializeComponent();
    }

    private async void btnStart_Click_1(object sender, RoutedEventArgs e)
    {
        try
        {
            var table = MobileService.GetTable<Test>();
            Test item = new Test { Str = "hello", Value = (1L << 53) + 1 };
            await table.InsertAsync(item);
            AddToDebug("Inserted: {0}", item);
        }
        catch (Exception ex)
        {
            this.AddToDebug("Error: {0}", ex);
        }
    }

    void AddToDebug(string text, params object[] args)
    {
        if (args != null && args.Length > 0) text = string.Format(text, args);
        this.txtDebug.Text = this.txtDebug.Text + text + Environment.NewLine;
    }
}

public class Test
{
    public int Id { get; set; }
    public string Str { get; set; }
    public long Value { get; set; }
    public override string ToString()
    {
        return string.Format("Id={0},Str={1},Value={2}", Id, Str, Value);
    }
}

When the btnStart_Click_1 handler is invoked, the code tries to insert a typed item (Test) into a table with a long value which would be corrupted if the operation were to complete. Instead, the code throws the following exception:

     System.InvalidOperationException: The value 9007199254740993 for member Value is outside the valid range for numeric columns.

The validation ensures that integers have to fall in the range [-2^53, +2^53]; numbers outside that range are rejected, and the exception is thrown.

Now, what if you really want to use numbers beyond the allowed range? There are a few possibilities. In the .NET-based SDKs, you can actually remove the validation, which is implemented by a JSON.NET converter, by using code similar to the one below. Notice that this will cause data corruption, but if precision can be sacrificed for a wider range of numbers, then that’s an option.

var settings = MobileService.SerializerSettings;
var mspcc = settings.Converters.Where(c => c is MobileServicePrecisionCheckConverter).FirstOrDefault();
if (mspcc != null)
{
    settings.Converters.Remove(mspcc);
}
var table = MobileService.GetTable<Test>();
Test item = new Test { Str = "hello", Value = (1L << 53) + 1 };
await table.InsertAsync(item);
AddToDebug("Inserted: {0}", item);

Another alternative is to change the data type for the value. Double is represented exactly like numbers in the server, so all numbers that can be represented in the client can be transferred to the server and back. But double values may lose precision as the numbers grow big as well.

Yet another alternative is to use strings instead of numbers. With strings you can actually have arbitrary precision, but you may lose the ability to do relational queries on the data (unless you use some sort of zero-left-padding to normalize the values), and they take up more storage on the server.
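The zero-left-padding trick mentioned above works because, for fixed-width non-negative numbers, lexicographic (string) order agrees with numeric order. A short sketch, with the width chosen here as an assumption (any width covering your largest value works):

```python
WIDTH = 20  # enough digits for any unsigned 64-bit value

def encode(n):
    """Store a large non-negative integer as a fixed-width string so that
    string comparison (the only comparison a text column supports) agrees
    with numeric comparison, with no precision loss."""
    if n < 0:
        raise ValueError("this sketch covers non-negative values only")
    return str(n).zfill(WIDTH)

vals = [9007199254740993, 42, 1234567890123456789]
encoded = sorted(encode(v) for v in vals)
assert [int(s) for s in encoded] == sorted(vals)   # order preserved
assert int(encode(2 ** 53 + 1)) == 2 ** 53 + 1     # exact round-trip
```

Negative numbers need an extra offset or sign-encoding step to keep the ordering property, which is one reason this approach costs more thought (and storage) than a native numeric column.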

The main take away is that if you’re dealing with large numbers and user-defined types, there will be cases where those numbers won’t be able to be represented in the server. The client SDK will try its best to warn the user (via exceptions) that a data loss would occur, but there are alternatives if the application really requires large numbers to be stored.

Untyped mode

The second way for an application to exchange data with the service is via an “untyped” model, where instead of dealing with “user types”, the application works with simpler types (dictionaries, arrays, primitives) which map directly to JSON. The untyped model appears in different ways in different platforms:

Unlike the typed mode, where a conversion step turns the object into the JSON-like structure sent to the server, no such step is needed in the untyped mode. Therefore, we had to make a choice: validate that the numbers can be faithfully represented on the server and return an error (an exception or the appropriate error callback), paying the cost of that validation for a scenario which isn’t too common; or bypass the validation and let the user (in the scenarios where it’s applicable) deal with large numbers themselves. After some internal discussion, we made the second choice (I don’t think there’s really a right or wrong approach, just a decision that had to be made - but if you disagree, you can always send a suggestion on our UserVoice page and we can consider it for updates to the client SDKs).

What that decision entails is that if you try to run the following code, you won’t get any error:

JObject item = new JObject();
item.Add("Str", "hello");
item.Add("Value", 1234567890123456789L);
var table = MobileService.GetTable("Test");
var inserted = await table.InsertAsync(item);
AddToDebug("Original: {0}", item);
AddToDebug("Inserted: {0}", inserted);

What will happen instead is that the output will be shown as follows:

Original: {
  "Str": "hello",
  "Value": 1234567890123456789
}
Inserted: {
  "Str": "hello",
  "Value": 1234567890123456800,
  "id": 36
}
Similarly, this Objective-C code shows the same result:

- (IBAction)clickMe:(id)sender {
    MSTable *table = [client tableWithName:@"Test"];
    NSDictionary *item = @{@"Str" : @"Hello", @"Value" : @(1234567890123456789L)};
    [table insert:item completion:^(NSDictionary *inserted, NSError *error) {
        NSLog(@"Original: %@", item);
        NSLog(@"Inserted: %@", inserted);
    }];
}

And the logs:

2013-04-10 13:36:18.009 MyApp[9289:c07] Original: {
    Str = Hello;
    Value = 1234567890123456789;
}
2013-04-10 13:36:18.009 MyApp[9289:c07] Inserted: {
    Str = Hello;
    Value = 1234567890123456800;
    id = 58;
}

Now, what if we actually want to enforce the limit checks on untyped operations? One simple alternative is, prior to making the CRUD call, to traverse the object to see if it contains a long value which cannot be represented on the server side. Another alternative is to add a message handler (on the managed clients) or a filter (on the other platforms) which looks at the outgoing JSON request and fails if it has numbers which would cause trouble if sent to the server side. This is one simple implementation of the validation for the managed client:

bool WillRoundTripWithNoDataLoss(JToken item)
{
    if (item == null) return true;
    switch (item.Type)
    {
        case JTokenType.Array:
            JArray ja = (JArray)item;
            return ja.All(jt => WillRoundTripWithNoDataLoss(jt));
        case JTokenType.Object:
            JObject jo = (JObject)item;
            return jo.PropertyValues().All(jt => WillRoundTripWithNoDataLoss(jt));
        case JTokenType.Boolean:
        case JTokenType.Float:
        case JTokenType.Null:
        case JTokenType.String:
            return true;
        case JTokenType.Integer:
            JValue jv = (JValue)item;
            long value = jv.Value<long>();
            long maxAllowedValue = 0x0020000000000000;
            long minAllowedValue = 0;
            unchecked
            {
                minAllowedValue = (long)0xFFE0000000000000;
            }
            return minAllowedValue <= value && value <= maxAllowedValue;
        default:
            throw new ArgumentException("Validation for type " + item.Type + " not yet implemented");
    }
}

In summary, it’s possible that you’ll never need to worry about this “impedance mismatch” between the client and the server for large numbers, and all values will just work. But it’s always nice to go into a framework knowing some of its pitfalls, and this one is in my opinion one which could be hard to identify.


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData


No significant articles today

<Return to section navigation list>

Windows Azure Service Bus, Caching, Access Control, Active Directory, Identity and Workflow

• Alex Simons (@Alex_A_Simons) reported the availability of a Developer Preview of OAuth Code Grant and AAL for Windows Store Apps on 4/22/2013:

I am happy to let you know that the next phase in our developer preview program has started and today we're making two new features available for you to try out and give us feedback on:

  • Support for OAuth2 for delegated directory access
  • An updated Windows Azure Authentication Library (AAL) designed to work with Windows Store applications.

Together, these new features will help you deliver a modern application authorization experience that takes advantage of Windows Azure AD from your native client applications running on Windows RT and Windows 8 devices.

Below you can find more details on the new features.

New Authentication Options: OAuth2 Code Grant

In our ongoing efforts to build the world’s most open Identity Management service, we’re thrilled to introduce the developer preview of our new OAuth2 grant type authorization flow. This builds on top of our already strong support for SAML, WS-Federation and the client credentials grant type in OAuth2 for server to server flows.

The authorization code grant enables you to drive user authentication flows from native applications; moreover, it offers important features (such as refresh tokens) which can help you to maintain long running sessions while minimizing the need to prompt users for their credentials.

Our OAuth2 preview also gives Administrators fine-grained control over which applications can have which sets of access privileges to the directory Graph API.

As part of this work, our Graph API has been extended with new entities that facilitate managing delegation relationships between clients and services. In addition, the Graph Explorer (the test tool we made available during the first developer preview to explore the Graph API) has been updated to let you experiment with the new features.

Fig 1: Updated Graph Explorer

Windows Azure Authentication Library (AAL) for Windows Store

Today we are also releasing a developer preview of the Windows Azure Authentication Library (AAL) for Windows Store applications.

Like its .NET counterpart (announced here) AAL for Windows Store makes it easy for you to add authentication capabilities to your modern client apps, delegating the heavy lifting to Windows Azure AD by taking advantage of the new OAuth2 code grant support.

AAL for Windows Store takes full advantage of the Windows Runtime environment features. For example:

  • It is packaged as a Windows Runtime Component, which allows you to use the library in both C# and HTML5/JavaScript application types
  • It wraps the WebAuthenticationBroker, a Windows 8 feature designed to facilitate web based authentication flows and single sign on across trusted apps
  • It offers transparent session management: AAL leverages the Credential Vault feature in Windows 8 to take care of persistent token caching, automatic token refreshing and even roaming across trusted machines!

Naturally, the advantages of the AAL .NET approach are available in AAL for Windows Store as well; for example, it makes it easy to add support for multiple authentication factors in your Windows Store apps.

Fig 2: AAL for Windows Store wrapping the Windows Auth Broker

For more details on AAL for Windows Store, please refer to this deep dive post.

To help you quickly get up to speed on these new capabilities, we have built a complete step by step walkthrough that will guide you through the development and testing of a Windows Store app and a REST service. You’ll be using AAL for Windows Store to add authentication capabilities to a Windows Store app, the JWT token handler for securing an ASP.NET Web API service, and the Graph Explorer to register the app and service, as well as grant permissions for the app to call the service. You can access it here.

If you want to take a look at the code right away, the end result of the walkthrough is also available as a downloadable sample here.

This is our first preview touching on the devices + services scenarios. You can expect much more in the coming months, including support for multiple platforms and more protocols.

During our first developer preview your feedback has been invaluable in shaping Windows Azure AD to be the identity service you want. We hope you’ll choose to partner with us again, by providing us with the feedback we need to ensure we’ll exceed your expectations.

• Vittorio Bertocci (@vibronet) posted Windows Azure Authentication Library (AAL) for Windows Store: a Deep Dive to his new Cloud Identity blog on 4/22/2013:


I have been longing to publish this post for a looong time.

Today we are announcing the next batch of preview features for Windows Azure Active Directory. You can find most details in Alex’s announcement here, but in a nutshell:

  • We now have endpoints capable of handling the OAuth2 code grant flow
  • We are releasing a preview of the Windows Azure Authentication Library for Windows Store – it’s available directly from VS as a NuGet package

As much as I am excited about the endpoints, it’s the AAL for Windows Store announcement that makes my heart race: the DevEx team has been nurturing this baby bird for a long time, and it’s finally time to let it fly.

This walkthrough and this sample show you how to use AAL for Windows Store to add authentication capabilities to your modern Windows applications and secure calls to a Web API project. Head there to learn how to use the library, it’s easy!

In this post I will go deeper (much deeper) in the library features and share with you some design considerations behind the choices we made. I like to think that we really took advantage of what the new Windows Runtime has to offer here, but you be the judge!

This is not required reading, come along for the ride only if you are very curious about it.

AAL in a Nutshell

If you are new to AAL, you might want to take a look at the intro post here. In brief: the Windows Azure Authentication Library (AAL) makes it very easy for developers to add to their client applications the logic for authenticating users against Windows Azure Active Directory and obtaining access tokens for securing API calls. No deep protocol expertise required!

Here is the laundry list of the main features in AAL for Windows Store:

  • Easy programming model for adding Windows Azure AD authentication to Windows Store apps
    • Take advantage of your Office365 accounts to authenticate in your Windows Store apps, too!
  • Works with Windows 8 and Windows RT devices
  • Support for multi-factor authentication
  • Can be used from both C# and JavaScript/HTML5 apps
  • Persistent token cache
  • Automatic, just-in-time token refresh
  • Token roaming across trusted devices
  • Easy-to-use wrapper around the WebAuthenticationBroker API for business apps

I am definitely biased, but that sounds pretty exciting to me…

AAL for Windows Store is not our first foray into the client authentication space. We already have a preview of AAL out, designed to work with client applications written with .NET 4.0 and up: that includes both apps with user interaction and server-side apps operating in a client role (e.g. middle tiers calling other services). The developer passes in what he or she knows about the scenario, and the library itself picks the protocols to use to make authentication happen.

Given the nature of Windows Store apps, for which server to server authentication flows would not make much sense, AAL for Windows Store focuses exclusively on enabling user-based, interactive authentication. Behind the scenes, AAL for Windows Store implements that flow using the new OAuth2 code grant endpoints in Windows Azure AD.

Scenario and Programming Model

As mentioned, AAL for Windows Store tries to solve roughly the same class of problems tackled by AAL .NET. Here is a short excerpt from the original announcement, describing the basic scenario from the point of view of the developer (that would be you!).

  • I know I want to call service A, and I know how to call it
    (e.g. REST or otherwise, which parameters/verbs/etc to use,…)
  • Without knowing other details, I know that to call A I need to present a “token” representing the user of my app
  • I know that Windows Azure AD knows all the details of how to do authentication in this scenario, though I might be a bit blurry about what those details might be


AAL offers you a programming model that has counterparts for all those concepts, interlinked by the same functional relationships. In short, for Windows Store apps you implement the scenario following those 3 steps:

  1. Initialize an instance of the AuthenticationContext class, which represents the Windows Azure AD tenant that knows about the users and the services/resources you want to work with
  2. Ask your AuthenticationContext instance to get a token for your client to access your target service, by invoking its method AcquireTokenAsync and passing client and target service info
    1. the library takes care of all the necessary authentication experience (though often it won’t be required, see the token lifecycle section)
  3. AcquireTokenAsync returns an AuthenticationResult instance. If the operation was successful, you can extract the token from its AccessToken property and use it to call your target service

For once, I am not oversimplifying: that is exactly what happens. Here is some C# code implementing the sequence, for the case in which the target resource is a REST service:

   1: AuthenticationContext authenticationContext =
          new AuthenticationContext("https://login.windows.net/treyresearch1.onmicrosoft.com");
   2: AuthenticationResult result =
          await authenticationContext.AcquireTokenAsync("http://myapps/quotesapp",
              "<client id>"); // your app's client ID
   3: HttpClient httpClient = new HttpClient();
   4: httpClient.DefaultRequestHeaders.Authorization =
          new AuthenticationHeaderValue("Bearer", result.AccessToken);
   5: HttpResponseMessage response =
          await httpClient.GetAsync("https://poorsmanas.azurewebsites.net/api/InspiringQuotes");

Please note that AAL is actually used in just the first 2 lines: the rest shows how to use the resulting token to call a REST service, all with standard .NET code. Let’s see what happens in a bit more detail, line by line.

  1. This line creates an instance of AuthenticationContext tied to the Windows Azure AD tenant TreyResearch1.
  2. Here we ask our AuthenticationContext to get us a token for accessing the service identified by the URI "http://myapps/quotesapp". In order to do so, the corresponding Windows Azure AD tenant must know the client app originating the request, hence we provide the ID identifying the app as well (more details later). Here two things can happen:
    1. the call returns right away, in case a suitable token is already available in the cache (more details in the section about caching)
    2. the call triggers the display of the WinRT’s WebAuthenticationBroker dialog, presenting the user with the Windows Azure AD authentication experience.

      Assuming that the authentication took place successfully

  3. We create a new HttpClient
  4. We add an HTTP Authorization header, containing the token obtained in #2 and formatted according to the OAuth2 bearer resource access style
  5. We invoke the service; the presence of the token in the header provides the access info that the resource needs to verify before granting access.

…and that’s all you need to do!

More About the Object Model

The AAL developer surface is really reduced to the essential; however, there is a bit more to it than the absolute basic scenario described above. In the fullness of time we’ll have a complete MSDN reference, but for the time being here is a quick glance at the main primitives.


AuthenticationContext is the proxy for your Windows Azure AD tenant; it is the primitive you use every time you need your tenant to do something for you. In some sense, AuthenticationContext *is* AAL. Here are some more details on how it works.


The code snippet above showed the main way of constructing an AuthenticationContext: you pass the complete URL of your Windows Azure AD tenant. In the future we will offer some kind of resource-driven discovery mechanism, but for the time being the explicit, full URL is required at construction time. Also note: if you pass a string in a format different from the login.windows.net/<tenant> one, AAL’s validation of the tenant URL template will fail; that is to prevent redirect attacks in case you have dynamic logic that determines the tenant’s URL. There is an overload of the constructor which allows you to turn validation off, should you need to do so for development purposes.

IMPORTANT. Whereas AAL .NET works with both ACS2.0 namespaces and Windows Azure AD tenants, AAL for Windows Store works only with Windows Azure AD tenants.
This is largely a function of the protocol types supported by the WebAuthenticationBroker, and the availability of such protocols on the service side. AAL for Windows Store engages with Windows Azure AD via the new code grant endpoints, but those endpoints are not available for ACS namespaces.

Method AcquireTokenAsync

You already encountered the simplest form of AcquireTokenAsync:

IAsyncOperation<AuthenticationResult> AcquireTokenAsync(string resource, string clientId);

This overload requires you to pass (hence know) the URI and the ID with which Windows Azure AD knows the target resource and your client application, respectively.

If you know how the OAuth2 code grant flow works, you might be wondering what return URI is used, given that none is being passed as a parameter: well, in this case we use the URI assigned to the app by the Windows Runtime itself, the one of the form ms-app://<SID>. You’ll read more about this in the section on Windows Store features; here I’ll just say that the use of such a URI will cause the WebAuthenticationBroker used during authentication to operate in “SSO mode”.

AcquireTokenAsync has another overload, taking far more parameters:

IAsyncOperation<AuthenticationResult> AcquireTokenAsync(string resource, string clientId,
    string redirectUri, string loginHint, string extraQueryParameters);

The first two parameters are the same as the ones in the first overload.

redirectUri allows you to specify a return URI different from the one assigned to the client by the Windows Runtime. There are multiple reasons why you might want to do this: the entry representing the client app in Windows Azure AD might use an arbitrary value; you might want to opt out of WebAuthenticationBroker’s SSO mode for this call; and so on.

loginHint allows you to specify the username (e.g. adam@treyresearch1.onmicrosoft.com) of the user you want to use for this specific authentication operation. The value of loginHint will be used to filter the token cache entries, selecting only the ones that match; if no cache entries for the username and the resource exist, the loginHint value is used for initializing the username textbox in the authentication dialog.

extraQueryParameters is there to give you more latitude. The Windows Azure AD authorization server might accept more parameters in the future, and we don’t want AcquireTokenAsync’s signature to get too long. In OAuth2 terms, extraQueryParameters can be used to add whatever extra info you want to send with the token request to the authorization endpoint.

To be completely transparent, personally I am not very happy about having just 2 overloads: I would have wanted at least another one taking resource, client ID and an arbitrary return URL… but I lost that battle. As you send us feedback, we’ll have more data on how you use AcquireTokenAsync and on whether we need to add more overloads.

Property TokenCache

By default, successful invocations of AcquireTokenAsync result in the requested token being stored in a persistent cache. AAL for Windows Store comes equipped with such a persistent cache out of the box: you can find more details about it later in the post.

If you need to customize the caching logic, you can write your own cache (implementing the ITokenCache interface) and use it instead of the default one: all you need to do is to create an instance of your custom class and assign it to the TokenCache property.

If you want to get rid of caching altogether, all you need to do is set TokenCache to null.
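As a minimal sketch (MyTokenCache is a hypothetical class implementing ITokenCache; its members are not shown here), swapping or disabling the cache looks like this:

```csharp
// MyTokenCache is a hypothetical ITokenCache implementation of your own.
var authContext = new AuthenticationContext(
    "https://login.windows.net/treyresearch1.onmicrosoft.com");

authContext.TokenCache = new MyTokenCache(); // plug in a custom cache
authContext.TokenCache = null;               // or opt out of caching altogether
```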

Method AcquireTokenbyRefreshTokenAsync

IAsyncOperation<AuthenticationResult> AcquireTokenByRefreshTokenAsync(
    string refreshToken, string clientId);

If for some reason you don’t want to (or can’t) take advantage of the automatic token refresh offered by the AAL cache, you have the opportunity to take matters into your own hands. The method AcquireTokenByRefreshTokenAsync (yep, we plan to improve the name: any ideas?) allows you to pass a refresh token (usually obtained from a former call to AcquireTokenAsync) and a client ID to trigger a token refresh.
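A hedged sketch of driving the refresh yourself, with caching disabled (the resource URI and client ID are placeholders, not real registration values):

```csharp
// Manual refresh sketch; "<client id>" and the resource URI are placeholders.
var authContext = new AuthenticationContext(
    "https://login.windows.net/treyresearch1.onmicrosoft.com");
authContext.TokenCache = null; // we are handling the session ourselves

AuthenticationResult result =
    await authContext.AcquireTokenAsync("http://myapps/quotesapp", "<client id>");
string refreshToken = result.RefreshToken; // keep it for later

// ...later, when the access token has expired...
AuthenticationResult refreshed =
    await authContext.AcquireTokenByRefreshTokenAsync(refreshToken, "<client id>");
```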


All token acquisition operations result in an AuthenticationResult instance. Its purpose is to carry the outcome of the authentication back to your code, so that you can make use of it: that typically means extracting access tokens to be used in your service invocations, or examining error info if something went wrong.

Here is the full list of public members:

Property AuthenticationStatus Status

Communicates the outcome of the authentication operation. Possible values, from the enum AuthenticationStatus, are Succeeded and Failed.

Property string AccessToken

Upon successful authentication (or positive cache hit) this property holds the bits of the token intended to be used for the target resource.

Property DateTime ExpiresOn

This value indicates the time at which the token will expire.

Property string RefreshToken

If emitted by the authentication operation, this property returns a token that can be used to refresh the access token without having to prompt the user for credentials.

Properties string Error, string ErrorDescription

In case of error, those two properties surface details about the issue so that it can be dealt with appropriately. Returning errors in the result makes error handling in asynchronous situations easier than through classic exceptions; that is especially evident when working in JavaScript.
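Putting the members above together, a sketch of checking the outcome (the resource URI and client ID are placeholders) could look like this:

```csharp
// Checking the outcome of a token acquisition instead of catching exceptions.
AuthenticationResult result =
    await authContext.AcquireTokenAsync("http://myapps/quotesapp", "<client id>");

if (result.Status == AuthenticationStatus.Succeeded)
{
    // Use result.AccessToken; result.ExpiresOn tells you when it stops being valid.
}
else
{
    // Inspect result.Error and result.ErrorDescription to decide how to react.
}
```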

Windows Store Exclusive Features

The Windows Runtime API, coupled with the Windows Store apps’ “sandboxing” system, offers an extremely powerful platform to build on. Here is a list of the areas in which AAL takes advantage of Windows Runtime-specific features to help you handle authentication as smoothly as possible.

AAL for Windows Store is a Windows Runtime Component


There are many languages you can choose from when it comes to developing Windows Store applications: C#, JavaScript/HTML5, C++… we wanted to make sure you retain that freedom when you use AAL.

Like its .NET counterpart, AAL for Windows Store is a library meant to live in-process with your application. However, AAL for Windows Store does not come as an assembly/DLL: that would constrain its usage to Windows Store apps written in C# and VB.

AAL for Windows Store is packaged as a Windows Runtime Component, a file with the extension .winmd.

You can head to MSDN for a more accurate description, but in super-short: a Windows Runtime Component is a reusable library, written in either C# or C++, which takes advantage of the language projection mechanisms of the Windows Runtime. The basic idea is that if your library exposes only Windows Runtime types, and it is packaged appropriately, then your classes can be transparently projected in the syntax of each of the languages supported for Windows Store apps development. In our case, that means that with a single library we can enable both C# and JS/HTML5 based apps to take advantage of Windows Azure AD. That’s pretty sweet!

P.S.: in theory the library should also work for Windows Store apps written in C++/CX. However, at packaging time the NuGet support for distributing Windows Runtime Components did not cover C++ project types. If you really want to experiment with that, you can get the NuGet package and extract the WINMD by hand; however, you should know that the scenario is untested. Please let us know if you use C++ a lot for your business Windows Store apps!

WebAuthenticationBroker and App Capabilities

AAL for .NET hosts the authentication experience in a dialog containing a WebBrowser control. With AAL for Windows Store, however, we did not need to build such dialog: the Windows Runtime already offers a specific API providing a surface for rendering Web content at authentication time. That API is exposed through the WebAuthenticationBroker class.

The WebAuthenticationBroker (can I call it WAB from now on? it’s really long to type) is very handy: it’s basically a system dialog, with fixed rules for size, positioning and a consistent look & feel, which providers are standardizing on. It also takes specific steps to maintain a level of isolation between the app itself and the context in which the authentication takes place.

Used directly, the WAB requires the developer to provide protocol-specific information as input. For example, if you want to use it to drive an OAuth2 code grant flow, you’d have to construct the exact URL containing the request for the authorization endpoint; and once you got back the code, you’d be responsible for parsing the response to retrieve it and using it to hit the token endpoint with the right message format.

AAL for Windows Store fully wraps the WAB, presenting you with an object model that is only concerned with the actors coming into play in your solution (your client, the Windows Azure AD tenant, the resource you want to access) and making all of the necessary calls on your behalf. Hence, in theory you would not even need to know that the WAB is the API used to render the auth UI.
In practice, the WAB has useful features that can be exploited even when wrapped, hence it’s worth calling them out. The main one, I’d say, is the way in which it handles cookies.

AAL and WAB’s Cookie Management

Authentication cookies management in Windows Store is less straightforward than on the classic desktop, and that influences how AAL for Windows Store operates.


When you use AcquireTokenAsync without specifying a return URL, AAL invokes the WAB in “SSO mode”. In a nutshell, that means that the WAB will expect the auth provider to deliver its results by redirecting toward a special URI of the form ms-app://<SID>, where the SID is the package ID of the Windows Store application. In return for this proof of tight coupling between the authority and the app itself, the WAB will grant access to a persistent cookie jar shared by all SSO-mode applications: that allows taking advantage of information associated with previous runs (such as persistent cookies) and occasionally achieving the SSO that gives this mode its name. If App1 authenticated with Authority A in SSO mode, and A dropped a persistent cookie to represent an ongoing session, App2 opening the WAB in SSO mode will be able to authenticate with A by leveraging the existing cookie.

When you use AcquireTokenAsync specifying an arbitrary return URL, that value is passed straight to the WAB. When the WAB operates on an arbitrary return URL, it assumes a “low trust” connection with the provider and constrains execution to a brand-new, empty cookie jar. That is to protect the user from low-trust apps, which could leverage existing cookies to silently access data on the web sites the user is authenticated with.

That’s pretty handy! But say you want to take advantage of SSO mode (which you automatically get with the simplest AcquireTokenAsync) while also specifying the login_hint for the current user? Easy. You can use the longer AcquireTokenAsync overload, and pass as return URI the address that the WAB would use if you did not specify any. You can obtain that programmatically by calling WebAuthenticationBroker.GetCurrentApplicationCallbackUri().
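A minimal sketch of that combination (the resource URI, client ID and username are placeholders; GetCurrentApplicationCallbackUri() returns a Uri, so its string form is passed here):

```csharp
// Sketch: keep SSO mode while pre-filling the username via loginHint.
AuthenticationContext authContext = new AuthenticationContext(
    "https://login.windows.net/treyresearch1.onmicrosoft.com");

AuthenticationResult result = await authContext.AcquireTokenAsync(
    "http://myapps/quotesapp",                     // resource (placeholder)
    "<client id>",                                 // clientId (placeholder)
    WebAuthenticationBroker
        .GetCurrentApplicationCallbackUri()
        .AbsoluteUri,                              // same URI the WAB would default to
    "adam@treyresearch1.onmicrosoft.com",          // loginHint (placeholder)
    null);                                         // extraQueryParameters
```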

Windows Store Application Capabilities


Windows Store applications are executed in a sandboxed environment, which limits what they can do with system resources to the things that the user explicitly allowed at install time.

Since Windows Azure AD is meant to enable business scenarios, the use of AAL to facilitate authentication flows will more often than not entail access to enterprise resources and capabilities such as:

  • Ability to navigate to intranet locations
  • Ability to leverage domain authentication for the current user
  • Ability to access certificate stores and smartcards for multifactor authentication

Windows Store applications can do none of those things unless you explicitly enable them in the capabilities section of the Package.appxmanifest file. It’s really impossible for us to know in advance what will be required (that’s really driven by the Windows Azure AD tenant: is it federated with a local ADFS2 instance? Does it require certificate-based auth?), and even if you knew at a given time, the authenticating authority can change policy much faster than your app’s install/update lifecycle.

For those reasons, if you don’t turn on those capabilities in your manifest AAL might not work as expected. This is another area for which we are eager to get your feedback: is it acceptable for your business and enterprise apps to request those capabilities? Let us know about your scenarios, it will be of tremendous help for us to understand how to handle those aspects moving forward.
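As an illustration, a Capabilities section covering the three items listed earlier might look like the following. This is a sketch only; the exact capability set your app needs depends on your tenant’s configuration:

```xml
<!-- Package.appxmanifest (fragment): capabilities AAL-based auth may rely on -->
<Capabilities>
  <!-- Home/work network access, e.g. to reach an intranet ADFS endpoint -->
  <Capability Name="privateNetworkClientServer" />
  <!-- Windows domain credentials of the current user -->
  <Capability Name="enterpriseAuthentication" />
  <!-- Certificate stores and smartcards for multi-factor authentication -->
  <Capability Name="sharedUserCertificates" />
</Capabilities>
```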

Token Lifecycle: Persistent Cache, Automatic Refresh, Roaming

Ah, I finally get to write about my favorite feature of the release: automatic token lifecycle.

Besides the protocol and security details, one of the main pain points of working with authentication in rich clients is having to handle sessions. Obtaining a token is only the start: if you want to access the protected resource more than once, you need to save the token somewhere; retrieve it when you need it again, possibly after the app has been closed and reopened; verify that it is still valid; if you have more than one resource, ensure that you are retrieving the right token; and so on. You normally have to worry about all that, while at the same time minimizing the times in which you prompt the user for credentials without adding any security horrors in the process. OAuth2 provides a super-useful artifact for handling that, the refresh token, however that comes at the cost of extra moving parts in the session management logic.

What if I told you that AAL for Windows Store takes care of all that for you, without requiring any explicit action?

By default, AAL for Windows Store engages with a persistent token cache every single time you invoke AcquireTokenAsync. The cache entry structure is shown below:


Further below you can find a flowchart describing exactly what happens; but for those among you who prefer text to visuals:

Say that you invoke AcquireTokenAsync passing myresource, myclientid and myloginhint (the other parameters are not used by the cache).
AAL looks in the cache for an entry matching the Windows Azure AD tenant associated with the current AuthenticationContext, myresource, myclientid and myloginhint.

  1. if there is an entry
    1. is there a valid access token?
      1. if there is, return the cache entry right away
      2. if there isn’t
        1. is there a refresh token?
          1. if there is, use it
            1. if the refresh succeeded, replace the old entry and return it
            2. if the refresh did not succeed, prompt the user for auth
              1. if successful, save results in cache and return
              2. if unsuccessful, return error info
          2. if there isn’t, prompt the user for auth
              1. if successful, save results in cache and return
              2. if unsuccessful, return error info
  2. if there isn’t, prompt the user for auth
    1. if successful, save results in cache and return
    2. if unsuccessful, return error info
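The decision tree above can be sketched as code. This is illustrative pseudocode only, not AAL’s actual implementation: every name except the documented public surface (AuthenticationStatus, RefreshToken, ExpiresOn) is hypothetical.

```csharp
// Illustrative only: a hypothetical sketch of AAL's internal cache lookup.
AuthenticationResult GetToken(string resource, string clientId, string loginHint)
{
    CacheEntry entry = cache.Find(authority, resource, clientId, loginHint);
    if (entry != null)
    {
        if (entry.ExpiresOn > DateTime.UtcNow)
            return entry.Result;                        // valid access token: cache hit

        if (entry.RefreshToken != null)
        {
            var refreshed = TryRefresh(entry.RefreshToken, clientId);
            if (refreshed.Status == AuthenticationStatus.Succeeded)
            {
                cache.Replace(entry, refreshed);        // silent renewal, no prompt
                return refreshed;
            }
        }
    }
    var result = PromptUser(resource, clientId, loginHint); // last resort: show the WAB
    if (result.Status == AuthenticationStatus.Succeeded)
        cache.Save(result);
    return result;                                      // success, or error info
}
```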



The bottom line is: use AcquireTokenAsync every time you need a token. AAL will take care of querying the cache for you and will even transparently refresh tokens when necessary. The user will be prompted only when there is absolutely no other alternative.

In this release the default cache implementation is very simple: everything is handled transparently, and the only explicit operation you can perform is flushing its content via the Clean() method. The next refresh will offer a more sophisticated cache, with fine-grained access to the entries.
AAL talks to the default cache implementation through the ITokenCache interface methods, which means that you can plug in your own cache implementation if you so choose. In this phase, however, I would rather have you tell us which features you’d like in the default cache, so that we can incorporate them and give you the best experience right out of the box.

AAL for Windows Store Cache and Windows’ Password Vault

Windows 8 and Windows RT have a great feature, the Password Vault, which is what allowed us to easily set up a persistent token cache.


The idea is pretty simple: every Windows Store app has one secure store that can be used for saving credentials, accessed via the PasswordVault class.
Within the Windows Runtime execution environment, data saved in that area can only be accessed by the app that wrote it, when operated by the user that was logged in to Windows at save time.

Can you believe it? A per-app isolated area, which survives across multiple launches of the app, right out of the box! That was just the perfect fit for a persistent token cache. AAL uses the PasswordVault as a flat store for cache entries of the form described earlier: there is no direct connection to a particular AuthenticationContext instance, and given that every entry is keyed per authority, there’s really no need for one; every cache operation goes straight to the app-wide Vault.

As an added bonus, the use of the PasswordVault grants AAL’s cache a very interesting capability: cross-device roaming of the saved tokens. In short: say that you install App1 on one of your Windows 8 or Windows RT devices and you authenticate using it. Say that you then pick up another device of yours and install App1 there as well. At first launch of App1 you might discover that you are already authenticated, without the need to enter any credentials!

I can’t say that we did this intentionally; it was not a feature high on the list, but it just comes with the use of the Vault. In fact, there’s really nothing we can do to influence it from code: the way in which roaming takes place is mostly a function of how your users’ machines are configured. The rules determining how roaming takes place are summarized in the picture above.
In summary:

  • Outgoing roaming only happens if the user is signed in to the machine with a Microsoft Account AND the machine is listed as a trusted device in the corresponding Microsoft Account settings.
    • However, if the machine is also domain-joined, outgoing roaming does NOT take place.
  • Inbound roaming happens on devices where the same originating app is installed, the receiving device is marked as trusted by the same Microsoft Account, and the Microsoft Account (or a domain account linked to that Microsoft Account) is present on the device.

If you refer to those rules, there should be no surprise roaming: if you don’t want roaming to happen, you can enforce your choices via device settings or by opting out of the default caching implementation altogether.

All in all, pretty awesome.

Client Registration and Permissions Settings

In closing this long deep dive, I want to mention a few things that are not strictly inherent to AAL itself, but are aspects of working with Windows Azure AD and rich clients that you have to be aware of.

If you’ve been using AAL .NET for authenticating against services, or in fact wrote any rich client accessing protected remote resources in the last 10 years or so, you are already familiar with the authentication model: there is an authority that knows about the service you want to invoke and knows how to authenticate you; if you can prove who you are, you get a token that grants you access to the service. In that picture, it does not really matter which rich client you are using: if you write code that implements that flow in a WPF app, you can copy & paste it as-is into any other WPF app, WinForms app, Excel plugin, Visual Studio extension or whatever else comes to mind, without having to change anything on the authority and service side.

Well, the above holds for classic authentication protocols; but it does not (usually) hold anymore when you use an authorization framework like OAuth2, which happens to be what is used by AAL for Windows Store to drive the authentication flow.

I won’t repeat here the mega-post I wrote on the relationship between OAuth2 and sign-on, nor will I start a dissertation about whether accessing a protected resource from a rich client amounts to “signing in”; those will come in some future post. Here I’ll just list some practicalities about how things are different and what to do to make AAL for Windows Store work in your apps.

The idea here is that you are not as much authenticating the user; rather, you are authorizing the app to access the resource on behalf of the user. The idea might sound subtle, but once again it has important consequences.

App Registration

The first consequence is that now the identity of the client application does matter. No longer just a transparent extension of the user, the app is itself an active actor whose identity has an impact on whether access will be granted or denied: the exact same user might gain access to a resource when using App1, but see his/her requests denied when using App2. As such, your client apps need to be registered in your Windows Azure AD tenant.

As counterintuitive as it might sound, the entity used for representing your client is a ServicePrincipal Smile or in fact an Application object (see here for more details).

Here’s a dump of the Application object representing one of my test clients:

     "odata.type": "Microsoft.WindowsAzure.ActiveDirectory.Application",
     "objectType": "Application",
     "objectId": "d22366d4-2692-4074-a079-0eabad5dbaa3",
     "appId": "2b8606b7-6bad-4e8b-ac3c-1356aca8ab0e",
     "availableToOtherTenants": false,
     "displayName": "Todo Client",
     "errorUrl": null,
     "homepage": null,
     "identifierUris": [
     "keyCredentials": [],
     "mainLogo@odata.mediaContentType": "image",
     "logoutUrl": null,
     "passwordCredentials": [],
     "publicClient": true,
     "replyUrls": [
     "samlMetadataUrl": null

The schema is the same, but it is used a bit differently: without going too much into detail, note the appId (which in AAL for Windows Store is used as the clientid in AcquireTokenAsync) and the reply URL, which happens to be one of the SSO ones that the WAB likes.

The corresponding ServicePrincipal (from now on SP) is not terribly interesting, but I’ll paste it anyway in case you are curious:

      "odata.type": "Microsoft.WindowsAzure.ActiveDirectory.ServicePrincipal",
      "objectType": "ServicePrincipal",
      "objectId": "47ed0cac-fb08-4f3c-844f-96fecefdc165",
      "accountEnabled": true,
      "appId": "2b8606b7-6bad-4e8b-ac3c-1356aca8ab0e",
      "displayName": "Todo Client",
      "errorUrl": null,
      "homepage": null,
      "keyCredentials": [],
      "logoutUrl": null,
      "passwordCredentials": [],
      "publisherName": "Microsoft AAA",
      "replyUrls": [
      "samlMetadataUrl": null,
      "servicePrincipalNames": [
      "tags": []

…all pretty standard.


As of today, registering the client is a necessary but not sufficient condition for your app to access a service. You also need to let your Windows Azure AD tenant know that your app has the necessary permissions to access the service(s) you are targeting.

That can be achieved by adding a suitable entry to yet another new entity, the Permissions collection. Such an entry must tie the SP entry of your client to the SP entry of the target service, and declare the access level the client should enjoy. Here’s the entry enabling the client described above:

      "clientId": "47ed0cac-fb08-4f3c-844f-96fecefdc165",
      "consentType": "AllPrincipals",
      "expiryTime": "9999-12-31T23:59:59.9999999",
      "objectId": "rAztRwj7PE-ET5b-zv3BZcw8dYEmKBdGl7heaQ_BLzU",
      "principalId": null,
      "resourceId": "81753ccc-2826-4617-97b8-5e690fc12f35",
      "scope": "user_impersonation",
      "startTime": "0001-01-01T00:00:00"

Now, don’t get confused! The clientId here is not the clientid you’d use in AAL for Windows Store for identifying the client (that would be the SP’s appId). Rather, it is the objectId of the SP representing the client; you can verify this by comparing it with the earlier entries.
The resourceId, I am sure you already guessed it, is the objectId of the SP representing the service.

The other interesting value there is scope: those of you familiar with OAuth2 will recognize it. In short, it’s simply the kind of access the client has to the service: in this case, accessing it as the user (which for interactive rich apps is pretty much the equivalent of the more traditional user authentication flows).
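To make the clientId/objectId distinction concrete, here is a small sketch (plain JavaScript, not an actual Graph API call) that checks a Permission entry against the directory entries shown earlier. The GUIDs are the sample values from this post; the helper name is mine:

```javascript
// The three entries from this post, reduced to the fields that matter here.
var clientSp  = { objectId: '47ed0cac-fb08-4f3c-844f-96fecefdc165',
                  appId:    '2b8606b7-6bad-4e8b-ac3c-1356aca8ab0e' };
var serviceSp = { objectId: '81753ccc-2826-4617-97b8-5e690fc12f35' };
var permission = {
    clientId:   '47ed0cac-fb08-4f3c-844f-96fecefdc165', // the SP objectId, NOT the appId
    resourceId: '81753ccc-2826-4617-97b8-5e690fc12f35',
    scope:      'user_impersonation'
};

// A Permission entry links the client SP's objectId to the service SP's objectId.
function permissionLinks(perm, client, service) {
    return perm.clientId === client.objectId && perm.resourceId === service.objectId;
}

console.log(permissionLinks(permission, clientSp, serviceSp)); // true
// The mistake called out above: clientId is not the appId used by AAL.
console.log(permission.clientId === clientSp.appId);           // false
```

If the second check ever comes out true for your own entries, you have likely pasted the appId where the SP objectId belongs.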

With that entry, Windows Azure AD has all it needs to enable the scenario. All you need to do is plug the correct values into the AAL for Windows Store API, hit F5 and watch the magic unfold.

The Graph Explorer


Although you can – if you so choose – create the entries for the client app and the permission element directly via the Graph API, you don’t have to.

The Graph Explorer has been updated to help you do both tasks with just a few clicks: the walkthrough and the sample guide you through the exact procedure you need to follow.

Note: the entries for your clients will appear in the Windows Azure portal, alongside those of the Web apps you are creating following the GA path. For now the portal does not expose the necessary settings: I’ll ask what the official guidance is, but for the time being my recommendation (you do remember my disclaimer, right?) would be to avoid modifying those entries via the web UI.



A short note here to highlight that the same tricks for supporting multi-factor authentication shown in this post for AAL .NET will work for AAL for Windows Store, too: just authenticate as a user who was marked for 2FA, and watch your Windows Store app gain an extra level of authentication assurance without changing a line of code.


Well, I promised a long post, and a long post you got.

I know that the ability of using Windows Azure AD from Windows Store apps was one of the most requested features in the past few months, and I can’t tell you how impatient I am to see how you will use this preview.

You can expect the next update to AAL .NET to incorporate some of the innovations we have in AAL for Windows Store, like the use of AuthenticationResult as a return type and the support for the OAuth2 code grant flow. How much of the rest makes its way back into the library, and how AAL for Windows Store evolves, will in large part depend on your feedback. Happy coding.

Richard Seroter (@rseroter) explained Using Active Directory Federation Services to Authenticate / Authorize Node.js Apps in Windows Azure in a 4/22/2013 post:

imageIt’s gotten easy to publish web applications to the cloud, but the last thing you want to do is establish unique authentication schemes for each one. At some point, your users will be stuck with a mountain of passwords, or, end up reusing passwords everywhere. Not good. Instead, what about extending your existing corporate identity directory to the cloud for all applications to use? Fortunately, Microsoft Active Directory can be extended to support authentication/authorization for web applications deployed in ANY cloud platform.

image_thumb75_thumb3In this post, I’ll show you how to configure Active Directory Federation Services (ADFS) to authenticate the users of a Node.js application hosted in Windows Azure Web Sites and deployed via Dropbox.

[Note: I was going to also show how to do this with an ASP.NET application since the new “Identity and Access” tools in Visual Studio 2012 make it really easy to use AD FS to authenticate users. However because of the passive authentication scheme Windows Identity Foundation uses in this scenario, the ASP.NET application has to be secured by SSL/TLS. Windows Azure Web Sites doesn’t support HTTPS (yet), and getting HTTPS working in Windows Azure Cloud Services isn’t trivial. So, we’ll save that walkthrough for another day.]


Configuring Active Directory Federation Services for our application

imageFirst off, I created a server that had DNS services and Active Directory installed. This server sits in the Tier 3 cloud and I used our orchestration engine to quickly build up a box with all the required services. Check out this KB article I wrote for Tier 3 on setting up an Active Directory and AD FS server from scratch.


AD FS is a service that supports identity federation and supports industry standards like SAML for authenticating users. It returns claims about the authenticated user. In AD FS, you’ve got endpoints that define which inbound authentication schemes are supported (like WS-Trust or SAML),  certificates for signing tokens and securing transmissions, and relying parties which represent the endpoints that AD FS has a trust relationship with.


In our case, I needed to enable an active endpoint for my Node.js application to authenticate against, and one new relying party. First, I created a new relying party that referenced the yet-to-be-created URL of my Azure-hosted web site. In the animation below, see the simple steps I followed to create it. Note that because I’m doing active (vs. passive) authentication, there’s no endpoint to redirect to, and very few overall required settings.


With the relying party finished, I could now add the claim rules. These tell AD FS what claims about the authenticated user to send back to the caller.


At this point, AD FS was fully configured and able to authenticate my remote application. The final thing to do was enable the appropriate authentication endpoint. By default, the password-based WS-Trust endpoint is disabled, so I flipped it on so that I could pass username+password credentials to AD FS and authenticate a user.


Connecting a Node.js application to AD FS

Next, I used the JetBrains WebStorm IDE to build a Node.js application based on the Express framework. This simple application takes in a set of user credentials, and attempts to authenticate those credentials against AD FS. If successful, the application displays all the Active Directory Groups that the user belongs to. This information could be used to provide a unique application experience based on the role of the user. The initial page of the web application takes in the user’s credentials.

        h1= title
        form(action='/profile', method='POST')
                        label(for='user') User
                        input(id='user', type='text', name='user')
                        label(for='password') Password
                        input(id='password', type='password', name='password')
                        input(type='submit', value='Log In')

This page posts to a Node.js route (controller) that is responsible for passing those credentials to AD FS. How do we talk to AD FS through the WS-Trust format? Fortunately, Leandro Boffi wrote up a simple Node.js module that does just that. I grabbed the wstrust-client module and added it to my Node.js project. The WS-Trust authentication response comes back as XML, so I also added a Node.js module to convert XML to JSON for easier parsing. My route code looked like this:

//for XML parsing
var xml2js = require('xml2js');
var https = require('https');
//to process WS-Trust requests
var trustClient = require('wstrust-client');

exports.details = function(req, res){

    var userName = req.body.user;
    var userPassword = req.body.password;

    //call the AD FS endpoint, and pass in values
    trustClient.requestSecurityToken({
        scope: 'http://seroternodeadfs.azurewebsites.net',
        username: userName,
        password: userPassword,
        endpoint: 'https://[AD FS server IP address]/adfs/services/trust/13/UsernameMixed'
    }, function (rstr) {

        // Access the token
        var rawToken = rstr.token;
        console.log('raw: ' + rawToken);

        //convert to json
        var parser = new xml2js.Parser();
        parser.parseString(rawToken, function(err, result){
            //grab "user" object
            var user = result.Assertion.AttributeStatement[0].Attribute[0].AttributeValue[0];
            //get all "roles"
            var roles = result.Assertion.AttributeStatement[0].Attribute[1].AttributeValue;

            //render the page and pass in the user and roles values
            res.render('profile', {title: 'User Profile', username: user, userroles: roles});
        });
    }, function (error) {

        // Error Callback
        console.log(error);
    });
};
See that I’m providing a “scope” (which maps to the relying party identifier), an endpoint (which is the public location of my AD FS server), and the user-provided credentials to the WS-Trust module. I then parse the results to grab the friendly name and roles of the authenticated user. Finally, the “profile” page takes the values that it’s given and renders the information.
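To make the indexing in the route code easier to follow, here is an illustrative mock of the nested arrays-of-objects shape that xml2js produces for an assertion like the one above. The attribute values are made up for illustration; real AD FS output will differ in element names and namespaces:

```javascript
// Mock of the parsed WS-Trust response: xml2js turns repeated XML elements
// into arrays, which is why every step below indexes with [0].
var result = {
    Assertion: {
        AttributeStatement: [{
            Attribute: [
                { AttributeValue: ['Richard Seroter'] },       // first attribute: friendly name
                { AttributeValue: ['Group One', 'Group Two'] } // second attribute: role list
            ]
        }]
    }
};

// The same two lookups the route performs before rendering the profile page.
var user  = result.Assertion.AttributeStatement[0].Attribute[0].AttributeValue[0];
var roles = result.Assertion.AttributeStatement[0].Attribute[1].AttributeValue;

console.log(user);            // Richard Seroter
console.log(roles.join(', ')); // Group One, Group Two
```

The fragile part of this approach is obvious: it assumes the name attribute always comes first and the roles second, so a change in the AD FS claim rules ordering would break the indexing.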

        h1 #{title} for #{username}
        div.roleheading User Roles
        ul
            each userrole in userroles
                li= userrole

My application was complete and ready for deployment to Windows Azure.

Publishing the Node.js application to Windows Azure

Windows Azure Web Sites offer a really nice and easy way to host applications written in a variety of languages. It also supports a variety of ways to push code, including Git, GitHub, Team Foundation Service, Codeplex, and Dropbox. For simplicity sake (and because I hadn’t tried it yet), I chose to deploy via Dropbox.

However, first I had to create my Windows Azure Web Site. I made sure to use the same name that I had specified in my AD FS relying party.


Once the Web Site is set up (which takes only a few seconds), I could connect it to a source control repository.


After a couple moments, a new folder hierarchy appeared in my Dropbox.


I copied all the Node.js application source files into this folder. I then returned to the Windows Azure Management Portal and chose to Sync my Dropbox folder with my Windows Azure Web Site.


Right away it starts synchronizing the application files. Windows Azure does a nice job of tracking my deployments and showing the progress.


In about a minute, my application was uploaded and ready to test.

Testing the application

The whole point of this application is to authenticate a user and return their Active Directory role collection. I created a “Richard Seroter” user in my Active Directory and put that user in a few different Active Directory Groups.


I then browsed to my Windows Azure Website URL and was presented with my Node.js application interface.


I plugged in my credentials and was immediately presented with the list of corresponding Active Directory user group membership information.



That was fun. AD FS is a fantastic way to extend your on-premises directory to applications hosted outside of your corporate network. In this case, we saw how to create a Node.js application that authenticates users against AD FS. While I deployed this sample application to Windows Azure Web Sites, I could have deployed it to ANY cloud that supports Node.js. Imagine having applications written in virtually any language, and hosted in any cloud, all using a single authentication endpoint. Powerful stuff!


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Venkat Gattamneni posted Virtual Network adds new capabilities for cross-premises connectivity on 4/26/2013:

imageLast week we announced the general availability of Windows Azure Infrastructure Services and Virtual Network [see article below].  Virtual Network is a service that allows you to create a private, isolated network in Windows Azure and treat it as an extension of your datacenter. You can assign private IP addresses to virtual machines inside a virtual network, specify DNS, and connect it to your on-premises infrastructure using a Cisco or Juniper hardware VPN device in a ‘site-to-site’ manner.

imageToday, we’re excited to announce that we’re expanding the capabilities of Virtual Network and stepping beyond cross-premises connectivity using hardware VPN devices. 

First, we’re enhancing the existing ‘Site-to-Site VPN’ connectivity so you can use Windows Server 2012 RRAS (Routing and Remote Access) as an on-premises VPN server. This gives you the flexibility of using a software based VPN solution to connect your on-premises network to Windows Azure. Of course, if you prefer the more traditional route, you can still connect a virtual network to your hardware based VPN and we will continue to expand the list of supported VPN devices.

Second, we’ve added a new capability called Point-to-Site VPN which allows you to setup VPN connections between individual computers and a virtual network in Azure. We built this capability based on customer requests and learnings from a preview feature called Windows Azure Connect. Point-to-Site VPN greatly simplifies setting up secure connections between Azure and client machines, whether from your office environment or from remote locations.

Here is an illustration:

Using Point-to-Site VPN enables some new and exciting ways to connect to Windows Azure that are not possible from other cloud providers. Here are a few examples:

  • You can securely connect to your Windows Azure environment from any location. You can connect your laptop to a Windows Azure test and development environment and continue to code away while sipping coffee at an airport café!
  • Small businesses or departments within an enterprise who don’t have existing VPN devices and/or network expertise to manage VPN devices can rely on the Point-to-Site VPN feature to securely connect to workloads running in Windows Azure virtual machines.
  • You can quickly set up secure connections to Windows Azure even if your computers are behind a corporate proxy or firewall.
  • Independent Software Vendors (ISVs) wanting to provide secure access to their cloud apps can leverage the Point-to-Site VPN feature to offer a seamless application experience.

Windows Azure Virtual Network continues to deliver the power of ‘AND’ by providing you the ability to integrate your on-premises environment with Windows Azure. If you want to learn more about Windows Azure Virtual Network, its capabilities and scenarios, you can find more information here. Also, check out Scott Guthrie's blog for a deeper dive on the new features.

Check out our Free Trial and try it for yourself.

See the Virtual Networks: New Point-to-Site Connectivity and Software VPN Device support section of Scott Guthrie’s (@scottgu) Windows Azure: Improvements to Virtual Networks, Virtual Machines, Cloud Services and a new Ruby SDK post of 4/26/2013 in the Windows Azure Infrastructure and DevOps section below:

Software VPN Device support for Site-to-Site

imageWith today’s release we are also adding software VPN device support to our existing ‘Site-to-Site VPN’ connectivity solution (which previously required you to use a hardware VPN device from Cisco or Juniper). Starting today we also now support a pure software based Windows Server 2012 ‘Site-to-Site’ VPN option.  All you need is a vanilla Windows Server 2012 installation. You can then run a PowerShell script that enables the Routing and Remote Access Service (RRAS) on the Windows Server and configures a Site-To-site VPN tunnel and routing table on it. 

image_thumb75_thumb4This allows you to enable a full site-to-site VPN tunnel that connects your on-premises network and machines to your virtual network within Windows Azure - without having to buy a hardware VPN device.

imageI’m glad to see support for software VPN connectivity due to the issues I encountered when attempting to configure a Cisco ASA 5505 Adaptive Security Appliance for VPN networking with Windows Azure, as described in my Configuring a Windows Azure Virtual Network with a Cisco ASA 5505-BUN-K9 Adaptive Security Appliance post of 6/21/2012.

See the Cloud Services: Enabling Dynamic Remote Desktop for a Deployed Cloud Service section of Scott Guthrie’s (@scottgu) Windows Azure: Improvements to Virtual Networks, Virtual Machines, Cloud Services and a new Ruby SDK post of 4/26/2013 in the Windows Azure Infrastructure and DevOps section below:

imageWindows Azure Cloud Services support the ability for developers to RDP into web and worker role instances.  This can be useful when debugging issues.

Prior to today’s release, developers had to explicitly enable RDP support during development – prior to deploying the Cloud Service to production.  If you forgot to enable this, and then ran into an issue in production, you couldn’t RDP into it without doing an app update and redeploy (and then waiting to hit the issue again).

imageWith today’s release we have added support to enable administrators to dynamically configure remote desktop support – even when it was not enabled during the initial app deployment.  This ensures you can always debug issues in production and never have to redeploy an app in order to RDP into it.

How to Enable Dynamic Remote Desktop on a Cloud Service

Remote desktop can be dynamically enabled for all the role instances of a Cloud Service, or enabled for an individual role basis.  To enable remote desktop dynamically, navigate to the Configure tab of a cloud service and click on the REMOTE button:


This will bring up a dialog that enables you to enable remote desktop – as well as specify a user/password to login into it:


Once dynamically enabled you can then RDP connect to any role instance within the application using the username/password you specified for them.


Nuno Godinho (@NunoGodinho) described SharePoint Migration to Windows Azure IaaS in a 4/26/2013 post to the Aditi Technologies blog:

imageSharePoint has become, over the years, an extremely important part of the content management portfolio for many enterprises, and many more organizations could still benefit from it. But it also brings some important challenges in both implementation and management.

Some of the most common are the high costs of both infrastructure and maintenance, the difficulty of enabling disaster recovery for the solution, and enabling geo-location of the SharePoint websites.

imageNow, with Windows Azure IaaS (Infrastructure as a Service), things become easier for SharePoint customers, since some of those challenges are tackled immediately. Infrastructure costs, for example, are an area a lot of customers will look into first, since they are significantly lowered while at the same time there’s a gain in elasticity. But that is not the only point worth considering, since leveraging Windows Azure IaaS enables a lot more: the ability to create a Disaster Recovery strategy for SharePoint quickly and effectively, the ability to create additional environments exactly when they are needed and make the most sense, like Staging and QA, as well as alignment and consistency of the tools from authoring and workflow to approval. When we consider all those aspects, we really understand that this will bring SharePoint to the next level in the product’s lifecycle.

imageSo the wave is coming; now you just need to plan and prepare to start leveraging Windows Azure IaaS with SharePoint, and your ROI will be a lot better than it has been until now, providing better customer service with less investment.

What are your experiences with SharePoint migration? Is Azure IaaS present in your IT roadmap for 2013?

You might also like these blog posts – Challenges in Data Migration to Cloud and Windows Azure IaaS fundamentals.

Nuno Godinho (@NunoGodinho) explained Lessons Learned: Taking the best out of Windows Azure Virtual Machines on 4/22/2013:


imageNow that Windows Azure IaaS offerings are out and generally available, a lot of new workloads can be enabled on Windows Azure. Workloads like SQL Server, SharePoint, System Center 2012, server roles like AD, AD FS and DNS, and even Team Foundation Server. More of the server software currently supported in Windows Azure Virtual Machines can be found here.

But knowing what we can leverage in the Cloud isn’t enough; every feature has its tricks for getting the best out of it. To get the best performance out of Windows Azure Virtual Machines, here is a list of things you should always do, making your life easier and performance a lot better.

1. Place each data disk in a single storage account to improve IOPS

In November 2012, Windows Azure Storage had an update called “Windows Azure’s Flat Network Storage” which provided new scalability targets for blob storage accounts. The target went from 5,000 to 20,000 operations per second, which means a single storage account can now sustain something like 20,000 IOPS.

Having 20,000 IOPS is good, but if several disks for the same virtual machine share one account, those IOPS are shared across all those disks: with 2 disks in the same storage account we get roughly 10,000 IOPS for each one. This isn’t optimal.

So, to achieve optimal performance, we should create each disk in a separate storage account, because that means each disk has its 20,000 IOPS all to itself, not shared with any other disk.
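The sharing arithmetic above can be sketched in a couple of lines (the 20,000 figure is the scalability target mentioned above, treated here as a hard per-account budget purely for illustration):

```javascript
// Roughly how many IOPS each disk gets when disks share one storage account.
function iopsPerDisk(accountIopsTarget, disksInAccount) {
    return Math.floor(accountIopsTarget / disksInAccount);
}

console.log(iopsPerDisk(20000, 2)); // 10000 — two disks sharing one account
console.log(iopsPerDisk(20000, 1)); // 20000 — one disk per account, the recommendation
```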

2. Always use Data Disks for Read/Write intensive operations, never the OS Disk

Windows Azure Virtual Machines have two types of disks: OS disks and data disks. The OS disk’s purpose is to hold the OS installation and other product installation information, but it isn’t a good place for highly read/write-intensive software. For that you should leverage data disks, whose goal is to provide faster read/write capability and to keep that traffic separate from the OS disk.

Since data disks are better than OS disks for this, it’s easy to see why we should always place read/write-intensive workloads on data disks. Just be careful with the maximum number of data disks you can attach to your virtual machine, since it varies by size: 16 data disks is the maximum, and for that you need an extra large virtual machine.

3. Use striped disks to achieve better performance

So we said you should always place your read/write-intensive software on data disks in different storage accounts because of the IOPS you can get, and we said that was 20k IOPS. But is that enough? Can we live with only 20k IOPS per disk?

Sometimes the answer might be yes, but in some cases it won’t be, because we need more. SQL Server or SharePoint, for example, will require a lot more; so how can we get more IOPS?

The answer is data disks striped together. This means you need to understand your requirements and know how many IOPS you’re going to need; based on that, you create several data disks, attach them to the virtual machine, and finally stripe them together as if they were a single disk. To the user of the virtual machine it looks like a single disk, but it’s actually several striped together, which means each part of that “large disk” has 20k IOPS capability.

For example, imagine we’re building a virtual machine for SQL Server, the database is 1 TB in size, and it requires at least 60k IOPS. What can we do?

Option 1: we could create a 1 TB data disk and place the database files there, but that would max out at only 20k IOPS, not the 60k we need.

Option 2: we create 4 data disks of 250 GB each and place each of them in its own storage account. Then we attach them to the virtual machine and, in Disk Management, choose to stripe them together. The virtual machine now sees a 1 TB disk that is actually composed of 4 data disks, so we can get something like a max of 80k IOPS. A lot better than before.
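The sizing logic of Option 2 can be captured in a small helper. This is a back-of-the-envelope sketch assuming ~20,000 IOPS per storage account and one data disk per account; the function name and the per-disk size cap are mine, not part of any Azure API:

```javascript
// Pick enough disks to satisfy BOTH the IOPS requirement and the capacity
// requirement, then report the aggregate IOPS the stripe set can reach.
function stripePlan(totalSizeGB, requiredIops, iopsPerDisk, maxDiskGB) {
    var disksForIops = Math.ceil(requiredIops / iopsPerDisk); // IOPS constraint
    var disksForSize = Math.ceil(totalSizeGB / maxDiskGB);    // capacity constraint
    var disks = Math.max(disksForIops, disksForSize);
    return { disks: disks, aggregateIops: disks * iopsPerDisk };
}

// The SQL Server example above: a 1 TB database needing at least 60k IOPS,
// built from 256 GB disks.
var plan = stripePlan(1024, 60000, 20000, 256);
console.log(plan.disks);         // 4
console.log(plan.aggregateIops); // 80000
```

Note that the capacity constraint (4 disks) dominates here; 3 disks would already cover the 60k IOPS requirement.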

4. Configure Data Disks HostCache for ReadWrite

By now you’ve understood that data disks are your friends, and one way to achieve better performance with them is leveraging the HostCache. Windows Azure provides three options for the data disk HostCache: None, ReadOnly and ReadWrite. Most of the time you’d choose ReadWrite because it provides much better performance: instead of going directly to the data disk in the storage account, some content is served from cache, making effective IOPS even better. But that doesn’t work in all cases. For example, with SQL Server you should never use it, since they don’t play well together; use None instead.

5. Always create VMs inside an Affinity Group or VNET to decrease latency

Another big improvement is to always place the VM inside an affinity group, or a VNET which in turn lives inside the affinity group. This is important because when you’re creating the several storage accounts that will hold data disks, OS disks and so on, you want to make sure latency is reduced as much as possible, and affinity groups provide that.

6. Always leverage Availability Sets to get SLA

Windows Azure Virtual Machines provide a 99.95% SLA, but only if you have 2 or more virtual machines running inside an availability set. So leverage it: always create your virtual machines inside an availability set.

7. Always sysprep your machines

One important part of working with Windows Azure Virtual Machines is creating a generalized machine that we can use later as a base image. Some people ask me: why is this important? Why should I care?

The answer is simple: because we need to be able to quickly provision a new machine when required, and if we have it sysprepped we can use it as a base, reducing installation and provisioning time.

Examples of where we would need this would be for Disaster Recovery and Scaling.

8. Never place intensive read/write information on the Windows System Drive for improved performance

As stated before, OS disks aren’t good for intensive IOPS, so avoid them for read/write-intensive work; leverage data disks instead.

9. Never place persistent information on the Temporary Drive (D:)

Be careful what you place on the temporary drive (D:), since it’s temporary: if the machine recycles, its contents go away, so only place there things that can be deleted without issues. Think IIS temporary files, ASP.NET temp files, or the SQL Server TempDB (this last one has some challenges, but can be done as shown here, and is actually a best practice).


So, in summary, Windows Azure Virtual Machines are a great addition to Windows Azure, but there are a lot of tricks to make them work better, and these are some of them. If you need any help feel free to contact me and I’ll help in any way possible. Best of all, start getting the best out of Windows Azure Virtual Machines today and take your solutions to the next level.

Nick Harris (@CloudNick), Nathan Totten (@ntotten) and Michael Washam (@mwashamms) produced CloudCover Episode 105 - General Availability of Windows Azure Infrastructure as a Service (IaaS) on 4/19/2013 (missed when published):

In this episode Nick Harris and Nate Totten are joined by Senior Windows Azure PMs Michael Washam and Drew McDaniel. Drew and Michael discuss the Windows Azure Virtual Machine general availability announcement, the price reduction, the new VM image RAM and OS disk size increases, updates for security, remote PowerShell, and updates to the PowerShell management cmdlets.

This week in News:

Like Cloud Cover on Facebook!

Follow @CloudCoverShow
Follow @mwashamms
Follow @cloudnick
Follow @ntotten

Benjamin Guinebertière (@benjguin) asked Do I have VM Roles that I should migrate? on 4/19/2013 (missed when published):

imageYou may have received this kind of e-mail and wondered whether you already have VM Roles deployed. There’s a good chance the answer is no, but let’s check.

Dear Customer,

The Windows Azure VM Role preview is being retired on May 15. Please transition to Windows Azure Virtual Machines, and delete your VM Role preview instances as soon as possible.

Thank you for participating in the preview program for the Windows Azure VM Role. Since we started the preview program, we have learned a lot about your needs and use cases. Your feedback and insights have helped fine-tune our approach to infrastructure services. We’ve directed all of that feedback into the design of Windows Azure Virtual Machines, the evolution of VM Role.
On April 16, 2013, we announced the general availability of
Windows Azure Virtual Machines. Virtual Machines provides on-demand scalable compute resources to meet your growing business needs and can extend your existing infrastructure and apps into the cloud. With the general availability of Windows Azure Virtual Machines we are retiring the VM Role preview.


Please migrate from VM Role to Virtual Machines, and delete your running instances of VM Role as soon as possible. You can follow these instructions to migrate to Virtual Machines.
Here are important dates to note:

  • Starting May 15, 2013, calls to create new VM Role deployments will fail.
  • On May 31, 2013, all running VM Role instances will be deleted.

Please note that you will continue to be billed for your VM Role consumption until your running instances are deleted.

Thank you for participating in the VM Role preview program and shaping the future of Windows Azure Virtual Machines! You can find more information on Windows Azure Virtual Machines.

Thank you,
Windows Azure Team

A simple way to check whether you have VM Roles or not is to use the previous portal (the one in Silverlight) which is available at https://windows.azure.com/. From the new portal (https://manage.windowsazure.com) you can choose previous portal in the menu.


From the main page, click on Hosted Services, Storage Accounts & CDN


and VM Images


If you have a VM Role image, this looks like this


In my case, it is not instantiated, since In use is False (I haven’t used VM Roles for years, I think; this image was created in March 2011!), so I don’t have anything to do.

More information is also available at http://msdn.microsoft.com/en-us/library/gg465398.aspx.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

See the Windows Azure SDK for Ruby section of Scott Guthrie’s (@scottgu) Windows Azure: Improvements to Virtual Networks, Virtual Machines, Cloud Services and a new Ruby SDK post of 4/26/2013 in the Windows Azure Infrastructure and DevOps section below:

Windows Azure already has SDKs for .NET, Java, Node.js, Python, PHP and Mobile Devices (Windows 8/Phone, iOS and Android).  Today, we are happy to announce the first release of a new Windows Azure SDK for Ruby (v0.5.0).

Using our new IaaS offering you can already build and deploy Ruby applications in Windows Azure.  With this first release of the Windows Azure SDK for Ruby, you can also now build Ruby applications that use the following Windows Azure services:

  • Storage: Blobs, Tables and Queues
  • Service Bus: Queues and Topics/Subscriptions

If you have Ruby installed, just do a gem install azure to start using it.  Here are some useful links to learn more about using it:

Like all of the other Windows Azure SDKs we provide, the Windows Azure SDK for Ruby is a fully open source project hosted on GitHub. The work to develop this Ruby SDK was a joint effort between AppFog and Microsoft. I’d like to say a special thanks to AppFog and especially their CEO Lucas Carlson for their passion and support with this effort.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Philip Fu posted [Sample Of Apr 25th] Complex Type Objects demo in EF to the Microsoft All-In-One Framework blog on 4/26/2013:

Sample Download: http://code.msdn.microsoft.com/CSEFComplexType-d058a5a3

The code sample illustrates how to work with the Complex Type, which is new in Entity Framework 4.0. It shows how to add Complex Type properties to entities, how to map Complex Type properties to table columns, and how to map a Function Import to a Complex Type.

You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search for samples, download samples on demand, manage the downloaded samples in a centralized place, and be notified automatically about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.

Beth Massi (@bethmassi) described Adding a Signature Control to the LightSwitch HTML Client in a 4/24/2013 post:

LightSwitch is all about building business solutions quickly -- defining your data models & business rules and visually creating screens with a set of built-in controls. It does all the boring plumbing so you can concentrate on the real value of your applications. But LightSwitch also allows for all sorts of customizations so you don’t hit that infamous “glass ceiling”. When the team set out to build the LightSwitch HTML client, they wanted to make sure that the extensibility model was super simple and in line with the way web developers build solutions today.

With the Silverlight client, extension authors have to know about the guts of the LightSwitch extensibility model in order to provide easy-to-use extensions consumable by LightSwitch developers. There are over 100 (and growing) LightSwitch extensions on the Visual Studio Gallery (even a signature control). But with the HTML client we wanted to take advantage of the huge web ecosystem already out there, so adding customizations can be as easy as finding the JavaScript library you want and wiring it up to your app. 

The team has written a variety of articles on incorporating custom controls and data binding in LightSwitch. Here are a few:

In particular, the last one from Joe helped me immensely. Among other things, it describes the two hooks you get for UI customization in the HTML client -- the render and postRender events. If you need to modify DOM elements (i.e. controls) created by LightSwitch, use the postRender event. This allows you to augment the element like adding a CSS class or other DOM attributes. However, if you need to completely take over the rendering yourself then you do this in the render event of a LightSwitch custom control element.
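A toy sketch of that contrast may help (plain objects stand in for DOM elements here; the property and function names are illustrative, not LightSwitch’s actual API): postRender augments an element the framework already built, while render produces the element’s content itself.

```javascript
// Illustrative sketch only -- not LightSwitch's real API surface.
// postRender-style: decorate an element the framework already rendered.
function postRenderExample(element) {
  element.cssClasses.push("signature-highlight"); // augment what exists
  return element;
}

// render-style: take over and supply the element's content entirely.
function renderExample(element, contentItem) {
  element.children = [{ type: "div", id: "signature", value: contentItem.value }];
  return element;
}

var el = { cssClasses: [], children: [] };
postRenderExample(el);
renderExample(el, { value: "sig-data" });
console.log(el.cssClasses[0] + " / " + el.children[0].id);
// → signature-highlight / signature
```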

LightSwitch produces single-page applications (SPAs) based on jQuery and jQueryMobile so there are a plethora of plugins available for you to use in LightSwitch. Which ones you use totally depends on what you want to provide to your users and the devices that you need to support. In this post I want to show you a quick way of incorporating a signature control based on the jSignature plugin.

You can download the full source code for my example here.

Add the Library to your LightSwitch Project

The first step is to grab the control you want. Search the web, look in a catalog, ask a friend, browse NuGet right from within Visual Studio, or (heaven forbid) write one yourself! With any custom code you bring into your application you’ll want to make sure it works for you. jSignature has a live demo, so you can test that it works on the devices you want to support before doing anything. This particular library says it works well on many devices, has a nice import/export feature, and good documentation, so that’s why I picked this one for this example.

Once you download jSignature, extract the ZIP. With your LightSwitch project open, flip to File View in Solution Explorer and drag the jSignature.min.js library into your Scripts folder under your HTML client project.


Next open the default.htm and add the script reference:


Now you’re ready to use the signature control in your LightSwitch app.

Add the Custom Control

For this example I’ve created a simple data model for tracking work orders. Once an employee completes a work order, they need to sign it. There are many ways to store the jSignature data in LightSwitch. jSignature has a variety of formats it supports so have a look at the documentation. For this example I’ll show how to store it as an Image using the Image Business Type as well as storing it in a much more compressed base30 string that can be reloaded back into the jSignature control.

So our data model looks like this. Note the two fields we’ll be using for our custom signature controls, SignatureImage and SignatureVector. I made SignatureVector of length 2000 to be on the safe side, but I’d imagine most signatures wouldn’t need that much space.


Next we’ll add a couple of Add/Edit Details screens that work with these fields. Add a new screen, select Add/Edit Details, and for the Screen Data select WorkOrder. Name the first one “SignWorkOrderImage”, then add another one called “SignWorkOrderVector”.


Now in the Screen Designer, make the content tree how you like. For the SignWorkOrderImage screen I am going to place the SignatureImage on the screen twice, but one of them will be our custom control. There are a few ways you can add custom controls to screens. If your control isn’t specific to a particular field, click “Add Layout Item” at the top of the Screen Designer, select “Custom Control”, and then specify the “screen” as the binding path.

However, since our binding path will be the SignatureImage field, just select the content item in the tree and change it to a “Custom Control”.


Next set the height and width of both the Signature controls to “Fit to Content”.


I’ll also set the display name of the custom control to something a little more prominent: “SIGN HERE:”. So for the SignWorkOrderImage screen, the content tree looks like this.


Similarly, for the SignWorkOrderVector screen, we’ll be working with the SignatureVector field. For testing, I’d like to actually see what the vector data is so I’ll also add a textbox to display that under the custom control. So the content tree for this screen looks like this.


Next we need to write some code in the custom control’s render events.

Working with Signatures as Images

Every control library will be different so you’ll need to learn the capabilities of the control itself before you can write code to render it. But once you’ve worked through the demo it’s pretty easy to figure out this particular control.

The only thing that you need to keep in mind with regard to LightSwitch is that the DOM we see inside our _render method is not the “live” DOM; it’s the DOM that exists before jQueryMobile expands it. Because of this, we need to wait until the DOM is fully rendered before we can initialize the control and hook up the change notifications. Joe taught us how to do this using the setTimeout() function.
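The timing trick can be illustrated with a stripped-down sketch (no real DOM involved; the log entries just mark ordering). Anything queued with setTimeout(fn, 0) runs only after the current rendering pass returns, which is why initialization code placed there sees the fully expanded DOM:

```javascript
// Minimal sketch of the deferred-initialization pattern (names illustrative).
var log = [];

function render(element, contentItem) {
  log.push("render: build placeholder DOM");
  // setTimeout(..., 0) queues this behind the current rendering pass,
  // so it runs only once the framework has expanded the live DOM.
  setTimeout(function () {
    log.push("deferred: initialize widget, hook change events");
  }, 0);
  log.push("render: return control to the framework");
}

render({}, {});
// The deferred step has NOT run yet at this point:
console.log(log.join(" | "));
// → render: build placeholder DOM | render: return control to the framework
```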

What gets passed to us is the DOM element (element) and the data item we’re working with (contentItem). First create the control by specifying a <DIV> with an ID of “signature” and add it to the DOM. Then inside the setTimeout function we can initialize the control and hook up the change listener. In order to get the data as an image from the jSignature control, we call getData and pass in the type we want back. The method returns an array containing the type and the actual image data (e.g. img[1]).

myapp.SignWorkOrderImage.Signature_render = function (element, contentItem) {

    //Create the control & attach to the DOM
    var sig = $("<div id='signature'></div>");
    sig.appendTo($(element));

    setTimeout(function () {
        //Initialize and start capturing
        sig.jSignature();

        // Listen for changes made via the custom control and update the 
        // content item whenever it changes.  
        sig.bind("change", function (e) {
            var img = sig.jSignature("getData", "image");
            if (img != null) {
                if (contentItem.value != img[1]) {
                    contentItem.value = img[1];
                }
            }
        });
    }, 0);
};

We can also add a button to the screen to clear the signature – just specify this code in the button’s execute method.

myapp.SignWorkOrderImage.ClearSignature_execute = function (screen) {
    // Remove all strokes and start a fresh signature
    $("#signature").jSignature("reset");
};

So when we run this screen we will see the signature control and as we perform each stroke, our image control is updated. If we save this, it will save as an image to the database.


Working with Signatures as Vectors

Saving the image directly to the database is quick and handy but the jSignature control documentation warns that it’s not supported on older versions of Android. Also the images won’t scale particularly well. So it’s probably better to store the data as vector data or something that can be re-rendered later. Like I mentioned before, there are a lot of formats you can choose from, even SVG and Base64 formats. Instead of storing the data as an Image business type in LightSwitch, change it to a standard Binary and you’re good to go.

But since the base30 format that jSignature provides is the most compact, we’ll use that for the next example. This format also allows us to load it into the jSignature control when we open the screen as well.

myapp.SignWorkOrderVector.Signature_render = function (element, contentItem) {

    //Create the control & attach to the DOM
    var sig = $("<div id='signature'></div>");
    sig.appendTo($(element));

    setTimeout(function () {
        //Initialize control and set initial data       
        sig.jSignature();
        if (contentItem.value != null) {
            sig.jSignature("setData", "data:" + contentItem.value);
        }

        // Listen for changes made via the custom control and update the  
        // content item whenever it changes.  
        sig.bind("change", function (e) {
            var data = sig.jSignature("getData", "base30");
            if (data != null) {
                // getData returns [mimetype, data]; join into one string
                data = data.join(",");
                if (contentItem.value != data) {
                    contentItem.value = data;
                }
            }
        });
    }, 0);
};

When we run this screen you can see the much smaller payload for the signature. We also see the control loaded with data when we reopen it.


Wrap Up

That’s it! As you can see, adding UI customizations takes just a little knowledge of JavaScript and understanding of how LightSwitch renders the DOM. In fact, I’m learning JavaScript & jQuery through LightSwitch and I think I have just enough knowledge to be dangerous productive now :-)

Keep in mind that not all devices support all the controls out there so your mileage may vary. But I hope I demonstrated how you can easily customize the LightSwitch HTML client.

Paul van Bladel (@paulbladel) began a series with Flexible CSV exports over web-api with server side MEF. (part 1) on 4/24/2013:


Since the introduction of LightSwitch V3, the number of plumbing options we have in LightSwitch has increased drastically, especially when using it in combination with proven technologies like web-api and the new ServerApplicationContext in LightSwitch V3.

Today, we’ll add another technology: the Managed Extensibility Framework. Not just for the sake of the technology, but to solve a real-life problem: flexible exports.

What do I mean by flexible exports?

Let’s first clearly state that an export is not the same as a report. An export simply retrieves data from the server and presents it to the user in a tabular structure that can be opened in Excel (the most portable format for this is CSV, comma-separated values).
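As a concrete illustration of the CSV half of the story (a generic sketch, not code from this article), projected rows become CSV text, with fields containing commas or quotes escaped per the usual convention:

```javascript
// Sketch: serialize projected rows to CSV, quoting fields that contain
// commas, quotes, or newlines (standard CSV escaping).
function toCsv(rows) {
  var esc = function (v) {
    var s = String(v);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  var headers = Object.keys(rows[0]);
  var lines = [headers.join(",")];
  rows.forEach(function (r) {
    lines.push(headers.map(function (h) { return esc(r[h]); }).join(","));
  });
  return lines.join("\n");
}

console.log(toCsv([{ FullName: "Smith, John", OrderCount: 3 }]));
// prints a header row plus one record, with the comma-containing name quoted
```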

The simplest export is the one that is available out of the box in LightSwitch. This export is OK for simple usage, but in most cases the user would like a richer set of potential export definitions to select from.

So, wouldn’t it be nice that instead of a simple Customer export, the user could have some more options when she clicks the export button:


Obviously, we want to reuse the web-api approach I documented over here: reporting via web-api.

In this post I want to focus on an elegant way to define the different export definitions.

It’s all about projection strategies

In order to come up with a good approach, let’s first focus on what differs between the above (potentially oversimplified) export definitions.

Basically, the three definitions are just variations on the applied “projection strategy”. A projection is a prominent LINQ concept: when you do a select new (which is the core element in the web-api based exporting solution I referred to above) you are applying a projection.

An example can clarify:

        public Expression<Func<Customer, CustomerCSV1>> GetProjection()
        {
            return (Customer c) => new CustomerCSV1 { FullName = c.FirstName + " " + c.LastName };
        }

The above projection goes together with following POCO class.

    public class CustomerCSV1
    {
        public string FullName { get; set; }
    }

So, our customer entity is projected into a new POCO type (CustomerCSV1) which massages the data into a structure with only a FullName field (a concatenation of FirstName and LastName). In mathematical terms, the Customer class is the source (the domain) and the CustomerCSV1 class is the destination (the codomain). The data is projected from source to destination.
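The projection idea can also be seen outside of LINQ; here is a plain-JavaScript analogue (illustrative only, with made-up sample data), where the same mapping function is applied to each source record to produce the export shape:

```javascript
// Illustrative only: the article's real projections are C#/LINQ expressions.
// Sample source data (hypothetical customers):
var customers = [
  { FirstName: "Ada", LastName: "Lovelace", Orders: [1, 2] },
  { FirstName: "Alan", LastName: "Turing", Orders: [] }
];

// The "projection": map each source Customer onto a smaller export shape.
var projection = function (c) {
  return { FullName: c.FirstName + " " + c.LastName };
};

var rows = customers.map(projection);
console.log(JSON.stringify(rows));
// → [{"FullName":"Ada Lovelace"},{"FullName":"Alan Turing"}]
```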

That’s easy. Another projection can incorporate the underlying orders of the customer:

        public Expression<Func<Customer, CustomerCSV3>> GetProjection()
        {
            return (Customer c) => new CustomerCSV3 { FullName = c.FirstName + " " + c.LastName, OrderCount = c.Orders.Count() };
        }

Based on the following domain POCO:

    public class CustomerCSV3
    {
        public string FullName { get; set; }
        public int OrderCount { get; set; }
    }

It is clear that even the most flexible export solution cannot avoid the fact that the above projection strategy and the corresponding POCO have to be created in code. But our goal is that this is the only thing we need to do: new projection strategies should be resolved by the system automatically!

That’s why we need the Managed Extensibility Framework.

Sometimes people compare MEF with an IoC (inversion of control) container, but that’s not completely accurate. MEF brings imports and exports (er… not data exports as above, but code functionality, of course) together by means of attribute decoration. But I’ve said too much already; no MEF tutorial here. Use your browser.
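A rough analogue of what MEF’s attribute-based discovery buys us (illustrative JavaScript, not the .NET implementation): a registry where each projection strategy announces itself by name, so the export-selection UI can simply list whatever the registry contains; adding a strategy is a single registration.

```javascript
// Hypothetical sketch of the discovery idea; MEF does this in .NET via
// [Export]/[ExportProjection] attributes rather than explicit registration.
var projectionRegistry = {};
function exportProjection(name, fn) { projectionRegistry[name] = fn; }

// Each strategy registers itself under a meaningful name:
exportProjection("Full name", function (c) {
  return { FullName: c.FirstName + " " + c.LastName };
});
exportProjection("First Name only", function (c) {
  return { FirstName: c.FirstName };
});

// The export-selection window just lists the registry's keys:
console.log(Object.keys(projectionRegistry).join(", "));
// → Full name, First Name only
```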

Adding a new strategy to our CSV export machinery would look as follows:

    [Export(typeof(IProjection))]
    [ExportProjection("First Name only")]
    public class CustomerFirstNameProjection : IProjection<Customer, CustomerCSV2>
    {
        public Expression<Func<Customer, CustomerCSV2>> GetProjection()
        {
            return (Customer c) => new CustomerCSV2 { FirstName = c.FirstName };
        }
    }

As you can see we decorated the CustomerFirstNameProjection with two attributes:

  • [Export(typeof(IProjection))]: tells MEF that we want to include this strategy in our export repository. By doing so, our export selection window is automatically updated with another export template!
  • [ExportProjection("First Name only")]: we want to provide the export strategy a meaningful name.
What’s next?

The implementation of the above. Be prepared for pretty technical code, but remember it’s infrastructure code; the goal is to write less code when using it!

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Scott Guthrie (@scottgu) announced Windows Azure: Improvements to Virtual Networks, Virtual Machines, Cloud Services and a new Ruby SDK on 4/26/2013:

This morning we released some great enhancements to Windows Azure. These new capabilities include:

  • Virtual Networks: New Point-to-Site Connectivity (very cool!), Software VPN Device and Dynamic DNS Support
  • Virtual Machines: Remote PowerShell and Linux SSH provisioning enhancements
  • Cloud Services: Enable Remote Desktop Support Dynamically on Web/Worker Roles
  • Ruby SDK: A new Windows Azure SDK support for Ruby

All of these improvements are now available to start using immediately (note: some services are still in preview). Below are more details on them:

Virtual Networks: New Point-to-Site Connectivity and Software VPN Device support

Last week we announced the general availability of Virtual Network support as part of our IaaS release.

Virtual Networking allows you to create a private, isolated network in Windows Azure and treat it as an extension of your on-premises datacenter. For example, you can assign private IP addresses to virtual machines inside a virtual network, specify a DNS, and securely connect it to your on-premises infrastructure using a VPN device in a site-to-site manner.

Here’s a visual representation of a typical site-to-site scenario through a secure Site-To-Site VPN connection:


Today, we are excited to announce that we’re expanding the capabilities of Virtual Networks even further to enable three new scenarios:

New Point-To-Site Connectivity

With today’s release we’ve added an awesome new feature that allows you to setup VPN connections between individual computers and a Windows Azure virtual network without the need for a VPN device. We call this feature Point-to-Site Virtual Private Networking. This feature greatly simplifies setting up secure connections between Windows Azure and client machines, whether from your office environment or from remote locations. 

It is especially useful for developers who want to connect to a Windows Azure Virtual Network (and to the individual virtual machines within it) from either behind their corporate firewall or a remote location. Because it is point-to-site they do not need their IT staff to perform any activities to enable it, and no VPN hardware needs to be installed or configured.  Instead you can just use the built-in Windows VPN client to tunnel to your Virtual Network in Windows Azure.  This tunnel uses the Secure Sockets Tunneling Protocol (SSTP) and can automatically traverse firewalls and proxies, while giving you complete security.

Here’s a visual representation of the new point-to-site scenarios now enabled:


In addition to enabling developers to easily VPN to Windows Azure and directly connect to machines, the new Point-to-Site VPN support enables some other cool new scenarios:

  • Small businesses (or departments within an enterprise) who don’t have existing VPN devices and/or network expertise to manage VPN devices can now rely on the Point-to-Site VPN feature to securely connect to their Azure deployments. Because the VPN software to connect is built-into Windows it is really easy to enable and use.
  • You can quickly set up secure connections without the involvement from the network administrator, even if your computers are behind a corporate proxy or firewall. This is great for cases where you are at a customer site or working in a remote location (or a coffee shop). 

How to Enable the Point-to-Site Functionality

With today’s release we’ve updated the Virtual Network creation wizard in the Portal so that you can now configure it to enable both ‘Site-to-Site’ and ‘Point-to-Site’ VPN options.  Create a Virtual Network using the “Custom Create” option to enable these options:


Within the Virtual Network Custom Create wizard you can now click a checkbox to enable either the Point-To-Site or Site-To-Site Connectivity options (or both at the same time):


On the following screens you can then specify the IP address space of your Virtual Network.  Once the network is configured, you will create and upload a root certificate for your VPN clients, start the gateway, and then download the VPN client package.  You can accomplish these steps using the “Quick Glance” commands on the Virtual Network dashboard as well as the “Create Gateway” button on the command-bar of the dashboard.  Read this tutorial on how to “Configure a Point-to-Site VPN in the Management Portal” for detailed instructions on how to do this.

After you finish installing the VPN client package on your machine, you will see a new connection choice in your Windows Networks panel.  Connecting to this will establish a secure VPN tunnel to your Windows Azure Virtual Network:


Once you connect you will have full IP level access to all virtual machines and cloud services hosted in your Azure virtual network!  No hardware needs to be installed to enable it, and it works behind firewalls and proxy servers.  Additionally, with this feature, you don’t have to enable public RDP endpoints on virtual machines to connect to them - you can instead use the private IP addresses of your virtual private network to RDP to them through the secure VPN connection.

For detailed instructions on how to do all of the above, please read our tutorial on how to “Configure a Point-to-Site VPN in the Management Portal”.

Software VPN Device support for Site-to-Site

With today’s release we are also adding software VPN device support to our existing ‘Site-to-Site VPN’ connectivity solution (which previously required you to use a hardware VPN device from Cisco or Juniper). Starting today we also now support a pure software based Windows Server 2012 ‘Site-to-Site’ VPN option.  All you need is a vanilla Windows Server 2012 installation. You can then run a PowerShell script that enables the Routing and Remote Access Service (RRAS) on the server and configures a Site-to-Site VPN tunnel and routing table on it. 

This allows you to enable a full site-to-site VPN tunnel that connects your on-premises network and machines to your virtual network within Windows Azure - without having to buy a hardware VPN device.

Dynamic DNS Support

With today’s release we have also relaxed restrictions around DNS server setting updates in virtual networks. You can now update the DNS server settings of a virtual network at any time without having to redeploy the virtual network and the VMs in them. Each VM in the virtual network will pick up the updated settings when the DNS is refreshed on that machine, either by renewing the DNS settings or by rebooting the instance.  This makes updates much simpler.

If you’re interested further in Windows Azure Virtual Networks, and the capabilities and scenarios it enables, you can find more information here.

Virtual Machines: Remote PowerShell and Linux SSH provisioning enhancements

Last week we announced the general availability of Virtual Machine support as part of our IaaS release. With today’s update we are adding two nice enhancements:

Support for Optionally Enabling Remote PowerShell on Windows Virtual Machines

With today’s update, we now enable you to configure whether remote PowerShell is enabled for Windows VMs when you provision them using the Windows Azure Management Portal. This option is now available when you create a Virtual Machine using the FROM GALLERY option in the portal:


The last step of the wizard now provides a checkbox that gives you the option of enabling PowerShell Remoting:


When the checkbox is selected the VM enables remote PowerShell, and a default firewall endpoint is created for the deployment.  This enables you to have the VM immediately configured and ready to use without ever having to RDP into the instance.

Linux SSH Provisioning

Previously, Linux VMs provisioned using Windows Azure defaulted to using a password as their authentication mechanism – with provisioning Linux VMs with SSH key-based authentication being optional. Based on feedback from customers, we have now made SSH key-based authentication the default option and allow you to omit enabling a password entirely if you upload a SSH key:


Cloud Services: Enabling Dynamic Remote Desktop for a Deployed Cloud Service

Windows Azure Cloud Services support the ability for developers to RDP into web and worker role instances.  This can be useful when debugging issues.

Prior to today’s release, developers had to explicitly enable RDP support during development – prior to deploying the Cloud Service to production.  If you forgot to enable this, and then ran into an issue in production, you couldn’t RDP into it without doing an app update and redeploy (and then waiting to hit the issue again).

With today’s release we have added support to enable administrators to dynamically configure remote desktop support – even when it was not enabled during the initial app deployment.  This ensures you can always debug issues in production and never have to redeploy an app in order to RDP into it.

How to Enable Dynamic Remote Desktop on a Cloud Service

Remote desktop can be dynamically enabled for all the role instances of a Cloud Service, or enabled for an individual role basis.  To enable remote desktop dynamically, navigate to the Configure tab of a cloud service and click on the REMOTE button:


This will bring up a dialog that enables you to enable remote desktop – as well as specify a user/password to login into it:


Once dynamically enabled you can then RDP connect to any role instance within the application using the username/password you specified for them.

Windows Azure SDK for Ruby

Windows Azure already has SDKs for .NET, Java, Node.js, Python, PHP and Mobile Devices (Windows 8/Phone, iOS and Android).  Today, we are happy to announce the first release of a new Windows Azure SDK for Ruby (v0.5.0).

Using our new IaaS offering you can already build and deploy Ruby applications in Windows Azure.  With this first release of the Windows Azure SDK for Ruby, you can also now build Ruby applications that use the following Windows Azure services:

  • Storage: Blobs, Tables and Queues
  • Service Bus: Queues and Topics/Subscriptions

If you have Ruby installed, just do a gem install azure to start using it.  Here are some useful links to learn more about using it:

Like all of the other Windows Azure SDKs we provide, the Windows Azure SDK for Ruby is a fully open source project hosted on GitHub. The work to develop this Ruby SDK was a joint effort between AppFog and Microsoft. I’d like to say a special thanks to AppFog and especially their CEO Lucas Carlson for their passion and support with this effort.


Today’s release includes a bunch of nice features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps.

Hopefully, Scott’s team can maintain the pace of this flurry of new Windows Azure feature announcements and win the innovation race with Amazon Web Services.

Darryl K. Taft (@darrylktaft) reported Opscode ChefConf 2013 Brings Chef Open-Source Automation to Light [for Windows Azure] in a 4/26/2013 post to eWeek’s Developer blog:

imageOpscode, provider of the Chef open-source automation platform, is wrapping up its second annual user conference, ChefConf, marking increased momentum for the company and platform, including partnerships with the likes of IBM and Microsoft.

ChefConf 2013 ran April 24 through 26 in San Francisco, where a sell-out crowd of 700 attendees gathered for technical sessions, leadership sessions, workshops and presentations from innovators at Disney, Facebook, Forrester Research, General Electric, Nordstrom and many more about using Opscode Chef as the automation platform for what Opscode refers to as the coded business.

Jay Wampold, Opscode's vice president of marketing, said that as IT has become the touchpoint for businesses to interact with consumers, code is central to that interaction. Any business that takes this approach is considered a coded business, he said. "It started with companies like Amazon, Google, Facebook and Yahoo, but it's moving into other companies and enterprises; we're seeing all these businesses based on code," Wampold told eWEEK.

Opscode announced it is collaborating with both IBM and Microsoft in creating open-source automation solutions for leveraging both the public and private cloud as a catalyst in accelerating time-to-market and reducing business risk.

Meanwhile, Opscode is collaborating with Microsoft Open Technologies to deliver a series of Chef Cookbooks providing cloud infrastructure automation capabilities for Microsoft Azure. The companies released new Cookbooks for automating Drupal and WordPress deployments on Windows Azure. Opscode also announced that Chef provides integration with the new, generally available version of Windows Azure Infrastructure Services. By combining Opscode Chef with Windows Azure, users can automate everything from server provisioning and configuration management to continuous delivery of infrastructure and applications. …

Read more.

James Conard (@jamescon) presented the Visual Studio Live! 2013 Las Vegas Day 1 Keynote on 3/27/2013, and the conference’s sponsor, 1105 Media, made YouTube videos of the keynote available shortly thereafter:

  • Video Studio Live! James Conard Keynote, Part 1

  • Video Studio Live! James Conard Keynote, Part 2

  • Video Studio Live! James Conard Keynote, Part 3

  • Video Studio Live! James Conard Keynote, Part 4

David Linthicum (@DavidLinthicum) asserted “The OpenStack Foundation will crack down on vendors that use the OpenStack label but don't live up to the standard” in a deck for his Beware the fake OpenStack clouds post of 4/19/2013 (missed when published):

According to Nancy Gohring at IT World, the OpenStack Foundation is starting to call out incompatible clouds. "Get ready for the OpenStack Foundation to start cracking down on service providers that call their clouds OpenStack but aren't actually interoperable," she wrote. "The first companies that may be in the foundation's crosshairs: HP and Rackspace."

Rackspace is an early innovator with OpenStack, and Hewlett-Packard could be the largest OpenStack provider by the end of the year. It's interesting that they're the first to be accused of compatibility issues.

HP and Rackspace have both fired back with responses to the compatibility allegations:

  • "HP Cloud Services adheres to OpenStack's interoperability guidelines. We are committed to OpenStack and are compatible with OpenStack APIs. In addition, we have a policy of not introducing proprietary API extensions. HP is supporting core OpenStack APIs and we have not added our own proprietary API extensions, therefore this ensures our interoperability with other OpenStack deployments," HP said in a statement.
  • Rackspace wrote a blog post saying it hopes to adhere to the letter of the OpenStack standard by 2014. "While we believe some variation in implementations will be inevitable, we do want to eliminate as many of these as possible to provide as much of a common OpenStack experience as we can," wrote Rackspace's Troy Toman.

At the OpenStack Summit this week, we saw more energy behind the use of OpenStack, but also some finger-pointing around living up to the letter of the standard. The truth is that large technology providers have a poor history of remaining confined to standards. As the market heats up, larger providers that have more resources, such as HP and Rackspace, may find it irresistible to add their own proprietary API extensions.

We may even find public and private cloud technology providers going off the OpenStack reservation if they believe they can do better outside the standard. The OpenStack Foundation will have the unenviable job of policing the use of both the standard and the OpenStack brand. Indeed, there could be a few court battles in the future, which has also been part of the history of many technology standards as vendors seek to capitalize on them and offer something "special" at the same time.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today

<Return to section navigation list>

Cloud Security, Compliance and Governance

PRNewswire reported Microsoft updates Business Associate Agreement to address new HIPAA requirements and help enable healthcare organizations to maintain compliance in the cloud on 4/25/2013:

REDMOND, Wash., April 25, 2013 /PRNewswire/ -- Microsoft Corp. today announced the release of a new, revised version of its HIPAA Business Associate Agreement (BAA) for the company's next-generation cloud services. This enables customers in the healthcare industry to leverage cloud solutions to coordinate care, improve patient health outcomes, and maintain compliance with privacy and security regulations issued under the U.S. Health Insurance Portability and Accountability Act (HIPAA) of 1996.

Addressing HIPAA is embedded in the DNA of Microsoft's cloud solutions, and Microsoft updated its BAA to help healthcare organizations address compliance for the final omnibus HIPAA rule, which went into effect March 26. Microsoft's updated BAA covers Office 365, Microsoft Dynamics CRM Online and Windows Azure Core Services. [Emphasis added.]

"Team communication and collaboration is the lifeblood of the health industry, and more and more healthcare organizations are realizing the productivity, care team communications and cost-savings benefits of cloud computing," said Dennis Schmuland, chief health strategy officer, U.S. Health & Life Sciences, Microsoft. "Microsoft Office 365 is the only major cloud business productivity solution to programmatically offer a BAA built with the industry, and for the industry, to HIPAA-regulated customers, allowing healthcare organizations to be confident in the security and privacy of their patient data while empowering their staff to communicate and collaborate virtually anytime and almost anywhere."

Microsoft collaborated with some of the leading U.S. medical schools and their HIPAA privacy counsel, as well as other public- and private-sector HIPAA-covered entities, in creating a BAA for its cloud services.

The refreshed BAA aligns with new regulatory language included in the final omnibus HIPAA rule, such as the new definition of a Business Associate, which includes any entity that maintains protected health information on behalf of a HIPAA-covered entity and has access to such data, even if it does not view the data. It also covers important data protections, such as Microsoft's reporting requirements in accordance with the HIPAA Breach Notification Rule, and Microsoft's obligation to require its subcontractors who create, receive, maintain or transmit protected health information to agree to the same restrictions and conditions imposed on Microsoft pursuant to the applicable requirements of the HIPAA Security Rule. Allscripts is among the first organizations to leverage Microsoft's updated BAA.

"We have programmatically offered a BAA for our healthcare customers since the launch of Office 365 nearly two years ago and have subsequently included our other cloud offerings such as Microsoft Dynamics CRM Online and Windows Azure Core Services under the BAA," said Hemant Pathak, assistant general counsel, Microsoft. "Addressing the clarifications and changes incorporated in the final omnibus HIPAA rule reaffirms Microsoft's commitment to comply with security and privacy requirements and maintain its status as a transparent and trusted data steward for healthcare organizations leveraging the cloud."

Office 365 is the first and only major cloud business productivity service to adopt the rigorous requirements of the federal government's HIPAA Business Associate standards. Where the provision of services include storage of and access to electronic protected health information by a cloud provider, Microsoft's substantial commitment to compliance helps healthcare customers placing protected health information in the cloud avoid potentially significant liability under HIPAA for failure to comply with applicable HIPAA contracting and safeguard requirements. …

<Return to section navigation list>

Cloud Computing Events

Magnus Mårtensson (@noopman) updated Maarten Balliauw’s (@maartenballiauw) Global Windows Azure Bootcamp! post on 4/27/2013:

Welcome to Global Windows Azure Bootcamp! (#GlobalWindowsAzure) On April 27th, 2013, you’ll have the ability to join a Windows Azure Bootcamp at a location close to you. Why? We’re organizing bootcamps globally, that’s why!

Learn about locations, install the necessary prerequisites and get excited!

This one day deep dive class will get you up to speed on developing for Windows Azure. The class includes a trainer with deep real world experience with Windows Azure, as well as a series of labs so you can practice what you just learned.

Copenhagen, Denmark done!

Sitting quite happy on the train back from the Danish GlobalWindowsAzure Bootcamp. Almost 10 people defied the sunny spring weather on a Saturday in Denmark to attend our location, which was most impressive! I am very happy with all of the good questions that got asked – most of them got answers too. It went really well, [...]

Planes, trains and automobiles 4 GlobalWindowsAzure!

On the train to “my” GWAB location in Copenhagen, Denmark I cross a bridge-tunnel* to get there. It occurs to me what a great logistical thing Global Windows Azure is! People use all modes of transportation to get to their respective events all over the World. I have heard that some even fly in to the events [...]

Europe time zones locations are about to go live!

It’s early morning now in Europe. With the coming morning there it is sure to turn the Global-Wicked-Madness that is #GlobalWindowsAzure into a Wild-And-Crazy-Global-Wicked-Madness! 26 European and 6 African and Asian locations are in those time zones. This will add to the 25 Asian locations already in full swing, moving #GlobalWindowsAzure past the 50 location [...]

Alan Smith posted Global Windows Azure Bootcamp: Please Bring Your Kinect on 4/26/2013:

Originally posted on: http://geekswithblogs.net/asmith/archive/2013/04/26/152793.aspx

I’m just putting the finishing touches on the Global Render Lab for the Global Windows Azure Bootcamp. The lab will allow bootcamp attendees around the world to join together to create a render farm in Windows Azure that will render 3D ray-traced animations created using depth data from Kinect controllers.

There is a webcast with an overview of the Global Render Lab here.

If you are attending a Global Windows Azure Bootcamp event you will have the chance to deploy an application to Windows Azure that will contribute processing power to the render farm. You will also have the chance to create animations that will be rendered in Windows Azure and published to a website.

A Kinect controller will be required to create animations. If you have either a Windows Kinect, or an Xbox Kinect with a power supply and an adapter for a standard USB connection, please take it with you to the Global Windows Azure Bootcamp event you are attending. Having as many locations as possible where attendees can create and upload animations will make for a great community lab on a global scale.

Jim O’Neil (@jimoneil) suggested Windows Azure – Discover the Cloud and Win Cash! on 4/26/2013:

Over the past few months, I’ve been working with a lot of student and professional developers enabling them to take advantage of promotions for building applications for Windows 8 and Windows Phone, such as the ongoing Keep the Cash.  If your head’s a bit more in the cloud – like the Windows Azure cloud – you might have felt a tad left out, but no longer!

Code Project is running the Windows Azure Developer Challenge, in which you could share in over $16,000 in prizes, including a Surface RT, $2,500, or ‘spot prizes’ of $100.

The contest is actually a series of FIVE challenges, each lasting two weeks, but hurry: the first challenge ends this Sunday, the 28th! Each challenge has you progress through a key feature or component of Windows Azure, incorporating it into your own idea for a web application that will be hosted on Azure, and each step of the way you provide a narrative of your journey – what you learned, how you leveraged specific features, etc.

You’ll definitely want to check out the contest site for the full details and terms and conditions, but for your planning, here are the five challenges along with their run dates:

    • Getting Started: April 15th – 28th
    • Build a website: April 29th – May 12th
    • Using SQL on Azure: May 13th – May 26th
    • Virtual Machines: May 27th – June 9th
    • Mobile access: June 10th – June 24th

Grab your free, 90-day trial account now and get started!

The Windows Azure Team (@WindowsAzure) posted video archives of the Windows AzureConf 2013 keynote and sessions to Channel9 on 4/23/2013:

Welcome to Windows AzureConf, a free event for the Windows Azure community. This event features a keynote presentation by Scott Guthrie, along with numerous sessions executed by Windows Azure community members. After the keynote, two concurrent sets of sessions will be streamed live for an online audience right here on Channel 9, which will allow you to see how developers just like you are using Windows Azure to develop applications in the cloud.

Community members from all over the world join Scott in the Channel 9 studios to present their own ideas, innovations, inventions and experiences. These presentations will provide you the opportunity to see how your peers in the community are doing great things using Windows Azure offerings like Mobile Services, Web Sites, Service Bus, virtual machines, and more. Whether you're just learning Windows Azure or you've already achieved success on the platform, you won't want to miss this special event.

Taking Control of Your Windows Azure Services by Michael Collier

Have you ever needed to take control of your Windows Azure services but found some of the higher-level tooling just didn't quite fit your needs? While there are many ways to manage your Windows Azure services (PowerShell, Visual Studio, web portal, etc.), it is the Windows Azure Service Management API...
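For context, the Service Management API the session centers on is plain REST over HTTPS, authenticated with a management certificate and versioned with an x-ms-version header. A minimal sketch (the certificate file name and subscription ID below are placeholders, not from the session) that lists a subscription's cloud services:

```shell
# Hedged sketch: list the hosted (cloud) services in a subscription via
# the Windows Azure Service Management REST API. Assumes mgmt-cert.pem
# holds a management certificate already uploaded to the subscription;
# <subscription-id> is a placeholder.
curl --cert mgmt-cert.pem \
     --header "x-ms-version: 2012-03-01" \
     "https://management.core.windows.net/<subscription-id>/services/hostedservices"
```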

Windows Azure Web Sites - Things They Don't Teach Kids in School by Maarten Balliauw

Microsoft has a cloud platform which runs .NET, NodeJS and PHP. All 101 talks out there will show you the same: it's easy to deploy, it scales out on demand and it runs WordPress. Great! But what about doing real things? In this session, we'll explore the things they don't teach kids in school. How about trying to find out the architecture of this platform? What about the different deployment options for Windows Azure Web Sites, the development flow and some awesome things you can do with the command line tools? Did you know you can modify the automated build process? Join me in this exploration of some lesser known techniques of the platform.

From Collocated Servers to Windows Azure Web Sites in Three Days by Joey Schluckter

The problem was that our platform was slow and couldn't support the growth we expected. We knew we needed to move the platform to the cloud and in a timely manner. After moving our core website to Azure, we decided to experiment with Azure websites on some of the platform features. For our 'Showcase' Facebook app we chose node.js. We started coding on Friday and demoed the application on the following Wednesday. The previous version of the 'Showcase' app took several months to complete. Since then we have moved several other platform features over to WAWS. The flexibility and agility they offer us make it an easy decision.

Essential IaaS for Developers by Vishwas Lele

Why should developers care about IaaS? Think of your Windows Azure Datacenter as an Object Model (OM). The "IaaS OM" can help us build cost-effective systems by only turning on the parts of the system that are needed, when they are needed. Additionally, systems built using IaaS can not only be more fault tolerant but also scale in the direction of business growth. We will begin this session with a review of key Windows Azure IaaS concepts including compute, storage and VNET. Next we will walk the "IaaS OM" using the Windows Azure PowerShell SDK, including a discussion of bootstrapping VMs. Finally, we will walk through a collection of IaaS scenarios including virtual branch office, a single pane of glass for managing on-premises and Azure-based resources, and hybrid applications that combine IaaS and PaaS.

Developing Cross Platform Mobile Solutions with Azure Mobile Services by Chris Woodruff

Mobile applications are the current hot development topic today. Many companies desire a common story for their applications across all mobile platforms. By using Telerik's cross-platform mobile development solution Icenium, developers can work with a cloud-based mobile framework containing everything necessary to build and publish cross-platform hybrid mobile Apple iOS and Google Android applications. Microsoft MVP Chris Woodruff will share exciting strategies for architecting and building your mobile app with Azure Mobile Services with Icenium, including best practices to share with your team and organization.

Debugging and Monitoring Windows Azure Cloud Services by Eric D. Boyd

Windows Azure Cloud Services is an awesome platform for developers to deliver applications in the cloud without needing to manage virtual machines. However, the abstraction that gives you this simplified deployment and scale, prevents you from attaching a Visual Studio Remote Debugger. Sometimes you need visibility into the execution of your production applications. What if you could replay the real production usage with the exact call sequence and variable values using the Visual Studio Debugger? What if you could collect production metrics that would help you identify performance bottlenecks and slow code?

In this session, Eric Boyd will walk you through debugging and monitoring real-world Windows Azure applications. Eric will show you how to collect diagnostics like Event Logs, Perf Counters, IIS Logs, and even file-based logs from running Windows Azure compute instances. Next, Eric will also show you how to debug your production Windows Azure services using IntelliTrace's black box recording capabilities. Lastly, you will learn how to collect CLR-level diagnostics and performance metrics without instrumenting your code using tools like AppDynamics and New Relic.

If you feel like Windows Azure Cloud Services are a black box when debugging issues and solving performance problems, you will leave this session feeling like Windows Azure is radically more transparent and easier to debug than the applications in your own data center.

Real World Architectures using Windows Azure Mobile Services by Kristof Rennen

With Windows Azure Mobile Services, Microsoft has made available an amazing service to easily build mobile solutions on a solid API, offering a lot of important components out of the box. Starting from data running on a Windows Azure SQL Database, exposed through a REST API and supported by javascript-enabled server side logic and scheduled tasks, a mobile backend can be set up in only minutes. Adding the extra power of authentication using various well known identity providers and the free notification services to serve push notifications make it a solid solution for all mobile platforms, including Android, iOS, Windows Phone and Windows 8.

In this session we will show you how Windows Azure Mobile Services can already be applied in real world architectures and projects, even while it is still in preview. We will talk through a few Windows 8 and Windows Phone apps, already or soon available in the Windows Store and we will show you how to combine the SDK and REST possibilities offered by the service to build solid solutions on all mobile platforms.

Lights, Camera, Action - Media Services on the Loose by Mike Martin

You just cannot imagine the Web without audio and video services. Up until now, if you want to include streaming media content in your websites or applications, you need to rely on third party services or massive computing capacity for media transcoding, and streaming to a range of client devices. With the release of Windows Azure Media Services and the Media Services SDK, these capabilities are becoming easily available for you to incorporate in your websites and applications. In this session we'll give an overview of Windows Azure Media Services, and you'll learn from a series of demos how you can take advantage of the platform to add media content to your development.

How we Made MyGet.org on Windows Azure by Maarten Balliauw

Ever wonder how some applications are built? Ever wonder how to combine components of the Windows Azure platform? Stop wondering and learn how we've built MyGet.org, a multi-tenant software-as-a-service. In this session we'll discuss architecture, commands, events, access control, multi tenancy and how to mix and match those things together. Learn about the growing pains and misconceptions we had on the Windows Azure platform. The result just may be a reliable, cost-effective solution that scales.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) reported Now Available on Amazon EC2: Red Hat Enterprise Linux 6.4 AMIs on 4/19/2013 (missed when published):

Version 6.4 of Red Hat Enterprise Linux (RHEL) is now available as an Amazon Machine Image (AMI) for use on all EC2 instance types in all AWS Regions.

With this release, AMIs are available for 32 and 64-bit PVM (paravirtualized) and 64-bit HVM (hardware-assisted virtualization). The new HVM support means that you can now run RHEL on a wider variety of EC2 instance types including the Cluster Compute (cc), High Memory Cluster Compute (cr), Cluster GPU (cg), High Storage (hs), and High I/O (hi) families (availability of instance types varies by Region).

RHEL 6.4 now includes support for the popular CloudInit package. You can use CloudInit to customize your instances at boot time by using EC2's user data feature to pass an include file, a script, an Upstart job, or a bootstrap hook to the instance. This mechanism can be used to create and modify files, install and configure packages, generate SSH keys, set the host name, and so forth.
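As a hedged illustration of the script form of user data described above (the package and host name below are placeholders, not from the announcement), a minimal boot-time script that CloudInit runs on first boot might look like:

```shell
#!/bin/bash
# Hedged sketch of EC2 user data consumed by CloudInit on first boot;
# the package and host name are illustrative placeholders.
yum install -y httpd             # install and configure a package
hostname web01.example.internal  # set the host name
service httpd start              # start the newly installed service
```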

This release also changes the default user login from "root" to "ec2-user."

More information on the availability of Red Hat Enterprise Linux, including a global list of AMI IDs for all versions of RHEL can be found on the Red Hat Partner Page.

Support for these AMIs is available through AWS Premium Support and Red Hat Global Support.


<Return to section navigation list>