Thursday, November 08, 2012

Windows Azure and Cloud Computing Posts for 11/5/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, Hadoop and Media Services

Bruno Terkaly (@brunoterkaly) posted Essential Knowledge for Azure Table Storage on 11/8/2012:


  1. Azure tables are used to store non-relational structured data at massive scale

Azure Storage Options

  1. There are many storage options in the Microsoft cloud. We will focus on Azure tables.
  2. Here is what we'll cover:
    • When to use Azure Tables
    • When they are appropriate to consider
    • Understanding that Azure Tables are collections of entities
    • Access Azure Tables directly or through a cloud application
    • Key Features of Azure Tables
    • Relationship between accounts, Tables, and entities
    • Efficient Inserts and Updates
    • Designing for scale
    • Query Design and Performance
    • Understanding Partition Keys
    • How data is partitioned
    • Coding considerations
    • Azure Table Query Concepts
    • Understanding TableServiceEntity/TableServiceContext
    • Additional Resources

When to use Azure tables
  1. These are some typical use case scenarios for using Azure tables.
  2. Azure tables are optimized for capacity and performance (scale)

Azure Tables : When Appropriate
  1. SQL Database is currently limited to 150 GB without federation; federation can be used to increase the size beyond 150 GB.
  2. If your code requires strong relational semantics, Azure tables are not appropriate. They don't allow for join statements.
  3. You can think of Azure tables as nothing more than a collection of objects. Note that each entity (similar to a row in a table) could have different attributes. In the diagram above, the second entity does not have a city property.
  4. One of the beauties of Azure Tables is that you can replicate across data centers, aiding in disaster recovery.

Tables: A collection of entities
  1. A table is a collection of entities.
  2. An entity is like an object. It has name/value pairs.
  3. An entity is kind of like a row in a relational database table, with the caveat that entities don't need to have the exact same attributes.

Accessing Azure Table Storage From Outside Azure
  1. Any application that is capable of issuing HTTP requests can communicate with Azure tables, because Azure tables are REST-based. This means a Java or PHP application can directly perform CRUD (create, read, update, delete) operations on an Azure Table.

Accessing Azure Table Storage From Azure
  1. Azure cloud applications can be hosted in the same data center as the Azure Table Storage. The compelling point here is that latency from the cloud application is very low, so it can read and update the data at very high speeds.

Features: Azure Table Storage
  1. One of the key features of Azure tables is the low cost. You can use the Pricing Calculator to determine your predicted costs.
  2. It is important to remember that Azure tables are non-relational and therefore joins are not possible.
  3. Azure tables can automatically span multiple storage nodes, maintaining performance. This is based on the partition key that you define. It is very important to consider the partition key carefully as it determines performance.
  4. Transactions can occur only within Partition Keys. This is another example of why you must carefully consider Partition Keys.
  5. The data is replicated 3 times, including alternate data centers.

Relationships among accounts, tables, and entities
  1. Note that an account can have multiple tables and that each table can have one or more entities.
  2. Note the URL that is used to access your tables. This is the URL that any client that is http-capable can use.

Efficient Inserts and Updates
  1. Special semantics are available to make inserts and updates efficient. The bottom line is that you can do either an update or insert in just one operation.
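As a sketch of this upsert semantic using the 2.0 .NET client covered later in this post (the table and entity names are illustrative), TableOperation.InsertOrReplace inserts the entity if it does not exist and replaces it if it does, all in a single request:

```csharp
// Illustrative sketch: "upsert" with a single call -- no preliminary
// read and no ETag is required. CustomerEntity and peopleTable are
// assumed to be defined as in the samples later in this post.
CustomerEntity customer = new CustomerEntity("Harp", "Walter");
customer.PhoneNumber = "425-555-0101";

// Inserts the entity if the PartitionKey/RowKey pair does not exist;
// otherwise replaces the stored entity in its entirety.
peopleTable.Execute(TableOperation.InsertOrReplace(customer));
```

TableOperation.InsertOrMerge works the same way but merges properties into an existing entity instead of replacing it.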

Designing For Scale
  1. The Partition Key and RowKey are required properties for each entity. They play a key role in how the data is partitioned and scaled. They also determine performance for various queries. As mentioned previously, they also play a role in transactions (transactions cannot span Partition Keys).
  2. How to issue efficient queries will be addressed later in this post.

Query Design & Performance
  1. Performance is always an important consideration. The spectrum of speed varies considerably, depending on the type of query you issue. Specific examples are provided later in this post.

Understanding Partition Keys
  1. This slide illustrates how your entities get distributed across partition nodes. Note that the partition key determines how data is spread across storage nodes.

How Data is Partitioned
  1. The key point here is that every entity is uniquely identified by the combination of partition key and row key. You can think of the partition key and row key together as being similar to a primary key in a relational table.

How data is partitioned
  1. Azure will automatically manage both the partitioning and the replication of your entities. I am trying to emphasize how important it is to consider the partition key and row key.

Coding Considerations
  1. Note that Query 1 is fast because it performs an exact match on partition key and row key. It only returns one entity.
  2. Query 2 is slower than Query 1 because it does a range-based query.
  3. Query 3 is slower than Query 2 because it doesn't leverage the row key.

Azure Table Query Concepts
  1. Queries 4 and 5 are very slow because they don't use the partition key. This is equivalent to a full table scan in SQL Server. You want to avoid this at all costs. You may need to reconsider your partition keys and row keys if you find yourself issuing these types of queries.
  2. You may even want to keep duplicate copies of your data in other tables that are optimized for certain types of queries.

Understanding TableServiceEntity/TableServiceContext
  1. The table above stores email addresses. The partition key is the domain part of the email address and the mailname is the row key.
  2. TableServiceEntity and TableServiceContext are used when programming with C# or Visual Basic. By deriving from TableServiceEntity you can define your own entities that get stored in tables. TableServiceContext is used when you wish to perform CRUD operations on tables and is not illustrated here.

Additional Resources
  1. The Windows Azure Training Kit is the best way to get up and running.
  2. One of the labs is called Exploring Windows Azure Storage. It provides excellent examples on using storage.
  3. It can be found here (once you install the training kit) C:\WATK\Labs\ExploringStorage\HOL.htm

Denny Lee (@dennylee) described An easy way to test out Hive Dynamic Partition Insert on HDInsight Azure in an 11/7/2012 post:

Once you get your cluster up and running, an easy way to test out Hive Dynamic Partition Insert (the ability to load data into multiple partitions without the need to load each partition individually) on HDInsight Azure is to use the HiveSampleTable already included and the scripts below. You can execute the scripts from the Hive Interactive Console or from the Hive CLI.

1) For starters, create a new partitioned table:

CREATE TABLE hivesampletable_p (
clientid STRING,
querytime STRING,
market STRING,
devicemake STRING,
devicemodel STRING,
state STRING,
querydwelltime STRING,
sessionid BIGINT,
sessionpagevieworder BIGINT
)
PARTITIONED BY (deviceplatform STRING, country STRING);

2) Then, to insert data into your new partitioned table, run the script below:

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
FROM hivesampletable h
INSERT OVERWRITE TABLE hivesampletable_p PARTITION (deviceplatform = 'iPhone OS', country)
SELECT h.clientid, h.querytime, h.market, h.devicemake, h.devicemodel, h.state, h.querydwelltime, h.sessionid, h.sessionpagevieworder, h.country
WHERE h.deviceplatform = 'iPhone OS';

Some quick callouts:

  • The first two set statements indicate to Hive that you are running a dynamic partition insert
  • The HiveQL statement populates the hivesampletable_p (that you just created) from the HiveSampleTable.
  • Notice that partition statement has two clauses noting that we are partitioning by deviceplatform and country
  • We have specified deviceplatform = ‘iPhone OS’ indicating that all of this data should only go into the iPhone OS set of partitions. The where clause ensures that this is being filtered correctly.
  • Also specified is country (with no value) meaning that all country values will have their own partitions as well.

The easiest way to visualize the partitions being created is to Remote Desktop into the name node and open up the Hadoop Name Node Status (the browser link is available on the desktop when you RDP into the name node). Click Browse the File System > hive > warehouse > hivesampletable_p.

You will notice in the hivesampletable_p, there is a folder called deviceplatform=iPhone%20OS representing the deviceplatform partitioning scheme. Clicking on the iPhone OS folder you will see multiple folders – one for each country – as noted with the country partitioning scheme.
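If you prefer to stay in Hive rather than browsing the file system, the same partition layout can be listed directly from the Hive CLI or the Hive Interactive Console (the output values shown are illustrative and depend on the countries present in the sample data):

```sql
-- List the partitions created by the dynamic partition insert above
SHOW PARTITIONS hivesampletable_p;

-- Each row corresponds to one folder in the warehouse directory, e.g.:
-- deviceplatform=iPhone OS/country=Canada
-- deviceplatform=iPhone OS/country=United States
```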


Joe Giardino, Serdar Ozler, Veena Udayabhanu and Justin Yu of the Windows Azure Storage Team posted Windows Azure Storage Client Library 2.0 Tables Deep Dive on 11/6/2012:

This blog post serves as an overview to the recently released Windows Azure Storage Client for .Net and the Windows Runtime. In addition to the legacy implementation shipped in versions 1.x that is based on DataServiceContext, we have also provided a more streamlined implementation that is optimized for common NoSQL scenarios.

Note, if you are migrating an existing application from a previous release of the SDK please see the overview and migration guide posts.

New Table implementation

The new table implementation is provided in the Microsoft.WindowsAzure.Storage.Table namespace. There are three key areas we emphasized in the design of the new table implementation: usability, extensibility, and performance. The basic scenarios are simple and “just work”; in addition, we have also provided two distinct extension points to allow developers to customize the client behaviors to their specific scenario. We have also maintained a degree of consistency with the other storage clients (Blob and Queue) so that moving between them feels seamless.

With the addition of this new implementation, users effectively have three different patterns to choose from when programming against table storage. A high level summary of each and a brief description of the benefits they offer are provided below.

  • Table Service Layer via TableEntity – This approach offers significant performance and latency improvements over WCF Data Services, but still offers the ability to define POCO objects in a similar fashion without having to write serialization / deserialization logic. The optional EntityResolver provides the ability to easily work with heterogeneous entity types returned via queries without any additional client objects or overhead, and users can optionally customize the serialization behavior of their entities by overriding the ReadEntity or WriteEntity methods. The Table Service layer does not currently expose an IQueryable, meaning that queries need to be manually constructed (helper functions are exposed via static methods on the TableQuery class; see below for more). For an example see the NoSQL scenario below.
  • Table Service Layer via DynamicTableEntity – This approach is provided to allow users direct access to a dictionary of key/value pairs. This is particularly useful for more advanced scenarios such as defining entities whose property names are dictated at runtime, entities with a large number of properties, server side projections, and bulk updates of heterogeneous data. Since DynamicTableEntity implements the ITableEntity interface, all results, including projections, can be persisted back to the server. For an example see the Heterogeneous Update scenario below.
  • WCF Data Services – Similar to the legacy 1.7x implementation, this approach exposes an IQueryable allowing users to construct complex queries via LINQ. This approach is recommended for users with existing code assets as well as non-latency sensitive queries as it utilizes greater system resources. The WCF Data Services based implementation has been migrated to the Microsoft.WindowsAzure.Storage.Table.DataServices namespace. For additional details see the DataServices section below.
    Note, a similar table implementation using WCF Data Services is not provided in the recently released Windows 8 library due to limitations when projecting to various supported languages.

The new table implementation utilizes the ODataLib components to provide the over-the-wire protocol implementation. These libraries are available via NuGet (see the resources section below). Additionally, to maintain compatibility with previous versions of the SDK, the client library has a dependency on System.Data.Services.Client.dll, which is part of the .Net platform. Please note: the current WCF Data Services standalone installer contains version 5.0.0 assemblies; referencing these assemblies will result in a runtime failure.

You can resolve these dependencies as shown below


To install Windows Azure Storage, run the following command in the Package Manager Console.

PM>Install-Package WindowsAzure.Storage

This will automatically resolve any needed dependencies and add them to your project.

Windows Azure SDK for .NET - October 2012 release

  • Install the SDK (click on the “install the SDK” button)
  • Create a project and add a reference to %Program Files%\Microsoft SDKs\Windows Azure\.NET SDK\2012-10\ref\Microsoft.WindowsAzure.Storage.dll
  • In Visual Studio go to Tools > Library Package Manager > Package Manager Console and execute the following command.

PM> Install-Package Microsoft.Data.OData -Version 5.0.2


The new table implementation has shown significant performance improvements over the updated DataServices implementation and the previous versions of the SDK. Depending on the operation, latencies have improved by between 25% and 75%, while system resource utilization has also decreased significantly. Queries alone are over twice as fast and consume far less memory. We will have more details in a future performance blog.

Object Model

A diagram of the table object model is provided below. The core flow of the client is that a user defines an action (TableOperation, TableBatchOperation, or TableQuery) over entities in the Table service and executes these actions via the CloudTableClient. For usability, these classes provide static factory methods to assist in the definition of actions.

For example, the code below inserts a single entity:

CloudTable table = tableClient.GetTableReference([TableName]);
table.Execute(TableOperation.Insert(entity));



Similar to the other Azure Storage clients, the table client provides a logical service client, CloudTableClient, which is responsible for service wide operations and enables execution of other operations. The CloudTableClient class can update the Storage Analytics settings for the Table service, list all tables in the account, and can create references to client side CloudTable objects, among other operations.


A CloudTable object is used to perform operations directly on a given table (Create, Delete, SetPermissions, etc.), and is also used to execute entity operations against the given table.


The TableRequestOptions class defines additional parameters which govern how a given operation is executed, specifically the timeouts and RetryPolicy that are applied to each request. The CloudTableClient provides default timeouts and RetryPolicy settings; TableRequestOptions can override them for a particular operation.
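A short sketch of overriding those defaults for a single call (the table, operation, and values here are illustrative):

```csharp
// Illustrative sketch: per-operation timeout and retry policy.
// ExponentialRetry lives in Microsoft.WindowsAzure.Storage.RetryPolicies.
TableRequestOptions options = new TableRequestOptions()
{
    ServerTimeout = TimeSpan.FromSeconds(5),
    RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 3)
};

// These options apply to this Execute call only; other operations
// continue to use the CloudTableClient defaults.
peopleTable.Execute(TableOperation.Insert(customer), options);
```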


The TableResult class encapsulates the result of a single TableOperation. This object includes the HTTP status code, the ETag and a weakly typed reference to the associated entity. For TableBatchOperations, the CloudTable.ExecuteBatch method will return a collection of TableResults whose order corresponds with the order of the TableBatchOperation. For example, the first element returned in the resulting collection will correspond to the first operation defined in the TableBatchOperation.


The TableOperation class encapsulates a single operation to be performed against a table. Static factory methods are provided to create a TableOperation that will perform an Insert, Delete, Merge, Replace, Retrieve, InsertOrReplace, and InsertOrMerge operation on the given entity. TableOperations can be reused so long as the associated entity is updated. As an example, a client wishing to use table storage as a heartbeat mechanism could define a merge operation on an entity and execute it to update the entity state to the server periodically.
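The heartbeat example might be sketched as follows (HeartbeatEntity, statusTable, and the interval are hypothetical; an ETag of "*" makes the merge unconditional):

```csharp
// Hypothetical entity type deriving from TableEntity with a LastSeen property.
HeartbeatEntity heartbeat = new HeartbeatEntity("workers", "worker-01") { ETag = "*" };

// Define the merge operation once...
TableOperation beat = TableOperation.Merge(heartbeat);

while (true)
{
    // ...then update the associated entity and re-execute the same operation.
    heartbeat.LastSeen = DateTime.UtcNow;
    statusTable.Execute(beat);
    Thread.Sleep(TimeSpan.FromSeconds(30));
}
```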

Sample – Inserting an Entity into a Table

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable peopleTable = tableClient.GetTableReference("people");

// Create a new customer entity.
CustomerEntity customer1 = new CustomerEntity("Harp", "Walter");
customer1.Email = "";
customer1.PhoneNumber = "425-555-0101";

// Create an operation to add the new customer to the people table.
TableOperation insertCustomer1 = TableOperation.Insert(customer1);

// Submit the operation to the table service.
peopleTable.Execute(insertCustomer1);
The TableBatchOperation class represents multiple TableOperation objects which are executed as a single atomic action within the table service. There are a few restrictions on batch operations that should be noted:

  • You can perform batch updates, deletes, inserts, merge and replace operations.
  • A batch operation can have a retrieve operation only if it is the only operation in the batch.
  • A single batch operation can include up to 100 table operations.
  • All entities in a single batch operation must have the same partition key.
  • A batch operation is limited to a 4MB data payload.

The CloudTable.ExecuteBatch method, which takes a TableBatchOperation as input, returns an IList of TableResults that correspond in order to the entries in the batch itself. For example, the result of a merge operation that is first in the batch will be the first entry in the returned IList of TableResults. In the case of an error, the server may return a numerical id as part of the error message that corresponds to the sequence number of the failed operation in the batch, unless the failure is associated with no specific command (such as ServerBusy), in which case -1 is returned. TableBatchOperations, or Entity Group Transactions, are executed atomically, meaning that either all operations will succeed or, if there is an error caused by one of the individual operations, the entire batch will fail.

Sample – Insert two entities in a single atomic Batch Operation

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable peopleTable = tableClient.GetTableReference("people");

// Define a batch operation.
TableBatchOperation batchOperation = new TableBatchOperation();

// Create a customer entity and add it to the batch
CustomerEntity customer = new CustomerEntity("Smith", "Jeff");
customer.Email = "";
customer.PhoneNumber = "425-555-0104";
batchOperation.Insert(customer);

// Create another customer entity and add it to the batch
CustomerEntity customer2 = new CustomerEntity("Smith", "Ben");
customer2.Email = "";
customer2.PhoneNumber = "425-555-0102";
batchOperation.Insert(customer2);

// Submit the operation to the table service.
peopleTable.ExecuteBatch(batchOperation);

The TableQuery class is a lightweight query mechanism used to define queries to be executed against the table service. See “Querying” below.

ITableEntity interface

The ITableEntity interface is used to define an object that can be serialized and deserialized with the table client. It contains the PartitionKey, RowKey, Timestamp, and ETag properties, as well as methods to read and write the entity. This interface is implemented by the TableEntity and DynamicTableEntity entity types that are included in the library; a client may implement this interface directly to persist different types of objects or objects from third-party libraries. By overriding the ITableEntity.ReadEntity or ITableEntity.WriteEntity methods a client may customize the serialization logic for a given entity type.
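As a sketch of this serialization extension point (the entity type and the property rename are hypothetical), a client could start from the default reflection-based payload and then adjust it:

```csharp
// Hypothetical entity: customizes its wire payload by overriding WriteEntity.
public class NotesEntity : TableEntity
{
    public string Notes { get; set; }

    public override IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        // Start from the default reflection-based serialization...
        IDictionary<string, EntityProperty> properties = base.WriteEntity(operationContext);

        // ...then adjust the payload, e.g. store Notes under a different name.
        properties["NotesData"] = properties["Notes"];
        properties.Remove("Notes");
        return properties;
    }
}
```

A matching ReadEntity override would map the property back when deserializing.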


The TableEntity class is an implementation of the ITableEntity interface and contains the RowKey, PartitionKey, and Timestamp properties. The default serialization logic TableEntity uses is based on reflection, where all public properties of a supported type that define both get and set are serialized. This will be discussed in greater detail in the extension points section below. This class is not sealed and may be extended to add additional properties to an entity type.

Sample – Define a POCO that extends TableEntity

// This class defines one additional property of integer type, since it derives from
// TableEntity it will be automatically serialized and deserialized.    
public class SampleEntity : TableEntity
{
    public int SampleProperty { get; set; }
}

The DynamicTableEntity class allows clients to update heterogeneous entity types without the need to define base classes or special types. The DynamicTableEntity class defines the required properties for RowKey, PartitionKey, Timestamp, and ETag; all other properties are stored in an IDictionary. Aside from the convenience of not having to define concrete POCO types, this can also provide increased performance by not having to perform serialization or deserialization tasks.

Sample – Retrieve a single property on a collection of heterogeneous entities

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Define the query to retrieve the entities, notice in this case we
// only need to retrieve the Count property.
TableQuery query = new TableQuery().Select(new string[] { "Count" });

// Note the TableQuery is actually executed when we iterate over the
// results. Also, this sample uses the DynamicTableEntity to avoid
// having to worry about various types, as well as avoiding any
// serialization processing.
foreach (DynamicTableEntity entity in myTable.ExecuteQuery(query))
{
    // Users should always assume the property may be absent, in case another client removed it.
    EntityProperty countProp;

    if (!entity.Properties.TryGetValue("Count", out countProp))
        throw new ArgumentNullException("Invalid entity, Count property not found!");

    // Display the Count property; you could also modify it here and persist it back to the service.
    Console.WriteLine(countProp.Int32Value);
}

Note: an ExecuteQuery equivalent is not provided in the Windows Runtime library in keeping with best practice for the platform. Instead use the ExecuteQuerySegmentedAsync method to execute the query in a segmented fashion.
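In the desktop .NET library the segmented pattern looks roughly like this (CustomerEntity and peopleTable are illustrative; the Windows Runtime library exposes the same shape via ExecuteQuerySegmentedAsync):

```csharp
TableQuery<CustomerEntity> query = new TableQuery<CustomerEntity>();
TableContinuationToken token = null;

do
{
    // Each segment returns one page of entities plus a continuation token.
    TableQuerySegment<CustomerEntity> segment = peopleTable.ExecuteQuerySegmented(query, token);
    token = segment.ContinuationToken;

    foreach (CustomerEntity entity in segment)
    {
        Console.WriteLine(entity.Email);
    }
} while (token != null); // a null token means the query is complete
```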


The EntityProperty class encapsulates a single property of an entity for the purposes of serialization and deserialization. The only time the client has to work directly with EntityProperties is when using DynamicTableEntity or implementing the TableEntity.ReadEntity and TableEntity.WriteEntity methods.

The samples below show two approaches that can be used to update a player’s Score property. The first approach uses DynamicTableEntity to avoid having to declare a client side object and updates the property directly, whereas the second deserializes the entity into a POCO and updates that object directly.

Sample – Update of entity property using EntityProperty

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Retrieve entity
TableResult res = gamerTable.Execute(TableOperation.Retrieve("Smith", "Jeff"));
DynamicTableEntity player = (DynamicTableEntity)res.Result;

// Retrieve Score property
EntityProperty scoreProp;

// Users should always assume the property may be absent, in case another client removed it.
if (!player.Properties.TryGetValue("Score", out scoreProp))
    throw new ArgumentNullException("Invalid entity, Score property not found!");

scoreProp.Int32Value += 1;

// Store the updated score
gamerTable.Execute(TableOperation.Merge(player));

Sample – Update of entity property using POCO

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class GamerEntity : TableEntity
{
    public int Score { get; set; }
}

// Retrieve entity
TableResult res = gamerTable.Execute(TableOperation.Retrieve<GamerEntity>("Smith", "Jeff"));
GamerEntity player = (GamerEntity)res.Result;

// Update Score
player.Score += 1;

// Store the updated score
gamerTable.Execute(TableOperation.Replace(player));

The EntityResolver delegate allows client-side projection and processing for each entity during serialization and deserialization. This is designed to provide custom client side projections, query-specific filtering, and so forth. This enables key scenarios such as deserializing a collection of heterogeneous entities from a single query.

Sample – Use EntityResolver to perform client side projection

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Define the query to retrieve the entities, notice in this case we only need
// to retrieve the Email property.
TableQuery<TableEntity> query = new TableQuery<TableEntity>().Select(new string[] { "Email" });

// Define an EntityResolver to mutate the entity payload upon retrieval.
// In this case we will simply return a String representing the customer's Email address.
EntityResolver<string> resolver = (pk, rk, ts, props, etag) => props.ContainsKey("Email") ? props["Email"].StringValue : null;

// Display the results of the query, note that the query now returns
// strings instead of entity types since this is the type of EntityResolver we created.
foreach (string projectedString in gamerTable.ExecuteQuery(query, resolver, null /* RequestOptions */, null /* OperationContext */))
{
    Console.WriteLine(projectedString);
}

There are two query constructs in the table client: a retrieve TableOperation, which addresses a single unique entity, and a TableQuery, which is a standard query mechanism used against multiple entities in a table. Both querying constructs need to be used in conjunction with either a class type that implements the ITableEntity interface or with an EntityResolver which will provide custom deserialization logic.


A retrieve operation is a query which addresses a single entity in the table by specifying both its PartitionKey and RowKey. This is exposed via TableOperation.Retrieve and TableBatchOperation.Retrieve and executed like a typical operation via the CloudTable.

Sample – Retrieve a single entity

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// Create the table client.
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable peopleTable = tableClient.GetTableReference("people");

// Retrieve the entity with partition key of "Smith" and row key of "Jeff"
TableOperation retrieveJeffSmith = TableOperation.Retrieve<CustomerEntity>("Smith", "Jeff");

// Retrieve entity
CustomerEntity specificEntity = (CustomerEntity)peopleTable.Execute(retrieveJeffSmith).Result;


TableQuery is a lightweight object that represents a query for a given set of entities and encapsulates all query operators currently supported by the Windows Azure Table service. Note, for this release we have not provided an IQueryable implementation, so developers who are migrating applications to the 2.0 release and wish to leverage the new table implementation will need to reconstruct their queries using the provided syntax. The code below produces a query to take the top 5 results from the customers table which have a RowKey greater than or equal to 5.

Sample – Query top 5 entities with RowKey greater than or equal to 5

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

TableQuery<TableEntity> query = new TableQuery<TableEntity>().Where(TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "5")).Take(5);

In order to provide support for JavaScript in the Windows Runtime, the TableQuery can be used via the concrete type TableQuery, or in its generic form TableQuery<EntityType> where the results will be deserialized to the given type. When specifying an entity type to deserialize entities to, the EntityType must implement the ITableEntity interface and provide a parameterless constructor.

The TableQuery object provides methods for take, select, and where. Static methods such as GenerateFilterCondition, GenerateFilterConditionFor*, and CombineFilters are provided to help construct filter strings. Some examples of constructing queries over various types are shown below:

// 1. Filter on String
TableQuery.GenerateFilterCondition("Prop", QueryComparisons.GreaterThan, "foo");

// 2. Filter on GUID
TableQuery.GenerateFilterConditionForGuid("Prop", QueryComparisons.Equal, new Guid());

// 3. Filter on Long
TableQuery.GenerateFilterConditionForLong("Prop", QueryComparisons.GreaterThan, 50L);

// 4. Filter on Double
TableQuery.GenerateFilterConditionForDouble("Prop", QueryComparisons.GreaterThan, 50.50);

// 5. Filter on Integer
TableQuery.GenerateFilterConditionForInt("Prop", QueryComparisons.GreaterThan, 50);

// 6. Filter on Date
TableQuery.GenerateFilterConditionForDate("Prop", QueryComparisons.LessThan, DateTime.Now);

// 7. Filter on Boolean
TableQuery.GenerateFilterConditionForBool("Prop", QueryComparisons.Equal, true);

// 8. Filter on Binary
TableQuery.GenerateFilterConditionForBinary("Prop", QueryComparisons.Equal, new byte[] { 0x01, 0x02, 0x03 });

Sample – Query all entities with a PartitionKey="samplePK" and RowKey greater than or equal to "5" and less than "10"

string pkFilter = TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "samplePK");

string rkLowerFilter = TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.GreaterThanOrEqual, "5");

string rkUpperFilter = TableQuery.GenerateFilterCondition("RowKey", QueryComparisons.LessThan, "10");

// Note CombineFilters has the effect of "([Expression1]) Operator ([Expression2])"; as such, passing in a complex expression will result in a logical grouping.
string combinedRowKeyFilter = TableQuery.CombineFilters(rkLowerFilter, TableOperators.And, rkUpperFilter);

string combinedFilter = TableQuery.CombineFilters(pkFilter, TableOperators.And, combinedRowKeyFilter);

// OR
string combinedFilter = string.Format("({0}) {1} ({2}) {3} ({4})", pkFilter, TableOperators.And, rkLowerFilter, TableOperators.And, rkUpperFilter);
TableQuery<SampleEntity> query = new TableQuery<SampleEntity>().Where(combinedFilter);

Note: There is no logical expression tree provided in the current release; as a result, repeated calls to the fluent methods on TableQuery overwrite the relevant aspect of the query.

Note the TableOperators and QueryComparisons classes define string constants for all supported operators and comparisons:

TableOperators:

  • And
  • Or
  • Not

QueryComparisons:

  • Equal
  • NotEqual
  • GreaterThan
  • GreaterThanOrEqual
  • LessThan
  • LessThanOrEqual

A common pattern in a NoSQL datastore is to store related entities with different schemas in the same table. In the sample below, we will persist a group of heterogeneous shapes that make up a given drawing. In our case, the PartitionKey for our entities will be a drawing name that will allow us to retrieve and alter a set of shapes together in an atomic manner. The challenge becomes how to work with these heterogeneous entities on the client side in an efficient and usable manner.

The table client provides an EntityResolver delegate which allows client-side logic to execute during deserialization. In the scenario detailed above, let’s use a base entity class named ShapeEntity which extends TableEntity. This base shape type will define all properties common to a given shape, such as its color fields and its X and Y coordinates in the drawing.

public class ShapeEntity : TableEntity
{
    public virtual string ShapeType { get; set; }
    public double PosX { get; set; }
    public double PosY { get; set; }
    public int ColorA { get; set; }
    public int ColorR { get; set; }
    public int ColorG { get; set; }
    public int ColorB { get; set; }
}

Now we can define some shape types that derive from the base ShapeEntity class. In the sample below we define a rectangle which will have a Width and Height property. Note that this child class also overrides ShapeType for serialization purposes. For brevity’s sake the Line and Ellipse entities are omitted here; however, you can imagine representing other types of shapes in different child entity types such as triangles, trapezoids, etc.

public class RectangleEntity : ShapeEntity
{
    public double Width { get; set; }
    public double Height { get; set; }

    public override string ShapeType
    {
        get { return "Rectangle"; }
        set { /* no op */ }
    }
}
Now we can define a query to load all of the shapes associated with our drawing and an EntityResolver that will resolve each entity to the correct child class. Note that in this example, aside from setting the core properties (PartitionKey, RowKey, Timestamp, and ETag), we did not have to write any custom deserialization logic; instead we rely on the built-in deserialization logic provided by TableEntity.ReadEntity.

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

TableQuery<ShapeEntity> drawingQuery = new TableQuery<ShapeEntity>().Where(TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "DrawingName"));

EntityResolver<ShapeEntity> shapeResolver = (pk, rk, ts, props, etag) =>
{
    ShapeEntity resolvedEntity = null;
    string shapeType = props["ShapeType"].StringValue;

    if (shapeType == "Rectangle") { resolvedEntity = new RectangleEntity(); }
    else if (shapeType == "Ellipse") { resolvedEntity = new EllipseEntity(); }
    else if (shapeType == "Line") { resolvedEntity = new LineEntity(); }
    // Potentially throw here if an unknown shape is detected

    resolvedEntity.PartitionKey = pk;
    resolvedEntity.RowKey = rk;
    resolvedEntity.Timestamp = ts;
    resolvedEntity.ETag = etag;
    resolvedEntity.ReadEntity(props, null);

    return resolvedEntity;
};

Now we can execute this query in a segmented, asynchronous manner in order to keep our UI fast and fluid. The code below is written using the async methods exposed by the client library for Windows Runtime.

// drawingTable is an existing CloudTable reference for the shapes table
List<ShapeEntity> shapeList = new List<ShapeEntity>();
TableQuerySegment<ShapeEntity> currentSegment = null;
while (currentSegment == null || currentSegment.ContinuationToken != null)
{
    currentSegment = await drawingTable.ExecuteQuerySegmentedAsync(
        drawingQuery,
        shapeResolver,
        currentSegment != null ? currentSegment.ContinuationToken : null);
    shapeList.AddRange(currentSegment.Results);
}

Once we execute this, we can see that the resulting collection of ShapeEntity objects contains shapes of various entity types.


Heterogeneous update

In some cases it may be required to update entities regardless of their type or other properties. Let’s say we have a table named “employees”. This table contains entity types for developers, secretaries, contractors, and so forth. The example below shows how to query all entities in a given partition (in our example the state the employee works in is used as the PartitionKey) and update their salaries regardless of job position. Since we are using merge, the only property that is going to be updated is the Salary property, and all other information regarding the employee will remain unchanged.

// You will need the following using statements
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

TableQuery query = new TableQuery().Where("PartitionKey eq 'Washington'").Select(new string[] { "Salary" });

// Note: for brevity's sake this sample assumes there are 100 or fewer employees; the client should ensure batches are kept to 100 operations or less.
TableBatchOperation mergeBatch = new TableBatchOperation();
foreach (DynamicTableEntity ent in employeeTable.ExecuteQuery(query))
{
    EntityProperty salaryProp;

    // Check to see if the salary property is present
    if (ent.Properties.TryGetValue("Salary", out salaryProp))
    {
        if (salaryProp.DoubleValue < 50000)
        {
            // Give a 10% raise
            salaryProp.DoubleValue *= 1.1;
        }
        else if (salaryProp.DoubleValue < 100000)
        {
            // Give a 5% raise
            salaryProp.DoubleValue *= 1.05;
        }

        mergeBatch.Merge(ent);
    }
    else
    {
        throw new ArgumentNullException("Entity does not contain Salary!");
    }
}

// Execute batch to save changes back to the table service
employeeTable.ExecuteBatch(mergeBatch);

Persisting 3rd party objects

In some cases we may need to persist objects exposed by 3rd party libraries, or those which do not fit the requirements of a TableEntity and cannot be modified to do so. In such cases, the recommended best practice is to encapsulate the 3rd party object in a new client object that implements the ITableEntity interface, and provide the custom serialization logic needed to persist the object to the table service via ITableEntity.ReadEntity and ITableEntity.WriteEntity.
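
As a sketch of that pattern, suppose a third-party library exposes a GeoPoint type with Latitude and Longitude properties that we cannot modify (the type and property names here are hypothetical):

```csharp
// Wrapper that persists a hypothetical third-party GeoPoint object
// to the table service by implementing ITableEntity directly.
public class GeoPointAdapter : ITableEntity
{
    public GeoPoint Point { get; set; }

    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTimeOffset Timestamp { get; set; }
    public string ETag { get; set; }

    // Rehydrate the wrapped object from the persisted properties.
    public void ReadEntity(IDictionary<string, EntityProperty> properties, OperationContext operationContext)
    {
        this.Point = new GeoPoint
        {
            Latitude = properties["Latitude"].DoubleValue.Value,
            Longitude = properties["Longitude"].DoubleValue.Value
        };
    }

    // Flatten the wrapped object into table service properties.
    public IDictionary<string, EntityProperty> WriteEntity(OperationContext operationContext)
    {
        return new Dictionary<string, EntityProperty>
        {
            { "Latitude", new EntityProperty(this.Point.Latitude) },
            { "Longitude", new EntityProperty(this.Point.Longitude) }
        };
    }
}
```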

Continuation Tokens

Continuation Tokens can be returned in the segmented execution of a query. One key improvement to the [Blob|Table|Queue]ContinuationTokens in this release is that all properties are now publicly settable and a public default constructor is provided. This, in addition to the IXMLSerializable implementation, allows clients to easily persist a continuation token.
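
For example, a token can be round-tripped through its IXmlSerializable implementation (a minimal sketch; `token` is an assumed TableContinuationToken returned from a previous segmented call):

```csharp
// Serialize the continuation token so it can be stored (e.g. in session
// state) and used to resume the query in a later request.
string savedToken;
using (StringWriter sw = new StringWriter())
using (XmlWriter writer = XmlWriter.Create(sw))
{
    ((IXmlSerializable)token).WriteXml(writer);
    writer.Flush();
    savedToken = sw.ToString();
}

// Later: rebuild the token from the persisted XML via the public
// default constructor and ReadXml.
TableContinuationToken restored = new TableContinuationToken();
using (StringReader sr = new StringReader(savedToken))
using (XmlReader reader = XmlReader.Create(sr))
{
    ((IXmlSerializable)restored).ReadXml(reader);
}
```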


The legacy table service implementation has been migrated to the Microsoft.WindowsAzure.Storage.Table.DataServices namespace and updated to support new features in the 2.0 release such as OperationContext, end-to-end timeouts, and asynchronous cancellation. In addition to the new features, some breaking changes were introduced in this release; for a full list, please reference the tables section of the Breaking Changes blog post.

Developing in Windows Runtime

A key driver in this release was expanding platform support, specifically targeting the upcoming releases of Windows 8, Windows RT, and Windows Server 2012. As such, we are releasing the following two Windows Runtime components to support Windows Runtime as Community Technology Preview (CTP):

  • Microsoft.WindowsAzure.Storage.winmd - A fully projectable storage client that supports JavaScript, C++, C#, and VB. This library contains all core objects as well as support for Blobs, Queues, and a base table implementation consumable by JavaScript
  • Microsoft.WindowsAzure.Storage.Table.dll – A table extension library that provides generic query support and strongly typed entities. This is used by non-JavaScript applications to provide strongly typed entities as well as reflection-based serialization of POCO objects

The images below illustrate the IntelliSense experience when defining a TableQuery in an application that references only the core storage component versus one that uses the table extension library. EntityResolver and the TableEntity object are also absent in the core storage component; instead, all operations are based on the DynamicTableEntity type.

Intellisense when defining a TableQuery referencing only the core storage component


Intellisense when defining a TableQuery with the table extension library


While most table constructs are the same, you will notice that when developing for the Windows Runtime all synchronous methods are absent, in keeping with the specified best practice. As such, the equivalent of the desktop method CloudTable.ExecuteQuery, which handles continuation for the user, has been removed. Instead, developers should handle segmented execution in their applications and use the provided ExecuteQuerySegmentedAsync methods in order to keep their apps fast and fluid.


This blog post has provided an in-depth overview of developing applications that leverage the Windows Azure Table service via the new storage client libraries for .Net and the Windows Runtime. Additionally, we have discussed some specific differences when migrating existing applications from a previous 1.x release of the SDK.

Joe Giardino
Serdar Ozler
Veena Udayabhanu
Justin Yu

Windows Azure Storage

Get the Windows Azure SDK for .Net

We Azure table users are still waiting for secondary indexes, which have been promised for several years.

Mark Kromer (@mssqldude) started a series with Hortonworks on Windows – Microsoft HDInsight & SQL Server – Part 1 on 11/6/2012:

I’m going to start a series here on using Microsoft’s Windows distribution of the Hadoop stack, which Microsoft has released in community preview here together with Hortonworks:

Currently, I am using Cloudera on Ubuntu and Amazon’s Elastic MapReduce for Hadoop & Hive jobs. I’ve been using Sqoop to import & export data between databases (SQL Server, HBase and Aster Data) and ETL jobs for data warehousing the aggregated data (SSIS) while leaving the detail data in persistent HDFS nodes. Our data scientists are analyzing data from all 3 of those sources: SQL Server, Aster Data and Hadoop through cubes, Excel, SQL interfaces and Hive. We are also using analytical tools: PowerPivot, SAS and Tableau.

That being said, and having spent 5 years previously @ Microsoft, I was very much anticipating getting the Windows distribution of Hadoop. I’ve only had 1 week to play around with it so far and I’ve decided to begin documenting my journey here in my blog. I’ll also talk about it, along with Aster, Tableau and Hadoop on Linux, on Nov 7 @ 6 PM in Microsoft’s Malvern office, my old stomping grounds:

As the group’s director, one of the reasons that I like having a Windows distribution of Hadoop is so that we are not locked into an OS and can leverage the broad skill sets that we have on staff & off shore and so that we don’t tie ourselves to hiring on specific backgrounds when we analyze potential employee experience.

When I began experimenting with the Microsoft Windows Hadoop distribution, I downloaded the preview file and then installed it from the Web Installer, which then created a series of Apache Hadoop services, including the most popular in the Hadoop stack that drives the entire framework: jobtracker, tasktracker, namenode and datanode. There are a number of others that you can read about from any good Hadoop tutorial.

The installer created a user “hadoop” and an IIS app pool and site for the Microsoft dashboard for Hadoop. Compared to what you see from Hortonworks and Cloudera, it is quite sparse at this point. But I don’t really make much use of the management consoles from Hadoop vendors at this point. As we expand our use of Hadoop, I’m sure we’ll use them more. Just as I am sure that Microsoft will expand their dashboards, management, etc. and maybe even integrate with System Center.

You’ll get the usual Hadoop namenode and MapReduce web pages to view system activity and a command-line interface to issue jobs, manage the HDFS file system, etc. I’ve been using the dashboard to issue jobs, run Hive queries and download the Excel Hive driver, which I LOVE. I’ll blog about Hive next, in part 2. In the meantime, enjoy the screenshots of the portal access into Hadoop from the dashboard below:

This is how you submit a MapReduce JAR file (Java) job:

Here is the Hive interface for submitting SQL-like (HiveQL) queries against Hadoop using Hive’s data warehouse metadata schemas:

Saravanan G described HOW TO: Copy files between Windows Azure storage accounts in an 11/6/2012 post to the Aditi Technologies blog:

This blog will guide you with simple steps on how to copy files between storage accounts (blob to blob). Most often, this is helpful when you need to copy VHD files between Windows Azure storage accounts.

Prerequisite: You need to download and install the Node.js tool from here.

Then, run the following command from the Node.js command prompt

(Go to Start | Node.js (x64) | Node.js command prompt, on a Windows machine)

Windows machine: npm install azure -g (needs to run in administrator mode).

Linux machine: sudo npm install azure -g (needs to run with elevated privileges).

To make sure your installation was successful, type azure in the command prompt. This will list all the Azure-related commands.

vm is the command that helps you to manage your Azure virtual machine.

Typing azure vm –help in command prompt will list all the Azure virtual machines related commands.

azure vm disk upload is the command that copies your files between storage accounts (Blob to Blob).

Syntax: azure vm disk upload <source-location> <destination-location> <destination-storage-account-key>

For example:

azure vm disk upload "<source-location>" "<destination-location>" "DESTINATIONSTORAGEACCOUNTKEY"

Note: Your source location container should be public, or you can specify the URL with a shared access signature.

Here the copy is made within Windows Azure storage itself; none of the files are copied to your local machine. This makes the transfer much faster.

I tried copying a 1 TB file from one blob container to another in the same storage account, which took less than 5 minutes. Copying between different storage accounts may take more time (a 30 GB file took me one hour when copying to a different storage account).
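
The same server-side copy can also be done programmatically with the 2.0 storage client library. A minimal C# sketch (the account credentials and container/blob names below are placeholders):

```csharp
// Server-side (blob-to-blob) copy between two storage accounts.
// No blob data flows through the local machine.
CloudStorageAccount sourceAccount = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=sourceaccount;AccountKey=SOURCE_KEY");
CloudStorageAccount destAccount = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=destaccount;AccountKey=DEST_KEY");

CloudPageBlob sourceVhd = sourceAccount.CreateCloudBlobClient()
    .GetContainerReference("vhds").GetPageBlobReference("disk.vhd");

CloudBlobContainer destContainer = destAccount.CreateCloudBlobClient()
    .GetContainerReference("vhds");
destContainer.CreateIfNotExists();

CloudPageBlob destVhd = destContainer.GetPageBlobReference("disk.vhd");

// The copy is scheduled by the storage service and completes asynchronously;
// the source must be publicly readable or addressed via a SAS URI.
destVhd.StartCopyFromBlob(sourceVhd.Uri);
```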

Avkash Chauhan (@avkashchauhan) described Hadoop adventures with Microsoft HDInsight in an 11/5/2012 post:

What is HDInsight?

HDInsight is the product name for Microsoft’s installation of Hadoop and the Hadoop on Azure service. HDInsight is Microsoft’s 100% Apache-compatible Hadoop distribution, supported by Microsoft. HDInsight, available both on Windows Server and as a Windows Azure service, empowers organizations with new insights into previously untouched unstructured data, while connecting to the most widely used Business Intelligence (BI) tools on the planet.

It is available in two modes:

  • HDInsight as a cloud service: a cloud version running on Windows Azure
  • HDInsight as a local cluster: a downloadable version that runs locally on Windows Server and desktop machines

In this article we will see how to use HDInsight on local machine.

Where to get it?

What does the Windows installer bring to your machine?

After the installation is completed you will see the following applications are installed:

  1. Microsoft HDInsight Community Technology Preview
  2. Hortonworks Data Platform 1.0.1 Developer Preview
  3. If you do not change the installed components, Python 2.7.3150 is also installed
  4. Java and C++ runtimes are also installed on the machine as required

Once the installer has completed, you will see the following shortcuts set up on your machine:

Here is the list of shortcuts:

  1. Hadoop Command Line
  2. Microsoft HDInsight Dashboard
  3. Hadoop MapReduce Status
  4. Hadoop Name Node Status

By default, Hadoop is installed at C:\Hadoop, as below:

If you launch the “Hadoop Command Line” shortcut you will see the list of commands as below:

  • namenode -format format the DFS filesystem
  • secondarynamenode run the DFS secondary namenode
  • namenode run the DFS namenode
  • datanode run a DFS datanode
  • dfsadmin run a DFS admin client
  • mradmin run a Map-Reduce admin client
  • fsck run a DFS filesystem checking utility
  • fs run a generic filesystem user client
  • balancer run a cluster balancing utility
  • fetchdt fetch a delegation token from the NameNode
  • jobtracker run the MapReduce job Tracker node
  • pipes run a Pipes job
  • tasktracker run a MapReduce task Tracker node
  • historyserver run job history servers as a standalone daemon
  • job manipulate MapReduce jobs
  • queue get information regarding JobQueues
  • version print the version
  • jar <jar> run a jar file
  • distcp <srcurl> <desturl> copy file or directories recursively
  • archive -archiveName NAME <src>* <dest> create a hadoop archive
  • daemonlog get/set the log level for each daemon
  • or
  • CLASSNAME run the class named CLASSNAME

Most commands print help when invoked w/o parameters.

Try checking the Version as below:

c:\Hadoop\hadoop-1.1.0-SNAPSHOT>hadoop version

Hadoop 1.1.0-SNAPSHOT

Subversion on branch -r

Compiled by jenkins on Wed Oct 17 22:28:56 PDT 2012

From source with checksum 80f5614dfb0743b569344f051a07b37d

Now if you launch “Microsoft HDInsight Dashboard” shortcut you will see the dashboard running locally as below:

Launching “Hadoop MapReduce Status” shortcut will give you the following info:

And Launching “Hadoop Name Node Status” shortcut you will see the following:

So as you can see above, you do have Hadoop Cluster running on your local machine.

Play with it a little more; my next article will bring more info in this regard.

Have fun with Hadoop!!

Hanu Kommalapati (@hanuk, pictured below) asserted Windows Azure’s Flat Network Storage to Enable Higher Scalability Targets in an 11/4/2012 post:

Brad Calder, a Distinguished Engineer at Microsoft, recently blogged about higher scalability targets for Azure Storage enabled by a flat network architecture: Windows Azure’s Flat Network Storage and 2012 Scalability Targets. Windows Azure’s implementation of the flat network topology, referred to as the “Quantum 10” (Q10) network architecture, is influenced by the Microsoft Research paper VL2: A Scalable and Flexible Data Center Network, presented at SIGCOMM’09.

Once the software improvements for the flat network implementation are fully rolled out, which per Brad will happen by the end of 2012, Azure Storage will provide the following scalability targets for each storage account created after June 7th, 2012:


Storage Account Scalability Targets

  • Capacity – Up to 200 TBs
  • Transactions – Up to 20,000 entities/messages/blobs per second
  • Bandwidth for a Geo Redundant storage account
    • Ingress - up to 5 gigabits per second
    • Egress - up to 10 gigabits per second
  • Bandwidth for a Locally Redundant storage account
    • Ingress - up to 10 gigabits per second
    • Egress - up to 15 gigabits per second

Please keep in mind that any storage account created before June 7th, 2012 needs to be migrated to a newly created storage account to take advantage of the above higher scalability targets.

Since each storage account is composed of many partitions, it is important to understand the scalability targets (achieved with 1kb object size) for a single partition which are given below:

Partition Scalability Targets

  • Single Queue (all the messages in a queue are accessed through a single partition)
    • Up to 2,000 messages/second
  • Single Table Partition (all the entities in a table with the same partition key)
    • Up to 2,000 entities/second (with an efficient partitioning scheme one should be able to achieve 20,000 entities/second)
  • Single Blob (each blob is in its own partition, as the partition key for a blob is “container name + blob name”)
    • Up to 60 MBytes/sec

The above are high-end numbers and, as Brad suggests, your mileage will vary depending on object sizes and access patterns. So establishing application-specific scalability numbers before production deployment helps with the scale-out planning of your Azure infrastructure as application load ramps up.

Read Brad’s excellent paper titled Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency for the internals of Azure Storage architecture presented at SIGOPS’11.


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Matteo Pagani (@qmatteoq) described Azure Mobile Services for Windows Phone 8 in an 11/8/2012 post:

During the BUILD conference, right after the announcement of the public release of the Windows Phone 8 SDK, Microsoft also announced, as expected, that Azure Mobile Services, which we’ve learned to use in the previous posts, are now compatible with Windows Phone 8.

Do you remember the tricky procedure that we had to implement in this post to manually interact with the server? If you’re going to develop an application for Windows Phone 8 you can forget it, since the same great SDK we used to develop a Windows Store app is now available also for mobile. This means that we can use the same exact APIs and classes we’ve seen in this post, without having to make HTTP requests to the server and manually parse the JSON response.

The approach is exactly the same: create an instance of the MobileServiceClient class (passing your service’s address and the secret key) and start doing operations using the available methods.
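
For example, assuming a simple TodoItem model class (the service URL and application key below are placeholders):

```csharp
// Connect to the mobile service (placeholder URL and application key).
MobileServiceClient client = new MobileServiceClient(
    "https://myservice.azure-mobile.net/",
    "YOUR-APPLICATION-KEY");

// Get a strongly typed proxy for the TodoItem table.
IMobileServiceTable<TodoItem> table = client.GetTable<TodoItem>();

// Insert a new item; the service populates the Id on return.
TodoItem item = new TodoItem { Text = "Finish the blog post", Complete = false };
await table.InsertAsync(item);

// Query incomplete items with LINQ-style operators.
List<TodoItem> pending = await table
    .Where(t => t.Complete == false)
    .ToListAsync();
```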

The only bad news? If your application targets Windows Phone 7, you still have to go with the manual approach, since the new SDK is based on WinRT and is compatible only with Windows Phone 8.

You can download the SDK from the following link:

Josh Twist (@joshtwist) posted BUILD 2012: The week we discovered ‘kickassium' on 11/3/2012:

It’s been a very long week, but a very good one. Windows Azure Mobile Services got its first large piece of airtime at the BUILD conference and the reaction has been great. Here are just a couple of my favorite quotes so far from the week:

“Mobile Services is the best thing at BUILD, and there’s been a lot of cool stuff at BUILD” – Attendee in person

“I'm tempted to use Windows Azure #mobileservices for the back end of everything from now on. Super super awesome stuff. #windowsazure” – Andy Cross

“Starting #Azure #MobileServices with @joshtwist. I heard that in order to make it they had to locate the rare mineral Kickassium. #bldwin”- James Chambers


The BUILD team also hosted a hackathon and Mobile Services featured prominently. In fact, two of the three winning entries were built on Mobile Services, and you can watch the teams talk about their experience in their live interview on Channel 9 (link to come when the content goes live). Again, some favorite quotes from the winning teams (some of which were mentored by the incredible Paul Batum):

“I was watching the Mobile services talk on the live stream, and as I was watching it I started hooking it up. By the time he finished his talk, I got the backend for our app done” – Social Squares, winner

“We got together on Monday and we did a lot of work – he did a service layer, I did a web service layer, we did bunch of stuff that would help [our app] to communicate, and then we went to Josh’s session… and we threw everything away and used Mobile Services. What took us roughly 2000 lines of code, we got for free with Mobile Services” – QBranch, winner



I had three presentations at BUILD, including a demo at the beginning of the Windows Azure Keynote – check it out. Mobile Services is 10 minutes in:

I also had two breakout sessions and I’m pleased to announce that the code for these is now available (links below each session):

Developing Mobile Solutions on Windows Azure Part I

We take a Windows Phone 8 application that has no connectivity and uses no cloud services, to building out a whole connected scenario in 60 minutes. There’s a lot of live coding, risk and we even get (entirely by coincidence) James Chambers up on stage for some audience interaction that doesn’t quite go to plan! The code for this is up on github here (download zip).

Also, be sure to checkout my colleagues Nick and Chris’ awesome session which follows on from this: Developing Mobile Solutions on Windows Azure Part II.

Windows 8 Connectathon with Windows Azure Mobile Services

In this session, I build a Windows 8 application starting from the Mobile Services quickstart, going into some detail on authentication, scripts and push notifications including managing channels. The code for this is up on github here (download zip) and – due to popular demand – I created a C# version of the Windows 8 client. The Windows Phone client was pretty easy – I’ll leave that as an exercise for the reader.

Channel 9 Live

Paul and I were also interviewed by Scott Hanselman on Channel 9 Live – right after the keynote. We had a blast talking to Scott about Mobile Services and got to answer some questions coming in from the audience.

One of the outcomes of the Channel 9 interview was that we promised to set up a Mobile Services UserVoice. We never want to break a promise on Mobile Services, so here you go: – please log your requests and get voting! Don’t forget about our forums and always feel free to reach out to me on twitter @joshtwist.

Glenn Gailey (@ggailey777) described Mobile Services: request new features, //build// demos, and a new poster in an 11/2/2012 post:

Some interesting things today for fans of Windows Azure Mobile Services, Microsoft’s cloud-based backend solution for mobile device apps…

Mobile Services Feedback Site is Open

I wanted to let you know that the Windows Azure Mobile Services team has opened up their feedback request site for you to provide feedback on and propose ideas for new Mobile Services features. I’ve already added mine, go add yours!

New Windows Azure Platform Poster


Also, there is a cool new Windows Azure platform poster that we made for the //build/ conference, which features Mobile Services prominently.


Check it out:

I hope it shows up soon in the Server Posterpedia app.

Checkout Josh’s Demo at //build/

I watched Satya (President of MS Server and Tools) give his keynote this past Wednesday morning at the //build/ conference. The first thing that he talked about was Mobile Services—featuring a smokin’ hot demo by Josh Twist. Satya also called Mobile Services his “favorite” Azure service.

In this demo, Josh creates a new mobile service and a new Windows Store app, hooks up push notifications, integrates the app with SkyDrive, and triggers a notification from a second Windows Phone 8 app, all in about 5 minutes. You have to check it out.

I’ve seen a bunch of these demos from Josh, and they are getting better every time. Great job Josh (his demo starts about 11 minutes into Satya’s talk).

Here are two other talks on Mobile Services given at //BUILD/:


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

Doug Mahugh (@dmahugh) reported on OData at Information on Demand 2012 in an 11/8/2012 post to the Interoperability @ Microsoft blog:

I attended IBM’s Information on Demand conference two weeks ago, where I had the opportunity to talk to people about OData (Open Data Protocol). Microsoft and IBM are collaborating on the development of the OData standard in the OASIS OData Technical Committee, and for this conference we were demonstrating a simple OData feed on a DB2 database, consumed by a variety of client applications.

Here’s a high-level view of the architecture of the demo app:


For this demo, we deployed an OData service on Windows Azure that exposes a few entities from a DB2 database running on IBM’s cloud platform. By leveraging WCF Data Services in Visual Studio, we were able to create this OData feed in a matter of minutes.

Here’s a screencast that shows the steps involved in creating the demo service and consuming it from various client devices and applications:

For more information about using OData with DB2 or Informix, see “Use OData with IBM DB2 and Informix” on the IBM DeveloperWorks site.

The growing OData ecosystem is enabling a variety of new scenarios to deliver open data for the open web, and it was great to have the opportunity to learn from so many perspectives this week! Standardizing a URI query syntax and semantics means that data providers and data consumers can focus on innovative ways to add value by combining disparate data sources, and assures interoperability between a wide variety of data producers and consumers. To learn more about OData, including the ecosystem, developer tools, and how you can get involved, see this blog post.
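
As an illustration of that standardized query syntax, any OData-compliant feed can be queried over plain HTTP with the same system query options (the service URL below is a placeholder):

```csharp
// $filter, $orderby, and $top carry the same meaning against any
// OData service, regardless of the backing data store.
using (HttpClient http = new HttpClient())
{
    string url = "https://example.com/odata/Products" +
                 "?$filter=Price lt 20&$orderby=Name&$top=10";

    string payload = await http.GetStringAsync(url);
    Console.WriteLine(payload);
}
```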

Special thanks to Susan Malaika, Brent Gross, and John Gera of IBM for all of their help with putting together the demo and their support at the booth throughout the conference. We’re looking forward to continued collaboration with our colleagues at IBM and the many other organizations involved in the ongoing standardization of OData!

<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

Alex Simon described Enhancements to Windows Azure Active Directory Preview in an 11/7/2012 post:

I’m happy to have the opportunity to share with you another set of enhancements we’ve recently added to our Windows Azure Active Directory Preview. We were excited by the response we got from developers when we launched the Preview in May of this year, and over the last few months we’ve worked hard to make it even better based on your feedback and your use of the platform.

The latest enhancements we’ve added are:

  • Added graphical user interfaces for registering your application in a directory tenant.
  • Support for the SAML 2.0 protocol for Web SSO.
  • Sign out support for the WS-Federation protocol in Web SSO.
  • Differential query in the Graph API.
  • An updated version of the Windows Azure Authentication Library which is now 100% managed code.
  • The ability to federate Directory Tenants with Access Control Namespaces.
Making It Easier to Connect Your Application to Windows Azure Active Directory

When we first launched the Windows Azure Active Directory Preview, we used the Microsoft Online Services Module for Windows PowerShell as the tool administrators used to grant applications access to a company’s Windows Azure AD tenant. While PowerShell is a great approach for some admins, we knew that we also needed to provide a graphical user interface that made it easy for admins to grant your applications access with a few simple clicks.

For developers building applications that need single-sign-on with a single directory tenant, such as an enterprise developer building a web-based line of business application, we’ve developed an extension to Visual Studio that makes it easy to register an application in a directory tenant as part of the development process. This extension is part of the ASP.NET Fall 2012 Update. You can read more about the Windows Azure AD authentication extension part of the update here.

For developers building multi-tenant software-as-a-service applications, we’ve built a web-based experience for your customers to register your application in their directory tenant. You can now give us details about your application, including the access your application requires to the customer’s directory tenant, and your customers can set up single-sign-on and grant your app access to their directory as part of an experience that is integrated with your application’s sign-up process. You can optionally publish your application in to the Windows Azure Marketplace so that customers can discover your application and its benefits.

To help you take advantage of this opportunity, we’ve developed some resources to help get you started:

We would love to get your feedback on this experience, so please provide feedback and questions on our Windows Azure AD forums.

Updates to Web SSO with Windows Azure Active Directory

Addition of SAML 2.0 protocol support for Web SSO

Many of the developers who are participating in the Windows Azure AD developer preview have requested we add support for the SAML 2.0 federation protocol. Many SaaS applications already support SAML 2.0 as a way to sign-in to their applications, and having this support would make it even simpler for these developers to integrate with Windows Azure AD.

We are excited to announce that we now provide this capability and have tested it with a variety of application providers. We’ll be adding integration documentation as we go forward, but a good place to learn more is this MSDN documentation on Windows Azure AD support for SAML 2.0.

Sign out Support in WS-Federation

We have included the ability for applications to support the single sign-out capability for those using the WS-Federation protocol. We will also be adding sign-out support for SAML 2.0 in a future update.

You can now use a URI as an Application Identifier

When we originally launched the Developer Preview we noted that the Audience field in tokens sent by Windows Azure AD was in SPN format and included both the identifier of the application and the identifier of the tenant, while most other federation systems allow applications to be identified by a URI. We have updated the Preview to enable the use of URIs as application identifiers. For example, instead of receiving a token with an Audience value of “spn:<appID_GUID>@<tenantID_GUID>”, it can now simply be the URI you registered for your application.

NOTE: Applications that were registered with Windows Azure AD before we deployed this update will no longer accept sign-in responses and must be updated to continue to function properly. We discuss this in detail in this forum post.

Differential Query: Get Delta Changes to Objects You Care About in the Graph

Part of the power of a Graph API is taking action based on the information in the directory that your application is targeting. With differential query, we’ve given you the ability to listen in to only the information you care about so that your application can be both faster and easier to develop.

A differential query request returns all changes made to specified entities during the time between two consecutive requests. For example, if you make a differential query request an hour after the previous differential query request, only the changes made to objects in the scope of the query during that hour will be returned. This functionality is especially useful when you need to cache directory data in your local application store and keep it in sync with the contents of the directory.
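The sync pattern described above can be sketched as a small polling loop. This is an illustrative sketch only: the response field names (`changes`, `nextLink`, `deltaLink`) and the `fetch` callback are placeholder assumptions, not the actual Graph API wire format.

```python
# Hedged sketch of a differential-query sync loop. Field names are placeholders
# for illustration; consult the differential query documentation for the real
# contract.
def sync_changes(fetch, start_url):
    """fetch(url) -> response dict; returns (all changes, delta link to reuse)."""
    url, collected = start_url, []
    while True:
        page = fetch(url)
        collected.extend(page.get("changes", []))
        if "nextLink" in page:          # more pages in this sync round
            url = page["nextLink"]
        else:                           # round complete; remember where we left off
            return collected, page.get("deltaLink")
```

Your application would persist the returned delta link and pass it as `start_url` on the next run, so only the changes made since the previous request come back.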

We believe this is a great addition to the platform for application developers, and we have some resources to get you started using this in your application:

Update to the Windows Azure Authentication Library Preview

We continue to work to make rich apps and REST development easier and more secure. In this update we are re-releasing part of the Windows Azure Authentication Library (WAAL) with some important improvements:

  • The library is now 100% managed. This eliminates the constraints that resulted from the mixed mode nature of the first release. You can now use the library regardless of the “bitness” of your target environment, and the VC runtime is no longer required.
  • The first WAAL preview combined both client and protected-resource features in a single package. That led to a number of constraints, such as the dependency on the WIF 1.0 runtime and the inability to develop against the .NET 4.0 Client Profile. In this refresh we refactored WAAL to work exclusively in the client role, eliminating those restrictions. We will provide the functionality to address the protected-resource role in separate components: more about that soon.

The new WAAL bits are already at work in other products! The Windows Azure Active Directory integration features in the ASP.NET Fall 2012 Update use WAAL as part of the setup and publication experience. WAAL is also used in our new sample app to secure calls to the Graph API.

We will soon update the WAAL samples to take advantage of the new managed-only package. At that time, we will retire the current x86- and x64-specific packages.

Ability to Federate Directory Tenants with Windows Azure Access Control Namespaces

With the changes in the way in which Windows Azure AD handles application identifiers, you are now able to use directory tenants as identity providers in Windows Azure Access Control service namespaces. This new capability opens up a number of interesting scenarios such as enabling your web application to accept both organizational identities from directory tenants and consumer identities from social providers such as Facebook, Google, Yahoo!, Microsoft accounts or any other OpenID provider. You can find detailed instructions on how to implement the scenario in this post.

Breaking Changes to the Developer Preview

As we respond to your feedback and add or modify features, sometimes we have to make breaking changes to the behavior of the Preview. If you’ve already developed applications using Windows Azure Active Directory or plan to build applications now, please make sure you periodically check the Windows Azure Active Directory Forums for notification of upcoming changes.

We Look Forward to Hearing from You!

We’re excited to hear from you about our latest round of enhancements. If you’d like to see them in action now, you can check out Vittorio Bertocci’s presentation from the BUILD 2012 conference.

And here’s one final note about how to get Windows Azure Active Directory. We’ve had a number of people surprised to find out that every Office 365 customer already has a Windows Azure Active Directory tenant! Windows Azure Active Directory is used by a number of Microsoft cloud services today, and through the developer preview it is possible to extend the usage of these directory tenants to other applications. It’s also possible to obtain a “standalone” directory tenant through our provisioning system.

Vittorio Bertocci (@vibronet) described Provisioning a Windows Azure Active Directory Tenant as an Identity Provider in an ACS Namespace in an 11/7/2012 post:

imageThanks to the improvements introduced in the latest refresh of the developer preview of Windows Azure Active Directory, we are finally able to support a scenario you often asked for: provisioning a Windows Azure Active Directory tenant as an identity provider in an ACS namespace.

imageIn this post I am going to describe how to combine the various parts to achieve the desired effect, from the high level architecture to some very concrete advice on putting together a working example demonstrating the scenario.



Let’s say that we want to develop one MVC application accepting users coming from both Facebook and one customer’s Windows Azure AD, say from Trey Research, which uses a Windows Azure AD directory tenant for its cloud-based workloads.

One way in which you could decide to implement the solution would be to outsource most of the authentication work to one ACS namespace (let’s call it LeFederateur, in honor of my French-speaking friends) and configure that namespace to take care of the details of handling authentication requests to Facebook or to Trey Research. Your setup would look like the diagram below (click on the image for the full view).


From right to left, bottom to top:

  • Your application is configured (via WIF) to redirect unauthenticated requests to the ws-federation endpoint of the ACS namespace
  • The ACS namespace contains a representation of your application, in form of relying party
  • The ACS namespace contains a set of rules which describe what claims should be passed on (as-is, or somehow processed) from the trusted identity providers to your application
  • The ACS namespace contains the coordinates of the identity providers it trusts; in this case
  • Facebook contains the definition of the app you want to use for handling Fb authentication via ACS
  • The Trey Research’s directory tenant contains a service principal which describes the ACS namespace’s WS-Federation issuing endpoint as a relying party

I am sure that the ones among you who are already familiar with the concept of federation provider are not especially surprised by the above. This is a canonical topology, where the application outsources authentication to a federation provider which in turn handles relationships with the actual identity sources, in form of identity providers. The ACS namespace plays the role of the federation provider, and the Windows Azure AD tenant plays the role of the identity provider.
Rather, the question I often got in the last few months was: what prevented this from working before the latest update?

That’s pretty simple. In the first preview, ServicePrincipals could only be used to describe applications with realms (in the WS-Federation sense) following a fixed format, namely spn:<AppId>@<TenantId> (as described here), which ended up being used in the AudienceRestriction element of the issued SAML token. That didn’t work well with most of the existing technologies supporting claims-based identity, including ACS namespaces: in this specific case, LeFederateur would process incoming tokens only if their AudienceRestriction element contained the namespace’s own realm URI, and that was simply inexpressible in the fixed spn: format. With the improvements we made to naming in this update, service principals can use arbitrary URIs as names, resulting in arbitrary realms & audience restriction clauses: that restored the ability of directory tenants to engage as identity providers through classic WS-Federation flows, including making ACS namespace integration (or any other federation provider's implementation) possible.
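As an illustration of why the old naming scheme broke this topology, here is a toy model of the audience check a relying party performs on an incoming token. This is a simplification for explanation only, not ACS’s actual implementation, and the sample values are made up:

```python
# Simplified model of an AudienceRestriction check: a relying party only
# accepts a token whose audience list contains its own configured realm.
def accepts_token(configured_realm, token_audiences):
    return configured_realm in token_audiences

# Old preview: the directory could only emit a fixed spn:<app>@<tenant>
# audience, which can never equal a namespace's realm URI, so the check fails.
# New preview: the service principal name IS the realm URI, so it matches.
```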


Enough with the philosophical blabber! Ready to get your hands dirty? Here’s a breakdown into individual tasks:

  1. create (or reuse) an ACS namespace
  2. create a Facebook app and provision it as IdP in the ACS namespace
  3. create (or reuse) a directory tenant
  4. provision a service principal in the directory tenant for the ws-federation endpoint of the ACS namespace
  5. provision the directory tenant’s ws-federation endpoint as an IdP in the ACS namespace
  6. create an MVC app
  7. use the VS2012 Identity and Access tools for connecting the application to the ACS namespace; select both Facebook and the directory tenant as IdPs
  8. Optional: modify the rule group in ACS so that the Name claim in the incoming identity will have a consistent value to be shown in the MVC app UI

Many of those steps are well-known ACS tasks, which I won’t re-explain in detail here; rather, I’ll focus on the directory-related tasks.

1. Create (or reuse) an ACS namespace

This is the usual ACS namespace creation that you know and love… but wait, there is a twist!

In order to create a namespace you need to use the Windows Azure Silverlight portal, and I occasionally heard that people using the HTML5 portal as their default might not know how to get back to it. The trick is to click on your email in the top-right corner of the HTML5 portal; at the very top you will find a link that will lead you back to the Silverlight pages with their ACS namespace creation settings.


If you want to reuse an existing namespace you can go straight to the ACS management UI via the usual https://<namespace>.accesscontrol.windows.net address.

2. Create a Facebook app and provision it as IdP in the ACS namespace

Follow the ACS documentation here.

3. Create (or reuse) a directory tenant

Obtaining a Windows Azure Active Directory tenant is very easy. For example: if you are an Office 365 customer, you already have one! Head to the Windows Azure Active Directory portal and sign in with your organizational account: if you are an administrator, you’ll see something to the effect of the shot below:


In fact, be warned: in order to go through this walkthrough you *have* to be an administrator.

If you don’t already have a tenant, setting one up is fast and painless: follow this link to set up one for free as part of our developer preview program.


If you create a new tenant, the first user you generate as part of the signup process will also be an administrator; keep the credentials of that user handy, because you’ll need them in the next step.

4. Provision a service principal in the directory tenant for the ACS namespace

From this point on, things get interesting. You need to let your directory tenant know about the existence of your ACS namespace: the web SSO (in this case, WS-Federation) endpoints won’t issue tokens for any recipient, only the ones represented by a service principal object will be deemed eligible.

As of today, the Windows Azure Active Directory portal does not yet offer commands for creating service principals, so your most straightforward option to get that done is to use the Office 365 cmdlets. These can be downloaded from here (please read the instructions carefully, especially regarding prerequisites and x86 vs. x64 platforms).

Once you have installed the cmdlets, double click on the new PowerShell shortcut on your desktop (“Microsoft Online Services Module for Windows PowerShell”) and enter the following (after having substituted lefederateur with your own ACS namespace)

 Connect-MsolService
 Import-Module MSOnlineExtended -Force
 $replyUrl = New-MsolServicePrincipalAddresses -Address "https://lefederateur.accesscontrol.windows.net"
 New-MsolServicePrincipal -ServicePrincipalNames @("https://lefederateur.accesscontrol.windows.net/") -DisplayName "LeFederateur ACS Namespace" -Addresses $replyUrl

Now, before you accuse me of witchcraft, let me explain what happens line by line :-)

  1. The first line establishes a session with your tenant of choice. You’ll be prompted to enter your admin credentials: this serves the double purpose of a) authenticating you and b) establishing which directory tenant you want to work on.
  2. The second line loads in the PowerShell session the module containing the commands you need for working with service principals. Here there are 2 caveats I’ve seen people stumble on over and over again:
    • it’s MSOnlineExtended, NOT MSOnline only. If you just tab your way to parameter completion, the first instance will be MSOnline and that does NOT contain the commands you’ll need
    • On Windows 8 my experience is that if you don’t add the -Force parameter the import won’t succeed. My colleagues tell me that you should not need it, but for the time being things won’t work for me on Windows 8 without it. YMMV.
  3. In this line I create an object of type MsolServicePrincipalAddresses with the return URL root of the target ACS namespace, which will be used as a parameter in the subsequent line. It just seems cleaner to do this in its own line rather than cramming everything into the service principal creation command.
  4. This is finally the command we wanted to execute, the creation of the service principal. Here I added the bare minimum needed to obtain a service principal viable for this walkthrough’s purposes; all the omitted parameters will simply be assigned default or (in the case of GUIDs) randomly initialized values. The most important value here is the one in ServicePrincipalNames, which will be used as the realm (hence as the AudienceRestriction) in the tokens issued on behalf of the directory tenant to the ACS namespace.

That’s it; you can close the PowerShell prompt.

5. Provision the directory tenant as an IdP in the ACS namespace

This is one of my favorite parts: it’s REALLY easy :-).

Head to the identity providers section of the ACS management portal (you’ll find it at the usual portal address, modulo namespace name of course), hit Add, choose “WS-Federation identity provider” and click the Next button.


Type in Display name and Login link text whatever moniker you want to use for the directory tenant as identity provider (the name of the organization is usually a good candidate). Then paste into the URL field of the WS-Federation metadata section the URL of the metadata document of the directory tenant. With the name improvements we introduced in this dev preview refresh, that URL now follows a very convenient, predictable format based on the domain associated with the directory tenant.

Scroll to the bottom of the page, hit save and you’re all set.

6. Create an MVC app

Let’s jump off the cloud for few minutes, and hit the good ol’ (not really, we barely just released :-)) Visual Studio 2012 to create our test application.

You can really use any Web application template, but I would suggest using MVC4 as that will allow you to get some work done automatically for you by the identity and access tools. The internet or intranet templates will work just as well, just be aware of the fact that they both have some authentication-related logic out of the box that will become dead code once we outsource authentication to ACS. Here I’ll go with the intranet template.

7. Use the VS2012 Identity and Access tools for connecting the app to the ACS namespace

What do you mean with “I don’t have the Identity and Access Tools for Visual Studio 2012 installed on my machine”? Drop everything you are doing and go get them! :-) Jokes aside, they’ll make things SO much easier in this tutorial: you’ll need those installed if you want to follow step by step.

Right click on the project in the Solution Explorer and choose “Identity and Access…”. The tools dialog will open on the Providers tab. Hit the “Use the Windows Azure Access Control Service” option. Assuming that you configured the tools to connect with your target ACS namespace, you’ll see something like the following:


Check both boxes and hit OK: the tool will create one entry for your application in the ACS namespace, and will add to your web.config the necessary WIF settings for redirecting all unauthenticated requests to ACS.

At this point you could refine things further: for example, you could use the Identity and Access tools for creating an authentication experience that would help your users choose between Facebook and Trey Research directly in your app, giving you control over the look & feel and presentation aspects of the experience. Here for simplicity I’ll just rely on the barebones-but-functional home realm discovery experience hosted by ACS itself; if you want more control over the experience please refer to the flow described here right after the release notes.

8. Modify the rule group in ACS

There is one last detail we should take care of before hitting F5.

As part of the provisioning of the current application as an RP in the ACS namespace, the Identity and Access Tool creates a rule group which copies all the incoming claims from the two IdPs to the output token. This works great for Facebook and the MVC template: MVC uses the Name property of the Identity in the current thread to greet the authenticated user, and one of the claim types issued via Facebook nicely maps to that property.

Things are less straightforward with the claims coming from the directory tenant: in this developer preview a directory tenant won’t emit a claim of type http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name, which (absent explicit mapping in the web.config, a story for another post) means that the Name property will be empty: the user will be greeted by a “Hello, ” and that blank might be puzzling. Luckily there are multiple places in the pipeline in which we can inject corrective logic: here I’ll show you how to do so by creating a custom rule in ACS.
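Conceptually, the rule group we are about to adjust behaves like the sketch below: a pass-through rule plus one mapping rule. This is a Python simplification of the ACS rule engine for illustration only; the name claim URI is the standard WIF one, and the `http://example.org/upn` input claim type is hypothetical.

```python
NAME_CLAIM = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"

# Pass-through rule plus one mapping rule: if the identity provider did not
# emit a name claim, synthesize one from a claim type it does emit.
def apply_rules(input_claims, fallback_type):
    output = dict(input_claims)                # pass all claims through unchanged
    if NAME_CLAIM not in output and fallback_type in output:
        output[NAME_CLAIM] = output[fallback_type]   # mapping rule
    return output
```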

Navigate to the rule groups editor in the portal and select the rule group that the tool generated for you (typically of the form <AppName><N>_RuleGroup). You’ll see that you already have two rules, one for Facebook and one for Trey Research, passing all claims through. Click Add: you’ll land on a page that will allow you to define a new claim transformation rule.

We want to pick one claim type that the directory tenant does issue, possibly one that can be used for referring to the user in the sentence “Hello, <claimvalue>”, and make ACS issue one claim with the same value but of type http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name.

From the top: choose “Trey Research” in the Identity Provider dropdown.

Unfortunately the portal won’t be of much help for selecting which input claim type we should use: the metadata document of the directory tenant does not declare all the claims it issues. You can easily find out by adding a breakpoint in your app and, once authenticated, inspecting the content of ClaimsPrincipal.Current.Claims. I could give you the complete list here, but that list WILL change before GA hence I don’t want to deceive future visitors coming from search engines with a list that I already know will be obsolete.

For the time being just take it as an article of faith that a directory tenant will issue at least one claim type whose value is acceptable for our purposes; paste that claim type’s URI in the Enter Type text field. You can keep the Input claim value section at its default (any).

In the Output claim type section, select the name claim type from the dropdown. Leave everything else as is, and hit Save at the bottom of the page.


9. Test the solution

Ready to give the solution a spin? Go back to Visual Studio and hit F5.

You’ll be instantly redirected to the home realm discovery page on ACS, where you’ll be offered the choice between Trey Research and Facebook.


If you choose Facebook you’ll go through the usual authentication-consent-login flow; here I’ll pick Trey Research.


As expected, the Windows Azure Active Directory log in page shows up in all its colorful glory. Enter your credentials and…


…you’re in!

Congratulations. You just signed in a Web application secured via ACS using organizational credentials from a directory tenant.

In Summary

With the latest update of the developer preview of Windows Azure Active Directory, you are now able to provision Windows Azure AD directory tenants as identity providers in ACS namespaces. This allows you to easily support in the same application organizational identities, individual identities from well-known Web providers (Microsoft account, Facebook, Google, Yahoo!, any OpenID 2.0 provider supporting attribute exchange) and direct, un-brokered federation relationships.

Conceptually, the process of onboarding a directory tenant does not deviate from the sequence of tasks you would follow for provisioning any other authority in a federation provider-identity providers topology: tell the identity provider about ACS, tell ACS about the identity provider, adjust rules to handle the occasional impedance mismatch between what the application wants and what the provider can provide, and so on. Concretely, there are still a few places where you need to do a bit of extra work to make ends meet; as we get closer to GA, hopefully things will get simpler and more streamlined.

In fact, we need your feedback! The developer preview is here for you to experiment with and let us know what works and what doesn’t: looking forward to hearing your impressions!


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Michael Washam (@MWashamMS) described Bootstrapping a Virtual Machine with Windows Azure in an 11/8/2012 post:

imageIn my Advanced IaaS Talk at Build I showed a demo where you can configure a PowerShell script that can download a zip file with multiple actions (unzip or execute) that gives you similar functionality to a Windows Azure startup task for web and worker roles.

This is a very simple example but it does show some of the capabilities you can do.


The bootstrap.ps1 file below is the main script that is responsible for downloading the file from your storage account. You should create a directory called c:\BootStrap and place the script inside and reference it from the startup task on your virtual machine.

In this demonstration I am pulling from a public storage container. If anything in your package needs to be kept secure, you should modify the URL below to use a shared access signature.



$rootpath = 'C:\BootStrap\'
$manifest = 'C:\BootStrap\Working\manifest.xml'
$workingdir = 'C:\BootStrap\Working\'
# Local path the payload zip is downloaded to
$downloadfile = 'C:\BootStrap\Working\bootstrap.zip'
# URL of your payload zip in storage (use a shared access signature for anything secure)
$packagesource = ''

function GetPayload()
{
    $retries = 5
    while($retries -gt 0)
    {
        CheckFolder $workingdir
        try {
            $wc = New-Object System.Net.WebClient
            # DownloadFile requires a file path, not a directory
            $wc.DownloadFile($packagesource, $downloadfile)
            break  # download succeeded; stop retrying
        }
        catch [System.Net.WebException] {
            # $_ is set to the ErrorRecord of the exception
            if ($_.Exception.InnerException) {
                $_.Exception.InnerException.Message | Out-File c:\BootStrap\error.txt -Append
            } else {
                $_.Exception.Message | Out-File c:\BootStrap\error.txt -Append
            }
            Start-Sleep -Seconds 15
            $retries = $retries - 1
        }
    }
    UnzipFileTo $downloadfile $workingdir
}

function BootStrapVM()
{
    # Run-once guard: a registry key marks that the bootstrap already executed
    if((Test-Path HKLM:\Software\VMBootStrap) -eq $true)
    {
        Write-Host "Already Ran"
        return
    }

    [xml] $manifestxml = Get-Content $manifest
    $manifestxml.StartupManifest.Items.Item | foreach {
        $action = $_.action
        $path = $_."#text"
        $target = $_.target
        $sourcefullpath = $workingdir + $path
        switch ($action)
        {
            "execute" {
                Write-Host "Executing command: " $sourcefullpath
                ExecuteCommand $sourcefullpath
            }
            "unzip" {
                Write-Host "Unzipping " $sourcefullpath " to " $target
                UnzipFileTo $sourcefullpath $target
            }
        }
    }
    New-Item -Path HKLM:\Software -Name VMBootStrap -Force | Out-Null
    Set-Item -Path HKLM:\Software\VMBootStrap -Value "ran" | Out-Null
}

function ExecuteCommand($commandpath)
{
    & $commandpath
}

function UnzipFileTo($sourcepath, $destinationpath)
{
    CheckFolder $destinationpath
    $shell_app = New-Object -com shell.application
    $zip_file = $shell_app.namespace($sourcepath)
    $destination = $shell_app.namespace($destinationpath)
    $destination.Copyhere($zip_file.items(), 16)
}

function CheckFolder($path)
{
    if((Test-Path $path) -eq $false)
    {
        New-Item -ItemType directory -Path $path -Force | Out-Null
    }
}

GetPayload
BootStrapVM
Here are two sample configs I put together: one enables Remote PowerShell; the other installs IIS, WebPI, MySQL and WordPress.

Each zip file contains a manifest.xml file and all of the files you need to upload.

Example of the manifest.xml file from the package:

     <StartupManifest>
       <Items>
         <Item action="execute">InstallRoles\ConfigureRoles.ps1</Item>
         <Item action="unzip" target="c:\webpi\">WebPI\</Item>
         <Item action="execute">InstallPackages\EnableWebPI.cmd</Item>
       </Items>
     </StartupManifest>

All that is needed to configure the script to run when your VM boots is to configure a startup task using group policy.

Run: mmc
File -> Add/Remove Snapin
Select -> Group Policy Object Editor (Local Computer)
Expand -> Computer Configuration -> Windows Settings -> Scripts (Startup/Shutdown)

Once the script is configured you will need to change the execution policy on the virtual machine before you run sysprep and capture your image; otherwise the script will not execute on the next boot.

 Set-ExecutionPolicy RemoteSigned

Finally, if you do enable the remote PowerShell task, here is the command line you can use to connect.

Enter-PSSession -ComputerName {hostname} 

Jim O’Neil (@jimoneil) continued his series (see post below) with Windows 8 Notifications: Push Notifications via Windows Azure Web Sites (Part 2) on 11/7/2012:

imageIn this post, I’ll cover what’s required to outfit my Windows Store Boys of Summer application to receive toast notifications. If you haven’t already, I suggest reviewing the previous post for context.

imageTo provide the underlying ‘glue’ for the Windows Notification Service, a Windows Store application must be associated with an application profile you’ve created under your Windows Store developer account. Prior to the official release of Windows 8, there was a temporary service that could be used, but going forward you will need to have a Windows Store developer account to develop and test Windows Store applications that leverage push notifications.

Step 1: Associate your application with the Windows Store to obtain credentials necessary for push notifications

You can access your account through the Dev center, or simply through Visual Studio via the Store > Associate App with the Store… context menu option, at which point you’ll be asked to log in to the Dev center with your store account credentials. For each app you declare on the Dev center, you’ll get a checklist of what’s needed to complete your app before submission, but for the purposes of this exercise, you’ll only need to configure one of the steps.

The screen shots below walk through the process to obtain a package security identifier (SID) and a client secret. Those will be needed for the implementation of the Cloud Service, which I’ll cover in the next post.

  1. After the application name has been reserved:

  2. After selecting the Advanced features option above:

  3. After selecting Push notifications and Live Connect services info. The Package Security Identifier and Client Secret should be treated as sensitive data; think of them as a user id and password allowing a service to push notifications to the registered application.

During this process, your application is also assigned a unique package name and publisher. You can obtain those values from the Identifying your app… page accessible via a link on the left sidebar above and populate it in your project’s Package.appxmanifest manually, or you can let Visual Studio associate it for you via the aforementioned context menu option.

Associate app with the Windows Store

Speaking of the application manifest, don't forget to mark your application as Toast capable on the Application UI tab and select the Internet (Client) capability on the Capabilities tab of the manifest, or you'll be scratching your head for a while wondering why everything seems in place, but the notifications don't occur!

Step 2: Get a push notification channel within the application

A push notification channel is a URI that looks something like the following (and always points to the notify.windows.com domain)…

It serves to uniquely identify the current application running on the current device and as such is a potential destination for notifications issued from the Cloud Service in the notification workflow I described earlier.

The notification channel URIs are not persistent however, and have an expiration period of 30 days (though I’ve seen URIs change under other circumstances as well, such as new application deployments during testing). It’s up to your application to be aware of the expiration period and handle things accordingly. At the very least the application should renew its notification channel every time it runs. If the application may be dormant for more than 30 days, you may also want to register a background maintenance task that refreshes the channel, so the application continues to receive notifications even though it hasn’t been recently launched.
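The renewal policy described above boils down to a simple date comparison. Here is a sketch of that logic: the 30-day lifetime comes from the text, but the one-day safety margin and the helper names are my own assumptions (this is illustrative pseudologic in Python, not a Windows API).

```python
from datetime import datetime, timedelta

CHANNEL_LIFETIME = timedelta(days=30)   # per the WNS expiration period above
SAFETY_MARGIN = timedelta(days=1)       # renew a bit early; assumed value

def needs_refresh(obtained_at, now):
    """True when the stored channel is expired or close to expiring."""
    return now >= obtained_at + CHANNEL_LIFETIME - SAFETY_MARGIN
```

A background maintenance task (or the app-launch code) would call this against the timestamp saved when the channel was last obtained, and request a fresh channel when it returns True.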

I’m assuming my Windows Store application will be compelling enough to be opened at least once a month, so I am foregoing the background task. To refresh the channel on each launch, I’ve included the following single line of code at the end of the OnLaunched and OnActivated methods (this is the XAML/C# implementation; for JavaScript you’d use the activated event listener)

this.CurrentChannel = await PushNotificationChannelManager
    .CreatePushNotificationChannelForApplicationAsync();

I defined CurrentChannel as a public read property of the Application class instance, so I could easily refer to it within other areas of the application.

Step 3: Register the notification channel with the Cloud Service

The notification channel is merely a web address to which the Cloud Service pushes the notification data. It’s up to the service to decide what that data is and to what subset of users that information should be sent.

A toast notification, which is the scenario here, typically targets a subset of users based on some combination of criteria (metadata, if you will) associated with their notification channel. In the Boys of Summer application, that metadata is fairly obvious – it’s the team or teams that a user is interested in keeping abreast of. And the user signifies his or her interest (or disinterest!) by toggling the Notification switch (right) associated with a given team in the application’s user interface.

The code (below) behind the ToggleSwitch’s toggled event directly interfaces with the Cloud Service (via a RESTful API). I’ll describe that service in more detail later, but at the high level, the code really does one of two things:

  • Records the current user’s interest in notifications for the associated team (Lines 9–25) by POSTing the notification channel and the team name to the Cloud Service, or
  • Issues a DELETE HTTP request (Lines 29–36) to signal that same service to no longer inform this client about events related to the given team.
 1: private async void notificationsToggled(object sender, RoutedEventArgs e)
 2: {
 3:     var toggleSwitch = e.OriginalSource as ToggleSwitch;
 4:     var selectedTeam = ((FrameworkElement)sender).DataContext as SampleDataItem;
 6:     var channelUri = ((App)Application.Current).CurrentChannel.Uri;
 8:     HttpClient clientRequest = new HttpClient();
 9:     if (toggleSwitch.IsOn)
 10:     {
 11:         string postUri = String.Format("{0}/api/registrations/", 
 12:                                         App.Current.Resources["ServiceUri"]);
 14:         var JsonPayload = new StringContent(
 15:                 Newtonsoft.Json.JsonConvert.SerializeObject(
 16:                     new
 17:                     {
 18:                         TeamId = selectedTeam.UniqueId,
 19:                         Uri = ((App)Application.Current).CurrentChannel.Uri
 20:                     }
 21:                 )
 22:         );
 23:         JsonPayload.Headers.ContentType = 
 24:             new MediaTypeHeaderValue("application/json");
 25:         await clientRequest.PostAsync(postUri, JsonPayload);
 26:     }
 27:     else
 28:     {
 29:         // format DELETE request
 30:         string deleteUri = String.Format("{0}/api/registrations/{1}/{2}",
 31:             App.Current.Resources["ServiceUri"],
 32:             selectedTeam.UniqueId,
 33:             channelUri.ToBase64ish()
 34:         );
 36:         await clientRequest.DeleteAsync(deleteUri);
 37:     }
 38: }

What's this ToBase64ish call on Line 33? This was part of an interesting adventure that really is tangential to Windows 8 and push notifications, so feel free to skim or ignore this callout.

Since I am using the ASP.NET Web API to implement the Cloud Service, I wanted to do things in a very RESTful way. The DELETE HTTP request, therefore, should identify the resource on the server that is to be deleted as part of the URI itself (versus within the HTTP request body). In this application, that resource is uniquely identified by the combination of the notification channel URI and the team name.

Now, notification channel URIs have some rather unfriendly characters (like the forward slash and the colon) when it comes to including them as a path element of a larger URI. That’s not a huge shock, and it’s one reason why there are UrlEncode functions (or, as I found out, multiple variants thereof!). It gets uglier when you discover there are also settings like DontUnescapePathDotsAndSlashes and that, if you have multiple forward slashes, they’ll collapse into one.

So, I gave in and decided to Base64 encode the notification URI. Base64 uses the 52 Latin alphabetic characters (lower and upper case) and the digits (0 through 9), but also supplements with two other characters: the forward slash (/) and the plus sign (+). Both of those are also somewhat ‘special’ in terms of URIs, but the hyphen and underscore are not, so I created a simple String extension method (named ToBase64ish) that Base64 encodes and then replaces every / with a hyphen and every + with an underscore. (Of course, there’s a matching FromBase64ish method as well that the Cloud Service uses on the other end of the pipe).
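To make the transformation concrete, here is a quick sketch in Python (the original is a C# String extension method); the example channel URI below is made up:

```python
import base64

def to_base64ish(s: str) -> str:
    """Base64-encode, then swap the URI-hostile '/' and '+'
    for the URI-friendly '-' and '_'."""
    encoded = base64.b64encode(s.encode("utf-8")).decode("ascii")
    return encoded.replace("/", "-").replace("+", "_")

def from_base64ish(s: str) -> str:
    """Reverse the substitutions, then Base64-decode."""
    encoded = s.replace("-", "/").replace("_", "+")
    return base64.b64decode(encoded).decode("utf-8")

channel = "https://db3.notify.windows.com/?token=AwYAAAB%2fQvU"  # made-up URI
safe = to_base64ish(channel)
assert "/" not in safe and "+" not in safe
assert from_base64ish(safe) == channel
```

This is essentially the same trick as RFC 4648’s “URL-safe” Base64 alphabet, just with the substitute characters paired differently; any reversible mapping works as long as both ends of the pipe agree on it.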

Step 2.5: Update previously recorded notification channel URI registrations

While the three steps above would seem to do the trick, let’s take a step (or half a step) back and consider what happens when the notification channel for the application changes (as it will at least every 30 days). When the client’s notification channel URI has changed, all of those channel URI/team combinations previously sent to the server for safekeeping are obsolete. The user will assume she’ll still get notifications about her favorite team, but when team news items are processed by the Cloud Service, it will forward them to a URI that’s no longer valid!

It’s up to the application to make sure the channel URIs that are registered on the server accurately reflect the current channel URI that the application has secured. There are a few ways to handle this, and how you do it depends somewhat on your application’s requirements. The guidance for Windows Store applications is to provide as optimal an off-line experience as possible, so for the Boys of Summer application this is what I came up with:

  1. If the application is off-line, the notification ToggleSwitches will appear disabled, but retain their last-known settings. That means that I need to save some application state for the notification setting (on or off) of each of the teams. Local application storage is a good spot for that, and so my application pulls that data into the ViewModel that’s associated with each division’s team list.
  2. The application also needs to retain the value of the last URI it used to register notifications with the Cloud Service; otherwise, it wouldn’t know that that URI had changed! Local application storage to the rescue again.
  3. Lastly, the application needs to check if the notification URI it just obtained (on launch or activation) is different from the last one it used to register teams of interest with the Cloud Service. If so then the Cloud Service needs to update all of the notification records containing the previous URI to reflect the new one.

To handle this last step, I included another method in the App class and invoke it immediately after securing the push notification channel - that one line of code above that calls CreatePushNotificationChannelForApplicationAsync. As you might expect, if the channel URI has changed, I need to invoke another method on the Cloud Service, this time via an HTTP PUT, indicating that the previous URI should be updated on the server with the new one.

private async void RefreshChannelUri(String newUri)
{
    // get last known notification channel URI from local application storage
    String previousUri =
        (ApplicationData.Current.LocalSettings.Values["ChannelUri"]
            ?? String.Empty).ToString();
    ApplicationData.Current.LocalSettings.Values["ChannelUri"] = newUri;

    // if no previous channel URI, nothing to do
    if (String.IsNullOrEmpty(previousUri)) return;

    // if the channel URI hasn't changed, nothing to do
    if (previousUri == newUri) return;

    // update current registrations to the new URI
    HttpClient clientRequest = new HttpClient();
    string putUri = String.Format("{0}/api/registrations/{1}",
        App.Current.Resources["ServiceUri"], previousUri.ToBase64ish());

    var JsonPayload = new StringContent(
        Newtonsoft.Json.JsonConvert.SerializeObject(
            new { Uri = newUri.ToBase64ish() }
        )
    );
    JsonPayload.Headers.ContentType = new MediaTypeHeaderValue("application/json");

    await clientRequest.PutAsync(putUri, JsonPayload);
}

Jim O’Neil (@jimoneil) began a series with Windows 8 Notifications: Push Notifications via Windows Azure Web Sites (Part 1) on 11/7/2012:

If you haven't read my previous post on push notifications for Windows 8 applications, you may want to do so before continuing with the next few articles, which are a deeper dive into the development of a cloud service to support a specific Windows Store application.

To provide a more concrete scenario, I’ve developed a Windows Store application called Boys of Summer (no, it’s not actually in the store… yet?). As you might gather from the title, it’s a baseball-related app that provides information about the various Major League teams. Part of the functionality of the application is to keep you up-to-date on off-season team developments like trades, management changes, etc. And, of course, push notifications are a great way to implement that.

The application interface appears below; notice that the group detail view includes a toggle switch the user can flip to be alerted for news regarding one or multiple teams.

Boys of Summer UI

As you might expect, the user’s interest in various teams is passed on to a Cloud Service that records the client’s URI as well as the team or teams which she is interested in tracking.

The other half of the equation is a Cloud Service along with some outside stimulus that triggers the notifications. In a previous post, I talked about Windows Azure Mobile Services, which can greatly simplify many push notification scenarios. Here, though, I’d like to explore the implementation a bit closer to the metal, so I’m leveraging Windows Azure Web Sites, which (as of this writing) provides a free offering that is more than sufficient to support a Windows Store application like this. The resulting architecture (reusing the Windows Push Notification workflow from the MSDN documentation) is captured below.

Boys of Summer push notification architecture

My Windows Azure Web Site includes a user interface (the ASP.NET web page shown below) that enables an operator to trigger a notification to all subscribers to a given team. The notification then shows up as toast on the client’s machine. Going forward, the manual operation of entering the notification could be automated by pulling information off of a news feed, reformatting as needed, and finally triggering the notification.

Push notification experience

Let’s dig in a bit! In the next two posts I’ll cover:

the Windows 8 client application, and

the Windows Azure Web Sites implementation.

Sanjay Bannerjee described HOW TO: Set up a SharePoint farm [on a Windows Azure VM] as an extension to Enterprise Network in a 5/2/2012 post to the Aditi Technologies blog:

In my previous blog post, I talked about how to create a SharePoint Foundation VM on Windows Azure as a standalone server. That was pretty easy and simple stuff. I also tried doing the same in CloudShare, and that was pretty neat and impressive. I am also thinking about comparing CloudShare with Azure from a price point of view, but I am keeping that for another post.

As a sequel to my previous blog post, I am going to talk about what it takes for an organization to build a SharePoint farm that is nothing but an extension to its corporate network. There are plenty of great "how to" articles available, but my effort here is to consolidate in one place everything you need to know if you are considering going this route.

As part of Windows Azure's cross-premises connectivity, Microsoft has come up with something called Windows Azure Virtual Network, which is currently in the preview stage and provides a secure private IPv4 network that enables an organization to extend its enterprise network to Windows Azure securely over a site-to-site (S2S) VPN. With Azure Virtual Network, a lot of new scenarios can be enabled, like hybrid apps that span the cloud and on-premises, enterprise identity and access control, etc.

Now, coming back to the topic: what steps does one need to take to have a SharePoint farm work as an extension to the corporate network? Beyond that, there are other issues we need to think about, like how to configure load balancing for the WFEs and how to implement high availability for SQL Server.

So, first things first, let us start with how to create an extended Virtual Network. Following are the steps:

a) Create a Virtual Network for cross-premises connectivity
b) Add a virtual machine to the virtual network
c) Install a replica AD domain controller on the VM (please note: in this link there are a few steps you may not need to do if you already have AD and DNS set up in your environment)
d) Provision other VMs to the domain

Now, coming to the other concern: load balancing the WFEs we have in our Azure SharePoint farm. For load balancing the WFEs, we can use the Azure load balancer. Following are the steps for configuring it:

a) Create the first SharePoint WFE VM as standalone

b) Configure the HTTP endpoint for the first VM

One point to notice here is that endpoints are opened at the load-balancer level, so even if we have only one VM, it is placed behind the load balancer; this makes the configuration of adding VMs to the load balancer that much simpler.
Install SharePoint on the VM.
c) Add other WFEs and connect to the previously created VM

d) Now add the VM to the previously created load balancer

And that's it - we are done! That's pretty simple, right?
Another concern people have is how to set up high availability on SQL Server VMs in Windows Azure, because Windows Azure doesn't allow SQL Server clustering. Here we have two options: SQL Server mirroring or log shipping. For automatic failover, we can configure SQL Server mirroring with a witness, which involves one principal server, one mirror server, and one witness server.

One important thing to remember: don't forget to place the WFEs and your SQL Servers (all three: principal, mirror, and witness) in their own availability sets, so that when Microsoft updates its OS it won't take down all the servers at once.

See also Jinesh Varia (@jinman) described how to Deploy a SharePoint 2010 Server Farm on AWS Cloud in 6 Simple Steps (All Scripts and Templates Included) in the Other Cloud Computing Platforms and Services section below.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Himanshu Singh (@himanshuks) posted Real World Windows Azure: HRG Extends Application to Mobile Devices, Cuts Startup Costs by 80 Percent to the Windows Azure blog on 11/8/2012:

As part of the Real World Windows Azure series, we connected with Paul Saggar, Director of Technology and Product Development of Hogg Robinson Group (HRG), to learn more about how it extends its core travel platform application to mobile devices on Windows Azure, while cutting 80 percent in start-up costs and increasing productivity by 25 percent. Read on to find out what he had to say.

Himanshu Kumar Singh: Tell me about Hogg Robinson.

Paul Saggar: HRG, located in the United Kingdom, provides corporate travel booking services to customers worldwide. HRG connects with dozens of third-party suppliers through its platform to offer customers high-quality, cost-effective travel services with the best value pricing for airfares, hotel rooms, train tickets, and other travel requirements.

HKS: How does HRG deliver its services?

PS: To deliver services, HRG integrates systems and data from third-party suppliers, customer business systems, and in-house software, using Microsoft BizTalk Server 2010 Enterprise. The software creates a connection layer between the web, mobile, and desktop applications that comprise its core platform. Customer data is always protected behind either the company or customer firewalls.

HKS: What influenced HRG’s decision to add mobile capabilities?

PS: HRG wanted to develop a mobile version of its solution, because most customers need to access booking services on their mobile device while they’re travelling. There were a few initial challenges we had to address in order to deliver our services on mobile devices: scalability, performance and security.

HKS: How is HRG utilizing Windows Azure?

PS: My team and I created the HRG i-Suite mobile application, incorporating a mobile gateway based on Windows Azure. The gateway uses Cloud Services, Service Bus, and SQL Database to create a hybrid solution comprising cloud and on-premises services. Some functionality is delivered directly from the cloud, while a reliable and secure message and relay infrastructure connects information from the on-premises booking platform to Cloud Services.

HKS: How do you store customers’ personal information, and manage transactional data?

PS: No personally identifiable information is stored on the device. Service Bus is a managed service that supports multiple messaging protocols and patterns. With Service Bus, we route messages containing personally identifiable information to BizTalk Server and Microsoft .NET–based on-premises services. In this way, information is protected in line with our customers’ security protocols. Tables are used to maintain active and authenticated user information permitting a fast, secure mechanism for connecting to the HRG on-premises services.

The highly available and scalable SQL Database has built-in fault tolerance and manages transactional data that forms part of the booking process. Data that is accessed frequently on the HRG i-Suite mobile application, such as airport locations, is also stored on SQL Database. With Cloud Services, servers are automatically provisioned to support the appropriate messaging model and best performance for the location of device and type of transaction.

HKS: What are some of the development benefits you’ve seen with Windows Azure?

PS: By incorporating the cloud platform into our hybrid solution, the HRG team has increased productivity by about 25 percent for projects that use or are hosted on Windows Azure, and we’re already starting on phase two of our mobile strategy. This is testimony to how quickly we can develop and extend services with the cloud platform.

HKS: Have your customers benefited from your use of Windows Azure?

PS: Absolutely. This was imperative in our decision to go with Windows Azure. Our customers usually access services while travelling, and the mobile solution offers them convenience and security without compromising performance. By using Windows Azure, services are provisioned as close to the user’s device as possible, making transactions fast, seamless and secure. We also provide customers with a choice of booking travel by phone, on the web, or by using their mobile devices. Although HRG’s travel booking services require complex technology configurations between in-house and third-party applications, our team can now create new services quickly.

HKS: And what are the benefits to HRG?

PS: HRG’s CAPEX is considerably lower than if we had to purchase and provision our own physical servers and then connect many on and off-premises applications. With Windows Azure, we’ve spent around 20 percent of the budget we’d usually dedicate to developing and piloting a new solution. There’s also less financial risk for us because we don’t have to predict what servers and licenses we need to buy, but we can scale up easily if we need to.

Read the full HRG case study here. Check out other customer stories here.

Mark Eisenberg (@AzureBizAndTech) asserted “Microsoft missed the boat in getting this point out to potential customers, at the cost of Windows Azure” in his Re-defining the Windows Azure cloud computing message post of 11/7/2012 to TechTarget’s SearchCloudComputing blog:

What's wrong with Windows Azure? In a word: marketing.

From the beginning, Microsoft's efforts to explain Windows Azure to the world fell under the assumption that potential customers understood cloud computing. After all, if they know what a nail is used for, then you only need to explain the features and benefits of a hammer.

First, consider how cloud computing looked three years ago. It was generally accepted that everyone had their own definition of cloud. When Windows Azure entered the market, vendors and customers were free to define the cloud to suit their needs or, more often, their desires. The architects of Windows Azure created a Platform as a Service (PaaS) implementation that closely matched NIST's definition of cloud computing. Microsoft went with it -- it didn't have to create and defend its own definition; a generally respected standards organization had already done the legwork.

Second, Amazon Web Services (AWS) had been in the market for roughly three years when Windows Azure previewed in late 2008. Much like Windows Azure was a near-perfect implementation of PaaS, AWS was a near-perfect implementation of Infrastructure as a Service (IaaS). And while Amazon had plenty of technical guidance available as to how its cloud offering should be used, it was also possible to use it in a very traditional manner, where it behaved like a legacy hosting service. This appealed to customers looking for a new technology that could be adopted with minimal changes to implementation approaches.

Third, in the fall of 2005, Ray Ozzie, then chief software architect at Microsoft, introduced what has become known as the "services disruption memo." Ozzie outlined the case that service-oriented computing would become the next paradigm shift in the industry and introduced Microsoft's strategy to address this.

Windows Azure was part of that effort, though Microsoft didn't let on to this until late 2008. And when it was introduced, it was part of a message campaign called "software plus services." No one at Microsoft could explain what it meant, and its competition claimed this was a cynical attempt to redefine Software as a Service (SaaS) to Microsoft's advantage. The message that Windows Azure was a platform upon which custom services could be built quickly and easily was lost in the mix.

The problem with this is that markets, like nature, abhor a vacuum. If a vendor fails to define its product and, more importantly, fails to define a product's value proposition, the market will do it. And much of this market-driven definition will focus on what the product is not. For Windows Azure, this boiled down to it not being IaaS.

Unfortunately, with each passing month, cloud consumers declared strongly that what they actually wanted was IaaS.

Many observers have assessed that the market did not and will not want PaaS, at least not for some time. And Windows Azure often appears as the poster child for rejection. Customers say they don't necessarily want what Windows Azure has. In reality, though, many have never heard the real Windows Azure/PaaS value proposition. PaaS hasn't been readily adopted because it is a different approach and therefore must be used differently. But sometimes different is a good thing.

Microsoft missed the boat in getting this point out to potential customers, at the cost of Windows Azure. And this is a textbook example of what happens when a vendor doesn't control -- or clearly state -- its product message, especially in newer markets like cloud computing.

According to his Twitter bio, Mark is a “former member of the US Azure sales team now driving business development for Fino, a consulting firm focused on enterprise cloud and mobile applications.” IMO, his points are well taken.

Full disclosure: I’m a paid contributor to TechTarget’s blog.

Haishi Bai (@HaishiBai2012) posted Play fireworks together! Process of building a SignalR sample on 11/7/2012:

In my BUILD 2012 session on Cloud Services, I mentioned a fireworks sample but didn’t get time to show it. It is an end-to-end sample showcasing how to use SignalR to build massively interactive web applications. As promised, I’m sharing the source, as well as some of the “Performance by Design” thinking that I went through building the sample. I hope you can pick up some practical SignalR tricks and tips here, as well as some hints on designing an application with performance considerations.

I have a version deployed; it is a two-instance deployment with a Service Bus backplane, so it should hold up for hundreds of concurrent fireworks. The application works on your PCs and Macs (with HTML 5 enabled browsers), as well as mobile devices such as Surface, WP, iPhone, and iPad. I haven’t tested on Android devices, though. Note that because the code does intense graphics, your mobile devices may have a hard time keeping up with rendering, especially when there are lots of fireworks. My Windows Phone 7 could handle about 500 concurrent fireworks.

The Application

The idea is simple – to allow many users to play virtual fireworks together over the Internet. It’s a single-page web application written in JavaScript and HTML 5. The application supports two firework types – simple and complex – but it can be extended by providing additional subclasses of the Firework class. In addition, you can pick from several different colors (including a random multi-color option), as well as how trails of sparks are rendered.


The Code

Complete code of the application can be found at GitHub. Here are some brief descriptions that may help you to understand the code. There are comments inline with the code as well.

  • Everything interesting is in the default Home view (Views/Home/Index.cshtml).
  • Firework is the base class that implements the basic behaviors of a firework. It defines a series of Sparks, which in turn contain a list of Trail points.
  • FireworkPatternFactory is a class that helps reduce the amount of calculation. See the last section for more details.
  • SimpleFirework and ComplexFirework are two subclasses of Firework. If you want to implement additional firework types, you’ll need to initialize the Sparks in your constructor and implement updateSparks() to update the sparks. Trails are automatically maintained by the base class.
  • The rendering routine is the updateCanvas() method, which is set on a timer with 50ms intervals.
SignalR Tricks & Tips

This post is not an introduction to SignalR. Instead, I’ll provide a couple of quick tricks & tips that you can use when implementing your own SignalR application on Windows Azure.

Enable Service Bus backplane

SignalR Service Bus backplane allows you to scale SignalR hubs to multiple instances. You’ll need to enable the backplane if your Cloud Service will have multiple instances.

  1. You’ll need to add a reference to Microsoft ASP.NET SignalR ServiceBus Libraries NuGet package.
  2. Initialize the backplane in Global.asax.cs:
    private string topicPathPrefix = "firework";
    protected void Application_Start()
    {
        // wire up the Service Bus backplane across role instances
        // (extension method from the SignalR ServiceBus package)
        GlobalHost.DependencyResolver.UseWindowsAzureServiceBus(
            CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString"),
            partitionCount: 5, 
            nodeCount: getRoleInstanceCount(), 
            nodeId: getRoleInstanceNumber(), topicPrefix: topicPathPrefix);
    }
    private int getRoleInstanceCount()
    {
        return RoleEnvironment.CurrentRoleInstance.Role.Instances.Count;
    }
    private int getRoleInstanceNumber()
    {
        // instance IDs end in "_N" (or ".N" in the emulator); parse out N
        var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
        var li1 = roleInstanceId.LastIndexOf(".");
        var li2 = roleInstanceId.LastIndexOf("_");
        var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
        return Int32.Parse(roleInstanceNo);
    }

Note that nodeCount should be consistent with the number of your role instances, and nodeId should be unique within the cluster.

Enable WebSockets

At the time this post was written, WebSockets was not enabled by default on osFamily 3. However, you can easily enable it via a startup script:

%SystemRoot%\system32\dism.exe /online /enable-feature /featurename:IIS-WebSockets

You don’t have to do this – SignalR will fall back to other means of establishing connections – but WebSockets is preferred. In addition, you need to set the runtime execution context to elevated to make this work. The operation may take a long time, so I’d suggest making it a background task so that your deployments go through faster (and continue to work even if the command fails).

Service Bus Connection String

At the time this post was written, you needed to make sure your Service Bus connection string contains the token provider type:

<add key="Microsoft.ServiceBus.ConnectionString" value="Endpoint=sb://[your namespace].servicebus.;SharedSecretIssuer=owner;SharedSecretValue=[your secret key];provider=SharedSecret" />
Performance by Design

This is a very “busy” application – on one hand it needs to sync its state with hundreds of other clients, and on the other hand it needs to render thousands of elements on the screen at a very fast pace. To ensure performance, during the design phase I paid extra attention to two things: reducing the amount of information that needs to be synced, and reducing the amount of calculation. Below are some of the design decisions I made along the way. I hope they provide some ideas to help you design your first interactive web application.

What to sync

iteration 1

When you render a beautiful firework, you need to know the positions of every single spark as well as its trail. If you were to sync all of that information directly, the amount of traffic would be considerable – each complex firework contains 48 sparks with 240 trail points – that’s nearly 300 coordinates, along with the color of each spark, to be synced. Now multiply that by 100 users, at every 50 milliseconds. That’s craziness. We need a much simpler representation of a firework. And here’s the abstraction I use – a firework is described by a base position (where the user clicked), a base color, a type (simple or complex), a trail type (dot, line, or none), and a phase. Note I’m cheating a little here – for multi-color fireworks I’m not going to sync the color of each spark. No one would care about (or even notice) that they have different colors - knowing when and how to cheat is important ;). The phase is an interesting concept – basically, when a firework is ignited it’s at phase 0, and it goes through all the phases until it burns out. The program is designed in such a way that you can paint a firework of any phase at any time – you don’t have to start at phase 0. This allows me to sync firework states at any time.
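To put rough numbers on the savings, here is a small Python sketch; the trail counts and field values are illustrative, not taken from the sample:

```python
import json

# Brute force: ship every spark and trail point (48 sparks, with trail
# points per spark) on every sync.
full_state = {
    "sparks": [
        {"color": "#ffcc00", "trail": [[x, x + t] for t in range(5)]}
        for x in range(48)
    ]
}

# Phase-based abstraction: a handful of fields from which any client can
# deterministically re-derive the full picture at any point in the animation.
compact_state = {
    "x": 320, "y": 240,    # base position (where the user clicked)
    "color": "#ffcc00",    # base color
    "type": "complex",     # simple | complex
    "trail": "line",       # dot | line | none
    "phase": 12,           # how far along the burst is
}

# The compact form is well over an order of magnitude smaller on the wire.
assert len(json.dumps(compact_state)) < len(json.dumps(full_state)) / 10
```

Because a firework of any phase can be painted at any time, the compact form also lets a late-joining client pick up an in-flight firework mid-burst.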

iteration 2

That’s much better, but can we do even better? Obviously we can compress the data, such as encoding color, type, and trail into a single byte. But there would still be a lot of data to sync. Eventually, I decided to sync the actions instead of the states. Generally speaking, when you sync among distributed systems, syncing the states is much more reliable than syncing the actions, because with a single action missing, duplicated, or out of place, the whole synchronization is messed up. However, for this particular application, missing some of the actions – and there’s only one “add” action – is not that terrible. So some clients may miss a firework or two -- not a big deal. And it’s not critical to make sure all pixels on the screens are in sync either. Hence syncing the actions is a very reasonable choice in this case, and it dramatically reduces the amount of data to be transferred. All clients maintain their states separately, and they try to perform the same actions on a best-attempt basis. For a firework application, that’s acceptable.
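The single-byte encoding mentioned above could look like the following Python sketch; the field widths and value lists are my own illustration, not the sample’s actual wire format:

```python
# Candidate value lists (illustrative; the real app's options may differ)
COLORS = ["red", "green", "blue", "gold", "multi"]   # fits in 3 bits
TYPES  = ["simple", "complex"]                       # fits in 1 bit
TRAILS = ["dot", "line", "none"]                     # fits in 2 bits

def pack(color: str, ftype: str, trail: str) -> int:
    """Pack color/type/trail into one byte: [trail:2][type:1][color:3]."""
    return (TRAILS.index(trail) << 4) | (TYPES.index(ftype) << 3) | COLORS.index(color)

def unpack(b: int) -> tuple:
    """Recover the three fields from the packed byte."""
    return COLORS[b & 0b111], TYPES[(b >> 3) & 0b1], TRAILS[(b >> 4) & 0b11]

packed = pack("gold", "complex", "line")
assert 0 <= packed < 256
assert unpack(packed) == ("gold", "complex", "line")
```

Even so, as the paragraph above argues, shipping the one “add” action (click position plus this byte) beats shipping any per-frame state at all.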

Reduce calculation


When fireworks explode, we need to calculate the trajectories of the sparks. To avoid repetitive SIN/COS calculations, I pre-calculated the points of an expanding circle and saved all the points in memory. Then, when I need to plot a spark (or its trail) on screen, I simply look up the corresponding values by phase. Acceleration of gravity is kept in a separate pre-calculated array so I can adjust and apply gravity effects independently from the perfect explosion circle.


Drawing a perfect trail of an expanding circle (the spark) takes some non-trivial calculation, which is not a price I want to pay. Instead of calculating perfect polygons to compose the trail, I chose to use approximate bands that connect the centers of the trail points. This is a much simpler calculation, and the result is very reasonable. In the following diagram, the red area shows the approximation bands, while the yellow area shows the perfect trail polygons. You can see the algorithm is biased vertically, so when the sparks fall the trails look more spectacular (again, knowing when and how to cheat is important).


What’s left
  • A better, stretchable UI.
  • The explosion effect for the complex firework isn’t that nice. Because of the 3D perspective in real life, the center should not be hollow but filled with sparks coming toward you. A pre-calculated 3D projection is needed, and the spark coordinates need to be extended to 3 dimensions. It’s not a terribly hard change, but it needs a little 3D geometry.

Here you go: a SignalR fireworks program that I coded up in about 3 days. I hope you picked up some useful tips on setting up the Service Bus backplane. If you had some fun playing with the sample, that’s even better! And if you are inspired to build much, much better interactive web applications, my goal here is achieved :).

My test of the live app on 11/8/2012 showed the following static screen with the default settings:


Bruno Terkaly (@brunoterkaly) asserted BizSpark is fantastic for Windows 8 and Azure Entrepreneurs in an 11/7/2012 post:

Microsoft BizSpark is an awesome way to get access to the MS stack for free.

  1. It would be foolish not to leverage BizSpark if you are an entrepreneur. You can sign up below. It is free.
  2. Sign up Link:

      Sign up for free MS technologies with BizSpark

  3. You can learn more from Douglas Crets at:
    • @Bizspark
    • @douglascrets
  4. Microsoft BizSpark is a global program that helps software startups succeed by giving them access to Microsoft software development tools, connecting them with key industry players, including investors, and providing marketing visibility to help entrepreneurs starting a business.
  5. Microsoft believes that by helping startups succeed we're helping to build a valued long-term partnership. Together we can build a more vibrant global software economy.
  6. BizSpark helps startups by providing access to Microsoft software when you most need it and can least afford it, and by supporting the network of organizations (startup incubators, investors, advisors, government agencies) that are equally involved and invested in software-fueled innovation and entrepreneurship.
  7. It allows software startups to get access to software development tools, connecting them with key industry players, and providing marketing visibility.
  8. I love to talk about BizSpark because it gives you free access to cloud computing (Windows Azure).
  9. For free you get access to a powerful cloud platform for the creation of web applications and services.
  10. But that is not all, you get technical support, business training and a network of over 2,000 partners to connect members with incubators, investors, advisors, government agencies and hosters.
  11. More than 45,000 companies in over 100 countries have joined BizSpark.

BizSpark Benefits

  1. The Windows Azure platform makes it easy for startups to get a production ready solution up and running quickly so you can pursue the things that matter. Windows Azure offers a simple, comprehensive, and powerful platform for the creation of web applications and services. Windows Azure enables Start-ups to focus on their business logic, as opposed to operational hurdles, in creating compelling products.
  2. Azure Solves Problems
    • Geo location of hosting servers
    • Real time backup and failover for databases
    • Deploying from staging to production environments
    • Shared memory caches
  3. With respect to Azure, here is what you get:
    • Compute: 1,500 hours of a Small Instance
    • Storage: 30 GB
    • Storage transactions: 2,000,000
    • SQL Azure: 5 GB
    • Access Control transactions*: 500K
    • Service Bus: 3,000 hours
    • Cache: 128 MB
    • Data transfers (WW): 35 GB out; inbound free
    • Annual savings**: $3,700
  4. So what's the biggest service that you can build?
    1. With the previously mentioned provisos (delete deployments at the end of each 8-hour work day, which can be automated with PowerShell; write hosted services to interoperate; combine the capacity of storage accounts and SQL Azure databases), a team of 3 developers could in theory test an uber-service with up to 18 single-core instances (any combination of web/worker/VM roles), 15 GB of relational storage (three separate 5 GB databases, perhaps sharded along customer boundaries), 90 GB of non-relational storage (a combination of blobs, tables, and queues), and 768 MB of distributed memory cache (3 distinct namespaces).
  5. Windows 8 Applications
    1. Get your Win8 apps into the Windows Store
    2. BizSpark members now get a free, 1-year Windows Store developer account.
  6. Save Time, Resources and Money with SendGrid
    • SendGrid's cloud email infrastructure eliminates the cost and complexity of maintaining custom email systems in-house.
  7. Visual Studio and Developer Tooling
    • BizSpark gives you access to Microsoft's giant software stack for free.

Just some of the software packages that you get for free.

  1. My local startup developers in Silicon Valley love this program.
  2. This is for startups
  3. Windows 8 Entrepreneurs
  4. Cloud Entrepreneurs
  5. Sign Up for BizSpark
  6. Click here to save $1,000s

Helping Innovators
BizSpark has been an amazing opportunity for my local entrepreneurs.

BusinessWire reported Certeon Announces Windows Azure Ready aCelera in an 11/7/2012 press release:

Certeon, the application performance company, today announced availability of Certeon’s aCelera wide area network (WAN) optimization platform for Windows Azure. The interoperability of aCelera software with Windows Azure unlocks the vast application capabilities of the Microsoft public cloud, helping users experience the benefits of seamless, high-speed and secure data access and an enhanced end user experience for any application running in Windows Azure. In addition, aCelera accelerates virtual machines (VMs) moving between datacenters and any cloud, accelerates a wide variety of applications, and optimizes network performance. The combination of Certeon and Windows Azure can enable IT organizations and cloud service providers to optimize computing resources and maximize utilization of their IT investments.

"Maximizing the power of public cloud platforms is best accomplished through the use of a dynamic, consistent WAN Optimization platform," said Massood Zarrabian, CEO, Certeon. "With Windows Azure and Certeon aCelera, organizations can now make the most of what Windows Azure offers right along side their internal business environments, delivering maximum flexibility, ease of use, productivity and adapting to business change."

The integration enables customers to connect Windows Azure and data centers through a single accelerated access point to get all the benefits of WAN optimization. When moving or replicating data across any WAN, there are reductions in responsiveness that impact user experience and business-critical processes. Certeon and Windows Azure’s solutions can help customers optimize their WAN, so companies can realize the benefits of utilizing hybrid cloud environments without compromising performance. The aCelera platform provides significant WAN performance improvements and helps reduce the costs associated with WAN infrastructure.

In his recent research note Managing Network Performance of Cloud-Based Applications, Gartner analyst Eric Siegel observed that "Cloud computing is a hot topic, and the performance of the network is a critical factor in user satisfaction," and that "Network performance is a key component of the total performance of cloud-based applications." Mr. Siegel concluded, "If the application or network can't be redesigned, WAN performance optimization can make the difference between useable and useless."

Certeon aCelera features the scale, speed and simplicity enterprise applications need, and also includes integrated cloud-intelligent features like WAN bridging, on-demand security and a profile-based configuration management system. The combination of Windows Azure and aCelera enables organizations to deploy a single integrated solution that is managed through a single interface and deployed across data center, private and public cloud.

Certeon aCelera for Windows Azure environments provides:

  • Integrated High Availability and Failover
  • Integrated Quality of Service, Traffic Shaping and on demand Security
  • Highest performance/cost ratio in the industry
  • 400% more acceleration
  • Up to a 50 times improvement in application response times
  • Subscription licensing

About Certeon

Certeon’s award-winning software helps enterprises eliminate traditional network constraints and accelerates application performance, providing a LAN-like experience when using applications and/or data from branch offices, data centers and the cloud. Certeon’s platform is built specifically for virtualized environments; it is both hypervisor and hardware agnostic and can also run on Windows Server. These capabilities provide unmatched performance, scalability and flexibility while allowing organizations to leverage their existing infrastructure and future-proof their IT investment. Whether your initiative is virtualization, consolidation, cloud computing, or disaster recovery, Certeon can help.

Founded in 2004, Certeon is a private company backed by Sigma Partners, RRE Ventures and Globespan Capital Partners. More information can be found at

Claudio Caldato and Erik Meijer described MS Open Tech Open Sources Rx (Reactive Extensions) – a Cure for Asynchronous Data Streams in Cloud Programming in an 11/6/2012 post:

Updated: added quotes from Netflix and BlueMountain Capital Management

If you are a developer who writes asynchronous code for composite applications in the cloud, you know what we are talking about; for everybody else, Rx (Reactive Extensions) is a set of libraries that makes asynchronous programming a lot easier. As Dave Sexton describes it, “If asynchronous spaghetti code were a disease, Rx is the cure.”

Reactive Extensions (Rx) is a programming model that allows developers to glue together asynchronous data streams. This is particularly useful in cloud programming because it helps create a common interface for writing applications that work with diverse data sources, e.g., stock quotes, Tweets, computer events, Web service requests.
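To make the push-based model concrete, here is a toy observable in plain JavaScript. This is not the RxJS API, just a minimal sketch of the pattern Rx builds on: a source pushes values to subscribers, and operators like `map` and `filter` compose over the pushed stream the way LINQ operators compose over pulled collections.

```javascript
// A toy push-based sequence: subscribe takes a callback and the source
// pushes values into it; map/filter build new observables that transform
// or drop values before they reach the subscriber.
function observable(subscribe) {
  return {
    subscribe,
    map(f)    { return observable(next => subscribe(v => next(f(v)))); },
    filter(p) { return observable(next => subscribe(v => { if (p(v)) next(v); })); },
  };
}

// A source that pushes each array element to its subscriber.
const from = xs => observable(next => xs.forEach(next));
```

Usage: `from([1, 2, 3, 4]).filter(x => x % 2 === 0).map(x => x * 10).subscribe(v => console.log(v));` pushes 20 and 40 to the subscriber. The same composition works whether the source is an array, a stream of mouse events, or a web-socket feed -- which is the common interface Rx provides.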

Today, Microsoft Open Technologies, Inc., is open sourcing Rx. Its source code is now hosted on CodePlex to increase the community of developers seeking a more consistent interface to program against, and one that works across several development languages. The goal is to expand the number of frameworks and applications that use Rx in order to achieve better interoperability across devices and the cloud.

Rx was developed by Microsoft Corp. architect Erik Meijer and his team, and is currently used on products in various divisions at Microsoft. Microsoft decided to transfer the project to MS Open Tech in order to capitalize on MS Open Tech’s best practices with open development.

There are applications that you probably touch every day that are using Rx under the hood. A great example is GitHub for Windows.

According to Paul Betts at GitHub, "GitHub for Windows uses the Reactive Extensions for almost everything it does, including network requests, UI events, managing child processes (git.exe). Using Rx and ReactiveUI, we've written a fast, nearly 100% asynchronous, responsive application, while still having 100% deterministic, reliable unit tests. The desktop developers at GitHub loved Rx so much, that the Mac team created their own version of Rx and ReactiveUI, called ReactiveCocoa, and are now using it on the Mac to obtain similar benefits."

And Scott Weinstein with Lab49 adds, “Rx has proved to be a key technology in many of our projects. Providing a universal data access interface makes it possible to use the same LINQ compositional transforms over all data whether it’s UI based mouse movements, historical trade data, or streaming market data send over a web socket. And time based LINQ operators, with an abstracted notion of time make it quite easy to code and unit test complex logic.”

Netflix Senior Software Developer Jafar Husain explained why they like Rx. "Rx dramatically simplified our startup flow and introduced new opportunities for performance improvements. We were so impressed by its versatility and quality, we used it as the basis for our new data access platform. Today we're using both the Javascript and .NET versions of Rx in our clients and the technology is required learning for new members of the team."

And Howard Mansell, Quantitative Strategist with BlueMountain Capital Management added, “We are very pleased that Microsoft are Open-Sourcing the Reactive Extensions for .NET. This will allow users to better reason about performance and optimize their particular use cases, which is critical for performance and latency sensitive applications such as real-time financial analysis.”

Part of the Rx development team will be on assignment with the MS Open Tech Hub engineering program to accelerate the open development of the Rx project and to collaborate with open source communities. Erik will continue to drive the strategic directions of the technology and leverage MS Open Tech Hub engineering resources to update and improve the Rx libraries. With the community contribution we want to see Rx be adopted by other platforms. Our goal is to build an open ecosystem of Rx-compliant libraries that will help developers tackle the complexity of asynchronous programming and improve interoperability.

We are also happy to see that our decision is welcome by open source developers.

“Open sourcing Rx just makes sense. My hope is that we’ll see a couple of virtuous side-effects of this decision. Most likely will be faster releases for bug fixes and performance improvements, but the ability to understand the inner workings of the Rx code should encourage the creation of additional tools and Rx providers to remote data sources,” said Lab 49’s Scott Weinstein.

According to Dave Sexton, “It’s a solid library built around core principles that hides much of the complexity of controlling and coordinating asynchrony within any kind of application. Opening it will help to lower the learning curve and increase the adoption rate of this amazing library, enabling developers to create complex asynchronous queries with relative ease and without any spaghetti code left over.”

Starting today, the following libraries are available on CodePlex:

  • Reactive Extensions
    • Rx.NET: The Reactive Extensions (Rx) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators.
    • RxJS: The Reactive Extensions for JavaScript (RxJS) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in JavaScript which can target both the browser and Node.js.
    • Rx++: The Reactive Extensions for Native (RxC) is a library for composing asynchronous and event-based programs using observable sequences and LINQ-style query operators in both C and C++.
  • Interactive Extensions
    • Ix: The Interactive Extensions (Ix) is a .NET library which extends LINQ to Objects to provide many of the operators available in Rx but targeted for IEnumerable<T>.
    • IxJS: An implementation of LINQ to Objects and the Interactive Extensions (Ix) in JavaScript.
    • Ix++: An implementation of LINQ for Native Developers in C++.
  • Bindings
    • Tx: a set of code samples showing how to use LINQ to events, such as real-time standing queries and queries on past history from trace and log files, which targets ETW, Windows Event Logs and SQL Server Extended Events.
    • LINQ2Charts: an example for Rx bindings. Similar to existing APIs like LINQ to XML, it allows developers to use LINQ to create/change/update charts in an easy way. We would love to see more Rx bindings like this one.
With these libraries we are giving developers open access to both push-based and pull-based data via LINQ in Microsoft’s three fundamental programming paradigms (native, JScript and Managed code).

We look forward to seeing you guys use the library, share your thoughts and contribute to the evolution of this fantastic technology built for all you developers.

Larry Franks (@larry_franks) posted Windows Azure Command-line Tools for Mac and Linux, Version 0.6.7 is Available on 11/6/2012:

A new version of the Windows Azure Command-line Tools for Mac and Linux has been released. This version adds some useful commands for working with Windows Azure Web Sites, specifically setting app settings and working with GitHub and the server-side repository.

App Settings

App settings is a feature of Windows Azure Web Sites that lets you create key/value pairs that end up as environment variables at runtime. We've written about this feature previously, but it required you to set the key/value pairs using the Windows Azure portal. Now you can control app settings from the command line by using the following command:

azure site config add <key>=<value> 

To see a list of all key/value pairs, use the following:

azure site config list 

Or if you know the key and want to see the value, you can use:

azure site config get <key> 

Now, what if you want to change the value of an existing key? Unfortunately, just adding it again doesn't work, since the key already exists. You must first clear the existing key and then re-add it. The clear command is:

azure site config clear <key> 

To see the help listing for these commands, just use:

azure site config 
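Since app settings surface as environment variables at runtime, a Node.js site reads them with `process.env`. A minimal sketch (`MY_SETTING` is a hypothetical key used for illustration):

```javascript
// Read an app setting set with `azure site config add <key>=<value>`.
// At runtime the key appears as an environment variable; process.env
// returns undefined for keys that were never set, so we supply a fallback.
function getSetting(key, fallback) {
  const value = process.env[key];
  return value === undefined ? fallback : value;
}

// e.g. const connection = getSetting("MY_SETTING", "local-default");
```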
GitHub Deployments

One of the features added to the Windows Azure Portal a month or two ago was continuous deployment from GitHub, CodePlex, or Bitbucket, but it had to be configured through the portal. Now you can use the following command to enable continuous deployment from GitHub:

azure site deployment github 

You'll be prompted for your GitHub username and password, and then a list of the repositories you have access to. Once you select a repository, a new website hook will be created on GitHub for your Azure web site. New updates to the repository on GitHub will automatically be deployed to the Windows Azure Web Site. This will default to the master branch.

If you want to use a different branch than master, you can use the following command to specify the branch:

azure site repository <branchname> 

There's another 'repository' command that's new also, which lets you completely remove a repository from your Windows Azure Web Site:

azure site repository delete
Web Site general

There's also a new general web site command, which will recycle a web site:

azure site restart 

Issuing this command will stop and then restart the web site.

Getting the new bits

Since the command-line tools are implemented using Node, you can use npm to get the new bits. Use the following command to install the command-line tools:

npm install azure -g 

This should install version 0.6.7 of the tools, which is the new version as of this post.


That's it for the new commands in the Windows Azure Command-line tools. There's also some updates to the Windows Azure PowerShell cmdlets, but I'll leave that post for Brian. For more information on using the Windows Azure Command-Line Tools for Mac and Linux, see

Sandrino Di Mattia (@sandrinodm) described the IIS 8.0 Application Initialization module in a Windows Azure Web Role in an 11/5/2012 post:

Back in 2009 the IIS team announced the beta release of the Application Warm-Up Module as a downloadable extension. This module has since been renamed Application Initialization and is included in IIS 8. Activating it adds a few interesting capabilities to your IIS installation:

  • Starting a worker process without waiting for a request (AlwaysRunning)
  • Load the application without waiting for a request (preloadEnabled)
  • Show a nice loading page while the application is starting

The first two features are important in terms of performance. After starting the World Wide Web service, IIS will start the application pool and initialize your application. In most cases this means that the first users visiting your site won’t have to wait for the application to start, because it was already started and initialized. In addition to automatically starting the application, this also improves the ‘recycle experience’: when your application pool is recycled, a new worker process is started, but with the preloadEnabled setting the new worker process will only start receiving requests once the application is completely initialized. Wade Hilmo wrote a great introduction to Application Initialization on his blog.

In case your application takes a long time to start (SharePoint 2010 or Dynamics CRM 2011, for example), you can provide your users with a better experience. Using a URL rewrite you can show a loading page while the application is starting. The IIS team blogged about this a while ago, and as you can see it’s pretty simple to set up:

Installing Application Initialization

Let’s see how we can bring this to a Windows Azure Web Role. We’ll create a simple application that deploys to Windows Azure, enables the Application Initialization module in IIS and configures the site and the application pool. The first thing we’ll do is create a new Cloud Service targeting the .NET Framework 4.5. As a result, the osFamily in the ServiceConfiguration.cscfg will be set to 3. This means you’ll be deploying a Web Role on Windows Server 2012.

The Application Initialization module is part of IIS8 but isn’t activated by default. In order to activate this module, we’ll need to write a startup task that runs pkgmgr.exe (a tool that allows you to install Windows Server Roles and Features through the command line). The only problem is that we don’t know the name of the Application Initialization feature required by pkgmgr.exe. After digging in the registry I found the following key:

With this information I could simply create a startup task that adds this feature to the current IIS8 installation:

And here is the result:

Configuring the application

The module has been installed, but how do we configure the website and the application pool? We have to do this at the right moment, since IIS needs to be started and the Web Role’s website(s) need to be configured properly. That’s why I don’t like using a startup task for this: most of the time your websites haven’t been configured yet when it runs. Instead of using a startup task, we’ll write some code in WebRole.cs. Since we’ll be modifying settings in IIS, we need to make sure the code in WebRole.cs runs in an elevated process. This is possible by adding the following line to the ServiceDefinition.csdef:

Now simply add a reference to Microsoft.Web.Administration (which can be found in C:\Windows\System32\inetsrv) and write the following code (this won’t work if you’re debugging locally with IIS Express):

This code will connect to IIS and set 2 attributes:

  • On the application pool of the main website it will set startMode to AlwaysRunning. This will make sure that the application pool starts whenever the World Wide Web service starts (after an iisreset for example).
  • On the main website it will set preloadEnabled to true. Setting this to true will make sure your application initializes after your application pool starts or recycles (it will simulate the first request).

Finally calling CommitChanges will save these settings. If you activate Remote Desktop and you connect to your Web Role you’ll see that the w3wp.exe process is already running:

Setting up the splash screen

I’ve modified the Global.asax file to simulate some work while the application is starting. Using Thread.Sleep it will take 30 seconds for the application to start:

In case users start connecting while the application is starting, we can show a splash screen thanks to the Application Initialization module. Besides using URL rewrites, you can configure the Application Initialization module directly in the web.config. Here I’m showing Loading.htm while the application is starting:

While the application is starting users will see the following page:

Once the application is ready, they’ll see the actual application:

I’m assuming you’ll want to test this. It seems that running iisreset does something that prevents the splash screen from showing (I don’t know whether this is caused by Windows Azure or by IIS), but I noticed that restarting the Web Server manually through the UI does work:

That’s it!

The complete application is available on GitHub:

Martin Sawicki announced Windows Azure Plugin for Eclipse with Java – November 2012 Preview on 11/5/2012:

I’m pleased to announce the availability of a major update to our Eclipse tooling, the “Windows Azure Toolkit for Eclipse, November 2012 Preview (version 1.8.0)”. This release accompanies the release of the Windows Azure SDK v1.8, as well as the AMQP 1.0 messaging protocol support in Windows Azure Service Bus, and exposes a number of related features recently enabled by Windows Azure.

The key highlights of this release include:

a) The updated “Windows Azure Plugin for Eclipse with Java” supports using Windows Server 2012 as the target operating system in the cloud

b) The plugin also now allows you to easily configure Windows Azure Caching, so you can use a memcached-compatible client for co-located, in-memory caching scenarios

c) The toolkit includes a new component: “Package for Apache Qpid Client Libraries for JMS (by MS Open Tech)”, which is a distribution of the latest client libraries from Apache supporting AMQP 1.0-based messaging recently enabled by Windows Azure Service Bus

d) Plus a number of additional customer-feedback driven enhancements and bug fixes

To learn more, see our latest documentation.

Martin Sawicki
Principal Program Manager
Microsoft Open Technologies, Inc.
A subsidiary of Microsoft Corporation



Visual Studio LightSwitch and Entity Framework 4.1+

Kostas Christodoulou (@kchristo71) described modifications to LightSwitch’s Default Screens in an 11/7/2012 post:

If you have ever tried to implement a default-screen selection mechanism more complex than the one LightSwitch provides, you are already familiar with the limitations.

For those who don’t get what I mean, a very common example is opening the details screen of an object from a list containing items from a database view that includes the id of the original object. Imagine you have a Customer entity and a CustomerInfo entity that comes from a database view bringing aggregated customer data from various tables. When selecting and editing a CustomerInfo, you want to edit the Customer, not the CustomerInfo.

The above is only one quick example. Another would be having Add/Edit screens, i.e. screens that can both create and edit an existing object. In any case, if you need to modify the default signature of a default screen (adding an extra parameter, for example), the screen becomes unselected as the default screen, because LightSwitch’s default-screen mechanism cannot handle it.

Going back to the Customer/CustomerInfo example, I have come across at least a couple of forum threads looking for a way to “override” the default click behavior of links in collections. In our example, clicking on the link of the customer’s name will open a default screen for CustomerInfo.
So, one easy solution is creating a default screen for CustomerInfo that, upon load, opens the Customer's default screen (passing the id of the customer) and closes itself. But how do you handle allowing only one instance of a details screen per object? Or all the other issues that might come up? (Imagine you have more than one view entity for the Customer entity.)

Anyway, this is my approach to solving this issue. It works fine for me in a big project whose development has been running for almost a year now (and which, by the way, is the reason for not posting very often here lately, understatement of the year):
Reintroduce the ShowDefaultScreen method of the Application in the Client project, along with an interface defined and implemented by all screens that need to be involved.

This is the interface definition:

public interface IDefaultScreen : IScreenObject {
    bool Shows(IEntityObject instance);
    IEntityObject Object { get; }
}

This is the code in Application.cs:

public new void ShowDefaultScreen(IEntityObject instance) {
  ShowOrFocusDefaultScreen(instance);
}

public void ShowOrFocusDefaultScreen(IEntityObject entity) {
  foreach (IActiveScreen activeScreen in this.ActiveScreens) {
    if ((activeScreen.Screen is IDefaultScreen) &&
        (activeScreen.Screen as IDefaultScreen).Shows(entity)) {
      // The entity is already displayed by an open screen: just focus it.
      activeScreen.Activate();
      return;
    }
  }
  entity.Details.Dispatcher.BeginInvoke(() => {
    if (!HandleEntityDefaultScreen(entity))
      base.ShowDefaultScreen(entity);
  });
}

private bool HandleEntityDefaultScreen(IEntityObject entity) {
  switch (entity.GetType().Name) {
    case "CustomerInfo": {
        this.ShowEditCustomer((entity as CustomerInfo).Id);
        return true;
      }
    // ...
  }
  return false;
}
In the code displayed above, the HandleEntityDefaultScreen implementation is just there to demonstrate the concept. The ellipsis below the first case implies you can write whatever is required by your application.
And finally this is the code from the Customer details screen that implements IDefaultScreen:
public partial class EditCustomer : IDefaultScreen {
  #region IDefaultScreen Members

  public bool Shows(IEntityObject instance) {
    return (instance is CustomerInfo) &&
           (instance as CustomerInfo).Id.Equals(this.CustomerProperty.Id);
  }

  public IEntityObject Object {
    get { return this.CustomerProperty; }
  }

  #endregion
}
The careful reader will notice that the IEntityObject Object { get; } property exposed by IDefaultScreen is not used in the sample code. I ported the code exactly as I use it in my projects (apart from the changes made to demonstrate the Customer/CustomerInfo paradigm) where this property is used for a different purpose. You can either remove it or find a good use for it.

Beth Massi (@bethmassi) suggested on 11/5/2012 that you watch //BUILD Session: Building Connected Business Applications with Visual Studio LightSwitch:

Friday, John Stallo, Lead Program Manager of the LightSwitch team, did a session at //BUILD that is now available online. We’re very excited about the LightSwitch HTML Client and the direction we’re taking the product. Check out John’s presentation to learn more.

Watch: Building Connected Business Applications with Visual Studio LightSwitch

With the recent addition of HTML5 support, Visual Studio LightSwitch remains the easiest way to create modern line-of-business applications for the enterprise. In this demo-heavy session, see how to build and deploy data-centric business applications that provide rich user experiences tailored for modern devices. We’ll cover how LightSwitch helps you focus your time on what makes your application unique, allowing you to easily implement common business application scenarios—such as integrating multiple data sources, data validation, authentication, and access control—as well as leveraging Data Services created with LightSwitch to make data and business logic available to other applications and platforms. You will also see how developers can use their knowledge of HTML5 and JavaScript to build touch-centric business applications that run well on modern mobile devices.


Windows Azure Infrastructure and DevOps

Lori MacVittie (@lmacvittie) asserted “Meeting user expectations of fast and available applications becomes more difficult as you relinquish more and more control…” in an introduction to her Filling the SLA Gap in the Cloud post of 11/7/2012 to F5’s DevCentral blog:

User expectations with respect to performance are always a concern for IT. Whether it's monitoring performance or responding to a fire drill because an application is "slow", IT is ultimately responsible for maintaining the consistent levels of performance expected by end-users – whether internal or external.

Virtualization and cloud computing introduce a variety of challenges for operations whose primary focus is performance. From lack of visibility to lack of control, dealing with performance issues is getting more and more difficult.


The situation is one of which IT is acutely aware. A ServicePilot Technologies survey (2011) indicates virtualization, the pace of emerging technology, lack of visibility and inconsistent service models as challenges to discovering the root cause of application performance issues. Visibility, unsurprisingly, was cited as the biggest challenge, with 74% of respondents checking it off.

These challenges are not unrelated. Virtualization's tendency toward east-west traffic patterns can inhibit visibility, with few solutions available to monitor traffic between virtual machines deployed on the same physical machine. Cloud computing – highly virtual in both form factor and in model – contributes to the lack of visibility as well as to the challenges of disconnected service models, as enterprise and cloud computing providers rarely leverage the same monitoring systems.

Most disturbing, all these challenges contribute to an expanding gap between performance expectations (SLA) and the ability of IT to address application performance issues, especially in the cloud.


There are many "gaps" associated with virtualization and cloud computing: the gap between dev and ops, the gap between ops and the network, the gap between scalability of operations and the volatility of the network. The gap between application performance expectations and the ability to affect it is just another example of how technology designed to solve one problem can often illuminate or even create another.

Unfortunately for operations, application performance is critical. Degrading performance impacts reputation, productivity, and ultimately the bottom line. It increases IT costs as end-users phone the help desk, redirects resources from other equally important tasks toward solving the problem, and ultimately delays other projects.

This gap is not one that can be ignored or put off or dismissed with a "we'll get to that". Application performance always has been – and will continue to be – a primary focus for IT operations. An even bigger challenge than knowing there's a performance problem is what to do about it – particularly in a cloud computing environment where tweaking QoS policies just isn't an option.

What IT needs – both in the data center and in the cloud – is a single, strategic point of control at which to apply services designed to improve performance at three critical points in the delivery chain: the front, middle, and back-end.


Such a combined performance solution is known as ADO – Application Delivery Optimization – and it uses a variety of acceleration and optimization techniques to fill the gap between SLA expectations and the lack of control in cloud computing environments.

A single, strategic implementation and enforcement point for such policies is necessary in cloud computing (and highly volatile virtualized) environments because of the topological challenges created by the core model. Not only is the reality of application instances (virtual machines) popping up and moving around problematic, but the same occurs with virtualized network appliances and services designed to address specific pain points involving performance. The challenge of dealing with a topologically mobile architecture – particularly in public cloud computing environments – is likely to prove more trouble than it's worth. A single, unified ADO solution, however, provides a single control plane through which optimizations and enhancements can be applied across all three critical points in the delivery chain – without the topological obstacles.

By leveraging a single, strategic point of control, operations is able to leverage the power of dynamism and context to ensure that the appropriate performance-related services are applied intelligently. That means not applying compression to already compressed content (such as JPEG images) and recognizing the unique quirks of browsers when used on different devices.
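That first rule – skip compression for content that is already compressed – can be sketched as a tiny policy check. This is illustrative only; the content-type list and size threshold below are my assumptions, not F5's actual ADO policy:

```python
import gzip

# Content types that are already compressed; gzipping them again wastes CPU
# and can even enlarge the payload. (Illustrative list, not an F5 policy.)
ALREADY_COMPRESSED = {"image/jpeg", "image/png", "video/mp4", "application/zip"}

def should_compress(content_type: str, body: bytes, min_size: int = 1024) -> bool:
    """Compress only compressible types above a minimum size threshold."""
    if content_type in ALREADY_COMPRESSED:
        return False
    return len(body) >= min_size

def maybe_compress(content_type: str, body: bytes) -> bytes:
    """Apply gzip when the policy says it is worthwhile; otherwise pass through."""
    return gzip.compress(body) if should_compress(content_type, body) else body
```

A context-aware delivery tier applies exactly this kind of decision per response, rather than blindly compressing everything.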

ADO further enhances load balancing services by providing performance-aware algorithms and network-related optimizations that can dramatically impact the load and thus performance of applications.

What's needed to fill the gap between user-expectations and actual performance in the cloud is the ability of operations to apply appropriate services with alacrity. Operations needs a simple yet powerful means by which performance-related concerns can be addressed in an environment where visibility into the root cause is likely extremely limited. A single service solution that can simultaneously address all three delivery chain pain points is the best way to accomplish that and fill the gap between expectations and reality.

Manu Cohen-Yashar answered Where is Azure’s previous portal? in an 11/7/2012 post:

[A] few days ago the new portal was upgraded. The Service Bus was made available (along with a few other new features), but the CTP announcements and the link to the previous portal were removed.

Unfortunately, as of today not all Azure features are available in the new portal, so the previous portal is still required. For example, to use ACS or Data Sync we have to use the previous portal.

To access the previous portal click on your name:


Then a new menu will be opened, and a nice menu item will point you to the previous portal.


James Staten (@staten7) posted Q: Which Apps Should I Move to the Cloud? A: Wrong Question to his Forrester Research blog on 11/6/2012:

Out of all the inquiries I get from Forrester enterprise clients the above question is by far the most common these days. However, the question shows that we have a lot to learn about true public cloud environments.

I know I sound like a broken record when I say this but public clouds are not traditional hosting environments and thus you can't just put any app that can be virtualized into the cloud and expect the same performance and resiliency. Apps in the cloud need to adapt to the cloud - not the other way around (at least not today). This means you shouldn't be thinking about what applications you can migrate to the cloud. That isn't the path to lower costs and greater flexibility. Instead you should be thinking about how your company can best leverage cloud platforms to enable new capabilities. Then create those new capabilities as enhancements to your existing applications.

This advice should sound familiar if you have been in the IT business for more than a decade. Back in 1999 we did the same thing. As the Web was emerging we didn't pick up our UNIX applications and move them to the web. We instead built new web capabilities and put them in front of the legacy systems (green screen scrapers, anyone?). The new web apps were built in a new way - using the LAMP stack, scaling out and being geographically dispersed through hosting providers and content delivery networks. We learned new programming architectures, languages and techniques for availability and performance. Cloud platforms require the same kind of thinking.

Sure cloud platforms build off the web generation - we still scale out via load balancing, HTML and Javascript are key components and app servers and databases play key roles -- but what's different this time are two key factors that demand your attention: …

Read more

David Linthicum (@DavidLinthicum) asserted “Increasingly, CIOs can discern between mere hosting providers and true cloud services, but the cloudwashing persists” in his It's hosting, dammit: Fed up with fake cloud providers post of 11/6/2012 to InfoWorld’s Cloud Computing blog:

According to a CIO survey by hosting provider ElasticHosts, "83 percent of companies are frustrated with having to cut through marketing hype to find out which solutions are genuine cloud offerings and which are merely conventional hosting services with the word 'cloud' added to the title." Good ol' cloud-washing in action!

Two-thirds of the survey's respondents had been offered "cloud" services that are fixed-term, 40 percent had been offered services that weren't elastic or scalable, and 32 percent were offered services that weren't even self-service. (Keep in mind that a provider did this survey, so selfish interests are in play.)

Still, it's hardly a secret that most of the services that bear the "cloud computing" label are warmed-over hosting technologies that do not provide the core attributes of cloud computing, including self- and auto-provisioning, elasticity, and even pay-per-use. The good news is that customers can tell the difference, but they resent having to dig down to find out if the vendor is trying to fool them.

Although this survey highlights what we already know, it's time that we call them out on their lies and start the shaming. Not only are they wasting would-be buyers' time, but some of these fake cloud services will make it into cloud projects, only to fall on their face, eat up funds, and delay the migration to true cloud platforms.

What can you do about this problem? Call BS on the BS. If a vendor promotes something as a cloud solution though it's not, tell the guilty party to stop misleading you -- you have better ways to spend your day.

Truth be told, most of those presentations are given by salespeople who don't know a cloud from a hole in the ground; they actually believe what they're selling is a cloud. Used cars, appliances, PCs, cloud services -- the sales process is all the same, right?

No significant articles today

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Kevin Remde (@KevinRemde) began a video series with TechNet Radio: Cloud Innovators– My interview with Tom Shinder and Yuri Diogenes (Part 1) on 11/8/2012:

A couple of weeks ago I had the privilege of speaking with Tom Shinder and Yuri Diogenes as a first in our five part series on the Private Cloud. In part one we discuss basic private cloud principles and what it means for business, as well as how organizations can begin to understand and accept the value of implementing a private cloud.

After watching this video, follow these next steps:

Download Server 2012 evaluation

Step #1 – Download Windows Server 2012
Step #2 – Download Your FREE Copy of Hyper-V Server 2012

If you're interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:


Websites & Blogs:

Virtual Labs:

Follow @technetradio
Become a Fan @

Follow @KevinRemde
Become a Fan of Kevin’s @

Subscribe to our podcast via iTunes, Zune, Stitcher, or RSS


<Return to section navigation list>

Cloud Security and Governance

No significant articles today

<Return to section navigation list>

Cloud Computing Events

Rick Garibay (@rickggaribay) announced that he is Speaking at AzureConf on Channel 9 Next Week in an 11/7/2012 post:

I am flattered to share that I’ve been invited to speak at AzureConf one week from today at Channel 9 Studios in Redmond on Wed. 11/14.

AzureConf is a premier live streamed event delivered for the community by the community from Channel 9 Studios on the Microsoft Campus in Redmond, WA.

Brady Gaster and Corey Fowler have been hard at work for several weeks organizing content and logistics, and I can tell you that I am both excited and humbled by the speaker line-up and content.

The event will kick off with a keynote presentation by Scott Guthrie, followed by numerous sessions delivered by Windows Azure community members, including my friend, colleague and fellow (Azure) MVP Michael Collier, as well as esteemed MVPs and Insiders flying in from all over the country and world to join Scott and team at the Channel 9 studios, including Magnus Martensson and Eric Boyd, just to name a few.

Streamed live for an online audience on Channel 9, the event will allow you to see how customers, partners and MVPs are making the most of their skills to develop a variety of innovative applications on Windows Azure. The goal of the conference is to be just as valuable to seasoned Azure developers and architects as to those just learning the tremendous power of this exciting platform.

You can learn more about AzureConf by visiting the event site. Please be sure to register, as capacity for the live streamed event will be limited (however, sessions will be available for playback following the conference).

Thank you for your interest and please help spread the word!


Paul Tidwell will present an Azure Mobile Services and Azure Media Services session to the Austin .NET User Group on 11/16/2012:

The Azure offering has matured into a flexible platform for a broad range of solutions. New additions extend this feature set into the realm of media services for online encoding and video workflows. A REST API provides access to upload, configure, batch process, and check the status of uploaded assets. We’ll go over the available features and how to leverage them in your video pipeline.

Microsoft’s new Azure Mobile Services provides a fast and simple solution for client developers to set up a back end to persist data and consume services with very little effort. Developers can quickly provision a service role, add CRUD functionality, and authenticate with iOS, Android, and Windows 8 clients. We’ll discuss how to do all this with a demonstration and discussion of the offering.

Brent Stineman (@BrentCodeMonkey) recapped BUILD 2012 – Not just for Windows anymore in a 11/5/2012 post:

Last week marked the second BUILD conference. In 2011, BUILD replaced the Microsoft PDC conference in an event so heavily Windows 8 focused that it was even hosted at a dedicated URL. While the URL didn’t change for 2012, the focus sure did, as this event also marked the latest round of big release news for Windows Azure. In this post (which I’m publishing directly from MS Word 2013, btw), I’m going to give a quick rundown of the Windows Azure related announcements. Think of this as your Cliff Notes version of the conference.

Windows Azure Service Bus for Windows Server – V1 Released

Previously released as a beta/preview back in June, this on-premises flavor of the Windows Azure Service Bus is now fully released and available for download. Admittedly, it’s strictly for brokered messaging for now. But it’s still a substantial step towards providing feature parity between public and private cloud solutions. Now we just need to hope that shops that opt to run this will run it as internal SaaS and not set up multiple silos. Don’t get me wrong. It’s nice to know we have the flexibility to do silos, but I’m hoping we learn from what we’ve seen in the public cloud and don’t fall back to old patterns.

One thing to keep in mind with this… It’s now possible for multiple versions of the Service Bus API to be running within an organization. To date, the public service has only had two major API versions. But going forward, we may need to be able to juggle even more. And while there will be a push to keep the hosted and on-premises versions at similar versions, there’s nothing requiring someone hosting it on-premises to always upgrade to the latest version. So as solution developers/architects, we’ll want to be prepared to be accommodating here.

Windows Azure Mobile Services – Windows Phone 8 Support

With Windows Phone 8 being formally launched the day before the BUILD conference, it only makes sense that we’d see related announcements. And a key one of those was the addition of Windows Phone 8 support to Windows Azure Mobile Services. This announcement makes Windows Phone 8 the third supported platform (after Windows Store and iOS apps) for Mobile Services. It added to an announcement earlier in the month which expanded support for items like sending email and different identity providers. So the Mobile Services team is definitely burning the midnight oil to get new features out to this great platform.

New Windows Azure Storage Scalability Targets

New scale targets have been announced for storage accounts created after June 7th, 2012. This change is enabled by the new “flat network” topology being deployed into the Windows Azure datacenters. In a nutshell, it allows the transactions-per-second scale targets to be increased by 4x and the upper limit of a storage account to be raised to 200 TB (2x). The new topology will continue to be rolled out through the end of the year, but will only affect storage accounts created after June 7th, 2012, as mentioned above. These scale target improvements (which, BTW, are separate from the published Azure Storage SLA) will really help reduce the amount of ‘sharding’ needed by those with higher throughput requirements.
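A back-of-the-envelope sketch shows what the 4x change means for sharding. Only the 4x and 200 TB figures come from the announcement; the absolute per-account transaction target below is my assumption for the sake of the arithmetic:

```python
import math

# Assumed old per-account target; the announcement only states the multipliers.
OLD_TPS_PER_ACCOUNT = 5000
NEW_TPS_PER_ACCOUNT = OLD_TPS_PER_ACCOUNT * 4   # 4x increase
NEW_CAPACITY_TB = 200                           # 2x the previous capacity

def accounts_needed(required_tps: int, tps_per_account: int) -> int:
    """How many storage accounts must a workload shard across?"""
    return math.ceil(required_tps / tps_per_account)

# A workload needing 45,000 transactions/sec under each target:
before = accounts_needed(45000, OLD_TPS_PER_ACCOUNT)  # 9 accounts
after = accounts_needed(45000, NEW_TPS_PER_ACCOUNT)   # 3 accounts
```

Fewer accounts means less partition-key gymnastics and fewer cross-account aggregation headaches for high-throughput workloads.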

New 1.8 SDK – Windows Server 2012, .NET 4.5, and new Storage Client

BUILD also marked the launch of the new 1.8 Windows Azure SDK. This release is IMHO the most significant update to the SDK since the 1.3 version was launched almost 2 years ago. You could write a blog post on any one of the key features, but since they are all so closely related and this is supposed to be a highlights post, I’m going to bundle them up.

The new SDK introduces the new “OS Family 3” to Windows Azure Cloud Services, giving us support for Windows Server 2012. When you combine this with the added support for .NET 4.5 and IIS 8, we can start taking advantage of technology like Web Sockets. Unfortunately, Web Sockets are not enabled by default, so there is some work you’ll need to do to take advantage of them. You may also need to tweak the internal Windows Firewall. A few older Guest OS’s were also deprecated, so you may want to refer to the latest update of the compatibility matrix.
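The extra work typically involves installing the IIS 8 WebSocket feature (for example via a startup task running `dism /online /enable-feature /featurename:IIS-WebSockets` — the feature name is my assumption based on standard IIS naming) and then switching it on in configuration. A sketch of the web.config fragment:

```xml
<!-- web.config sketch: turn on WebSocket support under IIS 8 -->
<configuration>
  <system.webServer>
    <webSocket enabled="true" />
  </system.webServer>
</configuration>
```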

The single biggest, and subsequently most confusing, piece of this release has to do with the new 2.0 Storage Client. This update includes some great features, including support for a preview release of the storage client toolkit for Windows Runtime (Windows Store) apps. However, there are some SIGNIFICANT changes to the client, so I’d recommend you review the list of Breaking Changes and Known Issues before you decide to start converting over. Fortunately, all the new features are in a new set of namespaces (Microsoft.WindowsAzure.StorageClient has become simply Microsoft.WindowsAzure.Storage). So this does allow you to mix and match old functionality with the new. But forewarned is forearmed, as they say. So read up before you just dive into the new client headlong.

For more details on some of the known issues with this SDK and the workarounds, refer to the October 2012 release notes and you can learn about all the changes to the Visual Studio tools by checking out “What’s New in the Windows Azure Tools“.

HDInsight – Hadoop on Windows Azure

Technically, this was released the week before BUILD, but I’m going to touch on it none the less. A preview of HDInsight has been launched that allows you to help test out the new Apache™ Hadoop® on Windows Azure service. This will feature support for common frameworks such as Pig and Hive and it also includes a local developer installation of the HDInsight Server and SDK for writing jobs with .NET and Visual Studio.

It’s exciting to see Microsoft embracing these highly popular open source initiatives. So if you’re doing anything with big data, you may want to run over and check out the blog post for additional details.

Windows Azure – coming to China

Doug Hauger also announced that Microsoft has reached an agreement (a Memorandum of Understanding, aka an agreement to start negotiations) which will license Windows Azure technologies to 21Vianet. This will in turn allow them to offer Windows Azure in China from local datacenters. While not yet a fully “done deal”, it’s a significant first step. So here’s hoping the discussions are concluded quickly and that this is just the first of many such deals we’ll see struck in the coming year. So all you Aussies, hold out hope!

Other news

This was just the beginning. The Windows Azure team ran down a slew of other slightly less high-profile but equally important announcements on the team blog. Items like a preview of the Windows Azure Store, GA (general availability) for the Windows Azure dedicated, distributed in-memory cache feature launched back in June with the 1.7 SDK, and finally the launch of the Visual Studio Team Foundation Service which has been in preview for the last year.

In closing…

All in all, it was a GREAT week in the cloud. Or as James Staten put it on ZDNet, “You’re running out of excuses to not try Microsoft Windows Azure“. And this has just been the highlights. If you’d like to learn more, I highly recommend you run over and check out the session recordings from BUILD 2012 or talk to your local Microsoft representative.

PS – Don’t forget to snag your own copy of the great new Windows Azure poster!

Brent is now a Microsoft technical evangelist for Windows Azure working out of Minneapolis, MN.

Alan Smith reported a Sweden Windows Azure Group Meeting in November & Fast with Windows Azure Competition in an 11/4/2012 post:

SWAG November Meeting

There will be a Sweden Windows Azure Group (SWAG) meeting in Stockholm on Monday 19th November. Chris Klug will be presenting a session on Windows Azure Mobile Services, and I will be presenting a session on Web Site Authentication with Social Identity Providers. Active Solution has been kind enough to host the event, and will be providing food and refreshments.

The registration link is here:

If you would like to join SWAG the link is here:

Fast with Windows Azure Competition

I’ve entered a 3 minute video of rendering a 3D animation using 256 Windows Azure worker roles in the “Fast with Windows Azure” competition. It’s the last week of voting, so it would be great if you can check out the video and vote for it if you like it. I have not driven a car for about 15 years, so if I win you can expect a hilarious summary of the track day in Vegas. My preparation for the day would be to play Project Gotham Racing for a weekend, and watch a lot of Top Gear.


My video is “Rapid Massive On-Demand Scalability Makes Me Fast!”.

The link is here:

David Gristwood reported a Windows Azure Open Platform Services (OSS) Summit - Paris, France - Dec 4, 2012 on 11/2/2012 (missed when posted):


Learn more about how Windows Azure has helped companies realize the true value of the cloud. Windows Azure supports multiple development platforms, languages, and tools providing first-class choice beyond .NET for developers and enterprises. Register for our Open Services Summit to learn how you can take advantage of an open, flexible, rock solid platform.

Windows Azure Open Platform Summit Goals

Delivered by Windows Azure experts, this special one day technical event will focus on the compelling story of OSS and the Windows Azure platform, and will cover key topics such as:

  • Why Azure is a great platform to create open, scalable, robust and resilient solutions across a wide range of technology stacks and programming tools and languages
  • How Windows Azure’s “Infrastructure-as-a-Service” makes it easy to create and manage virtual machines and networks for both Linux- and Windows-based applications
  • Experience the great developer story with Eclipse and a wide host of SDKs covering Node.JS, Python, PHP, and Java
  • Explore the range of data stores available on Windows Azure for both SQL and NoSQL, including databases such as MongoDB
Windows Azure Overview and Roadmap

In this session, we will present the Windows Azure platform overview and end-to-end Windows Azure story as it relates to OSS. We will also highlight OSS & Windows Azure success stories.

The Developer Story – Languages, Tooling, SDKs

We will show how to develop applications using programming languages such as Java, PHP, Python and Node.js. Also, we will dive into language specific SDKs and the developer experience using popular IDEs and tools, including Eclipse and Github.

IaaS – Linux, Virtual Machines, Networking

The Infrastructure-as-a-Service session will cover Virtual Machines, Virtual Networks, and hybrid solutions using multiple operating systems including Linux and Windows Server.

The Data Story – SQL, NoSQL

We will cover both SQL and NoSQL: relational databases like Windows Azure SQL DB and MySQL, and on the NoSQL side, Windows Azure Table Storage and MongoDB. We will dive into hosting MongoDB in your own virtual machine and MongoDB-as-a-service.


Many apps are being built with Node.js today. We’ll show how a Node.js app works in Windows Azure, as well as its applicability to Windows Azure Web Sites and NoSQL databases such as MongoDB.


Date: December 4, 2012

Location: Crowne Plaza Paris-République, 10 Place De La Republique, Paris 75011, France

Registration Link: Click HERE to register

Who Should Attend? If you are a technical decision maker in your organization – i.e. CTO, Architect, or Engineering Lead – or are incorporating open source technologies as part of your organization’s platform solution, you should attend this event.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Datanami (@datanami) posted a Backup to Amazon Glacier with CloudBerry Backup press release on 11/8/2012:

NEWPORT BEACH, CA, Nov. 8 – CloudBerry Lab, a leading cloud storage tools maker, today released CloudBerry Backup version 3.0.2, an application that allows users to back up data online to cloud storage accounts such as Amazon S3, Windows Azure, Google Storage, Rackspace and OpenStack.

In the new release, CloudBerry Backup becomes fully compatible with Amazon Glacier, the recently introduced extremely low-cost storage service. Optimized for data backup and archiving, Glacier supplements Amazon S3, costing as little as $0.01 per gigabyte per month. Amazon claims Glacier is as reliable as its original S3 solution, but with longer data retrieval times.

With this release, CloudBerry Backup users can leverage scheduling capabilities to automate data backup to Glacier, or back up files in real time as soon as they change. A variety of encryption algorithms helps users protect sensitive data, while compression reduces total storage cost. All operations execute on the user's machine before data is sent to Amazon Glacier, so users retain full control over their backup process while keeping data remotely.
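The compress-locally-then-upload flow, and the $0.01/GB/month arithmetic, can be sketched like this. The gzip choice and the cost helper are illustrative, not CloudBerry's actual implementation (which would also encrypt before upload):

```python
import gzip

GLACIER_PRICE_PER_GB_MONTH = 0.01  # headline price quoted above

def monthly_cost(num_bytes: int,
                 price_per_gb: float = GLACIER_PRICE_PER_GB_MONTH) -> float:
    """Estimate monthly storage cost in dollars for a payload of num_bytes."""
    return (num_bytes / 1024 ** 3) * price_per_gb

def prepare_for_upload(data: bytes) -> bytes:
    """Compress on the client before transfer; encryption would follow here
    but is omitted in this sketch."""
    return gzip.compress(data)

raw = b"log line\n" * 100000
packed = prepare_for_upload(raw)
```

Because both steps run on the user's machine, the provider only ever sees the (smaller, and in the real product encrypted) payload.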

About CloudBerry Backup

CloudBerry Backup is powerful online backup and recovery software designed to automate encrypted and compressed data backups to public cloud storage and to make any disaster recovery plan simple, reliable, and affordable.

CloudBerry Backup is designed to work on Windows XP/Vista/7 and Windows Server 2003/2008. A command line interface allows partners and advanced users to integrate backup and restore plans with other routines.

CloudBerry Backup is available in Desktop Edition that costs $29.99, Server Edition that costs $79.99 and Enterprise Edition that costs $299.99 for a single license.

Volume discounts are available.

For more information and to download the evaluation copy, please visit our website at:

About CloudBerry Lab
CloudBerry Lab was established in 2008 by a group of experienced IT professionals with the mission of helping organizations adopt cloud computing technologies by closing the gap between cloud vendor propositions and consumer needs through the development of innovative, low-cost solutions.


Source: CloudBerry Lab

Jeff Barr (@jeffbarr) announced RDS & ElastiCache Updates - New Instance Types and Price Reductions on 11/6/2012:

I've got good news for users of the Amazon Relational Database Service and ElastiCache. We're reducing prices and adding more instance types.

RDS Price Reduction
We've reduced prices on RDS Database Instances by 8% to 14% in the US East (Northern Virginia) and US West (Oregon) Regions. Here are sample prices for standard deployment (single Availability Zone) MySQL Database Instances:


These changes take effect November 1, 2012. Similar reductions have been made for Multi-AZ deployments; see the RDS pricing page for additional information.

ElastiCache Price Reduction
We've made similar reductions in the price of ElastiCache Cache Nodes in US East (Northern Virginia) and US West (Oregon). Here are sample prices for Cache Nodes:


Again, these changes take effect on November 1, 2012. See the ElastiCache pricing page for additional information.

New Instance Types
We are also adding support for additional instance types. Here's what's new:

  • You can now launch RDS Database Instances on Medium instances (3.75 GB of RAM) in all AWS Regions, using any of the supported database engines (MySQL, Oracle Database, and SQL Server). This instance type is larger than the existing Small, and smaller than the existing Large.
  • You can now launch RDS Database Instances running Oracle Database or SQL Server on Extra Large instances with 15 GB of RAM. This instance type is larger than the existing Large and smaller than the existing Double Extra Large.

I hope that you enjoy our new, lower prices and the additional flexibility that you get with the new instance types.

Jinesh Varia (@jinman) described how to Deploy a SharePoint 2010 Server Farm on AWS Cloud in 6 Simple Steps (All Scripts and Templates Included) in an 11/5/2012 post:

One of the key differentiators of the AWS cloud is its flexibility. It allows you not only to choose programming models, languages, and operating systems, but also to deploy enterprise-grade packaged applications that have already been battle-tested by the industry. You can run the business applications you already know how to use, and developers and other IT professionals can bring their existing skills and knowledge to the cloud.

Microsoft SharePoint is one such widely-adopted enterprise packaged application, common in many organizations as a platform for public-facing websites and intranet portals for team collaboration, content management, workflow, and access to corporate applications. Companies such as Lionsgate are examples of deploying SharePoint-powered public websites in the cloud.

Deploying an enterprise-class, highly-available SharePoint server solution in a best-practices way that involves multiple components and tiers can be time consuming and error-prone. The AWS cloud not only provides the on-demand resources (compute, database, network, etc.) you need to run this solution, but also provides a way to script the provisioning and configuration steps so you can deploy it easily. By using AWS CloudFormation, you can specify what AWS resources, configuration values, and interconnections you need in a template (a JSON file) and then let AWS CloudFormation do the rest with a few simple clicks in the AWS Management Console or via the command line tools. You can use these templates repeatedly to create identical copies of the same stack (or to use as a foundation to start a new stack).
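A CloudFormation template of the kind the article describes might look like this minimal sketch (the resource names, AMI ID, and instance type below are placeholders, not values from the article's actual templates):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal sketch: one EC2 instance behind a security group",
  "Parameters": {
    "KeyName": { "Type": "String", "Description": "Name of an existing EC2 key pair" }
  },
  "Resources": {
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "GroupDescription": "Allow inbound HTTP",
        "SecurityGroupIngress": [
          { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "" }
        ]
      }
    },
    "WebServer": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": "m1.small",
        "ImageId": "ami-00000000",
        "KeyName": { "Ref": "KeyName" },
        "SecurityGroups": [ { "Ref": "WebSecurityGroup" } ]
      }
    }
  }
}
```

The SharePoint farm templates follow the same Parameters/Resources shape, just with many more interconnected resources (VPC, subnets, load balancer, domain controllers, and so on), and can be launched from the console, PowerShell, or the command line tools.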

Today, we have published an article – Deploy a Microsoft SharePoint Server 2010 Farm in the AWS Cloud in 6 Simple Steps – which includes easy-to-launch AWS CloudFormation sample templates and steps to create custom AMIs so you can launch a fully functional multi-tiered SharePoint 2010 server farm on AWS in 6 simple steps.

In 6 simple steps, you will be able to launch a sample deployment as shown in the figure below that includes web, app, database and Active Directory tiers in a VPC spanning 2 Availability Zones fronting a load balancer with all the necessary Security Groups settings.


This article is a reference implementation of the public-facing website scenario discussed in our SharePoint 2010 Server on AWS: Reference Architecture whitepaper (released in April 2012). Whether you are an existing SharePoint developer trying to learn about AWS and cloud computing or an existing customer trying to deploy SharePoint workloads in the cloud, this article (and its resources) should serve as a great starting point for your proof of concept.

We understand that no two architectures are alike, and that based on your business and IT requirements you might want to customize the provisioning and configuration steps to suit your needs. Hence, we also provide these resources (sample templates) in downloadable form, and have published an advanced reference guide so you can customize the templates to your needs and deploy them in the cloud repeatedly and reliably using CloudFormation and other tools such as Windows PowerShell or the Command Line Tools.

Hear more about this directly from Ulf Schoo, AWS Solution Architect, who worked on this project:

This is the first time we have published a reference implementation of a multi-tier solution that involves multiple CloudFormation templates, instructions on how to customize an AMI, and an enterprise packaged application (Microsoft SharePoint 2010). Hence, we are looking for your feedback and suggestions. Give it a try and let us know if you would like us to build similar content for other solutions.

<Return to section navigation list>