Saturday, November 17, 2012

Windows Azure and Cloud Computing Posts for 11/14/2012+

A compendium of Windows Azure, Service Bus, Active Directory, Access Control, Connect, SQL Database, and other cloud-computing articles.

Editor’s Note: I finally removed the obsolete Windows Azure feature diagram because the product now has too many features to display in a reasonably sized illustration. In addition, the Windows Azure team changes feature names too often.

‡    Updated 11/17/2012 with new articles marked ‡.
••   Updated 11/16/2012 with new articles marked ••.
•    Updated 11/15/2012 with new articles marked •.

Tip: Copy bullet(s) or dagger, press Ctrl+f, paste it/them to the Find textbox and click Next to locate updated articles:


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, Hadoop and Media Services

‡ Gaurav Mantri (@gmantri) described Storage Client Library 2.0 – Migrating Table Storage Code in an 11/17/2012 post:

Recently, with the release of SDK 1.8 for Windows Azure, the Windows Azure Storage Team announced the availability of the next version of the storage client library (version 2.0). The new library is significantly different from the previous version and, from what I know, it has been written from the ground up to achieve better performance and be more user friendly. Because it is significantly different, upgrading from the previous version (1.7) to this version is not trivial.

In this blog post, I will provide some code samples that demonstrate how you can do some common tasks when working with Azure Table Storage. I wrote two simple console applications, one using storage client library version 1.7 and the other using version 2.0, and in those two applications I demonstrated some simple functionality.

Read These First

Since the version 2.0 library is significantly different from the previous ones, before you decide to upgrade your code to this version I strongly urge you to read the following blog posts by the storage team, as there are many breaking changes.

Introducing Windows Azure Storage Client Library 2.0 for .NET and Windows Runtime

Windows Azure Storage Client Library 2.0 Breaking Changes & Migration Guide

Windows Azure Storage Client Library 2.0 Tables Deep Dive

Getting Started

Before jumping into the code, there are a few things I would like to mention:

Storage Client Libraries

To get the reference for storage client library 1.7, browse your local computer to the Azure SDK installation directory (C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10\ref, assuming you have SDK 1.8 installed) and select Microsoft.WindowsAzure.StorageClient.dll from there. For an SDK 1.7 project, you would also need to add a reference to System.Data.Services.Client.dll.

To get the reference for storage client library 2.0 (or the latest version, for that matter), I would actually recommend using NuGet. That way you’ll always get the latest version.

Sample Entity

For the purpose of demonstration, I created a simple entity called “CustomerEntity”. With version 1.7 it derives from the TableServiceEntity class, and with version 2.0 it derives from the TableEntity class. Please note that with version 2.0, the storage team has also made available a DynamicTableEntity class, which comes in handy to fully exploit the NoSQL, schema-less support in Azure Table Storage. In this blog post we will focus on TableEntity, though. Also, for the sake of simplicity, I have kept the PartitionKey as “Customer” and assigned a GUID for the RowKey.
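As an aside, here is a minimal, hedged sketch of what schema-less access with DynamicTableEntity can look like in version 2.0. The "Customers" table name, the property names, and the values are illustrative assumptions, not part of the original samples; the sketch assumes a CloudTableClient built the same way as in the samples below.

```csharp
// Hypothetical sketch: inserting a schema-less entity with version 2.0's
// DynamicTableEntity. Property names and the "Customers" table are illustrative.
DynamicTableEntity dynamicCustomer = new DynamicTableEntity("Customer", Guid.NewGuid().ToString());
dynamicCustomer.Properties["FirstName"] = EntityProperty.GeneratePropertyForString("John");
dynamicCustomer.Properties["LoyaltyPoints"] = EntityProperty.GeneratePropertyForInt(42);

// Each entity in the table may carry a different set of properties.
CloudTable table = cloudTableClient.GetTableReference("Customers");
table.Execute(TableOperation.Insert(dynamicCustomer));
```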

For version 1.7, the customer entity looks something like this:

    public class CustomerEntity : TableServiceEntity
    {
        public CustomerEntity()
        {
            PartitionKey = "Customer";
            RowKey = Guid.NewGuid().ToString();
        }

        public string FirstName { get; set; }

        public string LastName { get; set; }

        public DateTime? LastOrderDate { get; set; }
    }

For version 2.0, the customer entity looks something like this:


    public class CustomerEntity : TableEntity
    {
        public CustomerEntity()
        {
            PartitionKey = "Customer";
            RowKey = Guid.NewGuid().ToString();
        }

        public string FirstName { get; set; }

        public string LastName { get; set; }

        public DateTime? LastOrderDate { get; set; }
    }

Now let’s see how you can perform some operations. For each operation, I first show how you would do it with version 1.7 and then how you would do the same with version 2.0.

Create Table

If you’re using the following code with version 1.7 to create a table:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            cloudTableClient.CreateTableIfNotExist(tableName);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            table.CreateIfNotExists();

Delete Table

If you’re using the following code with version 1.7 to delete a table:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            cloudTableClient.DeleteTableIfExist(tableName);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            table.DeleteIfExists();

Insert Entity

If you’re using the following code with version 1.7 to insert an entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };

            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AddObject(tableName, customer);
            serviceContext.SaveChangesWithRetries();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };

            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation insertOperation = TableOperation.Insert(customer);
            table.Execute(insertOperation);

Delete Entity

If you’re using the following code with version 1.7 to delete an entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CustomerEntity customer = GetCustomerEntityFromSomePlace();

            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AttachTo(tableName, customer, "*");
            serviceContext.DeleteObject(customer);
            serviceContext.SaveChangesWithRetries();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CustomerEntity customer = GetCustomerEntityFromSomePlace();

            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation deleteOperation = TableOperation.Delete(customer);
            table.Execute(deleteOperation);

Replace Entity

If you’re using the following code with version 1.7 to replace an entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CustomerEntity customer = GetCustomerEntityFromSomePlace();
            customer.LastOrderDate = null;
            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AttachTo(tableName, customer, "*");
            serviceContext.UpdateObject(customer);
            serviceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CustomerEntity customer = GetCustomerEntityFromSomePlace();
            customer.LastOrderDate = null;
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation replaceOperation = TableOperation.Replace(customer);
            table.Execute(replaceOperation);

Merge Entity

If you’re using the following code with version 1.7 to merge an entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CustomerEntity customer = GetCustomerEntityFromSomePlace();
            customer.LastOrderDate = null;
            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AttachTo(tableName, customer, "*");
            serviceContext.UpdateObject(customer);
            serviceContext.SaveChangesWithRetries();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CustomerEntity customer = GetCustomerEntityFromSomePlace();
            customer.LastOrderDate = null;
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation mergeOperation = TableOperation.Merge(customer);
            table.Execute(mergeOperation);

Insert or Replace Entity

If you’re using the following code with version 1.7 to insert or replace an entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };
            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AttachTo(tableName, customer, null);
            serviceContext.UpdateObject(customer);
            serviceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation insertOrReplaceOperation = TableOperation.InsertOrReplace(customer);
            table.Execute(insertOrReplaceOperation);

Insert or Merge Entity

If you’re using the following code with version 1.7 to insert or merge an entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };
            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AttachTo(tableName, customer, null);
            serviceContext.UpdateObject(customer);
            serviceContext.SaveChangesWithRetries();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation insertOrMergeOperation = TableOperation.InsertOrMerge(customer);
            table.Execute(insertOrMergeOperation);

Entity Batch Operation

As you know, Azure Table Storage supports entity batch transactions to manage multiple entities in a single transaction. Assume you’re trying to insert some entities into an Azure table using an entity batch transaction. If you’re using the following code to perform a bulk insert with version 1.7:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };
            var serviceContext = cloudTableClient.GetDataServiceContext();
            serviceContext.AddObject(tableName, customer);
            customer = new CustomerEntity()
            {
                FirstName = "Jane",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-5)
            };
            serviceContext.AddObject(tableName, customer);
            customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Doe",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-7)
            };
            serviceContext.AddObject(tableName, customer);
            customer = new CustomerEntity()
            {
                FirstName = "Jane",
                LastName = "Doe",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-3)
            };
            serviceContext.AddObject(tableName, customer);
            serviceContext.SaveChangesWithRetries(SaveChangesOptions.Batch);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableBatchOperation batchOperation = new TableBatchOperation();
            var customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-10)
            };
            batchOperation.Insert(customer);
            customer = new CustomerEntity()
            {
                FirstName = "Jane",
                LastName = "Smith",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-5)
            };
            batchOperation.Insert(customer);
            customer = new CustomerEntity()
            {
                FirstName = "John",
                LastName = "Doe",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-7)
            };
            batchOperation.Insert(customer);
            customer = new CustomerEntity()
            {
                FirstName = "Jane",
                LastName = "Doe",
                LastOrderDate = DateTime.UtcNow.Date.AddDays(-3)
            };
            batchOperation.Insert(customer);
            table.ExecuteBatch(batchOperation);

Fetching Single Entity

If you’re using the following code with version 1.7 to fetch a single entity by PartitionKey and RowKey:

            string partitionKey = "<some PartitionKey value>";
            string rowKey = "<some RowKey value>";
            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var serviceContext = cloudTableClient.GetDataServiceContext();
            IQueryable<CustomerEntity> customerQuery = (from entity in serviceContext.CreateQuery<CustomerEntity>(tableName) where entity.PartitionKey == partitionKey && entity.RowKey == rowKey select entity);
            var returnedCustomer = customerQuery.FirstOrDefault();

You would use something like this with version 2.0 to achieve the same:

            string partitionKey = "<some PartitionKey value>";
            string rowKey = "<some RowKey value>";
            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            TableOperation retrieveOperation = TableOperation.Retrieve<CustomerEntity>(partitionKey, rowKey);
            TableResult retrievedResult = table.Execute(retrieveOperation);
            CustomerEntity fetchedCustomer = retrievedResult.Result as CustomerEntity;
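One detail worth noting with the retrieve pattern above: TableResult.Result is null when no entity matches the keys, so a guard is a good idea. A hedged sketch (the status-code handling is my own addition, not from the original post):

```csharp
// Sketch: guarding the result of TableOperation.Retrieve. Assumes
// retrievedResult was obtained as in the version 2.0 sample above.
if (retrievedResult.Result != null)
{
    CustomerEntity fetchedCustomer = (CustomerEntity)retrievedResult.Result;
    Console.WriteLine("Found customer: {0} {1}", fetchedCustomer.FirstName, fetchedCustomer.LastName);
}
else
{
    // HttpStatusCode is 404 when no entity matches the PartitionKey/RowKey pair.
    Console.WriteLine("No entity found (HTTP status {0}).", retrievedResult.HttpStatusCode);
}
```
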
Querying Entities

If you’re using the following code with version 1.7 to fetch entities by filtering, say, on the last name attribute of our Customer entity:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            var serviceContext = cloudTableClient.GetDataServiceContext();
            //Specify the filter condition on LastName == 'Smith'
            IQueryable<CustomerEntity> customerQuery = (from entity in serviceContext.CreateQuery<CustomerEntity>(tableName) where entity.LastName == "Smith" select entity);
            var result = customerQuery.ToList();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudTableClient cloudTableClient = storageAccount.CreateCloudTableClient();
            CloudTable table = cloudTableClient.GetTableReference(tableName);
            //Specify the filter condition on LastName == 'Smith'
            TableQuery<CustomerEntity> query = (new TableQuery<CustomerEntity>()).Where(TableQuery.GenerateFilterCondition("LastName", QueryComparisons.Equal, "Smith"));
            var result = table.ExecuteQuery<CustomerEntity>(query);
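If you need more than one condition in version 2.0, TableQuery.CombineFilters composes filter strings. A small hedged sketch (the PartitionKey value matches the sample entity; the rest mirrors the query above and assumes the same CloudTable reference):

```csharp
// Sketch: combining two filter conditions with TableQuery.CombineFilters.
// Assumes a CloudTable reference built as in the version 2.0 sample above.
string combinedFilter = TableQuery.CombineFilters(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Customer"),
    TableOperators.And,
    TableQuery.GenerateFilterCondition("LastName", QueryComparisons.Equal, "Smith"));
TableQuery<CustomerEntity> combinedQuery = new TableQuery<CustomerEntity>().Where(combinedFilter);
var smiths = table.ExecuteQuery(combinedQuery);
```
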
Closing Thoughts

As you saw, there are significant differences between versions 1.7 and 2.0, and one needs to put some extra thought in before undertaking the migration. However, in my opinion the version 2.0 library is much more straightforward to understand and more intuitive. For example, to insert an entity we simply use TableOperation.Insert(entity).

Though I did not cover it in this blog post, you can still use the legacy DataServiceClient approach with version 2.0. I haven’t tried it myself, but it seems you just have to change the namespaces in the using section of your source files and everything else should work as is. That would certainly ease the pain in the short run; however, my personal recommendation is to “take the pill” :) and make use of TableEntity if you can afford it. Furthermore, the DataServiceClient approach is not available in the library for Windows 8.

Finally, don’t give up on Storage Client Library 1.7 just yet. There are still some components which depend on version 1.7. A good example is Windows Azure Diagnostics, which still depends on the older version at the time of writing this blog. The good thing is that versions 1.7 and 2.0 can co-exist in a project.


The examples I presented in this post are quite basic, but hopefully they give you an idea about how to use the latest version of the storage client library. In general, I am quite pleased with the changes the team has made, though I wish there were an easier migration path. Please feel free to share your experience with the migration exercise in the comments; this will help me and the readers of this blog immensely. Finally, if you find any issues with this post, please let me know and I will try to fix them ASAP.

‡ Bruno Terkaly (@brunoterkaly) posted Hadoop on Azure : Introduction on 11/16/2012:


I am in complete awe at how this technology is resonating with today’s developers. If I invite developers to an evening event, Big Data is always a sellout.

This particular post is about getting everyone up to speed about what Hadoop is at a high level.

Big data is a technology that manages voluminous amounts of unstructured and semi-structured data.

Due to its size and semi-structured nature, it is a poor fit for analysis in relational databases.

Big data is generally in the petabytes and exabytes of data.

  1. However, it is not just about the total size of data (volume)
  2. It is also about the velocity (how rapidly is the data arriving)
  3. What is the structure? Does it have variations?

Sources of Big Data


Two problems to solve


Two seminal papers

The Google File System: a scalable distributed file system for large distributed data-intensive applications. It provides fault tolerance while running on inexpensive commodity hardware, and it delivers high aggregate performance to a large number of clients.
MapReduce: Simplified Data Processing on Large Clusters: MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.

Hadoop : What is it?

  1. Hadoop is an open-source software framework that supports data-intensive distributed applications. Hadoop is written in Java.
  2. I met its creator, Doug Cutting, who was working at Yahoo at the time. Hadoop is named after his son's toy elephant. I was hosting a booth at the time, and I remember Doug was curious about finding some cool stuff to bring home from the booth to give to his son. Another great idea, Doug!
  3. One of the goals of Hadoop is to run applications on large clusters of commodity hardware. The cluster is composed of a single master and multiple worker nodes.
  4. Hadoop leverages the programming model of map/reduce. It is optimized for processing large data sets.
  5. MapReduce is typically used to do distributed computing on clusters of computers. A cluster has many “nodes,” where each node is a computer in the cluster.
  6. The goal of MapReduce is to break huge data sets into smaller pieces, distribute those pieces to various slave or worker nodes in the cluster, and process the data in parallel. Hadoop leverages a distributed file system to store the data on the various nodes.

It is about two functions

Hadoop comes down to two functions. As long as you can write the map() and reduce() functions, your data type is supported, whether we are talking about (1) text files, (2) XML files, (3) JSON files, or (4) even graphics, sound, or video files.


The “map” in MapReduce

  1. There is a master node and many slave nodes.
  2. The master node takes the input, divides it into smaller sub-problems, and distributes the input to worker or slave nodes. A worker node may do this again in turn, leading to a multi-level tree structure.
  3. The worker/slave nodes process their smaller problems and pass the answers back to the master node.
  4. Each mapping operation is independent of the others, so all maps can be performed in parallel.

The “reduce” in MapReduce

  1. The master node then collects the answers from the worker or slave nodes. It then aggregates the answers and creates the needed output, which is the answer to the problem it was originally trying to solve.
  2. Reducers can also perform the reduction phase in parallel. That is how the system can process petabytes in a matter of hours.

There are 3 key methods


  1. The Hello World sample for Hadoop is a word count example.
  2. Let's assume our quote is this:
    • It is time for all good men to come to the aid of their country.
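To make the two phases concrete, here is a hedged, single-machine C# sketch of the word-count idea applied to that quote. It only simulates map, shuffle/group, and reduce in memory; real Hadoop distributes each of those steps across the nodes of a cluster.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class WordCountSketch
{
    // map(): emit a (word, 1) pair for every word in the input line.
    static IEnumerable<KeyValuePair<string, int>> Map(string line)
    {
        return line.ToLowerInvariant()
                   .Split(new[] { ' ', '.', ',' }, StringSplitOptions.RemoveEmptyEntries)
                   .Select(word => new KeyValuePair<string, int>(word, 1));
    }

    // reduce(): sum all the counts emitted for a single word.
    static int Reduce(string word, IEnumerable<int> counts)
    {
        return counts.Sum();
    }

    static void Main()
    {
        string quote = "It is time for all good men to come to the aid of their country.";

        // Shuffle/sort: group the intermediate pairs by key, then reduce each group.
        var wordCounts = Map(quote)
            .GroupBy(pair => pair.Key, pair => pair.Value)
            .Select(group => new { Word = group.Key, Count = Reduce(group.Key, group) });

        foreach (var wc in wordCounts)
            Console.WriteLine("{0}: {1}", wc.Word, wc.Count);
        // "to" appears twice in the quote; every other word appears once.
    }
}
```
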


High-level Architecture


  1. There are two main layers to both the master node and the slave nodes – the MapReduce layer and the Distributed File System Layer. The master node is responsible for mapping the data to slave or worker nodes.

Hadoop is a platform


There are also related modules that are commonly associated with Hadoop.


I signed up

I recently signed up for the Windows Azure HDInsight Service here.

Logging in


After logging in, you will be presented with this screen:


Next post : Calculate PI with Hadoop

  1. We will create a job named “Pi Example.” This very simple sample will calculate PI using a cluster of computers.
  2. This is not necessarily the best example of big data, it is more of a compute problem.
  3. The final command line will look like this:
    1. Hadoop jar hadoop-examples- pi 16 10000000
  4. More details on this sample coming soon.

• Andrew Brust (@andrewbrust) reported Microsoft's PolyBase mashes up SQL Server and Hadoop in an 11/15/2012 post to ZD Net’s Big on Data blog:

Summary: Microsoft is not the first company to mash up relational MPP technology with Hadoop, but it may be the most impactful player to do so.


I’ve said it before: Massively Parallel Processing (MPP) data warehouse appliances are Big Data databases. And, as if to prove that point, Microsoft last week announced “PolyBase,” a new technology that will integrate its MPP product, SQL Server Parallel Data Warehouse (PDW), with Hadoop.

The announcement was made at the PASS Summit, which is the de facto Microsoft-endorsed SQL Server conference, and one where database administrators (DBAs) dominate the audience. In presenting PolyBase to that audience, Microsoft made it clear that it sees technology and skill sets for Big Data and conventional databases as correlated, rather than mutually exclusive.

PolyBase will be released in the first half of 2013, as part of the next version of SQL Server PDW, and will integrate data in Hadoop’s Distributed File System (HDFS) with the relational database engine. PolyBase will allow data stored in Hadoop to be queried with SQL (Structured Query Language) and will even allow that data to be joined to native relational tables, so that Hadoop and SQL PDW can be queried in tandem, with result sets that integrate data from each source.

You got SQL in my MapReduce…but you’re not the first
The PDW/PolyBase combo is not the first product to integrate SQL with Hadoop and/or MapReduce; far from it. Hadapt, Rainstor, Teradata Aster, ParAccel and, most recently, Cloudera have each released products that offer some flavor of this integration. And the Hive component found in virtually all Hadoop distributions provides a SQL-like interface to Hadoop too. But Microsoft’s approach is especially interesting, for two reasons:

  1. While “phase 1” of the PolyBase will implement SQL Query over HDFS, phase 2 will introduce a cost-based optimizer that will also, selectively, utilize MapReduce on the Hadoop cluster instead of SQL, when and where appropriate.
  2. Although PolyBase will initially appear only in SQL Server PDW, it seems likely that the technology will migrate down to the conventional SQL Server Enterprise product as well.

How it compares
PolyBase has a lot in common with its competitors, but in a mix and match fashion. Like ParAccel’s on-demand integration (ODI), PolyBase brings the schema/metadata of an HDFS file into the relational database’s own metadata store and allows the Hadoop data to be treated as if it were a local table. PolyBase does this in a parallelized fashion, such that PDW’s own distributed Data Movement Service (DMS) uses an “HDFS Bridge” to query multiple data nodes in the Hadoop cluster concurrently.

With SQL PDW and PolyBase, the schema is determined and the guest table created using the same SQL command used to create a physical table, but with the addition of “EXTERNAL” before the “TABLE” keyword. From there, whenever the HDFS data is queried, the local SQL query engine fetches the data directly from the data nodes in the Hadoop cluster, which is rather similar to the way Cloudera’s Impala works.

And while PolyBase can talk to any Hadoop cluster, like Hadapt and Rainstor it can also work such that the relational and Hadoop nodes are co-located on the same equipment (though that would ostensibly require use of Microsoft’s own HDInsight Windows-based Hadoop distribution).

MapReduce vs. direct HDFS access
The “phase 1” release of PolyBase and the three competing products just described each bypass Hadoop’s MapReduce distributed computing engine and fetch data directly from HDFS. Hadoop distribution components like Hive and Sqoop instead use MapReduce code to extract data from the Hadoop cluster. In general, the translation to, and use of, MapReduce is easier to implement, and the direct HDFS access is considered more efficient and better performing.

But the key word there is “general,” because for certain queries it might be much more efficient to offload some work to Hadoop’s MapReduce engine, bring the job output back to the relational product and perform join and further query tasks using the latter. Teradata Aster’s SQL-H works this way and still allows the kind of heterogeneous joins that PolyBase makes possible (Teradata Aster also allows the relational data to be queried with MapReduce-style code). This approach is especially valuable if the input data set is relatively large, as that would add a huge data movement burden to the query if all work has to happen on the MPP cluster.

That’s exactly why, in its phase 2 release, PolyBase will be able to employ both techniques, and will employ an enhanced PDW cost-based optimizer to determine which technique will perform best. To my knowledge, almost none of the relational/Hadoop, SQL/MapReduce hybrids that I have come across combine the HDFS-direct and Hadoop MapReduce techniques; instead they stick to one or the other. The one exception here is Hadapt, but its relational nodes are based on PostgreSQL, and some enterprises might prefer the SQL Server technology used in numerous corporate applications.

Beyond MPP
If Microsoft can pull PolyBase off – such that it runs reliably for production Big Data workloads – it will be impressive. What would also be impressive is the integration of this technology into non-PDW editions of SQL Server, the relational database that is #1 in unit sales and #2 in revenue, industry-wide, according to Microsoft. Granted, all of PDW’s competitors are also high-end, appliance-based MPP data warehouse products; anything less and they wouldn’t be legitimate Big Data technologies.

But the mash-up of transactional databases and Hadoop is a valid one in certain reporting and other scenarios, and the implementation of PolyBase into non-PDW editions of SQL Server would accommodate it. In his well-attended PASS Summit session on PolyBase, Microsoft Technical Fellow, Dr. Dave DeWitt, who heads the PolyBase team, said this may indeed happen, though he stopped short of promising it.

Promised or not, one thing is certain: when an enterprise- and consumer-oriented company like Microsoft embraces Hadoop, you know Big Data has arrived.

Disclosure: the author of this post was a speaker at this year’s PASS Summit and runs a user group in New York City that is a local chapter of PASS.

Hopefully, PolyBase will come to Windows Azure SQL Database, as well as SQL Server Data Center Edition.

Avkash Chauhan (@avkashchauhan) asked and answered How Hadoop is shaping up at Disney World? in an 11/12/2012 post:

Hadoop World 2011: Advancing Disney’s Data Infrastructure with Hadoop – Matt Estes, Disney


How Disney built a big data platform on a startup budget


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

‡ Glenn Gailey (@ggailey777) described Live Connect Single Sign-On from Windows Phone 8 for Mobile Services in an 11/16/2012 post:

I’m super excited that today Microsoft has released an updated version of the Live SDK for Windows and Windows Phone. This new version supports Windows Phone 8 and includes the new async behaviors, like await, which make async programming on the phone SO much easier.

Now that the Live SDK officially supports the Windows Phone 8 programming model, we were able to create a Windows Phone version of the Windows Azure Mobile Services tutorial: Authenticate with Live Connect single sign-on. This tutorial uses Mobile Services with Live Connect to log in users with single sign-on. It also accesses the Live service to get user information.

Since it will take a week or so to get this new tutorial published, here it is in the meantime. Enjoy!

Authenticate your Windows Phone 8 app with Live Connect single sign-on

This topic shows you how to use Live Connect single sign-on to authenticate users in Windows Azure Mobile Services from a Windows Phone 8 app. In this tutorial, you add authentication to the quickstart project using Live Connect. When successfully authenticated by Live Connect, a logged-in user is welcomed by name and the user ID value is displayed.

Note This tutorial demonstrates the benefits of using the single sign-on experience provided by Live Connect for Windows Phone apps. This enables you to more easily authenticate an already logged-on user with your mobile service. For a more generalized authentication experience that supports multiple authentication providers, see the topic Get started with authentication.

This tutorial walks you through these basic steps to enable Live Connect authentication:

  1. Register your app for authentication and configure Mobile Services
  2. Restrict table permissions to authenticated users
  3. Add authentication to the app

This tutorial is based on the Mobile Services quickstart, so you must first complete the tutorial Get started with Mobile Services.

Register your app with Live Connect

To be able to authenticate users, you must register your app at the Live Connect Developer Center. You must then register the client secret to integrate Live Connect with Mobile Services.

  1. Log on to the Windows Azure Management Portal, click Mobile Services, and then click your mobile service.


  2. Click the Dashboard tab and make a note of the Site URL value.


    You will use this value to define the redirect domain.

  3. Navigate to the My Applications page in the Live Connect Developer Center, and log on with your Microsoft account, if required.

  4. Click Create application, then type an Application name and click I accept.


    This registers the application with Live Connect.

  5. Click Application settings, then API Settings, and make a note of the Client ID and Client secret values.


    Security Note: The client secret is an important security credential. Do not share the client secret with anyone or distribute it with your app.

  6. In Redirect domain, enter the URL of your mobile service from Step 2, click Yes under Mobile client app, and then click Save.

  7. Back in the Management Portal, click the Identity tab, enter the Client secret obtained from Live Connect, and then click Save.


Both your mobile service and your app are now configured to work with Live Connect.

Restrict permissions to authenticated users
  1. In the Management Portal, click the Data tab, and then click the TodoItem table.


  2. Click the Permissions tab, set all permissions to Only authenticated users, and then click Save. This will ensure that all operations against the TodoItem table require an authenticated user. This also simplifies the scripts in the next tutorial because they will not have to allow for the possibility of anonymous users.


  3. In Visual Studio 2012 Express for Windows Phone, open the project that you created when you completed the tutorial Get started with Mobile Services.

  4. Press the F5 key to run this quickstart-based app; verify that an exception with a status code of 401 (Unauthorized) is raised.

    This happens because the app is accessing Mobile Services as an unauthenticated user, but the TodoItem table now requires authentication.

Next, you will update the app to authenticate users with Live Connect before requesting resources from the mobile service.

Add authentication to the app
  1. Download and install the Live SDK for Windows and Windows Phone.

  2. In the Project menu in Visual Studio, click Add Reference, then expand Assemblies, click Extensions, check Microsoft.Live, and then click OK.


    This adds a reference to the Live SDK to the project.

  3. Open the project file MainPage.xaml.cs and add the following using statement:

    using Microsoft.Live;
  4. Add the following code snippet to the MainPage class:

    private LiveConnectSession session;

    private async System.Threading.Tasks.Task Authenticate()
    {
        LiveAuthClient liveIdClient = new LiveAuthClient("<< INSERT CLIENT ID HERE >>");

        while (session == null)
        {
            LiveLoginResult result = await liveIdClient.LoginAsync(new[] { "wl.basic" });
            if (result.Status == LiveConnectSessionStatus.Connected)
            {
                session = result.Session;
                LiveConnectClient client = new LiveConnectClient(result.Session);
                LiveOperationResult meResult = await client.GetAsync("me");
                MobileServiceUser loginResult = await App.MobileService.LoginAsync(result.Session.AuthenticationToken);

                string title = string.Format("Welcome {0}!", meResult.Result["first_name"]);
                var message = string.Format("You are now logged in - {0}", loginResult.UserId);
                MessageBox.Show(message, title, MessageBoxButton.OK);
            }
            else
            {
                session = null;
                MessageBox.Show("You must log in.", "Login Required", MessageBoxButton.OK);
            }
        }
    }

    This creates a member variable for storing the current Live Connect session and a method to handle the authentication process.

    Note: This code forces a logout, when possible, to make sure that the user is prompted for credentials each time the application runs. This makes it easier to test the application with different Microsoft Accounts to ensure that the authentication is working correctly. This mechanism will only work if the logged in user does not have a connected Microsoft account.

  5. Update the string << INSERT CLIENT ID HERE >> from the previous step with the client ID value that was generated when you registered your app with Live Connect.

  6. Delete or comment-out the existing OnNavigatedTo method override and replace it with the following method that handles the Loaded event for the page.

    async void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        await Authenticate();
        RefreshTodoItems();
    }

    This method calls the new Authenticate method.

  7. Replace the MainPage constructor with the following code:

    // Constructor
    public MainPage()
    {
        InitializeComponent();
        this.Loaded += MainPage_Loaded;
    }

    This constructor also registers the handler for the Loaded event.

  8. Press the F5 key to run the app and sign into Live Connect with your Microsoft Account.

    After you are successfully logged-in, the app runs without errors, and you are able to query Mobile Services and make updates to data.

Next steps

In the next tutorial, Authorize users with scripts, you will take the user ID value provided by Mobile Services based on an authenticated user and use it to filter the data returned by Mobile Services. For information about how to use other identity providers for authentication, see Get started with authentication.

• Bruno Terkaly (@brunoterkaly) and Ricardo Villalobos (@ricvilla) wrote Windows Azure Mobile Services: A Robust Back End for Your Device Applications and MSDN Magazine published it in its November 2012 issue. From the introduction:

During the last few years, the developer’s landscape has been characterized by three main features: multiple devices and OSes to support; applications that rely on asynchronous Web services, typically hosted in the cloud; and variable traffic that complicates resource planning and provisioning. The new Microsoft Windows Azure Mobile Services (WAMS) simplifies the implementation of software architectures that address this type of environment, as shown in Figure 1.

Typical Architecture to Support Multiple Devices with a Single Web Service
Figure 1 Typical Architecture to Support Multiple Devices with a Single Web Service

Even though it’s possible to manually create and deploy the required components for this approach to work, what mobile developers really want are endpoints that provide the needed functionality without having to worry about the underlying infrastructure. Having to spend cycles on maintaining structured or semi-structured storage, user authentication or push notifications is a distraction from developing mobile applications.

Keep in mind that we’re using the term “mobile application” very loosely here. This client-side (mobile application) technology can target almost anything: a Windows 8 phone or tablet; an iOS iPhone or iPad; or an Android phone or tablet. This is possible because WAMS is based on open standards and regular HTTP requests.
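Because the endpoints are plain HTTP plus JSON, any client that can issue an HTTP request can participate. The sketch below shows the rough shape of a table insert request; the service name and application key are placeholders, and the X-ZUMO-APPLICATION header is how Mobile Services identified the calling app at the time:

```javascript
// Shape of the raw HTTP request a client issues to insert a row into a
// Mobile Services table. Nothing here is platform-specific: any device
// that speaks HTTP and JSON can make this call.
var insertRequest = {
  method: 'POST',
  url: 'https://myservice.azure-mobile.net/tables/TodoItem', // placeholder service name
  headers: {
    'Content-Type': 'application/json',
    'X-ZUMO-APPLICATION': '<your application key>' // identifies the calling app
  },
  body: JSON.stringify({ text: 'Complete the tutorial', complete: false })
};

console.log(insertRequest.url);
```

The same request shape works from a Windows Store app, an iOS app, or a plain script, which is the point of the "open standards" claim above.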

Creating Your First Application Based on WAMS

WAMS is currently in preview mode and getting started is just a few clicks away. Simply sign up for a free trial, log in to the Windows Azure Portal, and then click CREATE A NEW APP.

As you’ll see, a window will pop up that allows you to quickly accomplish several things. Here you’ll define a URL that will be used by your mobile application to interact with the available services and you’ll create a new relational database (though you could choose to use an existing Windows Azure SQL Database). At the same time, a series of REST-based endpoints and services will be created and exposed for mobile clients.

Next, you’ll provide a database name, server, login name, password and region, as shown in Figure 2.

Specifying Database Settings
Figure 2 Specifying Database Settings

In just a few moments, you’ll have created a new mobile service in one of the reliable, secure, and globally distributed Microsoft datacenters, as shown in Figure 3.

The Mobile Service Account
Figure 3 The Mobile Service Account

Now click on the service name (“brunoterkaly” in Figure 3) to access the underlying management dashboard. …

Read more.

Full disclosure: I’m a paid contributor to 1105 Media’s Visual Studio Magazine. 1105 Media publishes MSDN Magazine.

• Saravanan G compared SQL Azure Reporting Vs. IaaS VM Reporting in an 11/15/2012 post to the Aditi Technologies blog:

This post compares SQL Azure reporting with IaaS VM reporting. The table below shows the comparison between the two:



The screenshot below shows the cost comparison between SQL Azure reporting and IaaS VM reporting:

The screenshot below shows a comparison of the time taken to render a report on a SQL Azure reporting server and an IaaS VM report server.

When can we expect a tutorial on IaaS VM reporting?


Nick Harris (@cloudnick) described How to upload an Image to Windows Azure Storage using Mobile Services in an 11/12/2012 post:

This post details how to use the Windows Azure SDK for Node from Windows Azure Mobile Services to generate a Shared Access Signature (SAS) and then how to upload your Image (or any file) to Blob Storage directly from your Windows Store App using the Windows Azure Storage Client Library 2.0 for Windows Runtime (CTP)


In my previous post How to Upload an Image using a Blob Storage SAS generated by Windows Azure Mobile Services I detailed:

  1. Why you should use a SAS to upload any binary data from client devices
  2. How you could generate your own SAS using Mobile Services Server Side scripts
  3. How you could use the HttpClient from a Windows Store app to upload your image using the SAS

With the recent inclusion of the Windows Azure SDK for Node in Mobile Services and the announcement of the Windows Azure Storage Client Library 2.0 for Windows Runtime (CTP), the process for performing steps 2 and 3 is much easier. This post will detail the updated approach.

Creating your Mobile Service

In this post I will extend the Mobile Services quick start sample. Before proceeding to the next section create a mobile service and download the quickstart as detailed in the tutorial here

Capturing the Image|Media

Our first task is to capture the media we wish to upload. To do this, follow these steps.

  • Add an AppBar to MainPage.xaml with a take photo button to allow us to capture the image

<Button Name="btnTakePhoto" Style="{StaticResource PhotoAppBarButtonStyle}"
        Click="OnTakePhotoClick" />

  • Add the OnTakePhotoClick handler and use the CameraCaptureUI class for taking the photo or video

using Windows.Media.Capture;

private async void OnTakePhotoClick(object sender, RoutedEventArgs e)
{
    //Take photo or video
    CameraCaptureUI cameraCapture = new CameraCaptureUI();
    StorageFile media = await cameraCapture.CaptureFileAsync(CameraCaptureUIMode.PhotoOrVideo);
}

  • Update the TodoItem class with some properties that will be required to generate the SAS

public class TodoItem
{
    public int Id { get; set; }

    [DataMember(Name = "text")]
    public string Text { get; set; }

    [DataMember(Name = "complete")]
    public bool Complete { get; set; }

    //Added below for blob SAS generation in Mobile Services
    [DataMember(Name = "containerName")]
    public string ContainerName { get; set; }

    [DataMember(Name = "resourceName")]
    public string ResourceName { get; set; }

    public string SAS { get; set; }
}

  • Update the OnTakePhotoClick handler to insert the todo item, setting the ContainerName and ResourceName for which we want a SAS generated

private async void OnTakePhotoClick(object sender, RoutedEventArgs e)
{
    //Take photo or video
    CameraCaptureUI cameraCapture = new CameraCaptureUI();
    StorageFile media = await cameraCapture.CaptureFileAsync(CameraCaptureUIMode.PhotoOrVideo);

    //add todo item to trigger insert operation which returns item.SAS
    var todoItem = new TodoItem() {
        ContainerName = "myPics",
        ResourceName = media.Name,
        Text = "NA",
    };

    await todoTable.InsertAsync(todoItem);

    //TODO: Upload image direct to blob storage using SAS
}

Generating a Shared Access Signature (SAS) using Mobile Services server-side script

In this step we add a server-side script that generates a SAS on the insert operation of the TodoItem table.

To do this perform the following steps:

  • Navigate to your Mobile Service and select the Data Tab, then click on Todoitem

  • Select Script, then the Insert drop down

  • Add the following server side script to create the containerName and generate a blob SAS for the resourceName

var azure = require('azure');
var qs = require('querystring');

function insert(item, user, request) {
    var accountName = '<replace with your storage account name>';
    var accountKey = '<replace with your storage account key>';
    var host = accountName + '.blob.core.windows.net';

    //Container names must be lowercase
    item.containerName = item.containerName.toLowerCase();
    var canonicalizedResource = '/' + item.containerName + '/' + item.resourceName;

    //Create the container if it does not exist
    //we will use public read access for the blobs and will use a SAS to upload
    var blobService = azure.createBlobService(accountName, accountKey, host);
    blobService.createContainerIfNotExists(item.containerName, {publicAccessLevel : 'blob'}, function(error) {

        //Container exists; now define a policy that provides write access
        //that starts immediately and expires in 5 minutes
        var sharedAccessPolicy = {
            AccessPolicy: {
                Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.WRITE,
                //Start: use for a start time in the future; beware of server time skew
                Expiry: formatDate(new Date(new Date().getTime() + 5 * 60 * 1000)) //5 minutes from now
            }
        };

        //Generate the SAS for your blob
        var sasQueryString = getSAS(accountName,
                                    accountKey,
                                    canonicalizedResource,
                                    azure.Constants.BlobConstants.ResourceTypes.BLOB,
                                    sharedAccessPolicy);

        //Full path for the resource, with the SAS appended
        item.sas = 'https://' + host + canonicalizedResource + '?' + sasQueryString;

        request.execute();
    });
}

function getSAS(accountName, accountKey, path, resourceType, sharedAccessPolicy) {
    return qs.encode(new azure.SharedAccessSignature(accountName, accountKey)
        .generateSignedQueryString(path, {}, resourceType, sharedAccessPolicy));
}

function formatDate(date) {
    var raw = date.toJSON();
    //The blob service does not accept milliseconds on the end of the time, so strip them
    return raw.substr(0, raw.lastIndexOf('.')) + 'Z';
}
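A note on the formatDate helper in the script above: the blob service rejects ISO 8601 timestamps that carry milliseconds, so the helper truncates the string that Date.prototype.toJSON produces. Shown standalone:

```javascript
// formatDate as used in the insert script: strip the ".mmm" milliseconds
// portion from the ISO 8601 string and re-append the trailing 'Z'.
function formatDate(date) {
  var raw = date.toJSON(); // e.g. "2012-11-17T10:30:00.123Z"
  return raw.substr(0, raw.lastIndexOf('.')) + 'Z';
}

console.log(formatDate(new Date(Date.UTC(2012, 10, 17, 10, 30, 0, 123))));
// "2012-11-17T10:30:00Z"
```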

Using the Windows Azure Storage Client Library 2.0 for Windows Runtime (CTP) to upload the Image directly to storage using the SAS
  • Download the Storage Client libraries for Windows 8 by clicking here.
  • Extract the package and add a reference to Microsoft.WindowsAzure.Storage.winmd in your client project.
  • Update the OnTakePhotoClick handler to upload the image directly to blob storage using CloudBlockBlob.UploadFromStreamAsync and the generated todoItem.SAS.



using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;

private async void OnTakePhotoClick(object sender, RoutedEventArgs e)
{
    //Take photo or video
    CameraCaptureUI cameraCapture = new CameraCaptureUI();
    StorageFile media = await cameraCapture.CaptureFileAsync(CameraCaptureUIMode.PhotoOrVideo);

    if (media != null)
    {
        //add todo item to trigger insert operation which returns item.SAS
        var todoItem = new TodoItem()
        {
            ContainerName = "mypics",
            ResourceName = media.Name,
            Text = "NA",
        };

        await todoTable.InsertAsync(todoItem);

        //Upload image direct to blob storage using SAS and the Storage Client library for Windows CTP
        //Get a stream of the image just taken
        using (var fileStream = await media.OpenStreamForReadAsync())
        {
            var sasUri = new Uri(todoItem.SAS);

            //Our credential for the upload is our SAS token
            StorageCredentials cred = new StorageCredentials(sasUri.Query.Substring(1));
            CloudBlobContainer container = new CloudBlobContainer(new Uri(string.Format("https://{0}/{1}", sasUri.Host, todoItem.ContainerName)), cred);
            CloudBlockBlob blobFromSASCredential = container.GetBlockBlobReference(todoItem.ResourceName);
            await blobFromSASCredential.UploadFromStreamAsync(fileStream.AsInputStream());
        }
    }
}

Run the application
  • Hit F5 on the application and right click with your mouse to show the app bar
  • Press the Take Photo button
  • Observe that the SAS is returned from your Mobile Service

  • Check that your storage account now has a great picture of a fully polished chrome dome capable of reflecting light far better than your average mirror :)
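One detail worth calling out from the walkthrough: the client passes "myPics", and the server script lowercases it because Azure blob container names must be 3-63 characters of lowercase letters, numbers, and single hyphens, beginning and ending with a letter or number. A hypothetical validation helper (not part of the tutorial) that encodes those documented rules:

```javascript
// Validates the documented Azure blob container naming rules:
// 3-63 chars; lowercase letters, digits, and hyphens; no leading,
// trailing, or consecutive hyphens.
function isValidContainerName(name) {
  return /^[a-z0-9](?:[a-z0-9]|-(?=[a-z0-9])){2,62}$/.test(name);
}

console.log(isValidContainerName('mypics'));   // true
console.log(isValidContainerName('myPics'));   // false (uppercase letter)
console.log(isValidContainerName('my--pics')); // false (consecutive hyphens)
```

Checking the name client-side avoids a round trip that would otherwise fail in createContainerIfNotExists.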

Cihan Biyikoglu (@cihangirb) reported Enzo Cloud Backup Utility now has support for Federations! in an 11/12/2012 post:

Herve did it again: he raced to the front and built the Enzo Cloud Backup tool, which gives users a great set of tools for moving data around in the cloud.

I am going to be trying out the tool soon. Great work, Herve. Thanks!


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

‡ PRWeb reported Digital Folio to Deliver Real-time Retail Pricing Data through Windows Azure Marketplace in an 11/16/2012 press release:

Features over 5 billion retail data points with thousands more generated every hour - for Amazon, Walmart, Target, Best Buy, and other key retailers and marketplaces.

Digital Folio announced today that it has launched its revolutionary Retail Intelligence Data API, which delivers unprecedented real-time visibility into dynamic pricing data as well as stock levels, channel availability, and reference data.

For the first time, businesses—including retailers, researchers, and analysts—can access real-time and historical competitor/retail online pricing, availability, and statistical analytics directly within Microsoft Excel, via browser, via download, or within their own business intelligence platforms—all via a self-service model without costly set-up fees or delays. The API also provides an unlimited number of queries with no limit on the number of products that can be accessed.

Digital Folio’s Retail Intelligence API delivers unprecedented data fidelity and access to retail big data as dynamic pricing happens.


“With online retailers now deploying the ability to dynamically change pricing up to once every six seconds, it’s nearly impossible to keep up,” said Patrick Carter, President of Digital Folio, Inc. “Digital Folio does keep up because it’s powered by the crowd, providing the platform capability to update up to 10,000 products per second. That’s the equivalent of an entire major retailer in less than a minute.”

Carter added, “From a technical perspective, we offer both OData and REST API interfaces, providing access to hourly interval summaries as well as discrete event data. We are particularly excited about also making our data API available via Microsoft’s Windows Azure Marketplace.”
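The press release doesn't document the actual entity sets or fields behind those OData and REST interfaces, so the following is purely illustrative of what querying such an OData feed looks like; the endpoint, entity, and property names are invented:

```javascript
// Building an OData query URL by hand: $filter narrows the result set and
// $top caps it. All names below are invented for illustration only.
var base = 'https://example.cloudapp.net/odata/PriceObservations';
var filter = "Retailer eq 'BestBuy' and ObservedAt ge datetime'2012-11-01T00:00:00'";
var url = base + '?$filter=' + encodeURIComponent(filter) + '&$top=100';

console.log(url);
```

The same URL conventions are what let Excel, PowerPivot, and other OData-aware tools consume a feed like this directly.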

Max Uritsky, Group Program Manager, Windows Azure Marketplace, said, “We are excited to work with Digital Folio and their data launch on Windows Azure Marketplace, a premiere online market for buying and selling finished premium data. Windows Azure Marketplace lets users directly access this data via large-scale Hadoop implementations, as well as via Microsoft Excel and Microsoft Office.”

“Digital Folio has already been recognized by the press for the level of dynamic pricing accuracy we deliver within our consumer applications,” Carter said. “Now our API provides dynamic pricing intelligence beyond what’s easily exposed within our shopping applications.

“Until now there has been no ‘Nielsen’ for real-time retail pricing. Digital Folio’s platform is like a Level 2 trading system for real-time retail that empowers retailers and shoppers alike.”

Digital Folio’s Retail Intelligence API represents the newest addition to the Digital Folio comparison shopping platform, which includes the powerful Digital Folio sidebar, DigitalFolio comparison shopping site, and Windows 8 app. Thousands of shoppers use Digital Folio apps to compare product prices instantly across major retailers, save their finds in intelligent shopping lists, share their finds with friends and family, and get instant product pricing and availability updates…all in one place.

Established in 2011, Digital Folio is the first multi-channel shopping network, powered by the crowd, to provide unbiased real-time pricing and availability across top retailers. For consumers, we make it easy to get a complete, real-time shopping picture every time they browse, collect, and share shopping finds. For businesses, we provide the first platform to deliver real-time retail intelligence data powered by the crowd and universal comparison shopping, wish list, and dynamic promotion capabilities. For more information on Digital Folio Platform Solutions, visit

A search for Digital Folio in Windows Azure Marketplace returns 6 data sets:


The StreamInsight Team answered What Is StreamInsight? A Primer for Non-Programmers with a link to a TechNet Wiki on 11/15/2012:

Are you trying to figure out whether StreamInsight might be something you could use, but you’re having trouble sifting through all the programming jargon that’s used to describe it? StreamInsight is, ultimately, a set of programming tools, and at some point it takes a programmer to implement a StreamInsight solution. But it really should be possible to get a handle on what StreamInsight is all about even if you’re not a programmer yourself.

A new article published in the TechNet Wiki may be able to help: StreamInsight for Non-Programmers. It gives an overview of the technology, but it leaves out the C# references and relates StreamInsight to more familiar SQL databases and queries. Check it out.

When you’re done there and are ready to dig a little deeper, take a look at Get Started with StreamInsight 2.1. That article should help you navigate through the StreamInsight official documentation and other resources.

And, as always, you can post questions or comments here or on the TechNet Wiki.

For more information on Microsoft Project Codename “Austin” (StreamInsight Services for Windows Azure) watch for a new two-part article coming soon from Red Gate Software.

Tiffany Trader reported Microsoft Outfits Azure Cloud for Big Compute in an 11/13/2012 post to the HPC In the Cloud blog:

On Tuesday at SC12 Microsoft debuted a set of "big compute" capabilities for its Windows Azure offering. The company is courting the HPC space with more powerful hardware, new instance configurations, and the updated Microsoft HPC Pack 2012. The advanced management software has many new features and supports running compute-intensive workloads in three configurations: on premises, on Windows Azure, or in a mixed-use, hybrid scenario.

imageBig Compute on Windows Azure – it's unclear whether that's the final name – is available in two configurations. The entry-level HPC instance sports 8 cores and 60 GB RAM, while the higher-end option doubles these specs for a total of 16 cores and 120 GB of RAM. The new servers are outfitted with dual Xeon E5-2670s (2.6 GHz) with DDR3 1600 MHz RAM. The nodes are linked via a 40 Gbps InfiniBand network with RDMA, while a 10GbE backplane is used to hook up to external storage and the Internet.

The new configurations call to mind Amazon's High-Memory Instances. The High-Memory Double Extra Large Instance (m2.2xlarge) has 4 virtual cores with 34 GB of memory, while the High-Memory Quadruple Extra Large Instance (m2.4xlarge) has 8 virtual cores and 68.4 GB of memory.

An important distinction, however, is that Amazon only uses high-speed (10GbE) interconnect technology for its Cluster Compute and Cluster GPU instances, while Microsoft is introducing the ability to do RDMA (remote direct memory access) in a virtualized environment. The technology provides low-latency network capability for MPI (message passing interface) applications and allows an Azure cluster to send a 4 byte packet across machines in 2.1 microseconds. Alex Sutton, Group Program Manager for Windows HPC Server, interviewed for this article at SC12, said that Microsoft is the first company to offer virtualized RDMA in a commercial environment.

"For applications written to use the message passing interface (MPI) library, RDMA allows memory on multiple computers to act as one pool of memory," writes Bill Hilf, general manager, Windows Azure Product Marketing, in a blog entry today. "Our RDMA solution provides near bare metal performance (i.e., performance comparable to that of a physical machine) in the cloud, which is especially important for Big Compute applications."

According to Sutton, the performance penalty for running virtualization is down to about a 1 or 2 percent difference now. This will appeal to organizations that want to access the benefits of cloud (flexibility, scalability, on-demand, etc.), but aren't willing to sacrifice performance. The use of InfiniBand also enhances throughput, allowing applications to scale more effectively and improving time to results.

As a proof of concept, Microsoft ran the LINPACK benchmark across 504 16-core virtual machines (8,064 cores total). The test cluster, named Faenov, achieved 151.3 teraflops (167.7 peak) with 90.2 percent efficiency, earning it the 165th spot on the most recent TOP500 list. In terms of efficiency, the system placed 27th. Faenov ran Windows Server 2012 in virtual machines hosted on Windows Azure on top of Hyper-V virtualization. Sutton makes the point that 90.2 percent efficiency is better than many on-premise (non-virtualized) clusters.
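The quoted efficiency is simply achieved performance (Rmax) divided by theoretical peak (Rpeak); checking the arithmetic with the figures above:

```javascript
// LINPACK efficiency = Rmax / Rpeak, using the Faenov figures quoted above.
var rmax = 151.3;  // achieved teraflops
var rpeak = 167.7; // theoretical peak teraflops
var efficiency = (rmax / rpeak) * 100;

console.log(efficiency.toFixed(1) + '%'); // "90.2%"
```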

Bringing I/O latency under control still leaves the bandwidth barrier that is the consumer Internet, but for the majority of customers this won't be an issue. For those that need to make large data transfers into and out of the cloud, Microsoft plans to support a "FedEx net" (physical shipping of drives) at some point.

Pricing on the new configurations has not been announced, so price point comparisons to EC2, Google Compute Engine and other IaaS offerings won't be possible yet. Initially "Big Compute" will only run Windows, but they are looking into Linux. Of course, the hardware can support Linux, but the engineers still need to hammer out how to run it on virtualized RDMA.

Microsoft is describing early success stories around a segment of customers who run Windows and need low-latency. Initial interest and customer stories are coming out of risk modeling, disease research and complex engineering tasks. Big data is also on Microsoft's radar, as the company anticipates many big data workloads benefiting from the new configurations.

Today's announcement shows us a Microsoft that continues to evolve on the cloud front, both to compete against EC2 and in its support for the HPC community. Azure was originally launched as a PaaS offering in 2010, but in June of this year, Microsoft added infrastructure as a service (IaaS) capabilities and began allowing users to spin up Linux VMs. Customers want choice and competition, but with its purpose-built architecture and significant lead time, Amazon is going to be tough to catch. Microsoft has a dedicated following of Windows users, but most of the action in the HPC community is Linux.

It will be interesting to see whether low-latency virtualization pans out as a differentiator for Azure. It might take some R&D work, but Amazon could similarly outfit their cloud if they see a call for it. In order for the cloud to be profitable, it has to maintain the right balance of utilization. Too much extra inventory is as bad for business in the long run as too little inventory is in the short run. Cloud companies want just the right amount of cushion (or excess inventory). To this point, Microsoft says that it is tracking demand and keeping tight control on the ordering process.

Big Compute on Windows Azure is currently in private preview with select partners. A public beta period is expected to commence in the first half of 2013, followed by general availability in roughly the same time frame.

See also Bill Hilf (@bill_hilf) posted Windows Azure Benchmarks Show Top Performance for Big Compute to the Windows Azure blog on 11/13/2012 from OakLeaf.

No significant articles today

<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

Todd Holmquist-Sutherland described the location of WinRT SDK for Windows Azure Service Bus in an 11/17/2012 message:

I have to admit that we’ve made it exceedingly difficult to find our Service Bus WinRT SDK.  It is actually part of the Mobile Services SDK, the link for which can only be found if you work through the Windows Store apps tutorial in the Mobile Dev Center.  To save you a few steps, here is the link:


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

‡ Peter Saddow (@petersad1) of the SQLOS Team announced the availability of the Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server MAP toolkit in an 11/13/2012 post (missed when published):

With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premises physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server.

MAP Toolkit 8.0 Beta is available for download here

Your participation and feedback are very important to making the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at or through one of our surveys.

Now, let’s walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

    • Perform an inventory
    • View the Windows Azure VM Readiness results and report
    • Collect performance data to determine VM sizing
    • View the Windows Azure Capacity results and report

Perform an inventory:

1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:


2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don’t care about completely inventorying a machine, just select the SQL Server scenario. Click Next to Continue.


3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue.


Description of Discovery Methods:

    • Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    • Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0–based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
    • System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    • Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses within the range aren’t in use.
    • Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
    • Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.
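As the note on the IP-range method suggests, a range scan probes every address in the block whether or not a host answers, which is why sparse ranges scan slowly. A rough sketch of that enumeration — illustrative only, not the MAP Toolkit's actual scanner:

```python
# Illustrative only -- not the MAP Toolkit's actual discovery code.
# Shows why scanning a sparsely used IP range is wasteful: every
# address between the start and end is probed, used or not.
import ipaddress

def addresses_to_probe(start, end):
    """Enumerate every IPv4 address between start and end, inclusive."""
    first = ipaddress.IPv4Address(start)
    last = ipaddress.IPv4Address(end)
    return [str(ipaddress.IPv4Address(n))
            for n in range(int(first), int(last) + 1)]

probe_list = addresses_to_probe("192.168.1.1", "192.168.1.254")
print(len(probe_list))  # 254 probes, even if only a handful of hosts exist
```

If only a dozen machines live in that /24, roughly 95% of the probes are wasted waiting for timeouts — the behavior the wizard's note warns about.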

4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added.


The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines to inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.


5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally, just accept the defaults and click Next.


6. On the Enter Computers Manually page, click Create to pull up a dialog where you can enter one or more computer names.


7. On the Summary page confirm your settings and then click Finish.


After clicking Finish the inventory process will start, as shown below:


Windows Azure Readiness results and report

After the inventory process has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes, and the number of machines that require further changes.


If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report.


After the report is generated, select View | Saved Reports and Proposals to view the location of the report.


Open up the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has columns for the Operating System and SQL Server assessments and provides a recommendation on how to resolve issues if a component is not supported.


Collect Performance Data

Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.


Windows Azure Capacity results and report

After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes suggested for the Windows Server and Linux machines that were analyzed.


You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.


MAP Toolkit 8.0 Beta is available for download here

Your participation and feedback are very important to making the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at or through one of our surveys.

Useful References:

‡ Cory Fowler (@SyntaxC4) described Continuous Deployment in Windows Azure Web Sites in an 11/17/2012 post:

Automation of tasks is one thing that I am an advocate of in my development projects. Getting functionality that is repeatable, with a low risk of human error, for a one-time cost is a sound business decision, and as a developer it keeps your hands on rolling more code for a greater percentage of your work day. It’s a win-win scenario.

The Windows Azure Web Sites team, alongside the Kudu team, have added Continuous Deployment functionality to Windows Azure Web Sites with support for three familiar social source code repositories: CodePlex, GitHub and BitBucket. The team has also added support for Continuous Integration using Team Foundation Service [a new cloud-based offering of Team Foundation Server].

I’m pleased to announce that Windows Azure Web Sites now allows Continuous Deployment from Private Repositories from both GitHub and BitBucket.

In this post:


  1. Create a Windows Azure Web Site
    1. Enable Git Deployment
  2. Associate a Source Code Repository
    1. Associate a GitHub Repository to Windows Azure Web Sites
    2. Associate a BitBucket Repository to Windows Azure Web Sites
Create a Windows Azure Web Site

To avoid reinventing the wheel, you can follow instructions outlined on the Windows Azure Develop Center on how to Create a Windows Azure Web Site and Enable Git Deployment.

NOTE: If you do not need a MySQL database, or have decided to go with another database option, choose Quick Create from the Web Site menu instead of Create with Database.

Associate a Source Code Repository

To facilitate continuous deployment, it’s necessary to have a centralized location to pull the website code from; in this particular blog entry we’re going to use GitHub and BitBucket.

Initializing a Git Repository will redirect the Management Portal to the DEPLOYMENTS tab.

Now that Git has been enabled, use the collapsible menus to select how you would like to deploy code to the new Windows Azure Web Site.

Associate a GitHub Repository to Windows Azure Web Sites


Expand the item labeled Deploy from my GitHub repository.


Click on Authorize Windows Azure. This will open a window to federate with GitHub; you will need to approve the ability for Windows Azure to access your GitHub account.


Once access has been granted, the browser will be redirected back to the Management Portal, to a screen where you select either a Public or Private repository.


After selecting the repository to be published, click on the check mark to start the deployment process.

If your repository is empty, push to GitHub to trigger deployment to Windows Azure.


Each subsequent push to GitHub will trigger a service hook and begin a deployment of the latest bits to the Web Site. Now that the deployment has been pulled into the Web Site, clicking on Browse in the Taskbar Drawer will launch the web application.
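A service hook of this kind is simply an HTTP POST from the repository host carrying a JSON payload that describes the push. The sketch below is a hypothetical receiver's decision logic, not Kudu's actual implementation; the field names follow GitHub's push-event payload, and the deployed branch name is an assumption for illustration:

```python
# Hypothetical sketch of a push-webhook receiver's decision logic.
# Field names ("ref", "after") follow GitHub's push-event payload;
# the branch to deploy is an assumption for illustration.
import json

DEPLOY_BRANCH = "master"

def should_deploy(payload_json):
    """Return True when the push touched the branch we deploy from."""
    payload = json.loads(payload_json)
    # In a push event, "ref" looks like "refs/heads/<branch>"
    return payload.get("ref") == "refs/heads/" + DEPLOY_BRANCH

event = json.dumps({"ref": "refs/heads/master",
                    "after": "a94a8fe5ccb19ba61c4c0873d391e987982fbbd3"})
print(should_deploy(event))  # True
```

A real receiver would go on to fetch commit `after` and run the deployment; the point here is only that each push delivers enough context for the service to decide whether a deployment is warranted.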


Associate a BitBucket Repository to Windows Azure Web Sites


Just like associating a GitHub account with a Windows Azure Web Site, expand the item labeled Deploy from my BitBucket repository. Authorize Windows Azure to access a BitBucket account by federating authentication through BitBucket.


After clicking on Authorize Windows Azure, a prompt to authenticate with BitBucket appears.


After signing into BitBucket, a prompt appears to select the Public or Private repository to deploy to Windows Azure Web Sites.


Unlike GitHub, BitBucket will require you push a change to the repository before the Service Hook will deploy code to the Windows Azure Web Site.


Once a push has been made to the private BitBucket repository, the deployment will be pushed to the Web Site.


In the taskbar drawer at the bottom of the browser viewport, click the Browse Button.


A new window will open and the site will display the web files which were pulled into the Web Site.


Continuous Deployment is a great way to introduce new features or functionality to your customers in an automated fashion. With the new support for private repositories, Windows Azure Web Sites can help deliver stunning web sites that either use open source projects from public repositories or provide clients with a customized solution from a private repository.

‡ Cory Fowler (@SyntaxC4) reported PHP 5.4 Now Native in Windows Azure Web Sites in an 11/16/2012 post:

A while back I wrote a blog post on Enabling PHP 5.4 in Windows Azure Web Sites, when we enabled the ability to bring your own runtime to Windows Azure Web Sites. This is still a valid solution if you would like to manage your own PHP.ini file, or if you would like to ensure that you are using a specific build of PHP.

It’s exciting to announce that Windows Azure Web Sites now has PHP 5.4 ready to be enabled in your Web Sites.

Even though PHP 5.4 is available, PHP 5.3 is still enabled by default.


Enable Native PHP 5.4 in Windows Azure Web Sites

After Creating a Windows Azure Web Site, navigate into the Web Site details page and select the CONFIGURE tab. Under the framework section you will see PHP VERSION; select the box containing 5.4, and it will turn purple, indicating that there is an unsaved change.


At the bottom of the browser viewport you will find the TASK DRAWER, which will have changed to include a SAVE button. Click the SAVE button to enable PHP 5.4 for your Windows Azure Web Site.


Once the change has been saved, you’ll be greeted by this nice little success notice.


Finally, you will also notice that the purple indicator has now changed back to blue on the PHP 5.4 box.


You are now ready to deploy PHP 5.4 applications to Windows Azure Web Sites.

Avkash Chauhan (@avkashchauhan) posted a Customer Evaluation Guide for Windows Azure Virtual Machines (IaaS) on 11/14/2012:

For Customers and Partners, the following summary of resources will support the evaluation of Windows Azure Virtual Machines (IaaS):

Core Resources

90 Day Free Trial - we can confirm the Preview Period in support of the new release of Windows Azure is now available, which enables you to evaluate the new Virtual Machines/IaaS and Enterprise Networking capabilities. Once you have registered for the 90 Day Free Trial and created a new Account, you can access the Preview directly at this link:

Digital Chalk Talk Videos – detailed technical overviews of the new Windows Azure services and supporting technologies as announced June 7, including Virtual Machines (IaaS Windows and Linux), Storage, Command Line Tools

Scenarios Videos on You Tube – “how to” guides, including “Create and Manage Virtual Networks”, “Create & Manage SQL Database”, and many more

MSDN Forums for Windows Azure

Microsoft Knowledge Base article Microsoft server software support for Windows Azure Virtual Machines

Windows Azure Training Kit

Windows Azure Trust Center - provides a comprehensive view of Windows Azure security and compliance practices

Main Site for Windows Azure

Recent Events (and Video Recordings)

//build/ Conference at Microsoft Campus, October 31, 2012

Search “Azure build 2012” at for more Windows Azure Sessions
Windows Azure Overview, Scott Guthrie
Windows Azure Internals, Mark Russinovich
Introduction to Windows Azure Infrastructure as a Service (IaaS), Mark Russinovich
Windows Azure Active Directory: enabling single sign on and directory services for cloud SaaS apps
Vittorio Bertocci
Developing for Windows Azure Web Sites and SharePoint Online,
Yochay Kiriaty and Thomas Mechelke

Meet Windows Azure, June 7, 2012
Meet Windows Azure Keynote, Scott Guthrie

TechEd Europe, June 2012
TechEd Europe Keynote with Brad Anderson and Jason Zander
Windows Azure Internals, Mark Russinovich
Deep Dive into Running Virtual Machines on Windows Azure, Vijay Rajagopalan,

TechEd Orlando, June 2012

Satya Nadella Keynote – TechEd 2012 Orlando, June 11, 2012 (34 minute – Virtual Machines, 47 minute – Applications)
Meet the New Windows Azure, Scott Guthrie – TechEd 2012 Orlando, June 11, 2012

Windows Azure Virtual Machines, Networking, and Hybrid
Windows Azure Internals, Mark Russinovich
Windows Azure Virtual Machines and Virtual Networks, Mark Russinovich
Windows Azure IaaS and How It Works, Corey Sanders
Deep Dive into Windows Azure Virtual Machines: From the Cloud Vendor and Enterprise Perspective, Vijay Rajagopalan
Linux on Windows Azure IaaS with Partner Demos, Tad Brockway
An Overview of Managing Applications, Services, and Virtual Machines in Windows Azure, Karandeep Anand
Monitoring and Managing Your Windows Azure Applications and Services, Chandrika Shankarnarayan
Overview of Windows Azure Networking Features, Ganesh Srinivasan
Deep Dive: Extending Enterprise Networks to Windows Azure - Project "Brooklyn", Ganesh Srinivasan
Hybrid Will Rule: Options to Connect, Extend and Integrate Applications in Your Data Center and Windows Azure, Yousef Khalidi
Business Continuity in the Windows Azure Cloud, Yousef Khalidi

Windows Azure Blogs

Forrester Research Blogs and Key Publications

Microsoft Blog

Windows Azure Virtual Machines
Michael Washam, Senior Technical Evangelist, Microsoft Corporation, June 8, 2012

Wely Lau (@welylive) posted Windows Azure Virtual Machine: A look at Windows Azure IaaS Offerings (Part 1) on 11/13/2012:

This article looks at the journey Windows Azure has taken from when it was first launched as a PaaS, to the newly announced IaaS offerings. In the later part of this article, I’ll also provide a quick, hands-on tutorial on how to set up a Windows Azure Virtual Machine.

Started with PaaS, the stateless VM model

As many of you might be aware, Microsoft started Windows Azure with the PaaS (Platform-as-a-Service) model, which became generally available in February 2010.

With PaaS, Web and Worker Roles were introduced; customers only had to take care of the application and data, not the operating system and infrastructure. The stateless Virtual Machine (VM) concept was also brought into the picture. This means that at runtime, each VM should not store data locally, as it will be gone if the VM is reincarnated due to unexpected events such as hardware failures. Instead, data should be stored in persistent storage such as SQL Database or Windows Azure Storage.
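The stateless rule can be made concrete with a toy sketch: anything kept on the instance vanishes when the VM is reincarnated, while anything written to an external store survives. (The dict below merely stands in for a durable service such as SQL Database or Windows Azure Storage; the class names are invented for illustration.)

```python
# Toy illustration of the stateless-VM rule. EXTERNAL_STORE stands in
# for a durable service such as SQL Database or Windows Azure Storage.
EXTERNAL_STORE = {}

class StatelessRole:
    def __init__(self):
        self.local_cache = {}          # wiped whenever the VM is reprovisioned

    def save(self, key, value):
        self.local_cache[key] = value  # fast, but NOT durable
        EXTERNAL_STORE[key] = value    # durable across reprovisioning

vm = StatelessRole()
vm.save("order:42", "pending")

vm = StatelessRole()                   # simulate the VM being reincarnated
print("order:42" in vm.local_cache)    # False -- local state is gone
print(EXTERNAL_STORE["order:42"])      # pending -- external state survives
```

Local disk and memory are still useful as caches; the design rule is only that nothing irreplaceable may live there.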

One primary advantage of this model is that scaling out and in can be done easily. In fact, it’s just a matter of changing a parameter, and within a few minutes the VM(s) will be provisioned.

Scaling in Windows Azure Paas

Figure 1 – Scaling in Windows Azure PaaS “Cloud Services”

Challenges of PaaS

Although many customers have adopted Windows Azure as a cloud platform since its launch, there have also been many unsuccessful deals because of various stumbling blocks, especially when migrating existing applications to the PaaS model. The following summarizes two major challenges:

1. Migration and portability

When talking about the effort involved in migration, a lot of it depends on the architecture of the application itself. I’ve written a series of articles on moving an application to the PaaS cloud model.

If you’ve decided to migrate your application to PaaS regardless of the effort, what about bringing it back on-premises? That might take more effort again. Alternatively, you could maintain two copies of your application source code.

2. Stateless virtual machine

Although there are some techniques to install third-party software on a Windows Azure stateless VM, the installation can only be done when setting up the VM; any changes at runtime won’t be persisted. This prevents customers from installing and running stateful applications on Windows Azure.

Introducing IaaS

With feedback from customers and communities, an initiative to support Infrastructure as a Service (IaaS) was finally announced on 7 June 2012 at the Meet Windows Azure event. This is an awesome move by Microsoft, bringing more powerful capabilities to the platform and also competing with other cloud providers. Exciting news for customers!

The support for IaaS centers on Windows Azure Virtual Machine (WAVM). The major difference between this newly launched IaaS VM and the PaaS (so-called “Cloud Services”) VM is persistence: the IaaS VM is persistent, meaning that any change we perform at runtime stays durable even though the VM is reimaged. Aside from WAVM, the IaaS offerings are also supported by various new networking features, which offer a rich set of capabilities to establish connections among cloud VMs and between cloud VMs and an on-premises network infrastructure.

Disclaimer: at the time this article was written, Windows Azure IaaS offerings, including Virtual Machine, were still in Preview. As such, changes might be applied before GA (general availability).

Windows Azure Virtual Machine

Windows Azure Virtual Machine utilizes the fantastic Windows Azure Storage back end. As such, it inherits the high-availability benefit: the VM image is replicated in 3 copies.

Windows Azure Virtual Machine on Blob Storage

Figure 2 – Windows Azure Virtual Machine on Blob Storage
(Source: MS TechEd North America 2012 – AZR201.pptx – Slide 30)

The VM is represented in a standard, consistent form as a VHD file. Thus, the VHD can be effortlessly moved from an on-premises virtualized environment (Hyper-V) to Windows Azure or the other way around, or even to other cloud providers. This gives the customer lots of mobility, portability, and a no-lock-in experience.

Image Mobility

Figure 3 – Image Mobility
Windows Azure Platform Training Kit – WindowsAzureVirtualMachines.pptx – Slide 11

Supported OS Images in Windows Azure VM

Windows Azure supports several versions of Windows Server and several distros of Linux as can be seen in the figure below:

Figure 4 – Supported OS in Windows Azure Virtual Machine
(Source: Windows Azure Platform Training Kit – VirtualMachineOverview.pptx – Slide 7)

Some of you might be surprised to see Linux distros on the list. This shows that Microsoft is now heading in an open direction to reach both Microsoft and open-source customers.

A hands-on tutorial

0. This tutorial requires you to have a Windows Azure subscription. If you don’t have one, you can sign up for the free trial here. As Windows Azure IaaS is still in Preview at the moment, you are required to request the preview features here. It might take some time for them to grant you the preview features.

1. If you are ready with the subscription and preview features, log on to the new Windows Azure Management Portal with your Live ID and password. You will see the following screen if you’ve successfully logged in to the portal.

windows azure management portal

2. To create a Virtual Machine, click on the “+ New” button located in the left bottom corner. When the pop-up menu shows up, select Virtual Machine in the left hand menu and select FROM GALLERY.

3. (VM OS Selection screen) It will then show the available OS images. Let’s choose Microsoft SQL Server 2012 Evaluation Edition. This is basically Windows Server 2008 R2 with SQL Server 2012 Evaluation Edition pre-installed.

VM OS Selection

4. (VM Configuration Screen) The subsequent step requires us to fill in the VM configurations. Please remember your password; you will need to use it again in later steps.

5. (VM Mode Screen) This screen allows you to define how and where your VM will be stored. Choose the STANDALONE VIRTUAL MACHINE option and enter your preferred DNS name for your service. As mentioned above, WAVM will use Blob Storage to store the VHD. This screen allows you to choose the Storage Account, Affinity Group, and Subscription.

VM Mode Screen

6. (VM Options Screen) This screen requires you to define the Availability Set of your Virtual Machine. Simply click the accept button and leave the configuration as default. I will explain more about the Availability Set in a subsequent article.

7. If everything goes well, you will see the VM is being provisioned.

It might take a few minutes for the VM to be ready; you will see the status change to Running. You can then click on the VM to see the details.

8. Clicking “Connect” will download a RDP file. Open the RDP file and you should see the Windows Security pop up. Enter the password that you specified in step 4.

9. When it prompts you with the certificate error, just simply accept it by clicking “Yes”.

10. As you can see, I’ve successfully RDP’d into the VM. Most importantly, any changes that we make now (at runtime) will be persisted.

You can also see that SQL Server 2012 is pre-installed for us.

Coming Up Next

In the next article, we will continue to look at Windows Azure Virtual Machine in more detail, including disk and images concepts, networking features, the combination of PaaS and IaaS, and so on. Stay tuned.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Himanshu Singh (@himanshuks, pictured below) posted Real World Windows Azure: IT Firm Improves Its Flagship Product and Reaches More Customers with Cloud Solution to the Windows Azure blog on 11/15/2012:

As part of the Real World Windows Azure series, we connected with Oguz Kucukbarak, Managing Partner at ODC to learn more about how the IT firm improved its flagship product and reached more customers with Windows Azure. Read ODC’s success story here. Read on to find out what he had to say.

Himanshu Kumar Singh: Tell me about ODC

Oguz Kucukbarak: Based in Istanbul, Turkey, ODC is an IT business and technology consultancy with more than 50 employees, and it has offices in Istanbul, Dubai, and Baku, Azerbaijan. ODC was founded in 2005 with the launch of its flagship products SmartMessage and SmartMessage On-Demand, and it currently has more than 800 enterprise customers in a variety of industries, including banking, insurance, automotive, and telecommunications.

HKS: How does ODC deliver its services?

OK: SmartMessage and SmartMessage On-Demand are e-marketing and campaign management platforms that companies use to communicate with customers and employees through mass distribution of email, short message service (SMS) messages, and other electronic formats. SmartMessage synchronizes with customer databases to easily target campaigns based on criteria such as age, occupation, or location. SmartMessage is an on-premises solution that runs on a customer's hardware, while SmartMessage On-Demand is hosted on ODC servers and requires no customer investment in infrastructure.

HKS: What led you to evaluate cloud services as a possible solution?

OK: Although ODC has successfully engaged customers with existing products, we faced challenges trying to reach customers in the small and medium enterprise (SME) segment, which is a strong emerging market in the region. For these customers, the hardware investment to host an on-premises SmartMessage deployment may be prohibitive, and they may not need the product's advanced features.

SmartMessage On-Demand is an alternative for these customers, but the rapid growth of the product was straining the ODC hosting infrastructure. Each month, more than 20 million email messages and 10 million SMS messages are sent via SmartMessage On-Demand. On peak usage days, it was becoming difficult and expensive for us to scale our hosting hardware to meet demand levels and satisfy our service level agreements for performance. So we began looking for a more flexible and scalable solution that is suitable for SME customers.

HKS: How did you decide on Windows Azure?

OK: Given the increasing viability of cloud-based solutions for highly scalable and cost-effective enterprise applications, ODC felt that a cloud solution would be a perfect match for the company's new version of SmartMessage. Specifically, ODC considered Windows Azure. We have partnered with Microsoft since our company was founded. So, we felt that Windows Azure would fit perfectly with our product development strategy. Windows Azure allows us to dynamically scale our resources, which makes it much easier to accommodate peak demand without additional infrastructure costs for us.

HKS: How did the migration to Windows Azure go?

OK: A team of three developers began to work on the new product, SmartMessage Lite, in early 2012. Because the developers had experience with the Microsoft Visual Studio 2010 development system, the transition to Windows Azure went smoothly. Our developers also found it easy to access helpful resources during development. We were able to turn to MSDN for answers and received strong support from the local Microsoft team. There is a growing community of Windows Azure developers online and we were able to find solutions to any problems we encountered.

HKS: What are some of the operational benefits you’ve seen with Windows Azure?

OK: In addition to making SmartMessage Lite more scalable with Windows Azure, the development team also made the product easier to use so that it is a better fit for SME customers. The initial release of SmartMessage Lite includes simplified user interface and core functionality for SMS distribution. A second phase of the product, currently scheduled for late 2012, will add email features.

Development of SmartMessage Lite took only three months; we were impressed by Windows Azure storage, which makes it easy to store huge amounts of data. We also found the Windows Azure emulator very useful for locally testing new versions before moving them to the cloud. The dynamic scalability of Windows Azure meant that we could spend less time worrying about resource allocation and more time enhancing product features.

HKS: And the benefits for your customers?

OK: Being able to meet extreme peaks in demand is a key part of SmartMessage Lite. We are now able to give our customers a more available product, a better experience, and we have reduced our own costs.

Read how others are using Windows Azure.

Nathan Totten (@ntotten) put the Spotlight on Windows Azure Startup in an 11/13/2012 post:

If you watched the second BUILD keynote a few weeks ago you may have seen Satya mention a company called is an exciting new startup that runs their core services and infrastructure on Windows Azure. If you haven’t seen I definitely recommend signing up for a free account and giving them a try. We were fortunate to have Johnny Halife in Redmond for BUILD where he participated in a few talks and interviews which you will find below.

From a developer perspective, is really doing some cool things. Their back-end services are built entirely in Node.js and use MongoDB hosted on MongoLab to store their data. On the front-end they have built an impressive HTML/JavaScript application that runs in all modern browsers. Additionally, the dev team is iterating about as fast as I have ever seen. They are doing upwards of 15-20 deployments every day into production using a sophisticated, yet fast CI setup that runs Jenkins on a Windows Azure Virtual Machine.

Below you will find several recordings from BUILD sessions and interviews where Johnny describes in detail how is built.

Cloud Cover Episode 93 - Real-World Windows Azure with

Continuous Integration with Windows Azure Web Sites

Bootstrapping your Startup with Windows Azure

Cloud Cover Live @ BUILD 2012

Maarten Balliauw (@maartenballiauw) posted A phone call from the cloud: Windows Azure, SignalR & Twilio on 11/12/2012:

Note: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 05 (1.32 mb).
It has been written earlier this year, some versions of packages used (like jQuery or SignalR) may be outdated in this post. Live with it.

In the previous blog post we saw how you can send e-mails from Windows Azure. Why not take communication a step further and make a phone call from Windows Azure? I’ve already mentioned that Windows Azure is a platform which will run your code, topped with some awesomesauce in the form of a large number of components that will speed up development. One of those components is the API provided by Twilio, a third-party service.

Twilio is a telephony web-service API that lets you use your existing web languages and skills to build voice and SMS applications. Twilio Voice allows your applications to make and receive phone calls. Twilio SMS allows your applications to make and receive SMS messages. We’ll use Twilio Voice in conjunction with jQuery and SignalR to spice up a sign-up process.

The scenario

The idea is simple: we want users to sign up using a username and password. In addition, they’ll have to provide their phone number. The user will submit the sign-up form and will be displayed a confirmation code. In the background, the user will be called and asked to enter this confirmation code in order to validate his phone number. Once finished, the browser will automatically continue the sign-up process. Here’s a visual:


Sounds too good to be true? Get ready, as it’s relatively simple using Windows Azure and Twilio.

Let’s start…

Before we begin, make sure you have a Twilio account. Twilio offers some free credits, enough to test with. After registering, make sure that you enable international calls and that your phone number is registered as a developer. Twilio takes this step in order to ensure that their service isn’t misused for making abusive phone calls using free developer accounts.

Next, create a Windows Azure project containing an ASP.NET MVC 4 web role. Install the following NuGet packages in it (right-click, Library Package Manager, go wild):

  • jQuery
  • jQuery.UI.Combined
  • jQuery.Validation
  • json2
  • Modernizr
  • SignalR
  • Twilio
  • Twilio.Mvc
  • Twilio.TwiML

It may also be useful to develop some familiarity with the concepts behind SignalR.

The registration form

Let’s create our form. Using a simple model class, SignUpModel, create the following action method:

public ActionResult Index() { return View(new SignUpModel()); }

This action method is accompanied with a view, a simple form requesting the required information from our user:

@using (Html.BeginForm("SignUp", "Home", FormMethod.Post))
{
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Sign Up for this awesome service</legend>
        @* etc etc etc *@
        <div class="editor-label">
            @Html.LabelFor(model => model.Phone)
        </div>
        <div class="editor-field">
            @Html.EditorFor(model => model.Phone)
            @Html.ValidationMessageFor(model => model.Phone)
        </div>
        <p>
            <input type="submit" value="Sign up!" />
        </p>
    </fieldset>
}

We’ll spice up this form with a dialog first. Using jQuery UI, we can create a simple <div> element which will serve as the dialog’s content. Note the ui-helper-hidden class which is used to make it invisible to view.

<div id="phoneDialog" class="ui-helper-hidden">
    <h1>Keep an eye on your phone...</h1>
    <p>Pick up the phone and follow the instructions.</p>
    <p>You will be asked to enter the following code:</p>
    <h2>1743</h2>
</div>

This is a simple dialog in which we’ll show a hardcoded confirmation code which the user will have to provide when called using Twilio.
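The post hard-codes 1743 to keep the focus on the telephony flow. In a real sign-up you would generate a fresh code per request and keep a server-side copy to compare against the digits Twilio gathers. A minimal client-side sketch, purely illustrative (the function name is mine, not from the post):

```javascript
// Generate a random 4-digit confirmation code as a string.
// Using the range 1000-9999 keeps the leading digit non-zero,
// so the code always reads as four digits over the phone.
function generateConfirmationCode() {
  return String(Math.floor(1000 + Math.random() * 9000));
}

var code = generateConfirmationCode();
console.log(code); // e.g. "1743"
```

Wherever the code is generated, the server would store it (for example in session state) and compare it against what the caller keys in.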

Next, let’s code our JavaScript logic which will spice up this form. First, add the required JavaScript libraries for SignalR (more on that later):

<script src="@Url.Content("~/Scripts/jquery.signalR-0.5.0.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/signalr/hubs")" type="text/javascript"></script>

Next, capture the form’s submit event and, if the phone number has not been validated yet, cancel the submit event and show our dialog instead:

$('form:first').submit(function (e) {
    if ($(this).valid() && $('#Phone').data('validated') != true) {
        // Show a dialog
        $('#phoneDialog').dialog({
            title: '',
            modal: true,
            width: 400,
            height: 400,
            resizable: false,
            beforeClose: function () {
                if ($('#Phone').data('validated') != true) {
                    return false;
                }
            }
        });

        // Don't submit. Yet.
        e.preventDefault();
    }
});

Nothing fancy yet. If you now run this code, you’ll see that a dialog opens and remains open for eternity. Let’s craft some SignalR code now. SignalR uses a concept of Hubs to enable client-server communication, but also server-client communication. We’ll need the latter to inform our view whenever the user has confirmed his phone number. In the project, add the following class:

[HubName("phonevalidator")]
public class PhoneValidatorHub : Hub
{
    public void StartValidation(string phoneNumber)
    {
    }
}

This class defines a service that the client can call. SignalR will also keep the connection with the client open so that this PhoneValidatorHub can later send a message to the client as well. Let’s connect our view to this hub. In the form submit event handler, add the following line of code:

// Validate the phone number using Twilio $.connection.phonevalidator.startValidation($('#Phone').val());

We’ve created a C# class with a StartValidation method and we’re calling the startValidation message from JavaScript. Coincidence? No. SignalR makes this possible. But we’re not finished yet. We can now call a method on the server side, but how would the server inform the client when the phone number has been validated? I’ll get to that point later. First, let’s make sure our JavaScript code can receive that call from the server. To do so, connect to the PhoneValidator hub and add a callback function to it:

var validatorHub = $.connection.phonevalidator;
validatorHub.validated = function (phoneNumber) {
    if (phoneNumber == $('#Phone').val()) {
        $('#Phone').data('validated', true);
        $('#phoneDialog').dialog('destroy');
        $('form:first').trigger('submit');
    }
};
$.connection.hub.start();

What we’re doing here is adding a client-side function named validated to the SignalR hub. We can call this function, sitting at the client side, from our server-side code later on. The function itself is easy: it checks whether the phone number that was validated matches the one the user entered and, if so, it submits the form and completes the signup.

All that’s left is calling the user and, when the confirmation succeeds, we’ll have to inform our client by calling the validated message on the hub.

Initiating a phone call

The phone call to our user will be initiated in the PhoneValidatorHub’s StartValidation method. Add the following code there:

var twilioClient = new TwilioRestClient("api user", "api password");

string url = "" + "&phoneNumber=" + HttpContext.Current.Server.UrlEncode(phoneNumber);

// Instantiate the call options that are passed to the outbound call
CallOptions options = new CallOptions();
options.From = "+14155992671"; // Twilio's developer number
options.To = phoneNumber;
options.Url = url;

// Make the call.
twilioClient.InitiateOutboundCall(options);

Using the TwilioRestClient class, we create a request to Twilio. We also pass on a URL which points to our application. Twilio uses TwiML, an XML format, to instruct its phone services. When calling the InitiateOutboundCall method, Twilio will issue a request to the URL we are hosting to fetch the TwiML which tells Twilio what to say, ask, record, gather, … on the phone.
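To make that concrete, here is a hand-written sketch of the kind of TwiML document Twilio expects to fetch for this validation call. The action URL is left empty to mirror the elided URLs in the code above; a real response would point it back at the application:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Say>To validate your phone number, please enter the 4 digit
       passcode displayed on your screen followed by the pound sign.</Say>
  <!-- Gather up to 4 digits and post them back to the given action URL -->
  <Gather numDigits="4" action="" method="GET" />
</Response>
```

The TwilioResponse class used in the action methods that follow generates markup along these lines, so you rarely write TwiML by hand.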

Next up: implementing the TwilioValidationMessage action method.

public ActionResult TwilioValidationMessage(string passcode, string phoneNumber)
{
    var response = new TwilioResponse();
    response.Say("Hi there, welcome to Maarten's Awesome Service.");
    response.Say("To validate your phone number, please enter the 4 digit"
        + " passcode displayed on your screen followed by the pound sign.");
    response.BeginGather(new
    {
        numDigits = 4,
        action = "" + Server.UrlEncode(phoneNumber),
        method = "GET"
    });
    response.EndGather();
    return new TwiMLResult(response);
}

That’s right. We’re creating some TwiML here. Our ASP.NET MVC action method is telling Twilio to say some text and to gather 4 digits from the phone pad. These 4 digits will be posted by the Twilio service to the TwilioValidationCallback action method, which is the next method we’ll have to implement.

public ActionResult TwilioValidationCallback(string phoneNumber)
{
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<PhoneValidatorHub>();
    hubContext.Clients.validated(phoneNumber);

    var response = new TwilioResponse();
    response.Say("Thank you! Your browser should automatically continue. Bye!");
    response.Hangup();
    return new TwiMLResult(response);
}

The TwilioValidationCallback action method does two things. First, it gets a reference to our SignalR hub and calls the validated function on it. As you may recall, we created this method on the hub’s client side, so in fact our ASP.NET MVC server application is calling a method on the client side. Doing this triggers the client to hide the validation dialog and complete the user sign-up process.

Another action we’re doing here is generating some more TwiML (it’s fun!). We thank the user for validating his phone number and, after that, we hang up the call.

You see? Working with voice (and text messages too, if you want) isn’t that hard. It enables additional scenarios that can make your application stand out from all the many others out there. Enjoy!



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

‡ Steve Fox (@redmondhockey) explained the difference between Autohosting and Provider Hosting of SharePoint Apps in SharePoint and Windows Azure’s Interlude in Vegas: Building Apps using the Autohosted App Model on 11/17/2012:

Back in Seattle…

Wow, a whirlwind week in Vegas meeting with customers, colleagues, the community and of course conference-goers. If you didn’t catch the conference, last week was the SharePoint Conference 2012. It was a great week full of new areas for the SharePoint community to explore. It was especially interesting for me, as the conference brought together two keen areas of focus: SharePoint and Windows Azure. This intersection manifested through the introduction of the new SharePoint cloud-hosted app model to the SharePoint community, and it is here where you really find these two platforms intersecting.

It started with the keynote, which was headlined by Jeff Teper, top brass in Office, who with the help of the SharePoint gang showed off some of the new Social, Mobile and Cloud-centric aspects of SharePoint 2013. To the theme above, it was also good to see Scott Guthrie there as well—introducing the 10,000-ish people to the cloud developer story. The discussion and buzz around this new app model continued throughout the week with many questions focusing on the what and how of these new apps.

Discussing Autohosted and Provider-Hosted…

Having spent some time in this area, I had the chance to present a session on the topic as well as deliver a post-con workshop to a group of very enthusiastic developers with tons of questions about the new cloud app model. I’ve posted my deck below here so for those that couldn’t attend, you can peruse the deck and at least get a sense for what we were discussing—which focused on the Autohosted app and the Provider-Hosted app. These are in essence the two new app models introduced in SharePoint 2013 having strong integration with Windows Azure.

The Autohosted app model natively leverages Windows Azure when you deploy the app to SharePoint, and the Provider-Hosted app enables you to use Windows Azure or other Web technologies (such as PHP). Now one of the questions that kept coming up this week was as follows: “if they both use Windows Azure, then what is the difference between the two?” And this is where things get interesting.

Both hosting models are intended to move code off of the server; this is part of the reason for the move to a new cloud-hosted model. For you SharePoint admins out there, this should make you happy—this abstraction of code away from the server should keep ill-performing or malicious code from running on the server. However, this is only part of the reason. The other part is that in the new world of Cloud computing, services are rolled out at a more frequent cadence. And while you’re introducing value to the customer through services that constantly improve and evolve (in a much shorter cycle), you don’t want those updates to collide with your customizations. Updates should be seamless, and abstracting the code from the server helps support this process.

Okay, so we’ve moved code off of the server into these two models. But how are they different? The Autohosted and Provider-hosted models differ in a number of ways:

  1. The Autohosted app model leverages Windows Azure natively, so when you create the app and deploy it to Office 365 the Web app components and database are using the Windows Azure Web role and Windows Azure SQL Database under the covers. This is good because it’s automatic and managed for you—although you do need to ensure you programmatically manage the cross-domain OAuth when hooking up events or data queries/calls into SharePoint. So, top-level points of differentiation: 1) the Autohosted app model uses the Web Sites and SQL Database services of Windows Azure, and 2) it is deployed to Windows Azure (and of course to the SharePoint site that is hosting the app). If you’re building departmental apps or light data-driven apps, the Autohosted option is good. And there are patterns to use if you want to replace the default ASP.NET Web project with, say, an ASP.NET MVC4 Web project to take advantage of the MVVM application programming model.
  2. The Provider-hosted app model supports programming against a much broader set of Windows Azure features—mainly because you are managing the hosting of this type of app so you can tap into, say, Cloud Services, Web Sites, Media Services, BLOB Storage, and so on. (And if these concepts are new to you, then check out the Windows Azure home page here.) Also, while the Autohosted app model tightly couples Windows Azure and SharePoint within a project and an APP that is built from it, the Provider-hosted app provides for a much more loosely-coupled experience. And as I mentioned earlier, this broader experience of self-hosting means that you can also use other Web technologies within the Provider-hosted app.

So, let’s spend a little time in this post on the Autohosted app model, and then I’ll return in a follow-up post with the Provider-hosted app model.

Building your first Autohosted App…

First, open up Visual Studio 2012 and click File, New Project. Select Office/SharePoint, and then select Apps and then App for SharePoint 2013. Provide a name for the app and click OK.


When prompted in the New app for SharePoint dialog, leave the default name for the app, select your O365 tenancy and then select Autohosted as the project type. Click Finish when done.


Visual Studio creates two projects for you: a SharePoint app project and a Web project. Right-click the Web project and select Add, Class. Call the class Sales, and then add the code below. This is a simple class with three properties: an ID, a Quarter (which represents a fiscal quarter), and TotalSales (representing the total sales for that quarter).

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace MyFirstAutohostedAppWeb
{
    public class Sales
    {
        public int ID { get; set; }
        public string Quarter { get; set; }
        public string TotalSales { get; set; }
    }
}


Then double-click the Default.aspx page and replace the <form> element contents with the following code. This provides you with a GridView object, to which you will bind some in-memory data, and a LinkButton, which will trigger the binding.

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="MyFirstAutohostedAppWeb.Pages.Default" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">

<html xmlns="">
<head runat="server">
</head>
<body>
    <form id="form1" runat="server">
        <p style="font-family: calibri">Simple Autohosted Sales App</p>
        <asp:GridView ID="salesGridView" runat="server" CellPadding="4" Font-Names="Calibri" Font-Size="Small" ForeColor="#333333" GridLines="None">
            <AlternatingRowStyle BackColor="White" ForeColor="#284775" />
            <EditRowStyle BackColor="#999999" />
            <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
            <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
            <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
            <SortedAscendingCellStyle BackColor="#E9E7E2" />
            <SortedAscendingHeaderStyle BackColor="#506C8C" />
            <SortedDescendingCellStyle BackColor="#FFFDF8" />
            <SortedDescendingHeaderStyle BackColor="#6F8DAE" />
        </asp:GridView>
        <br />
        <asp:LinkButton ID="lnkGetSalesData" runat="server" Font-Names="Calibri" Font-Size="Small" OnClick="lnkGetSalesData_Click">Get Sales</asp:LinkButton>
    </form>
</body>
</html>

Right-click the Default.aspx page and in the code-behind file (Default.aspx.cs), amend the file as per the code below. You’ll see a List collection here (the object you’ll use for the binding), a number of key objects that are used by OAuth to generate the access token for the app, and some simple logic to create an in-memory data object and bind it to the GridView.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace MyFirstAutohostedAppWeb.Pages
{
    public partial class Default : System.Web.UI.Page
    {
        List<Sales> mySalesData = new List<Sales>();
        SharePointContextToken contextToken;
        string accessToken;
        Uri sharepointUrl;

        protected void Page_Load(object sender, EventArgs e)
        {
            string contextTokenString = TokenHelper.GetContextTokenFromRequest(Request);

            if (contextTokenString != null)
            {
                contextToken = TokenHelper.ReadAndValidateContextToken(contextTokenString, Request.Url.Authority);

                sharepointUrl = new Uri(Request.QueryString["SPHostUrl"]);
                accessToken = TokenHelper.GetAccessToken(contextToken, sharepointUrl.Authority).AccessToken;

                lnkGetSalesData.CommandArgument = accessToken;
            }
        }

        protected void lnkGetSalesData_Click(object sender, EventArgs e)
        {
            string accessToken = ((LinkButton)sender).CommandArgument;

            if (IsPostBack)
            {
                sharepointUrl = new Uri(Request.QueryString["SPHostUrl"]);
            }

            Sales FY11 = new Sales();
            Sales FY12 = new Sales();
            Sales FY13 = new Sales();

            FY11.ID = 1;
            FY11.Quarter = "FY11";
            FY11.TotalSales = "$2,002,102.00";

            FY12.ID = 2;
            FY12.Quarter = "FY12";
            FY12.TotalSales = "$2,500,201.00";

            FY13.ID = 3;
            FY13.Quarter = "FY13";
            FY13.TotalSales = "$2,902,211.00";

            // Populate the in-memory collection and bind it to the GridView.
            mySalesData.Add(FY11);
            mySalesData.Add(FY12);
            mySalesData.Add(FY13);

            salesGridView.DataSource = mySalesData;
            salesGridView.DataBind();
        }
    }
}

When you’re done, hit F6 and build the app. Then, double-click the AppManifest.xml file to open the designer, click the Permissions tab and set the Web scope to have Read permissions.


At this point, you can right-click and Deploy (and sign into your tenancy to deploy), or right-click and Publish, then navigate to your SharePoint site, click new app to deploy, upload the package, click Deploy, and trust your app.

The result should be a deployed app, and when you click it and hit the Get Sales link button, the result should be something fantabulous like the following:


Now if you inspect the URL for this new app, you’ll see something interesting. As per the URL below, you’ll see the GUID for the app, the fact that it’s being hosted in the domain (in Windows Azure), and then you have token(s) and data that are embedded within the URL such as the SPHostUrl, the Language, and so on.
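On the client side, query-string values like SPHostUrl and SPLanguage are usually pulled apart with a small helper before calling back into SharePoint. A minimal sketch in plain JavaScript; the helper name and the sample query string are mine, not from the SDK:

```javascript
// Extract a named parameter (e.g. SPHostUrl) from a query string.
// Returns the decoded value, or null if the parameter is absent.
function getQueryStringParameter(search, name) {
  var params = search.replace(/^\?/, '').split('&');
  for (var i = 0; i < params.length; i++) {
    var pair = params[i].split('=');
    if (decodeURIComponent(pair[0]) === name) {
      return decodeURIComponent(pair[1] || '');
    }
  }
  return null;
}

// Hypothetical query string of the shape described above.
var spHostUrl = getQueryStringParameter(
  '?SPHostUrl=https%3A%2F%2Fcontoso.sharepoint.com&SPLanguage=en-US',
  'SPHostUrl');
console.log(spHostUrl); // "https://contoso.sharepoint.com"
```

In a page you would pass window.location.search as the first argument.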

So, congrats if this was your first Autohosted app you built. It was a relatively simple one, but in time you’ll build some interesting apps that leverage more complex patterns. You can download the code from here. (Open the project, click the App project and then change the SharePoint Site URL to point to your SharePoint Online site before you build and deploy.)

For those that want to do more, there’s also a great sample from the SDK that illustrates OAuth and OData, which you can find here. Nice walkthrough that actually works without any rework save for changing the site property because it uses an existing hidden list.

Well, that’s it for today. More to come on the Provider-hosted app model soon.

‡ My (@rogerjenn) LightSwitch HTML Client Preview 2: OakLeaf Contoso Survey Application Demo on Office 365 SharePoint Site article of 11/17/2012 begins:

The Visual Studio LightSwitch Team (@VSLightSwitch) released Visual Studio LightSwitch HTML Client Preview 2 for Visual Studio 2012 Standard Edition or higher to the Web on 11/12/2012. For more information on the Web release see the “LightSwitch” section of Windows Azure and Cloud Computing Posts for 11/14/2012+. The new SharePoint Apps model lets you build SharePoint 2013 apps with LightSwitch.


The HTML Client Preview 2 is included in the Microsoft Office Developer Tools for Visual Studio 2012 Preview 2 (OfficeToolsForVS2012GA.exe). Installing the tools adds LightSwitch HTML Application (Visual Basic) and (Visual C#) templates to the LightSwitch group:


To build SharePoint 2013 apps with LightSwitch and the SharePoint Apps model, you must sign up for a free Office 365 Developer Site. Here’s the OakLeaf Team Site with the SurveyApplicationCS project published to it:


Clicking the SurveyApplicationCS link runs the App in your browser.

View the public OakLeaf Systems Office 365 Developer Site:


Many of this post’s images are included in my LightSwitch HTML Client Preview 2 Pinterest board.

The LightSwitch Survey Application Tutorial

The LightSwitch Team prepared a Survey App Tutorial: Developing a SharePoint Application Using LightSwitch (updated 11/15/2012), which contains LightSwitchSurveyApplicationTutorial.docx and LightSwitchSurveyApplicationTutorial.pdf, as well as C# or VB versions of SurveyTutorialFiles.sln.

Tip: If you use the tutorial’s C# version, be sure to copy source code from LightSwitchSurveyApplicationTutorial.docx, not the PDF version. My Contoso Tutorial's C# Code for PhotosController is Corrupted in PDF File thread in the LightSwitch HTML Client Preview Forum describes the problem with the PDF version.

From the tutorial’s Overview:

Contoso Foods is both a food manufacturer and distributor that sells a variety of products to grocery stores nationwide. Contoso Foods’ Sales Representatives play an important role in maintaining relationships with partner stores by frequently visiting each location to deliver products and conduct quality surveys. Quality surveys are completed for every product to measure the presence that the product has within the store. Typical survey data that is collected by Sales Representatives includes:

  • Cleanliness of the product display (ranging from “very poor” to “excellent”)
  • Quality of the product display lighting (also ranging from “very poor” to “excellent”)
  • Product location within an aisle (middle of aisle, end of aisle, or aisle end-cap)
  • Product shelf height position (top shelf, eye-level shelf, or bottom shelf)

In addition, as part of completing surveys, photos are taken of the product display to support the overall assessment.

On a weekly basis, Sales Representatives visit store locations to take product surveys. Currently, survey data is captured using a clipboard and pen, but this method is slow and increases the likelihood of transcription errors. Also, this method makes it difficult to take and attach photos to surveys. To address these problems, the sales team has decided to create a Survey Application that Sales Representatives can access from their tablet devices to easily collect survey data and attach photos that have been taken using their device. Specifically, the Survey Application will be an Office 365 SharePoint application created using Visual Studio LightSwitch. Key reasons for this approach are:

  • The sales team recently switched to Office 365 for internal email and collaboration, so Sales Representatives are already used to signing into the team’s SharePoint Online site to view customer contact information, marketing material, and customer invoices. Based on this, the team’s SharePoint site is the logical place to host and launch the Survey Application.
  • SharePoint Online offers easy access and management of images. SharePoint’s Picture Library automatically creates thumbnail and web optimized versions of images which improves performance when displaying photos within the application.

This tutorial will walk you through the steps for developing the Survey Application that Contoso Foods’ Sales Representatives will use for completing survey assessments.

Here’s the opening screen with two surveys completed for the logged in Sales Representative:


Note: The sample uses OAuth authentication managed by SharePoint.

Selecting an existing item displays editable survey record details:


Note: I added items for Trader Joe’s - Oakland and Trader Joe’s - Oakland, as well as some of my favorite Trader Joe’s products to the original lists in the ApplicationDataService_SampleData.cs file’s ApplicationDataService class. …

and continues with screen captures of image importation, deletion, and thumbnail display.

•• Brian Moore reported a problem with Publishing LightSwitch applications to Azure Web Sites in an updated 11/16/2012 thread in the LightSwitch HTML Client Preview forum:

We've discovered an issue with publishing applications to Windows Azure Web Sites. The list of web sites will not populate for selection, so you're unable to select a destination web site. The problem began earlier this week with an update to the Azure Web Site preview release.

To work around this problem in LightSwitch, you can publish to your Azure Web Site using the IIS publishing path in the wizard. The easiest way to do this is to download the .publishSettings file from the Azure management portal and import those settings into the LightSwitch publish wizard.

First, log into the portal and find the link to download the settings on the DASHBOARD tab for the web site you want to use.

Next, when choosing your Application Server Configuration in the LightSwitch publish wizard, choose IIS Server, and then import the settings file you downloaded from the portal.

This will then publish to your Azure Web Site, using the IIS path.

We hope to have the issue resolved soon.

•• Eric Erhardt described how to Diagnose HTTP 401 errors when running SharePointLaunch.aspx and other procedures in an 11/16/2012 thread in the LightSwitch HTML Client Preview forum (scroll to last message):

Here's what I do to diagnose 401 errors during SharePointLaunch.aspx (and a lot of other errors on the server).

Open Solution Explorer and go to "File View". Find your "Server" project and open your Web.config.

In the Web.config, ensure the following settings are enabled (Note: tracing is not recommended in a production application. It should only be used to diagnose issues and during debugging):

    <add key="Microsoft.LightSwitch.Trace.Enabled" value="true" />
    <add key="Microsoft.LightSwitch.Trace.Level" value="Verbose" />
    <add key="Microsoft.LightSwitch.Trace.Sensitive" value="true" />

Now F5 again, and when you get the 401 error, navigate to the following site:


In the trace log, you will see a POST request for the SharePointLaunch.aspx page. It should have a 401 status code. Click the "View Details" link. In there you should see some red error messages. This exception message should lead you to what problem you are running into. If you need help, post this exception message on the forums and we'll take a look.

•• Beth Massi (@bethmassi) solved my Contoso Tutorial's C# Code for PhotosController is Corrupted in PDF File problem by adding a LightSwitchSurveyApplicationTutorial.docx file on 5/15/2012:

I changed the tutorial download to the docx file. This is the document we all tested with and it works. You can download it here:

•• Scott Guthrie (@scottgu) demonstrated the Windows Azure and SharePoint 2013 underpinnings of the LightSwitch HTML Client Preview 2 in a 00:22:33 video, Scott Guthrie's Portion of the SharePoint Conference 2012 Opening Keynote, on YouTube:


For more about LightSwitch HTML Client Preview 2, see the articles below.

• Tim Huckaby (@TimHuckaby) analyzed Visual Studio 2012 LightSwitch's New HTML5 Features and Beyond in an 11/15/2012 article for DevPro:

When I worked on one of Microsoft's product teams in the late 1990s, I worked side-by-side with developers who refused to work in Visual Studio and opted to write code in a tool that they were much more familiar with: Notepad. No lie. I look back at those days smiling and thinking of myself as a junior-league developer showing Visual Studio to some of the most brilliant software engineers in the world. And some developers rebuked me and refused to use Visual Studio. Although it's safe to say that almost every developer at Microsoft now uses Visual Studio, I'm guessing that some of the original Win16 devs are still using Notepad.

Visual Studio has evolved over the last decade from a simple GUI-based code editor to an enormous suite of tools. LightSwitch is one of the hidden gems in Visual Studio 2012. LightSwitch isn't included in any edition of Visual Studio Express 2012; it's available only in Visual Studio 2012 Professional and higher editions.

LightSwitch Brings Devs Productivity Benefits

LightSwitch was publicly announced in July 2011. At that time I would have called LightSwitch a code generation or application generation tool that helps create departmental Silverlight apps. Consequently, some advanced Silverlight developers immediately dismissed LightSwitch as a development tool.

But the LightSwitch product team, which includes some of the most brilliant people in Microsoft's developer division, hasn't been sitting around doing nothing for the past year. LightSwitch is useful for more development projects than for Silverlight app generation alone. I wrote a column back in June that the LightSwitch team was building HTML5 application generation functionality. This week's announcements make good on that promise because Microsoft has released a preview update to LightSwitch that installs right into Visual Studio 2012.

This update introduces several impressive new features. LightSwitch's development experience for HTML5 has vastly improved from top to bottom with better IntelliSense, better JavaScript debugging, and better screen design support. Also, you'll find better theming and control functionalities to help tailor your desired look and feel in the IDE. LightSwitch is now jQuery Mobile–compatible, which means that you can use ThemeRoller to build your CSS in a WYSIWYG GUI and jQuery Mobile controls can be used as-is. LightSwitch promises end-to-end phone support, which makes smartphones a first-class device target. And as you'd expect, there are new controls and APIs included in the update.

Before you scoff and say, "I'm an awesome developer, and I don't need an app generation tool to help me build software," picture this: I saw LightSwitch team member Beth Massi at a user group presentation this week build and deploy an Azure service (and not a trivial one) in about five minutes through LightSwitch. "Why in the world would you go through all the pain of infrastructure, configuration, and deployment if you had a tool like this to do it?" Massi said. She's right. You wouldn't. I believe that some of these LightSwitch features will be used by even the most experienced developers. Massi also demoed the deployment of an HTML5 application that's built in LightSwitch straight to an Azure website with brain-dead simple ease.

LightSwitch Announcements About the Future

The LightSwitch team also made some pretty exciting announcements for SharePoint developers. SharePoint 2013 is a very different platform from SharePoint 2010, and the LightSwitch team is taking advantage of some of its plumbing. SharePoint 2010 runs as a single website where all application code runs alongside SharePoint code. SharePoint 2013 provides application isolation in which all server code is sandboxed. In SharePoint 2013, all applications are web applications that are hosted on web servers, which makes them easier to update and migrate and enables hosting and cloud scenarios.

What does this mean? First, most enterprises use SharePoint as a hub or portal for corporate activity, and LightSwitch applications can be integrated into that experience. SharePoint 2013 can host Silverlight or HTML5 LightSwitch applications. You can even enable existing LightSwitch applications for SharePoint 2013 easily. Second, LightSwitch applications can consume SharePoint data and leverage the SharePoint programming model to enable new integrated application scenarios.

Where to Go Next

Are you ready to start LightSwitch development? You can download the bits and install them into Visual Studio 2012. The LightSwitch team blog and developer center are great resources to get started with LightSwitch. And if you've got a burning question about LightSwitch development, then be sure to check out Microsoft's LightSwitch forums for additional guidance.

Related Content:

Stay tuned for articles about my experience completing the Visual Studio LightSwitch Team’s Contoso Survey Application tutorial for the LightSwitch HTML Preview 2 and SharePoint Online with an Office 365 Developer Preview account.

Beth Massi (@bethmassi) posted Getting Started with the LightSwitch HTML Client Preview 2 on 11/13/2012:

If you haven’t heard, we released the HTML Client Preview 2 yesterday morning! This release is part of the Office Tools Preview 2 release that was announced at the SharePoint Conference keynote in Las Vegas (as well as on Soma’s blog). With this release, LightSwitch enables developers to easily build touch-oriented business applications with HTML5 that run well across a breadth of devices. These apps can be standalone, but with this preview developers can now also quickly build and deploy data-driven apps for SharePoint using the new web standards-based apps model.

I am personally very excited about these new capabilities. This allows LightSwitch developers an easy entry into web development and web developers an easy entry into business app development. For me, I’m learning just the little bit of JavaScript that I need to build a full-blown HTML5 mobile application. Plus, now I can take advantage of SharePoint to not only deploy and access these apps, but also work with SharePoint assets.

Of course SharePoint is not required to build and deploy LightSwitch applications -- they can be hosted on any IIS web server or even in Azure (as I’ve shown before) -- but many enterprises today use SharePoint as a portal of information and applications while using SharePoint’s security model to control access permissions. So with the new SharePoint 2013 apps model, deploying LightSwitch HTML5 applications to SharePoint (or Office 365) becomes very compelling for businesses.

LightSwitch really is the easiest way to create modern line of business apps for the enterprise. Exciting times! So how do you get started?

We created the LightSwitch HTML Client page on the Developer Center that has the resources you need, like the download, tutorials, and documentation. Make sure to point people there.

Here are some more resources to get you started.

1- Get the preview

The LightSwitch HTML Client Preview 2 is a Web Platform Installer (WPI) package which is included in the Microsoft Office Developer Tools for Visual Studio Preview 2. This includes other components for building SharePoint 2013 Apps. Make sure you have Visual Studio 2012 Professional or higher installed first.

Download: Microsoft Office Developer Tools for Visual Studio - Preview 2

2- Work through the tutorials

We’ve got a couple tutorials that we released to help you learn the new capabilities. Also check out the HTML Client documentation on MSDN.

LightSwitch HTML Client Tutorial – this tutorial walks you through building an application that connects to existing data services and provides a touch-first, modern experience for mobile devices.

LightSwitch SharePoint Tutorial – this tutorial shows you how to use LightSwitch to build a SharePoint application with an HTML client that runs on a variety of mobile devices. The tutorial shows you how to sign up and deploy to an Office 365 online account.

3- Ask questions and report issues in the forum

We’ve got a dedicated forum specifically for getting feedback and answering questions about the HTML Client Preview release. The team is ready and listening so fire away!

LightSwitch HTML Client forum

4- Read our blog

We’ve got a great line-up of posts rolling out in the next several weeks to help you understand the architecture of the apps you build as well as tips & tricks on how to build them. Check out the first post from one of our architects:

New LightSwitch HTML Client APIs

And keep tabs on the LightSwitch Team Blog for a lot more!

5- Spread the word & join the community!

Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:

LightSwitch MSDN Forums
LightSwitch Developer Center
LightSwitch Team Blog
LightSwitch on Twitter

Steven Provine of the Visual Studio LightSwitch Team (@VSLightSwitch) described New LightSwitch HTML Client APIs in an 11/13/2012 post:

Following the recent announcement of the LightSwitch HTML Client Preview 2, we are excited to introduce a set of additional APIs that allow further control over the behavior and capabilities of a LightSwitch HTML-based application. You can download Preview 2 from the Developer Center.

The APIs we included as part of the June Preview primarily covered UI customization through custom rendering of controls in the DOM, post-rendering (i.e. tweaking) of existing DOM elements, and a data binding API to wire the UI up to data. For Preview 2, we have significantly improved the design-time coding experience with better IntelliSense support and debugging of the original source code, introduced a number of additional coding entry points for responding to common events, and built a richer set of APIs for interacting with the application.

IntelliSense Support

We know that writing JavaScript code can be hard when you don’t know the shapes of all the objects you are working with. IntelliSense is one mechanism that can be used to help guide developers to discover and use new APIs. In the June Preview, we offered IntelliSense support for the jQuery library but had minimal support for the LightSwitch library and application-specific assets such as entities or screens.

This all changes with Preview 2. Whether navigating the contents of the LightSwitch library object (msls) or understanding the properties on generated entity or screen objects, IntelliSense now offers completion lists with appropriate glyphs and tooltip descriptions for each item. Here are some examples:



We still have work to do in this area to ensure IntelliSense always works in all the expected places, such as in callback functions passed to promise objects, but it has been much improved since the June Preview. Please try it out and let us know what you think!

The Entity and Screen created Events

It is a very common scenario to configure defaults for new entities and screens. The June Preview provided no way to plug in code at the appropriate time to implement this. Preview 2 introduces the created event, allowing you to write code immediately after a new entity or screen has been created.

Suppose you have created a new HTML client project and designed an Order entity in the Entity Designer as follows:


Now, switch the perspective of the entity from “Server” to “Client” using the buttons at the bottom of the designer:


Having different entity perspectives is a new feature with Preview 2 and allows you to independently configure aspects of how an entity appears on the server versus a client. One of these aspects is code. Specifically, it allows you to write JavaScript-based entity code that can run in the browser instead of using C# or VB which can only execute on the server.

Now in the designer toolbar, locate the “Write Code” dropdown, open it, and click the “created” link. This will open a code file and output the event stub into which you can add some defaulting logic:

myapp.Order.created = function (entity) {
    // Write code here.
    entity.OrderDate = new Date();
};

In this case, a new Order on this HTML Client will set its OrderDate property to the current date.

To try this out, let’s first configure the summary property for the entity to show the OrderDate. This is located in the properties window in the entity designer:


With this done, right click on the Client project, choose “Add Screen…” and add a browse screen for orders, then do the same for an add/edit details screen for orders. Return to the browse screen and add a button that adds and edits a new order entity:



If you run this application and click the “Add Order” button, it will launch the add/edit details screen and the order date will be set to the current date.

Custom Screen Methods

While the June Preview enabled a lot of flexibility regarding the user experience via the render and postRender events, it provided no built-in capability to add buttons to your UI that call custom code to perform specific business tasks. Preview 2 introduces custom screen methods that are designed in a similar manner to screen methods in Silverlight. For each content item that represents a button, you can write canExecute code, which determines the visible or enabled state of the button, and execute code, which actually performs the work.

Continuing the example from the previous section, let’s add a custom button that will import some orders from the sample Northwind OData service:



If you right click either the screen method or the content item representing the button in the tree, you can choose to “Edit Execute Code”. This takes you to the code editor and produces the necessary stub, into which we can add the necessary code:

myapp.BrowseOrders.ImportOrders_execute = function (screen) {
    // Write code here.
    var northwind = "";
    return msls.promiseOperation(function (operation) {
        OData.read({ requestUri: northwind + "/Orders?$top=10",
            recognizeDates: true,
            enableJsonpCallback: true },
            function success(result) {
                var results = result.results;
                for (var i = 0, len = results.length; i < len; i++) {
                    var importedOrder = screen.Orders.addNew();
                    importedOrder.OrderDate = results[i].OrderDate;
                }
                operation.complete();
            },
            function error(e) { operation.error(e); });
    });
};

Since calling an OData service is an asynchronous operation and is potentially long running, this code uses the msls.promiseOperation() function to create a LightSwitch-compatible promise object that represents the operation of calling the OData service and processing the results. This promise is then returned to the LightSwitch runtime. The runtime uses this promise to track the operation, and if it appears to be taking a long time, a progress indicator is shown to the user, blocking additional operations from running and interfering with the current operation.
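To make that complete/error contract concrete, here is a plain-JavaScript sketch of the "operation" shape described above. The worker function receives an operation object and reports its outcome by calling operation.complete(value) or operation.error(err). This illustrates only the calling pattern, not the actual msls.promiseOperation implementation, which returns a promise object tracked by the LightSwitch runtime; the function and variable names here are my own.

```javascript
// Sketch of the operation contract: the worker reports its outcome by
// calling operation.complete(value) or operation.error(err).
function promiseOperationSketch(worker) {
    var outcome = { done: false, value: null, err: null };
    var operation = {
        complete: function (value) { outcome.done = true; outcome.value = value; },
        error: function (err) { outcome.done = true; outcome.err = err; }
    };
    worker(operation);
    return outcome;
}

var outcome = promiseOperationSketch(function (operation) {
    // A real worker would start an async call here and complete later.
    operation.complete(42);
});
// outcome.done → true, outcome.value → 42
```

In the real API, returning the promise to the runtime is what lets it show the progress indicator and block conflicting operations while the work is in flight.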

LightSwitch uses OData to communicate with the middle tier, so all HTML clients already include the datajs client library which provides the OData object for reading and updating data through an OData service. This code simply reuses that API to request data from the external Northwind OData service, then on success, adds the imported orders into the screen’s collection of orders and sets the order date for each one.
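To make the query itself concrete, here is a hypothetical helper (not part of datajs or LightSwitch) that assembles the kind of options object passed to OData.read in the snippet above. $top and $orderby are standard OData system query options; the service root URL and entity set name below are placeholders, not the real Northwind endpoint.

```javascript
// Hypothetical helper that builds an OData.read options object from a map
// of system query options; all names and URLs here are illustrative.
function buildODataRequest(serviceRoot, entitySet, queryOptions) {
    var parts = [];
    for (var name in queryOptions) {
        if (Object.prototype.hasOwnProperty.call(queryOptions, name)) {
            // Each option becomes a "$name=value" system query option.
            parts.push("$" + name + "=" + encodeURIComponent(queryOptions[name]));
        }
    }
    var uri = serviceRoot + "/" + entitySet;
    if (parts.length > 0) {
        uri += "?" + parts.join("&");
    }
    return { requestUri: uri, recognizeDates: true, enableJsonpCallback: true };
}

var request = buildODataRequest("http://example.com/Northwind.svc", "Orders",
    { orderby: "OrderDate", top: 10 });
// request.requestUri →
//   "http://example.com/Northwind.svc/Orders?$orderby=OrderDate&$top=10"
```

The resulting object has the same three fields the execute code passes to OData.read: the request URI plus the recognizeDates and enableJsonpCallback flags.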

Now to illustrate some simple canExecute code, let’s ensure that the user can only perform a single import per instance of the browse screen. First, add a data item to the screen of type Boolean called “ImportPerformed”. Then right click either the screen method or the content item in the tree and this time choose to “Edit CanExecute Code”:

myapp.BrowseOrders.ImportOrders_canExecute = function (screen) {
    // Write code here.
    return !screen.ImportPerformed;
};

Finally, add a line to the execute code that sets this property to true right before completing the operation:

                    importedOrder.OrderDate = results[i].OrderDate;
                }
                screen.ImportPerformed = true;
                operation.complete();
            },

When the button is initially shown, the canExecute code is called and it returns true:


When the import operation has completed, the runtime automatically detects that the ImportPerformed property changed and calls the canExecute code again, and this time it returns false, which greys out the button:


Other Events

The events that have been described so far in this post are considered primary coding entry points for a LightSwitch application. These have end to end support in the designer and are static events, meaning they apply to all instances of the asset to which they are attached. For example, the entity created entry point applies to all newly created entities, and a screen method execute entry point applies to all instances of the screen in the running application.

In addition to these static events, there are also instance-based events that can be hooked up from static event handlers. These events are exposed using more traditional JavaScript event patterns, which include both single (obj.onevent) and multi-handler syntaxes (obj.addEventListener()).
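The two hookup styles mentioned above can be sketched in plain JavaScript. This is an illustration of the pattern only, not the msls internals; the Observable type and its members are invented for the example.

```javascript
// Sketch of the two event-hookup styles: a single-handler slot (obj.onevent)
// and an addEventListener-style list of handlers.
function Observable() {
    this.onchange = null;     // single-handler syntax
    this._listeners = [];     // multi-handler syntax
}
Observable.prototype.addEventListener = function (handler) {
    this._listeners.push(handler);
};
Observable.prototype.notify = function (e) {
    // Both styles are invoked when the event fires.
    if (this.onchange) { this.onchange(e); }
    this._listeners.forEach(function (handler) { handler(e); });
};

var obs = new Observable();
var calls = [];
obs.onchange = function (e) { calls.push("single:" + e); };
obs.addEventListener(function (e) { calls.push("multi:" + e); });
obs.notify("OrderDate");
// calls → ["single:OrderDate", "multi:OrderDate"]
```

The multi-handler style is what LightSwitch's addChangeListener() helper builds on, since several pieces of code may need to observe the same property.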

The most common instance event in LightSwitch is the “change” event. In fact, since it is so common, we introduced a helper API for attaching to change events – addChangeListener() – that uses addEventListener() under the covers. Since the HTML client does not currently support imperative validation, let’s expand on the Order example and see if we can use a change listener to provide some simple custom validation of the order date.

Open the add/edit details screen for the Order and write the following code in the screen created entry point:

myapp.OrderDetail.created = function (screen) {
    // Write code here.
    screen.Order.addChangeListener("OrderDate", function (e) {
        var order = screen.Order,
            contentItem = screen.findContentItem("OrderDate"),
            today = new Date();
        today.setHours(0, 0, 0, 0);
        if (order.details.entityState === msls.EntityState.added &&
            contentItem.validationResults.length === 0 &&
            order.OrderDate < today) {
            contentItem.validationResults = [
                new msls.ValidationResult(
                    order.details.properties.OrderDate,
                    "Cannot be before today.")
            ];
        }
    });
};

This code is initially invoked when the OrderDetail screen is created, but it subsequently attaches a listener that is called each time the OrderDate property of the Order for the screen changes. When this happens, it finds the content item that is backing the date picker control showing the order date in the UI, then sets the validation results if a) the entity is in a pending add state, b) the content item does not already have validation errors, and c) the date occurs before today’s date.
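The truncate-and-compare check at the heart of that listener can be isolated as a standalone sketch; zeroing the time-of-day fields makes the comparison run against midnight of the current day, so a timestamp earlier today does not trip the check. The helper name here is my own.

```javascript
// Zero out hours/minutes/seconds/milliseconds so the comparison is against
// midnight of the current day, then test whether the date falls before it.
function isBeforeToday(date) {
    var today = new Date();
    today.setHours(0, 0, 0, 0);
    return date < today;
}

var yesterday = new Date();
yesterday.setDate(yesterday.getDate() - 1);
// isBeforeToday(yesterday) → true; isBeforeToday(new Date()) → false
```

Without the setHours() call, a new order created at, say, 9 a.m. would compare against the current instant rather than midnight and could be flagged incorrectly.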

With this code, if you run the application and add a new order, then change the order date to a date before today, you will see the desired validation error attached to the control:


If you change it to a date in the future, the validation error disappears:


The code does not explicitly remove the existing validation error, but when the value changes the runtime clears out the current validation results and revalidates the data, so it works as desired in this case.

This example is not perfect (it does not include code to unhook from the change event when leaving the screen) but gives a sense for what can be achieved using these instance events. The change event is present on almost every LightSwitch object: entities, data services, screens, and many other objects accessible from these core objects.

The Application Object

As with the existing Silverlight client, a LightSwitch HTML client application combines a LightSwitch-provided shell with user-provided screens and data. The HTML client shell handles not only navigation between screens, tabs and dialogs but also the commit model for the application. These gestures are available through both the UI and through an application object exposed in the API.

The application object is accessible through the msls library object as “msls.application”, but also more directly through an alias to the same object in the global space called “myapp”. This object is strongly typed with all the entities and screens that have been defined for the application as well as the set of built-in methods that are called by the shell:


This snapshot is taken from the example application we have been building. You can see the Order entity and OrderDetail screen assets in the dropdown, and the selected item is a generated show screen method for the OrderDetail screen. Notice that the parameters to this method include “Order”, which is the designed screen parameter for this screen. The completion list also shows some of the built-in methods like navigateHome() and saveChanges(). The onsavechanges event is not covered in this post, but in brief, it allows you to customize what it means to save when there are changes pending across multiple data services that cannot be transacted as a single unit.

Let’s make one last change to the example application: alter the ImportOrders screen method so that it imports orders sorted by order date and then automatically navigates to edit the order with the most recent order date. Here is the new code snippet (the changed lines are the OData query URI and the call that shows the order detail screen):

myapp.BrowseOrders.ImportOrders_execute = function (screen) {
    // Write code here.
    var northwind = "";
    return msls.promiseOperation(function (operation) {
        OData.read({ requestUri: northwind + "/Orders?$orderby=OrderDate&$top=10",
            recognizeDates: true,
            enableJsonpCallback: true },
            function success(result) {
                var results = result.results, importedOrder;
                for (var i = 0, len = results.length; i < len; i++) {
                    importedOrder = screen.Orders.addNew();
                    importedOrder.OrderDate = results[i].OrderDate;
                }
                screen.ImportPerformed = true;
                myapp.showOrderDetail(msls.BoundaryOption.nested, importedOrder);
                operation.complete();
            },
            function error(e) { operation.error(e); });
    });
};

The OData URI is altered to include an orderby operator, and a line is added to show the order detail screen for the last imported order. The first parameter to this method – a boundary option – is used so the caller can tell the target screen how it is being hosted. In this case, the nested boundary option is used which means that the target screen begins a nested change set over the data and shows OK and Cancel buttons in the UI. You can find more information about the available boundary options in my previous architectural post.

This is not a very interesting example, but it does illustrate the usage of a show screen method on the application object and how you can integrate screen navigations into the rest of your business logic.

Debugging Support

Finally, a short word on debugging. For the June Preview, we had suggested that you use the F12 debugger in Internet Explorer or another browser to debug code because the integrated experience in Visual Studio was not yet available. With Preview 2, you can now set breakpoints in your original source code and they will be hit if you run your project in Internet Explorer under F5:


This support greatly simplifies the experience for debugging your custom LightSwitch code from inside Visual Studio.


The LightSwitch team has made a number of improvements to the coding experience for the HTML Client Preview 2, spanning design-time experiences, available coding entry points, and additional runtime APIs. This is still a work in progress, and we are eager to drive our next wave of API work using your feedback, so please use the HTML Client forum to post any questions or comments you have on what you have seen here.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

‡ David Linthicum (@DavidLinthicum) asserted “These four criteria will help you find -- or become -- the person who will make cloud computing work in your business” in a deck for his Defining the elusive cloud architect article of 11/16/2012 for InfoWorld’s Cloud Computing blog:

The cloud architect is much like Bigfoot: There are sightings, even some blurry video, but we really don't have solid proof that this creature exists.

The problem is cloud computing is so new that it's tough to find people who understand how all of it fits together for enterprises. Indeed, this is one of the biggest limitations around cloud adoption.

I call myself a cloud architect, and a few people like me are running around out there. But there aren't many, and even fewer who use the label correctly. How can you find one of your own? How can you become one?

I believe the first step is defining the knowledge required for the job, so you know what to look for or what to learn. Here is my short list:

1. An understanding of most cloud computing technology, both private and public. This is the tough one. Try keeping up with this space for even a month. It's exhausting. However, those who build cloud solutions need to have a holistic understanding of the available technology, including its proper function and use. This means understanding all OpenStack and CloudStack products, management tools, cloud security solutions, and of course where Amazon Web Services (AWS) fits in the mix.

2. An understanding of architectural best practices going back 20 years. The ability to design cloud solutions is based on architectural procedures and methods that go way back. If you understand what those are and have experience using those approaches, you won't end up reinventing the wheel in the world of cloud computing. This includes both service-oriented architecture (SOA) and more traditional enterprise architecture approaches.

3. A willingness to rethink what doesn't work to find what does. Much of a cloud architect's job is trying new things, seeing if they work, and, if not, finding alternatives that deliver. The best architects have open minds around the use of technology and are always moving to the optimal solution. But remember: Just because something works does not mean it's the right solution.

4. A willingness to work with and understand the business. Successful cloud computing solutions are those that align directly with the needs of the business, and cloud architects need to understand the business and work directly with the stakeholders. Many tech pros tend to get caught up in the technology and not understand the core business problems that need solving. That leads nowhere good.

Some cloud architects exist today, but most still need to be created. If you follow my guidance, you can more easily find a good one -- or become one yourself.

M. Sheik Uduman Ali (@Udooz) posted a Book Review: Cloud Architecture Patterns by Bill Wilder – O’Reilly on 11/14/2012:

Bill Wilder, who is an MVP for Windows Azure, has taken a nice initiative in the cloud computing space with cloud architecture patterns. A good collection.

Cloud computing is a relatively new and evolving technology, and we face many recurring problems when dealing with application migration. In this book, Bill introduces the basic tenets of cloud computing and its design principles as cloud architecture patterns. The book covers 14 basic and common cloud computing patterns.

The whole of chapter 1 discusses scalability in depth. I really liked his explanation of performance vs. scalability. He also lists the characteristics of cloud-native applications.

Though it does not fit neatly into the "pattern" mold, the Horizontally Scaling Compute pattern discusses all the design and anti-design aspects. The Queue-Centric Workflow pattern covers loose coupling and the asynchronous programming characteristics of cloud-native applications. However, the problems of asynchrony and end-user responsiveness are not covered here or in a separate pattern.

He covers the CAP theorem, sharding, and fan-out quite well. The multitenancy pattern could be explained in a much better way.

The Colocate and Valet Key patterns nicely cover efficient use of the network and establishing trust in third-party service integration.

The patterns try their best to be vendor-neutral. Fortunately, the book leans more toward Windows Azure.

This book is very useful for people who are new to the cloud computing space and are about to build or migrate applications on it.

For experts and people who have already spent part of their careers in cloud computing, it would be a refresher.

A good attempt at covering basic cloud computing architecture concepts.

You can buy this book from O’Reilly.

David Linthicum (@DavidLinthicum) asserted “It's time to make a path to the clouds for some of your business data. Here's how” in a deck for his 3 steps to a cloud database strategy that works article for InfoWorld’s Cloud Computing blog:

Every day, cloud-based databases add more features, decrease in cost, and become better at handling prime-time business. However, enterprise IT is reluctant to move data to public clouds, citing the tried-and-true excuses of security, privacy, and compliance. Although some have valid points, their reasons often boil down to "I don't wanna."

Soon it will be difficult to avoid the advantages offered by databases in public clouds. Consider the benefits they bring: elastic scalability, universal network accessibility, integration with mobile platforms, pay-per-use efficiencies, avoidance of capital costs, and access to widely scattered structured and unstructured data.

How do you map a path to the cloud-based database frontier? Here are three simple steps to follow:

Understand your data. "Of course we understand our data," you might retort. Not really -- most enterprises know that data exists and where it is physically, but have no clue as to what that data means or how it's interrelated. Just ask for a single definition of a customer, and see where that conversation goes.

You can't build new databases or migrate data unless you understand the meaning of that information trove. Although this may seem very fundamental (and it is), it's often where cloud database projects become derailed.

Define data security and data governance objectives. The big pushback on the use of public cloud-based databases is lack of control, often meaning the absence of security and governance. Ironically, with the cloud, you can increase your ability to control data. Public cloud-based databases can offer more sophisticated security models and mechanisms, as well as data-level governance to set policies around the use of data.

Define a path to improvement and not just a migration. Many in IT consider cloud computing to be a platform change, not a way to create more value from the data. If you're just moving data from local to remote systems, save yourself the trouble. Moving to the cloud means changing how you store and retrieve data. At least one reason for doing so should be to make the data more valuable to both operations-oriented data owners and those who use the data to make strategic decisions.

Three steps -- easy, right? Don't get me wrong: This is hard stuff. But most worthwhile projects require effort.

Steven Martin (@stevemar_msft) posted Announcing: Comprehensive Updates to Windows Azure Customer Support Offerings, and Free Standard Support till Dec 31, 2012 on 11/13/2012:

As customers transition to the cloud, a number of tasks traditionally handled in-house become the responsibility of a trusted cloud provider. While the idea of using a vendor for technical support isn’t new, the role of Customer Support in the cloud is quickly expanding beyond break/fix issues and into robust requirements tailored for specific needs.

Customer requirements for support are quickly outgrowing industry norms as the partnership between customers and cloud providers now spans from basic support all the way to advanced support from a team that knows your applications, your infrastructure, and also provides educational and advisory services.

Today we are pleased to announce significant enhancements to Windows Azure Customer Support which better reflect the needs of customers as they transition to the cloud and grow deployments. Key improvements include more areas covered by local language support, specialized support options that address a variety of needs including break/fix issues, rapid response guarantees, and on-site consulting for architecture and optimization decisions.

Effective immediately, customers can choose from five support options. All customers will receive Standard Support benefits through December 31, 2012 free of charge giving them time to evaluate and select the plan that best suits their business needs.

Here are the highlights of the updated support program:

  • All levels of support receive:
    • Improved forum support with twice the number of Microsoft engineers that provide timely answers to customers free of charge.
    • Increased seniority and technical expertise from our support engineers.
    • New support interface within the Management portal for faster support ticket submissions and the ability to manage, track, and interact with support engineers online.
    • Expanded global support with eight languages including: English, Chinese, Korean, French, German, Italian, Spanish, and Japanese.
    • Flat rate monthly pricing.
  • Additional Benefits for Professional Direct and Premier
    • Faster response time with priority routing and escalation assistance.
    • Access to technical managers who understand your business and have knowledge of your application and environment.
    • Access to advisory services to help customers with specific features, architectural, and implementation questions.
    • A host of online or in-person professional services including:
      • Assessments
        • .NET Application Migration
        • Application Scalability
        • Application Cost Effectiveness
        • Infrastructure and Identity
        • Knowledge Transfer
          • Developing Windows Azure Solutions Workshop
          • Windows Azure Proof of Concept Accelerator
          • Windows Azure Application Monitoring and Diagnostic Workshop
        • Process Optimization
          • Implementing Windows Azure Release Management

1 Additional information on Premier Support, including how to purchase, can be found here.

2 15-minute response time is only available with the purchase of Microsoft Rapid Response and Premier Support for Windows Azure.

3 Business hours for local languages and 24x7 for English.

4 Professional Direct is only available in US, UK, and Canada.

These new support offers reflect the expanded role of support in the public cloud. From forum based break/fix support, to expert help and training onsite from experts who are familiar with your applications, we’re offering a variety of options to meet your needs. We invite you to try out the Standard Support Offer benefits that are provided free of charge through December 31, 2012 and let us know what you think!

- Steven Martin, General Manager, Windows Azure Business Planning & Operations

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today.

<Return to section navigation list>

Cloud Security and Governance

Himanshu Singh (@himanshuks) posted Windows Azure SQL Database named an Enterprise Cloud Database Leader by Forrester Research on 11/14/2012:

Editor's Note: This post comes from Ann Bachrach, Senior Product Marketing Manager in our SQL Server team.

Forrester Research, Inc. has positioned Microsoft as a Leader in The Forrester Wave™: Enterprise Cloud Databases, Q4 2012. In the report posted here Microsoft received the highest scores of any vendor in Current Offering and Market Presence. Forrester describes its rigorous and lengthy Wave process: “To evaluate the vendors and their products against our set of criteria, we gather details of product qualifications through a combination of lab evaluations, questionnaires, demos, and/or discussions with client references.”

Forrester notes that “cloud database offerings represent a new space within the broader data management platform market, providing enterprises with an abstracted option to support agile development and new social, mobile, cloud, and eCommerce applications as well as lower IT costs.”

Within this context, Forrester identified the benefits of Windows Azure SQL Database as follows: “With this service, you can provision a SQL Server database easily, with simplified administration, high availability, scalability, and its familiar development model,” and “although there is a 150 GB limit on the individual database size with SQL Database, customers are supporting multiple terabytes by using each database as a shard and integrating it through the application.”

From the Microsoft case study site, here are a few examples of customers taking advantage of these features:

  • Fujitsu System Solutions: “Developers at Fsol can also rapidly provision new databases on SQL Database, helping the company to quickly scale up or scale down its databases, just as it can for its compute needs with Windows Azure.”
  • Connect2Field: “With SQL Database, the replication of data happens automatically…. For an individual company to run its own data replication is really complicated.… If we were to lose any one of our customer’s data, we would lose so much credibility that we wouldn’t be able to get any more customers. Data loss would destroy our customers’ businesses too.”
  • Flavorus: “By using sharding with SQL Database, we can have tons of customers on the site at once trying to buy tickets.”

The best way to try out SQL Database and Windows Azure is through the free trial. Click here to get started.

<Return to section navigation list>

Cloud Computing Events

• Steve Plank (@plankytronixx) reported availability of an Event: Video: Slide Decks: from first 2 days of Six Steps to Azure on 11/15/2012:

Six Steps to Windows Azure launched last week with over 160 attendees over the 2 days at our London kick-off events, which were run in partnership with the UK Windows Azure User Group. Our first event, Azure in the Real World, showcased some fantastic real-life solutions. Our second day focused on Advanced Topics in Windows Azure, which included Windows Azure Media Services and Web Services. Overall the feedback has been fantastic (#sixstepsazure). The audience was a real mix, from those who have just started with Windows Azure to those considering it in the coming year.


Here is the content that was delivered by a great line up of speakers on the 8th and 9th November.

8th November:

Powerpoint Decks:


9th November:

Powerpoint Decks:

What’s next in Six steps to Azure?

Step 2: Architecture and Design for Windows Azure - Join us online on 26th November:

Step 3: Integration with Mobile and the New World of Apps – Join us online on 4th December

We also have a number of Windows Azure Developer camps, which will take attendees from knowing nothing about the cloud to actually having deployed a simple application, and made it available on the public internet.

Other steps to follow but you can find out about the entire programme here.

Eric D. Boyd (@EricDBoyd) suggested that you Join me tomorrow on Channel 9 at Windows Azure Conf in an 11/13/2012 post:

Tomorrow, November 14, 2012, Microsoft will be hosting Windows Azure Conf, a free event for the Windows Azure community. This event will feature a keynote presentation by Microsoft Corporate Vice President, Scott Guthrie, along with numerous sessions from Windows Azure experts.

Windows AzureConf will be streamed live on Channel 9. This event will allow you to see how other developers are using Windows Azure to develop applications in the Cloud. Community members and industry experts from all over the world will join Scott in the Channel 9 studios to present their own inventions and experiences developing apps on Windows Azure.

At Windows Azure Conf, I will be presenting the following two sessions:

Building Cross-Platform Media Apps using Windows Azure Media Services
Applications with rich video and audio are increasingly popular, but preparing and delivering this media to consumers has historically required lots of costly infrastructure and setup. Windows Azure Media Services enables you to outsource your media management to the cloud to let you focus on developing your applications instead of this costly infrastructure. In this session, Eric will walk through building a cross-platform HTML5 media application for the web, Windows 8 and other devices you may use day-to-day.

Solving Security and Compliance Challenges with Hybrid Clouds
When considering public clouds, many industries and companies have concerns about security, intellectual property and regulatory compliance challenges. The good news is a hybrid cloud can often solve these challenges. In this session, Eric D. Boyd will teach you how to use Windows Azure and still protect sensitive information and achieve regulatory and compliance mandates, like PCI compliance, by combining on-premise data centers and private clouds with the Windows Azure public cloud. There are a number of ways to achieve this using messaging and networking technologies and during this presentation Eric will walk through the options and provide you with guidance on when to choose each.

Whether you’re just learning Windows Azure or you’ve already achieved success on the platform, you won’t want to miss this special event.

Register and Join Windows Azure Conf
Wednesday, November 14, 2012
10:30am-7:00pm CST

Cory Fowler (@SyntaxC4) reported Windows AzureConf is Tomorrow! in an 11/13/2012 post:

Three years ago, I began learning about this thing called “The Cloud.” At that point in time I made a decision, much like the one I made as a teenager to get into Web Development in the first place. The thought that led me down this career path was “Huh, this web thing is really going to take off, I need to be a part of it.” When I heard about Cloud Computing a similar thought ran through my mind: “Wow, this cloud thing is really cool, this is definitely the way of the future.”

A year into working with Windows Azure, I urged other developers to start researching the cloud [Get Started for Free] as it was still early enough to be ahead of the curve and set themselves apart from the rest. Enter Windows AzureConf.

I cannot say for certain whether I influenced any of these individuals in particular, but what I can say is that they also identified that the Cloud was the future. The event features a keynote from Scott Guthrie (The Gu) as well as presentations from the Windows Azure Community, drawing on their own real-world experience with Windows Azure.

Join us for Windows AzureConf

In June we launched a number of additional features and services into Windows Azure's offerings; since then we have added a great deal more, as highlighted in Satya Nadella's Day 2 Keynote Address at Build.

What will be covered at Windows AzureConf

Windows AzureConf will cover a wide variety of topics including:

  • HTML5
  • WebSockets
  • Windows Azure Media Services
  • Deployment
  • Keeping Costs low
  • SignalR
  • Windows Azure Mobile Services
  • Continuous Delivery
  • Building Cloud-Scale Applications
  • Node.js
  • Compliance
  • Messaging Architectures
How can I get started?

There are a number of ways to get started with Windows Azure including the Windows Azure Training Kit, tutorials on and of course reading the blogs of our community members:

Speaker Name Twitter Blog
Andy Cross @AndyBareWeb
Sasha Goldshtein @goldshtn
Rick Garibay @rickggaribay
Mihai Tataran @mihai_tataran
Eric Boyd @ericdboyd
Panagiotis Kefalidis @pkefal
Michael Collier @MichaelCollier
Magnus Martensson @noopman

Find more great Windows Azure Community Members on the MVP Site.


Scott Guthrie (@scottgu) chimed in with Free online Windows AzureConf this Wednesday on 11/12/2012:

This Wednesday, November 14th, we’ll be hosting Windows AzureConf – a free online event for and by the Windows Azure community. It will be streamed online from 8:30 AM to 5:00 PM PST via Channel 9, and you can watch it all for free.

I’ll be kicking off the event with a Windows Azure overview in the morning (a great way to learn more about Windows Azure if you haven’t used it yet!), and following my talk the rest of the day will be full of excellent presentations from members of the Windows Azure community. You can ask questions from them live and I think you’ll find the day an excellent way to learn more about Windows Azure – as well as hear directly from developers building solutions on it today.

Click here to learn more about the event and register for free to watch it live.

Hope to see you there!


P.S. We will also make the presentations available for download after the event in case you miss them.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡ Jeff Barr (@jeffbarr) described an AWS Marketplace Update - New Big Data Category in an 11/16/2012 post:

The AWS Marketplace has been growing by leaps and bounds. The number of listings has been growing steadily and we're really happy with the number of launches that are taking place.

Today we are adding a new Big Data category to the Marketplace.

We want to make it easier for you to store and analyze large amounts of data. We want to take away the tedium associated with setting up clusters and installing system software and applications so that you can run your business in an analytical, data-driven fashion.

To help you get to this point, we've created a new Big Data category in the AWS Marketplace. You'll be able to launch the applications of your choice on an EC2 instance that's best suited to the job at hand. You can even use the new High I/O Quadruple Extra Large instance type and its low-latency, high-performance SSD storage.

We've divided the category into three sections: Collection and Storage, Analytics and Computation, and Collaboration and Sharing. You can choose from the following Collection and Storage products:

  • Acunu Reflex - Apache Cassandra NoSQL database.
  • Couchbase - Community and enterprise editions NoSQL.
  • MongoDB - NoSQL database with and without EBS RAID storage.
  • ScaleArc - MySQL load balancing.
  • HANA One - In-memory real time data analysis.

Here are the Analytics and Computation products:

  • MapR M5 – Optimized Apache Hadoop distribution.
  • TreasureData - Hadoop based cloud data warehousing.
  • Metamarkets - Event based data processing.
  • Quantivo - Data association analytics.
  • KarmaSphere – Analytics workspace for Amazon Elastic MapReduce.

And here's what we have in the Collaboration and Sharing arena:

  • Aspera Faspex – On-demand 20 mbps data transfer. …

• Amy Barzdukas reported VMware (finally) admits that its costs are higher than Microsoft’s in an 11/15/2012 post to the Microsoft Server and Cloud Platform blog:

Recently we came across the updated VMware cost per application calculator and discovered what our customers and partners have been telling us all along – VMware vSphere 5.1 costs more than Windows Server 2012 and System Center 2012 combined. And we’re not just talking about license acquisition cost, but also the capital expenditure costs (CAPEX), including power, space, storage, and server hardware costs.

To see for yourself, plug the following values into the calculator:

  1. Number of VMs: 100
  2. Virtualization host type: Server B
  3. Network storage type: iSCSI SAN
  4. Compare to vendor: Microsoft
  5. VMware vSphere 5.1 edition: Enterprise Plus
  6. Management deployed on physical or virtual: virtual
  7. Electricity: low
  8. Real estate: low

(It’s important to note that these aren’t random values -- they represent a common datacenter virtualization scenario.) When you input these values into the calculator and review the output – the headline looks like the following:

The cost-per-application to virtualize 100 apps using VMware vSphere 5.1 Enterprise Plus edition is 19% higher than with Microsoft Hyper-V and System Center 2012.

However, we firmly believe you will save far more with Microsoft. According to VMware’s calculator, Microsoft’s total software cost ($974) is much lower than VMware’s ($1,491), but the infrastructure cost ($1,198) is higher than VMware’s ($1,083) infrastructure cost. Why is this the case?
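
Working from the per-application figures quoted above, the calculator's 19% headline can be reproduced with simple arithmetic (a sketch; the dollar amounts are the per-application costs reported by VMware's calculator for this scenario):

```python
# Per-application costs quoted from VMware's calculator output.
ms_software, ms_infra = 974, 1198    # Microsoft software + infrastructure
vm_software, vm_infra = 1491, 1083   # VMware software + infrastructure

ms_total = ms_software + ms_infra    # total cost per app, Microsoft
vm_total = vm_software + vm_infra    # total cost per app, VMware

# VMware's premium over Microsoft, as reported in the headline.
premium = (vm_total - ms_total) / ms_total
print(f"VMware costs {premium:.0%} more per application")
```

The software gap ($1,491 vs. $974) outweighs the infrastructure gap ($1,083 vs. $1,198), which is why the combined figure still favors Microsoft.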

This happens because the calculator assumes that a VMware ESXi host can run 20% more applications than a Microsoft Windows Server 2012 (Hyper-V) host—an assumption with little credibility or real-life customer evidence.

  1. The calculator bases the “run 20% more applications advantage” on a third party, VMware commissioned report from August 2011 that compares vSphere 5 to Microsoft Hyper-V 2008 R2 SP1. Dynamic Memory, introduced in Hyper-V 2008 R2 SP1 has been improved in Hyper-V 2012, a fact that VMware simply ignores. Moreover, you cannot apply the results of a test performed with a previous version of the product (Hyper-V 2008 R2 SP1) to the current version (Hyper-V 2012) and assume everything remains constant. Why would VMware choose to base results on older technology? This paper about the advantages of Hyper-V over vSphere 5.1 provides the likely answers.
  2. Also, this report doesn’t build on a realistic customer scenario. VMware was able to show an 18.9% performance improvement (and higher consolidation ratio) only when using many VMs running the exact same workload with the exact same data and overcommitting the host -- under specific VM configurations and settings. Ask yourself: do you ever run the exact same workloads with exact same data on a host and overcommit in a production environment? Your most likely answer is no.

When you purchase Microsoft’s Windows Server 2012 and System Center 2012, you get a complete private cloud solution. A realistic cost comparison with VMware should include VMware’s private cloud solution, introduced recently, named vCloud Suite 5.1. If you re-run the cost comparisons for 100 VMs using vCloud Suite 5.1, you’ll find that a VMware solution costs not 19% more, but around 440% more than a Microsoft solution with Windows Server 2012 and System Center 2012.
You can try the calculation here.

Let me know what you find when you do real apples to apples comparisons!

Amy Barzdukas
General Manager, Server & Tools Marketing

• Jeff Barr (@jeffbarr) announced New - Range Retrieval for Amazon Glacier on 11/15/2012:

Amazon Glacier is designed for storing data that is infrequently accessed. Once you have stored your data, you can retrieve up to 5% of it (prorated daily) each month at no charge.

Today we are making it easier for you to remain within the 5% retrieval band by introducing Range Retrievals. You can use this new feature to fetch only data you need from a larger file or to spread the retrieval of a large archive over a longer period of time.

Range Retrieval
Glacier's existing archive retrieval function now accepts an optional RetrievalByteRange parameter. If you don't provide this header, Glacier will retrieve the entire archive.

If you choose to provide this parameter, it must be in the form StartByte-EndByte. The value provided for StartByte must be megabyte aligned (a multiple of 1,048,576). The value provided for EndByte must be megabyte aligned if you are retrieving data from somewhere within the archive. If you want to retrieve data from StartByte up to the end of the archive, simply specify a value that is one less than the archive size.
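
As a sketch of the alignment rule (the helper function is mine, not part of any AWS SDK), a valid RetrievalByteRange value can be built like this:

```python
MB = 1048576  # ranges must be aligned on 1,048,576-byte boundaries

def retrieval_byte_range(start_mb, end_byte, archive_size):
    """Build a 'StartByte-EndByte' string for a Glacier range retrieval.

    start_mb is the starting offset in whole megabytes, so StartByte is
    always megabyte-aligned. end_byte must either be one less than a
    megabyte multiple, or one less than the archive size (i.e. the range
    runs to the end of the archive).
    """
    start = start_mb * MB
    if end_byte != archive_size - 1 and (end_byte + 1) % MB != 0:
        raise ValueError("EndByte must be megabyte-aligned or end of archive")
    return f"{start}-{end_byte}"

# Retrieve the second megabyte of a 10 MB archive:
print(retrieval_byte_range(1, 2 * MB - 1, 10 * MB))  # -> "1048576-2097151"
```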

When you upload data to Glacier, you must also compute and supply a tree hash. Glacier checks the hash against the data to ensure that it has not been altered en route. A tree hash is generated by computing a hash for each megabyte-sized segment of the data, and then combining the hashes in tree fashion to represent ever-growing adjacent segments of the data.

If you would like to use tree hashes to confirm the integrity of the data that you download from Glacier (and you definitely should), then the range that you specify must also be tree-hash aligned. In other words, a tree hash must exist (at some level of the tree of hashes) for the exact range of bytes retrieved. If you specify such a range, Glacier will provide you with the corresponding tree hash when the retrieval job completes.
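
The tree hash described above can be sketched in a few lines: SHA-256 over each 1 MiB chunk, with adjacent hashes combined pairwise up the tree. This follows Glacier's documented algorithm, but treat it as an illustration; the SDKs compute tree hashes for you.

```python
import hashlib

MB = 1048576

def tree_hash(data):
    """Compute a Glacier-style tree hash of a byte string.

    Hash each 1 MiB chunk with SHA-256, then repeatedly combine adjacent
    pairs of hashes (hashing their concatenation) until one hash remains.
    An unpaired hash at the end of a level is carried up unchanged.
    """
    chunks = [data[i:i + MB] for i in range(0, len(data), MB)] or [b""]
    hashes = [hashlib.sha256(c).digest() for c in chunks]
    while len(hashes) > 1:
        paired = []
        for i in range(0, len(hashes), 2):
            if i + 1 < len(hashes):
                paired.append(hashlib.sha256(hashes[i] + hashes[i + 1]).digest())
            else:
                paired.append(hashes[i])  # odd hash carried up unchanged
        hashes = paired
    return hashes[0].hex()
```

For data of one megabyte or less, the tree hash is simply the SHA-256 of the data, which is why small archives need no extra bookkeeping.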

This new feature is available now and you can start using it today. The AWS SDK for Java and the AWS SDK for .NET have been updated and now include support for Range Retrievals.

For More Information
Here are some quick links that you can use to learn more about Range Retrievals in Glacier:

• Jeff Barr (@jeffbarr) described New - Amazon Simple Workflow Recipes in an 11/14/2012 post:

We launched Amazon Simple Workflow (SWF) earlier this year, introducing a service designed to help developers automate the coordination of work in applications for better scalability and performance. Coordination of work in an application becomes particularly onerous for developers to build when the application needs to manage high volume and/or multiple streams of work concurrently across a collection of servers. For example, developers who build applications that deal at scale with image conversion, encoding, or data analysis often spend significant time writing code that coordinates how the work gets executed. Without SWF, a developer would need to write custom code that manages the execution state, concurrency, and distribution of work. Aside from more code to develop and maintain, this kind of work coordination is complex to write. A developer would need to build code to manage the state of process execution, define how the application reacts to failed or delayed processes, and manage work streams that can complete at different times.

SWF automates the coordination of work for applications, reducing the amount of code and complexity that developers write to handle work that executes across multiple processes and servers. SWF provides a programming model to simplify how developers express work coordination business logic in the application, as well as a service to manage the coordination of work at runtime. Customers today are taking advantage of SWF to coordinate high-volume, low-latency processing that spans data centers. For example, the developers and architects at NASA JPL used SWF to automate a complex image-processing pipeline for the Mars Science Laboratory (Curiosity). The case study contains a lot of helpful information about how (and why) NASA used SWF. Other customers use SWF for applications that range from automating scheduled jobs run by several EC2 instances to video encoding to stringing together the steps required to get packages from a warehouse to a delivery truck.

To adopt SWF, customers need to add code to their application that defines work coordination logic in a “decider” (specifying work sequencing, timing, and failure conditions) and which application components are executing work in “activities” that live within “workflows.” Once these changes are in place, the SWF service will coordinate the work automatically on behalf of the application. Developers can use any programming language when programming directly against the SWF service API to define deciders, activities, and workflows. We also provide a Java client AWS Flow Framework, which simplifies common programming tasks like retry logic for failed or delayed work.

Flow Framework Recipes

To make it easier for customers to use SWF, we are introducing 17 AWS Flow Framework recipes that show best practices for adding SWF work coordination logic to your application. These recipes are Java code samples with JUnit tests that use the AWS Flow Framework to perform common programming steps.

The recipes start by showing you how to add common workflow coordination logic, like repeatedly executing a unit of work (an activity) in your application iteratively or recursively until a condition like time or number of retries is met. Next, the recipes give guidance on setting up more involved coordination logic, like executing multiple processes of the same work in your application in parallel or starting multiple processes of the same work and taking the results of the process that finishes first. This type of parallel processing helps developers scale high-volume processing jobs by leveraging concurrent process execution while still exerting control over the work. We also show you how to put work on hold until an action occurs elsewhere in the application. Finally, the recipes show how you can define retry logic for failed or delayed work in an application.
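
The recipes themselves are Java, but the retry pattern they encode can be sketched generically. This is a hypothetical illustration, not AWS Flow Framework code: retry a failed activity up to a maximum number of attempts, backing off between tries.

```python
import time

def run_with_retries(activity, max_attempts=3, base_delay=1.0):
    """Run an activity, retrying on failure with exponential backoff.

    This mirrors the 'retry until a condition like number of retries is
    met' recipe; in the real Flow Framework the decider, not the worker,
    owns this coordination logic.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            time.sleep(base_delay * 2 ** (attempt - 1))

# A flaky activity that succeeds on its third attempt:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

print(run_with_retries(flaky, max_attempts=5, base_delay=0))  # -> done
```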

Getting Started
The AWS Flow Framework recipes take the form of 11 Java packages, some of which contain multiple recipes which are commonly used together. The recipes build upon the AWS Flow Framework and take advantage of a number of its data types and features including Promise<T>, Settable<T>, and Asynchronous methods.

If you’d like a little background on the programming concepts in Simple Workflow before you start, you can check out the Amazon Simple Workflow Service Developer Guide's Introduction, Getting Set Up, and Basic Concepts sections. You can also check out my blog post introducing Simple Workflow.

We have plenty of documentation to help you get started, along with instructions to run the recipes using Eclipse and Ant. You can find everything you need in the AWS Flow Framework recipes package. If you need any help, feel free to post a question to the Amazon Simple Workflow developer forum.

Sounds like an interesting new feature to me.

• Derek Gascon asserted Smooth sailing to the cloud with the Dell DX Object Storage Platform in an 11/14/2012 post to Dell’s Inside Enterprise IT: Cloud Computing blog:

When you think of the word “cloud” in terms of information technology (IT), what does it mean to you? Is it a dark and hazy horizon full of security risks, or a bright blue frontier where IT can transform businesses? Does storing data out on the cloud feel like traveling on an ill-equipped small boat across the ocean, subject to its perils and lost in its massiveness?

We are in the midst of a dynamic world connected through social and business networks that create interactive relationships, resulting in an explosion of new data. This data growth is surpassing IT budget increases; however IT organizations are still tasked to ensure data remains accessible for business, analytics and compliance needs.

Many organizations would like to implement a cloud strategy to help address data growth based on three pressing challenges. First, there is continuing business pressure to implement more cost-effective and agile IT services. Secondly, organizations are being asked to provide data storage with high availability and fast performance which is also scalable for future data demands. Lastly, and perhaps most importantly, customers are striving to enhance data protection to help ensure business continuity.

Object storage is quickly becoming the standard for cloud storage infrastructure and Web 2.0 communities. It can be that state-of-the-art vessel that lets you navigate and control your course of direction in a cloud environment. Many service providers of public cloud storage today have developed their own object-based infrastructure.

Equipping Your Vessel

There are several key tenets generally required for a cloud service offering that match the architecture and features of an object storage platform:

  • A cost-efficient platform that allows a scale-on-demand model could help organizations meet varying data growth and streamline budgets. Near limitless scalability and ease of management is required to address the large amounts of ever-growing data and could be achieved through clustering storage on standard x86 hardware platforms.
  • Additionally, cloud infrastructures should be self-managing and self-healing with the ability to automatically rebalance workloads in case of node failure to provide continuous availability.
  • To protect the data in the petabyte and multi-petabyte range, a data protection scheme like replication is ideal because backup simply cannot be done efficiently. Multi-tenant functionality helps provide appropriate data access based on department or organization and immutability options are important to protect data based on an organization’s policies or industry regulations.
  • Rich metadata support can enhance search, intelligent information management/distribution and analytics, removing the danger of data being “lost at sea” in a large cloud environment.

Finding a safe harbor

Deploying an object-based storage platform on-premise, as a private cloud is a way to use and understand cloud storage in a safe environment before advancing to a more complex hybrid or public cloud. Private storage clouds can help fulfill a variety of use cases including internal “dropbox” file-sharing functionality and highly expandable secure regulatory archive.

Beginning your voyage

The Dell DX Object Storage Platform is changing the economics of cloud and archive deployments through intelligent object storage technology to support your long-term vision at massive scale. Help increase data value while reducing data costs with the self-healing, self-managing, and metadata-aware DX Object Storage Platform that can seamlessly scale to multiple petabytes.

An easy HTTP interface provides private, hybrid or public cloud connections while immutability options and multi-tenancy can ensure that organizations can safely access and protect their data by group or department. The simple-to-manage DX Object Storage Platform can help your data stay available and protected with integrity, authenticity and retention capabilities delivered through automated policy-based management options. The DX Object Storage Platform in conjunction with a certified technology partner can offer “dropbox” functionality that can enhance mobile user productivity via secure collaboration capabilities across devices of choice. This blog offers more detail about an offering to help keep data behind the corporate firewall.

The DX Object Storage Platform allows you to choose an archive storage environment today and a cloud-based storage environment tomorrow. Or you can begin with an on-premise, private cloud storage environment today with the added capability of automated retention policies to help meet regulatory guidelines and/or keep your data for a long period of time.

With Dell, you have the choice of which path to the cloud is right for your organization—and it is smooth sailing with the Dell DX Object Storage Platform.

Learn more about the Dell DX Object Storage Platform by participating in the HOL lab, advanced whiteboard session, and roadmap NDA session at the Dell Storage Forum in Paris this week.

You can reach us through Facebook, Twitter and other Social Media channels. We want to hear your thoughts, because they continue to inspire us to drive storage innovations that give you the power to do more.

I would only consider Dell’s DX Object Storage Platform if it were half or less the cost of Windows Azure Blog Storage.

• Jeff Barr (@jeffbarr) reported Developers (Mobile), Developers (Ruby), Developers (Java) - New Blogs! in an 11/14/2012 post:

I'm happy to announce that we are launching three new blogs, all focused on developers and each one targeted at a particular language or platform. Here's what we have for you:

The new AWS Mobile Blog is for users of the AWS Mobile SDKs for iOS and Android.

The new AWS Ruby Blog is for Ruby developers, especially those who are using the AWS SDK for Ruby.

The new AWS Java Blog is for (you guessed it) Java developers, especially those who are using the AWS SDK for Java.

You will find these blogs helpful if you are a developer looking for code samples and recipes covering popular use cases on AWS.

James Governor (@monkchips) described Apprenda: an alternative PaaS for .NET in an 11/14/2012 post to his Monkchips blog:

We just signed up Apprenda as a client. It’s an interesting firm – a classic Microsoft/partner/competitor infrastructure company. As Citrix is to Microsoft Terminal Services, or VMware is to Microsoft Virtual Server, so Apprenda wants to be to Azure. The firm sees PaaS as the new Application Server rather than the new virtualisation.

Apprenda’s goal is to provide a development experience that is business as usual for Windows .NET developers, making multi-tenancy into a configuration decision, by taking a standard app, automatically instrumenting the app, managing data isolation and transformation (even at the row level), making the app appropriate for the cloud.

Apprenda points to Amerisource Bergen, a pharmaceutical distribution company, as an example of its approach. Amerisource had an oncology app which was originally built for managing one clinic, which would run on a local machine. Using Apprenda the app was cloud-enabled and is now delivered as a service.

How does that work? The developer wraps and uploads the app, specifies configuration options at deploy time – such as authentication model and multitenancy (hard or meta isolation) – then specifies resources needed, before publishing. Apprenda then builds a meta model of the app – looking at dependencies, call patterns and so on (Windows gorp, that is) – before finding the most appropriate servers for the app. It acts as a container. The focus on roles and authentication at the row level is crucial to enabling apps for the cloud that weren’t designed for the deployment model – deploying an on-prem app without changes to the cloud is a Really Bad Idea.

Apprenda certainly has a strong opinion – the firm has no interest in polyglot PaaS at all. So far the company is involved in a lot of consulting work to get companies ready for the change to PaaS, with knowledge transfer generally taking 8-12 weeks according to the firm.

Another company worth mentioning here is Appfog, another RedMonk client, which is taking the opposite approach: partnering with Microsoft to bring code developed in other languages such as Node.js to Azure – so you can mix and match Node apps built for Appfog and those built using the native Azure Node support. You can also deploy direct to Azure from Appfog, rather than Cloud Foundry only, giving a degree of PaaS redundancy. [Emphasis added.]

Apprenda has a *long* way to go before it's a Citrix or VMware, but its aggressive support of traditional Microsoft APIs and SDKs could pay dividends.

Jeff Barr (@jeffbarr) explained Archiving Amazon S3 Data to Amazon Glacier in a 11/13/2012 post:

AWS provides you with a number of data storage options. Today I would like to focus on Amazon S3 and Amazon Glacier and a new and powerful way for you to use both of them together.

Both of the services offer dependable and highly durable storage for the Internet. Amazon S3 was designed for rapid retrieval. Glacier, in contrast, trades off retrieval time for cost, providing storage for as little as $0.01 per gigabyte per month while retrieving data within three to five hours.

How would you like to have the best of both worlds? How about rapid retrieval of fresh data stored in S3, with automatic, policy-driven archiving to lower cost Glacier storage as your data ages, along with easy, API-driven or console-powered retrieval?

Sound good? Awesome, because that's what we have! You can now use Amazon Glacier as a storage option for Amazon S3.

There are four aspects to this feature -- storage, archiving, listing, and retrieval. Let's look at each one in turn.

First, you need to tell S3 which objects are to be archived to the new Glacier storage option, and under what conditions. You do this by setting up a lifecycle rule using the following elements:

  • An object name prefix to specify which objects in the bucket are subject to the rule.
  • A relative or absolute time specifier and a time period for transitioning objects to Glacier. The time periods are interpreted with respect to the object's creation date. They can be relative (migrate items that are older than a certain number of days) or absolute (migrate items on a specific date)
  • A time period for expiring objects from Glacier.
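
Expressed through the S3 API, a rule with these elements looks roughly like the following. This is a sketch in the shape boto3's lifecycle API expects; the bucket name, prefix, and day counts are placeholder assumptions.

```python
# A lifecycle rule combining the elements above, as a plain dict in the
# shape boto3's put_bucket_lifecycle_configuration expects.
rule = {
    "ID": "archive-logs",
    "Filter": {"Prefix": "logs/"},  # which objects the rule covers
    "Status": "Enabled",
    "Transitions": [
        # archive to Glacier 30 days after each object's creation
        {"Days": 30, "StorageClass": "GLACIER"}
    ],
    "Expiration": {"Days": 365},    # expire a year after creation
}

# With credentials configured, the rule would be applied with:
#   import boto3
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-bucket", LifecycleConfiguration={"Rules": [rule]})
print(rule["Transitions"][0]["StorageClass"])  # -> GLACIER
```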

You can create a lifecycle rule in the AWS Management Console.

Every day, S3 will evaluate the lifecycle policies for each of your buckets and will archive objects in Glacier as appropriate. After the object has been successfully archived using the Glacier storage option, the object's data will be removed from S3 but its index entry will remain as-is. The S3 storage class of an object that has been archived in Glacier will be set to GLACIER.

As with Amazon S3's other storage options, all S3 objects that are stored using the Glacier option have an associated user-defined name. You can get a real-time list of all of your S3 object names, including those stored using the Glacier option, by using S3's LIST API. If you list a bucket that contains objects that have been archived in Glacier, what will you see?

As I mentioned above, each S3 object has an associated storage class. There are three possible values:

  • STANDARD - 99.999999999% durability. S3's default storage option.
  • RRS - 99.99% durability. S3's Reduced Redundancy Storage option.
  • GLACIER - 99.999999999% durability, object archived in Glacier option.

If you archive objects using the Glacier storage option, you must inspect the storage class of an object before you attempt to retrieve it. The customary GET request will work as expected if the object is stored in S3 Standard or Reduced Redundancy (RRS) storage. It will fail (with a 403 error) if the object is archived in Glacier. In this case, you must use the RESTORE operation (described below) to make your data available in S3.

You use S3's new RESTORE operation to access an object archived in Glacier. As part of the request, you need to specify a retention period in days. Restoring an object will generally take 3 to 5 hours. Your restored object will remain in both Glacier and S3's Reduced Redundancy Storage (RRS) for the duration of the retention period. At the end of the retention period the object's data will be removed from S3; the object will remain in Glacier.
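The GET-versus-RESTORE decision described in the last two paragraphs can be captured in a small helper. This is a sketch of the decision logic only, not an SDK call; the function name and return shape are my own:

```python
def retrieval_action(storage_class, restore_days=5):
    """Decide how to fetch an S3 object based on its storage class.

    STANDARD and RRS objects can be fetched with an ordinary GET.
    GLACIER-archived objects must first be restored (a plain GET
    would fail with a 403), and the RESTORE request carries a
    retention period in days during which the restored copy lives
    in RRS alongside the Glacier archive.
    """
    if storage_class in ("STANDARD", "RRS"):
        return ("GET", None)
    if storage_class == "GLACIER":
        return ("RESTORE", {"Days": restore_days})
    raise ValueError("unknown storage class: %s" % storage_class)

print(retrieval_action("STANDARD"))  # ('GET', None)
print(retrieval_action("GLACIER"))   # ('RESTORE', {'Days': 5})
```

In a real application you would read the storage class from the object's metadata (or the LIST response), call RESTORE when it is GLACIER, and retry the GET after the three-to-five-hour restore window.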

Although the objects are archived in Glacier, you can't get to them via the Glacier APIs. Objects stored directly in Amazon Glacier using the Amazon Glacier API cannot be listed in real-time, and have a system-generated identifier rather than a user-defined name. Because Amazon S3 maintains the mapping between your user-defined object name and the Amazon Glacier system-defined identifier, Amazon S3 objects that are stored using the Amazon Glacier option are only accessible through the Amazon S3 API or the Amazon S3 Management Console.

Archiving in Action
We expect to see Amazon Glacier storage put to use in a variety of different ways. Toshiba's Cloud & Solutions Division will be using it to store medical imaging. Tetsuro Murugana, Chief Technology Executive of the division, is very excited about it. Here's what he told us:

We currently provide a service enabling medical institutions to securely store patients’ medical images in Japan. We are excited about using Amazon Glacier through Amazon S3 to affordably and cost-effectively archive these images in large volumes for each of our customers. We will combine Toshiba’s cloud computing technology with Amazon Glacier’s low costs and Amazon S3’s lifecycle policies to provide a unique offering tailored to the needs of medical institutions. In addition, we expect we can build similarly tailored integrated solutions for our wide range of customers so that they can archive massive amounts of data in various business areas.

You will pay standard Glacier pricing for data stored using S3's new Glacier storage option.

Learn More
Learn how to archive your Amazon S3 data to Glacier by reading the Object Lifecycle Management topic in the Amazon S3 Developer Guide or check out the new Archiving Amazon S3 Data to Amazon Glacier video:

Jeff Barr (@jeffbarr) described Amazon ElastiCache - Four New Cache Node Types in an 11/13/2012 post:

If you are using Amazon ElastiCache to implement a caching layer in your application, you now have four additional cache node types to choose from, bringing the total up to eleven types. Here are the new types and their specs:

  • cache.t1.micro has 213 MB of RAM for caching, and 1 virtual core.
  • cache.m1.medium has 3.35 GB of RAM for caching, and 1 virtual core.
  • cache.m3.xlarge has 14.6 GB of RAM for caching, and 4 virtual cores.
  • cache.m3.2xlarge has 33.8 GB of RAM for caching, and 8 virtual cores.
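With eleven node types to choose from, one simple sizing heuristic is to pick the smallest node whose cache RAM covers your working set. The helper below is an illustrative sketch using only the four new node types and the RAM figures listed above; the function name is my own:

```python
# Usable cache RAM per node type, in GB, from the announcement above.
NODE_RAM_GB = {
    "cache.t1.micro": 0.213,
    "cache.m1.medium": 3.35,
    "cache.m3.xlarge": 14.6,
    "cache.m3.2xlarge": 33.8,
}

def smallest_node_for(required_gb):
    """Return the smallest node type whose cache RAM fits the working set."""
    candidates = [(ram, name) for name, ram in NODE_RAM_GB.items()
                  if ram >= required_gb]
    if not candidates:
        return None  # working set exceeds any single node; shard across nodes
    return min(candidates)[1]

print(smallest_node_for(2))   # cache.m1.medium
print(smallest_node_for(20))  # cache.m3.2xlarge
```

In practice you would also weigh throughput (virtual cores) and per-Region pricing, not just RAM.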

Using the Micro cache node, you can get started for less than $16 per month. You can now boost your application's performance with a caching layer regardless of your budget. Visit the ElastiCache pricing page for pricing information on the entire range of cache node types.

The new instance types are available in every Region supported by ElastiCache; see the AWS Products and Services by Region page for more info (the m3 cache node types are available only in the US East (Northern Virginia) Region).

<Return to section navigation list>