Monday, September 03, 2012

Windows Azure and Cloud Computing Posts for 8/31/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


‡ Updated 9/3/2012 9:00 AM PDT with new articles marked ‡.
•• Updated 9/2/2012 5:00 PM PDT with new articles marked ••.
• Updated 9/1/2012 1:30 PM PDT with new articles marked •.

Tip: Copy the bullet(s) or dagger, press Ctrl+F, paste it/them into the Find textbox, and click Next to locate updated articles:


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, Hadoop and Media Services

‡ Gaurav Mantri (@gmantri) described a Simple Calculator for Comparing Windows Azure Blob Storage and Amazon S3 Pricing in a 9/3/2012 post:

[A] few months back, I wrote a few blog posts comparing Windows Azure Blob Storage and Amazon S3 services. You can read those blog posts here:

Since pricing for both of these services changes quite frequently and depends upon a number of factors, it was not possible for me to pinpoint exactly which service is cheaper. I created a simple calculator where you can input appropriate values and compare the cost of both of these services to you.

As mentioned in my other blog posts, the pricing depends on 3 factors in both services:

  1. Transaction costs, i.e. costs incurred based on the number of transactions performed against each service. These include the various REST-based operations performed against the two services.
  2. Storage costs, i.e. costs incurred based on the amount of data stored in each service. These are usually calculated per gigabyte of data stored per month.
  3. Bandwidth costs, i.e. costs incurred based on the data sent out of the data center by each service. Please note that at the time of writing, all incoming traffic is free in both services, as is data transferred between an application and the storage service within the same data center.
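To make the three factors concrete, here is a minimal sketch of how such a calculator totals a monthly bill. All rates and usage numbers are illustrative placeholders, not actual Windows Azure or Amazon S3 prices (which, as noted, change frequently):

```javascript
// Sketch of a monthly storage-cost estimate from the three factors above.
// All rates are illustrative placeholders, NOT actual Azure or S3 prices.
function estimateMonthlyCost(usage, rates) {
    var transactionCost = (usage.transactions / 10000) * rates.perTenThousandTransactions;
    var storageCost = usage.storedGB * rates.perGBStoredPerMonth;
    var bandwidthCost = usage.egressGB * rates.perGBEgress; // ingress is free in both services
    return transactionCost + storageCost + bandwidthCost;
}

var usage = { transactions: 1000000, storedGB: 500, egressGB: 50 };
var illustrativeRates = { perTenThousandTransactions: 0.01, perGBStoredPerMonth: 0.125, perGBEgress: 0.12 };
console.log("$" + estimateMonthlyCost(usage, illustrativeRates).toFixed(2));
```

Plugging the two services' current rate cards into `rates` and comparing the two totals is essentially what the calculator does, minus the tiered-pricing wrinkle mentioned below.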

In this simple calculator, I took only the first two factors into consideration. Also, when it comes to storage costs, both services offer a tiered pricing scheme, which I have not considered.

[Following is a static image of Gaurav’s calculator. The live version is here.]


‡ Nicole Hemsoth (@datanami) reported Study Stacks MySQL, MapReduce and Hive in a 9/3/2012 post to the Datanami blog:

Many small and medium sized businesses would like to get in on the big data game but do not have the resources to implement parallel database management systems. That being the case, which relational database management system would provide small businesses the highest performance?

This question was asked and answered by Marissa Hollingsworth of Boise State University in a graduate case study that compared the performance rates of MySQL, Hadoop MapReduce, and Hive at scales no larger than nine gigabytes.

Hollingsworth also used only relational data, such as payment information, which stands to reason since anything more would require a parallel system. “This experiment,” said Hollingsworth, “involved a payment history analysis which considers customer, account, and transaction data for predictive analytics.”

The case study, the full text of which can be found here, concluded that MapReduce would beat out MySQL and Hive for datasets larger than one gigabyte. As Hollingsworth wrote, “The results show that the single server MySQL solution performs best for trial sizes ranging from 200MB to 1GB, but does not scale well beyond that. MapReduce outperforms MySQL on data sets larger than 1GB and Hive outperforms MySQL on sets larger than 2GB.”

Hollingsworth ran her tests on sample data provided to her by what she calls “CompanyX,” a software company near Boise that wished to remain anonymous. Along with being a computer science graduate student at Boise State, Hollingsworth works for HP Indigo as a Software Design Engineer. It is possible that Hollingsworth was able to obtain this data as a result of working for HP Indigo, but that matters little.

CompanyX’s motivation is that they want to perform predictive analytics on their payment information. One benefit of this is that they, like everyone else, have customers who are consistently late with their payments. CompanyX automatically hires a collection agency to follow up on late and delinquent charges, an agency that CompanyX has to pay per inquiry. If CompanyX can identify customers who always pay late but do always pay, it would not have to ask the collection agency to inquire about those customers. It is unclear whether those saved expenses would cover the cost of implementing a predictive analytics system, but there are other benefits as well.

The data that CompanyX gave her was not enough, however. Hollingsworth had to expand on it, creating data that was statistically similar to the datasets given to her. While this creation of data may sound scientifically sketchy, Hollingsworth simply needed test data to feed into the management systems. Further, CompanyX gave her the data in the context of having hundreds of customers but expecting to expand its customer base by a factor of ten. As long as the fabricated data was statistically similar to the original data, it would satisfy the company while standing up to scientific scrutiny.
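As an illustration of the kind of data expansion described, the sketch below samples new payment amounts from a seed set's mean and standard deviation. This is purely hypothetical and not necessarily Hollingsworth's actual method:

```javascript
// Sketch: expand a small seed dataset into a larger, statistically similar
// test set by sampling from the seed's mean and standard deviation.
function meanAndStdDev(values) {
    var mean = values.reduce(function (a, b) { return a + b; }, 0) / values.length;
    var variance = values.reduce(function (a, v) { return a + (v - mean) * (v - mean); }, 0) / values.length;
    return { mean: mean, stdDev: Math.sqrt(variance) };
}

function expandDataset(seed, factor) {
    var stats = meanAndStdDev(seed);
    var out = [];
    for (var i = 0; i < seed.length * factor; i++) {
        // Box-Muller transform for a normally distributed sample
        var u = 1 - Math.random(), v = Math.random();
        var z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
        out.push(stats.mean + z * stats.stdDev);
    }
    return out;
}

var seedPayments = [120.5, 89.99, 240.0, 130.25, 99.0]; // made-up seed amounts
var expanded = expandDataset(seedPayments, 10);
console.log(expanded.length); // 50
```

A ten-fold `factor` mirrors the "expand their customer base by a factor of ten" scenario CompanyX described.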

The results were compiled on Boise State’s Onyx cluster. The exact runtimes are not important here, since Onyx will perform differently from other clusters. The runtime information is useful only when compared to that of the other systems run on the exact same cluster with the exact same computing power.

According to Hollingsworth, MapReduce was the big winner. “From these results, it is evident that MapReduce outperforms MySQL and Hive by a dramatic margin.” What was particularly interesting was that for MapReduce and Hive, the runtimes remained relatively constant from 500 to 20,000 accounts (or 235 MB to 9 GB), while MySQL’s runtimes rose along a non-linear curve whose exact form is not discernible. Hollingsworth did not offer an explanation for this, since her focus was simply on which system performed best.

Hive’s average runtime rose only slightly, from 535 seconds for 500 accounts to 583 seconds for 20,000 accounts, a runtime increase of 9% for a data increase of 4,000%. MapReduce remained relatively constant as well, going from 81.1 seconds to 88.9 seconds, an increase of 9.6%. Meanwhile, MySQL’s runtime increased dramatically. MySQL actually outperformed MapReduce until about the 2,500-account benchmark, analyzing 500 accounts in only 4.2 seconds and 1,000 accounts in 13.8 seconds. MySQL also proved better than Hive until somewhere between 5,000 and 10,000 accounts. The growth in MySQL’s runtime is remarkable, rising nearly exponentially up to the 10,000-account mark.
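The percentage figures quoted above can be recomputed directly from the raw runtimes reported here:

```javascript
// Recompute the relative runtime growth quoted above from the raw figures.
function pctIncrease(from, to) {
    return ((to - from) / from) * 100;
}

// Hive: 535 s at 500 accounts -> 583 s at 20,000 accounts
console.log(pctIncrease(535, 583).toFixed(1) + "%");   // ≈ 9%
// MapReduce: 81.1 s -> 88.9 s over the same range
console.log(pctIncrease(81.1, 88.9).toFixed(1) + "%"); // ≈ 9.6%
```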

So what does this all mean? In a basic sense, it means that Hollingsworth recommended MapReduce to the small software company for their predictive analytics. Indeed, Hollingsworth wrote, “our results indicate that MapReduce is the best candidate for this case study. Therefore, we recommend that CompanyX deploy this type of distributed warehousing solution for their BI predictive analytics.” …

Read more: Nicole continues with a broader analysis here.

• John Deutscher (@johndeu) described How to Copy from an existing blob into a new Asset for Windows Azure Media Services (WAMS) in an 8/29/2012 thread:

A number of folks on the forum have contacted me and asked how to copy from an existing blob in a storage account into a WAMS Asset.

Nick Drouin wrote some code that was posted to a recent thread that I wanted to share for folks looking for examples on how to do this. I moved it here so that you could find it easily when searching the Forum.

The code below uses the CopyBlob operation in the StorageClient library to quickly copy a blob from an existing file into a newly created "empty" Asset in Media Services so that you can use it in any encode job workflow or Origin streaming (if it is Smooth Streaming format already).

Limitations of the code sample:

  • The sample code below only copies a single file into the asset.
  • Only tested this while copying to/from the same storage account, the one registered with WAMS.
private static CloudMediaContext _context;

static void Main(string[] args)
{
    //Get your account context:
    _context = new CloudMediaContext("YourWAMSAccount", "YourPassword");

    //Create an empty asset:
    Guid g = Guid.NewGuid();
    IAsset assetToBeProcessed = _context.Assets.CreateEmptyAsset("YourAsset_" + g.ToString(), AssetCreationOptions.None);

    //Create a locator to get the SAS URL:
    IAccessPolicy writePolicy = _context.AccessPolicies.Create("Policy For Copying", TimeSpan.FromMinutes(30), AccessPermissions.Write | AccessPermissions.List);
    ILocator destinationLocator = _context.Locators.CreateSasLocator(assetToBeProcessed, writePolicy, DateTime.UtcNow.AddMinutes(-5));

    //Create the CloudBlobClient (base URI is a placeholder for your blob endpoint):
    var storageInfo = new StorageCredentialsAccountAndKey("YourStorageAccount", "YourStoragePassword");
    CloudBlobClient cloudClient = new CloudBlobClient("https://YourStorageAccount.blob.core.windows.net", storageInfo);

    //Create the reference to the destination container:
    string destinationContainerName = (new Uri(destinationLocator.Path)).Segments[1];
    CloudBlobContainer destinationContainer = cloudClient.GetContainerReference(destinationContainerName);

    //Create the reference to the source container, in this case a container called "uploads":
    CloudBlobContainer sourceContainer = cloudClient.GetContainerReference("uploads");

    //Get and validate the source blob, in this case a file called FileToCopy.mp4:
    CloudBlob sourceFileBlob = sourceContainer.GetBlobReference("FileToCopy.mp4");
    long sourceLength = sourceFileBlob.Properties.Length;
    System.Diagnostics.Debug.Assert(sourceLength > 0);
    //If we got here, the source is valid and accessible.

    //Create the destination blob for the copy; in this case we choose to rename the file:
    CloudBlob destinationFileBlob = destinationContainer.GetBlobReference("CopiedFile.mp4");
    destinationFileBlob.CopyFromBlob(sourceFileBlob); // Will fail here if project references are bad (they are lazy loaded).

    //Check the destination blob:
    System.Diagnostics.Debug.Assert(sourceFileBlob.Properties.Length == sourceLength);
    //If we got here, the copy worked.

    //Publish the asset:
    assetToBeProcessed = RefreshAsset(assetToBeProcessed);

    //At this point, you can create a job using your asset.
    // ...
    Console.WriteLine("You are ready to use " + assetToBeProcessed.Name);
}

private static IAsset RefreshAsset(IAsset asset)
{
    var assets = from a in _context.Assets
                 where a.Id == asset.Id
                 select a;
    return assets.FirstOrDefault();
}

Bert Latamore reported Apache Drill to Provide SQL-Like Fast Answers to Specific Questions from Hadoop Databases in an 8/31/2012 post to the ServicesANGLE blog:

In what he terms “yet another example of the remarkable innovation occurring in the open source Big Data Community,” Wikibon Big Data Analyst Jeff Kelly writes that a small group of committers is developing an open source SQL-like query tool for Hadoop databases called “Apache Drill”.

Designed after BigQuery, Google’s Big-Data-Analytics-as-a-Service technology that it made publicly available in May, Apache Drill is meant to complement MapReduce, allowing business people to get answers to specific questions such as “what time of day is most popular for app downloads” from Apache Hadoop Big Data databases directly in seconds. MapReduce, by contrast, is a much more technically complex query framework for use by professional Data Scientists to answer much larger questions, such as identifying hidden customer behavior patterns from years of unstructured data.

Apache Drill is far from ready for general use, however. It is presently in incubator status with eight committers, including two from MapR, actively developing it. Kelly notes that some MapR projects have remained closed source. However, MapR executives say they are committed to keeping Drill open source because they believe that the open source community can help drive development faster than an internal team.

Given the project’s immaturity, Kelly recommends that organizations with immediate needs for real-time Big Data query support consider alternative approaches including the Google BigQuery service while keeping an eye on Drill’s progress.

As with all Wikibon research, this Professional Alert is available free to the public. Interested IT professionals are invited to register for free membership in the Wikibon community. This gives them the ability to add comments to research and to post their own articles and allows them to receive invitations to Wikibon Peer Incite meetings.

It might be a while until Apache Drill becomes a part of Apache Hadoop on Windows Azure.

<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Note: Scroll down for Josh’s doto demo post.

•• Josh Twist (@joshtwist) described Going deep with Mobile Services data in a 9/2/2012 post to his The Joy of Code blog:


My friend Patrick asked on Twitter if it was possible to expose data relationships in Mobile Services, and since this is a common question I thought it might be worthy of a blog post. As you no doubt know, in Windows Azure Mobile Services your data is your own; that is – it’s your SQL database and there’s nothing abnormal about the data schema within – it’s a canonical representation of the data your client inserted via our JSON API. This means it’s easy to use for reporting and analysis, and you can easily back up your data – it’s just an Azure SQL database.

A core theme held throughout the design of Mobile Services was simplicity with enablement. That is, we wanted to make backend development easy for all kinds of developers, but avoid closing the gate on scenarios wherever possible.

In the public preview of Mobile Services we peeled back the client to keep it as thin and easy to use as possible and I think it’s a delight to use.

Of course, one of the great things about SQL Server is that it’s a relational database and, sometimes, you want to access those relationships at the client. In this post, I’ll show one of the ways you can do this today. In this case – using scripts.

Imagine we have two tables – Posts and Comments. You get the idea - Comments are related to Posts via a PostId property on the Comment. Here are the two C# classes that would go into my client application (you don’t need a class/type in JS):

[DataTable(Name = "posts")]
public class Post
{
    public int Id { get; set; }

    [DataMember(Name = "text")]
    public string Text { get; set; }
}

[DataTable(Name = "comments")]
public class Comment
{
    public int Id { get; set; }

    [DataMember(Name = "postId")]
    public int PostId { get; set; }

    [DataMember(Name = "text")]
    public string Text { get; set; }
}

Straightforward enough. Now imagine, we’d like to retrieve a post’s comments whenever we read the posts table. One way to do this is inside the read script for our Posts table:


And here’s what my script looks like. The code comments should help you work out what’s going on:

var commentsTable = tables.getTable('comments');

function read(query, user, request) {
    // run the query for posts
    request.execute({
        success: function (results) {
            // grab all the post ids into an array using JS' map
            var postIds = results.map(function (r) { return r.id; });
            // find all comments that match these post ids
            commentsTable.where(function (postIds) {
                return this.postId in postIds;
            }, postIds).read({
                success: function (comments) {
                    // attach the comments to each post
                    results.forEach(function (r) {
                        // find only the comments that match this post id using JS' filter
                        console.log(r);
                        r.comments = comments.filter(function (c) {
                            return c.postId === r.id;
                        });
                    });
                    // now the results have been augmented, return to the client
                    request.respond();
                }
            });
        }
    });
}

This is a fun example because I get to stretch my scripting legs. And we’re done (note, you could also just use T-SQL to perform a join in the database). We’ll now be sending JSON over the wire that has a comments array for each post!
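The map/filter pattern the script relies on can be seen in isolation with plain JavaScript arrays (a standalone sketch with made-up posts and comments, no Mobile Services objects involved):

```javascript
// Standalone illustration of the script's pattern: collect post ids with map,
// then attach matching comments to each post with filter.
var posts = [{ id: 1, text: "First post" }, { id: 2, text: "Second post" }];
var comments = [
    { id: 10, postId: 1, text: "Nice!" },
    { id: 11, postId: 2, text: "Agreed" },
    { id: 12, postId: 1, text: "+1" }
];

var postIds = posts.map(function (p) { return p.id; });

posts.forEach(function (p) {
    p.comments = comments.filter(function (c) { return c.postId === p.id; });
});

console.log(posts[0].comments.length); // 2
```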

C# (Managed) Client

The remaining challenge is to have the C# client deserialize that comments array. My friend Carlos already posted an awesome article ‘Supporting arbitrary types in Azure Mobile Services managed client – simple types’ and hints that his next post will talk about complex types. Therefore this is a sneak preview of some of the goodness to come in his next post.

We add a Comments property to the Post class and attribute it to identify a converter:

[DataMember(Name = "comments")]
[DataMemberJsonConverter(ConverterType = typeof(CommentListConverter))]
public List<Comment> Comments { get; set; }

Now we need an IDataMemberJsonConverter called CommentListConverter:

public class CommentListConverter : IDataMemberJsonConverter
{
    public object ConvertFromJson(IJsonValue value)
    {
        return value.GetArray()
            .Select(c => MobileServiceTableSerializer.Deserialize<Comment>(c))
            .ToList();
    }

    public IJsonValue ConvertToJson(object instance)
    {
        // Don't reverse the conversion. We don't want to push the
        // collection back up to the mobile service.
        return null;
    }
}

And you should be golden. You’ll now have a populated List<Comment> for all posts you read from the Mobile Service (empty if there were no comments). Note that we don’t want to reverse the conversion as you don’t want to push the comments to the server on insert or update – a null works fine here.

JS (WinJS) client

In the JS world we have slightly less work to do as the object will automatically have the comments array property (JavaScript rocks at handling JSON – no surprise there). However, since you don’t want to push the comments back to the service, you should remember to delete that property before calling update, e.g.:

function updatePost(post) {
    delete post.comments;
    // ... then call update on the posts table as usual
}

•• Josh Twist posted a Windows Azure Mobile Services - Doto sample to the MSDN Code Samples library on 8/28/2012:

Download [the source code]: C# (2.0 MB)


Doto is a simple, social todo list application that demonstrates the features of Windows Azure Mobile Services and how easy it is to add the power of Windows Azure to your Windows 8 app.

Doto uses the following features from Windows Azure Mobile Services:

  • Integrated authentication via Microsoft Account and Live Connect
  • Structured storage with server scripts to provide validation and authorization
  • Integration with Windows Notification Services to send Live Tile updates and Toast notifications

The sample contains a Windows 8 application that allows you to create todo lists, manage your todo items and even share your lists with other users. To complete the scenario, you'll need to create a Windows Azure Mobile Service and follow the instructions in Setup.rtf (included in the zip download).

How to doto

When first starting doto, you'll be asked to register. To improve the user experience we pre-fill your name and city using your Windows Live profile. Once registered, you'll be asked to create a new list. Click the 'create a new list' button, enter a name, and click Save.

You can create new lists at any time using the app bar (right click the screen to show the app bar). You can also use the app bar to add and remove items, refresh the current list, invite other users or leave a list.

You can switch between multiple lists by clicking the name of your list at the top left of the screen. You'll see a dropdown with the names of all of your lists.

Doto was designed to be extremely simple - items are either on your todo list or deleted. There is no notion of editing or completing a task. To remove items from the list, just select them and click remove items in the app bar.

You can invite other users to share any of your lists by clicking on the invite user button and searching for people by name (to test this feature, you might want to sign out using the settings charm and sign in with a second live account).

The invited user should receive a Toast Notification, and can click the View Invite button in the app bar to accept (or reject) your invite. Note that an invited user gets full permissions over your list, including the ability to add other users.

Enjoy! Remember, to get started you'll need to follow the instructions in the Setup.rtf.


To run doto you'll need the following

Josh delivers a doto demo in his Cloud Cover Episode 89 interview in this section below.

Craig Dunn (@conceptdev) explained Microsoft's Azure Mobile Services... and Mono-for-Android in a 9/1/2012 post:

Yesterday's post [see article below] introduced a quick implementation of Microsoft's Azure Mobile Services using MonoTouch to build an iOS app.

The WebClient and Json handling was easily refactored into a single class - AzureWebService - which was then added to the existing Android version of the Tasky sample... and now we have the same Azure Mobile Service being accessed by three platform clients: Windows 8, iOS and Android, all with C# (and the iOS and Android apps sharing the service client code).

Additional features have also been added to AzureWebService to allow deletion of tasks. The Android app source is on github and it looks like this (delete has been added to the iOS app too):


Here is a discussion of how the API was reverse-engineered with Fiddler. The REST endpoints that TaskyAzure accesses are:

GET /tables/TodoItem

GET /tables/TodoItem/?$filter=(id%20eq%20{0})

PATCH /tables/TodoItem/{id}
{"id":1,"text":"updated task text","complete":false}

POST /tables/TodoItem
{"text":"new task text","complete":false}

DELETE /tables/TodoItem/{id}
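For illustration, the endpoints above can be wrapped in a small helper that merely builds request descriptors (method, URL, headers, body). The subdomain and application key are placeholders, and passing the key in the X-ZUMO-APPLICATION header follows the convention the Mobile Services clients use (stated here as an assumption, not taken from Craig's post):

```javascript
// Builds request descriptors for the TodoItem endpoints listed above.
// "yoursubdomain" and the application key are placeholders; the
// X-ZUMO-APPLICATION header is assumed to carry the application key.
function todoRequest(method, path, body) {
    return {
        method: method,
        url: "https://yoursubdomain.azure-mobile.net" + path,
        headers: {
            "X-ZUMO-APPLICATION": "your-application-key",
            "Content-Type": "application/json"
        },
        body: body ? JSON.stringify(body) : null
    };
}

var list = todoRequest("GET", "/tables/TodoItem");
var create = todoRequest("POST", "/tables/TodoItem", { text: "new task text", complete: false });
var update = todoRequest("PATCH", "/tables/TodoItem/1", { id: 1, text: "updated task text", complete: false });
var remove = todoRequest("DELETE", "/tables/TodoItem/1");

console.log(create.method + " " + create.url);
```

Handing each descriptor to whatever HTTP stack the platform provides (WebClient, NSURLConnection, etc.) is then all the "service client" has to do.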

Finally, only a few small updates were required in the Windows 8 example to prevent the completed tasks from disappearing and instead to make use of the checkbox in a more natural way:

Now all three apps are reading and writing to the same Azure data store! Can't wait for the official cross-platform APIs :-)

See also the New England Mobile .NET Developers group announced on 9/1/2012 that Mike Bluestein (@mikebluestein) will present a session about developing for Android devices in C# in the Cloud Computing Events section below.

Craig Dunn (@conceptdev) posted Microsoft's Azure Mobile Services... and MonoTouch on 8/31/2012:

Microsoft only recently announced a cool new addition to the Azure product offering: Mobile Services. They have done a great job at providing a getting started tutorial that gives you a working Windows 8 app in about 5 minutes (seriously, it's fast and easy).

Azure Mobile Services consist of an underlying REST API, so it didn't take long for someone (Chris Risner :-) to put a simple iOS client together. That was all the inspiration required to get it working with MonoTouch.

Actually there is already a MonoTouch todo-list example called Tasky and it has previously been adapted to use Apple's iCloud storage.

The finished code for TaskyAzure borrows heavily from the existing Tasky samples (eg. it uses MonoTouch.Dialog), and really only borrows the REST API urls and Json from Chris' post. I might be biased, but the C# code looks a lot simpler to me :-)
Visit github for the TaskyAzure code. The app looks like this:


And just to prove that the Windows 8 app and the MonoTouch app are both writing to the same Azure database, here is a screenshot of the Azure Management Portal showing the data storage:

Azure Mobile Services looks pretty interesting - look forward to seeing the official cross-platform APIs :-)

UPDATE: to try the code follow the Microsoft's instructions, including creating a free trial account. Once your Azure Mobile Service has been created, configure the app by updating the constants in the AzureWebService.cs class:

static string subdomain = "xxxxxx"; // your azure subdomain

static string MobileServiceAppId = "xxxxxx"; // your application key

Nathan Totten (@ntotten) and Nick Harris (@CloudNick) interviewed Josh Twist (@joshtwist, pictured below) in CloudCover Episode 89 - Windows Azure Mobile Services on 8/31/2012:

In this episode Nick and Nate are joined by Josh Twist – Senior Program Manager – who introduces Windows Azure Mobile Services. Josh demonstrates how easy it is to build Windows 8 apps with Mobile Services. Finally, Josh shows us his sample application DoTo that utilizes Mobile Services.

In the News:

Follow @CloudCoverShow
Follow @cloudnick
Follow @ntotten

Check out Josh’s Introducing Windows Azure Mobile Services, AKA the Birth of Zumo, which explains where the Zumo codename came from.

Kirill Gavrylyuk (@kirillg_msft) recommended that you Add cloud to your app with Windows Azure Mobile Services in an 8/28/2012 post to the Windows 8 Developer blog:

Great Windows Store apps are connected. They use live tiles, authenticate users with single sign-on and share data between devices and users. To get all these great benefits of being connected, your app needs to use services in the cloud.

Building cloud services is hard. Most cloud platforms offer general purpose capabilities to store data and execute code, but you have to author reams of infrastructure code to glue these capabilities together. I’m sure you are up for the challenge, but I bet backend infrastructure code is not your first priority. You want to focus on realizing your awesome app idea.

Addressing this difficulty, earlier this week we announced a preview of the new service in Windows Azure: Mobile Services. Let me show you how you can add the cloud services you need to your app in minutes, using Mobile Services.

To get started, sign up for the free trial of Windows Azure. You’ll get 10 Mobile Services for free. Let’s use one of them to build a simple todo list app.

Create a new Mobile Service
  1. After you’ve signed up, go to the Windows Azure Management Portal and log in using your Microsoft account. Create a new Mobile Service by clicking on the +NEW button at the bottom of the navigation pane.


  2. Select Mobile Service and click Create. You will see a screen like this:


    Figure 1. Create drawer in the management portal.

  3. In the New Mobile Service wizard, type a name of your app. This forms part of the URL for your new service.


    Figure 2. Create Mobile Service wizard, first screen.

  4. When you create a Windows Azure Mobile Service, we automatically associate it with a SQL database inside Windows Azure. The Windows Azure Mobile Service backend then provides built-in support for enabling remote apps to securely store and retrieve data from it, without you having to write or deploy any custom server code. Type the name of the new database, and enter a login name and password for your new SQL Server. Remember these credentials if you want to reuse this database server for other Mobile Services. If you signed up for the 90-day free trial, you are entitled to one 1 GB database for free.


Figure 3. Create Mobile Service wizard, second screen.

Click the tick button to complete the process. In just a few seconds you’ll have a Mobile Service – a backend that you can use to store data, send push notifications and authenticate users. Let’s try it out from an app.

Create a new Windows Store app
  1. Click the name of your newly created mobile service.


    Figure 4. Your newly created mobile service.

    You now have two choices: to create a new app, or to connect an existing app to your Mobile Service. Let’s pick the first option and create a simple todo list that stores todo items in your SQL database. Follow the steps on the screen:


    Figure 5. Creating a Mobile Services app.

  2. Install Visual Studio 2012 and the Mobile Services SDK, if you haven't already done so.
  3. To store todo items, you need to create a table. You don’t need to predefine the schema for your table; Mobile Services will automatically add columns as needed to store your data.
  4. Next, select your favorite language, C# or JavaScript, and click Download. This downloads a personalized project that has been pre-configured to connect to your new Mobile Service. Save the compressed project file to your local computer.
  5. Browse to the location where you saved the compressed project files, expand the files on your computer, and open the solution in Visual Studio 2012 Express for Windows 8.
  6. Press F5 to launch the app.
    In the app, type a todo item in the textbox on the left and click Save. This sends an HTTP request to the new Mobile Service hosted in Windows Azure. This data is then safely stored in your TodoItem table. You receive an acknowledgement from the Mobile Service and your data is displayed in the list on the right.


Figure 6. Completed app.

Let us take a look at the code inside the app that saves your data. Stop the todo list app and double-click on App.xaml.cs. Notice the lines:

public static MobileServiceClient MobileService = new MobileServiceClient(
    "https://your-service.azure-mobile.net/",  // your service URL (placeholder)
    "your-application-key");                   // your application key (placeholder)
This is the only code you need to connect your app to your Mobile Service. If you are connecting an existing app to your Mobile Service, you can copy this code from the quick start “Connect your existing app” option. Now, open MainPage.xaml.cs and take a look at the next code that inserts data into the Mobile Service:

private IMobileServiceTable<TodoItem> todoTable = App.MobileService.GetTable<TodoItem>();

private async void InsertTodoItem(TodoItem todoItem)
{
    await todoTable.InsertAsync(todoItem);
}

This is all that’s needed to store data in your cloud backend. Here is an equivalent code in JavaScript:

var client = new Microsoft.WindowsAzure.MobileServices.MobileServiceClient(
    "https://your-service.azure-mobile.net/",  // your service URL (placeholder)
    "your-application-key");                   // your application key (placeholder)
var todoTable = client.getTable('TodoItem');

var insertTodoItem = function (todoItem) {
    todoTable.insert(todoItem).done(function (item) {
        // item now includes the id assigned by the service
    });
};

Manage and monitor your Mobile Service

Go back to the Windows Azure Management Portal, click the Dashboard tab to see real-time monitoring and usage info for your new Mobile Service.


Figure 7. Mobile Services dashboard.

Click the Data tab and then click on the TodoItems table.


Figure 8. Data tab.

From here you can browse the data that the app inserted into the table.


Figure 9. Browse data.


In this brief overview we’ve looked at how easy it is to add the power of Windows Azure to your app without the drag that comes with authoring, managing and deploying a complex backend project. We’ve only scratched the surface of what you can do with Mobile Services. Here’s a teaser of some of the other features that are ready for you to explore.

Learn more about Mobile Services at

Chris Risner’s detailed Windows Azure Mobile Services and iOS post of 8/20/2012 is for you if you can’t wait for the Windows Azure Team’s iOS implementation. It begins:

As mentioned yesterday by me, and half the internet, Windows Azure Mobile Services has been launched. Already people have started talking about how fast and easy it is to use Mobile Services as a backend. One thing that I highlighted and that others have pointed out is that official support for iOS, Android, and Windows Phone 8 is coming. This means that if you want to download and install pre-built REST helper methods for your non-Windows 8 operating system, you’ll have to wait. However, since all of the calls to Mobile Services are being done over HTTP and are REST based, it’s pretty easy to see what each call sends over the wire. This means that we can take that information and write our own code that will run on iOS and Android and hit Mobile Services.

Today, I’ll start to show you how to do just that. In this article, we’ll walk through creating a new Mobile Service and then connecting an iOS client to it. We’ll only use some basic data capabilities provided by Mobile Services but in the coming weeks, I’ll show you how to watch the HTTP calls made by a Windows 8 app (so you can figure out what’s going across the wire) and then how to reproduce some of the more advanced things in both iOS and Android. By the end of this walkthrough, we’ll have reproduced in an iOS client, all of the capabilities of the initial Todos Windows 8 Mobile Services demo. You’ll be able to add new todos, list those todos, and mark todos complete. …

Chris continues with a detailed tutorial, including source code.
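For readers who want a feel for the wire format Chris is reproducing, here is a rough Python sketch of the insert call a Mobile Services client makes. The service name, table name, and application key are placeholders; the X-ZUMO-APPLICATION header is the application-key header the client libraries send, but treat the details as an approximation and verify them against traffic you capture yourself.

```python
import json
import urllib.request

def make_insert_request(service, table, app_key, item):
    """Build (but do not send) the HTTP request a Mobile Services
    client issues to insert a row into a table."""
    # endpoint pattern used by Windows Azure Mobile Services
    url = "https://{0}.azure-mobile.net/tables/{1}".format(service, table)
    body = json.dumps(item).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            # application-key header sent by the official client library
            "X-ZUMO-APPLICATION": app_key,
        },
        method="POST",
    )

# 'chezcontoso' and the key are placeholders for illustration only
req = make_insert_request("chezcontoso", "TodoItem", "YOUR_APPLICATION_KEY",
                          {"text": "Try the daily special", "complete": False})
print(req.full_url)  # https://chezcontoso.azure-mobile.net/tables/TodoItem
```

Any HTTP stack on iOS or Android (NSURLConnection, HttpClient) can issue the same request once the URL and headers are known.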

Daniel Todd described the The Real Gold In The Mobile Market in an 8/30/2012 post to the Seeking Alpha blog:

Earlier this week Microsoft (MSFT) announced the addition of a new capability for Windows Azure called Windows Azure Mobile Services. This service will allow developers to add a cloud backend to their Windows 8 application, with support for mobile platforms including Windows Phone 8, iOS and Android in development. Developers will be able to store data in the cloud, authenticate users, and send out push notifications to clients using the new platform. This service could become a huge advantage for Microsoft as it will enable companies to deploy mobile apps to employees through the cloud without server side coding. Perhaps just as importantly, it will allow developers to deliver mobile apps that will run on every Microsoft desktop - providing a platform that is bigger than Apple's (AAPL) and Google's (GOOG) combined.

Although we are in an increasingly mobile world, Microsoft still owns the desktop and knows it. There are many millions of people whiling away their time on PCs at work. Microsoft believes it can eventually funnel these people over to its own mobile platform, Windows Phone. Next, Microsoft will be joining Google in providing users with a way to pay for services and products using their phones. Microsoft's Windows Phone 8 will enable such technology as well as incorporate a wallet application (similar to Google Wallet) that will store credit cards, coupons and other payment information. Apple's Passbook, announced in June, matches most of the feature set of Google's Wallet, in that it allows users to store airplane tickets, concert tickets and other passes digitally. If rumors are correct that the new iPhone will include an NFC chip, then Apple will also join Google (most notably in combination with the Samsung Galaxy S3) in enabling users to actually pay for goods and services with their phones.

As Facebook (FB) has shown, making money off a mobile app (even if it may be the biggest of some 500 million out there) is not a facile matter. All of these giants will be looking to partner with, or acquire, smaller companies with very specific expertise to achieve this goal. As I posited earlier, these mobile sector "service" companies, namely Vringo Inc. (VRNG), Mimvi (MIMV.OB), Millennial Media Inc. (MM), Synacor Inc. (SYNC) and Mitek Systems Inc. (MITK), are worth keeping an eye on. Just as Facebook acquired Instagram and Apple purchased Chomp, expect more acquisitions and partnerships from these two companies as well as from Google and Microsoft.

The Mother Lode

If making money off mobile apps is the Holy Grail, then getting users to pay for goods and services WITH their mobile phones may prove to be even more valuable. If you compare the mobile marketplace to the Gold Rush, then there may well be a few very wealthy prospectors at the end of the era. But it is much more likely that the provisions providers come away with the greater share. After all, a hundred and fifty years after the San Francisco Gold Rush, the name of Levi Strauss & Co. still stands tall. So it is the companies building the mobile payment infrastructure that will benefit most over the long term. With the release of Windows Azure Mobile Services and the upcoming Windows 8, Microsoft may be a better investment than in many years. And for those less conservative investors, the price point of Vringo, Mimvi, Millennial Media, Synacor and Mitek Systems may just yield the mother lode.

Disclosure: I have no positions in any stocks mentioned, and no plans to initiate any positions within the next 72 hours. I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it. I have no business relationship with any company whose stock is mentioned in this article.

Jim O’Neil (@jimoneil) posted Windows 8 Notifications: Using Azure for Periodic Notifications (as an alternative to the Notifications feature of Windows Azure Mobile Services) on 8/29/2012:

At the end of my last post, I put in a plug for using Windows Azure to host periodic notification templates, so I’ll use this opportunity to delve into a bit more detail. If you want to follow along and don’t already have an Azure subscription, you can get a free 90-day trial account on Windows Azure in minutes.

The Big Picture

The concept of periodic notifications is a simple one: it’s a publication/subscription model. Some external application or service creates and exposes a badge or tile XML template for the Windows 8 application to consume on a regular cadence. The only insight the Windows 8 application has is the public HTTP/HTTPS endpoint that the notification provider exposes. What’s in the notification and how often it’s updated is completely within the purview of the notification provider. The Windows 8 application does need to specify the interval at which it will check for new content – selecting from a discrete set of values from 30 minutes to a day – and that should logically correspond to the frequency at which the notification content is updated, but there is no strict coupling of those two events.

Periodic notification workflow for Chez Contoso

Take, for instance, a Windows 8 application for Chez Contoso (right), a trendy local restaurant that displays its featured entrée on a daily basis within a Start screen tile, trying to lure you in for a scrumptious dinner. Of course the chef’s spotlight offering will vary daily, so it’s not something that can be packaged up in the application itself; furthermore, the eatery may want to have some flexibility on the style of notification it provides – mixing it up across the several dozen tile styles to keep the app’s presence on the Start screen looking fresh and alive.

Early each morning then, the restaurant management merely needs to update a public URI that returns the template with information on the daily special. The users of the restaurant’s application, which taps into that URI, will automatically see the updates, whether or not they even run the application that day.
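That daily “publication” step is just producing a small XML document that conforms to the tile schema. Here’s a sketch (in Python for brevity; TileWideText03 is one real template from the Windows 8 tile template catalog, though the restaurant could pick whatever template suits the day, and the wording is invented):

```python
import xml.etree.ElementTree as ET

def daily_special_tile(special):
    """Build the tile-notification XML that clients will poll for."""
    tile = ET.Element("tile")
    visual = ET.SubElement(tile, "visual")
    # TileWideText03: a single large text field (see the template catalog)
    binding = ET.SubElement(visual, "binding", template="TileWideText03")
    text = ET.SubElement(binding, "text", id="1")
    text.text = "Tonight: {0}".format(special)
    return ET.tostring(tile, encoding="unicode")

xml_payload = daily_special_tile("Coq au Vin")
print(xml_payload)
```

Whatever produces this snippet, the result is simply copied to the public URI the application polls.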

Using Windows Azure Storage

Since the notification content that needs to be hosted is just a small snippet of XML, a simple approach is to leverage Windows Azure blob storage, which can expose all of its contents via HTTP/HTTPS GET requests to a blob endpoint URI.

In fact, if Chez Contoso owned the Windows Azure storage account, the restaurant manager could simply copy the XML file over to the cloud (using a file copy utility like CloudBerry Explorer), and with an access policy permitting public read access to the blob container, the Windows 8 application would merely need to register the URI of that blob in the call to startPeriodicUpdate (along with a polling period of a day and an initial poll time of 9 a.m.). There would be no requirement (or cost) for an explicit cloud service per se: the Windows Azure Storage service fills that need by supporting update and read operations on cloud storage.

Unfortunately, there is a catch! By default, the expiration period for a tile is three days; furthermore, if the client machine is not connected to the network at the time the poll for the update is made, that request is skipped. In the Chez Contoso scenario, this could mean that the daily dinner special for Monday might be what the user sees on his or her Start screen through Wednesday – not ideal for either the restaurant or the patron.

  • The good news is that to ensure content in a tile is not stale, an expiration time can be provided via a header in the HTTP response that returns the XML template payload, and upon expiry, the tile will be replaced by the default image specified in the application manifest.
  • The bad news is that you can’t set HTTP headers when using Azure blob storage alone; you need a service intermediary to serve up the XML and set the appropriate header… enter Windows Azure.
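To make that intermediary concrete, here is a minimal sketch, written as a Python WSGI app purely for illustration (the post itself builds the real service in ASP.NET): a handler that returns tile XML with an X-WNS-Expires header. The XML body and the one-day expiry are hard-coded stand-ins for content that would come from blob storage.

```python
from datetime import datetime, timedelta

TILE_XML = (b'<tile><visual><binding template="TileWideText03">'
            b'<text id="1">Tonight: Coq au Vin</text></binding></visual></tile>')

def tile_app(environ, start_response):
    """WSGI app: serve tile XML with an X-WNS-Expires header so stale
    content drops off the user's Start screen."""
    expires = datetime.utcnow() + timedelta(days=1)
    start_response("200 OK", [
        ("Content-Type", "text/xml"),
        # RFC 1123 date, as required for HTTP date headers
        ("X-WNS-Expires", expires.strftime("%a, %d %b %Y %H:%M:%S GMT")),
    ])
    return [TILE_XML]

# exercise the app without a real server
captured = {}
def fake_start_response(status, headers):
    captured["status"], captured["headers"] = status, dict(headers)

body = b"".join(tile_app({}, fake_start_response))
print(captured["status"])  # 200 OK
```

The point is only that some code must sit between storage and the client to attach the header; the rest of the post does this properly with a Web Role.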
Using Windows Azure Cloud Services

A Windows Azure service for a periodic notification can be pretty lightweight – it need only read the XML file from Windows Azure storage and set the X-WNS-Expires header that specifies when the notification expires. Truth be told, you may not need a cloud storage account since the service could generate the XML content from some other data source. In fact, that same service could expose an interface or web form that the restaurant management uses to supply content for the tile. To keep this example simple and focused though, let’s assume that the XML template is stored directly in Windows Azure storage at a known URI location; how it got there is left to the reader :).

With Windows Azure there are three primary ways to set up a service:

Windows Azure Web Sites, a low cost and quick option for setting up simple yet scalable services.

Windows Azure Virtual Machines, an Infrastructure-as-a-Service (IaaS) offering, enabling you to build and host a Linux or Windows VM with whatever service code and infrastructure you need.

Windows Azure Cloud Services, a Platform-as-a-Service offering, through which you can deploy a Web Role as a service running within IIS.

I previously covered the provisioning of Web Sites in my blog post on storing notification images in the cloud, so this time I’ll go with the Platform-as-a-Service offering using Windows Azure Cloud Services and ASP.NET.

Getting Set Up for Azure Development

If you haven’t used Windows Azure before, now’s a great time to get a 90-day free, no-risk trial.

As far as tooling goes, you can use Visual Studio 2012, Visual Studio 2010, or Visual Web Developer Express. I’m using Visual Studio 2012 for the rest of this post, but the experience is quite similar for Visual Studio 2010 and Visual Web Developer Express.

Since the service will be developed in ASP.NET, you’ll specifically want to download the Windows Azure SDK for .NET (via the Web Platform Installer). In fact, if you don’t have Visual Studio already installed, this will give you Visual Web Developer Express automatically.

Creating the Service

In Visual Studio, create a new Solution (File>New Project…) of type Cloud:

Creating a new Cloud Service solution

When you create a new Cloud Service, you get the option to create a number of other related projects, each corresponding to a Web or Worker Role that is deployed under the umbrella (and to the same endpoint) of that Cloud Service. Web and Worker Roles typically coordinate with other Azure services like storage and the Service Bus to provide highly scalable, decoupled, and asynchronous implementations. In this simple scenario, you need only a single Web Role, which will host a simple service.

Adding a Web (Service) Role

For the type of ASP.NET MVC project, select Web API, a convenient and lightweight option for building a REST-ful service:

Creating a Web API ASP.NET MVC application

Coding the Service

At a minimum, the service needs to do two things:

  1. return the tile XML content stored at a predetermined Windows Azure blob URI, the same one that is updated by the restaurant every morning at 9 a.m.
  2. set the X-WNS-Expires header to the time at which the requested tile should no longer be shown to the user. In this case, let’s assume the restaurant stops serving the special at midnight each day, and you don’t want to show the tile after that time.

To create the API for returning the XML content, add a new empty API Controller called Tiles to the ASP.NET MVC 4 project (or you can just modify the ValuesController that gets automatically created):

Creating a new API controller

New API Controller dialog

In that controller, you’ll need a single Get method (corresponding to the HTTP GET request that the Windows 8 application will make on a regular basis to get the XML tile content). That Get method will access a parameter indicating the name of the blob on Windows Azure that contains the XML. For example, a URL like:

will access the blob called dinner in a Windows Azure blob storage container called chezcontoso. The api/tiles segment of the URL is controlled by the ASP.NET MVC route that is set by default in the global.asax file of the project, so it too can be modified if you like.

The complete code for that Get method follows and is also available as a Gist on GitHub for easier viewing and cutting-and-pasting. Note, you will need to supply your own storage account credentials in Line 5ff (see below for a primer on setting up your storage account).

   1:  // /api/tiles/<tilename>
   2:  public HttpResponseMessage Get(String id)
   3:  {
   4:      // set cloud storage credentials
   5:      var client = new Microsoft.WindowsAzure.StorageClient.CloudBlobClient(
   6:          "",
   7:          new Microsoft.WindowsAzure.StorageCredentialsAccountAndKey(
   8:              "YOUR_STORAGE_ACCOUNT_NAME",
   9:              "YOUR_STORAGE_ACCOUNT_KEY"
  10:              )
  11:      );
  13:      // create HTTP response
  14:      var response = new HttpResponseMessage();
  15:      try
  16:      {
  17:          // get XML template from storage
  18:          var xml = client.GetBlobReference(id.Replace("_", "/")).DownloadText();
  20:          // format response
  21:          response.StatusCode = System.Net.HttpStatusCode.OK;
  22:          response.Content = new StringContent(xml);
  23:          response.Content.Headers.ContentType = 
  24:              new System.Net.Http.Headers.MediaTypeHeaderValue("text/xml");
  26:          // set expires header to invalidate tile when content is obsolete
  27:          response.Content.Headers.Add("X-WNS-Expires", GetExpiryTime().ToString("R"));
  28:      }
  29:      catch (Exception e)
  30:      {
  31:          // send a 400 if there's a problem
  32:          response.StatusCode = System.Net.HttpStatusCode.BadRequest;
  33:          response.Content = new StringContent(e.Message + "\r\n" + e.StackTrace);
  34:          response.Content.Headers.ContentType = 
  35:              new System.Net.Http.Headers.MediaTypeHeaderValue("text/plain");
  36:      }
  38:      // return response
  39:      return response;
  40:  }
  42:  private DateTime GetExpiryTime()
  43:  {
  44:      return DateTime.UtcNow.AddDays(1);
  45:  }

And now for the line-by-line examination of the code:

Line 2
The Get method is defined to accept a single argument (this will be of the format container_blob) which is automatically passed in by the ASP.NET MVC routing engine to the id parameter. This leverages the default route for APIs as defined in the App_Start/WebApiConfig.cs of the ASP.NET MVC project, but you do have full control over the specific path and format of the URL.

Lines 5-11
This instantiates the StorageClient class wrapper for interacting with Windows Azure blob storage. For simplicity, the credentials are hard-coded into this method. As a best practice, you should include your credentials in the role configuration (see the Configuring a Connection String to a Storage Account in Configuring a Windows Azure Project) and then reference that setting in code via RoleEnvironment.GetConfigurationSettingValue.

Line 14
A new HTTP response object is created, details for which are filled by the rest of the code in the method.

Line 18
The blob resource in Windows Azure is downloaded using the account set in Line 5ff. The id parameter is assumed to be of the format container_blob, with the underscore character separating the distinct Windows Azure blob construct references. Since references to containers and blobs in Windows Azure require the / separator, the underscore in the parameter value is replaced before the call is made to download the content.
You could alternatively create a new MVC routing rule that accepted two parameters to the Get method, container and blob, and then concatenate those values here. I opted for the implementation here so as not to require additional modifications in other parts of the ASP.NET MVC project.

Line 21
A success code is set for the HTTP response, since an exception would have occurred if the tile content were not found. It’s still possible that the content is not valid, but the notification engine in Windows 8 will handle that silently and transparently and simply not attempt to update the tile.

Line 22
The content payload for the tile, which should be just a snippet of XML subscribing to the Tile schema, is added to the HTTP response.

Line 23
The content type is, of course, XML.

Line 27
To guard against stale content, the X-WNS-Expires header is set to the time at which the tile should no longer be displayed on the user’s Start screen, and instead the default defined in the application’s manifest should be used. Here the time calculation is refactored into a separate method which just adds a day to the current time. That’s actually not the best implementation, but we’ll refine that a bit later in the post.
Note that the time is formatted using the “R” specifier, which applies the RFC1123 format required for HTTP dates.
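If you ever need to produce the same header value outside .NET, the “R” format is straightforward to reproduce. A small Python sketch (assuming a UTC datetime and an English locale):

```python
from datetime import datetime

def rfc1123(dt):
    """Format a UTC datetime the way .NET's "R" specifier does."""
    return dt.strftime("%a, %d %b %Y %H:%M:%S GMT")

stamp = rfc1123(datetime(2012, 9, 3, 17, 0, 0))
print(stamp)  # Mon, 03 Sep 2012 17:00:00 GMT
```

Note the fixed "GMT" suffix: HTTP dates are always expressed in UTC, so this only holds if the datetime passed in is already UTC.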

Lines 31-35
The exception handling code is fairly simplistic but sufficient for the example. If there’s any problem in accessing the blob content, an HTTP 400 status code is returned. The notification processing component of Windows 8 will realize there’s a problem and not attempt to modify the tile. The additional information provided (message and stack trace) is there for diagnostic purposes; it would never be visible to the user of the Windows 8 application.

Line 39
The HTTP response is returned as the result of the service request.

Line 42-45
GetExpiryTime calculates when the tile should expire and the default tile defined in the application’s manifest should be reinstated. In the code embedded above, the tile will expire one day after the content was polled versus the default of three days. The Gist includes an updated version of that calculation that I’ll discuss a bit later in the post.

Testing the Service

The Windows Azure SDK includes an emulator that allows you to run a Windows Azure service on your local machine for testing. You can invoke the emulator when running Visual Studio (as an administrator) by hitting F5 (or Ctrl+F5). For the service application we’re working with, that will open a browser to the default page for the site. That page isn’t the one you want, of course, but since the tile is available via an HTTP GET request, simply supplying the appropriately formatted URI should bring up the XML content in the browser.

Browsing to Web API endpoint

After confirming the service works, it’s time to give it a shot with a real Windows 8 application that leverages periodic notifications. If you’ve got one coded already, then all you need to do is provide the full URI to startPeriodicUpdate. If you haven’t written your app yet, the Push and periodic notifications client-side sample offers a quick way to test.

When you run that sample, select the fourth scenario (Polling for tile updates) and enter the URL for the service you just created. Then press the Start periodic updates button.

Sample application demonstrating periodic updates

Tile resulting from periodic update

As for all periodic updates, when startPeriodicUpdate is called, the endpoint is polled immediately, so you should see the tile updated on your Start screen along the lines of what is shown to the left. If you were to manually update the tile XML in blob storage, you should see the new content reflected in about a half-hour, since that’s the default recurrence period specified in the sample application. For the actual application, a recurrence period of one day makes more sense; additionally, you’d set the startTime parameter in the call to startPeriodicUpdate to 9 a.m. or slightly after to be sure that the poll picks up the daily refreshed content.

Deploying the Service

Visual Studio Publish... option

When you’re satisfied the application is working with the local service, you’re ready to deploy it to the cloud with your 90-day free Windows Azure account (or any other account you may have). You can deploy the ASP.NET site to Windows Azure directly from Visual Studio by selecting the Publish option from the Cloud Service project (right).

If this is the first time you’ve published to Windows Azure from Visual Studio, you’ll need to download your publishing credentials, which you can access via the link provided on the Publish Windows Azure Application dialog below.

Downloading Windows Azure publication credentials

Selecting that link prompts you to login to the Windows Azure portal with your credentials and download a file with the .publishsettings extension. Save that file to your local disk and click the Import button to link Visual Studio to your Azure subscription. You only need to do this once, and you should then secure (or remove) the .publishsettings file, since it contains information that would enable others to access your account. With the subscription now in place, you can continue to the next step of the wizard.

Windows Azure storage location prompt

You may next be prompted to create a storage account on Windows Azure. This account is used to store your deployment package during the publication process, and it can be used by all the services you ultimately end up deploying. As a result, you need only do this once, although you can change the default storage account used at a later stage in the wizard. To create the account, enter a storage account name, which must be unique across all of Windows Azure, and indicate which of the eight data centers should house the account.

Service creation dialog

If you have existing services deployed, you’ll have the option to update one of those with the build currently in Visual Studio, or as in this case, you can create a new Cloud Service on Windows Azure. Doing so requires supplying (1) a service name (which must be unique across Windows Azure since it becomes the third level domain name prepended to the domain which all Windows Azure Cloud Services are a part of) and (2) the location of the data center where the service will run. As you can see to the left, I chose my service to reside in the East US and be addressable via

If the service name is valid and not in use, the Common Settings and Advanced Settings options will be populated automatically and allow some customization of how the service runs and what capabilities you have (e.g., remote desktop to your VM in the cloud, whether to use a production or staging slot, whether IntelliTrace is enabled, etc.). The defaults are fine for our purposes here, but you can read more about the additional settings on the Windows Azure site.

Publish Settings

At this point the Next button brings you to a summary screen which indicates that your choices will be saved for when you redeploy, or you can just click Publish to deploy your service to Windows Azure now. Within Visual Studio you can keep tabs on the deployment via the Windows Azure Activity log (accessible via the View->Other Windows menu option in Visual Studio) – note that it may take 10 minutes or more the first time you deploy to a new service; subsequent updates are generally much quicker.

Windows Azure Activity Log

Now that the service is fully deployed to the cloud, you can revisit the sample tile application and provide the Azure hosted URI (versus the localhost one used earlier when testing) to see the complete end-to-end process in action!

Tidying Up

Earlier on, I mentioned I’d revisit the tile timeout logic, namely the implementation of GetExpiryTime in the code sample; it’s time to clean up that loose end!

Since Chez Contoso updates its menu daily at 9 a.m., it seems quite logical to have the tile expire on a daily basis (which was the default implementation I introduced above), but that creates a less than satisfactory experience:

When users fire up the application for the first time, the URI will be polled for the tile and by default the tile will expire 24 hours from then. If a user should first access the app at 8:45 a.m. and then disconnect for the rest of the day, they will continue to see the previous day’s special on their Start screen! I’d say that if the restaurant stops serving at midnight, no one should continue to see that day’s special beyond that point.

To address that scenario, here’s an updated implementation of GetExpiryTime:

   1:  private DateTime GetExpiryTime()
   2:  {
   3:      Int32 TimeZoneOffset = -4;  // EDT offset from UTC
   5:      // get representation of local time for restaurant
   6:      var requestLocalTime = DateTime.UtcNow.AddHours(TimeZoneOffset);
   8:      // if request is hitting before 9 a.m., information is stale
   9:      if (requestLocalTime.Hour <= 8)
  10:          return DateTime.UtcNow.AddDays(-1);
  12:      // else, set tile to expire at midnight local time
  13:      else
  14:      {
  15:          var minutesUntilExpiry = (24 - requestLocalTime.Hour) * 60 - requestLocalTime.Minute;
  16:          return DateTime.UtcNow.AddMinutes(minutesUntilExpiry);
  17:      }
  18:  }

The basic algorithm here is to convert the time the request hits the server (in UTC) to the equivalent local time. Here, I’m assuming the restaurant is on the East Coast of the United States, which currently is offset by four hours from UTC (Line 3). One disadvantage of this particular approach is that the conversion to Standard Time will require a change to the algorithm. That could be alleviated by moving the offset to the service’s configuration file (ServiceConfiguration.cscfg), but it would still need to be updated twice a year. A more robust (and likely more complex) implementation is almost certainly possible, but I’m deeming this ‘close enough’ to illustrate the concept.
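Because the calculation is pure date arithmetic, it is easy to sanity-check in isolation. Here is a Python transliteration of GetExpiryTime with the current time injected as a parameter (an assumption made for testability; the C# version reads DateTime.UtcNow directly):

```python
from datetime import datetime, timedelta

TIME_ZONE_OFFSET = -4  # EDT offset from UTC, as in the C# version

def get_expiry_time(utc_now):
    """Mirror GetExpiryTime: before 9 a.m. local time the content is
    already stale; otherwise expire the tile at local midnight."""
    local = utc_now + timedelta(hours=TIME_ZONE_OFFSET)
    if local.hour <= 8:
        # expire in the past: the blob still describes yesterday's special
        return utc_now - timedelta(days=1)
    minutes_until_midnight = (24 - local.hour) * 60 - local.minute
    return utc_now + timedelta(minutes=minutes_until_midnight)

# 6:30 p.m. UTC is 2:30 p.m. EDT, so the tile lives 9.5 more hours
expiry = get_expiry_time(datetime(2012, 9, 3, 18, 30))
print(expiry)  # 2012-09-04 04:00:00
```

The 4:00 a.m. UTC result is exactly midnight EDT, which is the behavior the paragraph above describes.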

Once the representation of the time local to the restaurant is obtained (Line 6), a check is made to see if the time is before or after 9 a.m., the time we’re assuming management is prompt about updating the coming evening’s special.

If it’s before 9 a.m. that day (Line 9), then the tile is set to expire before it’s even delivered, because the current contents refer to the previous day’s special entrée. This does mean that the tile is delivered unnecessarily, but the payload is small, and the Windows 8 notification mechanism will honor the expiration date. An alternative would be to return an HTTP status code of 404, which may be more correct and elegant, but would require updating a bit more of the other code.

If the request arrives after 9 a.m. Chez Contoso time (Line 13ff), then the expiration time is calculated by determining how many minutes are left until midnight. Once that time hits, the tile on the user’s Start screen will be replaced with the default supplied in the application’s manifest, which would probably include a stock photo or some other generic and time insensitive text.

Final Design Considerations

Congratulations! At this point you should have a cloud service set up that can service one or many of your Windows 8 applications. Before signing off on this post though, I wanted to reiterate a few things to keep in mind about periodic notifications.

  • If the machine is not connected to the internet when it’s time to poll the external URI, then that attempt is skipped and not retried until the next scheduled recurrence – which could be as long as a day away. The takeaway here is to consider expiration policies on your tiles so that stale content is removed from the user’s Start screen.
  • The polling interval is not precise; there can be as much as a 15-minute delay for the notification mechanism to poll the endpoint.
  • The URI is always polled immediately when a client first registers (via startPeriodicUpdate), but you can specify a start time of the next poll and then the recurrence interval for every request thereafter.
  • You can leverage the notification queue by providing up to five URIs to be polled at the given interval. Each of those can have separate expiration policies and can also provide a different tag value (X-WNS-Tag header) to determine which tile of the set should be replaced with new content.

Get a free 90-day trial account

This section is a primer for setting up a Windows Azure storage account in case you want to follow along with the blog post above and set up your own periodic notification endpoints in Windows Azure. If you’ve already worked with Windows Azure blob storage, you won’t miss anything by skipping this!

Getting Your Windows Azure Account

There are a number of options to quickly provision a Windows Azure account. Many of these are either free or have a free monthly allotment of services - more than enough to get your feet wet and explore how Windows Azure can enhance Windows 8 applications from a developer and an end-user perspective:

Creating Your Storage Account

Once your subscription has been provisioned, log in to the Windows Azure Management Portal, and select the NEW option at the bottom left.

From the list of options on the left, select STORAGE and then QUICK CREATE:

Creating a new storage account

You’ll need to provide two things before clicking CREATE STORAGE ACCOUNT:

    • a unique name for your storage account (3-24 lowercase, alphanumeric characters), which also becomes the first element of the five-part hostname that you’ll use to access blobs, tables and queues within that storage account, and
    • the region where you’d like your data to be stored, namely one of Azure’s eight data center locations worldwide.
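Since the portal only tells you a name is invalid after you try it, a quick pre-check of the 3-24 lowercase/alphanumeric rule can save a round trip. A sketch (global uniqueness, of course, can only be verified by the service itself):

```python
import re

ACCOUNT_NAME = re.compile(r"^[a-z0-9]{3,24}$")

def valid_storage_account_name(name):
    """True if the name meets the 3-24 character, lowercase
    alphanumeric rule; uniqueness still requires asking the service."""
    return bool(ACCOUNT_NAME.match(name))

print(valid_storage_account_name("chezcontoso"))  # True
print(valid_storage_account_name("ChezContoso"))  # False (uppercase)
```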

The geo-replication option offers the highest level of durability by copying your data asynchronously to another data center in the same region. For the purposes of a sample it’s fine to leave it checked, but do note that replication does add roughly a 33% cost to your storage.

Once your account is created, select the account in the portal, and click the MANAGE KEYS option at the bottom center to bring up your primary and secondary access keys. You’ll ultimately need to use one of these keys (either is fine) to create items in your storage account.

Storage Access Keys

Installing a Storage Account Client

The Windows Azure Management portal doesn’t provide any options for creating, retrieving or updating data within your storage account, so you’ll probably want to install one of the numerous storage client applications out there. There is a wide variety of options from free to paid, from proprietary to open source. Here is a partial list of the ones I’m aware of – I tend to favor CloudBerry Explorer when working with blob storage.

Regardless of which you use, you’ll need to set up access to your storage account by providing the storage account name and either the primary or secondary account key. Here, for instance, is a screen shot of how to do it in CloudBerry Explorer:

Selecting Windows Azure Account within CloudBerry Explorer

Creating a Container

Windows Azure Blob storage is a two-level hierarchy of container and blobs:

  • a container is somewhat like a file folder, except containers cannot be nested. This is the level at which you can define an access policy applying to the container and the blobs within it,
  • a blob is simply an unstructured bit of data; think of it as a file with some metadata and a MIME-type associated with it. Blobs always reside in containers and are accessed with a URI of the format (http is also supported).
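The blob URI referenced above follows the standard Windows Azure blob endpoint convention; a small sketch of assembling it (the account, container, and blob names are illustrative):

```python
def blob_uri(account, container, blob, secure=True):
    """Assemble the public URI for a blob; the endpoint accepts
    both https and http."""
    scheme = "https" if secure else "http"
    return "{0}://{1}.blob.core.windows.net/{2}/{3}".format(
        scheme, account, container, blob)

uri = blob_uri("contosostorage", "chezcontoso", "dinner")
print(uri)  # https://contosostorage.blob.core.windows.net/chezcontoso/dinner
```

This is exactly the kind of URI a Windows 8 application could hand to startPeriodicUpdate if the container allowed public reads.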

You can create a container programmatically of course, and the third-party notification service could do so, but it’s likely you’ll set up the container beforehand and let the service simply deal with the blobs (XML files) in it. Depending on the storage client you’re using, creating a container will be very much like creating a file folder in Explorer. Here’s a snapshot of the user interface in CloudBerry Explorer.

Creating a new container

Note that I’ve set the container policy to Public read access for blobs only, which would allow the storage account to be a direct endpoint that a Windows 8 application can subscribe to for periodic notifications. That’s supported, but it’s a less-than-ideal approach, as the body of the blog post above explains. A service-based delivery approach would allow a more stringent access policy of “No public read access,” since the storage key could be used by, and secured at, the service host.

<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

Jeff Cogswell (@WriteWithJeff) posted Cloudant BigCouch Delivers Azure-Hosted CouchDB to eWEEK’s Cloud Computing Reviews blog on 8/31/2012:

Cloudant BigCouch is a hosted version of CouchDB, which was re-engineered for high availability and scalability. It is now available hosted on Microsoft Azure. For this review, eWEEK Labs' Jeff Cogswell tried it out and reports on its performance.

Database provider Cloudant Inc. created the highly available, fault-tolerant, clustered version of CouchDB, called BigCouch—which is now available on Windows Azure. Tests at eWEEK Labs show that the new offering is on par with Cloudant’s data layer service that is hosted on SoftLayer and other cloud providers.

Overall, users will find very little performance difference, thus making the decision to use the distributed NoSQL database service on Windows Azure a matter of price and developer skill sets.

Cloudant’s Data Layer comes into play when developing cloud-based apps. It’s common to spin up applications on different hosting providers, such as Rackspace or Amazon, and the same applies to the database: you take advantage of a provider who specializes in a particular type of database and handles features, such as replication and distribution, that the organization would otherwise have to handle itself.

Cloudant is one such database provider. By hooking the organization's applications into Cloudant’s Data Layer, application managers don’t have to worry about managing a distributed CouchDB. Instead, Cloudant handles the administration while your developers write code, which can then store and read documents in Cloudant’s databases. The service is provided on a pay-as-you-go metered plan.

Cloudant released the Azure version of its hosted CouchDB databases in June.

To test the Azure offering, I started by hosting our tests on the Amazon Web Services (AWS) East Coast data center; then, I moved our data to the Azure hosting to see how it compares. In terms of functionality and programmer usage, I found no difference in the Azure hosting compared with the other hosting service. In either case, you get what appears conceptually as a CouchDB database (but is, in fact, Cloudant’s own distributed version of CouchDB). To use it in your software, you make use of existing CouchDB drivers.

In my tests, my goal was to find out two things: First, does the Azure Cloudant hosting work the same way as with the other Cloudant hosting services; second, how does the speed compare with that of the others? In general, the idea behind Cloudant is that they provide the data layer to your Web-based application.

That means you might have your application hosted on one server and your database would be hosted and managed by Cloudant. So, for example, even though you use Azure for your Cloudant hosting, that doesn’t mean you necessarily have to use Azure for your application hosting (but you can).

My choice of development platform was Node.js and the Cloud9 integrated development environment (IDE). I started out by building the application locally on my own machine, against a local installation of CouchDB. Then I moved the application to one of my own servers on Rackspace, after which I pushed the CouchDB databases up to the Cloudant servers, which were hosted by AWS. Then, after running the tests, I moved my database over to the Azure hosting.

For the first tests, running with the app on Rackspace and the Cloudant database on the AWS servers, I initially pulled down 132 small documents out of my database and they came across in 276 milliseconds. The 1,100 rows of data came down in 569 milliseconds. A total of 7,132 records came down in 1.45 seconds. Is that fast? The download of 132 records was certainly fast.

But the amount of data actually moving from their computer to my computer, even for the 7,132 records, was actually pretty small and probably took about the same amount of time, whether 132 records or 7,132 records.

Any delay after that would actually have been caused by the fact that I didn’t set up any indexes, or do any additional configuring of my database. I just pulled down all the records. The time I saw was about the same as it was running against my local CouchDB installation. In other words, there was almost no performance hit whatsoever running against Cloudant versus a local installation.

Next, I pushed everything over to Azure. What’s interesting is that from a programming perspective, there’s no difference: The database is just hosted someplace else. I didn’t have to change anything in my code other than the domain name of my database. The database is still CouchDB as far as my app was concerned, even though it’s on Azure now.

Running the same tests against the Azure installation, the times were a bit slower with the delay likely caused by differences in proximity to the data centers. But these were still fast response times. Pulling down 132 small documents took 530 milliseconds.

Phani Raju described Async extension methods for OData Windows 8 client library in an 8/22/2012 post (missed when published):

If you’re writing Windows Store applications and want to use the async goodness that the platform allows with your OData client applications, take a look at these extension methods that allow you to use the await and async keywords in your apps.

Some useful links

namespace System.Data.Services.Client.Async
{
    using System;
    using System.Collections.Generic;
    using System.Data.Services.Client;
    using System.Threading.Tasks;

    public static class DataServiceContextAsyncExtensions
    {
        public static async Task<IEnumerable<TResult>> ExecuteAsync<TResult>(this DataServiceQuery<TResult> query)
        {
            var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(query.BeginExecute(null, null),
                (queryAsyncResult) =>
                {
                    var results = query.EndExecute(queryAsyncResult);
                    return results;
                });

            return await queryTask;
        }

        public static async Task<IEnumerable<TResult>> ExecuteAsync<TResult>(this DataServiceContext context, Uri requestUri)
        {
            var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(context.BeginExecute<TResult>(requestUri, null, null),
                (queryAsyncResult) =>
                {
                    var results = context.EndExecute<TResult>(queryAsyncResult);
                    return results;
                });

            return await queryTask;
        }

        public static async Task<IEnumerable<TResult>> ExecuteAsync<TResult>(this DataServiceContext context, DataServiceQueryContinuation<TResult> queryContinuationToken)
        {
            var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(context.BeginExecute<TResult>(queryContinuationToken, null, null),
                (queryAsyncResult) =>
                {
                    var results = context.EndExecute<TResult>(queryAsyncResult);
                    return results;
                });

            return await queryTask;
        }

        public static async Task<IEnumerable<TResult>> LoadPropertyAsync<TResult>(this DataServiceContext context, object entity, string propertyName)
        {
            var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(context.BeginLoadProperty(entity, propertyName, null, null),
                (loadPropertyAsyncResult) =>
                {
                    var results = context.EndLoadProperty(loadPropertyAsyncResult);
                    return (IEnumerable<TResult>)results;
                });

            return await queryTask;
        }

        public static async Task<IEnumerable<TResult>> LoadPropertyAsync<TResult>(this DataServiceContext context, object entity, string propertyName, DataServiceQueryContinuation continuation)
        {
            var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(context.BeginLoadProperty(entity, propertyName, continuation, null, null),
                (loadPropertyAsyncResult) =>
                {
                    var results = context.EndLoadProperty(loadPropertyAsyncResult);
                    return (IEnumerable<TResult>)results;
                });

            return await queryTask;
        }

        public static async Task<IEnumerable<TResult>> LoadPropertyAsync<TResult>(this DataServiceContext context, object entity, string propertyName, Uri nextLinkUri)
        {
            var queryTask = Task.Factory.FromAsync<IEnumerable<TResult>>(context.BeginLoadProperty(entity, propertyName, nextLinkUri, null, null),
                (loadPropertyAsyncResult) =>
                {
                    var results = context.EndLoadProperty(loadPropertyAsyncResult);
                    return (IEnumerable<TResult>)results;
                });

            return await queryTask;
        }

        public static async Task<DataServiceResponse> ExecuteBatchAsync(this DataServiceContext context, params DataServiceRequest[] requests)
        {
            var queryTask = Task.Factory.FromAsync<DataServiceResponse>(context.BeginExecuteBatch(null, null, requests),
                (queryAsyncResult) =>
                {
                    var results = context.EndExecuteBatch(queryAsyncResult);
                    return results;
                });

            return await queryTask;
        }

        public static async Task<DataServiceResponse> SaveChangesAsync(this DataServiceContext context)
        {
            return await SaveChangesAsync(context, SaveChangesOptions.None);
        }

        public static async Task<DataServiceResponse> SaveChangesAsync(this DataServiceContext context, SaveChangesOptions options)
        {
            var queryTask = Task.Factory.FromAsync<DataServiceResponse>(context.BeginSaveChanges(options, null, null),
                (queryAsyncResult) =>
                {
                    var results = context.EndSaveChanges(queryAsyncResult);
                    return results;
                });

            return await queryTask;
        }
    }
}
DataServiceAsyncExtensions.cs – view the raw Gist on GitHub.

<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

• Clemens Vasters (@clemensv) described Sagas and provided source code for an example in an 8/31/2012 post:

Today has been a lively day in some parts of the Twitterverse debating the Saga pattern. As it stands, there are a few frameworks for .NET out there that use the term "Saga" for some framework implementation of a state machine or workflow. Trouble is, that's not what a Saga is. A Saga is a failure management pattern.

Sagas come out of the realization that particularly long-lived transactions (originally even just inside databases), but also far distributed transactions across location and/or trust boundaries, can't easily be handled using the classic ACID model with 2-phase commit and holding locks for the duration of the work. Instead, a Saga splits work into individual transactions whose effects can be, somehow, reversed after work has been performed and committed.


The picture shows a simple Saga. If you book a travel itinerary, you want a car and a hotel and a flight. If you can't get all of them, it's probably not worth going. It's also very certain that you can't enlist all of these providers into a distributed ACID transaction. Instead, you'll have an activity for booking rental cars that knows both how to perform a reservation and also how to cancel it -- and one for a hotel and one for flights.

The activities are grouped in a composite job (routing slip) that's handed along the activity chain. If you want, you can sign/encrypt the routing slip items so that they can only be understood and manipulated by the intended receiver. When an activity completes, it adds a record of the completion to the routing slip along with information on where its compensating operation can be reached (e.g. via a Queue). When an activity fails, it cleans up locally and then sends the routing slip backwards to the last completed activity's compensation address to unwind the transaction outcome.

If you're a bit familiar with travel, you'll also notice that I've organized the steps by risk. Reserving a rental car almost always succeeds if you book in advance, because the rental company can move more cars on-site if there is high demand. Reserving a hotel is slightly more risky, but you can commonly back out of a reservation without penalty until 24h before the stay. Airfare often comes with a refund restriction, so you'll want to do that last.

I created a Gist on GitHub that you can run as a console application. It illustrates this model in code. Mind that it is a mockup and not a framework. I wrote this in less than 90 minutes, so don't expect to reuse this.

The main program sets up an example routing slip (all the classes are in the one file) and creates three completely independent "processes" (activity hosts) that are each responsible for handling a particular kind of work. The "processes" are linked by a "network", and each kind of activity has an address for forward-progress work and one for compensation work. The network resolution is simulated by "Send". …
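Clemens' Gist runs to a few hundred lines of C#; the Python sketch below compresses the same idea – activities recorded on a routing slip and unwound in reverse order on failure – into a toy you can trace. The activity names and the failure point are invented for illustration:

```python
# Minimal routing-slip saga sketch: each completed activity records how to
# compensate itself; on failure, completed work is unwound in reverse.

def run_saga(activities, log):
    completed = []  # the routing slip's record of completed work
    for name, do_work, compensate in activities:
        try:
            do_work()
            log.append("booked " + name)
            completed.append((name, compensate))
        except Exception:
            # clean up locally, then send the slip backwards
            for done_name, undo in reversed(completed):
                undo()
                log.append("cancelled " + done_name)
            return False
    return True

log = []
def ok(): pass
def fail(): raise RuntimeError("no seats left")

# Ordered by risk, as in the post: rental car first, airfare last.
outcome = run_saga(
    [("car", ok, lambda: None),
     ("hotel", ok, lambda: None),
     ("flight", fail, lambda: None)],
    log)
print(outcome, log)
```

Because the flight booking fails last, the hotel and car reservations are compensated in reverse order – the defining behavior of the pattern.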

Clemens continues with a couple hundred lines of Saga source code.

Andrew Schwenker described SharePoint 2013: Claims Infrastructure – Part II, which usually involves Windows Azure Access Control Services, in an 8/31/2012 post:

Welcome to Part II of SharePoint 2013 Claims Infrastructure. Previously I wrote about the Distributed Cache Service and how it will revolutionize the authentication model in SharePoint 2013 (along with a lot of other great use cases). In this post, I want to focus on the way Open Authentication (OAuth) works with SharePoint Apps and the Client-Side Object Model (CSOM).

OAuth is new in SharePoint 2013 and is implemented specifically to enable working with SharePoint Apps and the CSOM. OAuth is identity-provider agnostic, meaning that it doesn’t know and, more specifically, doesn’t care who your identity provider is. It provides a way for SharePoint and your App to mutually verify each other before they start talking by using a third-party broker. In most instances, this broker will be Azure Access Control Services (ACS), but it can be your internal SharePoint server if you’re using an on-premises app. The overall OAuth App launcher process looks like this:


  1. User authenticates to SharePoint via claims-based authentication and clicks on an App
  2. SharePoint requests a Context Token for the App specific to the User from the Authentication Server
  3. The Authentication Server validates the request and returns a valid Context Token that has been signed. The encryption key is trusted by SharePoint
  4. SharePoint returns the Context Token to the User and directs the User to the App
  5. User POSTs the Context Token to the App as part of the initial request
  6. App takes the Context Token, extracts the Refresh Token and sends it to the Authentication Server to obtain an OAuth Token for accessing SharePoint
  7. Authentication Server validates the Refresh Token and returns the OAuth Token that the App server can use with SharePoint. The OAuth Token contains the User’s identity and the App’s identity and is trusted by SharePoint because it’s signed by the Authentication Server
  8. (Optional) App requests data from SharePoint via REST/CSOM using the OAuth Token as its identity
  9. SharePoint returns requested data to App
  10. App renders itself to the user as HTML, JavaScript, and CSS
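As a toy model of steps 2 through 7 (this is not real ACS, SharePoint, or OAuth code – the token shapes and values are fabricated for illustration), the broker issues a context token carrying a refresh token, which the app later redeems for an access token naming both identities:

```python
# Toy model of the broker-mediated token exchange. Tokens are plain dicts
# standing in for signed JWTs; names and shapes are invented.

class Broker:
    def __init__(self):
        self._refresh = {}

    def issue_context_token(self, user, app):
        # steps 2-4: SharePoint requests a context token for this user/app
        refresh = "refresh:{0}:{1}".format(user, app)
        self._refresh[refresh] = (user, app)
        return {"user": user, "app": app, "refresh_token": refresh}

    def redeem(self, refresh_token):
        # steps 6-7: the app exchanges the refresh token for an OAuth token
        if refresh_token not in self._refresh:
            raise ValueError("unknown refresh token")
        user, app = self._refresh[refresh_token]
        # the token carries BOTH identities, which is why SharePoint trusts it
        return {"user": user, "app": app, "scope": "sharepoint"}

broker = Broker()
ctx = broker.issue_context_token("alice", "expense-app")
oauth = broker.redeem(ctx["refresh_token"])
print(oauth)
```

The key point the sketch preserves is that the app never sees the user's credentials; it only ever handles broker-issued tokens.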

This process is kind of complicated, but the general principle is that both SharePoint 2013 and the App trust the third-party broker which validates that both services are who they say they are. Generally, the Authentication Server will be Azure Access Control Services, but on-premise SharePoint 2013 Farms and Apps can be entirely self-contained and cut off from the Internet if necessary. In such a scenario, the Subscription Settings Service Application in SharePoint acts as the broker.

You’ll notice that step 8 (and thus step 9) is optional because an App doesn’t necessarily need to call into SharePoint to get data. However, once it has the OAuth token, it has the capability. In addition, security for Apps is different. An App can have additional security permissions above a standard User, meaning that anyone authorized to use the App can potentially have elevated permissions. This situation exists because CSOM doesn’t have a similar SPSecurity.RunWithElevatedPrivileges construct that exists in server-side code.

OAuth facilitates both dealing with claims-based authentication without getting a FedAuth token and getting the user’s identity without asking for credentials. This, coupled with the expanded feature set of CSOM, should lead to many interesting and exciting apps. Check in soon for the final part of this series: SharePoint 2013 Claims and Search!

<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

•• Tyler Doerksen (@tyler_gd) completed his series with Connecting to the Cloud-Part 2 on 8/31/2012:

Link to Part 1 [See article below.]

To recap, this two-part blog series outlines the variety of options you have to connect Azure applications to components not hosted in Azure. Notice that I specifically did not say on-premises; I get lots of questions about migrating to Azure, mainly around migrating from another cloud provider like Amazon or Rackspace. The reality is that you can host a portion of your application in Azure and keep the remaining parts on whatever provider, or on-premises. Take iCloud for example: there is evidence (unconfirmed of course) that it uses both Amazon and Azure services.


In part 1, I talked about Data layer and Application layer integration. Here I will briefly cover the final two types: machine and network layer integration.

3. Machine-to-Machine


This option uses a piece of software installed onto the machines called Windows Azure Connect. Some may claim that Azure Connect has been deprecated now that site-to-site connections are available; I would argue that there are situations where you do not need an entire network connected, simply a single machine. This is helpful when integrating with a monolithic, age-old on-premises machine that cannot be moved. Once the Azure Connect software is installed and configured on the machine, it can easily communicate with any number of other roles configured in the virtual network. Note that this does not give the roles access to your entire network, only the machine with the software installed.

To get access to this functionality you need to venture back to Silverlight land and the old portal. I am not sure what Microsoft plans for this feature but I hope it can become as easy to use as the site-to-site networks with the new HTML5 portal.

4. Site-to-Site


At this point the Virtual Network is the newest feature to round out the Azure integration scenarios. This was released with the Infrastructure-as-a-Service features in the “Spring Release” (June 2012) of Azure. Windows Azure Virtual Networks provide the capability to treat Azure like a remote datacenter. Using a VPN connection, all network traffic can be routed between the two locations as if they were in fact in the same place. There has been lots of movement in this space, not only because it is a new feature of Azure, but because it provides “branch office” functionality which is very familiar to most IT departments of larger organizations.

Unlike Cloud Services or Virtual Machine groups, the Virtual Network does not provide DNS for the machines; once you start adding machines to a VNet you need to provide your own DNS server. That server could be hosted in the VNet in Azure or over the VPN tunnel in your datacenter.

Virtual networks are connected to Affinity Groups in Azure, which means that other services (storage and cloud services) can be associated with the group. A Cloud Service in the same affinity group even has network access to the virtual network, which helps in a number of scenarios where you may be building new services that depend on existing VM-installed components.

Site-to-site connectivity may be the least intrusive to your application; however, it is the most difficult to set up. If your datacenter does not use VPN tunnelling, it could mean new appliances from Cisco or Juniper and maybe some help setting up and maintaining the network appliance. However, the cost savings of going to Azure for IaaS hosting could be well worth it.

Well, that's about all I have for now. Be sure to check back for technical implementations of these scenarios with applications like SharePoint, TFS, and Active Directory.

•• Tyler Doerksen (@tyler_gd) began a series with Connecting to the Cloud–Part 1 on 8/31/2012:

For those looking to create a new cloud system with an on-premises/alternate host component, or those looking at migrating an application to the cloud while keeping some piece of the system on-premises: there are options. Even though Microsoft’s approach to the cloud was “we’re all in,” that does not mean that your system has to be “all into” Azure.

When integrating with an Azure application you have a number of options; choosing the best one means understanding what is involved in each.

Here is a graphic which outlines the 4 main options for integrating with Azure.


From the top.

1. Data Synchronization


This form of integration propagates changes made on one database to another, and vice versa. You can even set up a data hub system to sync 3 or more databases with each other. This is commonly used in situations where an application is too large for a single location or has a remote data cache. For example, one site hosted on Windows Azure uses Data Sync and Traffic Manager to host multiple versions of its application across the globe. Comments made in Europe and Asia can take up to 1 hour to sync to the US datacenters and be viewed by those accessing the site from the US.

This form of integration is performed by the SQL Azure Data Sync service. One of the databases (called the hub database) has to be a SQL Azure database. If you want to sync more than one on-premises database together it has to go through the hub. Periodically, the databases are queried for changes and those changes are made to the other databases.

Data Sync is a very effective integration pattern because it can greatly reduce load on your datacenter and does not require any external endpoints; the database connects to the hub using Windows Azure Connect (number 3 on the list), so the load on your local database is very controlled.
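The hub topology is easy to picture in miniature: every member syncs only with the hub, and the hub fans changes out to the other members. Below is a crude last-writer-wins sketch (Python; real SQL Azure Data Sync has proper change tracking and conflict policies that this toy ignores):

```python
# Crude hub-and-spoke sync: members exchange changes with the hub only.
# Last writer wins; this ignores real Data Sync's conflict handling.

def sync(hub, member):
    # pull member changes into the hub, then push the merged state back
    hub.update(member)
    member.clear()
    member.update(hub)

hub = {}
us, europe, asia = {"a": 1}, {"b": 2}, {}

# each member syncs with the hub on its own schedule...
for db in (us, europe, asia):
    sync(hub, db)
# ...and a later sync pass propagates everything to members synced early
for db in (us, europe, asia):
    sync(hub, db)

print(us == europe == asia == {"a": 1, "b": 2})
```

Note that the members never talk to each other directly – exactly the constraint the post describes, where all on-premises databases sync through the SQL Azure hub database.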

2. Application Layer


Application layer integration is very common. In fact, it is widely used in multi-tiered applications to improve overall performance. 3rd party frameworks like nServiceBus and BizTalk can provide scalable, flexible solutions to help multiple systems communicate with each other.

To take the service bus concept to Azure you have two main options, both provided by the Windows Azure AppFabric Service Bus: Relay Messaging and Queue Messaging. Relay Messaging provides a real-time connection between two or more components to allow request-response calls which traverse network boundaries. This feature is crucial for those applications that require immediate feedback from connected business logic. Queue Messaging is a persisted, transactional message system that was added to Azure Service Bus later. This feature is similar to Azure Storage Queues, the key difference being that Service Bus Queues offer guaranteed ordering, transactions, and other features that make them the better choice for enterprise-level integration. This messaging scheme is used for a variety of patterns like pub-sub. Queue Topics can also be used to split the message stream into filtered subsets or duplicate entries picked up by different subscribers. Azure Service Bus is a very flexible service, providing a number of Application Layer integration solutions.
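The topic-with-filters idea can be modeled in a few lines. This is the pattern only, not the actual Service Bus brokered-messaging API: each subscription keeps its own queue plus a predicate deciding which published messages it receives:

```python
# Pattern sketch: a topic fans messages out to subscriptions, each with
# its own filter - the Service Bus "topic with filtered subscriptions"
# idea in miniature, not the real brokered-messaging API.

class Topic:
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, predicate=lambda msg: True):
        self.subscriptions[name] = (predicate, [])

    def publish(self, msg):
        for predicate, queue in self.subscriptions.values():
            if predicate(msg):
                queue.append(msg)

    def receive(self, name):
        queue = self.subscriptions[name][1]
        return queue.pop(0) if queue else None

orders = Topic()
orders.subscribe("audit")                               # sees everything
orders.subscribe("big", lambda m: m["amount"] >= 1000)  # filtered subset

orders.publish({"id": 1, "amount": 50})
orders.publish({"id": 2, "amount": 5000})

print(orders.receive("audit"), orders.receive("big"))
```

Each subscriber gets its own copy of matching messages, which is what makes the topic split one stream into independent, filtered subsets.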

To be continued…

This is the end of part 1. In part 2 I will go into Azure Connect and Virtual Networks.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Haishi Bai (@HaishiBai2010) described the Multi-Lingual Chat: Architecture of an interactive web application on Windows Azure in a 9/2/2012 post:

Multi-Lingual Chat is a simple chat room application with an interesting twist – it allows chat room participants to communicate in their native languages by automatically translating chat messages to corresponding languages. It also allows users to dispute default translations provided by Bing Translator API. Users can enter and vote on alternative translations to replace the machine translations. The application is designed to provide a fluent user experience. There’s no need for registration or sign on. When you launch the application, you are automatically joined to a public chat room with an automatically assigned user name and an avatar. You can switch chat rooms, change your user name and avatar, and change your preferred language at any time. I have a deployed version running on Windows Azure here, but I’m not guaranteeing to keep it running (and it will probably go down once I reach my Bing API quota). The source code of the project is hosted on this GitHub repo.

The main purpose of this project is not just to create another chat application, nor is it to show off specific technologies such as SignalR. Instead, the main goal of this project is to present a sample of designing a scalable, interactive web application on Windows Azure platform. The architecture used by this application is common enough to be applied to typical web applications. And it will be the focus of this blog post.



ASP.NET MVC 4 Web API, SignalR, Service Bus, Windows Azure Cache, Windows Azure Table Storage, HTML, CSS, jQuery, Knockout, Windows Azure Web Role, Bing Translator API, Bing Search API, Twitter Avatar, Facebook Avatar.


The following diagram depicts the overall system architecture. The client side comprises a presentation layer with HTML markup and CSS styles, and a view model layer using jQuery and Knockout. On the server side, we have a service layer, a data access layer, and vendor-specific components working with specific service vendors such as Bing and Windows Azure Table Storage. In the following sections we’ll go through each layer in more detail. A key to a good architecture is proper separation of concerns. For each layer we’ll define its responsibility and talk about some principles and guidelines to keep it faithful to its goal.


HTML Presentation Layer

Presents view models. Hooks up user events with view model methods. That’s it.


  • Presentation is about the look only. It should not have any “additional smartness” by itself. It presents view models to the user, and gets user inputs to the view models. That’s all it should care about.
  • For the HTML presentation layer, all styling should be done via CSS style sheets. There could be exceptions where some ad-hoc styling is applied for layout purposes, in which case having the attribute in place makes it easier to adjust and understand. However, in most cases styles should be within CSS style sheets.
  • There could be JavaScript involved in the presentation layer, for instance to perform UI animations. However, these scripts should be restricted to presentation only – they only interact with HTML tags, not with any view model properties or methods.


The following code snippet is the HTML for a chat message. You can see how chat messages are linked to corresponding presentation tags via data bindings, and how user events are forwarded to the view model directly (click: dispute). Note the event handlers highlighted in green – they are strictly presentation logic, so they can be put there. How do you test whether they should be there? Ask the question “should anyone else care about this?” The answer is “no” in this case, so they are in the right places!

<div id="divDisplay" class="chat_display" data-bind="foreach: ChatLines">
    <div class="chat_line">
        <div class="avatar_box">
            <img data-bind="attr: {src: User.Avatar}" class='avatar' />
        </div>
        <div class="chat_line_text" data-bind="text: Translated, click: dispute"></div>
        <div class="chat_line_smalltext" data-bind="text: Original"></div>
    </div>
</div>
View Model Layer

View models represent logical views of a set of concerns. They define presentable properties, as well as possible actions within the scope of those concerns. In a client-server architecture, the client-side actions are usually facades of service calls. In other words, the view models respond to operations but don’t actually process them. Instead, they delegate the actual handling to the backend server.


  • A View Model should never, ever care about how it’s going to be presented. This is the most important principle to follow when constructing view models. If you find your view models are generating HTML tags, consider refactoring!
  • View Models are usually observable. They raise events when their selected properties change so that the UI can respond to the changes.


This is one version of the alternative-translation view model I had:

function messageTranslationVieMode(partition, row, text, rank, isBing, parent) {
    this.Rank = ko.observable(rank);
    this.voteUp = function () {
        $.getJSON('/api/chatapi/vote?partition=' + self.PartitionKey + '&row=' + self.RowKey
                           + '&offset=1', function (data) {
            parent.sortByRanking();   // (reconstructed) bad line 1: tells the parent to re-sort
            hideDisputeDialog();      // (reconstructed) bad line 2: calls a UI function directly
        });
    };
}
There are two bad lines, highlighted in red, in this implementation. The first bad line tells the parent view model to sort all translations by ranking. The second line is worse – it calls a UI function to hide the alternative-translation dialog. What should have happened here? The view model should just update the ranking and raise an event so that the parent view model can respond and re-sort the translation array. The same event can be handled by UI logic to hide the dialog box. After refactoring, the code becomes clearer:
this.voteUp = function () {
    $.getJSON(..., function (data) {
        // just update the ranking; observers react to the change
    });
};
And the parent view model simply subscribes to the property change event to keep the list in order:
$.each(entity.Translations, function (i, val) {
    var model = new messageTranslationVieMode(...);
    model.Rank.subscribe(function () {
        // re-sort the translation array when any ranking changes
    });
});

Finally, the presentation script subscribes to the list-changed event:

viewModel.AlternativeTranslations.Translations.subscribe(function () {
    // presentation-only response, e.g. hiding the dialog box
});
Service Layer

The Service Layer usually contains two parts: an enablement part and a business logic part. The enablement part enables the different protocols via which the service can be consumed. Examples of this part include SOAP service contracts, RESTful Web API routes and the like. The business logic part implements the actual business processes.


  • Service layer serves view models to clients.
  • Service layer methods should be stateless. They should have minimal requirements, if any, on the calling context.
  • The service layer should not contain any vendor-specific code. The service layer contains the core value of your business processes; you don’t want to have it mingled with any specific vendor, even one you trust, such as a SQL Database store.


The business layer of Multi-Lingual Chat is very simple. Feel free to browse the code in ChatHub and ChatAPIController.

Data Access Layer

Traditionally the Data Access Layer is only responsible for accessing persistent storage such as a relational database. Nowadays the Data Access Layer also needs to call out to other services, such as the Windows Azure Table Storage Service, to persist data. The Data Access Layer is often wrapped as one or more repositories.


  • Repositories return business entities, which should not expose vendor-specific details.
  • Repositories should be self-contained. They should not be aware of other repositories, nor should they care about how they are consumed.

(Negative) Sample

The sample I can provide here is a negative one: MessageTranslationEntity. This entity contains a partition key and a row key, which are specific to Windows Azure Table Storage. My repository returns these properties upwards, causing the view models to eventually become aware of the properties and start to use them as object identifiers. The mistake is baked into different layers, and it is hard to pull it out in a later phase. This is a good example of how adhering to design principles is important throughout construction. Some little mistakes will ripple through the system and cause more mistakes to be made.

Why is this mistake bad? Now my business logic, along with my view models, is hijacked by partition keys and row keys. This means in the future, when I need to switch to a different repository, which may have a different identifier system, I'll still need to cook up some sort of partition keys and row keys so as not to break all the code. This kind of code is hard to understand and hence hard to maintain. And when such code accumulates, the codebase becomes a mystery.

I intend to fix the code in future releases, so don’t be surprised if partition keys and row keys are removed; in that case you can use GitHub to trace back to the older versions to see the mistake.

Additional Notes

There’s a CachedDictionary class you may find interesting. It wraps Windows Azure Cache (Preview at the moment) and provides an IDictionary implementation so that you can easily maintain the state of multiple collections. The main benefits of the class are simplicity and support for concurrent access to collection members. A separate blog post will provide more details. Stay tuned!
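The post doesn’t show CachedDictionary’s implementation, but the behavior it describes — dictionary-style access over a shared cache, safe for concurrent use — might look roughly like the following. Everything here is a guess at the shape of the class, not its actual code:

```csharp
using System.Collections.Generic;

// Speculative sketch only. A real implementation would delegate to the
// Windows Azure Cache client instead of the in-memory dictionary used here.
class CachedDictionarySketch<TValue>
{
    private readonly object _sync = new object();
    private readonly Dictionary<string, TValue> _store = new Dictionary<string, TValue>();
    private readonly string _prefix;   // isolates one logical collection from another

    public CachedDictionarySketch(string collectionName)
    {
        _prefix = collectionName + ":";
    }

    public TValue this[string key]
    {
        get { lock (_sync) { return _store[_prefix + key]; } }
        set { lock (_sync) { _store[_prefix + key] = value; } }
    }

    public bool TryGetValue(string key, out TValue value)
    {
        lock (_sync) { return _store.TryGetValue(_prefix + key, out value); }
    }
}
```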

Vendor-specific Code

Work with a specific service provider.


  • All interactions with a specific vendor, and only those interactions, should be wrapped in a vendor-specific component.
  • Be ready for external services to become unavailable. It’s usually a good idea to provide a degraded level of service when an outside service is down.
  • It’s usually not a good idea to expose vendor API signatures directly. Instead, define methods that are most convenient and relevant to upper-level code.


BingServiceClient wraps all access to the Bing APIs. Examine the GetLanguagesForTranslate() method:

catch (WebException)
{
    return new List<string>(new string[] { "en" });
}
When the service call fails, the method returns English as the only available language, which essentially turns off the translation feature. Users lose translation capability in this case, but they can still use the application without problems.

Multi-lingual Chat provides an example of the architecture of a typical interactive web application. The key to a successful architecture is proper separation of concerns, and as developers, it’s important to adhere to this principle to produce clearer, more maintainable code.

The FINANCIAL reported Quest Software Simplifies Email Migration for Customers With Windows Azure in a 9/1/2012 article:

imageQuest Software is simplifying how customers of all sizes move their email to the cloud with its email migration solution, Quest OnDemand Migration for Email.

As Microsoft reported, the solution is based on Windows Azure and helps customers migrate to Microsoft Office 365. By running the solution on Windows Azure, Quest was able to give companies large and small a fast, low-cost, low-risk way to migrate email to cloud environments.

imageQuest recognized that many customers wanted to migrate their email messaging from both on-premises infrastructures and other cloud solutions to Office 365 but had questions about the migration, noting that several email migration tools in the market were difficult to set up and required the deployment of on-site servers for large mailbox migrations. As a result, Quest created an email migration solution that could be delivered using a software-as-a-service deployment model with Windows Azure in which customers do not need to install or maintain any software for the move, helping save time and money.

“We wanted customers to gain instantaneous access to our migration product via any Web browser and start migrating users within minutes, not weeks,” said Ron Robbins, product manager, Migration Solutions at Quest Software.

imageBy hosting its solution on Windows Azure, Quest can scale to meet its customers’ needs — from small businesses with hundreds of mailboxes to large enterprises with tens of thousands of mailboxes — all while keeping its migration service affordable. Quest’s pay-as-you-go pricing model, which is enabled by Windows Azure’s pricing model, also gives customers the flexibility to migrate all their users at once or spread the investment over time.

To date, Quest has migrated tens of thousands of email seats to Office 365 using OnDemand Migration for Email, and does so with exceptional levels of performance and availability from Windows Azure. Microsoft’s cloud platform also gives Quest the ability to expand its global reach.

“We can expand our markets because Windows Azure has datacenters worldwide,” Robbins said. “Deploying our own SaaS infrastructure worldwide would be cost-prohibitive, but with Windows Azure we can reach new markets without infrastructure concerns.”

Brian Loesgen (@BrianLoesgen) reported BizTalk Server 2010R2 Image Available for Azure Virtual Machine on 8/30/2012:

imageIn case you missed it, the Windows Azure Virtual Machine Gallery now contains an image for BizTalk Server, letting you create an Azure Virtual Machine with BizTalk installed.


imageIt was super-easy to do. I had to run the BizTalk configuration wizard and turn on SQL Agent, but that’s all. In a few minutes, you can be up and running with a fully configured BizTalk Server running on a Windows Azure VM. This opens up MANY interesting use cases…

Kurt Mackie (@kurmac) reported Microsoft Releases BizTalk Server 2010 R2 CTP to Windows Azure in an 8/30/2012 article for 1105 Media’s Redmond Channel Partner:

imageMicrosoft on Wednesday issued a community technology preview (CTP) of its BizTalk Server 2010 R2 product, the company announced in a blog post.

Current Microsoft Technology Adoption Program (TAP) customers and prequalified users have been able to run the CTP natively in a computing environment since Tech-Ed in late June. However, what's new in Wednesday's announcement is that organizations that aren't TAP participants can now get their hands on the CTP by running it on Windows Azure.

imageMicrosoft is allowing broader access outside the TAP program, but the main catch is that non-TAP testers have to run BizTalk Server 2010 R2 CTP in a virtual machine using Windows Azure. In order for non-TAP testers to use the CTP, they need to set up a 90-day trial Windows Azure account. Once that's set up, they can use Windows Azure's management portal to create a virtual machine running the BizTalk Server 2010 R2 CTP. The steps to do that are outlined in this blog post.

Microsoft has been fairly quiet about BizTalk Server 2010 R2. Wednesday's blog posts follow eight months of silence from the BizTalk Server team blog, although the company did show some demos of BizTalk Server 2010 R2 in June during Microsoft's Tech-Ed events. A Tech-Ed North America session covered integration; a Tech-Ed Europe session by Karthik Bharathy, a Microsoft senior program manager on the BizTalk team, covered Microsoft's roadmap and using Windows Azure to run BizTalk Server 2010 R2.

BizTalk Server 2010 R2 will be released six months after Windows 8, according to Bharathy, which would mark it for release in late April 2013, as Windows 8 is scheduled for release on Oct. 26. Microsoft had originally planned on releasing the CTP in July, followed by a beta release in October, per Bharathy's presentation. However, that schedule seems to have slipped by a month.

Bharathy added that Microsoft is committed to releasing BizTalk Server for "years to come" and that Microsoft is enabling new Azure-based BizTalk scenarios for enterprise application integration (EAI) and electronic data interchange (EDI).

BizTalk also will be supported by Microsoft both on premises and in Azure, Bharathy promised. Customers told Microsoft that business-to-business operations are "more amenable to the cloud" while line-of-business assets "will always be on premises." Microsoft's conclusions from that feedback are that one approach doesn't suit all organizations, so it plans to support hosted, on premises and hybrid architectures for BizTalk Server.

Currently, there are about 12,000 BizTalk 2010 customers. Microsoft is committed to releasing a new BizTalk Server product every two to three years, Bharathy said.

Microsoft's announcement indicated that the CTP supports some of Microsoft's next wave of emerging products, as well as Windows Azure Active Directory Access Control authentication. The CTP works with Windows Server 2012 release candidate and Windows Server 2008 R2. It also supports SQL Server 2012 and SQL Server 2008 R2. Office 2010 is supported, but no information was supplied about future support for Office 2013, which is currently available as a customer preview release.

The CTP also adds support for the "latest" line-of-business versions, according to the announcement. It supports SAP 7.2, as well as Oracle DB 11.2 and EBS 12.1.

Microsoft also promised in its announcement that the CTP facilitates integration with services that use REST protocols. The blog claims that Microsoft's integration goes beyond just consuming REST services: the CTP now supports exposing REST services from BizTalk Server, too.

Microsoft plans to ship the ESB Toolkit as part of the core product, when released, according to Bharathy. Microsoft is also saying that Visual Studio 2012 release candidate supports BizTalk Server 2010 R2.

Microsoft describes BizTalk Server as an "integration and connectivity server." It can be used to tie together business processes that depend on disparate software solutions. It's middleware that acts like an enterprise service bus in service-oriented architecture scenarios.

Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Michael Washington (@ADefWebServer) described A Visual Studio LightSwitch Picture File Manager in a 9/1/2012 post to his Visual Studio LightSwitch Help Website:


Visual Studio LightSwitch allows you to create a picture file manager that uploads files to the server hard drive and displays them in the user interface. This article builds on the previous article: Saving Files To File System With LightSwitch (Uploading Files). The difference is that in this article the pictures uploaded are displayed and you have the ability to delete them.


To add pictures, we click the “+” button.


imageThis opens the Select File Dialog.

We use the Browse button to select a local file.


We click the OK button to upload the file.


The file will be uploaded and the picture will display.


To delete a file, we select the picture and click the “-“ button.


An “X” will show next to each picture selected for deletion.

When the Save button is clicked the pictures marked for deletion will be deleted.

WCF RIA Service


To enable the functionality to view and delete files, we use a WCF RIA Service.

You can find more information on creating WCF RIA Services at the following links:

In the WCF RIA Service, a class is created to pass the data between the service and LightSwitch:

public class FileRecord
{
    public string FileName { get; set; }
    public byte[] FileImage { get; set; }
}

Note that the image will be stored as a byte[]. When the WCF RIA Service is imported into LightSwitch we will change the field type in LightSwitch to Image.

The following code reads the files on the server’s hard drive and creates a collection of FileRecords:

    public FileDomainService()
    {
        _LocalFiles = new List<FileRecord>();
        string strFileDirectory = GetFileDirectory();
        EnsureDirectory(new System.IO.DirectoryInfo(strFileDirectory));
        DirectoryInfo objDirectoryInfo = new DirectoryInfo(strFileDirectory);
        // Get the files in the directory
        foreach (var fi in objDirectoryInfo.EnumerateFiles().OrderBy(x => x.Name))
        {
            FileRecord objFileRecord = new FileRecord();
            objFileRecord.FileName = fi.Name;
            // Load the image into a byte array
            string strPath = Path.Combine(strFileDirectory, fi.Name);
            using (FileStream sourceFile = new FileStream(strPath, FileMode.Open))
            {
                byte[] getContent = new byte[(int)sourceFile.Length];
                sourceFile.Read(getContent, 0, getContent.Length);
                objFileRecord.FileImage = getContent;
            }
            // Add the file to the collection
            _LocalFiles.Add(objFileRecord);
        }
    }
The following method will be called by LightSwitch to retrieve the FileRecords:

    [Query(IsDefault = true)]
    public IQueryable<FileRecord> GetFileRecords()
    {
        return _LocalFiles.AsQueryable();
    }

The following method will be called by LightSwitch to delete files:

    public void DeleteFile(FileRecord objFileRecord)
    {
        // Look for the file
        var objFile = (from LocalFiles in _LocalFiles
                       where LocalFiles.FileName == objFileRecord.FileName
                       select LocalFiles).FirstOrDefault();
        if (objFile != null)
        {
            string strFileDirectory = GetFileDirectory();
            File.Delete(Path.Combine(strFileDirectory, objFile.FileName));
        }
    }

The presence of a method whose name contains the word Delete and that takes a single FileRecord as a parameter is all LightSwitch needs to automatically enable the delete capability when the WCF RIA Service is imported into LightSwitch.


To import the service into LightSwitch we right-click on Data Sources, and we Add Data Source.


In the wizard we select WCF RIA Service.


We Add Reference.


We add a reference to our WCF RIA Service.


We can now select the service.

Note: If you do not see your service make sure your WCF RIA Service is ASP.NET 4.0 not ASP.NET 4.5 (go into properties in the WCF RIA Service to change it).


We select the Entity (table) and click Finish.


The table will show.


We must open the table and switch the FileImage property from Binary to Image (and save the changes).


We are now able to add the table to any LightSwitch screen and select an Image Viewer to display the pictures.

The ability to view and delete pictures is handled automatically by the WCF RIA Service.

Uploading pictures (and downloading) is covered in the following articles:

Advanced File Management

This example is suitable for a small amount of files. It reads all the files in the Files directory each time. If you have a lot of files, it is better to simply store the file names in the database when they are uploaded and display a list of the files from the database. The WCF RIA Service can still be used to display the pictures but it would get its list of files from the database rather than reading the list of files directly from the file system. The article Help Desk: An Advanced Visual Studio LightSwitch Application has an example of this.

Download Code

The LightSwitch project is available at

(note: When you deploy the application, you must give permission to the web server process to access the files on the file system)

The Visual Studio LightSwitch Team (@VSLightSwitch) posted Concurrency Handling in LightSwitch to MSDN’s LightSwitch topic in 8/2012:

LightSwitch automatically detects concurrency conflicts, but you can improve the performance of an application in some cases by understanding how LightSwitch handles that process.

Problems can occur in applications when multiple users are allowed to edit the same record at the same time. Applications that detect concurrency conflicts first determine whether another user has changed a record, and they handle any conflicting values appropriately.

Concurrency in LightSwitch

LightSwitch uses the OData protocol for communication between the client and server. The OData protocol uses the ETag part of the HTTP protocol to detect concurrency conflicts. Every property that will be used to detect concurrency conflicts has its original value serialized into the ETag value when the item is read. This value is sent to the client application along with all other values of the entity that's being read. A client that tries an update will submit the ETag value along with the updated property values to the server. The server will verify whether the ETag still matches the original value. If the value has changed, the update is rejected, and the application must handle the conflict.
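The check the server performs can be illustrated with a small, self-contained sketch (not LightSwitch code; the version counter here stands in for the serialized ETag value):

```csharp
// Minimal optimistic-concurrency sketch: the client echoes back the ETag
// it received when it read the record; the server rejects the update if
// the stored version has moved on since then.
class Record
{
    public string Name;
    public long Version;   // stands in for the ETag value
}

static bool TryUpdate(Record stored, long clientETag, string newName)
{
    if (stored.Version != clientETag)
        return false;      // conflict: another client updated the record first
    stored.Name = newName;
    stored.Version++;      // readers after this point get a new ETag
    return true;
}
```

If TryUpdate returns false, the application must handle the conflict, typically by re-reading the record and retrying or surfacing the conflict to the user.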

LightSwitch handles conflicts slightly differently depending on what data source is being used.

Intrinsic Data Source

When you create a table in the LightSwitch intrinsic database, a generated column named RowVersion is added to your table. The RowVersion column uses the rowversion data type in SQL Server and is automatically updated each time any other column in the record is updated. Rather than serializing all column values into an ETag, only the 8-byte value in RowVersion is used to detect conflicts. This strategy helps improve performance by reducing the amount of data that's sent to the server.

You can’t display the RowVersion column in the Entity Designer, but that column appears in the Screen Designer. You can write code for the RowVersion column by using the RowVersion_Changed, RowVersion_IsReadOnly, and RowVersion_Validate methods.

You can also create a query in the Query Designer by using RowVersion. For example, you might want to detect whether a record has been changed since the most recent time that it was read. To detect such changes, you can create a query with Id and RowVersion parameters. You can pass in a record’s Id and current RowVersion values into the parameters of this query. If no record is returned, the record has been modified or deleted. If a record is returned, the record hasn't changed in the database.

Attached Data Source

When you connect to an existing database in LightSwitch and a rowversion column exists in a table, LightSwitch uses that rowversion column to detect concurrency conflicts. If you attach to an external database that doesn’t contain a rowversion column, Visual Studio LightSwitch uses all available columns to detect concurrency conflicts. The latter strategy could result in poor performance, especially with large sets of data.

We recommend that you add a rowversion column to external database tables, which you can do by using SQL Server Management Studio or the SQL Server Object Explorer in Visual Studio.
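For example, assuming a hypothetical table named dbo.Customers, the recommended column can be added with a single statement; SQL Server then maintains its value automatically on every update, with no application code required:

```sql
ALTER TABLE dbo.Customers ADD RowVersion rowversion;
```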

WCF RIA Service Data Source

When you attach to a Windows Communication Foundation (WCF) Rich Internet Application (RIA) data source, LightSwitch uses one of three attributes of WCF RIA Services to determine whether a property should be used to detect concurrency conflicts: TimestampAttribute, ConcurrencyCheckAttribute, or RoundTripOriginalAttribute. Any property that's marked with one of these attributes on your WCF RIA entity is used to detect concurrency conflicts. If the entity doesn’t have any of these attributes on its properties, all properties are used to detect concurrency conflicts. The latter strategy could result in poor performance, especially with large sets of data.

See Also

Other Resources

See also Eric Erhardt's Concurrency Enhancements in Visual Studio LightSwitch 2012 article of 7/10/2012.

Andrew Lader of the Visual Studio LightSwitch Team (@VSLightSwitch) reported Publishing LightSwitch Apps to Azure with Visual Studio 2012–UPDATED on 8/31/2012:

image_thumb1The recent release of LightSwitch for Visual Studio 2012 included some updates around publishing to Azure. Most importantly, we have made it even easier to publish your LightSwitch applications to Azure Web Sites by improving the publish wizard. For more information on these updates, and a step-by-step walkthrough on publishing to Azure, check out the recently updated blog post: Publishing LightSwitch Apps to Azure with Visual Studio 2012. [See article below.]

The Visual Studio LightSwitch Team (@VSLightSwitch) updated Publishing LightSwitch Apps to Azure with Visual Studio 2012 on 8/29/2012:

UPDATED 8/29/2012: Updated for Visual Studio 2012 and Azure SDK 1.7.1

imageWith the release of Visual Studio 2012, we’ve improved the experience for publishing LightSwitch applications to Windows Azure. The Azure deployment support for LightSwitch in Visual Studio 2012 depends on the Windows Azure SDK for .NET - June 2012 SP1, which was released August 15th. For more information on the release, please visit Jason and Scott’s blogs.

image_thumb1In V1 of LightSwitch we made publishing LightSwitch applications to Azure possible – in Visual Studio 2012 we’ve made it easy. If you’ve ever published a LightSwitch application to Azure you know it involves marshaling GUIDs and certificates between the Azure portal and the LightSwitch development environment in what could be best described as a “Version 1.0 Experience”. But this post is not about V1 of LightSwitch, so I won’t go into details there and will move right into what’s new in this release.

To start, the LightSwitch application you create doesn’t need to know anything about Azure until it’s time to publish. You can develop and test your application in the development environment and even publish to IIS before you make the decision to publish to Azure. When you’re ready to move to the cloud you simply sign up for your account and make the choice during publishing.

IMPORTANT: The latest release of the Azure SDK contains improvements to LightSwitch’s publish wizard making it even easier to publish to Azure Web Sites. Now more than ever, you may want to sign-up for this option as it is the perfect choice for many LightSwitch applications.

Let’s get started!

The first thing you will notice in Visual Studio 2012 is that the publishing functionality for Azure will first require downloading and installing the necessary components for Azure publishing. Doing so enables LightSwitch to always provide the latest features as Windows Azure evolves.

Installing the Windows Azure SDK for .NET

So, the first thing to do is download and install the Azure SDK. When you select Windows Azure as your publishing target, LightSwitch will walk you through the installation of the WPI feed that contains the Visual Studio components required to publish to Azure. Simply install the feed, restart Visual Studio and the new functionality is available.


If you already have the latest version of the Web Platform Installer you’ll be directed right to the SDK to install. If not, then you will be prompted to install the latest version of WPI before installing the Windows Azure SDK for .NET.

If you want to manually install the Windows Azure SDK for .NET then just select it in the Spotlight section or search for it in WPI. If you go the manual route, be sure to select the SDK feed for Visual Studio 2012. If you go the automatic route, it will be chosen for you.


Once installed, just go back to VS and you’re ready to go.

Importing your settings

The first improvement we’ve made is to enable downloading and importing settings for managing your Azure subscriptions – this removes the need to copy/paste information between the Azure portal and VS. It also keeps the experience in-line with cloud publishing throughout Visual Studio. The settings you use to manage your Azure subscriptions are available to LightSwitch and Azure cloud projects alike.

If you don’t see your settings listed, simply 1) click the link to sign-in to Azure and download your settings and then 2) import those settings into the publish wizard.


NOTE: the file contains your subscription information as well as a management certificate that will allow VS to manage your account. Be sure to secure this file or delete it after you import it.

Once imported, these settings are saved, encrypted, in your profile. Other users in VS cannot see or use these credentials. In a shared development environment you may want to share the settings file with other users or have them create their own by following the steps above.

Choice of Service Type

Once your Azure credentials have been chosen, the next step is to choose the service type. This is a new addition to the publish wizard, and gives you the choice to publish to an Azure Web Site or to a Cloud Service.


How to follow the steps

This article is organized to cover the choice you make here in the following way:

  1. The next two sections cover the different paths the publish wizard will take based on whether you are publishing to a web site or a cloud service.
    1. If you are publishing to a web site, then you will want to continue with the next section, Publish to an Azure Web Site.
    2. If on the other hand, you are publishing to a cloud service, skip down to the section entitled Publish to an Azure Cloud Service.
  2. In either case, the publish wizard converges again when you get to the Security Settings section of the wizard. Both the Security Settings and the Data Connections portions of the wizard are the same regardless of the choice you make here. The section entitled Securing your application is where the two pathways come together.
Publish to an Azure Web Site

If you chose the first option, publish to an Azure Web Site, then you need to make sure you have one provisioned first. Azure Web Sites are the perfect pairing for hosting a LightSwitch application in the cloud. Creating a site and a database couldn’t be easier in the new portal, and you get all the benefits of the Windows Azure platform. Web sites are a good choice for getting a LightSwitch application to the cloud quickly and easily. Use this type of deployment when you don’t need the advanced features of a cloud service web role, like Remote Desktop, IntelliTrace, etc. Web sites are to Azure what LightSwitch is to Visual Studio.

So, if you haven’t done so yet, you will need to provision a web site. To begin the process, click the link Learn more about Windows Azure Web Sites on the publish wizard.

You can sign-up or add it to your existing Azure account. Once you’ve signed up, the only thing you need to do is create the web site. Typically, you will want to provision a database with your web site.

Creating a web site

Using the new Windows Azure portal, the first step is to create the site along with a database. You can use an existing database if you like and even link that database to the new web site. Linking the site and database will put everything you need into a single publish profile that you can download and import into LightSwitch.


When creating or selecting a database you must be sure to open the firewall on your database to let LightSwitch publish to that database remotely. By default, your Azure database will not allow for remote management except by Windows Azure. The simplest way to allow remote management of your database from your IP address is to click “manage” within the Azure portal from the computer that will publish to the web site. Windows Azure will ask you if you want to enable management of the database from your current IP.


Back to the LightSwitch Publish Wizard

So now that you have provisioned an Azure Web Site, return to the publish wizard and select the option to publish to an Azure Web Site.


Service Configuration

Having chosen to publish to a web site, you must now choose which web site will host your LightSwitch application.


This can be the web site you just created in the previous step, or one that already existed. If you just created one, make sure to click the “Refresh” button to populate the drop down with the newly provisioned site.

Next Steps

The next steps in the wizard are to configure the security and database settings. Since you are publishing to Azure Web Sites, you can skip over “Publish to an Azure Cloud Service” and jump down to the section called Securing your application.

Publish to an Azure Cloud Service

To publish to an Azure Cloud Service, select the “Cloud Service” radio button.


Selecting this option allows you to publish to an Azure Cloud Service just as you have done in the past.

Service Configuration

Once you make the decision to publish to a cloud service, you’ll be asked to select the cloud service that will host your application.


If you don’t want to publish to one of your existing services in the list, you can create a new one from the wizard without having to go to the portal. Select the environment (staging or production), and you can continue to the next step. Or… you can change other settings to further customize your deployment.

Enabling Remote Desktop

If you want to enable remote desktop on your deployed application, simply check the box and provide a username and password for the remote desktop user. You can also specify a certificate and an expiration date if needed.

Advanced Configuration

Advanced configuration options are optional settings you may use as the number of your Azure deployments grows. You can customize the name of the deployment to distinguish it from others and even include a time stamp automatically each time the application is deployed.

You can also select a storage account used and, as with other Azure artifacts, create a new one if needed.

Note: if you create a new service or storage account, be sure to collocate them as appropriate. For example, if your service is hosted in the East US region, you should use a storage account hosted in the East US region as well.

The final option is to use an “upgrade” deployment method. Instead of provisioning the role from scratch, this option will simply update a LightSwitch application that’s already in place. This greatly reduces the time to availability of the service, but of course, this is only appropriate for upgrading applications because there is nothing to upgrade on a new application.


Now on to the next step, securing your application.

Securing your application

The next step in the wizard allows you to secure your application in the cloud. If your application uses authentication (strongly recommended) you need to specify the username of your application’s administrator.


The next step is to configure your HTTPS settings for your publish. How you configure this option will depend on the service type you chose earlier (web site or cloud service).

HTTPS Security Settings when publishing to an Azure Web Site

If you selected to publish to an Azure Web Site, then you will be given the option to choose whether HTTPS is required for your application. It is recommended that you use HTTPS for your LightSwitch applications as the data sent back and forth will not be encrypted otherwise.


If you choose HTTPS here, then you must configure your web site to require a secure connection.

HTTPS Security Settings when publishing to an Azure Cloud Service

If you chose to publish to a cloud service, the next step is to specify a certificate to use to secure your application over HTTPS. If you already have a certificate you can upload it to Windows Azure from the wizard, or if you don’t have one you can have the wizard create a temporary certificate for you, useful for staging environments.


Digital Signature

The final step in securing your application is to specify a certificate to sign your application – so that when updates to your application are published, your users know the updates are from you. This step is optional in case you want to wait until you’re ready to roll out your application.


Database connections

The final step in the publishing process is to specify the database or databases used by your application. By default you only need to specify your intrinsic database; if your application uses external data sources, you’ll have the option to change those here as well.

Another new option, though not unique to Azure publishing, is the option to not deploy the database. If you haven’t made changes to the database and want to leave your data intact, you can skip the database publishing step by unchecking the box below. Note, however, that if you have made changes to your database and “forget” this step, your application will deploy but not run. If you’re unsure, just leave the box checked and LightSwitch will take care of the necessary changes.



The last page in the wizard lets you review your information to let you do one last check before publishing. Also, the next time you come to the publish wizard you’ll jump straight to this page since all of your information is saved along with your project for future use.


Once you publish, you’ll see LightSwitch working for a while to build and publish your application, but you can go back to work once the publish is complete. If this is the first time you’ve published the application (or are not doing an upgrade) then Azure will continue to provision your role and application after LightSwitch has finished. You can monitor that progress in the portal and access your application once finished.

That’s it!

As you can see we made a lot of enhancements to the Azure publishing experience in LightSwitch so it is much easier to get your LightSwitch apps deployed to the cloud. Let us know what you think by adding a comment below. If you run into issues please post a question in the LightSwitch Forum – the team is there to help.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

‡ My (@rogerjenn) Uptime Report for my Live OakLeaf Systems Azure Table Services Sample Project: August 2012 = 99.92 % of 9/3/2012 begins:

My (@rogerjenn) live OakLeaf Systems Azure Table Services Sample Project demo runs two small Windows Azure Web role instances from Microsoft’s South Central US (San Antonio, TX) data center. This report now contains more than a full year of uptime data.

Here’s Pingdom’s Monthly Report for August 2012:


and continues with detailed uptime and response time graphs, as well as a description of the live OakLeaf Systems Azure Table Services Sample Project demo.

•• Herve Roggero (@hroggero) asserted Cloud Computing Forces Better Design Practices in a 9/2/2012 post:

Is cloud computing simply different than on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems, just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence produce better applications.

So the first thing to define, of course, is the word “better”, in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal.

Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables.

Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it’s not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won’t scale even if you place the service in the cloud because it isn’t using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn’t designed to scale for the purpose of this discussion]. The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough).
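To make the contrast concrete, here is a minimal, hypothetical sketch (not code from any service discussed above): a stateless, REST-style handler computes its response entirely from the request it receives and keeps no per-client session, which is the property that lets it be replicated across hundreds of machines.

```python
# Illustrative sketch: a stateless, REST-style handler. Every request carries
# all the context the server needs; no per-client session survives the call.

def handle_request(request: dict) -> dict:
    """Compute a response purely from the request contents."""
    # No session lookup, no connection state -- any server in a pool
    # of hundreds can answer this request identically.
    items = request.get("items", [])
    return {
        "status": 200,
        "count": len(items),
        "total": sum(items),
    }

# Because the handler is a pure function of its input, two different
# servers (or the same server at different times) return identical results.
r1 = handle_request({"items": [1, 2, 3]})
r2 = handle_request({"items": [1, 2, 3]})
assert r1 == r2 == {"status": 200, "count": 3, "total": 6}
```

A session-oriented TCP service would instead keep per-connection state on the server, which is faster for the next request from the same client but ties that client to a specific server and holds resources between requests.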

The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different than a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality.

Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better…

… the cloud forces better design practices.

My 2 cents.

Lori MacVittie (@lmacvittie) asserted “Adopting a cloud-oriented business model for IT is imperative to successfully transforming the data center to realize ITaaS” in an introduction to her Cloud isn’t Social, it’s Business post of 8/31/2012 to F5’s DevCentral blog:

Much like devops is more about a culture shift than the technology enabling it, cloud is as much or more about shifts in business models as it is technology. Even as service providers (that includes cloud providers) need to look toward a business model based on revenue per application (as opposed to revenue per user) enterprise organizations need to look hard at their business model as they begin to move toward a more cloud-oriented deployment model.

While many IT organizations have long since adopted a “service oriented” approach, this approach has focused on the customer, i.e. a department, a business unit, a project. This approach is not wholly compatible with a cloud-based approach, as the “tenant” of most enterprise (private) cloud implementations is an application, not a business entity. As a “provider of services”, IT should consider adopting a more service provider business model view, with subscribers mapping to applications and services mapping to infrastructure services such as rate shaping, caching, access control, and optimization.

By segmenting IT into services, IT can not only more effectively transition toward the goal of ITaaS, but realize additional benefits for both business and operations.

A service subscription business model:

  • Makes it easier to project costs across entire infrastructure
    Because functionality is provisioned as services, it can more easily be charged for on a pay-per-use model. Business stakeholders can clearly estimate the costs based on usage for not just application infrastructure, but network infrastructure, as well, providing management and executives with a clearer view of what actual operating costs are for given projects, and enabling them to essentially line item veto services based on projected value added to the business by the project.
  • Easier to justify cost of infrastructure
    Having a detailed set of usage metrics over time makes it easier to justify investment in upgrades or new infrastructure, as it clearly shows how cost is shared across operations and the business. Being able to project usage by applications means being able to tie services to projects in earlier phases and clearly show value added to management. Such metrics also make it easier to calculate the cost per transaction (the overhead, which ultimately reduces profit margins) so that business can understand what’s working and what’s not.
  • Enables business to manage costs over time
    Instituting a “fee per hour” enables business customers greater flexibility in costing, as some applications may only use services during business hours and only require them to be active during that time. IT that adopts such a business model will not only encourage business stakeholders to take advantage of such functionality, but will offer more awareness of the costs associated with infrastructure services and enable stakeholders to be more critical of what’s really needed versus what’s not.
  • Easier to start up a project/application and ramp up over time as associated revenue increases
    Projects assigned limited budgets that project revenue gains over time can ramp up services that enhance performance or delivery options as revenue increases, more in line with how green field start-up projects manage growth. If IT operations is service-based, then projects can rely on IT for service deployment in an agile fashion, adding new services rapidly to keep up with demand or, if predictions fail to come to fruition, removing services to keep the project in line with budgets.
  • Enables consistent comparison with off-premise cloud computing
    A service-subscription model also provides a more compatible business model for migrating workloads to off-premise cloud environments – and vice-versa. By tying applications to services – not solutions – the end result is a better view of the financial costs (or savings) of migrating outward or inward, as costs can be more accurately determined based on services required.
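As a back-of-the-envelope sketch of the "fee per hour" idea in the list above (the service names and hourly rates below are hypothetical, purely for illustration):

```python
# Hypothetical per-hour rates for infrastructure services (illustrative only;
# these are not actual prices from any provider).
HOURLY_RATES = {
    "rate_shaping": 0.02,
    "caching": 0.05,
    "access_control": 0.01,
}

def monthly_cost(hours_used: dict) -> float:
    """Project a month's charge from per-service hours consumed."""
    return sum(HOURLY_RATES[svc] * hrs for svc, hrs in hours_used.items())

# An application that only needs caching during business hours
# (roughly 8 hours x 22 business days = 176 hours) pays far less than
# one that keeps the service provisioned around the clock (~720 hours).
business_hours = monthly_cost({"caching": 176})
always_on = monthly_cost({"caching": 720})
assert business_hours < always_on
```

The point is simply that metering by the hour makes the business-hours-only scenario visibly cheaper, which is exactly the cost awareness the subscription model is meant to create.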

The concept remains the same as it did in 2009: infrastructure as a service gives business and application stakeholders the ability to provision and eliminate services rapidly in response to budgetary constraints as well as demand.

That’s cloud, in a nutshell, from a technological point of view. While IT has grasped the advantages of such technology and its promised benefits in terms of efficiency, it hasn’t necessarily taken the next step and realized the business model has a great deal to offer IT as well.

One of the more common complaints about IT is its inability to prove its value to the business. Taking a service-oriented approach to the business and tying those services to applications allows IT to prove its value and costs very clearly through usage metrics. Whether actual charges are incurred or not is not necessarily the point; it’s the ability to clearly associate specific costs with delivering specific applications that makes the model a boon for IT.

Richard Seroter (@rseroter, pictured below) posted a new episode of his Interview Series: Four Questions With … Shan McArthur on 8/31/2012:

Welcome to the 42nd interview in my series of talks with thought leaders in the “connected systems” space. This month, we have Shan McArthur who is the Vice President of Technology for software company Adxstudio, a Microsoft MVP for Dynamics CRM, blogger and Windows Azure enthusiast. You can find him on Twitter as @Shan_McArthur.

Q: Microsoft recently injected themselves into the Infrastructure-as-a-Service (IaaS) market with the new Windows Azure Virtual Machines. Do you think that this is Microsoft’s way of admitting that a PaaS-only approach is difficult at this time or was there another major incentive to offer this service?

A: The Azure PaaS offering was only suitable for a small subset of workloads. It really delivered on the ability to dynamically scale web and worker roles in your solution, but it did this at the cost of requiring developers to rewrite their applications or design them specifically for the Azure PaaS model. The PaaS-only model did nothing for infrastructure migration, nor did it help the non-web/worker role workloads. Most business systems today are made from a number of different application tiers and not all of those tiers are suited to a PaaS model. I have been advocating for many years that Microsoft must also give us a strong virtual machine environment. I just wish they gave it to us three years ago.

As for incentives, I believe it is simple economics – there are significantly more people interested in moving many different workloads to Windows Azure Virtual Machines than developers that are building the next Facebook/twitter/yammer/foursquare website. Enterprises want more agility in their infrastructure. Medium sized businesses want to have a disaster recovery (DR) environment hosted in the cloud. Developers want to innovate in the cloud (and outside of IT interference) before deploying apps to on-prem or making capital commitments. There are many other workloads like SharePoint, CRM, build environments, and more that demand a strong virtual machine environment in Azure. In the process of delivering a great virtual machine environment, Microsoft will have increased their overall Azure revenue as well as gaining relevant mindshare with customers. If they had not given us virtual machines, they would not survive in the long run in the cloud market as all of their primary competitors have had virtual machines for quite some time and have been eating into Microsoft’s revenue opportunities.

Q: Do you think that customers will take applications originally targeted at the Windows Azure Cloud Services (PaaS) environment and deploy them to Windows Azure Virtual Machines instead? What do you think are the core scenarios for customers who are evaluating this IaaS offering?

A: I have done some of that myself, but only for some workloads that make sense. An Azure virtual machine will give you higher density for websites and a mix of workloads. For things like web roles that are already working fine on Azure and have a 2-plus instance requirement, I think those roles will stay right where they are – in PaaS. For roles like back-end processes, databases, CRM, document management, email/SMS, and other workloads, these will be easier to add in a virtual machine than in the PaaS model and will naturally gravitate to that. Most on-premise software today has a heavy dependency on Active Directory, and again, an Azure Virtual Machine is the easiest way to achieve that. I think that in the long run, most ‘applications’ that are running in Windows Azure will have a mix of PaaS and virtual machines. As the market matures and ISV software starts supporting claims with less dependency on Active Directory, and builds their applications for direct deployment into Windows Azure, then this may change a bit, but for the foreseeable future, infrastructure as a service is here to stay.

That said, I see a lot of the traditional PaaS websites migrating to Windows Azure Web Sites. Web sites have the higher density (and a better pricing model) that will enable customers to use Azure more efficiently (from a cost standpoint). It will also increase the number of sites that are hosted in Azure, as most small websites were financially infeasible to move to Windows Azure prior to the WaWs feature. For me, I compare the 30-45 minutes it takes me to deploy an update to an existing Azure PaaS site to the 1-2 minutes it takes to deploy to WaWs. When you are building a lot of sites, this time really makes a significant impact on developer productivity! I can now deploy to Windows Azure without even having the Azure SDK installed on my developer machine.

As for myself, this spring wave of Azure features has really changed how I engage customers in pre-sales. I now have a number of virtual disk images of my standard demo/engagement environments, and I can now stand up a complete presales demo environment in less than 10 minutes. This compares to the day of effort I used to stand up similar environments using CRM Online and Azure cloud services. And now I can turn them off after a meeting, dispose of them at will, or resurrect them as I need them again. I never had this agility before and have become completely addicted to it.

Q: Your company has significant expertise in the CRM space and, specifically, the on-premises and cloud versions of Dynamics CRM. How do you help customers decide where to put their line-of-business applications, and what are your most effective ways for integrating applications that may be hosted by different providers?

A: Microsoft did a great job of ensuring that CRM Online and on-premise had the same application functionality. This allows me to advise my customers that they can choose the hosting environment that best meets their requirements or their values. Some things that are considered are the effort of maintenance, bandwidth and performance, control of service maintenance windows, SLAs, data residency, and licensing models. It basically boils down to CRM Online being a shared service – this is great for customers that would prefer low cost to guaranteed performance levels, that prefer someone else maintain and operate the service versus them picking their own maintenance windows and doing it themselves, ones that don’t have concerns about their data being outside of their network versus ones that need to audit their systems from top to bottom, and ones that would prefer to rent their software versus purchasing it. The new Windows Azure Virtual Machines feature now gives us the ability to install CRM in Windows Azure – running it in the cloud but on dedicated hardware. This introduces some new options for customers to consider as this is a hybrid cloud/on-premise solution.

As for integration, all integration with CRM is done through the web services, and those services are consistent in all environments (online and on-premise). This really has enabled us to integrate with any CRM environment, regardless of where it is hosted. Integrating applications that are hosted by different application providers is still fairly difficult. The most difficult part is to get those independent providers to agree on a single authentication model. Claims and federation are making great strides, and REST and OAuth are growing quickly. That said, it is still rather rare to see two ISVs building to the same model. Where it is more prevalent is with the larger vendors like Facebook that publish an SDK that everyone builds towards. This is going to be a temporary problem, as more vendors start to embrace REST and OAuth. Once two applications have a common security model (at least an identity model), it is easy for them to build deep integrations between the two systems. Take a good long hard look at where Office 2013 is going with their integration story…

Q [stupid question]: I used to work with a fellow who hated peanut butter. I had trouble understanding this. I figured that everyone loved peanut butter. What foods do you think have the most even, and uneven, splits of people who love and hate it? I’d suspect that the most even love/hate splits are specific vegetables (sweet potatoes, yuck) and the most uneven splits are universally loved foods like strawberries. Thoughts?

A: Chunky or smooth? I have always wondered if our personal tastes are influenced by the unique varieties of how each of our brains and sensors (eyes, hearing, smell, taste) are wired up. Although I could never prove it, I would bet that I would sense the taste of peanut butter differently than someone else, and perhaps those differences in how they are perceived by the brain has a very significant impact on whether or not we like something. But that said, I would assume that the people that have a deadly allergy to peanut butter would prefer to stay away from it no matter how they perceived the taste! That said, for myself I have found that the way food is prepared has a significant impact on whether or not I like it. I grew up eating a lot of tough meat that I really did not enjoy eating, but now I smoke my meat and prefer it more than my traditional favorites.

Good stuff, Shan, thanks for the insight!

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today.

<Return to section navigation list>

Cloud Security and Governance

Chris Hoff (@Beaker) posted SiliconAngle Cube: Hoff On Security – Live At VMworld 2012 on 8/31/2012:

I was thrilled to be invited back to the SiliconAngle Cube at VMworld 2012, where John Furrier, Dave Vellante and I spoke in depth about security, virtualization and software defined networking (SDN). [Link to Hoff’s interview added.]

I really like the way the chat turned out — high octane, fast pace and some great questions!

Here is the amazing full list of speakers during the event. Check it out, ESPECIALLY Martin Casado’s talk.

As I told him, I think he is like my Obi Wan…my only hope for convincing my friends at VMware that networking and security require more attention and a real embrace of the ecosystem…

I’d love to hear your feedback on the video.

David Canellos posted Gartner Highlights the Trough of Disillusionment for “Cloud-Based Security Services” – When will Market Expectations be Realized? to the PerspecSys blog on 8/29/2012:

Gartner recently issued a research note on “Cloud-Based Security Services,” the capability to deliver security controls without on-premise technology deployment and management. The category currently sits in the “Trough of Disillusionment” in the latest Gartner Hype Cycle for Cloud Security, meaning the technologies offered have failed to meet market expectations.

Gartner suggests that users investigate specific areas in their cloud implementations and SLAs, including service continuity, response time, and customization to ensure they deliver on the expected outcomes. A key design goal of the PerspecSys Cloud Data Protection Gateway was to offer multiple deployment options, including cloud data protection as a service. Our validated Infrastructure as a Service (IaaS) hosting partners are well-equipped and experienced in supporting enterprise mission-critical business requirements. Our IaaS partners have a long history of providing a variety of enterprise services – including security – without any on-premise infrastructure, and they do not compromise on service continuity, response time, or user customization. Our cloud data protection virtual service is underpinned through our partners’ SLAs, consistent with what enterprise companies have come to expect.

Users must also consider that changing requirements are now mandatory in today’s business environment, and this can create risk in cloud security implementations. By working with a variety of IaaS partners, PerspecSys offers a utility service in which cloud data protection is consumed on an as-needed basis, and vendor lock-in is reduced. Our solution is built using standards-based technologies (Linux, Java, Oracle, mainstream encryption ciphers, and FIPS 140-2 validation), and it supports multiple clouds as a platform to ensure customers are not locked-in to single cloud data protection controls, IaaS partners, or cloud applications.

Enterprises that opt for a “Cloud-Based Security Services” deployment model like PerspecSys offers are generally in verticals driven by industry compliance and directives. These organizations have to address regulatory requirements for securing sensitive business data (GLBA, HIPAA & HITECH, PCI DSS, ITAR, etc.). For data residency needs, although many enterprises opt for PerspecSys as a service, others will opt for an on-premise deployment to keep sensitive customer data behind the firewall in the data center for complete control.

PerspecSys figures prominently as a cloud access security broker in the space Gartner has coined “Cloud Encryption Gateways.” And we believe that as these cloud security innovations continue the rapid adoption towards mainstream acceptance, market expectations of security controls via “Cloud-Based Security Services” will also be realized.

<Return to section navigation list>

Cloud Computing Events

•• The New England Mobile .NET Developers group announced on 9/1/2012 that Mike Bluestein (@mikebluestein) will present a session about developing for Android devices in C# at the Microsoft New England R&D Center, One Memorial Drive, Cambridge, MA 02142 on 9/10/2012 from 6:00 to 8:00 PM:

This session will discuss how to develop services in C# using Mono for Android. We’ll cover the basics of how Android services work and examine the various scenarios that services are used for when developing Android applications. Although some knowledge of Mono for Android will be helpful, we’ll introduce enough of the basics such that any C# developer should be able to follow along, even those without Android experience.

Add to Calendar

Register Now

Another case of folks who can’t wait until the Windows Azure Mobile for Android preview arrives.

Himanshu Singh (@himanshuks) posted Windows Azure Community News Roundup (Edition #34) on 8/31/2012:

Welcome to the latest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure. Here are the highlights for this week.

Articles and Blog Posts

Upcoming Events and User Group Meetings

North America


Rest of World/Virtual

Recent Windows Azure MSDN Forums Discussion Threads

Recent Windows Azure Discussions on Stack Overflow

Send us articles that you’d like us to highlight, or content of your own that you’d like to share. And let us know about any local events, groups or activities that you think we should tell the rest of the Windows Azure community about. You can use the comments section below, or talk to us on Twitter @WindowsAzure.

Check out Chris Risner’s detailed Windows Azure Mobile Services and iOS post if you can’t wait for the Windows Azure Team’s implementation.

BusinessWire reported Microsoft & CoCentrix to Lead Discussion on Using “Big Data” & Cloud Computing to Optimize Behavioral Health & Human Service Organizations on 10/17/2012 in an 8/31/2012 press release:

Applying “big data” in the health and human service field is the focus of a cutting-edge keynote session scheduled to close day one of the 2012 OPEN MINDS Technology and Informatics Institute on Wednesday, October 17th, 2012. This highly-anticipated panel session, led by industry thought leaders at Microsoft Systems, CoCentrix, and OPEN MINDS, will explore how big data and cloud computing can be utilized to lower costs and ensure interoperability within a structure of care coordination and case management.

“The health and human service field has a great deficit in tools that facilitate standardized practices and better decision support,” said OPEN MINDS Chief Executive Officer, Monica E. Oss. “But, until recently, these tools were neither easy to use on a large scale nor cost effective. The growing use of cloud computing has made analysis of large data sets and improved decision support a reality in many systems. This session will be an eye-opener for industry executives nationwide.”

“Using Cloud Computing & Big Data For Better Care Management Decisions,” which will include perspectives from leading payer and provider organizations currently using “big data,” will also examine how the powerful technology combination is being used nationwide to improve consumer care through facilitation of case manager decision support – providing cost efficiency, risk management, and data protection.

“The need to coordinate care across agencies and providers has never been greater,” said Leigh Orlov, President of CoCentrix, Inc. “As the amount of data continues to grow at such incredible rates, leveraging the cloud provides the most cost effective and secure means to utilize this data to improve the quality of care and reduce costs.”

The 2012 OPEN MINDS Technology and Informatics Institute will be held on October 17-18, 2012 in Baltimore, Maryland. More information can be found online at:

OPEN MINDS is a national health and human service industry market intelligence, executive education, and management consultation firm. Founded in 1987 and based in Gettysburg, Pennsylvania, the award-winning organization provides innovative management solutions designed to improve operational and strategic performance. Learn more at

For additional questions and inquiries, please contact Tim Snyder, Vice President, Marketing, OPEN MINDS at 717-334-1329 or

The Microsoft “thought leader” is Kevin Dolan, Director, Health & Human Services, Microsoft Systems, Inc.:

Mr. Dolan initiated a new business unit for Microsoft focused on the largest agencies in resources and budget in state and local government — Health & Human Services. He has worked with a small team to build that business to more than $400M annually in Microsoft software revenues and many billions in partner services revenues. Making Microsoft enterprise software relevant in key solution areas such as Medicaid, Case Management, Business Intelligence, Health Information Exchange, Behavioral Health and Child Welfare, he built a global alliance of strategic partners committed to our vision of connected and consumer-centered service delivery using the full spectrum of Microsoft and 3rd-party products.

<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Jeff Barr (@jeffbarr) announced Amazon S3 - Cross Origin Resource Sharing Support in an 8/31/2012 post:

Good News

Here's the good news in a nutshell: Amazon S3 now supports Cross Origin Resource Sharing (aka CORS). The CORS specification gives you the ability to build web applications that make requests to domains other than the one which supplied the primary content.

You can use CORS support to build web applications that use JavaScript and HTML5 to interact directly with resources in Amazon S3 without the need for a proxy server. You can implement HTML5 drag and drop uploads to Amazon S3, show upload progress, or update content directly from your web applications. External web pages, style sheets, and HTML5 applications hosted in different domains can now reference assets such as web fonts and images stored in an S3 bucket, enabling you to share these assets across multiple web sites.

Read the new CORS documentation to learn more.


In order to keep your content safe, your web browser implements something called the same origin policy.

The default policy ensures that scripts and other active content loaded from one site or domain cannot interfere or interact with content from another location without an explicit indication that this is the desired behavior.

In certain cases, the developer of the original page might have legitimate reasons to write code that interacts with content or services at other locations. CORS provides the mechanism to allow the developer to tell the browser to allow this interaction.

You can configure any of your S3 buckets for cross-domain access through the AWS Management Console or the S3 API. You do this by adding one or more CORS rules to your bucket. Each rule can specify a domain that should have access to your bucket and a set of HTTP verbs you wish to allow (e.g. PUT). Here is a quick tour of the relevant parts of the console. There is a new Add CORS Configuration option in the property page for each bucket:

Clicking that option will display the CORS Configuration Editor:

We have included a number of sample CORS configurations in the S3 documentation.
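For illustration, a minimal CORS configuration of the kind the editor accepts might look like this (the origin, verbs, and cache lifetime are placeholder values, not a recommendation):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

Each `CORSRule` pairs the origins you want to allow with the HTTP methods and headers they may use; `MaxAgeSeconds` controls how long the browser may cache the preflight response.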

I know that many of you have been asking for this feature for quite some time. Let me know how it works out for you!

Sounds to me like a good feature for Windows Azure blobs.

Keith Townsend (@virtualizedgeek) shared My thoughts on the major announcements from VMworld 2012 in an 8/29/2012 post:

I believe the theme for 2013 will be, “The year the hypervisor becomes a commodity.” Microsoft, Citrix and KVM are gaining enough parity with ESXi that they will be good-enough solutions. Perhaps a better theme is “2013: the year of the good-enough hypervisor.” I was expecting major announcements on the management front from VMware along with continued strides on hypervisor innovation. VMware did not disappoint in either category.

First, on their core business, the hypervisor: I thought the announcement of the death of the vRAM entitlement was a tad uncomfortable. vRAM was a mistake that should have been corrected right after it was made. It was no surprise that the first announcement was the death of vRAM entitlements. I don’t consider this innovative but rather a prerequisite to any continuation of the vSphere product line given the hypervisor competition. The fact that VMware’s newly announced CEO Pat Gelsinger had to ask for bigger applause indicates I’m not the only one who felt this way. There was also continued innovation on storage. VMware announced the anticipated virtual SAN, which allows SMBs to pool the local storage of ESXi hosts to create a virtual SAN. Also, VMware announced enhanced vMotion, which allows the migration of virtual machines sans a SAN (pun intended). VMware said that enhanced vMotion would also support DRS, which is a cool trick that I’m anxious to see.

These changes all lead to the continued drumbeat of the virtualized data center, which VMware branded as the Software Defined Data Center (SDDC). VMware wants to own the data center from server to storage to network. VMware has preached the virtualized data center since vSphere 4.1. Now that it has acquired both DynamicOps and Nicira, I believe VMware can actually be taken seriously. These two acquisitions, combined with the announcement that it will be a gold member of the OpenStack project, help lend credence to its claim of wanting to run the data center. I have questions as to how VMware’s strategy will mesh with OpenStack, as vCloud, vFabric, and vFoundry all compete directly with OpenStack.

VMware showed the strength of its vision with application-aware vCloud 5.1. I was extremely impressed with the vCloud Hadoop demo. As part of the demo, VMware showed how not just the infrastructure is elastic in vCloud: the elasticity stretches to the application stack. They were able to provision Hadoop application and database nodes that resized the Hadoop stack. I don’t know how much of this was real versus storyboards, but the claim is very impressive.

I’m not too excited about VDI in general. VMware had announcements around its Horizon solution but, to be honest, VDI is a niche solution. I’ve shared my thoughts in the past. Those who need VDI will buy VDI. The rest of us trying to figure out BYOD have decided VDI isn’t the solution. It may be a stopgap until something better comes along for mobile devices. In the traditional sense, I believe VMware is well positioned, with its growing management suite and the fact that Microsoft is shipping Hyper-V as part of Windows 8, to do some very interesting things on the full client side with portable offline-mode VDI.

I’m also pretty excited about VXLAN, or should I say vXLAN. It’s the most enterprise-relevant part of the whole SDN movement I’ve seen so far. The demo of extending your data center virtually to public clouds using the same address space was network-geek nip. This is another demo that I have to see out in the wild. The thing about software demos versus hardware demos is that you can make anything look smooth with software in a controlled setting. That is, unless you are Bill Gates.

I did VMworld from afar again this year. I promise I’ll attend in person next year and hopefully do some live blogging.

Barb Darrow (@gigabarb) made a Prediction: More cloud confusion ahead on 8/31/2012 on GigaOm’s Cloud blog:

Coming off VMworld 2012 and the CloudOpen Conference this week, one thing is clear: People from consumers to tech pros are still confused about cloud computing.

As SUSE announced its OpenStack private cloud iteration, and VMware unveiled its vCloud Suite, even pundits scratched their heads about such mega issues as cloud interoperability; what defines an “open” cloud; and which cloud technology a company should pick to run its workloads. And, the constant fighting of vendor factions pushing their own cloud agendas isn’t helping matters.

The OpenStack Foundation seeks to build “the Linux of cloud,” a single infrastructure stack that many vendors can build upon without losing basic interoperability. But in April, Citrix, an OpenStack member, started pushing CloudStack as a more mature open-source rival to OpenStack. The nightmare scenario, as vocalized by Marten Mickos, CEO of Eucalyptus, still another open-source cloud, is that OpenStack is really the “Unix of the cloud,” a menagerie of not-quite-compatible clouds by different vendors.

That fear of fracturing remains a problem. “I understand the Linux Foundation and how the kernel is controlled [but] OpenStack doesn’t have that model and has very influential and powerful companies [with] very different interests. So, I … fear that OpenStack may not achieve its goal because of the divergent agendas of its members,” said Keith Townsend, chief architect for Lockheed Martin Information Systems and Global Solutions, a large systems integrator.

What makes a cloud open?

TechCrunch’s Alex Williams made a valiant attempt to define what an “open cloud” means; his list of attributes includes open APIs, a collaborative development community, and an absence of cloud washing (from his keyboard to God’s ear). He also contrasted the two West Coast shows, characterizing VMworld as evidence of VMware moving beyond its tried-and-true virtualization pitch to a broader vision that includes participation in OpenStack and data centers, while CloudOpen is all about software.

Over at InfoWorld, David Linthicum, CTO and founder of Blue Mountain Labs, wrote that the ‘open cloud’ is getting awfully confusing. [See article below.] With the emergence of the OpenStack camp — Rackspace, HP, Internap, Piston Cloud (see disclosure) and others have OpenStack clouds running — and rivals CloudStack and Eucalyptus, there are too many flavors of open-source clouds.

Wrote Linthicum:

If you’re looking at adopting an ‘open cloud’ technology, you have complex work ahead. Assessing their value is complicated by the fact that many of the vendors are less than two years old and have a minimal install base that can provide insight into fit, issues, and value.

Hazy understanding of cloud

If corporate IT gurus are confused about the cloud, think about the poor consumer. Despite the success of Apple’s iCloud and various Google services, new research sponsored by Citrix shows that many consumers still don’t really “get” the cloud. The most quoted factoid from the Wakefield Research report was that 51 percent of the 1,006 American adults surveyed believe that stormy weather interferes with cloud computing.

Once the concept was explained to the respondents, however, 40 percent thought the ability to access work information from home would be a good thing, if only so they could work in their “birthday suits.”

That’s progress I guess.

Disclosure: Piston is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.

Feature photo courtesy of Flickr user Dano

David Linthicum (@DavidLinthicum) asserted “OpenStack, CloudStack, variations, and vendor spin are starting to confuse would-be adopters of the open source cloud” as a deck for his The 'open cloud' is getting awfully confusing post of 8/31/2012 to InfoWorld’s Cloud Computing blog:

I'm at the Linux Foundation's CloudOpen conference this week in San Diego. As you might expect, the talk is all about how to use "open clouds" -- cloud software using open source approaches.

Everyone loves the idea of cloud software that leverages open source. Indeed, an IDC report to be released next week notes that "72 percent of businesses say that the use of open source software, open standards, and/or open APIs are key factors when choosing a cloud provider or building their own cloud." (IDC surveyed 282 users from companies with 500 or more employees.)

What does this mean to the cloud computing market? The number of cloud technology companies that call themselves "open" is exploding. And organizations that want to use this technology are getting confused.

Let's just look at the news this week. As InfoWorld's Eric Knorr reported, EMC's VMware asked to become a member of the foundation governing OpenStack, the open source cloud operating system. SUSE tossed its hat in with OpenStack as well, with its own distribution. Rackspace recently announced its OpenStack-based private cloud software. I could go on. Each announcement becomes another "me too" in the emerging OpenStack space of 200-plus companies on the OpenStack bandwagon.

Of course, the emergence of CloudStack in April means there's another open cloud standard besides OpenStack; CloudStack has several companies signed up, with more to come. And don't forget about Eucalyptus, which provides Amazon Web Services compatibility in an open source distribution.

If you're looking at adopting an "open cloud" technology, you have complex work ahead. Assessing their value is complicated by the fact that many of the vendors are less than two years old and have a minimal install base that can provide insight into fit, issues, and value.

As with any significant IT endeavor, you need to do your homework, understand your requirements, and make sure to test this stuff well before you use it in your enterprise. At some point, the "open cloud" market will normalize, and when that happens, you hope your seat will still be available in the ensuing game of musical chairs.

Don't get me wrong: The "open cloud" approach is a good thing. Just be careful in how you approach it in these early days.

Chris Talbot (@ajaxwriter) reported Google BigQuery Adds Batch Query, Excel Connector Features in an 8/31/2012 post to the TalkinCloud blog:

Following last month’s launch of the Google Cloud Partner Program, which (among other things) gave partners access to Google BigQuery, Google (NASDAQ: GOOG) has launched two new features to help partners get a handle on Big Data.

As part of BigQuery, which gives partners the ability to import data from on-premise and cloud data sources for analysis, the new features will give partners two new ways to work with Big Data, the Google Enterprise Blog reports.

BigQuery provides a way to get insights into Big Data quickly, but in his blog post, product manager Ju-kay Kwek noted, “We understand that there are important, non-interactive queries, such as nightly reports, that businesses also need to run.” That’s where the new batch query feature comes in. Google channel partners now can designate queries as batch queries that will take a matter of hours to complete. Pricing for this feature through Google’s self-service model is 2 cents per gigabyte processed for batch queries and 3.5 cents per gigabyte processed for interactive queries.
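At those rates, the savings from deferring a large report to a batch query are easy to work out (a quick sketch using only the self-service prices quoted above; the 500 GB nightly report is a made-up example):

```python
BATCH_RATE = 0.020        # USD per GB processed (batch queries)
INTERACTIVE_RATE = 0.035  # USD per GB processed (interactive queries)

def query_cost(gb_processed, batch=False):
    """Cost in USD of a single BigQuery query over the given data volume."""
    rate = BATCH_RATE if batch else INTERACTIVE_RATE
    return gb_processed * rate

# A nightly 500 GB report costs $10.00 as a batch query vs. $17.50 run
# interactively, a ~43% saving for work that can wait a few hours.
nightly_batch = query_cost(500, batch=True)
nightly_interactive = query_cost(500)
print(round(nightly_batch, 2), round(nightly_interactive, 2))
```

For non-interactive workloads like the nightly reports Kwek mentions, the trade is purely latency for price.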

The second feature will extend BigQuery’s ability to execute queries on Google spreadsheets to Microsoft Excel. The Google spreadsheet query feature uses Google Apps Script integration, and the new BigQuery Connector for Excel will essentially provide the same service for those with big Excel spreadsheets. Kwek wrote that it will be a feature for analysts and executives to “use spreadsheets to explore large data sets.”

Connector for Excel uses Excel’s standard web query feature “to eliminate the extra work of manually importing data and running queries directly within Excel,” Kwek wrote.

About the same time Google was announcing these new features to BigQuery, the company also announced on its blog that Google+ is being brought into the business world. Google Apps product management director Clay Bavor wrote that new business-specific features were being launched for free from now until the end of next year, at which point Google plans to tack on a fee.

One of the new features is private sharing for organizations, which gives customers more control over the content they post to Google+. Posts now can be marked as restricted, granting access only to others within your organization. Additionally, video meetings have been integrated into Gmail, Calendar and Docs apps, following the July launch of multi-way video chat, powered by Hangouts. New Google+ administrative controls for businesses are also being added.

According to Google, these new Google+ business features are just scratching the surface. The company hinted at more features to come, including a mobile version of Google+ for enterprise users.


There’s no question that Excel PivotTables and Power View are the favorites for DIY BI analytics.

<Return to section navigation list>