Saturday, April 06, 2013

Windows Azure and Cloud Computing Posts for 4/1/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

Updated 4/6/2013 and 4/5/2013 with new articles marked •.

Note: This post is updated weekly or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, HDInsight and Media Services

The Windows Azure Storage Team described AzCopy – Using Cross Account Copy Blob on 3/31/2013:

Please download AzCopy CTP2 here; for reference, there is a previous blog post about AzCopy CTP1 here.

New features added in this release:
  • Support for Cross-Account Copy Blob: AzCopy allows you to copy blobs within the same storage account or between different storage accounts (visit this blog post for more details on cross-account blob copy). This enables you to move blobs from one account to another efficiently with respect to cost and time. The data transfer is done by the storage service, eliminating the need for you to download each blob from the source and then upload it to the destination. You can also use /Z to execute the blob copy in restartable mode.
  • Added /MOV: This option allows you to move files, deleting them from the source after they are copied. Assuming you have delete permissions on the source, this option applies regardless of whether the source is Windows Azure Storage or the local file system.
  • Added /NC: This option allows you to specify the number of concurrent network calls. By default, when you upload files from the local computer to Windows Azure Storage, AzCopy initiates up to eight network calls per core on the local computer to execute concurrent tasks. For example, if your local computer has four cores, then AzCopy initiates up to 32 (eight times four) network calls at one time. If you want to limit the concurrency to throttle local CPU and bandwidth usage, you can specify the maximum number of concurrent network calls with /NC. The value specified is an absolute count and is not multiplied by the core count, so in the above example, to cut the concurrent network calls in half you would specify /NC:16.
  • Added /SNAPSHOT: This option allows you to transfer a blob together with its snapshots. This is a semantic change: AzCopy CTP 1 (released in October 2012) transferred a blob’s snapshots by default, but starting with this version AzCopy does not transfer any snapshots while copying a blob. Only with /SNAPSHOT specified will AzCopy transfer all the snapshots of a blob to the destination. However, these snapshots become separate blobs rather than snapshots of the original base blob in the destination, so each of them is charged in full (blocks are not shared between them). Transferred blob snapshots are renamed in this format: [blob-name] (snapshot-time)[extension].
    For example, if readme.txt is a source blob with 3 snapshots in the source container, then after using /SNAPSHOT there will be 3 additional separate blobs in the destination container whose names look like:
    readme (2013-02-25 080757).txt
    readme (2012-12-23 120657).txt
    readme (2012-09-12 090521).txt
    For the billing impact of blob snapshots compared to separate blobs, please refer to the blog post Understanding Windows Azure Storage Billing.
  • Added /@:response-file: This allows you to store parameters in a file, and they are processed by AzCopy just as if they had been specified on the command line. Parameters in the response file can be split across several lines, but each individual parameter must stay on one line (breaking one parameter across two lines is not supported). AzCopy parses each line as if it were a single command line into a list of parameters, then concatenates all the parameter lists into one list, which it treats as coming from a single command line. Multiple response files can be specified, but nested response files are not supported; a nested reference is parsed as a location parameter or file pattern instead. Escape characters are not supported, except that "" in a quoted string is parsed as a single quotation mark. Note that /@: appearing after /- does not mean a response file; it is treated as a location parameter or file pattern like other parameters.
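
To make the /SNAPSHOT renaming rule above concrete, here is a small JavaScript sketch of the [blob-name] (snapshot-time)[extension] scheme. This is our own illustration of the documented naming format, not AzCopy's actual code:

```javascript
// Illustration of the /SNAPSHOT naming scheme described above:
// [blob-name] (snapshot-time)[extension]. Our sketch, not AzCopy's implementation.
function snapshotBlobName(blobName, snapshotTime) {
    var dot = blobName.lastIndexOf(".");
    var base = dot === -1 ? blobName : blobName.slice(0, dot);
    var ext = dot === -1 ? "" : blobName.slice(dot);
    return base + " (" + snapshotTime + ")" + ext;
}

console.log(snapshotBlobName("readme.txt", "2013-02-25 080757"));
// "readme (2013-02-25 080757).txt"
```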


Here are some examples that illustrate the new features in this release.

Copy all blobs from one container to another container under a different storage account

AzCopy https://<sourceaccount>.blob.core.windows.net/<sourcecontainer>/ https://<destaccount>.blob.core.windows.net/<destcontainer>/ /sourcekey:<key> /destkey:<key> /S

The above command copies all blobs from the container named “sourcecontainer” in storage account “sourceaccount” to the container named “destcontainer” in storage account “destaccount”.

If you have base blobs with snapshots, add /SNAPSHOT to move all snapshots along with the base blobs to the destination. Note that blob snapshots are renamed in the destination using this format: [blob-name] (snapshot-time)[extension]

AzCopy https://<sourceaccount>.blob.core.windows.net/<sourcecontainer>/ https://<destaccount>.blob.core.windows.net/<destcontainer>/ /sourcekey:<key> /destkey:<key> /S /SNAPSHOT

For example, if you have readme.txt with 3 snapshots in the source container, the destination container will contain:

readme (2013-02-25 080757).txt
readme (2012-12-23 120657).txt
readme (2012-09-12 090521).txt

If you’d like to delete those blobs from the source container when the copy is complete, just add /MOV as below:

AzCopy https://<sourceaccount>.blob.core.windows.net/<sourcecontainer>/ https://<destaccount>.blob.core.windows.net/<destcontainer>/ /sourcekey:<key> /destkey:<key> /MOV /S

You can also create a response file to make it easier to run the same command repeatedly. Create a text file called “myAzCopy.txt” with the content below:

#URI of Source Container
https://<sourceaccount>.blob.core.windows.net/<sourcecontainer>/
#URI of Destination Container
https://<destaccount>.blob.core.windows.net/<destcontainer>/

Then you can run the command below to transfer files from the source container to the destination container:

AzCopy /@:C:\myAzCopy.txt /sourcekey:<key> /destkey:<key> /MOV /S

WenMing Ye (@wenmingye, a.k.a. HPC Trekker) recommended big data developers Make another small step, with the JavaScript Console Pig in HDInsight in a 3/31/2013 post:

Our previous blog, MapReduce on 27,000 books using multiple storage accounts and HDInsight [see post below], showed you how to run the Java version of the MapReduce code against the Gutenberg dataset we uploaded to blob storage. We also explained how to add multiple storage accounts and access them from your HDInsight cluster. In this blog, we’ll take a smaller step, show how this works with the JavaScript example, and see whether it can operate on a real dataset.

The JavaScript Console gives you simpler syntax and a convenient web interface. You can perform quick tasks, such as running a query or checking on your data, without having to RDP into your HDInsight cluster’s head node. It is for convenience only and is not meant for complex workflows. The JavaScript Console has a few built-in features, including HDFS commands such as ls, mkdir, and file copies. It also allows you to invoke Pig commands.

Let's go through the process of running the Pig script against the entire Gutenberg collection. We first uploaded the MapReduce word-count file, WordCount.js [link], by typing fs.put(), which brings up a dialog box for uploading the WordCount.js file.


Next, you can verify that the WordCount.js file has been uploaded properly by typing #cat /user/admin/WordCount.js.  As you may have noticed, HDFS commands that normally look like hdfs dfs -ls are abstracted to #ls.

We then ran a Pig command to kick off a set of MapReduce operations. The JavaScript below is compiled into Pig Latin and then executed.

pig.from("asv://").mapReduce("/user/admin/WordCount.js", "word, count:long").orderBy("count DESC").take(10).to("DaVinciTop10")

  1. Load files from ASV storage; notice the format: asv://container@storageAccountURL.
  2. Run MapReduce on the dataset using WordCount.js; the results are (word, count) key-value pairs.
  3. Sort the key-value pairs by descending count.
  4. Copy the top 10 values to the DaVinciTop10 directory in the default HDFS.
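
Steps 3 and 4, orderBy("count DESC").take(10), can be sketched in plain JavaScript over some hypothetical word counts; in the real pipeline this work is done by Pig as MapReduce jobs:

```javascript
// Hypothetical (word, count) records standing in for the MapReduce output.
var counts = [
    { word: "the", count: 500 },
    { word: "of", count: 420 },
    { word: "leonardo", count: 12 },
    { word: "and", count: 390 }
];

// orderBy("count DESC").take(10), done by hand.
var top10 = counts
    .slice() // copy so the input array is not mutated
    .sort(function (a, b) { return b.count - a.count; })
    .slice(0, 10);

console.log(top10[0].word); // "the"
```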

This process may take tens of minutes to complete, since the dataset is rather large.


The View Log link provides detailed progress logs.


You can also check progress by using RDP to connect to the head node, which gives you more detailed progress than the “View Log” link on the JavaScript Console.


Click the Reduce link in the table above to check on the reduce job; notice the shuffle and sort processes. Shuffle is basically the process by which each reducer is fed the mapper output that it needs to process.


Click into the Counters link: a significant amount of data is read and written during this process. The nice thing about MapReduce jobs is that you can speed up processing by adding more compute resources; the mapping phase in particular can be sped up significantly by running more processes in parallel.


When everything finishes, the summary page tells us that the Pig script was really about 5 different jobs, 07–11. For learning purposes, I’ve posted my results at:


The JavaScript Console also provides you with simple graph functions.

file ="DaVinciTop10")

data = parse(, "word, count:long")



When we compare the entire Gutenberg collection with just the Davinci.txt file, there’s a significant difference. With our new data we can estimate the occurrences of these top words in the English language far more accurately than by looking through one book.



More data gives us more confidence; that’s why big data processing is so important. When it comes to processing large amounts of data, parallel big-data processing tools such as HDInsight (Hadoop) can deliver results faster than running the work on single workstations. MapReduce is like the assembly language of big data; higher-level languages such as Pig Latin are decomposed into a series of MapReduce jobs for us.

WenMing Ye (@wenmingye, a.k.a. HPC Trekker) described MapReduce on 27,000 books using multiple storage accounts and HDInsight in a 3/30/2013 post:

In our previous blog, Preparing and uploading datasets for HDInsight, we showed you some of the important utilities used on the Unix platform for data processing, including GNU Parallel, find, split, and AzCopy for uploading large amounts of data reliably. In this blog, we’ll use an HDInsight cluster to operate on the data we have uploaded. To review, here’s what we have done so far:

  1. Downloaded the ISO image from Project Gutenberg and copied the content to a local directory.
  2. Crawled the INDEXES pages and copied the English-only books (zips) using a custom Python script.
  3. Unzipped all the zip files using GNU Parallel, combined all the text files, and split them into 256 MB chunks using find and split.
  4. Uploaded the files in parallel using the AzCopy utility.
MapReduce


MapReduce is the programming pattern for HDInsight, or Hadoop. It has two functions, map and reduce. Map ships the code to each of the nodes that contain the data, to run the computation in parallel, while reduce summarizes the results from the map functions in a global reduction.

In the case of this JavaScript word-count example, the map function below simply splits words from a text document into an array of words, then writes them to a global context. The map function takes three parameters: a key, a value, and a global context object. Keys in this case are individual files, while the value is the actual content of the document. The map function is called on every compute node in parallel. As you can see, it writes a key-value pair to the global context: the word is the key and the value is 1, since it counted one occurrence. Obviously, the output from the mappers can contain many duplicate keys (words).

// Map function in JavaScript
var map = function (key, value, context) {
    var words = value.split(/[^a-zA-Z]/);
    for (var i = 0; i < words.length; i++) {
        if (words[i] !== "")
            context.write(words[i].toLowerCase(), 1);
    }
};

The reduce function also takes key, values, and context parameters and is called when the map phase completes. In this case, it takes the output from all the mappers and sums up all the values for a particular key. In the end you get word:count key-value pairs. This gives you a good feel for how MapReduce works.

var reduce = function (key, values, context) {
    var sum = 0;
    while (values.hasNext()) {
        sum += parseInt(;
    }
    context.write(key, sum);
};

To run MapReduce against the dataset we have uploaded, we have to add the blob container on the cluster’s configuration page. If you are trying to learn how to create a new cluster, please take a look at this video: Creating your first HDInsight cluster and run samples

HDInsight’s Default Storage Account: Windows Azure Blob Storage

The diagram below explains the difference between HDFS, the distributed file system native to Hadoop, and Azure blob storage. Our engineering team had to do extra work to make the Azure blob storage system work with Hadoop.

The original HDFS makes use of many local disks on the cluster, while Azure blob storage is remote to all the compute nodes in the cluster. For beginners, all you need to know is that the HDInsight team has abstracted both systems for you through the HDFS tooling, and you should use Azure blob storage as the default: when you tear down the cluster, your files still persist in the remote storage system.

On the other hand, when you tear down a cluster, content stored in the cluster’s HDFS disappears with it. So store only temporary data that you don’t mind losing in HDFS, or copy it to your blob storage account before you tear down the cluster.

You can explicitly reference HDFS (local) with hdfs://, while asv:/// references files in the blob storage system (the default).


Adding Additional Azure Blob Storage container to your HDInsight Cluster

On the head node of the HDInsight cluster, in C:\apps\dist\hadoop-1.1.0-SNAPSHOT\conf\core-site.xml, you need to add:
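
The property itself was shown in a screenshot. As a sketch of what it looks like, following the account-key naming convention of Hadoop on Azure at the time (the account name and key below are placeholders):

```xml
<!-- Sketch only: the property name embeds the additional storage account's
     blob endpoint, and the value is that account's access key. -->
<property>
  <name></name>
  <value>YOUR_SECOND_ACCOUNT_KEY</value>
</property>
```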


For example, in my account, I simply copied the default property and added the new name/key pair.


In the RDP session, using the Hadoop command-line console, we can verify that the new storage account can be accessed.


In the JavaScript Console, it works just the same.


Deploy and run word count against the second storage account

Go to the Samples page in the HDInsight Console.


Deploy the Word Count sample.


Modify Parameter 1 to:  asv://  asv:///DaVinciAllTopWords


Navigate all the way back to the main page and click on Job History, find the job that you just started running.



You may also check more detailed progress in the RDP session. Recall that we have 40 files; there are 16 mappers (16 cores) running in parallel. The current status is: 16 complete, 16 running, 8 pending.



The job completed within about 10 minutes, and the results are stored in DaVinciAllTopWords directory.


The result is about 256 MB.



We showed you how to configure additional ASV storage on your HDInsight cluster to run MapReduce jobs against. This concludes our three-part blog set.

The Microsoft Enterprise Team (@MSFTenterprise) posted Changing the Game: Halo 4 Team Gets New User Insights from Big Data in the Cloud on 3/27/2013 (missed when published):

In late 2012, Halo 4 gamers took to their Xbox 360 consoles en masse for a five-week online battle. They all had the same goal: to see whose Spartan could climb to the top of the global leaderboards in the largest free-to-play Halo online tournament in history.

Using the game’s multiplayer modes, players participating in the tournament—the Halo 4 “Infinity Challenge”—earned powerful new weapons and armor for their Spartan-IV and fought their way from one level to the next. And with 2,800 available prizes, there was plenty of incentive to play.

Behind the scenes, a powerful new Microsoft technology platform called HDInsight was capturing data from the cloud and feeding daily game statistics to the tournament’s operator, Virgin Gaming. Virgin not only used the data to update online leaderboards each day; it also relied on the data to detect cheaters, removing them from the boards to ensure that the right gamers got the chance to win.

But this new technology didn’t just support the Infinity Challenge. From day one, the Xbox 360 game has been using the Hadoop open source framework to gain deep insights into players. The Halo 4 development team at 343 Industries is taking these insights and updating the game almost weekly, using direct player feedback to tweak the game. In the process, the game’s multiplayer ecosystem continues to evolve with the community as the title matures in the marketplace.

Tapping into the Power of the Cloud

Using the latest technology has always been important to the Halo 4 development team. Since the award-winning game launched in November 2012, the team has used the Windows Azure cloud development platform to power the game’s back-end supporting services. These services run the game’s key multiplayer features, including leaderboards and avatar rendering. Hosting the multiplayer parts of the game in Windows Azure also gives the Halo 4 team a way to quickly and inexpensively increase or decrease server loads as needed.

As Microsoft prepared to officially release the game, 343 Industries wanted to find a solution to mine user data with the hope of gaining insight into player behavior and gauging the overall health of the game after its release. Additionally, the Halo 4 development team was tasked with feeding daily data about the five-week online Infinity Challenge tournament to Virgin Gaming, a Halo 4 partner.

To meet these business requirements, the Halo 4 team knew it needed to find business intelligence (BI) technology that would work well with Azure. “One of the great things about the Halo team is how they use cutting-edge technology like Azure,” says Alex Gregorio, a program manager for Microsoft Studios, which developed Halo 4. “So we wanted to find the best BI environment out there, and we needed to make sure it integrated with Azure.”

Because all game data is housed in Azure, the team wanted to find a BI solution that could effectively produce BI information from that data. The team also needed to process this data in the same data center, minimizing storage costs and avoiding charges for data transfers across two data centers. The team also wanted full control over job priorities, so that the performance and delivery of analytical queries would not be affected by other processing jobs run at the same time. “We had to have a flexible solution that was not on-premises,” states Gregorio.

Microsoft HDInsight: Big Data Analytics in Azure

Although it considered building its own custom BI solution, the Halo 4 team decided to use the Windows Azure HDInsight Service, which is based on Apache Hadoop, an open-source software framework created by Yahoo! Hadoop can analyze huge amounts of unstructured data in a distributed manner. Designed for large groups of machines that do not share memory, Hadoop can operate on commodity servers and is ideal for running complex analytics.

HDInsight empowers users to gain new insights from unstructured data, while connecting that data to familiar BI tools. “Even though we knew we would be one of the earliest customers of HDInsight, it met all our requirements,” says Tamir Melamed, a development manager on the Halo 4 team. “It can run any possible queries, and it is the best format for integration with Azure. And because we owned the services that produce the data and the BI system, we knew we would be using resources in the best, most cost-effective way.”

The Halo 4 team wrote Azure-based services that convert raw game data collected in Azure into the Avro format, which is supported by Hadoop. This data is then pushed from the Azure services in the Avro format into Windows Azure binary large object (BLOB) storage, which HDInsight is able to utilize with the ASV protocol. The data can then be accessed by anyone with the right permissions from Windows Azure.

Every day, Hadoop handles millions of data-rich objects related to Halo 4, including preferred game modes, game length, and many other items. With Microsoft SQL Server PowerPivot for SharePoint as a front-end presentation layer, Azure BLOBs are created based on queries from the Halo 4 team.

PowerPivot for Excel loads data from HDInsight using the Hive ODBC driver software library for the Hive data warehouse framework in Hadoop. A PowerPivot workbook is then uploaded to PowerPivot for SharePoint and refreshed nightly within SharePoint, using the connection string stored in the workbook via the Hive ODBC driver to HDInsight. The Halo 4 team uses the workbooks to generate reports and facilitate their viewing of interactive data dashboards.

Using the Flexibility and Agility of Hadoop on Azure

For the Halo 4 team, a key benefit of using HDInsight was its flexibility, which allowed for separating the amount of the raw data from the processing size needed to consume that data. “With previous systems, we never had the separation between production and raw data, so there was always the question of how running analytics would affect production,” says Mark Vayman, lead program manager for the Halo services group. “Hadoop running on Azure BLOBs solved that problem.”

With Hadoop, the team was able to build a configuration system that can be used to turn various Azure data feeds on or off as needed. “That really helps us get optimal performance, and it’s a big advantage because we can use the same Azure data source to run compute for HDInsight on multiple clusters,” says Vayman. “It made it easy for us to drive business requests for analysis through an ad-hoc Hadoop cluster without affecting the jobs being run. So developers outside the immediate BI team can actually go in and run their own queries without being hindered by the development load our team has. Ultimately, the unique way in which Hadoop is implemented on Azure gives us these capabilities.”

Halo 4 developers have also benefited from the agility of Hadoop on Azure. “If we get a business request for analytics on Azure data, it’s very easy for us find a specific data point in Azure and get analytics on that data with HDInsight,” says Melamed. “We can easily launch a new Hadoop cluster in minutes, run a query, and get back to the business in a few hours or less. Azure is very agile by nature, and Hadoop on Azure is more powerful as a result.”

Shifting the Focus from Storage to Analysis

HDInsight was also instrumental in changing the Halo 4 team’s focus from data storage to useful data analysis. That’s because Hadoop applies structure to data when it’s consumed, as opposed to traditional data warehouse applications that structure data before it’s placed into a BI system. “In Windows Azure, Hadoop is essentially a place where all the raw data can be dumped,” says Brad Sarsfield, a Microsoft SQL Server developer. “Then we can decide to apply structure to that data at the point where it’s consumed.”

Once the Halo 4 team became aware of this capability, it shifted its mindset. “That realization had a subtle but profound effect on the team,” Sarsfield says. “At a certain point, they flipped from worrying about how to store and structure the data to concentrating on the types of questions they could ask from the data—for example, what game modes users were playing in, or how many players were playing at a given time. The team saw that it could much more readily respond to the initial requests for business insight about the game itself.”

Gaining New Insights from the Halo 4 “Infinity Challenge”

With an ability to focus more tightly on analysis, the Halo 4 team turned its attention to the Infinity Challenge. “Using Microsoft HDInsight, we were able to analyze the data during the five weeks of the Infinity Challenge,” says Vayman. “With the fast performance we got from the solution, we could feed that data to Virgin Gaming so they could update the leaderboards on the tournament website every day.”

In addition, because of the way the team set up Hadoop to work within Azure, the Halo team was able to perform analysis during the Infinity Challenge to detect cheaters and other abnormal player behavior. “HDInsight gives us the ability to easily read the data,” says Vayman. “In this case, there are many ways in which players try to gain extra points in games, and we were able to look back at previous data stored in Azure and identify user patterns that fit certain cheating characteristics, which was unexpected.”

After receiving this data from the Halo 4 team, Virgin Gaming sent out a notification that any player found or suspected of cheating would be immediately removed from the leaderboards and the tournament in general. “That was a great example of Hadoop on Azure giving us powerful analytical capabilities,” says Vayman.

Making Weekly Updates Based on User Trends

HDInsight gives the Halo 4 team daily updated BI data pulled from the game, which provides visibility into user trends. For example, the team can view how many users play every day, as well as the average length of a game and the specific game features that players use the most. Vayman says, “Having this kind of insight helps us gauge the overall health of the game and allows us to correlate the game’s sales numbers with the number of people that actually end up playing.”

Getting insights from Hadoop, in addition to Halo 4 user forums, also helps the Halo 4 team make frequent updates to the game. “Based on the user preference data we’re getting from Hadoop, we’re able to update game maps and game modes on a week-to-week basis,” says Vayman. “And the suggestions we get in the forums often find their way into the next week’s update. We can actually use this feedback to make changes and see if we attract new players. Hadoop and the forums are great tuning mechanisms for us.”

The team is also taking user feedback and giving it to the game’s designers, who can take it into consideration when thinking about creating future editions of Halo.

Targeting Players Through Email

The flexibility of the HDInsight BI solution also gives the Halo 4 team a way to reach out to players through customized campaigns, such as the series of email blasts the team sent to gamers in the initial weeks after the launch. During that campaign, the team set up Hadoop queries to identify users who started playing on a certain date. The team then wrote a file and placed it into a storage account on Windows Azure, where it was sent through SQL Server 2008 R2 Integration Services into a database owned by the Xbox marketing team.

The marketing team then used this data to send new players two emails: a generic “Welcome to Halo 4” email the day after a player began playing, and another custom email seven days later. This second email was actually one of five different emails, tailored to each user. Based on player preferences demonstrated during the week of play, this email suggested different game modes to players. The choice of which email each player received was determined by the HDInsight system. “That gave marketing a new way to possibly retain users and keep them interested in trying new aspects of the game,” Gregorio says. The Halo 4 marketing team plans to run similar email campaigns for the game until a new edition is released. “Basing an email campaign on HDInsight and Hadoop was a big win for the marketing team, and also for us,” adds Vayman. “It showed us that we were able to use data from HDInsight to customize emails, and to actually use BI to improve the player experience and affect game sales.”

Expanding the Use of Hadoop

Based on the success of HDInsight as a powerful BI tool, Microsoft has started to expand the solution to other internal groups. One group, Microsoft IT, is using HDInsight to improve its customer-facing website. “Microsoft IT is using some of the internal Azure service logs in Hadoop to mine data for use in identifying error patterns, in addition to creating reports on the site’s availability,” says Vayman. Another internal team that processes very large data volumes is also using Hadoop on Azure for analytics. “Halo 4 really helped lead the way for both projects,” Vayman says.

One reason Hadoop is becoming more widely used is that the technology continues to evolve into an increasingly powerful BI tool. “The traditional role of BI within Hadoop is expanding because of the raw capabilities of the platform,” says Sarsfield. “In addition to just BI reporting, we’ve been able to add predictive analytics, semantic indexing, and pattern classification, which can all be leveraged by the teams using Hadoop.”

Adoption is also growing because users do not have to be Hadoop experts to take advantage of the technology’s data insights. “By hooking Hadoop into a set of tools that are already familiar, such as Microsoft Excel or Microsoft SharePoint, people can take advantage of the power of Hadoop without needing to know the technical ins and outs. It’s really geared to the masses,” says Vayman. “A good example of that is the data about Infinity Challenge cheaters that we gave to Virgin Gaming. The people receiving that data are not Hadoop experts, but they can still easily use the data to make business decisions.”

No matter what new capabilities are added to it, there’s no doubt that HDInsight will continue to affect business. “With Hadoop on Windows Azure, we can mine data and understand our audience in a way we never could before,” says Vayman. “It’s really the BI solution for the future.”

WenMing Ye (@wenmingye, a.k.a. HPC Trekker) posted an 11-minute Introduction To Windows Azure HDInsight Service video to Channel9 on 3/22/2013 (missed when published):

This is a general introduction to Big Data, Hadoop, and Microsoft's new Hadoop-based service called Windows Azure HDInsight. The presentation is divided into two videos; this is Part 1, covering Big Data and Hadoop. The relevant blog post is: Let there be Windows Azure HDInsight. The up-to-date presentation is on GitHub at:


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

• Dhananjay Kumar (@Debug_Mode) described Step by Step working with Windows Azure Mobile Service Data in JavaScript based Windows Store Apps in a 4/5/2013 post:

In my last post I discussed Step by Step working with Windows Azure Mobile Service Data in XAML based Windows Store Apps [see article below]. In this post we will look at working with JavaScript-based Windows Store apps. The last post was divided into two parts; the first part was configuring Windows Azure Mobile Service in the portal. You need to follow step 1 from that post to configure Windows Azure Mobile Service Data. To work with this, proceed as given in the following steps:

  1. Configure Windows Azure Mobile Service in the portal. For reference, follow Step 1 of the blog post below.
  2. Download and install the Windows Azure SDK.

Now follow [this] post to work with Windows Azure Mobile Service Data from a JavaScript-based Windows Store application.

Create [a] Windows Store Application in JavaScript

Create [a] Blank App using the Blank App template on the JavaScript Windows Store App project tab.


After creating the project, add a reference to the Windows Azure Mobile Services JavaScript Client.


After adding the reference, let's design the app page to add a blogger to the table. We'll add two text boxes and one button; on the button's click event, the blogger will be inserted as a row in the Windows Azure Mobile Services data table.


 <h2>Adding Record to Windows Azure Mobile Service Data</h2> <br />
 Name : <input id="txtname" type="text" /> <br />
 Technology : <input id="txttechnology" type="text" /> <br /> <br />
 <button id="btnInsert" >Insert Blogger</button>


The application will look like the following:


We need to add references to Windows Azure Mobile Services in the HTML as follows:

<script src="//Microsoft.WinJS.1.0/js/ui.js"></script>
 <script type="text/javascript" src="/MobileServicesJavaScriptClient/MobileServices.js"></script>

Next, let us create a client for the Windows Azure Mobile Service. To create it you need to pass the application URL and application key. The client in a JavaScript based application can be created as follows:

 var client = new Microsoft.WindowsAzure.MobileServices.MobileServiceClient(
     "https://yourappurl",   // application URL from the portal
     "yourappkey");          // application key from the portal

Now let us create a proxy table. The proxy table can be created as follows, and after creating it we can add a record to the table:

var bloggerTable = client.getTable('techbloggers');

var insertBloggers = function (bloggeritem) {
    bloggerTable.insert(bloggeritem).done(function (item) {
        // Item added
    });
};

On the button's click event we need to call the insertBloggers JavaScript function.

btnInsert.addEventListener("click", function () {
    insertBloggers({
        name: txtname.value,
        technology: txttechnology.value
    });
});

On the button's click event you should be able to insert a blogger into the Windows Azure Mobile Services data table. In further posts we will learn to update, delete, and fetch data.

• Dhananjay Kumar (@Debug_Mode) posted Step by Step working with Windows Azure Mobile Service Data in XAML based Windows Store Apps on 4/4/2013:

In this post we will take a look at working with Windows Azure Mobile Services in a XAML based Windows Store application. We will follow a step by step approach to learn the goodness of Windows Azure Mobile Services. In the first part of the post we will configure Windows Azure Mobile Services in the Azure portal. In the second part we will create a simple XAML based Windows Store application to insert records into a data table. This is the first post of this series; in further posts we will learn other features of Windows Azure Mobile Services.

Configure Windows Azure Mobile Service on Portal
Step 1

Log in to the Windows Azure Management Portal here

Step 2

Select Mobile Services from the tabs on the left and click CREATE NEW MOBILE SERVICE


Step 3

In this step, provide the URL of the mobile service. You have two choices: create the mobile service in an existing database, or create a new database. Let us go ahead and create a new database. In the DATABASE drop-down, select the option to create a new SQL database instance. Select the SUBSCRIPTION and REGION from the drop-downs as well.


Step 4

On the next screen you need to create the database. Choose either an existing database server or create a new one. You need to provide credentials to connect to the database server.


Step 5

After successful creation of the mobile service you need to select a platform. Let us go ahead and choose Windows Store as the platform.


Step 6

After selecting the platform, click Data in the menu. Then click ADD A TABLE.


Next you need to provide a table name. You can also set permissions on the table. There are three options available:

  1. Anybody with Application Key
  2. Only Authenticated Users
  3. Only Scripts and Admins

Let us leave the default permission level for the table.


Step 7

Next click on the table. You will be navigated to the table dashboard. When you click Columns you will find one automatically created column, id. This column must exist in every Windows Azure Mobile Services table.


With dynamic schema enabled, when you add JSON objects from the client application, columns are added dynamically.
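To make that concrete, here is a small sketch in plain JavaScript (this is not the Mobile Services runtime, just an illustration): the top-level properties of the JSON object you insert are what become columns, alongside the auto-created id column.

```javascript
// Hypothetical item, shaped like the blogger record used later in this post.
var bloggeritem = { name: "Dhananjay", technology: "Windows Azure" };

// With dynamic schema on, the table ends up with the auto-created 'id'
// column plus one column per top-level property of the inserted object.
var columns = ["id"].concat(Object.keys(bloggeritem));
console.log(columns); // [ 'id', 'name', 'technology' ]
```

This is also why you don't need to define the table's columns up front while developing.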

Create Windows Store Application in XAML

First, you need to install the Windows Azure SDK for Windows Phone and Windows 8.


After installing it, create a Windows Store application by choosing the Blank App template.


Before we move ahead, let us go back to the portal to manage the app URL and key. You need the key and application URL to work with Windows Azure Mobile Services from a Windows Store application. You will find both on the portal.


Now go ahead and add following namespaces on MainPage.xaml.cs

using Microsoft.WindowsAzure.MobileServices;
using System.Runtime.Serialization;

Next you need to create an entity class representing the table in Windows Azure Mobile Services. Let us create the entity class TechBloggers as follows:

public class TechBloggers
{
    public int id { get; set; }
    public string Name { get; set; }
    [DataMember(Name = "technology")]
    public string Technology { get; set; }
}


After creating the entity class, go ahead and define global variables:

MobileServiceClient client;
IMobileServiceTable<TechBloggers> bloggerstable;

Once the global variables are defined, you need to create instances of MobileServiceClient and MobileServiceTable in the constructor of the page. Let us go ahead and do that:

public MainPage()
{
    this.InitializeComponent();
    client = new MobileServiceClient("https://yourappurl", "appkey");
    bloggerstable = client.GetTable<TechBloggers>();
}


Now let us go back and design the app page. In the XAML, let us put two text boxes and one button. On the button's click event we will insert the blogger into the table.

<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <Grid.RowDefinitions>
        <RowDefinition Height="100" />
        <RowDefinition Height="100" />
        <RowDefinition Height="100" />
    </Grid.RowDefinitions>
    <StackPanel Grid.Row="0" Orientation="Horizontal" Margin="40,40,0,0">
        <TextBlock Text="Name" FontSize="40" />
        <TextBox x:Name="txtName" VerticalAlignment="Top" Width="400" />
    </StackPanel>
    <StackPanel Orientation="Horizontal" Grid.Row="1" Margin="40,40,0,0">
        <TextBlock Text="Technology" FontSize="40" />
        <TextBox x:Name="txtTechnology" VerticalAlignment="Top" Width="400" />
    </StackPanel>
    <Button Grid.Row="2" x:Name="btnInsert" Click="btnInsert_Click_1" Content="Insert Record" Height="72" Width="233" Margin="248,42,0,-14" />
</Grid>


The application will look like the image below. I know this is not the best UI, but creating the best UI is not the purpose of this post.


On the button's click event we can insert a record into the table using Windows Azure Mobile Services with the following code:

private void btnInsert_Click_1(object sender, RoutedEventArgs e)
{
    TechBloggers itemtoinsert = new TechBloggers
    {
        Name = txtName.Text,
        Technology = txtTechnology.Text
    };
    InsertItem(itemtoinsert);
}



The InsertItem function is written as follows:

private async void InsertItem(TechBloggers itemtoinsert)
{
    await bloggerstable.InsertAsync(itemtoinsert);
}


On the button's click event you can insert records into the Windows Azure Mobile Services data table. To verify the inserted records, browse to the portal and click on the table.


In further posts we will learn how to update, delete, and view records.

Clemens Vasters (@clemensv) produced a 00:25:34 Windows Azure Mobile Services - for Organizations and the Enterprise video for Channel9 on 3/25/2013 (missed when published):

Last week in Redmond I had a chat with coworker Josh Twist from our joint Azure Mobile team (owning Service Bus and Mobile Services) about the relevance of Mobile Services for organizations and businesses.

As the app stores grow, there's increasing competitive pressure on organizations of all sizes to increase direct consumer engagement through apps on mobile devices and tablets, and doing so is often quite a scalability leap: from hundreds or thousands of concurrent internal clients to millions of direct consumer clients.

Mobile Services is there to help and can, also in conjunction with Service Bus and other services from Microsoft and partners, act as a new kind of gateway to enterprise data and compute assets.

Other Windows Azure Mobile Services videos:


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

•• Julie Lerman (@julielerman) described Getting Started with WCF Data Services 5.4, OData v3 and JSON Light in a 4/6/2013 post to her Don’t Be Iffy blog:

TL;DR: Setup steps to get WCFDS 5.4 up and running

JULBS (Julie’s Usual Lengthy Back Story):

WCF Data Services (aka WCFDS) updates are being released at a rapid pace. I think they are on a hoped-for 6-week release schedule and they just released 5.4 a few days ago.

One of the reasons is that they are adding in more and more support for features of OData v3. But rather than waiting until ALL of the features are in (by which time OData v4 may already be out! :)) they are pushing out updates as they get another chunk of features in.

Be sure to watch the blog for announcements about release candidates (where you can provide feedback) and then the releases. Here for example is the blog post announcing the newest version: WCF Data Services 5.4.0 Release.

I wanted to move to this version so I could play with some of the new features and some of the features we’ve gotten starting with v 5.1 – for example the support for the streamlined JSON output (aka “JSON light”) and Actions & Functions.

As there are a number of steps involved in getting your WCFDS to use v5.4, I somehow performed them in an order that left me out in the cold. I couldn’t even get JSON light output.

So after scratching my head for way too long, I finally started the service from scratch and this time it worked.

  1. Create a new project to host your service. I recommend a WCF Services Application project
  2. Delete the IService1.cs and Service1.cs files created in the new project.
  3. Add a new WCF Data Service item
  4. Remove the references to the following WCFDS 5.0  assemblies:
    1. Microsoft.Data.Edm
    2. Microsoft.Data.OData
    3. Microsoft.Data.Services
    4. Microsoft.Data.Services.Client
    5. System.Spatial
  5. Using Nuget, install WCF Data Services Server. The current version available on Nuget is 5.4.
    When you install this, Nuget will also install the other 4 assemblies (Edm, OData, Services.Client and System.Spatial)
  6. UPDATE! Mark Stafford (from the WCFDS team) let me in on a “secret”: the WCFDS template uses NuGet to pull in the WCFDS 5 APIs, so you can just use NuGet to UPDATE those assemblies! (This applies to VS 2012; for VS 2010, use steps 4 & 5 above. :))


  1. Install Entity Framework 5 from Nuget

That should be everything you need to do to have the proper version of WCFDS.

Now you can follow a normal path for creating a data service.

In case you’re not familiar with that, here are the basic steps (just getting a service running, not doing anything fancy here):

  1. Add reference(s) to any projects that contain your data layer (in my case a data layer with EF’s DbContext) and domain classes.
  2. Modify the WCFDataService class that the Add New Item wizard created so that it uses your context. My context is called CoursesContext, so here I’ve specified that context and simply exposed all of my entity sets as read-only data using SetEntitySetAccessRule. That is not a best practice for production apps, but good enough to get me started.
public class MyDataService : DataService<CoursesContext>
{
    public MyDataService()
    {
        Database.SetInitializer(new CoursesInitializer());
    }

    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V3;
        config.UseVerboseErrors = true;
    }
}

My data layer includes a custom initializer with a seed method so in the class constructor, I’ve told EF to use that initializer. My database will be created with some seed data.

Notice the MaxProtocolVersion. The WCFDS item template inserted that. It’s just saying that it’s okay to use OData v3.

I added the UseVerboseErrors because it helps me debug problems with my service.

Now, to ensure that I’m truly getting access to OData v3 behavior, I can open up a browser or Fiddler and see if I can get JSON light. JSON light is now the default JSON output.

WCFDS now lets me leverage OData’s format parameter to request JSON directly in the URL. I can even do this in a browser.
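As a sketch of what those URLs look like (the service root and entity set name here come from my sample, so treat them as placeholders for your own service):

```javascript
// Builds an OData request URL using the $format system query option.
var serviceRoot = "http://localhost:1234/MyDataService.svc";

function formatUrl(entitySet, format) {
    return serviceRoot + "/" + entitySet + "?$format=" + format;
}

console.log(formatUrl("Courses", "json"));               // JSON light (the new default JSON)
console.log(formatUrl("Courses", "json;odata=verbose")); // the older, verbose JSON
```

Pasting either URL into a browser exercises the same behavior described below.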

Chrome displays it directly in the browser:


Internet Explorer’s default behavior for handling a JSON response is to download the response as a text file and open it in Notepad.

In Fiddler, I have to specify the Accept header when composing the request. Since JSON light is the default, I only have to tell it I want JSON (application/json).


Fiddler outputs the format nicely for us to view:


If I want the old JSON with more detail, I can tell it to spit out the odata as “verbose” (odata=verbose). Note that it’s a semi-colon in between, not a comma.


(I’m only showing the JSON for the first course object, the second is collapsed)


Checking for the existence of the JSON light format, and that it’s now the JSON default, helps me verify that I’m definitely using the new WCFDS APIs and getting access to the OData v3 features they support. Now I can go play with some of the other new features! :)

Max Uritsky (@max_data) broke a two-year blog silence with his Windows Azure Marketplace – new end user experiences, new features, new data and app services, but still the same old excitement and added value! post of 4/5/2013:

Hello Windows Azure Marketplace users,

We’re back with another exciting set of announcements! In the past four months we have not only supported the launch of a new storefront, but also made great strides in making our service more resilient, added a feature to help users guard against interruptions to service, added a huge portfolio of data services and app services, and continued to improve the user experience on the portal and from Office products.

If you are a Developer, you will be very excited to know that we launched the Azure Store, which is a marketplace for data services and app services on the Windows Azure Portal, in October 2012. Want to learn more? Check out the keynote from Build 2012!

If you are an Analyst or an Information Worker, you will be interested in knowing that we improved the experience of using Windows Azure Marketplace from Excel PowerPivot, and we helped with the launch of Data Explorer, which is a data aggregation and shaping tool from SQL Information Services. Download Data Explorer and start playing with it right away!

Here are a few snapshots from Data Explorer:

Here are a few snapshots with the improved experience from Excel PowerPivot:

We have also continued to improve the user experience on the Windows Azure Marketplace Portal. We added a sitemap, which is a one-stop shop access point to all the important resources on the websites. With this feature, discoverability of resources on the website was greatly improved. We also made an improvement to the data visualization tool on our portal, Service Explorer, by replacing it with the more metro-looking Query Builder, to unify the data exploration experience found in Excel and PowerPivot. With this release, we also added support for download to PowerPivot 2010 and PowerPivot 2013, to target a broader set of customers.

We also added a feature that lets users opt in to having Windows Azure Marketplace automatically refill their subscriptions when the subscription balances are low. We got a lot of feedback from our users that there ought to be a way to automatically re-subscribe when a subscription balance runs out; we heard that feedback and released a feature called Auto Refill that does exactly that. Hmm, too good to be true? Read more about Auto Refill.

Here is a snapshot of the sitemap:

Here is a snapshot of the new data visualization tool on the Marketplace Portal:

Here are a few snapshots of Auto-Refill:

We have also released a good deal of content in the past five months, and one of the interesting offerings was the Synonyms API that we made available through the Windows Azure Marketplace. The Synonyms API returns alternate references to real-world entities like products, people, locations, and more.

For instance, the API intelligently matches the following real world entities to commonly used synonyms:

  • Product synonyms: “Canon 600D” is synonymous with “canon rebel t3i”, etc.
  • People synonyms: “Jennifer Lopez” is synonymous with “jlo”, etc.
  • Place synonyms: “Seattle Tacoma International Airport” is synonymous with “sea tac”, etc.

Try out this cool API for yourself right here.
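For a sense of what calling it looks like, here is a sketch of the request URL a client would build. Marketplace data services are plain OData over HTTPS, but the service root and parameter name shown here are assumptions; check the API's details page on the Marketplace for the exact address.

```javascript
// Hypothetical Synonyms API request builder. OData string literals are
// wrapped in %27 (URL-encoded single quotes).
function synonymsUrl(term) {
    var root = "https://api.datamarket.azure.com/Bing/Synonyms/v1/GetSynonyms";
    return root + "?Query=%27" + encodeURIComponent(term) + "%27&$format=json";
}

console.log(synonymsUrl("Canon 600D"));
```

Requests are authenticated with your Marketplace account key via HTTP Basic authentication.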

We also released a ton of great content by Dun & Bradstreet, Digital Folio and our other key publishers, and here’s a short list:

To get a full list of data services, please click here and to get a full list of all the applications available through the Windows Azure Marketplace, please click here.

S. D. Oliver described Getting data from Windows Azure Marketplace into your Office application in a 4/3/2013 post to the Apps for Office and SharePoint blog:

This post walks through a published app for Office, showing along the way everything you need to get started building your own app for Office that uses a data service from the Windows Azure Marketplace. Today’s post is brought to you by Moinak Bandyopadhyay. Moinak is a Program Manager on the Windows Azure Marketplace team, part of the Windows Azure Commerce Division.

Ever wondered how to get premium, curated data from Windows Azure Marketplace, into your Office applications, to create a rich and powerful experience for your users? If you have, you are in luck.

Introducing the first ever app for Office that builds this integration with the Windows Azure Marketplace – US Crime Stats. This app enables users to insert crime statistics, provided by DATA.GOV, right into an Excel spreadsheet, without ever having to leave the Office client.

One challenge faced by Excel users is finding the right set of data, and apps for Office provides a great opportunity to create rich, immersive experiences by connecting to premium data sources from the Windows Azure Marketplace.

What is the Windows Azure Marketplace?

The Windows Azure Marketplace (also called Windows Azure Marketplace DataMarket or just DataMarket) is a marketplace for datasets, data services and complete applications. Learn more about Windows Azure Marketplace.

This blog article is organized into two sections:

  1. The U.S. Crime Stats Experience
  2. Writing your own Office Application that gets data from the Windows Azure Marketplace
The US Crime Stats Experience

You can find the app on the Office Store. Once you add the US Crime Stats app to your collection, you can go to Excel 2013, and add the US Crime Stats app to your spreadsheet.

Figure 1. Open Excel 2013 spreadsheet


Once you choose US Crime Stats, the application is shown in the right pane. You can search for crime statistics based on City, State, and Year.

Figure 2. US Crime Stats app is shown in the right task pane


Once you enter the city, state, and year, click ‘Insert Crime Data’ and the data will be inserted into your spreadsheet.

Figure 3. Data is inserted into an Excel 2013 spreadsheet


What is going on under the hood?

In short, when the ‘Insert Crime Data’ button is chosen, the application takes the input (city, state, and year) and makes a request to the DataMarket services endpoint for DATA.GOV in the form of an OData Call. When the response is received, it is then parsed, and inserted into the spreadsheet using the JavaScript API for Office.
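A sketch of that request is below; the entity set and property names are illustrative (the real DATA.GOV service on DataMarket defines the exact shape), but the pattern — a filtered OData query built from the user's input — is the same.

```javascript
// Builds the kind of filtered OData query the app sends for city/state/year.
// The service root and entity set here are hypothetical placeholders.
function crimeStatsUrl(city, state, year) {
    var root = "https://api.datamarket.azure.com/data.gov/Crimes/v1/CityCrime";
    var filter = "City eq '" + city + "' and State eq '" + state +
                 "' and Year eq " + year;
    return root + "?$filter=" + encodeURIComponent(filter);
}

console.log(crimeStatsUrl("Seattle", "Washington", 2008));
```

The JSON response is then parsed and written into the sheet with the JavaScript API for Office.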

Writing your own Office application that gets data from the Windows Azure Marketplace
Prerequisites for writing Office applications that get data from Windows Azure Marketplace
  • Visual Studio 2012
  • Office 2013
  • Office Developer Tools for Visual Studio 2012
  • Basic Familiarity with the OData protocol
  • Basic familiarity with the JavaScript API for Office
  • To develop your application, you will need a Windows Azure Marketplace Developer Account and a Client ID (learn more)
  • The web server URL to which you will be redirected after application consent. This can be of the form ‘http://localhost:port_number/marketplace/consent’. If you are hosting the page in IIS on your development computer, supply this URL to the application Register page when you create your Marketplace app.
How to write Office applications using data from Windows Azure Marketplace

The MSDN article, Create a Marketplace application, covers everything necessary for creating a Marketplace application, but below are the steps in order.

  1. Register with the Windows Azure Marketplace:
    • You need to register your application first on the Windows Azure Marketplace Application Registration page. Instructions on how to register your application for the Windows Azure Marketplace are found in the MSDN topic, Register your Marketplace Application.
  2. Authentication:
  3. Receiving Data from the Windows Azure Marketplace DataMarket service

To get a real feel of the code that powers the US Crime Stats, be on the lookout for the release of the code sample on the Office 2013 Samples website.

The WCF Data Services Team reported the WCF Data Services 5.4.0 Release on 4/2/2013:

Today we are releasing version 5.4.0 of WCF Data Services. As mentioned in the prerelease post, this release will be NuGet packages only. That means that we are not releasing an updated executable to the download center. If you create a new WCF Data Service or add a reference to an OData service, you should follow the standard procedure for making sure your NuGet packages are up-to-date. (Note that this is standard usage of NuGet, but it may be new to some WCF Data Services developers.)


If you haven’t noticed, we’ve been releasing a lot more frequently than we used to. As we adopted this rapid cadence, our documentation has fallen somewhat behind and we recognize that makes it hard for you to try out the new features. We do intend to release some samples demonstrating how to use the features below but we need a few more days to pull those samples together and did not want to delay the release. Once we get some samples together we will update this blog post (or perhaps add another blog post if we need more commentary than a gist can convey).

What is in the release:
Client deserialization/serialization hooks

We have a number of investments planned in the “request pipeline” area. In 5.4.0 we have a very big set of hooks for reaching into and modifying data as it is being read from or written to the wire format. These hooks provide extensibility points that enable a number of different scenarios such as modifying wire types, property names, and more.

Instance annotations on atom payloads

As promised in the 5.3.0 release notes, we now support instance annotations on Atom payloads. Instance annotations are an extensibility feature in OData feeds that allow OData requests and responses to be marked up with annotations that target feeds, single entities (entries), properties, etc. We do still have some more work to do in this area, such as the ability to annotate properties.

Client consumption of instance annotations

Also in this release, we have added APIs to the client to enable the reading of instance annotations on the wire. These APIs make use of the new deserialization/serialization pipelines on the client (see above). This API surface includes the ability to indicate which instance annotations the client cares about via the Prefer header. This will streamline the responses from OData services that honor the odata.include-annotations preference.
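As a rough sketch of the wire-level shape (the annotation name below is made up; odata.include-annotations is the preference the post refers to), a client asks for the annotations it cares about like this:

```javascript
// Headers a client might send so the service includes matching instance
// annotations in its response. "com.contoso.*" is a hypothetical filter.
var requestHeaders = {
    "Accept": "application/json",
    "Prefer": 'odata.include-annotations="com.contoso.*"'
};

console.log(requestHeaders.Prefer);
```

A service that honors the preference would then emit only annotations matching that filter, keeping responses lean.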

Simplified transition between Atom and JSON formats

In this release we have bundled a few less-noticeable features that should simplify the transition between the Atom and (the new) JSON format. (See also the bug fixes below on type resolver fixes.)

Bug fixes

In addition to the features above, we have included fixes for the following notable bugs:

  • Fixes an issue where reading a collection of complex values would fail if the new JSON format was used and a type resolver was not provided
  • Fixes an issue where ODataLib was not escaping literal values in IDs and edit links
  • Fixes an issue where requesting the service document with application/json;odata=nometadata would fail
  • Fixes an issue where using the new JSON format without a type resolver would create issues with derived types
  • (Usability bug) Makes it easier to track the current item in ODataLib in many situations
  • Fixes an issue where the LINQ provider on the client would produce $filter instead of a key expression for derived types with composite keys
  • (Usability bug) Fixes an issue where the inability to set EntityState and ETag values forced people to detach and attach entities for some operations
  • Fixes an issue where some headers required a case-sensitive match on the WCF DS client
  • Fixes an issue where 304 responses were sending back more headers than appropriate per the HTTP spec
  • Fixes an issue where a request for the new JSON format could result in an error that used the Atom format
  • Fixes an issue where it was possible to write an annotation value that was invalid according to the term
  • Fixes an issue where PATCH requests for OData v1/v2 payloads would return a 500 error rather than 405
We want your feedback

We always appreciate your comments on the blog posts, forums, Twitterverse and e-mail. We do take your feedback seriously and prioritize accordingly. We are still early in the planning stages for 5.5.0 and 6.0.0, so feedback now will help us shape those releases.

<Return to section navigation list>

Windows Azure Service Bus, Caching Access Control, Active Directory, Identity and Workflow

Glen Block (@gblock) reported “we just pushed our first release of socket.io-servicebus to npm!” in a 4/4/2013 e-mail message and tweet. From NPM: socket.io-servicebus - socket.io store using Windows Azure Service Bus

This project provides a Node.js package that lets you use Windows Azure Service Bus as a back-end communications channel for socket.io applications.

Library Features
  • Service Bus Store
    • Easily connect multiple server instances over Service Bus
Getting Started
Download Source Code

To get the source code of the SDK via git just type:

git clone
cd ./
Install the npm package

You can install the socket.io-servicebus npm package directly.

npm install socket.io-servicebus

First, set up your Service Bus namespace. Create a topic to use for communications, and one subscription per server instance. These can be created either via the Windows Azure portal or programmatically using the Windows Azure SDK for Node.

Then, configure socket.io to use the Service Bus Store:

var sio = require('socket.io');
var SbStore = require('socket.io-servicebus');

var io = sio.listen(server);
io.configure(function () {
  io.set('store', new SbStore({
    topic: topicName,
    subscription: subscriptionName,
    connectionString: connectionString
  }));
});

The connection string can either be retrieved from the portal, or using our powershell / x-plat CLI tools. From here, communications to and from the server will get routed over Service Bus.

Current Issues

The current version (0.0.1) only routes messages; client connection state is stored in memory in the server instance. Clients need to consistently connect to the same server instance to avoid losing their session state.
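One common way to get that consistency is to have the load balancer hash each client's identifier to a fixed server instance. The sketch below is generic sticky-routing logic, not part of this package:

```javascript
// Deterministically maps a client id to one of N server instances, so the
// same client always reconnects to the instance holding its session state.
function stickyInstance(clientId, instanceCount) {
    var hash = 0;
    for (var i = 0; i < clientId.length; i++) {
        hash = (hash * 31 + clientId.charCodeAt(i)) % 100000;
    }
    return hash % instanceCount;
}

// The same client id always routes to the same instance.
console.log(stickyInstance("client-42", 4));
```

Until connection state moves out of process, some scheme like this (or IP affinity at the load balancer) is needed.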

Need Help?

Be sure to check out the Windows Azure Developer Forums on Stack Overflow if you have trouble with the provided code.

Contribute Code or Provide Feedback

If you would like to become an active contributor to this project please follow the instructions provided in Windows Azure Projects Contribution Guidelines.

If you encounter any bugs with the library please file an issue in the Issues section of the project.

Learn More

For documentation on how to host Node.js applications on Windows Azure, please see the Windows Azure Node.js Developer Center.

For documentation on the Azure cross-platform CLI tool for Mac and Linux, please see our readme.

Check out our new IRC channel on freenode, node-azure.

Vittorio Bertocci (@vibronet) described Auto-Update of the Signing Keys via Metadata on 4/2/2013:

Quite a mouthful, isn’t it? :)

TL;DR version: we just released an update to the ValidatingIssuerNameRegistry which makes it easy to write applications that automatically keep up to date the WIF settings containing the keys used to validate incoming tokens.
The Identity and Access Tools for Visual Studio 2012 will pick up the new version automatically; no action is required from you.

The Validation Mechanism in WIF’s Config

When you run the Identity and Access Tool for VS2012 (or the ASP.NET Tools for Windows Azure AD) you can take advantage of the metadata document describing the authority you want to trust to automatically configure your application to connect to it. In practice, the tool adds various WIF-related sections to your web.config; those are used to drive the authentication flow.
One of those elements, <issuerNameRegistry>, is used to keep track of the validation coordinates that must be used to verify incoming tokens; below is an example.

<issuerNameRegistry type="System.IdentityModel.Tokens.ValidatingIssuerNameRegistry, System.IdentityModel.Tokens.ValidatingIssuerNameRegistry">
  <authority name="">
    <keys>
      <add thumbprint="C1677FBE7BDD6B131745E900E3B6764B4895A226" />
    </keys>
    <validIssuers>
      <add name="" />
    </validIssuers>
  </authority>
</issuerNameRegistry>

The key elements here are the keys (a collection of thumbprints indicating which certificates should be used to check the signature of incoming tokens) and the validIssuers (a list of names that are considered acceptable values for the Issuer element of incoming tokens, or the equivalent in non-SAML tokens).

That’s extremely handy (have you ever copied thumbprints by hand? I did, back in the day); however, it’s not a license to forget about the issuer’s settings. What gets captured in config is just a snapshot of the current state of the authority, but there’s no guarantee that things won’t change in the future. In fact, it is actually good practice for an authority to occasionally roll its keys over.

If a key rolls, and you don’t re-run the tool to update your config accordingly, your app will now actively refuse the tokens signed with the new key; your users will be locked out. Not good.
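The core of the check is simple to picture. The sketch below is just the idea in JavaScript, not the WIF API: compare the thumbprints currently in config against the ones the authority publishes in its metadata document, and update config when they differ.

```javascript
// Returns true when the thumbprints stored in config no longer match the
// thumbprints the authority currently publishes in metadata.
function keysChanged(configuredThumbprints, metadataThumbprints) {
    var stored = configuredThumbprints.slice().sort().join(",");
    var published = metadataThumbprints.slice().sort().join(",");
    return stored !== published;
}

console.log(keysChanged(
    ["C1677FBE7BDD6B131745E900E3B6764B4895A226"],
    ["C1677FBE7BDD6B131745E900E3B6764B4895A226"])); // false: nothing to update
```

In WIF this comparison-and-rewrite is done for you, as the rest of the post explains.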

Dynamically Updating the Issuer Coordinates in Web.Config

WIF includes fairly comprehensive metadata manipulation API support, which you can use to set up your own config auto-refresh; however, it defaults to the in-box implementation of IssuerNameRegistry, ConfigBasedIssuerNameRegistry, and in this post I made clear that ValidatingIssuerNameRegistry has clear advantages over the in-box class. We didn’t want you to have to choose between easy config refresh and correct validation logic, hence we updated ValidatingIssuerNameRegistry to give you both.

In a nutshell, we added a new static method (WriteToConfig) which reads a metadata document and, if it detects changes, updates the <issuerNameRegistry> element in the web.config to reflect what’s published in metadata. Super-easy!

I would suggest invoking that method every time your application starts: that happens pretty often if you use the IIS defaults, and it is a safe time to update the web.config without triggering unwanted recycles. For example, here’s how your global.asax might look:

using System;
using System.Web.Http;
using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;
using System.Configuration;
using System.IdentityModel.Tokens;

namespace MvcNewVINR
{
    public class MvcApplication : System.Web.HttpApplication
    {
        protected void RefreshValidationSettings()
        {
            string configPath =
                AppDomain.CurrentDomain.BaseDirectory + "\\" + "Web.config";
            // The metadata address the tool stored in appSettings (the key name may differ in your project).
            string metadataAddress =
                ConfigurationManager.AppSettings["ida:FederationMetadataLocation"];
            ValidatingIssuerNameRegistry.WriteToConfig(metadataAddress, configPath);
        }

        protected void Application_Start()
        {
            RefreshValidationSettings();
            // ...the usual MVC startup code (areas, filters, routes, bundles)...
        }
    }
}

…and that’s it. :)

We also added another method, GetIssuingAuthority, which returns the authority coordinates it read without committing them to the config: this comes in handy when you override ValidatingIssuerNameRegistry to use your own custom logic, issuer repository, etc., and you want to record the info in your own custom schema.

Self-healing of the issuer coordinates is a great feature, and I would recommend you consider it for all your apps: especially now that it’s really easy to set up.

This post is short by Vittorio’s (and my) standards!

Alan Smith continued his series with Website Authentication with Social Identity Providers and ACS Part 3 - Deploying the Relying Party Application to Windows Azure Websites on 4/1/2013:

Originally posted on:

In the third part of the series looking at website authentication with social identity providers, I’ll focus on deploying the relying party application to a Windows Azure Website. This will require making some configuration changes in the ACS management console and in the web.config file, and also changing the way that session cookies are created.

The other parts of this series are here:

The relying party website has now been developed and tested in a development environment. The next stage is to deploy the application to Windows Azure Websites so that users can access it over the internet. The use of the universal profile provider and a Windows Azure SQL Database as a store for the profile information means that no changes in the database or configuration will be required when migrating the application.

Creating a Windows Azure Website

The first step is to create a Windows Azure Website in the Azure management portal. The following screenshot shows the creation of a new website in the West Europe region.


Note that the URL of the website must be unique globally, so if you are working through this solution, you will probably have to choose a different URL.

Configuring a Relying Party Application

With the website created, the relying party information will have to be configured for Windows Azure Active Directory Access Control, formerly known as Windows Azure Access Control Service (ACS). This is because the relying party configuration is specific to the URL of the website.

One option here is to create a new relying party with the URL of the Windows Azure Website, which will allow the testing of the on-premises application as well as the Azure hosted website. This would require the existing identity providers and rules to be recreated and edited for the new application. A quicker option is to modify the existing configuration, which is what I will do here.

The existing relying party application is present in the relying party applications section of the ACS portal.


In order to change the configuration for the hosted application, the name, realm, and return URL values will be changed to the URL of the Windows Azure Website. The screenshot below shows these changes.


For consistency, the name of the rule group will also be changed appropriately.


With these changes made, ACS will now function for the application when it is hosted in Windows Azure Websites.

Configuring the Relying Party Application

For the relying party application website to integrate correctly with ACS, the URL for the website will need to be updated in two places in the web.config file. The following code highlights where the changes are made.



<!-- reconstructed skeleton: the element nesting follows the standard WIF 4.5
     config schema; the URL values were elided in the original post -->
<system.identityModel>
  <identityConfiguration>
    <claimsAuthenticationManager type="RelyingPartyApp.Code.CustomCam, RelyingPartyApp" />
    <certificateValidation certificateValidationMode="None" />
    <audienceUris>
      <!-- website URL elided in the original -->
      <add value="" />
    </audienceUris>
    <issuerNameRegistry type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, System.IdentityModel, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <trustedIssuers>
        <!-- issuer name elided in the original -->
        <add thumbprint="B52D78084A4DF22E0215FE82113370023F7FCAC4" name="" />
      </trustedIssuers>
    </issuerNameRegistry>
  </identityConfiguration>
</system.identityModel>
<system.identityModel.services>
  <federationConfiguration>
    <cookieHandler requireSsl="false" />
    <!-- issuer, realm, and reply URLs elided in the original -->
    <wsFederation passiveRedirectEnabled="true" issuer="" realm="" reply=""
                  requireHttps="false" />
  </federationConfiguration>
</system.identityModel.services>



Deploying the Relying Party Application to Windows Azure Websites

The next stage is to deploy the relying party application to Windows Azure Websites. Clicking the download publish profile link will allow a publish profile for the website to be saved locally; this can then be used by Visual Studio to deploy the website to Windows Azure Websites.


Be aware that the publish profile contains the credential information required to deploy the website. This information is sensitive, so adequate precautions must be taken to ensure it stays confidential.

To publish the relying party application from Visual Studio, right-click on the RelyingPartyApp project, and select Publish.


Clicking the Import button will allow the publish profile that was downloaded from the Azure management portal to be selected.


When the publish profile is imported, the details will be shown in the dialog, and the website can be published.


After publication the browser will open, and the default page of the relying party application will be displayed.


Testing the Application

In order to verify that the application integrates correctly with ACS, the login functionality will be tested by clicking on the members page link and logging on with a Yahoo account. When this is done, the authentication process takes place successfully; however, when ACS routes the browser back to the relying party application with the security token, the following error is displayed.


Note that I have configured the website to turn off custom errors.


<customErrors mode="Off"/>


<!--<deny users="?" />-->


<authentication mode="None" />

The next section will explain why the error is occurring, and how the relying party application can be configured to resolve it.

Configuring the Machine Key Session Security Token Handler

The default SessionSecurityTokenHandler used by WIF to create a secure cookie is not supported in Windows Azure Websites. The resolution for this is to configure the MachineKeySessionSecurityTokenHandler to be used instead. This is configured in the identityConfiguration section of the system.identityModel configuration for the website as shown below.




<!-- reconstructed skeleton: the element nesting follows the standard WIF 4.5
     config schema; the URL values were elided in the original post -->
<system.identityModel>
  <identityConfiguration>
    <securityTokenHandlers>
      <remove type="System.IdentityModel.Tokens.SessionSecurityTokenHandler, System.IdentityModel, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
      <add type="System.IdentityModel.Services.Tokens.MachineKeySessionSecurityTokenHandler, System.IdentityModel.Services, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
    </securityTokenHandlers>
    <claimsAuthenticationManager type="RelyingPartyApp.Code.CustomCam, RelyingPartyApp" />
    <certificateValidation certificateValidationMode="None" />
    <audienceUris>
      <!-- website URL elided in the original -->
      <add value="" />
    </audienceUris>
    <issuerNameRegistry type="System.IdentityModel.Tokens.ConfigurationBasedIssuerNameRegistry, System.IdentityModel, Version=, Culture=neutral, PublicKeyToken=b77a5c561934e089">
      <trustedIssuers>
        <add thumbprint="B52D78084A4DF22E0215FE82113370023F7FCAC4" name="" />
      </trustedIssuers>
    </issuerNameRegistry>
  </identityConfiguration>
</system.identityModel>





With those changes made, the website can be deployed, and the authentication functionality tested again.


This time the authentication works correctly: the browser is redirected to the members page, and the claims in the security token are displayed.

No significant articles today


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

•• Thomas W. Shinder (MD) (@tshinder) continued his series with Active Directory Considerations in Azure Virtual Machines and Virtual Networks Part 3 – Virtual DCs and AD File Placement on 4/5/2013:

In the first part of this series on Active Directory considerations in Azure Virtual Machines and Virtual Networks we took a look at Hybrid IT and how Hybrid IT can solve some important problems in your datacenter. In the second part of the series we took an overview of key Azure Virtual Machines and Virtual Networks concepts and capabilities. In this installment, part 3, we’ll examine whether it is safe to host domain controllers in a virtualized environment and where you should put the Active Directory related database and supporting files.

If you haven’t read the first two parts of the series, check these out:

Let’s start the discussion today with safety concerns regarding virtualizing domain controllers.

How safe is it to Virtualize Domain Controllers?

If you’ve been in the virtualization space for a while, you probably know that there are issues with virtualizing domain controllers. Many of us have virtualized domain controllers in the past only to feel the pain of something going wrong and either cratering our domains or creating a lot of problems we wish we had avoided.

For example, we know that backing up and restoring domain controllers can roll back the state of the domain controller and lead to issues related to inconsistencies in the Active Directory database. When it comes to virtualization, restoring snapshots would have the same effect as restoring from backup – the previous state would be restored and lead to Active Directory database inconsistencies. The same effects are seen when you use more advanced technologies to restore a domain controller, such as using SAN snapshots and restoring those, or creating a disk mirror and then breaking the mirror and using the version on one side of the mirror at a later time.

The issue here is USN bubbles. USN bubbles can create a number of issues, including:

  • Lingering objects in the Active Directory database
  • Inconsistent passwords
  • Inconsistent attribute values
  • Schema mismatch if the Schema Master is rolled back
  • Potential for duplicated security principals

For these reasons and more, you want to make sure that you avoid USN bubbles.

Note: For more information on USN bubbles, check out How the Active Directory Replication Model Works.
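To make the rollback scenario concrete, here is a toy model (JavaScript, purely illustrative; real domain controllers track this with invocation IDs and up-to-dateness vectors, and all names below are invented) of how a replication partner detects that a source DC’s USN has gone backwards after a snapshot restore:

```javascript
// Illustrative only: a toy model of USN-based replication state.
function createDc(name) {
  return { name: name, usn: 0, committed: [] };
}

function writeObject(dc, obj) {
  dc.usn += 1; // every originating write bumps the update sequence number
  dc.committed.push({ usn: dc.usn, obj: obj });
}

function snapshot(dc) {
  // deep-copy the DC state, as a VM snapshot would
  return JSON.parse(JSON.stringify(dc));
}

// A partner remembers the highest USN it has seen from each source DC.
// If the source's current USN is LOWER than that high-water mark, the
// source must have been rolled back: the "USN bubble" situation.
function detectRollback(partnerHighWaterMark, sourceDc) {
  return sourceDc.usn < partnerHighWaterMark;
}

const dc1 = createDc("DC1");
writeObject(dc1, "user:alice");
const saved = snapshot(dc1);   // snapshot taken at USN 1
writeObject(dc1, "user:bob");  // USN 2, already replicated to partners

const partnerSeen = dc1.usn;   // partner's high-water mark is now 2
const restored = saved;        // admin restores the snapshot: USN drops back to 1

console.log(detectRollback(partnerSeen, restored)); // → true
```

The writes that happened between the snapshot and the restore (user:bob here) exist on the partners but not on the restored DC, which is how lingering objects and inconsistent attributes arise.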

Virtualization makes it all too easy to create a USN bubble scenario, and therefore the general recommendation has been that you should not virtualize domain controllers. However, with the introduction of Windows Server 2012, it is now fully supported to virtualize domain controllers. This is accomplished by a new feature included in the hypervisor called the VM Generation ID. When a domain controller is virtualized on a supported virtualization platform, it will wait until replication takes place to be told what to do. If the virtualized DC is one that was restored from a snapshot, it will wait to be told what the correct state is.

For more information on VM Generation IDs, please see Introduction to Active Directory Domain Services Virtualization.

It’s important to note that VM Generation IDs need to be supported by both the hypervisor and the guest operating system. Windows Server 2012 Hyper-V and the Windows Server 2012 operating system acting as a guest both support VM Generation IDs. VMware also supports VM Generation ID when running Windows Server 2012 domain controller guests. At this time, Azure Virtual Machines and Virtual Networks does not support VM Generation ID, although that may be a function of the current customer preview offering. Make sure to check for updated guidance on Azure support for VM Generation ID after the service becomes generally available.

When creating a domain controller in Azure Virtual Machines and Virtual Networks, make sure that you either create them new and place them in a Virtual Network, or use one that you created on premises and move it to an Azure Virtual network. You don’t want to sysprep domain controllers, mostly because it won’t work – sysprep will tell you that it won’t sysprep the domain controller because it detects when that’s being done.

Instead, move the VHD file to Azure storage and then create a new virtual machine using that VHD file. If your on premises domain controller is running on physical hardware, then do a physical to virtual conversion and move the resultant .vhd file to Azure storage and create the new virtual machine from that file. You also have the option to create a new domain controller in Azure Virtual Machines and Virtual Networks and enable inbound replication to the domain controller. In this case, all the replication traffic is inbound, so it won’t cost you any money for traffic costs for the initial inbound replication, but it will cost you in the future for outbound replication. More on that later in this series.

Where Should I Place Active Directory Related Files?

Azure supports two disk types where you can store information for virtual machines:

  • Operating System Disks (OS Disks) – used to store the operating system files
  • Data Disks – used to store any other kind of data

There is also a “temporary disk”, but you should never store data on a temporary disk because the information on the temporary disk is not persistent. It is primarily used for the page file to speed up the boot process.

The difference between a data disk and an OS disk relates to their caching policies. The default caching policy for an OS disk is read/write. The way this works is that when read/write activity takes place, it is first performed on a caching disk; after a period of time, it is written to permanent blob storage. The reason for this is that for the OS disk, which contains (hopefully) only the core operating system support files, the reads and writes will be small. This makes local caching a more efficient mechanism than making the writes directly to permanent storage. Also be aware that the OS disk size limit at this time is 127 GB, but again, this may change in the future.

The default caching policy for Data Disks is “none”, which means no caching. Data is written directly to permanent storage. Unlike OS Disks, which are currently limited to 127 GB, Data Disks currently support 1 TB. If you need more storage for a disk, you can span up to 16 disks for up to 16 TB, which is available as part of the current Extra Large virtual machine’s disk offering. Note that this is the current maximum disk size and that this might change in the future.

With all this in mind, think about where you would want to place the DIT/Sysvol location. Would it be where caching could lead to a failure to write, or would it be where Active Directory related information is immediately written to disk? If you said the latter, you’d be correct.

The main reason for this is that write-behind disk caching invalidates some core assumptions made by Active Directory:

  • Domain controllers assert forced unit access (FUA) and expect the I/O infrastructure to honor that assumption
  • FUA is intended to ensure that sensitive writes make it to permanent media (not temporary cache locations)
  • Active Directory seeks to prevent (or at least reduce the chances) of encountering a USN bubble situation

For more information related to Active Directory and FUA, please see Things to consider when you host Active Directory domain controllers in virtual hosting environments.
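The FUA assumption above can be illustrated with a toy model (JavaScript, names invented for the sketch): a write acknowledged from a write-behind cache is lost if the host fails before the cache is flushed, while a write forced straight to permanent media survives – which is exactly why the DIT and Sysvol belong on a Data Disk:

```javascript
// Toy model of a disk with an optional write-behind cache.
function createDisk(writeBehind) {
  return { writeBehind: writeBehind, cache: [], media: [] };
}

function write(disk, record) {
  if (disk.writeBehind) {
    disk.cache.push(record);  // acknowledged, but only sitting in cache
  } else {
    disk.media.push(record);  // forced straight to permanent media (FUA-like)
  }
}

function flush(disk) {
  disk.media = disk.media.concat(disk.cache);
  disk.cache = [];
}

function crash(disk) {
  disk.cache = [];            // cache contents are lost; media survives
}

const osDiskLike = createDisk(true);    // read/write caching (OS disk default)
const dataDiskLike = createDisk(false); // no caching (data disk default)

write(osDiskLike, "AD transaction 1");
write(dataDiskLike, "AD transaction 1");
crash(osDiskLike);
crash(dataDiskLike);

console.log(osDiskLike.media.length);   // → 0: the cached write was lost
console.log(dataDiskLike.media.length); // → 1: the direct write survived
```

A domain controller that believes a transaction reached permanent media when it only reached a cache is precisely the situation FUA is meant to rule out.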


In this article we took a look at issues related to the safety of putting a domain controller in a virtual environment and where you should put the Active Directory related files. The conclusion is that improvements in Windows Server 2012 and in Windows Server 2012 Hyper-V make it possible to host domain controllers safely in a virtualized environment. While these improvements are not currently available in Azure, it is hoped that they will someday be ported to the service; keep a lookout on support documentation to see if this is implemented. When it comes to placement of the Active Directory database and supporting files, the conclusion was to put them on a Data Disk, so that information is immediately written to permanent media. The next installment of this series will cover issues around optimizing your deployment for traffic and traffic-related costs. See you then! –Tom.

Mary Jo Foley (@maryjofoley) asserted “A new version of Microsoft's tool for creating, publishing and maintaining Web sites and Web apps adds tighter Windows Azure integration and more” in a deck for her Microsoft delivers Webmatrix 3 Web-development tool bundle report of 4/3/2013 for ZDNet’s All About Microsoft blog:

Microsoft has made available for download an updated version of its WebMatrix tool bundle.

Microsoft's WebMatrix is a free set of tools for creating, publishing and maintaining Web sites. It enables developers to quickly install and publish open-source applications or built-in templates to create, publish and maintain their Web sites. Included in the bundle are a Web server, database engines, various programming languages and more. It is aimed at developers using ASP.Net, PHP, Node.js and/or HTML5.


WebMatrix 3, the latest version, adds integration with Git and Microsoft's TFS source-control systems. It also provides easy access to Windows Azure websites (the Web hosting framework codenamed "Antares"), according to Microsoft's WebMatrix site, as well as seamless access of remote sites. Other touted features of the third version include better integration with Windows Azure and improved Intellisense for PHP and Node.js.

image"When you create local projects, you’ll be able to instantly get a companion website in Windows Azure without ever leaving WebMatrix. Using the Publish button, you can easily keep these sites in sync and save your changes to the cloud," according to Microsoft's promotional page for the newest WebMatrix release.

Originally launched in 2010, WebMatrix got its start as a collection of a lightweight version of Microsoft’s IIS Web Server, known as IIS Express; an updated version of SQL Server Compact Edition; and a new “view-engine option” for ASP.Net, known as “Razor,” which enabled developers to embed Visual Basic or C# within HTML.

There's more information about WebMatrix 3 available on Microsoft's Channel 9.

Update: It looks like WebMatrix 3 may not be announced officially until April 4, but it is actually downloadable as of April 3.

Brady Gaster (@bradygaster) posted a 00:03:50 Dropbox Deployment to Windows Azure Web Sites Channel9 video on 3/25/2013 (missed when published):

Windows Azure Web Sites provides a multitude of deployment options – FTP and Web Deploy publishing, as well as integrated publishing from source-control services such as BitBucket. The latest addition to the publishing options makes web site deployment as easy as copying files into a local Dropbox folder on your development workstation.

This video walks through the very simple process of marrying up a Dropbox folder with a Windows Azure Web Site, and shows how quickly you can get a web site up and running or maintain it using simple drag-and-drop deployment.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Haishi Bai (@HaishiBai2010) described Cloud + Devices Scenarios (1): Say a Picture and provided a source code link in a 3/23/2013 post (missed when posted):

Say a Picture is a sample Windows Azure Cloud Service that allows remote users to dictate a picture together over connected Kinect sensors. Kinect sensors enable users to interact with the system in a casual living room setting, with multiple participants working together either locally or remotely. The solution combines the devices’ capability for data collection and the cloud’s ability to coordinate work over the Internet to create an enjoyable experience for users. In this post I’ll walk through the creative process of this project so that you can build similar (and cooler) applications like this. I’ll also provide a link to download the complete source code at the end of this post.


What you need
System Architecture

Our system comprises a client and a server. The client talks to the Kinect sensor and uses speech recognition to convert spoken English into short commands, such as “add a horse” and “make it red”. The client then sends the parsed short commands to the server. The server takes in commands sent from clients and converts them into drawings on an HTML5 canvas. The server is designed to support multiple concurrent sessions. Clients that use the same session id share the same canvas, so that multiple remote users can join the same painting session to work on the same painting. The following diagram depicts the overall system architecture:


Client Implementation

The client is implemented as a Windows console application using C#, the Kinect SDK, and the Speech SDK. The key to the client implementation is the grammar tree used by the speech recognition engine. The grammar tree is an XML document; the following is a picture representation. As you can see, this grammar is quite simple, with a very limited vocabulary. In the diagram, #object represents one of the recognized objects – trees, mountains, horses, birds, etc. #color is one of the supported colors, such as red, green, and blue. Using an explicit grammar tree not only makes command interpretation much easier (compared to interpreting natural language), but also allows users to chat casually in between the commands, making it an even better experience.
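A toy version of such a fixed-vocabulary matcher can be sketched in JavaScript (the word lists below are guesses for illustration, not the sample’s actual grammar XML):

```javascript
// Illustrative matcher for commands of the form
//   "add a <color>? <object>"  or  "make it <color>".
// The vocabulary is assumed, not taken from the real grammar tree.
const objects = ["horse", "bird", "tree", "mountain", "desk", "plant"];
const colors = ["red", "green", "blue"];

function matchCommand(text) {
  const words = text.toLowerCase().split(/\s+/);
  if (words[0] === "add" && words[1] === "a") {
    if (colors.indexOf(words[2]) >= 0 && objects.indexOf(words[3]) >= 0) {
      return { verb: "add", color: words[2], object: words[3] };
    }
    if (objects.indexOf(words[2]) >= 0) {
      return { verb: "add", object: words[2] };
    }
  }
  if (words[0] === "make" && words[1] === "it" && colors.indexOf(words[2]) >= 0) {
    return { verb: "color", color: words[2] };
  }
  return null; // casual chat between commands simply fails to match
}

console.log(matchCommand("add a red horse"));  // → { verb: 'add', color: 'red', object: 'horse' }
console.log(matchCommand("how was your day")); // → null
```

Because anything outside the grammar returns no match, users can talk freely between commands without confusing the system, which is exactly the benefit the post describes.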


Speech recognition using the Speech SDK is rather simple: once you create a SpeechRecognitionEngine instance, you can load the grammar tree and then simply wait for the SpeechRecognized event, in which you can get the recognized text as well as a confidence value that indicates how confident the engine is about the result:

mSpeechEngine = new SpeechRecognitionEngine(info.Id);
using (var memoryStream = new MemoryStream(Encoding.ASCII.GetBytes(Properties.Resources.SpeechGrammar)))
{
    var g = new Grammar(memoryStream);
    mSpeechEngine.LoadGrammar(g);
}
mSpeechEngine.SpeechRecognized += mSpeechEngine_SpeechRecognized;

Once a short sentence that matches the defined grammar rules is detected, the client sends the string to the server via an API call.

Server Implementation

The server is implemented as a Windows Azure Cloud Service with a single ASP.NET Web Role. The Web Role uses SignalR to broadcast states to all clients within the same session, and it uses HTML5 + jQuery as the frontend. Because the command interpreter is implemented in browser-side JavaScript, the server-side code is extremely simple – it gets commands that are passed in by clients, and then broadcasts the commands to all clients within the same session:

public void SendCommand(string group, string text)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<CommandHub>();
    context.Clients.Group(group).command(new { Command = text });
}
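The session-grouping behavior can be modeled without SignalR at all. This JavaScript sketch (invented helper names, not the real SignalR API) shows why only clients sharing a session id see each other’s commands:

```javascript
// Toy model of SignalR-style groups: a hub keeps named groups of
// client callbacks and broadcasts a command only within one group.
function createHub() {
  const groups = {};
  return {
    join: function (group, client) {
      (groups[group] = groups[group] || []).push(client);
    },
    sendCommand: function (group, text) {
      (groups[group] || []).forEach(function (client) {
        client({ Command: text }); // same payload shape as the server snippet
      });
    }
  };
}

const hub = createHub();
const seenByA = [], seenByB = [], seenByC = [];

hub.join("session-1", function (msg) { seenByA.push(msg.Command); });
hub.join("session-1", function (msg) { seenByB.push(msg.Command); });
hub.join("session-2", function (msg) { seenByC.push(msg.Command); });

hub.sendCommand("session-1", "add a horse");

console.log(seenByA); // → [ 'add a horse' ]
console.log(seenByB); // → [ 'add a horse' ]
console.log(seenByC); // → []
```

In the real application SignalR does this bookkeeping for you: joining a group on connect and calling `Clients.Group(...)` on send gives exactly this fan-out.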
Command Interpreter

The command interpreter is not that complex in this case, because we are not dealing with complex constructs such as sub-clauses. We can simply go through the command from left to right and handle the keywords as we go. A recommended way of creating this interpreter is to create a separate function for each of the keywords. Because we can do a linear scan of words in this case, I used a simple array; however, if your grammar is more complex, you may want to use operator/operand stacks. The interpreter also maintains a reference to the latest referenced object as “it”. For instance, once you have added a bird to the canvas, you can use the command “move it left” to move the bird, because “it” automatically refers to the last object you interacted with. The following code snippet shows the implementation of the two top-level functions. handleCommand() handles an incoming command by looping through the words and calling handleWord() on each one. handleWord() then calls into the corresponding functions based on the keyword it encounters.

function handleCommand(commandText) {
  commandWords = commandText.split(' ');
  var i = 0;
  while (i < commandWords.length) {
    i = handleWord(i);
  }
}

function handleWord(index) {
  switch (commandWords[index]) {
    case 'add':
      return handleAddCommand(index + 1);
    case 'move':
      return handleMoveCommand(index + 1);
    default:
      return index + 1;
  }
}

Note the functions don’t return any errors. They simply skip the sentences they don’t understand and wait for users to try again. A very nice command implemented by this interpreter is the “put it on …” command. For instance, in the following scenario, you have a plant that you want to put on the desk. Instead of moving it around step by step, you can directly say “put it on desk” to make the plant land perfectly on desk. Note in this version of the code I added some hardcoded adjustments to make this scenario work perfectly. A more generic implementation requires a more sophisticated object description/interaction system.
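The “put it on” behavior comes down to simple coordinate math. Here is a hedged sketch (the rectangle model and function name are invented for illustration, not taken from the sample’s code):

```javascript
// Each canvas object is modeled as a rectangle: x, y is the top-left
// corner, w and h its size. "put A on B" centers A horizontally on B
// and rests A's bottom edge on B's top edge.
function putOn(it, target) {
  it.x = target.x + (target.w - it.w) / 2;
  it.y = target.y - it.h;
}

const desk = { x: 100, y: 200, w: 80, h: 60 };
const plant = { x: 0, y: 0, w: 20, h: 30 };

putOn(plant, desk); // "put it on desk", where "it" is the plant

console.log(plant.x, plant.y); // → 130 170
```

A more general object-interaction system would also need per-object anchor points (a bird lands differently than a plant sits), which is the “hardcoded adjustments” caveat mentioned above.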


Application Scenarios

There are many possible application scenarios for Kinect speech recognition + a cloud service. Here is a short list of some of the scenarios I can immediately think of:

  • Collaborative painting as a casual, multiplayer game.
  • Interactive interior design system that allows a designer to work with her clients remotely. And they can work together on the room layout by simply talking about it.
  • Collaborative software/hardware/system design, where team members can talk about a possible design and see it painted immediately on the screen.
  • Automatic picture/animation generation as you tell a story. This could be a nice touch to bedtime stories.

And with more sophisticated code, you can do things such as running a physics simulation by simply describing it, playing chess by speaking your next moves, or playing a multiplayer strategy game by dictating orders to your troops… the possibilities are endless.

What’s next

This initial version has only limited functionality. As the goal is to allow users to interact with the system naturally, there are quite a few improvements that can be made to make the user experience even better. One enhancement I’m planning is to allow the system to learn new concepts easily, so that it can be extended to accommodate more and more scenarios. Another area of improvement is to allow rich, natural interactions among objects. The “put x on y” command is a preview of such interactions.

You can get the complete source code here.

“Soma” Somasegar (@SSomasegar) reported Visual Studio 2012 Update 2 Now Available on 4/4/2013:

We finished the RTM release of Visual Studio 2012 in August 2012 and launched it in September.  At that time, we committed to releasing new value into Visual Studio via a regular cadence of Visual Studio Updates, and in November 2012 we released our first, Visual Studio 2012 Update 1 (VS2012.1), which contained not only bug fixes and performance improvements, but also new functionality spanning four primary areas of investment: Windows development, SharePoint development, agile teams, and continuous quality.

I’m excited to announce that today we’ve shipped Visual Studio 2012 Update 2 (VS2012.2) and that it’s now available for download.  Just as with VS2012.1 (which is installed as part of VS2012.2 for those of you who don’t already have VS2012.1 installed), this release contains important fixes as well as a wealth of new functionality, addressing feedback we’ve received from the community and aligning with key software development trends in the market.  The new functionality primarily spans (though is not limited to) five areas of investment: agile planning, quality enablement, Windows Store development, line-of-business development, and the general developer experience.

Agile planning. Visual Studio 2012 introduced a wide range of capabilities focused on enabling agile teams, not only for development but also for planning.  With VS2012.2, Team Foundation Server (TFS) has been augmented with an additional variety of features to help make it even easier for agile teams to do their planning, in particular around adapting to a team’s preferences and work styles.  For example, VS2012.1 introduced new project tracking options, including a Kanban board and a cumulative flow diagram; VS2012.2 augments those experiences with the ability to customize the Kanban board to adapt it for an organization’s needs.  Other features include work item tagging that provides a simple and flexible way to add metadata to work items in support of better organization and reporting, support for emailing work items via the TFS Web Access portal, and more.

Quality Enablement. A key focus area for Visual Studio 2012 is in enabling quality to be maintained and improved throughout development cycles.  This focus can be seen not only in the RTM release, but also in VS2012.1, with the added support for code coverage with manual ASP.NET testing, with support in Test Explorer for custom “traits”, with support for cross-browser testing, and with improvements to Microsoft Test Manager.  Now with VS2012.2, support for quality enablement is taken even further.  This update introduces web-based access to the Test Case Management tools in TFS such that users can now author, edit, and execute test cases through the web portal.  It also includes the ability to profile unit tests (with results across both the unit tests and the code under test surfaced through a single report), improved unit testing support for both asynchronous code and for interactions with the UI, unit testing support for Windows Phone 8 apps, unit test playlists that enable a subset of tests to be managed together, significant improvements around testing for SharePoint 2013 (web and load testing, unit testing with emulators, coded UI support, and IntelliTrace support), and more.

Windows Store development. VS2012.2 includes additional new features for Windows Store development, beyond the quality enablement capabilities already mentioned.  For example, VS2012.1 included a new memory profiling tool for apps implemented with JavaScript, enabling developers to better understand the memory usage of their apps, to find and fix leaks, and so forth; for VS2012.2, we continued to invest in improved diagnostics for JavaScript apps with a new profiling tool that helps diagnose UI responsiveness issues and latency in visual updates.  This release also incorporates the latest version of the Windows App Certification Kit.

Line-of-business development. Beyond improved support for building Windows Store apps, VS2012.2 also brings with it a wealth of new and improved capabilities for developing and modernizing line-of-business (LOB) apps.  This includes the ability to use LightSwitch to easily build cross-browser and mobile web clients with HTML and JavaScript, with support to target SharePoint 2013 and Office 365.  It includes support in Blend for SketchFlow, WPF 4.5, and Silverlight 5.  And more. [Emphasis added.]

Development experience. As developers spend so much of their time using the IDE, it’s important that Visual Studio provide as streamlined an experience as possible.  Towards that end, we continually invest in new features and productivity enhancements to make the IDE the best and most productive environment possible, a trend we continue with VS2012.2.  Code map has been updated with improved responsiveness as well as with debugger integration, providing a visual perspective on the relationships and dependencies in code being debugged.  Symbol loading has been improved across both the profiling and IntelliTrace experiences.  The Workflow designer now has an improved debugging experience.  The XAML design surface in both Blend and the Visual Studio editor includes multiple performance and reliability improvements, in particular when loading large projects and when using third-party controls.  The IDE’s light and dark themes are now joined by a third, blue theme.  And more, such as including all of the improvements made available through ASP.NET and Web Tools 2012.2.

Install Update 2 today to get the latest support Visual Studio has to offer.  A more expansive list of what’s new in VS2012.2, including new features and bug fixes, is also available.


See also the Visual Studio LightSwitch and Entity Framework v4+ section for the HTML Client update.

Adam Grocholski (@codel8er) posted an Android to Windows 8: The Complete Guide to his Think First, Code Later blog on 4/2/2013:

Over the past several weeks I’ve been posting about how to build your first Windows Store application if you’re an Android developer. I want to wrap up this series by providing some useful links.

All Posts in this Series
  1. Setting up the Development Environment
  2. Creating Your First Windows Store Project
  3. Exploring the Windows Store Project
  4. Running the Windows Store Application
  5. Building a Simple User Interface
  6. Creating a Second Page
  7. Adding “Back” Navigation

If you would prefer a PDF of all seven posts, you can get it here:

image_thumb75_thumb5Additional Windows 8 Development Resources


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Matt Sampson (@TrampSansTom) updated his ASP.NET SignalR and LightSwitch (VS 2012 Update 2)!!! of 3/14/2013 on 3/28/2013 (missed when published):

imageUpdate - I uploaded the code and project to MSDN Code Gallery here:

SignalR? What is that?

SignalR is described as “Incredibly simple real-time web for .NET”.

It’s a great way for a JavaScript client to call directly into Server side methods, and a great way for the Server to push up updates or notifications to the JavaScript client.

I heard a couple people raving about SignalR recently, and figured it was time to see if we could get LightSwitch and SignalR to meet.

LightSwitch … meet SignalR

imageWe’re going to make a simple LightSwitch (LS) application here. And then we are going to use SignalR to push up some real-time notifications to the LightSwitch HTML Client.

We’ll end up with a LS HTML App that receives a notification every time a new entity is inserted or updated.

This would allow any user to know whenever data has been changed.

Let’s start out with a simple “Contacts” LS app.  Everything here should be pretty basic if you’ve done any LS stuff before.

  1. Create a new project – LightSwitch HTML Application (Visual C#)
  2. Call it - “ContactsSignalR”
  3. Add a table – call it “Contact”
  4. The table should look something like this when you are done:
  5. image
  6. Now let’s make a simple HTML Browse Screen around the “Contacts” data
  7. In the screen designer, select “Screen | Browse Contacts (Browse Screen)” and set the property “Behavior – ScreenType” to “Edit”. This will enable the “Save” button to show on our screen.
  8. Now let’s add a data item to our screen – so in the Screen Designer click your “Add Data Item…” button
  9. image
  10. That’ll open up a dialog – let’s call this property “updates” and make sure “Is Required” is unchecked. It should look like this:
  11. image
  12. Now drag and drop your updates data item onto your screen – put it right below the “Command Bar” so that it looks like:
  13. image
  14. Select the “updates” control on the screen designer and set the following properties:
    1. Label Position: Hidden
    2. Font Style: Large
  15. We’ll use this control later on to post our “real-time” updates to the screen
  16. Now I’d like to be able to Add and Edit new Contacts entities here, so select the Command bar, right click and say “Add button”
  17. Select Contacts.addAndEditNew and Navigate To: New Screen like this:
  18. image
  19. This is an easy way to quickly create a new Add/Edit screen for the Contacts entity
  20. Important! - After the screen is created your Screen Designer changes to show the AddEditContact screen. Make sure you double click the “BrowseContacts” screen again to set focus back.
  21. Do the same thing again – add an EditSelected button this time for the existing screen you just made, like this:
  22. image
Basic App is done. Onto SignalR

At this point, we’ve basically made a simple LS App to add and edit new Contacts.

Now we need to shove in some SignalR.  To do that we first need to “NuGet” our projects.

    1. In the Solution Explorer toolbar switch to “File View” like so:
    2. image
    3. Now right click the HTMLClient project and select Manage NuGet Packages:
    4. image
    5. Now select Online packages, search on SignalR and install the below entry:
    6. image
    7. This will add the SignalR JavaScript references to your HTML Client
    8. But we need to add them to the default.htm file as well, so open up that file and add these two lines:

<script type="text/javascript" src="Scripts/jquery.signalR-1.0.1.js"></script>
<script src="../signalr/hubs"></script>

  1. The first line is a reference to the SignalR library
  2. The second line is actually a reference to some JavaScript that will be dynamically generated by our SignalR server later on.
  3. Now right click the Server project, and select “Manage NuGet Packages…”
  4. This time we want to install the SignalR package for .NET server components:
  5. image
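The dynamically generated “../signalr/hubs” script referenced earlier is JavaScript that the server builds from the hubs it finds at startup. A rough plain-JavaScript sketch of that idea (this is not SignalR’s actual generator; `generateProxy` and the method names are invented for illustration):

```javascript
// Toy illustration of how a server can generate a client proxy script
// from hub metadata, the way SignalR serves "/signalr/hubs".
// (Not SignalR's real generator; the method names are made up.)
function generateProxy(methodNames) {
    var lines = methodNames.map(function (m) {
        return "proxy.server." + m +
            " = function () { return connection.invoke('" + m + "'); };";
    });
    return "var proxy = { server: {}, client: {} };\n" + lines.join("\n");
}

var script = generateProxy(["sendMessage", "joinGroup"]);
console.log(script.indexOf("proxy.server.sendMessage") !== -1); // true
```

The client downloads a script like this at runtime, which is why the `<script src="../signalr/hubs">` tag has no matching file on disk.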
NuGet is done. Put SignalR code into the Server!

We’ll need to add some Server side code here to start up the SignalR Server Hub. And to allow for the client and the server to talk to each other.

    1. Right click the Server project again and say “Add –> New Item”
    2. Add a Global.asax file to our project – we will need this to put in some custom application start up code
    3. Right click the Server project again and say “Add –> New Item”
    4. This time add a “Web Api Controller” class to our project.
      1. We only do this because it will automatically pull in some dll references for us that we’ll need later.
    5. Open up your Global.asax.cs file and paste in the following code:

        protected void Application_Start(object sender, EventArgs e)
        {
            var config = new Microsoft.AspNet.SignalR.HubConfiguration
            {
                EnableCrossDomain = true
            };
            System.Web.Routing.RouteTable.Routes.MapHubs(config);

            System.Web.Http.GlobalConfiguration.Configuration.Routes.MapHttpRoute(
                name: "DefaultApi", routeTemplate: "api/{controller}/{id}",
                defaults: new { id = System.Web.Http.RouteParameter.Optional });
        }
    1. This code will get called when our LS app starts.  It starts up the SignalR server hub so that the clients can connect to it.
    2. Right click your Server project and add one more file – a class file and call it ContactHub
    3. Paste the below code into your class:

using Microsoft.AspNet.SignalR;

namespace LightSwitchApplication
{
    public class ContactHub : Hub
    {
    }
}
  1. All we are doing is creating our own Hub here that we’ll use later to talk to the client from the server.
Put SignalR into the Client!

We need to put some basic JavaScript code into our Browse screen so that it can modify the “updates” label, and so that the Server has a function to call on the Client.

    1. Open up the Browse Contacts screen
    2. Select the Write Code – create method:
    3. image
    4. This is where we’ll add our basic JavaScript. So copy and paste the below code into the BrowseContacts.js file
/// <reference path="../GeneratedArtifacts/viewModel.js" />
/// <reference path="../Scripts/jquery.signalR-1.0.1.js" />

myapp.BrowseContacts.created = function (screen) {
    // Write code here.
    $(function () {
        var contact = $.connection.contactHub;
        contact.client.broadcastMessage = function (message) {
            screen.updates = message;
        };
        $.connection.hub.start()
            .done(function () {
            })
            .fail(function () {
                alert("Could not Connect! - ensure EnableCrossDomain = true");
            });
    });
};
  1. Here’s what this does:
    1. $.connection.contactHub <- This is our connection to the ContactHub we made on our server
    2. contact.client.broadcastMessage <- this is the JavaScript function we are going to invoke from the Server (which we’ll do shortly).  This function will set the “updates” screen item with some text.
    3. $.connection.hub.start() – this just “starts” up the connection from the client to the SignalR Hub
Almost done! Let’s have the Server call the Client.

We need one final piece here – call into the JavaScript client every time a Contact is inserted or edited.

    1. So double click the Contact’s entity.
    2. Select Write Code – > Contacts_Inserted
    3. Paste in the below code:
        partial void Contacts_Inserted(Contact entity)
        {
            string message = "A contact for " + entity.FirstName + " " + entity.LastName + " was just created";
            var context = GlobalHost.ConnectionManager.GetHubContext<ContactHub>();
            context.Clients.All.broadcastMessage(message);
        }
    1. This will call the “broadcastMessage” JavaScript function we wrote earlier for ALL active LightSwitch clients when a Contact is inserted. When that function is called, the updates label will automatically be updated to show our message.
    2. Do this again for the Contacts_Updated method. So open up the entity again. Select Write Code –> Contacts_Updated.
    3. Paste in the below code:
        partial void Contacts_Updated(Contact entity)
        {
            string message = "A contact for " + entity.FirstName + " " + entity.LastName + " was just updated";
            var context = GlobalHost.ConnectionManager.GetHubContext<ContactHub>();
            context.Clients.All.broadcastMessage(message);
        }
  1. Same thing here as the Inserted method, except it will only be called when a Contact is updated.
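The fan-out the server gets from GetHubContext<ContactHub>() (one event notifying every connected client) can be pictured with a tiny plain-JavaScript stand-in. No real SignalR is involved; `TinyHub` and the screen objects are invented for illustration, and only the `broadcastMessage` name mirrors the tutorial:

```javascript
// Toy stand-in for a SignalR hub: keeps a list of connected clients
// and invokes the same callback on each of them.
function TinyHub() {
    this.clients = [];
}
TinyHub.prototype.connect = function (client) {
    this.clients.push(client);
};
// Mirrors what broadcasting to Clients.All does: every connected
// client's broadcastMessage handler fires with the same payload.
TinyHub.prototype.broadcastMessage = function (message) {
    this.clients.forEach(function (client) {
        client.broadcastMessage(message);
    });
};

// Two "browsers", each with its own screen object.
var hub = new TinyHub();
var screenA = { updates: "" };
var screenB = { updates: "" };
hub.connect({ broadcastMessage: function (m) { screenA.updates = m; } });
hub.connect({ broadcastMessage: function (m) { screenB.updates = m; } });

// One insert on the server notifies both clients at once.
hub.broadcastMessage("A contact for Jane Doe was just created");
console.log(screenA.updates === screenB.updates); // true
```

This is why, in the two-browser test later in the post, both windows update at the same time from a single save.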
That’s it! F5 it and hang on.

F5 the LS app to build it and run it.

A browser should launch.

Go ahead and create a new Contact and Save it.  You should see something like this after you save.


This is cool…BUT try this with two browsers open to really blow your mind.

So launch another instance of your web browser.

Copy and paste your http://localhost:NNNN/HTMLClient URL from your first browser into the second browser’s address bar.

Now create another record again. You’ll see this:


Both browsers got updated at the same time!

That’s pretty awesome, IMO.

So awesome, that I had to make a YouTube video for it:

Matt appears to have forgotten to embed or add a link to his video. It’s here.

• Beth Massi (@bethmassi) suggested LightSwitch HTML Client & SharePoint Resources–Get Started Building HTML5 Business Apps Today! in a 4/4/2013 post:

imageThis morning we released the LightSwitch HTML client in Visual Studio 2012 Update 2! I can’t tell you how excited I am to see this released to the public. It’s been an exciting, challenging journey and I congratulate the team on this important milestone of one of the most exciting products I have been fortunate to be a part of. Thank you to the community, the team, and my family for supporting me and LightSwitch for the last couple years!

LightSwitch in Visual Studio 2012 Update 2

(Note: if you are upgrading from Update 2 CTP4 then you can go ahead and install the final Update 2 release. Users of the LightSwitch Preview 2 please read these important upgrade instructions.)

imageWith the release of LightSwitch in Visual Studio 2012 Update 2, we’ve also released a bunch of updated and new resources for you to check out on the LightSwitch Developer Center.

New “How Do I” Videos!

imageYes that’s right folks! We’ve got a new “How Do I…?” video series that will help you get started with the new HTML client! (Yes, that’s my voice so get used to it! ;-))

I’ve got 5 live now and stay tuned for more in the coming weeks!

Updated Tutorials

We also overhauled our tutorials so if you haven’t checked them out lately I encourage you to do so. You can access them and get the download from the new HTML client page:

Explore LightSwitch Architecture & Hosting Options

imageWe also updated our architecture page. Go deeper and learn about the architecture of a LightSwitch application, including the new HTML5 client and SharePoint 2013.

Exploring LightSwitch Architecture

JavaScript Samples for LightSwitch Developers

We also released the first set of JavaScript snippets that you will find useful in your LightSwitch apps. Check out the sample and stay tuned for more!

LightSwitch JavaScript Coding Examples

Use the Q & A link on the sample to provide suggestions for additional code samples that you would like to see.

Lots more on the LightSwitch Team Blog!

The team has been releasing a TON of great content on the LightSwitch Team Blog so check it out! If you’re new to the LightSwitch HTML Client, I recommend starting with this series that shows you how to build a modern, touch-oriented sign in sheet application:

I also recommend these awesome articles to learn more:

More LightSwitch Team Community Sites

Also check out our Facebook page for more fun stuff! And please ask your questions in the LightSwitch forum, the team is there to help!

I’ve downloaded and installed Visual Studio 2012 but haven’t tried its new LightSwitch features.

The Visual Studio LightSwitch Team (@VSLightSwitch) posted Announcing the Release of the LightSwitch HTML Client! on 4/4/2013:

image_thumb6On behalf of the LightSwitch team, I am proud to announce the release of the LightSwitch HTML client in Visual Studio 2012 Update 2! It’s been an exciting journey for us and we thank you for all your valuable feedback that got us here.

LightSwitch in Visual Studio 2012 Update 2

(Note: if you are upgrading from Update 2 CTP4 then you can go ahead and install the final Update 2 release. Users of the LightSwitch Preview 2 please read these important upgrade instructions.)

What do you get?!

imageThe HTML5 and JavaScript-based client addresses the increasing need to build touch-oriented business applications that run well on modern mobile devices. LightSwitch HTML clients are built on standards-compliant HTML5 and JavaScript and provide modern, touch-first experiences on Windows RT, Windows Phone 8, iPhones and iPads with iOS 5/6, and Android 4.x devices.

imageWith the new SharePoint 2013 apps model, we’re also bringing the simplicity and ease of building custom business apps with LightSwitch into SharePoint / Office 365. Many enterprises today use SharePoint as a collaboration hub to better connect people, content, and processes.  Although you can still choose to host your apps on your own or in Azure, enabling SharePoint in your LightSwitch apps allows you to take advantage of the app lifecycle management, identity, and access control capabilities within SharePoint – not to mention the business data and processes already running in SharePoint in your enterprise.

Read more about the new LightSwitch features here.

But wait, there’s more!

With the release we have also created new resources for you to check out!

Developer Center – We’ve got a refreshed website with new HTML client resources. The Developer Center is your one-stop-shop for the best LightSwitch team samples, videos, articles and documentation. Start with the basics or go deeper and explore the architecture and hosting options of LightSwitch apps.

How Do I Videos – We also have started a new “How Do I...?” video series that shows you step-by-step how to get started building HTML clients with LightSwitch. Keep tabs on this page as we’re planning a lot more!

JavaScript Samples – We also have created some JavaScript snippets that show you how to achieve many common coding tasks on the HTML client. If you’re new to JavaScript, these snippets should help!

Ask questions in the General Forum – Thanks to everyone who gave feedback on the previews! Now that we have released, we will continue our conversations in the LightSwitch General forum. The team is listening there.

The team is extremely happy to have reached this important milestone and we’re looking forward to hearing about all the amazing applications you will be building!

Philip Fu posted [Sample Of Mar 30th] How to undo the changes in Entity Framework 4.1 and later to the Microsoft All-in-One Code Framework on 3/30/2013:

Sample Download :

CS Version:

VB Version:

image_thumbThis sample demonstrates how to undo the changes in Entity Framework.

When we make changes to the entities, we can use the SaveChanges method to update the entities in the database. But sometimes some of the changes are wrong, and we need to roll back these changes. In this sample, we demonstrate how to use ObjectContext and DbContext to undo the changes at different levels:

  1. Context Level;
  2. Entity Set Level;
  3. Entity Level;
  4. Property Level.
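The sample’s APIs are C#, but the core idea behind the entity- and property-level undo (snapshot the values as loaded, then copy the originals back) can be sketched in plain JavaScript. This is a toy change tracker, not the Entity Framework API; every name in it is made up:

```javascript
// Toy change tracker: remembers each entity's original values so
// edits can be rolled back without a database round trip.
function Tracker() {
    this.originals = new Map();
}
Tracker.prototype.attach = function (entity) {
    // Snapshot the state as loaded.
    this.originals.set(entity, Object.assign({}, entity));
};
Tracker.prototype.undoProperty = function (entity, prop) {
    // Property level: restore a single field.
    entity[prop] = this.originals.get(entity)[prop];
};
Tracker.prototype.undoEntity = function (entity) {
    // Entity level: restore every field of one entity.
    Object.assign(entity, this.originals.get(entity));
};
Tracker.prototype.undoAll = function () {
    // Context level: restore everything that was attached.
    this.originals.forEach(function (snapshot, entity) {
        Object.assign(entity, snapshot);
    });
};

var tracker = new Tracker();
var contact = { firstName: "Jane", lastName: "Doe" };
tracker.attach(contact);
contact.firstName = "Janet";
contact.lastName = "Smith";
tracker.undoProperty(contact, "firstName"); // firstName is "Jane" again
tracker.undoEntity(contact);                // lastName is "Doe" again
```

In EF itself this snapshot lives in the context’s change tracker, which is what lets the sample undo at four different granularities.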

imageYou can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard of Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

My (@rogerjenn) Uptime Report for my Live OakLeaf Systems Azure Table Services Sample Project: March 2013 = 100.00% of 4/5/2013 begins:

image_thumb_thumb_thumb_thumb_thumbMy (@rogerjenn) live OakLeaf Systems Azure Table Services Sample Project demo project runs two small Windows Azure Web role instances from Microsoft’s South Central US (San Antonio, TX) data center. This report now contains more than a full year of uptime data.

image_thumb17_thumbPingdom didn’t send its e-mail report on a timely basis, so here is the Report Overview from their dashboard:


imageHere’s the detailed uptime report from Pingdom for March 2013:


• David Linthicum (@DavidLinthicum) asserted “The pressure is on to deliver tangible benefits from cloud developments, though it's often unclear how to do so ” in a deck for his For cloud deployments, it's time to just do it -- ready or not article of 4/5/2013 for InfoWorld’s Cloud Computing blog:

imageIt's speaking season, so I'm at a conference each week through mid-May. As always, I'm looking for what's hot or trendy in cloud computing right now, and trends point to a new acronym: JDID (just do it, dummy).

The chatter is not about what cloud computing is or what new concepts vendors are trying to push. The theme now is how to get this stuff working in the enterprise and making money for the business. The C-level executives are moving past the studies and strategies, and they want real results for their money.

[ From Amazon Web Services to Windows Azure, see how the elite 8 public clouds compare in the InfoWorld Test Center's review. | For the latest news and happenings, subscribe to the Cloud Computing Report newsletter. ]

imageAs a result, those in IT charged with creating and implementing cloud computing strategies are almost in a panic. They are tasked with getting something running, no matter if it's a small private storage cloud, a few instances on Rackspace, or an application migrated to Azure. It's all about the doing, but in the JDID context, a few common issues are popping up and making it hard to deliver:

  • This is new stuff, so it's difficult to find people with experience. Enterprises are working their way through their first projects without the experience and talent typically required. That will result in lots of mistakes and a few failures.
  • The technology is showing its age -- meaning it's too young. For example, many organizations using OpenStack distributions are working through some of the limitations the standard imposes on OpenStack products, due to its early state of maturity.
  • The technology solutions are much more complex than we originally expected. Most private clouds are made up of four to six different technologies, covering usage monitoring, security, and management, so system integration and a good amount of testing are required.

This JDID trend is only beginning. Over the next few years, we'll see how well cloud computing actually meets the needs of the business. The fact is, it will follow the same patterns as the adoption of other platforms over the years, including the discovery that there is no magic. At the end of the day, it's just software.

Tim Anderson (@timanderson, pictured below) reported Microsoft’s growth areas: Azure, Server with Hyper-V, Office 365, Windows Phone:

imageMicrosoft has let slip a few figures in posts from PR VP Frank Shaw and platform evangelist Steve Guggenheimer.

Observers have tended to focus on Windows “Blue” and what is happening with Microsoft’s core client operating system, but what caught my eye was a few figures on progress in other areas.

  • Windows Azure compute usage doubled in six months
  • Windows Azure revenue growing 3X
  • Office 365 paid seats tripled year on year last quarter
  • Server 2012 Datacenter edition licenses grown 80%

image_thumb75_thumb6A notable feature of these figures is that they are relative, not absolute. Office 365 is a relatively new product, and Windows Azure (from what I can tell, since Microsoft did not release numbers) performed rather badly until its renaissance in early 2011 under Satya Nadella, Scott Guthrie and others (see here for more about this). It is easy to post big multiples if you are starting from a small base.

This is real progress though and my guess is that growth will continue to be strong. I base this not on Microsoft’s PR statements, but on my opinion of Office 365 and Windows Azure, both of which make a lot of sense for Microsoft-platforms organisations migrating to the cloud. …

Read more.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

image_thumb75_thumb7No significant articles today

<Return to section navigation list>

Cloud Security, Compliance and Governance

• Barb Darrow (@gigabarb) asserted You want to crunch top-secret data securely? CryptDB may be the app for that in a 4/5/2013 article for GigaOm’s Cloud blog:

imageThere are lots of applications for data crunching in the security-obsessed worlds of the defense, healthcare, and financial services industries. The problem is that these organizations have a hard time crunching all that data without potentially exposing it to prying eyes. Sure, it would be great to pump it all into Amazon Web Services and then run a ton of analytics, but that whole public cloud thing is problematic for these kinds of companies.

Dr. Sam Madden of MIT's CSAIL lab.

Dr. Sam Madden of MIT’s CSAIL lab.

CryptDB, a project out of MIT’s Computer Science and Artificial Intelligence Lab, (CSAIL) may be a solution for this problem. In theory, it would let you glean insights from your data without letting even your own personnel “see” that data at all, said Dr. Sam Madden, CSAIL director, on Friday.

“The goal is to run SQL on encrypted data, you don’t even allow your admin to decrypt any of that data and that’s important in cloud storage,” Madden said at an SAP-sponsored event at Hack/reduce in Cambridge, Mass.

He described the technology in broad strokes but it involves an unmodified MySQL or Postgres app on the front end that talks to a CryptDB query rewriter in the middle which in turn talks to a MySQL instance at the back end.

“Each column in the original schema is mapped to one or more onions in the encrypted schema,” he said.

According to CryptDB’s web page:

“It works by executing SQL queries over encrypted data using a collection of efficient SQL-aware encryption schemes. CryptDB can also chain encryption keys to user passwords, so that a data item can be decrypted only by using the password of one of the users with access to that data. As a result, a database administrator never gets access to decrypted data, and even if all servers are compromised, an adversary cannot decrypt the data of any user who is not logged in.”

The technology is being built by a team including Raluca Ada Popa, Catherine Redfield, Nickolai Zeldovich and Hari Balakrishnan.

CryptDB could also run in a private cloud, but there are still some big implementation questions. Asked how CryptDB would negotiate data transmission through firewalls, for example, Madden punted. “That’s not something we’re focusing on. The great thing about being an academic is we can ignore some problems,” he said.

Jane McCallion (@janemccallion) asserted “Microsoft cloud service now certified to transmit data securely” in a preface to her Windows Azure gets [UK] G-Cloud IL2 nod report of 4/4/2013 for the UK’s CloudPro blog:

imageWindows Azure, Microsoft’s platform-as-a-service (PaaS) offering, has been awarded the Impact Level 2 (IL2) accreditation by the Cabinet Office.

IL2 accreditation permits holders to transmit and store protected information, such as that generated by a public sector body.

image_thumb2It is the middle of three impact levels used on the Cabinet Office’s G-Cloud framework, which runs from IL1 (unclassified) to IL3 (restricted), although the scale is used in other settings and extends to IL6 (top secret).

Microsoft claims Windows Azure’s IL2 accreditation is “a key milestone” in supporting the UK Government’s aim of moving 50 per cent of new ICT services to the cloud by 2015.

imageNicola Hodson, general manager of public sector at Microsoft, said: “Following our recent Office 365 IL2 accreditation this is a further endorsement of Microsoft’s public cloud services.

“The Windows Azure platform also created more opportunities for Microsoft’s expanding SME Partner community. There are already 80 assured on the CloudStore.”

While Azure’s accreditation has indeed been welcomed by Microsoft partners Solidsoft and Dot Net Solutions, the Cabinet Office has come in for criticism in the past over the inclusion of large firms like Salesforce and Microsoft in the G-Cloud framework.

In August, managed services provider Attenda spoke out, claiming SMBs were being ignored in favour of tier one suppliers.

Surprise has also been expressed at comments made by Denise McDonagh, head of the G-Cloud initiative, at a recent Q&A session.

As well as saying the Cabinet Office was going to push for a cloud-first strategy, McDonagh also claimed G-Cloud had never been about bringing government contracts to SMBs, but was instead about “levelling the playing field” between them and their larger competitors.

<Return to section navigation list>

Cloud Computing Events

• Mary Jo Foley (@maryjofoley) asserted “Microsoft is honing its public-cloud pitch, appealing to enterprise users, just ahead of its annual management conference” in a summary of her Microsoft's latest pitch to business: Make Windows Azure 'your datacenter' article of 4/5/2013 for ZDNet’s All About Microsoft blog:

imageNext week is Microsoft's annual Microsoft Management Summit conference in Las Vegas. No, I won't be there (me and Las Vegas -- we're not friends). But I have been combing through the session list for the event, which runs from April 8 to April 12.

imageIn case you don't already know about MMS, this isn't a show for tech wimps. It's for IT managers who love things like System Center Configuration Manager Service Pack 1 and User State Migration Toolkit 5.0. But it's also a place where some of Microsoft's higher-level messaging around Windows Server, System Center and Windows Azure occasionally bubbles up.

Two of the sessions from the online MMS catalog piqued my interest because of their focus on getting enterprise users to see Windows Azure as “YOUR datacenter.”

WS-B331 Windows Azure and Active Directory
Speaker(s): David Tesar
Track(s): Windows Server & Azure Infrastructure
Session Type: Breakout Session
Product(s): Active Directory, Windows Azure
In this session you will learn how to plan, deploy and manage Active Directory within Windows Azure. Windows Azure is YOUR datacenter. Deploying Active Directory within your cloud is a key part of enabling LOB applications to work.

WS-B333 Windows Azure in the Enterprise
Speaker(s): Karri Alexion-Tiernan, Venkat Gattamneni
Track(s): Windows Server & Azure Infrastructure
Session Type: Breakout Session
Product(s): Windows Azure
In this session, you will discover how you can make Windows Azure YOUR datacenter. From compute and storage on demand, to messaging and identity services, come and see how you can power your enterprise today with Windows Azure.

I haven't heard Microsoft make this pitch in this way before. Sure, the company has been encouraging corporate customers to go the Azure route, by onboarding their existing apps using the still-in-preview Azure virtual machines and/or by writing new applications that take advantage of Windows Azure's platform-as-a-service capabilities. (This is in addition to encouraging developers of all stripes, including mobile developers, to write apps that connect to the Azure cloud.)

But telling enterprise customers to consider Windows Azure, which is hosted by Microsoft in its own datacenters, THEIR datacenter is new. (New to me, at least.)

A few years ago, Microsoft was moving toward providing its largest enterprise customers and partners with an Azure-in-a-box capability, via Azure appliances. This effort seems to have been tabled, best I can tell. Instead, Microsoft has been adding Azure features to Windows Server, enabling its hosting partners to turn their implementations of Windows Server into something that more closely resembles Windows Azure. There have been hints that Microsoft might allow large customers to deploy these same Azure features internally, but so far no announcement to that effect.

This doesn't mean that enterprise users, even those who are not sold wholescale on this public cloud thing, can't find some ways to use parts of Azure today, as Windows Azure General Manager Bill Hilf explained in a succinct but largely overlooked post from a week ago. [Emphasis added.]

Among the ways enterprise users can tap into Azure, according to Hilf:

  • Store, back up and recover their data in the cloud at a lower cost than using SAN technology, via Windows Azure Storage plus StorSimple, Windows Azure Online Backup and/or SQL Availability Groups. (Note: Microsoft execs are going to be talking about StorSimple -- the cloud-storage appliance technology Microsoft acquired last year -- at MMS.)
  • Tap into "the power of big data" by pairing Azure Websites with HDInsight, so as to mine data, generate business analytics and make adjustments
  • Integrate with on-premises Windows Servers; Linux servers; System Center; data services for SQL and NoSQL; .Net, Java, Node.js, Python. (However, speaking of integration, the Azure team hasn't said anything lately about what's going on with its various Azure networking services -- including Windows Azure Connect, a k a "Sydney" and Windows Azure Virtual Network, a k a "Brooklyn." But maybe those will exit preview in the coming weeks/months, along with the aforementioned Virtual Machines for Linux and Windows Server.)
  • Test services and apps quickly by using Windows Server/Linux Server virtual images (though this capability, as I've noted before, is still in preview right now).

One other MMS session that I found interesting: Windows RT in the Enterprise. Yes, Microsoft is still maintaining that Windows RT devices aren't just for consumers. And no, no one should expect to see Outlook RT debut there. It's coming. Not yet, though....

I like the term “wholescale” rather than “wholesale.” Nice addition to the lexicon, Mary Jo.

My (@rogerjenn) Sessions Related to Windows Azure at the Microsoft Management Summit 2013 post of 4/5/2013 begins as follows:

imageFollowing is a list of 17 sessions containing the keyword “Azure” being presented at this year’s Microsoft Management Summit in Las Vegas:

AM-B306 DevOps: Azure Monitoring & Authoring Updates for Operations Manager 2012 SP1

This talk will cover how to monitor Azure Applications and enhancements to System Center 2012 SP1 Authoring tools.

IM-IL202 Transform the Datacenter Immersion Part 2 of 4: Infrastructure Management

  • Track(s): Infrastructure Monitoring & Management
  • Session Type: Instructor-led Lab
  • Product(s): App Controller, Orchestrator, Virtual Machine Manager, Windows Azure

Management of the infrastructure layer of a datacenter has become very complex with the introduction of heterogeneous environments and clouds that sit on and off premises. Datacenter Administrators are tasked with ensuring that all resources are managed efficiently and that business units have the flexibility to deliver on their commitments. Windows Server 2012, System Center 2012 SP1, Windows Azure and SQL Server 2012 offer a new way to gain operational efficiencies and reduce costs through self-service and automation regardless of resources being on or off premises. In this lab, you will get hands-on with a pre-configured environment that takes you on a tour of key scenarios that reveal how Microsoft offers a consistent management experience across on premises, service provider and public cloud environments. Scenarios included: App Controller and Self-Service, Automated resource allocation, Virtual Machine Manager and dynamic resource allocation, Virtual Machine Manager and network multi-tenancy, Windows Azure Networking, and the Orchestrator and Runbook Designer.

AM-IL203 Transform the Datacenter Immersion Part 3 of 4: Application Management

  • Track(s): Application Management
  • Session Type: Instructor-led Lab
  • Product(s): App Controller, Operations Manager

While System Center continues to be a market leader for infrastructure monitoring, it has now deepened its ability to monitor applications both internally and “outside-in”. With dynamic reporting and monitoring of the application, your organization is now able to improve root-cause analysis processes while reducing the mean time to service restoration. In this lab, you will get hands-on with a pre-configured environment that takes you on a tour of key scenarios that reveal how Microsoft supports you in leveraging public cloud infrastructure such as Windows Azure to deploy tiers of applications, which are fully integrated into the management fabric. Scenarios included: Operations Manager (Global Service Monitoring) 360-degree application monitoring, Operations Manager (Application Performance Monitoring) accurately triaging application problems by monitoring front-end web applications, mid-tier web services and databases, Operations Manager and Dev/Ops remediation processes, App Controller and simplified deployment of a package, and Windows Azure and database replication. …

IM-IL204 Transform the Datacenter Immersion Part 4 of 4: Insight and Availability

  • Track(s): Infrastructure Monitoring & Management
  • Session Type: Instructor-led Lab
  • Product(s): Data Protection Manager, Operations Manager

It’s imperative that IT knows how datacenter resources are being consumed. Having a left-to-right view of current utilization provides a deep understanding of how the business functions, and allows IT to plan for demand while maintaining optimum performance and minimizing costs. Providing data insights through robust reporting on usage, costs and resources is a great way to drive high-quality service management and ensure a deep understanding of the consumers’ behavior and requirements. Microsoft now allows IT to perform as a service provider delivering cloud services to varying business units upon request and provide intelligent dashboards to help leadership teams make better decisions about how to scale datacenter capacity. In this lab, you will get hands-on with a pre-configured environment that takes you on a tour of key scenarios that reveal how Microsoft supports you in creating a consistent management experience whether resources span from on premises, to Windows Azure or to a Service Provider. Scenarios included: Operations Manager and uncovering system dependencies with distributed applications, Building reports with Operations Manager, Chargeback Reporting, Data Protection Manager and backup status monitoring, and Enhancements to Failover Clustering. …

The post ends with the two sessions reported by Mary Jo Foley in her article quoted above and the observation that list entries for six sessions were repeated.

My (@rogerjenn) Windows Azure Application Development Sessions at TechEd North America 2013 post of 4/3/2013 begins:

imageFollowing is a list of sessions about or related to Windows Azure at TechEd North America 2013 in chronological order. I’ll expand truncated abstracts as I have time available and add speaker names as Microsoft provides them.

No Date and Time Assigned

Build Your Microsoft SharePoint Server 2013 Lab in the Cloud with Windows Azure

Leverage Windows Azure Virtual Machines and Virtual Networks to build your SharePoint 2013 server lab in the cloud! During this hands-on lab, get a technical overview of Windows Azure virtual machines and virtual networks and then proceed to build a SharePoint 2013 cloud lab that you can leverage post-TechEd...

Lap Around Workflow Manager

Come learn about Workflow Manager! Workflow Manager provides the capability to host Microsoft .NET workflows in a high-scale, highly available environment, both on-premises using Windows Server and in the cloud using Windows Azure.

Great Ways of Hosting a Web Application in Windows Azure

Windows Azure has a range of somewhat overlapping options for hosting a web application, all with different pricing models and technological backgrounds. These also vary from the possibilities available in local datacenters. During this session we clarify these options, then present a range of real-world...

June 2, 2013 (Pre-Conference)

How to Be a Successful DBA in the Changing World of Cloud and On-Premise Data with Thomas LaRock, Grant Fritchey

In the world of Hybrid IT, with data residing on premise and in the cloud, Database Administrators play an important role in application architecture and design. In this pre-conference we cover the changing DBA tasks to support on-premise and cloud-based performance tuning and monitoring, as well as...

Extending Your Apps and Infrastructure Into the Cloud with David Aiken

You’re in charge of reconciling the on-premise datacenter with the cloud and you have to execute flawlessly. What should go in the cloud? What should stay on premise? How do you decide? Once you’ve made your decision, how do you actually make it happen? After all, enterprise apps are complex and often...

Day 1, June 3, 2013

Introduction to Windows Azure Active Directory

Windows Azure Active Directory provides easy-to-use, multi-tenant identity management services for applications running in the cloud and on any device and any platform. In this session, developers, administrators, and architects take an end-to-end tour of Windows Azure Active Directory to learn about...

Maximum Performance: Accelerate Your Cloud Services Applications with Windows Azure Caching

Abstract coming soon

An Introduction to the Web Workload on Windows Azure

Building, deploying, scaling, and managing resilient web applications has never been easy—there are many moving parts, lots of infrastructure and a number of different software components. Windows Azure aims to make this just really simple, with technologies such as Windows Azure Web Sites, SQL...

Designing Cloud Architectures

Come and learn best practices and common patterns you can apply to your Windows Azure Cloud Service solutions. In this session you're introduced to practical solutions to common tasks and scenarios you may face in your projects, including dealing with long-running tasks, failovers, federated security...

Build Your First Cloud App: An Introduction to Windows Azure Cloud Services

So you're a Microsoft .NET developer who is ready to see what cloud has to offer. Come take a peek at Windows Azure Cloud Services, the fastest and most productive way to capture the benefits of cloud without leaving .NET and Microsoft Visual Studio—and without calling your IT department! …

And continues with sessions scheduled for days one through four.

Denny Lee (@dennylee) reported Halo 4 and the Flood of Big Data at Big Data Camp / PASS BA Conference on 4/3/2013:

One of the great things about working with the folks at 343 Industries – the Halo developer – is that I get to claim that playing Halo 4 is part of my job – awesome!

But in the middle of realizing how awful a player I am, I was given the opportunity to work with some very creative and smart folks at 343 on their HDInsight on Azure project.  You can find more information and various learnings at:

Another great place to hear a little bit more about this is at Big Data Camp at the PASS BA Conference.

So loop on by during the lightning talk and/or come find me on Thursday when I get to co-present the sessions:

See you in Chicago!


<Return to section navigation list>

Other Cloud Computing Platforms and Services

•• Charles Arthur (@charlesarthur) asserted “PC market begins to slip and tablets will outsell desktops and laptops combined by 2015, as Android ascendancy means challenge to relevance of Microsoft, research group warns” in a deck for his Microsoft threatened as smartphones and tablets rise, Gartner warns article of 4/4/2013 for The Guardian:

Microsoft faces a slide into irrelevance in the next four years unless it can make progress in the smartphone and tablet markets, because the PC market will continue shrinking, warns the research group Gartner.

A huge and disruptive shift is underway, it says, in which more and more people will use a tablet as their main computing device.

That will also see shipments of Android devices dwarf those of Windows PCs and phones by 2017. Microsoft-powered device shipments will almost be at parity with those of Apple iPhones and iPads - the latter a situation not seen since the 1980s.

In a new forecast published on Thursday morning, Gartner says that by 2015 shipments of tablets will outstrip those of conventional PCs such as desktops and notebooks, as Android and Apple's iOS become increasingly dominant in the overall operating system picture. Android in particular will be installed on more than a billion devices shipped in 2014, says Carolina Milanesi, the analyst who led the research.

[Figure: Operating system shares forecast, 2012-2017. Source: Gartner, April 2013]

Meanwhile a new category of "ultramobile" devices - such as the Surface Pro and the lighter ultrabook laptops - will become increasingly important as people shift towards more mobile forms of computing.

For Microsoft, this poses an important inflexion point in its history, warns Milanesi. "Winning in the tablet and phone space is critical for them to remain relevant in this shift," she told the Guardian. "We're talking about hardware displacement here - but this shift also has wider implications for operating systems and apps. What happens, for instance, when [Microsoft] Office isn't the best way to be productive in your work?"

For Microsoft, income from Windows and Office licences is key to its revenues: per-PC Windows licences generate about 50% of its profits, and Office licences almost all the rest.

But while it dominates the PC market, it is a distant third in the smartphone and tablet markets. Latest figures suggest that Windows Phone, its smartphone OS, shipped on about 3% of devices in the fourth quarter of 2012, compared to 20% for Apple's iPhone and over 70% for Android - of which 50% connected to Google's servers and 20% were "white box" Android phones in China which do not use Google services.

"Android is going to get to volumes that are three times those of Windows," says Milanesi. "From a consumer perspective, the question becomes: what software do you want to have to get the widest reach on your devices? BlackBerry may say that its QNX software [used as the basis of BB10 on its new phones] can go into cars and phones, but Android is already in fridges. That's the challenge."

BlackBerry, which has just released the first of its BB10 devices, is forecast to see a slow decline in shipments through to 2017, shipping 24.1m devices then compared to 34.7m in 2012. That will leave it well behind Windows Phone in the forecast.

[Figure: Tablet shipments will overtake desktop and notebook shipments combined, while 'ultramobiles' will grow, says Gartner]

Milanesi added: "the interesting thing is that this shift in device preference is coming from a shift in user behaviour. Some people think that it's just like the shift when people moved from desktops to laptops [a process that began in the early 2000s]. But that's wrong. The laptop was more mobile than the desktop, but with the tablet and smartphone, there's a bigger embrace of the cloud for sharing and for access to content. It's also more biased towards consumption of content rather than production.

"All these things will get consumers to look for the OS and apps that can give them all that," Milanesi says.

A key problem for Microsoft is that it is the people who don't yet own PCs - in emerging markets such as Africa and China - who are most likely to have a smartphone and tablet as their first "computer". Milanesi says: "They're starting with a smartphone, not a PC, so when they're looking for something larger, they look at something that's a replacement smartphone experience - which is a tablet or ultramobile device. And Android or [Apple's] iOS are the two that they're looking at."

Microsoft could then face the vicious circle where developers considering which platform to develop apps for look at those with the largest user base - and that will not be Windows. By 2017, she says, the number of devices being shipped with iOS, both iPhones and iPads, will be close to that with Windows and Windows Phone combined.

"And that's not assuming that Apple launches a low-end iPhone," Milanesi says. "Our numbers for Apple are conservative, because for a low-end phone it would be a guess about what price point it would use, and what the timing would be." A number of observers have suggested that Apple will launch a lower-cost iPhone in the next year to capture a larger market share, especially in the pre-pay (pay-as-you-go) market. But Apple has given no indication of whether it will do that.

That’s why I’m focused on Android 4.1+ MiniPCs as tomorrow’s low-cost PC replacement for consumers, especially couch potatoes. Read more from the OakLeaf Systems blog:

•• Barb Darrow (@gigabarb) asserted “So who will be number two in public cloud after Amazon Web Services? Smart money is now on Google Compute Engine. With caveats, of course.” in a summary of her Amazon is the cloud to beat, but Google has the cloud to watch. Here’s why article of 4/2/2013 for GigaOm’s Cloud blog:

Amazon Web Services is by far the biggest and most experienced public cloud provider. Accepting that, the next question is: what cloud vendor can give AWS a run for its money? Increasingly, the money is on Google – at least in compute capacity, where Google Compute Engine is becoming a force to be reckoned with even though it launched (in beta, of course) only last June.

Scalr is clearly a big fan, but even if you don’t buy its rather impressive report card, there are other reasons that Google Compute Engine should be considered the biggest potential rival to AWS.

Google knows from scale

Even Google bashers will concede that the company understands massive scale. It has the data center firepower; it has the software tools to harness that power; and it has a deep engineering bench that includes several key hires from — you guessed it — AWS. A quick LinkedIn search shows some of these hires, but omits many. One of those is Sunil James, who worked on the AWS Virtual Private Cloud and Direct Connect and who now heads up networking services and technologies for the Google Cloud Platform.

Multi-cloud strategies demand a back-up cloud

As big and great as AWS is, most existing and potential business customers will not lock into a single cloud provider. They are still bruised from the current generation of vendor lock-in. On the other hand, they can’t afford to support too many. “You can only make so many bets, and it’s clear that Google is in this public cloud game to stay,” said one vendor exec who would not be named because his company does business with Amazon.

Companies that made early bets on GCE are Cloudscaling, the OpenStack player which said last fall it will support both the AWS and GCE APIs, and RightScale, a pioneer in cloud management and monitoring that signed up as GCE’s first reseller in February.

Google is serious about GCE

Let’s face it: Google does have a bit of a credibility problem for launching, then deep-sixing services. (Hello, er, goodbye Google Reader.) But no one can seriously doubt that GCE is a priority.

“This is no skunkworks. This is not some little company they acquired. There’s a big team on the engineering side and if you look at the data center footprint, the fiber, the tech expertise, the internal platform and tools, they are serious about this,” said the vendor exec.

Dan Belcher, co-founder of Stackdriver, a Boston startup, said the time is ripe for an AWS contender to surface. The industry, he said, appears to be waiting for someone — Google? Rackspace? Someone else? — to challenge AWS.

“Clearly, Google’s strategy is to differentiate on performance (overall, and consistency thereof),” he said via email. “Our first test suggests that they are delivering on that promise … so far,” he noted. He also pointed out that GCE’s admin console UI needs work and that, less than a year in, there are limited services and features compared to AWS. A new Stackdriver blog details its first impressions of GCE.

The big question is how performance will hold up when the service actually leaves beta and opens up to the real world. There are reportedly tens of thousands of users queued up and ready to jump in when that happens. “Sure it feels fast with my six instances in limited preview. How will it feel when I am sharing with the rest of the world? And what has Google done to limit the host, network and API contention that plague large AWS customers?” Belcher asked.

Lack of legacy baggage helps GCE

Microsoft Windows Azure is paying the price now for Microsoft’s huge installed base of Windows and .NET legacy applications. While it’s done a good job incorporating support for open-source technologies under the Azure umbrella, that support is not on par with Windows, at least when you ask developers outside the .NET world. Microsoft remains “weighted down by its Windows and Office mentality,” said one vendor who weighed supporting Azure but decided against it. “There are aspects of Azure that are technically superior, but then their APIs are atrocious,” he said.

On the other hand …

Skeptics will always wonder if Google’s heart is in anything other than internet search and advertising. And Google, like AWS, is not particularly known for working well with others in the partner community.

The other issue is that while Google Apps has gained traction in business accounts — largely because it’s so much cheaper than Microsoft Office — one long-time Google watcher wonders if it will ever “get its enterprise act together.” In his view, Google Enterprise Search appliance never got traction, so Google has to prove its credibility outside internet search.

Going forward, Google will also have to offer a more comprehensive menu of services. And, most importantly, it will have to bring more enterprise workloads on board so all of those companies looking for an AWS backup (or alternative) can really put GCE through its paces.

We will be talking about public and private cloud adoption, gating factors to that adoption, and other hot-button topics at GigaOM Structure in San Francisco in June.

It appears to me that GCE is a pure-play VM offering without the additional features and accouterments provided by AWS and Windows Azure. If all you need is compute and storage, GCE might fill the bill. Otherwise, I’d stick with the big two: AWS and Windows Azure.

Jeff Barr (@jeffbarr) announced Prices Reduced for Windows On-Demand EC2 Instances on 4/4/2013:

The AWS team has been working hard to build powerful and exciting new features for Windows on AWS. In the last month we have released support for SQL Server AlwaysOn Availability Groups, a beta of the AWS Diagnostics for Microsoft Windows Server, and new drivers for our virtual instances that improve performance and increase the supported number of volumes.

I'm happy to announce a price reduction of up to 26% on Windows On-Demand instances. This price drop continues the AWS tradition of exploring ways to reduce costs and passing the savings along to you. This reduction applies to the Standard (m1), Second-Generation Standard (m3), High-Memory (m2), and High-CPU (c1) instance families. All prices are effective from April 1, 2013. The size of the reduction varies by instance family and region. You can visit the AWS Windows page for more information about Windows pricing on AWS.

Members of the AWS team will be attending and staffing our booth at the Microsoft Management Summit in Las Vegas. If you want to learn more about AWS and how to build, deploy, and monitor your Microsoft Windows Server instances, be sure to stop by booth #733. The team is also hosting an invitation-only customer session. If you are attending the conference and would like to receive an invitation, simply complete this survey!

Waiting for the other shoe to drop: Matching prices from the Windows Azure Team.

Jeff Barr (@jeffbarr) reported AWS Expansion in Oregon - Amazon Redshift and Amazon EC2 High Storage Instances on 4/2/2013:

You can now launch Amazon Redshift clusters and EC2 High Storage instances in the US West (Oregon) Region.

Amazon Redshift
Amazon Redshift is a fully managed data warehouse service that lets you create, run, and manage a petabyte-scale data warehouse with ease. You can easily scale up by adding additional nodes (giving you more storage and more processing power), and you pay a low hourly fee for each node (Reserved Nodes are also available and bring Amazon Redshift's price to under $1,000 per terabyte per year, less than 1/10th the cost of most traditional data warehouse systems).
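To put that reserved-node figure in perspective, here is a back-of-the-envelope sketch. The $1,000-per-terabyte-per-year number and the "1/10th the cost" comparison come from the post above; the 1 PB warehouse size and the $10,000/TB/year traditional-warehouse figure are illustrative assumptions only:

```python
# Back-of-the-envelope warehouse cost comparison based on the figures
# quoted in the post. Real Redshift pricing varies by node type, region,
# and reservation term; these numbers are illustrative only.

PRICE_PER_TB_YEAR_REDSHIFT = 1_000      # USD, reserved-node figure from the post
PRICE_PER_TB_YEAR_TRADITIONAL = 10_000  # USD, implied by "1/10th the cost"

warehouse_tb = 1024                     # a 1 PB warehouse, in terabytes

redshift_annual = warehouse_tb * PRICE_PER_TB_YEAR_REDSHIFT
traditional_annual = warehouse_tb * PRICE_PER_TB_YEAR_TRADITIONAL

print(f"Redshift (reserved nodes): ${redshift_annual:,}/year")
print(f"Traditional warehouse:     ${traditional_annual:,}/year")
print(f"Savings factor:            {traditional_annual / redshift_annual:.0f}x")
```

At these assumed rates a petabyte-scale warehouse lands near $1M/year on reserved nodes versus roughly $10M/year for a traditional system, consistent with the 10x claim.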

Seattle-based Redfin is ready to use Amazon Redshift in US West (Oregon). Data Scientist Justin Yan told us that:

We took Amazon Redshift for a test run the moment it was released. It's fast. It's easy. Did I mention it's ridiculously fast? We've been waiting for a suitable data warehouse at big data scale, and ladies and gentlemen, it's here. We'll be using it immediately to provide our analysts an alternative to Hadoop. I doubt any of them will want to go back.

Here's a video that will introduce you to Redshift:

If you're interested in helping to build and grow Amazon Redshift, we're hiring in Seattle and Palo Alto – drop us a line! Here are some of our open positions:

High Storage Instances
We are also launching the High Storage Eight Extra Large (hs1.8xlarge) instances in the Region. Each of these instances includes 117 GiB of RAM, 16 virtual cores (providing 35 ECU of compute performance), and 48 TB of instance storage across 24 hard disk drives. You can get up to 2.4 GB per second of I/O performance from these drives, making them ideal for data-intensive applications that require high storage density and high sequential I/O.
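The quoted figures allow a quick sanity check on how long a full sequential scan of an hs1.8xlarge's local storage would take. This is a rough sketch only: decimal units (1 TB = 10^12 bytes) are assumed, and 2.4 GB/s is the advertised peak, which real workloads will not sustain continuously:

```python
# Rough sequential-scan estimate for an hs1.8xlarge instance, using the
# figures quoted above: 48 TB of instance storage and up to 2.4 GB/s of
# aggregate sequential I/O across 24 drives. Decimal units assumed.

storage_bytes = 48 * 10**12   # 48 TB of local instance storage
throughput = 2.4 * 10**9      # 2.4 GB/s aggregate sequential read

seconds = storage_bytes / throughput
hours = seconds / 3600
print(f"Full sequential scan: {seconds:,.0f} s (~{hours:.1f} hours)")
```

A full pass over all 48 TB at peak throughput works out to about 20,000 seconds, or roughly five and a half hours, which is why the instance type suits batch-oriented, high-density sequential workloads like the analytics use case described next.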

Localytics of Boston has moved their primary analytics database to the hs1.8xlarge instance type, replacing an array of a dozen RAID 0 volumes. The large storage capacity coupled with the increased performance (especially for sequential reads and writes) makes this instance type an ideal host for their application. According to Mohit Dilawari of Localytics, "We are extremely happy with these instances. Our site is substantially faster after the recent migration, yet our instance cost has decreased."

<Return to section navigation list>