Tuesday, November 13, 2012

Windows Azure and Cloud Computing Posts for 11/9/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


‡ New: Scott Guthrie’s deployment of an Office App to Windows Azure in Monday’s keynote (scroll down), Windows Azure Tiered (Paid) Support, Windows Azure certified as one of the Top 500 of the world’s largest supercomputers, and LightSwitch Tutorial for SharePoint Apps.

Hot: Reuven Cohen (@ruv) reported The Battle For The Cloud: Amazon Proposes ‘Closed’ Top-Level .CLOUD Domain in an 11/6/2012 article for Forbes.com in the Other Cloud Computing Platforms and Services section below (updated 11/11/2012).

Editor’s note: I finally caught up with articles missed during Alix’s and my vacation at the Ahwahnee Hotel’s Yosemite Vintners’ Holiday Event, Session 1, 11/4 through 11/7/2012. Check out my photos of the trip on SkyDrive.

•• Updated 11/12/2012 with new articles marked ••.
• Updated 11/11/2012 with new articles marked •.

Tip: Copy a bullet or dagger, press Ctrl+F, paste it into the Find box, and click Next to locate updated articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table, Queue, Hadoop and Media Services

•• Tomica Kaniski (@tkaniski) described Configuring Online Backup for Windows Server 2012 in an 11/12/2012 post:

Windows Server 2012 includes many new and improved features, and some of them often go unnoticed. The objective of this article is to explain how to set up and use one of them: Online Backup.

The Online Backup feature in Windows Server 2012 gives you the option to store some of your backups in the cloud (Windows Azure). The current offering includes a free preview of this service for customers of Windows Server 2012, Windows Server 2012 Essentials, and System Center 2012 for a period of six months, allowing you to store up to 300 GB of backups per account.

This feature is available as an additional download: a software agent that needs to be installed on any server that will use the Online Backup feature. The agent provides the connection to the Windows Azure Online Backup service.

Installing, configuring, and using the Online Backup feature is not very complicated. Basically, you need to install the Online Backup agent on the server you want to back up, register the server in the online service, create a schedule selecting what will be backed up (and when), and then wait for the backup to occur.

In more detail, here are the steps you need to take to make the Windows Online Backup work on a plain Windows Server 2012 installation:

1. Enable the Windows Server Backup feature
The Windows Server Backup feature is part of the Windows Server installation but is disabled by default, so the first step is to enable it. You can do so in the new Server Manager interface: select Manage, then Add Roles and Features, select the Windows Server Backup feature, and finish the wizard.
You can also enable this feature with the DISM command-line tool: open an elevated Command Prompt and run dism /online /Enable-Feature:WindowsServerBackup.

2. Register for Windows Azure Online Backup

The next step is to set up an account for Azure Online Backup at http://www.windowsazure.com/en-us/home/features/online-backup/. The registration process is simple, and at the end of it you will get account information that is used in the steps that follow.

3. Download the backup agent from Management Portal (Dashboard)

Sign in to the Windows Azure Online Backup dashboard at https://portal.onlinebackup.microsoft.com/en-US/Dashboard using the account information from the previous step. In the Overview section of your dashboard, click the Download and Install button to get the backup agent setup file (the download is about 14 MB).

4. Install the agent

Installation of the downloaded agent is pretty straightforward. The prerequisites are Windows PowerShell (installed by default on Windows Server 2012) and the Microsoft Online Services Sign-in Assistant, which is installed automatically during setup. After finishing the installation wizard, open the Windows Server Backup console and verify that an Online Backup entry is visible on the left-hand side. If it is, the agent installation was successful and you can proceed to the next step. Leave the console open.

5. Register the server in Windows Azure Online Backup service

If you select the Online Backup entry in the Windows Server Backup console, you will get additional options on the right-hand side. Select the Register Server option to start the wizard for configuring online backup.

If you use a proxy to connect to the Internet, enter the required information about it. Next, you will configure a passphrase for encrypting the backups in the online service. The passphrase can be generated automatically by clicking the Generate Passphrase button, or you can enter one yourself (keep in mind that it must be at least 16 characters long).
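If you prefer to enter your own passphrase rather than use the wizard's generator, anything random of 16 or more characters will do. Here is a minimal Python sketch (an illustration only; the helper name and the 32-character default are my choices, not part of the backup agent):

```python
import secrets
import string

MIN_PASSPHRASE_LEN = 16  # minimum length the Register Server wizard enforces

def generate_passphrase(length=32):
    """Generate a random passphrase; 32 characters comfortably
    exceeds the wizard's 16-character minimum."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

p = generate_passphrase()
print(len(p))  # 32
```

Whatever you use to generate it, store the passphrase safely: it is needed to decrypt your backups during a restore.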

After that, you just need to enter your credentials to access Windows Azure Online Backup (from step 2), and the server will be added to your online account.

6. Create backup schedule

Now you can create the backup schedule by using the Schedule Backup option on the right-hand side of the Windows Server Backup console. The wizard for creating the backup schedule is similar to the one used to create a local schedule. Basically, you select what will be backed up (you cannot select System State Backup, only files and folders; for System State Backup you still need to use the Local Backup option), when you want the backup to occur (one or more days of the week, up to three times per day), and the retention period (the available options are 7, 15, and 30 days).
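The wizard's constraints described above can be captured in a few lines. This Python sketch (hypothetical helper names, not part of any Microsoft tooling) simply validates a proposed schedule against them:

```python
VALID_RETENTION_DAYS = {7, 15, 30}  # retention options offered by the wizard
MAX_TIMES_PER_DAY = 3

def validate_schedule(weekdays, times_per_day, retention_days):
    """Check an online-backup schedule against the wizard's rules:
    at least one weekday, at most three backups per day, and one of
    the offered retention periods."""
    if not weekdays:
        return False
    if not 1 <= times_per_day <= MAX_TIMES_PER_DAY:
        return False
    return retention_days in VALID_RETENTION_DAYS

print(validate_schedule(["Mon", "Thu"], 2, 15))  # True
print(validate_schedule(["Mon"], 4, 15))         # False
```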

7. Run backup now (or wait for scheduled backup to occur)

The final step is to select the Back Up Now option or wait for a scheduled backup to occur. Keep in mind that the backup will consume some of your network (and Internet) bandwidth, so run it outside of business hours if possible.

After this final step, your Windows Server 2012 will be backing up “to the cloud”. Restoring files and folders is also simple, using the Recover Data wizard in the same (Windows Server Backup) console.

One “cool” setting worth mentioning is bandwidth throttling, which lets you control the amount of bandwidth the backup uses during work and non-work hours. This setting gives you the flexibility to run backups during work hours as well, which is really nice. The throttling settings are located under the Change Properties option, which becomes visible once you register your server and set up the backup schedule.

•• M Sheik Uduman Ali (@udooz) described WAS StartCopyFromBlob operation and Transaction Compensation in an 11/12/2012 post:

The latest Windows Azure SDKs, v1.7.1 and 1.8, have a nice feature called “StartCopyFromBlob” that lets us instruct the Windows Azure data center to perform a cross-storage-account blob copy. Prior to this, we had to download chunks of the blob content and then upload them into the destination storage account, so StartCopyFromBlob is more efficient in terms of both cost and time.

The notable difference in version 2012-02-12 is that the copy operation is now asynchronous. Once you make a copy request, the Windows Azure Storage service returns a copy ID (a GUID string), a copy state, and HTTP status code 202 (Accepted), meaning that your request has been scheduled. If you check the copy state immediately after this call, it is most probably “pending”.

StartCopyFromBlob – A TxnCompensation operation

Extra care is required when using this API, since this is one of the real-world transaction-compensation service operations. After making the copy request, you need to verify the actual status of the copy operation at a later point in time, which could vary from a few seconds to two weeks depending on constraints such as source blob size, permissions, and connectivity.

The figure below shows a typical sequence of StartCopyFromBlob operation invocation.

The CloudBlockBlob and CloudPageBlob classes in Windows Azure Storage SDK v1.8 provide the StartCopyFromBlob() method, which in turn calls the WAS REST service operation (http://msdn.microsoft.com/en-us/library/windowsazure/dd894037.aspx). According to the Windows Azure Storage Team blog post (http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/12/introducing-asynchronous-cross-account-copy-blob.aspx), the request is placed on an internal queue, and the call returns a copy ID and a copy state. The copy ID is a unique ID for the copy operation; it can be used later to verify the destination blob’s copy ID and to abort the copy operation. CopyState gives you the copy operation’s status, the number of bytes copied, and so on.

Note that sequence 3 “PushCopyBlobMessage” in the above figure is my assumption about the operation.

ListBlobs – Way for Compensation

Although the copy ID is in your hand, there is no simple API that accepts an array of copy IDs and returns the corresponding copy states. Instead, you have to call CloudBlobContainer‘s ListBlobs() or GetXXXBlobReference() to get the copy state. If a blob was created by a copy operation, it will have a CopyState.

CopyState may be null for blobs that were not created by a copy operation.

The compensation action here is deciding what to do when a blob copy operation has neither succeeded nor remained in the pending state. In most cases, another call to StartCopyFromBlob() will end with a successful blob copy; otherwise, a further remedy should be taken.
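To make the compensation pattern concrete, here is a small Python simulation (ensure_copied, start_copy, and fetch_state are hypothetical stand-ins for application code wrapping StartCopyFromBlob() and the ListBlobs()-based copy-state check; this sketches the pattern, not the SDK):

```python
def ensure_copied(start_copy, fetch_state, max_retries=2):
    """Start a copy, then compensate: if the copy ends up neither
    'success' nor 'pending', re-issue the copy a bounded number of times."""
    start_copy()
    for _ in range(max_retries + 1):
        state = fetch_state()
        if state == "success":
            return "success"
        if state == "pending":
            continue  # still scheduled; in real code, wait before rechecking
        start_copy()  # failed or aborted: compensate by re-issuing the copy
    return fetch_state()

# Simulated service: the first attempt fails, the retried copy succeeds.
states = iter(["failed", "pending", "success", "success"])
result = ensure_copied(start_copy=lambda: None, fetch_state=lambda: next(states))
print(result)  # success
```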

Final Words

It is very pleasurable to use StartCopyFromBlob(). It would be even more pleasurable if the SDK or REST version provided simple operations like the following:

  • GetCopyState(string[] copyIDs) : CopyState[]
  • RetryCopyFromBlob(string failedCopyId) : void

•• Jim O’Neil (@jimoneil) continued his Channel9 video series with Practical Azure #2: What About Blob? on 11/12/2012:

As I kick off coverage of the Windows Azure platform in earnest, join me on Channel 9 for this episode focusing on the use of blob storage in Windows Azure.

Download: MP3; MP4 (iPod, Zune HD); High Quality MP4 (iPad, PC); Mid Quality MP4 (WP7, HTML5); High Quality WMV (PC, Xbox, MCE)

And here are some of the additional reference links covered during the presentation:

•• Mark Kromer (@mssqldude) posted Big Data with SQL Server, part 2: Sqoop in an 11/11/2012 post:

I started off my series on Hadoop on Windows, featuring the new Windows distribution of Hadoop known as Microsoft HDInsight, by talking about installing the local version of Hadoop on Windows. There is also a public cloud version of Hadoop on Azure: http://www.hadooponazure.com.

Here in part 2, I’ll focus on moving data between SQL Server and HDFS using Sqoop.

In this demo, I’m going to move data between HDFS and a very simple sample SQL Server 2012 database that I’ve created, called “sqoop_test”, with a single table called “customers”. The table is very simple for this demo: just a customer ID and a customer name. I’m going to show you how the Microsoft and Hortonworks Hadoop distribution for Windows (HDInsight) includes Sqoop for moving data between SQL Server and Hadoop.

You can also move data between HDFS and SQL Server with the Linux distributions of Hadoop and Sqoop by using the Microsoft Sqoop adapter available for download here.

First, I’ll start with moving data from SQL Server to Hadoop. When you run this command, you will “import” data into Hadoop from SQL Server. Presumably, this would provide a way for you to perform distributed processing and analysis of your data via MapReduce once you’ve copied the data to HDFS:

> sqoop import --connect jdbc:sqlserver://localhost --username sqoop --password password --table customers -m 1

I have one record in my customers table; the import command places it into my Hadoop cluster, where I can view the data in a text file, which is what most things in Hadoop resolve to:

> hadoop fs -cat /user/mark/customers/part-m-00000

> 5,Bob Smith

My SQL Server table has one row, so that row was imported into HDFS.

The more common action would likely be moving data into SQL Server from Hadoop; to do this, I will export from HDFS to SQL Server. I have a database schema for my data in Hadoop that I created with Hive, which defines a table called Employees. I’m going to transform those into Customer records in my SQL Server schema with Sqoop:

> sqoop export --connect jdbc:sqlserver://localhost --username sqoop --password password -m 1 --table customers --export-dir /user/mark/data/employees3

12/11/11 22:19:24 INFO mapreduce.ExportJobBase: Transferred 201 bytes in 32.6364 seconds (6.1588 bytes/sec)
12/11/11 22:19:24 INFO mapreduce.ExportJobBase: Exported 4 records.

Those MapReduce jobs extract my data from HDFS and send it to SQL Server, so when I now query my SQL Server customers table, I have my original Bob Smith record plus the 4 new records transferred from Hadoop.
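Since the import and export invocations above share most of their arguments, a tiny helper makes the symmetry explicit. This Python sketch (a convenience of mine, not part of Sqoop or HDInsight) just assembles the command line:

```python
def sqoop_command(direction, server, username, password, table,
                  mappers=1, export_dir=None):
    """Assemble a sqoop import/export command line like those shown above."""
    parts = ["sqoop", direction,
             "--connect", "jdbc:sqlserver://" + server,
             "--username", username, "--password", password,
             "--table", table, "-m", str(mappers)]
    if direction == "export" and export_dir:
        parts += ["--export-dir", export_dir]  # HDFS directory to export
    return " ".join(parts)

print(sqoop_command("import", "localhost", "sqoop", "password", "customers"))
```

Running this prints the same import command used earlier; passing direction="export" with an export_dir reproduces the export invocation.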


<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

Herve Roggero (@hroggero) described how to Backup [and Restore] SQL Database Federation[s] in an 11/8/2012 post:

One of the amazing features of Windows Azure SQL Database is the ability to create federations in order to scale your cloud databases. Until now, however, there were very few options available for backing up federated databases. In this post I will show you how Enzo Cloud Backup can help you back up and restore your federated databases easily. You can restore federated databases in SQL Database, or even on SQL Server (as regular databases).

Generally speaking, you will need to perform the following steps to back up and restore the federations of a SQL Database:

  1. Backup the federation root
  2. Backup the federation members
  3. Restore the federation root
  4. Restore the federation members

These actions can be automated using: the built-in scheduler of Enzo Cloud Backup, the command-line utilities, or the .NET Cloud Backup API provided, giving you complete control on how you want to perform your backup and restore operations.

Backing up federations

Let’s look at the tool for backing up federations. You can explore your existing federations using the Enzo Cloud Backup application, as shown below. As you can see, the federation root and the various federations are shown in separate tabs for convenience. You first need to back up the federation root (unless you intend to restore a federation member to a local SQL Server database and don’t need what’s in the federation root). The steps are similar to those for backing up a federation member, so let’s proceed to backing up a federation member.

You can view a specific federation member’s database details by clicking the tab that contains it. You can see the size currently consumed and a summary of its content at the bottom of the screen.


If you right-click a specific range, you can choose to back up the federation member. This brings up a window with the details of the federation member already filled out for you, including the member value used to select it. Notice that the list of federations includes “Federation Root”, which is what you select to back up the federation root (you can also do that directly from the root database tab). Once you provide at least one backup destination, you can begin the backup operation. From this window, you can also schedule the operation as a job and perform it entirely in the cloud. You can also “filter” the connection so that only the specified member value is backed up (this backs up all the global tables, but only the records whose distribution value is the one specified). Repeat this operation for every federation member in your federation.


Restoring Federations

Once backed up, you can restore your federations easily. Select the backup device using the tool, then select Restore. The following window will appear. From here you can create a new root database. You can also view the backup properties, showing you exactly which federations will be created.


Under the Federations tab, you can select how the federations will be created. I chose to recreate the federations and let the tool perform all the SPLIT operations necessary to recreate the same number of federation members. Other options include creating only the first federation member, or not creating the federation members at all.


Once the root database has been restored and the federation members have been created, you can restore the federation members you previously backed up. The screen below shows you how to restore a backup of a federation member into a specific federation member (the details of the federation member are provided to make it easier to identify).



This post gave you an overview of how to back up and restore federation roots and federation members. The backup operations can be set up once, then scheduled daily.

Jim O’Neil (@jimoneil) completed his series with Windows 8 Notifications: Push Notifications via Windows Azure Web Sites (Part 3) on 11/8/2012:

It’s finally time to get to the “Azure” piece of this three-parter! Those of you who haven't read Part 1 and Part 2 may want to at least peruse those posts for context. Those who have know that I created a Windows Store application (Boys of Summer) that features push notifications to inform its end users of news related to their favorite major league teams. The Windows Store application works in concert with a cloud service (hosted as a Windows Azure Web Site) to support the notification workflow, and this post tackles that last piece of the puzzle, as highlighted below.

Push notification workflow, highlighting Cloud Service

There are two main components to this cloud service: an ASP.NET MVC site that heavily leverages the Web API, and backing storage in the form of a MySQL database. The deployment mechanism is a Windows Azure Web Site, with MySQL selected (over Windows Azure SQL Database) because a 20 MB MySQL instance is included free of charge in the already free-of-charge entry-level offering of Web Sites. I don’t anticipate my application needing more horsepower than that free tier of Windows Azure Web Sites includes, but it’s nice to know I can step up to the shared or reserved model should Boys of Summer become wildly successful!

What follows is a rather lengthy but comprehensive post on how I set this service up inside of Windows Azure. I’ve split it into seven sections so you can navigate to the parts of specific interest to you:

    Step 1. Get your Windows Azure account
    Step 2. Create a new Windows Azure Web Site
    Step 3. Set up the MySQL database
    Step 4. Build the APIs for the cloud service
    Step 5. Implement the ASP.NET MVC page that issues the notifications
    Step 6. Implement the interface with the Windows Notification Service (WNS)
    Step 7. Deploy the completed solution to Windows Azure
Step 1: Get Your Windows Azure Account

There are a number of ways to get a Windows Azure account, and one of the easiest for kicking the tires is the 90-day Free Trial. If you’re an MSDN subscriber, you already have a monthly allotment of hours as well; all you need to do is activate the benefit.

Likewise, WebsiteSpark and BizSpark participants get monthly benefits by virtue of the MSDN subscriptions that come with those programs. Of course, you can also opt for one of the paid plans as well or upgrade after trying out one of the free tiers.

Step 2: Create a New Windows Azure Web Site

New option for creating Windows Azure assets

Once you’ve logged into the Windows Azure portal with your Microsoft account, you’ll be able to provision any of the services available via the NEW option at the lower left.

For Boys of Summer, I needed a Web Site along with a database.

Creating a new Web Site with Database

If you don’t see Windows Azure Web Sites listed on the left sidebar, or it’s disabled, you’ll need to visit your account page and select the preview features menu option to enable Web Sites (and other features that are not yet fully released). The activation happens in a matter of minutes, if not faster.

To create the Web Site, only a few bits of information need be entered on the two screens that appear next:

New Windows Azure Web Site properties

  • the URL for the Web Site, which must be unique (in the azurewebsites.net domain). This URL will be the target of the RESTful service calls made from my Windows Store application.
  • the location of the data center that will host the service.
  • what type of database to create (MySQL here, but SQL Database is also an option).
  • a database connection string name.
  • the name of the database server.
  • the data center location hosting the MySQL database, which should be the same data center housing the Web Site itself; otherwise, you’ll incur unnecessary latency as well as potential bandwidth cost penalties.

The last checkbox on the page confirms agreement with ClearDB’s legal and privacy policy as they are managing the MySQL database instances.

Once the necessary details have been entered, it takes only a minute or two to provision the site and the database, during which the status is reflected in the portal. When the site is available, it can be selected from the list of Web Sites in the Azure portal to access the ‘getting started’ screen below:

New Web Site 'getting started' page

The site (with default content) is actually live at this point, which you can confirm by pressing the BROWSE button on the menu bar at the bottom of the screen.

To get the information I need to develop and deploy the service, I next visit the DASHBOARD option, which provides access to a number of configuration settings. I specifically need the first two listed in the quick glance sidebar on the right of the dashboard page:

  1. I’ll use the MySQL connection string to create the tables needed to store the notification URIs for the various clients, and
  2. I’ll save the publish profile (as a local file) to later import into Visual Studio 2012, which will allow me to deploy my ASP.NET application directly to Windows Azure. Be aware that this file contains sensitive information enabling deployment to the cloud, so treat the publish settings file judiciously.

Key information for building the Web API service

At this point, I won’t even need to revisit the Windows Azure portal, but of course there are a number of great monitoring and scaling features I may want to consult there once my site is up and running.

Step 3. Set up the MySQL Database

In the Web API service implementation, I’m using Entity Framework (EF) Code First along with a basic Repository pattern on a VERY simple data model that abstracts the two tables with the column definitions shown below.


I used the open source MySQL Workbench (with the connection string data from the Windows Azure portal) to create the tables and populate data into the teams table. There’s also a very simple stored procedure that is invoked by one of the service methods that I’ll discuss a bit later in this post:

CREATE PROCEDURE updatechanneluri
  (IN olduri VARCHAR(2048), IN newuri VARCHAR(1024))
BEGIN
  UPDATE registrations SET Uri = newuri WHERE Uri = olduri;
END

Since I opted for MySQL, I needed to install a provider, and specifically I installed Connector/Net 6.6.4, which supports Entity Framework and Visual Studio 2012. As of this writing, the version of the Connector/Net available via NuGet did not.

Step 4. Build the APIs for the Cloud Service

There are numerous options for building a cloud service in Windows Azure Web Sites – .NET, PHP, Node.js – and a number of tools as well like Visual Studio and Web Matrix. I opted to use ASP.NET MVC and the Web API in .NET 4.5 within Visual Studio 2012 (as you can see below).

Creating an ASP.NET Web API site

If you’re new to the ASP.NET MVC Web API, I highly recommend checking out Jon Galloway’s short screencast ASP.NET Web API, Part 1: Your First Web API to give you an overview of the project components and overall architectural approach.

My service includes three controllers: the default Home controller, a Registrations controller, and a Teams controller. The Home controller provides a default web page for sending out the toast notifications; I’ll look at that one in Step 5.

The other two controllers specifically extend ApiController and are used to implement various RESTful service calls. Not surprisingly, each corresponds directly to one of the two Entity Framework data model classes (and by extension, the MySQL tables). These controllers more-or-less carry out CRUD operations on those two tables.

Step 4.1 Coding TeamsController

The application doesn’t modify any of the team information, so there are only two read operations needed:

public Team Get(String id) 

retrieves information about a given team given its id; the Team class is one of the two entity classes in my Entity Framework model.
This method is invoked via an HTTP GET request matching a route of http://boysofsummer.azurewebsites.net/api/teams/{id}, where id is the short name for the team, like redsox or orioles.

public HttpResponseMessage GetLogo(String id, String fmt)

retrieves the raw bytes of the image file for the team logo. Returning an HttpResponseMessage value (versus the actual data) allows for setting response headers (like Content-type to “image/png”). The implementation of this method ultimately reaches in to the teams table in MySQL to get the logo and then writes the raw bytes of that logo as StreamContent in the returned HttpResponseMessage. If a team has no logo file, an HTTP status code of 404 is returned.

This method responds to a HTTP GET request matching the route http://boysofsummer.azurewebsites.net/api/logos/{id}/png, where id is again the short name for a given team.

Why didn't I use a MediaTypeFormatter? If you've worked with the Web API you know it leverages the concept of content negotiation to return different representations of the same resource by using the Accept: header in the request to determine the content type that should be sent in reply. For instance, the resource http://boysofsummer.azurewebsites.net/api/teams/redsox certainly refers to the Red Sox, but does it return the information about that team as XML? as JSON? or something else?

The approach I'd hoped for was to create a MediaTypeFormatter, which works in conjunction with content negotiation to determine how to format the result. So I created a formatter that would return the raw image bytes and a content type of image/png whenever a request for the team came in with an Accept header specifying image/png. Any other requested type would just default to the JSON representation of that team's fields.

For this to work, though, the incoming GET request for the URI must set the Accept header appropriately. Unfortunately, when creating the toast template request (in XML) you only get to specify the URI itself, and the request issued for the image (by the inner workings of the Windows 8 notification engine) includes a catch-all header of Accept: */*.

My chosen path of least resistance was to create a separate GET method (GetLogo above) with an additional path parameter that ostensibly indicates the representation format desired (although in my simplistic case, the format is always PNG).
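The Accept-header behavior that motivated this workaround can be sketched in a few lines of Python (an illustration of content negotiation in general, not the Web API's actual conneg implementation):

```python
def choose_representation(accept_header,
                          supported=("image/png", "application/json")):
    """Pick a response content type from an Accept header. A specific
    supported type wins; the catch-all */* (as sent by the Windows 8
    notification engine) falls through to the JSON default."""
    requested = [part.split(";")[0].strip()
                 for part in accept_header.split(",")]
    for media_type in requested:
        if media_type in supported:
            return media_type
    return "application/json"  # default representation

print(choose_representation("image/png"))  # image/png
print(choose_representation("*/*"))        # application/json
```

With only */* to go on, the formatter never sees a request for image/png, which is why a separate route carrying the format in the path was the simpler answer.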

Step 4.2 Coding RegistrationsController

The Registrations controller includes three methods which manage the channel notification registrations that associate the application users’ devices with the teams they are interested in tracking. As you would probably expect, those methods manipulate the registrations table in MySQL via my EF model and repository implementation.

public HttpResponseMessage Post(Registration newReg)

inserts a new registration record into the database. Each registration record reflects the fact that a client (currently identified by a given notification channel URI) is interested in tracking a given team, identified by the team id (or short name). The combination of channel URI and team should be unique, so an attempt to insert a duplicate returns HTTP code 409 (Conflict); a successful insertion yields a 201 (Created).

This POST method is invoked by the Windows Store application whenever the user moves a given team’s notification toggle switch to the On position. The URI pattern it matches is http://boysofsummer.azurewebsites.net/api/registrations, and the request body contains the channel URI and the team name formatted as JSON.
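The uniqueness rule above can be simulated with a set of (channel URI, team) pairs; this Python sketch (my simplification, not the controller code) returns the same status codes:

```python
def register(registrations, channel_uri, team):
    """Insert a (channel URI, team) registration; the pair must be
    unique. Returns an HTTP-style status: 201 Created or 409 Conflict."""
    key = (channel_uri, team)
    if key in registrations:
        return 409  # duplicate registration
    registrations.add(key)
    return 201

regs = set()
print(register(regs, "chan-1", "redsox"))   # 201
print(register(regs, "chan-1", "redsox"))   # 409
print(register(regs, "chan-1", "orioles"))  # 201
```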

public HttpResponseMessage Delete(String id, String uri)

deletes an existing record from the registrations database, in response to a user turning off notifications for a given team via the Windows Store application interface. A successful deletion results in an HTTP code of 204 (No Content), while an attempt to remove a non-existent record returns a 404 (Not Found).

The DELETE HTTP method here utilizes a URI pattern of http://boysofsummer.azurewebsites.net/api/registrations/{id}/{uri} where id is the team’s short name and uri is the notification channel URI encoded in my modification of Base64.

public HttpResponseMessage Put(String id, [FromBody] UriWrapper u)

modifies all existing records matching the notification channel URI recorded in the database with a new channel URI provided via the message body. This is used when a given client’s notification channel URI ‘expires’ and a new one is provided, so that the client can continue to receive notifications for teams to which he previously subscribed. This method, by the way, is the one that leverages the MySQL stored procedure mentioned earlier to more efficiently update potentially multiple rows.

This PUT method matches the URI pattern http://boysofsummer.azurewebsites.net/api/registrations/{id} where id is the previous notification channel URI as recorded in the MySQL database. The replacement URI is passed via the message body as a simple JSON object. Both URIs are Base64(ish) encoded. The update returns a 204 (No Content) regardless of whether any rows in the Registration table are a match.
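The stored procedure from Step 3 updates every row matching the old channel URI, and this PUT handler has the same semantics. They can be sketched over an in-memory list of registrations (a Python simulation, not the service code):

```python
def update_channel_uri(registrations, old_uri, new_uri):
    """Replace old_uri with new_uri on every matching registration,
    mirroring the updatechanneluri stored procedure. Returns the number
    of rows changed; like the PUT handler, zero matches is not an error."""
    changed = 0
    for reg in registrations:
        if reg["uri"] == old_uri:
            reg["uri"] = new_uri
            changed += 1
    return changed

# One expired channel ("chan-1") shared by two team subscriptions.
regs = [{"team": "redsox", "uri": "chan-1"},
        {"team": "orioles", "uri": "chan-1"},
        {"team": "redsox", "uri": "chan-2"}]
print(update_channel_uri(regs, "chan-1", "chan-9"))  # 2
```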

Step 5. Implement the ASP.NET MVC Page That Issues the Notifications

For the sample I’m presenting here, I adapted the default ASP.NET MVC home page to provide a way to trigger notifications to the end users of the Boys of Summer Windows Store app. At the moment, it’s quite a manual process and assumes that someone is sitting in front of a browser – perhaps scanning sports news outlets for interesting tidbits to send. That’s not an incredibly scalable scenario, so ‘in real life’ there might be a (semi-)automated process that taps into various RSS or other feeds and automatically generates relevant notifications.

ASP.NET MVC View for sending notifications

The controller here has two methods, a GET and a POST. The GET is a simple one-liner that returns the View with the list of teams populated from the model via my team repository implementation.

The POST, which is fired off when the Send Notification button is pressed, is the more interesting of the two methods, and appears in its entirety below with a line-by-line commentary.

   1:  [HttpPost]
   2:  public async Task<ActionResult> Index
   3:         (String notificationHeader, String notificationBody, String teamName)
   4:  {
   5:      // set XML for toast
   6:      String toast = SetToastTemplate(notificationHeader, notificationBody,             
   7:          String.Format("{0}api/logos/{1}", Request.Url, teamName));
   9:      // send notifications to subscribers for given team
  10:      List<NotificationResult> results = new List<NotificationResult>();
  11:      foreach (Registration reg in RegistrationRepository.GetRegistrations(teamName))
  12:      {
  13:          NotificationResult result = 
  14:                           await WNSHelper.PushNotificationAsync(reg.Uri, toast);
  15:          results.Add(result);
  17:          if (result.RequiresChannelUpdate())
  18:              RegistrationRepository.RemoveRegistration(teamName, reg.Uri);
  19:      }
  21:      // show results of sending 0, 1 or multiple notifications
  22:      ViewBag.Message = FormatNotificationResults(results);
  24:      return View(TeamRepository.GetTeams().ToList());
  25:  }

Lines 2-3
The POST request pulls the three form fields from the view, namely the Header Text, Message, and Team.

Lines 6-7
A helper method is called to format the XML template; it’s simply a string formatting exercise to populate the appropriate parts of the template. This implementation only supports the ToastImageAndText02Template.

Line 11
A list of all the registrations for the desired team is returned from the database

Lines 13-14
A notification is pushed for each registration retrieved using a helper class that will be explained shortly.

Line 15
A list of NotificationResult instances records the outcome of each notification, including error information if present. Note though that you can never know if a notification has successfully arrived at its destination.

Lines 17-18
If the specific error detected when sending the notification indicates that the targeted notification channel is no longer valid, that registration is removed from the database.
For instance, it may correspond to a user who has uninstalled the application in which case it doesn’t make sense to continue sending notifications there and, if unchecked, could raise a red flag with the administrators of WNS.

Line 22
A brief summary of the notification results is included on the reloaded web page. If a single notification was sent, a few details are provided; if multiple notifications were attempted, only the number of successes and failures is shown.
For a production application, you may want to record each attempt in a log file or database for review to ensure the application is healthy. Additionally, failed attempts can give some high level insight to the usage patterns of your applications – who’s on line when and perhaps how many have uninstalled your app.

Line 24
As with the GET request, the model (the list of baseball teams) is reloaded.

Step 6. Implement the Interface with the Windows Notification Service (WNS)

This is the fun part: the exchange between my cloud service and WNS, which does all the heavy lifting in terms of actually delivering the toast to the devices - via the implementation behind Line 14 above:

await WNSHelper.PushNotificationAsync(reg.Uri, toast)

WNSHelper is a class I wrote to abstract the message flow between the service and WNS and as such is a simpler (but less robust) version of the Windows 8 WNS Recipe code available in the Windows Azure Toolkit for Windows 8 (the source for which is located in the /Libraries/WnsRecipe folder of the extracted toolkit). The code for WNSHelper is available as a Gist (if you want to dive deeper), but here it is in pictures:

OAuth flow

First, my service (via the RefreshTokenAsync method) initiates a request for an access token via the OAuth 2.0 protocol. It uses Live Services to authenticate the package SID and client secret I obtained from the Windows Store when registering my application. The package SID and client secret do need to be stored securely as part of the Cloud Service, since they are the credentials that enable any agent to send push notifications to the associated application.

The HTTP request looks something like the following. Note the SID and the client secret are part of the URL-encoded form parameters along with grant_type and scope, which are always set to the values you see here.

POST https://login.live.com/accesstoken.srf HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Host: login.live.com
Content-Length: 210


Assuming the authentication succeeds, the response message will include an access token, specifically a bearer token, which grants the presenter (or ‘bearer’) of that token the privileges to carry out some operation in future HTTP requests. The HTTP response message providing that token has the form:

HTTP/1.1 200 OK
Content-Type: application/json;charset=UTF-8
Cache-Control: no-store
Pragma: no-cache

{
  "access_token" : "mF_9.B5f-4.1JqM",
  "token_type" : "Bearer",
  "expires_in" : 86400
}
An access token is nothing more than a hard-to-guess string, but note there is also an expires_in parameter, which means that this token can be used only for a day (86400 seconds), after which point additional requests using that token will result in an HTTP status code of 401 (Unauthorized). This is important, and I’ll revisit that shortly.
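Here's a hedged sketch of how a service might request and cache that token (the `grant_type` and `scope` values are the fixed ones WNS documents; the class and the 60-second early-renewal margin are my own choices, not from the article's code):

```python
import json
import time
from urllib.parse import urlencode

TOKEN_URL = "https://login.live.com/accesstoken.srf"

def build_token_request(package_sid: str, client_secret: str):
    """Form-urlencoded body for the OAuth 2.0 client-credentials request."""
    body = urlencode({
        "grant_type": "client_credentials",   # always this value for WNS
        "client_id": package_sid,
        "client_secret": client_secret,
        "scope": "notify.windows.com",        # always this value for WNS
    })
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    return TOKEN_URL, headers, body

class TokenCache:
    """Hold the bearer token until expires_in elapses (minus a safety margin)."""
    def __init__(self):
        self.token = None
        self.expires_at = 0.0

    def store(self, response_json: str, now: float = None):
        now = time.time() if now is None else now
        payload = json.loads(response_json)
        self.token = payload["access_token"]
        # renew 60 s early rather than risk a burst of 401s at the boundary
        self.expires_at = now + payload["expires_in"] - 60

    def is_expired(self, now: float = None) -> bool:
        now = time.time() if now is None else now
        return self.token is None or now >= self.expires_at
```

Even with proactive renewal, the 401-retry path described later is still needed, since WNS can invalidate a token at any time.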

Once the Cloud Service has the bearer token, it can now send multiple push notifications to WNS by presenting that token in the Authorization header of push notification HTTP requests. The actual request is sent to the notification channel URI, and the message body is the XML toast template.

A number of additional HTTP headers are also supported, including X-WNS-Type, which indicates the type of notification (toast, tile, badge, or raw), and X-WNS-RequestForStatus, which I employ to get additional information about the disposition of the notification.

Here’s a sample notification request as sent from my cloud service:

POST https://bn1.notify.windows.com/?token=AgYAAADCM0ruyKK… HTTP/1.1
Content-Type: application/xml
Host: bn1.notify.windows.com
Authorization: Bearer mF_9.B5f-4.1JqM
X-WNS-Type: wns/toast
X-WNS-RequestForStatus: true
Content-Length: 311

<toast>
  <visual>
    <binding template="ToastImageAndText02">
      <image id="1" src="http://boysofsummer.azurewebsites.net/api/logos/tigers/png" />
      <text id="1">Breaking News!!!</text>
      <text id="2">The Detroit Tigers have won the ALCS!</text>
    </binding>
  </visual>
</toast>

Assuming all is in order, WNS takes over from there and delivers the toast to the appropriate client device. For toast notifications, if the client is off-line, the notification is dropped and not cached; however, for tiles and badges the last notification is cached for later delivery (and when the notification queue is in play, up to five notifications may be cached).

If WNS returns a success code, it merely means it was able to process the request, not that the request reached the destination. That bit of information is impossible to verify. Depending on the additional X-WNS headers provided in the request, some headers will appear in the reply providing diagnostic and optional status information.
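A minimal sketch of composing one such push request and reading the status header back (the header names come from the sample request above and from WNS's documented reply headers; the function names are mine):

```python
def build_push_request(channel_uri: str, access_token: str, toast_xml: str):
    """Compose one WNS toast push; the request is POSTed to the channel URI."""
    headers = {
        "Content-Type": "application/xml",
        "Authorization": "Bearer " + access_token,
        "X-WNS-Type": "wns/toast",
        "X-WNS-RequestForStatus": "true",
    }
    return channel_uri, headers, toast_xml

def notification_status(response_headers: dict) -> str:
    """When X-WNS-RequestForStatus was set, WNS echoes X-WNS-NotificationStatus
    ('received', 'dropped', or 'channelThrottled'). Note that 'received' only
    means WNS accepted the notification, not that it reached the screen."""
    return response_headers.get("X-WNS-NotificationStatus", "unknown")
```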

Now if the request fails, there are some specific actions you may need to take. The comprehensive list of HTTP response codes is well documented, but I wanted to reiterate a few of the critical ones that you may see as part of ‘normal’ operations.

200 (Success) – this is a good thing

401 (Unauthorized) – this can occur normally when the OAuth bearer token expires. The service sending notifications should continue to reuse the same OAuth token until receiving a 401. That's the signal to reissue the original OAuth request (with the SID and client secret) to get a new token.
This means, of course, that you should check for a 401 response when POSTing to the notification channel, because you'll want to retry that operation after a new access token has been secured.

406 (Not Acceptable) – you've been throttled and need to reduce the rate at which you are sending notifications. Unfortunately, there's no documentation of how far you can push the limits before receiving this response.

410 (Gone) – the requested notification channel no longer exists, meaning you should purge it from the registry being maintained by the cloud service. It could correspond to a client that has uninstalled your application or one for which the notification channel has expired and a new one has not been secured and recorded.

My implementation checks for all but the throttling scenario. If a 401 response is received, the code automatically reissues the request for an OAuth token (but with limited additional error checking). Similarly, a 410 response (or a 404 for that matter) results in the removal of that particular channel URI/team combination from the MySQL Registrations table.
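The dispatch logic those responses imply can be sketched as a simple mapping (the status numbers are the standard HTTP codes for the names above; the return strings are placeholders of my own for whatever actions a service implements):

```python
def handle_push_response(status: int) -> str:
    """Map a WNS push-notification HTTP status code to the follow-up action."""
    if status == 200:
        return "ok"
    if status == 401:            # bearer token expired: redo the OAuth request, then retry
        return "refresh_token_and_retry"
    if status in (404, 410):     # channel not found or gone: purge the registration
        return "remove_registration"
    if status == 406:            # throttled: back off on send rate
        return "throttle"
    return "error"               # anything else: log and investigate
```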

Step 7. Deploy it all

Only one thing left! SHIP IT! During the development of this sample I was able to do a lot of debugging locally, relying on Fiddler quite a bit to learn and diagnose problems with my RESTful API calls, so you don’t have to deploy to the cloud right away – though with the free Windows Azure Web Site tier there’s no real cost impact for doing so. In fact, my development was somewhat of a hybrid architecture, since I always accessed the MySQL database I’d provisioned in the cloud, even when I was running my ASP.NET site locally.

When it comes time to move to production, it’s incredibly simple due to the ability to use Web Deploy to Windows Azure Web Sites. When I provisioned the site at the beginning of this post, I downloaded the publication settings file from the Windows Azure Portal. Now via the Publish… context menu option of the ASP.NET MVC project, I’m able in Visual Studio 2012 (and 2010 for that matter) to import the settings from that file and simply hit publish to push the site to Windows Azure.

Truth be told, there was one additional step – involving MySQL. Windows Azure doesn't have the MySQL Connector/NET assemblies installed, nor does it know that MySQL is an available provider. That's easily remedied by including the provider declaration in the web.config file of the project:

  <system.data>
    <DbProviderFactories>
      <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient"
           description=".Net Framework Data Provider for MySQL"
           type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data" />
    </DbProviderFactories>
  </system.data>

and making sure that each of the MySQL referenced assemblies has its Copy Local property set to true. This is to ensure the binaries are copied over along with the rest of the ASP.NET site code and resource assets.

With the service deployed, the only small bit of housekeeping remaining is ensuring that the Windows Store application routes its traffic to the new endpoint, versus localhost or whatever was being used in testing. I just made the service URI an application level resource in my Windows Store C#/XAML app so I could easily switch between servers.

<x:String x:Key="ServiceUri">http://boysofsummer.azurewebsites.net:12072</x:String>

With the service deployed, the workflow I introduced two posts ago with the picture below has been realized! Hopefully, this deep-dive has given you a better understanding of the push notification workflow, as well as a greater appreciation for Windows Azure Mobile Services, which abstracts much of this lower-level code into a Backend-as-a-Service offering that is quite appropriate for a large number of similar scenarios.


<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

Bill Hilf (@bill_hilf) posted Windows Azure Benchmarks Show Top Performance for Big Compute to the Windows Azure blog on 11/13/2012:

151.3 TFlops on 8,064 cores with 90.2 percent efficiency

Windows Azure now offers customers a cloud platform that can cost-effectively and reliably meet the needs of Big Compute. With a massively powerful and scalable infrastructure, new instance configurations, and a new HPC Pack 2012, Windows Azure is designed to be the best platform for your Big Compute applications. In fact, we tested and validated the power of Windows Azure for Big Compute applications by running the LINPACK benchmark. The network performance was so impressive – 151.3 TFlops on 8,064 cores with 90.2 percent efficiency – that we submitted the results and have been certified as one of the Top 500 of the world's largest supercomputers.

Hardware for Big Compute

As part of our commitment to Big Compute we are announcing hardware offerings designed to meet customers’ needs for high performance computing. We will offer two high performance configurations: The first with 8 cores and 60 GB of RAM, and a second with 16 cores with 120 GB of RAM. Both configurations will also provide an InfiniBand network with RDMA for MPI applications.

The high performance configurations are virtual machines delivered on systems consisting of:

  • Dual Intel Sandybridge processors at 2.6 GHz
  • DDR3 1600 MHz RAM
  • 10 GigE network for storage and internet access
  • InfiniBand (IB) 40 Gbps network with RDMA

Our InfiniBand network supports remote direct memory access (RDMA) communication between compute nodes. For applications written to use the message passing interface (MPI) library, RDMA allows memory on multiple computers to act as one pool of memory. Our RDMA solution provides near bare metal performance (i.e., performance comparable to that of a physical machine) in the cloud, which is especially important for Big Compute applications.

The new high performance configurations with RDMA capability are ideal for HPC and other compute intensive applications, such as engineering simulations and weather forecasting that need to scale across multiple machines. Faster processors and a low-latency network mean that larger models can be run and simulations will complete faster.

LINPACK Benchmark

To demonstrate the performance capabilities of the Big Compute hardware, we ran the LINPACK benchmark, submitted the results and have been certified as one of the Top 500 of the world’s largest supercomputers. The LINPACK benchmark demonstrates a system’s floating point computing power by measuring how fast it solves a dense n by n system of linear equations Ax = b, which is a common task in engineering. This approximates performance when solving real problems.

We achieved 151.3 TFlops on 8,064 cores with 90.2 percent efficiency. The efficiency number reflects how close a system comes to its maximum theoretical performance, which is calculated as the number of cores times the machine's frequency in cycles per second times the floating-point operations each core can perform per cycle. One of the factors that influences performance and efficiency in a compute cluster is the capability of the network interconnect. This is why we use InfiniBand with RDMA for Big Compute on Windows Azure.
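The quoted numbers check out against the hardware specs given above. A quick sanity calculation (the figure of 8 double-precision flops per cycle per core for a Sandy Bridge part with AVX is my assumption; the post doesn't state it):

```python
# Theoretical peak = cores x clock rate x floating-point ops per core per cycle
cores = 8064                 # cluster size used for the LINPACK run
clock_hz = 2.6e9             # dual Intel Sandy Bridge processors at 2.6 GHz
flops_per_cycle = 8          # assumed: double-precision AVX throughput per core

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
measured_tflops = 151.3
efficiency = measured_tflops / peak_tflops
print(f"peak = {peak_tflops:.1f} TFlops, efficiency = {efficiency:.1%}")
# peak comes out near 167.7 TFlops, and 151.3 / 167.7 is the 90.2% quoted
```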

Here is the output file from the LINPACK test showing our 151.3 Teraflop result.

What’s impressive about this result is that it was achieved using Windows Server 2012 running in virtual machines hosted on Windows Azure with Hyper-V. Because of our efficient implementation, you can get the same performance for your high performance application running on Windows Azure as on a dedicated HPC cluster on-premises.

Windows Azure is the first public cloud provider to offer virtualized InfiniBand RDMA network capability for MPI applications. If your code is latency-sensitive, our cluster can send a 4 byte packet across machines in 2.1 microseconds. InfiniBand also delivers high throughput. This means that applications will scale better, with a faster time to result and lower cost.

Application Performance

The chart below shows how the NAMD molecular dynamics simulation program scales across multiple cores running in Windows Azure with the newly announced configurations. We used 16-core instances for running the application, so 32 and more cores require communication across the network. NAMD really shines on our RDMA network, and the solution time reduces impressively as we add more cores.

How well a simulation scales depends on both the application and the specific model or problem being solved.

We are currently testing the high performance hardware with a select group of partners and will make it publicly available in 2013.

Windows Azure Support for Big Compute with Microsoft HPC Pack 2012

We began supporting Big Compute on Windows Azure two years ago. Big Compute applications require large amounts of compute power that typically run for many hours or days. Examples of Big Compute include modeling complex engineering problems, understanding financial risk, researching disease, simulating weather, transcoding media, or analyzing large data sets. Customers doing Big Compute are increasingly turning to the cloud to support a growing need for compute power, which provides greater flexibility and economy than having all the work done on-premises.

In December 2010, the Microsoft HPC Pack first provided the capability to “burst” (i.e., instantly consume additional resources in the cloud to meet extreme demand in peak usage situations) from on-premises compute clusters to the cloud. This made it easy for customers to use Windows Azure to handle peak demand. HPC Pack took care of provisioning and scheduling jobs, and many customers saw immediate return on their investment by leveraging the always-on cloud compute resources in Windows Azure.

Today, we are pleased to announce the fourth release of our compute cluster solution since 2006. Microsoft HPC Pack 2012 is used to manage compute clusters with dedicated servers, part-time servers, desktop computers, and hybrid deployments with Windows Azure. Clusters can be entirely on-premises, can be extended to the cloud on a schedule or on demand, or can be entirely in the cloud and active only when needed.

The new release provides support for Windows Server 2012. Features include Windows Azure VPN integration for access to on-premises resources, such as license servers, new job execution control for dependencies, new job scheduling policies for memory and cores, new monitoring tools, and utilities to help manage data staging.

Microsoft HPC Pack 2012 will be available in December 2012.

Big Compute on Windows Azure today

Windows Azure was designed from the beginning to support large-scale computation. With the Microsoft HPC Pack, or with their own applications, customers and partners can quickly bring up Big Compute environments with tens of thousands of cores. Customers are already putting these Windows Azure capabilities to the test, as the following examples of large-scale compute illustrate.

Risk Reporting for Solvency II Regulations

Milliman is one of the world's largest providers of actuarial and related products and services. Their MG-ALFA application, widely used by insurance and financial companies for risk modeling, integrates with the Microsoft HPC Pack to distribute calculations to HPC clusters or burst work to Windows Azure. To help insurance firms meet risk reporting for Solvency II regulations, Milliman also offers MG-ALFA as a service using Windows Azure. This enables their customers to perform complex risk calculations without any capital investment or management of an on-premises cluster. The solution from Milliman has been in production for over a year with customers running it on up to 8,000 Windows Azure compute cores.

MG-ALFA can reliably scale to tens of thousands of Windows Azure cores. To test new models, Milliman used 45,500 Windows Azure compute cores to compute 5,800 jobs with a 100 percent success rate in just over 24 hours. Because you can run applications at such a large scale, you get faster results and more certainty in the outcomes as a result of not using approximations or proxy modelling methods. For many companies, complex and time-consuming projections have to be done each quarter. Without significant computing power, they either have to compromise on how long they wait for results or reduce the size of the model they are running. Windows Azure changes the equation.

The Cost of Insuring the World

Towers Watson is a global professional services company. Their MoSes financial modeling software applications are widely used by life insurance and annuity companies worldwide to develop new offerings and manage their financial risk. MoSes integrates with the Microsoft HPC Pack to distribute projects across a cluster that can also burst to Windows Azure. Last month, Towers Watson announced they are adopting Windows Azure as their preferred cloud platform.

One of Towers Watson’s initial projects for the partnership was to test the scalability of the Windows Azure compute environment by modeling the cost of insuring the world. The team used MoSes to perform individual policy calculations on the cost of issuing whole life policies to all seven billion individuals on earth. The calculations were repeated 1,000 times across risk-neutral economic scenarios. To finish these calculations in less time, MoSes used the HPC Pack to distribute these calculations in parallel across 50,000 compute cores in Windows Azure.

Towers Watson was impressed with their ability to complete 100,000 hours of computing in a couple of hours of real time. Insurance companies face increasing demands on the frequency and complexity of their financial modeling. This test demonstrated the extraordinary possibilities that Windows Azure brings to insurers. With Windows Azure, insurers can run their financial models with greater precision, speed and accuracy for enhanced management of risk and capital.

Speeding up Genome Analysis

Cloud computing is expanding the horizons of science and helping us better understand the human genome and disease. One example is the genome-wide association study (GWAS), which identifies genetic markers that are associated with human disease.

David Heckerman and the eScience research group at Microsoft Research developed a new algorithm called FaST-LMM that can find new genetic relationships to diseases by analyzing data sets that are several orders of magnitude larger than was previously possible and detecting more subtle signals in the data than before.

The research team turned to Windows Azure to help them test the application. They used the Microsoft HPC Pack with FaST-LMM on 27,000 compute cores on Windows Azure to analyze data from the Wellcome Trust study of the British population. They analyzed 63,524,915,020 pairs of genetic markers, looking for interactions among these markers for bipolar disease, coronary artery disease, hypertension, inflammatory bowel disease (Crohn’s disease), rheumatoid arthritis, and type I and type II diabetes.

Over 1,000,000 tasks were scheduled by the HPC Pack over 72 hours, consuming the equivalent of 1.9 million compute hours. The same computation would have taken 25 years to complete on a single 8-core server. The result: the ability to discover new associations between the genome and these diseases that could help potential breakthroughs in prevention and treatment.

Researchers in this field will have free access to these results and can use them to independently validate their own labs' results. They can also compute results for individual pairs of markers with the FaST-LMM algorithm on demand, with free access, in the Windows Azure Data Marketplace.

Big Compute

With a massively powerful and scalable infrastructure, the new instance configurations, and the Microsoft HPC Pack 2012, Windows Azure is designed to be the best platform for your Big Compute applications.

We invite you to let us know about Big Compute interests and applications by contacting us at – bigcompute@microsoft.com.

- Bill Hilf, General Manager, Windows Azure Product Marketing.

John Furrier (@furrier) posted Opinion: How Big Data Can Change the Game – Big Data Propels Obama to Re-election to the SiliconAngle blog on 11/9/2012:

As the election buzz builds over how Obama won the election in the most horrific economic conditions any incumbent has ever seen, many want to know why. America has been yelling for years, and Barack extracted that signal from the noise using big data to “listen”. He aligned his message to those who were speaking to him.

On election day America spoke and Barack Obama has won re-election. One key element of Obama’s victory that cannot be overlooked: Big Data.

Its influence on this election has been poorly documented but it played a huge role in returning Obama to the White House. CNN, Fox and the networks completely missed the Big Data/Silicon Valley angle. I’ve been saying Big Data can disrupt all industries and here it has disrupted the election. It literally put Obama over the top. It was that close. Romney just got outplayed in the big data listening game. Just ask Nate Silver. Enough said there.

I have been closely following the election. We had two capable candidates. Romney offered a strong fiscal policy. Obama’s social agenda was spot on. The Republicans, however, are “completely out to lunch” on the pulse of America, unable to fully understand the diversity of the country and the demographic make-up of today’s voters.

If Steve Jobs were alive, he may have volunteered to help Obama promote his campaign. Well, he was there in spirit, because it was Jobs’ iPhone that did help. Smartphones, social media, big data and predictive analytics all played a key role in Obama’s re-election bid, serving as a parallel “ground game” to the traditional “knocking on doors” ground game.

In the summer of 2011, I met with Rayid Ghani, the chief scientist of Obama’s campaign. Rayid was formerly with Accenture Technology Labs in Chicago. It was Rayid’s job to capture the “data firehose” and work with other “alpha geeks” to develop the algorithms that fully aligned Obama’s messages to shifting voter sentiment. The messages were then shared in real-time across social media, Twitter, text messaging and email. Rayid and his team, including volunteers from Google, LinkedIn and other start-ups in Silicon Valley, collected massive amounts of voter data and were able to respond to concerns almost instantly. Without this big data effort, I’m not sure Obama wins re-election.

America is about hope and growth. This is why many found Romney’s fiscal policies so attractive. But it is Obama that speaks to the heart of the upward mobility aspirations of hard working Americans, including entrepreneurs, wanna-be entrepreneurs, and immigrants who want to start their own business and contribute to society. This is the new middle class that Obama speaks to. They represent new opportunities and new growth – mirroring the country’s new demographics. It is not the old guard.

Today’s tech culture has shifted the game in terms of voting and politics. There’s a whole new generation of people coming into the electorate who are young and have a definite perspective about what the future should look like. It’s more inclusive and globally oriented. They don’t take their cues from Big Media. They use their smartphones, Twitter, Facebook and other social media to help understand what is most important to them. They use these same tools to spread their message even further. The Obama campaign understood this. The Romney campaign did not.

Using Big Data, the Obama campaign could understand real-time sentiment across targeted groups and respond almost instantly with a message that could be quickly received and spread instantly to their friends and family using these same tools. This is not spam. These were well crafted messages that people wanted to receive. Big Data allowed the campaign to clue into sentiment right away, craft the right message and respond.

The Obama campaign was not using social media simply to get out their message but using social media to help create signals, align those signals with the voters, and then mobilize those who received their signals. This is much more than texting a person urging them to vote or asking them for money. Those are important but this new big data “listening” effort was more about synthesizing cultural sentiment. Mobilizing people, connecting with everyone, giving everyone a voice, and helping spread their message.

The days when old media and traditional gatekeepers can define the issues are over. Campaigns now need to look at Twitter, Facebook and crowd behavior to understand the pulse of the electorate. When the book is written on this election it’s going to be written how big data, Internet culture, and mobile phones enabled people to share their opinions, give everyone a voice and connect with one another.

Obama understood big data and this new generation of voter, Romney did not. Silicon Valley and even credit to the “crazy ones” like Steve Jobs were essential in helping Obama win re-election.

Here is my video of my comments on the subject on SiliconANGLE.TV NewsDesk

SiliconANGLE.tv is beta testing our NewsDesk (or CubeDesk) as we get ready to go 24/7 with global tech video network coverage, in pursuit of the SiliconANGLE.TV mission to be the ESPN of Tech.

Stay Tuned

John didn’t mention the Epic [Fail] Whale: Romney volunteers say ‘Orca’ was debacle reported by Natalie Jennings (@ngjennings, no relation) in an 11/9/2012 article for the Washington Post:

Mitt Romney election day volunteers say a buggy, inadequately tested poll-monitoring program created by the campaign stymied their voter monitoring efforts on Tuesday.

Orca, as the program was called, was designed as a first-of-its-kind tool to employ smartphones to mobilize voters, allowing them to microtarget which of their supporters had gone to the polls.

It was kept under close wraps until just before election day. Romney campaign communications director Gail Gitcho explained it to PBS on Monday as a massive technological undertaking, with 800 people in Boston communicating with 34,000 volunteers across the country.

According to John Ekdahl in a blistering post on the Ace of Spades blog, the deployment of the new program was sloppily executed from the outset, causing confusion among the volunteers.

Ekdahl wrote that instruction packets weren’t sent to volunteers until Monday night, and they arrived missing some crucial bits of information, including how to access the app and material the volunteers would need at the polls.

From what I saw, these problems were widespread. People had been kicked from poll watching for having no certificate. Others never received their pdf packets. Some were sent the wrong packets from a different area. Some received their packet, but their usernames and passwords didn’t work…

By 2 pm, I had completely given up. I finally got a hold of someone at around 1 pm and I never heard back. From what I understand, the entire system crashed at around 4 pm. I’m not sure if that’s true, but it wouldn’t surprise me. I decided to wait for my wife to get home from work to vote, which meant going very late (around 6:15 pm). Here’s the kicker, I never got a call to go out and vote.

A volunteer from Colorado gave a similar account to the Breitbart Web site, and said “this idea would only help if executed extremely well. Otherwise, those 37,000 swing state volunteers should have been working on GOTV.”

The Romney team had seemed confident in its product before election day. Spokeswoman Andrea Saul said on Monday it would give the campaign an “enormous advantage.”

Update: Romney digital director Zac Moffatt’s response to Orca’s critics

See also ArsTechnica’s Inside Team Romney's whale of an IT meltdown article of 11/9/2012 by Sean Gallagher:


Doug Mahugh (@dmahugh) posted a 00:19:46 OData and DB2: open data for the open web video segment to Channel 9’s Interoperability section on 11/9/2012:

OData is a web protocol that unlocks data silos to facilitate data access from browsers, desktop applications, mobile apps, cloud services, and other scenarios. In this screencast, you'll see how easy it is to set up an OData service (deployed on Windows Azure) for an IBM DB2 database (running on IBM's cloud service), with examples of how to use the service from browsers, client apps, phone apps, and Microsoft Excel.

Here's an overview of the screencast, with links to specific sections:

Derrick Harris (@derrickharris) reported Facebook open sources Corona — a better way to do webscale Hadoop in an 11/8/2012 post to GigaOm’s Cloud blog:

Facebook is at it again, building more software to make Hadoop a better way to do big data at web scale. Its latest creation, which the company has also open sourced, is called Corona and aims to make Hadoop more efficient, more scalable and more available by re-inventing how jobs are scheduled.

As with most of its changes to Hadoop over the years — including the recently unveiled AvatarNode — Corona came to be because Hadoop simply wasn’t designed to handle Facebook’s scale or its broad usage of the platform. What kind of scale are we talking about? According to Facebook engineers Avery Ching, Ravi Murthy, Dmytro Molkov, Ramkumar Vadali, and Paul Yang in a blog post detailing Corona on Thursday, the company’s largest cluster is more than 100 petabytes; it runs more than 60,000 Hive queries a day; and its data warehouse has grown 2,500x in four years.

Further, Ching and company note — echoing something Facebook VP of Infrastructure Engineering Jay Parikh told me in September when discussing the future of big data startups — Hadoop is responsible for a lot of how Facebook runs both its platform and its business:

Almost every team at Facebook depends on our custom-built data infrastructure for warehousing and analytics, with roughly 1,000 people across the company — including both technical and non-technical personnel — using these technologies every day. Over half a petabyte of new data arrives in the warehouse every 24 hours, and ad-hoc queries, data pipelines, and custom MapReduce jobs process this raw data around the clock to generate more meaningful features and aggregations.

So, what is Corona?

In a nutshell, Corona represents a new system for scheduling Hadoop jobs that makes better use of a cluster’s resources and also makes it more amenable to multitenant environments like the one Facebook operates. Ching et al explain the problems and the solution in some detail, but the short explanation is that Hadoop’s JobTracker node is responsible for both cluster management and job-scheduling, but has a hard time keeping up with both tasks as clusters grow and the number of jobs sent to them increases.

Further, job-scheduling in Hadoop involves an inherent delay, which is problematic for small jobs that need fast results. And a fixed configuration of “map” and “reduce” slots means Hadoop clusters run inefficiently when jobs don’t fit into the remaining slots or when they’re not MapReduce jobs at all.

Corona resolves some of these problems by creating individual job trackers for each job and a cluster manager focused solely on tracking nodes and the amount of available resources. Thanks to this simplified architecture and a few other changes, the latency to get a job started is reduced and the cluster manager can make fast scheduling decisions because it’s not also responsible for tracking the progress of running jobs. Corona also incorporates a feature that divvies up a cluster into resource pools to ensure every group within the company gets its fair share of resources.
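The pool-based fair-sharing idea is easy to see in miniature. Here's a toy max-min fair-share allocator in the spirit of Corona's resource pools (my own sketch to illustrate the concept; the pool names and the allocation rule are assumptions, not Facebook's code):

```python
def fair_share(total_slots, demands):
    """Max-min fair allocation of cluster slots across resource pools.

    demands: dict of pool name -> slots requested.
    Pools asking for less than an equal share keep their full demand;
    the surplus is redistributed among the remaining, hungrier pools.
    """
    alloc = {}
    remaining = dict(demands)
    slots_left = total_slots
    while remaining:
        share = slots_left // len(remaining)  # equal split of what's left
        # satisfy every pool whose demand fits under the current share
        small = {p: d for p, d in remaining.items() if d <= share}
        if not small:
            # everyone wants more than the share: split evenly and stop
            for p in remaining:
                alloc[p] = share
            break
        for p, d in small.items():
            alloc[p] = d
            slots_left -= d
            del remaining[p]
    return alloc

# e.g. 100 slots split across three hypothetical teams
print(fair_share(100, {"ads": 80, "search": 20, "etl": 30}))
```

Here "search" and "etl" fit under their fair share and keep their full demand, while "ads" absorbs whatever is left over.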

The results have lived up to expectations since Corona went into full production in mid-2012: the average time to refill idle resources improved by 17 percent; resource utilization over regular MapReduce improved to 95 percent from 70 percent (in a simulation cluster); resource unfairness dropped to 3.6 percent with Corona versus 14.3 percent with traditional MapReduce; and latency on a test job Facebook runs every four minutes has been …

Despite the hard work put into building and deploying Corona, though, the project still has a way to go. One of the biggest improvements currently being developed is to enable resource management based on CPU, memory and other job requirements rather than just the number of “map” and “reduce” slots needed. This will open Corona up to running non-MapReduce jobs, therefore making a Hadoop cluster more of a general-purpose parallel computing cluster.

Facebook is also trying to incorporate online upgrades, which would mean a cluster doesn’t have to come down every time part of the management layer undergoes an update.

Why Facebook sometimes must re-invent the wheel

Anyone deeply familiar with the Hadoop space might be thinking that a lot of what Facebook has done with Corona sounds familiar — and that’s because it kind of is. The Apache YARN project that has been integrated into the latest version of Apache Hadoop similarly splits the JobTracker into separate cluster-management and job-tracking components, and already allows for non-MapReduce workloads. Further, there is a whole class of commercial and open source cluster-management tools that have their own solutions to the problems Corona tries to solve, including Apache Mesos, which is Twitter’s tool of choice.
However, anyone who’s familiar with Facebook knows the company isn’t likely to buy software from anyone. It also has reached a point of customization with its Hadoop environment where even open-source projects from Apache won’t be easy to adapt to Facebook’s unique architecture. From the blog post:

It’s worth noting that we considered Apache YARN as a possible alternative to Corona. However, after investigating the use of YARN on top of our version of HDFS (a strong requirement due to our many petabytes of archived data) we found numerous incompatibilities that would be time-prohibitive and risky to fix. Also, it is unknown when YARN would be ready to work at Facebook-scale workloads.

So, Facebook plods forward, a Hadoop user without equal (save for maybe Yahoo), left building its own tools in isolation. What will be interesting to watch as Hadoop adoption picks up and more companies begin building applications atop it is how many actually utilize the types of tools that companies like Facebook, Twitter and Quantcast have created and open sourced. They might not have commercial backers behind them, but they’re certainly built to work well at scale.

Feature image courtesy of Shutterstock user Johan Swanepoel.

Full disclosure: I’m a registered GigaOm analyst.

<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

See Vittorio Bertocci (@vibronet) reported on 11/8/2012 that he will present a session at the patterns & practices Symposium 2013 event in Redmond, WA on 1/15-1/17/2013 in the Cloud Computing Events section below.


<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

•• Mike Wood (@mikewo) wrote Windows Azure Websites – A New Hosting Model for Windows Azure and Red Gate Software published it in their Simple-Talk newsletter on 10/29/2012 (missed when published):


Whereas Azure works fine for established websites with a healthy visitor-count, it hasn't suited developers who wanted to just try the platform out, or to rapidly develop a number of hosted websites. The deployment process seemed too slow. Microsoft have now found a way of accommodating the latter type of usage with Windows Azure Websites (WAWS).

Since Microsoft released their Windows Azure Platform several years ago, they’ve received a lot of feedback that deployment of hosted websites and applications took too long. Users also pointed out that the platform was a little out of the reach of the hobbyist who just wanted to try things out. These folks just wanted to run a website and quickly iterate changes to that site as they wrote their code.

In response, Microsoft have decided to encourage people who just want to try out the platform, and hobbyists who wish to play with the technology, by creating the 90 Day Free Trial. They have also announced a new high-density website hosting feature that can go a long way in helping out small shops and hobbyists by improving the deployment speed, and allowing fast iterations for simple websites.

Windows Azure Websites

Up until June of this year, Windows Azure had only one way of hosting a web application: their Hosted Services offering, now known as Cloud Services. When you performed a new deployment, the platform would identify a physical server with enough resources based on your configuration, create a virtual machine from scratch, attach a separate VHD that had your code copied to it, boot the machine, and finally configure load balancers, switches and DNS so that your application would come online. This process could take about five to twelve minutes to complete, and sometimes longer. When you consider all those things that are going on behind the scenes I think the wait is quite reasonable given that some enterprise shops have to wait days or weeks to get a new machine configured for them. The point here is that, for Cloud Services, the container that your application or solution runs in is a virtual machine.

In June, Microsoft provided a way to deploy websites faster by introducing Windows Azure Websites (WAWS). WAWS is a high density hosting option which uses an Application Pool Process (W3WP.exe) as the container in which your site runs. A computer process is much faster to start up than a full virtual machine, so deployments and start-up times for WAWS are a great deal quicker than for Cloud Services. Since these processes are so fast to start up, any idle sites can be shut down (have their process stopped) to save resources, and then started back up when requests come in. By bringing sites up and down, a higher number of sites can then be distributed across a smaller number of machines. This is why it is termed ‘high density hosting’. You can host not only .NET based websites, but also sites running PHP, Node.js and classic ASP.

One Website, Please

You can create and deploy a Windows Azure Website in just a few minutes. You can even select an open source application such as DasBlog or Joomla to get a jump start on your website if you like. You first need to have a Windows Azure Account by taking advantage of that Free Trial I mentioned or, if you have a MSDN subscription, you can use your Azure Benefits for MSDN. Once you have a Windows Azure account you can follow the instructions on some of the tutorials provided by Microsoft to get a website deployed. There are several great tutorials out there on getting started so I wanted to focus more on what is happening behind the scenes to make all this work.

Once you have a new, shiny website created in Windows Azure Websites, the site itself isn’t running until a request comes in. That means that, even though the portal shows the website as “running”, it is not yet actually deployed to a machine in the Windows Azure datacenters. The site is registered with the Windows Azure platform and will be deployed once a request comes in. This is known as a “cold request” because the site is not deployed yet.

During a cold request, someone asks for something hosted on your website (an image, the home page, etc.). When the request comes in, it is first passed to the Windows Azure Load Balancer which sees that it is destined for a Windows Azure Website property rather than one of the Cloud Services. The request passes on to a set of servers running the IIS Application Request Routing module (I’ve nicknamed these the pirate servers since they run “ARR”). These ARR servers then look to see if they know where this website is running. Since the site hasn’t actually been started yet, the ARR servers won’t find the site and will look up the site in the Runtime Database that contains metadata for all registered websites in Windows Azure. With the information garnered from the Runtime DB, the platform then looks at the virtual resources available and selects a machine to deploy the site to.

Up to this point, the content for the website has not been deployed anywhere; so where does it live until it gets deployed? When you upload your content, it is placed into Windows Azure BLOB Storage. In fact, Microsoft has created an abstraction layer over the top of Windows Azure BLOB Storage so that it looks as if the content is available via a standard network share. Your websites share a combined total of 1GB of storage for their content on this share, unless you opt for paying for reserved instances which I’ll touch on later. This BLOB storage is kept under a storage account owned by Microsoft and is not under your own storage accounts. This is one of the reasons why you only get 1GB of storage for all your websites; you aren’t directly paying for the content storage on a website.

Once the platform realizes that the website that is being requested isn’t deployed yet it uses Dynamic Windows Process Activation Service (DWAS) on a virtual machine running IIS to create a W3WP.exe process instance. This is essentially an application pool under IIS. IIS is then pointed to the code on the abstracted network share for your website. Once the W3WP.exe process is created and started, the original request is passed in so that your site can respond to that request as normal. This whole series of events, ranging from the time the request hits Windows Azure to the time your site receives the request, should take only a few seconds. If your website takes a lot of time to warm up, whilst it perhaps works on filling caches or making a lot of database calls, that work will start after your website receives that request, so it pays to keep your start up time low if at all possible.

As long as the website is actually deployed and running, then any subsequent requests to the site will pass into ARR servers and be forwarded immediately to the virtual machine running the website. These types of requests are called “hot requests” and the overhead of the routing should be negligible.
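The cold/hot routing decision described above can be sketched as a toy model (my own illustration; the class and method names are invented, and the real ARR servers, Runtime DB and placement logic are far more involved):

```python
class ArrRouter:
    """Toy model of the WAWS request path: hot requests are forwarded
    straight to the running worker; cold requests trigger a lookup in the
    Runtime DB, placement on a host, and a W3WP.exe process start."""

    def __init__(self, runtime_db):
        self.runtime_db = runtime_db  # site name -> registered metadata
        self.running = {}             # site name -> worker host

    def route(self, site):
        if site in self.running:
            # hot request: routing overhead only, forward immediately
            return ("hot", self.running[site])
        if site not in self.runtime_db:
            raise KeyError(f"unknown site: {site}")
        # cold request: pick a host, spin up the site's process, then forward
        host = self._pick_least_busy_host()
        self.running[site] = host
        return ("cold", host)

    def idle_timeout(self, site):
        # the W3WP.exe process is torn down; the next request is cold again
        self.running.pop(site, None)

    def _pick_least_busy_host(self):
        return "worker-01"  # placeholder for the real placement logic


router = ArrRouter({"mysite": {}})
print(router.route("mysite"))   # first request is cold
print(router.route("mysite"))   # subsequent requests are hot
router.idle_timeout("mysite")
print(router.route("mysite"))   # after idle teardown, cold again
```

The key property the sketch captures is that a site's "running" state lives outside any one worker: teardown costs nothing but makes the next request pay the (small) cold-start price, very likely on a different machine.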

I’m Bored, Can I Take a Break?

As I mentioned earlier, one of the reasons that WAWS is considered ‘high density hosting’ is because they can get a lot of sites running across a relatively small number of virtual machines. This is made possible because at any one time, not all of the sites are actually up and running: some are idle. When a site goes idle for a period of time (i.e., it is not getting any requests or doing any work), the W3WP.exe process that is running the site is shut down. The next request coming into this site would then be a cold request and the site would get started back up, very likely on a different machine.

The idle timeout period can fluctuate quite a bit. Currently the timeout period starts at around 20 minutes, but then it can be as short as 5 minutes. Now, you might be thinking, “Whoa! My site might be taken down even after only 5 minutes of idle time?” Yes, that is a pretty short amount of time; however, the reason the idle time fluctuates is to deal with resource management that comes along with being hosted in a shared system.

Party at My Server!

The web servers running WAWS are in a shared system, with each virtual server running many different W3WP.exe processes. Being part of a shared system means that each website is taking up some amount of memory and utilizing some portion of the CPU. The advantage is that resources can be very efficiently used to host a large number of these websites per virtual machine, but that is as long as everyone plays nice. In a shared system there is always the “noisy neighbor” issue to think about. This is where one of the other tenants on the server decides to get really obnoxious, eating up a lot of memory or hogging the CPU (like your apartment neighbors deciding to have a big block party, but not inviting you). This type of behavior can have significant impact on the other websites running on the server, which is why Microsoft has put several mitigating tactics in place to deal with this.

I’ve already mentioned the idle timeout which causes sites to be shut down if they do not see requests for some time. While it might seem counter-intuitive that shutting down non-busy sites helps combat noisy neighbors, it actually benefits those sites that are not as busy. The idle site is torn down, thereby freeing up more resources in the form of memory and CPU on the server; more importantly for the site that was just shut down, the next request will bring it back up on another machine which should not be as busy. Effectively this helps move your site away from the partiers.

The next tactic for dealing with noisy tenants is to apply quotas. Microsoft has quite a few of these for WAWS when running in the shared system. There are memory and CPU quotas that are used to keep a site from hogging too much of the machine’s finite resources. For example, the CPU quota has two thresholds: how much CPU you can use in a given day and how much of that CPU usage can be used within a five-minute period. For the free option of WAWS, the CPU quota is an hour per day and 2.5 minutes within a five minute period. This doesn’t mean that your site can only be up for an hour per day. What it really means is that, over a 24 hour period, your site can use 1 hour of CPU time in total (in theory your site would respond to requests very quickly and utilize only seconds of CPU time for each request). The second CPU threshold is to keep a site from hogging the CPU for that allowable hour by doing something CPU-intensive such as calculating Pi to the millionth digit. Sure, the thread would be switched out while this was going on and other sites would get to execute but it would certainly affect the other sites on the server.
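The two CPU thresholds compose like this (a minimal sketch of the free-tier numbers quoted above; the usage-window bookkeeping is my own simplification of however the platform actually meters it):

```python
# Free-tier limits described above: at most 60 CPU-minutes per day, and
# at most 2.5 CPU-minutes consumed within any 5-minute window.
DAY_QUOTA_MIN = 60.0
WINDOW_QUOTA_MIN = 2.5

def quota_exceeded(day_usage_min, window_usage_min):
    """Return which quota (if any) a site has tripped."""
    if window_usage_min > WINDOW_QUOTA_MIN:
        return "5-minute burst quota"
    if day_usage_min > DAY_QUOTA_MIN:
        return "daily quota"
    return None

# a site serving cheap requests all day stays under both limits...
print(quota_exceeded(day_usage_min=12.0, window_usage_min=0.3))
# ...but burning 3 solid CPU-minutes (calculating Pi, say) trips the
# burst threshold long before the daily hour is used up
print(quota_exceeded(day_usage_min=12.0, window_usage_min=3.0))
```

This is why the burst threshold matters: without it, a site could legally monopolize a core for a full hour straight.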

There are other quotas as well, such as the amount of outbound traffic (your website output to users) and how much of the storage space quota I mentioned that you are taking up for your website content. Some of these quotas apply regardless of the options you choose when managing your site, and others change, or simply are removed if you choose a different plan as I’ll discuss next.

Options, Options, Options

Remember that it is not just poorly written code that might push you up against these quotas. Just by having a popular website it could cause you to hit the limit of the quotas. Once you hit a quota your site could stop responding to requests, and users will be redirected to a page indicating that the site has exceeded the quotas. Obviously this isn’t something you want to have happen for your production level sites and it is also why Microsoft offers three hosting models for WAWS: Free Shared, Paid Shared and Reserved.

The Free Shared Model is the answer for those hobbyists who want to just try some code out or test a proof of concept. As well as the quotas I’ve already mentioned, the Free Shared Model is also constrained in that you cannot have customized domain names for your website. Each subscription can have up to 10 of these Free Shared websites.

The Paid Shared Model is still hosted within the shared system, but the thresholds on the quotas have been increased. This is a good model for perhaps running a personal website, or even a business site that doesn’t see an inordinate amount of traffic. For example, in the Paid Shared model your CPU usage quota is 4 hours per day, but is still constrained to the 2.5 minutes for every 5 minute period. In the Paid Shared Model, you can also set up a custom domain name to point to your website rather than being stuck with mysite.azurewebsites.net. The announced price for the Paid Shared model after the preview period is two cents ($0.02) per hour.

If you don’t want to run under the shared system at all, you can elect to move to the Reserved Model, which allows you to select a Virtual Machine size and get your own virtual machine. This option removes any need for the memory and CPU quotas since you are the only tenant on the machine. It also increases your storage quota to 10 GB for all of your websites, but it will cost about the same as running a full virtual machine under Cloud Services once the WAWS service is generally available: This is currently twelve cents ($0.12) per hour per CPU core that you select to run the reserved instance on. The advantage of this option is that you can actually run up to 100 websites on this reserved instance, meaning that even though you are paying more you are able to co-locate many websites on a server that is dedicated to you.

It is important to note that currently if you elect to have a reserved instance ALL of your websites on that subscription will be moved to this reserved instance. This means that if you wish to have some websites running in the shared system and some running under reserved you’ll need to divide these websites across Azure Subscriptions for now.


One of the main advantages of Cloud computing is being able to scale capacity both up and down so as to meet demand. WAWS is no exception. If you elect to run your websites in the shared system, then you’ll be able to scale to have up to 3 instances of your application. This means that up to three W3WP.exe processes are running your website spread across different virtual machines in the shared system. This helps provide better throughput and better availability for your site.

If you have chosen the Paid Shared model for hosting, then you’ll be charged for each instance of a website that you are running, even if the site has gone idle. This is because the platform has to reserve capacity in order to bring the site online any time a request comes in. If you have elected the reserved model then you are charged per instance per core you’ve selected. For example, if you select a Medium-sized virtual machine to run your sites on, which is 2 cores, and you scale up to two instances then you’ll be charged for 4 total cores. …
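At the prices quoted here, the trade-off is easy to put into numbers (a back-of-envelope sketch using the preview-era rates above; real bills depend on the hours in the month and other meters such as bandwidth):

```python
# Rates quoted in the article: $0.02 per instance-hour for Paid Shared,
# $0.12 per core-hour for Reserved.
SHARED_RATE = 0.02    # $ per instance-hour
RESERVED_RATE = 0.12  # $ per core-hour

def monthly_cost(model, instances, cores_per_instance=1, hours=730):
    """Rough monthly WAWS hosting cost (730 hours ~= one month)."""
    if model == "shared":
        return SHARED_RATE * instances * hours
    if model == "reserved":
        return RESERVED_RATE * instances * cores_per_instance * hours
    raise ValueError(model)

# the article's example: two Medium (2-core) reserved instances bill 4 cores
print(round(monthly_cost("reserved", instances=2, cores_per_instance=2), 2))
# one Paid Shared instance, for comparison
print(round(monthly_cost("shared", instances=1), 2))
```

The gap is large, but remember the reserved instance can host up to 100 websites, so the per-site cost narrows quickly as you co-locate sites on it.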

Read more

Full disclosure: I’m a paid contributor to Red Gate Software’s ACloudyPlace blog, which is in the process of merging with the Simple-Talk newsletter.

Mark Sorenson included the following list of Windows Azure Virtual Machine (WAVM) resources in his “Customer Evaluation Guide for Virtual Machines (IaaS)” document of 11/10/2012:

Core Resources

imageMain Site for Windows Azure http://www.windowsazure.com/

Recent Events (and Video Recordings)

//build/ Conference at Microsoft Campus, October 31, 2012

Meet Windows Azure, June 7, 2012

TechEd Orlando, June 2012


Windows Azure Virtual Machines, Networking, and Hybrid

Windows Azure Blogs http://blogs.msdn.com/b/windowsazure/

Thought Leadership Series

IaaS Series

Web Sites Series

Cloud Services Series

Data Series

Forrester Research Blogs and Key Publications

Microsoft Blog

• Marcel van den Berg (@marcelvandenber) asked without answering Is Windows Azure Virtual Machines (WAVM) a true IaaS plattform? Microsoft drops SLA on single role instances in an 11/11/2012 post:

Microsoft introduced a new service on Windows Azure named Virtual Machines. Using this service, also advertised as a feature, Windows Azure customers are able to manage, and are responsible for, the operating system. Virtual Machines allows deployment of a virtual machine from a catalog or by uploading a self-made VHD file.

This enables developers to run applications on their platform of choice. The PaaS platform Microsoft offers on Azure does not provide a choice of the operating system running underneath the development tools.

Windows Azure Virtual Machines is currently running as a Preview, comparable to a beta. At the announcement of the Virtual Machines feature back in June 2012, Microsoft offered two SLAs for availability of the virtual machines: 99.9% for single role instances and 99.95% for multiple role instances. A single role instance is a *single* VM presenting an application. If the VM becomes unavailable (crash of guest, crash of host, etc.), the application becomes unavailable as well. A multiple role instance has at least two VMs offering the same application. A load balancer distributes application requests over the available VMs.

The image below shows the two SLAs presented at various TechEd events in North America, Europe and Australia. See, for example, Mark Russinovich’s presentation on Azure Virtual Machines at TechEd 2012 USA.

Mark explained that both SLAs, for single and multiple role instances, would become effective when Windows Azure Virtual Machines reaches General Availability.

Massimo Re Ferre’ (a VMware employee working in a vCloud Architect role) has an interesting post on the same subject titled Azure Virtual Machines: what sort of cloud beast is it? He writes about a “design for fail” IaaS cloud and an “enterprise” IaaS cloud. A very good read. [Quote marks added; be sure to read the 6/26/2012 update in red.]

Single instance role SLA dropped

However, in a presentation Mark Russinovich gave at the Build conference in Seattle (end of October 2012), only one SLA is mentioned. That explicit SLA offers 99.95% availability for multiple role instances. Mark says there is an implicit SLA for a single instance role, which can be calculated from the 99.95% for multiple roles. The implicit SLA (not a documented SLA, but rather the availability customers can expect) is 99.76%. However, this SLA is not offered by Microsoft. If a customer wants an SLA, it is 99.95% for multiple role instance virtual machines.

So gone is the 99.9% SLA.

The images below are taken from the Microsoft Build conference at Seattle. You can download the slides or watch the video here. At around 36 minutes into the presentation Mark explains about the single SLA.

So the question arises whether Windows Azure is a true IaaS platform.
Yes, it does deliver management at the operating system level to the cloud consumer. But to get an SLA the consumer needs at least two instances of virtual machines serving the same application. Those two need to be members of the same availability set. The Azure fabric controller will then make sure both VMs run on different Azure hosts located in different racks.
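The value of pairing instances in an availability set follows from the standard independence model (my own back-of-envelope illustration, not Mark's exact derivation of the 99.76% figure): with n independent instances behind a load balancer, the service is down only when all n are down.

```python
def combined_availability(p_single, n):
    """Availability of n independent instances behind a load balancer:
    combined = 1 - (1 - p)^n, i.e. down only when every instance is down."""
    return 1 - (1 - p_single) ** n

# a single instance at the implicit 99.76% figure...
print(round(combined_availability(0.9976, 1) * 100, 2))
# ...paired in an availability set comfortably clears the 99.95% SLA
print(round(combined_availability(0.9976, 2) * 100, 4))
```

The independence assumption is exactly what the fabric controller's different-host, different-rack placement is meant to justify: correlated failures (same host, same rack) would break the math.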

What could be the reason for requiring at least two VMs in an availability set before Microsoft offers an SLA?
I can only guess, but I believe it is a limitation of the Azure architecture: it cannot move running VMs off a host when planned maintenance needs to be done. Azure hosts frequently need updates bringing new functionality or security fixes. Windows Azure hosts do not offer the feature Hyper-V has called Live Migration; Azure runs a dedicated, Microsoft-developed hypervisor, not Hyper-V.

Azure has been designed for a PaaS role in which an application is served by multiple instances. When a single instance fails, there is no issue. So in the architecture of Azure, a live migration feature was not a requirement.

So when an Azure host needs a reboot, the VMs running on that host need a shutdown and will probably be restarted on another host. Hence the 99.9% SLA on availability which was until recently advertised by Microsoft and has now been removed.

True IaaS?
Suppose a customer wants to run a standard back-office infrastructure on Azure: file server, print server, application servers, etc. All applications will need to run on at least two servers for Microsoft to give any guarantees on availability. That will be an interesting challenge for applications which cannot be made highly available.

I am not sure if Windows Azure Virtual Machines can be described as a true IaaS.

My answer: WAVMs are as true an IaaS as Amazon’s or others’ IaaS offerings. The requirement for two instances to obtain a 99.95% uptime SLA is related to inevitable hardware failures in the data center. Most cloud service providers run similar commodity hardware and must contend with potential VM downtime as a result.

• TechRepublic posted John Joyner’s (@john_joyner) Microsoft shares considerations for extending AD into Windows Azure on 11/1/2012 (missed when published):

Takeaway: John Joyner introduces the concept of running conventional Windows Server VMs with the Active Directory Services role installed in Windows Azure.

There are many reasons you might want to extend your Microsoft Windows Active Directory (AD) forest into Microsoft’s public cloud — Windows Azure. Microsoft announced in June 2012 that Infrastructure as a Service (IaaS) features such as Virtual Machines (VMs) were available in Windows Azure. While still in a “trial” phase after six months, Microsoft has continued to add new functions to Azure and publish new prescriptive guidance about using Azure IaaS with Microsoft products.

Running and extending AD into Azure

An important Microsoft document, “Guidelines for Deploying Windows Server Active Directory on Windows Azure Virtual Machines,” was published in October 2012. Directory, enterprise, and cloud architects appreciate, and can rightfully insist on, Microsoft publishing “best practices” for key architectures as they apply to Windows Azure. Following these architecture frameworks reduces risk by creating common and public ground rules that, when followed, enable the Microsoft ecosystem to work at its best.

Microsoft accelerates the adoption of Azure IaaS by empowering customers and partners to keep pushing the limits on what’s possible with Azure IaaS. To extend AD services such as directory and authentication to VMs in Azure, an architect can now start to include Domain Controllers (DCs) and Read-only DCs (RODCs) in Azure as part of a design or solution. Microsoft lets you BYON (bring your own network) into Windows Azure, so it’s technically feasible to securely connect on-premise, WAN, and private cloud networks with Azure virtual networks.

Drivers towards extending AD into Azure

Once you have a bunch of VMs in the Azure cloud that are joined to your on-premise domain, you will discover that having a DC or RODC in the same Azure virtual network might be a good idea. Here are some technical and business drivers for this:

  1. Latency in the AD authentication because the traffic is moving at Internet speed rather than LAN speed. AD client processes like Kerberos are sensitive to timing. Some Azure virtual network connections can prove too slow for demanding applications or those with short authentication timeouts.
  2. Resiliency of the applications running on the IaaS VMs. Consider that, should the Azure virtual network connection to the on-premise DCs fail, AD-based operations in the cloud will cease. However, if there is an AD replica in Azure, such as that found on a DC or appropriately configured RODC, then the Azure cloud can survive a temporary loss of connectivity to the on-premise network.
  3. Azure download bandwidth charges are saved by keeping AD-related network traffic such as DNS and LDAP in the cloud. There is no charge for uploads into Azure, so an RODC in Azure, which has no outbound replication channel, will save money compared to having Azure VMs use the Azure virtual network for all AD traffic.
  4. You may just need AD in Azure, not necessarily to extend your existing AD into Windows Azure using Azure virtual network; in fact, you may deploy AD completely within your Azure virtual network. A self-contained AD that lives only in your Azure cloud might provide directory and authentication services to elastic clusters or farms of computers that have no need for authentication with an on-premise AD.
Create an AD site in Windows Azure

Microsoft makes clear that Azure VMs hosting AD DS roles differ little in terms of how you would employ and manage DC and RODC roles on VMs in any virtualization environment. Windows Azure is fundamentally a massive network of Hyper-V hosts, so general precautions about ensuring AD recoverability when AD is deployed on VMs apply to VMs in Azure.

A simple issue to overcome involves Azure IaaS VMs always having dynamically-assigned network addresses. Actually, Azure IaaS VMs will receive addresses in the network subnet you specify during configuration of the Azure virtual network. Also, the dynamic addresses are permanent for the lifetimes of the VMs. So you can ignore DCPROMO warnings about DCs having dynamically assigned addresses and treat the Azure addresses as permanent, while keeping the VM’s network interface set to use DHCP.

Since Azure virtual network setup will force you to define a specific subnet for servers, the server subnet(s) you specify should naturally be defined in AD Sites and Services. This is something you should do as soon as you establish an Azure virtual network and before joining Azure VMs to the on-premise domain. DCs and RODCs promoted in the Azure virtual network will be correctly associated with the new AD site for Azure that you will create. Conventional IP site transport is used for AD replication. Figure A shows how an RODC in Azure looks: like any other AD site with an RODC.

Figure A

Create subnets for Azure virtual networks, and install Azure-based DCs and RODCs in their own Azure site(s).
Provision a DC with the Azure data disk type

Here is the really important and special detail when it comes to domain controllers in Azure: You must add an additional disk to the Azure VM that will be a DC, before running DCPROMO. This second disk must be of the “data” type, not the “OS” type. The C: drive of every Azure VM is of the “OS disk” type, which has a write cache feature that cannot be disabled. Running a DC with the SYSVOL on an Azure OS disk is not recommended and could cause problems with AD.

This means you must not accept a default DCPROMO installation on an Azure VM; instead, attach a data disk first, then run DCPROMO and locate AD files such as SYSVOL on the data disk, not the C: drive. This link at Microsoft has checklists to add an Azure VM data disk or attach an empty data disk:

Note about a different product: Take into account there is another Microsoft product, Windows Azure Active Directory, which is essentially an outsourced AD that lives completely and only in the cloud. That service appeals to Microsoft Office 365, Dynamics CRM Online, and Windows Intune customers, and that service is NOT the topic of this article. This article introduces the concept of running conventional Windows Server VMs with the Active Directory Domain Services (AD DS) role installed in Windows Azure, particularly as part of a hybrid cloud including on-premise and/or private cloud AD.

The last paragraph explains why this article isn’t in the above Windows Azure Active Directory (WAAD) section.

RealWire reported Compario adopts new IaaS version of Windows Azure in an 11/8/2012 press release:

imageCompario is the first connected commerce software vendor to offer its e-commerce customers a solution based on Windows Azure from Microsoft. Online retailers will benefit from easier and more flexible resources and IT infrastructure management, allowing marketing operations and seasonal peaks to be handled more effectively.

Paris, 8 November 2012 - Compario, the connected commerce software provider, announces that its solution is now available on Windows Azure.

imageThe new version of Windows Azure allows websites to benefit from the flexibility of the Cloud, making the management and use of IT infrastructure and resources (servers, storage, networks, virtualization) agile and easily adapted according to requirements. Now completely compatible with the IaaS (Infrastructure-as-a-Service) version of Windows Azure, Compario's technology automates the detection of additional resource requirements and speeds up their allocation where tasks previously required manual intervention.

imageMarketing departments will enjoy a very high level of responsiveness in their activities, such as advertising campaigns, launches of new product ranges, flash offers, sales, etc, with guaranteed system availability, even during sudden peaks in traffic.

imageMicrosoft Windows Azure offers numerous benefits to Compario and its customers:

  • Improved speed of server deployment: Compario can now carry out a customer implementation on Windows Azure in less than an hour.

  • Responsiveness: Compario can react quickly to peaks in traffic.

  • Enhanced security: safety mechanisms are in place within the Windows Azure Cloud environment.

  • Performance: Windows Azure provides a Cloud infrastructure which can handle very large volumes of users and significant increases in load.

  • Deployment: Windows Azure allows for completely straightforward international deployment.

  • Scalability: Compario can adapt to the needs of growing e-commerce companies and support the development of their businesses even more effectively.

"The latest version of Windows Azure marks a significant milestone with the addition of the IaaS service to the platform, complementing the existing PaaS[1] offering and providing websites with new capabilities," comments Romuald Poirot, VP of Software Engineering at Compario. "Windows Azure allows us to take the Compario platform to a new level in terms of performance as well as ease of use, thanks to the much wider use of the power of the Cloud. Our customers will benefit from these new capabilities and from an even higher level of reliability, to power the performance of their retail site and their connected commerce activities. Ultimately, the Compario platform will make it possible to streamline resources management so that they can be controlled and allocated by the users themselves. In this way, our customers will use Compario as a private Cloud, in which they can automatically activate the resources they need, according to the level of development of their e-commerce site and of their marketing operations."

"At Microsoft, our core aim is to create an environment and platforms which allow our partners to grow and to accelerate their development. We are delighted that Compario is using our Windows Azure IaaS service to offer an even more high-performance service to its clients in the world of e-commerce," emphasizes Jean Ferré, director of the Developers, Platform and Ecosystem division at Microsoft France.

About Compario
Compario develops and markets Connected Commerce solutions designed to:

  • Personalize the customer journey and optimize online navigation and product search

  • Maximize product merchandising, regardless of channel or mode of access (website, smartphone, multi-touch tablet, social network)

  • Facilitate the structuring and management of the web catalog

  • Compario powers more than 100 sites in 15 countries around the world.

  • Compario is trusted by some of the biggest names in online commerce, including 3 Suisses, Casino, Damart, Decathlon, Delhaize, Intersport, Groupe Printemps, Yves Rocher.

For further information, visit: http://www.compario.com/

[1]Platform as a Service

Tarun Arora (@arora_tarun) reported a WebMatrix “The Site has Stopped” Fix in an 11/6/2012 post:

imageI just got started with Azure Web Sites by creating a website from the WordPress template. Next, I tried to install WebMatrix so that I could run the website locally. Every time I tried to run my website from WebMatrix, I hit the message “The following site has stopped ‘xxx’”


Step 00 – Analysis

imageIt took a bit of time to figure out that WebMatrix makes use of IIS Express. It was easy to see that IIS Express was not showing up in the system tray when I started WebMatrix, which was a good indication that IIS Express was having trouble starting up.

So, I opened a CMD prompt and tried to run IISExpress.exe; this resulted in the error message below


So, I ran IISExpress.exe /trace:Error, which gave a more detailed reason for the failure


Step 1 – Fixing “The following site has stopped ‘xxx’”

imageFurther analysis revealed that the IIS Express config file had been corrupted. So, I navigated to C:\Users\<UserName>\Documents\IISExpress\config and deleted the files applicationhost.config, aspnet.config and redirection.config (please take a backup of these files before deleting them).
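The back-up-then-delete step above can be scripted. Here is a hedged sketch (Python used purely for illustration; `reset_iisexpress_config` is a hypothetical helper, and the directory path is passed in because the real path, C:\Users\<UserName>\Documents\IISExpress\config, is per-user):

```python
import shutil
from pathlib import Path

# Files the article says to remove; IIS Express regenerates fresh
# copies the next time it starts.
CONFIG_FILES = ["applicationhost.config", "aspnet.config",
                "redirection.config"]

def reset_iisexpress_config(config_dir):
    """Back up each config file as <name>.bak, then delete the original."""
    config_dir = Path(config_dir)
    for name in CONFIG_FILES:
        f = config_dir / name
        if f.exists():
            shutil.copy2(f, config_dir / (name + ".bak"))
            f.unlink()
```

The .bak copies preserve the corrupted files for later inspection, which is exactly the precaution the article recommends before deleting them.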

Come back to CMD and run IISExpress /trace:Error


IIS Express successfully started and parked itself in the system tray.


I opened up WebMatrix and clicked Run, this time the default site successfully loaded up in the browser without any failures.

Step 2 – Download WordPress Azure WebSite using WebMatrix

Because the config files ‘applicationhost.config’, ‘aspnet.config’ and ‘redirection.config’ were deleted I lost the settings of my Azure based WordPress site that I had downloaded to run from WebMatrix. This was simple to sort out…

Open up WebMatrix and go to the Remote tab, click on Download


Export the PublishSettings file from the Azure Management Portal and upload it in the pop-up that appears when you click Download in the previous step


Now you should have your Azure WordPress website all set up & running from WebMatrix.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Maarten Balliauw (@maartenballiauw) described Sending e-mail from Windows Azure in an 11/12/2012 post:

imageNote: this blog post used to be an article for the Windows Azure Roadtrip website. Since that one no longer exists, I decided to post the articles on my blog as well. Find the source code for this post here: 04 SendingEmailsFromTheCloud.zip (922.27 kb).

imageWhen a user subscribes, you send him a thank-you e-mail. When his account expires, you send him a warning message containing a link to purchase a new subscription. When he places an order, you send him an order confirmation. I think you get the picture: a fairly common scenario in almost any application is sending out e-mails.

Now, why would I spend a blog post on sending out an e-mail? Well, for two reasons. First, Windows Azure doesn’t have a built-in mail server. No reason to panic! I’ll explain why and how to overcome this in a minute. Second, I want to demonstrate a technique that will make your applications a lot more scalable and resilient to errors.

E-mail services for Windows Azure

Windows Azure doesn’t have a built-in mail server. And for good reasons: if you deploy your application to Windows Azure, it will be hosted on an IP address which previously belonged to someone else. It’s a shared platform, after all. Now, what if some obscure person used Windows Azure to send out a number of spam messages? Chances are your newly-acquired IP address has already been blacklisted, and any e-mail you send from it ends up in people’s spam filters.

All that is fine, but of course, you still want to send out e-mails. If you have your own SMTP server, you can simply configure your .NET application hosted on Windows Azure to make use of it. There are a number of so-called SMTP relay services out there as well; even Belgian hosters like Combell, Hostbasket or OVH offer this service. Microsoft has also partnered with SendGrid to provide an officially supported service for sending out e-mails, with a special offer for Windows Azure customers: 25,000 free e-mails per month. It’s a great service to get started with, and I’ll be using SendGrid in this blog post.

Asynchronous operations

I said earlier that I wanted to show you two things: sending e-mails and building scalable and fault-resilient applications. This can be done using asynchronous operations. No, I don’t mean AJAX. What I mean is that you should create loosely-coupled applications.

Imagine that I was to send out an e-mail whenever a user registers. If the mail server is not available for that millisecond when I want to use it, the send fails and I might have to show an error message to my user (or even worse: a YSOD). Why would that happen? Because my application logic expects that it can communicate with a mail server in a synchronous manner.


Now let’s remove that expectation. If we introduce a queue in between both services, the front-end can keep accepting registrations even when the mail server is down. And when it’s back up, the queue will be processed and e-mails will be sent out. Also, if you experience high loads, simply scale out the front-end and add more servers there. More e-mail messages will end up in the queue, but they are guaranteed to be processed in the future at the pace of the mail server. With synchronous communication, the mail service would probably experience high loads or even go down when a large number of front-end servers is added.
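The decoupling described above can be sketched in a few lines. This is an illustrative in-memory model only (Python here, while the article’s actual code is C#; `register_user` and `process_pending_emails` are hypothetical names): the front-end enqueues and returns immediately, and the worker drains the queue whenever the mail service is reachable.

```python
import queue

# In-memory stand-in for the queue between front-end and mail worker.
registration_queue = queue.Queue()

def register_user(name, email):
    # Front-end: accept the registration immediately and enqueue the
    # e-mail work instead of calling the mail server inline.
    registration_queue.put({"name": name, "email": email})
    return "registration accepted"

def process_pending_emails(send_email):
    # Back-end worker: drain whatever is queued. If the mail server was
    # down earlier, the messages simply wait here until it is back.
    sent = 0
    while not registration_queue.empty():
        msg = registration_queue.get()
        send_email(msg["email"], "Welcome, %s!" % msg["name"])
        sent += 1
    return sent
```

Note that `register_user` succeeds even if `process_pending_emails` never runs in the same request: that gap is precisely what lets the front-end scale out independently of the mail service.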


Show me the code!

Let’s combine the two approaches described earlier in this post: sending out e-mails over an asynchronous service. Before we start, make sure you have a SendGrid account (free!). Next, familiarise yourself with Windows Azure storage queues using this simple tutorial.

In a fresh Windows Azure web role, I’ve created a quick-and-dirty user registration form:


Nothing fancy, just a form that takes a post to an ASP.NET MVC action method. This action method stores the user in a database and adds a message to a queue named emailconfirmation. Here’s the code for this action method:

[HttpPost, ActionName("Register")]
public ActionResult Register_Post(RegistrationModel model)
{
    if (ModelState.IsValid)
    {
        // ... store the user in the database ...

        // serialize the model
        var serializer = new JavaScriptSerializer();
        var modelAsString = serializer.Serialize(model);

        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(modelAsString));

        return RedirectToAction("Thanks");
    }
    return View(model);
}

As you can see, it’s not difficult to work with queues. You just enter some data in a message and push it onto the queue. In the code above, I’ve serialized the registration model containing my newly-created user’s name and e-mail to the JSON format (using JavaScriptSerializer). A message can contain binary or textual data: as long as it’s less than 64 KB in data size, the message can be added to a queue.
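Given the 64 KB limit, a defensive size check before enqueueing can be sketched as follows. This is an illustrative Python sketch, not the article’s C# code, and `can_enqueue` is a hypothetical helper; it mirrors the JSON-serialization step above and simply verifies the payload fits in one queue message:

```python
import json

MAX_QUEUE_MESSAGE_BYTES = 64 * 1024  # queue message size limit cited above

def can_enqueue(model):
    # Serialize the registration model to JSON (mirroring the article's
    # JavaScriptSerializer step), then check the encoded size.
    payload = json.dumps(model).encode("utf-8")
    return len(payload) < MAX_QUEUE_MESSAGE_BYTES
```

A typical registration model is a few hundred bytes at most, so the check only matters if you start stuffing larger blobs into messages; in that case, the usual pattern is to store the blob elsewhere and enqueue a pointer to it.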

Being cheap with Web Workers

When boning up on Windows Azure, you’ve probably read about so-called Worker Roles, virtual machines that are able to run your back-end code. The problem I see with Worker Roles is that they are expensive to start with. If your application has 100 users and your back-end load is low, why would you reserve an entire server to run that back-end code? The cloud and Windows Azure are all about scalability and using a “Web Worker” will be much more cost-efficient to start with - until you have a large user base, that is.

A Worker Role consists of a class that inherits the RoleEntryPoint class. It looks something along these lines:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // ...
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // ...
        }
    }
}

You can run this same code in a Web Role too! And that’s what I mean by a Web Worker: by simply adding this class which inherits RoleEntryPoint to your Web Role, it will act as both a Web and Worker role in one machine.

Call me cheap, but I think this is a nice hidden gem. The best part about this is that whenever your application’s load requires a separate virtual machine running the worker role code, you can simply drag and drop this file from the Web Role to the Worker Role and scale out your application as it grows.

Did you send that e-mail already?

Now that we have a pending e-mail message in our queue and we know we can reduce costs using a web worker, let’s get our e-mail across the wire. First of all, using SendGrid as our external e-mail service offers us a giant development speed advantage, since they are distributing their API client as a NuGet package. In Visual Studio, right-click your web role project and click the “Library Package Manager” menu. In the dialog (shown below), search for Sendgrid and install the package found. This will take a couple of seconds: it will download the SendGrid API client and will add an assembly reference to your project.


All that’s left to do is write the code that reads out the messages from the queue and sends the e-mails using SendGrid. Here’s the queue reading:

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
        {
            string value = "";
            if (RoleEnvironment.IsAvailable)
            {
                value = RoleEnvironment.GetConfigurationSettingValue(configName);
            }
            else
            {
                value = ConfigurationManager.AppSettings[configName];
            }
            configSetter(value);
        });
        return base.OnStart();
    }

    public override void Run()
    {
        // emailconfirmation queue
        var account = CloudStorageAccount.FromConfigurationSetting("StorageConnection");
        var queueClient = account.CreateCloudQueueClient();
        var queue = queueClient.GetQueueReference("emailconfirmation");
        queue.CreateIfNotExist();

        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // ...

                // mark the message as processed
                queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(30));
            }
        }
    }
}

As you can see, reading from the queue is very straightforward. You use a storage account, get a queue reference from it and then, in an infinite loop, you fetch a message from the queue. If a message is present, process it. If not, sleep for 30 seconds. On a side note: why wait 30 seconds for every poll? Well, Windows Azure will bill you per 100,000 requests to your storage account. It’s a small amount, around 0.01 cent, but it may add up quickly if this code is polling your queue continuously on an 8 core machine… Bottom line: on any cloud platform, try to architect for cost as well.
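That back-of-envelope cost is easy to verify. A sketch, assuming the roughly $0.01 per 100,000 storage transactions figure mentioned above and a 30-day month (each GetMessage poll is one billable transaction):

```python
def monthly_polling_cost(poll_interval_seconds, instances=1,
                         price_per_100k=0.01):
    # Rough cost (in dollars) of an idle queue-polling loop over a
    # 30-day month; every poll is one billable storage transaction.
    seconds_per_month = 30 * 24 * 60 * 60
    polls = (seconds_per_month / poll_interval_seconds) * instances
    return polls / 100_000 * price_per_100k
```

At a 30-second interval, a single instance generates 86,400 polls a month, well under a cent. Shrink the interval and multiply the instance count and the transaction charge grows accordingly, which is why architecting for cost matters.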

Now that we have our message, we can deserialize it and create a new e-mail that can be sent out using SendGrid:

// deserialize the model
var serializer = new JavaScriptSerializer();
var model = serializer.Deserialize<RegistrationModel>(message.AsString);

// create a new email object using SendGrid
var email = SendGrid.GenerateInstance();
email.From = new MailAddress("maarten@example.com", "Maarten");
email.AddTo(model.Email);
email.Subject = "Welcome to Maarten's Awesome Service!";
email.Html = string.Format(
    "<html><p>Hello {0},</p><p>Welcome to Maarten's Awesome Service!</p>" +
    "<p>Best regards, <br />Maarten</p></html>",
    model.Name);

var transportInstance = REST.GetInstance(new NetworkCredential("username", "password"));
transportInstance.Deliver(email);

// mark the message as processed
queue.DeleteMessage(message);

Sending e-mail using SendGrid is in fact getting a new e-mail message instance from the SendGrid API client, passing the e-mail details (from, to, body, etc.) on to it and handing it your SendGrid username and password upon sending.

One last thing: notice that we only delete the message from the queue after processing has succeeded. This ensures the message is actually processed. If for some reason the worker role crashes during processing, the message will become visible again on the queue and will be processed by another worker role instance polling this queue. That way, messages are never lost and are guaranteed to be processed at least once.
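The at-least-once contract can be illustrated with a toy simulation. This is not the Azure SDK; `SimQueue` is a hypothetical stand-in that models the visibility-timeout behavior described above, where a fetched message is hidden rather than removed and reappears if the worker never confirms deletion:

```python
class SimQueue:
    # Minimal simulation of delete-after-processing semantics.
    def __init__(self):
        self._visible = []
        self._invisible = {}
        self._next_id = 0

    def add(self, body):
        self._visible.append((self._next_id, body))
        self._next_id += 1

    def get(self):
        # Fetching hides the message instead of removing it.
        if not self._visible:
            return None
        msg = self._visible.pop(0)
        self._invisible[msg[0]] = msg
        return msg

    def delete(self, msg):
        # Only an explicit delete removes the message for good.
        self._invisible.pop(msg[0], None)

    def visibility_timeout_elapsed(self):
        # Simulate the timeout expiring: hidden messages return.
        self._visible.extend(self._invisible.values())
        self._invisible.clear()
```

If the worker crashes between `get` and `delete`, the message survives the crash and is handed to the next worker, which is exactly why the e-mail sending code deletes the message last.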

• J. Larry Aultman (@wph101larrya) reported Florida Election Watch 2012 Major Success in an 11/7/2012 post:

imageThe election of 2012 is in the books. Florida Department of State pioneered cloud computing at the state government level. From October 2011 until the November 6, 2012 election, I put together a winning team who in twelve months’ time created an Azure cloud solution while conducting the Presidential Preference Primary, the Presidential Primary and finally the General Election; all in the cloud. [Emphasis added.]

imageUnlike other states’ reporting sites, Florida’s site was 100%. Cloud works and it lowered the cost of the election!

The numbers tell the story…

Florida Election stats: 3 million pages served

According to his resume and Twitter profile, Larry is CIO, Florida Department of State.

From Larry’s resume:

Florida Department of State, like many agencies or private businesses, has legacy applications and is looking to the cloud as an alternative. I work with the Secretary of State, the Chief of Staff, and management on down the line to methodically create cloud solutions. Seems easy and straight forward; it is not. This year I pulled together five divisions’ IT into a single IT unit, created a vision, strategy, and goal for IT. Since there was no system for development, I had to create that too. Working closely with the senior management and with their complete support, the Department now has the Division of Elections’ Florida Voter Registration System deployed as an Azure Cloud service. At the same time, I formed a team to modernize the Division of Corporations’ Sunbiz™ into an Azure Cloud service that goes online January 2013. [Emphasis added.]

Mark Hachman reported Microsoft Demos A Star Trek-Style Universal Translator [Video] in an 11/9/2012 post to the ReadWriteCloud blog:

Microsoft Demos A Star Trek-Style Universal Translator [Video]

Microsoft has shown off research that takes us significantly closer to a Star Trek-style universal translator: natural language translation, in real time, in the user’s own voice.

The demonstration by Microsoft chief research officer Rick Rashid (see embedded video below) at Microsoft Research Asia’s 21st Century Computing event was part of a speech to about 2,000 students in China on October 25, and doesn’t actually represent a product in the works. “This work is in the pure research stages to push the boundaries of what’s currently possible,” a Microsoft spokeswoman said in an email.

The potential value of such capability is enormous, and obvious. On Star Trek, the universal translator made alien relations possible. For business travelers and tourists, speaking even a few words of the native tongue, let alone fluently, can make a big difference. For immigrants, learning the language of their new country is often the biggest barrier to assimilation. That's why Microsoft - and competitors like Google, among others - have worked for years to develop real-time translation systems.

Rashid’s demonstration shows a real-time speech-to-text translation engine, with a similarly real-time assessment of its accuracy. (Microsoft didn’t say how it generated the accuracy measurement.) According to Rashid, however, the accuracy has been improved by more than 30% compared to previous generations, with a current error rate as little as one in seven or eight words, or 13% to 14%. (Disclosing the error rate is significant, as competitors like Nuance usually compare recognition rates against their own products.)

Translation Needs Big Data

imageMicrosoft is no stranger to automated translation; on Halloween, the company announced that it would be working with researchers in Central America to build a version of the Microsoft Translator Hub to preserve the Mayan language. The Hub lets users create a model, add language data, then use Microsoft’s Windows Azure cloud service to power the automated translation. The idea, as Microsoft took pains to explain, was to preserve the dying language through the next b'ak'tun, the calendar cycle that ends this December, prompting waves of end-of-the-world predictions, including movies like 2012.

As Microsoft’s Translator Hub suggests, translation is predicated upon big data. The calculations are immensely complicated, not just dealing with the phonemes that make up each word, but also working out how thoughts are organized into proper grammar, as well as other elements like the genders of certain nouns, honorifics, and other cultural nuances. Microsoft built speech-to-text tools into Windows XP, as Rashid points out, but the technology suffered arbitrary speech errors of about one in every four words. Although speech-to-text (and text-to-speech) has remained inside Microsoft’s software as an accessibility tool, it hasn’t yet served as a general replacement for the keyboard, even (as Scott Forstall’s departure from Apple demonstrates) with some of the top minds in the industry powering technologies like Apple’s Siri.

Typically, machine translation is improved through training, as the software learns how a user pronounces various phonemes and generally becomes familiar with how the user says individual words.

His Master's Voice

Rashid’s demonstration went a step farther, however. The software not only learned what Rashid was saying, but also parsed the meaning, reorganizing it into Chinese. It also took his voice and recast the Chinese phonology in Rashid’s natural voice. How? By using a few hours of speech from a native Chinese speaker and properties of Rashid’s own voice taken from about one hour of pre-recorded English data: recordings of previous speeches he had made.

Real-time voice translation isn’t exactly new in the mobile space. Both Microsoft and Google, for example, have released apps that can translate text that a smartphone camera sees. And both offer “conversational modes” that are actually more akin to a CB radio: one person talks, taps “stop,” the phone translates and plays back a recorded voice, the other person speaks, and so on. What Rashid’s demonstration showed off was a much more conversational, continuous, natural means of translation.

And as Rashid’s blog post and the video highlight, the crowd applauded nearly every line. That’s the type of response every business traveler and tourist wouldn’t mind when trying to make herself understood.

Lead image from Memory Alpha.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Note: The Cloud Computing Events section below has more information about new SharePoint 2013 and Online Apps and LightSwitch.

The Visual Studio LightSwitch Team (@VSLightSwitch) posted Survey App Tutorial: Developing a SharePoint Application Using LightSwitch to the MSDN Code Samples blog on 11/12/2012:

imageThe "Survey Application Tutorial: Developing a SharePoint Application Using LightSwitch" will show you how to use LightSwitch to build a SharePoint application with an HTML client that runs on a variety of mobile devices.

Download: C# (1.6 MB) VB.NET (1.7 MB)


imageLightSwitch now supports the ability to create SharePoint applications that can be easily installed to and launched from a SharePoint site. LightSwitch SharePoint applications automatically handle identity flow between SharePoint and the LightSwitch application and provide a code experience for interacting with SharePoint assets.


Building the Sample

imageDownload either the VB or C# source files to a machine that has Microsoft LightSwitch HTML Client Preview 2 for Visual Studio 2012 installed. Follow the steps described in the Survey Application Tutorial to build a LightSwitch SharePoint application using these files.

To post questions regarding this tutorial, refer to the LightSwitch HTML Client forum. For more information & to download the HTML client preview see the HTML Client Resources Page on the Developer Center.

Survey Application Overview

Contoso Foods is both a food manufacturer and distributor that sells a variety of products to grocery stores nationwide. Contoso Foods' Sales Representatives play an important role in maintaining relationships with partner stores by frequently visiting each location to deliver products and conduct quality surveys. Quality surveys are completed for every product to measure the presence that the product has within the store. Typical survey data that is collected by Sales Representatives includes:

  • Cleanliness of the product display (ranging from "very poor" to "excellent")
  • Quality of the product display (also ranging from "very poor" to "excellent")
  • Product location within an aisle (middle of aisle, end of aisle, or aisle end-cap)
  • Product shelf height position (top shelf, eye-level shelf, or bottom shelf)

In addition, as part of completing surveys, photos are taken of the product display to support the overall assessment.

On a weekly basis, Sales Representatives visit store locations to take product surveys. Currently, survey data is captured using a clipboard and pen, but this method is slow and increases the likelihood of transcription errors. Also, this method makes it difficult to take and attach photos to surveys. To address these problems, the sales team has decided to create a Survey Application that Sales Representatives can access from their tablet devices to easily collect survey data and attach photos that have been taken using their device. Specifically, the Survey Application will be an Office 365 SharePoint application created using Visual Studio LightSwitch. Key reasons for this approach are:

  • The Sales team recently switched to Office 365 for internal email and collaboration, so Sales Representatives are already used to signing into the team's SharePoint Online site to view customer contact information, marketing material, and customer invoices. Based on this, the team's SharePoint site is the logical place to host and launch the Survey Application.
  • SharePoint Online offers easy access and management of images. SharePoint's Picture Library automatically creates thumbnail and web optimized versions of images which improves performance when displaying photos within the application.

This tutorial will walk you through the steps for developing the Survey Application that Contoso Foods' Sales Representatives will use for completing survey assessments.

imageI’ve downloaded the files and will be testing the tutorial shortly. LightSwitch SharePoint applications are hosted in Windows Azure with databases running in SQL Azure. Stay tuned for more details …

•• John Stallo of the Visual Studio LightSwitch Team (@VSLightSwitch) posted Announcing LightSwitch HTML Client Preview 2! on 11/12/2012:

imageWe are pleased to announce an updated preview of the LightSwitch HTML Client! The overwhelmingly positive feedback we received following our first preview last June for building HTML5-based, touch-optimized mobile business applications has truly been energizing – thank you for your constructive feedback and support as we continue to develop this important release. Here’s a summary of what’s new in Preview 2.

Preview 2 can be Installed into Visual Studio 2012 RTM

imageEarly adopters will remember our first preview was made available only as a preconfigured VHD, a decision we made at the time to expedite getting the preview out to you for some early tire kicking. This time, with Preview 2, we’ve made it available as a simple WPI (Web Platform Installer) package that you can install on any machine that already has Visual Studio 2012 Professional or above.

Build SharePoint 2013 Apps with LightSwitch

imageThis is potentially our biggest announcement for Preview 2, and for many good reasons! SharePoint has fast become an important hub of activity within the enterprise. It’s the central portal where folks sign-in to collaborate with each other and participate in business workflows, so it makes sense that an increasingly common request we’ve heard is to create business apps where your users are: that is, run LightSwitch apps from SharePoint.

Leveraging the simplicity and productivity of LightSwitch, you can now use Preview 2 to build SharePoint 2013 apps and install them on an Office 365 site. For the developer, this means simplified deployment, central management of user identity, app installation and updates, and the ability for your app to consume SharePoint services and data in a more integrated fashion. For users of your app, this means a central place for them to sign-in once and launch modern, web-based applications on any device for their everyday tasks.


Users sign-in to a SharePoint 2013 site and launch apps from a central app catalog.


SharePoint Apps built with LightSwitch run with the user’s credentials without needing to sign-in again. The top blue bar—called the SharePoint Chrome control—allows users to jump back to the SharePoint site with a single tap.

We’ve provided a walkthrough to help you explore building SharePoint Apps with LightSwitch. You’ll also need to sign up for an Office 365 Developer account by visiting http://dev.office.com. Click the Build: Sign up and start building apps tile to start the process.


Visit http://dev.office.com to sign up for an Office 365 account.

HTML Client Experience Enhancements

We’ve continued to evolve and improve the experience for building modern HTML5-based web apps with LightSwitch, improvements that apply regardless of whether you’ll run your app on SharePoint 2013, Azure, or on-premise IIS. In Preview 2 we have significantly improved the design-time coding experience with better JavaScript IntelliSense support and debugging, introduced a number of additional coding entry points for responding to common events, and a richer set of APIs for interacting with the application. We’ve also provided an integrated experience with jQuery Mobile ThemeRoller for custom theming, introduced a new Tile list control, simplified control of layout and sizing, added support for publishing to Azure websites, and more! Stay tuned over the next several days for a series of blog articles that cover these enhancements in detail. To help you tie all this together, we’ve updated the Contoso Moving walkthrough to include new features from Preview 2.

Download Preview 2

Download the preview and tell us what you think! We’re very much looking forward to your feedback, questions, and suggestions. Please use the LightSwitch HTML Client forum to post your feedback and check the HTML Client Resources Page on the Developer Center and this blog for videos and articles as they become available!

Microsoft Office Developer Tools for Visual Studio - Preview 2

Note: This is a one-stop package which includes LightSwitch HTML Client – Preview 2 as well as other components for building SharePoint 2013 Apps.

Here’s the Web Platform Installer 4.0’s feature list for Preview 2:


Following are the elements hidden in the above screen capture:


•• Jan van der Haegen (@janvanderhaegen) made code for his LightSwitch HTML Preview - first impressions session for VSLive! available for download on 11/12/2012:

This sample shows the LightSwitch HTML preview capabilities: theming, creating contents & formatting it quickly, inserting static HTML, and using some JS or JQuery to create a "list-slider" or "banner".

Download: C# (16.7 MB)

Building the Sample

Until further notice, this sample can ONLY be run with the LightSwitch HTML Preview version of Visual Studio 2012; download it for free.


Shows how to replace the "loading application" text, the application icon & the theme (css).


Shows how you can add new (static) images and static text.

There's never a lot of code to show when you talk the LightSwitch lingo, but adding static text is as easy as:



function AddText(element, contentItem, text) { 
    var itemTemplate = $(text); 
    itemTemplate.appendTo($(element)); // standard LightSwitch render pattern: append the jQuery element to the screen element 
} 

function Footer(element, contentItem) { 
    AddText(element, contentItem, 
        "<h3>Sample 'Single Page Application' made with " + 
        "<a href='http://msdn.microsoft.com/en-us/vstudio/htmlclient.aspx'>Visual Studio LightSwitch</a>" + 
        " by <a href='http://www.switchtory.com/janvan'>Jan Van der Haegen</a>" + 
        ". - All rights reserved.</h3>"); 
} 

lightSwitchApplication.WelcomeToVsLive.prototype.SessionFooter_render = function (element, contentItem) { 
    Footer(element, contentItem); 
};

Speaker list.

Explores formatting, item tap actions.

Further formatting is applied in the "details" dialog.


Using some simple JQuery and JavaScript to turn a List that was generated by LightSwitch into a "banner" or a "slideshow": only one item is shown.

This item will then fade out, and the next item will fade in with fixed intervals (timer-based).

The JS & JQuery code is available in the sample and explained in the articles.
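For readers who want the gist without opening the sample, the timer-based rotation Jan describes can be sketched roughly as follows. The helper name and the 5-second interval below are illustrative, not taken from the sample; the real code uses jQuery's fadeOut()/fadeIn() for the transitions.

```javascript
// Sketch of a timer-based "banner": one list item is visible at a time; on
// each tick the current item is hidden and the next one is shown. The DOM
// work is abstracted behind show/hide callbacks so the rotation logic itself
// is visible (and testable) on its own.
function createRotator(itemCount, show, hide) {
  var current = 0;
  show(current); // start with the first item visible
  return function advance() {
    hide(current);
    current = (current + 1) % itemCount; // wrap around after the last item
    show(current);
    return current;
  };
}

// In the browser, wired to a LightSwitch-generated list with jQuery, this
// would look roughly like:
//   var items = $("li", element);
//   var advance = createRotator(items.length,
//     function (i) { $(items[i]).fadeIn(); },
//     function (i) { $(items[i]).fadeOut(); });
//   setInterval(advance, 5000); // fixed interval, as described above
```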

    More Information

    Keep an eye on MSDN's Leading LightSwitch column, the LightSwitch HTML developer center, or my personal blog for more articles and samples.

    Copyright disclaimer

    The images and text used in this sample belong to their respective owners, Microsoft and 1105Media. No copyrights infringement intended.

    For more about the LightSwitch HTML Preview, see Soma Somasegar’s (@ssomasegar) Building Apps for Office and SharePoint post of 11/12/2012 in the Cloud Computing Events section below.

    I encountered errors when attempting to run Jan’s code with LightSwitch HTML Preview 2 in Visual Studio 2012.

    Return to section navigation list>

    Windows Azure Infrastructure and DevOps

    The Windows Azure Team described a new tiered Windows Azure Support program on 11/13/2012:

    Windows Azure offers flexible support options for customers of all sizes - from developers starting their journey in the cloud to enterprises deploying business critical applications. These support options provide you with the best available expertise to increase your productivity, reduce your business costs, and accelerate your application development.


    Windows Azure Support Features:


    1 Additional information on Premier Support, including how to purchase, can be found here.

    2 15-minute response time is only available with the purchase of Microsoft Rapid Response and Premier Support for Windows Azure.

    3 Business hours for local languages and 24x7 for English.

    Professional Direct

    Microsoft Professional Direct Support for Windows Azure provides first-class support designed especially for mid-sized customers that require elevated support, access to experts and top-rated educational events. With Professional Direct Support, we help you maximize application uptime, reduce cost, and accelerate your development.

    • Maximize Uptime: Get unlimited phone support 24x7 with priority routing, 1-hour response time on your most critical issues, and a direct line to support advocates who provide an escalation channel when needed.
    • Reduce Cost: Receive a monthly optimization report that helps you identify areas where you can save money, improve architecture and enhance user experience.
    • Accelerate Development: Gain exclusive access to top-rated content and remote events that help your developers enhance skills, increase productivity and learn about the newest product features.
    Support Scope

    Support for billing and subscription management-related issues as well as break-fix issues is available at all support levels. Break-fix issues are problems experienced by customers while using Windows Azure where there is a reasonable expectation that Microsoft caused the problem. Developer mentoring and advisory services are available at the Professional Direct and Premier support levels.

    Products and Services covered:

    • Windows Azure services released to General Availability are covered by all support levels.
    • Preview, pre-release or beta service support may be available through our community forums.
    • Refer to the pricing page for the list of GA and Preview/Beta services.
    • Non-Microsoft technologies, when provided by Microsoft as part of a Windows Azure product feature, are covered by all support levels; examples include the Windows Azure SDK and sample code for Python.

    Questions on supported products can be answered in our community forums. …

    Read more.

    Louis Columbus (@LouisColumbus) posted Cloud Computing and Enterprise Software Forecast Update, 2012 on 11/8/2012 (missed when published):

    The latest round of cloud computing and enterprise software forecasts reflect the growing influence of analytics, legacy systems integration, mobility and security on IT buyer’s decisions.

    Bain & Company and Gartner have moved beyond aggregate forecasts, and are beginning to forecast by cloud and SaaS adoption stage. SAP is using the Bain adoption model in their vertical market presentations today.

    Despite the predictions of the demise of enterprise software, forecasts and sales cycles I’ve been involved with indicate market growth. Mobility and cloud computing are the catalysts of rejuvenation in many enterprise application areas, and are accelerating sales cycles. Presented in this roundup are market sizes, forecasts and compound annual growth rates (CAGRS) for ten enterprise software segments.

    Key take-aways from the latest cloud computing and enterprise software forecasts are provided below:

    • Public and private cloud computing will be strong catalysts of server growth through 2015. IDC reports that $5.2B in worldwide server revenue was generated in 2011 from 885,000 units sold, and forecasts a $9.4B global market by 2015 on sales of 1.8 million servers. Source: IDC Worldwide Enterprise Server Cloud Computing 2011–2015 http://www.idc.com/getdoc.jsp?containerId=228916
    • IDC reports that enterprise cloud application revenues reached $22.9B in 2011 and are projected to reach $67.3B by 2016, attaining a CAGR of 24%. IDC also predicts that by 2016, $1 of every $5 will be spent on cloud-based software and infrastructure. Report, Worldwide SaaS and Cloud Software 2012–2016 Forecast and 2011 Vendor Shares, Link: http://www.idc.com/getdoc.jsp?containerId=236184
    • 11% of companies are transformational, early adopters of cloud computing, attaining 44% adoption (as defined by % of MIPS) in 2010, growing to 49% in 2013. This same segment will reduce their reliance on traditional, on-premise software from 34% to 30% in the same period according to Bain & Company’s cloud computing survey results shown below. SAP is using this adopter-based model in their vertical market presentations, an example of which is shown here.

    • The three most popular net-new SaaS solutions deployed are CRM (49%), Enterprise Content Management (ECM) (37%) and Digital Content Creation (35%). The three most-replaced on-premise applications are Supply Chain Management (SCM) (35%), Web Conferencing, teaming platforms and social software suites (34%) and Project & Portfolio Management (PPM) (33%). The following graphic shows the full distribution of responses. Source: User Survey Analysis: Using Cloud Services for Mission-Critical Applications Published: 28 September 2012

    • In 2011, the worldwide enterprise application software market generated $115.1B in revenue, and is projected to grow to $157.6B by 2016, attaining a 6.5% CAGR in the forecast period. Gartner reports that 38% of worldwide enterprise software revenue is from maintenance and technical support; 17% from subscription payments; and 56% from ongoing revenue including new purchases. An analysis of the ten enterprise software markets and their relative size and growth are shown in the figure below along with a table showing relative rates of growth from 2011 to 2016. Source: Forecast: Enterprise Software Markets, Worldwide, 2011-2016, 3Q12 Update Published: 12 September 2012 ID:G00234766
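
As a sanity check on figures like these, a compound annual growth rate is simply the constant yearly growth rate connecting the two endpoints, and IDC's quoted 24% can be reproduced from its own numbers. The small function below is illustrative, not from any of the cited reports:

```javascript
// CAGR: the constant annual growth rate that takes `start` to `end` over
// `years` years, i.e. (end / start)^(1 / years) - 1.
function cagr(start, end, years) {
  return Math.pow(end / start, 1 / years) - 1;
}

// IDC's cloud software figures: $22.9B (2011) growing to $67.3B (2016),
// a 5-year span. cagr(22.9, 67.3, 5) comes out to roughly 0.24, matching
// the 24% CAGR quoted above.
```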

    • Kenneth van Surksum (@kennethvs) reported the availability of Tech: Microsoft Windows Azure Poster in an 11/2/2012 post (missed when posted):

    Microsoft has made available for download a PDF detailing Windows Azure in one poster. The poster gives an overview of Windows Azure for developers and IT pros. Summarizes benefits such as global coverage and extensive language support.

    Provides a description of current services by category: app services, data services, compute, networking, and store. Depicts common uses including Cloud Services, Virtual Machines, Web Sites, Mobile Services, and Media Services.

    The poster is intended to be printed and measures 26" x 39".


    Thanks to Marcel van den Berg for providing the news.

    Mike McKeon posted Understanding Application Tenancy with Windows Azure to the Aditi Technologies blog on 11/9/2012:

    One of the prime economic motivations for running your application in a Cloud environment is the ability to distribute the cost of shared resources among multiple customers. At least that’s what all the evangelists and marketing folks like to tell us, right? But the harsh reality of life in the Cloud is that an application’s ability to run safely and efficiently with multiple customers does not just magically ‘happen’ simply by deploying it to Azure. An application’s architecture must be explicitly and carefully designed to support running with multiple customers across multiple virtual machines (VMs). Its implementation must prevent customer-specific processing from negatively affecting processing for other tenants.

    Here I will attempt to simplify the concept of tenancy in Windows Azure by first defining tenants and instances. There are deeper levels to which this discussion could be taken, as entire books have been written on multi-tenancy in a shared computing/storage environment (which is what the Cloud is, after all). So we will only touch the tip of the iceberg when it comes to the science of instance allocations per tenant and multi-tenant data access.

    We will look at various configurations of instances and tenancy and how to structure your application and data. And finally we will wrap up with some strategies for multi-tenancy and how business models relate to tenancy.
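One common building block of the multi-tenant data access Mike describes is isolating each tenant's rows within shared storage. Assuming a Windows Azure Table-style store (PartitionKey/RowKey), prefixing every partition key with a tenant identifier keeps one customer's data, and load, separate from another's. The helper below is a hypothetical illustration, not code from the article:

```javascript
// Hypothetical sketch of partition-per-tenant keying for a shared table store.
// Prefixing the partition key with the tenant ID means each tenant's rows land
// in their own partitions, so one tenant's queries cannot scan (or hot-spot)
// another tenant's data.
function tenantKey(tenantId, entityType, entityId) {
  if (!tenantId) {
    // fail fast: never fall back to a shared, tenant-less partition
    throw new Error("tenantId is required");
  }
  return {
    PartitionKey: tenantId + "_" + entityType, // isolates and groups per tenant
    RowKey: entityId
  };
}

// e.g. tenantKey("contoso", "Invoice", "2012-0042")
//   -> { PartitionKey: "contoso_Invoice", RowKey: "2012-0042" }
```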

    David Linthicum (@DavidLinthicum) asserted “Once a leader in cloud computing, the government has fallen behind much of the tech world in cloud adoption” in a deck for his The U.S. government's cloud mandate loses steam article of 11/9/2012 for Network World’s Cloud Computing blog:

    Government and IT may not sound like a natural pairing, but here in Washington, D.C., they're more closely related than many suspect. In fact, the government's move to cloud computing can have far-reaching implications for the tech industry as a whole.

    Not so long ago -- 2008, to be exact -- you would've thought the government was the undisputed leader of the shift to cloud computing. Remember the NIST definition of cloud computing and the pro-cloud U.S. CIO Vivek Kundra (now an EVP at Salesforce.com)? Those days appear to be long gone; lately, the government is acting more like the larger commercial enterprises as they take baby steps to the cloud. It's time to pick up the pace.

    For government IT, the migration to cloud computing is complicated by the sheer complexity of supported environments and the unique nature of many federal business processes. However, people, processes, and yes, politics get in the way of the government cloud too.

    To succeed, cloud computing requires a change in thinking, including a willingness to give up some control in exchange for efficiency. That's a huge leap for IT teams in government. Federal IT workers may talk up cloud computing at conferences and respond to mandates (such as cloud first), but their actions -- or lack thereof -- indicate they see the cloud as unreachable, scary, and confusing.

    The fact is the government will move to cloud computing at its own pace. Some agencies will migrate more slowly than equivalent commercial enterprises, other agencies will jump to the cloud, and yet more will budge very little, if at all. Most will move backward, building more internal systems, more silos, less effective IT -- and deeper holes.

    I'm disappointed as both a cloud guy and a taxpayer. However, I'm also a realist. I've worked within and around the government long enough to understand the realities. Still, it's a shame that the government has moved from a leader in the cloud computing movement to a reluctant follower.

    I never believed that government agencies would be effective poster children for cloud computing (or any other computer-based technology.)

    <Return to section navigation list>

    Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

    english.eastday.com reported Microsoft picks city [Shanghai] for cloud computing in an 11/2/2012 post:

    MICROSOFT Corp has chosen Shanghai as the first city in China for commercial and public cloud computing services, Microsoft and the Shanghai government said yesterday.

    "It's a milestone agreement to show Microsoft's commitment to China," said Ralph Haupter, chairman and CEO of Microsoft China. "Microsoft will continue to expand and accelerate investment in China, driving the next era of innovation and opportunity."

    The cloud agreement enables customers in China to access Microsoft's Office 365 and Windows Azure services operated by 21Vianet in China, the world's biggest personal computer and Internet market.

    Microsoft's Office 365 provides customers with services including secure access to e-mail and calendars, instant messaging, conferencing and file sharing anywhere. Windows Azure offers customers in China public cloud computing platform services including computing, storage, database, integration and networking services.

    The data centers and related infrastructure, which will be built in Pudong New Area, support cloud computing services that allow users to save, edit, share and process data and files through various devices, including laptops, computers and mobile phones, based on the online "cloud" platform.

    Apple Inc, Amazon Inc and Google Inc have also invested heavily in cloud computing.

    Cloud computing will create 4 million new jobs for China by 2015, IDC predicted.

    Shanghai, Microsoft and 21Vianet Group Inc, Microsoft's cloud service partner and operator in China, signed the strategic partnership agreement yesterday.

    It’s assumed that Microsoft licensed WAPA to 21Vianet Group for China in an arrangement similar to that with Fujitsu for Japan (see Windows Azure Platform Appliance (WAPA) Finally Emerges from the Skunk Works of 6/7/2011.)

    <Return to section navigation list>

    Cloud Security and Governance

    No significant articles today

    <Return to section navigation list>

    Cloud Computing Events

    •• Mary Jo Foley (@maryjofoley) reported Microsoft to deliver SharePoint apps for Windows 8, Windows Phone, iOS starting in early 2013 in an 11/12/2012 post to ZDNet’s All About Microsoft blog:

    Microsoft is working on native mobile SharePoint-connected applications for Windows 8, Windows Phone and iOS, which will begin coming to market by early 2013.


    Microsoft officials are making available prototypes of the Windows Phone 8 version of one of the coming SharePoint applications as of November 12, the day that Microsoft's SharePoint Conference 2012 in Las Vegas kicks off.

    image"We've had a design team four times in size compared to what we've had working on previous SharePoint releases," said Jeff Teper, Corporate Vice President of Office Servers and Services Program Management. The result: Mobile apps that reflect the "modern" style that Microsoft is pushing across Windows 8 and Windows Phone.

    Among the demos the estimated 10,000 attendees of the conference will see are a SharePoint Newsfeed app and a SkyDrive Pro app.

    The SharePoint Newsfeed app is designed to provide users with access from their mobile devices to the people and documents they follow. A preview version of the Windows Phone app -- a screen capture of which is embedded in this post above -- is available today. The SharePoint Newsfeed app will also be available on Android at a later, unspecified date, Microsoft execs said.

    (To get the Windows Phone Newsfeed test build, users running Windows Phone 7.5 or 8 devices should type aka.ms/spwp into their IE mobile browsers and follow instructions from there.)

    Here's a screen shot provided by Microsoft of a mock-up of the Newsfeed app for Windows 8. A test build of this version is not yet available:


    A separate SkyDrive Pro app will provide Windows 8 and iOS users with access to SkyDrive Pro. In spite of some Microsoft officials' claims to the contrary, SkyDrive Pro is the new name for SharePoint Workspace. It allows users to save SharePoint content for offline use. There's already a SkyDrive Pro app available as part of the Office Hub on Windows Phone 8. Microsoft officials plan to announce availability targets for the Windows 8 and iOS versions of this app "at a later time."

    The SharePoint group isn't the first Microsoft team to demonstrate and talk up plans to develop Metro-Style/Windows Store versions of their mobile apps. Microsoft's Dynamics ERP and CRM teams have shown and discussed work on this front, as well. The Microsoft CRM team said it plans to have a native, mobile version of Microsoft CRM for Windows 8 by mid-2013.

    In an interview prior to the start of the SharePoint Conference, Teper reiterated that Microsoft split the SharePoint team in two three years ago. Half the team focused on the cloud/Office 365/SharePoint Online versions of the product. The other half of the team focused on social and mobile.

    "We are looking to further pivot to a devices and services world in a more unified way," Teper said -- reflecting Microsoft's new emphasis on remaking itself as a devices and services company, an evolution of its software and services charter.

    SharePoint is now one of just a handful of Microsoft products that is contributing more than $2 billion annually in revenues, according to Microsoft officials. (That milestone was achieved at the end of Microsoft's fiscal 2012 in June 2012.) SharePoint was one of the first Microsoft products to cross the $1 billion business threshold.

    •• Soma Somasegar (@ssomasegar) described Building Apps for Office and SharePoint in an 11/12/2012 post:

    Several months ago, I shared the news of our new set of development tools for Office and SharePoint, including the in-browser “Napa” tools and the rich client Office Developer Tools for Visual Studio 2012.

    Today at the SharePoint Conference [@SPConf] in Las Vegas, we shared significant updates to these tools, with a range of improved support to make building new apps for Office 2013, Office 365, SharePoint 2013, and SharePoint Online in Office 365 more flexible and productive.

    Over the last few months we’ve been making continuous updates to “Napa”, a lightweight, in-browser companion to the full Visual Studio rich client. These updates have included support for publishing apps to SharePoint, for sharing a project with a friend or with the community, lots of editor improvements, and much more. As you don’t need to install anything onto your machine in order to build apps with “Napa”, it’s the fastest way to get started with Office and SharePoint development. You can do so today at http://dev.office.com.

    Of course, as these projects grow, developers can smoothly transition their work with their projects in the browser to the rich client Office Developer Tools for Visual Studio 2012. With support ranging from new designers to new templates, these tools enable developers to create, edit, build, debug, package, and deploy apps for Office and SharePoint, across all current Office and SharePoint hosting models and app types. Today, we’re releasing Preview 2 of this suite, which you can download and install into Visual Studio Professional 2012, Visual Studio Premium 2012, and Visual Studio Ultimate 2012.

    Additionally, now included as part of the Office Developer Tools for Visual Studio 2012 is the LightSwitch HTML Client for Visual Studio 2012 – Preview 2. With this release, LightSwitch enables developers to easily build touch-oriented business applications with HTML5 that run well across a breadth of devices. These apps can be standalone, but with this preview developers can now also quickly build and deploy data-driven apps for SharePoint using the new web standards-based apps model.

    You can stay up-to-date on the latest in Office development from the team blog at http://blogs.msdn.com/b/officeapps.

    Just downloaded LightSwitch HTML Client Preview 2 and signed up for the Office 365 Preview Developer Pack Preview (must be a preview) and downloaded the Napa bits. Stay tuned for more from http://oakleafsystems210-public.sharepoint.com (presently under construction.)

    •• The Microsoft Shines the Spotlight on the New SharePoint (@SPConf) press release of 11/12/2012 described, inter alia:

    New Cloud App Model

    In addition to broad investments in SharePoint Online, SharePoint 2013 also introduces a new cloud app model for the more than 700,000 developers building on SharePoint. The new app model and a new Office Store make it easier for developers to build, buy, deploy and manage applications using existing Web development skills. [Emphasis added.]

    Scott Guthrie [@scottgu], corporate vice president in Microsoft’s Server and Tools Business division, announced new tools for developing apps for Office and SharePoint with Visual Studio 2012. They work with “Napa” tools for online development of Office 365 apps, include templates and designers to facilitate app development, and support the LightSwitch HTML client for easy creation of data-centric business apps in SharePoint 2013.

    Mobile Apps for Anywhere Access

    Finally, to help people work from anywhere and across any device, Microsoft introduced new native SharePoint mobile apps to give people access to SharePoint news feeds and documents on Windows 8, Windows Phone, iOS and Android devices. All apps will work with both SharePoint 2013 and SharePoint Online.

    The SharePoint Conference 2012 continues through Thursday, Nov. 15, 2012. Session information can be found at the SharePoint Conference 2012 website. Additional information about today’s news can be found at the Official Microsoft Blog.

    The Gu demonstrated deployment of an Office App to Windows Azure in his part of Monday morning’s keynote, which is available on YouTube here. Scott’s demo starts at 01:30:53:


    •• My (@rogerjenn) Windows Azure-Related Sessions at the SharePoint Conference 2012 (@SPConf) post of 11/12/2012 contains the following list:

        • A Real-World Help Desk app: end-to-end: Eric Shupps
        • Building Cloud-hosted apps for SharePoint with PHP and node.JS: Todd Baginski
        • Building end-to-end apps for SharePoint with Windows Azure and Windows 8: Donovan Follette and Todd Baginski
        • Customer Showcase: Bringing business agility using Visio Services, Azure, Windows Phone and SharePoint Online to Deliver Australia: Ed Richard and Richard Sparreboom
        • Deploying SharePoint Farms on Windows Azure Virtual Machines: Paul Stubbs
        • Developing Advanced BI Visualizations with Visio & SharePoint in Office 365 with Azure data integration: Chris Hopkins
        • Developing for Windows Azure Web Sites and SharePoint Online: Stefan Schackow
        • Developing Hybrid apps for SharePoint: apps that work on-premises and in the cloud: Rob Howard
        • Developing SharePoint Workflows with SharePoint Designer 2013 and Visio Pro 2013: JongHwa Lim and Sam Chung
        • Introduction to Windows Azure: Mark Russinovich
        • SharePoint 2013 Workflow Development for Apps and Solutions for SharePoint 2013 with Visual Studio 2012: Tim McConnell
        • SharePoint 2013 Workflow: Architecture and Configuration: Mauricio Ordonez
        • Using Windows Azure Storage with SharePoint for Document Management: Joe Giardino
        • What's New for Developers in Project 2013: Chris Boyd and Eli Sheldon
        • Windows Azure Basics for SharePoint Developers: Steve Fox
        • Windows Azure IaaS Deep Dive for SharePoint IT Professionals: Corey Sanders and Paul Stubbs
        • Windows Azure Media Services and Building Rich Media Solutions for SharePoint: Steve Goulet
        • Windows Azure Virtual Machines (IaaS) and Virtual Networks: Mark Russinovich

    The original post includes session descriptions.

    PRNewsWire reported Aditi Technologies and Microsoft to Showcase Convergence of Windows 8 and Windows Azure in London at GoCloud8 CIO Summit in an 11/12/2012 press release:

    Aditi Technologies (www.aditi.com) announced today its scheduled GoCloud8 event in London in collaboration with Microsoft. GoCloud8 is one of the first previews of Windows 8 and Microsoft Cloud convergence, globally. Microsoft and Aditi Technologies will host over 30 CIOs at the GoCloud8 breakfast session at the Cavendish Hotel, on 13 November, 2012, London.

    Interested parties can register for the breakfast session at http://www.gocloud8.com/Windows8_Application_Development_london.aspx

    "The convergence of cloud and mobility on a single platform is a game changer," said Pradeep Rathinam, CEO of Aditi Technologies. "GoCloud8 leverages the convergence opportunities offered by Microsoft's cloud, mobile and enterprise productivity platforms to seamless connect desktops, tablets and phones. Our customers across healthcare, financial services and e-commerce domains are aggressively leveraging this convergence to engineer great user experience and achieve scale."

    Wade Wegner, Technology Evangelist and CTO, Aditi Technologies will be demonstrating real world Cloud8 cross-device applications which facilitate workforce agility and deployment scale.

    image"We are extremely excited, to unveil the impact on Windows 8 and Windows Azure on User experience. This is a new opportunity to drive better user engagement within Enterprise IT" says Kaushik Banerjee, Vice President -Europe, Aditi Technologies.

    About Aditi Technologies

    Aditi Technologies (www.aditi.com) is a technology services company, specializing in cloud based application and product development with offices in London, Seattle and Bangalore. Aditi has been voted one of the top three Microsoft cloud consulting providers globally, and one of the top five Microsoft technology partners in the U.S.

    About GoCloud8 CIO summit

    The event series is co-hosted by Microsoft and Aditi across 12 cities in the US and UK. The events feature keynotes by Microsoft cloud leaders, demos of multi-device enterprise and line-of-business apps and an interactive panel discussion with CIOs and CTOs on cloud adoption challenges and roadmap. GoCloud8 offers one of the first previews of the new Microsoft technology stack and its impact on enterprise IT roadmap. For more information visit http://www.GoCloud8.com

    Jim O’Neil (@jimoneil) reminded everyone of the On-line Windows Azure Conference–Nov 14th in an 11/9/2012 post:

    imageThis isn’t the first time Microsoft has a run an on-line event, but the Windows Azure Conference next Wednesday definitely includes an air of newness.

    Yes, Scott Guthrie is still keynoting and the event is still free, but the remainder of the speakers are all Windows Azure users: MVPs and Windows Azure insiders who use the tools and technologies day in and day out. You’ll get to hear real life experiences and strategies for leveraging the power of the Windows Azure cloud and its myriad services from users just like you.

    The event runs from 11:30 a.m. to 8:00 p.m. ET and will be streamed entirely on Channel 9, but do register on-line so you can be apprised of updates and additional details as they become available. The session agenda is on-line as well so you can plan your day (keep in mind the times on the site are all in the Pacific time zone!)

    Vittorio Bertocci (@vibronet) reported on 11/8/2012 that he will present a session at the pattern & practices Symposium 2013 event in Redmond, WA on 1/15-1/17/2013:

    patterns and practices Symposium 2013

    Did you come over here for //BUILD? It was an absolute blast! They even let me out of my cage for few hours, just the time to present a session on the latest news we introduced in Windows Azure Active Directory and spend some quality time with you guys to gather your precious & always appreciated feedback.

    Well, [t]his coming January you have another great opportunity to come to observe the Microsoftees in their natural habitat: the pattern&practices Symposium 2013.

The agenda alone should be more than enough to whet your appetite for knowledge, but as you know by now, attending an event in Redmond can be more than the sessions themselves. Given that the engineers are no more than a 20-minute shuttle ride away, it might happen that you ask a question of a speaker and one hour later you are having coffee with the guy who coded the feature you were asking about!

Tomorrow is the last day of early bird pricing, hence if the above sounds enticing to you, do not hesitate :-) Also, the p&p guys honored me with a slot on the first day of the conference: if you want to know more about what’s cooking in Windows Azure AD land, here’s your chance to corner me and ask me anything… see you soon!

    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    Jeff Barr (@jeffbarr) reported New Asia Pacific (Sydney) Region in Australia - EC2, DynamoDB, S3, and Much More in an 11/12/2012 post:

    imageIt is time to expand the AWS footprint once again, with a new Region in Sydney, Australia. AWS customers in Australia can now enjoy fast, low-latency access to the suite of AWS infrastructure services.

    New Region
    The new Sydney Region supports the following AWS services:

    A Tranquil Beach - Shoal Bay, Australia

    imageWe also have an edge location for Route 53 and CloudFront in Sydney.

    This is our ninth Region; see the AWS Global Infrastructure Map for more information. You can see the full list in the Region menu of the AWS Management Console:

    Over 10,000 organizations in Australia and New Zealand are already making use of AWS. Here's a very small sample:

    • The Commonwealth Bank of Australia runs customer-facing web applications on AWS as part of a cloud strategy that has been underway for the past five years. The seamless scaling enabled by AWS has allowed their IT department to focus on innovation.
    • Brandscreen, a fast-growing Australian start-up, has developed a real-time advertising trading platform for the media industry. They use Elastic MapReduce to process vast amounts of data to test out machine learning algorithms. They store well over 1 PB of data in Amazon S3 and add another 10 TB every day.
    • MYOB uses AWS to host the MYOB Atlas, a simple website builder that enables businesses to be online within 15 minutes. They currently have more than 40,000 small and medium-sized businesses using Atlas on the AWS cloud.
    • Halfbrick Studios hosts the highly acclaimed Fruit Ninja game on AWS. They use DynamoDB and multiple Availability Zones to host tens of millions of regular players.

    AWS Partner Network
A number of members of the AWS Partner Network have been preparing for the launch of the new Region. Here's a sampling (send me email with launch day updates):

    We already have a vibrant partner ecosystem in the region. Local Systems Integrators include ASG, Bulletproof Networks, Fronde, Industrie IT, The Frame Group, Melbourne IT, SMS IT and Sourced Group.

    On the Ground
    In order to serve enterprises, government agencies, academic institutions, small-to-mid size companies, startups, and developers, we now have offices in Sydney, Melbourne, and Perth. We will be adding a local technical support operation in 2013 as part of our global network of support centers, all accessible through AWS Support.

    Listen to Andy
    AWS Senior Vice President Andy Jassy will be speaking at our Customer Appreciation Day (November 13, 2012). You can register for and attend the live event if you are in Sydney, or you can watch the live stream from anywhere in the world.

    It’s not easy keeping up with AWS’ geographical and feature expansion.

    Werner Vogels (@werner) riffed on Expanding the Cloud – introducing the Asia Pacific (Sydney) Region in a 11/12/2012 post:

    imageToday, Amazon Web Services is expanding its worldwide coverage with the launch of a new AWS Region in Sydney, Australia. This new Asia Pacific (Sydney) Region has been highly requested by companies worldwide, and it provides low latency access to AWS services for those who target customers in Australia and New Zealand. The Region launches with two Availability Zones to help customers build highly available applications.

    imageI have visited Australia at least twice every year for the past four years and I have seen first-hand evidence of the tremendous interest there is in the AWS service. Many young businesses as well as established enterprises are already using AWS, many of them targeting customers globally. Cool ecommerce sites such as redbubble.com, big traffic sites such as realestate.com.au, innovative crowd sourcing with 99designs, big-data driven real-time advertising trading with Brandscreen, mobile sports apps by Vodafone Hutchinson Australia, big banks like Commonwealth Bank of Australia, these are just a small sample of the wide variety of companies that have been using AWS extensively for quite some time already, and I know they will put the new Region to good use.

    But it is not only the Australian companies who frequently requested a local AWS Region, also companies from outside Australia who would like to start delivering their products and services to the Australian market are enthusiastic about serving Australia with low latency. Many of these firms have wanted to enter this market for years but had refrained due to the daunting task of acquiring local hosting or datacenter capacity. These companies can now benefit from the fact that the new Asia Pacific (Sydney) Region is similar to all other AWS Regions, which enables software developed for other Regions to be quickly deployed in Australia as well.

    You can learn more about our growing global infrastructure footprint at http://aws.amazon.com/about-aws/globalinfrastructure. Please also visit the AWS developer blog for more great stories from our Australian customers and partners.

    Hot: Reuven Cohen (@ruv) reported The Battle For The Cloud: Amazon Proposes ‘Closed’ Top-Level .CLOUD Domain in an 11/6/2012 article for Forbes.com:

    imageAccording to a new proposal document uncovered by the website newgtldsite.com, Amazon.com is proposing a closed registry for the new .CLOUD generic top-level domain (gTLD). In the Amazon .CLOUD application it states “All domains in the .CLOUD registry will remain the property of Amazon. .CLOUD domains may not be delegated or assigned to third party organizations, institutions, or individuals.”

What this means is that, unlike other top-level domains such as .com, .net, and .tv, no individuals, organizations or businesses will be able to register and use a .CLOUD name for their website if the Amazon proposal ultimately wins control of the .CLOUD registry.

    image_thumb11Amazon claims this is to help prevent abuse saying in its proposal “Amazon EU S.à r.l. and its registry service provider, Neustar, recognize that preventing and mitigating abuse and malicious conduct in the .CLOUD registry is an important and significant responsibility. Amazon EU S.à r.l. will leverage Neustar’s extensive experience in establishing and implementing registration policies to prevent and mitigate abusive and malicious domain activity within the proposed .CLOUD space. .CLOUD will be a single entity registry, with all domains registered to Amazon for use in pursuit of Amazon’s business goals. There will be no re-sellers in .CLOUD and there will be no market in .CLOUD domains. Amazon will strictly control the use of .CLOUD domains.”

    imageAmazon describes its intended use of the top level .CLOUD “to provide a unique and dedicated platform for Amazon while simultaneously protecting the integrity of its brand and reputation.”

    Amazon further outlines its .CLOUD strategy saying;

    A .CLOUD registry will:

    • Provide Amazon with additional controls over its technical architecture, offering a stable and secure foundation for online communication and interaction.
    • Provide Amazon a further platform for innovation.
    • Enable Amazon to protect its intellectual property rights

When asked about the goal of the proposed gTLD in terms of areas of specialty, service levels or reputation, the company answered by saying “Amazon responses noted that it intends for its new .CLOUD gTLD to provide a unique and dedicated platform for stable and secure online communication and interaction. The .CLOUD registry will be run in line with current industry standards of good registry practice.”

Also interesting: when asked to describe whether and in what ways Amazon will provide outreach and communications to help achieve its projected benefits, it said “There is no foreseeable reason for Amazon to undertake public outreach or mass communication about its new gTLD registry because domains will be provisioned in line with Amazon’s business goals.”

    Amazon isn’t alone in wanting the .CLOUD top-level domain for itself, but currently Amazon is said to be a front runner in attempting to control the .CLOUD gTLD. …

    Read more about other .CLOUD applicants and their plans.

    Yet another attempt to monopolize (i.e., abuse) the term after Dell Computer’s aborted attempt to trademark “cloud computing” in 2008. Would Amazon be a benevolent dictator of the .CLOUD TLD? Not likely.

    Following is a Twitter conversation on the subject between @rogerjenn, @samj and @rvmNL:


    Remco van Mook is Director of Interconnection, EMEA at Equinix and his Twitter profile claims he’s an “Internet numbers bigwig.”

Jeff Barr (@jeffbarr) described Amazon SQS - Long Polling and Request Batching / Client-Side Buffering in an 11/8/2012 post:

    imageWe announced the Simple Queue Service (SQS) eight years ago, give or take a day. Although this was our first infrastructure web service, we launched it with little fanfare and gave no hint that this was just the first of many such services on the drawing board. I'm sure that some people looked at it and said "Huh, that's odd. Why is my online retailer trying to sell me a message queuing service?" Given that we are, as Jeff Bezos has said, "willing to be misunderstood for long periods of time," we didn't see the need to say any more.

image_thumb11Today, I look back on that humble blog post and marvel at how far we have come in such a short time. We have a broad array of infrastructure services (with more on the drawing board), tons of amazing customers doing amazing things, and we just sold out our first-ever AWS conference (but you can still register for the free live stream). SQS is an essential component of any scalable, fault-tolerant architecture (see the AWS Architecture Center for more information on this topic).

    As we always do, we launched SQS with a minimal feature set and an open ear to make sure that we met the needs of our customers as we evolved it. Over the years we have added batch operations, delay queues, timers, AWS Management Console Support, CloudWatch Metrics, and more.

    We're adding two important features to SQS today: long polling and request batching/client-side buffering. Let's take a look at each one.

    Long Polling
If you have ever written code that calls the SQS ReceiveMessage function, you'll really appreciate this new feature. As you have undoubtedly figured out for yourself, you need to make a tradeoff when you design your application's polling model. You want to poll as often as possible to keep end-to-end throughput as high as possible, but polling in a tight loop burns CPU cycles and gets expensive.

    Our new long polling model will obviate the need for you to make this difficult tradeoff. You can now make a single ReceiveMessage call that will wait for between 1 and 20 seconds for a message to become available. Messages are still delivered to you as soon as possible; there's no delay when messages are available.

    As an important side effect, long polling checks all of the SQS hosts for messages (regular polling checks a subset). If a long poll returns an empty set of messages, you can be confident that no unprocessed messages are present in the queue.

    You can make use of long polling on a call-by-call basis by setting the WaitTimeSeconds parameter to a non-zero value when you call ReceiveMessage. You can also set the corresponding queue attribute to a non-zero value using the SetQueueAttributes function and it will become the default for all subsequent ReceiveMessage calls on the queue. You can also set it from the AWS Management Console:

    Calls to ReceiveMessage for long polls cost the same as short polls. Similarly, the batch APIs cost the same as the non-batch versions. You get better performance and lower costs by using long polls and the batch APIs.
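To make the tradeoff concrete, here is a stdlib-only Java sketch of the two polling behaviors. It is purely illustrative and does not call the SQS API: a local `BlockingQueue` stands in for the queue, its non-blocking `poll()` plays the role of a short poll, and its timed `poll(timeout, unit)` plays the role of a long poll with a non-zero `WaitTimeSeconds`.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only: this is NOT the SQS API. It models the
// difference between a short poll (return immediately, possibly empty)
// and a long poll (block up to a timeout for a message to arrive)
// using a local in-memory queue.
public class PollDemo {
    static final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

    // Short poll: returns null right away if the queue is empty.
    static String shortPoll() {
        return queue.poll();
    }

    // Long poll: waits up to waitSeconds for a message, analogous to
    // ReceiveMessage with a non-zero WaitTimeSeconds parameter.
    static String longPoll(int waitSeconds) throws InterruptedException {
        return queue.poll(waitSeconds, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("short poll on empty queue: " + shortPoll());
        // A producer delivers a message shortly after we start waiting.
        new Thread(() -> {
            try {
                Thread.sleep(200);
                queue.put("hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        // The long poll returns as soon as the message arrives, well
        // before the full 5-second timeout elapses.
        System.out.println("long poll result: " + longPoll(5));
    }
}
```

As with real long polling, the timed call returns as soon as a message becomes available rather than waiting out the full timeout, so there is no added delivery delay.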

    Request Batching / Client-Side Buffering
    This pair of related features is actually implemented in the AWS SDK for Java. The SDK now includes a buffered, asynchronous SQS client.

    You can now enable client-side buffering and request batching for any of your SQS queues. Once you have done so, the SDK will automatically and transparently buffer up to 10 requests and send them to SQS in one batch. Request batching has the potential to reduce your SQS charges since the entire batch counts as a single request for billing purposes.

If you are already using the SDK, you simply instantiate the AmazonSQSAsyncClient instead of the AmazonSQSClient, and then use it to create an AmazonSQSBufferedAsyncClient object:

    // Create the basic SQS async client
    AmazonSQSAsync sqsAsync = new AmazonSQSAsyncClient(credentials);
    // Create the buffered client
    AmazonSQSAsync bufferedSqs = new AmazonSQSBufferedAsyncClient(sqsAsync);

    Then you use it to make requests in the usual way:

    SendMessageRequest request = new SendMessageRequest();
    String body = "test message_" + System.currentTimeMillis();
    request.setMessageBody( body );
    SendMessageResult sendResult = bufferedSqs.sendMessage(request);

    The SDK will take care of all of the details for you!

    You can fine-tune the batching mechanism by setting the maxBatchOpenMs and maxBatchSize parameters as described in the SQS Developer Guide:

    • The maxBatchOpenMs parameter specifies the maximum amount of time, in milliseconds, that an outgoing call waits for other calls of the same type to batch with.
    • The maxBatchSize parameter specifies the maximum number of messages that will be batched together in a single batch request.
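To illustrate how these two parameters interact, here is a stdlib-only sketch of the flush rule a buffered client applies. This is a simplified model, not the SDK's actual implementation, and the class and method names are invented for the example: a batch is dispatched when it reaches `maxBatchSize` messages, or when `maxBatchOpenMs` has elapsed since the first message was buffered, whichever comes first.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of batch flushing (not the SDK's implementation).
// A batch is sent when it is full (maxBatchSize) or when it has been
// open longer than maxBatchOpenMs, whichever happens first. In a real
// client a background timer would trigger the time-based flush; here
// the caller passes the clock in explicitly for clarity.
public class BatchBuffer {
    final int maxBatchSize;
    final long maxBatchOpenMs;
    final List<String> buffer = new ArrayList<>();
    long batchOpenedAt = -1;   // ms timestamp when the first message arrived
    final List<List<String>> sentBatches = new ArrayList<>();

    BatchBuffer(int maxBatchSize, long maxBatchOpenMs) {
        this.maxBatchSize = maxBatchSize;
        this.maxBatchOpenMs = maxBatchOpenMs;
    }

    // Called for every outgoing message; nowMs is injected for testability.
    void add(String message, long nowMs) {
        if (buffer.isEmpty()) batchOpenedAt = nowMs;
        buffer.add(message);
        maybeFlush(nowMs);
    }

    void maybeFlush(long nowMs) {
        boolean full = buffer.size() >= maxBatchSize;
        boolean expired = !buffer.isEmpty() && nowMs - batchOpenedAt >= maxBatchOpenMs;
        if (full || expired) {
            sentBatches.add(new ArrayList<>(buffer));  // one batch request
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        BatchBuffer b = new BatchBuffer(3, 200);
        b.add("a", 0);
        b.add("b", 10);
        b.add("c", 20);   // flushed immediately: batch is full
        b.add("d", 100);
        b.maybeFlush(350); // flushed later: maxBatchOpenMs exceeded
        System.out.println(b.sentBatches);
    }
}
```

A smaller `maxBatchOpenMs` trades some batching efficiency for lower per-message latency; a larger value packs fuller batches, which is what reduces the billed request count.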

    The SDK can also pre-fetch and then buffer up multiple messages from a queue. Again, this will reduce latency and has the potential to reduce your SQS charges. When your application calls ReceiveMessage, you'll get a pre-fetched, buffered message if possible. This should work especially well in high-volume message processing applications, but is definitely applicable and valuable at any scale. Fine-tuning can be done using the maxDoneReceiveBatches and maxInflightReceiveBatches parameters.

    If you are using this new feature (and you really should), you'll want to examine and perhaps increase your queue's visibility timeout accordingly. Messages will be buffered locally until they are received or their visibility timeout is reached; be sure to take this new timing component into account to avoid surprises.

    Help Wanted
    The Amazon Web Services Messaging team is growing and we are looking to add new members who are passionate about building large-scale distributed systems. If you are a software development engineer, quality assurance engineer, or engineering manager/leader, we would like to hear from you. We are moving fast, so send your resume to aws-messaging-jobs@amazon.com and we will get back to you immediately.

    Werner Vogels (@werner) elaborated on Improving the Cloud - More Efficient Queuing with SQS in an 11/8/2012 post:

imageThe Amazon Simple Queue Service (SQS) is a highly scalable, reliable and elastic queuing service that 'just works'. Customers from various verticals (media, social gaming, mobile, news, advertisement) such as Netflix, Shazam and Scopely have used SQS in a variety of use-cases requiring loose coupling and high performance. For example, AWS customers use SQS for asynchronous communication pipelines, buffer queues for databases, asynchronous work queues, and moving latency out of highly responsive request paths.

    image_thumb11Today, the SQS team is launching two important features – Long Polling and richer client functionality in the SQS SDK – that we believe will extend the reach of SQS to new use cases by reducing the cost of high scale messaging for our customers.

    Long polling reduces extraneous polling to help you receive new messages as quickly as possible. Customers tell us they poll SQS quickly because they want to retrieve messages as soon as they become available. But when the message rate fluctuates, it produces empty receives, meaning extra work and extra cost.

    With long polling, SQS instead waits for a message to become available and sends it to the client if the message arrives within a customer-defined time period. By reducing extraneous polling, we expect this feature to lower the cost of using SQS for any given volume of messages, while still delivering messages quickly for customers who would otherwise poll their queues rapidly. It can eliminate the need for 'back-off algorithms' that dynamically adjust SQS polling frequencies.

    In addition to Long Polling, we are also launching richer client functionality in the Java SDK. This rich client extends the existing AmazonSQSAsyncClient interface to provide batching of outgoing messages, and also pre-fetching of incoming messages. Rich client also uses long-polling under the hood.

    When rich client receives a new outgoing message, it waits a short, configurable period of time to see if other outgoing messages arrive. If they do, they are added to the buffer. This enables your applications to take advantage of batch pricing more easily, without custom development. Rich client can prefetch batches of incoming messages, so that your application can process the new messages immediately once it's through with a current batch. Similar to long polling, we expect rich client to improve the performance of SQS for our customers while decreasing their costs.

    Historically, messaging has been an important building block for building highly reliable distributed systems. Within Amazon’s e-commerce platform, messaging systems have always been a key part of our service-oriented architecture to build an asynchronous communication pipeline between different services. Today, SQS is a key part of this architecture and is used in mission critical backend systems for a myriad of use-cases in the Kindle platform, Amazon Retail Ordering workflow, Amazon Fulfillment technologies, etc.

    Similarly, AWS customers have been using SQS in interesting ways. For example, Netflix uses SQS for a variety of use-cases such as monitoring and encoding workflows. Netflix’s Chief Architect Adrian Cockcroft blogged “Simple Queue Service (SQS) is very useful, easy to use, scalable and reliable. We ran our own message queue service in our datacenter for many years, and it wasn’t a happy experience. I’m very glad it’s someone else’s problem, and we use SQS heavily in our architecture.”

    Shazam, developers of the mobile discovery app, report using SQS as a buffer for DynamoDB. Amazon SQS provides highly scalable ‘eventual throughput’ for cases when Shazam’s message rate exceeds the throughput they provision for DynamoDB.

    Scopely, the social mobile games developer, buffers most operations via SQS to maximize performance for gamers who may play in very short bursts of activity. "The absolute minimum of activity happens synchronously," reports Scopely CTO Ankur Bulsara, "everything else happens via SQS -- asynchronously, but still very quickly."

    Higher performance at lower cost means customers can use SQS for even more demanding use cases. So we've worked hard to ensure these new features are as easy to adopt as possible. For more information on the use of Long Polling and Rich Client, please see the appropriate topics in the Amazon SQS Developer Guide.

If you have an interesting SQS use case that you’d like to tell us about, please let me know in the comments below. For more information visit the SQS detail page.

    Barb Darrow (@gigabarb) reported Google spiffs up Cloud SQL database with more storage, faster reads in an 11/8/2012 post to GigaOm’s Cloud blog:

imageGoogle continues to work on its cloud services, unveiling on Thursday enhancements to Cloud SQL, a version of the MySQL database running on Google’s infrastructure. The updated service gives users up to 100GB of storage — a 10x increase from the previous 10GB limit. Further, each database instance is now able to cache up to 16GB of RAM. That’s four times the previous 4GB limit and will mean faster database reads, according to a Google Enterprise blog post.

    The enhancements — which also include an asynchronous replication option to speed up database writes — are another indication that Google is taking its non-search-related infrastructure business seriously. Amazon Web Services, the king of public cloud services, certainly appears to think so. Amazon cut the price of its Relational Database Service (RDS) earlier this week, and recently sued a former AWS exec for joining Google to work on the competitive Google Compute Engine.

    imageGoogle launched a limited preview beta of Cloud SQL in October 2011. Last June, it unveiled price plans for the service, which executives said was the most-requested feature in Google App Engine. Now, the company is offering a limited-time free trial of the product for those wanting to kick the tires. The six-month trial gives users access to one Cloud SQL instance with limited RAM, .5GB of database storage, and enough network and IOPs to run the instance with “reasonable performance,” according to the Cloud SQL pricing site.

    Cloud SQL customers can now also opt to run their database instances in U.S. or European data centers — another first.

    Full disclosure: I’m a registered GigaOm Analyst.

Running MySQL in U.S. or European data centers might be a first for Google, but Windows Azure SQL Database has been running in Microsoft’s U.S., European and Asian data centers for more than a year.

    <Return to section navigation list>