Tuesday, November 15, 2011

Windows Azure and Cloud Computing Posts for 11/15/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


• Updated 11/16/2011 4:00 PM PST with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

No significant articles today.


<Return to section navigation list>

SQL Azure Database and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics and OData

Updated my (@rogerjenn) sample Microsoft Codename “Social Analytics” Windows Form Client again on 11/15/2011 with more Social Analytics graphing features for the VancouverWindows8 Data Set:

The main form’s chart received a facelift: I replaced the Calibri font with a smaller Verdana counterpart and added series for Average Positive and Negative Tone Reliability:


Notice the significant change in Negative versus Positive sentiment in the tweets for 11/15 and 11/16/2011.

I also added code to generate an Excel-compatible CSV file containing daily data points saved in the user’s …\AppData\Local folder:


The low number of Tweets for 11/16/2011 is due to not having a full 24 hours of data. Notice that average Tone Reliability data is quite consistent over time, and that negative tones are a bit more reliable than positive tones.

I expect to make the sample project available for download the week of 11/20/2011.


Microsoft’s Codename “Social Analytics” Team described Finding Top Keywords with the Social Analytics API in an 11/14/2011 post:

There are over 50 Entities and Service Operations available in the Social Analytics Lab API. Over the next few posts we will provide details on how the Entities can be used to accomplish some basic scenarios in the Social Analytics lab.

The first scenario we’ll look at identifies top keywords for one of the lab datasets. To accomplish this, you only need two Entities in our API (described below):

Keywords

The Keywords entity includes every keyword that has been defined or observed by our curation processes. After two weeks of activity on the Bill Gates Lab, we already have an inventory of over 10,000 keywords. New keywords observed in the Social Stream are marked as New. These new keywords are useful to consider when defining filters (coming in a future version) to curate relevant conversations.

With the following LINQ Statement in LINQPad, we can get a sample of the active keywords in the Bill Gates Lab:

(from k in Keywords
 where k.ItemStatusId == 10 // Active
 select new
 {
     k.Name,
 }).Take(10)

Name
Vaccinations
Stanford
Philanthropic
Germany
BillGates
Global Health
Bill Clinton
UN Foundation
GatesFoundation
March of Dimes
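
Because the lab API is an OData service, LINQPad translates queries like the one above into plain HTTP GETs. Against the lab’s service root (shown as a placeholder here), the request would look roughly like this:

https://<social-analytics-service-root>/Keywords()?$filter=ItemStatusId eq 10&$select=Name&$top=10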

KeywordReferenceEntities

KeywordReferenceEntities are the record of what Keywords are referenced in Content Items and Message Threads. In our taxonomy, a Content Item is a post of any kind, ranging from gestures such as Facebook likes to a Tweet to a Blog Post. A Message Thread is a summary of an original post in a feed and all responses to that post in the same feed.

With the following LINQ Statement in LINQPad, we can get a sample of Keyword References:

(from k in KeywordReferenceEntities
 select new
 {
     KeywordName = k.Keyword.Name,
     k.MessageThread.LastReplyContentItem.Title,
     k.MessageThread.LastReplyContentItem.HtmlUrl,
 }).Take(10)

KeywordName: Bill Gates
Title: "Ô Bill Gates, vamos dar um jeito na internet aqui do Brasil, hein?" http://t.co/flCjtGrx
HtmlUrl: http://twitter.com/13_levi/statuses/126703589409304576

KeywordName: Bill Gates
Title: Em palestra paga por um banco americano Lula se supera: “Sem um dedo, fiz mais que Bill Gates e Steve Jobs”.
HtmlUrl: http://twitter.com/Thais_Grimald/statuses/126703689510551552

KeywordName: Bill Gates
Title: Rapaz......Isso não era pra hoje não viu....Até agora sem almoçar! Se pudesse, mandava Bill Gates para junto de Jobs!
HtmlUrl: http://twitter.com/mbatistasilva/statuses/126703767683993600

KeywordName: Bill Gates
Title: RT @GuidoFawkes: Nobody loves Bill Gates, but if he has funded a Malaria vaccine, it makes up for everything and is kind of more signifi ...
HtmlUrl: http://twitter.com/TadasLabudis/statuses/128649702337032192

KeywordName: Bill Gates
Title: RT @GuidoFawkes: Nobody loves Bill Gates, but if he has funded a Malaria vaccine, it makes up for everything and is kind of more signifi ...
HtmlUrl: http://twitter.com/TadasLabudis/statuses/128649702337032192

KeywordName: BillGates
Title: BillGates Bill Gates: For those of us lucky enough to get to work with Steve, it’s been an insanely great honor. I will miss Steve immensely.
HtmlUrl: http://twitter.com/Carolina_S_O/statuses/126704102003572736

KeywordName: Bill Gates
Title: BillGates Bill Gates: For those of us lucky enough to get to work with Steve, it’s been an insanely great honor. I will miss Steve immensely.
HtmlUrl: http://twitter.com/Carolina_S_O/statuses/126704102003572736

KeywordName: endmalaria
Title: RT @PATHtweets: Blog: Read commentary on 2011 @gatesfoundation Malaria Summit from our Malaria Control Program team. http://t.co/DSdlUoO ...
HtmlUrl: http://twitter.com/EVERY_ONE_CAN/statuses/126704433638805504

KeywordName: GatesFoundation
Title: RT @PATHtweets: Blog: Read commentary on 2011 @gatesfoundation Malaria Summit from our Malaria Control Program team. http://t.co/DSdlUoO ...
HtmlUrl: http://twitter.com/EVERY_ONE_CAN/statuses/126704433638805504

KeywordName: Bill & Melinda Gates Foundation Malaria Forum
Title: Using information systems to track a killer parasite
HtmlUrl: http://macepadiary.wordpress.com/2011/10/18/using-information-systems-to-track-a-killer-parasite/

Top Keywords for the Last Week

Using these two Entities together, we can calculate the top 5 Keywords referenced in the last week for Bill Gates. Here’s the code:

(from k in Keywords.Expand("KeywordReferences").AsEnumerable()
 where k.ItemStatusId == 10
    && k.KeywordReferences.Max(kr => kr.LastUpdatedOn) > DateTime.Now.AddDays(-7)
 orderby k.KeywordReferences.Count descending
 select new
 {
     k.Id,
     k.Name,
     References = k.KeywordReferences.Count
 }).Take(5)

and the results:

Id                                    Name           References
9acbefab-156f-4039-98b9-0a56c2d359d6  Global Health  108
0db05e2f-233f-4155-93db-01a07c693017  philanthropic  102
d268a214-0033-4f89-b297-051f2fdf7029  Germany        91
f41b31fb-63ce-47bc-ab72-016a4e43803a  Stanford       68
a65d7774-59d7-42d5-a224-013b226cdcae  Vaccinations   35

Check out our blog later this week to find out more about what you can do with the entities in the Social Analytics Lab.

Also, please let us know what questions you have or other interesting discoveries you find in the labs.


Updated my (@rogerjenn) Microsoft Codename “Social Analytics” ContentItems Missing CalculatedToneId and ToneReliability Values post on 11/15/2011 with more Social Analytics graphing features for the VancouverWindows8 Data Set:

Update 11/14/2011 1:45 PM PST: The CalculatedToneId and ToneReliability values reappeared at about 1:35 PM PST, as illustrated in this screen capture of the test application in progress:


The screen capture shows a later version of the chart, which adds daily count data for Tone Positives and Negatives. Execution time values are erroneous due to an apparent bug in the System.Diagnostics.Stopwatch object, which causes it to reset at random intervals. I’m considering adding average tone reliability values and optional labels to the points. …

Following is the graph for all 100,000 requested rows:


The abrupt increase in Tweet count per day that occurred on 10/27/2011 is believed to be a data sampling artifact.


For more details about Codename “Social Analytics,” see:


Glenn Gailey (@ggailey777) began a new OData series with a Sync’ing OData to Local Storage in Windows Phone (Part 1) post on 11/14/2011:

My First T4 Template

This is the first blog post in a new series that focuses on my work to develop prescriptive guidance for synchronizing clients with cloud services—more specifically, how to best create and maintain a local cache of OData entities on a Windows Phone client. This post deals specifically with my first attempt at creating a local cache by generating client objects that support both the OData client for Windows Phone and local database on the device.

Background

Both to support my writing work for OData and Windows Phone and to improve the performance of my PASS Event Browser app for Windows Phone 7.5 (“Mango”), I’ve been deep into figuring out a “best practice” for synchronizing data from an OData feed into local storage on the device. There are several reasons why storing a cache of OData object data on the device is a good idea:

  1. Reduced amount of network traffic. This is the most important benefit of caching data locally. Otherwise, every time the app starts it has to hit the OData feed to load initial data, which is a huge waste on OData feeds that are relatively static.
  2. Improved load time. It’s much faster to load data from the device than to make a call over the Web to the OData service.
  3. Apps don’t need network access to start. When the app is dependent on remote data, it can’t start without a connection.
  4. Puts the user in control. Users can decide if and when to sync the local data to the OData service, since most of them pay for their data.

Windows Phone 7.5 includes a new local storage feature called local database. Local database is a LINQ-to-SQL implementation to access a local SQL Server CE instance running on the device. (It seems like the death of beloved LINQ-to-SQL—introduced in .NET Framework 3.5 as a key cornerstone of language-integrated query (LINQ)—has been greatly exaggerated.) This is clearly where we should be persisting our OData entities.

The Challenge

LINQ-to-SQL (L2S) was developed by the LINQ folks in parallel with LINQ-to-Entities (L2E), which shipped with the Entity Framework in .NET Framework 3.5 SP1. L2S was very good at what it was designed to do, namely to provide a lightweight, 1:1 object-relational mapper (ORM) for SQL Server to support LINQ queries. Entity Framework is, by design, a much more powerful mapping engine based around the concepts of the Entity Data Model (EDM) along with an ORM, and L2E has only recently caught up with some of the most popular ORM functionality of L2S.

In looking at the problem space, it seems obvious why L2S was chosen to support relational data storage on a device: a) the LINQ assembly was already being added in Mango, in part to support the OData client; b) it’s lightweight; and c) it only needs to support a single kind of database, SQL Server CE. The addition of a local database and L2S access is great news for Windows Phone developers, and it provides a great place to store entities downloaded from an OData service.

OData, like Entity Framework, is based on the tenets of EDM, and it also uses a similar approach to generating a client proxy as L2E in EF v1. Unfortunately, the client proxy generated from an OData service’s metadata by using DataSvcUtil.exe or Add Service Reference in Visual Studio is incompatible with local database (L2S), which requires that a different set of attributes be applied to stored types and members.

The Solution: Generate a Hybrid Proxy

Fortunately, an entity type, in general, looks the same in both local database and OData. This means that to be able to store entity data from an OData service, all we need to do is attribute the generated classes and properties of an OData client with the Table, Column, and Association mapping attributes required by local database. Then we can use this “hybrid” proxy to store our OData feeds in a local database as shown in the topic How to: Create a Basic Local Database Application for Windows Phone.

The trouble with this approach is that once you manually add attributes to a generated proxy to support local database, you are stuck maintaining that code page: updates will overwrite the customizations. (As such, you might just as well use DataSvcUtil.exe instead of Add Service Reference, which updates too easily.) The good news is that Visual Studio does provide a way to generate code pages from templates.

What the Heck is T4?

Not another Terminator movie: T4 is short for Text Template Transformation Toolkit. It is “a mixture of text blocks and control logic that can generate a text file.” If you have used, say, a server-side Web scripting language to generate ASP pages, it works a bit like that. Using T4, you can define a template that parses the $metadata endpoint of an OData service to generate a hybrid proxy. Unlike Entity Framework, which now has several T4 templates, there are (as yet) no official T4 templates for the WCF Data Services (OData) clients that I could find. Fortunately, I came across a great blog post by Alexey Zakharov on Silverlight Show that describes how to write a basic T4 template to generate a functional set of client proxy classes by parsing the metadata returned by an OData service. He even posted his T4 source code—nice!

Here’s a (partial) example of what the template code looks like for generating a C# code page:

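The template itself appeared as a screenshot in the original post. As a rough sketch of the shape of such a C#-based T4 template (LoadMetadata, the EntityTypes member, and the service URL are illustrative stand-ins, not Alexey’s actual code):

<#@ template language="C#" hostspecific="true" #>
<#@ output extension=".cs" #>
<#@ include file="MetadataHelper.tt" #>
<#
    // Illustrative only: load the $metadata document and emit one class per entity type.
    var data = LoadMetadata("http://example.com/MyService.svc/$metadata");
    foreach (var entity in data.EntityTypes)
    {
#>
public partial class <#= entity.Name #>
{
    // ... properties and mapping attributes emitted here ...
}
<#
    }
#>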

When executed by Visual Studio, this template calls the linked MetadataHelper.tt template, which actually accesses the OData service to load the metadata into an instance of the Data class. This class is then used to generate the types and members needed to represent the OData entities on the client. The generated proxy code then looks like this:

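The generated proxy was also shown as a screenshot. A minimal sketch of what such a hybrid class looks like, combining the OData client’s key attribute with the local database (LINQ to SQL) mapping attributes (the Title entity and its members are hypothetical):

using System;
using System.Data.Linq.Mapping;     // local database (L2S) mapping attributes
using System.Data.Services.Common;  // OData client attributes

[DataServiceKey("Id")]              // used by the OData client
[Table(Name = "Titles")]            // used by local database
public partial class Title
{
    [Column(IsPrimaryKey = true)]
    public Guid Id { get; set; }

    [Column]
    public string Name { get; set; }
}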

The original metadata loader was very elegantly coded, but I had to make some (less elegant) modifications so that the metadata parser no longer chokes on complex types (and Edm.Decimal was also missing for some reason). Plus, I had to capture some additional metadata needed to generate the L2S proxy. Because the original template generated a pure OData proxy, I also needed to update the template to add the local database mapping attributes and generate a strongly typed DataContext class.

Some Limitations

OData itself is based on the EDM, which supports complex types. However, because LINQ to SQL only supports 1:1 mapping between types, there is no concept in local database of a complex type. At this point, I don’t think that there is a workaround for storing entities with complex properties in local database. EDM also has a concept of a direct many-to-many (*:*) association between entities. This relationship can only exist in the database by using a mapping table, so I don’t think these kinds of associations can be created using L2S, but I’m not an L2S expert yet either.

When I get all of this figured out and my templates better tested, I plan to publish them somewhere for general use, at least on MSDN Code Gallery, but also (hopefully) as a package on NuGet. There is also stuff like OData binary streams (media resources) that we need to handle outside of the generated proxy and that we don’t want to store in the local database.

I’ll be covering this in future posts, so stay tuned…


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Michael Washam (@MWashamMS) described a Handy Library for Dealing with Transient Connectivity in the Cloud in an 11/14/2011 post:

The Windows Azure CAT team has built a library (available via NuGet) that provides a lot of functionality around handling transient connection problems across the Windows Azure Platform, with SQL Azure, Service Bus, Cache, Configuration, and Storage supported (the library has been around a while, but this is the first chance I’ve had to try it out).

To use it, add the TransientFaultHandlingFx reference:


In addition to adding the assemblies it also adds a .chm to your project with full documentation on how to use the library.

There are numerous ways to actually use the library. For SQL Azure I would recommend reading the whitepaper the Windows Azure CAT team published.

The method I chose was to configure a retry policy in my web/app.config:

<configSections>
  <section name="RetryPolicyConfiguration" type="Microsoft.AzureCAT.Samples.TransientFaultHandling.Configuration.RetryPolicyConfigurationSettings, Microsoft.AzureCAT.Samples.TransientFaultHandling" />
</configSections>

<RetryPolicyConfiguration defaultPolicy="FixedIntervalDefault" defaultSqlConnectionPolicy="FixedIntervalDefault" defaultSqlCommandPolicy="FixedIntervalDefault" defaultStoragePolicy="IncrementalIntervalDefault" defaultCommunicationPolicy="IncrementalIntervalDefault">
  <add name="FixedIntervalDefault" maxRetryCount="10" retryInterval="100" />
  <add name="IncrementalIntervalDefault" maxRetryCount="10" retryInterval="100" retryIncrement="50" />
  <add name="ExponentialIntervalDefault" maxRetryCount="10" minBackoff="100" maxBackoff="1000" deltaBackoff="100" />
</RetryPolicyConfiguration>

From there it’s simple to create a RetryPolicy object from the configuration:

public static RetryPolicy GetRetryPolicy()
{
    // Retrieve the retry policy settings from the application configuration file.
    RetryPolicyConfigurationSettings retryPolicySettings =
        ApplicationConfiguration.Current.GetConfigurationSection<RetryPolicyConfigurationSettings>(
            RetryPolicyConfigurationSettings.SectionName);

    // Retrieve the required retry policy definition by its friendly name.
    RetryPolicyInfo retryPolicyInfo = retryPolicySettings.Policies.Get("FixedIntervalDefault");

    // Create an instance of the respective retry policy using the transient error detection strategy for SQL Azure.
    RetryPolicy sqlAzureRetryPolicy = retryPolicyInfo.CreatePolicy<SqlAzureTransientErrorDetectionStrategy>();

    return sqlAzureRetryPolicy;
}

You can pass the RetryPolicy object to the extension methods (for ADO.NET in my example):

sqlCon.OpenWithRetry(rp); // for SqlConnection
object rValue = sqlCmd.ExecuteScalarWithRetry(rp); // from SQLCommand

There is functionality for LINQ as well.
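
The RetryPolicy can also wrap arbitrary blocks of code. A minimal sketch, assuming the ExecuteAction overload exposed by the CAT library (the connection string is a placeholder):

var rp = GetRetryPolicy();

// The entire delegate is re-executed whenever a transient fault is detected.
rp.ExecuteAction(() =>
{
    using (var sqlCon = new SqlConnection("<your-sql-azure-connection-string>"))
    {
        sqlCon.Open();
        // ... execute commands here ...
    }
});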

This library not only makes your code more robust, it can also save you a massive amount of time, since the team has already put the resources into testing and debugging it.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

The Windows Azure Connect Team announced Windows Azure Connect is now open CTP in an 11/15/2011 post:

Windows Azure Connect CTP is now open to everyone. In the past, you needed to request CTP access and be approved before you could use Connect. Starting today, no approval is needed. You can use Windows Azure Connect as long as you have a Windows Azure subscription. To start using it, go to the Windows Azure Connect portal.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Nathan Totten (@ntotten) announced availability of the Windows Azure Toolkit for Social Games Version 1.1 on 11/16/2011:

I am happy to report that we have updated the Windows Azure Toolkit for Social Games to version 1.1. You can download the release here. This version is a significant step forward and enables developers to more easily and quickly build out their social games on the Windows Azure Platform.

The biggest change we have made in this release is to separate the core toolkit from the Tankster game. After we released the Tankster sample game we received a lot of feedback asking for a simpler game that developers could use to learn. To meet this need we developed two simple games, Tic-Tac-Toe and Four in a Row, and included them in the toolkit. The Tankster game is now available separately as a sample built on top of the toolkit.

Below you can see the sample Tic-Tac-Toe game.


While the new games included in the toolkit are much simpler than Tankster, they still show the same core concepts. You can easily use these samples as a starting point to build out any number of types of games. Additionally, you will find that many of the core components of the game, such as the leaderboard and game command services, can be used without any modification to the server-side or client-side code.

The features included in the new version of the Windows Azure Toolkit for Social Gaming are listed below.

  • Sample Games: Tic-Tac-Toe and Four in a Row
  • Game Invitations
  • Leaderboards
  • Game Friends
  • User Profile
  • Tests for both server and client code
  • Reusable JavaScript libraries

In order to make the toolkit easy to set up and start using, we have included our improved dependency checker and setup scripts. When you run the default setup you simply have to open the toolkit solution and run it. Everything is preconfigured to run on your local developer machine. You can read the full instructions for the setup here.

Windows Azure Toolkit for Social Games dependency checker.

In addition to running the toolkit locally you can also use the setup tools to configure the toolkit for deployment to Windows Azure. You can read the full instructions for deploying the toolkit here. Publishing the toolkit is even easier than before with the updated Windows Azure Tools and SDK Version 1.6. You can see the updated publish dialog below.

Windows Azure Publish Dialog

As I mentioned previously, the Tankster game is still available for download as a sample. We will continue to update and release future versions of the Tankster game. For now, you can download Version 1.0 of Tankster here.


Finally, if you want to see the toolkit in action you can access it at http://watgames2.cloudapp.net/. Feel free to login and play a couple games of Tic-Tac-Toe or Four in a Row. If you use multiple browsers you can even challenge yourself to a game!

As always, please let me know if you have any comments or feedback.


Himanshu Singh reported New Videos: Travelocity and Neudesic Talk About the Advantages of Moving to the Cloud with Windows Azure in an 11/16/2011 post:

Watch these new video interviews with Travelocity Software Architect Principal Ramon Resma and David Pallman, GM of Custom App Development at Neudesic, to learn more about their experiences in the cloud with Windows Azure.

Bytes by MSDN: Ramon Resma

Join Brian Prince, Principal Developer Evangelist at Microsoft, and Ramon Resma, Software Architect Principal at Travelocity, as they discuss Windows Azure. During this interview Ramon talks about how using Windows Azure for a specific customer-facing web application allowed Travelocity to track metrics about how users were engaging with the new feature and then scale accordingly to accommodate traffic.

Bytes by TechNet: Interview with David Pallman

Join Chris Henley, Sr. IT Pro Evangelist at Microsoft, and David Pallman, GM of Custom App Development at Neudesic, as they discuss Windows Azure. David is a Windows Azure MVP and an author. He leads the Windows Azure Practice at Neudesic, helping customers benefit from the cloud. During this interview, David talks about some of the advantages of Windows Azure as well as his “Windows Azure Handbook” series, which covers the full lifecycle of Windows Azure for all audiences.


Wade Wegner (@WadeWegner) reported the availability of NuGet Packages for Windows Azure and Windows Phone Developers on 11/15/2011:

If you’ve been paying attention to the Windows Azure Toolkit for Windows Phone (or my twitter feed) the last couple weeks, you’ve probably noticed something about NuGet packages. We’ve been building a lot of Windows Phone and Windows Azure NuGet packages that, when composed together, give you the ability to quickly build some cool applications. To highlight this, here’s a short video that shows how you can enable push notification support in a brand new Windows Phone 7.1 project—and send notifications from a new ASP.NET MVC 3 Web Application running in Windows Azure—in less than two minutes!

All of this is made possible by delivering functional, discrete, and composable NuGet packages. I’ve gotten a lot of feedback (positive and constructive, but fortunately mostly positive) about the Windows Azure Toolkit for Windows Phone, and invariably people have said that it’s too hard to decompose the sample applications – oftentimes people just want Push Notification support or user management, and it’s too hard to get rid of the rest.

I think the NuGet packages make it very easy to do two things:

  1. Build brand new applications that quickly can get some advanced capabilities
  2. Quickly update existing applications to get some desirable enhancements

This is largely possible because we can easily manage and deliver dependencies through NuGet. Let us handle the hard stuff – you focus on building out cool applications.

In this post I’d like to provide a list and description of the NuGet packages we’ve delivered so far – I imagine I’ll update this post many times to keep it accurate. I don’t plan to show exactly how to use these packages—I’ll save that for many future posts—but instead I want to use this post as a reference and guidepost moving forward.

So, without further ado, I’d like to introduce you to the two kinds of NuGet packages we have today: Client Side NuGet Packages, and Server Side NuGet Packages.

Client Side NuGet Packages

The first set of NuGet packages to be aware of are the NuGet packages for Windows Phone, designed to target Windows Phone OS 7.1 project types.

Windows Phone OS 7.1

You can use these NuGets in a number of interesting ways. For example, you can quickly incorporate the Access Control Service into your phone applications using the following NuGet packages:

If you’re not using ACS but instead want simple username/password, you can quickly incorporate membership into your phone applications using the following NuGet packages:

To get full support for Push Notifications (including Mango updates like deep linking) you can easily incorporate Push Notifications using the following NuGet packages:

  • Phone.Notifications: Class library for Windows Phone to communicate with the Push Notification Registration Cloud Service.
  • Phone.Notifications.BasePage: Base notifications page for Windows Phone to register / unregister with the Push Notification Registration Cloud Service for receiving push notification messages.
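
For example, pulling one of these into a Windows Phone project from the Visual Studio Package Manager Console is a one-liner (using the package IDs listed above):

PM> Install-Package Phone.Notifications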

In scenarios where you’d want to secure your notification services using the Access Control Service, you can use the following packages:

  • Phone.Notifications.AccessControl: This package enables communication with the Push Notification Registration Cloud Service using Windows Azure Access Control Service (ACS) for authentication, by adding a set of base pages to the phone application.

    The dependencies and relationships between these NuGets are as follows:
    Phone.Notifications.AccessControl

In scenarios where you’d want to secure your notification services using traditional membership, you can use the following packages:

  • Phone.Notifications.Membership: This package enables communication with the Push Notification Registration Cloud Service using Membership for authentication, by adding a set of base pages to the phone application.

    The dependencies and relationships between these NuGets are as follows:
    Phone.Notifications.Membership

Server Side NuGet Packages

Over the years our team has built a lot of libraries for Windows Azure that we regularly use for samples, demos, hands-on labs, and so forth. We’ve continued to refine these libraries and have started to expose some of them as discrete NuGet packages. Here are some of them:

  • WindowsAzure.Common: Class library that provides common helpers tools for Windows Azure.
  • Storage.Providers: ASP.NET Providers (Membership, Roles, Profile and Session State Store) for Windows Azure Tables.
  • MpnsRecipe: Class library to communicate with the Microsoft Push Notification Service (MPNS).

If you plan to manage users through ASP.NET membership, we have a NuGet package that will handle everything in your Windows Azure project:

We have a set of WebAPI services that work with the Phone.Notifications NuGet packages for handling the Channel URIs and push notification registration services:

  • CloudServices.Notifications: This package contains a class library with the Push Notification Registration Cloud Service, and a WebActivator enabled class with the default configuration.
  • CloudServices.Notifications.Sql: Class library that provides storage in a SQL Azure or SQL Server database for the Push Notification Registration Cloud Service.

If you plan to use the Phone.Notifications.AccessControl NuGet package and secure the communications channel with ACS, then you can use this NuGet package:

  • CloudServices.Notifications.AccessControl: This package enables authentication using Windows Azure Access Control Service (ACS) for the Push Notification Registration Cloud Service. You just need to configure a Relying Party Application with Simple Web Token (SWT) in your ACS namespace, and configure its settings accordingly in the Web.config.

    The dependencies and relationships between these NuGets are as follows:
    CloudServices.Notifications.AccessControl

If you plan to use the Phone.Notifications.Membership NuGet package and secure the communications channel with membership, then you can use this NuGet package:

  • CloudServices.Notifications.Membership: This package enables authentication using the Membership provider for the Push Notification Registration Cloud Service.

    The dependencies and relationships between these NuGets are as follows:
    CloudServices.Notifications.Membership

Finally, when working with Push Notifications, you need some kind of client to generate and send notifications. We’ve built some simple scaffolding that you can use during development (or production?) to generate and send notifications:

That’s it!

It’s a lot of resources, I know. The intent of this post isn’t to necessarily provide you with the guidance on how to use all these NuGets, but rather to explain what we have available. I plan to write a lot of blog posts that highlight real scenarios and use cases for these NuGet packages, so I’ll refer back to this post quite often. In the meantime, I hope it gives you a feel for how we’re thinking about engineering and delivering resources for Windows Phone and Windows Azure moving forward.


Steve Marx (@smarx) described Using Scala and the Play Framework in Windows Azure in an 11/15/2011 post:

The Devoxx conference is running this week in Belgium. Be sure to check out the Windows Azure booth there. I didn’t make it to the conference, but I volunteered to build a demo application for the booth that would show attendees a real example of running Java in Windows Azure. As I was brainstorming what to do, I browsed through the Devoxx schedule. I noticed that there are a few mobile clients for browsing the schedule. This observation quickly lead me to the schedule API.

I decided it would be fun to build my own schedule browser in Windows Azure, but I didn’t know where to start. (I’m quite out of date with what’s happening in the Java world.) Reading the Devoxx schedule, I spotted a talk about the Play framework (which has an option for building in Scala), and I was quickly hooked. After a day of coding, I had built http://devoxx.cloudapp.net. The full source code is available at https://github.com/smarx/devoxxschedule.

How It Works

I won’t go into the Scala and Play framework parts of the code. The framework is a fairly straightforward MVC framework that should be comfortable for most web developers. Scala was a new language for me, and I apologize in advance if my code is ugly. I like the functional style of Scala but was unsure of some of the idiomatic ways to do things that I’m used to doing in C# with LINQ.

As for the Windows Azure part of the code, I was fortunate in that the Play framework includes some simple commands for running an application. To run an application in development mode, play run is all that’s required. To run it in production, play run --%prod. Once I realized that deployment would be as simple as executing the right command, I was able to reuse my packanddeploy project (which I blogged about a few weeks ago) to get things running.

A Few Tricks

For the most part, running the app in Windows Azure was straightforward, but I thought I’d share the few places where I learned something.

Using Blob Storage from PowerShell

To keep the deployment package small, I quickly moved away from packaging the Java runtime and Play framework with my cloud app and instead opted to keep those in blob storage and download them at runtime. I’ve used this technique before, but this time I decided to use the .NET storage client library from PowerShell (so I didn’t need to make my copy of the JDK and the Play framework public). This code was a bit interesting, mostly because I’m not too familiar with PowerShell syntax. Take a look at WorkerRole\downloadstuff.ps1 in the source to see how this works.

There are probably better ways to accomplish what I was doing here. Notably, the Windows Azure Bootstrapper (from Cumulux) could probably accomplish this in a single command. In retrospect, I probably should have just used that.

Specifying an IP Address to the Play Framework

The Play framework provides a nice command-line argument, --http.port to use a specific port, but with Windows Azure SDK 1.5 and beyond, we also need to use a specific address. (The compute emulator tries to keep the port constant by using different addresses as needed.) This was less obvious, but I eventually figured out that the Java-style -Dhttp.address parameter would do the trick.

The Home Path on Windows

If you read WorkerRole\run.cmd in the source, you’ll see that I’m passing -Duser.home to force a specific home directory for the user. I found that without this, some code in Maven ended up looking at the path C:\ and was unable to find/create the directory it needed. I suspect the need for this has something to do with the user Windows Azure creates on role instances. This was a fairly tricky one to track down, and I suspect it applies to more Java frameworks than just Play.

Resolving Play Framework Dependencies

I found that, despite already having the dependencies, I needed to ask Play to resolve them again by running play dependencies. This is presumably because the paths changed when I deployed to the cloud (as opposed to the paths on my local computer). If you read WorkerRole\run.cmd, you’ll notice that I’m using the call command in the batch file to invoke `play`. This ensures that control returns back to my batch file instead of moving permanently over to the play batch file.
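
Putting these tricks together, the relevant lines of a run.cmd look roughly like the following sketch (flag values and paths are illustrative; %% escapes a percent sign inside a .cmd file, and PORT and ADDRESS are environment variables described in the next section):

rem "call" returns control to this script when the play batch file finishes.
call play dependencies

rem Bind to the address/port Windows Azure assigned and force a usable home directory.
call play run --%%prod --http.port=%PORT% -Dhttp.address=%ADDRESS% -Duser.home=%RoleRoot%\approot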

Environment Variables

You’ll see a lot of environment variables (like JAVA_PATH, PORT, ADDRESS, and CONNECTION_STRING) used in the batch files. Those come from the cool xpath stuff in SDK 1.5 and up. See ServiceDefinition.csdef for where they come from.
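
A sketch of the relevant ServiceDefinition.csdef fragment (the endpoint and setting names are illustrative):

<Runtime>
  <Environment>
    <Variable name="PORT">
      <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@port" />
    </Variable>
    <Variable name="ADDRESS">
      <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/Endpoints/Endpoint[@name='HttpIn']/@address" />
    </Variable>
    <Variable name="CONNECTION_STRING">
      <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='DataConnectionString']/@value" />
    </Variable>
  </Environment>
</Runtime>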

Full Source

This was a fun project to do, and I hope it helps those of you who are experimenting with Java on Windows Azure, particularly if you’re looking to use the Play framework.

The full source code is available at https://github.com/smarx/devoxxschedule.


Scott Densmore (@scottdensmore) announced a New Release for Windows Azure Toolkit for iOS on 11/14/2011:

Today I just pushed a new version of the Windows Azure Toolkit for iOS to GitHub (tagged v1.3.0). These changes include new versions of the Windows Azure Toolkit for iOS, the Configuration Utility, and the Cloud Ready Packages. There are quite a few changes in this update, including:

  • New Cloud Ready Packages to support different sizes using ACS or Membership Services
  • An updated Configuration Utility to support creating the Service Definition files for the new Cloud Ready Packages
  • Support for the new Packages in the Toolkit
  • Updated header documentation
  • Support for properties in Blobs and Containers
  • Split out the Unit Testing library so it clearly identifies the tests that are Integration Tests vs Unit Tests
  • A few fixes and enhancements via the issues on GitHub.

This is a pretty significant update to the Toolkit and a great upgrade for those already using it or those looking to add it to their application.

We are working on updating the packages with a few more enhancements. This will mean another release with these and further enhancements. If you have any issues, please report them on the GitHub page for the appropriate project. The upcoming changes include the following:

  • Support the 2011-08-18 version changes
  • Add new properties to support the new blob API in the new Cloud Ready Packages
  • Add new integration tests for new APIs

If you want to follow along with these changes, you can watch the develop branch of each project.

We are also hoping to create a new video to show off how you can use the new toolkit.


Bruno Terkaly (@brunoterkaly) posted Table of Contents–Supporting Billions of Rows/Entities–Mobile to Cloud Series on 11/14/2011:

Table of Contents

To do these labs, you will need the Azure SDK:

  • Part 1 – Why Scale Matters: http://blogs.msdn.com/b/brunoterkaly/archive/2011/09/27/supporting-billions-of-entities-rows-for-mobile-android-series-part-1-why-scale-matters.aspx
  • Part 2 – What are some high level cloud offerings?: http://blogs.msdn.com/b/brunoterkaly/archive/2011/09/27/supporting-billions-of-entities-rows-for-mobile-android-series-part-2-what-are-some-high-level-cloud-offerings.aspx
  • Part 3 – Architecture and Data Options: http://blogs.msdn.com/b/brunoterkaly/archive/2011/09/28/supporting-billions-of-entities-rows-for-mobile-android-series-part-3-architecture-and-data-options.aspx
  • Part 4 – Building a Cloud-based RESTful service for our Android, iOS, and Windows Phone 7 Clients: http://blogs.msdn.com/b/brunoterkaly/archive/2011/09/28/supporting-billions-of-entities-rows-for-mobile-android-series-part-4-building-a-cloud-based-restful-service-for-our-android-ios-and-windows-phone-7-clients.aspx
  • Part 5 – Using the Portal and Setting up your Azure Account (Microsoft Cloud): http://blogs.msdn.com/b/brunoterkaly/archive/2011/10/05/supporting-billions-of-entities-rows-for-mobile-android-series-part-5-using-the-portal-and-setting-up-your-azure-account-microsoft-cloud.aspx
  • Part 6 – Reading and Writing to Windows Azure (Cloud-based) Tables using standard HTTP and Fiddler: http://blogs.msdn.com/b/brunoterkaly/archive/2011/10/05/supporting-billions-of-entities-rows-for-mobile-android-series-part-6-reading-and-writing-to-windows-azure-cloud-based-tables-using-standard-http-and-fiddler.aspx
  • Part 7 – Migrating your Azure (Cloud RESTful Service) to be Hosted in a Microsoft Data Center: http://blogs.msdn.com/b/brunoterkaly/archive/2011/10/07/supporting-billions-of-entities-rows-for-mobile-android-series-part-7-migrating-your-azure-cloud-restful-service-to-be-hosted-in-a-microsoft-data-center.aspx
  • Part 8 – Writing an Android Client to consume RESTful data from Azure (Microsoft Cloud): http://blogs.msdn.com/b/brunoterkaly/archive/2011/10/10/supporting-billions-of-entities-rows-for-mobile-android-series-part-8-writing-an-android-client-to-consume-restful-data-from-azure-microsoft-cloud.aspx
  • Part 9 – Writing an iOS (iPhone/iPad/MacOS) Client to consume RESTful data from Azure (Microsoft Cloud): http://blogs.msdn.com/b/brunoterkaly/archive/2011/10/24/supporting-billions-of-entities-rows-for-mobile-android-series-part-9-writing-an-ios-iphone-ipad-macos-client-to-consume-restful-data-from-azure-microsoft-cloud.aspx
  • Part 10 – Writing a Windows Phone 7 Client to consume RESTful data from Azure (Microsoft Cloud): http://blogs.msdn.com/b/brunoterkaly/archive/2011/10/26/supporting-billions-of-entities-rows-for-mobile-mobile-to-cloud-series-part-10-writing-a-windows-phone-7-client-to-consume-restful-data-from-azure-microsoft-cloud.aspx
  • Source Code to Azure RESTful Service, Android Mobile Client, iOS/iPhone Mobile Client, and Windows Phone 7 Mobile Client: http://blogs.msdn.com/b/brunoterkaly/archive/2011/11/11/source-code-to-azure-restful-service-android-mobile-client-ios-iphone-mobile-client-and-windows-phone-7-mobile-client.aspx#


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

No significant articles today.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Steve Marx (@smarx) described Calling the Windows Azure Service Management API with the New .publishsettings File in an 11/16/2011 post:

imageThis week, we added a new way to get a management certificate set up to interact with the Windows Azure Service Management API. There’s a new page in the Windows Azure portal (https://windows.azure.com/download/publishprofile.aspx). Browsing there does two things:

  1. It generates a management certificate and adds it to all the subscriptions you have.
  2. It offers you a download of a .publishsettings file, which contains that certificate and the list of subscription IDs.

With the new November release of the Windows Azure tools for Visual Studio, you can simply import this file and then start publishing to Windows Azure.

You can also use this file from your own code, since it contains the subscription ID(s) and management certificate you need to have to make calls to the Service Management API. The format of the file is quite simple. It contains a base64-encoded .pfx file (the certificate) and a list of subscription IDs, all in XML. The following code consumes a .publishsettings file and uses it to print out a list of all your Windows Azure applications:

using System;
using System.IO;
using System.Linq;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Xml.Linq;

namespace ListServices
{
    class Program
    {
        static void Main(string[] args)
        {
            var profile = XDocument.Load(args[0]);
            var req = (HttpWebRequest)WebRequest.Create(
                string.Format("https://management.core.windows.net/{0}/services/hostedservices",
                profile.Descendants("Subscription").First().Attribute("Id").Value));
            req.Headers["x-ms-version"] = "2011-10-01";
            req.ClientCertificates.Add(new X509Certificate2(Convert.FromBase64String(
                profile.Descendants("PublishProfile").Single()
                .Attribute("ManagementCertificate").Value)));
            XNamespace xmlns = "http://schemas.microsoft.com/windowsazure";
            Console.WriteLine(string.Join("\n",
                XDocument.Load(req.GetResponse().GetResponseStream())
                .Descendants(xmlns + "ServiceName").Select(n => n.Value).ToArray()));
        }
    }
}
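
For reference, the file the code above parses has roughly this shape (a sketch; the certificate blob and subscription ID are placeholders):

<?xml version="1.0" encoding="utf-8"?>
<PublishData>
  <PublishProfile
      PublishMethod="AzureServiceManagementAPI"
      Url="https://management.core.windows.net/"
      ManagementCertificate="MIIK...base64-encoded certificate...">
    <Subscription Id="00000000-0000-0000-0000-000000000000" Name="My Subscription" />
  </PublishProfile>
</PublishData>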

As you can see, it’s pretty easy to use this file. I plan to update a lot of my own tools to consume .publishsettings files. It’s a much easier flow than generating a certificate locally and uploading it via the portal. (But you can still do that if you want.)

[UPDATE 1:55pm] Want more? Wade Wegner wrote a great related post: “Programmatically Installing and Using Your Management Certificate with the New .publishsettings File[See post below.]


Wade Wegner (@WadeWegner) described Programmatically Installing and Using Your Management Certificate with the New .publishsettings File in an 11/16/2011 post:

Earlier this week we released the Windows Azure SDK 1.6, which includes a lot of great updates to the emulators, tools for Visual Studio, and libraries. One of my favorite additions is a new way to get a management certificate installed into Windows Azure and onto your machine. You can now browse to https://windows.azure.com/download/publishprofile.aspx and login with your Live ID; this process will do two things:

  1. Generate a management certificate that is installed into Windows Azure on your behalf.
  2. Prompt you to download a .publishsettings file, which includes an encoded version of your certificate and all of your subscription IDs.

The new tools for Visual Studio let you easily import this file and immediately start working with your subscriptions from within Visual Studio. It’s a much simpler experience than in the past. In fact, on this week’s episode of the Cloud Cover Show (not yet published) Steve and I cover how to use this file from within your own code. While Steve beat me to it and published a great blog post showing some of the things you can do, I thought I’d take this a slightly different way and show you a couple different things:

  • How to install the certificate into your personal certificate store (which is exactly what Visual Studio is doing).
  • How to use the certificate from your personal certificate store to make calls to the Service Management API.

The code is very similar. Take a look:

    var publishSettingsFile =
        @"C:\\temp\\CORP DPE Account-11-16-2011-credentials.publishsettings";
    
    XDocument xdoc = XDocument.Load(publishSettingsFile);
    
    var managementCertbase64string =
        xdoc.Descendants("PublishProfile").Single().Attribute("ManagementCertificate").Value;
    
    var importedCert = new X509Certificate2(
        Convert.FromBase64String(managementCertbase64string));

Now that we’ve imported the certificate, we can extract some information. I’ll grab the certificate thumbprint, which uniquely identifies the certificate—we’ll use it later in the post.

    string thumbprint = importedCert.Thumbprint;

Additionally, I can grab my subscription ID from the .publishsettings file – this we will also use later.

    string subscriptionId = xdoc.Descendants("Subscription").First().Attribute("Id").Value;

Now, we can take our X509Certificate2 and install it directly into our certificate store.

    X509Store store = new X509Store(StoreName.My);
    store.Open(OpenFlags.ReadWrite);
    store.Add(importedCert);
    store.Close();

After running this code, you can see that the certificate has been installed into my personal certificate store.


If you select the certificate you’ll see that it’s the same certificate with the same thumbprint.


Since the certificate is now loaded into the certificate store I can delete the .publishsettings file – I no longer need it. (It’s also a credential that I don’t want to let anyone else get their hands on.)

Now I have the following resources available to me:

  • My X509 certificate loaded in my personal certificate store.
  • The thumbprint for the certificate (which we’ll use to identify the right certificate).
  • My Windows Azure subscription ID.

With this information we can do the exact same thing Steve shows in his post except without the .publishsettings file.

    X509Store store = new X509Store(StoreName.My);
    store.Open(OpenFlags.ReadWrite);
    X509Certificate2 managementCert =
        store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false)[0];
    
    var req = (HttpWebRequest)WebRequest.Create(
        string.Format("https://management.core.windows.net/{0}/services/hostedservices",
        subscriptionId));
    
    req.Headers["x-ms-version"] = "2011-10-01";
    req.ClientCertificates.Add(managementCert);
    
    XNamespace xmlns = "http://schemas.microsoft.com/windowsazure";
    
    Console.WriteLine(string.Join("\n",
        XDocument.Load(req.GetResponse().GetResponseStream())
        .Descendants(xmlns + "ServiceName").Select(n => n.Value).ToArray()));

Essentially, we can grab the certificate out of the certificate store using the thumbprint and then make the exact same call to the service management API.

The console output below shows that I’m able to get a list of all my hosted services:


It’s as simple as that!

I’m not sure that this post applies to everyone—in fact, most of you may find it boring or cryptic—but for those of you that are building content or tools, you’ll probably find this a really simple way to automate a lot of the pieces. I know that my team plans to use these techniques in a lot of places to simplify the experience of getting started with Windows Azure.


Brent Stineman (@BrentCodeMonkey) described Enhanced Visual Studio Publishing (Year of Azure–Week 19) [in the Windows Azure SDK 1.6] on 11/14/2011:

With the latest 1.6 SDK (ok, now it’s actually called Azure Authoring Tools), Scott Guthrie’s promise of a better developer publishing experience has landed. Building upon the multiple cloud configuration options that were delivered back in September with the Visual Studio tools update, we have an even richer experience.

Now the first thing you’ll notice is that the publish dialog has changed. The first time you run it, you’ll need to sign in and get things set up.


Clicking the “Sign in to download credentials” link will send you to the windows.azure.com website where a publish-settings file will be generated for you to download. Following the instructions, you’ll download the file, then import it into the publishing window shown above. Then you can choose a subscription from the populated drop down and proceed.

A wee bit of warning on this though. If you have access to multiple subscriptions (your own or as a co-admin), the creation of a publish-settings file will install the new certificate in each subscription. Additionally, each time you click the “Sign in to download” link, you will end up with another cert. These aren’t things to be horrified about; I just wanted to make sure I gave a heads up.

Publish Settings

Next up is the publication settings. Here we can select a service to deploy to or create a new one (YEAH!). You can also easily set the environment (production or staging), the build configuration, and the service configuration file to be used. Setting up remote desktop is also as easy as a checkbox.


In the end, these settings get captured into a ‘profile’ that is saved and can then be reused. Upon completion, the cloud service will get a new folder, “Profiles”. In this folder you will find an xml file with the extension azurePubxml that contains the publication settings.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <AzureCredentials>my subscription</AzureCredentials>
    <AzureHostedServiceName>service name</AzureHostedServiceName>
    <AzureHostedServiceLabel>service label</AzureHostedServiceLabel>
    <AzureSlot>Production</AzureSlot>
    <AzureEnableIntelliTrace>False</AzureEnableIntelliTrace>
    <AzureEnableProfiling>False</AzureEnableProfiling>
    <AzureEnableWebDeploy>False</AzureEnableWebDeploy>
    <AzureStorageAccountName>bmspublic</AzureStorageAccountName>
    <AzureStorageAccountLabel>bmspublic</AzureStorageAccountLabel>
    <AzureDeploymentLabel>newsdk</AzureDeploymentLabel>
    <AzureSolutionConfiguration>Release</AzureSolutionConfiguration>
    <AzureServiceConfiguration>Cloud</AzureServiceConfiguration>
    <AzureAppendTimestampToDeploymentLabel>True</AzureAppendTimestampToDeploymentLabel>
    <AzureAllowUpgrade>True</AzureAllowUpgrade>
    <AzureEnableRemoteDesktop>False</AzureEnableRemoteDesktop>
  </PropertyGroup>
</Project>

This file contains a reference to a storage account, and when I looked at the account I noticed that there was a new container in there called “vsdeploy”. The folder was empty, but I’m betting this is where the cspkg was sent before being deployed and subsequently deleted. I only wish there was an option to leave the package there after deployment. I love having old packages in the cloud to easily reference.

If we go back into the publish settings again (you may have to click “previous” a few times to get back to the “settings” section) and select “advanced”, we can set some of the other options in this file. Here we can set the storage account to be used as well as enable IntelliTrace and profiling.

The new experience does this using a management certificate that was created for us at the beginning of this process. If you open up the publish settings file we downloaded at the beginning, you’ll find it’s an XML document with an encoded string representing the management certificate to be used. Hopefully in a future edition, I’ll be able to poke around at these new features a bit more. It appears we may have one or more new APIs at work, as well as some new options to help with service management and build automation.

What next?

There’s additional poking around I need to do with these new features. But there’s some great promise here. Out of the box, developers managing one or two accounts are going to see HUGE benefits. Devs in large, highly structured, and security-restricted shops are more likely to keep to the existing mechanisms or look at leveraging this to enhance their existing automated processes.

Meanwhile, I’ll keep poking at this a little bit as well as the other new features of this SDK and report back when I have more.

But that will have to wait until next time.


Avkash Chauhan (@avkashchauhan) posted a Windows Azure SDK 1.6 (Build 1.6.41103.1601) installation Walkthrough on 11/14/2011:

Windows Azure SDK 1.6 is out now and you can install it directly from the Web (Web Platform Installer 3.0 required).

Download link: http://www.microsoft.com/windowsazure/sdk/

Get more info: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/14/windows-azure-sdk-1-6-is-released.aspx

imageWhen you launch [the] Azure SDK installer, it will launch WebPI 3.0 installer to download necessary Azure SDK 1.6 components and if you have WebPI 4.0 installed in your machine when you will get the following error:

Error: Web Platform Installer 3.0 (Newer version already installed.)

To continue installation, uninstall WebPI 4.0 and then re-launch the Windows Azure SDK 1.6 installation, which will then work as expected.

The installation then walks through a series of dialogs: a start screen, the list of components to be installed, progress screens with update details, and finally a notification that installation is done.

Now you can verify that the updated Windows Azure SDK 1.6 bits (listed as Windows Azure Authoring Tools) are installed on your machine:

You can also verify that the updated Windows Azure SDK 1.6 (build 1.6.41103.1601) is installed at the following location on a 64-bit OS:

C:\Program Files\Windows Azure SDK\v1.6


Himanshu Singh reported Now Available! Updated Windows Azure SDK & Windows Azure HPC Scheduler SDK in an 11/14/2011 post to the Windows Azure Team Blog:

imageToday we are simplifying the development experience on Windows Azure with three updates—a new version of the Windows Azure SDK, a new Windows Azure HPC Scheduler SDK, and an updated Windows Azure Platform Training Kit. Whether you are already using Windows Azure or looking for the right moment to get started, these updates make it easier than ever to build applications on Windows Azure.

Highlights:

  • Windows Azure SDK (November 2011)—Multiple updates to the Windows Azure Tools for Visual Studio 2010 that simplify development, deployment, and management on Windows Azure. The full Windows Azure SDK can be downloaded via the Web Platform installer here.
  • Windows Azure HPC Scheduler SDK— Works in conjunction with the Windows Azure SDK and includes modules and features to author high performance computing (HPC) applications that use large amounts of compute resources in parallel to complete work. The SDK is available here for download.
  • Windows Azure Platform Training Kit—Includes hands-on labs, demos, and presentations to help you learn how to build applications that use Windows Azure. Compatible with the new Windows Azure SDK and Windows Azure Tools for Visual Studio 2010. The training kit can be downloaded here.

Here are the details:

The Windows Azure SDK for .NET includes the following new features:

  • Windows Azure Tools for Visual Studio 2010
    • Streamlined publishing: This makes connecting your environment to Windows Azure much easier by providing a publish settings file for your account. This allows you to configure all aspects of deployments, such as Remote Desktop (RDP), without ever leaving Visual Studio. Simply use the Visual Studio publishing wizard to download the publish settings and import them into Visual Studio. By default, publish will make use of in-place deployment upgrades for significantly faster application updates.
    • Multiple profiles: Your publish settings, build config, and cloud config choices will be stored in one or more publish profile MSBuild files. This makes it easy for you and your team to quickly change all of your environment settings.
    • Team Build: The Windows Azure Tools for Visual Studio 2010 now offer MSBuild command-line support to package your application and pass in properties. Additionally, they can be installed on a lighter-weight build machine without the requirement of Visual Studio being installed.
    • In-Place Updates: Visual Studio now allows you to make improved in-place updates to deployed services in Windows Azure. For more details visit http://blogs.msdn.com/b/windowsazure/archive/2011/10/19/announcing-improved-in-place-updates.aspx
    • Enhanced Publishing Wizard: An overhauled publishing experience for signing in, configuring the deployment, and reviewing a summary of changes
    • Automatic Credential Management Configuration: No longer need to manually create or manage a cert
    • Multiple Subscription Deployment Management: Makes it easier to use multiple Windows Azure subscriptions by selecting the subscription you want to use when publishing within Visual Studio.
    • Hosted Service Creation: Create new hosted services within Visual Studio, without having to visit the Windows Azure Portal.
    • Storage Accounts: Create and configure appropriate storage accounts within Visual Studio (no longer need to do this manually)
    • Remote Desktop Workflow: Enable by clicking a checkbox and providing a username/password – no need to create or upload a cert
    • Deployment Configurations: Manage multiple deployment environment configurations
    • Azure Activity Log: More information about the publish and virtual machine initialization status

For more information on Windows Azure Tools for Visual Studio 2010, see What’s New in the Windows Azure Tools.
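
As a hedged sketch of the Team Build support listed above, packaging an Azure project from the command line looks roughly like this (the project file name is hypothetical):

msbuild MyAzureService.ccproj /t:Publish /p:Configuration=Release

Under these assumptions, the resulting package (MyAzureService.cspkg) and service configuration file land in bin\Release\app.publish\, ready to deploy from a build machine that does not have Visual Studio installed.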

  • Windows Azure Libraries for .NET 1.6
    • Service Bus & Caching: Service Bus and caching client libraries from the previous Windows Azure AppFabric SDK have now been updated and incorporated into the Windows Azure Libraries for .NET to simplify the development experience.
    • Queues:
      • Support for UpdateMessage method (for updating queue message contents and invisibility timeout)
      • New overload for AddMessage that can make a message invisible until a future time (illustrated in the sketch below)
      • The size limit of a message is raised from 8KB to 64KB
      • Get/Set Service Settings operations for configuring the storage analytics settings
  • Windows Azure Emulator
    • Performance improvements to compute & storage emulators.
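
To make the queue changes above concrete, here is a minimal, hedged C# sketch; the overload parameter order and member names are assumptions, so consult the SDK reference:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueDemo
{
    static void Main()
    {
        // Minimal sketch against the local storage emulator.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tasks");
        queue.CreateIfNotExist();

        // New AddMessage overload: keep the message invisible for its first 30 seconds.
        queue.AddMessage(new CloudQueueMessage("process-order-42"),
            TimeSpan.FromDays(1),       // time-to-live (assumed parameter order)
            TimeSpan.FromSeconds(30));  // initial invisibility delay

        // UpdateMessage: revise the contents and extend the invisibility timeout in one call.
        CloudQueueMessage msg = queue.GetMessage();
        msg.SetMessageContent("process-order-42-retry");
        queue.UpdateMessage(msg, TimeSpan.FromSeconds(60),
            MessageUpdateFields.Content | MessageUpdateFields.Visibility);
    }
}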

Click here to download the Windows Azure SDK via the Web Platform Installer.

Windows Azure HPC Scheduler SDK

Working in conjunction with the Windows Azure SDK for .NET, the Windows Azure HPC Scheduler SDK contains modules and features that developers can use to create compute-intensive, parallel applications that can scale in Windows Azure. The Windows Azure HPC Scheduler SDK enables developers to define a Windows Azure deployment that includes:

  • Job scheduling and resource management
  • Web-based job submission
  • Parallel runtimes with support for MPI applications and WCF services
  • Persistent state management of job queue and resource configuration

To download and access more information on the Windows Azure HPC Scheduler SDK, visit http://msdn.microsoft.com/en-us/library/windowsazure/hh545593.aspx.

Today’s updates arm developers with the latest tools for Visual Studio and open new doors for authoring HPC workloads in Windows Azure. This is part of our ongoing commitment to deliver world-class services in a simple and flexible way to customers.

For more information on all updates visit http://msdn.microsoft.com/en-us/library/windowsazure/hh552718.aspx.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Mary Jo Foley (@maryjofoley) reported Microsoft ready to unveil new public-private cloud-migration tools, strategy in an 11/16/2011 post:

Microsoft is close to announcing some new migration tools and strategies designed to help customers move from the private cloud to the public cloud.

Company officials have been hinting for months that changes around Active Directory were on tap. On November 15, Robert Youngjohns, President of Microsoft North America Sales, made it even clearer during his appearance at the UBS Global Technology & Services Conference that something big is in the works. From a transcript of Youngjohns' remarks:

“I think our emerging strategy on how we migrate from private cloud to public cloud is the thing you should watch over the coming weeks and months.

“Satya Nadella, who is the president of the Server and Tools Business, is making some pretty fundamental changes there. He’s announced his direction on that. But I think the full roadmap will come out over the coming weeks. And I think that’s an exciting move.

“So, going head-to-head on who has the better piece of virtualization software is probably not the winning play here. The winning play here is to focus on systems management, and to focus on how you bridge from private cloud into public cloud going forward. And I think we’re well placed on those.”

On the management front, Microsoft is putting the finishing touches on the 10 or so different systems management products that comprise the System Center 2012 suite. Microsoft is looking to launch the entire suite in March 2012 at the Microsoft Management Summit, I’m hearing.

One of the new System Center 2012 point products is System Center Virtual Machine Manager (SCVMM) 2012. SCVMM is the “private cloud” piece of Microsoft’s server application virtualization story. Microsoft officials also reconfirmed earlier this year that the company is planning to add VM role functionality to its public-cloud platform, Windows Azure. …

Read more.

SCVMM’s App Manager, formerly codenamed “Concero,” also is part of the cloud migration picture. Stay tuned for a basic tutorial on the use of “Concero” with private and public clouds.


David Linthicum (@DavidLinthicum) asserted “The new focus on private PaaS suffers from the normal cloud computing hype -- but there could be something there” as a deck for his Cloud app dev goes private article of 11/16/2011 for InfoWorld’s Cloud Computing blog:

At Cloud Expo last week, many vendors focused on "private PaaS" as a form of cloud computing. They follow the logic that if you can have a private (IaaS) cloud, then you certainly can have a private PaaS cloud -- but should you?

The idea is the same: Let's leverage the efficiencies of public cloud computing -- in this case, public PaaS cloud computing -- in the enterprise using servers that we can hug. The business case would be when you have the requirement for a much higher level of security and don't yet trust public PaaS providers such as Google or Microsoft. Or you can't stand the thought of any of the pretty blinking lights in the data center going away.

I've done a bit of research on private PaaS, and here are my current thoughts:

The core advantage of using a private PaaS is the fact that you don't have to deal with infrastructure, as you would when using traditional development environments. The ability to scale is provided by the platform. Thus, private PaaS removes you from the details of the infrastructure and renders it irrelevant.

Of course, this is not the first time we've heard vendors promise to abstract us away from the metal. However, the cloud computing tricks associated with hardware utilization, as applicable to private PaaS, could make this application more compelling and cost effective.

There is a claim of better portability because private PaaS runs on infrastructure that can be found in other places, including public IaaS providers. The theory is that because private PaaS can abstract infrastructure on premise, it can do so off premise as well. I'm not sure it's that easy, but I'll keep an open mind until I see this space bake a bit and case studies emerge.

For now, I'm in wait-and-see mode. Who's first?


Marketwire asserted “Solution Provides Enterprise and Government Customers With Cost Savings and Choice Within Their Cloud Strategies” in a deck for their Fujitsu Launches Hybrid Cloud Services for Microsoft Windows Azure(TM) Customers press release of 11/16/2011:

SUNNYVALE, CA, Nov 16, 2011 (MARKETWIRE via COMTEX) -- Fujitsu today announced Hybrid Cloud Services for Microsoft Windows Azure, an extension of its global cloud portfolio and its relationship with Microsoft. The offering enables government and enterprise customers to benefit from hybrid cloud solutions and, in doing so, achieve operating cost reductions of typically 30% or more.

Hybrid Cloud Services gives organizations choices when they embark on a cloud strategy, putting their workloads and data wherever it is most appropriate for them -- in a public, private, or mixed environment. The solution helps companies address the challenges of interoperability, data security, governance and compliance when delivering across multiple platforms. Fujitsu delivers this through a comprehensive array of tooling, management and support services, and reference architectures to help customers manage their applications and data in the most effective and cost-efficient way.

The new Hybrid Cloud Services launched by Fujitsu offers comprehensive services to address the significant market potential in industry and government sectors that require a broader range of cloud deployment options, including hybrid options, within their business and IT environments. "Our research shows public sector organizations in most countries are concerned about data security," said Frank Gens, Chief Analyst, IDC. "Providing options to meet this concern could open up a much larger market for public cloud services and in our opinion is definitely a trend to watch."

Hybrid Cloud Services from Fujitsu links Microsoft Windows Azure-based components to Windows Server(R)-based components running either on a customer's premises or on a Fujitsu cloud platform. This component portability enables Fujitsu to run an enterprise or government application in Windows Azure using data generated in one or more customer locations and to hold that data securely in a location of the customer's choice. Specifically, this addresses the need for government organizations to comply with national data regulations.

Fujitsu is launching these cloud services in five countries -- the UK, US, Australia, Spain and Canada, with more to follow soon. "Hybrid cloud services provide another important addition to the Fujitsu Cloud Portfolio and we are excited that we can now provide customers with a higher level of cost saving and flexibility from multiple cloud integration," said Cameron McNaught, Fujitsu SVP, Cloud. "Hybrid Cloud Services builds on our extensive experience in delivering Windows Azure services including the world's first independently managed Microsoft Windows Azure cloud environment delivered from the Fujitsu Global Cloud platform in Japan."

Doug Hauger, Windows Azure General Manager from Microsoft Corporation, said: "These new cloud options from Fujitsu demonstrate the potential for offering public cloud services with value-added service techniques to deliver sophisticated cloud solutions that meet customers' specific needs. The hybrid cloud model from Fujitsu, which uses the Microsoft Windows Azure platform, shows how our work together will address key customer needs such as integration with on-premise systems or specific security requirements."



Kevin Remde (@KevinRemde) posted Cloud on Your Terms Part 14 of 30: System Center Orchestrator 2012 on 11/14/2011:

Consider the following chart that diagrams the delivery of “IT as a Service”. This is what the private cloud is all about. And the way tasks may get done in an automated fashion is going to play a very important role.


In the Microsoft solution, the tool that will allow you to create, test, and perform that automation is called System Center Orchestrator 2012. You may know it by another name…

“Hey Kevin.. isn’t that what Opalis does?”

As I was about to say.. Yes, the current product is called Opalis. And in the new product, coming out as a part of the System Center 2012 wave, the functionality of Opalis is brought into Orchestrator, with some additional capabilities included.

First I want to summarize the main areas where Orchestrator shines. And I like to think of it in musical terms, such as an orchestra: the hall, the players, their instruments, the music, and the conductor (which would be you):

Process Integration - There’s tight integration with the rest of System Center – particularly with System Center Service Manager 2012. It also preserves and integrates with your existing investments in other tools and processes, not just Microsoft’s. We can think of these connected heterogeneous environments as the musical instruments and the players in our orchestra. Something needs to bring them together.

Orchestration – It’s not enough to have all of the players and their instruments in the same room. Now you have to give them something to perform, and get all the different bits working together, in the right order, in the right way. Orchestration is the sheet music.

Automation – Now that we’ve defined the symphony, we let it fly. You are the conductor. The music flows at your command, and in your timing.

“But how is that different than Opalis?”

That’s a fair question. And here’s how I have heard it described… Opalis is a tool built mainly for the IT Professional. It allows you to author, test, and debug runbooks.

“Runbooks?”

Yes. Runbook Automation (RBA) is the ability to define the steps that are to be performed, plus the inputs and outputs, and the order in which they are to happen. (A happens before B, C depends upon B completing successfully, etc. A –> B –> C…) Defining an overall process that involves many and varied steps and dependent inputs and outputs is where Opalis really shines. Consider the following set of steps: a folder is being monitored, and when a new file enters the folder, a task launches to copy that file to another folder and then make note of the operation in the event log.

Sample OIS Policy

It’s a simple, automated process that involves several steps; each depending upon the previous step.
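
Expressed as code rather than a runbook, that same three-step process looks roughly like this minimal C# sketch (the paths and the event-log source name are illustrative assumptions):

using System;
using System.Diagnostics;
using System.IO;

class FolderWatcher
{
    static void Main()
    {
        // Step 1: watch a drop folder for newly created files (paths are illustrative).
        var watcher = new FileSystemWatcher(@"C:\Drop");
        watcher.Created += (sender, e) =>
        {
            // Step 2: copy the new file to another folder.
            File.Copy(e.FullPath, Path.Combine(@"C:\Archive", e.Name), true);

            // Step 3: note the operation in the Application event log
            // (assumes the "FolderWatcher" event source is already registered).
            EventLog.WriteEntry("FolderWatcher",
                "Copied " + e.Name + " to C:\\Archive.", EventLogEntryType.Information);
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching C:\\Drop; press Enter to stop.");
        Console.ReadLine();
    }
}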

And the job of actually launching and running these runbooks in Opalis was primarily the IT Pros’. But in System Center Orchestrator 2012 we take it to the next level and provide benefit for these additional “audiences”:

IT Business Manager – Orchestrator gives business managers and application owners direct visibility into the processes that they are interested in or have oversight for. Through a web console they can gain quick access. They can pull information from the product through the provided web service and plug it into BI tools, or use Excel PowerPivot to work with the data as a data feed (see the sketch after these role descriptions).

IT Operator – Initiate, monitor, and troubleshoot automated tasks.

Developer - You can build your apps to include support for being driven by, and reporting to, Orchestrator in the form of Integration Packs (IPs).

IT Professional – As before, the job of authoring, testing, and debugging the runbooks is your main focus here.
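
As a hedged illustration of the web-service access described above, here is a minimal C# sketch that reads the Orchestrator web service as an OData feed; the service URL and the Job entity shape are assumptions for illustration, not the documented schema:

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;

// Hypothetical shape of a job entry in the feed; property names are assumed.
[DataServiceKey("Id")]
public class Job
{
    public Guid Id { get; set; }
    public string Status { get; set; }
}

class OrchestratorFeedDemo
{
    static void Main()
    {
        // Assumed endpoint; substitute your Orchestrator web service URL.
        var context = new DataServiceContext(
            new Uri("http://orchestrator-server:81/Orchestrator2012/Orchestrator.svc/"));

        // Pull the first ten jobs from the feed, much as PowerPivot would consume it.
        foreach (Job job in context.Execute<Job>(new Uri("Jobs?$top=10", UriKind.Relative)))
            Console.WriteLine("{0}: {1}", job.Id, job.Status);
    }
}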

System Center Orchestrator 2012, like the other parts of the System Center 2012 product set, is well integrated as a part of the whole solution, and works on behalf of the whole business and the needs of people based on their roles.

“Is there a beta or RC available for Orchestrator 2012? And when will it be released?”

I don’t know when exactly the release date is, though I’m fairly confident that it will come out along with the other products in the System Center 2012 suite. And yes, there is a Release Candidate of Orchestrator 2012 that can be downloaded from HERE, along with the other prerelease System Center 2012 products.

I hope you have found this contribution to our 30-part series useful. Let me know in the comments. And if you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.


<Return to section navigation list>

Cloud Security and Governance

Yung Chou described a new TechNet Video: Your Three Winning Numbers of Cloud Computing on 11/14/2011:

Download: WMV | WMA | MP3

Although cloud computing is emerging as a main delivery vehicle for IT services, there is much confusion about what it is and how it works.

This session brings clarity by examining the facts and common knowledge of IT and builds an essential body of knowledge about cloud computing with three easy-to-remember numbers. The discussion includes definitions of cloud computing, where the opportunities are, and how IT professionals can play an enabling role in transforming an existing IT establishment into a cloud-friendly, cloud-ready environment. We review the architecture and examine the service delivery models of cloud computing while we step through business scenarios and the operations of employing public cloud and private cloud.


<Return to section navigation list>

Cloud Computing Events

Steve Levine interviewed Randy Bias (@randybias) at Cloud Expo 2011 on 11/14/2011, as reported in this “Cloud Management”, CloudStack, and Other Musings post:

Randy Bias [pictured at right] gave an on-camera interview to Steve Levine of TheCloudist TV at Cloud Expo in Santa Clara last week. Known for calling it like he sees it, Randy covers a lot of ground in just a few minutes:

  • “Cloud management” is a term that confuses the market. Smart cloud architects must figure out what vendors using this term really mean: virtual server management, application management, infrastructure management, governance management, or some other function in a particular layer of the stack.
  • PaaS has huge potential, but it’s struggling because people think they understand SaaS and IaaS better and hence focus their efforts there first.
  • OpenStack’s rapid rise underscores how big the demand is for an open cloud environment that scales. The desire, however, is ahead of the technology, and there’s more smoke than fire in OpenStack at the moment. That’s changing fast, as companies like CloudScaling, Piston, and Nebula put OpenStack clouds into production.
  • Cloud.com’s CloudStack is not truly OpenStack. Speculating that Citrix’s purchase of the company was largely defensive, Randy suggests that the bidding war to win Cloud.com might have led Citrix to look for ways to monetize its investment. Wrapping CloudStack in the OpenStack banner would be one way to get there.

Watch the video, and tell us what you think.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

James Hamilton crowed about Amazon Web Services #42 rating in the Top 500 Supercomputer Sites list for EC2 clusters in his 42: The Answer to the Ultimate Question of Life, the Universe, and Everything post of 11/15/2011:

Yesterday the Top500 Supercomputer Sites list was announced. The Top500 list shows the most powerful commercially available supercomputer systems in the world. This list represents the outer limit of what supercomputer performance is possible when cost is no object.

The top placement on the list is always owned by a sovereign-funded laboratory. These are the systems that only government-funded agencies can purchase. But they hold great interest for me because, as the cost of computing continues to fall, these performance levels will become commercially available to companies wanting to run high-scale models and data-intensive computing. In effect, the Top500 predicts the future, so I'm always interested in the systems on the list.

What makes this list of the fastest supercomputers in the world released yesterday particularly unusual can be found at position #42. 42 is an anomaly of the first order. In fact, #42 is an anomaly across enough dimensions that it's worth digging much deeper.

Virtualization Tax is Now Affordable:

I remember reading through the detailed specifications when the Cray 1 supercomputer was announced and marveling that it didn’t even use virtual memory. It was believed at the time that only real-mode memory access could deliver the performance needed.

We have come a long way in the nearly 40 years since the Cray 1 was announced. This #42 result was run not just using virtual memory, but with virtual memory in a guest operating system running under a hypervisor. This is the only fully virtualized, multi-tenant supercomputer on the Top500, and it shows what is possible as the virtualization tax continues to fall. This is an awesome result, and many more virtualization improvements are coming over the next two to three years.

Commodity Networks can Compete at the Top of the Performance Spectrum:

This is the only Top500 entrant below number 128 on the list that is not running either Infiniband or a proprietary, purpose-built network. This result at #42 uses an all-Ethernet network, showing that a commodity network, if done right, can produce industry-leading performance numbers.

What’s the secret? 10Gbps directly to the host is the first part. The second is a fully non-blocking network fabric where all systems can communicate at full line rate at the same time. Worded differently, the network is not oversubscribed. See Datacenter Networks are in my Way for more on the problems with existing datacenter networks.

Commodity Ethernet networks continue to borrow implementation approaches and good network architecture ideas from Infiniband, and scale economics continue to drive down costs, so non-blocking networks are now practical and affordable while innovation proceeds rapidly. Commodity equipment in a well-engineered overall service is where I see the future of networking continuing to head.

Anyone can own a Supercomputer for an hour:

You can’t rent supercomputing time by the hour from Lawrence Livermore National Laboratory, and Sandia is not doing it either. But you can have a top-50 supercomputer for under $2,600/hour (1,064 nodes × $2.40/node-hour ≈ $2,554/hour). That is one of the world’s most powerful high-performance computing systems, with 1,064 nodes and 8,512 cores, for under $3k/hour. For those of you not needing quite this much power at one time, that’s $0.05/core hour, ½ of the previous Amazon Web Services HPC system cost.

Single node speeds and feeds:

  • Processors: 2-socket, 8-core-per-socket Intel Xeon @ 2.6 GHz with Hyper-Threading
  • Memory: 60.5GB
  • Storage: 3.37TB direct attached and Elastic Block Store for remote storage
  • Networking: 10Gbps Ethernet with full bisection bandwidth within the placement group
  • Virtualized: Hardware Assisted Virtualization
  • API name: cc2.8xlarge

Overall Top500 Result:

  • 1064 nodes of cc2.8xlarge
  • 240.09 TFlops at an excellent 67.8% efficiency
  • $2.40/node hour on demand
  • 10Gbps non-blocking Ethernet networking fabric

Database Intensive Computing:

This is a database machine masquerading as a supercomputer. You don’t have to use the floating point units to get full value from renting time on this cluster. It’s absolutely a screamer as an HPC system. But it also has the potential to be the world’s highest performing MapReduce system (Elastic Map Reduce) with a full bisection bandwidth 10Gbps network directly to each node. Any database or general data intensive workload with high per-node computational costs and/or high inter-node traffic will run well on this new instance type.

If you are network bound, compute bound, or both, the EC2 cc2.8xlarge instance type could be the right answer. And the amazing thing is that the cc2 instance type is ½ the cost per core of the cc1 instance.

Supercomputing is now available to anyone for $0.05/core hour. Go to http://aws.amazon.com/hpc-applications/ and give it a try. You no longer need to be a national lab or a government agency to be able run one of the biggest supercomputers in the world.


Jeff Barr (@jeffbarr) described Next Generation Cluster Computing on Amazon EC2 - The CC2 Instance Type in an 11/14/2011 post:

You no longer need to build your own compute cluster in order to tackle your High Performance Computing (HPC) projects. By launching cloud-based compute instances on an as-needed basis, you can avoid waiting in lengthy queues for limited access to shared resources.

We've pushed the bounds of cloud-based HPC in the past with the introduction of our Cluster Compute and Cluster GPU instances. Both of these instance types have been used in a wide variety of High Performance Computing scenarios (See our High Performance Computing page for even more info).

Today we are introducing a new member of the Cluster Compute Family, the Cluster Compute Eight Extra Large. The API name of this instance is cc2.8xlarge so we've taken to calling it the CC2 for short. This instance features some incredible specifications at a remarkably low price. Let's take a look at the specs:

Processing - The CC2 instance type includes 2 Intel Xeon processors, each with 8 hardware cores. We've enabled Hyper-Threading, allowing each core to process a pair of instruction streams in parallel. Net-net, there are 32 hardware execution threads and you can expect 88 EC2 Compute Units (ECUs) from this 64-bit instance type. That's nearly 90x the rating of the original EC2 small instance, and almost 3x the rating of the first-generation Cluster Compute instance.

Storage - On the storage front, the CC2 instance type is packed with 60.5 GB of RAM and 3.37 TB of instance storage.

Networking - As a member of our Cluster Compute family, this instance is connected to a 10 Gigabit network and offers low latency connectivity with full bisection bandwidth to other CC2 instances within a Placement Group. You can create a Placement Group using the AWS Management Console:
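
If you prefer the command line to the console, a rough sketch with the EC2 API tools of the era might look like the following; the option spellings and the AMI ID are assumptions, so check the EC2 command reference:

ec2-create-placement-group my-hpc-group -s cluster
ec2-run-instances ami-xxxxxxxx -n 2 -t cc2.8xlarge --placement-group my-hpc-group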

Pricing - You can launch an On-Demand CC2 instance for just $2.40 per hour. You can buy Reserved Instances, and you can also bid for CC2 time on the EC2 Spot Market. We have also lowered the price of the existing CC1 instances to $1.30 per hour.

You have the flexibility to choose the pricing model that works for you based on your application, your budget, your deadlines, and your ability to utilize the instances. We believe that the price-performance of this new instance type, combined with the number of ways that you can choose to acquire it, will result in a compelling value for scientists, engineers, and researchers.

Operating Systems - This instance type uses hardware-assisted virtualization (HVM), so you'll need to choose an AMI accordingly. You can use the Amazon Linux AMI or Windows 2008 R2. You can also install HPC Pack 2008 R2 Express (read Microsoft's HPC Server FAQ for more info).

We have updated the Amazon EC2: Microsoft Windows Guide with instructions on setting up an HPC cluster complete with an Active Directory Domain Controller, a DNS server, a Head Node and one or more Compute Nodes.

Speed - We have submitted benchmark results for HPL to the Top500 site. The November list came out earlier today and we are ranked at position number 42, with a speed of 240.09 teraFLOPS.

On a somewhat smaller scale, you can launch your own array of 290 CC2 instances and create a Top500-class supercomputer (63.7 teraFLOPS) at a cost of less than $1000 per hour (perhaps a lot less, depending on conditions in the Spot Market). The Top500 result above was obtained using a cluster of 1064 instances.

Launch - My colleague Dr. Matt Wood cooked up a CloudFormation template to make it easy for you to get started with CC2 instances. The template uses MIT's StarCluster to create a fully functioning cluster for loosely coupled or tightly parallel compute tasks with a single click. Matt says that the template will do the following:

  • Provision a new 2 node CC2 cluster with 32 hyperthreaded cores, into a new placement group.
  • Attach NFS storage, monitoring, and a 200 GB AWS Public Data Set.

The template creates a new t1.micro instance, which acts as a controller for the rest of the elastic cluster. From a basic Amazon Linux AMI, CloudFormation bootstraps all dependencies, installs and configures StarCluster and creates the necessary security credentials before provisioning the CC2 instances which spin up ready to accept jobs via Sun Grid Engine.
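
As a concrete taste of what such a template contains, here is a minimal, hypothetical CloudFormation fragment in the same spirit; the resource name, AMI ID, and bootstrap script are invented, and Matt's actual template is far more complete:

{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Sketch: t1.micro controller that bootstraps StarCluster",
  "Resources" : {
    "ClusterController" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-xxxxxxxx",
        "InstanceType" : "t1.micro",
        "UserData" : { "Fn::Base64" : "#!/bin/bash\npip install StarCluster\n" }
      }
    }
  }
}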

You can spin up the stack, log in to the controller instance and hop onto the cluster master to submit jobs, or scale the ad-hoc cluster up and down in just a few clicks. That's pretty cool for just 169 lines of declarative JSON. You can get started very quickly with this friendly button:

We are making the CC2 instance available as a public beta so a few caveats apply:

  • The instances are available in a single Availability Zone in the US East (Northern Virginia) Region. We plan to add capacity in other EC2 Regions throughout 2012. Please feel free to contact us if you are interested in CC2 support in other Regions.
  • You can run 2 CC2 instances by default. If you would like to run larger jobs, please submit an EC2 instance increase request.
  • You cannot currently launch instances of this type within a Virtual Private Cloud (VPC).

Supercomputing 11 - The AWS team will be out in force at the Supercomputing 11 conference (November 12-18 in Seattle); here's a summary of AWS activities at SC11. Our booth (#6202) will be open from the 14th to the 17th (come by and say hello). My colleague Dr. Deepak Singh will participate in a panel on SaaS-Based Research. Deepak will host a BoF session on HPC in the Cloud; Matt Wood will host a session on Genomics in the Cloud.

What can you do with a supercomputer of your very own?

Amazon Web Services providing new CC2 instances on the same day that the Windows Azure Team announced its Windows Azure HPC Scheduler SDK is an interesting coincidence.


<Return to section navigation list>
