Tuesday, May 10, 2011

Windows Azure and Cloud Computing Posts for 5/9/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Update 5/10/2011: Added Python Tools for Visual Studio – Beta2 (just released!) to the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below and fixed Technorati tags.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate to.


Azure Blob, Drive, Table and Queue Services

Wade Wegner (@wadewegner) described Getting Started with the Windows Azure Toolkit for iOS in a 5/6/2011 post:

I am extremely excited to announce the immediate availability of the Windows Azure Toolkit for iOS!

This first release of the Windows Azure Toolkit for iOS provides an easy and convenient way of accessing Windows Azure storage from iOS-based applications.  As with the Windows Azure Toolkit for Windows Phone 7 we will continue to bring additional capabilities to the toolkit, such as push notifications, Access Control Service, and more.


You can get the toolkit—and all the source code—on github:

The toolkit works in two ways: it can access Windows Azure storage directly, or it can go through a proxy service. The proxy service uses the same code as the Windows Azure Toolkit for Windows Phone 7 and removes the need for the developer to store the Azure storage credentials locally on the device.

The release of the Windows Azure Toolkit for iOS is a significant milestone, and reinforces my opinion that Windows Azure is a great place to run services for mobile applications.

Setting up your Windows Azure services

To quickly get your mobile services up and running in Windows Azure, take a look at the Cloud Ready Package for Devices (found under downloads in https://github.com/microsoft-dpe/watoolkitios-lib).

The Cloud Ready Package for Devices is designed to make it easier for you to build mobile applications that leverage cloud services running in Windows Azure. Instead of having to open up Visual Studio and compile a solution with the services you want to use, we provide you with the Windows Azure CSPKG and CSCFG files prebuilt – all you need to do is update the configuration file to point to your account.
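
As an illustration, pointing the package at your account typically means editing a connection-string setting in the CSCFG file. The sketch below is only a guess at the shape of that file; the service name, role name, and setting name are placeholders, so check the CSCFG that ships with the Cloud Ready Package for the exact entries:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="CloudReadyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="1" />
    <ConfigurationSettings>
      <!-- Replace with the storage account name and key from the Windows Azure portal -->
      <Setting name="DataConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT_NAME;AccountKey=YOUR_ACCOUNT_KEY" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>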

In this video, you’ll see how easy it is to deploy this package to Windows Azure regardless of your operating system (e.g. Windows 7 or OSX) and target device (e.g. Windows Phone 7, iOS, or Android).

Unpacking the v1.0.0 library zip file

You can download the compiled storage library on github (found under downloads in https://github.com/microsoft-dpe/watoolkitios-lib).  When you unzip the file, you’ll find several folders:

  • /4.3-device – the library binary for iOS 4.3 (device)
  • /4.3-simulator – the library binary for iOS 4.3 (simulator)
  • /include – the headers for the library

Creating your first project using the toolkit

If you are not familiar with Xcode, this is a short tutorial for getting your first project up and running. Launch Xcode 4 and create a new project:


Select a View-based application and click Next.

Give the project a name and company. For the purposes of this walkthrough, we’ll call it “FirstAzureProject”. Do not include Unit Tests.


Pick a folder to save the project to, and uncheck the source code repository checkbox.

When the project opens, right click on the Frameworks folder and select “Add Files to…”


Locate the libwatoolkitios.a library file from the download package folder (from either the simulator or device folder), and add it to the Frameworks folder.


Now, click on the topmost project (FirstAzureProject) in the left-hand column.  Click on the target in the second column.  Click on the “Build Settings” header in the third column.  Ensure that the “All” button is selected to show all settings.

In the search box, type in “header search” and look for an entry called “Header Search Paths”:


Double-click on this line (towards the right of the line), and click on the “+” button in the lower left.


Add the path to the folder containing the header files (this is the include folder from the download).  For example, "~/Desktop/v1.0.0/include" if you have extracted the folder on your desktop.  Be sure to enclose the path in quotes if it contains spaces.


Now, click on the “Build Phases” tab and expand the “Link Binary with Libraries” section:


Click on the “+” button in the lower left, and scroll down until you find a library called “libxml2.2.7.3.dylib”.  Add this library to your project.

Testing Everything Works

Now that you’ve added all of the required references, let’s test that the library can be called.  To do this, double click on the [ProjectName]AppDelegate.m file (e.g. FirstAzureProjectAppDelegate.m), and add the following imports to the class:

#import "AuthenticationCredential.h" 
#import "CloudStorageClient.h"

Perform a build.  If the build succeeds, the library is correctly added to the project.  If it fails, it is recommended to go back and check the header search paths.

Assuming it builds, in the .m file, add the following declarations after the @synthesize lines:

AuthenticationCredential *credential; 
CloudStorageClient *client;

Now, add the following lines after the [self.window makeKeyAndVisible] line in the didFinishLaunchingWithOptions method:

credential = [AuthenticationCredential credentialWithAzureServiceAccount:@"ACCOUNT_NAME" accessKey:@"ACCOUNT_KEY"]; 
client = [CloudStorageClient storageClientWithCredential:credential]; 
[client getBlobContainersWithBlock:^(NSArray* containers, NSError* error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"%i containers were found…",[containers count]); 
   } 
}];

Be sure to replace ACCOUNT_NAME and ACCOUNT_KEY with your Windows Azure storage account name and key, available on the Windows Azure portal (http://windows.azure.com).

Build and run the project.  You should see something similar to the following output in the debug window:

    2011-05-06 18:18:46.001 FirstAzureProject[27456:207] 2 containers were found…

The last line shows that this account has 2 containers.  This will of course vary, depending on how many blob containers you have set up in your own Windows Azure account.

Doing more with the toolkit

Feel free to explore the class documentation to learn more about the toolkit API.  To help, here are some additional examples:

In [ProjectName]AppDelegate.m class, add the following headers:

#import "AuthenticationCredential.h" 
#import "CloudStorageClient.h" 
#import "BlobContainer.h" 
#import "Blob.h" 
#import "TableEntity.h" 
#import "TableFetchRequest.h"

In the didFinishLaunchingWithOptions method, after the [self.window makeKeyAndVisible] line, try testing a few of the following commands.  Again, running the project will return results into the debugger window.

To authenticate using account name and key:

    credential = [AuthenticationCredential credentialWithAzureServiceAccount:@"ACCOUNT_NAME" accessKey:@"ACCOUNT_KEY"];

To authenticate instead using the proxy service from the Windows Phone 7 toolkit, you can use the following:

credential = [AuthenticationCredential authenticateCredentialWithProxyURL:[NSURL URLWithString:@"PROXY_URL"] user:@"USERNAME" password:@"PASSWORD" withBlock:^(NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"Successfully logged in"); 
   } 
}];

Replace the PROXY_URL, USERNAME, and PASSWORD with the information required to access your proxy service.

To create a new client using the credentials:

    client = [CloudStorageClient storageClientWithCredential:credential];

To list all blob containers (this method is not supported via the proxy server):

// get all blob containers 
[client getBlobContainersWithBlock:^(NSArray *containers, NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"%i containers were found…",[containers count]); 
   } 
}];

To get all blobs within a container (this also is not supported by the proxy):

// get all blobs within a container 
[client getBlobs:@"images" withBlock:^(NSArray *blobs, NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"%i blobs were found in the images container…",[blobs count]); 
   } 
}];

To get all tables from storage (this works with both direct access and proxy):

// get all tables 
[client getTablesWithBlock:^(NSArray* tables, NSError* error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"%i tables found",[tables count]); 
   } 
}];

To create a table (works with both direct access and proxy):

// create table 
[client createTableNamed:@"wadestable" withBlock:^(NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"Table created"); 
   } 
}];

To delete a table (works with both direct access and proxy):

//delete a table 
[client deleteTableNamed:@"wadestable" withBlock:^(NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"Table was deleted"); 
   } 
}];

To get entities for a table (works with both account key and proxy):

// get entities for table developers 
TableFetchRequest* fetchRequest = [TableFetchRequest fetchRequestForTable:@"Developers"]; 
[client getEntities:fetchRequest withBlock:^(NSArray *entities, NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"%i entities found in the developer table",[entities count]); 
   } 
}];

To get entities for a table using a predicate (works with both account key and proxy):

// get entities for table developers with predicate request 
NSError* error = nil; 
NSPredicate* predicate = [NSPredicate predicateWithFormat:@"Name = 'Wade' || Name = 'Vittorio' || Name = 'Nathan'"]; 
TableFetchRequest* anotherFetchRequest = [TableFetchRequest fetchRequestForTable:@"Developers" predicate:predicate error:&error]; 
[client getEntities:anotherFetchRequest withBlock:^(NSArray *entities, NSError *error) 
{ 
   if (error) 
   { 
      NSLog(@"%@",[error localizedDescription]); 
   } 
   else 
   { 
      NSLog(@"%i entities returned by this request",[entities count]); 
   } 
}];

Doing even more with the toolkit

If you are looking to explore the toolkit further, I recommend looking at the sample application that can be found in the watoolkitios-samples project.  This project demonstrates all of the functionality of the toolkit, including creating, uploading, and retrieving entities from both table and blob storage.


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi reported SQL Azure Diagnostics Tool Available in a 5/9/2011 post to the SQL Azure team blog:

Evan Basalik has written a post which details the arrival of CSS SQL Azure Diagnostics (CSAD), developed by our support team. The tool has been developed to shorten the data collection process when troubleshooting SQL Azure issues. You can point CSAD to your SQL Azure instance, provide the appropriate credentials, and you will then be presented with some good summary data about your instance.

You can download the CSAD tool here.

The full detailed article is on the CSS SQL Server Engineers’ blog: CSS SQL Azure Diagnostics tool released – Check it out [see below].


Evan Basalik posted CSS SQL Azure Diagnostics tool released on 4/25/2011 (missed when posted):

I am happy to announce that CSS SQL Azure Diagnostics (CSAD) has been released. Since you cannot use PSSDiag/SQLDiag against SQL Azure, I decided to develop this tool to shorten the data collection process when troubleshooting SQL Azure issues.  You can point CSAD to your SQL Azure instance, provide the appropriate credentials, and you will then be presented with some good summary data about your instance.  Since I leverage the standard ReportViewer control, you can also export the reports to a number of different formats.  This makes it easy to share the reports with either your colleagues or CSS.  In addition, CSAD is a Click-Once application, so it has a very light installation and it always checks for the latest version.  (For some more details on the installation, see the very end of this post).

You can download it from http://csssqlazure.blob.core.windows.net/csssqlazuredeploy/publish.htm or click on the link above.

Let’s walk through using CSAD:

1)  Enter your server and user information


2)  Click “GO”


That’s it!

Now for the more interesting part of this post and walk through the results you get back…

The first thing you will see is a general information section:


Although there are just a couple of things in this section right now, it is a key area.  Here is where you can see your database size, plus CSAD runs some tests to see if you are running into any known service issues that have not yet been addressed.  As CSAD continues to develop, it will add more information here like SKU, version, etc.

Next you will see the first of the core tables – Top 10 CPU consumers:


This shows your queries that are consuming the most CPU, plus some pertinent information about these queries.  You can use this table to figure out which queries likely need some tuning.
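
Since Evan notes later in the post that everything CSAD pulls comes from public DMVs, a query along the following lines gives an idea of what a "top CPU consumers" table is built from. This is not necessarily the query CSAD itself runs; it is just a typical sys.dm_exec_query_stats example that works against both SQL Azure and an on-premises SQL Server instance:

SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS avg_cpu_time,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
            WHEN -1 THEN DATALENGTH(st.text)
            ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_cpu_time DESC;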

Next, you will see your longest running queries:


If you continue down through the pages, you will then see your top logical and physical I/O consuming queries:


These last two tables should give you a pretty good idea of which queries are missing an index or have an incorrect index.  (NOTE: One of the next features I am adding is the ability to identify the missing index and generate the appropriate TSQL to create the index).

Lastly, I want to point out that you have the ability to either print or export this report:


The beauty of CSS SQL Azure Diagnostics is that it doesn’t use any inside information.  None – everything that is pulled is pulled from public DMVs.  In fact (you can test this by unchecking “SQL Azure database” at the top of the page), you can run the exact same queries against an on-premises instance of SQL Server and get the exact same data back.  This is going to be one of the tenets of CSAD going forward – it will always only use queries and information that anybody can use against any SQL Server instance in the world – be it on-premises or in the cloud.  (NOTE:  Although the DMVs used are public, I don’t yet have them documented in the tool itself.  I promise to do that in a near-term release, though.  In addition, when I document the DMV queries, I will add a lot more information on the different columns in each table to help you interpret them).

INSTALLATION DETAILS

1)  CSAD does require the installation of the ReportViewer 2010 and the .NET 4.0 Client Profile.  It should check for both components on install, but you can also install them separately:

2)  No reboot is necessary

3)  Each time CSAD starts up, it checks the Azure blob storage location for a newer version and updates itself if necessary

4)  You can uninstall it by going to Control Panel –> Add/Remove Programs

5)  I have already seen a few isolated instances where the ReportViewer control wouldn’t install.  If you run into that scenario, just install it separately using the link above

P.S.  Thanks to Chris Skorlinski for providing me with the original DMV queries.


Liam Cavanagh conducted a 00:10:49 Walkthrough of SQL Azure DataSync by Liam Cavanagh Bing Video found 5/9/2011:


In this short screen capture, Liam Cavanagh provides a tour of SQL Azure Data Sync for developers.

The page has links to several other SQL Azure Data Sync and related videos.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Maarten Balliauw (@MaartenBalliauw) explained Using dynamic WCF service routes in a 5/9/2011 post:

For a demo I am working on, I’m creating an OData feed. This OData feed is in essence a WCF service which is activated using System.ServiceModel.Activation.ServiceRoute. The idea of using that technique is simple: map an incoming URL route, e.g. “http://example.com/MyService” to a WCF service. But there’s a catch in ServiceRoute: unlike ASP.NET routing, it does not support the usage of route data. This means that if I want to create a service which can exist multiple times but in different contexts, like, for example, a “private” instance of that service for a customer, the ServiceRoute will not be enough. No support for having http://example.com/MyService/Contoso/ and http://example.com/MyService/AdventureWorks to map to the same “MyService”. Unless you create multiple ServiceRoutes which require recompilation. Or… unless you sprinkle some route magic on top!

Implementing an MVC-style route for WCF

Let’s call this thing DynamicServiceRoute. The goal of it will be to achieve a working ServiceRoute which supports route data and which allows you to create service routes of the format “MyService/{customername}”, like you would do in ASP.NET MVC.

First of all, let’s inherit from RouteBase and IRouteHandler. No, not from ServiceRoute! The latter is so closed that it’s basically a no-go if you want to extend it. Instead, we’ll wrap it! Here’s the base code for our DynamicServiceRoute:

public class DynamicServiceRoute : RouteBase, IRouteHandler
{
    private string virtualPath = null;
    private ServiceRoute innerServiceRoute = null;
    private Route innerRoute = null;

    public static RouteData GetCurrentRouteData()
    {
    }

    public DynamicServiceRoute(string pathPrefix, object defaults, ServiceHostFactoryBase serviceHostFactory, Type serviceType)
    {
    }

    public override RouteData GetRouteData(HttpContextBase httpContext)
    {
    }

    public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
    {
    }

    public System.Web.IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
    }
}

As you can see, we’re creating a new RouteBase implementation and wrapping 2 routes: an inner ServiceRoute and an inner Route. The first one will hold all our WCF details and will, in one of the next code snippets, be used to dispatch and activate the WCF service (or an OData feed or …). The latter will be used for URL matching: no way I’m going to rewrite the URL matching logic if it’s already there for you in Route.

Let’s create a constructor:

public DynamicServiceRoute(string pathPrefix, object defaults, ServiceHostFactoryBase serviceHostFactory, Type serviceType)
{
    if (pathPrefix.IndexOf("{*") >= 0)
    {
        throw new ArgumentException("Path prefix can not include catch-all route parameters.", "pathPrefix");
    }
    if (!pathPrefix.EndsWith("/"))
    {
        pathPrefix += "/";
    }
    pathPrefix += "{*servicePath}";

    virtualPath = serviceType.FullName + "-" + Guid.NewGuid().ToString() + "/";
    innerServiceRoute = new ServiceRoute(virtualPath, serviceHostFactory, serviceType);
    innerRoute = new Route(pathPrefix, new RouteValueDictionary(defaults), this);
}

As you can see, it accepts a path prefix (e.g. “MyService/{customername}”), a defaults object (so you can say new { customername = “Default” }), a ServiceHostFactoryBase (which may sound familiar if you’ve been using ServiceRoute) and a service type, which is the type of the class that will be your WCF service.

Within the constructor, we check for catch-all parameters. Since I’ll be abusing those later on, it’s important that the user of this class cannot make use of them. Next, a catch-all parameter {*servicePath} is appended to the pathPrefix parameter. I’m doing this because I want all calls to a path below “MyService/somecustomer/…” to match for this route. Yes, I can try to do this myself, but again this logic is already available in Route so I’ll just reuse it.

One other thing that happens is a virtual path is generated. This will be a fake path that I’ll use as the URL to match in the inner ServiceRoute. This means if I navigate to “MyService/SomeCustomer” or if I navigate to “MyServiceNamespace.MyServiceType-guid”, the same route will trigger. The first one is the pretty one that we’re trying to create, the latter is the internal “make-things-work” URL. Using this virtual path and the path prefix, simply create a ServiceRoute and Route.

Actually, a lot of work has been done in 3 lines of code in the constructor. What’s left is just an implementation of RouteBase which calls the corresponding inner logic. Here’s the meat:

public override RouteData GetRouteData(HttpContextBase httpContext)
{
    return innerRoute.GetRouteData(httpContext);
}

public override VirtualPathData GetVirtualPath(RequestContext requestContext, RouteValueDictionary values)
{
    return null;
}

public System.Web.IHttpHandler GetHttpHandler(RequestContext requestContext)
{
    requestContext.HttpContext.RewritePath("~/" + virtualPath + requestContext.RouteData.Values["servicePath"], true);
    return innerServiceRoute.RouteHandler.GetHttpHandler(requestContext);
}

I told you it was easy, right? GetRouteData is used by the routing engine to check if a route matches. We just pass that call to the inner route which is able to handle this. GetVirtualPath will not be important here, so simply return null there. If you really really feel this is needed, it would require some logic that creates a URL from a set of route data. But since you’ll probably never have to do that, null is good here. The most important thing here is GetHttpHandler. It is called by the routing engine to get a HTTP handler for a specific request context if the route matches. In this method, I simply rewrite the requested URL to the internal, ugly “MyServiceNamespace.MyServiceType-guid” URL and ask the inner ServiceRoute to have fun with it and serve the request. There, the magic just happened.

Want to use it? Simply register a new route:

var dataServiceHostFactory = new DataServiceHostFactory();
RouteTable.Routes.Add(new DynamicServiceRoute("MyService/{customername}", null, dataServiceHostFactory, typeof(MyService)));
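
Maarten doesn’t show where this registration lives; in a typical ASP.NET application it would go in Application_Start in Global.asax, roughly like the following sketch (the MyService type name is just the placeholder from his example):

using System;
using System.Data.Services;
using System.Web.Routing;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register the dynamic route once, when the application starts.
        var dataServiceHostFactory = new DataServiceHostFactory();
        RouteTable.Routes.Add(new DynamicServiceRoute("MyService/{customername}", null, dataServiceHostFactory, typeof(MyService)));
    }
}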

Conclusion

Why would you need this? Well, imagine you are building a customer-specific service where you want to track service calls for a specific customer. For example, if you’re creating private NuGet repositories. And yes, this was a hint at a future blog post :-)

Feel this is useful to you as well? Grab the code here: DynamicServiceRoute.cs (1.94 kb)


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Vittorio Bertocci (@vibronet) reported Windows Azure Toolkit for Windows Phone 7 1.2 will Integrate with ACS on 5/9/2011:


I am sure many of you already know about the Windows Azure Toolkit for Windows Phone 7, a set of Visual Studio templates and samples which makes it easier for you to take advantage of the Windows Azure platform from your Windows Phone 7 applications.

This morning Wade disclosed the next phase in the Windows Azure initiatives for mobile platforms: [see below] he announced the availability of a new toolkit for taking advantage of Windows Azure from iOS applications, and anticipated some juicy details about the next release of the Windows Azure Toolkit for Windows Phone 7 (expected to release during TechEd next week).

The new feature I’ve been helping out with is the ACS integration: the Visual Studio template in the TechEd release of the Windows Azure Toolkit for Windows Phone 7 will offer you the possibility of using ACS for handling authentication and authorization for your Windows Phone 7 application.

Now, I don’t want to ruin the surprise too much for you, and we are not 100% done on the new release hence some minor details may still change, but we thought it would be nice to give you a bit of a teaser.

The feature

One of the main strengths of the toolkit lies in its simplicity: install the kit, create a new project using its template, cruise through an absolutely minimalistic dialog and bam, you’re done. Hit F5 and you’re already in business. With the current versions you don’t need a Windows Azure subscription to experiment with the toolkit, in fact you don’t even need an internet connection (which is quite remarkable for a toolkit about cloud technologies!).

In the same thrift spirit, authentication and authorization were handled as simply as they could be: a classic brute-force membership store with associated role provider.

Since the toolkit release (and since WP7 developer tools, actually) we heard loud and clear that you guys want to take advantage of claims-based identity on the phone too. Last April we added a new lab to the Identity Developer Training Kit and the Windows Azure Platform Training Kit, a lab which shows how to take advantage of ACS for handling sign-in and authorization via Windows Live ID/Facebook/Google/Yahoo in your phone apps. That lab is great (I’m told) for understanding how the integration works, but if you want immediate satisfaction and just want to get the feature in your app, the lab is not the right delivery vehicle.

That’s why we decided to add ACS integration within the toolkit template: if you have a namespace with ACS, even if it’s completely empty, with just few clicks you can go from zero to federated sign-in.

This introduces new requirements for running the toolkit, namely an internet connection and a namespace with ACS, however some of those are inherent to the scenario being enabled (if you want to allow your users to sign in using Facebook, they better have access to the Internet), some are very mild (you do know that you can subscribe and use ACS in production FOR FREE until at least the end of 2011, right?) and we made sure you can always fall back on the former membership approach, in which case you still don’t need the Internet.

The Project Template Setup Wizard

We wanted to keep things as simple as possible: that largely means gathering only the data we can’t do without and using reasonable defaults everywhere else. That means that somebody will want to tweak things here and there, but from my experience I think we nailed what most people want in the general case. Don’t hold back with the feedback!

In a nutshell, what we did was transforming the existing template setup dialog into the first screen of a wizard:


Once you’ve made your choice between using the cloud or the emulator, you can hit Next:


Here you get a choice. You can keep the default and hit OK, which will fall back on the membership authentication strategy the toolkit uses today.
Or you can select “Use the Windows Azure Access Control Service”: you stay in Wonderland and I show you how deep the rabbit-hole goes. Wait no, that was the wrong quote: what I meant is that the wizard displays extra options for setting up ACS.


Your namespace and your management key: that’s the bare minimum we need to ask you in order to set up the project in a way that will allow you to hit OK, hit F5 immediately afterwards and already have a fully functional application. Here’s what we do just after you hit OK on that screen:

  • Go to the ACS namespace and
    • add the sample application service as one RP (as described by the ACS+WP7 lab)
    • Add all the pre-configured identity providers (Windows Live ID, Google, Yahoo)
    • Generate pass-through rules for all the identity providers configured in the namespace and assign the rule group to the RP
    • retrieve the signing key of the SWT tokens for the RP
  • On the phone app
    • hook up the ACS namespace so that the IP list and STS URLs can be correctly generated
  • On the service
    • add the necessary config values so that the service can validate the incoming tokens from ACS (via DPESamples.OAuth extensions to WIF)

It literally could not get simpler than this for the developer. Of course, the tradeoff is that there’s not much control over the process.

The main decisions we are making for you here are:

  • adding all the preconfigured IPs (the ones we can add without having to ask you for extra info) and making all the claims available to the service
  • using all the IPs you may have previously added to the namespace (including ws-federation and Facebook ones) and generating pass-through rules for all of them

We could have given you more control already on the Wizard pages, for example allowing you to choose which individual IPs to turn on and the means to enter a Facebook IP; after all, that’s exactly what we’ve done for the ACS extensions for Umbraco (see below). However we felt that in this case simplicity was the highest order bit. After all, nothing prevents you from using the ACS management portal after the fact and modifying the settings created by the template wizard. If you feel you’d like us to take a different approach, let us know!

The Sample Application

The application generated is a variation of the existing template, where instead of signing up and in with newly generated username and password you use a token obtained via ACS from one of the configured IPs. The general architecture (client which invokes a service façade for Windows Azure resources) remains the same.

The rest of the solution changes accordingly: the service is secured via OAuth2 and WIF, just like you learned in the lab; and the authorization is handled via the same portal, but matching specific claims coming from the IPs of choice. The enforcement is entrusted to a custom ClaimsAuthorizationManager, which reads the settings from the DB at service invocation time and grants or denies access accordingly.
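
For readers who have not written one before, a custom ClaimsAuthorizationManager in WIF is just a class that overrides CheckAccess. The sketch below is not the code the toolkit ships; it only illustrates the shape of such a class, and the "look it up in the database" rule is a placeholder for whatever the template actually stores. It would be registered in web.config under the microsoft.identityModel/service/claimsAuthorizationManager element.

using Microsoft.IdentityModel.Claims;

public class ToolkitAuthorizationManager : ClaimsAuthorizationManager
{
    public override bool CheckAccess(AuthorizationContext context)
    {
        // context.Principal carries the claims issued by ACS; context.Resource and
        // context.Action describe the service call being authorized.
        IClaimsIdentity identity = context.Principal.Identity as IClaimsIdentity;
        if (identity == null || !identity.IsAuthenticated)
        {
            return false;
        }

        // Placeholder rule: a real implementation would compare the incoming claims
        // against the users and roles stored in the application's database.
        foreach (Claim claim in identity.Claims)
        {
            if (claim.ClaimType == ClaimTypes.NameIdentifier && !string.IsNullOrEmpty(claim.Value))
            {
                return true;
            }
        }
        return false;
    }
}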

The sign-up flow is a bit different, due to the different nature of the credentials involved. Instead of requesting a registration step upfront, the application just presents the login screen to the user (in this case it’s the usual HDR list). If the user happens to pick an account that was never used before, the sign-in operation morphs into a sign-up. Note, the screenshots below do not include the UI elements for installing the HTTPS certificate used for securing the communications with the service.


All steps are explicit so that the user knows what’s going on at all times; we could have implemented an implicit registration, but that may not have been in the best interest of the user. For example, I may already have signed up with a Google account at some earlier time and now by mistake I signed in with my Yahoo account; an implicit registration would create a new user and I would be disoriented not finding the resources I created when signed in with the Google account. The registration page also allows the user to specify a user name and email address: those values are pre-populated if suitable claims are made available by the IP of choice (Windows Live ID won’t, for instance) but the user may want to specify different values (ie I may want to sign in using Google but I want my notifications to be sent elsewhere). Would you prefer if the toolkit would behave differently? Make sure to leave your feedback in the discussion section of the codeplex site.

Well, that’s it for this sneak peek. In one week I’ll hopefully get back to you with the announcement that version 1.2 of the Windows Azure Toolkit for Windows Phone 7 is out, and with it the ACS2 goodness I described here. Until then, I suggest having some fun with the current version of the toolkit!


Wade Wegner (@wadewegner) described Updates Coming Soon to the Windows Azure Toolkit for Windows Phone 7 in a 5/9/2011 post:

I’ve been really pleased with our delivery cadence for the Windows Azure Toolkit for Windows Phone 7.  We first launched (v1.0) on March 23, 2011 – this launch included VS project templates, class libraries for storage and membership services, a sample application, and documentation.  We had excellent coverage by Mary Jo Foley, InfoQ, and others.  On April 12, 2011—during MIX11—we shipped an update (v1.1) that provided support for Microsoft Push Notification Services running in Windows Azure (we also shipped some bug fixes).  What’s significant about the second release is that we demonstrated momentum – we want to ship early and ship often.  Furthermore, we are working hard on updates—many of which are based on your feedback—and we’re getting them out as quickly as we can.

We want to stay as transparent as possible regarding our release plans, and consequently I’d like to tell you a bit about three significant updates coming to the next release of the Windows Azure Toolkit for Windows Phone 7.

Windows Azure Access Control Service 2.0

With the help and guidance of Vittorio Bertocci (aka Captain Identity) we are introducing integration with the Access Control Service (ACS) 2.0.  The current (and previous) versions of the toolkit provide a very simple ASP.NET membership store in Windows Azure Tables.  With the next release the new project wizard will give you the ability to choose between ASP.NET membership and the ACS.


The wizard only asks for the essentials and will then configure ACS automatically for you.  The result is that, after the wizard completes, you can hit F5 and run.

For more details on the upcoming ACS integration, take a look at Vittorio’s post: Windows Azure Toolkit for Windows Phone 7 1.2 will Integrate with ACS.

More Style

Inspired by the Metro Design Language of Windows Phone 7, we’ve significantly updated the web application UI.  Take a look at the new look …


… as compared to the old.


Support for Windows Azure Storage Queues

One feature we “missed” in our initial releases—and to be honest it was a matter of prioritization—was support for Windows Azure storage queues.  With our next release we will provide support for queues in our storage library and the sample application.


We hope that you’ll find these three updates significant and worthwhile.

We plan to release version 1.2 of the toolkit at TechEd North America 2011.  As we are still finalizing the bits and pulling together the release, I won’t commit just yet to the exact date but I promise we’ll try to release as soon as possible.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP, Traffic Manager and CDN

Tina Stewart will present COS276-INT Windows Azure Traffic Manager: Improve Application Performance and Availability Using Global Load Balancing on 5/17/2011 at TechEd North America 2011:

  • Tuesday, May 17 | 6:45 PM - 8:00 PM | Room: B302
  • Session Type: Interactive Discussion
  • Level: 200 - Intermediate
  • Track: Cloud Computing & Online Services
  • Evaluate for a chance to win an Xbox 360
  • Speaker(s): Tina Stewart
Businesses are deploying their applications globally thanks to Windows Azure. As demand for their applications grows, developers and IT professionals need tools to intelligently route their traffic around the world. Enter the solution, Windows Azure Traffic Manager. Traffic Manager is a cloud-based intelligent traffic management system. This session introduces Windows Azure Traffic Manager, giving an overview of the technology behind its load-balancing methods. The talk provides walkthroughs of common use scenarios through the Azure portal UI.


Yung Chou continued his series with Cloud Computing in PaaS with Windows Azure Connect (Part 2/2) on 5/9/2011:

With the introduction of Windows Azure Connect, many options for an on-premises application to integrate with or migrate to the cloud at an infrastructure level are available. The integration and migration opportunities will become apparent by examining how applications are architected for on-premises and cloud deployments. These concepts are profoundly important for IT pros to clearly identify, define, and apply while expanding their role and responsibilities into those of a cloud or service architect. In Part 2, let’s first review computing models before making cloud computing a much more exciting technical expedition with Windows Azure Connect.

Then Traditional 3-Tier Application Architecture

Based on a client-server model, the traditional n-tier application architecture carries out a business process in a distributed fashion. For instance, a typical 3-tier web application as shown below includes:

  • Front-end which is the web site exposed to an intended interface, either a public endpoint or an internal URL, for processing incoming HTTP/HTTPS requests
  • Middle-Tier to hold business logic and secure connection to required resources
  • Back-end, data stores


When deployed on premises, IT has physical access to the entire infrastructure and is responsible for all aspects of the lifecycle including configuration, deployment, security, management, and disposition of resources. This has been the deployment model upon which theories, methodologies, and practices have been developed and many IT shops have operated. IT controls all resources and at the same time is responsible for the end-to-end, distributed runtime environment of an application. Frequently, to manage an expected high volume of incoming requests, load balancers which are expensive to acquire and expensive to maintain are placed in front of an application’s front-end. To improve data integrity, clusters which are expensive to acquire and, yes, expensive to maintain are configured at the back-end. Not only do load balancers and clusters increase complexity and pose technical challenges with skillsets that are hard to acquire, but both fundamentally increase the capital expenses and the operational costs throughout the lifecycle of a solution and ultimately the TCO.

Now State-of-the-Art Windows Azure Computing Model

Windows Azure Platform is Microsoft’s Platform as a Service (PaaS) solution. And PaaS here means that an application developed with Windows Azure Platform (which is hosted in data centers by Microsoft around the world) is by default delivered as Software as a Service, or SaaS. From a quick review of the 6-part Cloud Computing for IT Pros series, one will notice that I have already explained the computing concept of Windows Azure (essentially Microsoft's cloud OS) in Computing Model and Fabric Controller. Considering the Windows Azure computing model, a Web Role is there to receive and process incoming HTTP/HTTPS requests from a configured public endpoint, i.e. a web front-end with an internet-facing URL specified while publishing an application to Windows Azure. A Web Role instance is deployed to a (Windows Server 2008 R2) virtual machine with IIS. And the Web Role’s instances of an application are automatically load-balanced by Windows Azure. On the other hand, a Worker Role is like a Windows service or batch job, which starts by itself and is the equivalent of the middle-tier where business logic and back-end connectivity stay in a traditional 3-tier design. And a Worker Role instance is deployed to a virtual machine without IIS in place. The following schematic illustrates the conceptual model.


VM Role is a definition allowing a virtual machine (i.e. a VHD file) to be uploaded and run with the Windows Azure Compute service. There are some interesting points about VM Role. Supposedly, based on separation of responsibilities, in PaaS only the Data and Application layers are managed by consumers/subscribers while the Runtime layer and below are controlled by a service provider, which in the case of Windows Azure Platform is Microsoft. Nevertheless, VM Role in fact makes not only the Data and Application layers, but also the Runtime, Middleware, and OS layers accessible in a virtual machine controlled by a subscriber of Windows Azure Platform, which is by the way a PaaS and not an IaaS offering. This is because VM Role is designed for addressing specific issues, and above all IT pros need to recognize that it is intended as a last resort. Information on why and how to employ VM Role is readily available elsewhere, and is not repeated here.

So, with Windows Azure Platform, the 3-tier design is in fact very much applicable. The Windows Azure design pattern employs a Web Role as a front-end to process incoming requests as quickly as possible, while a Worker Role serves as a middle-tier to do most of the heavy lifting, namely execute business logic against application data. The communication between a Web Role and a Worker Role is done with Windows Azure Queues and is detailed elsewhere.
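
As a rough illustration of that hand-off, the following C# sketch uses the Windows Azure SDK 1.x StorageClient library; the queue name, the message format, and the "DataConnectionString" setting are assumptions for the example, not part of Yung's article:

// Requires Microsoft.WindowsAzure, Microsoft.WindowsAzure.StorageClient and Microsoft.WindowsAzure.ServiceRuntime.
// Shared setup (both roles): resolve the storage account from the service configuration.
CloudStorageAccount account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
queue.CreateIfNotExist();

// Web Role: accept the request quickly and hand the work to the Worker Role.
queue.AddMessage(new CloudQueueMessage("process-order:12345"));

// Worker Role (inside its Run loop): pick up and process messages.
CloudQueueMessage message = queue.GetMessage();
if (message != null)
{
    // ... execute the business logic for this work item ...
    queue.DeleteMessage(message);
}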

With Visual Studio and the Windows Azure SDK, the process of developing a Windows Azure application is very similar to that of an on-premises application. And the steps to publish a Visual Studio cloud project are amazingly simple: just upload two files to the Windows Azure Platform Management Portal. The two files are generated when publishing an intended cloud project in Visual Studio. They are a zipped package of application code and a configuration file, with cspkg and cscfg file extensions, respectively. The publishing process can be further hardened with a certificate for higher security.

Compared with on-premises computing, there are noticeable constraints when deploying application to cloud including:

  • An application must be stateless. Client-specific state information must be passed back to the associated client or stored in durable storage, i.e. Windows Azure storage or SQL Azure. For an on-premises application relying on sticky sessions, cloud computing may present a need to re-architect the data management of the application.
  • The ability for an application to self-initialize, i.e. proceed with installation and program-start without operator intervention, is imperative.

These constraints are related to enabling system management of resource pooling and elasticity which are part of the essential characteristics of cloud computing.

Two important features, high availability and fault tolerance, are automatically provided by Windows Azure, which can significantly reduce the TCO of an application deployed to the cloud compared with that of an on-premises deployment. Here, details of how Windows Azure achieves automatic high availability and fault tolerance are not included. A discussion of this topic is already scheduled to be published in my upcoming blog post. Stay tuned.

An Emerging Application Architecture

With Windows Azure Connect, integrating and extending a 3-tier on-premises deployment to the cloud is now relatively easy to do.  As part of Microsoft's PaaS offering, Windows Azure Connect automatically configures IPSec connectivity to securely connect Windows Azure role instances with on-premises resources, as indicated by the dotted lines in the following schematic. Notice that those role instances and on-premises computers to be connected are first grouped. All members in a group are exposed as a whole, and the connectivity is established at the group level. With IPSec in place, a Windows Azure role instance can join and be part of an Active Directory in a private network. Namely, server and domain isolation with Windows Authentication and group policies can now be applied to cloud computing resources without significant changes to the underlying application architecture. In other words, the Windows security model and system management in a managed environment can now seamlessly include cloud resources, which essentially makes many IT practices and solutions directly applicable to the cloud with minimal changes.


With the introduction of cloud computing, an emerging application architecture is a hybrid model with a combination of components deployed to the cloud and on-premises. With Windows Azure Connect, cloud computing can simply be part of, and does not necessarily encompass, an entire application architecture. This allows IT to take advantage of what Windows Azure Platform is offering, like automatic load balancing and high availability, by migrating selected resources to the cloud, as indicated with the dotted lines in the above schematic, while managing all resources of an application with a consistent security model and domain policies. Whether the front-end of an application is in the cloud or on premises, the middle-tier and the back-end can be a combination of resources with cloud computing and on-premises deployment.

Start Now and Be What’s Next

With Windows Azure Connect, both cloud and on-premises resources are within reach of each other. For IT pros, this reveals a strategic and urgent need to convert existing on-premises computing into a cloud-ready and cloud-friendly environment. This means, if not already underway, starting to build hardware and software inventories, automating and optimizing existing procedures and operations, standardizing the authentication provider, implementing PKI, providing federated identity, etc. The technologies are all here already and solutions readily available. For those feeling Windows Azure Platform is foreign and remote, I highly recommend familiarizing yourselves with Windows Azure before everybody else does. Use the promotion code DPEA01 to get a free Azure Pass without credit card information. And make the first step of upgrading your skills with cloud computing and welcome the exciting opportunities presented to you.

Having an option to get the best of both cloud computing and on-premises deployment and not forced to choose one or the other is a great feeling. It’s like… dancing down the street with a cloud at your feet. And I say that’s amore.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Joxn announced Python Tools for Visual Studio – Beta2 (just released!) to CodePlex on 4/11/2011 (missed when posted):

An integrated environment for developing Python in VS2010
  • Advanced editing, Intellisense, browsing, “Find all refs”, REPL, …
  • Supports CPython and IronPython
  • Local & Cluster/remote debugging
  • Profiling with multiple views
  • Interactive parallel computing via integrated IPython REPL
  • Support for HPC clusters and MPI, including debugging support
  • NumPy & SciPy for .Net
  • Support for Cloud Computing (soon)
  • Support for Dryad (large scale, data-intensive parallel programming) (soon)
  • Free & Open Source (Apache 2.0)
What, Why, Who, ... ?

Python Tools for Visual Studio is a free & open source plug-in for Visual Studio 2010 from Microsoft's Technical Computing Group. PTVS enables developers to use all the major productivity features of Visual Studio to build Python code using either CPython or IronPython and adds new features such as using High Performance Computing clusters to scale your code. Together with one of the standard distros, you can turn Visual Studio into a powerful Technical Computing IDE...

Note: PTVS is not a Python distribution; it works with your existing Python/IronPython installation to provide you an integrated editing and debugging experience.

Features in depth

If you are already a Visual Studio user, you'll find Python to be a natural extension. The walk-through pages of this wiki cover the core features along with new additions such as using Windows HPC clusters, MPI, etc.
Detailed Walk-through – IDE Features
Detailed Walk-through – HPC and Cloud Features
Detailed Walk-through - NumPy and SciPy for .Net

Quick Start Guide
Installation
  1. Uninstall any previous versions of "IronPython Tools" or PTVS (if any)
  2. Install a Python distribution
  3. Install Visual Studio 2010
  4. Run the PTVS installer & you're in business.

Installation – more details

Supported Environments
  • OS: PTVS has been tested on Win7, Windows Server 2008 R2
  • CPython 2.5 through 3.2
  • IronPython 2.7 RTM
  • IPython 0.11+
  • PyPy and Jython are partially supported (REPL and Intellisense work; debugging and profiling not supported)

Supported Environment – more details

How to get interpreters and libraries

You can also install one of the "all-in-one" distributions such as EPD (http://www.enthought.com) or ActivePython (http://www.activestate.com), which will have the majority of the bits mentioned above. Once your languages and libraries are installed, PTVS will automatically find them and make them available through the Visual Studio "Options" UI. PTVS will also analyze the standard libraries in your distributions and provide Intellisense support for them.

Getting interpreters & libraries – more details

Other cool stuff

The Python ecosystem is full of exciting tools and libraries. Here are a few that work with PTVS and IronPython. We'll add more soon:

Support & Q/A

Please use the Discussions Tab for questions/bugs/suggestions/etc.

Schedule

Here's the informal schedule for PTVS:



My (@rogerjenn) Microsoft Bills for ExtraSmall Azure Instances with a MSDN Ultimate Benefit post of 5/9/2011 explained:

As noted in my Republished My Live Azure Table Storage Paging Demo App with Two ExtraSmall Instances and Connect, RDP post of 5/8/2011, I upgraded my OakLeaf Systems Azure Table Services Sample Project - Paging and Batch Updates Demo from a single Small Web Role instance to two ExtraSmall Web Role instances.

Needless to say, I was surprised when I found that I was being charged for the new ExtraSmall instances:


Here are the billing details for the new cycle which started on 5/6/2011:


I then took a closer look at my MSDN Ultimate subscription benefit:


Apparently, I had been upgraded from MSDN Premium to MSDN Ultimate and hadn’t noticed the benefit change that resulted from the transaction.

Needless to say, I have republished my project with two Small instances and edited my Republished My Live Azure Table Storage Paging Demo App with Two ExtraSmall Instances and Connect, RDP post accordingly.


Larry Grothaus described Effectively Managing 700% Usage Spikes – Elasticity and Cloud Computing in a 5/9/2011 article for Microsoft CloudPower on Forbes AdVoice:

In April, I wrote about the Social eXperience Platform (SXP) powered by the Windows Azure platform and how it delivers cloud-powered capabilities to Microsoft Showcase and the Cloud Conversations sites.  There were some notable numbers in that post related to the improved availability and reduction in costs corresponding with the team’s move to the cloud.

I recently saw another post from Bart Robertson that further expands on that previous post and one of the key cloud computing benefits frequently talked about – elasticity.

In Bart’s post, he has a graph showing a spike of over 700% in traffic that occurred for a period of 72 hours related to the launch of online advertising.  He equates this to ‘Black Friday’ traffic that retailers experience during the holiday buying rush.  This is why cloud computing is the ideal hosting environment for applications that require ‘bursty’ resources to accommodate spikes in traffic, such as e-tailers during holiday seasons, companies that have month-end reporting or data collection requirements, or online communities that may see large traffic spikes during online activities, for example.

This is why cloud computing is likened to utilities that we use at our homes, such as electricity or water.  We pay for what we use and ‘turn the faucet off’ when we’re not using resources. The alternative is traditional IT infrastructure environments where there is a large spend on cap-ex resources that may sit idle for large periods of time, waiting for the occasional times when they’re needed to scale up to meet demand.

Take a look at Bart’s post for a real-world example of cloud computing elasticity and how it makes sense for this and a variety of other IT scenarios.

If you have questions on cloud computing or what Microsoft has to offer businesses interested in investigating cloud computing, check out the Cloud Power site to get started.



Avkash Chauhan reported a workaround for Handling Web Role Exception - This request operation sent to net.pipe://localhost/iisconfigurator did not receive a reply within the configured timeout (00:01:00) errors on 5/9/2011:

ERROR MESSAGE: This request operation sent to net.pipe://localhost/iisconfigurator did not receive a reply within the configured timeout (00:01:00).  The time allotted to this operation may have been a portion of a longer timeout.  This may be because the service is still processing the operation or because the service was unable to send a reply message.  Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.

EXCEPTION DETAILS: System.TimeoutException was unhandled

Message=This request operation sent to net.pipe://localhost/iisconfigurator did not receive a reply within the configured timeout (00:01:00).  The time allotted to this operation may have been a portion of a longer timeout.  This may be because the service is still processing the operation or because the service was unable to send a reply message.  Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property) and ensure that the service is able to connect to the client.

  Source=mscorlib
StackTrace:
Server stack trace:
at System.ServiceModel.Dispatcher.DuplexChannelBinder.Request(Message message, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at IConfigurator.Deploy(String roleId, WebAppModel webAppModelPath, String roleRootDirectory, String sitesDestinationRootDirectory, String diagnosticsRootDirectory, String roleGuid, Dictionary`2 globalEnvironment)
at Microsoft.WindowsAzure.Hosts.WaIISHost.Program.Main(String[] args)
InnerException:

Root cause & problem description:

1. WaAppAgent.exe starts the Full IIS web role by starting WaIISHost.exe process which loads the Web Role DLL

2. Due to role being a full IIS host web role, IISconfigurator.exe process also starts along with WaIISHost.exe

3. IISconfigurator.exe looks at the service definition file and finds the <Sites> section along with PhysicalDirectory if set, as below:

  <Sites>
<Site name="Web" PhysicalDirectory="appdir">
<Bindings>
<Binding name="Endpoint1" endpointName="Endpoint1" />
</Bindings>
</Site>
</Sites>

4. Now IISconfigurator.exe starts checking each file in PhysicalDirectory for security setup and, *if the overall process takes more than 60 seconds*, it exceeds the default timeout of 60 seconds.

5. This timeout causes WaIISHost.exe to terminate.

6. After 2 minutes, the WaAppAgent.exe process restarts WaIISHost.exe and everything happens again from steps 1-5.

If you are getting this exception, the main issue is that you have so many files in the physical directory to be processed by IISConfigurator.exe that the work exceeds the 60-second default timeout.  In my test scenario in a small VM, IISConfigurator.exe took about 90 seconds to process 19,000 files in the physical folder. I also found the threshold to be about ~15,000 files in the physical folder for IISConfigurator.exe to finish within the 60-second window. I don’t expect thousands of static files in virtual directories specific to a web role. The cloud architecture is designed to move static content out of the VM, so if you really have thousands (15,000+) of files in your web site, it is time to move the static content outside the web role.
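
One common way to move that static content out of the role, sketched below with the SDK 1.x StorageClient library, is a one-off upload of the files to a public blob container and referencing them from there; the container name, local path, and connection string are placeholders for this example:

// Requires Microsoft.WindowsAzure and Microsoft.WindowsAzure.StorageClient, plus System.IO.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;AccountKey=YOUR_KEY");
CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("staticcontent");
container.CreateIfNotExist();
container.SetPermissions(new BlobContainerPermissions { PublicAccess = BlobContainerPublicAccessType.Blob });

foreach (string file in Directory.GetFiles(@"C:\MySite\static"))
{
    // Each file becomes publicly readable at http://YOUR_ACCOUNT.blob.core.windows.net/staticcontent/<name>
    CloudBlob blob = container.GetBlobReference(Path.GetFileName(file));
    blob.UploadFile(file);
}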

Solution(s):

To solve this problem there are two workarounds as below:

1. Running Legacy HWC Web Role:

If you don’t have to use the functionality provided by the full IIS role (i.e. multiple web sites, virtual directories, etc.), you can remove the <Sites> section in your Service Definition file as below:

    <!--<Sites>
<Site name="Web">
<Bindings>
<Binding name="Endpoint1" endpointName="Endpoint1" />
</Bindings>
</Site>
</Sites>-->

After that, you can repackage the solution and redeploy it.

2. If you need the full IIS role, you can use the following workaround:

  1. Copy all (or most) of the files in your virtual directories folder into a ZIP file.
  2. When the role starts, expand the ZIP file in the OnStart() method (sketched below).
  3. IISConfigurator.exe runs before the role starts, so packing thousands of files into a ZIP keeps its processing well under the 60-second limit and the exception no longer occurs.
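
A minimal sketch of that OnStart() extraction follows, assuming the static content has been packaged as StaticContent.zip under the role’s approot and that the DotNetZip (Ionic.Zip) library is referenced; the file and folder names are placeholders, not part of the original workaround.

using System;
using System.IO;
using Ionic.Zip;                                   // DotNetZip - assumed to be referenced
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // %RoleRoot%\approot is where the role's deployed files live on the instance.
        string appRoot = Path.Combine(
            Environment.GetEnvironmentVariable("RoleRoot") + @"\", "approot");
        string zipPath = Path.Combine(appRoot, "StaticContent.zip");   // placeholder package name
        string target  = Path.Combine(appRoot, "static");              // placeholder extraction folder

        // The static files only appear on disk after the role starts, so
        // IISConfigurator.exe never has to scan thousands of individual files.
        using (ZipFile zip = ZipFile.Read(zipPath))
        {
            zip.ExtractAll(target, ExtractExistingFileAction.OverwriteSilently);
        }

        return base.OnStart();
    }
}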


Jamin Spitzer posted Microsoft Announces Windows Azure Toolkits for iOS, Android and Windows Phone on 5/9/2011 to the Official Microsoft Blog:

Today, Microsoft announced Windows Azure Toolkits for Devices, consisting of assets for Windows Phone, iOS and a preview of tools for Android.

Using the toolkits, developers can use the cloud to accelerate the creation of applications on the major mobile platforms. Companies, including Groupon, are taking advantage of them to create a unified approach to the cloud-to-mobile user experience.

The average consumer uses many different devices. The PC, the phone and an array of smart Web-connected devices have created diverse computing scenarios for millions of users. For developers, this is an unparalleled opportunity - Forrester predicts that by 2015, the mobile apps services market will exceed $7 billion.

Opportunity also creates complexity for developers; they need to prioritize their investments to quickly reach the biggest and most profitable user bases. Today, it's not just about how quickly a developer can create an experience, but how quickly that developer can build apps that work with unique devices across a dozen platforms.

The toolkits leverage the cloud to simplify the complexity of supporting multiple devices. As a common back-end, developers can use cloud services to share common requirements like device notifications, authentication, storage and even higher-level services like leaderboards. At the same time, developers can maximize the performance of each mobile device by writing client code that exploits each platform. As more and more mobile applications rely on back-end services, the cloud can become increasingly useful and strategic for developers.

Companies like Groupon, as well as independent developers, can rely on the Windows Azure Toolkits for Devices to create applications on the major mobile platforms, specifically:

Windows Azure Toolkit for iPhone (v1.0). Developers can download the package and quickly get started writing iPhone apps on the Windows Azure platform without having to have intimate knowledge of Microsoft tools, such as Visual Studio. Compiled iPhone code libraries to interact with Windows Azure, a sample iOS application, documentation, and a “Cloud Ready” Windows Azure deployment package are included.

       

Windows Azure Toolkit for Windows Phone (v1.2). This toolkit was originally released last month; new developer features available in the next two weeks include integration with the Windows Azure Access Control Service (e.g., a wizard, automatic setup, tooling and code), full support for Windows Azure Storage Queues and an updated user interface for the supporting Web application.

Windows Azure Toolkit for Android (Prototype Preview). With the forthcoming release this summer, developers will be able to extend the functionality now available for iOS and Windows Phone to the Android platform with the Windows Azure Toolkit for Android.

  

As Microsoft built these toolkits, we shared the technology with partners such as Groupon. As one of the hottest startups in tech today, Groupon is tackling many of the challenges noted above – like scalability and UI consistency – as it manages runaway growth most can only envy.

“At Groupon, we recognize that people aren’t tied to their computers and want to get deals - whenever and wherever they happen to be. Taking advantage of the Windows Azure Toolkits for Mobile Devices, we can rely on a common backend to create consistent, next generation mobile experiences like real-time notification services that integrate into each phone’s home screen and app experience,” said Groupon’s Michael Shim, vice president of Mobile Business Developer & Partnerships.

To simplify the process of setting up services in Windows Azure, we are also releasing a “Cloud Ready” package for the toolkit. This package is designed to allow someone to quickly get started using Windows Azure without having to open and modify the services.

Screencasts are available for developers seeking additional information: Getting Started with the iOS Toolkit and Deploying the Cloud Ready Package for Devices. Links to access the free toolkits are below:

iOS:

https://github.com/microsoft-dpe/watoolkitios-lib
https://github.com/microsoft-dpe/watoolkitios-samples
https://github.com/microsoft-dpe/watoolkitios-doc

Windows Phone 7:

http://watoolkitwp7.codeplex.com

Posted by Jamin Spitzer
Senior Director, Platform Strategy, Microsoft

See Wade Wegner’s post above for more details of the iOS toolkit.


Wade Wegner (@wadewegner) explained Using Windows Azure for Windows Phone 7 Push Notification Support in a 5/8/2011 post:

image At MIX11 we released the Windows Azure Toolkit for Windows Phone 7 v1.1.0, which included out-of-the-box support for the Microsoft Push Notification Service (MPNS) for Windows Phone. The MPNS provides an efficient way for an application to register itself for updates that are pushed directly to the phone rather than writing the application to frequently poll a web service to look for pending notifications.  This has the benefit of reducing the impact on your phone battery, as polling results in the device radio turning on frequently.

When using the MPNS you need to set up your own web service that tells Microsoft to send the notification. It’s your responsibility to set up this service – you have to create the notification channel, bind it to the correct type of notification, and ultimately host the web service. The Windows Azure Toolkit for Windows Phone 7 makes it really easy not only to host your web service in Windows Azure but also to create the channel and set up the correct notification binding (largely thanks to the Windows Push Notification Service Side Helper Library from Yochay Kiriaty).
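
For context, phone-side channel creation and binding with the standard Microsoft.Phone.Notification API looks roughly like the sketch below; this is not the toolkit’s own code, and the channel name is a placeholder.

using System.Diagnostics;
using Microsoft.Phone.Notification;

public static class PushChannelHelper
{
    public static void EnablePushNotifications()
    {
        // Create (or reattach to) a notification channel and bind it for toast and tile pushes.
        HttpNotificationChannel channel = HttpNotificationChannel.Find("AzureToolkitChannel"); // placeholder name
        if (channel == null)
        {
            channel = new HttpNotificationChannel("AzureToolkitChannel");
            channel.ChannelUriUpdated += (s, e) =>
            {
                // This URI is what your web service stores and later POSTs notifications to;
                // getting it up to the service is handled by the toolkit or your own code.
                Debug.WriteLine(e.ChannelUri);
            };
            channel.Open();
        }

        if (!channel.IsShellToastBound) channel.BindToShellToast();
        if (!channel.IsShellTileBound)  channel.BindToShellTile();
    }
}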

To get started, grab v1.1 of the toolkit from our CodePlex site: http://watoolkitwp7.codeplex.com/.  If this is your first time, take a look at the Getting Started documentation and webcast.

Once you start up the application, the first thing you’ll want to do is enable push notifications.  To do this, check the box on the initial page to create the channel.

image

Once this has successfully registered, head over to the website and log in as the administrator.  There’s now a Push Notifications tab where you’ll see your user registered:

image

From here you can test the push notification channel by sending all three push types: toast, tile, and raw. In that order, here’s what you’ll see (note: to see the tile message you must first pin the app to Start):

image image image

Not bad for an out-of-the-box experience requiring ZERO updates!

To learn more about this version of the toolkit – or to see it live and in action – take a look at the session I presented at MIX11 entitled Building Windows Phone 7 Applications with the Windows Azure Platform.


Yves Goeleven described Building Global Web Applications With the Windows Azure Platform – Understanding capacity in a 5/8/2011 post:

image In this second installment of the ‘Building Global Web Applications’ series, I would like to discuss the concept of ‘capacity’, as I feel that only a few people understand that it is the secret of the utility model, the business model behind cloud computing.

I hear, and tell, very often that cloud computing is about ‘pay for use’. But this is actually completely true for only a few resources; for many it means ‘pay for what you could potentially use’, aka the capacity of a certain resource. Let’s have a look at the pricing table of Windows Azure compute instances as an example:

image

When you look at this table, you can see that every Windows Azure role has a ‘capacity’ in terms of CPU, memory, local disk space and I/O (which actually means bandwidth); in other words, the extra small instance has the potential to perform roughly 1 billion instructions per second, store 768 MB of data in memory, cache 20 GB of data on disk and transfer 5 megabits of data per second.

When serving web pages, your role will start showing a decline in performance when any one of these four capacities is completely utilised. When this happens you might be tempted to scale up or scale out in order to increase the number of users you can handle, but to be honest, this might not be the best idea, because at the same time you’re also wasting part of the three other capacities of your instance.

Last time, I showed you a load test on a single extra small instance that showed signs of running out of capacity when there were more than 30 concurrent users on it. But when monitoring the instance I noticed that neither memory, CPU nor local disk space was a problem. Only 10% of the CPU was utilised, 82% of the memory was utilised (but most of this was by the OS itself) and there was an abundance of free disk space. So the bottleneck must have been the bandwidth…

Let’s analyse a request and see whether or not this is true. Luckily, LoadImpact also has a page analyser that shows you which parts of a page take how much time… As you can see from the results below, most of the time is spent waiting for the first byte of several images (represented by the green bar) and waiting for the download of the larger image (represented by the blue bar) - all clear indicators of the low I/O performance of an extra small role.

Now in order to increase the utilisation of other capacity types in our role, as well as increase the number of users we can handle, we should remove this bottleneck.

Offloading the static images, which don’t require computation or memory anyway, to another medium such as blob storage or the CDN is one of the prime options.  This allows the machine to handle more requests for dynamic pages and thus increases the utilisation of both CPU and memory.
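
As a rough illustration of that offloading step, the sketch below uploads one image to a public blob container with the 1.x StorageClient library; the connection string, container and blob names are placeholders, and rewriting your pages to reference the blob (or CDN) URL is a separate step.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class StaticContentOffloader
{
    // Pushes one static image to a public blob container so the web role no longer
    // spends its limited I/O budget serving it.
    public static void UploadImage(string connectionString, string localPath)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        CloudBlobContainer container = client.GetContainerReference("static");   // placeholder container
        container.CreateIfNotExist();
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob                    // anonymous read for <img> tags
        });

        CloudBlockBlob blob = container.GetBlockBlobReference("images/header.png"); // placeholder blob name
        blob.Properties.ContentType = "image/png";
        blob.UploadFile(localPath);
    }
}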

Next time we will see what exactly the impact is of offloading images to either blob storage or the CDN and how this compares to scaling out…


Linda Rosencrance wrote Partners Plan for Microsoft Dynamics ERP Cloud Vision, Part 1: Changing Business Models and Market Perception on 4/25/2011 for MSDynamicsWorld.com (missed when posted):

image Microsoft's announcement at Convergence 2011 that it plans to start supporting and, to some extent, deploying cloud versions of its Dynamics ERP solutions starting with NAV "7" had conference attendees buzzing. But some partners are still wondering exactly what this means for them. 

With many of Microsoft's own plans still uncertain or at least under wraps, partners - especially those whose business model is entirely based on license and maintenance sales - are starting to look at ways to adjust and thrive even as the landscape continues to shift.

Re-examining the partner business model

One partner, who asked not to be named, expressed a bleak view of the business prospects for many of the partners selling their own hosted Dynamics ERP solutions.  "Dynamics ERP in the cloud is a different sales model and it's going to take a lot of revenue out of people's pockets because when Microsoft ends up hosting it, it's going to be dirt cheap just like Dynamics CRM. You can't compete with CRM Online," the partner said. "But there will still be customers that want on-premise ERP. Maybe 10 years from now, customers will say the cloud is the perfect place for everything to be. But I don't think customers, particularly in the middle segment of the [US], are ready to go there yet."

But Andrew Fass, President, AVF Consulting Inc., said partners can still make money on the services side.

"When it's all up and running and the price is reasonable, there certainly is money to be made from the services component," he said.

Fass said the smaller partners will love it because the infrastructure is always the thing that holds up everything and makes it so complicated. All the smaller partners want to do is put their software in and begin the implementation but the infrastructure is such a pain, he said.

"The hosting side is the infrastructure and the capital just keeps going and going because you have to put multiple sets of clustered SQL Servers in and they're really expensive," Fass said. "And there's a very thin mark up on the infrastructure."

For smaller partners this is a very good thing because it allows them to speed up the sales cycle, he said.

"The infrastructure is a turn-on concept not a build concept so they can get customers up and running faster," Fass said. "That will help partners close more business. So why wouldn't partners be interested in that?"

Validation for Partners Already Hosting Dynamics

For Microsoft partner Tensoft, Microsoft's official announcement is a very positive thing. While the more general statements about being "all in" sounded nice enough, the official announcement that the Dynamics products are going to support cloud deployments will improve the products' images. However, Tensoft president Bob Scarborough admits his company is different than some other partners.

"We might be a little unique because we do vertical ERP not broad market cloud," he said. "We do technology- and industry-specific cloud. A lot of our value proposition is in the industry-specific stuff and the cloud is a different way to deliver it. I don't think it will affect us economically."

"It's about who has the best application platform," Scarborough said. "A significant part of the value of Salesforce.com is in its application platform and the ability for people to build products or to build integrations to the Salesforce platform for other extended platforms. But traditionally for Microsoft it has been who has the best ISV industry around its platform."

Niels Skjoldager, partner at ProISV, a small Danish software company, said he is very excited by Microsoft's announcement. ProISV has developed AX Cloud, a hosting solution that allows Microsoft partners to manage their customers' Microsoft Dynamics AX systems on Amazon's EC2 cloud platform. ProISV says its partners can offer their clients a more flexible and less expensive alternative to self-hosting or traditional partner hosting of Microsoft Dynamics AX.

"It's the best news I've heard for a long time for a couple reasons," he said. "Microsoft now openly agrees it needs to have strategy for ERP in the cloud, which is something new. Until recently the word was ERP needed to be self hosted or partner hosted and it would be a long time before there was going to be a demand for ERP in the cloud. Now that has changed to ‘yes there is a need for ERP in the cloud' but unfortunately Microsoft is not yet capable of delivering it."

Skjoldager said now his company doesn't have to convince customers that it's the right strategy to offer Dynamics AX as a cloud service. The announcement was creating confidence in the market because this is the direction that Microsoft is going.

"They say they're all in when it comes to cloud and they truly mean it," he said." It's not for everybody yet but there's a market and that market is going to take off like a rocket."

In part two of this series, we will look at the key details that Dynamics partners are still missing from Microsoft. And we will examine partners' perception of how their organizations will change in the future.

MSDynamicsWorld.com calls itself “The independent authority for news and views on Microsoft Dynamics.”


Linda Rosencrance continued her series with Partners Plan for Microsoft Dynamics ERP Cloud Vision, Part 2: Firms Await More Specifics, Ponder Options for the Future on 4/25/2011 for MSDynamicsWorld.com (missed when posted):

Microsoft’s announcement at Convergence 2011 that it plans to start supporting and, to some extent, deploying cloud versions of its Dynamics ERP solutions had conference attendees buzzing. But some partners are still wondering exactly what these changes will mean, and what they will look like.

In part one of this series we examined how different partners see their business model changing as the options for cloud ERP deployments evolve. Some see the changes as an opportunity to grow a targeted, industry-focused Dynamics practice.  Others are more cautious, citing customer skepticism and a number of technical hurdles still remaining.   

Still waiting for specifics

According to Tensoft president Bob Scarborough, one of the things Microsoft needs to consider around ERP is how it is going to make it a platform that lets ISVs integrate easily to extend it.

"So that will be where the actual battle is fought," he said. "There are some things that are unknown. But we're excited that they've announced that they're in the cloud. It's a validation of what we've been doing in a lot of ways."

"There's no word from Microsoft about how it will work with partners, they're just leaving it to partners to do their own thing," said another partner who was at Convergence. "The next announcement will be at WPC in July and at that point they'll announce what the timing will be. They're trying to get all their applications on Azure on the back end and it will be a little more difficult than they anticipated."

Partners are happy that something was announced because a lot of them are getting beat up by the other SaaS offerings, the partner said. But they're also worried.

Changing the partner landscape

Additionally, Tensoft's Scarborough said Microsoft's ERP-in-the-cloud announcement fits in with its stated objective of fewer, bigger partners - partners that have something to add to the channel.

"We know some verticals very deeply and that's our value proposition," he said. "If you're one of the big 10, or 20 you can say you have national focus. I think it becomes harder and harder to be the regional guy who's just an expert in the product that serves any industry without having a focus. I think even the big guys are looking for verticals now so they can add value beyond the product itself. The SaaS world expects less consulting effort they expect more do it yourself. It's a business model transition."

John Kleb, partner-in-charge at Dynamics NAV partner, Sikich LLP, said Convergence was an eye-opening experience with regard to the cloud.

"Microsoft Dynamics NAV will be leading the cloud in no time and it is critical that we re-tool ourselves at Sikich to be ready to consult in this new world," Kleb said. "I have ow put my strategic planning in high gear so we are ready for the business model changes that the cloud will drive for our organization. I can say that we are very well capitalized and although declining revenues are never a good thing, we are fully prepared to manage a revenue decline through the change from a packaged IP sales to a subscription model."

But Kleb acknowledges that Sikich will have to re-build its sales compensation plans to deal with annuity revenue rather than big upfront license sales. And the company will also have to learn ways to close more business with substantially less effort.

"We will learn to productize rather than customize," he said. "The cloud is the strongest motivation yet for building a micro-vertical [because] customizations and cloud are not very compatible."

Sikich also has to become a stronger consultancy.

"We are uniquely positioned for this effort as Sikich is a consulting firm first, and a deployment firm second," he said. "And our entire reporting model is driven from hours times rate. In the subscription business we will have to reposition our metrics to look at salary costs as a percentage of revenues. This will require a structural change, but we will live in a hybrid world for most of a decade and therefore need to handle both hourly and subscription service models of reporting."

Kleb said there is a lot of work ahead, but if done correctly the bottom line will be improved over time and the company's revenues will grow at a pace greater than it can achieve in its present environment.

ProISV's Skjoldager concedes that while the announcement is great for his company, it might not be as great for the traditional hosting partners because they'll need to figure out how to adapt to this change and they'll have to change their business models before their traditional hosting businesses become obsolete.

"I do think there will be a consolidation in the hosting business, there's no way to avoid that," he said. "This is definitely part of the Microsoft strategy to consolidate and kind of be able to manage the channel a little bit better in their opinion."

Skjoldager said in general it will be more and more difficult to be a small partner because the complexity of the technology and everything else in this business is increasing and it requires more and more knowledge and different types of resources. Small partners just can't keep up, he said.

"When Microsoft decides to host AX itself, it will hurt us economically but in a couple years we will have enough market share to be able to compete not just on price but on the quality of service and other things," he said. "I think there's room for competition.

Customers don't want to have all their eggs in one huge vendor's basket, he said. Some customers want to do one-stop shopping while others want to spread out the risk and choose what provides the best overall value. Some customers don't necessarily see Microsoft's strengths in infrastructure but in other business areas, he added.

"But I believe Microsoft will be extremely strong in the cloud business," Skjoldager said. "With ERP it's going to interesting to see how Microsoft will handle the situation [with partners] because they will continue to offer Dynamics as it is now for on-premise side-by-side with a cloud service. So how are they going to manage that and make sure that the channel is not cannibalized. It's an easy announcement to make but it's a very complex situation to manage."

<Return to section navigation list> 

Visual Studio LightSwitch

Matt Sampson posted Visual Studio LightSwitch - Exporting Data to Word using COM Sample to the MSDN Code Samples Gallery on 4/25/2011 (missed when posted):

Introduction

image This code sample shows how to export data from a Visual Studio LightSwitch desktop application into a Word Document. 

This download contains the sample application for the MSDN blog post: How Do I: Export Data to a Word Mailing Labels Document. Both Visual Basic and C# samples are available.

Getting Started

To build this sample application you will need Microsoft Visual Studio LightSwitch Beta 2. Download the zip file and extract the contents to \My Documents\Visual Studio 2010\Projects. Then you can open up the .sln in Visual Studio LightSwitch and build the application.

Select Build -> Build Solution to build the sample application.

Press "F5" to debug/launch the application.

A small sample application called "Contacts" will launch.  The application itself is meant to be a simple way to handle your address book contacts.

In the command bar of the application you will see a button called "Export to Word Mailing Labels Document".  Clicking this button will export all saved data from the Contacts application to a Word Document in a location you specify.  You will need Microsoft Word installed to have this work correctly.

How does it work

When the button is clicked, a Word.Application COM object is created using the Silverlight System.Runtime.InteropServices.Automation.AutomationFactory class.  Then, by utilizing the Word Object Model, we iterate over our data and send it to a Word Document.  
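
The pattern looks roughly like the following sketch; it is not the sample’s actual code (the real sample builds a mailing-labels document), and the screen, command and contacts collection names are illustrative.

using System.Runtime.InteropServices.Automation;

public partial class ContactsListScreen
{
    // Illustrative handler for an "Export to Word" screen command; requires an
    // out-of-browser, elevated-trust client (as LightSwitch desktop apps are)
    // and Microsoft Word installed on the machine.
    partial void ExportToWord_Execute()
    {
        if (!AutomationFactory.IsAvailable) return;

        dynamic word = AutomationFactory.CreateObject("Word.Application");
        dynamic doc = word.Documents.Add();

        foreach (var contact in this.Contacts)
        {
            // Late-bound calls into the Word object model, one line per contact.
            doc.Content.InsertAfter(contact.Name + ", " + contact.Address + "\n");
        }

        word.Visible = true;   // hand the generated document over to the user
    }
}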


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Joe Panettieri reported Microsoft, Ingram Micro: Deeper Azure Cloud Partnership Coming on 5/9/2011:

If you’re a VAR or MSP seeking to leverage Microsoft Windows Azure or SQL Azure cloud applications, whom can you turn to for help? The answer may involve Ingram Micro, which is quietly preparing a Microsoft Azure ISV Pilot Program, Talkin’ Cloud has heard. Ingram’s goal is to connect the dots between VARs and MSPs, and software developers that host their applications on Microsoft Azure, an emerging cloud platform.

image Ingram’s pay-for-play strategy will promote Windows Azure and SQL Azure software companies through webinars, online advertising, and face to face events that attract Ingram’s VARs and MSPs. It’s a safe bet the Microsoft Azure ISV Pilot Program will likely include the Ingram Micro Cloud Summit (June 1-2, Phoenix, Ariz.) and perhaps even optional participation in the Ingram Micro VTN (Oct. 16-19, Las Vegas), Talkin’ Cloud has heard.

The deeper Microsoft-Ingram Micro cloud relationship arrives at a key time for both companies. Little more than one year old, the Windows Azure and SQL Azure cloud platforms are gaining more ISVs (independent software vendors). But sources say Microsoft is paying selected software companies to port their applications into the Azure cloud… a clear sign that ISVs may require extra motivation to support Microsoft’s cloud effort. Also, Microsoft has launched free offers and promotions to attract Azure ISVs. And back in February 2011, Talkin’ Cloud suggested there were at least five ways Microsoft could raise Azure’s visibility with software partners and channel partners.

Meanwhile, the Ingram Micro Cloud is an emerging aggregator service that allows VARs and MSPs to find third-party SaaS applications. Ingram Micro expects more than half of its 20,000 active solution providers to deploy cloud software and services in the next two years, according to the company’s website. On the one hand, Ingram is trying to maintain a vendor-neutral approach — building cloud relationships with Amazon.com, Microsoft, Rackspace, Salesforce.com and dozens of small SaaS companies. But on the other hand Ingram and Microsoft are trying to put the channel spotlight on the emerging market for Windows Azure and SQL Azure software developers.

I’ve yet to speak with Ingram about the apparent Microsoft Azure ISV Pilot Program. But several reliable sources say the program will run from June through December 2011. I’m flying from New York to California at the moment. As soon as I get my feet on the ground I’ll reach out to Ingram for official comment.

Disclosure: I’m set to host sessions at the Ingram Micro Cloud Summit in June. But my involvement in that Ingram Micro summit did not trigger any information for this blog entry.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Microsoft’s Venice Team recently posted a Senior Software Test Engineer (SDET) job opening, which is no longer open:

  • Job Category: Software Engineering: Test
  • Location: Redmond, WA, US
  • Job ID: 749439-37421
  • Division: Server & Tools Business

image

Our job is to drive the next generation of the security and identity infrastructure for Windows Azure and we need good people.

We are the Venice team and are part of the Directory, Access and Identity Platform (DAIP) team which owns Active Directory and its next generation cloud equivalents. Venice's job is to act as the customer team within DAIP that represents the needs of the Windows Azure and the Windows Azure Platform Appliance teams. We directly own delivering in the near future the next generation security and identity service that will enable Windows Azure and Windows Azure Platform Appliance to boot up and operate.

If you have great passion for the cloud, for excellence in engineering, and hard technical problems, the DAIP Team would love to talk with you about this rare and unique opportunity.

In this role you will:

  • Own and deliver test assets of Windows Azure's next generation boot time security and identity infrastructure
  • Work closely with DAIP and Azure test teams, ensuring integration of the services
  • Advocate for product quality and provide critical input into team strategy and priorities
  • Initiate and promote engineering best practices constantly improving engineering experience …

We are an agile, small team operating in a dynamic environment with lots of room to grow. We have embraced the services world with its fast release cycles, focusing on fundamentals, engineering hygiene and doing things right. We are hardworking but striving to maintain the right work-life balance. Come and join us!

WAPA appears to be alive and (possibly) well, and living in Redmond, WA.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted Though responsibility for taking precautions may be shared, the risk of an incident is always yours and yours alone, no matter who is driving the car in a prefix to her If Security in the Cloud Were Handled Like Car Accidents post of 5/9/2011 to F5’s DevCentral blog:

image Cloud and security still take top billing in many discussions today, perhaps because of the nebulous nature of the topic. If we break down security concerns in a public cloud computing environment we can separate them into three distinct categories of risk – the infrastructure, the application, and the management framework. Regardless of the model – IaaS, PaaS, SaaS – these categories exist as discrete entities, the differences being only in what the customer has access to and ultimately over which they have control (responsibility).

image

A Ponemon study recently reported on by InformationWeek (Cloud Vendors Punt to Security Users) shows a vastly different view of responsibility as it pertains to cloud computing and data security. Whether it is shared, mostly on the provider or mostly on the customer is apparently a matter of perspective, but it is just as likely the result of failing to distinguish between categories of security concerns.

A MATTER of NEGLIGENCE
Legalese is legalese, no matter the industry or vertical, and cloud computing is no exception. As noted in the aforementioned InformationWeek article:

"When you read the licensing agreements for cloud providers, they don't need to do anything with security--they take 'best effort'," said Pironti [John P. Pironti, president of IP Architects]. Best effort means that should a case come to court, "as long as they can show they're doing some effort, and not gross negligence, then they're covering themselves."

In other words, providers are accepting that they have some level of responsibility in providing for security of their environments. They cannot disregard the need nor their responsibility for security of their environments, and by law they cannot disregard such efforts below a reasonable standard of effort. Reasonable being defined by what a reasonable person would consider the appropriate level of effort. One would assume, then, that providers are, in fact, sharing the responsibility of securing their environments by exerting at least ‘best effort’. A reasonable person would assume that best efforts would be comparable to those taken by any organization with a public-facing infrastructure, i.e. firewalls, DoS protection, notification systems and reasonable identity and access management policies.

Now if we treated cloud computing environments as we do cars, we might use a more granular definitions of negligence. If we look at those definitions, it may be that we can find the lines of demarcation for security responsibilities in cloud computing environments.

Contributory negligence is a system of fault in which the injured party can only obtain compensation for injuries and damages if he or she did not contribute to the accident in any way.

In comparative negligence, the injured party can recover damages even if she was partially at fault in causing the accident. In a pure comparative system, the plaintiff’s award is reduced by the amount of her fault in the accident. Some states have what is called modified comparative fault. This is where there is a cap on how much responsibility the injured party can have in the accident.

-- Car Accident Fault and Getting What You’re Owed

In a nutshell, when it comes to car accidents “fault” is determined by the contribution to the accident which subsequently determines whether or not compensation is due. If Alice did not fulfill her responsibility to stop at the stop sign but Bob also abdicated his responsibility to obey the speed limit and the two subsequently crash, one would likely assume both contributed to the incident although with varying degrees of negligence and therefore fault. Similarly if Alice has fulfilled all her responsibilities and done no wrong, then if Bob barrels into her it is wholly his fault having failed his responsibilities. The same concepts can certainly be applied to security and breaches,  with the focus being on the contribution of each party (provider and customer) to the security incident.

Using such a model, we can determine responsibility based on the ability to contribute to a incident. For example, a customer has no control over the network and management framework of an IaaS provider. The customer has no authority to modify, change or configure network infrastructure to ensure an agreeable level of network-security suitable for public-facing applications. Only the provider has the means by which such assurances can be made through policy enforcement and critical evaluation of traffic. Alice cannot control Bob’s speed, and therefore if it is Bob’s speed that causes an accident, the fault logically falls on Bob’s shoulders – wholly. If data security in a cloud computing environment is breached through the exploitation or manipulation of infrastructure and management components wholly under the control of the provider, then the fault for the breach falls solely on the shoulders of the provider. If, however, a breach is enabled by poor coding practices or configuration of application infrastructure which is wholly under the control of the customer, then the customer bears the burden of fault and not the provider.

IT ALWAYS COMES BACK to CONTROL

In almost all cases, a simple test of contributory negligence would allow providers and customers alike not only to determine the ability to contribute to a breach but also, subsequently, who bears the responsibility for security. It is an unreasonable notion to claim that a customer – who can neither change, modify nor otherwise impact the security of a network switch – should be responsible for its security. Conversely, it is wholly unreasonable to claim that a provider should bear the burden of responsibility for securing an application – one which the provider had no input or control over whatsoever. 

It is also unreasonable to think that providers, though afforded such a luxury by their licensing agreements, are not already aware of such divisions of responsibility and that they are not taking the appropriate ‘best effort’ steps to meet that obligation. The differences in the Ponemon study regarding responsibility for security can almost certainly be explained by applying the standards of contributory negligence. Neither provider nor customer is attempting to abrogate responsibility, in fact all are clearly indicating varying levels of contribution to security responsibility, almost certainly in equal portions as would be assigned based on a contributory negligence model of fault for their specific cloud computing model. Customers of IaaS, for example, would necessarily assign providers less responsibility than that of an SaaS provider with regard to security because providers are responsible for varying degrees of moving parts across the models. In a SaaS environment the provider assumes much more responsibility for security because they have control over most of the environment. In an IaaS environment, however, the situation is exactly reversed. In terms of driving on the roads, it’s the difference between getting on a bus (SaaS) and driving your own car (IaaS). The degree to which you are responsible for the security of the environment differs based on the model you choose to leverage – on the control you have over the security precautions.

Ultimately, the data is yours; it is your responsibility to see it secured and the risk of a breach is wholly yours. If you choose to delegate – implicitly or explicitly - portions of the security responsibility to an external party, like the driver of a car service, then you are accepting that the third party has taken acceptable reasonable precautions. If the risk is that a provider’s “best effort” is not reasonable in your opinion, as it relates to your data, then the choice is obvious: you find a different provider. The end result may be that only your own environment is “safe” enough for your applications and data, given the level of risk you are willing to bear.


<Return to section navigation list> 

Cloud Computing Events

My (@rogerjenn) 53 Cloud Computing and Online Services Sessions at TechEd North America 2011 post of 5/9/2011 categorizes cloud-related sessions and events at TechEd North America 2011, to be held 5/16 through 5/19/2011 in Atlanta, GA.

image The breakdown:

  • 1 Pre-Conference Seminar
  • 17 Break-Out Sessions
  • 7 Interactive Discussions
  • 6 Windows Azure Workshops
  • 17 Hands-On Labs
  • 4 Birds-of-a-Feather Get-Togethers

Seems to me that Break-Out Sessions got short shrift this year.

Track Description

Software-plus-services is the next logical step in the evolution of computing. It represents an industry shift toward software design that is neither exclusively PC- nor browser-centric and blends traditional client-server architecture with cloud-based software delivery. The Cloud Computing & Online Services track provides information about Microsoft technology and innovation in software-plus-services. Topics include coverage of the soon-to-be-released next version of the suite: Microsoft® Office 365. Learn about enterprise-ready software services from Microsoft Online Services, such as Microsoft® Exchange Online, Microsoft® SharePoint® Online, Microsoft® Lync™ Online, Microsoft Office Communications Online, and Microsoft Dynamics® CRM Online. Gain in-depth information about the Windows Azure™ Platform, where developers can take advantage of an Internet-scale cloud services platform hosted in Microsoft data centers to build new applications in the cloud or extend existing applications.

Office 365 content appears to be missing.

The post continues with a list of links to session or event details.


The EastBay.net Users Group reported on 5/9/2011 that Bruno Terkaly will present Windows Azure AppFabric on 5/11/2011 at 6:30 PM in Livermore, CA:

image Few developers can explain the term "middleware," but that is exactly what I plan to demystify in my presentation when I illustrate the use of the Azure AppFabric. The three main pillars of the AppFabric are: (1) Service Bus; (2) Access Control Service; (3) Caching. The AppFabric is very decoupled from the rest of the Azure stack, and developers can leverage its capabilities without even physically deploying an application into the cloud.

The AppFabric includes the Service Bus, which provides secure connections between distributed and disconnected applications in the cloud. The Service Bus diversifies the choices for various communication and messaging protocols and patterns, and saves the developer from having to worry about delivery assurance, reliable messaging and scale.

The AppFabric can help developers overcome otherwise impossible hurdles, such as directly connecting two computers across firewalls, load balancers, NAT devices and other networking infrastructure. Another capability is providing federated security scenarios for REST-based web applications. The AppFabric is an essential part of the Windows Azure Platform that every developer should understand.
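
As a rough sketch of that firewall-traversal scenario, the snippet below exposes a plain WCF service through the Service Bus relay; the namespace is a placeholder and the ACS credential configuration is deliberately omitted because it is account-specific.

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

public class RelayHost
{
    public static void Main()
    {
        // The endpoint lives in your AppFabric namespace, so two machines behind different
        // firewalls/NATs rendezvous at the relay without opening any inbound ports.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "echo"); // placeholder namespace

        var host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(typeof(IEcho), new NetTcpRelayBinding(), address);
        // The endpoint also needs ACS credentials on a TransportClientEndpointBehavior;
        // omitted here because they are account-specific.

        host.Open();
        Console.WriteLine("Listening on " + address);
        Console.ReadLine();
        host.Close();
    }
}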

Held at the University of Phoenix (Map), Room 105, 2481 Constitution Drive, Livermore (Off 580 at Airway Blvd. and across from Costco)


Steve Plank (@plankytronixx) posted Windows Azure Bootcamp: Links on 5/9/2011:

  1. image Visual Studio Web Express

  2. Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio (March 2011)

  3. Microsoft SQL Server 2008 R2 RTM - Express with Management Tools

  4. Cerebrata Cloud Storage Studio

  5. Plankytronixx blog

  6. Windows Azure Product Team Blog

  7. Windows Azure site

  8. Free Windows Azure Platform Trial

  9. Free Windows Azure Subscription for MSDN Premium, Professional, Ultimate and BizSpark subscribers

  10. Windows Azure Platform Training Kit

image

Don’t forget there are other free half-day bootcamps running in the UK this week and on 27th May.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) reported AWS Mobile SDKs Now Support Additional Services on 5/9/2011:

image The AWS SDK for Android and the AWS SDK for iOS now support even more AWS services:

The SDKs already support Amazon S3, Amazon SimpleDB, Amazon Simple Queue Service, and Amazon SNS.

With the added services, you can now add the following infrastructure to your mobile applications:

  • Messaging—Send bulk and transactional email to customers using Amazon SES.
  • Compute—Launch and manage Amazon EC2 instances with a number of features for building scalable, failure resilient, and enterprise class applications.
  • Monitoring—Monitor your Amazon EC2 instances, Amazon EBS volumes, Elastic Load Balancers, and Amazon RDS database instances in real-time with Amazon CloudWatch.

The SDKs include libraries, code samples, and documentation to help you get started. We have also set up a Mobile Development Forum where you can discuss the SDKs with other developers.

Sounds to me as if Amazon’s iOS features are much more extensive than Azure’s. See Wade Wegner’s (@wadewegner) Getting Started with the Windows Azure Toolkit for iOS post of 5/6/2011 in the Azure Blob, Drive, Table and Queue Services section above.


Claudio Criscione posted OpenShift: Red Hat answers to VMware Cloud Foundry to Virtualization.info on 5/6/2011:

It was in the air: with more and more movement in the cloud computing space, it was only a matter of time before Red Hat released its own solution.
The (self-appointed) World’s Open Source Leader released OpenShift, a Platform as a Service cloud meant for developers in a number of languages, including PHP, Java, Ruby and Python.

image OpenShift supports both a light model (Express) and an enterprise grade model (Flex), both currently in beta and scheduled for final launch soon: in the meantime they are both free, their final prices undisclosed.
Red Hat decided to leverage the Amazon AWS service as its hosting platform instead of managing its own hardware, but the level of interaction with the actual virtual servers depends on the platform model: Flex will actually run inside the user’s Amazon system, while Express will leverage Red Hat’s shared systems.

OpenShift PaaS schema

PHP, Ruby and Python applications can run in the Express model, a small-scale system where the application can be run in the cloud quickly, even through well-known and widely adopted version control tools like Git. One of the demos showcasing the platform is a very interesting deployment of the common and powerful enterprise-grade Drupal CMS.

PHP applications can also benefit from the Flex offer, the only one available for the Java language; it is a powerful, scalable and full-fledged platform cloud which puts complete control of the application in the hands of the developer. Applications can be deployed on middleware components such as JBoss and Tomcat, and the platform provides valuable features including versioning, monitoring and auto-scaling: leveraging these enterprise-grade components and the tight integration with common development frameworks can be the winning move here.
Flex also provides shell-level access (i.e. the command line of the hosting server) in a dedicated environment, while Express machines are explicitly multi-tenant: an Amazon AWS account is required to use Flex.

One of the most interesting promises of the platform is the not-yet-released Power delivery model: OpenShift Power can deploy to the cloud applications that are written to Linux (i.e. written in C, or using many binary components) and anything that builds on Linux. This includes custom, legacy applications with no web frontend at all. This is a very peculiar feature which can be very interesting for large enterprises with legacy code, but we will have to wait until its release to see if it keeps its promises.

Noticeably, while VMware Cloud Foundry is entirely Open Source, some parts of OpenShift are not (like the user interface): this is however well in the tradition of Red Hat and not a big surprise in itself.


Ayende Rahien (@ayende) described RavenDB Auto Sharding Bundle Design: Early Thoughts in an 8/3/2011 post:

image RavenDB Auto Sharding is an implementation of sharding on the server. As the name implies, it aims to remove all sharding concerns from the user. At its core, the basic idea is simple. You have a RavenDB node with the sharding bundle installed. You just work with it normally.

At some point you realize that the data has grown too large for a single server, so you need to shard the data across multiple servers. You bring up another RavenDB server with the sharding bundle installed. You wait for the data to re-shard (during which time you can still read / write to the servers). You are done.

At least, that is the goal. In practice, there is one step that you have to do: you have to tell us how to shard your data. You do that by defining a sharding document, which looks like this:

{ // Raven/Sharding/ByUserName
  "Limits": [3],
  "Replica": 2
  "Definitions": [
    {
      "EntityName": "Users",
      "Paths": ["Username"]
    },
    {
      "EntityName": "Posts",
      "Paths": ["AuthorName"]
    }
  ]
}

There are several things to note here. We define a sharding document that shards on just one key, and the shard key has a length of 3. We also define different ways to retrieve the sharding key from the documents based on the entity name. This is important, since you want to be able to say that posts by the same user would sit on the same shard.

Based on the shard keys, we generate the sharding metadata:

{ "Id": "chunks/1", "Shards": ["http://shard1:8080", "http://shard1-backup:8080"], "Name": "ByUserName", "Range": ["aaa", "ddd"] }
{ "Id": "chunks/2", "Shards": ["http://shard1:8080", "http://shard2-backup:8080"], "Name": "ByUserName", "Range": ["ddd", "ggg"] }
{ "Id": "chunks/3", "Shards": ["http://shard2:8080", "http://shard3-backup:8080"], "Name": "ByUserName", "Range": ["ggg", "lll"] }
{ "Id": "chunks/4", "Shards": ["http://shard2:8080", "http://shard1-backup:8080"], "Name": "ByUserName", "Range": ["lll", "ppp"] }
{ "Id": "chunks/5", "Shards": ["http://shard3:8080", "http://shard2-backup:8080"], "Name": "ByUserName", "Range": ["ppp", "zzz"] }
{ "Id": "chunks/6", "Shards": ["http://shard3:8080", "http://shard3-backup:8080"], "Name": "ByUserName", "Range": ["000", "999"] }

This information gives us a way to make queries that are either directed (against a specific node, assuming we include the shard key in the query) or global (against all shards).

Note that we split the data into chunks; each chunk is going to sit on two different servers (because of the Replica setting above). We can determine which shard holds which chunk by using the Range data.
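
To make the routing idea concrete, here is a small sketch - not RavenDB's implementation - of picking the chunk (and therefore the shard servers) for a given shard key by comparing it against each chunk's Range; treating the lower bound as inclusive and the upper bound as exclusive is an assumption.

using System;
using System.Collections.Generic;

// Illustrative only: a chunk reduced to what range-based routing needs.
public class Chunk
{
    public string[] Shards { get; set; }    // primary + replica server URLs
    public string From { get; set; }         // Range[0]
    public string To { get; set; }           // Range[1]
}

public static class ShardRouter
{
    public static Chunk FindChunk(IEnumerable<Chunk> chunks, string shardKey)
    {
        // The sharding document above uses a key length of 3, so only the first
        // three characters of e.g. a Username participate in routing.
        string prefix = shardKey.Substring(0, Math.Min(3, shardKey.Length)).ToLowerInvariant();

        foreach (var chunk in chunks)
        {
            if (string.CompareOrdinal(prefix, chunk.From) >= 0 &&
                string.CompareOrdinal(prefix, chunk.To) < 0)
            {
                return chunk;   // a directed query goes to chunk.Shards
            }
        }
        return null;            // no match: fall back to a global query across all shards
    }
}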

Once a chunk grows too large (25,000 documents, by default), it will split, potentially moving to another server or servers.

Thoughts?


<Return to section navigation list> 
