Saturday, March 19, 2011

Windows Azure and Cloud Computing Posts for 3/16/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 3/19/2011 9:00 AM PST with articles marked by Beth Massi, Andy Kung, Andrew Coates and Orville McDonald (LightSwitch), Mary Jo Foley (ServiceOS), Avkash Chauhan (Startup Tasks), and Gizmox (VisualWebGUI)

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Azure Blob, Drive, Table and Queue Services

Avkash Chauhan reported an Unexpected Internal Storage Client Error when using Windows Azure Storage Client Library on 3/16/2011:

Recently I was working on an ASP.NET web role to implement sessions across multiple instances using the ASP.NET Providers sample, in which I used a Windows Azure storage table to store the session data. To do this, I wrote code to access Windows Azure storage tables, and I found that the following code generated an exception:

             TableServiceContext tSvc = CreateDataServiceContext();
             SessionRow sessionRow = GetSession(id, tSvc);
             ReleaseItemExclusive(tSvc, sessionRow, lockId);

The exception details are as follows:

System.Configuration.Provider.ProviderException was unhandled by user code
  Message=Error accessing storage.
       at Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.GetSession(String id, DataServiceContext context)
       at Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.<>c__DisplayClass4.<ResetItemTimeout>b__3()
       at Microsoft.Samples.ServiceHosting.AspProviders.ProviderRetryPolicies.RetryNImpl(Action action, Int32 numberOfRetries, TimeSpan minBackoff, TimeSpan maxBackoff, TimeSpan deltaBackoff)
       at Microsoft.Samples.ServiceHosting.AspProviders.ProviderRetryPolicies.<>c__DisplayClass1.<RetryN>b__0(Action action)
       at Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.ResetItemTimeout(HttpContext context, String id)
       at System.Web.SessionState.SessionStateModule.BeginAcquireState(Object source, EventArgs e, AsyncCallback cb, Object extraData)
       at System.Web.HttpApplication.AsyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
       at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

  InnerException: Microsoft.WindowsAzure.StorageClient.StorageClientException
       Message=Unexpected internal storage client error.

            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
            at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImplWithRetry[T](Func`2 impl, RetryPolicy policy)
            at Microsoft.WindowsAzure.StorageClient.CommonUtils.<LazyEnumerateSegmented>d__0`1.MoveNext()
            at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
            at Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.GetSession(String id, DataServiceContext context)

       Stack trace:
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
            at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
            at Microsoft.WindowsAzure.StorageClient.TaskImplHelper.ExecuteImplWithRetry[T](Func`2 impl, RetryPolicy policy)
            at Microsoft.WindowsAzure.StorageClient.CommonUtils.<LazyEnumerateSegmented>d__0`1.MoveNext()
            at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
            at Microsoft.Samples.ServiceHosting.AspProviders.TableStorageSessionStateProvider.GetSession(String id, DataServiceContext context)

Based on the above error, I found that it is caused by a known issue with ADO.NET: a bug in version 1.0 of the ADO.NET Data Services client library. The bug is expected to be fixed in ADO.NET Data Services version 1.5. In this case you have the following two options to solve the problem:

Option 1:

Re-create ADO.NET Context Object After Unexpected Internal Storage Client Error

If you are using the Windows Azure Storage Client Library to work with the Table service, your service may throw a StorageClientException with the error message "Unexpected Internal Storage Client Error" and the status code HttpStatusCode.Unused. If this error occurs, you must re-create your TableServiceContext object. This error can happen when you are calling one of the following methods:

  • TableServiceContext.BeginSaveChangesWithRetries
  • TableServiceContext.SaveChangesWithRetries
  • CloudTableQuery.Execute
  • CloudTableQuery.BeginExecuteSegmented

If you continue to use the same TableServiceContext object, unpredictable behavior may result, including possible data corruption. You may wish to track the association between a given TableServiceContext object and any query executed against it, as that information is not provided automatically by the Storage Client Library.

The solution is described as below:

You must re-create your TableServiceContext object.

The issue here is that you have a code path that keeps retrying the connection to Azure storage tables when it fails the first time, and it reuses the same TableServiceContext object for the subsequent retries. The suggestion is to create a new TableServiceContext each time an Azure storage table request is made, so that the old TableServiceContext is never reused.
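The fresh-context-per-retry pattern can be illustrated with a small, self-contained sketch. This is Python rather than the article's C#, and all names (FakeTableServiceContext, get_session_with_retries) are hypothetical stand-ins for the Storage Client Library types; the point is only that a context which has failed once is discarded rather than reused:

```python
class FakeTableServiceContext:
    """Stand-in for TableServiceContext: once a request fails, the
    context's internal state is corrupted and every later call fails."""
    def __init__(self):
        self._poisoned = False

    def get_session(self, session_id, fail=False):
        if self._poisoned:
            raise RuntimeError("Unexpected internal storage client error")
        if fail:
            self._poisoned = True  # the failed call leaves the context unusable
            raise ConnectionError("transient storage failure")
        return {"id": session_id}


def get_session_with_retries(session_id, retries=3):
    """Create a fresh context on every attempt instead of reusing one."""
    last_error = None
    for attempt in range(retries):
        context = FakeTableServiceContext()  # new context per attempt
        try:
            # simulate a transient failure on the first attempt only
            return context.get_session(session_id, fail=(attempt == 0))
        except ConnectionError as err:
            last_error = err  # retry with a brand-new context
    raise last_error
```

Reusing the poisoned context would keep raising the "unexpected internal" error forever; constructing a new one per attempt lets the retry succeed.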

Option 2:

Use ADO.NET Data Services 1.5, which is not yet RTM and is not supported with .NET 4. If you are using .NET 3.5 SP1, you can use the ADO.NET Data Services v1.5 CTP2:

ADO.NET Data Services v1.5 CTP2

This CTP is an early preview of the v1.5 features for ADO.NET Data Services. This CTP targets the Microsoft .NET Framework 3.5 SP1 and extends the functionality we provided in v1.0 of ADO.NET Data Services by providing additional features.

<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi reported Migration Week Part I of II: Access Migration Video in a 3/16/2011 post:

We've talked to many customers and partners who have started using SQL Azure to migrate departmental and desktop application data to the cloud. They take advantage of SQL Azure's managed database services for high availability and reliability, and use it as a spearhead for their cloud data strategy to share vital information with users on the web and with external business partners.  Doing this is a lot more straightforward than it looks, and we've put together some examples of how to do it easily.

In the video below, I'll walk you through how you can easily migrate data from Microsoft Access to SQL Azure and continue using Access as the primary user interface and continue using any business logic embedded in the app.  By doing so, you gain the ability for multiple users to access that data from anywhere on the internet without making changes to the Access application.  The data will always be available - and provides an interim strategy for extending the data to additional users and user experiences via the web or mobile applications in the future. 

This walkthrough takes an existing Access application that tracks employee expenses locally on a desktop machine.  We use the free SQL Server Migration Assistant (SSMA) for Access to migrate the data to SQL Azure.  Storing the data in SQL Azure provides high availability and redundancy that don't exist when the data resides on a local machine.  Additional key benefits are:

  • Multiuser access from anywhere via the internet
  • Future flexibility to use the data to power web and mobile applications

And the most important thing: the Access application continues to work without changes, except that the user interface now points to the SQL Azure database.

You can download the sample Access database and walk through the migration steps with your own SQL Azure account by going to our Codeplex site and downloading the bits here.

If you want to view it in full-screen mode, go here and click on the full-screen icon in the lower right-hand corner of the playback window.

We've got several other tutorials on the SQL Azure CodePlex site. Other walk-throughs and samples there include how to secure a SQL Azure database, programming .NET against SQL Azure, and several others.

If you haven't started experimenting with SQL Azure: for a limited time, new SQL Azure customers get a 1GB Web Edition database at no charge and with no commitment for 90 days.  This is a great way to try SQL Azure and the Windows Azure platform without any of the risk.  Check it out here.

Thanks!  We'll have another post on Thursday as we walk through how to migrate a database from on-premises SQL Server to SQL Azure.

Lynn Langit (@llangit) posted Twitter Data on SQL Azure on 3/16/2011:

image Here’s the deck from my presentation at ‘24 Hours of SQL Pass’: 24 Hours of SQL Pass - SQL Azure

View more presentations from Lynn Langit.

To try out SQL Azure with no credit card, no hassle, 30 days – use this code DPWE02

The Twitter parser source code is on CodePlex here.  What it does is as follows:
1) Edit the UnitTest1.cs for your particular hashtag
2) Configure the connection string to point to a local SQL Server (or you can point directly to SQL Azure)
3) Run the Unit Tests to pull a tweet stream based on the hashtag and to format as JSON data.
4) The Unit Tests will also create a simple database (3 tables) and populate that with the JSON tweet data.
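The actual parser lives in the CodePlex project above; as a rough, illustrative analogue of steps 1 and 3 (filtering a tweet stream by hashtag and formatting the results as JSON), here is a minimal Python sketch with made-up sample data:

```python
import json

def filter_by_hashtag(tweets, hashtag):
    """Keep only tweets whose text mentions the hashtag (case-insensitive)."""
    tag = hashtag.lower()
    return [t for t in tweets if tag in t["text"].lower()]

def to_json(tweets):
    """Format the matching tweets as a JSON document."""
    return json.dumps(tweets, indent=2)

sample = [
    {"user": "llangit", "text": "Slides posted for my #sqlazure session"},
    {"user": "other", "text": "Nothing to do with databases"},
]
matches = filter_by_hashtag(sample, "#sqlazure")
```

The real unit tests additionally create the three-table database and load the JSON tweet data into it; this sketch stops at the filter-and-format stage.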


<Return to section navigation list> 

MarketPlace DataMarket and OData

• Gizmox announced Gizmox VWG Technology Now Available Through the Windows Azure Marketplace in a 3/17/2011 press release:

VWG technology allows auto-transposition of client/server application code to Windows Azure

Tel Aviv, March 17 - Gizmox, the developer of the award-winning Visual WebGui (VWG) web and cloud platform, announced today that the Visual WebGui Instant CloudMove solution will now be available through the Windows Azure Marketplace. The partnership highlights Gizmox's support of Windows Azure and provides enterprise customers with the shortest path to enabling their client/server applications on the cloud.

VWG enables the auto-transposition of application code that runs locally as a client/server application into an application that runs natively on Windows Azure as a rich web application, which can then be accessed in a secure-by-design mode from any plain browser, including mobile and tablet devices across OSs. Instant CloudMove was designed to enable the transformation of client/server code to the web or cloud. The solution targets the architectural gap between desktop and web/cloud by bridging that gap with a virtualization layer atop a Microsoft ASP.NET-based web server.

Instant CloudMove supports the transformation of code from various client/server technologies such as Visual Basic 6, Oracle Forms, Power Builder, COBOL, Magic & others, as well as Microsoft .NET (Visual Basic.NET & C#), into rich Web or mobile secured-by-design browser based applications that run natively on Windows Azure. This means the application runs as native Web or cloud whilst maintaining its pure .NET code, and that the entire process can be done without disrupting the use of the application on the original platform. This was demonstrated in a joint webcast presented by Gizmox and the Windows Azure team. Get the webcast recording here:

“Microsoft is excited to feature Gizmox’s Visual WebGui Instant CloudMove in the Windows Azure Marketplace,” said Aashish Dhamdhere, senior product marketing manager for Windows Azure at Microsoft Corp. “Through VWG, customers can easily migrate applications to Windows Azure with minimal cost and disruption.” Gizmox is one of the lead partners in Microsoft’s Building Blocks Initiative, a new program to onboard technology and business partners to accelerate the adoption and penetration of Windows Azure with ISVs, enterprise developers and IT pros. Under this initiative Microsoft supports partners who have developed Windows Azure applications that could catalyze and ease the adoption of Windows Azure for end customers.

"We are delighted that Microsoft is featuring our technology in the Windows Azure Marketplace,” commented Navot Peled, President of Gizmox. “Instant CloudMove is the native .NET to web/cloud application move solution that addresses the tens of billions of client/server lines of code in the enterprises - particularly Visual Basic 6 and .NET - and offers the shortest path to Windows Azure. It also offers the shortest path from legacy systems such as COBOL, Oracle Forms, Power Builder and others to Windows Azure. The joint offering also includes special pricing to encourage enterprises to move to the cloud. We offer attractive proof-of-concept programs that let enterprises build their confidence with Visual WebGui and Windows Azure.  Other alternatives, such as desktop and application virtualization, are very restrictive in terms of scalability, security and accessibility. Offering the VWG solution in the Windows Azure Marketplace will ensure that those enterprises wishing to migrate to the cloud can enjoy all the benefits that our solution offers."

The joint offering is available from the Windows Azure Marketplace:

About Visual WebGui
Visual WebGui is the first secured-by-design .NET open source Ajax-empowered web/cloud and mobile HTML- and HTML5-based application platform. It reproduces uncompromised desktop functionality richness on the web, cloud and mobile at commoditized costs. The Visual WebGui solution offers features that build, migrate, run and manage web, cloud and mobile applications. It removes the Ajax complexities and limitations for business applications, as well as the web, cloud and mobile's single most limiting factor – the network limitation. It enables applications of any size and weight to be used on desktop, web, cloud and mobile from the same codebase, essentially forever changing the way businesses use web, cloud and mobile for their organizational needs. In 3 years, Visual WebGui has had more than 700,000 downloads of its software, and over 35,000 VWG applications are already in production at first-tier organizations such as SAP, IBM, Israel Aerospace Industry, Visa, Texas Instruments, banks, and medical, government and military institutions.

Tim Laverty posted Reference Data Caching to the OData blog on 3/16/2011:

What is reference data caching?

One of the comments I hear from customers building apps that use OData is that they aren't always connected to their data source and often want to cache data on a client.  This allows their applications to function while "off-line", to perform better, and to allow cross-session persistence.  A few examples of applications that could make use of cached data:

  • A shipping application that wants to store reference data locally that is occasionally updated, e.g. a list of zip codes in the U.S.
  • A dictionary application that stores dictionary data locally, updating it on a set cadence, e.g. once a month.
  • A translation application that stores selected language packs locally.  Updates to purchased language packs are pulled to the client at application startup.
  • A restaurant finder application that locally stores the restaurants in a city that interest a user.  Restaurant data is updated when user parameters are changed, or is pushed from the server as the data set changes.
  • A conference organization application that allows attendees to browse sessions, build a schedule, rate speakers, etc.  Sessions make up a large set of data that might be worthwhile to cache and periodically incrementally update.
What can you do with cached reference data?

I'm going to walk through a hypothetical lifecycle of data in the conference organization application above to highlight some of the things an application might want to do with the cached data.  I'll assume the application is running on a phone or other mobile device and there are a large number of sessions across many tracks.

At startup the application pulls all session data for the conference using an OData query.  Once the session data is obtained it's stored in a local cache allowing the application to continue to function if the user loses connectivity.  The user opens a Search screen in order to look for sessions with "OData" in the title or description.  Users could further limit search parameters by specifying tracks of interest or the day & timeslot for sessions.  In this case the system queries the local data for the sessions.  The user views their session results and drills into those of interest.  After a particular session is found the user closes the application.  Sometime later the user wants to again browse sessions.  They start up their application at which time the application updates the local cache incrementally with updated sessions, deletes cancelled sessions, and adds new sessions.  The user then searches again.
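The offline-search part of this lifecycle is simple to picture in code. The following is a minimal sketch, in Python with made-up field names rather than any real client library, of querying the local cache by title/description term and optional track or timeslot filters:

```python
def search_sessions(cache, term, tracks=None, timeslot=None):
    """Query the locally cached sessions, so search works without connectivity."""
    term = term.lower()
    hits = [s for s in cache
            if term in s["title"].lower() or term in s["description"].lower()]
    if tracks is not None:
        hits = [s for s in hits if s["track"] in tracks]
    if timeslot is not None:
        hits = [s for s in hits if s["timeslot"] == timeslot]
    return hits

# Sessions previously pulled from the conference feed and stored locally.
local_cache = [
    {"title": "OData Deep Dive", "description": "Querying feeds",
     "track": "Data", "timeslot": "9:00"},
    {"title": "Intro to Workflow", "description": "WF4 basics",
     "track": "Frameworks", "timeslot": "9:00"},
]
```

Because the query runs entirely against `local_cache`, it behaves identically whether or not the device currently has a connection to the service.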

This example illustrates a few scenarios; other scenarios and benefits I can picture would be the service pushing new session suggestions to the user based on their registration information, the application having a faster startup time because sessions are already cached locally, the cached session data being updated on a schedule (e.g. any updates to the cached sessions are pulled once an hour to accommodate a rapidly changing conference schedule), more complex queries against the data (e.g. sessions with speakers that have great speaker ratings), and more.

While it's possible to accomplish these scenarios today by storing data locally retrieved through an OData feed, it must be manually done and keeping the data up to date once stored locally is arduous and inefficient.

Why add protocol support for reference data caching to OData?

Adding protocol support for reference data caching would provide an efficient way to retrieve changes to cached data.  This would give clients that are frequently disconnected from their data source great benefits from storing data locally.

If you look at the apps in your phone, you'll see that many of them use a nice display against locally cached data that comes from a web service.  I look across the breadth of applications that could use reference data caching features and think adding protocol support for a subset of caching and local data scenarios would be useful.

How far should we go?

I want to explore what makes sense in terms of features to support.  I think there's a sweet spot in finding the right set of features to add: one that allows delivering on a limited set of key scenarios fast, provides high value, and doesn't add a great deal of complexity to the protocol or to the OData clients & servers implementing the extensions.

As with any addition to the protocol, the changes need to be provided in a RESTful manner and not be tied to any specific OData client or server.  Any changes made to the protocol to enable reference data caching shouldn't be required changes for existing OData clients and servers, i.e. the additions should preserve backward compatibility.  A client should also be able to run against a server that's "reference data caching enabled" and use it just like any other OData server, either taking advantage of the caching features or not.

My 'strawman' for the set of features that make up the sweet spot is small and I'd like your reality check on it.  Here's an initial take:

  • The protocol should support allowing sets of data to be pulled from a server and then updated incrementally. 
  • Supporting updates to the local data through a "pull" method should be supported.
  • "Pushing" updates from server to client is interesting but I don't think it's needed in a V1.  
  • Allowing writes on the local data and pushing those writes to the server on demand in an initial rev isn't required.  
  • I think providing protocol support for scheduled refreshes of data isn't necessary. 
  • The structure of data cached on a client could change, e.g. fields could be added/deleted/updated to the Conference Session in the application above.  While the system could offer the capability to update the data already pulled locally in a V1 I believe it'd be acceptable to ask the client to redownload all data with the new structure.

I'd love to hear your thoughts on this list.


I want to explore what the protocol changes might be for supporting reference data caching in the conference organization application we dug into above.

The conference application first went through a startup sequence and used an OData query to pull all conference sessions.  This query would look like any other OData query, e.g.


The data returned by the feed for the query would look exactly like any other dataset for an OData query with one exception:

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<feed ...>
<link rel="" href="http://conferenceorganization/v1/sessions/?$deltatoken=B:405973881944444416"/>

Note the inclusion of a delta (or refresh) link after the data.  The client can store the data obtained locally, then use the delta link to obtain any changes to the dataset subsequent to the initial query.   Using the link to query for new changes will give you empty results until the data in the service is updated.  If there are updates, just the instance data that changed (added, updated, or deleted) is returned, along with a new delta link.  This allows a client to poll the server for changes to a specific set of data, keeping it up to date over time while improving performance and reducing network use.
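The poll-with-delta-token loop can be sketched end to end. This is a self-contained Python simulation of the *proposed* behavior (the `$deltatoken` option is the strawman under discussion, not a shipped OData feature), with an in-memory change log standing in for the server:

```python
class FakeDeltaFeed:
    """Stand-in for a server supporting the proposed $deltatoken query option."""
    def __init__(self):
        self._version = 0
        self._log = []  # (version, entity) change log

    def apply_change(self, entity):
        self._version += 1
        self._log.append((self._version, entity))

    def query(self, deltatoken=0):
        """Return entities changed since `deltatoken`, plus a new token,
        mirroring the delta link returned after a result set."""
        changed = [e for v, e in self._log if v > deltatoken]
        return changed, self._version


feed = FakeDeltaFeed()
feed.apply_change({"id": 1, "title": "OData Deep Dive"})

sessions, token = feed.query()       # initial pull returns everything
nothing, token = feed.query(token)   # no changes yet -> empty result
feed.apply_change({"id": 2, "title": "Late-Breaking Session"})
delta, token = feed.query(token)     # only the new session is returned
```

Each response carries the token for the next poll, so the client never re-downloads unchanged sessions; that is the performance and network-use win the paragraph above describes.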

One immediate question with the design is how the system would interact with server driven paging.  It's important that a server calculate the delta token at the moment a query is executed considering instance data could change while a client is paging through results using a series of next links.  This allows clients to use a delta link to get any changes for a set of data that occurred since the query was first executed.  How would the delta token be persisted or carried across a set of next links?  The protocol is stateless so we wouldn't want to persist it on the server (or client).  One option would be to hold the delta token inside the next link, such as:

<link rel="next" href="http://conferenceorganization/v1/sessions/?$skiptoken='5QLs'&$deltatoken=N:405973881944444416" />

This would allow the server to obtain the token across requests and create the delta link for use on the last page of results.


Reference data caching could enable a broad range of enhancements and new features for applications that are sometimes disconnected or would like to reduce network use in order to increase performance.  I'd like your thoughts on value of the feature, how far you think it should go, and any comments you have on the exploration of the idea.

At this stage this seems like something that we should add to version 3.0 of OData. Do you agree with that assessment? If not, we should discuss how you see the relative priorities.

Some specific questions:

  • Do you agree that adding protocol support for reference data caching is worthwhile?
  • What do you think is the sweet spot of features that should be added to the protocol?
  • Do you see red flags in the exploration or ideas missing?
  • Are there any particular applications where you'd use this?

Ron Jacobs reported the availability of a 00:16:59 - WCF Web HTTP (REST) Test Tool Preview video segment on 3/16/2011:

The other day, I walked by the office where our team members from Shanghai work while in Redmond, and I ran into Leo Shum, a test lead on the new WCF Web HTTP (REST) Test Tool. On this episode, Leo joins me in the studio to show you how this new tool works. Though it's not yet shipping, we definitely want to know what you think and how we can make it better. So take a look and let us know!

Watch the Silverlight video here.

Patrick D. Fletcher explained Cancelling an AsyncCodeActivity in a 3/16/2011 post to the .NET Endpoint blog:

Even when AsyncCodeActivity.Cancel is called, the activity will still execute the EndExecute method, so it isn't immediately obvious why a second path of execution is necessary. Implementing this method is useful only when AsyncCodeActivityContext.MarkCanceled is called, which in turn is useful mainly when cancellation is meaningful for the underlying operation (i.e. if work already completed can be rolled back).

In order to implement cancellation in a useful way, do the following:

  • Call AsyncCodeActivityContext.MarkCanceled to mark the activity as canceled.
  • In the Cancel method, check to see what work had already been done by the activity, and undo the work.
  • Do not duplicate effort in the Cancel and the EndExecute methods in case of cancellation, since both methods will be called.

If no work needs to be rolled back when an AsyncCodeActivity is canceled, it is not necessary to override AsyncCodeActivity.Cancel.
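The three bullets above can be sketched as a tiny simulation. This is Python rather than WF's C#, and the class below only *mimics* the Cancel/MarkCanceled/EndExecute interplay described in the post; it is not the real AsyncCodeActivity API:

```python
class FakeAsyncActivity:
    """Simulates an AsyncCodeActivity whose Cancel rolls back finished work."""
    def __init__(self):
        self.completed_work = []   # side effects that may need undoing
        self.is_canceled = False   # analogue of calling MarkCanceled

    def do_work(self, item):
        self.completed_work.append(item)

    def cancel(self):
        self.is_canceled = True            # 1. mark the activity as canceled
        while self.completed_work:         # 2. undo work already done
            self.completed_work.pop()      #    (roll back the side effect)

    def end_execute(self):
        # 3. EndExecute still runs after Cancel; the rollback is NOT
        #    repeated here, so effort isn't duplicated across both paths.
        return "canceled" if self.is_canceled else "completed"


activity = FakeAsyncActivity()
activity.do_work("step-1")
activity.cancel()
result = activity.end_execute()
```

The key design point mirrored here is the division of labor: `cancel` owns the rollback, `end_execute` only observes the canceled flag.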

<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus


Sean Deuby wrote Ease Cloud Security Concerns with Federated Identity for Cloud IT Pro on 3/9/2011 (missed when posted):

For the first time in a long time, the enterprise identity landscape is evolving at its most basic level. There’s a new kid on the block, and its name is federated identity. Although the seeds of this change have been around for a while, we just didn’t recognize their importance. Federated identity is here to stay, and IT professionals and developers need to learn about it and how it will affect their work in the future.

Why We Need Federated Identity

To understand the growing popularity of federated identity, it helps to look at the challenges that IT professionals and developers face when using traditional identity authentication in the modern IT environment—in particular, the Kerberos protocol. The point behind an identity provider, such as Active Directory (AD), is to centralize identity information for resources to consume. Although identity-oriented IT pros tend to lose sight of it, the purpose of the authentication process is to determine and validate the user’s identity in order to gain access to resources.

The Kerberos security protocol (and therefore the AD domains and forests built on it) was designed to work in a fairly secure environment, such as a corporate intranet. The Kerberos protocol, as implemented in AD, provides two components: confirmation of identity and security group membership. If a resource (e.g., a DFS namespace) requires more information, such as site information, it needs to extract that information from another location—AD itself.

However, scenarios that don’t require any modification of AD to store more information are pretty simplistic in real life. Microsoft Exchange Server, for example, requires more information about a user than the base AD schema provides. So, AD admins must extend the schema to allow Exchange to store added identity data about its users. Schema extensions aren’t done casually; they take time to prepare for and schedule. As a result, other applications might choose to store identity information in databases such as SQL Server or Active Directory Lightweight Directory Services (AD LDS) that don’t require the amount of preparation a schema change does.

But what if the users and resources are in two different enterprises—for example, a joint venture or collaboration, or for a Software as a Service (SaaS) cloud application? Do you create and manage the external users’ identities by creating shadow accounts in AD, or do your developers create a separate account database to hold them? How do you keep up with the accurate provisioning and deprovisioning of these accounts? What about providing adequate security for these identities against hackers?

Most companies don’t want to manage external users’ identities and the headaches that go along with that management. If an application is intended to support multiple access scenarios, developers must build in multiple authentication mechanisms. Identity design and management in these and other scenarios become very cumbersome, and the traditional model is stretched to its limit.

What Federated Identity Is

The federated identity model can handle a variety of scenarios. Federated identity is the ability to port data across security domains using claims and assertions from a digitally signed identity provider. To understand what that definition means, let’s break it into parts. As I described in the previous section, each enterprise’s identity store can be generically described as a security domain, regardless of whether it’s using AD or some other directory product. For the purpose of this article, AD is the identity provider for scenarios inside an enterprise. For scenarios that span multiple enterprises, the identity provider is the entire enterprise that provides identity information (not just AD). As for claims and assertions, these are essential parts of what we call claims-based authentication.

Claims-based authentication is the cornerstone of federated identity. At its simplest, claims-based authentication is about presenting an application with the potentially wide variety of identity information it needs, from an identity provider it trusts, in a highly secure envelope, regardless of whether the application is inside or outside the enterprise. That’s why it can handle the two-enterprise and SaaS scenarios that I discussed in the previous section so well. Claims-based authentication adds flexibility and security, whereas traditional authentication technology gives you either flexibility (LDAP queries to AD) or security (Kerberos).

The claims-based authentication model is based on a few simple, intuitive concepts, but the authentication process can bounce back and forth quite a bit. Let’s compare some of the basics of this model with one you know a little better: the Kerberos protocol.

In AD, every authenticated user has one or more Kerberos tickets that contain identity information. A basic construct of claims-based authentication is the token, formatted in Security Assertion Markup Language (SAML). Figure 1 shows a SAML token, which is similar to a Kerberos ticket in many ways. A Kerberos ticket contains a payload, called the access token, that asserts which security groups the user is a member of. The resource (e.g., a file server) trusts this assertion because the ticket is cryptographically confirmed to be from a valid identity source—which in AD is the Kerberos Key Distribution Center (KDC) of the domain controller (DC) the file server is talking to.

Figure 1: SAML token
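The trust mechanics behind a signed token can be sketched compactly. Real SAML tokens use XML digital signatures and asymmetric keys, so the following Python sketch is only an analogy using a symmetric HMAC and hypothetical names: the identity provider signs a bag of claims, and the relying party verifies the signature before trusting any claim inside:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"identity-provider-secret"  # hypothetical shared signing key

def issue_token(claims):
    """Identity provider: sign the claims so a relying party can trust them."""
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_token(token):
    """Relying party: recompute the signature to validate the assertion."""
    payload = json.dumps(token["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["signature"])

token = issue_token({"name": "alice", "role": "purchaser"})
```

Any tampering with the claims invalidates the signature, which is what lets an application accept rich identity information from outside its own security domain.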

Read more.

Sean Deuby is technical director for Windows IT Pro and SQL Server Magazine.

<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Steve Plank (@plankytronixx) asked If VM Role isn’t IaaS, where exactly, does it fit in? on 3/16/2011:

Amazon, traditionally an IaaS cloud provider, has now introduced Beanstalk, a PaaS model for the cloud.  Their IaaS model has served them well: you provide a VM, they run it for you. To round out the model they provide additional services such as storage and database. This model has made Amazon by far the most successful cloud operator in the market today in terms of the volume of compute they run. So, with such a successful model, one that has brought them large market share, why are they introducing a PaaS model?

I believe it’s because, to truly take advantage of the cloud, customers will start to demand more PaaS-like services as cloud technologies mature. The IaaS model is so easy to migrate to because, in no small measure, you replicate what you have in your data-centre, save for those additional services I talked about, like storage (which you don’t have to use).

This means you replicate all the problems of running applications in your data-centre as well. That’s not to say there is no place for IaaS – there absolutely is. IaaS offers the pay-as-you-go, only-pay-for-what-you-use model that the data-centre, in most cases, can’t match. It also increases business agility. You could go from making the decision to put an application in the IaaS cloud to actually running it in the cloud in less than a day – in fact, in less than an hour if you want to throw caution to the wind. So, with IaaS, the transition time from deciding to move (or create in the first place) an application in the cloud, to actually doing it, is very short indeed.

Not so with PaaS applications. The reason is that PaaS applications have to be written with the PaaS platform in mind. You only concentrate on the business logic. The management, monitoring and provisioning are automatically handled by the platform. The application lifecycle is taken care of for you by the cloud operator. As a developer, you are granted a shared pool of compute, storage and network resources and you don’t have to manage any of that. As a developer you can concentrate on being a developer. Remember, managing applications is something you are forced to do. It’s not something that has intrinsic value to the organisation. Microsoft’s Mark Russinovich says “treat the data-centre as if it’s the machine”. You end up with the resources of the entire data-centre curling around your application.

OK – so where does that leave VM Role? I’ve heard many times that “VM Role is the way Azure does IaaS”. I’ve even heard this from Microsoft commentators – who, I believe, were trying to simplify a message into a small, easily digestible chunk – but they actually got it wrong.

In a couple of previous posts, I’ve talked about why VM Role is not IaaS and why, for example, you definitely wouldn’t want to use VM Role for, say, a Domain Controller. In this post, rather than saying what VM Role is not, I’d like to concentrate on what it is, and where it fits into the PaaS model.

Windows Azure is about building applications

When you build a Windows Azure application, you don’t think about the operating system. Sure, you think about APIs and so on, but the operating system itself is handled by the platform. When you make updates to your application, the operating system and local storage are left untouched. It’s the same when you make configuration changes to your application – the operating system and local storage remain untouched. The notion of configuration-drift disappears. You update the application and your configuration changes flow, automatically. When you need to scale out, this is handled in a known, controlled and consistent way. Scaling is achieved through multiple instances of the application, so sure, you have to write your application in a very specific way to allow for this. But the scale-out is achieved because instance management creates identical instances. Availability comes from the way failure management is done – automatically – meaning you don’t suffer when hardware fails or the operating system on one of your instances crashes. You also get automated, consistent servicing of the operating system, achieved through image-based OS patching. Security patches are applied in a known, controlled and consistent way across all your instances, without availability having to suffer while this is done.

What I tried to do in the previous paragraph is highlight how consistently the OS is managed, such that you never have to worry about it. Windows Azure is a platform that has successfully separated the OS from the application. But if VM Role gives you access to the underlying operating system, where on earth does it fit into this picture, given that Microsoft is still saying VM Role is a PaaS offering? The way to think of it is like this:

  1. A Web Role is a website hosted on IIS.
  2. A Worker Role is an application hosted on Microsoft’s Windows image.
  3. A VM Role is your pre-loaded application hosted on your Windows image.


You can see there is a continuum between abstraction and control. In those situations where you don’t feel you have enough control, you are forced to move away from abstraction. With VM Role, you get another module bolted on to the end of the control extreme of the continuum:


So, there are a few specific ideal use cases for VM Role. The first is where the application install is lengthy. Sure, you could do it with startup tasks in a Worker Role, but think what that does for responsiveness when you want to, say, scale out. I find that most deployments take about 10 minutes to get to a Ready state. If you add 30 minutes on to that because of a lengthy installation that has to happen during a startup task, it means you also add 30 minutes on to the time it will take your application to scale out when you increase the instance count. If the application is already installed in the image you supplied (see bullet #3 above), you don’t have to suffer that delay. The next case is error-prone application installations: the sort of thing a human being can react to and make the right decisions about, but which a startup task would struggle with. Even with a 1% or 2% failure rate on application installations, in a large deployment that can add up to a lot of instances. The third is any application that requires a manual installation: clicking buttons, typing into text boxes and so on.

I talked briefly about consistency above and believe it or not, VM Role benefits from all but one of the things I talked about in that paragraph. Let me just repeat that paragraph here, highlighting the only differences in red:

With VM Role, the notion of configuration-drift disappears. You update the application and your configuration changes flow, automatically. When you need to scale out, this is handled in a known, controlled and consistent way. Scaling is achieved through multiple instances of the application, so sure, you have to write your application in a very specific way to allow for this. But the scale-out is achieved because instance management creates identical instances. Availability comes from the way failure management is done – automatically – meaning you don’t suffer when hardware fails or the operating system on one of your instances crashes. However, you don’t get automated, consistent servicing of the operating system. You must patch the OS yourself. It’s the same with security patches and hotfixes. Again though, when you look at what you do get, a great deal of it is the same. I previously said, in a different post, that the best way to think of VM Role is that instead of supplying just the .cspkg (plus service model) to Microsoft to run (as in Web and Worker Roles), you supply a .vhd. When you think of patching your application, you should consider the entire .vhd a bit like a .cspkg. If there’s something wrong with it, including the underlying OS, then you patch it. VM Role gives you an efficient way of doing this with differencing files, so you don’t have to redeploy an entire multi-gigabyte .vhd for, say, one patch.

There is a quid-pro-quo for all the benefits of service management you still get from VM Role, though. The deal is that you must write your application with two core architectural principles embedded throughout: the application should be stateless, and it should be capable of running on multiple instances. By being stateless, you still get the benefit of consistent updates, consistent configuration and multi-instance updates. By being capable of running on multiple instances, you still get the benefit of scale-out and high availability. This obviously means that not all applications are good candidates for being moved to VM Role.


VM Role is not an IaaS solution. It extends the abstraction-control continuum further in the direction of control, but there is still significant abstraction. This means VM Role still benefits from the Windows Azure service model: for example, the idea of configuration-drift disappears, configuration changes automatically flow, code changes automatically flow, and so on. However, the deal is that you must build applications around a multi-instance, stateless architecture, and you don’t get the benefits of automated OS service management. You must become the manager of the OS. However, you only do that for your “golden image” – once deployed, it flows, automatically, just like the application you installed on to it does. VM Role is particularly suited to multi-instance, stateless applications with these properties: they have long or error-prone installation procedures, or they have manual installation requirements. The way to think of VM Role is like this:

  1. A Web Role is a website hosted on IIS.
  2. A Worker Role is an application hosted on Microsoft’s Windows image.
  3. A VM Role is your pre-loaded application hosted on your Windows image.

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Avkash Chauhan solved a problem with Enabling PS Remoting (Enable-PSRemoting) in Windows Azure Role using Startup Task in a 3/17/2011 post:

Recently I was working on an issue in which running a PowerShell script to enable PS Remoting in Windows Azure produced the following error:

Set-WSManQuickConfig : Access is denied. at line:<N> char:<N>

If you have followed the steps below to enable PS Remoting in a Windows Azure application, I am sure you have seen the error described above.

1. Created a PowerShell Script (Example EnablePSRemoting.ps1) as below:

Set-ExecutionPolicy Unrestricted -Force
Enable-PSRemoting -Force
"After PSRemoting"

2. Created a Startup.cmd file as below:

powershell -ExecutionPolicy Unrestricted EnablePSRemoting.ps1 >>  PowershellLog.txt

3. Now included the Startup.cmd as Startup task in Service Definition file as below:

<ServiceDefinition name="MyAzureService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
   <WebRole name="MyWebRole">
      <Startup>
         <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
      ...
   </WebRole>
</ServiceDefinition>

4. Deployed application.

Once the Windows Azure application runs in the cloud, you will hit the following error:

Set-WSManQuickConfig : Access is denied. at line:<N> char:<N>

The reason for the above error is as follows:

1. The startup task runs on Windows Azure with no user context – in other words, in the LocalSystem context.

2. The PowerShell script you are trying to run as a startup task is designed to run in a user context and, because of #1, it returns an "Access Denied" error.


To solve this problem, you need to use the Task Scheduler to create a user first and then launch the PowerShell script in the newly created user’s context. To create the user within the startup task and then successfully enable PS Remoting, please follow David Aiken’s blog:

• Avkash Chauhan described Using Startup Task in Windows Azure: detailed summary on 3/17/2011:

Using Windows Azure SDK 1.3 or later, you have the ability to launch a process (called a startup task) in one of several modes:

  1. The task runs before your role starts; once the task is finished, the role starts
  2. The role does not wait for the startup task to finish; it starts just after the startup task is launched, and the two run independently
  3. The role stays up as long as the startup task is running

To add a startup task to your role, you just need to add <Startup><Task /></Startup> entries to the service definition file (ServiceDefinition.csdef), similar to the following:

<ServiceDefinition name="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
   <WebRole name="WebRole1">
      <Startup>
         <Task commandLine="Startup.cmd" executionContext="limited" taskType="simple" />
      </Startup>
      ...
   </WebRole>
</ServiceDefinition>

In the above example, Startup.cmd is a properly created .cmd file (it could also be a batch file or an EXE itself) which contains the startup application.

The commandLine attribute is the name of the executable file that will run as the startup task.

The executionContext attribute can be:

  1. Elevated: The startup task runs with administrator privileges.
  2. Limited: The startup task runs with the same privileges as the role.

The taskType attribute can be:

  1. Simple: The task must complete before the role starts. If the task does not complete and exit, the role is stuck waiting for it. Simple tasks run until they exit, and then the next task (or the RoleEntryPoint, if this is the last startup task) is executed.
  2. Background: The task is independent of the role itself. The startup task begins executing and the role starts immediately after, even if the task has not completed.
  3. Foreground: Behaves like a background task, except that the role instance stays up as long as the task is running; the role can only shut down once all foreground tasks have exited. Choose it if you want the role instance to stay up as long as the task is running.

Important Notes:

  • Simple tasks are expected to return exit code 0 to indicate successful completion. Without a 0 exit code, the next task (or the role) will not start.

  • If the commands in the .cmd file do not adhere to the convention of returning 0 for success, you may need to add “exit /b 0” to force it.
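As an illustration, a minimal Startup.cmd following this convention might look like the sketch below. The installer name is hypothetical, and note that forcing exit code 0 also hides genuine failures, so log the real result if you need to diagnose problems:

```bat
REM Startup.cmd - hypothetical sketch of a simple startup task.
REM Redirect output to a log file so failures can be diagnosed later.
msiexec /i MyComponent.msi /quiet >> StartupLog.txt 2>&1
echo msiexec returned %ERRORLEVEL% >> StartupLog.txt

REM Force a success exit code so the role is not blocked, even if
REM msiexec returned a non-zero "success" code such as 3010 (reboot required).
exit /b 0
```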

Here are a few scenarios to understand it much better:

[Scenario 1]

  <Task commandLine="Startup1.cmd" executionContext="limited" taskType="simple"/>
  <Task commandLine="Startup2.cmd" executionContext="limited" taskType="simple"/>

In this situation, the Startup1.cmd task executes first; when it completes and exits, Startup2.cmd executes. Finally, when Startup2.cmd completes and exits, the role starts.

[Scenario 2]

  <Task commandLine="Startup1.cmd" executionContext="limited" taskType="background"/>
  <Task commandLine="Startup2.cmd" executionContext="limited" taskType="simple"/>

In this situation, the Startup1.cmd task starts executing and Startup2.cmd executes immediately after it is launched. The Startup2.cmd task does not depend on Startup1.cmd; however, the role will not start until Startup2.cmd is complete and has exited.
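Putting Scenario 2 together, a complete service definition might look like the following sketch (assuming the standard SDK 1.3 ServiceDefinition schema; the role, site and endpoint names are illustrative):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="MyService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup>
      <!-- Runs independently; the role does not wait for it -->
      <Task commandLine="Startup1.cmd" executionContext="limited" taskType="background" />
      <!-- Must complete with exit code 0 before the role starts -->
      <Task commandLine="Startup2.cmd" executionContext="limited" taskType="simple" />
    </Startup>
    <Sites>
      <Site name="Web">
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    <Endpoints>
      <InputEndpoint name="Endpoint1" protocol="http" port="80" />
    </Endpoints>
  </WebRole>
</ServiceDefinition>
```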

Running Startup tasks in User Context:

Note that the startup tasks defined in the service definition file do not run in any user context (in other words, they run in the LocalSystem context), and it is possible that the application you run as a startup task needs a user account to execute; otherwise it may return an "Access Denied" error. So how do you accomplish this? The solution is to create a user account within your startup task and then launch the application in the newly created user’s context. David Aiken has written a great article, along with a code snippet, about running a startup task as a real user:
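The general shape of that approach can be sketched with the built-in net user and schtasks commands. This is not David’s actual code: the user name and password here are purely illustrative, and a production startup task should generate a strong random password rather than hard-code one:

```bat
REM CreateUserAndRun.cmd - hypothetical sketch: create a local admin user,
REM then run the PowerShell script in that user's context via Task Scheduler.
net user PSUser Illustrative-P@ss1 /add
net localgroup Administrators PSUser /add

REM Register a one-shot scheduled task that runs as the new user,
REM then trigger it immediately.
schtasks /create /tn EnablePSRemoting /ru PSUser /rp Illustrative-P@ss1 /sc once /st 00:00 /tr "powershell -ExecutionPolicy Unrestricted -File %~dp0EnablePSRemoting.ps1"
schtasks /run /tn EnablePSRemoting
```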

Debugging with Startup task along with Tips and Tricks:

Debugging your application in the compute emulator along with startup tasks and batch or command files is tricky, and no one knows it better than Steve Marx, who created a very nice article with tips and tricks for using startup tasks:

How to use Startup task in Azure by Windows Azure Documentation Team:

David Chou reported the Rise of the cloud ecosystems on 3/16/2011:

I had the opportunity to participate at this year’s CloudConnect conference, with my session on Building Highly Scalable Applications on Windows Azure, which is mostly an update of my earlier presentations at JavaOne and Cloud Computing Expo. I was pleased to learn that the cloud-optimized design approach, which leverages distributed-computing best practices, aligned well with similar talks by well-known cloud experts from Amazon, Google, etc. A more detailed discussion of this topic can be found in my earlier post - Designing for Cloud-Optimized Architecture.

One of the major takeaways I had from the conference, was the focus on ‘cloud platforms’ (or platform-as-a-service generally) messaging, further reinforcing the platform view of cloud computing, as opposed to the infrastructure-level perspectives, or mixed views around the popular SaaS, PaaS, and IaaS service delivery models.

Click here to view David’s deck on SlideShare.

Werner Vogels, Vice President and CTO, Amazon.com

It started with Werner Vogels in his keynote presentation. Werner said, “it is all about the cloud ecosystem”, that “everything are cloud services; everything as a cloud service”, and “not constrained by any model”. And “let a thousand platforms bloom”, where “ecosystems grow as big as possible”. This implies that the popular models (e.g., SaaS, PaaS, IaaS) are irrelevant, because everything is simply a service that we can consume, and these services can span the entire spectrum of IT capabilities as we know them, and potentially more.

Infrastructure as a service

It is interesting to see how the platform messaging has evolved over the past few years. For instance, Werner Vogels used to refer to Amazon Web Services as “Infrastructure as a Service”. However, I think Werner Vogels has always advocated the platform view, similar to Gartner’s notion of “application infrastructure as a service”, instead of the overused IaaS (infrastructure-as-a-service) service delivery model (as popularized by NIST’s definition of cloud computing). Perhaps it was also because many people started incorrectly referring to Amazon Web Services as IaaS, without seeing the platform view, that Werner chose to clarify that models are irrelevant and “it is all about the cloud ecosystem”.

I also belong to the camp that advocates the platform view, and the further ecosystem view of cloud computing. The platform and ecosystem views of cloud computing represent a new paradigm, and promote a new way of computing. Though I think the SaaS, PaaS, and IaaS classifications (or service delivery models) still have some uses too. They are particularly relevant when trying to understand the general differences and trade-offs between the service delivery models (as defined by NIST), from a layers and levels of abstractions perspective.


Perhaps what we shouldn’t do is try to fit cloud service providers into these categories/models. Often, a particular service provider may have offerings in multiple models, or offerings that don’t fit well in these models, and it would be over-simplifying to refer to cloud platforms, like Amazon Web Services, Google App Engine, Windows Azure platform, etc., strictly in this platform-as-a-service definition. These platforms have a lot more to offer than simply a higher-level abstraction compute service.

Cloud platforms

As Werner Vogels said, “cloud is actually a very large collection of services”, cloud platforms aren’t just a place to deploy and execute workloads. Cloud platforms provide the necessary capabilities, delivered as distinct services, that applications can leverage to accomplish specific tasks.

Amazon Web Services has always been a cloud platform; today it is a collection of services that provide capabilities for compute (EC2, EMR), content delivery (CloudFront), database (SimpleDB, RDS), deployment and management (Elastic Beanstalk, CloudFormation), e-commerce (FWS), messaging (SQS, SNS, SES), monitoring (CloudWatch), networking (VPC, ELB), payments and billing (FPS, DevPay), storage (S3, EBS), support, etc. It is not just a hosting environment for virtual machines (which the popular IaaS model is more aligned with). In fact Amazon Web Services released S3 into production (March 2006) before EC2 (limited public beta in August 2006, removed beta label in October 2008).

Similarly, Microsoft has been using the platform-as-a-service messaging when describing Windows Azure platform, but it is also about the collection of various capabilities (delivered as services). For example, below is a visual representation of the application platform capabilities in Windows Azure platform that I have been using since 2009 (though the list of capabilities grew over that period):


And below shows how those capabilities are delivered as services and organized in Windows Azure platform.


This is important because the platform view is one of the major differentiators of cloud platforms when compared to the more conventional outsourced hosting providers. When building applications using hosting providers (or strictly infrastructure-as-a-service offerings), we have to incur the engineering efforts to design, implement, and maintain our own solutions for storage, data management, security, caching, etc. In cloud platforms such as Amazon Web Services, Google App Engine, and Windows Azure platform, these capabilities are baked into the platform and available as services that are readily accessible. Applications just need to use them, without having to be concerned with any of their underpinnings and operations.

An analogy can be drawn from high-level differences between getting food products from Costco to put together a semi-homemade meal, versus getting raw ingredients from a supermarket and prepare, cook, and finish a fully-homemade meal. :) The semi-homemade model offers higher agility (less time and efforts required to put together a meal and to scale it for bigger parties) and economy of scale in Costco’s case, while the fully-homemade model offers more control.

Another distinction between cloud platforms and typical IaaS offerings, is that cloud platforms are more of a way of computing – a new/different paradigm; whereas IaaS offerings are better aligned towards hosting scenarios. This doesn’t mean that there are no overlaps between the two models, or that one is necessarily better than the other. They are just different models; with some overlaps, but ideally suited for different use cases. For cloud platforms, ideal use cases are aligned to net-new, or greenfield development projects that are cloud-optimized. Again, hosting scenarios also work on cloud platforms, but cloud-optimized applications stand to gain more benefits from cloud platforms.

Cloud ecosystems

The cloud ecosystem view takes the cloud platform view one step further and includes partners and third parties that enable their services to participate in an ecosystem. The collective set of capabilities from multiple organizations – potentially services spanning multiple platforms and cloud environments – together forms an ecosystem that feeds and builds upon itself (in composite, federated application models) and generates best practices, reusable processes, communities, etc. This can also be viewed as a natural evolution of platform paradigms, drawing inference from other models where the iterations typically evolved from technology maturity, to critical mass in adoption, and then to building ecosystems. The platform with the largest and most diverse ecosystem gets to ride the paradigm shift and enjoy a dominant position for that particular generation.

The Web platform stack model I discussed back in 2007 is one way of looking at the ecosystem model (apologies for the rich color scheme; I was going through a coloring phase at the time).

In essence, a cloud ecosystem itself will likely have many layers of abstractions as well; one building on top of another. One future trend in cloud computing may very well be the continued climb into higher levels of abstraction, as differences and complexities at one level often represent development opportunities (e.g., for specializations, consolidations/aggregations, derivations, etc.) at a higher level.

Ultimately, cloud platforms enable the dynamic environments that support the construction of ecosystems. This is one aspect inherent in cloud platforms, but not as much for lower-level IaaS environments. And as the ecosystems grow in size and diversity, the network effect (as discussed briefly back in 2008) will contribute to increasingly intelligent and interactive environments, and generate, collectively, tremendous value.


Brian Hitney announced on 3/16/2011 that he’s Getting Ready to Rock (Paper and Scissors) with a new Windows Azure coding competition:

We’re gearing up for something that I think will be truly exciting – but I’m getting ahead of myself.  This is likely going to be a long series of posts, so let me start from the beginning.

About a year or so ago, at the Raleigh Code Camp, I stumbled into a coding competition that was run by James Avery and Nate Kohari during a few hours in the middle of the day.   The concept was simple:  write a program that plays “Rock, Paper, Scissors” – you would take your code, compiled as a DLL, and upload it to their machine via a website, and the site would run your “bot” against everyone else.   Thus, a coding competition!

I was intrigued. During the first round, I didn’t quite get the competition aspect, since returning a random move of rock, paper, or scissors seems to be about the best strategy. Still, though, you start thinking, “What if my opponent is even lazier and just throws rock all the time?” So you build in a little logic to detect that.

During round 2, though, things started getting interesting. In addition to the normal RPS moves, a new move called Dynamite was introduced. Dynamite beats rock, paper, and scissors, but you only have a few to use per match. (In this case, a match is when two players square off – the first player to 1,000 points wins. You win a point by beating the other player in a single ‘throw.’ Each player has 100 dynamite per match.)

Clearly, your logic now is fairly important. Do you throw dynamite right away to gain the upper hand, or is that too predictable? Do you throw dynamite after you tie? All of a sudden, it’s no longer a game of chance.

Now enter round 3.  In round 3, a new move, Water Balloon, is introduced.  Water Balloon can defeat dynamite, but loses to everything else.  So, if you can predict when your opponent is likely to throw dynamite, you can throw a water balloon and steal the point – albeit with a little risk.

This was a lot of fun, but I was intrigued by the back end that supported all of this – and, from a devious viewpoint, the security considerations which are enormous.  James pointed me to the base of the project, available up on GitHub by Aaron Jensen, the author of the Compete framework (which is the underlying engine) and the Rock, Paper, Scissors game which uses the Compete engine.

You can download the project today and play around.  At a couple of code camps in the months to come, I ran the same competition, and it went pretty well overall.

So, what does this have to do with anything, particularly Azure? Two things. First and foremost, I feel that Azure is a great platform for projects like this. If you download the code, you’ll realize there’s a little setup work involved. I admit it took me some time to get it working, dealing with IIS, paths, etc. If I wanted to run this for a code camp again, it would be far easier to take an Azure package of around 5 MB, click deploy, and direct people to that site. Leaving a small instance up for the day would be cheap. I like no hassle.

The other potential is using the project as a learning tool on Azure.   You might remember that my colleague Jim and I did something similar last year with our Azure @Home series – we used Azure to contribute back to Stanford’s Folding@home project.  It was a great way to do something fun, useful, and educational.

In the coming weeks, we’re rolling out a coding competition in Azure that plays RPS – the idea is that, as a participant, you can host your own bots in a sandbox for testing, and the main game engine can take these bots and continually play them in the background. I’m hoping it’s a lot of fun, slightly competitive, and educational at the same time. We’ve invested a bit more in polishing this than we did with @home, and I’m getting excited about unveiling it.

Over the next few posts, I’ll talk more about what we did, what we’ve learned, and how the project is progressing!

<Return to section navigation list> 

Visual Studio LightSwitch and Entity Framework v4.1

• Orville McDonald described Extending LightSwitch Beta 2 Applications in a 3/19/2011 guest post to the Silverlight Team blog:


Hello Silverlight Community, I am the product manager for Microsoft® Visual Studio® LightSwitch™. Today, we are making Visual Studio LightSwitch Beta 2 publicly available for download. If you are new to Visual Studio LightSwitch Beta 2, here are some resources to get you started:

Visual Studio LightSwitch Extension Overview

For this blog post I am going to focus on extensions for Visual Studio LightSwitch applications and show you an example of how they are used. Extensions provide additional functionality that is not standard in the product – for example, a unique screen template that did not ship in the box. Silverlight is one of the technologies used in developing extensions. There are six extension points.

  • Business Type – a refinement of a data type that provides formatting, validation and visualization.
  • Control – the building block of extensions and what users interact with.
  • Custom Data Source – provides connectivity to other data sources via a WCF RIA Service.
  • Screen Template – a stencil for how controls are laid out at run time.
  • Shell – provides the common look and feel of an application.
  • Theme – the color and font palette.

Extending Applications with Extensions

Once an application is developed, it can be extended with minimal changes. You will notice below that I have a completed LightSwitch sample application for a salesperson. Although it is complete, there are a few changes that I would like to make: primarily, I would like to change the address to display a map, and I don’t like the fact that the full payment instrument is exposed (a major mistake).


The first thing we are going to change is the address to a map. Instead of redesigning our application to support a map, we will change the control that is used for displaying the address. It is currently set to a label but we will change that to the Bing Map Control.


With the address now set to the Bing Map Control I will add my Bing Maps Key to access the service.


I will also follow a similar process for the billing information, choosing the Mask Edit Textbox Control. With these changes I’ll start my application again.


You can see that by using extensions I was able to update my application even though I did not redesign it. The final thing I am going to do is change the look and feel of my application using shells. From my application’s properties I can change my shell by selecting a new one from the drop-down menu.


My application updates are now complete. Press F5 to run the application and it is using the new shell. The shell I am using includes a lot of features that were not available in my standard shell. Instead of focusing on the functionality I will show you how my application now appears compared to its previous view.


• Andy Kung posted Step-by-Step: How to Publish to Windows Azure to the LightSwitch Team blog on 3/18/2011. This capability is the reason for including Visual Studio LightSwitch articles in this blog:

One of the many features introduced in Visual Studio LightSwitch Beta 2 is the ability to publish your app directly to Windows Azure with storage in SQL Azure. We have condensed the many steps one would typically have to go through to deploy an application to the cloud manually.

In this tutorial, we will deploy a LightSwitch web application with Forms authentication to Windows Azure and SQL Azure. This walkthrough assumes that you already have a Windows Azure subscription and basic knowledge of Windows Azure. If you do not have a subscription, you can sign up for an account here. The sign up page will explain different pricing models as well as a Free Windows Azure trial!

Configuring Windows Azure

Before we publish an application from LightSwitch, let’s first make sure we have the necessary information and Azure configuration. Sign in to your Windows Azure account via the Windows Azure web portal.


Subscription ID

We will need the Windows Azure subscription ID to publish from LightSwitch. To find it, click the “Hosted Services, Storage Accounts & CDN” tab on the left side of the portal.


Select “Hosted Services” on the side menu. Select the subscription node in the middle pane. You will find your Subscription ID on the right side of the portal.


Hosted Service

With “Hosted Services” selected on the side menu, you will see a list of existing hosted services. A hosted service is the actual website that will host your LightSwitch application. You will need a hosted service to publish from LightSwitch. To create one, click “New Hosted Service.”


In the “Create a new Hosted Service” dialog, specify a service name and a unique URL prefix. The URL will be used to access your LightSwitch application once it has been deployed. Next, choose a region to host the service. I’ll pick “Anywhere US.” Select “Do not deploy” since we’re deploying from LightSwitch instead of this web portal.


You will see the newly created service in the list.


Storage Account

You will also need a storage account to publish from LightSwitch. The storage account is used to store your LightSwitch application as it is being uploaded to Windows Azure. Select “Storage Accounts” on the left menu to see a list of existing storage accounts. To create a new storage account, click the “New Storage Account” button.


In the “Create a New Storage Account” dialog, specify a unique URL name. I’ll choose “Anywhere US” again. Click Create.


The newly created storage account now shows in the list.


Database Server

Next we need to make sure you have a database server set up. You only need to do this once per subscription. If you have not set up a database server, click “Database” in the left menu. Select your subscription account and click the “Create” button.


Specify a region and click Next.


Specify a name and password for the database administrator and click Next.


The next page of the wizard will set up firewall rules for your SQL Azure account. Check “Allow other Windows Azure services to access this server” to allow your LightSwitch application to connect to this database server.


You will also need to add a rule for your development machine to allow the LightSwitch Publish Wizard to update your database. Click Add to add a rule. In this example, I am going to allow all machines in my domain.


Click OK and then click Finish.

Once you have created a database server, select the database server node in the web portal. The Server name will be the Fully Qualified DNS Name on the right side. The administrator login will be listed on this page as well. You will need the server name and login information when you publish from LightSwitch.


Deploying a LightSwitch App

Now create a LightSwitch project with Forms authentication. I have a very simple LightSwitch application called “HelloWorld.” It has only one table and one screen. I’m ready to publish… TO THE CLOUD!

Right-click the project node in the Solution Explorer and select Publish.


Step 1: Client Configuration

The Publish Wizard now appears. It will guide us through several steps. Our first decision is whether to publish as a desktop or a web application; LightSwitch supports publishing both browser-based and desktop applications to the cloud. In this example, we will choose Web application and click Next to Step 2.


Step 2: Application Server Configuration

Since we want to host the application in Windows Azure, select Windows Azure option and click Next to Step 3.


Step 3: Connect to Windows Azure

In order to connect to Windows Azure, you need to provide your Windows Azure subscription ID and a management certificate. We already found the subscription ID in the Windows Azure web portal earlier. Go ahead and fill it in.


The management certificate is used to authorize your computer to update hosted services on Windows Azure. You can select an existing certificate from the dropdown menu. To create a new certificate, select “Create new self-signed certificate” from the dropdown menu.


Name the certificate and click OK.


The Windows Azure certificate store must contain a copy of the certificate, so we need to upload the certificate we just created to Windows Azure. Click the “Copy Path” button to copy the location of the certificate and go back to the Windows Azure portal.


In the Windows Azure portal browser window, select “Management Certificates” to see a list of existing certificates. Since we want to add a new one, click the “Add Certificate” button.


In the Add New Management Certificate dialog, click “Browse.”


Paste the location of the certificate in “File name” and click Open.


Click “Done” to add the certificate. You will see the newly added certificate in the list.


Back in LightSwitch, click Next in the Publish Wizard to Step 4.

Step 4: Azure Service Configuration

In this step, we need to specify the hosted service, storage account, and environment information for the deployment. We already created the hosted service and storage account in the Windows Azure web portal, so you should see them in the dropdown menus. You can set the environment to either Staging or Production; in our example, we will keep it as Production. Click Next to Step 5.


Step 5: Security Settings

When deploying to Azure, LightSwitch requires HTTPS for secure connections to your application, which requires an SSL certificate. The dropdown lists all SSL certificates that have already been uploaded to Windows Azure, and LightSwitch also allows you to upload an existing SSL certificate (for example, one from a licensed vendor such as VeriSign). In our case, we’d like to test with a self-signed certificate, so select “Create new self-signed certificate” in the dropdown menu. Please note that because the certificate is self-signed, the published app may trigger warnings in your browser.


Fill out the information in the dialog and click OK.


The self-signed certificate is now created and selected in the dropdown menu. Click Next to Step 6.


Step 6: Database Connection

We now need to specify the connection information to SQL Azure. LightSwitch requires two connection strings in the Publish Wizard: an administrator connection and a user connection. The administrator connection will only be used by the Publish Wizard to create or update your database. The user connection string will be used by your LightSwitch application to connect to the database. In the LightSwitch Publish Wizard, click the “…” button for the administrator connection.


In the Connection Properties dialog, enter the server name and login information; we already obtained these from the Windows Azure web portal earlier. Give the database a name and click OK.


The user connection will be set to the same values by default. For security purposes, create a separate username and password for the user connection string by clicking the “Create Database Login” button.


Specify a new user login and click Create. Then click Next to Step 7.
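For reference, both connection strings follow the standard SQL Azure ADO.NET format. The server, database, and login values below are placeholders for illustration, not values produced by the wizard; note that SQL Azure expects logins in the user@server form:

```
Admin (used only by the Publish Wizard to create/update the database):
Server=tcp:myserver.database.windows.net,1433;Database=HelloWorldDb;User ID=dbadmin@myserver;Password=...;Trusted_Connection=False;Encrypt=True;

User (used by the deployed app at run time):
Server=tcp:myserver.database.windows.net,1433;Database=HelloWorldDb;User ID=appuser@myserver;Password=...;Trusted_Connection=False;Encrypt=True;
```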


Step 7: Authentication

We have designed this application to use Forms authentication. Therefore we need to create an application admin account so you can log in to your app after publishing. When finished, click Next to Step 8.


Step 8: Specify a Client Application Certificate

You can choose to sign the Silverlight client application (Xap file) you are deploying; signing lets clients verify the publisher and the integrity of the application. In our example, we will leave it unchecked. Click Next to Step 9.


Step 9: Summary

The last step shows you a summary of what you’re about to publish. FINALLY! Click Publish.


It will take about five minutes to publish to Windows Azure. You can see the status in the lower-left corner of the LightSwitch IDE. Once it’s published, the Windows Azure web portal will launch.


In the Cloud

In the Windows Azure web portal, you can now see the application published under your hosted service. It takes about 10 to 15 minutes for the app to finish initializing. Once it is ready, click the DNS name link on the right side to check out the live website.


Since we used a self-signed SSL certificate (in Step 5), IE will warn about a security risk. Click “Continue to this website.” If you had used a licensed SSL certificate, you would not see the warning.


The web application starts up, showing you the login screen. Type in the administrator credentials you created in Step 7 and click the “Log In” button.


Voila! We have a web application hosted in Windows Azure and SQL Azure with Forms authentication!



Starting with LightSwitch Beta 2, you now have the ability to publish your desktop and browser-based applications to Windows Azure and SQL Azure. I’ve detailed all the steps necessary to get started with Azure and deploy your first application. Once you have an Azure account and services set up, republishing the application is easy. Have fun building cloud-based applications with Visual Studio LightSwitch!

• Andrew Coates described LightSwitch Beta 2 Download and Training Kit in a 3/18/2011 post to his MSDN blog:

I’ve been excited by LightSwitch since I first saw it more than 18 months ago, and the recent release of the Beta 2 version and the updated Training Kit have made me even more excited.


The Training Kit includes the following content:

  • LightSwitch Overview
    • Demo: Introducing Visual Studio LightSwitch
    • Hands-on-lab: Simple Book Store Application
    • Hands-on-lab: Enhancing the Book Store Application
  • LightSwitch Advanced Features
    • Demo: Building Your First LightSwitch Application
    • Hands-on-lab: LightSwitch Control Extensions
    • Hands-on-lab: LightSwitch Data Source Extensions

All content has been tested to work with Visual Studio 2010 SP1. The setup scripts for all hands-on labs and demo scripts have also been updated so that the content can easily be used on a machine running Windows 7 SP1.

You can download the Beta 2 release of the LightSwitch Training Kit from here:

• Beth Massi (@BethMassi) posted Channel 9 Interview: Walkthrough of a Real-World LightSwitch Application on 3/17/2011:

In this video, I interview two LightSwitch team members, Mike Droney (Tester) and Sheel Shah (PM), as they walk me through a real application that was built for our admins to track hardware assets across the developer division. Mike and Sheel talk about the requirements-gathering and development processes as well as some of the advanced features of the application. They were able to build a working prototype in no time, and then used an iterative development approach to add more and more features that users wanted. LightSwitch let them concentrate on business value and user productivity rather than on the plumbing required to build a modern, n-tier Silverlight application with advanced features, including a generic report builder.


Watch: Walkthrough of a Real-World LightSwitch Application used at Microsoft

(Tip: To see the application & code better watch the High-quality WMV instead)

Their application is based on LightSwitch Beta 2, which is available for the public to download today! To access the download and start learning LightSwitch, please visit the LightSwitch Developer Center. For more LightSwitch resources, visit the LightSwitch Team Blog and ask questions in the LightSwitch Forums.

The Visual Studio LightSwitch Team announced the availability of LightSwitch Beta 2 Extensibility “Cookbook” in a 3/16/2011 post:

One of the big features of LightSwitch is the ability for professional developers to write extensions that provide even more capabilities for the LightSwitch developer than what comes out of the box. Think of them as add-ons. Extensions can be used to interact with the user directly, or to do work behind the user interface, such as data access. LightSwitch has six extension points: Controls, Screen Templates, Business Types, Themes, Shells, and Custom Data Sources.

We’d like to release a draft of a document that is designed to explain what extensibility is available in the LightSwitch product, and to illustrate the various extension types that provide entry points into extending a LightSwitch application. The document also contains “recipes” for creating each extension type so you can get started building extensions for the LightSwitch community.

To build LightSwitch extensions you will need Visual Studio LightSwitch Beta 2, Visual Studio 2010 SP1 Professional or higher, and the Visual Studio SDK. In addition to the cookbook, we are also providing a blank extension solution to help you get started:

We want your feedback! Please add a comment below and let us know how we can improve the cookbook and the extensibility experience.

Paul Patterson (@PaulPatterson) explained Microsoft LightSwitch – Reference Tables With Relationships in a 3/26/2011 post:

Here is a simple implementation of lookup, or “reference”, tables. Instead of building a Choice List each time the same reference list is required on a screen, creating a table containing the reference information will do just fine.

For example;

Open LightSwitch and create a new table named OrganizationType. This is the table that will contain the reference information. Configure the table with the following properties…

Then create another new table named Organization. For example…

With the Organization table still open, click the Relationship… button at the top of the designer…

Create the following relationship with the OrganizationType table.

Which should result in something like this…

In the Solution Explorer, right-click the Screens folder. Select Add Screen from the pop up context menu.

In the Add New Screen dialog, select to create a List and Details Screen for the OrganizationTypes data. For example…

Create a similar screen for the Organization…

Now press F5, or click the run button, to start debugging the application.

In the Organization Types List screen, add a bunch of organization type values. For example…

Make sure to Save the changes by clicking the Save button at the top.

Now open the Organizations List Detail screen, and enter a bunch of Organizations. Notice that an Organization Type drop down is available, which allows for the optional selection of an Organization Type value…

Again, make sure to save the updated information.

This is just one example of how to implement reference type information.
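Under the covers, the screens above are just resolving a one-to-many foreign-key relationship: each Organization row carries an optional reference to an OrganizationType row, and the drop-down simply looks that reference up. A rough sketch of the lookup in plain Python (the data and names are hypothetical stand-ins for what LightSwitch generates for you):

```python
# Reference table: id -> display value (what a Choice List would otherwise hard-code)
organization_types = {1: "Non-Profit", 2: "Government", 3: "Private"}

# Each organization optionally references a type by id (None = no selection)
organizations = [
    {"name": "Acme Corp", "type_id": 3},
    {"name": "City of Edmonton", "type_id": 2},
    {"name": "Unknown Org", "type_id": None},
]

def organization_type_name(org):
    """Resolve the optional reference, as the screen's drop-down does."""
    return organization_types.get(org["type_id"], "")

for org in organizations:
    print(org["name"], "-", organization_type_name(org))
```

The payoff of the table-based approach is visible here: renaming “Private” to “Privately Held” is a one-row data change, not an edit to every screen that uses the list.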

Andrew Brust (@andrewbrust) posted IE9, LightSwitch Beta 2 and Zune HD: A Study in Risk Management? on 3/16/2011:

This has been a busy week for Microsoft, and for me as well. On Monday, Microsoft launched Internet Explorer 9 at South by Southwest (SXSW) in Austin, TX. That evening I flew from New York to Seattle. On Tuesday morning, Microsoft launched Visual Studio LightSwitch, Beta 2 with a Go-Live license, in Redmond, and I had the privilege of speaking at the keynote presentation where the announcement was made. Readers of this blog know I’m a fan of LightSwitch, so I was happy to tell the app dev tools partners in the audience that I thought the LightSwitch extensions ecosystem represented a big opportunity – comparable to the opportunity when Visual Basic 1.0 was entering its final beta roughly 20 years ago. On Tuesday evening, I flew back to New York (and wrote most of this post in-flight). Two busy, productive days.

But there was a caveat that tempers those accomplishments, because Monday was also the day reports surfaced from credible news agencies that Microsoft was discontinuing its dedicated Zune hardware efforts. While the Zune brand, technology and service will continue to be a component of Windows Phone and a piece of the Xbox puzzle as well, speculation is that Microsoft will no longer be going toe-to-toe with the iPod touch in the portable music player market.

If we take all three of these developments together (even if one of them is based on speculation), two interesting conclusions can reasonably be drawn, one good and one less so. Microsoft is doubling down on technologies it finds strategic and de-emphasizing those that it does not. 

HTML 5 and the Web are strategic, so here comes IE9, and it’s a very good browser. Try it and see. Silverlight is strategic too, as are SQL Server, Windows Azure and SQL Azure, so here comes Visual Studio LightSwitch Beta 2 and a license to deploy its apps to production. Downloads of that product have exceeded Microsoft’s projections by more than 50%, and the company is even citing analyst firms’ figures covering the number of power-user developers that might use it. (I happen to think the product will be used by full-fledged developers as well, but that’s a separate discussion.) Windows Phone is strategic too…I wasn’t 100% positive of that before, but the Nokia agreement has made me confident. Xbox as an entertainment appliance is also strategic. [Emphasis added.]

Standalone music players are not strategic – and even if they were, selling them has been a losing battle for Microsoft.  So if Microsoft has consolidated the Zune content story and the ZunePass subscription into Xbox and Windows Phone, it would make sense, and would be a smart allocation of resources.  Essentially, it would be for the greater good.

But it’s not all good.  In this scenario, Zune player customers would lose out.  Unless they wanted to switch to Windows Phone, and then use their phone’s battery for the portable media needs, they’re going to need a new platform.  They’re going to feel abandoned.  Even if Zune lives, there have been other such cul de sacs for customers.  Remember SPOT watches?  Live Spaces?  The original Live Mesh?  Microsoft discontinued each of these products.  The company is to be commended for cutting its losses, as admitting a loss isn’t easy.  But Redmond won’t be well-regarded by the victims of those decisions.  Instead, it gets black marks.

What’s the answer? I think it’s a bit like the 1980s New York City “don’t block the box” gridlock rules: don’t enter an intersection unless you see a clear path through it. If the light turns red and you’re blocking the perpendicular traffic, that’s your error in judgment. You get fined and get points on your license, and you don’t get to shrug it off as beyond your control. Accountability is key. The same goes for Microsoft: if it decides to enter a market, it should see a reasonable path to success in that market.

Switching analogies, Microsoft shouldn’t make investments haphazardly, and it certainly shouldn’t ask investors to buy into a high-risk fund that is sold as safe and which offers only moderate returns.  People won’t continue to invest with a fund manager with a track record of over-zealous, imprudent, sub-prime investments.  The same is true on the product side for Microsoft, and not just with music players and geeky wrist watches.  It’s true of Web browsers, and line-of-business app dev tools, and smartphones, and cloud platforms and operating systems too. 

When Microsoft is casual about its own risk, it raises risk for its customers, and weakens its reputation, market share and credibility.  That doesn’t mean all risk is bad, but it does mean no product team’s risk should be taken lightly. For mutual fund companies, it’s the CEO’s job to give his fund managers autonomy, but to make sure they’re conforming to a standard of rational risk management.  Because all those funds carry the same brand, and many of them serve the same investors.

The same goes for Microsoft, its product portfolio, its executive ranks and its product managers.

Photo by parl, 'Risk.’ Under Creative Commons Attribution-NonCommercial-NoDerivs License

Julie Lerman (@julielerman) reported New EF4 & EF4.1 content on MSDN on 3/16/2011:

I’ve been busily writing and recording screencasts about Entity Framework 4 and 4.1 for MSDN, and some of the fruits of my labor are finally online, although there is much more to come.

1) Drag & Drop Databinding with the Entity Framework and WPF
Learn how developers can use the Entity Framework to easily build WPF windows or even master detail windows with WPF with little or even no code at all.

2) Building an MVC 3 App with Code First and Entity Framework 4.1
In this whitepaper, Julie Lerman walks through creating a simple MVC 3 application using Entity Framework’s code first technology to describe the classes and manage all data access.

3) Building an MVC 3 App with Model First and Entity Framework 4.1
In this whitepaper, Julie Lerman walks through creating a simple MVC 3 application using Entity Framework’s model first workflow and how to use features introduced in Entity Framework 4.1 DbContext API to write the data access code.

4) Building an MVC 3 App with Database First and Entity Framework 4.1
In this whitepaper, Julie Lerman walks through creating a simple MVC 3 application using Entity Framework’s database first workflow and how to use features introduced in Entity Framework 4.1 DbContext API to write data access code.

Paul Patterson (@PaulPatterson) described Microsoft LightSwitch – Many to Many Relationships in a 3/15/2011 post:

Here’s a quick how-to on many-to-many relationships in LightSwitch.

Scenario: A book can have one or more authors. An author can contribute to one or more books.

Create a Book table and add a BookName property.

Create an Author table and add an AuthorName property.

Create a BookToAuthor table.

Add a one-to-many relationship between the Author and BookToAuthor tables, and then add another one-to-many relationship from the Book table to the BookToAuthor table…

Create a List and Details screen for the Author table, including the additional Author BookToAuthors data…

Do the same for the Book table…

Press F5 to run the app…

Open the Authors List Detail screen and add a couple of Authors (without adding the book to the author).  Make sure to Save the data. Then go back to the Books List Detail. Add a book and then select the Authors from the Book To Authors list…

Similarly, you can add some books in the Books List Detail screen, and then go back into the Authors List Detail screen to apply the books to the authors.

With a little bit of tweaking, such as using some queries, this many-to-many relationship can be fairly fun to use.
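The shape LightSwitch is modeling here is the classic junction (or link) table: BookToAuthor holds one row per book/author pair, and resolving either side of the relationship is a join through that table. A small illustrative sketch with made-up data:

```python
# Entity tables, keyed by id
authors = {1: "Kernighan", 2: "Ritchie", 3: "Stroustrup"}
books = {1: "The C Programming Language", 2: "The C++ Programming Language"}

# Junction table (BookToAuthor): one row per (book_id, author_id) pairing
book_to_author = [(1, 1), (1, 2), (2, 3)]

def authors_of(book_id):
    """All authors of a book, joined through the junction table."""
    return [authors[a] for b, a in book_to_author if b == book_id]

def books_by(author_id):
    """All books an author contributed to, joined the other way."""
    return [books[b] for b, a in book_to_author if a == author_id]

print(authors_of(1))  # both authors of the C book
print(books_by(3))    # every book Stroustrup contributed to
```

Because each pairing is its own row, a book can have any number of authors and an author any number of books, which is exactly what the two one-to-many relationships above give you.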

I recommend using an affinity group so that Windows Azure compute and storage, as well as SQL Azure resources for a LightSwitch project are stored in the same data center, such as South Central US (San Antonio).


Windows Azure Infrastructure and DevOps

• Mary Jo Foley (@maryjofoley) described Microsoft's ServiceOS: A potential piece of Microsoft's cloud play, post-Windows 8 in a 3/18/2011 post to ZDNet’s All About Microsoft blog:

I’ve been tracking the Microsoft Research project known first as MashupOS, then Gazelle, and most recently ServiceOS for a few years now.

Last summer, Microsoft researchers were describing ServiceOS as a “multi-principal OS-based browser” designed to provide control of web applications and devices.

This year, the description of ServiceOS has evolved. Charon, who recently tipped me to Microsoft Research’s Drawbridge library OS initiative, sent me a link to a new abstract explaining ServiceOS that lead researcher Helen Wang posted for the recent TechFest 2011 research fair.

(ServiceOS wasn’t one of the TechFest 2011 natural-user-interface-focused projects that Microsoft touted publicly this year. I guess it was featured during the part of the TechFest fair that wasn’t open to selected press and analysts.)

The changes in how the Softies are explaining ServiceOS are pretty significant. The new abstract specifies that ServiceOS supports the software-as-a-service (SaaS) paradigm. Via ServiceOS, a “master copy of a user’s applications resides in the cloud and cached on her end devices,” the new abstract explained.

“The ServiceOS project aims to address many challenges faced by our Windows Phone platform, post Windows 8 platform, the browser platform, and Office platform,” the abstract said.

At TechFest 2011, according to the abstract, the researchers demonstrated a MinWin-based ServiceOS prototype. They showed how traditional applications, like Microsoft Word, can run on ServiceOS and how rich Web content, like a YouTube video, can be embedded “without sacrificing security.”

As with all Microsoft Research projects, there is no guarantee as to if or when they will become — in part or in total — incorporated into Microsoft’s commercial product line-up. However, Wang seems to have a pretty solid record, in terms of her technology-transfer success rate. I’ll be watching to see how ServiceOS morphs next….

Lori MacVittie (@lmacvittie) asserted Aristotle’s famous four questions can be applied to infrastructure integration as a means to determine whether an API or SDK is the right tool for the job as a preface to her An Aristotlean Approach to Devops and Infrastructure Integration post to F5’s DevCentral blog on 3/16/2011:

While bouncing back and forth with Patrick Debois last week (follow the #devops conversation on Twitter) on the role of devops, vendors, and infrastructure integration, he left a comment on the blog post that started the discussion that included the following assertion:

On a side note: vendors should treat their API's as first class citizens. Too often (and I personally feel iControl too) API's expose a thinking model based upon the internal implementation of the product and they are not focused on using it from a business perspective. Simplicity to understand Load balancer -> create_network, ... vs. understanding all the objects. There is real work to be done there!

Object Oriented languages are great, but sometimes a scripted language goes around easier.

Which was distilled down to: APIs need to be more than a service-enabled SDK. Nothing new there, I’ve made that assertion before (and so have many, many, many other pundits, experts, and architects). What Patrick is saying, I think, is that today it is often the case that an infrastructure developer needs not only understand the concept and relationship between a load balancer, the network, and the resources it is managing, but each individual object that comprises those entities within the SDK. In order to create a “load balancer”, for example, you have to understand not only what a “load balancer” is, but the difference between a pool and a member, monitoring objects, virtual servers, and a lengthy list of options that can be used to configure each of those objects. What’s needed is an operationally-focused API in addition to a component and object-focused SDK. 

One of the failings of SOA was that it too often failed to move beyond service-enablement into true architecture. It failed to adequately represent business objects and too often simply wrapped up programmatic design components with cross-platform protocols like SOAP and HTTP. It made it easier to integrate, in some ways, and in others did very little to encourage the efficiency through re-use necessary for SOA to make the impact it was predicted to make.

Okay, enough of the lamentation for SOA. The point of an API – even in the infrastructure world – should be to abstract and ultimately encapsulate the business or operational tasks that comprise a process. Which is a fairly wordy way to say “an API call should do everything necessary to achieve a single operational task.” What we often have today in the infrastructure world is still a service-enabled SDK; every function you could ever want to perform is available. But they are not aggregated or collected into discrete, reusable task-oriented API calls. The former are methods, the latter are process integration and invocation points. Where SOA encapsulated business functions, APIs for infrastructure encapsulate operational tasks.
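To make that distinction concrete, here is a toy sketch (every name is hypothetical; this is not any vendor's actual SDK or API). The "SDK layer" exposes one granular call per object, while the "API layer" wraps the whole operational task of standing up a load balancer in a single invocation:

```python
# --- granular, SDK-style calls: one function per object ---
config = {"pools": {}, "virtual_servers": {}}

def create_pool(name):
    config["pools"][name] = []

def add_member(pool, host, port):
    config["pools"][pool].append((host, port))

def create_virtual_server(name, pool, port):
    config["virtual_servers"][name] = {"pool": pool, "port": port}

# --- task-oriented, API-style call: one operational task, one invocation ---
def create_load_balancer(name, members, port=80):
    """Encapsulate the whole 'stand up a load balancer' task."""
    pool = name + "_pool"
    create_pool(pool)
    for host, member_port in members:
        add_member(pool, host, member_port)
    create_virtual_server(name, pool, port)

create_load_balancer("web", [("10.0.0.1", 8080), ("10.0.0.2", 8080)])
```

A devops tool like Puppet or Chef would call `create_load_balancer` and never touch the granular functions, while a unique integration could still drop down to them when the common task doesn't fit.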

That said, the more I thought about it the more I realized we really do need both. Basically I think what we have here is a “right tool for the job” issue. The question is which tool is right for which job?


Illustration: Toothpaste for Dinner


Aristotle (384 – 322 BC) is in part known for his teleological philosophy. He more or less invented the rules of logic and was most certainly one of the most influential polymaths of his era (and likely beyond). In other words, he was really, really smart.

One of his most famous examples is his four causes, in which four questions are asked about a “thing” as a means to identify and understand it. These causes were directly related to his biological research and contributed greatly to our understanding for many eons about the nature of life and animals.

  • MATERIAL CAUSE: What is it made of?
  • FORMAL CAUSE: What sort of thing is it?
  • EFFICIENT CAUSE: What brought it into being?
  • FINAL CAUSE: What is it for?

These may, for a moment, seem more applicable to determining the nature of a table, a question most commonly debated by students of philosophy late at night in coffee shops, and not something that weighs on the minds of those more concerned with meeting deadlines, taking out the garbage, or making it to the kids’ basketball game if a meeting runs late. But they are, in fact, more applicable to IT, and in particular to the emerging devops discipline, than it might first appear; especially when we start discussing the methods by which infrastructure and systems are integrated and managed by such a discipline.

There’s a place, I think, for both interface mechanisms – API and service-enabled SDK – but to determine which one is best in any given situation, you’ll need to get Aristotelian and ask a few questions. Not about the integration method (API or SDK), but about the integration itself, i.e. what you’re trying to do and how that fits with the integration and invocation points provided by the infrastructure.

The reason such questions are necessary is that the SDK provides a very granular set of entry points into the infrastructure. The API is then (often) layered atop the SDK, aggregating and codifying the specific methods/functions needed to implement a specific operational task, which is what an infrastructure API should encapsulate. That means it’s abstracted and generalized by the implementers to represent a set of common operational tasks. The API should be more general than the SDK, so if your specific operational process has unique needs, it may be necessary to leverage the SDK instead to achieve the process integration. This matters because the SDK often comes first: inter-vendor and even intra-vendor infrastructure integration is often accomplished using the same SDK that is offered to devops. The granularity of an SDK is necessary to accomplish specific inter-vendor integration because it is highly specific to the vendors, the products, and the integration being implemented. So the SDK is necessary to promote the integration of infrastructure components as a means to collaborate and share context across data center architectures.

Similarly, the use case of the integration needs to be considered. Run-time (dynamic policy enforcement) is a different beast than configuration-time (provisioning) methods and may require the granular control offered by an SDK. Consider that dynamic policy enforcement may involve tweaking a specific “application” for one response but not another, or in response to the application of a downstream policy. An application or other infrastructure solution may deem a user/client/request to be malicious, for example, and need the means by which it can instruct the upstream infrastructure to deny the request, block the user, or redirect the client. Such “one time” actions are generally implemented through specific SDK calls because they are highly customized and unique to the implementation and/or the solution’s integration.

CONCLUSION: Standardized (i.e. commoditized) operational process: API. Unique operational process: SDK.
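The distinction between the two surfaces can be sketched in a few lines of Python. Every class and method name below is hypothetical, invented purely to illustrate the point; none of it corresponds to any vendor's actual interface, and the "infrastructure" is simulated with plain dicts:

```python
# A minimal sketch of the SDK-versus-API distinction described above.
# All names are hypothetical, not any vendor's actual interface.

class InfrastructureSDK:
    """Granular entry points: one call per low-level primitive."""

    def __init__(self):
        self.calls = []  # record every granular operation issued

    def create_pool(self, name):
        self.calls.append(("create_pool", name))
        return {"pool": name, "members": []}

    def add_member(self, pool, host, port):
        self.calls.append(("add_member", host, port))
        pool["members"].append((host, port))

    def create_virtual_server(self, name, ip, port, pool):
        self.calls.append(("create_virtual_server", name))
        return {"vs": name, "ip": ip, "port": port, "pool": pool}


class OperationalAPI:
    """Aggregates SDK primitives into one codified operational task."""

    def __init__(self, sdk):
        self.sdk = sdk

    def deploy_application(self, name, ip, port, members):
        # The repeatable, generalized provisioning process, end to end.
        pool = self.sdk.create_pool(name + "-pool")
        for host, member_port in members:
            self.sdk.add_member(pool, host, member_port)
        return self.sdk.create_virtual_server(name, ip, port, pool)


sdk = InfrastructureSDK()
api = OperationalAPI(sdk)
vs = api.deploy_application("app1", "10.0.0.10", 80,
                            [("10.0.1.1", 8080), ("10.0.1.2", 8080)])
# One coarse API call fanned out into four granular SDK calls.
print(len(sdk.calls))  # 4
```

A unique, one-time action (blocking a single malicious client, say) would bypass the aggregated API and call the granular SDK methods directly, which is exactly the run-time/configuration-time split described above.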

Because the very nature of codifying processes and integrating infrastructure implies myriad use-cases, scenarios, and requirements, there is a need for flexibility. That means options for integration and remote management of infrastructure components.

We need both SDKs and APIs to ensure that the drive for simplicity does not eliminate the opportunity and the need for granularity in creating unique integrations supporting operational and business interests.

Many infrastructure solutions today are lacking an SDK (one of the reasons cloud, specifically IaaS, makes it difficult to replicate an established data center architecture), and those with an SDK are often lacking an API. Do we need service-enabled SDKs? Yes. Do we need operational APIs? Yes. An API is absolutely necessary for enterprise devops to fully realize its goals of operational efficiency and codification of common provisioning and deployment processes. They’re necessary to create repeatable deployments and architectures that reduce errors and time to deploy. Simply implementing an API as a RESTful or scripting-friendly version of the SDK, i.e. highly granular function calls encapsulated using ubiquitous protocols, is not enough. 

What’s necessary is to recognize that there is a difference between an operational API and a service-enabled SDK. The API can then be used to integrate into “recipes” or what-have-you to enable devops tools such as Puppet and Chef that can be distributed and, ultimately, improved upon or modified to fit the specific needs of a given organization. But we do need both, because without the ability to get granular we may lose the flexibility and ultimately the control over the infrastructure necessary to continue to migrate from the traditional, static data centers of yesterday toward the dynamic and agile data centers of tomorrow. Without operationally commoditized APIs it is less likely that data centers will be able to leverage Infrastructure 2.0 as one of the means to bridge the growing gap between the cost of managing infrastructure components and the static budgets and resources that ultimately constrain data center innovation.

David Linthicum asserted “More enterprises that move to cloud computing do so to augment rather than replace their existing IT” as a deck for his Cloud computing's IT relief valve article of 3/16/2011 for InfoWorld’s Cloud Computing blog:

Most systems that deal with heat, water, and steam have a relief valve. This mechanism can release steam or liquid to deal with an overload in the system, typically to prevent pressure from getting too high. This relief is needed to keep the system from failing altogether.

In the world of IT, we have a similar need.

From time to time, the compute and storage demands exceed the capacity of the system, and we either have to add capacity (at high cost) that will sit idle most of the time or refuse the applications requiring the resources. The latter is typically not an option. So how can we release the pressure on our infrastructure, keep costs under control, and satisfy all application processing demands as well?

At the Cloud Connect conference last week, Neal Sample, vice president of architecture at eBay, gave a great talk on how eBay uses cloud computing to augment its existing capacity. In essence, eBay uses a public cloud to provide on-demand compute and storage resources as needed, thereby handling spikes in processing without requiring eBay to maintain idle hardware and software.

This is the elastic nature of cloud computing: the ability to allocate and deallocate resources as needed. However, in the context of a hybrid cloud architecture, it becomes even more compelling because it lets IT leverage the hardware and software that's already paid for while avoiding the purchase of additional hardware and software. Instead, IT turns to the public cloud for on-demand capacity requirements.

This is a killer application for cloud computing: using a rented infrastructure to deal with spikes in processing requirements. Indeed, as Sample put it, the cost of the public clouds eBay uses could be four times what the company is now paying, and eBay still would save a great deal of money.

This augmentation approach is successful if the following are also true:

  • It is an augmentation strategy rather than a replacement strategy. Most enterprises will not toss away their existing hardware, software, and data centers to move to the cloud, as those dollars are already spent. Advising a "dump the data center for the cloud" strategy is a sure way to become a job seeker.
  • IT remains in control. It's all about retaining control of processing and data, while outsourcing to the clouds for occasional needs.
  • The costs are easily justified, considering both the benefits and the alternatives. In other words, if the cloud costs more than having idle capacity or if the prep time needed to access the on-demand cloud is too high, think again.
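Sample's observation, that eBay could pay four times its current cloud rate and still come out ahead, follows from simple arithmetic: owned capacity is paid for all year whether used or not, while rented capacity is paid for only during the spike. A back-of-the-envelope sketch, with every figure invented for illustration (none are eBay's actual numbers):

```python
# Back-of-the-envelope model of the "relief valve" economics.
# All figures are invented for illustration.
HOURS_PER_YEAR = 8760

def annual_cost(peak, base, onprem_per_unit_year, cloud_multiplier, spike_hours):
    """Compare owning the peak outright with owning the base and renting the spike."""
    onprem_hourly = onprem_per_unit_year / HOURS_PER_YEAR
    cloud_hourly = cloud_multiplier * onprem_hourly  # e.g. 4x the on-prem rate
    own_the_peak = peak * onprem_per_unit_year
    burst_to_cloud = (base * onprem_per_unit_year
                      + (peak - base) * spike_hours * cloud_hourly)
    return own_the_peak, burst_to_cloud

# Peak demand of 100 units, baseline of 60, spikes totaling 200 hours/year,
# and a cloud hourly rate 4x the amortized on-prem hourly rate.
own, burst = annual_cost(peak=100, base=60, onprem_per_unit_year=1000.0,
                         cloud_multiplier=4.0, spike_hours=200)
print(own)           # 100000.0
print(round(burst))  # 63653 -- far cheaper, even at 4x the unit price
```

The spike capacity is rented for only 200 of 8,760 hours, which is why a 4x hourly premium still loses to paying for idle peak hardware year-round.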

Do you have your relief valve installed?

Most boilers also have a rupture disk that blows if the relief valve gets stuck.

James Hamilton pointed to More Data on Datacenter Air Side Economization in a 3/15/2011 post:

Two of the highest-leverage datacenter efficiency-improving techniques currently sweeping the industry are: 1) operating at higher ambient temperatures and 2) air-side economization with evaporative cooling.

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) currently recommends that servers not be operated at inlet temperatures beyond 81F. It’s super common to hear that every 10C increase in temperature leads to 2x the failure rate – some statements get repeated so frequently they become “true” and no longer get questioned. See Exploring the limits of Data Center Temperature for my argument that this rule of thumb doesn’t apply over the full range of operating temperatures.
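The rule of thumb in question is easy to state as a formula: a failure-rate multiplier of 2^(ΔT/10). The snippet below merely evaluates the claimed rule; Hamilton's argument is precisely that this exponential does not hold across the full range of operating temperatures:

```python
# The oft-repeated rule of thumb: every 10C of temperature increase
# doubles the failure rate, i.e. a multiplier of 2**(delta_t / 10).
# This code only states the claimed rule; the post's argument is that
# it does NOT actually hold over the full operating range.

def relative_failure_rate(delta_t_celsius):
    """Failure rate relative to baseline under the 2x-per-10C rule."""
    return 2 ** (delta_t_celsius / 10.0)

print(relative_failure_rate(10))  # 2.0 -- the claimed doubling
print(relative_failure_rate(20))  # 4.0
```

Taken literally, the rule would predict a 5.7x failure rate for a 25C rise, which is the kind of extrapolation the post pushes back on.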

Another one of those “you can’t do that” statements is around air-side economization, also referred to as Outside Air (OA) cooling. Stated simply, air-side economization is essentially opening the window. Rather than taking 110F exhaust air, cooling it down, and recirculating it back to cool the servers, dump the exhaust and take in outside air to cool the servers. If the outside air is cooler than the server exhaust, and it almost always will be, then air-side economization is a win.
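The decision logic is simple enough to state in a few lines. This is only an illustrative sketch of the principle just described, not any operator's actual control logic; the 81F threshold echoes the ASHRAE inlet guidance mentioned above:

```python
# Illustrative sketch of the air-side economizer decision: if outside air
# is cooler than the server exhaust, "open the window" rather than chill
# and recirculate. Thresholds are illustrative only, not a real controller.

def cooling_mode(outside_f, exhaust_f, inlet_limit_f=81.0):
    """Choose a cooling strategy for the current outside temperature."""
    if outside_f < exhaust_f and outside_f <= inlet_limit_f:
        return "air-side economizer"       # free cooling with outside air
    if outside_f < exhaust_f:
        return "economizer + evaporative"  # knock the intake temperature down
    return "mechanical chiller"            # recirculate and chill the exhaust

print(cooling_mode(60.0, 110.0))  # air-side economizer
print(cooling_mode(95.0, 110.0))  # economizer + evaporative
```

Since server exhaust runs around 110F, outside air almost always wins, which is why the mechanical-chiller branch is the rare case rather than the default.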

The most frequently referenced document explaining why you shouldn’t do this is Particulate and Gaseous Contamination Guidelines for Data Centers, again published by ASHRAE. Even the document title sounds scary. Do you really want your servers operating in an environment of gaseous contamination? But, upon further reflection, is it really the case that servers need better air quality than the people that use them? Really?

From the ASHRAE document:

The recent increase in the rate of hardware failures in data centers high in sulfur-bearing gases, highlighted by the number of recent publications on the subject, led to the need for this white paper that recommends that in addition to temperature-humidity control, dust and gaseous contamination should also be monitored and controlled. These additional environmental measures are especially important for data centers located near industries and/or other sources that pollute the environment.

Effects of airborne contaminations on data center equipment can be broken into three main categories: chemical effects, mechanical effects, and electrical effects. Two common chemical failure modes are copper creep corrosion on circuit boards and the corrosion of silver metallization in miniature surface-mounted components.

Mechanical effects include heat sink fouling, optical signal interference, increased friction, etc. Electrical effects include changes in circuit impedance, arcing, etc. It should be noted that the reduction of circuit board feature sizes and the miniaturization of components, necessary to improve hardware performance, also make the hardware more prone to attack by contamination in the data center environment, and manufacturers must continually struggle to maintain the reliability of their ever shrinking hardware.

It’s hard to read this document and not be concerned about the use of air-side economization. But, on the other hand, most leading operators are using it and experiencing no measurable deleterious effects. Let’s go get some more data.

Digging deeper, the Data Center Efficiency Summit had a session on exactly this topic titled Particulate and Corrosive Gas Measurements of Data Center Airside Economization: Data From the Field – Customer Presented Case Studies and Analysis. The title is a bit of a tongue twister, but the content is useful. Selecting from the slides:

· From Jeff Stein of Taylor Engineering:

o Anecdotal evidence of failures in non-economizer data centers in extreme environments in India or China or industrial facilities

o Published data on corrosion in industrial environments

o No evidence of failures in US data centers or any connection to economizers

o Recommendations that gaseous contamination should be monitored and that gas phase filtration is necessary for US data centers are not supported

· From Arman Shehabi of the UC Berkeley Department of Civil and Environmental Engineering:

o  Particle concerns should not dissuade economizer use

• More particles during economizer use with MERV 7 filters than during non-economizer periods, but still below many IT guidelines

• I/O ratios with MERV 14 filters and economizers were near (and often below!) levels using MERV 7 filters w/o economizers

o Energy savings from economizer use greatly outweighed the fan energy increase from improved filtration

• MERV 14 filters increased fan power by about 10%, but the absolute increase (6 kW) was much smaller than the ~100 kW of chiller power savings during economizer use (in August!)

• The fan power increase is constant throughout the year, while the chiller savings during economizer use should increase during cooler periods
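The filtration trade-off in Shehabi's slides reduces to simple arithmetic. The 6 kW and ~100 kW figures are from the talk as quoted above; the break-even calculation is my own gloss on them:

```python
# The filtration trade-off from the slides above: upgrading to MERV 14
# filters costs ~6 kW of fan power all year, while the economizer saves
# ~100 kW of chiller power whenever it runs.

fan_penalty_kw = 6.0       # constant penalty, year-round
chiller_saving_kw = 100.0  # saving while the economizer is running

# Net saving while economizing (the August figures cited in the talk):
net_kw = chiller_saving_kw - fan_penalty_kw
print(net_kw)  # 94.0

# The economizer only needs to run this fraction of the year for the
# chiller savings to cover the constant fan penalty:
breakeven_fraction = fan_penalty_kw / chiller_saving_kw
print(breakeven_fraction)  # 0.06
```

In other words, roughly three weeks of economizer operation per year pays for the better filters, and everything beyond that is net savings.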

If you are interested in much more detail that comes to the same conclusions that Air-Side Economization is a good technique, see the excellent paper Should Data Center Owners be Afraid of Air-Side Economization Use? – A review of ASHRAE TC 9.9 White Paper titled Gaseous and Particulate Contamination Guidelines for Data Centers.

I urge you to read the full LBNL paper but I excerpt from the conclusions:

The TC 9.9 white paper brings up what may be an important issue for IT equipment in harsh environments, but the references do not shed light on IT equipment failures and their relationship to gaseous corrosion. While the equipment manufacturers are reporting an uptick in failures, they are not able to provide information on the types of failures, the rates of failures, or whether the equipment failures are in new equipment or equipment that may be pre-RoHS. Data center hardware failures are not documented in any of the references in the white paper. The only evidence for increased failures of electronic equipment in data centers is anecdotal and appears to be limited to aggressive environments such as in India, China, or severe industrial facilities. Failures that have been anecdotally presented occurred in data centers that did not use air economizers. The white paper recommendation that gaseous contamination should be monitored and that gas phase filtration is necessary for data centers with high contamination levels is not supported.

We are concerned that data center owners will choose to eliminate air economizers (or not operate them if installed) based upon the ASHRAE white paper since there are implications that contamination could be worse if air economizers are used.  This does not appear to be the case in practice, or from the information presented by the ASHRAE white paper authors. 

I’ve never been particularly good at accepting “you can’t do that” and I’ve been frequently rewarded for challenging widely held beliefs. A good many of these hard and fast rules end up being somewhere between useful guidelines not applying in all conditions to merely opinions. There is a large and expanding body of data supporting the use of air-side economization.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Mary Jo Foley (@maryjofoley) claimed Microsoft's cloud buzzword of 2011: Hybrid in a 3/16/2011 post to ZDNet’s All About Microsoft blog:

The week of March 21 kicks off in earnest Microsoft’s annual cycle of tradeshows — and the commencement of the first of many coming mentions of a word I predict we’ll hear a lot in 2011: Hybrid.

I’m not talking about Microsoft employees’ Priuses (Prii?) here. Instead, I’m talking about Microsoft’s cloud-strategy push, which I’m predicting will be long on mentions of hybrid public/private clouds.


At next week’s Microsoft Management Summit 2011 in Las Vegas, Microsoft is expected to take the wraps off its “Concero” product. Concero, if you need a refresher, is a new management tool in the System Center family that will allow customers to oversee both on-premises and cloud-based services.

Concero, from a description I found on the MMS site a couple months ago, was described this way:

“The move to cloud based deployment of services will result in deployments which are partly on private on-premise clouds based on VMM and Hyper-V and partly on Windows Azure. In this hybrid world, it is imperative to have a management tool that allows customers to deploy and manage their services across these environments. System Center codename “Concero” is a self-service portal targeted at this customer base.”

(Microsoft has removed all references of Concero and “hybrid” from the MMS session list, by the way.)

At the upcoming Microsoft TechEd North America conference in mid-May, the “hybrid” message will get even more play, as the session list makes clear. There’s a session slated on “Combining Public and Private Clouds into Useful Hybrids.” There’s one on how to set up Exchange and Office 365 as a hybrid deployment. There’s a “Public and Private Cloud: Better Together” session on the agenda, and another focused on bridging the public and private cloud.

I recently asked Microsoft Office Division President Kurt DelBene, who was on a press tour on the East Coast, about the hybrid messaging, and he said that Microsoft’s focus on both private and public clouds reflected how customers are moving to the cloud. They’re doing it in a staged way, typically, keeping some assets on their own servers and testing the cloud infrastructure with other less “mission critical” data.

Microsoft isn’t the only company that increasingly is playing up the “H” word. Just recently, Microsoft archrival VMware also has begun talking up the importance of “hybrid cloud management.” Its recently announced vCenter Operations product will allow both internal and external virtual machines to get system configuration, performance management, and capacity management functionality. The product will be available in standard, advanced, and enterprise editions, each  available at the end of March with pricing that starts at $50 per managed VM.

Amazon, another Microsoft cloud rival, announced this week an expansion of its Virtual Private Cloud technology. Amazon’s VPC is “a secure bridge between your existing IT infrastructure and the AWS cloud using an encrypted VPN connection,” and is something in which a number of IT managers considering Amazon have expressed interest.

Amazon released a number of new features for VPC this week, including a new wizard for streamlining the set-up process; the ability to control the full network topology; Internet access via an Internet gateway; elastic IP addresses for EC2 instances within a VPC; and an option to create a VPC that doesn’t have a VPN connection. (Separately, Amazon also added support this week for Windows Server 2008 R2 instances on EC2 to its cloud lineup.)

Microsoft is working on its own networking bridge between on-premises servers and Windows Azure. (Another “hybrid” product.) That technology, Windows Azure Connect (codenamed “Project Sydney”) is supposed to be available in the first half of 2011 in final form, last we heard.

<Return to section navigation list> 

Cloud Security and Governance

Jeff Vance posted a 5 Overlooked Threats to Cloud Computing article to Datamation on 2/28/2011 (missed when posted):

Report after report after report harps on security as the main speed bump slowing the pace of cloud adoption. But what tends to be overlooked, even by cloud advocates, is that overall security threats are changing as organizations move from physical environments to virtual ones and on to cloud-based ones.

Viruses, malware and phishing are still concerns, but issues like virtual-machine-launched attacks, multi-tenancy risks and hypervisor vulnerabilities will challenge even the most up-to-date security administrator. Here are 5 overlooked threats that could put your cloud computing efforts at risk.

1. DIY Security.
The days of security through obscurity are over. In the past, if you were an anonymous SMB, the threats you worried about were the typical consumer ones: viruses, phishing and, say, Nigerian 419 scams. Hackers didn’t have enough to gain to focus their energy on penetrating your network, and you didn’t have to worry about things like DDoS attacks – those were a service provider problem.

Remember the old New Yorker cartoon: “on the Internet no one knows you’re a dog”? Well, in the cloud, no one knows you’re an SMB.

“Being a small site no longer protects you,” said Marisa S. Viveros, VP of IBM Security Services. “Threats come from everywhere. Being in the U.S. doesn’t mean you’ll only be exposed to U.S.-based attacks. You – and everyone – are threatened from attackers from everywhere, China, Russia, Somalia.”

To a degree, that’s been the case for a while, but even targeted attacks are global now, and if you share an infrastructure with a higher-profile organization, you may also be seen as the beachhead that attackers can use to go after your bigger neighbors.

In other words, the next time China or Russia hacks a major cloud provider, you may end up as collateral damage. What this all adds up to is that in the cloud, DIY security no longer cuts it. Also, having an overworked general IT person coordinating your security efforts is a terrible idea.

As more and more companies move to cloud-based infrastructure, only the biggest companies with the deepest pockets will be able to handle security on their own. Everyone else will need to start thinking of security as a service, and, perhaps, eventually even a utility.

2. Private clouds that aren’t.
One way that security-wary companies get their feet wet in the cloud is by adopting private clouds. It’s not uncommon for enterprises to deploy private clouds to try to have it both ways. They get the cost and efficiency benefits of the cloud but avoid the perceived security risks of public cloud projects.

Plenty of private clouds, though, aren’t all that private. “Many ‘private’ cloud infrastructures are actually hosted by third parties, which still leaves them open to concerns of privileged insider access from the provider and a lack of transparency to security practices and risks,” said Geoff Webb, Director of Product Marketing for CREDANT Technologies, a data protection vendor.

Much of what you read about cloud security still treats it in outdated ways. At the recent RSA conference, I can’t tell you how many times people told me that the key to cloud security was to nail down solid SLAs that cover security in detail. If you delineate responsibilities and hold service providers accountable, you’re good to go.

There is some truth to that, but simply trusting a vendor to live up to SLAs is a sucker’s game. You – not the service provider – will be the one who gets blamed by your board or your customers when sensitive IP is stolen or customer records are exposed.

A service provider touting its security standards may not have paid very close attention to security. This is high-tech, after all, where security is almost always an afterthought.

3. Multi-tenancy risks in private and hybrid clouds.
Many companies, when building out their private or hybrid clouds, are hitting walls. The easy stuff has been virtualized, things like test development and file printing.

“A lot of companies have about 30 percent of their infrastructure virtualized. They’d like to get to 60-70 percent, but the low-hanging fruit has all been picked. They’re trying to hit mission-critical and compliance workloads, but that’s where security becomes a serious roadblock,” said Eric Chiu, President of virtualization and cloud security company HyTrust.

Multi-tenancy isn’t strictly a public cloud issue. Different business units – often with different security practices – may occupy the same infrastructure in private and hybrid clouds.


<Return to section navigation list> 

Cloud Computing Events

Cory Fowler (@SyntaxC4) asked Feeling Blue About Not Being In The Cloud? on 3/16/2011 and announced the second installment of AzureFest for 3/30/2011 (Toronto) and 3/31/2011 (Mississauga):

If you’re a Software Developer in the Greater Toronto Area and haven’t yet checked out Microsoft’s Cloud Computing Platform, Windows Azure, there is no excuse not to join me at AzureFest. Your Future is in the Clouds…


This is our second installment of AzureFest (hosted by ObjectSharp and Microsoft Canada), an event targeted at getting you past the potential hurdles and trying out Windows Azure for your very first time.

We’ll explain how the pricing model works for Windows Azure, walk you through the registration process, and walk you through your first Windows Azure deployment.

When is AzureFest?

For your convenience there are two dates! The same content will be presented at both events so you only need to attend one.

AzureFest – Downtown Toronto: March 30th, 2011 – 6pm – 9pm

Register Now!

AzureFest – Mississauga: March 31st, 2011 – 6pm – 9pm

Register Now!

This Event is FREE!!!!

What to Bring to AzureFest
  • An open mind
  • A Laptop [We’re all in this together now..]
  • A Credit Card [or check out Windows Azure Pass]

Richard Santalesa announced on 3/16/2011 an Upcoming Free Webinar - Contracting for Cloud Computing by the Information Law Group on 4/12/2011 at 9:30 PDT:

In the next installment in our webinar series on cloud computing, Information Law Group attorneys Richard Santalesa [left] and David Navetta [right] will examine legal issues posed by contracting for cloud computing services and review a proposed cloud user’s "Bill of Rights" that any company considering cloud computing should keep in mind, given its specific goals, industry, data and security needs.

Deciding whether to take the plunge into cloud computing is a serious decision.  The real benefits of cloud computing must always be weighed against: the risk of relinquishing direct control of infrastructure and applications, the potential use of unknown subcontractors, implications of data flow and storage location, liability and indemnification issues, privacy and data breach notification considerations, and compliance with the overlapping web of federal, state, local and international laws and regulations.

You can register for this free one hour webinar here.

I believe Richard’s 12:30 EST time zone was in error. Presumably it’s daylight time on both coasts.


Matt reported on 3/16/2011 an Upcoming Webinar: Orchestrating the Cloud by Amazon Web Services on 4/1/2011 at 10:00 AM GMT:

Amazon's cloud computing platform makes it easy to provision infrastructure resources quickly. Spinning up a single server is straightforward, but larger deployments of multi-tier applications often require a little co-ordination.

Join me for the next of our monthly technical seminars in which we'll cover the various techniques and tools for orchestrating cloud deployments: 10am, UK time, on 1st April, 2011.

In this technical webinar, we'll use real life case studies to discuss the deployment of fully functioning application stacks with CloudFormation, configuration management, deployment best practices and how to distribute tasks and notifications across your infrastructure.

This session is free, but you'll need to register.

If there is a topic you'd like to see reviewed, drop me a line or post a comment.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Joe Panettieri described HP Cloud Strategy: The One Mistake Leo Must Avoid in a 3/16/2011 post to the TalkinCloud blog:

And so it begins: Hewlett-Packard CEO Leo Apotheker, as expected, has started to discuss his vision for cloud computing — and HP’s role in the cloud. No doubt, Apotheker will share more details during the HP Americas Partner Conference (HPAPC) later this month in Las Vegas.

HP is a bit late to the cloud discussion. Cisco Systems, IBM, Microsoft and many other traditional IT giants have already announced cloud-related channel partner programs.

Clearly, HP is in catch-up mode. But it has quietly built plenty of managed services that will double as cloud services. Plus, it’s still early in the cloud game, and Apotheker can learn from rivals’ early mistakes in the cloud.

Glaring Error

The biggest mistake of all, so far, comes from Microsoft, which won’t allow VARs and MSPs to manage cloud billing for end customers – though Microsoft continues to listen to feedback and I suspect the policy will change later this year. Plus, I must concede: Microsoft does have plenty of partners signing up for BPOS (Business Productivity Online Suite), and interest in the forthcoming BPOS successor (Office 365) is high.

If Apotheker and HP Channel Chief Stephen DiFranco are listening to channel partners, then all of HP’s various cloud computing services will be available as white label services — for VARs and MSPs to re-brand as their own. During yesterday’s Intermedia Partner Summit in New York, multiple MSPs told me they will not embrace a cloud service unless it has white label and end-customer billing capabilities.

Note: Even pure cloud companies like Google and Rackspace allow channel partners to manage end-customer billing if they so choose.

Friends and Foes

No doubt, HP will both compete and cooperate with channel partners in the cloud. In an interview with The Wall Street Journal published earlier this week, Apotheker made it clear that HP will build its own clouds, and introduce cloud services to end-customers. The strategy is somewhat similar to ongoing moves at IBM and Microsoft.


In stark contrast, Cisco has decided not to build its own clouds, other than offering selected niche services like WebEx. Instead, Cisco wants to be an educator and an arms dealer to cloud builders, cloud providers and cloud services resellers. Cisco considered building its own big, massive clouds, but CTO Padmasree Warrior said Cisco killed the idea in order to avoid channel conflict (see TalkinCloud FastChat Video, left).

Admittedly, Cisco may miss out on some key cloud opportunities by leaving the market to partners. In stark contrast, The Wall Street Journal predicts HP will wind up competing head-on against other early cloud leaders.

Either way, let’s hope HP doesn’t forget partners along the way. Avoid Microsoft’s mistake, HP: give partners white label cloud services that allow VARs and MSPs to brand and bill the services as their own.


Jeff Barr (@jeffbarr) announced Now Available: Windows Server 2008 R2 on Amazon EC2 on 3/15/2011:

Today we are adding new options for our customers running Windows and SQL Server environments on Amazon EC2. In addition to running Windows Server 2003 and 2008, you can now run Windows Server 2008 R2. Sharing its kernel with Windows 7, this release of Windows includes additional Active Directory features, support for version 7.5 of IIS, new management tools, reduced boot time, and enhanced I/O performance. We are also adding support for SQL Server 2008 R2, and we are introducing Reserved Instances for SQL Server.

You can now launch instances of Windows Server 2008 R2 in four different flavors:

  • Core - A scaled-down version of Windows Server, with the minimum set of server roles.
  • Base - A basic installation of Windows Server 2008 R2.
  • Base with IIS and SQL Server Express - A starting point for Windows developers.
  • SQL Server Standard 2008 R2 - Windows Server 2008 R2 with SQL Server 2008 R2 Standard Edition installed.

Here are the details:

  • All of these AMIs are available for immediate use in every Region and on most 64-bit instance types, excluding the t1.micro and Cluster Compute families.
  • We plan to add support for running Windows Server 2008 R2 in the Amazon Virtual Private Cloud (VPC).
  • The AMIs support English, Italian, French, Spanish, German, Traditional Chinese, Korean, and Japanese. The languages are supported only within the applicable regions -- European languages in the EU and Asian languages in Singapore and Tokyo.
  • Windows Server 2008 R2 is available at the same price as previous versions of Windows on EC2. Reserved Instances and Spot Instances are also available.

Update: You can use the AWS VM Import feature to bring existing virtual machines to EC2. VM Import has been updated and now supports the Standard, Datacenter, and Enterprise editions of Windows Server 2008 R2 in both 32 and 64-bit flavors.

To get started, you can visit the Windows section of the AMI catalog or select "Windows 2008 R2" in the Quick Start menu when you launch a new instance. Microsoft has also posted additional Amazon Machine Images with Windows 2008 R2 in the Windows section of the AMI Catalog.

I look forward to hearing from you as you put Windows 2008 R2 to use. Leave a comment or send email to

<Return to section navigation list>