Friday, January 18, 2013

Windows Azure and Cloud Computing Posts for 1/14/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

• Updated 1/18/2013 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, HDInsight and Media Services

Mike Neil, a Windows Azure General Manager, posted Details of the December 28th, 2012 Windows Azure Storage Disruption in US South [Data Center] on 1/16/2013:

Introduction

On December 28th, 2012 there was a service interruption that affected 1.8% of the Windows Azure Storage accounts. The affected storage accounts were in one storage stamp (cluster) in the U.S. South region. We apologize for the disruption and any issues it caused affected customers. We want to provide more information on the root cause of the interruption, the recovery process, what we’ve learned and what we’re doing to improve the service. We are proactively issuing a service credit to impacted customers as outlined below. We are hard at work implementing what we have learned from this incident to improve the quality of our service.

Root Cause

There were three issues that when combined led to the disruption of service.

First, within the single storage stamp affected, some storage nodes, brought back (over a period of time) into production after being out for repair, did not have node protection turned on. This was caused by human error in the configuration process and led to approximately 10% of the storage nodes in the stamp running without node protection.

Second, our monitoring system for detecting configuration errors associated with bringing storage nodes back in from repair had a defect which resulted in failure of alarm and escalation.

Finally, on December 28th at 7:09am PST, a transition to a new primary node was initiated for the Fabric Controller of this storage stamp. A transition to a new primary node is a normal occurrence that happens often for any number of reasons including normal maintenance and hardware updates. During the configuration of the new primary, the Fabric Controller loads the existing cluster state, which in this case resulted in the Fabric Controller hitting a bug that incorrectly triggered a ‘prepare’ action against the unprotected storage nodes. A prepare action makes the unprotected storage nodes ready for use, which includes a quick format of the drives on those nodes. Node protection is intended to ensure that the Fabric Controller will never format protected nodes. Unfortunately, because 10% of the active storage nodes in this stamp had been incorrectly flagged as unprotected, they were formatted as a part of the prepare action.

Within a storage stamp we keep 3 copies of data spread across 3 separate fault domains (on separate power supplies, networking, and racks). Normally, this would allow us to survive the simultaneous failure of 2 nodes with your data within a stamp. However, the reformatted nodes were spread across all fault domains, which, in some cases, led to all 3 copies of data becoming unavailable.

Read more. Mike continues with a lengthy description of the root cause and the recovery process.


Bruno Terkaly (@brunoterkaly) posted Knowing when to choose Windows Azure Table Storage or Windows Azure SQL database on 1/13/2013:

Question / Recommended Technology

Are you trying to keep costs low while storing significantly large data volumes in the multi-terabyte range? Use Windows Azure Table Storage
Do you require a flexible schema, where each data element being stored is non-uniform and its structure may not be known at design time? Use Windows Azure Table Storage
Do your business requirements require robust disaster recovery capabilities that span geographical locations? Use Windows Azure Table Storage
Do your geo-replication needs involve two data centers that are hundreds of miles apart but on the same continent? Use Windows Azure Table Storage
Do your data storage requirements exceed 150 GB and are you reluctant to manually shard or partition your data? Use Windows Azure Table Storage
Do you wish to interact with your data using RESTful techniques without putting up your own front-end Web server? Use Windows Azure Table Storage
Is your data less than 150 GB and does it involve complex relationships with highly structured data? Use Windows Azure SQL Database
Do your data requirements involve complex relationships, server-side joins, secondary indexes, and the need for complex business logic in the form of stored procedures? Use Windows Azure SQL Database
Do you want your storage software to enforce referential integrity, data uniqueness, and primary and foreign keys? Use Windows Azure SQL Database
Do you wish to continue to use your SQL reporting and analysis tooling? Use Windows Azure SQL Database
Does your database system rely heavily on stored procedures to support business rules? Use Windows Azure SQL Database
Do your business applications currently execute SQL statements that include joins, aggregation, and complex predicates? Use Windows Azure SQL Database
Does each entity or row of data that you insert exceed 1 MB? Use Windows Azure SQL Database
Does your code depend on ADO.NET or ODBC? Use Windows Azure SQL Database
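For context, here is a minimal sketch of what the Table Storage side of that decision looks like with the .NET storage client library (the table name, entity shape, and connection string below are illustrative assumptions, not Bruno's code). The PartitionKey and RowKey form the only index, and the remaining properties are schema-free:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

// A schema-free entity: PartitionKey + RowKey form the key; other properties can vary per row.
public class TodoEntity : TableEntity
{
    public TodoEntity() { }                       // required by the storage client for deserialization

    public TodoEntity(string userId, string itemId)
    {
        PartitionKey = userId;                    // keep one user's items together in one partition
        RowKey = itemId;
    }

    public string Text { get; set; }
    public bool Complete { get; set; }
}

public static class TableStorageSketch
{
    public static void InsertTodo(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var table = account.CreateCloudTableClient().GetTableReference("todoitems");
        table.CreateIfNotExists();

        var entity = new TodoEntity("user-123", Guid.NewGuid().ToString()) { Text = "Buy milk" };
        table.Execute(TableOperation.Insert(entity));
    }
}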

Mingfei Yan described How to generate Http Live Streaming (HLS) content using Windows Azure Media Services in a 1/13/2013 post:

If you want to deliver video content to iOS devices, the best option you have is to package your content into HTTP Live Streaming (HLS). HLS is Apple’s implementation of adaptive streaming, and here are some useful resources from Apple. Apple implements the format but doesn’t provide hosting. You could use an Apache server to host HLS content, or better, you could choose Windows Azure Media Services – a way to host video in the cloud. That way you don’t need to manage infrastructure or worry about scalability: Azure takes care of all that for you.

Scenario One: You have an .mp4 file and you want to package it into HLS and stream it out from Windows Azure Media Services.

Here is how you could do it through Windows Azure Management Portal:

1. Login to https://manage.windowsazure.com/ and if you don’t have a media services account yet, here is how you could obtain a preview account.

2. Here is what the portal looks like. Click on the Media Services tab and choose an existing MP4 file. If you don’t have an MP4 file available in the portal, you can click the UPLOAD button at the bottom to upload a file from your laptop. Note: you can only upload files smaller than 200 MB through the Azure portal. If you want to upload files larger than 200 MB, please use our Media Services API.

Portal for Media Services

3. After your MP4 file is in place, click the Encode button below and choose the third profile: Playback on iOS devices and PC/Mac. After you hit OK, it will start to produce two files.

4. Now, after you kick off the encoding job, you will see two new assets get generated. In my case, they are “The Surface Movement_small-mp4-iOS-Output” and “The Surface Movement_small-mp4-iOS-Output-Intermediate-PCMac-Output”. The asset with “Intermediate” in its name is the Smooth Streaming content and the other one is the HLS content. We first package your H.264 content into Smooth Streaming and then package that as HLS content.

Question: If I don’t need Smooth Streaming content, could we delete it?

Answer: Yes, you can. But if you use the portal to do the conversion, a Smooth Streaming asset will still be created.

Question: What profile of HLS content is generated here?

Answer: The profile used here is “H264 Smooth Streaming 720p”, and you can check out the details here. If you want to encode into another HLS profile, you will need to use our API; the Azure portal provides only this one profile.

5. After the packaging is done, click the HLS asset (the one without Intermediate in the name) and click the PUBLISH button at the bottom. Now your HLS content is hosted on Windows Azure Media Services, and you can grab the link from the Publish URL column. That’s the link you put in your video application (an iOS native app or an HTML5 web app for Safari) – enjoy your video!

6. I will publish another blog post on how to generate HLS content through the Azure Media Services .NET SDK.
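In the meantime, a rough sketch of the encoding step with the .NET SDK might look like the following (the account credentials, asset name, and processor-selection code are illustrative assumptions, and the follow-up HLS packaging task with the Windows Azure Media Packager is omitted here):

using System;
using System.Linq;
using Microsoft.WindowsAzure.MediaServices.Client;

public static class MediaEncodingSketch
{
    public static void EncodeToSmoothStreaming()
    {
        // Kick off the same "H264 Smooth Streaming 720p" encode the portal uses, against an existing asset.
        var context = new CloudMediaContext("your-account-name", "your-account-key");

        IAsset inputAsset = context.Assets
            .Where(a => a.Name == "The Surface Movement_small.mp4")
            .FirstOrDefault();

        // Pick the latest version of the Windows Azure Media Encoder processor.
        IMediaProcessor encoder = context.MediaProcessors
            .Where(p => p.Name == "Windows Azure Media Encoder")
            .ToList()
            .OrderBy(p => new Version(p.Version))
            .Last();

        IJob job = context.Jobs.Create("Encode to Smooth Streaming");
        ITask task = job.Tasks.AddNew("Encoding task", encoder, "H264 Smooth Streaming 720p", TaskOptions.None);
        task.InputAssets.Add(inputAsset);
        task.OutputAssets.AddNew("Smooth Streaming output", AssetCreationOptions.None);

        job.Submit();
        // Poll job.State until it reaches JobState.Finished, then run a second task
        // with the packager to produce the HLS asset, as described in the portal walkthrough above.
    }
}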




<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

• Brian Hitney explained Dealing with Expired Channels in Windows Azure Mobile Services in a 1/18/2013 post:

What’s this? Another Windows Azure Mobile Services (WAMS) post?!

In the next version of my app, I keep a record of the user’s Channel in order to send down notifications. The built-in todo list example does this or something very similar. My table in WAMS looks like:

image

Not shown are a couple of fields, but of particular interest is the device Id. I realized that one user might have multiple devices, so the channel then is tied to the device Id. I still haven’t found a perfect way to do this yet – right now, I’m using a random GUID on first run.

In my WAMS script, if the point that is submitted is “within range” of another user, we’ll send a notification down to update the tile. I go into this part in my blog post: Best Practices on Sending Live Tiles. But what do you do if the channel is expired? This comes up a lot in testing, because the app is removed/reinstalled many times.

I stumbled on this page, Push Notification Service Request and Response Headers, on MSDN. There is a lot of great info on that page. While I should have a more robust solution for handling all these conditions, the one I’m particularly interested in is the Channel Expired response, highlighted below:

image

Obviously getting a new channel URI is ideal, but the app has to do that on the client (and will) the next time the user runs the app. In the meantime, I want to delete this channel because it’s useless. In my script that sends the notifications, we’ll examine the result in the callback and either delete the channel if it’s expired or, on success, send a badge update because that’s needed, too. (Future todo task: try to combine Live Tile and badges in one update.)

push.wns.send(channelUri, payload, 'wns/tile',
    {
        client_id: 'ms-app://<my app id>',
        client_secret: 'my client secret',
        headers: { 'X-WNS-Tag': 'SomeTag' }
    },
    function (error, result) {
        if (error) {
            // if the channel has expired, delete it from the channel table
            if (error.statusCode == 410) {
                removeExpiredChannel(channelUri);
            }
        }
        else {
            // notification sent
            updateBadge(channelUri);
        }
    }
);

Removing expired channels can be done with something like:

function removeExpiredChannel(channelUri)
{
    var sql = "delete from myapp.Channel where ChannelUri = ?";
    var params = [channelUri];
    mssql.query(sql, params, {
        success: function(results) {
            console.log('Removed Expired Channel: ' + channelUri);
        }
    });
}

On my todo list is adding more robust support for different response codes – for example, in addition to a 410 response, a 404 response should also cause the channel record to be deleted from the table.
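For reference, the client-side re-registration mentioned above could look roughly like this sketch for a Windows Store app (the Channel class, its property names, and the method below are assumptions modeled on the table shown earlier, not Brian's actual code):

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;
using Windows.Networking.PushNotifications;

// Hypothetical DTO matching the Channel table shown earlier.
public class Channel
{
    public int Id { get; set; }
    public string DeviceId { get; set; }
    public string ChannelUri { get; set; }
}

public static class ChannelRegistration
{
    // Ask WNS for a fresh channel on startup and record it in the Mobile Services Channel table.
    public static async Task RegisterChannelAsync(MobileServiceClient mobileService, string deviceId)
    {
        var channel = await PushNotificationChannelManager
            .CreatePushNotificationChannelForApplicationAsync();

        var record = new Channel { DeviceId = deviceId, ChannelUri = channel.Uri };
        await mobileService.GetTable<Channel>().InsertAsync(record);
    }
}

In a real app you would also check for an existing row with the same device Id and update it rather than always inserting a new one.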


• Jesus Rodriguez posted Microsoft Research Mobile Backend as a Service: Introducing Project Hawaii on 1/15/2013:

Microsoft Research (MS Research) is an infinite source of technical innovation. Because of my academic background, I am constantly following new MS Research projects and drawing ideas and inspiration from them. Recently, I came across Project Hawaii (http://research.microsoft.com/en-us/projects/hawaii/), which provides a set of mobile services hosted in Windows Azure for computation and data storage. Sounds familiar? Yes, Project Hawaii overlaps slightly with Windows Azure Mobile Services, but it focuses on new and innovative service capabilities. In this first release, Project Hawaii enables the following capabilities:

  • The Key-Value service enables a mobile application to store application-wide state information in the cloud.
  • The Optical Character Recognition (OCR) Service returns the text that appears in a photographic image. For example, given an image of a road sign, the service returns the text of the sign.
  • The Path Prediction Service predicts a destination based on a sequence of current locations and historical data.
  • The Relay Service provides a relay point in the cloud that mobile applications can use to communicate.
  • The Rendezvous Service maps from well-known human-readable names to endpoints in the Hawaii Relay Service.
  • The Speech-to-Text Service takes a spoken phrase and returns text. Currently this service supports English only.
  • The Translator service enables a mobile application to translate text from one language to another, and to obtain an audio stream that renders a string in a spoken language.

Obviously, given my recent work in the mobile backend as a service (mBaaS) space, Project Hawaii is especially interesting to me. After spending a few hours playing with the current release, I thought the experience would make for a few interesting blog posts.

Let’s start with the Hawaii’s key-value service:

Project Hawaii’s Key-Value Service (KVS) provides a simple key-value store for mobile applications. By using the KVS, an application can store and retrieve application-wide state information as text using key-value pairs.

Obtaining a Project Hawaii Application ID

Prior to using any of the Project Hawaii services, developers need to obtain a valid application ID. We can achieve that by going to the Project Hawaii signup page (http://hawaiiguidgen.cloudapp.net/default.aspx) and registering with your Windows Live credentials. After that, you will obtain an application identifier that can be used to authenticate to the different cloud services, as illustrated in the following figure.

After having completed this process, we need to register our application in the Windows Azure Marketplace.

Using the Project Hawaii Key Value Service

As its name indicates, the key-value service provides a service interface that enables mobile applications to store information in key-value pair forms. The main vehicle to leverage this capability is a RESTful interface abstracted by SDKs for the Android, Windows Phone and Windows 8 platforms. In the case of Windows 8, the KeyValueService class included in the Microsoft.Hawaii.KeyValue.Client namespace abstracts the capabilities of the Project Hawaii Key-Value service. The following matrix summarizes some of the operations provided by the KeyValueService class.

image

Like any good mBaaS SDK, the KeyValueService class provides a very succinct syntax to integrate with the Project Hawaii key-value service. For instance, the following code illustrates the process of inserting different items using the Project Hawaii key-value service.

private const string clientID = "My Client ID";
private const string clientSecret = "My Client Secret";

private void SetItem_Test()
{
    KeyValueItem item1 = new KeyValueItem() { Key = "Key2", Value = "value2" };
    KeyValueService.SetAsync(clientID, clientSecret, new KeyValueItem[1] { item1 }, this.OnSetComplete, null);
}

private async void OnSetComplete(SetResult result)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
    {
        // Report whether the set operation succeeded.
        if (result.Status == Microsoft.Hawaii.Status.Success)
            Result.Text = "Success";
        else
            Result.Text = "Error";
    });
}

Similarly, applications can query items stored in the key-value infrastructure using the following syntax.

private void GetItem_Test()
{
    KeyValueService.GetByKeyAsync(clientID, clientSecret, "Key1", GetByKeyComplete, null);
}

Key-value storage can be a really useful capability in mobile applications. The Project Hawaii Key-Value service provides a very simple mechanism that enables mobile applications to leverage this capability with a very succinct syntax.

We will cover other capabilities of Project Hawaii in future blog posts.


Nick Harris (@cloudnick) and Nathan Totten (@ntotten) produced CloudCover Episode 98 - Mobile Services, ASP.NET Facebook Template, and Github Publishing Demos on 1/12/2013:

In this episode Nick and Nate ring in the New Year with a variety of news and announcements about Windows Azure. Additionally, Nick gives some demos of the new scheduling feature in Windows Azure Mobile Services. Nate shows the new Facebook Template for ASP.NET MVC that includes the Facebook C# SDK and deploys the template to Windows Azure Web Sites using the new GitHub continuous integration feature.

In the News:


Brian Hitney described Scrubbing UserId in Windows Azure Mobile Services in a 1/10/2013 post to the US DPE Azure Connection blog:

First, many thanks to Chris Risner for the assistance on this solution! Chris is part of the corp DPE team and has done an extensive amount of work with Windows Azure Mobile Services (WAMS) – including this session at //build, which was a great resource for getting started.

If you go through the getting-started demo for WAMS, building a TodoList, the idea is that the data in the todo list is locked down to each user. One of the nice things about WAMS is that it’s easy to enforce this via server-side JavaScript … for example, the following read script ensures that only rows belonging to the current user are returned:

function read(query, user, request) {
   query.where({ userId: user.userId });    
   request.execute();
}

If we crack open the database, we’ll see that the userId is an identifier, like the one below for a Microsoft Account:

MicrosoftAccount:0123456789abcd

When the app connects to WAMS, the data returned includes the userId … for example, if we look at the JSON in fiddler:

image

The app never displays this information, and it is requested over SSL, but it’s an important consideration and here’s why. What if we have semi-public data? In the next version of Dark Skies, I allow users to pin favorite spots on the map. The user has the option to make those points public or keep them private … for example, maybe they pin a great location for stargazing and want to share it with the world:

image

… Or, maybe the user pins their home locations or a private farm they have permission to use, where it might be inappropriate to show publicly.

Now here comes the issue: if a location is shared publicly, that userId is included in the JSON results. Let’s say I launch the app and see 10 public pins. If I view the JSON in Fiddler, I’ll see the userId for each one of those public pins – for example:

image

Now, the userId contains no personally identifiable information. Is this a big deal, then? It’s not like it is the user’s name or address, and it would only be included in spots the user is sharing publicly anyway.

But, if a hacker ever finds a way to map a userId back to a specific person, this is a security issue. Even my app doesn’t know who the users really are; it just knows the identifier. Still, I think from a best practice/threat modeling perspective, if we can scrub that data, we should. Note: this issue doesn’t exist with the todo list example, because the user only, and ever, sees their own data. [Emphasis added; see note below.]

Ideally, what we’d like to do is return the userId if it’s the current user’s userId. If the point belongs to another user, we should scrub that from the result set. To do this via a read script in WAMS, we could do something like:

function read(query, user, request) {
    request.execute({
        success: function(results) {
            // scrub user token
            if (results.length > 0) {
                for (var i = 0; i < results.length; i++) {
                    if (results[i].UserId != user.userId) {
                        results[i].UserId = 'scrubbeduser';
                    }
                }
            }
            request.respond();
        }
    });
}

If we look at the results in Fiddler, we’ll see that I get my userId for any of my points, but the userId is scrubbed for another user’s points that are shared publicly:

image

[Note: these locations are random spots on the map for testing.]

Doing this is a good practice. The database of course has the correct info, but the data for public points is guaranteed to be anonymous should a vulnerability ever present itself. The downside of this approach is the extra overhead as we’re iterating the results – but, this is fairly minor given the relatively small amounts of data.

Technical point: In my database and classes, I use Pascal case (as a matter of preference), as you can see in the above Fiddler captures, such as UserId. In the todo example and in the JavaScript variables, objects are conventionally camel case. So, if you’re using any code here, just be aware that case does matter in situations like this:

 if (results[i].UserId != user.userId) // watch casing!

Be sure they match your convention. Since Pascal case is the standard for properties in C#, and camel case is the standard in JavaScript, properties in .NET can be decorated with the DataMember attribute to make them consistent in both locations – something I, just as a matter of preference, prefer not to do:

[DataMember(Name = "userId")]
public string UserId { get; set; }
Note: OakLeaf Systems’ Privacy Statement says the following about its version of the Todo List application that is undergoing submission to the Windows Store:

Collection of Personal Information (OakLeaf ToDo List Sample and Windows Store Apps)
The OakLeaf ToDo List Windows Mobile Services Demo is a free Windows Store App for computers running Windows 8 and devices running Windows RT. This app accumulates the text of ToDo items from users who sign in with their Microsoft Account (formerly Live ID). Access to individual users’ ToDo items is provided by a disguised representation of their Microsoft Account ID in the form of a Globally Unique Identifier (GUID). A user cannot view other users’ active ToDo items. Representatives of OakLeaf Systems can view all users’ ToDo items, but cannot associate them with a user’s identity. Representatives of OakLeaf Systems periodically delete all ToDo items marked completed to reduce the database size.



<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

No significant articles today


<Return to section navigation list>

Windows Azure Service Bus, Caching Access Control, Active Directory, Identity and Workflow

Clemens Vasters (@clemensv) described the Utopia ESB in a 1/15/2013 post:

The basic idea of the Enterprise Service Bus paints a wonderful picture of a harmonious coexistence, integration, and collaboration of software services. Services for a particular general cause are built or procured once and reused across the Enterprise by ways of publishing them and their capabilities in a corporate services repository from where they can be discovered. The repository holds contracts and policy that allows dynamically generating functional adapters to integrate with services. Collaboration and communication is virtualized through an intermediary layer that knows how to translate messages from and to any other service hooked into the ESB like a babel fish in the Hitchhiker’s Guide to the Galaxy. The ESB is a bus, meaning it aspires to be a smart, virtualizing, mediating, orchestrating messaging substrate permeating the Enterprise, providing uniform and mediated access anytime and anywhere throughout today’s global Enterprise. That idea is so beautiful, it rivals My Little Pony. Sadly, it’s also about as realistic. We tried regardless.

As with many utopian ideas, before we can get to the pure ideal of an ESB, there’s a less ideal and usually fairly ugly phase involved where non-conformant services are made conformant. Until they are turned into WS-* services, any CICS transaction and SAP BAPI is fronted with a translator, and as that skinning renovation takes place, there’s also some optimization around message flow, meaning messages get batched or de-batched, enriched or reduced. In that phase, there was also a growing appreciation of the value and lure of central control. SOA Governance is an interesting idea to get customers drunk on. That ultimately led to cheating on the ‘B’. When you look around at products proudly carrying the moniker ‘Enterprise Service Bus’ you will see hubs. In practice, the B in ESB is mostly just a lie. Some vendors sell ESB servers, some even sell ESB appliances. If you need to walk to a central place to talk to anyone, it’s a hub. Not a bus.

Yet, the bus does exist. The IP network is the bus. It turns out to suit us well on the Internet. Mind that I’m explicitly talking about “IP network” and not “Web” as I do believe that there are very many useful protocols beyond HTTP. The Web is obviously the banner example for a successful implementation of services on the IP network that does just fine without any form of centralized services other than the highly redundant domain name system.

Centralized control over services does not scale in any dimension. Intentionally creating a bottleneck through a centrally controlling committee of ESB machines, however far scaled out, is not a winning proposition in a time where every potential or actual customer carries a powerful computer in their pocket, allowing them to initiate ad-hoc transactions at any time and from anywhere, and where we see vehicles, machines and devices increasingly spew out telemetry and accept remote-control commands. Central control and policy-driven governance over all services in an Enterprise also kills all agility and reduces the ability to adapt services to changing needs because governance invariably implies process and certification. Five-year plan, anyone?

If the ESB architectural ideal weren’t a failure already, the competitive pressure to adopt direct digital interaction with customers via Web and apps, and therefore to scale not to the size of the enterprise but to the size of the enterprise’s customer base, will seal its collapse.

Service Orientation

While the ESB as a concept permeating the entire Enterprise is dead, the related notion of Service Orientation is thriving even though the four tenets of SOA are rarely mentioned anymore. HTTP-based services on the Web embrace explicit message passing. They mostly do so over the baseline application contract and negotiated payloads that the HTTP specification provides for. In the case of SOAP or XML-RPC, they are using abstractions on top that have their own application protocol semantics. Services are clearly understood as units of management, deployment, and versioning and that understanding is codified in most platform-as-a-service offerings.

That said, while explicit boundaries, autonomy, and contract sharing have been clearly established, the notion of policy-driven compatibility – arguably a political addition to the list to motivate WS-Policy at the time – has generally been replaced by something even more powerful: Code. JavaScript code to be more precise. Instead of trying to tell a generic client how to adapt to service settings by ways of giving it a complex document explaining what switches to turn, clients now get code that turns the switches outright. The successful alternative is to simply provide no choice. There’s one way to gain access authorization for a service, period. The “policy” is in the docs.

The REST architecture model is service oriented – and I am not meaning to imply that it is so because of any particular influence. The foundational principles were becoming common sense around the time when these terms were coined and as the notion of broadly interoperable programmable services started to gain traction in the late 1990s – the subsequent grand dissent that arose was around whether pure HTTP was sufficient to build these services, or whether the ambitious multi-protocol abstraction for WS-* would be needed. I think it’s fairly easy to declare the winner there.

Federated Autonomous Services

Windows Azure, to name a system that would surely be one to fit the kind of solution complexity that ESBs were aimed at, is a very large distributed system with a significant number of independent multi-tenant services and deployments that are spread across many data centers. In addition to the publicly exposed capabilities, there are quite a number of “invisible” services for provisioning, usage tracking and analysis, billing, diagnostics, deployment, and other purposes. Some components of these internal services integrate with external providers. Windows Azure doesn’t use an ESB. Windows Azure is a federation of autonomous services.

The basic shape of each of these services is effectively identical and that’s not owing, at least not to my knowledge, to any central architectural directive even though the services that shipped after the initial wave certainly took a good look at the patterns that emerged. Practically all services have a gateway whose purpose it is to handle and dispatch and sometimes preprocess incoming network requests or sessions and a backend that ultimately fulfills the requests. The services interact through public IP space, meaning that if Service Bus wants to talk to its SQL Database backend it is using a public IP address and not some private IP. The Internet is the bus. The backend and its structure is entirely a private implementation matter. It could be a single role or many roles.

Any gateway’s job is to provide network request management, which includes establishing and maintaining sessions, session security and authorization, API versioning where multiple variants of the same API are often provided in parallel, usage tracking, defense mechanisms, and diagnostics for its areas of responsibility. This functionality is specific and inherent to the service. And it’s not all HTTP. SQL database has a gateway that speaks the Tabular Data Stream protocol (TDS) over TCP, for instance, and Service Bus has a gateway that speaks AMQP and the binary proprietary Relay and Messaging protocols.

Governance and diagnostics don’t work by putting a man in the middle and watching the traffic coming by, which is akin to trying to tell whether a business is healthy by counting the trucks going to its warehouse. Instead we are integrating the data feeds that come out of the respective services and are generated fully knowing the internal state, and concentrate these data streams, like the billing stream, in yet other services that are also autonomous and have their own gateways. All these services interact and integrate even though they’re built by a composite team far exceeding the scale of most Enterprises’ largest projects, and while teams run on separate schedules where deployments into the overall system happen multiple times daily. It works because each service owns its gateway, is explicit about its versioning strategy, and has a very clear mandate to honor published contracts, which includes explicit regression testing. It would be unfathomable to maintain a system of this scale through a centrally governed switchboard service like an ESB.

Well, where does that leave “ESB technologies” like BizTalk Server? The answer is simply that they’re being used for what they’re commonly used for in practice. As a gateway technology. Once a service in such a federation would have to adhere to a particular industry standard for commerce, for instance if it would have to understand EDIFACT or X.12 messages sent to it, the Gateway would employ an appropriate and proven implementation and thus likely rely on BizTalk if implemented on the Microsoft stack. If a service would have to speak to an external service for which it would have to build EDI exchanges, it would likely be very cost effective to also use BizTalk as the appropriate tool for that outbound integration. Likewise, if data would have to be extracted from backend-internal message traffic for tracking purposes and BizTalk’s BAM capabilities would be a fit, it might be a reasonable component to use for that. If there’s a long running process around exchanging electronic documents, BizTalk Orchestration might be appropriate, if there’s a document exchange involving humans then SharePoint and/or Workflow would be a good candidate from the toolset.

For most services, the key gateway technology of choice is HTTP using frameworks like ASP.NET, Web API, probably paired with IIS features like application request routing and the gateway is largely stateless.

In this context, Windows Azure Service Bus is, in fact, a technology choice to implement application gateways. A Service Bus namespace thus forms a message bus for “a service” and not for “all services”. It’s as scoped to a service or a set of related services as an IIS site is usually scoped to one or a few related services. The Relay is a way to place a gateway into the cloud for services where the backend resides outside of the cloud environment and it also allows for multiple systems, e.g. branch systems, to be federated into a single gateway to be addressed from other systems and thus form a gateway of gateways. The messaging capabilities with Queues and Pub/Sub Topics provide a way for inbound traffic to be authorized and queued up on behalf of the service, with Service Bus acting as the mediator and first line of defense and where a service will never get a message from the outside world unless it explicitly fetches it from Service Bus. The service can’t be overstressed and it can’t be accessed except through sending it a message.

The next logical step on that journey is to provide federation capabilities with reliable handoff of messages between services, meaning that you can safely enqueue a message within a service and then have Service Bus replicate that message (or one copy in the case of pub/sub) over to another service’s Gateway – across namespaces and across datacenters or your own sites, and using the open AMQP protocol. You can do that today with a few lines of code, but this will become inherent to the system later this year.
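For illustration, a minimal sketch of that kind of handoff with today's .NET Service Bus client might look like this (the connection strings and queue names are placeholders, and this is not necessarily the mechanism the built-in feature will use):

using System;
using Microsoft.ServiceBus.Messaging;

public static class GatewayRelay
{
    // Drain an "outbox" queue in one namespace and hand each message to an "inbox" queue in another.
    public static void RelayMessages(string sourceConnectionString, string targetConnectionString)
    {
        var source = QueueClient.CreateFromConnectionString(sourceConnectionString, "outbox");
        var target = QueueClient.CreateFromConnectionString(targetConnectionString, "inbox");

        BrokeredMessage message;
        while ((message = source.Receive(TimeSpan.FromSeconds(5))) != null)
        {
            target.Send(message.Clone());   // a received message can't be re-sent as-is
            message.Complete();             // remove it from the source queue only after the copy is sent
        }
    }
}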


Steve Plank (@plankytronixx) reposted Windows Azure Active Directory Cartoon on 1/12/2013:

I posted this video onto Channel 9 before Christmas but I can see something has gone wrong with the indexing and it’s pretty undiscoverable on the site. Thought I’d make it known through the blog.


Abishek Lal continued his series with Enterprise Integration Patterns with Service Bus (Part 2) on 1/11/2013:

Priority-Queues

The scenario here is that a receiver wants to receive messages in order of priority from a single sender or multiple senders. A common use case for this is event notification, where critical alerts need to be processed first. Today Service Bus Queues do not have the capability to internally sort messages by priority, but we can achieve this pattern using Topics and Subscriptions.

We achieve this scenario by routing messages with different priorities to different subscriptions. Routing is done based on the Priority property that the sender adds to the message. The recipient processes messages from specific subscriptions and thus achieves the desired priority order of processing.

The Service Bus implementation for this is through the use of SQL Rules on Subscriptions. These Rules contain Filters that are applied to the properties of the message and determine whether a particular message is relevant to that Subscription.

Service Bus Features used:

  1. SQL Filters can specify Rules in SQL 92 syntax
  2. Typically Subscriptions have one Rule, but multiple can be applied
  3. Rules can contain Actions that may modify the message (in that case a copy of the message is created by each Rule that modifies it)
  4. Actions can be specified in SQL 92 syntax too.

The code sample for this is available here.
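For reference, a minimal sketch of the pattern with the .NET Service Bus client could look like the following (the topic and subscription names and the priority values are illustrative, and this is not necessarily the linked sample's code):

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

public static class PrioritySubscriptions
{
    public static void Run(string connectionString)
    {
        var manager = NamespaceManager.CreateFromConnectionString(connectionString);

        // One subscription per priority level, each with a SQL filter on the Priority property.
        if (!manager.TopicExists("alerts"))
            manager.CreateTopic("alerts");
        if (!manager.SubscriptionExists("alerts", "HighPriority"))
            manager.CreateSubscription("alerts", "HighPriority", new SqlFilter("Priority = 'High'"));
        if (!manager.SubscriptionExists("alerts", "LowPriority"))
            manager.CreateSubscription("alerts", "LowPriority", new SqlFilter("Priority = 'Low'"));

        // The sender stamps each message with the Priority property used for routing.
        var topicClient = TopicClient.CreateFromConnectionString(connectionString, "alerts");
        var message = new BrokeredMessage("disk almost full");
        message.Properties["Priority"] = "High";
        topicClient.Send(message);

        // The receiver drains the high-priority subscription before touching the low-priority one.
        var highPriority = SubscriptionClient.CreateFromConnectionString(connectionString, "alerts", "HighPriority");
        BrokeredMessage received;
        while ((received = highPriority.Receive(TimeSpan.FromSeconds(1))) != null)
        {
            // process the critical alert here
            received.Complete();
        }
    }
}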



<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Kevin Remde (@KevinRemde) continued his series with 31 Days of Servers in the Cloud – Creating Azure Virtual Machines with App Controller (Part 13 of 31) on 1/13/2013:

As you know, if you’ve been following our series, “31 Days of Servers in the Cloud”, Windows Azure can become an extension of your datacenter, and allow you to run your servers in the cloud.

“We get it, Kevin.”

And you’ve seen excellent articles in this series already, describing how to use the Windows Azure portal to create your virtual machines, how to upload your own VM hard disks into the cloud and use them to build machines, and more. In today’s installment, I’m going to show you how easy it is to connect App Controller (a component of System Center 2012) to your Windows Azure account, and then how to use App Controller to create virtual machines in your Windows Azure cloud.

To do this, we need to have a few preliminaries in place:

  1. You have a Windows Azure subscription, and have requested the ability to preview the use of Windows Azure virtual machines. (If you don’t have an account, you can start a free 90-day trial HERE.)
  2. You have System Center 2012 App Controller installed. (Download the System Center 2012 Private Cloud evaluation software HERE.)
    NOTE: You will need System Center 2012 SP1 App Controller, which at the time of this writing is available to TechNet and MSDN subscribers and volume license customers only; but will very soon be generally available. I will update this blog post as soon as that happens.

So, with nothing more assumed than just those basics, let’s walk through the following steps:

  1. Connect App Controller to your Windows Azure subscription (READ THIS POST for the instructions on how to do this.)
  2. Create a Storage Account in Windows Azure
  3. Use App Controller to create a new Virtual Machine

Assuming you’ve done part 1, and have your connection to your Windows Azure subscription set up in App Controller, let’s move on.

Create a Storage Account in Windows Azure

There are many ways to create a new storage account:

  • I could use the Windows Azure administrative portal
  • I could use PowerShell for Windows Azure and the New-AzureStorageAccount cmdlet
  • Or I could do it using App Controller.

For our purposes, let’s use App Controller.

Open App Controller and login as your administrative account. On the left, select Library.

image

Click Create Storage Account. Give your storage account a name, and choose a region or an affinity group.

image

Click OK. You should see something that looks like this at the bottom-right of the browser window:

image

After a few minutes, a refresh of the Library page should show you that you now have your new storage account available.

image

Now we need to create a container to hold our machine disk(s). With your new storage account selected, Click Create Container.

image

Give your container a name and click OK.

In a very short while, you’ll see your new container.

image

Now we’re ready to create virtual machines.

Use App Controller to create a new Virtual Machine

Open App Controller and login as your administrative account.

On the left, select Virtual Machines. This is where we can see, manage, and create new virtual machine and service deployments. (If you’re doing this for the first time, you won’t see items in your list here just yet.)

image

Click Deploy. The New Deployment window opens up.

image

Under Cloud, click Configure…, then select your Windows Azure connection as the cloud into which you’re going to deploy your new virtual machine.
(Note: In my App Controller, I’ve also connected to a local VMM Server, which is why I see this other cloud in my list.)

image

Click OK.

Now you will see this:

image

Click Select an Item… under Deployment Type. Now you’ll see a screen that looks something like this:

image

This is where you can choose to build a new machine or service based on existing, provided images, or images or disks you’ve uploaded into your own Windows Azure storage. In this example, I’m going to select Images on the left, and choose to build a new Windows Server 2012 machine using the provided image.

Once I click OK, I now see this:

image

So the next thing I need to do is click Configure… under Cloud Service. Virtual machines and services all run in the context of cloud services. For our example, we’re going to assume that you haven’t created any machines or other items that require a service, so your list is going to be empty. You’ll use this screen to create and then select your new service.

image

Click Create… and then fill in cloud service details (Name, Description) and the cloud service location (a unique public URL, plus a geographic region or affinity group).

imageClick OK, and then select your new service and click OK again.

image

Next we need to configure the deployment:

image

Click Configure… under Deployment. Now you’ll see this:

image

Enter a deployment name, and optionally associate your machine with a virtual network if you have one. (If you don’t have, or don’t select a network, you will be creating the machine and service to handle networking within the service automatically.) Click OK.

Now it’s time to configure the virtual machine itself.

image

Click Configure… under Virtual Machine.

Now we set the general properties…

image

Note: an Availability Set is not required, but a new one can be created or an existing one selected from here.

Set the Disks…

image

When I click Browse…, I’m given the ability to choose the location for my disks in Windows Azure storage, as well as to add (or create) additional data disks for this machine. For our example let’s use the storage account and container we created earlier. I won’t be adding any data disks.

image

For the Network…

image

…I’ll just leave the default. I could use this opportunity to define additional endpoints for connections to services on this machine, or I could do it later.

For Administrator password

image

…enter a password for the local administrator account. (It also looks like you can use this to assign the computer to a domain if you happen to have a domain controller in the same network or service. I haven’t yet tried this, so I can’t comment further.)

Click OK.

image

And now click Deploy.

You’ll see a notification towards the bottom right that should look something like this:

image

And after several minutes, looking in the Virtual Machines area of App Controller, you will see your new machine appear. Its status will change to “provisioning”, and eventually “running”.

image

Notice also that if you select your new machine, you also have the option now to connect to it via Remote Desktop! (Cool!) Log in as the Administrator with the administrator password you assigned, and you’re in!

image

Naturally, you can very easily use App Controller to delete your machines, disks, storage containers, and storage accounts, too. (Remember to do that when you’re done. Even if a machine isn’t running, you’re still being billed for it and for the storage being used!)


Kevin Remde (@KevinRemde) continued his series with 31 Days of Servers in the Cloud–Use PowerShell to create a VM in Windows Azure (Part 14 of 31) on 1/14/2013:

As I’m sure most of you reading this already know, PowerShell is THE tool for automation and management, and the foundation for the configuration of effectively all products from Microsoft these days, as well as from many other companies. So it shouldn’t surprise you (especially if you’ve been following our “31 Days of Servers in the Cloud” series) that PowerShell is also able to configure and manage resources “in the cloud”.

Last week in Part 5, for example, I showed you how to connect PowerShell to your Windows Azure account. All you had to do was:

  1. Get a Windows Azure account (start with the free 90-day trial),
  2. Get the Windows Azure PowerShell tools, and
  3. Follow some simple instructions to set up the secured connection for Windows Azure management.

In today’s installment of our series, my friend Brian Lewis shows you how you can take PowerShell in Windows Azure to the next level, and actually use it to create Virtual Machines running in your Windows Azure cloud.

READ HIS EXCELLENT ARTICLE HERE


And if you need to catch up on any of our series, CLICK HERE for the full list of links.

Try Windows Azure free for 90 days

 

 


Nathan Totten (@ntotten) described Static Site Generation with DocPad on Windows Azure Web Sites in a 1/11/2013 post:

There has been a lot of interest recently in static content generation tools. These tools allow you to generate a website from source documents such as markdown and serve static html files. The advantage of static sites is that they are extremely fast and very inexpensive to host. There are plenty of ways you can host static content that is already generated, but if you want a solution that provides integrated deployment and automated generation you can easily set up Windows Azure Web Sites to host your statically generated site.

SHAMELESS PLUG: You can host up to 10 sites on Windows Azure Web Sites for free. No trial, no expiration date, completely free. :)

I have previously written about Jekyll and GitHub pages for generating static content. For this post I am going to use my new favorite tool, DocPad. I like DocPad because it is written entirely in Node.js. This gives you the ability to use all kinds of cool Node tools like Jade, CoffeeScript, and Less. To get started with DocPad you just need to install it globally using Node Package Manager (npm install -g docpad). Then, in an empty directory, run docpad run to create and run a new DocPad site.

Running DocPad in an empty directory will scaffold your new site. You will be asked which template you would like to use. I am going to select “Twitter Bootstrap with Jade”. DocPad will create the site and also initialize the empty folder as a git repository.

docpadrun

After the site is generated the docpad server will run and you can access the site at http://localhost:9778/.

docpad-localhost

Now that we have a basic DocPad site running locally we are ready to deploy it to Windows Azure Web Sites. There are a few minor changes that you will need to make in order to get the site running on Windows Azure Web Sites.

First, we need to trigger the static content to be generated when the site is deployed. In order to do this we are going to create a simple deploy script that Windows Azure Web Sites will run on each deployment. To create the default deploy script, run “azure sites deploymentscript --node” in your site’s directory. The default deployment script will set up your site for deployment to Windows Azure Web Sites. We will make a modification to the script that will build the static content for the DocPad site.

docpad-deploymentscript

docpad-deploymentfiles

You will see four new files created in your DocPad site: .deployment, deploy.cmd, web.config, and iisnode.yml. The .deployment file tells Windows Azure which command to run and deploy.cmd is the actual deployment script. Web.config tells IIS to use the IISNode handler for serving Node.js content and iisnode.yml contains the IISNode settings.

Open the deploy.cmd file to configure the script to build our static DocPad site. Inside the file you will see a section titled Deployment. This section contains two groups of commands titled KuduSync and Install npm packages. Directly after the Install npm packages code add the lines shown below.

This script will generate the static content in the /out folder. Now we need to make one minor configuration change to make sure our deploy.cmd runs correctly. By default, DocPad prompts you to agree to its terms every time you generate a static site. This will cause our Windows Azure deployment to fail. To disable this behavior simply set prompts to false in the docpad.coffee configuration as shown below.

Next, we need to set up the server to correctly host the DocPad site. This can be done with a single line of Node.js code. Create a file named server.js in the root of your site with the following content.

Everything is now set up and we are ready to deploy. For this deploy I am going to use continuous integration from GitHub to Windows Azure Web Sites. If you haven’t already done so, you will need to create a new GitHub repository. Once your repository is created you can create a new Windows Azure Web Site and link it to your repository for automatic continuous deployments.

In order to set up continuous integration, open the web site’s dashboard and click “Set up Git publishing” in the “quick glance” section on the right. After your Git repository is ready click the section labeled “Deploy from my GitHub Repository” and click “Authorize Windows Azure” as shown below.

docpad-setupgitpublishing

If you have never done this before, you will be prompted to authorize Windows Azure to access your GitHub account. After Windows Azure is authorized select your GitHub repository as shown below.

docpad-selectrepository
Now that everything is connected for continuous integration we just need to push our site to GitHub. Run the following commands to add the files, commit them, add the remote, and push the code.

After the push is complete you will see that your site deployment kicks off almost immediately.

docpad-deployment

The deployment should finish in about 30 seconds and your site will be ready. Browse to the site and you will see your new docpad site.

docpad-site

You can find the full source for this post on Github here.



<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Yvonne Muench (@yvmuench) recounted After the Storm - ESRI Maps Out the Future in the Cloud in a 1/16/2013 post:

A storm hits. Trees down power lines, water levels rise. Emergency teams scramble to figure out the damage, who’s vulnerable? First responders think visually so answering these questions often starts with a map. And those maps are frequently powered by Esri, a leader in Geographic Information Systems (GIS). A couple weeks ago I visited their Redlands, CA headquarters on what just happened to be the day after Hurricane Sandy hit the Eastern seaboard.

Historically GIS solutions have been delivered as complicated desktop apps, which required trained GIS specialists to use. But for about a year now Esri has included a cloud-based component to the system called ArcGIS Online. It lets users, even laymen like you and me, create and instantly publish and share interactive maps. During the storm they experienced an intense spike in demand. The system scaled 3x in one day going from 50 million maps to 150 million just after the peak of the storm, all hosted on Windows Azure. To learn more about how cloud is affecting their industry, I chatted with Russ Johnson, Esri’s Director of Public Safety and National Security Solutions, as well as Paul Ross, Product Manager for ArcGIS Online.

The past…

Before joining Esri, Russ used to be a Commander in National Incident Response teams. He described how knowing what was in place before the disaster is so critical, especially when leading a diverse team that spans local, state and federal organizations. This information would typically be printed and distributed to emergency workers. One of the biggest challenges was to find the data, because local government is often in disarray due to the disaster. For Sept 11th it took at least a week to find the data, bring it together, normalize it, create a database, then produce maps. Once maps were produced people came out of the woodwork with asks. “What bridges can carry big loads? What government buildings are vacant, damaged, etc…”

After initial assignments are made, you typically plan and refresh every twelve hours. That could require four to five hours in a helicopter surveying the incident. Then a couple hours with the planning team to generate and print new assignments for the response teams.

The present…

Enter the cloud, which is beginning to fundamentally change a thirty-year process in a couple of key ways. First there’s access to unbelievable amounts of quality base data already online. Then there’s ease of use – the new app allows people to quickly create maps almost anyone can use, no specialists required. And instead of maps being printed, or at best converted to static PDFs and then posted online, now data and intelligence are coming in real time and maps are dynamically updating - you can get the current status of roads, gas stations, shelters. These maps can be almost instantaneously accurate and easily shared broadly, across multiple agencies. Resource assignments can be made digitally and sent directly to emergency responders. The public can access the same maps. The result is more informed decisions, better outcomes, and greater continuity of governance.

Another new capability enabled by cloud is dynamic mash ups. For example take a map of flood zones, then bring in real time stream gauges to show current water levels. Or take a map of shelters and layer on the open commercial stores nearby. One map generates another. Laymen users combine static and dynamic data by themselves. The possibilities are endless. For example… here’s a relevant question many people had - how will Superstorm Sandy affect voter turnout in the 2012 US presidential election? This map took precinct-level historical voting data and overlaid FEMA impact zones for the disaster.

Darker shaded counties were most damaged by the storm. By mashing up storm and voting data, one could assess impact of the storm on expected voter turnout by political party.

The future…

We’re on the cusp of something even more powerful - crowd sourcing for damage assessment. Leading up to the storm, on the spur of the moment Esri created a cloud-based mobile solution allowing individuals to report and upload conditions on the ground into a central database. With the wide and growing availability of smart phones, this raises the notion of every member of the public as a possible sensor for damage assessment. Powerful stuff.

Even though we’re not quite there yet, the cloud has already been a game changer in this industry. Paul Ross explained that for the first time with Hurricane Sandy there was no hesitation to use the cloud for mission critical information. Utility companies posted outage maps – identifying where power was out and when you can expect it to come back. State and local governments posted evacuation maps and impact areas.

What I find inspiring is that Esri is not just doing the same things in the cloud as they did on premises. They are doing things that are only possible in the cloud. It shows the power of software to help solve real human problems and the power of the cloud to deliver step-change improvements in how it’s done.

About Esri and the Microsoft Disaster Response Program

Esri and Microsoft partner to provide public and private agencies and communities information maps during disasters in a cloud computing infrastructure. Microsoft directs citizens to the maps via online platforms such as MSN, Bing, and Microsoft.com.


• Philip Fu described [Sample Of Jan 15th] Startup tasks and Internet Information Services (IIS) configuration in Windows Azure on 1/15/2013:

You can use the Startup element to specify tasks to configure your role environment. Applications that are deployed on Windows Azure usually have a set of prerequisites that must be installed on the host computer. You can use startup tasks to install the prerequisites or to modify configuration settings for your environment. Web and worker roles can be configured in this manner.

You can find more code samples that demonstrate the most typical programming scenarios by using the Microsoft All-In-One Code Framework Sample Browser or the Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If this is the first time you have heard about the Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


Michael Collier (@MichaelCollier) posted Tips for Publishing Multiple Sites in a Web Role on 1/14/2013:

imageIn November 2010, with SDK 1.3, Microsoft introduced the ability to deploy multiple web applications in a single Windows Azure web role. This is a great cost savings benefit since you don’t need a new role – essentially a virtual machine – for each web application you want to deploy.

Creating a web role that contains multiple web sites is pretty easy. Essentially, you need to add multiple <Site> elements to your web role's ServiceDefinition.csdef file. Each <Site> element includes a physicalDirectory attribute that references the location of the web site to be included.

<Sites>
  <Site name="WebRole1" physicalDirectory="..\..\..\WebRole1">
    <Bindings>
      <Binding name="Endpoint1" endpointName="Endpoint1" />
    </Bindings>
  </Site>
  <Site name="WebApplication1" physicalDirectory="..\..\..\WebApplication1\">
    <Bindings>
      <Binding name="Endpoint1" endpointName="Endpoint2" />
    </Bindings>
  </Site>
  <Site name="WebApplication2" physicalDirectory="..\..\..\WebApplication2\">
    <Bindings>
      <Binding name="Endpoint1" endpointName="Endpoint3" />
    </Bindings>
  </Site>
</Sites>

image_thumb75_thumb5For additional detailed information on creating a web role with multiple web sites, I suggest following the guidance provided at these excellent resources:

The above resources provide a great starting point. However, there is one piece of important information that is missing. When Visual Studio and the Windows Azure SDK (via CSPack) create the cloud deployment package (.cspkg), the content at the physicalDirectory location is simply copied into the deployment package. This means any web applications there are not compiled as part of the process, no .config transformations take place, and code-behind (.cs) and project (.csproj) files are copied along with everything else.

WebApplication1_AllFiles

What's going on? CSPack is the part of the Windows Azure SDK responsible for creating the deployment package file (.cspkg). Because CSPack is part of the core Windows Azure SDK, it knows nothing about Visual Studio projects. Since it doesn't recognize the Visual Studio projects located at the physicalDirectory location, it can't perform any of the normal Visual Studio build and publish tasks; it simply copies the files from the source physicalDirectory to the destination deployment package.

However, when packaging a single-site web role, CSPack doesn’t rely on the physicalDirectory attribute. With a single-site web role, the packaging process is able to build, publish, and create the deployment package.

The Workaround

Ideally, each web site should be published prior to packaging in the .cspkg. Currently there is not a built-in way to do this. Fortunately we can use MSBuild to automate the build and publish steps.

  1. Open the Windows Azure project's project file (.ccproj) in an editor. Since the .ccproj is an MSBuild file, additional data points and build targets can be added here.
  2. Add the following towards the bottom of the .ccproj file.
<PropertyGroup>
  <!-- Inject the publication of "secondary" sites into the Windows Azure build/project packaging process. -->
  <!-- CleanSecondarySites; should be in CoreBuildDependsOn. Not working with Package in Visual Studio (works from cmd line though).  Investigating. -->
  <CoreBuildDependsOn>
    CleanSecondarySites;
    PublishSecondarySites;
    $(CoreBuildDependsOn)
  </CoreBuildDependsOn>
  <!-- This is the directory within the web application project directory to which the project will be "published" for later packaging by the Azure project. -->
  <SecondarySitePublishDir>azure.publish\</SecondarySitePublishDir>
</PropertyGroup>
<!-- These SecondarySite items represent the collection of sites (other than the web application associated with the role) that need special packaging. -->
<ItemGroup>
  <SecondarySite Include="..\WebApplication1\WebApplication1.csproj" />
  <SecondarySite Include="..\WebApplication2\WebApplication2.csproj" />
</ItemGroup>
<Target Name="CleanSecondarySites">
  <RemoveDir Directories="%(SecondarySite.RootDir)%(Directory)$(SecondarySitePublishDir)" />
</Target>
<Target Name="PublishSecondarySites" Condition="'$(IsExecutingPublishTarget)' == 'true'">
  <!--
    Execute the Build (and more importantly the _WPPCopyWebApplication) target to "publish" each secondary web application project.
    
    Note the setting of the WebProjectOutputDir property; this is where the project will be published to be later picked up by CSPack.
  -->
  <MSBuild Projects="%(SecondarySite.Identity)" Targets="Build;_WPPCopyWebApplication" Properties="Configuration=$(Configuration);Platform=$(Platform);WebProjectOutputDir=$(SecondarySitePublishDir)" />
</Target>

Finally, for each secondary site defined in the .csdef, update the physicalDirectory attribute to reference the publishing directory, azure.publish\.

<Site name="WebApplication1" physicalDirectory="..\..\..\WebApplication1\azure.publish">
  <Bindings>
    <Binding name="Endpoint1" endpointName="Endpoint2" />
  </Bindings>
</Site>

How does this all work? Adding the CleanSecondarySites and PublishSecondarySites targets to the <CoreBuildDependsOn> element forces them to run before any of the default targets included in <CoreBuildDependsOn> (defined in the Microsoft.WindowsAzure.targets file). Thus, the secondary sites are built and published locally before any of the default Windows Azure build targets execute. The IsExecutingPublishTarget condition ensures that PublishSecondarySites runs only when packaging the Windows Azure project (either from Visual Studio or via the command line with MSBuild).
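
If you want to verify this behavior outside of Visual Studio, you can produce the package from a command prompt by invoking the Publish target on the cloud project; the project file name below is only a placeholder:

msbuild MyCloudProject.ccproj /t:Publish /p:Configuration=Release

The resulting .cspkg (typically written under bin\Release\app.publish) should then contain the published output of each secondary site rather than its raw project files.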

I like this approach because it is fairly clean and I don't have to modify build events (keep reading) for each secondary web site I want to include. I can keep the above build configuration snippet handy and reuse it quickly in future projects.

Alternative Approach

The above approach relies on modifying the project file to help automate the build and publish of any secondary sites. An alternative approach would be to leverage build events. I first learned of this approach from “Joe” in the comments section of Wade Wegner’s blog post.

  1. Select your Windows Azure project, and select Project Dependencies. Mark the other web apps as project dependencies.
  2. Open the project properties for each web application (not the one used as the Web Role).
  3. For the Build Events, you’ll need to add a pre-build and post-build event.

Pre-Build

rmdir "$(ProjectDir)..\YOUR-AZURE-PROJECT\Sites\$(ProjectName)" /S /Q

Post-Build

%WinDir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe "$(ProjectPath)"
/T:PipelinePreDeployCopyAllFilesToOneFolder /P:Configuration=$(ConfigurationName);PreBuildEvent="";PostBuildEvent="";
PackageAsSingleFile=false;_PackageTempDir="$(ProjectDir)..\YOUR-AZURE-PROJECT\Sites\$(ProjectName)"

The pre-build event cleans up any files left around from a previous build. The post-build event will trigger a local file system Publish action via MSBuild. The resulting published files go to the “Sites” subdirectory.

Finally, be sure to update the physicalDirectory attribute in ServiceDefinition.csdef to reference the local publishing subdirectory (notice the ‘Sites’ directory in the updated snippet below).

<Site name="WebApplication1" physicalDirectory="..\..\Sites\WebApplication1\">
  <Bindings>
    <Binding name="Endpoint1" endpointName="Endpoint2" />
  </Bindings>
</Site>

When you “Publish” the Windows Azure project, all your web sites will build and publish to a local directory. All the web sites will have the files you would expect.

Final Helpful Info

When deployed to Windows Azure, the secondary sites referenced in the physicalDirectory attribute are placed in the sitesroot folder of your E or F drive. The site that is the web role is actually compiled, published and deployed to the approot folder on the E or F drive. If you want to see this layout locally, first unzip the .cspkg and then unzip the .cssx file (in the extracted .cspkg). This is the layout that is deployed to Windows Azure.
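
As a rough sketch of that layout (the drive letter varies between deployments; unzip your own package as described above to confirm the exact folder names):

E:\approot\      <- the web role's own site, compiled and published
E:\sitesroot\    <- one subfolder per secondary <Site> defined in the .csdef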

Special thanks to Paul Yuknewicz and Phil Hoff for their insightful feedback and assistance on this post.

image_thumb22


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

• Beth Massi (@bethmassi) suggested that you Get Started Building SharePoint Apps in Minutes with LightSwitch in a 1/17/2013 post to the Visual Studio LightSwitch blog:

imageI’ve dabbled in SharePoint 2010 development in the past by using Visual Studio. In fact, I wrote a fair share of articles and samples about it. However I’ve been slacking when it comes to really learning the new app model in SharePoint 2013. I’ve got a good understanding of the architecture, have played with Napa a little, but I just haven’t really had the time to dig into the details, get dirty, and build some real SharePoint apps.

imageLuckily, one of my favorite products has come to save me! In the latest LightSwitch Preview 2, we have the ability to enable SharePoint 2013 on our LightSwitch projects. This gives us access to SharePoint assets as well as handling the deployment of our application into the SharePoint app catalog. In no time you can create a business app using the LightSwitch HTML client, deploy it to SharePoint 2013, and run it from a variety of mobile devices.

So why would you want to deploy a LightSwitch app to SharePoint? I mean, I can just host this app on my own or in Azure, right? Yes, you can still host LightSwitch apps yourself, however, enabling SharePoint in your LightSwitch apps allows you to take advantage of business data and processes that are already running in SharePoint in your enterprise. Many enterprises today use SharePoint as a portal of information and applications while using SharePoint’s security model to control access permissions. So with the new SharePoint 2013 apps model, this makes running LightSwitch applications from SharePoint / Office 365 very compelling for many businesses.

Sign Up for an Office 365 Developer Account

The easiest way to get started is to sign up for a free Office 365 Developer account. Head to dev.office.com to get started. When you sign up, you’re required to supply a subdomain of .onmicrosoft.com and a user ID. After signup, you use the resulting user ID (i.e. userid@yourdomain.onmicrosoft.com) to sign in to your portal site where you administer your account. Your SharePoint 2013 Developer Site is provisioned at your new domain: http://yourdomain.onmicrosoft.com.

You can see your developer site by selecting SharePoint under the Admin menu on the top of the page. This will list all your site collections. Make sure you use this developer site for your LightSwitch apps otherwise when you debug your application you will get an error “Sideloading of apps is not enabled on this site.”

image

Get the LightSwitch HTML Client Preview 2

In order to get LightSwitch SharePoint & HTML functionality, you’ll need to have Visual Studio 2012 installed and then you can install the LightSwitch Preview 2 which is included in the Office Developer Tools Preview 2.

Install: Microsoft Office Developer Tools for Visual Studio 2012 - Preview 2

Build an App – Here’s a Tutorial

Now you’re ready to build an app! We’ve got a tutorial that walks you through building a survey application using LightSwitch that runs in SharePoint. I encourage you to give it a try, it should take under an hour to complete the tutorial, and in the end you’ll have a fully functional modern app that runs on a variety of mobile devices.

LightSwitch SharePoint Tutorial

This tutorial demonstrates how LightSwitch handles the authentication to SharePoint using OAuth for you. It also shows you how to use the SharePoint client object model from server code, as well as writing WebAPI methods that can be called from the HTML client. Check out my finished SharePoint app! (click images to enlarge)

image           image

image

If you’ve got questions and/or feedback, please head over to the LightSwitch HTML Client Preview Forum and let the team know.

More Resources & Reading

My LightSwitch HTML 5 Client Preview 2: OakLeaf Contoso Survey Application Demo on Office 365 SharePoint Site post, updated 1/8/2013, covers the autohosting model which involves Windows Azure. It’s important to note that the Office Store isn’t accepting SharePoint apps that use the autohosting model and has offered no timetable for when the embargo might be lifted.


• Beth Massi (@bethmassi) listed the Most Popular LightSwitch Team Blog Posts of 2012 on 1/16/2013:

imageI was doing some content analysis this morning for this blog and the LightSwitch team blog and I thought it would be fun to list off the most viewed LightSwitch team articles of 2012. Then I thought it would probably be helpful to also look at the most popular “How Do I” videos on the Developer Center as well.

imageIt’s exciting to see one of the most popular posts was the announcement of the HTML Client which brings the ease and speed of business app development to mobile devices as well as optional deployment and integration with SharePoint 2013 / Office 365. See my post on how to get started with the HTML Client.

Make sure you check these gems out!

Top 20 LightSwitch Team Articles (most views in 2012)
Top 10 How Do I Videos

See all the videos on the LightSwitch Developer Center.


Rowan Miller posted Entity Framework Links #3 on 1/15/2013:

imageThis is the third post in a regular series to recap interesting articles, posts and other happenings in the EF world.

In December our team announced the availability of EF6 Alpha 2. Scott Guthrie also posted about this release. The announcement post provides details of the features included in this preview. Of particular note, we were excited to include a contribution from AlirezaHaghshenas that provides significantly improved warm up time (view generation), especially for large models. We’ve also got some more performance improvements coming in the next preview of EF6. The Roadmap page on our CodePlex site provides details of all the features we are planning to include in EF6, including those that are not yet implemented.

We shipped some fixes for the EF Designer in Visual Studio 2012 Update 1.

There were a number of good posts about Code First Migrations. Doug Rathbone blogged about using Code First Migrations in team environments. A bit older, but still worth pointing out, is a post by Jarod Ferguson with some useful tips for using Code First Migrations. Rowan Miller blogged about customizing the code that is scaffolded by Code First Migrations.

Devart announced Spatial data type support in their EF provider for Oracle.

Diego Vega posted details about a workaround for performance with Enumerable.Contains and non-Unicode columns against EF in .NET 4.0.

Julie Lerman published an article in MSDN Magazine about shrinking EF models with DDD bounded contexts. She also blogged about an issue she has seen with machine.config files messing up Code First provider factories.

In case you missed it, the EF Team is now on Twitter and Facebook; follow us to stay updated on a daily basis.


Heinrich Wendel described Visualizing List Data using a Map Control in a 1/14/2013 post to the Visual Studio LightSwitch blog:

image_thumb6The true power of LightSwitch lies in its combination of Access-like ease of use and quick ramp-up, while also remaining attractive for complex coding scenarios. The new LightSwitch HTML Client allows you to build modern apps in a couple of minutes, deploy them to online services like SharePoint and Azure and access them from a variety of mobile devices.

You already learned about the basic usage of the new LightSwitch HTML Application project type and screen templates in our previous blog posts. We also introduced you to some of our more powerful concepts. We discussed the new HTML Client APIs, such as Event Handling and Promises, showed you how to use them to write your own custom controls that bind to data and how to integrate existing jQueryMobile controls.

In this blog post we will dig even deeper into code and show you how to implement a custom collection control that shows a map. While LightSwitch already provides two collection controls out of the box, namely the List and Tile List controls, location based data is usually visualized using a map. This is especially useful on mobile devices and LightSwitch provides an API that allows you to write your own collection controls.

We will use the familiar Northwind database in this article. Instead of downloading it, we will just connect to an existing OData endpoint on the web. The Northwind database comes with a list of customers and their addresses. We will simply display those customers as pins on Bing Maps and allow users to drill into the details by clicking on one of the pins. In addition to that, we will limit the number of customers displayed on the list by implementing a paging mechanism.

Before we start you should make sure that you have the latest LightSwitch HTML Preview 2 installed on your machine. You can download the LightSwitch HTML Client Preview 2 here. We will go through the basic steps very quickly. If you are not familiar with those you should first walk through our previous tutorials.

Creating a project and connecting to existing data

We will start with a new “LightSwitch HTML Application” project. After creating the project select “Attach to external Data Source”, choose “OData Service” and put in the following endpoint address: “http://services.odata.org/Northwind/Northwind.svc”. Select “None” for the Authentication Type and proceed to the next screen. Check the box to include all Entities and close the dialog by clicking “Finish”. After those preparation steps your project should look similar to the following screenshot:

blog1

Building the basic screen

Our small sample application only needs one Screen. We will start with the “Browse” Screen Template. Right click the “Client” node in the Solution Explorer, select “Add Screen…”, use the “Browse Data Screen” and select the “NorthwindEntitiesData.Customers” Entity. This will create a Screen with a list of all customer names that are in the database.

Now we will enhance this Screen to show some more information when selecting a customer. Add a new Dialog to the Screen by dragging the “Selected Item” property of the “Customers” Query into the Dialogs node. Then select the Rows Layout and set “Use read-only controls” in the Properties Window. Finally, select the List and change the “Item Tap” action in the Properties Window to show the created Dialog.

The result should look similar to the following screenshot:

blog2

Running (F5) the application will show you a list of all customers and selecting one will open a dialog with all the details.

bog3

Implementing the Control

Now we are ready to replace the built-in List control with our custom maps control. Go back to the Screen Designer and change the List into a Custom Control. Now that we have a Custom Control we have to actually implement the code to render it.

Adding the lightswitch.bing-maps control

First we need to add the lightswitch.bing-maps control to our Solution. It will provide a wrapper around the Bing Maps SDK that can be used in your LightSwitch application. We already used a similar wrapper in the Contoso Moving walkthrough. This one is slightly enhanced to provide some additional functionality like adding multiple pins. Switch to the “File View” in the Solution Explorer and add the lightswitch.bing-maps.js (attached to the end of this blog post) to the “Scripts” folder of the Client project. Then open default.htm and add the following line at the beginning of the script block:

<script type="text/javascript" charset="utf-8"
    src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=7.0"></script>

And this one at the end of the script block:

<script type="text/javascript" charset="utf-8" src="Scripts/lightswitch.bing-maps.js"></script>

Implementing the custom control

Let’s implement the code of the actual control now. Select the Custom Control in the Screen Designer and select “Edit Render Code” from the Properties Window. This will open the BrowseCustomers.js file.

bog4

In that file, replace the prepopulated code with the following:

/// <reference path="../GeneratedArtifacts/viewModel.js" />

var mapDiv;
var current = 0;
var step = 15;

myapp.BrowseCustomers.Customer_render = function (element, contentItem) {
    mapDiv = $('<div />').appendTo($(element));
    $(mapDiv).lightswitchBingMapsControl();

    var visualCollection = contentItem.value;
    if (visualCollection.isLoaded) {
        showItems(current, step, contentItem.screen);
    } else {
        visualCollection.addChangeListener("isLoaded", function () {
            showItems(current, step, contentItem.screen);
        });
        visualCollection.load();
    }
};

function showItems(start, end, screen) {
    $(mapDiv).lightswitchBingMapsControl("resetMap");
    $.each(screen.Customers.data, function (i, customer) {
        if (i >= start && i <= end) {
            $(mapDiv).lightswitchBingMapsControl("addPinAsync", customer.Address,
                customer.City, customer.Country, i + 1, function () {
                    screen.Customers.selectedItem = customer;
                    screen.showDialog("Details");
                });
        }
    });
};

This code will basically initialize the Bing Maps control and then add pins for the first 15 items in the list. Running the application will now no longer show a list, but a map with 15 pins. Clicking one of the pins will open a details dialog again.

blog5

The code explained

In order to understand the code we first have to understand the Visual Collection and its API. The Visual Collection is used as a proxy between queries and the UI and provides various properties and methods, amongst them:

  • visualCollection.isLoaded: This property tells you whether the data has already been loaded.
  • visualCollection.data: This property gives you access to the loaded data.
  • visualCollection.load(): This method will start to load the data.
  • visualCollection.selectedItem: This property gives you access to the currently selected item.

In the render method we initialize the lightswitch.bing-maps control and append it to the DOM tree. We then check whether the visualCollection is already loaded. If that is the case, we add the customers to the map. Otherwise we add a change listener which will call showItems() as soon as the data has been loaded.

In the showItems() method we first clear the map. Then we iterate over the list of customers and add the first 15 items to the map. We also pass a callback function which will open the details dialog as soon as a pin is clicked.

Adding support for Paging

At the moment the map only shows 15 customers. We do not want hundreds of pins displayed on the map at once; that would look very chaotic. But we still want to be able to look at the next 15 customers. Therefore we are going to implement a paging mechanism.

We are going to add support for going to the next 15 items and back again using two buttons. First add a New Group below the Custom Control, change it to a Columns Layout and set its “Height” to “Fit to Contents” in the Properties Window. Then add two Buttons inside of it (+Add, New Button). In the Add Button dialog select the “Write my own method” option. Name the methods “PreviousItems” and “NextItems”.

blog6

Right-click the NextItems button, select “Edit execute code”, and add the following code:

myapp.BrowseCustomers.NextItems_execute = function (screen) {
    current = current + step;
    if (current + step > screen.Customers.count) {
        if (screen.Customers.canLoadMore) {
            screen.Customers.loadMore().then(function (result) {
                showItems(current, current + step, screen);
            });
        }
    } else {
        showItems(current, current + step, screen);
    }
};

myapp.BrowseCustomers.PreviousItems_execute = function (screen) {
    current = current - step;
    showItems(current, current + step, screen);
};

Running the application will now give you the following screen with a working paging mechanism:

blog7

The code explained

In order to implement the paging mechanism we first have to understand the built-in query paging mechanism of LightSwitch. To save bandwidth, LightSwitch will only load a limited number of items at once. It is not important to know exactly how many items are loaded, just the fact that not all items are loaded at once. The Visual Collection offers two properties and one method for dealing with paging:

  • visualCollection.count: This property tells you the number of currently loaded items. It is important to notice that this is not the total number of items.
  • visualCollection.canLoadMore: This property tells you whether there are more items that can be loaded.
  • visualCollection.loadMore(): This will load more items.

In the execute method of the NextItems button we first check whether we already have enough items loaded to be displayed. If that is not the case we will check whether we can actually load more items and then load them. Using a Promise object, which we explained in our previous API post, we then show the newly loaded items using the showItems() method that we already implemented before.

Wrapping it up

That’s it! One screen, an external control and a few lines of glue code is all you need. The simple yet powerful API of LightSwitch makes it easier than ever to leverage the existing HTML ecosystem.

Please also note that if you want to use the Bing Maps control in a production environment you have to acquire your own API key. You can get one on the Bing Maps SDK webpage. After you have your key, replace our test key in lightswitch.bing-maps.js with your own.

I hope you will play around with some of the more advanced capabilities of our LightSwitch HTML Client Preview 2 and provide us with feedback in the MSDN Forums. We are always eager to learn more about what you are trying to accomplish using LightSwitch.


Return to section navigation list>

Windows Azure Infrastructure and DevOps

Brihadish Koushik posted a link to A Guide to Troubleshooting Windows Azure Applications to the Aditi Technologies blog on 1/17/2013:

Typically on Windows Server applications, troubleshooting is done by turning on IIS logs and event logs. These logs survive restarts, and developers examine them when a problem occurs. The same process can be followed in a Windows Azure application if remote desktop is enabled. Developers can connect to each instance to collect diagnostic data; the collection can then be done by simply copying the data to a local machine. However, this process is time consuming and will fail if the instance is re-imaged. It also becomes quite impractical when dealing with many instances.

Windows Azure Diagnostics (WAD) provides functionality to collect diagnostic data from an application running on Windows Azure and store it in Windows Azure Storage. The easiest way to set up WAD is to import the Windows Azure Diagnostics module into the application's service definition and then configure the data sources for which diagnostic data is to be collected.
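
As a rough sketch of what that setup looks like (the quota and transfer values below are illustrative, not recommendations), the module is imported in ServiceDefinition.csdef and the data sources are then described in a diagnostics.wadcfg file:

<!-- ServiceDefinition.csdef: enable the Windows Azure Diagnostics module for the role -->
<Imports>
  <Import moduleName="Diagnostics" />
</Imports>

<!-- diagnostics.wadcfg: collect Application event log errors and transfer them to storage every 5 minutes -->
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
                                configurationChangePollInterval="PT1M"
                                overallQuotaInMB="4096">
  <WindowsEventLog bufferQuotaInMB="256"
                   scheduledTransferLogLevelFilter="Error"
                   scheduledTransferPeriod="PT5M">
    <DataSource name="Application!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>

Other data sources (performance counters, IIS logs, crash dumps and so on) are configured in the same file.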

Download the free whitepaper below for an in-depth guide on how to troubleshoot Windows Azure Applications.


Mary Jo Foley (@maryjofoley) asserted “Microsoft is planning to make its 'Drawbridge' virtualization/hosting technology available on its Windows Azure cloud” in a deck for her Microsoft to offer its 'Drawbridge' virtualization technology on top of its Windows Azure cloud post of 1/14/2013 to ZDNet’s All About Microsoft blog:

imageIt looks like it's full-steam-ahead for Microsoft Research's 'Drawbridge' library OS technology.

According to a job posting for a software development engineer in test on the Microsoft careers site (which Charon at Ma-Config.com unearthed this past weekend), Microsoft is working on "delivering v1 of a new virtualization technology for Windows Azure." That technology, according to the job post, is "Drawbridge," which is an operating system technology developed by some of the same individuals who created the "Singularity" microkernel.

The job posting describes Drawbridge as "an innovative new hosting model." Microsoft officials previously have described Drawbridge as "a form of virtualization that seeks to replace the need for a virtual machine to run software across disparate platforms."

MScloudOS

A 2011 white paper describing Drawbridge explained that cloud hosting services like Amazon's EC2 and Windows Azure "might use the library OS design to substantially lower their per-sandbox costs." The authors noted that even though "VMMs (virtual machine managers) offer the benefits of a complete OS, and thus will likely always have their place in server consolidation, the library OS uses far fewer resources and thus offers lower costs, particularly for cloud applications with low CPU utilization."

Currently, Microsoft is offering Windows Azure customers "preview"/test versions of persistent virtual machines for Windows Azure which enable them to run Linux and/or Windows Server, along with their associated applications, on Windows Azure. Just last week, Microsoft announced VM Depot, a catalog of open-source virtual machine images for Windows Azure. Via this catalog, developers can build, deploy and share custom open-source stacks for Windows Azure.

There's no word in the aforementioned job post about when users can expect a preview/test of Drawbridge running on Windows Azure. I'd be surprised if anyone from the company mentions it on January 15 during a Cloud OS briefing for press and analysts. (Cloud OS is the term Microsoft is using to refer to Azure, the Windows Server OS and, increasingly, other technologies including System Center and SQL Server.)

During tomorrow's Cloud OS briefing, Microsoft officials are on tap to "detail several new Microsoft management products and services, which deliver against Microsoft’s Cloud OS vision," according to the invitation I received. I'd assume that the recently released System Center 2012 Service Pack 1 will be a main topic of conversation. The Configuration Manager piece of System Center SP1 -- along with the fourth version of the Windows Intune management service (which is hosted on Windows Azure) -- is key to Microsoft's strategy for managing Windows 8, Windows RT, Windows Phone 8, iOS and Android devices.

imageThe SP1 System Center Virtual Machine Manager component also includes new technologies of potential interest to those managing their own host, networking and storage systems.


James Urquhart (@jamesurquhart) posted Devops, complexity and anti-fragility in IT: An introduction to GigaOm’s Cloud blog on 1/13/2013:

imageSome time ago, my friend Phil Jaenke and I (and a few others) got into a debate on Twitter. The discussion started as an exploration of the changing nature of software development, operations and change control, and whether they are good or bad for the future of software resiliency. It resulted in a well-articulated post from Phil, arguing that you can’t have resiliency without stability, and vice versa.

imageHowever, as I started trying to outline a response, I realized that there was a lot of ground to cover. The core of Phil’s argument comes from his background as a hardware and systems administration expert in traditional IT organizations. And with that in mind, what he articulates in the post is a reasonable way to see the world.

However, cloud computing is changing things greatly for software developers, and these new models don't take kindly to strict control models. From an application-down perspective, Phil's views are highly suspect, given the immense success companies like Etsy and Netflix (despite their recent problems) have had with continuous deployment and engineering for resiliency.

Reconciling the two views of the world means exploring three core concepts required to understand why a new IT model is emerging, but not necessarily replacing everything about the old model.

The first of these concepts is devops, which earned its own three-part series from me a few years ago, and has since spawned off its own IT subculture. The short, short version of the devops story is simple: modern applications (especially on webscale, or so-called “big data” apps) require developers and operators to work together to create software that will work consistently at scale. Operations specifications have to be integrated into the application specifications, and automation delivered as part of the deployment.

In this model, development and operations co-develop and very often even cooperate, thus the term devops.

The second concept is one that I spoke about often in 2012: complex adaptive systems. I’ve defined that broad concept in earlier posts, but the stability-resiliency tradeoff is a concept that is derived from the study of complex adaptive systems. Understanding that tradeoff is critical to understanding why software development and operations practices are changing.

The third concept is that of anti-fragility, a term introduced by Nassim Nicholas Taleb in his recent book Anti-fragile: Things That Gain from Disorder. Anti-fragility is the opposite of fragility: as Taleb notes, where a fragile package would be stamped with “do not mishandle,” an anti-fragile package would be stamped “please mishandle.” Anti-fragile things get better with each (non-fatal) failure.

Although there are elements of Taleb’s commentary that I don’t love (the New York Times review linked to above covers the issues pretty well), the core concept of the book is a critical eye-opener for those trying to understand what cloud computing, build automation, configuration management automation and a variety of other technologies are enabling software engineers to do today that were prohibitively expensive even 10 years ago.

So, over the next few weeks, I will try to explore these concepts in greater detail. Along the way, I will endeavor to address Jaenke's concerns about the ways in which these concepts can be misapplied to some IT activities.

Please join me for this exploration. Use the comments section to push back when you think I am off-base, to acknowledge when what I say matches what you have experienced, and, above all, to share how you think your organization and career will change one way or another.

Full disclosure: I’m a registered GigaOm analyst.


David Linthicum (@DavidLinthicum) asserted “Most tools miss the point of the cloud, relying on simplistic assumptions that can lead you astray” in a deck for his Why cloud computing ROI tools are worthless post of 1/11/2013:

imageWith the rise of cloud computing comes a rise in tools and models that estimate the cost benefit of the technology. Most are created and promoted by cloud providers that sell their services, and a few come from analysts and consulting organizations. Whatever their source, their ROI calculations are based on the same assumption: Cloud computing avoids hardware and software investments, and because you pay only for the resources you use, the cost of those resources should align directly with the amount you require.

imageThose assumptions sound great, but the resulting ROI calculations are drastic oversimplifications of the problems the cloud is there to solve. In fact, these ROI calculators confuse businesses about the real value of the cloud and mislead both IT and business units.

The ability to determine the ROI of cloud computing is not a simple modeling exercise, as most people seem to think. To truly understand and calculate the business values of using cloud computing -- public, private, or hybrid -- requires a complex and dynamic analysis that is unique to the problem domain you're trying to address.

In other words, the value of cloud computing depends directly on the type of business, the core business processes, and the specific problems you're looking to solve. Additionally, you need to determine how much value you truly get from the increased agility and scalability that are core benefits of cloud-based platforms.

The bottom line: You can't use a simplistic ROI calculator that looks only at hardware and software avoidance. Anyone who trots out the hoary "ability to avoid capex costs" concept as the primary driver for adopting the cloud simply lacks a clear understanding of how to determine the true value of cloud computing.

Businesses that buy into the oversimplified models to determine the ROI of cloud computing may eventually find out they were way off the mark. They will have invested in cloud technology where they should not have, wasting money and time. Likewise, they won't have invested in cloud technology where it actually was the better approach for providing strategic benefit.

So ignore those simple cloud ROI calculators, and do the deep work to understand what you truly need and what you don't. It takes more upfront effort, but it pays off handsomely.


Simon Munro (@simonmunro) offered a Free eBook on Designing Cloud Applications on 1/9/2013 (missed when published):

Too often we see cloud projects fail, not because of the platforms or lack of enthusiasm, but from a general lack of skills in cloud computing principles and architectures. At the beginning of last year I looked at how to address this problem and realised that some guidance was needed on what is different about cloud applications and how to address those differences.

The result was a book that I wrote and published, “CALM - Cloud ALM with Microsoft Windows Azure”, which takes the approach that implementation teams know how to do development; they just don't know how to do it for the cloud and need to adopt cloud thinking into their existing development practices.

The “with Windows Azure” means that the book has been written with specific examples of how problems are solved with Windows Azure, but is not necessarily a book about Windows Azure — it applies as much to AWS (except you would have to figure out the technologies that apply yourself).

CALM looks at certain models and encourages filling in the detail of those models in order to come up with the design. The models include the lifecycle model, which looks at load and traffic over time, as well as the availability model, data model, test model and others. Because CALM covers the full breadth of ALM (not just development), some models apply to earlier stages (qualify and prove), and there are also post-delivery models, such as the deployment, health and operational models.

CALM is licensed as open source, which also means that it is free to download, read and use. It is available on github at github.com/projectcalm/Azure-EN, with pdf, mobi (Kindle), and raw html available for download on this share. A print version of the book is also available for purchase on Lulu.

I encourage you to have a look at CALM, let others know about it, ask any questions, and give me some feedback on how it can be made better.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

• Charles Babcock (@babcockcw) asserted “With System Center's new service pack and Windows Server 2012, an IT administrator can create Hyper-V virtual machines and deploy them to internal data center, remote hosting service provider or public cloud, such as a Windows Azure site” in a deck for his Microsoft Enhances System Center For Hybrid Cloud Work article of 1/16/2013 for InformationWeek’s Cloud blog:

imageMicrosoft upgraded its System Center suite Tuesday to make it a hybrid cloud manager and a more complete manager of the capabilities included in Windows Server 2012, launched last September.

As System Center 2012 and Windows Server 2012 mesh more tightly together, Microsoft has achieved what Michael Park, corporate VP for server and tools business marketing, described in an interview as its "Cloud Operating System." The term means mainly that System Center can now scale across many Windows Server 2012 servers and link up operations that may be scattered across more than one data center.

System Center 2012 was augmented Tuesday with Service Pack 1. It's been extended so that a single instance of System Center's Virtual Machine Manager module can handle up to 8,000 virtual machines on a cluster with 64 hosts. Add another Virtual Machine Manager instance and manage another 8,000 VMs.

image_thumb75_thumb7The combination of System Center with SP1 and Windows Server 2012 allows an IT administrator to create Hyper-V virtual machines and deploy them to his own data center, a remote hosting service provider or a public cloud, such as a Windows Azure site.

If the hosting service provider is one of 14,000 that has equipped itself with its own version of System Center, then the enterprise IT manager will be able to see any type of workload -- on-premises, in a hosting service provider's data center or in the Windows Azure cloud -- from a single System Center console. That's because Microsoft supplies Service Provider Foundation as part of the hosting provider's version, and it includes an API that lets an on-premises system call to a service provider system to gain a view of a particular running workload.

[ Microsoft previously upgraded System Center for the cloud era. See Microsoft System Center 2012 Focuses On Private Cloud. ]

"System Center can be the glue that brings it all together for the customer," said Mike Schutz, general manager of server and tools business marketing, in an interview. Microsoft has previously talked up some of the capabilities now included in Service Pack 1. "This release puts an exclamation point on them," he added.

System Center is a suite of eight management modules. Virtual Machine Manager for generating and deploying virtual machines is one; Configuration Manager for capturing the specifications of each server is another. Originally designed to manage Windows Server, it has been extended to also manage several versions of Linux, Oracle Solaris and HP/UX.

Windows Server 2012 included a new version of Hyper-V that could generate a virtualized network on top of a physical network. The capability is essential to using automated processes to provision and manage virtual machines, and it was Microsoft's own step toward creating what rival VMware refers to as the software-defined data center.

But Microsoft has waited until now to give System Center the ability to use Hyper-V virtual networking capabilities. With Service Pack 1, network provisioning allows a newly created virtual machine to be assigned a virtual subnet and virtual routing, defined in System Center policies. The capability brings more automation and flexibility to managing the virtual data center under Hyper-V.

Read more: Page 2: Microsoft Adds End-User Device Management, Website Monitoring


Tim Anderson (@timanderson) posted Making sense of Microsoft’s Cloud OS on 1/15/2013:

imagePeople have been talking about “the internet operating system” for years. The phrase may have been muttered in Netscape days in the nineties, when the browser was going to be the operating system; then in the 2000s it was the Google OS that people discussed. Most notably though, Tim O’Reilly reflected on the subject, for example here in 2010 (though as he notes, he had been using the phrase way earlier than that):

Ask yourself for a moment, what is the operating system of a Google or Bing search? What is the operating system of a mobile phone call? What is the operating system of maps and directions on your phone? What is the operating system of a tweet?

On a standalone computer, operating systems like Windows, Mac OS X, and Linux manage the machine’s resources, making it possible for applications to focus on the job they do for the user. But many of the activities that are most important to us today take place in a mysterious space between individual machines.

imageIt is still worth reading, as he teases out what OS components look like in the context of an internet operating system, and notes that there are now several (but only a few) competing internet operating systems, platforms which our smart mobile phones or tablets tap into and to some extent lock us in.

But what on earth (or in the heavens) is Microsoft’s “Cloud OS”? I first heard the term in the context of Server 2012, when it was in preview at the end of 2011. Microsoft seems to like how it sounds, because it is getting another push in the context of System Center 2012 Service Pack 1, just announced. In particular, Michael Park from Server and Tools has posted on the subject:

At the highest level, the Cloud OS does what a traditional operating system does – manage applications and hardware – but at the scope and scale of cloud computing. The foundations of the Cloud OS are Windows Server and Windows Azure, complemented by the full breadth of our technology solutions, such as SQL Server, System Center and Visual Studio. Together, these technologies provide one consistent platform for infrastructure, apps and data that can span your datacenter, service provider datacenters, and the Microsoft public cloud.

In one sense, the concept is similar to that discussed by O’Reilly, though in the context of enterprise computing, whereas O’Reilly looks at a bigger picture embracing our personal as well as business lives. Never forget though that this is marketing speak, and Microsoft consciously works to blur together the idealised principles behind cloud computing with its specific set of products: Windows Azure, Window Server, and especially System Center, its server and device management piece.

A nagging voice tells me there is something wrong with this picture. It is this: the cloud is meant to ease the administrative burden by making compute power an abstracted resource, managed by a third party far away in a datacenter in ways that we do not need to know. System Center on the other hand is a complex and not altogether consistent suite of products which imposes a substantial administrative burden on those who install and maintain it. If you have to manage your own cloud, do you get any cloud computing benefit?

The benefit is diluted; but there is plentiful evidence that many businesses are not yet ready or willing to hand over their computer infrastructure to a third-party. While System Center is in one sense the opposite of cloud computing, in another sense it counts because it has the potential to deliver cloud benefits to the rest of the business.

Further confusing matters, there are elements of public cloud in Microsoft’s offering, specifically Windows Azure and Windows Intune. Other bits of Microsoft’s cloud, like Office 365 and Outlook.com, do not count here because that is another department, see. Park does refer to them obliquely:

Running more than 200 cloud services for over 1 billion customers and 20+ million businesses around the world has taught us – and teaches us in real time – what it takes to architect, build and run applications and services at cloud scale.

We take all the learning from those services into the engines of the Cloud OS – our enterprise products and services – which customers and partners can then use to deliver cloud infrastructure and services of their own.

There you have it. The Cloud OS is “our enterprise products and services” which businesses can use to deliver their own cloud services.

What if you want to know in more detail what the Cloud OS is all about? Well, then you have to understand System Center, which is not something that can be explained in a few words. I did have a go at this, in a feature called Inside Microsoft’s private cloud – a glossary of terms, for which the link is currently giving a PHP error, but maybe it will work for you.

image

It will all soon be a little out of date, since System Center 2012 SP1 has significant new features. If you want a summary of what is actually new, I recommend this post by Mike Schutz on System Center 2012 SP1, and this post, also by Schutz, on Windows Intune and System Center Configuration Manager SP1.

My even shorter summary:

  • All System Center products now updated to run on, and manage, Server 2012
  • Upgraded Virtual Machine Manager supports up to 8000 VMs on clusters of up to 64 hosts
  • Management support for Hyper-V features introduced in Server 2012 including the virtual network switch
  • App Controller integrates with VMs offered by hosting service providers as well as those on Azure and in your own datacenter
  • App Controller can migrate VMs to Windows Azure (and maybe back); a nice feature
  • New Azure service called Global Service Monitor for monitoring web applications
  • Back up servers to Azure with Data Protection Manager

and on the device and client management side, new Intune and Configuration Manager features. It is confusing; Intune is a kind of cloud-based Configuration Manager but has features that are not included in the on-premise Configuration Manager, and vice versa. So:

  • Intune can now manage devices running Windows RT, Windows Phone 8, Android and iOS
  • Intune has a self-service portal for installing business apps
  • Configuration Manager integrates with Intune to get supposedly seamless support for additional devices
  • Configuration Manager adds support for Windows 8 and Server 2012
  • PowerShell control of Configuration Manager
  • Ability to manage Mac OS X, Linux and Unix servers in Configuration Manager

What do I think of System Center? On the plus side, all the pieces are in place to manage not only Microsoft servers but a diverse range of servers and a similarly diverse range of clients and devices, presuming the features work as advertised. That is a considerable achievement.

On the negative side, my impression is that Microsoft still has work to do. What would help would be more consistency between the Azure public cloud and the System Center private cloud; a reduction of the number of products in the System Center suite; a consistent user interface across the entire suite; and simplification along the lines of what has been done in the new Azure portal so that these products are easier and more enjoyable to use.

I would add that any business deploying System Center should be thinking carefully about what they still feel they need to manage on-premise, and what can be handed over to public cloud infrastructure, whether Azure or elsewhere. The ability to migrate VMs to Azure could be a key enabler in that respect.


<Return to section navigation list>

Cloud Security, Compliance and Governance

James Kaplan, Chris Rezek, and Kara Sprague asserted “IT and business executives need to apply a risk-management approach that balances economic value against risks” as a preface to their Protecting information in the cloud article of 1/9/2013 for the McKinsey Quarterly magazine. From the introduction:

    image_thumb2The use of highly scaled, shared, and automated IT platforms—known as cloud computing—is growing rapidly. Adopters are driven by the prospects of increasing agility and gaining access to more computing resources for less money. Large institutions are building and managing private-cloud environments internally (and, in some cases, procuring access to external public clouds) for basic infrastructure services, development platforms, and whole applications. Smaller businesses are primarily buying public-cloud offerings, as they generally lack the scale to set up their own clouds.

    In This Article
    • Exhibit 1: Comparing deployment models highlights options.
    • Exhibit 2: Data must be managed and protected.
    • Exhibit 3: A mixed-cloud strategy will strike the best balance of technology benefits and risk management.
    • Exhibit 4: A risk-management approach requires changes across several dimensions.

          imageAs attractive as cloud environments can be, they also come with new types of risks. Executives are asking whether external providers can protect sensitive data and also ensure compliance with regulations about where certain data can be stored and who can access the data. CIOs and CROs are also asking whether building private clouds creates a single point of vulnerability by aggregating many different types of sensitive data onto a single platform.

          Blanket refusals to make use of private- or public-cloud capabilities leave too much value on the table from savings and improved flexibility. Large institutions, which have many types of sensitive information to protect and many cloud solutions to choose from, must balance potential benefits against, for instance, risks of breaches of data confidentiality, identity and access integrity, and system availability.

          Register to continue.


          <Return to section navigation list>

          Cloud Computing Events

          Brian Prince reported upcoming IaaS in the Cloud Bootcamp[s] in the Midwest in a 1/14/2013 post:

          imageWhether you build apps or support the infrastructure that runs the apps, the cloud can be a really big place. For some, it’s a natural evolution for their application and infrastructure to embrace the power and scale of the cloud. For others, it’s a journey that has to begin with a single step.

          Windows Azure provides that first step with a scalable, flexible platform for deploying your applications your way. With our Infrastructure as a Service platform (IaaS) called Windows Azure Virtual Machines, you get the flexibility to choose between Windows and Linux with full control over the operating system configuration and installed software, matched with the portability of Hyper-V disk images. Windows Azure Virtual Machines provide the perfect environment for meeting all of your Infrastructure-as-a-Service needs.

          image_thumb75_thumb8To learn more about our Infrastructure as a Service platform, we invite all developers and IT Professionals to join local Microsoft cloud experts as they introduce you to the Microsoft Cloud Platform, dive deep into Windows Azure Virtual Machines, and help walk you through a hands-on demonstration of the power of IaaS on the Windows Azure platform.

          See below for the session requirements. I will be at many of these events, and I hope to see you there.

          Schedule and registration links:

          City (Registration Link)   Date        Start Time   End Time   Location
          Austin                     3/5/2013    8:30 AM      5:00 PM    Microsoft Austin Office
          Indianapolis               3/6/2013    8:30 AM      5:00 PM    Microsoft Indianapolis Office
          Cincinnati                 3/12/2013   8:30 AM      5:00 PM    Microsoft Cincinnati Office
          Houston                    3/13/2013   8:30 AM      5:00 PM    Microsoft Houston Office
          St. Louis                  3/19/2013   8:30 AM      5:00 PM    Microsoft Creve Coeur Office
          Chicago                    3/21/2013   8:30 AM      5:00 PM    Microsoft Chicago Office
          Nashville                  4/4/2013    8:30 AM      5:00 PM    Microsoft Franklin Office
          Edina                      4/9/2013    8:30 AM      5:00 PM    Microsoft Edina Office

          Session Requirements

          Be sure to bring a modern laptop that is capable of running the following to make the most of your time at the Bootcamp:

          • A modern operating system, including Windows 7, Windows 8, Linux and Mac OS X
          • A modern web browser, including IE 9, IE 10, Chrome, Firefox and Safari
          • A remote desktop client
            • Pre-installed on Windows 7 or Windows 8
            • Microsoft client available for Mac at http://www.microsoft.com/en-us/download/details.aspx?id=18140
            • Mac and Linux client available from 2X at http://www.2x.com/rdp-client/windows-linux-mac/downloadlinks/
          • The lab portion of this exercise will require you to connect to the Windows Azure Portal via a modern web browser where you will provision three separate virtual machines in the cloud and configure them each via a Remote Desktop client connection. The lab materials are all online, so no special software is required to install or use them.
          • If you want to work on other labs while you’re here, you might also want to install the various tools and frameworks that are part of the Windows Azure SDK. Check out various downloads here. Installers are available on that site for Windows, Mac and Linux platforms. Details on system requirements for those SDKs can be found by following that link.

          All participants registering for the event will get a FREE 90-day trial of the Windows Azure platform and services, including access to the Virtual Machines preview.

          All participants that successfully complete the lab and demonstrate their running application to the instructor will be put into a drawing for some amazing prizes!


          Craig Kitterman (@craigkitterman) posted Windows Azure Community News Roundup (Edition #50) to the Windows Azure blog on 1/11/2013:

Editor's note: This post comes from Mark Brown, Windows Azure Community Manager.

          Welcome to the newest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure.

          Here is what we pulled together this week based on your feedback:

          Articles, Videos and Blog Posts

          Cool Code Samples and Apps

          Upcoming Events and User Group Meetings

          North America

          Europe

          Rest of World/Virtual

Interesting Recent Windows Azure Discussions on Stack Overflow

If you have comments or suggestions for the weekly roundup, or would like to get more involved in a Windows Azure community, drop me a line on Twitter: @markjbrown.

          <Return to section navigation list>

          Other Cloud Computing Platforms and Services

• My First Look at the CozySwan UG007 Android 4.1 MiniPC Device review, updated 1/17/2013, indicates that the Chrome browser running on this device runs the LightSwitch HTML 5 Client Preview 2: OakLeaf Contoso Survey Application Demo on Office 365 SharePoint Site as expected, without the problems reported in Running the SurveyApplicationCS Demo Project under Android Jelly Bean 4.2 on a Google Nexus 7 Tablet:

          UG007 Specifications and Accessories

          According to Amazon, the specs for the CozySwan unit (edited for clarity) are as follows:

• Operating System: Google Android 4.1.1 Jelly Bean with Bluetooth
          • CPU: RK3066 1.6GHZ Dual ARM Cortex-A9 processor
          • GPU: Mali 400MP4 quad-core; supports 1080P video (1920 by 1080 pixels)
          • RAM: 1GB DDR3
          • Internal Memory: 8 GB Nand Flash
          • External Memory: Supports Micro-SD card, up to 32GB
          • Networking: WiFi 802.11b/g/n with internal antenna
          • Ports: 1 USB 2.0 host and 1 Micro USB host* and 1 Micro-SD card slot (see photo at right); 1 HDMI male under a removable cover (see photo above)
          • Power: 90-230V, 50/60Hz, 30 W input to wall wart [with UK (round pin) power plug*]; output: 5V/2A
• Video Decoding: MPEG-2 and MPEG-4; H.264; VC-1; DivX; Xvid; RM8/9/10; VP6
          • Video Formats: MKV, TS, TP, M2TS, RM/RMVB, BD-ISO, AVI, MPG, VOB, DAT, ASF, TRP, FLV
          • Audio Decoding: DTS, AC3, LPCM, FLAC, HE-AAC
          • Images: JPEG, PNG, BMP, GIF

* The instruction sheet says the Micro USB connector is for power. (See Startup Issues below.)

          The package I received contained the following items:

          1. The UG007 Mini PC device
2. A 5V/2A power supply with Euro-style round power pins, not US standard blades.
          3. A USB 2.0 male to Micro USB type A male power cable
          4. A six-inch female HDMI to male HDMI cable to connect to an HDTV HDMI input
          5. An 8-1/2 x 11 inch instruction leaflet printed on both sides and written in Chingrish.

          Note: There are many similar first-generation devices, such as the MK802, which use the RK3066 CPU, run Android 4.0 and don’t support v4.1 or Bluetooth. Make sure you purchase a second-generation device. …


          OakLeaf Systems Retail Survey HTML5 Client Autohosted by SharePoint Online 2013 and Windows Azure

          Logging into the LightSwitch HTML 5 Client Preview 2: OakLeaf Contoso Survey Application Demo on Office 365 SharePoint Site with the Chrome browser provides a different (and better) experience than that described in Running the SurveyApplicationCS Demo Project under Android Jelly Bean 4.2 on a Google Nexus 7 Tablet:

          • Navigating to the http://oakleafsystems210.sharepoint.com Office 365 Developer Edition site the first time without the Fiddler2 proxy specified opens a sign-in form to enter Office 365 credentials. This page doesn’t appear when using the Nexus 7. Subsequent logins require use of the Fiddler2 proxy, as with the Nexus 7.
          • The home page of the Developer Edition doesn’t require opening a menu page to start the Survey app, as described for the Nexus 7. Instead, the home page appears with a link to the Survey app as shown here:

[Screenshot: SurveyAppHome]

          Successive pages appear as expected:

[Screenshots: SurveyLandingPage and SurveyPhoto]


          • Arik Hesseldahl (@ahess247, pictured below at right) reported HP’s Head of Cloud Computing Zorawar “Biri” Singh Departs in a 1/17/2013 article for the All Things D blog:

Word has just leaked out of Hewlett-Packard that Zorawar “Biri” Singh, senior vice president and general manager for Cloud Services, is leaving the company.

Roger Levy, the group’s vice president for technology and customer relations, will replace him on an interim basis. I haven’t been able to find out if Singh is leaving for another job or if he’s just leaving.

The departure was confirmed by an HP spokesman moments ago, who sent the following statement:

“HP remains committed to our Converged Cloud portfolio. In particular, HP Cloud Services is critical to HP’s efforts to deliver superior public cloud infrastructure, services and solutions to our customers. Roger Levy, vice president, Technology and Customer Operations of HP Cloud Services, will serve as the interim leader for HP Cloud Services. The company thanks Zorawar ‘Biri’ Singh for his passion and commitment to drive our public cloud vision and wish him well.”

Singh had overseen HP’s global cloud computing footprint, including its infrastructure, platform services and ecosystem efforts.

          He joined HP from IBM, hired by former CEO Léo Apotheker to build a cloud service platform that could compete with Amazon Web Services. Apotheker announced the effort at a big HP event in San Francisco in March of 2011.

It’s one of the strategy elements initiated by Apotheker, who was CEO for only about 11 months, that HP has kept in place.


          • Jeff Barr (@jeffbarr) described Endpoint Renaming for Amazon RDS on 1/15/2013:

You can now change the name and endpoint of an existing Amazon RDS database Instance via the AWS Management Console, the Amazon RDS API, or the Amazon RDS Command Line toolkit. This feature is available in all AWS regions and for all of the database engines supported by Amazon RDS.

There are two main uses for this feature:

          1. Simplified Data Recovery - Amazon RDS gives you multiple options for data recovery including Point in Time Recovery, Read Replica Promotion, and Restore from DB Snapshot. Now that you have the power to change the name and endpoint of a newly created DB Instance, you can have it assume the identity of the original Instance, eliminating the need for you to update your application with the new endpoint.
          2. Simplified Architectural Evolution - As your RDS-powered applications grow in size, scope, and complexity, the role of individual Instances may evolve. You can now rename the instances to keep their names in sync with their new roles.

          Here's how you rename an RDS Database Instance from the AWS Management Console. First you enter the new name:

          Then you confirm the change:
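Jeff’s screenshots cover the console path; the same rename can also be scripted against the RDS API. Below is a minimal sketch of the data-recovery scenario described above, written against the modern boto3 SDK rather than the 2013-era RDS command-line toolkit; the region, instance and snapshot identifiers are placeholders.

```python
# Minimal sketch (boto3, not the 2013-era RDS CLI) of snapshot recovery plus
# rename, so the restored instance assumes the original endpoint.
# All identifiers below are placeholders.
import time

import boto3
from botocore.exceptions import ClientError

rds = boto3.client("rds", region_name="us-east-1")

ORIGINAL = "mydb"                      # the instance applications point at today
RESTORED = "mydb-restored"             # temporary name for the restored copy
SNAPSHOT = "mydb-snap-2013-01-15"      # snapshot to recover from

# 1. Restore the snapshot under a temporary identifier and wait until it is ready.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier=RESTORED,
    DBSnapshotIdentifier=SNAPSHOT,
)
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier=RESTORED)

# 2. Move the damaged original out of the way; its endpoint changes with its name.
rds.modify_db_instance(
    DBInstanceIdentifier=ORIGINAL,
    NewDBInstanceIdentifier=ORIGINAL + "-old",
    ApplyImmediately=True,
)

# 3. Wait until the original identifier is free, i.e. the step-2 rename finished.
while True:
    try:
        rds.describe_db_instances(DBInstanceIdentifier=ORIGINAL)
        time.sleep(30)   # the old instance still holds the name; keep waiting
    except ClientError as err:
        if err.response["Error"]["Code"] == "DBInstanceNotFound":
            break        # the name is free; safe to rename the restored copy
        raise

# 4. Give the restored copy the original identifier so it assumes the original
#    endpoint DNS name.
rds.modify_db_instance(
    DBInstanceIdentifier=RESTORED,
    NewDBInstanceIdentifier=ORIGINAL,
    ApplyImmediately=True,
)
```

The design point is the final call: because an RDS endpoint hostname is derived from the instance identifier, renaming the restored copy to the original identifier brings back the endpoint your application already uses, which is exactly the “no connection-string changes” benefit Jeff describes.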



          <Return to section navigation list>
