Friday, January 11, 2013

Windows Azure and Cloud Computing Posts for 1/8/2013+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

• Updated 1/11/2013 with new articles marked ••.
• Updated 1/10/2013 with new articles marked •.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, HDInsight and Media Services

Andy Cross (@AndyBareWeb) explained HDInsight: Workaround error Could not find or load main class on 1/9/2013:

Sometimes when running the C# SDK for HDInsight, you can come across the following error:

The system cannot find the batch label specified – jar
Error: Could not find or load main class c:\apps\dist\hadoop-1.1.0-SNAPSHOT\lib\hadoop-streaming.jar

To get around this, close the command shell that you are currently in and open up a new hadoop shell, and try your command again. It should work immediately.

This tends to occur after killing a hadoop job, so I am assuming that something this activity does changes the context of the command shell in such a way that it can no longer find the hadoop javascript files. I’ve yet to get to the bottom of it, so if anyone has any bright ideas, let me know in the comments.



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

•• Sandrino di Mattia (@sadrinodm) described how to build your own SQL Server Agent for Windows Azure SQL Database with the Scheduler in a 1/11/2013 post:

If you worked with Windows Azure SQL Database in the past, you’ll know that there is no support for SQL Server Agent jobs. According to the official guidelines you should use a SQL Server Agent which runs on-premises and connect it to your Windows Azure SQL Database. But this only works if you have the required infrastructure available to you on-premises (or you could host it in a VM).

Besides that, you also have the SQL Azure Agent project on CodePlex, which is the result of a series of blog posts on the SQL Azure blog (part 1, part 2 and part 3). This project is just a proof of concept but it’s a good base to go and build your own SQL Azure Agent. The downside to this is that you need to run it in a Web/Worker Role, which might be overkill in some cases.

Let’s look at how the (Mobile Services) Scheduler can be used to create an alternative to the SQL Server Agent. Before we get started, I advise you to check my previous post, which covers the basics of the Scheduler: Job scheduling in Windows Azure.

The database

Take the following scenario: you have a customer who would like to move an application to Windows Azure. It was pretty easy to move their web application to Windows Azure Web Sites. The migration of the database also worked out pretty well, and here is the result:

Now the only thing that didn’t work was migrating a SQL Server Agent job. The customer has a job which runs once a day and deletes records in the Logs table that are older than one month (not very original, I know). The job is actually very simple: it calls the sp_ClearOldLogs stored procedure. If your jobs contain lots of code, I suggest you move it to a stored procedure first. If all the logic we want to execute resides in a stored procedure, then the only thing we need is a way to schedule when this stored procedure should run.

Scheduling the stored procedure

At the moment the Scheduler is only available in Windows Azure Mobile Services (WAMS). Before we can start configuring the scheduler we’ll need to set up a new WAMS application. When you do so, make sure you choose the database which contains the stored procedure you want to execute:

After you created the WAMS application you’ll see a new login and user appear in the database. The scheduler will use this new user to execute the stored procedures:

Keep in mind that the new user won’t have the required permissions to execute the stored procedure. That’s why you’ll need to grant the EXECUTE permission first:

GRANT EXECUTE ON [dbo].[sp_ClearOldLogs] TO [GcQuMKtYVILoginUser]

GO

You can now open the WAMS application and go to the Scheduler tab. This is where you’ll be able to create a new job and choose the schedule. In free mode you are limited to 1 job for each WAMS application (but you can create 10 free WAMS applications which means you can create up to 10 free jobs).

Open the newly created job and go to the script tab. This is where you’ll be able to write code which will execute the stored procedure. Here is an example which does some logging and executes the stored procedure:

function Call_sp_ClearOldLogs() {
    console.log("Executing sp_ClearOldLogs...");
    mssql.query('EXEC dbo.sp_ClearOldLogs', {
        success: function(results) {
            console.log("Finished executing sp_ClearOldLogs.");
        }
    });
}

Even if you’re not familiar with this JavaScript syntax, that shouldn’t be a problem. You can write all your logic in a stored procedure and just create a small script like I did.
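As an aside, the options object passed to mssql.query can also take an error callback in addition to success. The following is only a sketch of how you might surface failures in the job log; the error handling is an illustrative addition and not part of the original script:

function Call_sp_ClearOldLogs() {
    console.log("Executing sp_ClearOldLogs...");
    mssql.query('EXEC dbo.sp_ClearOldLogs', {
        success: function(results) {
            console.log("Finished executing sp_ClearOldLogs.");
        },
        error: function(err) {
            // Write the failure to the log so it shows up under the Logs tab.
            // You could also send an email or SMS from here (see the Alerts
            // and Notifications section below).
            console.error("sp_ClearOldLogs failed: " + err);
        }
    });
}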

One last thing: after you save the script, make sure you also press the Enable button. If you don’t, the script will never run.

To test that everything works, I just pressed the Run button and looked at my Logs table. After a few seconds the stored procedure had executed and removed all records in the Logs table older than one month. And that’s all you need to run a job!

Alerts and Notifications

When you work with the SQL Server Agent you can configure alerts and notifications for your jobs, so let’s see what we can do about that.

If you look back at the script you’ll see that I call console.log, which will write to the log of your WAMS application. If you open the application in the portal you can view your logs under the Logs tab:

This is a great way to keep track of when your job was executed and whether there were any issues. If you’re more of a command-line person, you can also use the Windows Azure CLI to fetch the logs: azure mobile log

At the moment the scheduler script supports 3 modules: “azure”, “request” and “sendgrid”. But the request and sendgrid modules allow you to do virtually anything. You can use the request module to send SMS messages with Twilio (this is something you might want to do in case of an issue):

var httpRequest = require('request');
var account_sid = "<< account SID >>";
var auth_token = "<< auth token >>";

// Create the request body
var body = "From=" + from + "&To=" + to + "&Body=" + message;

// Make the HTTP request to Twilio
httpRequest.post({
    url: "https://" + account_sid + ":" + auth_token +
         "@api.twilio.com/2010-04-01/Accounts/" + account_sid + "/SMS/Messages.json",
    headers: { 'content-type': 'application/x-www-form-urlencoded' },
    body: body
}, function (err, resp, body) { console.log(body); });

And the sendgrid module allows you to send emails with SendGrid:

var sendgrid = new SendGrid('**username**', '**password**');

sendgrid.send({
    to: '**email-address**',
    from: '**from-address**',
    subject: 'Error while executing stored procedure',
    text: 'Error message here'
}, function(success, message) {
    // If the email failed to send, log it as an error so we can investigate
    if (!success) {
        console.error(message);
    }
});

And there you go. You can now hook up the WAMS Scheduler to your database, schedule the execution of stored procedures, follow up on these jobs through the logs, and even send out notifications. In most cases this should cover everything you need to replace the SQL Server Agent and make it easier to move your database to the cloud. And if you find that this doesn’t cover all your requirements, you can always move to a full-blown Worker Role solution afterwards.


•• Jim O’Neil (@jimoneil) continued his video series with Practical Azure #8: SQL Data Sync on 1/10/2013:

Cabaret notwithstanding, it’s data that makes the world go ‘round, and one of the incredibly awesome capabilities you have when using SQL Server and SQL Database in Windows Azure is the ability to move rather seamlessly between on-premises assets and the cloud.

In previous episodes, I covered the concepts of SQL Database and SQL Database Federations, and now in my latest segment of Practical Azure (on MSDN DevRadio), I delve into how to synchronize that data across multiple on-premises and cloud-hosted databases.



Nick Harris (@cloudnick) wrote a brief update on 1/10/2013 about additional resources for Windows Azure Mobile Services that his team had just released, including:

  • Added a new Code Samples page to WindowsAzure.com
  • Updated WindowsAzure.com Tutorials and Resources page to include new tutorials and related videos from the new Windows Azure Mobile Services channel 9 series
  • Five additional Windows Store + Mobile Services scenario-based code samples – details and links follow:


Geolocation sample end to end using Windows Azure Mobile Services (New)

This sample provides an end to end location scenario with a Windows Store app using Bing Maps and a Windows Azure Mobile Services backend. It shows how to add places to the Map, store place coordinates in a Mobile Services table, and how to query for places near your location.


Enqueue and Dequeue messages with Windows Azure Mobile Services and Service Bus (New)

My Store - This sample demonstrates how you can enqueue and dequeue messages from your Windows Store apps into a Windows Azure Service Bus Queue via Windows Azure Mobile Services. This code sample builds out an ordering scenario with both a Sales app and a Storeroom app.


Capture, Store and Email app Feedback using Windows Azure Mobile Services (New)

This sample shows how you can implement a Feedback charm option in your Windows Store application and submit the feedback to be both stored in Windows Azure Mobile Services and emailed directly to you.


Upload File to Windows Azure Blob Storage using Windows Azure Mobile Services (New)

This demonstrates how to store your files such as images, videos, docs or any binary data off device in the cloud using Windows Azure Blob Storage. In this example we focus on capturing and uploading images; with the same approach you can upload any binary data to Blob Storage.


Create a Game Leaderboard using Windows Azure Mobile Services (New)

The My Trivia sample demonstrates how you can easily add, update and view a leaderboard from your Windows Store applications using Windows Azure Mobile Services.

If you have just returned from vacation and have not yet had the opportunity to check out the Windows Azure Mobile Services below, I would encourage you to [investigate the linked articles, which] detail a wealth of up-to-date content made available to help you both get started and to use at your local events.


• James Conard (@jamescon, pictured below) described New Windows Azure Mobile Services Getting Started Content in a 1/9/2013 post to the Windows Azure blog:

Editor's Note: This post was written by Nick Harris, Windows Azure Technical Evangelist.

It’s been less than five months since we introduced the first public preview for Windows Azure Mobile Services and in this short time we have seen continual additions to the Mobile Service offering including:

  • SDKs for Windows Store, Windows Phone 8 and iOS apps
  • Auth using Microsoft Account, Facebook, Google and Twitter
  • Push Notification support via WNS, MPNS and APNS
  • Structured storage
  • Scheduler to execute tasks on a schedule, e.g. aggregating feeds, sending notifications, crunching data
  • Deployment in North Europe, East and West US datacenters

This post details a wealth of new content we have recently released, designed to help you get started with Windows Azure Mobile Services, including videos, code samples and tutorials.

New Videos

We recently launched a new Windows Azure Mobile Services series to help people get started with Mobile Services. Through this series you will learn how Mobile Services can:

  • Provide turnkey backend solutions that connect your mobile apps to the cloud within minutes
  • Store data off device and read it back into your apps
  • Add custom business logic utilizing server scripts
  • Implement user authentication in your apps using popular social identity providers such as Microsoft Account, Facebook, Twitter and Google.
  • Implement Push Notifications in your apps to keep your users up to date with your latest app content
  • Execute tasks on a schedule, e.g. aggregating feeds, sending notifications, crunching data, etc.
  • Accelerate your mobile app development for Windows Store, Windows Phone 8 and iOS

Here are some quick links to the current videos within the series:

This post continues with descriptions of the scenario-based code samples mentioned in the preceding post.


Chris Klug (@ZeroKoll) explained Fileuploads through Windows Azure Mobile Services - take 2 in a 1/8/2013 post:

So a couple of weeks ago I posted this blog post on how to upload files to blob storage through Mobile Services. In it, I described how one could do a Base64-encoded string upload of the file, and then let the mobile service endpoint convert it and send it to blob storage.

The upsides to this are that the client doesn’t have to know anything about where the files are actually stored, and it doesn’t need to have blob-storage-specific code. Instead, it can go on happily knowing nothing about Azure except Mobile Services. It also means that you don’t have to distribute the access keys to your storage together with the application.

I did however mention that there was another way, using shared access signatures (SAS). Unfortunately, these have to be generated by some form of service that has knowledge of the storage keys. Something like an Azure compute instance. However, paying for a compute instance just to generate SASes (plural of SAS…?) seems unnecessary, which is why I opted to go with the other solution.

However, Ryan CrawCour, a dear friend of mine, just had to say that he wasn’t convinced, which has now been nagging me for a while. So to solve that, I have devised another way to use SAS while using only Mobile Services. And even though he is likely to have some opinion about this as well, it at least made the nagging feeling go away for a while.

DISCLAIMER: This is somewhat of a hack. I assume that there will be better ways to do this in the future, but for now it works even if it might not be my finest solution to date. My biggest issue with it is a part of the JavaScript that I will point out later, but it works. But don’t blame me when it causes Azure to explode and tear down the internet when you use it…

Ok, let’s go! Like everything else in the current version of Mobile Services, we need a table to create an endpoint to play with. In this case, I have created a table called “sas”. The table itself will not be used; it is only there to enable me to execute my server-side scripts… Because of this, I have restricted access to everything but “read”, as that is the only thing that will be used…

The next part is to create an entity to be used to send and receive data from the service. I called it SAS and it looks like this

[DataTable(Name = "sas")]
public class SAS
{
    public int Id { get; set; }

    [DataMember(Name = "container")]
    public string Container { get; set; }

    [DataMember(Name = "filename")]
    public string FileName { get; set; }

    [DataMember(Name = "url")]
    public string Url { get; set; }
}

As you can see, it includes a Container and a FileName property as well as the mandatory Id property. These properties will be used to pass the required information to the endpoint. The Url property will be used for returning the signed Url.

(You could get away with removing the SAS entity and writing an OData query instead, but I prefer LINQ…)

Using it looks like this

var sas = (await App.MobileService.GetTable<SAS>().Where(x => x.FileName == "myfile.txt" && x.Container == "mycontainer").ToEnumerableAsync()).Single();
var url = sas.Url;

The real functionality is obviously at the other end, at the server. Here, I have created a “read script” for the table. This script will take the query and use the information in it to create a signed url.

The read() method looks like this

function read(query, user, request) {
    var filename = query._parsed.filter.left.right.value;
    var containername = query._parsed.filter.right.right.value;

    var url = getSignedBlobUrl(60, "teched", containername, filename);
    console.log('Created new SAS: ' + url);

    request.respond(statusCodes.OK, [{ url: url }]);
}

As you can see, it populates the filename and containername variables using the query’s _parsed member. I know that JavaScript members starting with an underscore are supposed to be private, and using the _parsed member is really not a good practice, but it was the only way I could find to easily get hold of the data sent to the server. There might be better ways to solve this, and I will look into it, but for now, this works…
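If you want to guard against that fragility a little, one option is to wrap the _parsed access in a try/catch and fail the request cleanly when the incoming query doesn’t have the expected shape. This is only a sketch layered on top of the approach above (it assumes statusCodes.BAD_REQUEST is available alongside statusCodes.OK, and it still relies on the same undocumented member):

function read(query, user, request) {
    var filename, containername;
    try {
        // Same undocumented _parsed access as above; if the client sends a
        // query with a different filter shape, this will throw.
        filename = query._parsed.filter.left.right.value;
        containername = query._parsed.filter.right.right.value;
    } catch (e) {
        request.respond(statusCodes.BAD_REQUEST,
            { error: 'Expected a filter on filename and container.' });
        return;
    }

    var url = getSignedBlobUrl(60, "teched", containername, filename);
    request.respond(statusCodes.OK, [{ url: url }]);
}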

Next it uses a method called getSignedBlobUrl(), which I will talk about in just a minute. Once the signed url has been generated, it is returned to the client using request.respond() instead of actually executing the query.

Ok, so what does getSignedBlobUrl() do? Well, it just creates a well-formed signed url to the specified blob. Like this

function getSignedBlobUrl(expiryTimeout, accountName, containerName, blobName)
{
    var start = new Date();
    var end = new Date(start.getTime() + (1000 * 60 * expiryTimeout));

    var signature = generateSignature(start, end, accountName, containerName, blobName);
    var queryString = "?st=" + encodeURIComponent(start.toIsoString()) + "&se=" +
                      encodeURIComponent(end.toIsoString()) + "&sr=b&sp=w&sig=" +
                      encodeURIComponent(signature);
    return "http://" + accountName + ".blob.core.windows.net/" + containerName + "/" + blobName + queryString;
}

First it creates a timespan, within which the signature is valid, by using 2 Date objects. Azure limits this to 60 minutes or something, but that should be more than enough.

As you can see, it uses a method called generateSignature() to generate the actual signature. This method produces an HMAC-SHA256 signature using the blob storage key and a predefined string representation of the parameters used in the querystring that is passed to blob storage.

The actual Uri is then created by combining the path to the blob and a very funky querystring. The querystring includes a bunch of parameters such as the start and end time for the access, what type of access (blob or container) it should have, what access rights it needs (read or write), and finally it includes the newly generated signature.

The signature generation looks like this

var crypto = require('crypto')
var key = new Buffer('XXXXX', 'base64')

function generateSignature(startTime, endTime, account, container, blobName) {
    var stringToSign = "w\n" +
        startTime.toIsoString() + "\n" +
        endTime.toIsoString() + "\n/" +
        account + "/" + container + "/" + blobName + "\n"
    var hash = crypto.createHmac('sha256', key).update(stringToSign).digest('base64')
    return hash;
}

It isn’t very complicated. It concatenates a string using a predefined format and then uses the crypto package to create the signature.

The “w” at the start of the stringToSign string defines the access, in this case write access; then come the start time and end time of the SAS in the correct format, and finally the path to the blob to access.

Ok, that’s about it! The only thing that the very focused people will have noticed is that JavaScript does not include a toIsoString() method on the Date object. That is a separate method I have declared on the Date object’s prototype as follows

Date.prototype.toIsoString = function() {
    var d = this;
    function p(i) { return ("0" + i).slice(-2); }
    return "yyyy-MM-ddTHH:mm:ssZ"
        .replace("yyyy", d.getFullYear())
        .replace("MM", p(d.getUTCMonth() + 1))
        .replace("dd", p(d.getUTCDate()))
        .replace("HH", p(d.getUTCHours()))
        .replace("mm", p(d.getUTCMinutes()))
        .replace("ss", p(d.getUTCSeconds()));
};

It is just a helper to get the date string in a format that works for the call…

Ok, that’s it! For real this time!

Except for the somewhat annoying use of the _parsed member in the JavaScript and the slightly odd way to execute the query on the client, it is actually quite a neat solution. Being able to generate SAS urls without a compute instance is actually quite useful in some cases. And even though I prefer uploading files the other way, this could still be really useful. And cheaper… Incoming data is free in Azure, so uploading the file is free either way, but if your storage is not in the same datacenter as the Mobile Service instance, then doing it the other way would incur charges when passing the file from the Mobile Service to the blob storage. Something that this solution does not.

Well, I guess it is better that I end this post before I get into talking about all the pros and cons of the 2 different solutions. They both do the job, so it is up to you to decide…

And no…there is no code for download this time. I have already shown it all, and it wasn’t that much…


Philip Fu posted [Sample Of Jan 8th] How to view SQL Azure Report Services to the Microsoft All-In-One Code Framework blog on 1/8/2013:

Sample Download:

CS Version: http://code.msdn.microsoft.com/CSAzureSQLReportingServices-e3ffff52

VB Version: http://code.msdn.microsoft.com/AzureSQLReportingServices-432d3fc9

The sample code demonstrates how to access SQL Azure Reporting Service.



You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


Brian Hitney described Calling Stored Procedures from Windows Azure Mobile Services in a 1/7/2013 post:

I was surprised, yet delighted, that Windows Azure Mobile Services uses a SQL database. Schema-less table storage has its place and is the right solution at times, but for most data-driven applications, I’d argue otherwise.

In my last post, I wrote about sending notifications by writing the payload explicitly from a Windows Azure Mobile Service. In short, this allows us to include multiple tiles in the payload, accommodating users of both wide and square tiles.

In my application, I want to execute a query to find push notification channels that match some criteria. If we look at the Windows Azure Mobile Services script reference, the mssql object allows us to query the database using T-SQL and parameters, such as:

mssql.query('select top 1 * from statusupdates', {
    success: function(results) {
        console.log(results);
    }
});

In my case, the query is a bit more complicated. I want to join another table and use a function to do some geospatial calculations – while I could do this with inline SQL like in the above example, it’s not very maintainable or testable. Fortunately, calling a stored procedure is quite easy.

Consider the following example: every time the user logs in, the Channel URI is updated. What I’d like to do is find out how many new locations (called PointsOfInterest) have been modified since the last time the user has logged in. To do that, I have a stored procedure like so:

create procedure [darkskies].[NewLocationsForChannel] 
(
    @channelUri as nvarchar(512) = null
)
as

select c.ChannelUri, count(1) as NumNewLocations
from darkskies.Channel c
inner join darkskies.PointOfInterest p 
on c.UserId = p.UserId 
where p.LastUpdated > c.LastUpdated
and c.ChannelUri = @channelUri
group by c.ChannelUri

Writing something like that inline to the mssql object would be painful. As a stored procedure, it’s much easier to test and encapsulate. In my WAMS script, I’ll call that procedure and send down a badge update:

function updateBadge(channelUri) 
{                  
       var params = [channelUri];
       var sql = "exec darkskies.NewLocationsForChannel ?";
       mssql.query(sql, params,
       {
            success: function(results) {
                if (results.length > 0) {
                    for (var i=0; i< results.length; i++)
                    {
                           if (results[i].ChannelUri !== null && 
                                results[i].ChannelUri.length > 0)
                           {                                                      
                                push.wns.sendBadge(results[i].ChannelUri, 
                                    results[i].NumNewLocations);
                           }             
                    }               
              }
        }
    });
} 

This section of code only updates the badge of the Windows 8 Live Tile, but it works out nicely with tile queuing:


Note: this app is live in the Windows 8 Store; however, at the time of this writing, these features have not yet been released. In the next few posts, we’ll look at the notifications a bit more, including how to pull off some geospatial stuff in WAMS.


Brian Hitney described Best Practice for Sending Windows 8 Tiles from Mobile Services in another 1/7/2013 post:

Those that know me know I am not a fan of JavaScript in pretty much all of its forms (including node.js); however, I’m really digging Windows Azure Mobile Services (WAMS). WAMS allows you to easily provide a back end to applications for storing data, authenticating users, and supporting notifications on not just Windows and Windows Phone, but also iOS, with plans to support Android soon.

Now, I mention JavaScript because WAMS provides a slick node-powered data service that makes it really easy to store data in the cloud. The ToDoList example exercise illustrates the ease of storing user data in the cloud and hooking it up with authentication and notification support. The nice thing about the authentication is that it’s easily integrated into the backend:


But, more on this later. Right now, I want to deal with notifications in WAMS. In WAMS, you have the opportunity to write custom server-side JavaScript to do things like send notifications on insert/update/delete/read access:


In my case, I want to send a tile update if the new data meets some criteria. Let’s start all the way down in the code and work our way out, starting with the notification piece. One page you MUST have bookmarked is the tile template catalog on MSDN. This page defines the XML syntax for all the possible templates your tile can use, including both small/square tiles and large/wide tiles. All of these have a defined schema, such as this for TileSquarePeekImageAndText04:

<tile>
  <visual>
    <binding template="TileSquarePeekImageAndText04">
      <image id="1" src="image1" alt="alt text"/>
      <text id="1">Text Field 1</text>
    </binding>  
  </visual>
</tile>

TileSquarePeekImageAndText04 example

Which produces a tile that “peeks”, such as this (which flips between the top half and bottom half):

Yes, it’s easy to laugh at the magic “04” in the template title. I like to joke that my personal favorite is TileWideSmallImageAndText03. But the variety is crucial to creating the ideal app experience, and that depends on how you want to display the data -- and that requires knowing the XML template.

Now, in WAMS, there’s a great tutorial on sending some basic notifications. In that walkthrough, a notification is sent via the server-side javascript like so:

push.wns.sendTileSquareText02(“https://bn1.notify.windows.com?[snip]”, { text1: “some text”, text2: “more text”});

Now, at first glance, this is very nice because WAMS will write the XML for you. However, you still must know what data the template requires. Does it need an image? One text line? Two? You get the point. Unsurprisingly, calling that method will generate XML like:

<tile>
  <visual>
    <binding template="TileSquareText02">
      <text id="1">some text</text>
      <text id="2">more text</text>
    </binding>  
  </visual>
</tile>

You can learn more about this in the WAMS script reference. Another must-have bookmark. However, I recommend you don’t use these at all, and instead write the XML payload directly. This is for a few reasons, but primarily, it’s for control – and, really, you have to know the fields required anyway and you’ll still have the tile catalog page open for reference.

In looking at the mpns (Microsoft Push Notification Service) library a bit closer (awesome job by the guys, by the way) up on git, it has this method:

var raw = new mpns.rawNotification('My Raw Payload', options);

When developing my app, I realized I had no idea what tile size the user has. Some may opt to use a wide tile, others a small tile. I needed different tiles to support both. I didn’t like sending two notifications (seems wasteful, doesn’t it?) and to do this efficiently, it’s easier to just create the payload explicitly that includes all tiles. For example, this includes two completely different tiles:

var payload = "<tile><visual><binding template='TileWideImageAndText02'>" +
      "<image id='1' src='" + xmlEscape(bigImage) + "' alt='map'/>" +
      "<text id='1'>" + text1 + "</text>" +
      "<text id='2'>" + text2 + "</text>" +
      "</binding>" +
      "<binding template='TileSquareImage' branding='none'>" +
      "<image id='1' src='" + xmlEscape(smallImage) + 
      "' alt='map'/></binding></visual></tile>";

push.wns.send(channelUri, payload, 'wns/tile', 
{
        client_id: 'ms-app://<snip>',
        client_secret: 'i will never tell',
        headers: { 'X-WNS-Tag' : 'MainTile' }   
}...

Sure, it doesn’t look as clean and (gasp!) we have to do string concatenation. But, it’s only a couple of minutes more work and just more flexible. Like I said: either way, you need to know the template. In my case, I’m sending both notifications in one payload. The first is TileWideImageAndText02, which produces a nice image with the text on the bottom describing the image. If the user has a small tile, it will use TileSquareImage, which basically just forgoes the text and just displays the image. After trying a few, I settled on this combination as the best user experience. This is an easy way, with minimal effort, to support both wide and narrow tiles.

As an aside, I recommend setting the tag (X-WNS-Tag) header, particularly if your app cycles tiles and you want to replace a specific tile. Also, it’s a good idea to XML-escape all data, which I’m doing with the long image URLs … and this, I believe, is taken right from the mpns library:

var xmlEscape = function (text) {
    return text.replace(/&/g, '&amp;')
       .replace(/</g, '&lt;')
       .replace(/>/g, '&gt;')
       .replace(/"/g, '&quot;');    
}

If you don’t escape the data and have some illegal chars in there as a result, the notification gets sent correctly (that is, accepted), but gets ignored by the client.

Now that I’ve got the basic code to send a tile, I needed to filter some data and run a query and sort users by distance. Sound like fun? I’ll write about that next…



<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

osroca (@bigmlcom) asserted Azure DataMarket + BigML = Powerful Insights in a 1/9/2013 post to the BigML machine learning made easy blog:

BigML has made the process of creating a predictive model from a dataset one-click easy. For example, a store owner can use her sales data to predict the optimal inventory levels given the time of year. But what if you have an idea for a model, for example predicting the unemployment rate for London Boroughs based on demographic information, but you don’t have the data? While it is trivial to build this model with BigML once you have data, where do you get it and how do you know if the data is from a reliable source?

This is where data markets shine, providing an easy to search repository of curated datasets that can be combined with your own data to build models with more meaningful insights. A prominent example of a data market is the Windows Azure Marketplace DataMarket. It offers a wide spectrum of public and commercial datasets that are exposed via an OData API. While BigML can already import datasets from the Azure DataMarket via the OData API using our remote sources or directly from an Azure blob, we wanted to make it even easier. With the new BigML Data Marketplaces widget, you can now directly browse datasets from Azure DataMarket and import them into BigML with just one click.

The BigML Data Marketplaces widget can be enabled in the “Data Marketplaces” section of your account settings. You will need to grant BigML access to your dataset subscriptions by entering your Account Key and Customer ID from your Azure Marketplace Account.

bigml_data_marketplaces

Once enabled, you will see a new icon in the sources tab of your dashboard which can be used to activate the Azure DataMarket browser.

bigml_sources

This will bring up a list of Azure DataMarket datasets. Clicking on a dataset will reveal a full description and a link that can be used to subscribe to the dataset, if you were not already subscribed.

bigml_azure_browser_london_borough

Once you find a dataset that you want to analyze using BigML, you can select one of its Entity Sets and how many rows you want to use to create a new BigML Source.

bigml_azure_entity_set

So once you create a new BigML Source:

bigml_london_borough_source

your BigML Dataset and predictive model are just a few clicks away.

bigml_london_borough_dataset

bigml_london_borough_model

The new BigML Data Marketplace widget makes it easier than ever to make insightful models by providing easy access to the well structured and rich selection of datasets from the Azure DataMarket. This shows how fabulous combining cloud-based applications is becoming: without installing and configuring any software, downloading or uploading any data and just with a browser you can analyze hundreds of datasets to derive many powerful insights.

We are very thankful to the folks at Azure DataMarket and especially to Rene Bouw for his help and support during the integration process.

No significant articles today


<Return to section navigation list>

Windows Azure Service Bus, Caching, Access Control, Active Directory, Identity and Workflow

•• Clemens Vasters (@clemensv) produced Agile Waterfalls, Backlogs, Cutlines, Shiproom. Talking with @AbhishekRLal about how we build Service Bus on 1/10/2013:

Yesterday I sat down with my teammate Abhishek and talked about how we build Service Bus - not how we code it, but how we run the process inside the team and how we get from features sitting on the humongous backlog to working features in the service.

We also talk about the three different disciplines Program Management, Development, and Test/QA and how the checks and balances between the disciplines helps with getting things out on schedule and at great quality.


•• Clemens Vasters (@clemensv) also produced Negotiate, Promise, Do. Transactions. on 1/10/2013:

Over the holidays, the topic of transactions flared up on Twitter amongst a number of distributed systems .NET luminaries and it turned out that there isn't always clear agreement even about the basic notions around transaction technology as the overall technology stack has evolved and there are now databases that sit entirely in memory, for instance. Can those databases participate in a distributed transaction even if they're not "durable"?

What are the challenges around making two or more things work together? Do I even care?

To start that discussion, this episode is an introduction to what transactions are and what they're for and I am explaining the "traditional" transaction properties using a few low-tech non-code examples and a little role play with the help of my teammates Will Perry (@willpe) and Abhishek Lal (@AbhishekRLal)

For some good in-depth information on the subject of transaction isolation, you might find the respective Wikipedia article useful, as it comes with a range of examples.


•• Philip Fu posted [Sample Of Jan 10th] Call web services through service bus by Windows Phone Application to the Microsoft All-In-One Code Framework on 1/10/2013:

Sample Download:

CS Version: http://code.msdn.microsoft.com/CSAzureServiceBusWithWindow-13e510e2

VB Version: http://code.msdn.microsoft.com/VBAzureServiceBusWithWindow-419b7197

The sample code demonstrates how to expose an on-premises REST service to the Internet via Service Bus, so that you can access the service from a Windows Phone application.

You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


Francois Lascelles (@flascelles) posted Give me a JWT, I’ll give you an Access Token on 1/4/2013:

One of the common misconceptions about OAuth is that it provides identity federation by itself. Although supporting OAuth with federated identities is a valid pattern and is essential to many API providers, it does require the combination of OAuth with an additional federated authentication mechanism. Note that I’m not talking about leveraging OAuth for federation (that’s OpenID Connect), but rather, an OAuth handshake in which the OAuth Authorization Server (AS) federates the authentication of the user.

There are different ways to federate the authentication of an end user as part of an OAuth handshake. One approach is to simply incorporate it as part of the authorization server’s interaction with the end user (handshake within handshake). This is only possible with grant types where the user is redirected to the authorization server in the first place, such as implicit or autz code. In that case, the user is redirected from the app, to the authorization server, to the idp, back to the authorization server and finally back to the application. The federated authentication is transparent to the client application participating in the OAuth handshake. The OAuth spec (which describes the interaction between the client application and the OAuth Authorization Server) does not get involved.

illustration1

Another approach is for the client application to request the access token using an existing proof of authentication in the form of a signed claim (handshake after handshake). In this type of OAuth handshake, the redirection of the user (if any) is outside the scope of the OAuth handshake and is driven by the application. However, the exchange of the existing claim for an OAuth access token is the subject of a number of extension grant types.

One such extension grant type is defined in the SAML 2.0 Bearer Assertion Profiles for OAuth 2.0 specification, according to which a client application presents a SAML assertion to the OAuth authorization server in exchange for an OAuth access token. The Layer 7 OAuth Toolkit has implemented and provided samples for this extension grant type since its inception.

illustration2

Because of the prevalence of SAML in many environments and its support by many identity providers, this grant type has the potential to be leveraged in lots of ways in the Enterprise and across partners. There is, however, an emerging alternative to bloated, verbose SAML assertions, one that is more ‘API-friendly’ and based on JSON: JSON Web Token (JWT). JWT allows the representation of claims in a compact JSON format and the signing of such claims using JWS. For example, OpenID Connect’s ID Tokens are based on the JWT standard. The same way that a SAML assertion can be exchanged for an access token, a JWT can also be exchanged for an access token. The details of such a handshake are defined in another extension grant type, JSON Web Token (JWT) Bearer Token Profiles for OAuth 2.0.
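To make the shape of that exchange more concrete, here is a rough sketch of what a client’s token request could look like under the JWT bearer profile, written with the same Node request module that appears elsewhere in this compendium. The token endpoint URL is a hypothetical placeholder, and the grant_type URN and assertion parameter follow the draft profile, so check the spec and your authorization server’s documentation before relying on them:

var httpRequest = require('request');

// Hypothetical values -- substitute your authorization server's token
// endpoint and a JWT issued by your identity provider.
var tokenEndpoint = 'https://authzserver.example.com/oauth/v2/token';
var jwtAssertion = '<signed JWT goes here>';

httpRequest.post({
    url: tokenEndpoint,
    headers: { 'content-type': 'application/x-www-form-urlencoded' },
    body: 'grant_type=' +
          encodeURIComponent('urn:ietf:params:oauth:grant-type:jwt-bearer') +
          '&assertion=' + encodeURIComponent(jwtAssertion)
}, function (err, resp, body) {
    // On success, the response body contains the access token as JSON.
    console.log(body);
});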

Give me a JWT, I’ll give you an access token. Although I expect templates for this extension grant type to be featured as part of an upcoming revision of the OAuth Toolkit, the recent addition of JWT and JSON primitives enables me to extend the current OAuth authorization server template to support JWT Bearer Grants with the Layer 7 Gateway today.

The first thing I need for this exercise is to simulate an application getting a JWT claim issued on behalf of a user. For this, I create a simple endpoint on the Gateway that authenticates a user and issues a JWT returned as part of the response.

idppolicy

Pointing my browser to this endpoint produces the following output:

idoutput

Then, I extend the Authorization Server token endpoint policy to accept and support the JWT bearer grant type. The similarities between the SAML bearer and the JWT bearer grant types are most obvious in this step. I was able to copy the policy branch and substitute the SAML and XPath policy constructs for JWT and JSON path ones instead. I can also base trust on HMAC-type signatures that involve a shared secret instead of PKI-based signature validation if desired.

newAS

I can test this new grant type using a REST client calling the OAuth Authorization Server’s token endpoint. I inject into this request the JWT issued by the JWT issuer endpoint and specify the correct grant type.

illustration5

I can now authorize an API call based on this new access token as I would any other access token. The original JWT claim is saved as part of the OAuth session and is available throughout the lifespan of this access token. This JWT can later be consulted at runtime when API calls are authorized inside the API runtime policy.



<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Craig Kitterman sponsored a Deploying OpenLogic CentOS images on Windows Azure Virtual Machines post by OpenLogic’s Eric Weidner, pictured below, on 1/11/2013:

Editor’s Note: Today’s post is brought to us by Eric Weidner, OpenLogic Co-founder and Director of Engineering, describing how the company provides support and services for CentOS customers, including details on how to get OpenLogic CentOS images running on Windows Azure Virtual Machines.

OpenLogic provides services and support for over 700 different open source packages, including commercial-grade support for CentOS, an enterprise-class Linux distribution derived from the publicly-available source code for Red Hat Enterprise Linux. Our goal with supporting CentOS is to enable Enterprises to take advantage of a fully open alternative to the popular enterprise Linux distributions that customers already know and use.

Since April or so of last year, we have had a close working relationship with the Windows Azure team, with the goal of making Open Logic CentOS images available in the image gallery of the Windows Azure Preview Management Portal. Our counterparts, like Henry Jerez, have been very focused on delivering a great solution for our mutual customers, working with us to meet a series of goals in order to make our “go live” dates.

What’s great about CentOS and Windows Azure is that there’s really very little required to get the OpenLogic images running. For users, it’s as easy as picking the Open Logic CentOS image in the Windows Azure portal, answering a few questions for the basic setup, and then a CentOS server can be launched in about five minutes. There are also tools available to give developers the ability to automate interactions with the platform.

Customers running OpenLogic CentOS images on Windows Azure can expect to have a vast, selectable and truly predictable deployment process in place. Additionally, they get servers running a distribution they are already familiar with using in their traditional data centers.

To illustrate just how easy it is, below is the step-by-step detail for how to create a custom virtual machine running an OpenLogic CentOS image using the Windows Azure Management Portal:

  1. Sign in to the Windows Azure Management Portal. On the command bar, click New.
  2. The VM OS Selection dialog box opens. You can now select an image from the Image Gallery.
  3. Click Platform Images, select the OpenLogic CentOS 6.2 image, and then click the arrow to continue. The VM Configuration dialog box appears.
  4. In Virtual Machine Name, type the name that you want to use for the virtual machine. The name must be 15 characters or less. For this virtual machine, type MyTestVM1.
  5. In New User Name, type the name of the account that you will use to administer the virtual machine. You cannot use root for the user name. For this virtual machine, type NewUser1.
  6. In New Password, type the password that is used for the user account on the virtual machine. For this virtual machine, type MyPassword1. In Confirm Password, retype the password that you previously entered.
  7. In Size, select the size that you want to use for the virtual machine. The size that you choose depends on the number of cores that are needed for your application. For this virtual machine, accept the default of Extra Small.
  8. Click the arrow to continue.
  9. You can connect virtual machines together under a cloud service to provide robust applications, but for this tutorial, you only create a single virtual machine. To do this, select Standalone Virtual Machine.
  10. A virtual machine that you create is contained in a cloud service. In DNS Name, type a name for the cloud service that is created for the virtual machine. The entry can contain from 3-24 lowercase letters and numbers. This value becomes part of the URI that is used to contact the cloud service that the machine belongs to. For this virtual machine, type MyService1.
  11. You can select a storage account where the VHD file is stored. For this tutorial, accept the default setting of Use Automatically Generated Storage Account.
  12. In Region/Affinity Group/Virtual Network, select West US for the location of the virtual machine.
  13. Click the arrow to continue.
  14. The options on this page are only used if you are connecting this virtual machine to other machines or if you are adding the machine to a virtual network. For this virtual machine, you are not creating an availability set or connecting to a virtual network. Click the check mark to create the virtual machine.

    The virtual machine is created and operating system settings are configured. When the virtual machine is created, you will see the new virtual machine listed as Running in the Windows Azure Management Portal.

Easy as that! As I said previously, about a five minute process in total.

It’s great to see the commitment to open source projects by Microsoft, and the moves they’ve been making to open up Windows Azure to Linux. Not only has Microsoft open sourced the drivers, including contributing them to the upstream kernel projects, to allow people to run Linux on their hypervisors and platforms, but they’ve also created open source tools for developers to use to interact with the platform. You can also find the source code and instructions for building from source and running the drivers on Github and Codeplex.

For a summary of how this work is benefiting CentOS and Windows Azure customers, check out my interview alongside OpenLogic’s CEO Steven Grandchamp on the Microsoft Openness blog. To start running OpenLogic’s CentOS images as part of the current Virtual Machines Preview, go to the Windows Azure site.


•• Larry Franks (@larry_franks) described Using the VM Depot in a 1/10/2013 post:

Yesterday, Microsoft Open Technologies announced a complementary service to Windows Azure VMs - the VM Depot. The depot is a community-driven catalog of open source VM images. This lets you create and share VMs with custom configurations or specific software stacks installed.

Doug Mahugh also posted a getting started article that gives the basics of using the service.

I spent some time working with the depot last night, and here are the things I learned.

Requirements

You can probably guess that you'll need a Windows Azure subscription, but there are a few more things you'll need to do.

  1. Make sure the VM preview feature is enabled for your service. You can do this by signing into your subscription and going to https://account.windowsazure.com/PreviewFeatures. Once here, sign up for the Virtual Machines & Virtual Networks option if it is not already active.

  2. Make sure you have the latest version of the Windows Azure Command-line tools, as the depot produces a deployment script that uses a newish parameter (-o). You can update the command line tools by doing one of the following:

    • Using the npm install azure-cli -g command to install the latest Azure command-line bits.

    OR
  3. You'll also need to import your account settings if you haven't already. You can find the steps for this in the How to download and import publish settings section of this article.

Using a community image

If you want to create a new VM using a community image, the process is pretty simple:

  1. Open http://vmdepot.msopentech.com/List/Index in your browser.

  2. Find an image you want from the list. You can either scroll through the list or use the search bar at the top. The following image illustrates using the search field to find a VM that has Riak.

  3. At this point you can either click the Deployment Script link to the far right of the VM entry you want to use or the Deployment Script icon at the top to retrieve a deployment script. You'll need to agree to the terms of use and select a region, and then you'll be given a command similar to the following:

    azure vm create DNS_PREFIX -o vmdepot-66-2-2 -l "East US" USER_NAME [PASSWORD] [--ssh] [other_options] 
  4. You will need to replace the fields such as USER_NAME with actual values. Here's what should go in each of these fields:

    • DNS_PREFIX should be replaced with whatever you want this machine to be called. I'm going to use myawesomevm.

    • USER_NAME should be replaced with the user name you want to login to the machine as.

    • [PASSWORD] should be the password you want to use for this user (don't include the [] characters around the password).

    • [--ssh] should be changed to just --ssh if you want to enable SSH (and you probably do, otherwise how are you connecting to the machine?)

    • [other_options] should be replaced with any other options available for the azure vm create command. For example, --vm-size small.

    A final command line should look something like:

    azure vm create larryriak -o vmdepot-66-2-2 -l "East US" larry secretpassword --ssh 

    Run this command from a command-line. It will copy the disk image from the VM Depot to storage in your subscription, and then provision a VM that uses this disk image.

  5. At this point, the new VM should show up in the Windows Azure portal.

    After the VM status changes to running, you should be able to use SSH to connect to the VM and use it as you normally would.

Publishing a VM

Doug Mahugh's article provides information on publishing an image to the VM Depot. I didn't go through the entire process of publishing a VM because I didn't want to clutter up the VM Depot with "Larry's great generic Linux VM for testing purposes". The steps seem relatively straightforward though.

Create a custom VM and publish a VHD to the VM Depot
  1. Create a VM. You can find a walk through of this process in the Create a virtual machine running Linux tutorial.

  2. Once it's been created, SSH to the VM and install things people might find useful.

  3. Capture an image of the VM to a VHD. You can find these steps in the How to capture an image of a virtual machine running Linux article.

  4. Set the storage container that contains the .VHD to public. You can do this in the Windows Azure Portal by:

    1. Selecting the storage account.

    2. Selecting Containers.

    3. Selecting the container (vhds in this case) and clicking edit container. You'll get a dialog similar to the following:

    4. Select Public Container for the access level of this container, and then click the checkbox.

Publishing the VHD to the VM Depot

You'll need to create an account on the VM Depot for this step. It allows you to use a Windows LiveID, Google ID, or Yahoo! ID. To create an account, just click on the Sign In link in the upper right to set this up.

After you've created an account and signed in, perform the following steps to share your VM with the community:

  1. Click the Publish icon at the top of the page.

  2. Enter information about the VM image: name, description, what packages are installed, legal terms, etc.

  3. The URL of the VHD to publish is the full URL to the VHD in your public container. You can get this by performing the following steps:

    1. Go to the Windows Azure Portal.

    2. Go to the storage account that contains this VHD.

    3. Select Containers, and then select the container.

    4. A list of the objects in the container, along with the full URL to each item, will be displayed. Just note the URL and use it in the URL of the VHD to publish field in the VM Depot.

  4. Once you've specified the VHD path and filled out all the information, agree to the terms and click the publish button.

Final Thoughts

The VM Depot is a great addition to the Windows Azure VM story. Previously you had to select a raw OS image and manually install your software stack on it. With the Depot, you can now pick an image that already has the stack you need, as well as share your custom stack with the community. And since it's based on the Windows Azure command-line tools, it allows you to create the command-line once in the portal and then use it in your automation scripts, or hand it out to co-workers who need to create their own VMs.

There are already a lot of VMs in the Depot for both specific OS releases (Debian Wheezy and Mageia) and specific software and software stacks (LAMP, Ruby, JRuby, WordPress, Joomla, Drupal). It will be interesting to see what new VMs show up now that this is open to the community.

Any thoughts on specific OS or software stacks that you'd like to see in the VM Depot?


•• Wely Lau (@Wely_Live) continued his series with Windows Azure Virtual Machine: A look at Windows Azure IaaS Offerings (Part 2) on 1/8/2013:

We’ve seen the basic concepts of Azure IaaS in my last article. This article will take a deeper look at how Images and Disks are used in Windows Azure Virtual Machine. Later in the article I’ll bring you another tutorial to give you a better understanding and hands-on experience.

There are two basic yet important concepts in Windows Azure Virtual Machine: Images and Disks. Although both of them are eventually in VHD format, there are significant differences between them.

Images

Images are virtual hard drives (VHDs) that have already been generalized (technically, been sys-prepped with /generalize). They are basically templates that will be used to clone the Virtual Machine. They come without any specific settings such as computer name, user account, and network settings.

Predefined / Platform Images

Windows Azure provides a number of predefined images, including Windows and Linux. The following figure shows the predefined images on Windows Azure as of today.

VM platform images

Figure 1 – Virtual Machine Platform Images

Creating or Bringing Our Own Images

Apart from predefined images, we can actually provide our own images as well. This will certainly be useful when we want to reuse a configured VM in the future. It can be done either by capturing a running VM on Windows Azure or by uploading an on-premises VHD with CSUPLOAD.

Both techniques require us to sysprep the VHD properly. Eventually, the image should be created in the portal.

creating image from vhd

Figure 2 – Creating Image from VHD

Disk

Disks are the actual VHDs that are ready to be mounted by the Virtual Machine. There are two kinds of Disk: OS Disk and Data Disk.

OS Disk

An OS Disk is a VHD instantiated from an image and obviously contains the operating system files. When a VM is provisioned, the OS Disk is automatically created and mounted as the C:\ drive.

The default caching policy for the OS Disk is ReadWrite. This means that, although the OS Disk is stored in Windows Azure Storage as a page blob, there is a caching disk sitting on the host OS. Any read or write on the OS Disk reaches the caching disk first and is gradually flushed to Blob Storage. The ReadWrite cache is enabled for the OS Disk because of the usage pattern the Azure team expects: the working sets of data being read and written are relatively small, so a local cache lets the disk perform efficiently.

The maximum size of an OS Disk is 127 GB as of today. The recommended approach is to store larger data on Data Disks.

Data Disk

A Data Disk is a VHD that allows us to store any data; it can then be mounted on the VM. As data disks are stored in Windows Azure Blob Storage as page blobs, they inherit the maximum size of 1 TB. However, there is a limit on how many disks can be mounted, which depends on the size of the Virtual Machine as presented below.

VM size for Azure VM

Figure 3 – VM Size for Azure VM
(From WAPTK – WindowsAzureVirtualMachines.pptx slide 16)

The default caching policy for Data Disk is “None” or No Cache. This means that when any reading or writing happens, it always goes directly to Blob Storage.

*Temporary Disk

Apart from the OS Disk and Data Disks, there is also a temporary disk stored on the VM itself. This is used for the OS paging file. Importantly, this disk is not persistent.

The following diagram illustrates how the disks are being stored in Windows Azure Storage.

How disks are stored

Figure 4 – How disks are stored

A hands-on tutorial

We have talked about the concepts; now let’s jump into a demo to see them in action. I assume you have gone through the tutorial in my previous article; please do so if you have not.

Attaching Disks to Virtual Machines

1. Log in to the new Management Portal with your Live ID. After successfully logging in, navigate to the Virtual Machines section and you will see the Dashboard tab. At the bottom of the Dashboard, you will notice the “disks” section. By default, there is only one disk attached, whose type is OS Disk. If you look carefully, the OS Disk VHD refers to a Windows Azure Storage URL.

Virtual Machine dashboard

Figure 5 – Virtual Machine dashboard

2. Now, click on the “Attach” button and select “Attach Empty Disk”.

Attaching Disk to VM

Figure 6 – Attaching Disk to VM

When the dialog box shows up, define the File Name as “DataDisk1” and the Size as “1023”.

Attaching an Empty Disk

Figure 7 – Attaching an Empty Disk

It may take a while (2 to 3 minutes) to get the Data Disk ready.

3. Repeat Step 2 one more time. Define the File Name as “DataDisk2” and leave the Size at “1023”.

4. After a while, you can see that two additional data disks are attached besides the original OS Disk (a PowerShell alternative to these portal steps is sketched after Figure 8).

OS Disk and Data Disk on VM

Figure 8 – OS Disk and Data Disk on VM
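
If you would rather attach the disks from a script than from the portal, a rough equivalent using the Windows Azure PowerShell cmdlets is sketched below. The cloud service and VM names are placeholders, and it assumes your subscription has already been configured (for example with Import-AzurePublishSettingsFile).

# Hedged sketch: attach two empty 1023 GB data disks to an existing VM.
Get-AzureVM -ServiceName "mycloudservice" -Name "myvm" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "DataDisk1" -LUN 0 |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 1023 -DiskLabel "DataDisk2" -LUN 1 |
    Update-AzureVM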

5. Click “Connect” to Remote Desktop into the VM. When you are prompted with the RDP file, simply open it.

6. Once you have successfully connected to the VM over Remote Desktop, open Server Manager and expand the Storage – Disk Management node.

7. You might be prompted with the Initialize Disk dialog. This dialog appears because we have just attached two disks to the VM but haven’t initialized them yet. We are required to select the partition style: either MBR or GPT. In this demo, select “MBR” and click “OK”.

Initializing Data Disks

Figure 9 – Initializing Data Disks

Striping Volume to Data Disks in VM

The earlier section of this article mentions that the maximum size of each blob is 1 TB. People often make the mistake of thinking that the maximum amount of data you can store on Azure disks is 1 TB. This is not really true, as we can actually store up to 16 TB of data (for an Extra Large VM). The idea is to use a striped volume in Windows, as the following steps show (a scripted alternative is sketched after Figure 14).

8. Right-click on “Disk 2”, which we have just initialized, and click “New Striped Volume”.

new striped volume

Figure 10 – New Striped Volume

9. When the dialog comes up, select “Disk 3” in the Available list and click “Add”. Click “Next” to proceed.

Adding Disk to Striped Volume

Figure 11 – Adding Disk to Striped Volume

10. Choose your preferred Drive Letter. In this example, I selected E. Click “Next”.

assign drive letter or path

Figure 12 – Assign Drive Letter or Path

11. The next step is formatting the volume. Simply give the volume a label; in this case, I call it DATA. Click “Next” to finalize the wizard.

defining format volume

Figure 13 – Defining Format Volume

12. Open Windows Explorer and notice that the DATA volume can hold up to 2 TB, as it is striped across the two Data Disks.

Result of striped disk

Figure 14 – Result of striped disk
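
If you need to repeat this striping on several VMs, it can also be scripted from inside the VM. The sketch below simply feeds diskpart a script from PowerShell; it assumes the two data disks show up as disks 2 and 3 exactly as in the screenshots above, so verify the disk numbers (diskpart’s “list disk”) before running it.

# Hedged sketch: create the same striped DATA volume without the Disk Management UI.
$stripeScript = @"
select disk 2
convert dynamic
select disk 3
convert dynamic
create volume stripe disk=2,3
format fs=ntfs label=DATA quick
assign letter=E
"@
$stripeScript | Out-File -Encoding ASCII "$env:TEMP\stripe.txt"
diskpart /s "$env:TEMP\stripe.txt"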

Conclusion and coming up next

We’ve seen how images and disks are designed and used in Windows Azure Virtual Machines.

In the subsequent article, we will discuss other aspects of Windows Azure IaaS in more detail, such as virtual network capabilities, and also how PaaS and IaaS work together to bring more possibilities.

Review

Technical Reviewer: Corey Sanders, Principal Program Manager Lead, Microsoft Corporation.

References


• Avkash Chauhan (@avkashchauhan) described Using OS disk VHD to create a new Virtual Machine if OS VHD is still on lease in Windows Azure Virtual Machines in a 1/10/2013 post:

There is a situation with Windows Azure Virtual Machines where you have deleted a Virtual Machine for some reason, or it was deleted due to some other reason. You may already know that the OS disk VHD is still saved in your Azure Storage, because when a virtual machine is deleted the OS disk and any data disks remain at their respective Windows Azure Storage locations.

When you want to reuse the OS disk VHD you might encounter the following problems:

1. You can see that the OS Disk still shows as attached to the Virtual Machine, as shown below.

2. You cannot delete the OS VHD from storage; when deleting the VHD you get the following error:

Error deleting blob '/vhds/avkashsql2012-avkashsql2012-2012-07-18.vhd': details

There is currently a lease on the blob and no lease ID was specified in the request.

3. If you decide to use the same OS VHD to create an OS image for creating your Virtual Machine, you will get an error as below:

The VHD http://portalvhds63*.blob.core.windows.net/vhds/avkashsql2012-avkashsql2012-2012-07-18.vhd is already registered with image repository as the resource with ID avkashsql2012-avkashsql2012-0-20120718183454.

The bottom line is that the OS VHD is there but you cannot use it for any purpose.

Root cause:

- The root cause of this problem is that the VHD blob still has a lease on it due to a code issue, and it is locked up in a way that makes it not reusable until you break the lease so it is free to use.

Solution:

You cannot remove the blob lease directly in the portal, so you need to use a PowerShell script as described below:

To delete OS VHD blob:

- If you have to delete the blob, you must break the lease first; only then will you be able to delete it (a sketch of such a script follows).
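
The original post relies on a PowerShell script that is not reproduced here. As a rough illustration only, the sketch below breaks the lease by calling the Windows Azure Storage client library (version 2.0) from PowerShell; the account name, key, blob name, and assembly path are placeholders you must replace with your own values.

# Hedged sketch: break the lease on the locked OS VHD page blob so it can be deleted or reused.
Add-Type -Path "C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10\ref\Microsoft.WindowsAzure.Storage.dll"

$connection = "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<your-key>"
$account    = [Microsoft.WindowsAzure.Storage.CloudStorageAccount]::Parse($connection)
$blob       = $account.CreateCloudBlobClient().GetContainerReference("vhds").GetPageBlobReference("avkashsql2012-avkashsql2012-2012-07-18.vhd")

# Break the lease immediately; after this the blob can be deleted (or reused) without a lease ID.
$blob.BreakLease([TimeSpan]::Zero, $null, $null, $null)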

To reuse the OS VHD blob to create OS Image:

If you are looking for an alternative solution, using the OS VHD to create an OS image and then using that image to quickly create a new Virtual Machine, here is the solution.

- Download CloudXplorer

- Configure CloudXplorer with the Azure Blob Storage account where your locked OS VHD is stored. You can get this information from the portal.

- Now access the OS VHD you want to reuse:

- Try renaming the VHD, which will return an error as below:

- However, even after the error you will see that the rename did work, as below:

- This new OS VHD blob is ready to use as it is brand new.

- Now go back to the Windows Azure Portal and access:

  • Virtual Machines -> Images -> Create Image (+)
  • Browse to your vhds container and then select your newly created VHD as shown below:

- You can now see that a new OS Image has been created in the Virtual Machines > Images section, as shown below:

- Finally, you can use this newly created OS Image to create a new Virtual Machine, as below:



Bill Hilf (@bill_hilf) reported VM Depot delivers more options for users bringing their custom Linux images to Windows Azure Virtual Machines on 1/9/2013:

Happy New Year Windows Azure community! One of the first pieces of 2013 Windows Azure news is something that I am particularly excited about. Some of you might know that I was recruited to Microsoft in 2004 to help with the company’s approach to open source software. So each time we take a step to help the overall open source community, I cannot help but get excited.

Microsoft Open Technologies, Inc. has just announced VM Depot, a community-driven catalog of open source virtual machine images for Windows Azure. On VM Depot the community can build, deploy and share their favorite Linux configurations, create custom open source stacks, work with others and build new architectures for the cloud that benefit from the openness and flexibility of Windows Azure. You can find the announcement from Microsoft Open Technologies, Inc. at the Port 25 BLOG.

Here’s a quick look at VM Depot, where Azure users can bring their own custom Linux images to Azure Virtual Machines (currently in preview).

Here is where you get started: VM Depot. Remember to rate the images; our tech community thrives on your feedback. I’m looking forward to seeing this community develop, and special thanks to Bitnami, Alt Linux, Basho and Hupstream for helping us get this launched!



Doug Mahugh (@dmahugh) posted Getting Started with VM Depot to the Interoperability @ Microsoft blog on 1/9/2013:

Do you need to deploy a popular OSS package on a Windows Azure virtual machine, but don’t know where to start? Or do you have a favorite OSS configuration that you’d like to make available for others to deploy easily? If so, the new VM Depot community portal from Microsoft Open Technologies is just what you need. VM Depot is a community-driven catalog of preconfigured operating systems, applications, and development stacks that can easily be deployed on Windows Azure.

You can learn more about VM Depot in the announcement from Gianugo Rabellino over on Port 25 today. In this post, we’re going to cover the basics of how to use VM Depot, so that you can get started right away.

Deploying an Image from VM Depot

Deploying an image from VM Depot is quick and simple. As covered in the online documentation, VM Depot will auto-generate a deployment script for use with the Windows Azure command-line tool for Mac and Linux that you can use to deploy virtual machine instances from a selected image. You can use the command line tool on any system that supports Node.js – just install the latest version of Node and then download the tool from this page on WindowsAzure.com. For more information about how to use the command line tool, see the documentation page.

Publishing an Image on VM Depot

To publish an image on VM Depot, you’ll need to follow these steps:

Step 1: create a custom virtual machine. There are two approaches you can take for creating your custom virtual machine. The quickest and simplest approach is to create a Linux virtual machine from the image gallery in Windows Azure and then customize your VM by installing or configuring open source software on it. And for those who’d like to build an image from scratch, you can create and upload a virtual hard disk that contains the Linux operating system and then customize your image as desired.

Regardless of which approach you used to create your image, you’ll then need to save it to a public storage container in Windows Azure as a .VHD file. The easiest way to do this is to deploy your image to Azure as a virtual machine and then capture it to a .VHD file. Note that you’ll need to make the storage container for your .VHD file public (they’re private by default) in order to publish your image – you can do this through the Windows Azure management portal or by using a tool such as CloudXplorer.

Step 2: publish your image on VM Depot. Once your image is stored in a public storage container, the final step is to use the Publish option on the VM Depot portal to publish your image. If it’s your first time using VM Depot, you’ll need to use your Windows Live™ ID, Yahoo! ID, or Google ID to sign in and create a profile.

See the Learn More section for more detailed information about the steps involved in publishing and deploying images with VM Depot.

As you can see, VM Depot is a simple and powerful tool for efficiently deploying OSS-based virtual machines from images created by others, or for sharing your own creations with the developer community. Try it out, and let us know your thoughts on how we can make VM Depot even more useful!


Adron Hall (@adron) rang in with New Relic, The King Makers, MS Open Tech, Riak VMs and Life Gets Easier Today on 1/9/2013:

Today Microsoft released, with partnerships with a number of companies including Basho, Hupstream and Bitnami, the VM Depot. I’ve always followed Bitnami, so it’s really cool to see their VM releases for Jenkins (CI Build Server), WordPress, Ruby 1.9.3 stack, Node.js and about everything you can imagine out there alongside our Basho Riak CentOS image.

If you want a great way to get kick started with Riak and you’re setup with Windows Azure, now there is an even easier way to get rolling.

Over on the Basho blog we’ve announced the MS Open Tech and Basho collaboration. I won’t repeat what was stated there, but want to point out two important things:

  1. Once you get a Riak image going, remember there’s the whole community and the Basho team itself that is there to help you get things rolling via the mail list. If you’re looking for answers, you’ll be able to get them there. Even if you get everything running smoothly, join in anyway and at least just lurk. :)
  2. The RTFM value factor is absolutely huge for Riak. Basho has a superb documentation site here. So definitely, when jumping into or researching Riak as software you may want to build on, use for your distributed systems or the Riak Key Value Databases, check out the documentation. Super easy to find things, super easy to read, and really easy to get going with.

So give Riak a try on Windows Azure via the VM Depot. It gets easier by the day, and gives you even more data storage options, distribution capabilities and high availability that is hard to imagine.

New Relic & The Rise of the New Kingmakers

In other news, my good friends at New Relic, in partnership with RedMonk analyst Stephen O’Grady, have released a book he’s written titled The New Kingmakers: How Developers Conquered the World. You may know New Relic as the huge developer advocates that they are with the great analytics tools they provide. Either way, give it a look-see and read the book. It’s not a giant thousand-page tome, so it just takes a nice lunch break and you’ll get the pleasure of flipping the pages of the book Stephen has put together. You might have read the blog entry that started the whole “Kingmakers” statement; if you haven’t, give that a read first.

I personally love the statement, and have used it a few times myself. In relation to the saying and the book, I’ll have a short review and more to say in the very near future. Until then…


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Gaurav Mantri (@gmantri) described (Some) Best Practices for Building Windows Azure Cloud Applications in a 1/11/2013 post:

In this blog post, I will talk about some of the best practices for building cloud applications. I started working on it as a presentation for a conference; however, that didn’t work out, thus this blog post. Please note that these are some of the best practices I think one can follow while building cloud applications running in Windows Azure. There are many, many more available out there. This blog post will be focused on building stateless PaaS Cloud Services (you know, that Web/Worker role thingie) utilizing Windows Azure Storage (Blobs/Queues/Tables) and Windows Azure SQL Databases (SQL Azure).

So let’s start!

Things To Consider

Before jumping into building cloud applications, there’re certain things one must take into consideration:

  • Cloud infrastructure is shared.
  • Cloud infrastructure is built on commodity hardware to achieve best bang-for-buck and it is generally assumed that eventually it will fail.
  • A typical cloud application consists of many sub-systems where:
    • Each sub-system is a shared system on its own e.g. Windows Azure Storage.
    • Each sub-system has its limits and thresholds.
  • Sometimes individual nodes fail in a datacenter and, though very rarely, sometimes an entire datacenter fails.
  • You don’t get physical access to the datacenter.
  • Understanding latency is very important.

With these things in mind, let’s talk about some of the best practices.

Best Practices – Protection Against Hardware Issues

These are some of the best practices to protect your application against hardware issues:

  • Deploy multiple instances of your application.
  • Scale out instead of scale up or in other words favor horizontal scaling over vertical scaling. It is generally recommended that you go with more smaller sized Virtual Machines (VM) instead of few larger sized VMs unless you have a specific need for larger sized VMs.
  • Don’t rely on VM’s local storage as it is transient and not fail-safe. Use persistent storage like Windows Azure Blob Storage instead.
  • Build decoupled applications to safeguard your application against hardware failures.
Best Practices – Cloud Services Development

Now let’s talk about some of the best practices for building cloud services:

  • It is important to understand what web role and worker role are and what benefit they offer. Choose wisely to distribute functionality between a web role and worker role.
  • Decouple your application logic between web role and worker role.
  • Build stateless applications. For state management, it is recommended that you make use of distributed cache.
  • Identify static assets in your application (e.g. images, CSS, and JavaScript files) and use blob storage for that instead of including them with your application package file.
  • Make proper use of service configuration / app.config / web.config files. While you can dynamically change the values in a service configuration file without redeploying, the same is not true with app.config or web.config file.
  • To achieve best value for money, ensure that your application is making proper use of all VM instances in which it is deployed.
Best Practices – Windows Azure Storage/SQL Database

Now let’s talk about some of the best practices for using Windows Azure Storage (Blobs, Tables and Queues) and SQL Database.

Some General Recommendations

Here’re some recommendations I could think of:

  • Blob/Table/SQL Database – Understand what they can do for you. For example, one might be tempted to save images in a SQL database whereas blob storage is the most ideal place for it. Likewise one could consider Table storage over SQL database if transaction/relational features are not required.
  • It is important to understand that these are shared resources with limits and thresholds which are not in your control i.e. you don’t get to set these limits and thresholds.
  • It is important to understand the scalability targets of each of the storage component and design your application to stay within those scalability targets.
  • Be prepared that you’ll encounter “transient errors” and have your application handle (and recover from) these transient errors.
    • It is recommended that your application uses retry logic to recover from these transient errors.
    • You can use TOPAZ or Storage Client Library’s built-in retry mechanism to handle transient errors. If you don’t know, TOPAZ is Microsoft’s Transient Fault Handling Application Block which is part of Enterprise Library 5.0 for Windows Azure. You can read more about TOPAZ here: http://entlib.codeplex.com/wikipage?title=EntLib5Azure.
  • For best performance, co-locate your application and storage. With storage accounts, the cloud service should be in the same affinity group while with WASD, the cloud service should be in the same datacenter for best performance.
  • From disaster recovery point of view, please enable geo-replication on your storage accounts.
Best Practices – Windows Azure SQL Database (WASD)

Here’re some recommendations I could think of as far as working with WASD:

  • It is important to understand (and mentioned above and will be mentioned many more times in this post :) ) that it’s a shared resource. So expect your requests to get throttled or timed out.
  • It is important to understand that WASD != On Premise SQL Server. You may have to make some changes in your data access layer.
  • It is important to understand that you don’t get access to data/log files. You will have to rely on alternate mechanisms like “Copy Database” or “BACPAC” functionality for backup purposes.
  • Prepare your application to handle transient errors with WASD. Use TOPAZ for implementing retry logic in your application.
  • Co-locate your application and SQL Database in same data center for best performance.
Best Practices – Windows Azure Storage (Blobs, Tables & Queues)

Here’re some recommendations I could think of as far as working with Windows Azure Storage:

  • (Again) It is important to understand that it’s a shared resource. So expect your requests to get throttled or timed out.
  • Understand the scalability targets of Storage components and design your applications accordingly.
  • Prepare your application to handle transient errors with Windows Azure Storage. Use TOPAZ or the Storage Client library’s retry policies for implementing retry logic in your application (a minimal sketch follows this list).
  • Co-locate your application and storage account in same affinity group (best option) or same data center (next best option) for best performance.
  • Table Storage does not support relationships so you may need to de-normalize the data.
  • Table Storage does not support secondary indexes so pay special attention to querying data as it may result in full table scan. Always ensure that you’re using PartitionKey or PartitionKey/RowKey in your query for best performance.
  • Table Storage has limited transaction support. For full transaction support, consider using Windows Azure SQL Database.
  • With Table Storage, pay very special attention to “PartitionKey” as this is how data in a table is organized and managed.
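
As a concrete illustration of the retry advice above, here is a minimal sketch that attaches an exponential back-off retry policy to a blob client using the Windows Azure Storage client library 2.0 from PowerShell; the account details and assembly path are placeholders.

# Hedged sketch: retry throttled or timed-out storage requests up to 5 times with exponential back-off.
Add-Type -Path "C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10\ref\Microsoft.WindowsAzure.Storage.dll"

$account = [Microsoft.WindowsAzure.Storage.CloudStorageAccount]::Parse("DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=<your-key>")
$client  = $account.CreateCloudBlobClient()
$client.RetryPolicy = New-Object Microsoft.WindowsAzure.Storage.RetryPolicies.ExponentialRetry([TimeSpan]::FromSeconds(2), 5)
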
Best Practices – Managing Latency

Here’re some recommendations I could think of as far as managing latency is concerned:

  • Co-locate your application and data stores. For best performance, co-locate your cloud services and storage accounts in the same affinity group and co-locate your cloud services and SQL database in the same data center.
  • Make appropriate use of Windows Azure CDN.
  • Load balance your application using Windows Azure Traffic Manager when deploying a single application in different data centers.
Some Recommended Reading

Though you’ll find a lot of material online, a few books/blogs/sites I can recommend are:

Cloud Architecture Patterns – Bill Wilder: http://shop.oreilly.com/product/0636920023777.do

CALM (Cloud ALM) – Simon Munro: https://github.com/projectcalm/Azure-EN

Windows Azure Storage Team Blog: http://blogs.msdn.com/b/windowsazurestorage/

Patterns & Practices Windows Azure Guidance: http://wag.codeplex.com/

Summary

What I presented above are only a few of the best practices one could follow while building cloud services. On purpose I kept this blog post rather short. In fact one could write a blog post for each item. I hope you’ve found this information useful. I’m pretty sure that there’re more. Please do share them by providing comments. If I have made some mistakes in this post, please let me know and I will fix them ASAP. If you have any questions, feel free to ask them by providing comments.


•• Kevin Remde (@KevinRemde) covered TechNet Radio: Cloud Innovators – How Datacastle uses Windows Azure to Protect Business Data on 1/10/2013:

In this episode I talk with Craig Blessing, Vice President at Datacastle. We discuss how his company uses Windows Azure to protect business data. Tune in as he outlines for us Datacastle’s innovative cloud solutions which help organizations have secure, anytime, anywhere access to their data.

Download:

After watching this video, follow these next steps:

Step #1 – Start Your Free 90 Day Trial of Windows Azure
Step #2 – Download Windows Server 2012
Step #3 – Begin building your own Virtual Machines in Windows Azure!

If you're interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:

Resources:

Websites & Blogs:

Videos:

Virtual Labs:

Follow @technetradio
Become a Fan @ facebook.com/MicrosoftTechNetRadio

Follow @KevinRemde
Become a Fan of Full of I.T. @ facebook.com/KevinRemdeIsFullOfIT

Subscribe to our podcast via iTunes, Stitcher, or RSS


•• Brian Swan (@brian_swan) announced PHP 5.4 Available in Windows Azure Web and Worker Roles in a 1/8/2013 post:

One of the things that may have slipped past you in the holiday madness was this: PHP 5.4 is available in Windows Azure Web and Worker Roles. This is just a quick post to bring you up to speed on this feature. (If you are wondering about PHP versions in Windows Azure Web Sites, see PHP 5.4 available in Windows Azure Web Sites.)

With the latest Windows Azure SDK for PHP, you can use PowerShell cmdlets to create a Windows Azure project, add PHP web and worker roles, and now, specify the version of PHP to be used in the roles…including PHP 5.4. Here are the steps to creating a project and selecting a specific version of PHP (more details are in this article: How to create PHP web and worker roles):

1. Create an Azure project:

New-AzureServiceProject projectName

2. Add a PHP Web or Worker role:

Add-AzurePHPWebRole roleName

-OR-

Add-AzurePHPWorkerRole roleName

3. Specify the PHP version to be used in the role (the currently available versions are 5.3.17 or 5.4.0):

Set-AzureServiceProjectRole roleName php 5.3.17

-OR-

Set-AzureServiceProjectRole roleName php 5.4.0

That’s it. However, note that currently you can only choose from two versions of PHP: 5.3.17 and 5.4.0. The team in charge of supporting PHP in web and worker roles is working hard to make several versions of PHP available in the near future. To see what versions are available, use the Get-AzureServiceProjectRoleRuntime cmdlet:

PS C:\MyProject> Get-AzureServiceProjectRoleRuntime

At the time of this writing, here’s what you will see (note the IsDefault flag is set to true for PHP 5.3.17, indicating that it will be the default PHP version installed):

Runtime : Node
Version : 0.6.17
PackageUri : http://nodertncu.blob.core.windows.net/node/0.6.17.exe
IsDefault : False

Runtime : Node
Version : 0.6.20
PackageUri : http://nodertncu.blob.core.windows.net/node/0.6.20.exe
IsDefault : True

Runtime : Node
Version : 0.8.4
PackageUri : http://nodertncu.blob.core.windows.net/node/0.8.4.exe
IsDefault : False

Runtime : IISNode
Version : 0.1.21
PackageUri : http://nodertncu.blob.core.windows.net/iisnode/0.1.21.exe
IsDefault : True

Runtime : Cache
Version : 1.8.0
PackageUri : http://nodertncu.blob.core.windows.net/cache/1.8.0.exe
IsDefault : True

Runtime : PHP
Version : 5.3.17
PackageUri : http://nodertncu.blob.core.windows.net/php/5.3.17.exe
IsDefault : True

Runtime : PHP
Version : 5.4.0
PackageUri : http://nodertncu.blob.core.windows.net/php/5.4.0.exe
IsDefault : False
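
Putting the cmdlets above together, a minimal end-to-end session might look like the following. This is a hedged sketch: the project and role names are hypothetical, and it assumes the Windows Azure PowerShell cmdlets and the PHP bits are installed.

# Hedged sketch: create a project with a PHP 5.4 web role, test locally, then publish.
New-AzureServiceProject MyPhpService
Add-AzurePHPWebRole PhpWebRole
Set-AzureServiceProjectRole PhpWebRole php 5.4.0
Start-AzureEmulator -Launch
Publish-AzureServiceProject -ServiceName MyPhpService -Location "West US" -Launch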

After you have selected your PHP version, you can customize it (change configuration settings and enable/disable extensions). Or, you can provide your own PHP runtime. These articles will show you how:

As always, feedback on this feature is welcome.


Microsoft Certified Professional Magazine offered a Blue Sky Possibilities: Sitecore CMS Azure Edition whitepaper on 1/10/2013 (free registration required):

Sitecore’s new CMS Azure Edition delivers “access to the cloud without all the drama”. This Platform as a Service offering on Microsoft Windows Azure leverages the considerable advantages of the Azure platform to enable scalable, enterprise-class deployment of Sitecore-powered websites. With the Sitecore PaaS solution, organizations can:

  • Retain high levels of control over their Sitecore solution
  • Scale websites quickly and easily to new geographies
  • Respond immediately to business needs and surges in demand
  • Enjoy low costs of entry and ongoing operations

Download now!

Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine, a sister publication to MCP Magazine.



<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

•• Paul van Bladel (@paulbladel) posted A downloadable sample app for integrating SignalR in a LightSwitch project on 1/6/2013 (missed when published):

Introduction

I recently wrote some articles about the duo LightSwitch – SignalR, which has a huge potential.

Currently, the SignalR libraries are moving to a first major release. As I write, SignalR has a Release Candidate (RC 1) version. Unfortunately this means that the Silverlight NuGet package is a bit out of sync. This will probably be fixed when SignalR has a final release.

I’m getting quite some questions from LightSwitch developers about SignalR/LightSwitch. Unfortunately, they run into the above problem.

Therefore, I created a small sample application which has all binary references in place. Furthermore, the sample also contains a WinForms client making a SignalR connection to the LightSwitch server.

The purpose of the sample is not to show you SignalR (the above links are more useful for that), but just to make sure you have the correct binaries at your disposal.

SignalR_Sample

Update: The WinForms project might be missing the SignalR assemblies. Simply take the latest NuGet bits for the WinForms project (or, if you only want to focus on the LightSwitch projects, simply unload the WinForms project):

image


Philip Fu posted [Sample Of Jan 9th] 3 Tier ASP.NET Web App with Entity Framework (and Self Tracking Objects) in Windows Azure to the Microsoft All-In-One Code Framework blog on 1/9/2013:

imageSample Download:

CS Version: http://code.msdn.microsoft.com/CSAzureNTierWebRoleWithSess-65c3d320

VB Version: http://code.msdn.microsoft.com/VBAzureNTierWebRoleWithSess-43981cba

The sample code demonstrates how to build a simple 3-tier Asp.net Web Role.

You can find more code samples that demonstrate the most typical programming scenarios by using Microsoft All-In-One Code Framework Sample Browser or Sample Browser Visual Studio extension. They give you the flexibility to search samples, download samples on demand, manage the downloaded samples in a centralized place, and automatically be notified about sample updates. If it is the first time that you hear about Microsoft All-In-One Code Framework, please watch the introduction video on Microsoft Showcase, or read the introduction on our homepage http://1code.codeplex.com/.


Updated the Attempting to List SharePoint Apps in the Office Store Preview section of my (@rogerjenn) LightSwitch HTML 5 Client Preview 2: OakLeaf Contoso Survey Application Demo on Office 365 SharePoint Site on 1/8/2013 with details of the Office Store’s embargo on autohosted SharePoint apps, which renders moot the issue of inability to upload large SharePoint.app files due to timeouts:

The Office Store, which is similar to the Windows Store for Windows 8 apps, enables ISVs and individual developers to distribute free or paid SharePoint apps to Office 365 subscribers and SharePoint 2013 clients. Vivek Narasimhan announced The Office Store is now open! in an 8/26/2012 post to the Apps for Office and SharePoint blog.

Submitting an app to the Office Store Preview follows a simplified version of the process for the Windows Store, described in my Windows Azure Mobile Services Preview Walkthrough–Part 4: Customizing the Windows Store App’s UI post of 9/15/2012 and Windows Azure Mobile Services Preview Walkthrough–Part 5: Distributing Your App From the Windows Store of 9/22/2012.


On 1/8/2013, I learned from a member of the Commerce UX team that even if I were able to upload the SharePoint.app file, the Office Store team would reject it because it's autohosted in SharePoint.

To quote my Office Store Embargoes Autohosted SharePoint Apps for Indeterminate Period thread of 1/8/2013 in the Developing Apps for SharePoint 2013 forum:

From an obscure paragraph in Validation policies for apps FAQ:

"Can I submit an autohosted app for SharePoint to the Office Store?


The infrastructure for autohosted apps will remain in preview status for a period of time after SharePoint 2013 releases. Autohosted apps (which includes all apps that depend on Microsoft Access) will not be accepted by the Office Store during this preview phase. [Emphasis added.]

For more information on autohosted apps, see How to: Create a basic autohosted app in SharePoint 2013."

I assume that the author of the foregoing forgot to include Visual Studio LightSwitch HTML 5 Client Preview 2 and Windows Azure Mobile Services with the Microsoft Access reference.

This issue kills any chance devs have to work through the issues of developing and certifying SharePoint apps that use LightSwitch HTML 5 Client Preview 2 and Windows Azure Mobile Services in the configuration demonstrated by Scott Guthrie at the SharePoint Conference 2012.

In addition, the Commerce UX team member pointed out that you must edit the app manifest to specify which locales are supported by your app for SharePoint or the Office Store team won’t certify your app. From Ricky Kirkham’s Locale support information is required for all apps in the SharePoint Store post to the Apps for Office and SharePoint blog of 10/12/2012:

Hi, I'm Ricky Kirkham from the developer documentation team for SharePoint 2013. I'd like to let you all know how you can use the app manifest to specify which locales are supported by your app for SharePoint. You are required to specify supported locales, or your app will not be accepted by the SharePoint Store.

The <Properties> element of the app manifest must always contain a child element that identifies the locales that the app supports. For the final release version of SharePoint 2013, the element is <SupportedLocales> and it must have a <SupportedLocale> child for every locale that the app supports, even if there is just one locale. Note that you identify the locale with the CultureName attribute. The value of this attribute is a locale identifier in the Internet Engineering Task Force (IETF)-compliant format LL-CC. The following is an example.

<App … >
  <Properties>
    <SupportedLocales>
      <SupportedLocale CultureName="en-us" />
      <SupportedLocale CultureName="ja-jp" />
    </SupportedLocales>
  </Properties>
</App>

There are a couple of small, and temporary, extra points to know. First, for an undetermined period of time after the release of SharePoint 2013, the SharePoint Store will not have any UI to tell potential app purchasers which locales are supported by your app, meaning that all users will assume that all apps support the en-us locale. But, if you have localized your app for other locales, you should include them in your <SupportedLocales> element so that you will not have to upload a new version of the app when the store's UI is expanded. And, so that users will automatically start seeing more locale options for your app when the UI supports this.

Second, if you have signed up for a SharePoint Online developer site, please note that it might not be converted to use the release version of SharePoint 2013 for a few weeks after release. While your developer site is still based on SharePoint 2013 Preview, you have to use the <SupportedLanguages> element instead of the <SupportedLocales> element. The SharePoint
Store will accept either element for now, but will switch in the future to allow only the SupportedLocale element.

The <SupportedLanguages> element has no child elements or attributes. Its value is a simple semi-colon delimited list of locales. The following is an example.

<App … >
  <Properties>
    <SupportedLanguages>en-us;ja-jp</SupportedLanguages>
  </Properties>
</App>

The <SupportedLanguages> element will continue to work even on the release version for some time, but it is deprecated in favor of <SupportedLocales>, so on new apps you should Always use <SupportedLocales>. And if you have a reason to update an app that uses <SupportedLanguages>, we recommend that you switch to <SupportedLocales> as part of the
update.

I hope this post helps your apps sail smoothly through the submission process! [Emphasis Ricky’s.]

Not much hope for autohosted apps, Ricky.


Beth Massi (@bethmassi) posted LightSwitch Community & Content Rollup–December 2012 on 1/8/2013:

Last year I started posting a rollup of interesting community happenings, content, samples and extensions popping up around Visual Studio LightSwitch. If you missed those rollups you can check them all out here: LightSwitch Community & Content Rollups.

I realize I’m a week late for this one but like most folks I’ve been on vacation for the holidays. HAPPY NEW YEAR from all of us on the LightSwitch team! It’s great to be back to work and I’m looking forward to a happy, healthy, and geeky 2013. Although December is normally a very quiet month, a lot of goodness around Visual Studio LightSwitch happened. Check it out!

LightSwitch Cosmopolitan Shell Source Code Released!

I know it took a while (we were tied up in some mumbo jumbo with our legal department) but we finally released the source code to the LightSwitch Cosmo Shell! This is the default shell used in new LightSwitch projects created with Visual Studio 2012. If you want to tweak the current theme & shell to suit your specific needs, this will give you a great starting point with this easily customizable sample. The code and XAML is structured to facilitate making incremental changes to the default shell.

Check out the LightSwitch Team Blog for details: The Cosmopolitan shell and theme source code is released!

LightSwitch Speaking Tour = Success!

In December I travelled all around Eastern Canada (and Vermont) spreading the LightSwitch love. If you missed my trip report, you can read it here - Trip Report: Eastern Canada LightSwitch Speaking Tour

Some key takeaways for me:

  • A lot of developers had misconceptions about LightSwitch and had never tried it themselves but were immediately impressed with what it could do once I showed them
  • Adding mobile HTML as an alternate client can really fill a gap in the development community. Most developers I spoke with are being “forced” to learn JavaScript & HTML to keep up with business demands and the plethora of mobile devices being used in the enterprise.
  • Being able to use LightSwitch as a way to build and deploy data services to Azure is very compelling for native (Win8, iOS, Android, etc.) developers. They can quickly create the shared backend services and concentrate on the clients.

For more, read on….

Update available for WCF Data Services

An update was made available for WCF Data Services 5.0.0 in December which includes some important updates and bug fixes. If you are using LightSwitch in Visual Studio 2012 we encourage you install this update. Read more about it here: Update available for WCF Data Services 5.0.0

More Notable Content this Month

Samples (see all 96 of them here):

Team Articles:

In December, the team continued to write articles on how to write JavaScript with the HTML Client Preview. (We promise we have a lot more on the way!) A couple of our devs also wrote up some tips & tricks posts…

Community Articles:

I was on a pretty long vacation this month so I may have missed some articles of note. If so, feel free to add a comment below. Many thanks to all our rock star bloggers for contributing in December, particularly Michael Washington!

Top Forum Answerers

Thanks to all our contributors to the LightSwitch forums on MSDN. Thank you for helping make the LightSwitch community a better place. Another huge shout out to Yann Duran who consistently provides help in our General forum!

Top 5 forum answerers in November:


Keep up the great work guys!

LightSwitch Team Community Sites

Become a fan of Visual Studio LightSwitch on Facebook. Have fun and interact with us on our wall. Check out the cool stories and resources. Here are some other places you can find the LightSwitch team:

LightSwitch MSDN Forums
LightSwitch Developer Center
LightSwitch Team Blog
LightSwitch on Twitter (@VSLightSwitch, #VS2012 #LightSwitch)


Return to section navigation list>

Windows Azure Infrastructure and DevOps

Michael Washam (@MWashamMS) described Upgrading Windows Azure Cloud Services to Server 2012 and .NET 4.5 in a 1/9/2013 post:

In this post I’ll walk through the options for upgrading an existing Cloud Service using OS Family (1 or 2) to use Windows Server 2012 (OS Family 3) and .NET 4.5.

Traditionally, to upgrade a Cloud Service all that is required is to click the update button in the Windows Azure portal after uploading your updated package or publishing with Visual Studio. However, when changing OS Families from (1 or 2) to 3 you will receive the error saying “Upgrade from OS family 1 to OS family 3 is not allowed”. This is a temporary restriction on the update policy that we are working to remove in an upcoming release.

In the mean time there are two workarounds to updating your existing Windows Azure Cloud Service to run with .NET 4.5 and OS family 3 (Server 2012):

  • VIP Swap (recommended approach)
  • Delete and Re-deploy

Both have different pros/cons that are outlined in the table below. Detailed walkthroughs of both the options are provided below:

VIP Swap – Pros: Fast and simple. Cons: Usual VIP swap restrictions apply.
Delete/Re-deploy – Pros: Can make any change to the updated application; VIP swap restrictions do not apply. Cons: Loss of availability while the application is deleted and then redeployed, and a potential change in the public IP address after the redeployment.

Configuring Your Project for Upgrade

Step 1: Open the solution in Visual Studio.

In the example below the solution is named Sdk1dot7 and there are two projects. The first is an MVC Web Role project (MvcWebRole1) and the second is the Windows Azure Cloud Service project (Sdk1dot7).

csu1

Step 2: Upgrade the Project

Right-click on the Cloud Service project (Sdk1dot7) and select Properties. Note in the picture below that the properties page shows this project was built with the June 2012 SP1 Windows Azure Tools version. Click the “Upgrade” button. After the upgrade, if you check this properties sheet again, it should show that the current Windows Azure Tools version is October 2012.

csu2

Step 3: Open ServiceConfiguration.Cloud.cscfg

csu3

Change OSFamily from (1 or 2) to 3.

csu4

New Value: osFamily="3"

Step 4: Modify the Web Role to use .NET 4.5

Open the properties sheet of the WebRole project by right-clicking the WebRole project and clicking on “Properties”.

csu5

In the “Application” tab look for the “Target framework” dropdown. It should show .NET 4.0.

csu6

Open the dropdown and select .NET 4.5. You’ll get a “Target Framework Change” dialog box, click “Yes” to proceed. The Target Framework should now read .NET 4.5.

Rebuild by hitting ‘F6’. You might get some build errors due to namespace clashes between some new libraries that have been introduced in .NET 4.5. These are easy enough to fix. If you cannot, feel free to add a comment and I’ll respond.

Deploying using VIP Swap

You can deploy from within Visual Studio or from the Windows Azure Portal. In this post, I’ll show the steps to deploy through the portal.

Step 1: Generate the .cspkg and .cscfg files for upload.

Right click on your Cloud Service project (Sdk1Dot7) and select Package:

csu6.5

After the packaging is complete, a file explorer window will open with the newly created .cspkg and .cscfg files for your Cloud Service.

Step 2: Uploading the Files to the Staging Slot using the Windows Azure Portal

Open the Windows Azure portal at https://manage.windowsazure.com and select your cloud service. Click on the “Staging” tab (circled in red in the accompanying picture below).

Once on the staging tab, click on the “Update” button on the bottom panel (circled in green in the accompanying picture below).

csu7

From there a dialog will open requesting the newly created files packaged from Visual Studio.

Click the “From Local” button for both and upload the files that were generated during packaging. Remember to check the “Update even if one or more roles contain single instance” option if you have a single-instance role. These options are circled in red in the picture below. Click on the check-marked circle to proceed.

csu8

Step 3: Test the new deployment

At this point you will have your application running on OS family 3 and using .NET 4.5 in the staging slot and your original application using OS family 1/2 and .NET 4.0 in the production slot. Browse to the application by clicking the Site URL on the dashboard under the staging slot.

Step 4: Perform the VIP Swap to Production

On either the production or the staging tab, click on the “swap” button located in the bottom panel next to the update button (circled in green in the accompanying picture).

csu9

After this operation completes, you will have your application running on OS family 3 and using .NET 4.5 in place of the original application.
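
If you prefer to script the staging deployment and the swap instead of using the portal, a rough PowerShell equivalent is sketched below; the service name, label, and package/configuration paths are placeholders.

# Hedged sketch: deploy the upgraded package to the staging slot, then VIP swap it to production.
New-AzureDeployment -ServiceName "mycloudservice" -Slot "Staging" -Package ".\Sdk1dot7.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg" -Label "OSFamily3-NET45"

# After verifying the staging Site URL, swap the staging and production slots.
Move-AzureDeployment -ServiceName "mycloudservice"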

Deleting Your Deployment

The second option is to delete your deployment. This is not going to be the recommended approach for a production application because you will have downtime and there is a probability of losing the current IP address assigned to your VIP. This option is really only useful for dev/test where you do not want to go through the VIP swap life cycle or you are making changes to the cloud service that are restricted during an in-place upgrade using VIP swaps.

To delete your deployment open the Windows Azure portal at https://manage.windowsazure.com and select your cloud service. Click the “STOP” button in the bottom panel (circled in green in the accompanying picture). Click “yes” on the dialog box that pops up.

csu10

Once the service is deleted you can simply republish from Visual Studio or package and upload using Visual Studio + the management portal.
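
The delete-and-redeploy cycle can also be scripted. Again, this is a hedged sketch with placeholder names:

# Hedged sketch: remove the production deployment, then redeploy the upgraded package.
Remove-AzureDeployment -ServiceName "mycloudservice" -Slot "Production" -Force
New-AzureDeployment -ServiceName "mycloudservice" -Slot "Production" -Package ".\Sdk1dot7.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg" -Label "OSFamily3-NET45"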


• Glenn Block described Automating the cloud with Windows Azure Command Line Tools in a 1/10/2013 post to the Windows Azure blog:

If you are one of the many users of our Windows Azure Powershell cmdlets or our Windows Azure command line tool, then you know that the CLI makes it really easy to manage and deploy Websites, Mobile Services, VMs, Service Bus and much more in Windows Azure from your favorite shell prompt on Windows, Mac and Linux.

That’s not all you can do though, you can do much more! You can take our tools and use them in your favorite scripts as part of your automation infrastructure. Or you can use them right from within your favorite development environments.

Below is a bunch of posts from both the community and our team talking about this Azure automation goodness.

General scripting

These posts cover the basics of scripting from different shell environments.

Scripting Mobile Services

In this mini-series of scripting posts, Josh Twist from the Mobile Services team shows you how to use the CLI to automate tasks related to Mobile Services.

Virtual Machines (Powershell only)

Michael Washam has a great ongoing series of posts on how to use the PowerShell cmdlets.

Using the CLI within your favorite development environments

In this set of posts you’ll see how as a developer you can use our tools from within IDEs including Visual Studio, PHP Storm and Cloud 9.

As you can see, there are some great things you can do automating the cloud using the command line tools in your scripts.
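
As a small taste of what that automation looks like in a script, here is a hedged PowerShell sketch that provisions a VM end to end; the service name, credentials, and image filter are placeholders, and it assumes your subscription has already been imported.

# Hedged sketch: pick a platform image and provision a Small Windows Server VM from a script.
$image = (Get-AzureVMImage | Where-Object { $_.Label -like "Windows Server 2012*" } | Select-Object -First 1).ImageName
New-AzureQuickVM -Windows -ServiceName "myautomatedsvc" -Name "vm1" -ImageName $image -Password "<password>" -Location "West US" -InstanceSize "Small"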


Peter Wayner of InfoWorld’s Test Center Team asserted “The InfoWorld Test Center picks the best hardware, software, development tools, and cloud services of the year” in a deck for its InfoWorld's 2013 Technology of the Year Award winners: Microsoft Windows Azure slide show of 1/9/2013:


One of the simplest ways to get a server full of Microsoft Windows today is to click through a few forms on the Windows Azure platform. Voilá, you have a machine running Windows in the cloud. Azure offers full-featured Windows machines at rates that rival those for Linux instances on other clouds. If you want Linux, Java, Python, Node.js, MySQL, or NoSQL, they're available too.

Microsoft is integrating Azure with its other products at a level you don't see in cloud-only companies. Clearly Microsoft views Azure as a crucial vector for delivering its platform in all its various combinations.


Peter Wayner asserted “Microsoft's cloud wows with great price-performance, Windows toolchain integration, and plenty of open source options” in a deck for his Review: Windows Azure shoots the moon article of 12/19/2012 for InfoWorld’s Cloud Computing blog (missed when published):

A long time ago in a century slipping further and further away, Bill Gates compared MSN with the exploding World Wide Web, saw the future, and pivoted nicely to embrace the Internet. A few decades later, someone at Microsoft looked at the cloud and recognized that the old days of selling Windows Server OS licenses were fading. Today we have Windows Azure, Microsoft's offering for the cloud.

Azure is a cloud filled with racks and racks of machines like other clouds, but it also offers a wider collection of the building blocks enterprise managers need to assemble modern, flexible websites. There are common offerings such as virtual machines, databases, and storage blocks, along with not-so-common additions such as service buses, networks, and connections to data farms, address verifiers, location data, and Microsoft's own Bing search engine. There are also tools for debugging your code, sending emails, and installing databases like MongoDB and ClearDB's version of MySQL.

All of these show that Microsoft is actively trying to build a system that lets developers easily produce a working website using the tools of their choice. Azure is not just delivering commodity Microsoft machines and leaving the rest up to you -- it's starting to make it simpler to bolt together all of the parts. The process still isn't simple, but it's dramatically more convenient than the old paradigm.

Not-only-Windows Azure
The Azure service is a godsend for those who are heavily invested in Microsoft's operating systems. Many of the big clouds offer only Linux or BSD machines. Rackspace charges 33 percent more to build out a Microsoft Windows server, but Azure rents a Windows machine at the same bargain rate as Linux.

Did I say the same as Linux? Yes, because Microsoft is fully embracing many open source technologies with Azure. You can boot up a virtual machine and install a few of the popular Linux distros like Ubuntu Server 12.04 or OpenSuse 12.1. There aren't many choices of open source distros, but Microsoft has chosen a few of the more popular ones. They cost the same as the standard Windows Server 2008 R2 and Windows Server 2012 offerings.

Microsoft's embrace of open source is on full display with Azure. The company is pushing PHP, Node.js, Python, Java (if you consider Java open source), and even MySQL. Well, that's not exactly true. You can create running versions of Drupal or WordPress, and Azure will set up MySQL back ends for you. If you go to the SQL tab to start up your own SQL database, you can provision an instance of Microsoft SQL Server, but there's no mention of MySQL. That's because Microsoft is letting a third party, ClearDB, deliver MySQL. It's one of a dozen or so extras you can buy.

The websites with Drupal or WordPress are among many options available. Microsoft will let you have up to 10 free ones with your account. Then you push your HTML or PHP to them with Git, and the server does the rest. (Notice the embrace of Git too.)

These free options are come-ons. If your website takes off and you start getting traffic, you can upgrade to shared services or full, managed machines that can be load balanced. The documentation is a bit cagey about what happens as you start fiddling with the Scale control panel, but you get better guarantees of service and less throttling. If you move over to the Reserved setting, you get dedicated virtual machines with resource guarantees. This is a pretty simple way to build and test a website before deploying it for production.

Your list of Azure virtual machines and services can grow pretty quickly.

Read more.


David Linthicum (@DavidLinthicum) asserted “Though many cloud providers see value from removing the personal touch, they may discover that customer service is key to success” in a deck for his Cloud computing's Achilles' heel: Poor customer service article of 1/8/2013 for InfoWorld’s Cloud Computing blog:

I'm consistently taken aback by many businesses' disregard for customer service. As long as customers push back on companies that treat them shabbily, enterprises willing to cut service will find themselves out of business or forced to merge with establishments that treat their customers better.

Giving short shrift to customer service remains an issue in the cloud, which is based on the notion of automation and self-provisioning at scale. Dealing with people individually seems contrary to the idea of the cloud. Many public cloud providers assumed they could just put a layer of Web pages between them and their customers, and all would be right -- no phones to answer, no planes to board.

The truth of the matter is that small businesses drove the initial growth of cloud computing. Because typical small businesses can't pay much for cloud services, they weren't surprised when they couldn't get a person on the phone. The cloud providers that courted small businesses continued to grow without much of an investment in customer service.

These days, larger enterprises are investing in public clouds, and they're accustomed to real people talking to them on the phone, account managers in their offices, and cell numbers for support engineers on call around the clock. In other words, they want public cloud providers to offer the same level of customer service as the larger enterprise software providers.

The problem is that many of the public cloud providers are not set up to meet this level of customer service. They simply don't have the people or the systems in place. To establish such systems and personnel, they'll have to raise their prices -- and no one is doing that these days.

But as public clouds push into larger enterprises, they will have no choice but to provide a richer customer service experience. Large enterprise IT demands that level of service, and public clouds won't be able to penetrate the large enterprise market without it.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

No significant articles today


<Return to section navigation list>

Cloud Security, Compliance and Governance

Chris Hoff (@Beaker) suggested Wanna Be A Security Player? Deliver It In Software As A Service Layer… on 1/9/2013:

As I continue to think about the opportunities that Software Defined Networking (SDN) and Network Function Virtualization (NFV) bring into focus, the capability to deliver security as a service layer is indeed exciting.

I wrote about how SDN and OpenFlow (as a functional example) and the security use cases provided by each will be a differentiating capability back in 2011: The Killer App For OpenFlow and SDN? Security, OpenFlow & SDN – Looking forward to SDNS: Software Defined Network Security, and Back To The Future: Network Segmentation & More Moaning About Zoning.

Recent activity in the space has done nothing but reinforce this opinion. My day job isn’t exactly lacking in excitement, either :)

As many networking vendors begin to bring their SDN solutions to market — whether in the form of networking equipment or controllers designed to interact with them — one of the missing strategic components is security. This isn’t a new phenomenon, unfortunately, and as such, predictably there are also now startups entering this space and/or retooling from the virtualization space and stealthily advertising themselves as “SDN Security” companies :)

Like we’ve seen many times before, security is often described (confused?) as a “simple” or “atomic” service and so SDN networking solutions are designed with the thought that security will simply be “bolted on” after the fact and deployed not unlike a network service such as “load balancing.” The old “we’ll just fire up some VMs and TAMO (Then a Miracle Occurs) we’ve got security!” scenario. Or worse yet, we’ll develop some proprietary protocol or insertion architecture that will magically get traffic to and from physical security controls (witness the “U-TURN” or “horseshoe” L2/L3 solutions of yesteryear.)

The challenge is that much of Security today is still very topologically sensitive and depends upon classical networking constructs to be either physically or logically plumbed between the “outside” and the asset under protection, or it’s very platform dependent and lacks the ability to truly define a policy that travels with the workload regardless of the virtualization, underlay OR overlay solutions.

Depending upon the type of control, security is often operationalized across multiple layers using wildly different constructs, APIs and context in terms of policy and disposition depending upon its desired effect.

Virtualization has certainly evolved our thinking about how we should think differently about security mostly due to the dynamism and mobility that virtualization has introduced, but it’s still incredibly nascent in terms of exposed security capabilities in the platforms themselves. It’s been almost 5 years since I started raging about how we need(ed) platform providers to give us capabilities that function across stacks so we’d have a fighting chance. To date, not only do we have perhaps ONE vendor doing some of this, but we’ve seen the emergence of others who are maniacally focused on providing as little of it as possible.

If you think about what virtualization offers us today from a security perspective, we have the following general solution options:

  1. Hypervisor-based security solutions, which may apply policy as a function of the virtual NIC of the workloads they protect.
  2. Extensions of virtual-networking (i.e. switching) solutions that enable traffic steering and some policy enforcement that often depend upon…
  3. Virtual Appliance-based security solutions that require manual or automated provisioning, orchestration and policy application in user space that may or may not utilize APIs exposed by the virtual networking layer or hypervisor

There are tradeoffs across each of these solutions: scale, performance, manageability, statefulness, platform dependencies, etc. There simply aren’t many platforms that natively offer security capabilities as a function of service delivery that allows arbitrary service definition with consistent and uniform ways of describing the outcome of the policies at these various layers. I covered this back in 2008 (it’s a shame nothing has really changed) in my Four Horsemen Of the Virtual Security Apocalypse presentation.

As I've complained for years, we still have 20 different ways of defining how to instantiate a five-tuple ACL as a basic firewall function.
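
To make that complaint concrete: the five-tuple in question is just source address, destination address, source port, destination port, and protocol. Below is a minimal, vendor-neutral sketch in Python of what such a rule looks like; the field names and the matches helper are purely illustrative, not any particular product's API.

from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass(frozen=True)
class FiveTupleRule:
    """A basic firewall rule: the classic five-tuple plus an action."""
    src_cidr: str              # e.g. "10.0.1.0/24"
    dst_cidr: str              # e.g. "10.0.2.10/32"
    src_port: Optional[int]    # None means "any"
    dst_port: Optional[int]
    protocol: str              # "tcp", "udp", ...
    action: str                # "allow" or "deny"

    def matches(self, src_ip, dst_ip, src_port, dst_port, protocol):
        # A packet matches when every element of the five-tuple matches.
        return (ip_address(src_ip) in ip_network(self.src_cidr)
                and ip_address(dst_ip) in ip_network(self.dst_cidr)
                and self.protocol == protocol
                and self.src_port in (None, src_port)
                and self.dst_port in (None, dst_port))

# Allow the web tier to reach the database tier on TCP 1433, for example.
rule = FiveTupleRule("10.0.1.0/24", "10.0.2.10/32", None, 1433, "tcp", "allow")
print(rule.matches("10.0.1.7", "10.0.2.10", 50123, 1433, "tcp"))  # True

Every platform expresses essentially this same structure, just with different field names, objects, and notions of "any", which is exactly the fragmentation being complained about here.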

Out of the Darkness…

The promise of SDN truly realized — the ability to separate the control, forwarding, management and services planes, and to deploy security as a function of available service components across overlays and underlays — means we will be able to take advantage of any of these models, so long as we have a way to programmatically interface with the various strata, regardless of whether we provision at the physical, virtual or overlay virtual layer.

It’s truly exciting. We’re seeing some real effort to enable true security service delivery.

When I think about how to categorize the intersection of “SDN” and “Security,” I think about it the same way I have with virtualization and Cloud:

  • Securing SDN (Securing the SDN components)
  • SDN Security Services (How do I take security and use SDN to deliver security as a service)
  • Security via SDN (What NEW security capabilities can be derived from SDN)

There are numerous opportunities with each of these categories to really make a difference to security in the coming years.

The notion that many of our network and security capabilities are becoming programmatic means we *really* need to focus on securing SDN solutions, especially given the potential for abuse given the separation of the various channels. (See: Software Defined Networking (In)Security: All Your Control Plane Are Belong To Us…)

Delivering security as a service via SDN holds enormous promise for reasons I’ve already articulated and gives us an amazing foundation upon which to start building solutions we can’t imagine today given the lack of dynamism in our security architecture and design patterns.

Finally, the first two elements give rise to the third, allowing us to do things we can't even imagine with today's traditional physical and even virtual solutions.

I’ll be starting to highlight really interesting solutions I find (and am able to talk about) over the next few months.

Security enabled by SDN is going to be huge.

More soon.



<Return to section navigation list>

Cloud Computing Events

No significant articles today


<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Larry Dignan (@ldignan) asserted “Amazon's infrastructure as a service unit may be underestimated by Wall Street. Bottom line: AWS may change Amazon's profit profile completely in the years to come, argues Macquarie Capital” in his Amazon's AWS: $3.8 billion revenue in 2013, says analyst post of 1/7/2013 to ZDNet’s Cloud blog:

Amazon Web Services is expected to have revenue of $3.8 billion in 2013 and could be worth $19 billion to $30 billion if it were a standalone company, argued Macquarie Capital analysts in a research note.

Macquarie's argument, led by analyst Ben Schachter, relies on the addressable market for cloud computing and the assumption that AWS accounts for all of Amazon's growth in the "other" revenue category.

The Macquarie research note landed at the same time as a Morgan Stanley upgrade. Scott Devitt upgraded Amazon based on international growth and global fulfillment services. Devitt called AWS a strategic asset.

Schachter's report on AWS was far more interesting. Schachter said that AWS is likely to land more large enterprises, a reality that is likely to boost growth. To date, AWS has relied on startups and small companies. The large company argument adds up. At AWS' customer and partner powwow last year, companies like Pfizer were readily available.

Independently, I've confirmed the profile of one U.S. sales region for AWS. In a nutshell, this region features top 200 accounts that range from $5,000 a month to about $200,000 a month. If you extrapolate those numbers throughout the U.S. and combine them with international markets and smaller accounts paying about $1,000 a month, it's clear that AWS has some serious growth ahead. That growth can come from additional partnerships and better channel efforts alone.

Macquarie's Schachter is estimating that AWS' current addressable market was $11 billion in 2012 and the unit delivered actual revenue of about $2 billion. In 2013, Schachter estimates that AWS will have revenue of $3.8 billion.

[Chart: Macquarie's AWS revenue growth estimates]

Among the key points from the Macquarie report:

  • AWS is a 100 percent gross margin business for Amazon. Amazon's AWS costs run through its technology and content expense line. As AWS grows faster than Amazon's retail business, the gross margin profile for the entire company changes.
  • Storage growth for AWS' S3 services is exponential and can carry growth for years.
  • AWS is expected to have revenue of $3.8 billion in 2013, $6.2 billion in 2014 and $8.8 billion in 2015. In 2015, AWS will be 7 percent of Amazon's revenue---significant, but not large enough for the retailer to be required to break out numbers in its financial reports.
  • Comparisons to AWS are tricky since Rackspace is among the only standalone direct competitors. Savvis and Terremark compete with AWS, but those outfits are subsidiaries of CenturyLink and Verizon, respectively.

Schachter said:

Using our estimate of $3.8bn for 2013 AWS revenues, and applying a ~5x multiple based on the comps noted above, we arrive at a valuation of ~$19bn for the business on an EV/Sales basis (equating to ~$41/share of AMZN stock). Importantly, we believe this to be a conservative valuation multiple, as AWS revenues are growing much faster than any of the comps incorporated above. At an 8x valuation multiple, we estimate the AWS business could be worth $30bn as a stand-alone company, or ~$66/share.
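
For readers who want to check the arithmetic, here is a quick sketch in Python; note that the roughly 460 million share count is inferred from the note's own $19bn and ~$41-per-share figures rather than stated in it.

# Back-of-the-envelope check of Macquarie's EV/Sales valuation.
aws_revenue_2013 = 3.8e9           # Schachter's 2013 AWS revenue estimate
shares_outstanding = 19e9 / 41     # ~463M shares, inferred from the note's own figures

for multiple in (5, 8):
    enterprise_value = aws_revenue_2013 * multiple
    per_share = enterprise_value / shares_outstanding
    print(f"{multiple}x EV/Sales: ${enterprise_value / 1e9:.0f}bn (~${per_share:.0f}/share)")
# 5x -> $19bn (~$41/share); 8x -> $30bn (~$66/share)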


Jeff Barr (@jeffbarr) explained Amazon CloudWatch - Alarm Actions in a 1/8/2013 post:

As you probably know, Amazon CloudWatch provides monitoring services for your cloud resources and your applications. You can track cloud, system, and application metrics, see them visually, and arrange to be notified (via a CloudWatch alarm) if they go beyond a value that you specify. For example, you can track the CPU load of your EC2 instances and receive a notification (via email and/or Amazon SNS) if it exceeds 90% for a period of 5 minutes.

Today we are giving you the ability to stop or terminate your EC2 instances when a CloudWatch alarm is triggered. You can use this as a failsafe (detect an abnormal condition and then act) or as part of your application's processing logic (await an expected condition and then act).

Before we dig in, I should remind you of one thing. If you are using EBS-backed EC2 instances, you can stop them at any point, with the option to restart them later, while retaining the same instance ID and root volume (this is, of course, distinct from the associated termination option).

Failsafe Ideas
If you (or your developers) are forgetful, you can detect unused EC2 instances and shut them down. You could do this by detecting a very low load average for an extended period of time. This type of failsafe could be used to reduce your AWS bill by making sure that you are not paying for resources you're not actually using.

You could also implement a failsafe that would detect runaway instances (for example, CPU pegged at 100% for an extended period of time). Perhaps your application gets stuck in a loop from time to time (only when you are not looking, of course). You could also use our CloudWatch monitoring scripts to detect and act on other situations, such as excessive memory utilization.
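
For the forgetful-developer scenario above, here is a minimal sketch in Python using boto3, the current AWS SDK for Python (the SDK available when this post was written differed, but the parameters map directly onto the console fields described in the walkthrough below). The instance ID, region, and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Stop an EBS-backed instance whose average CPU stays at or below 5%
# for six consecutive 5-minute periods (30 minutes of idleness).
cloudwatch.put_metric_alarm(
    AlarmName="stop-idle-instance-i-1234567890abcdef0",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-1234567890abcdef0"}],  # placeholder ID
    Statistic="Average",
    Period=300,
    EvaluationPeriods=6,
    Threshold=5.0,
    ComparisonOperator="LessThanOrEqualToThreshold",
    AlarmActions=[
        "arn:aws:automate:us-east-1:ec2:stop",            # the new stop action
        "arn:aws:sns:us-east-1:123456789012:ops-alerts",  # optional notification (placeholder ARN)
    ],
)

Swapping the stop ARN for arn:aws:automate:us-east-1:ec2:terminate gives you the terminate behavior instead.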

Processing Logic
Many AWS applications will pull work from an Amazon SQS queue, do the work, and then pass the work along to the next stage of a processing pipeline. You can detect and terminate worker instances that have been idle for a certain period of time.

You can use a similar strategy to get rid of instances that are tasked with handling compute-intensive batch processes. Once the CPU goes idle and the work is done, terminate the instance and save some money!

Application Integration
You can also create CloudWatch alarms based on Custom Metrics that you observe on an instance-by-instance basis. You could, for example, measure calls to your own web service APIs, page requests, or message postings per minute, and respond as desired.
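
To sketch that idea (the namespace, metric name, and instance ID below are made up for illustration), a worker could publish its own throughput metric once a minute and you could then alarm on it exactly as above:

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Publish a hypothetical custom metric from a worker process once per minute.
cloudwatch.put_metric_data(
    Namespace="MyApp/Pipeline",             # hypothetical namespace
    MetricData=[{
        "MetricName": "MessagesProcessed",  # hypothetical metric name
        "Dimensions": [{"Name": "InstanceId", "Value": "i-1234567890abcdef0"}],
        "Value": 42.0,
        "Unit": "Count",
    }],
)

An alarm on MessagesProcessed sitting at zero for a stretch then becomes a reasonable proxy for "this worker has nothing left to do, terminate it."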

Setting Up Alarm Actions
You can set up alarm actions from the EC2 or CloudWatch tabs of the AWS Management Console. Let's say you want to start from the EC2 tab. Right-click on the instance of interest and choose Add/Edit Alarms:

Choose your metrics, set up your notification (SNS topic and optional email) and check Take the action, and choose either Stop or Terminate this instance:

The console will confirm the creation of the alarm, and you're all set (if you asked for an email notification, you need to confirm the subscription within three days):

Your Turn
I can speak for the entire CloudWatch team when I say that we are interested in hearing more about how you will put this feature to use. Feel free to leave a comment and I'll pass it along to them ASAP.


