Sunday, November 25, 2012

Windows Azure and Cloud Computing Posts for 11/22/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.

‡ Updated 11/25/2012 11:00 AM PST with new articles marked ‡.
•• Updated 11/24/2012 5:00 PM PST with new articles marked ••.
• Updated 11/23/2012 5:00 PM PST with new articles marked •.

Tip: Copy a bullet (•, ••) or dagger (‡), press Ctrl+F, paste it into the Find textbox and click Next to locate updated articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue, Hadoop and Media Services

•• Gaurav Mantri (@gmantri) described Storage Client Library 2.0 – Migrating Queue Storage Code in an 11/24/2012 post:

A few days ago, I wrote a blog post about migrating code from storage client library 1.7 to 2.0 to manage Windows Azure Table Storage. You can read that post here: http://gauravmantri.com/2012/11/17/storage-client-library-2-0-migrating-table-storage-code/. In this post, I will talk about migrating code from storage client library 1.7 to 2.0 for managing Windows Azure Queue Storage. Unlike the table storage changes, the queue storage changes are not significantly different from the previous version, so hopefully the migration exercise will be a rather painless one.

Like the previous post, I will attempt to provide some code samples through which I will try to demonstrate how you can do some common tasks when working with Azure Queue Storage. What I did is write two simple console applications: one which uses storage client library version 1.7 and the other which uses version 2.0, and in those two applications I demonstrated some simple functionality.

Read These First

Since the version 2.0 library is significantly different from the previous ones, before you decide to upgrade your code to this version I strongly urge you to read the following blog posts by the storage team, as there are many breaking changes.

Introducing Windows Azure Storage Client Library 2.0 for .NET and Windows Runtime

http://blogs.msdn.com/b/windowsazurestorage/archive/2012/10/29/introducing-windows-azure-storage-client-library-2-0-for-net-and-windows-runtime.aspx

Windows Azure Storage Client Library 2.0 Breaking Changes & Migration Guide

http://blogs.msdn.com/b/windowsazurestorage/archive/2012/10/29/windows-azure-storage-client-library-2-0-breaking-changes-amp-migration-guide.aspx

Getting Started

Before jumping into the code, there’re a few things I would like to mention:

Storage Client Libraries

To get the reference for storage client library 1.7, you can browse your local computer and navigate to the Azure SDK installation directory (C:\Program Files\Microsoft SDKs\Windows Azure\.NET SDK\2012-10\ref – assuming you have SDK 1.8 installed) and select Microsoft.WindowsAzure.StorageClient.dll from there.

To get the reference for storage client library 2.0 (or the latest version for that matter), I would actually recommend getting it using NuGet. That way you'll always get the latest version. You can simply get it by executing the following command in the NuGet Package Manager Console: Install-Package WindowsAzure.Storage. While it's an easy way to get the latest version upgrades, you must not upgrade before ensuring the new version won't break anything in your existing code.

Namespaces

One good thing that has been done with version 2.0 is that the functionality is now neatly segregated into different namespaces. For queue storage, the following two namespaces are used:

using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.Queue.Protocol;
Queue Request Options and Operation Context

One interesting improvement is that with every storage client library function you can pass two additional optional parameters: Queue Request Options and Operation Context. The Queue Request Options object allows you to control the retry policies (to take care of transient errors) and some of the server-side behavior, such as the request timeout. The Operation Context provides the context for a request to the storage service and additional runtime information about its execution. It allows you to get more information about the request/response, plus it allows you to pass a client request id which gets logged by storage analytics. For the sake of simplicity, I have omitted these two parameters from the code I included below.
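For illustration, here is roughly how those two optional parameters can be supplied on a 2.0 call such as AddMessage (this snippet is mine, not from the original post; the retry and timeout values are arbitrary, and it assumes a using directive for Microsoft.WindowsAzure.Storage.RetryPolicies):

            //Control retries (for transient errors) and the server timeout for this call
            QueueRequestOptions requestOptions = new QueueRequestOptions()
            {
                RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 3),
                ServerTimeout = TimeSpan.FromSeconds(30),
            };
            //Supply a client request id that storage analytics will log
            OperationContext operationContext = new OperationContext()
            {
                ClientRequestID = Guid.NewGuid().ToString(),
            };
            queue.AddMessage(new CloudQueueMessage("This is a test message"), null, null, requestOptions, operationContext);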

Operations

Now let's see how you can perform some operations. What I've done is first show how you would do an operation with version 1.7 and then how you would do the same operation with version 2.0.

Create Queue

If you’re using the following code with version 1.7 to create a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.CreateIfNotExist();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.CreateIfNotExists();

The only difference you will see in the two snippets above is that version 2.0 is now grammatically correct :) [in 1.7 the method is CreateIfNotExist() and in 2.0 it is CreateIfNotExists(); notice the extra "s"].

Delete Queue

If you’re using the following code with version 1.7 to delete a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.Delete();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.DeleteIfExists();

One interesting improvement with 2.0 is that the DeleteIfExists() function swallows the "Resource Not Found" (HTTP 404) error if you try to delete a queue which does not exist. With 1.7, you don't have that option and you would need to handle the 404 error in your code. Please note that the Delete() function is still available on a queue in 2.0.
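For reference, here is a sketch (my code, not from the original post) of how you would swallow that 404 yourself with version 1.7, since DeleteIfExists() is not available there:

            try
            {
                queue.Delete();
            }
            catch (StorageClientException exception)
            {
                //The queue does not exist; there is nothing to delete
                if (exception.ErrorCode != StorageErrorCode.ResourceNotFound)
                {
                    throw;
                }
            }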

List Queues

If you’re using the following code with version 1.7 to list queues:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            var queues = cloudQueueClient.ListQueues();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            var queues = cloudQueueClient.ListQueues();

As you can see, the list queues functionality is more or less the same in both versions.

Add Message

If you’re using the following code with version 1.7 to add a message to the queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            string messageContents = "This is a test message";
            CloudQueueMessage message = new CloudQueueMessage(messageContents);
            queue.AddMessage(message);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            string messageContents = "This is a test message";
            CloudQueueMessage message = new CloudQueueMessage(messageContents);
            queue.AddMessage(message);

As you can see, the add message functionality is the same in both versions.

Peek Messages

As you know, peek messages functionality allows you to fetch up to 32 messages from a queue without altering their visibility. If you’re using the following code with version 1.7 to peek messages from a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            int numMessagesToFetch = 32;//Max messages which can be fetched at a time is 32
            IEnumerable<CloudQueueMessage> messages = queue.PeekMessages(numMessagesToFetch);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            int numMessagesToFetch = 32;//Max messages which can be fetched at a time is 32
            IEnumerable<CloudQueueMessage> messages = queue.PeekMessages(numMessagesToFetch);

As you can see, the peek messages functionality is the same in both versions.

Peek Message

While peek messages functionality allows you to fetch multiple messages from a queue, you can use peek message functionality to fetch a single message without altering its visibility. If you’re using the following code with version 1.7 to peek at a message from a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            CloudQueueMessage message = queue.PeekMessage();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            CloudQueueMessage message = queue.PeekMessage();

As you can see, the peek message functionality is the same in both versions.

Get Messages

Like the peek messages functionality, the get messages functionality allows you to fetch up to 32 messages from a queue. The difference is that when you "get" messages from a queue, they become invisible to all other applications for an amount of time specified by you; in other words, a message gets "de-queued". If you're using the following code with version 1.7 to get messages from a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            int numMessagesToFetch = 32;//Max messages which can be fetched at a time is 32
            TimeSpan visibilityTimeout = TimeSpan.FromSeconds(30);//Message will be invisible for 30 seconds.
            IEnumerable<CloudQueueMessage> messages = queue.GetMessages(numMessagesToFetch, visibilityTimeout);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            int numMessagesToFetch = 32;//Max messages which can be fetched at a time is 32
            TimeSpan visibilityTimeout = TimeSpan.FromSeconds(30);//Message will be invisible for 30 seconds.
            IEnumerable<CloudQueueMessage> messages = queue.GetMessages(numMessagesToFetch, visibilityTimeout);

As you can see, the get messages functionality is the same in both versions.

Get Message

While the get messages functionality allows you to fetch multiple messages from a queue, you can use the get message functionality to fetch a single message and make it invisible to other applications for a certain amount of time. If you're using the following code with version 1.7 to get a message from a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            TimeSpan visibilityTimeout = TimeSpan.FromSeconds(30);//Message will be invisible for 30 seconds.
            CloudQueueMessage message = queue.GetMessage(visibilityTimeout);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            TimeSpan visibilityTimeout = TimeSpan.FromSeconds(30);//Message will be invisible for 30 seconds.
            CloudQueueMessage message = queue.GetMessage(visibilityTimeout);

As you can see, the get message functionality is the same in both versions.

Delete Message

If you’re using the following code with version 1.7 to delete a message from a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            CloudQueueMessage message = queue.GetMessage();
            queue.DeleteMessage(message);
            //Or you could use something like this
            //queue.DeleteMessage(message.Id, message.PopReceipt);

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            CloudQueueMessage message = queue.GetMessage();
            queue.DeleteMessage(message);
            //Or you could use something like this
            //queue.DeleteMessage(message.Id, message.PopReceipt);

As you can see, the delete message functionality is the same in both versions.

Get Approximate Messages Count

A queue can contain a very large number of messages; however, you can only fetch up to 32 messages at a time for processing (or peeking). As the name suggests, this functionality lets you find out the approximate number of messages in a queue. If you're using the following code with version 1.7 to get the approximate messages count in a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            int approximateMessagesCount = queue.RetrieveApproximateMessageCount();
            //or you could use something like this
            //queue.FetchAttributes();
            //int approximateMessagesCount = queue.ApproximateMessageCount.Value;

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.FetchAttributes();
            int approximateMessagesCount = queue.ApproximateMessageCount.Value;

Please note that the "RetrieveApproximateMessageCount()" method is not available in version 2.0.

Clear Queue

Clear queue functionality allows you to delete all messages from a queue without deleting the queue. If you’re using the following code with version 1.7 to clear a queue:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentialsAccountAndKey(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.Clear();

You would use something like this with version 2.0 to achieve the same:

            CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
            CloudQueueClient cloudQueueClient = storageAccount.CreateCloudQueueClient();
            CloudQueue queue = cloudQueueClient.GetQueueReference(queueName);
            queue.Clear();

As you can see, the clear queue functionality is the same in both versions.

Closing Thoughts

As I mentioned above and demonstrated through examples, there are a few differences between storage client library 1.7 and 2.0 as far as managing queues is concerned. However, in my opinion, they are not as drastic as with tables, and the migration should be considerably smoother.

Finally, don't give up on Storage Client Library 1.7 just yet. There are still some components which depend on version 1.7. A good example is Windows Azure Diagnostics, which still depends on the older version at the time of writing this blog. The good thing is that versions 1.7 and 2.0 can co-exist in a project.

Source Code

You can download the source code for this project from here: Sample Project Source Code

Summary

The examples I presented in this post are quite basic, but hopefully they give you an idea about how to use the latest version of the storage client library. In general, I am quite pleased with the changes the team has made. Please feel free to share your experience with the migration exercise by providing comments; this will help me and the readers of this blog immensely. Finally, if you find any issues with this post, please let me know and I will try and fix them ASAP.



<Return to section navigation list>

Windows Azure SQL Database, Federations and Reporting, Mobile Services

•• Shay Yannay (@ShayYannay) described Windows Azure SQL Database Management [with PowerShell Scripts] in an 11/21/2012 post to The Code Project (missed when published):

Introduction

Windows Azure SQL Database provides very handy management commands which are exposed either by a REST API or by PowerShell cmdlets.

Looking closely we can find various management operations such as:

  1. Creating/deleting a Windows Azure SQL Database server in our subscription (the equivalent of an on-premises SQL Server instance)
  2. Defining firewall rules to allow access at the server or database level (SQL Database provides two firewall layers: server level and database level)
  3. Updating the server's main password.

The management operations above can also be performed from the Azure management portal, so you probably ask yourself why those commands are exposed in the first place.
The answer is simple: automation.

Here are a few steps that probably every company goes through in order to set up a Windows Azure cloud service:

  1. First, they will create a server in Windows Azure SQL Database,
  2. Then they will create database instances on that server,
  3. After that, firewall rules will need to be defined in order for the application to get access to the databases,
  4. Finally, the cloud service package will be uploaded to the Windows Azure cloud.
Using the code

Automating that process can really speed things up when setting up a new environment.
Pulling our sleeves up, let's create a PowerShell script that accommodates the configuration process above:

# Create a new server
New-AzureSqlDatabaseServer -AdministratorLogin [user_name] -AdministratorLoginPassword [password] -Location [data_center_name]

# Create server firewall rule
New-AzureSqlDatabaseServerFirewallRule -ServerName "[server_name]" -RuleName "allowAzureServices" -StartIpAddress 0.0.0.0 -EndIpAddress 0.0.0.0

# Setup a new database
$connectionString = "Server=tcp:[server_name].database.windows.net;Database=master;User ID=[user_name]@[server_name];Password=[password];Trusted_Connection=False;Encrypt=True;" 
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()

# Verify the existence of the desired database
$command = New-Object System.Data.SQLClient.SQLCommand
$command.Connection = $connection
$command.CommandText = "select name from sys.databases where name='[database_name]'"
$reader = $Command.ExecuteReader()

if(!$reader.HasRows){
    # Create the database
    $command.CommandText = "CREATE DATABASE [database_name]"
    $command.ExecuteNonQuery()
}
$reader.Close()
$connection.Close()

# Create a cloud service
$packagePath = "[.cspkg path]" 
$configPath = "[.cscfg path]"
New-AzureService -ServiceName "[service_name]" -Label "[service_label]" -Location "[data_center_location]"

# Upload an application package to the cloud service production slot
Set-AzureSubscription "[subscription_name]" -CurrentStorageAccount "[azure_storage_account_name]"
New-AzureDeployment -ServiceName "[service_name]" -Slot "Production" -Package $packagePath -Configuration $configPath -Label "[deployment_label]"  
Points of Interest

The automation process above can be launched not only from your local computer but also from the cloud service itself, simply by adding a start-up script which is launched when the role instance starts (of course, you need to take care that only a single VM handles the flow).
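As an illustration (this fragment is mine, not from the article, and the role and script names are placeholders), such a start-up script is registered in the cloud service's ServiceDefinition.csdef like this:

<WebRole name="[role_name]">
  <Startup>
    <!-- Run the provisioning script with elevated rights before the role starts handling traffic -->
    <Task commandLine="[startup_script].cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>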

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


•• Tom Rudick (@tmrudick) presented a 00:21:06 Deep Dive into Windows Azure Mobile Services session to the Cascadia JS conference on 11/12/2012 (missed when presented):


A deep dive into Azure's new Mobile Service development platform - get your mobile app cloud-enabled in minutes.


Chris Klug (@ZeroKoll) offered An Introduction to Windows Azure Mobile Services on 11/22/2012 (missed when published):

At the time of writing, Mobile Services is still in preview, so I believe that you have to “request” access to it. But as soon as you have, you get a new icon in your menu in the Azure management portal, which is all cool. But what is Windows Azure Mobile Services (Mobile Services from now on)?

Well, Mobile Services is basically a “layer” on top of Microsoft's cloud offering. Initially, it is a great abstraction for SQL Databases, but the idea, as I have understood it at least, is that it will grow as the number of Azure services expands, giving users a simple API to work against and, in doing so, making us as developers much more productive. But as I said, today, it is basically a very nifty layer on top of SQL Databases. However, that layer is really cool, simple to work with, and supports very rapid development.

I would say that Mobile Services is a simple backend-as-a-service that works VERY well if you are building an “app”. And that is probably not too far off from what Microsoft sees it as either, at least if you consider the platforms they support at the moment, which are Windows Phone 8, Windows 8 Store apps and iOS.

But that’s enough talk. Let’s have a look at how it works…

The first thing you need to do after signing up for the preview, is to create a new mobile service, which is done through the Azure Management Portal.

[Screenshot: new_mobile_service]

The first thing you need to do is to find a unique name for your service. This is pretty easy at the moment as it is still in preview… Next, you decide whether you want to use an existing SQL Database, or if you want to use a new one.

The cool thing here is that Mobile Services will prefix all its tables, so you can just add it to an existing db if you want to. And if you go for a new one, you can still connect to it from any other environment and query the tables, or even add new ones. However, adding new ones will NOT make them available to your Mobile Services client…

At the moment Mobile Services is only available in West and East US, but that will obviously change when the service goes live…

Next, you get to either select the database and logins and stuff, or select the name of the database if you decide to create a new one. In my case, I decided to create a new one and got the below screen.

[Screenshot: new_mobile_service_step_2]

Once you have configured the database, and clicked the little “V”, it will take a couple of seconds, and then the portal will tell you that your service is ready. That’s it! A brand new backend up and running…

Ok, so where do you go from here? Well, it actually keeps being this simple… If you click the newly created Mobile Service, you will be met by a screen that tells you that your service was created. It also asks you what platform you want to work with. In my case, I will choose Windows Store. Beneath that, you get 2 choices, “Create a Windows Store app” or “Connect an existing Windows Store app”.

If you choose any of the other platforms, the options will be the same…but for that platform of course…

If you choose to create a new application, the portal will help you to create a table, and give you a VS2012 solution with a fully working application, and if you choose to connect an existing app, it will give you a code snippet to get Mobile Services working in your application (this is what I will be doing).

In either case, it also gives you links to download VS2012 Express as well as the Mobile Services SDK. In my case, I will use my regular VS2012, but I still need to download the SDK. But since I have already installed it, I can just go ahead and build myself an application, so let’s do that.

I start off by creating a blank Store App project. I then add two references, one to Json.NET, which is a big part of the SDK, and one to the Mobile Services assembly.

According to a bunch of sites on-line, the Mobile Services assembly should be available as an “Extension” under “Assemblies” in the “Add Reference” dialog in VS2012. However, that is missing in my case, and I have also heard other people having the same issue. Luckily, you can manually add the reference by browsing to “C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0\ExtensionSDKs\MobileServicesManagedClient\0.2.0.0\References\CommonConfiguration\neutral\Microsoft.WindowsAzure.MobileServices.Managed.dll”.

If you are doing WP8, the location is “C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v8.0\ExtensionSDKs\MobileServicesManagedClient\0.2.0.0\References\CommonConfiguration\neutral\Microsoft.Azure.Zumo.WindowsPhone8.Managed.dll”, and I love the fact that the WP8 assembly includes the word “Zumo”, I guess that must be a codename or something… [see below]

I also want to mention that there are minor differences in the APIs for the platforms, but they are mostly identical…

Now that we have the 2 references, I can go back to the Azure Portal and copy the code-snippet that it gives me, and paste it into my App.xaml.cs file.

[Screenshot: new_mobile_service_step_3]

sealed partial class App : Application
{
    public static MobileServiceClient MobileService = new MobileServiceClient(
        "https://darksidecookie.azure-mobile.net/",
        "XXXXXXXXXXX"
    );
    ...
}

For this simple example, I will go ahead and declare the Mobile Services proxy like this, but for anything more “real world like” I would definitely suggest NOT to add static variables like this to the App.xaml.cs just to make it available to the entire application. It stinks like hell! But this is a demo, so I will go ahead and do it anyway…

Ok, now the application is prepared to use Mobile Services, so let’s go ahead and do just that.

First off, I add some XAML for input in my MainPage.xaml

<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <StackPanel HorizontalAlignment="Center" VerticalAlignment="Center">
        <TextBox x:Name="txtMessage" Width="200" />
        <Button Content="Save Message" Click="SaveMessage" />
    </StackPanel>
</Grid>

As you can see, it is just a simple StackPanel with a TextBox and a Button. Next up is to handle the button’s Click event. But before I can save the message, I need to do two things. First of all, I need to create a table to store it in. So I go back to the Azure Management Portal and click the “DATA” link at the top of the “darksidecookie” service page.

This gives me a view of all the tables in the service. At the moment, there are none, so I click the “Create” link and enter the name of the table, which is “messages” in this case.

[Screenshot: new_mobile_service_step_4]

Two things to note. The first being my choice to name the table “messages”. I go for a lower-case name as all the communication back and forth to the Mobile Service endpoint is going to be Json, and since JavaScript is camel-cased, I just think it looks good to have the table name camel-cased. But you can obviously name the table however you want…

The second thing to note is the permissions settings. All tables have the ability to set the permissions for all CRUD operations. The settings are fairly coarse though: “Everyone”, “Anybody with the application key”, “Only Authenticated Users” and “Only Scripts and Admins”.

“Everyone” is pretty obvious, it lets anyone do the action. And since we are talking about a public REST endpoint, it REALLY means everyone.

“Anybody with the application key” means that anyone that has the application key can do it, which basically means anyone that has your application. This is a very weak form of security, but it is better than “Everyone”, and is also the default.

“Only Authenticated Users” means that only authenticated users can do it…doh! I will do another post on authentication, which is REALLY simple, but for now I can say that you can VERY easily authenticate your users of the Mobile Service using Microsoft ID, Facebook, Twitter and Google.

And the final one, “Only Scripts and Admins”, means that only scripts (I will talk more about them soon) and special admin applications can use the table in the defined manner…

Ok, enough about that. I will just leave them all to be the default “Anybody with the application key”. After clicking ok, it takes a couple of seconds, but then the table has been created, and it is time to go back to the application.

The second thing to do before implementing the Button’s click handler is to create an entity to send to the service. In my case, I will create a really simple class that looks like this

public class Message
{
    public int Id { get; set; }
    public string Msg { get; set; }
}

However, Mobile Services defaults to inserting entities into tables based on their class names, and the property values into columns named the same as the properties. In my case, this doesn't work as my table is named “messages”. Not to mention that I want my columns to be named using camel-casing. Luckily, this is easy to change by using a couple of attributes

[DataTable(Name = "messages")]
public class Message
{
    [DataMember(Name = "id")]
    public int Id { get; set; }

    [DataMember(Name = "message")]
    public string Msg { get; set; }
}

The DataTableAttribute class is in the Microsoft.WindowsAzure.MobileServices namespace, and the DataMemberAttribute in System.Runtime.Serialization.

A final note on the DTO class is the Id property. All entities being inserted into a Mobile Services table need to have an Id. I guess that is pretty obvious, but I thought I would mention it…

Ok, now that we have an entity that we can send across the wire, it is time to go about doing it, which is really simple.

private async void SaveMessage(object sender, RoutedEventArgs e)
{
    var msg = new Message
    {
        Msg = txtMessage.Text
    };
    await App.MobileService.GetTable<Message>().InsertAsync(msg);
    new MessageDialog("Done inserting object. Id: " + msg.Id).ShowAsync();
}

It is just a matter of creating an instance of the entity to send to the service, then using GetTable<T>() to get hold of the “table” and finally calling InsertAsync() to insert the entity. And just for the fun of it, I await the insertion before showing a MessageDialog confirming the insertion as well as showing the Id, which has been “magically” set…

After running the application and pushing the button once, I can go to the Management Portal once again. But this time, I click the “DATA” link, and then the “messages” table. This will bring up a page where I can browse the data in the table, as well as do some other things…

In my case, the page looks like this after inserting a single message

[Screenshot: mobile_service_data_browsing]

But wait…how did that work? I never told the table that there was a column named “message”. Well…by default, Mobile Services will automatically create a table schema that can incorporate the data from your entity. This can be turned off, but it is pretty cool during development.

But what if I change my entity? Well, it just works. Let’s say I add a “subject” property as well, and a TextBox for populating it of course

[DataTable(Name = "messages")]
public class Message
{
    [DataMember(Name = "id")]
    public int Id { get; set; }
    [DataMember(Name = "message")]
    public string Msg { get; set; }
    [DataMember(Name = "subject")]
    public string Subject { get; set; }
}

Well, inserting one of those and then refreshing the Management Portal's “Browse” view gives me this

[Screenshot: mobile_service_data_browsing2]

You have to admit that that is pretty cool!

But having to augment my entity like that every time is really annoying from a coding point of view (not really, but let's say it is…). Not to mention that we will probably end up with a whole heap of these DTOs. Isn't there another “easier” way? Well, there is… We can go for straight-up Json using the power of Json.NET. So instead of having a class like that, we could just do this

var msg2 = new JsonObject();
msg2.Add("message", JsonValue.CreateStringValue(txtMessage.Text));
msg2.Add("subject", JsonValue.CreateStringValue(txtSubject.Text));
await App.MobileService.GetTable<Message>().InsertAsync(msg2);

And THAT is VERY flexible! And if you have ReSharper installed, it “squiggly-lines” it, and tells you to convert it to

var msg2 = new JsonObject
{
    {"message", JsonValue.CreateStringValue(txtMessage.Text)},
    {"subject", JsonValue.CreateStringValue(txtSubject.Text)}
};
await App.MobileService.GetTable<Message>().InsertAsync(msg2);

which is actually both much more readable and a few keypresses less…

Ok, one final thing I want to show. This post is getting quite long as usual, but a Mobile Services post just wouldn't be complete without talking about the power of scripts…

All tables have scripts that run on each of the CRUD operations. You can find them in the portal by clicking the table you want, and then clicking “SCRIPT”. By default they look like this

function insert(item, user, request) {
    request.execute();
}

And yes, they are JavaScript. Mobile Services run Node.js, so you will have to do JavaScript, but other than that, scripts are REALLY cool. They are VERY flexible, just as JavaScript, and you get a bunch of power through Node.js.

I will write more about scripts in a future blog post, but for now, I want to give a quick intro.

The objects passed into the method are as follows.

The “item” is the entity that was sent from the client. It is a Json object representing your entity. You can add and remove properties as you like.

The “user” object is an object representing the user that sent it. As we haven’t authenticated the user in this demo, it will just be an un-authenticated user.

And finally, the “request” object is the almighty object that does the actual execution. By calling request.execute(), the script is telling the service that it wants to perform the insert/update/read/delete based on the values in the “item” at the current time. You can however also call request.respond() and send an HTTP code to the client if you want to. That way you can make your Mobile Service behave like a proper REST-like endpoint.
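As a quick illustration of request.respond() (my own sketch, not Chris's code), an insert script could reject invalid items before they ever reach the database:

function insert(item, user, request) {
    if (!item.message || item.message.length === 0) {
        // Send an HTTP 400 back to the client instead of executing the insert
        request.respond(statusCodes.BAD_REQUEST, 'A message is required.');
        return;
    }
    request.execute();
}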

Ok…so what can you do with the scripts? Well…just about anything you can think of, just as with any other code… Let's do something simple just to demonstrate scripting.

I change my insert script to the following for my “messages” table

function insert(item, user, request) {
    item.created = new Date();
    request.execute();
}

Oh…you do get VERY rudimentary IntelliSense in the script editor as well. It gives you little squiggly lines and so on….but it is VERY rudimentary…

After saving the script like that, and inserting a new entity, my table looks like this

[Screenshot: mobile_service_data_browsing3]

So by just adding new properties using the flexibility of JavaScript, it is possible to augment items before they enter the database…cool…

Ok, that's it for this time! I will be back with more Mobile Services goodness very soon!

Code available here: DarksideCookie.MobileServices.Intro.zip (24.41 kb)

However, remember that the code sample requires VS2012 with the Mobile Services SDK installed as well as an Azure account with Mobile Services activated and set up. And you need to update the App.xaml.cs with the correct endpoint configuration…

Chris: “Zumo” was the code name for Windows Azure Mobile.



<Return to section navigation list>

Marketplace DataMarket, Cloud Numerics, Big Data and OData

•• Steve Clayton (@stevecla) described Supercomputing on Demand with Windows Azure in an 11/21/2012 post to the Next at Microsoft blog:

[Screenshot of the tweet]

Last week I posted the tweet above and decided it was worthy of some explanation in a post. It came from a post on the Microsoft Research Connections blog that details how Azure is being used to crunch huge volumes of data in the quest for clues to help combat bipolar disease, coronary artery disease, hypertension, inflammatory bowel disease (Crohn’s disease), rheumatoid arthritis, and type I and type II diabetes.

Research in these areas is notoriously tricky due to the requirement for a large amount of data and the potential for false positives arising from data sourced from related individuals. A technique and algorithm known as linear mixed models (LMMs) can eliminate this issue but they take an enormous amount of compute time and memory to run. To avoid this computational roadblock, Microsoft Research developed the Factored Spectrally Transformed Linear Mixed Model (better known as FaST-LMM), an algorithm that extends the ability to detect new biological relations by using data that is several orders of magnitude larger. It allows much larger datasets to be processed and can, therefore, detect more subtle signals in the data. Utilizing Windows Azure, MSR ran FaST-LMM on data from the Wellcome Trust, analyzing 63,524,915,020 pairs of genetic markers for the conditions mentioned above.

27,000 CPUs were used over a period of 72 hours, and 1 million tasks were consumed, the equivalent of approximately 1.9 million compute hours. If the same computation had been run on an 8-core system, it would have taken 25 years to complete.
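A quick sanity check of those figures (my arithmetic, not Steve's): 27,000 CPUs × 72 hours ≈ 1.9 million compute hours, and 1,900,000 hours ÷ 8 cores ÷ 8,760 hours per year ≈ 27 years, which lines up with the quoted 25-year figure.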

That’s supercomputing on demand and it’s available to everyone – as is the result of this job in Epistasis GWAS for 7 common diseases in the Windows Azure Marketplace.

You can hear more from David Heckerman, Distinguished Scientist, Microsoft Research and Robert Davidson, Principal Software Architect, Microsoft Research, eScience in the video below regarding the implications of this work. Fascinating stuff.


Click here to watch the YouTube video.


• Alon Shachar and Nir Channes of SAP will present a Develop and consume RESTful services using the new Eclipse OData modeler session to EclipseCon Boston 2013 on 3/26/2013:

There are lots of ways to develop consumption applications for both mobile and non-mobile. One of the emerging protocols in that context is the Open Data Protocol (OData). The interest in OData has grown exponentially and professional developers have adopted it for creating lightweight business applications. But how do they do it? What is the natural development environment and, more specifically, are there any built-in Eclipse capabilities for developing an OData-based project?

Now, for the first time, it is possible to model, create, and consume OData services in one Eclipse plug-in. Using the new OData graphical modeler for Eclipse (built on Graphiti), you can create a new OData model or modify an existing one. You can then consume it utilizing the built-in Java toolkit which guides you through the process of creating your business application.

Please join our session, where we will begin with OData basics, then demo an end-to-end development story, starting from OData service modeling through service creation, and end with the creation of a lightweight Java-based (Android) application. We will also present the basic architecture behind the tool, which (being built on EMF) can be extended to different platforms and technologies.

SAP is an ardent supporter of the Open Data Protocol.


Ralf Handl, SAP, Susan Malaika, IBM, and Michael Pizzo, Microsoft, wrote OData Extension for JSON Data: A Directional White Paper and Oasis-Open published it on 5/18/2012 (missed when published). From the Introduction:

This paper documents some use cases, initial requirements, examples and design principles for an OData extension for JSON data. It is non-normative and is intended to seed discussion in the OASIS OData TC for the development of an OASIS standard OData extension defining retrieval and manipulation of properties representing JSON documents in OData.

JSON [1] has achieved widespread adoption as a result of its use as a data structure in JavaScript, a language that was first introduced as the page scripting language for Netscape Navigator in the mid 1990s. In the 21st century, JavaScript is widely used on a variety of devices including mobiles [1], and JSON has emerged as a popular interchange format. JavaScript JSON was standardized in ECMAScript [2], and the JSON Data Interchange Format is described in IETF RFC 4627 [3]. JSON documents were initially stored in databases as character strings, character large objects (CLOBs), or shredded into numerous rows in several related tables. Following in the steps of XML, databases have now emerged with native support for JSON documents, such as PostGres [4], CouchDB [5], and MongoDB [6]. JSON databases are an important category of document databases [7] in NoSQL [8]. One of the main cited attractions of JSON databases is schema-less processing, where developers do not need to consult database administrators when data structures change.

Common use cases for JSON databases include:

  • Logging the exchanged JSON for audit purposes
  • Examining and querying stored JSON
  • Updating stored JSON
  • Altering subsequent user experiences in accordance with what was learnt from user exchanges from the stored JSON

Just as the SQL query language was extended to support XML via SQL/XML [9], query languages such as XQuery are evolving to explore support for JSON, e.g., XQilla [10] and JSONiq [11]. XML databases such as MarkLogic [12] offer JSON support.

Note that for document constructs such as XML and JSON, temporal considerations, such as versioning, typically occur at the granularity of the whole document. Concrete examples include versions of an insurance policy, contract, and mortgage or of a user interface.

JSON properties are not currently supported in OData. We suggest that an OData extension be defined to add this support. Properties that contain JSON documents will be identified as such, and additional operations will be made available on such properties.

References:

  1. JSON http://en.wikipedia.org/wiki/JSON
  2. ECMAScript http://www.ecma-international.org/publications/files/ECMA-ST/Ecma-262.pdf
  3. IETF RFC 4627 http://www.ietf.org/rfc/rfc4627.txt
  4. PostGres http://www.postgresql.org/docs/devel/static/datatype-json.html
  5. Apache CouchDB http://en.wikipedia.org/wiki/CouchDB
  6. MongoDB http://en.wikipedia.org/wiki/MongoDB
  7. Document databases http://en.wikipedia.org/wiki/Document-oriented_database
  8. NoSQL databases http://en.wikipedia.org/wiki/NoSQL
  9. SQL/XML http://en.wikipedia.org/wiki/SQL/XML
  10. XQilla: XQuery extensions for JSON http://xqilla.sourceforge.net/ExtensionFunctions
  11. JSONiq : XQuery extension for JSON http://www.w3.org/2011/10/integrationworkshop/p/Documentation-0.1-JSONiq-Article-en-US.pdf
  12. Marklogic : http://en.wikipedia.org/wiki/MarkLogic
  13. JSON Schema: http://tools.ietf.org/html/draft-zyp-json-schema-03

Development of JSON Lite payloads for OData is now well under way in the OASIS Open Data Protocol (OData) TC. Susan Malaika wrote, and OASIS published, Open Types and Document Annotations for JSON Extensions in OData: Use Cases on 11/9/2012.


Ian Armas Foster (@ianarmasfoster) reported MIT Sloan Sees Big Future in Big Data in an 11/21/2012 article for Datanami:

imageLast week’s Supercomputing conference had a larger focus on big data than SCs past, with HPCwire’s Michael Feldman and Intersect360’s Addison Snell declaring “big data” as one of the winners of the conference in their Soundbite podcast.

The MIT Sloan Management Review, which runs its own well-known thought leadership series, took a gander at the big data industry in an attempt to determine and communicate where precisely it is heading and how companies can take advantage.

According to the report, there are three aspects in enterprise that would spell success for an organization looking to run big data: paying attention to flows instead of stocks, relying on data scientists instead of data analysts, and moving analytics from the pure IT perspective to a broader operational function.

The first point, moving one's attention from stocks to flows, refers not to things traded on Wall Street but rather to the practice of building up warehouses, or stocks, of data and analyzing them all at one time. As data comes in almost constantly, it becomes necessary to continually analyze that data. The unfortunate side effect of the stock approach is that the one who looks at the past three months' worth of data ends up being stuck in the past.

Marketing campaigns are getting more personalized and targeted as big data allows companies to understand their customers better. The report's first point of viewing data flow as a stream instead of a stock feeds nicely into the second point of moving from data analysts to data scientists. Instead of data analysts asking simple questions of small datasets, data scientists are allowed to model complex queries against the large databases. It is this talent and research that allows marketing specialists to better target their campaigns.

Of course, the relative shortage of data scientists has been well-documented on this site and others over the last year or so. The report mentions EMC Greenplum’s efforts to work with universities to train the data scientists of tomorrow, but there is likely to be a gap in talent for the foreseeable future.

To help bridge that gap, the report mentions its third point: incorporating big data analytics into all aspects of an organization instead of harboring it in the IT section and copying it out when necessary. In particular, they mention eBay’s problem of having data copied 20 to 50 times over the organization, which is a waste of valuable resources. Integrating the entire operation into one data center, whether it be in the cloud, on a Hadoop cluster, or wherever, eliminates that inefficiency.

For the most part, these trends seem relatively straightforward to those in the know. But those in the know may not necessarily be those who are making the executive decisions. This MIT Sloan report is a helpful guide to that end.



See Andy Kung's Building a LightSwitch HTML Client: eBay Daily Deals post of 11/21/2012, which uses an OData data source, in the Visual Studio LightSwitch and Entity Framework v4+ section below.


<Return to section navigation list>

Windows Azure Service Bus, Access Control Services, Caching, Active Directory and Workflow

Nathan Totten (ntotten) and Abhishek Lal (@AbhishekRLal) started a Windows Azure Service Bus Tutorials video series on Channel9 with 2 episodes dated 11/15/2012 (missed when published):

Applications and Services are increasingly connected and require integration across platform and network boundaries.

Windows Azure Service Bus provides rich messaging and connectivity features for today's connected devices and continuous services. In this series, learn about the latest improvements and features available and get in-depth guidance on how to implement rich messaging patterns with Windows Azure.

Windows Azure SDK 1.8 Updates for Service Bus

With the Azure SDK 1.8 release, Service Bus has added several new capabilities allowing you to build connected clients and services. This is an overview of all the new messaging capabilities and Relay enhancements. Features include message lock renewals, QueueToQueue transfer, metrics-based queries for Queues/Topics and lots more. Learn about the new scenarios and patterns that can now be achieved using this internet-scale messaging service.

Windows Azure SDK Visual Studio Tooling Updates for Service Bus

The Azure SDK 1.8 release delivers key Visual Studio tooling enhancements for Service Bus. We have added several new capabilities allowing you to develop and debug your applications. This is an overview of all the new Server Explorer capabilities and some messaging feature enhancements. Features include importing your namespaces, new properties for creating and monitoring entities, updating current entities and lots more. Learn about these new features and scenarios that help you in developing applications with Service Bus.



<Return to section navigation list>

Windows Azure Virtual Machines, Virtual Networks, Web Sites, Connect, RDP and CDN

Brady Gaster (@bradygaster) continued his Real World Problems with Windows Azure Web Sites blog series with a Multiple Environments with Windows Azure Web Sites post on 11/23/2012:

This is the second post in the Real World Problems with Windows Azure Web Sites series. The first post summarized how one can manage multiple environments (development, staging, production, etc.) using a Git repository with a branching strategy. Not everyone wants to use Git, and most would prefer to stay in their favorite IDE – Visual Studio 2012 – all day to do pretty much everything. My buddy Sayed Hashimi told me about Visual Studio profiles a few weeks ago and I'd been wanting to write up something on how it could work with Windows Azure Web Sites. This post follows up on the idea of managing multiple Windows Azure Web Sites, but rather than do it with Git, I'll show you how to manage multiple sites with only Visual Studio's awesome publishing-with-profiles features.

Set Up the Environments

The first step in the process is to have your multiple sites set up so that you have environmental isolation. In this case, I'm being thorough and requiring two gates prior to production release. All three of these sites are in the free zone for this demonstration.

[Screenshot: 01-sites-provisioned]

If this were fully realistic, the production zone would probably be at least shared or reserved, so that it could have a domain name mapped to it. That's the only site that would cost money, so the development and staging sites would have no impact on the cost I'll incur for this setup.

Once the sites have been created I’ll go into each site’s dashboard to download the site’s publish settings profile. The publish settings files will be used from within Visual Studio to inform the IDE how to perform a web deploy up to my Windows Azure Web Site environment.

[Screenshot: 02-download-publish-profile]

Once I’ve downloaded each of these files I’ll have them all lined up in my downloads folder. I’ll be using these files in a moment once I’ve got some code written for my web site.


Now that I’ve got all my environments set up and have the publishing settings downloaded I can get down to business and write a little code.

Setting up the Web Application Project

I know I’ll have some environmental variances in the deployment details of this web application. I’ll want to use different databases for each environment, so I’ll need to have three different connection strings each site will have to be configured to use for data persistence. There’ll be application settings and details and stuff, so the first thing I’ll do in this simple ASP.NET MVC project is to prepare the different publishing profiles and the respective configuration for those environments.

To do this, I’ll just right-click my web project and select the Publish menu item. I’m not going to publish anything just yet, but this is the super-easiest way of getting to the appropriate dialog.


When the publishing dialog opens, I’ll click the Import button to grab the first environment’s publish settings files.


I’ll grab the first publish settings file I find in my downloads folder, for the site’s development environment.


Once I click Open, the wizard will presume I’m done and advance to the next screen. I’ll click the Profile link in the navigation bar at this point one more time, to go back to the first step in the wizard.

If, at any point during this process, you're asked if you want to save the profile, click yes.


I'll repeat the import process for the staging and production files. The idea here is to get all of the publish settings files imported as separate profiles for the same Visual Studio web application project. Once I've imported all those files I'll click the Manage Profiles button, and a dialog should open up showing all of the profiles I've imported.


This part isn’t a requirement for you or a recommendation, but I don’t typically need the FTP profile so I’ll go through and delete all of the *FTP profiles that were imported. Again, not a requirement, just a preference, but once I’m done with it I’ll have all the web deploy profiles left in my dialog.


Now that I've got the profiles set up, I'll just click Close. The profiles will then be visible under the Properties/PublishProfiles project node in Visual Studio. This folder is where the XML files containing publishing details are stored.


With the profile setup complete, I'm going to go ahead and set up the configuration specifics for each environment. By right-clicking on each *.pubxml file and selecting the Add Config Transform menu item, a separate *.config file will be created in the project.


Each file represents the transformations I'll want to do as I'm deploying the web site to the individual environment sites. Once I've added a configuration transformation for each profile, there'll be a few nodes nested under the Web.config file where I'll have the opportunity to configure specific details for each site.

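For example (this fragment is mine, not from Brady's post, and the server, database, and credential names are placeholders), a per-environment connection string can be swapped inside one of these transform files using the standard XDT attributes:

<connectionStrings>
  <add name="DefaultConnection"
       connectionString="Server=tcp:[dev_server].database.windows.net;Database=[dev_database];User ID=[user];Password=[password];"
       xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
</connectionStrings>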

Now that I’ve got the publish profiles and their respective configuration transformation files set up for each profile, I’ll write some code to make use of an application setting so I can check to make sure the per-profile deployment does what I think it’ll do.

Now, if you’re thinking to yourself this isn’t very practical, since I couldn’t allow my developers to have the ability of deploying to production and you’re compelled to blow off the rest of this post since you feel I’ve completely jumped the shark at this point, keep on reading. I bring it back down to Earth and even talk a little release-management process later on.

Environmental Configuration via Profiles

Now I’ll go into the Web.config file and add an appSetting to the file that will reflect the message I want users to see whenever they browse to the home page. This setting will be specific per environment, so I’ll use the transformation files in a moment to make sure each environment has its very own welcome message.

image

This is the message that would be displayed to a user if they were to hit the home page of the site. I need to add some code to my controller and view to display this message. It isn’t very exciting code, but I’ve posted it below for reference.

First, the controller code that reads from configuration and injects the message into the view.

image

Then I’ll add some code to the view to display the message in the browser.

image
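Since the code itself lives in the screen captures above, here's a minimal sketch of what such a controller action and view might look like. The appSetting key name "Message" and the HomeController/Index.cshtml names are assumptions of mine for illustration, not necessarily what the screen shots contain.

    // HomeController.cs (sketch): reads the per-environment message from <appSettings>
    using System.Configuration;
    using System.Web.Mvc;

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // ConfigurationManager pulls the value from whichever Web.config was deployed
            ViewBag.Message = ConfigurationManager.AppSettings["Message"];
            return View();
        }
    }

    @* Index.cshtml (sketch): renders the message *@
    <h1>@ViewBag.Message</h1>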

When I browse the site I’ll get the obvious result, a simple hello message rendered from the configuration file on my local machine.

SNAGHTMLd3eb220

I’ll go into the development configuration profile file and make a few changes – I strip out the comments and stuff I don’t need, and then I add the message appSetting variable to the file and set the transformation to perform a replace when the publish happens. This basically replaces everything in the Web.config file with everything in the Web.MySite-Dev - Web Deploy.config file that has a xdt:Transform attribute set to Replace.

image
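As a rough illustration (the key name and message text are placeholders of my own, not taken from the screen capture), a development transform file set up this way might look something like this:

    <?xml version="1.0"?>
    <!-- Web.MySite-Dev - Web Deploy.config (sketch) -->
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <!-- Replace the entire appSettings section of Web.config at publish time -->
      <appSettings xdt:Transform="Replace">
        <add key="Message" value="Welcome to the DEV environment" />
      </appSettings>
    </configuration>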

I do the same thing for the staging profile’s configuration file…

image

… and then for the production profile’s configuration file.

image

With the environmentally-specific configuration attributes set up in the profile transformations and the publish profiles set up, everything should work whenever I need to do a deployment to any of the environments. Speaking of which, let’s wrap this up with a few deployments to our new environments!

Deployment

The final step will be to deploy the code for the site into each environment to make sure the profile configuration is correct. This will be easy, since I’ve already imported all of my environments’ configuration files. I’ll deploy development first by right-clicking the project and again selecting the Publish context menu item. When the publish wizard opens up I need to select the development environment’s profile from the menu.

SNAGHTMLd55a0ea

Once the publish process completes the site will open up in my browser and I can see that the appropriate message is being displayed, indicating the configuration transformation occurred properly based on the publish profile I’d selected to deploy.

SNAGHTMLd572671

Next, I right-click the project and select Publish again, this time selecting the staging environment.

SNAGHTMLd585116

When the publish completes, the staging welcome message is displayed.

SNAGHTMLd59b886

If I repeat the same steps for production, the appropriate message is displayed there, too.

SNAGHTMLd5aeeb4

In a few short steps, I'm able to set up a series of environments and publish profiles that work together to give me separate deployment environments with little extra work or overhead. Since the profiles are linked to the configuration transformations explicitly, it all just works when I deploy the site.

Release Management

As promised earlier in that blockquote up there, I want to stay with the "these are real-world scenarios as much as possible, based on my real-world experiences and questions I've been asked" mantra, so I feel it's necessary to get into the idea of release management insofar as it applies here. In the previous example I was using Git branches to gate releases. In this example, I'm not using any centralized build solution, but rather assuming there's a source control environment shared between the team members (developers, testers, release management, and so on) and that the whole team just chooses to use the Web Deploy awesomesauce built into Visual Studio.

Think of a company with aggressive timelines that still takes care to gate releases but chooses, for whatever reason, not to set up a centralized build system. This company still feels strongly about managing the release process and about maintaining separate chains of testing and sign-off responsibility as code moves through the environments on the way to a production release, but they love using Visual Studio and Web Deploy to get things into the environments as quickly as possible.

The diagram below demonstrates one potential release cycle that could make use of the publish profile method of gating deployments through a series of environmental gates.

deployment-strategy

Assume the team has come to a few conclusions and agreements on how their release cycle will execute.

  • All the team members are relatively technical and comfortable using Visual Studio with web application projects
  • The team uses a source control method to share source code and to distribute it internally between team members
  • The web application project checked into source control has with it the publish profile for deploying the site into the development Windows Azure Web Site
  • Testers maintain copies of the staging publish profile setting, are regarded as the owners of the staging environment, and are the only team members who can deploy code to the staging Windows Azure Web Site
  • Release managers maintain copies of the production publish settings files, are regarded as the owners of the production releases, and are the only team members who can deploy code to the production environment
  • As developers, testers, and RM’s complete their respective testing phases in the environments they own and are ready to sign off, they escalate the deployment process to the next level
  • Following escalation, the first general step is to test the previous environment for verification purposes, then to deploy to the next environment and to begin testing of the deployment in that environment

Luckily, this sort of situation is quite possible using publish profiles and free Windows Azure Web Sites as environmental way stations on the road to a production site that's deployed to multiple large reserved instances (for instance).

Summary

The convenient partnership between web publishing and Windows Azure Web Sites shouldn't be regarded as an invitation to cowboy coding. Rather, when coupled with a responsible release cycle and effective deployment gating, it's a tool that can streamline and simplify the entire SDLC when your business is web sites.

I hope this post has introduced you to a method of controlling your deployment environments, while also allowing you to do the whole thing from within Visual Studio. Later on, I’ll follow up this post with an example of doing this sort of thing using Team Foundation Services.

Hopefully, you have enough ammunition to get started with your very own Windows Azure Web Site account today, for free, and you feel confident you’ll be able to follow your very own release management process, without the process or architecture slowing you down. If you have any questions about this approach or the idea in general, feel free to use the comments form below.

What I like about Brady’s tutorials are the numerous screen captures to illustrate the steps.


Brady Gaster (@bradygaster) began a Real World Problems with Windows Azure Web Sites blog series by describing Multiple Environments with Windows Azure Web Sites in an 11/21/2012 post:

imageThis is the first post in the Real World Problems with Windows Azure Web Sites blog series, and it intends to answer one of the most common questions I receive when I'm doing presentations about Windows Azure Web Sites. This situation demonstrates a typical setup, wherein a site owner has multiple environments to which they push their web site. This setup is extremely valuable for staging site releases, delivering solid web applications, and doing A/B testing of a site's changes.

imageIn order to make sure your changes are okay, it helps to have separate staging and production environments so you can verify things are good before making your changes live in production. My good friend and colleague Cory Fowler blogged about continuous deployment with Windows Azure Web Sites, and my other good buddy Magnus Martensson did a great presentation at Windows AzureConf on the topic. I've done countless demonstrations of continuous deployment with Web Sites, but one question always comes up, which this post intends to answer.

That’s all well and good and I know I can deploy my site automatically each time I make a change, but that’s not realistic. It’s like, if I use Windows Azure Web Sites to host my site, it’ll be deployed each time I check in code – even when I didn’t want to deploy the site. How do I control what gets deployed and control the deployment and maintain quality?

That’s a great question, and it’s one that most have struggled to answer. It’s also a barrier for many who are thinking of using Windows Azure Web Sites but who don’t want to manage their site like they’re running their company out of a garage. This “happy path” deployment mantra isn’t real-world, especially for site owners who want to stage changes, test them out, and be certain their changes won’t cause any problems following a hasty deployment process.

Multiple Sites for Multiple Environments

As with any multiple-environment setup, the first thing I need to do to support having multiple environments is to create multiple sites in Windows Azure Web Sites. Using the portal, this is quite simple. The screen shot below shows you what this would look like. Note, I’ve got a “real” site, that’s my production site area, and I’ve also added a staging site to the account.

image

In particular, take note of how both of these sites are in the “free” tier. Let’s say you’ve got a production site in the “non-free” zone because you want to map your domain name to it, scale it up or out, or whatever else. I’ll leave my staging site in the free tier and not incur any charges on it.

Why is this important? Because I won’t need to pay anything additional for having multiple sites. Since most users won’t have the staging URL, or it won’t matter what it is since it’ll just be for testing purposes, I don’t need to map a domain name to it, scale it, or anything like that. It’s just there for whenever I need to do a deployment for testing purposes or verification purposes. This setup won’t require you to spend any more money.

Using GitHub.com for Continuous Deployment

In this example, I’ll be using GitHub.com to manage my source code. You don’t have to use GitHub.com for this, so if you’re new to Git don’t freak out, you have other options like CodePlex, BitBucket, or TFS. Heck, you could even automate an FTP deployment if you want to.

The first step in setting up GitHub.com integration with a web site is to load up the site’s dashboard and to click the Quick Link labeled “Set up Git publishing” as is illustrated in the screen shot below.

image

Once the repository setup completes, the portal will allow me to specify what type of Git repository I want to connect to my site. I’ll select GitHub.com from the list of options, as you’ll see below.

associating-prod-with-git

If this is the first time I’ve tried to connect a GitHub.com repository to a web site, I’ll be asked to allow the partnership to take place.

authorize-azure-to-git

By clicking the Allow button, I let GitHub.com know I’m okay with the partnership. The final step in tying a repository to a site is to select the repository I want to be associated with the site, which you’ll see in the screen shot below.

image

I’ll repeat this process for the staging site, too, and I’ll associate it with the exact same repository. This is important, as I’ll be pushing code to one repository and wanting the deployment to happen according to which site I want to publish.

Two Sites, One Repository? Really?

Sounds weird, right? I’ve got two sites set up now, one for the production site, the other for the staging site, but I’ve associated both sites with the same repository. Seems a little weird, as each time I push code to the repository, I’ll be deploying both sites automatically. The good part is, there’s this awesome feature in Git that I can use to make sure I’m deploying to the right spot. That feature is called branching, and if you’re acquainted with any modern source control management product, you probably already know about branching. You probably already use branching to control deviations in your code base, or to fix features and bugs. With Windows Azure Web Sites’ support for branches, you can use them for environmentally-specific deployment practices too. The best part is, it’s quite easy to set up, and that’s just what I’ll show you next.

Configuring Branch Associations

Before I write any code, I’ll need to set up the branch “stuff” using the portal. To do this, I’ll go into my production site’s dashboard and click the Configure link in the navigation bar. Scrolling down about half way, I can see that the production site’s set up to use the master branch. The master branch is the default branch for any Windows Azure Web Site, but as you’ll see here, the portal gives me the ability to change the branch associated with an individual web site.

image

Now, I’ll go into my staging site and I’ll set the associated branch for that site to staging. This means that, each time I check code into the master branch, it’ll be deployed to the production site, and each time I check code into the staging branch, it’ll be deployed to the staging site.

image

With the setup out of the way I’ll be able to write some code that’ll be deployed automatically when I need it to be deployed, and to the right place.

Code, Commit, Deploy

Now that my sites are all configured and pointing to the right branches I'll need to set up a local Git repository, write some code, and check that code into the repository. Once that's all done I'll create a second branch called staging that I'll use to push code to my staging site.

The first step is, obviously, to write some code. I won’t do much complicated stuff for this demo. Instead I’ll just make a simple MVC site with one view. In this view I’ll just put a simple message indicating the site to which I intend to deploy the code.

image

Now, I’ll open up Powershell to do my Git stuff. As Phill Haack points out in this epic blog post on the topic, posh-git is a great little tool if you’re a Powershell fan who also uses Git as a source control method.

SNAGHTML52be3f4

I’ll initialize my Git repository using git init, then I’ll tell the Git repository where to push the code whenever I do a commit using the git remote add origin [URL] command. Finally, I’ll use the git add and git commit commands to push all of my changes into my local repository. All of the files in this changeset will scroll up the screen, and when the commit completes I’ll push the changes back up to GitHub.com, to the master branch, using the git push origin [branch] command.

SNAGHTML5383630
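For readers following along without the screen capture, the command sequence looks roughly like this (the repository URL and commit message are placeholders of my own):

    git init
    git remote add origin https://github.com/<your-account>/<your-repo>.git
    git add .
    git commit -m "Initial commit"
    git push origin master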

Once the commit finishes and the code is pushed up to GitHub.com, Windows Azure Web Sites will see the commit and perform a deployment. If you’re watching the portal when you do the commit you’ll see the deployment take place [almost magically].

image

If I click the Browse button at the bottom of the portal the site will open up in a browser and I can verify that the change I just committed was deployed. So far, so good.

production-site-up-and-edited

Setting Up the Staging Branch

Now that the production site’s been deployed I need to set up the staging environment. To do this, I’ll go back into my favorite Git client, Powershell, and I’ll create a new branch using the git checkout –b [branch] command. This will create a new branch in my local Git repository. Then, it’ll switch to that repository and make it active. If I type git branch in Powershell, it’ll show me all the branches in my local repository. The green line indicates the branch I’m currently working on.

SNAGHTML53f8690
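In other words, something along these lines:

    git checkout -b staging    # create the staging branch and switch to it
    git branch                 # list local branches; the active branch is highlighted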

Now that the branch has been created, I’ll make some small change in the code of the site. Once this change has been made I’ll be pushing it up to GitHub.com again, but this time I’ll be pushing it into the newly-created staging branch, so the production branch code in the master branch is safe and sound.

image

Switching back to Powershell, I’ll commit the changes to the staging branch in my local repository. Then, I’ll push that code back up to GitHub.com, this time specifying the staging branch in my git push command.

SNAGHTML545785b
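The staging-side commands mirror the earlier ones, just targeting the new branch (the commit message is a placeholder):

    git add .
    git commit -m "Staging-only tweak"
    git push origin staging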

This time, I’ll watch the staging site in the portal. As soon as the publish is complete, the portal will reflect that a deployment is taking place.

image

Once the deployment completes, I can check the changes by clicking the Browse button at the bottom of the portal page. When the staging site opens up, I can see that the changes were pushed to the site successfully.

staging-site-up

If I go back to the production site, it still says Production on it. Pushing to the staging site didn't affect my production site, and vice versa. I've got dual-environment deployments based on source code branches, and I'm able to test things out in one environment before pushing those changes to production. Everyone wins!

Local Git Branch Niftiness

One of the neat things about using Git branches (at least it was nifty to me), is that all the code for all the branches is stored in your local repository. Switching between branches automatically results in the source code being restored on your local drive. The whole thing happens automatically. Demonstrating how this works is as easy as switching branches while you have a source code file open in Visual Studio.

So let’s say I still have the staging branch set as my working branch and I’ve got the source code open up in Visual Studio. If I got to my Powershell client again and switch the branch using the git checkout [branch] command, Git changes the branch on the fly for me. In so doing, all the files in the local directory are replaced with the files from the newly-selected branch.

SNAGHTML54d7c5b
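The switch itself is a single command; switching back restores the other branch's files just as quickly:

    git checkout master     # working-directory files swap to the master versions
    git checkout staging    # ...and swap back to the staging versions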

The moment I switch back to Visual Studio, it warns me that the file has changed on disk and asks me if I’d like to reload it.

SNAGHTML54e6e2d

If I click the Yes button and then look at my file, I’ll see that the file has been changed to the one resident in the new branch.

image

In this way, Git keeps all the branches of my source code in my local repository, so I can make changes to the branches as needed and then commit those changes back to the local repository. Once I’m ready, I can push those files back up to the origin (in this case, GitHub.com), and everything’s cool.

Summary

Web site environments are a real-world method of controlling, gating, and reverting deployments. Using multiple environments, development shops can make sure their changes work properly before a bad change crashes a production site. Windows Azure Web Sites is a real-world web hosting platform that can be used to solve real web site challenges. With a little thought and planning, it's easy to use Windows Azure Web Sites to host multiple versions of a web site. Testing can happen live, without affecting production deployments. I hope this post has introduced a good method of achieving separate site environments, and that you can see yet another way Windows Azure Web Sites can help you get your site up and running, keep it continually deployed, and reduce your concerns about continuously deploying to a production web site environment using simple tricks like Git branches.


Brady Gaster (@bradygaster) began a Real World Problems with Windows Azure Web Sites blog series on 11/21/2012:

imageI’ve been asked a lot of great questions about Windows Azure Web Sites since the feature launched in June. Things like on-premise integration, connecting to service bus, and having multiple environments (like staging, production, etc), are all great questions that arise on a pretty regular cadence. With this post, I’m going to kick off a series on solving real-world problems for web site and PaaS owners that will try to address a lot of these questions and concerns.

imageI’ve got a few blog posts in the hopper that will address some of these questions, rather than just cover how certain things are done. Those posts are great and all (and a lot of fun to write), but they don’t answer some real-world, practical questions I’ve been asked this year. Stay tuned to this area of my site, as I’ll be posting these articles over the next few weeks and probably into the new year. As I post each of these solutions I’ll update this post so you have a one-stop shop to go to when you need to solve one of these problems.

Posts in this Series

Multiple Environments with Windows Azure Web Sites
In this post I demonstrate how to have production and staging sites set up for your web site so that you can test your changes in a sandbox site before pushing your production site and potentially causing damage to it (and your reputation). If you’ve wondered how to gate your deployments using Windows Azure Web Sites, this is a good place to start.

See the first post in this section.


Sandrino di Mattia (@sandrinodm) proposed Adding multiple endpoints to your Windows Azure Virtual Machines by using a CSV file in an 11/22/2012 post:

imageIn order to manage the endpoints of a Virtual Machine you have two options: use the portal or use PowerShell. If you use the portal it's not so easy to add a list or a range of ports; you would need to add these ports one by one. The other way you can manage these endpoints is by writing PowerShell scripts, and that's what I did over and over again for the past few months.

imageWhat I wanted was some way to easily add endpoints to newly created Virtual Machines. A CSV file would be perfect for this: it's easy to manage (in Excel, for example) and you can create different templates to improve reusability. That's why I created the Import-AzureEndpointsFromCSV.ps1 script.

# Arguments.
param
(
    [Microsoft.WindowsAzure.Management.ServiceManagement.Model.PersistentVMRoleContext]$vm = $(throw "'vm' is required."),
    [string]$csvFile = $(throw "'csvFile' is required."),
    [string]$parameterSet = $(throw "'parameterSet' is required.")
)

Get-ChildItem "${Env:ProgramFiles(x86)}\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.dll" | ForEach-Object { [Reflection.Assembly]::LoadFile($_) | out-null }

# Add endpoints without load balancer.
if ($parameterSet -eq "NoLB")
{
    Write-Host -Fore Green "Adding NoLB endpoints:"
    $endpoints = Import-Csv $csvFile -header Name,Protocol,PublicPort,LocalPort -delimiter ';' | foreach {
        New-Object PSObject -prop @{
            Name = $_.Name;
            Protocol = $_.Protocol;
            PublicPort = [int32]$_.PublicPort;
            LocalPort = [int32]$_.LocalPort;
        }
    }

    # Add each endpoint.
    Foreach ($endpoint in $endpoints)
    {
        Add-AzureEndpoint -VM $vm -Name $endpoint.Name -Protocol $endpoint.Protocol.ToLower() -PublicPort $endpoint.PublicPort -LocalPort $endpoint.LocalPort
    }
}
# Add endpoints with load balancer.
elseif ($parameterSet -eq "LoadBalanced")
{
    Write-Host -Fore Green "Adding LoadBalanced endpoints:"
    $endpoints = Import-Csv $csvFile -header Name,Protocol,PublicPort,LocalPort,LBSetName,ProbePort,ProbeProtocol,ProbePath -delimiter ';' | foreach {
        New-Object PSObject -prop @{
            Name = $_.Name;
            Protocol = $_.Protocol;
            PublicPort = [int32]$_.PublicPort;
            LocalPort = [int32]$_.LocalPort;
            LBSetName = $_.LBSetName;
            ProbePort = [int32]$_.ProbePort;
            ProbeProtocol = $_.ProbeProtocol;
            ProbePath = $_.ProbePath;
        }
    }

    # Add each endpoint.
    Foreach ($endpoint in $endpoints)
    {
        Add-AzureEndpoint -VM $vm -Name $endpoint.Name -Protocol $endpoint.Protocol.ToLower() -PublicPort $endpoint.PublicPort -LocalPort $endpoint.LocalPort -LBSetName $endpoint.LBSetName -ProbePort $endpoint.ProbePort -ProbeProtocol $endpoint.ProbeProtocol -ProbePath $endpoint.ProbePath
    }
}
else
{
    $(throw "$parameterSet is not supported. Allowed: NoLB, LoadBalanced")
}

# Update VM.
Write-Host -Fore Green "Updating VM..."
$vm | Update-AzureVM
Write-Host -Fore Green "Done."


With this script you can import 2 types of endpoints:

  • Normal endpoints (not load-balanced)
  • Load-balanced endpoints
image_thumb75_thumb4

Creating a CSV file

So the first thing you would do is create the endpoints in CSV format. Using Excel, it's actually very easy to duplicate the defined endpoints by dragging the cells down (but you can also use Notepad or any other text editor). The normal endpoints are defined by these columns: Name, Protocol, PublicPort, LocalPort.

The load-balanced endpoints need a little more information: Name, Protocol, PublicPort, LocalPort, LBSetName, ProbePort, ProbeProtocol, ProbePath.

After defining the endpoints, simply export to a CSV file:
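For illustration, a pair of CSV files might look like the following. There is no header row, since the script supplies the column names via -header, and the delimiter is a semicolon; the endpoint names, ports, and probe settings below are examples of my own:

    Normal-Endpoints.csv:
    HTTP;tcp;80;80
    HTTPS;tcp;443;443
    RDP;tcp;3389;3389

    LB-Endpoints.csv:
    Web;tcp;80;80;WebFarm;80;http;/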

Importing the CSV file

Calling the script and importing the CSV file is very easy. You simply call the script and pass the required parameters:

  • The virtual machine
  • The filename of the CSV file
  • The type of CSV file: NoLB or LoadBalanced

In case you want to add load-balanced endpoints you will need to add those to each virtual machine in the cloud service.

Import-AzurePublishSettingsFile 'C:\...'
Select-AzureSubscription -SubscriptionName ...

# Import normal endpoints.
$vm = Get-AzureVM MyVirtualMachine
.\Import-AzureEndpointsFromCSV.ps1 $vm Normal-Endpoints.csv NoLB

# Import load-balanced endpoints.
$vm = Get-AzureVM MyLoadBalancedCloudSvc
.\Import-AzureEndpointsFromCSV.ps1 $vm[0] LB-Endpoints.csv LoadBalanced
.\Import-AzureEndpointsFromCSV.ps1 $vm[1] LB-Endpoints.csv LoadBalanced
.\Import-AzureEndpointsFromCSV.ps1 $vm[2] LB-Endpoints.csv LoadBalanced


After importing the endpoints you’ll see them showing up in the portal:

And after you imported the load-balanced endpoints for all your machines in the same cloud service you’ll see that these endpoints show up as being load balanced:

The script and the sample CSV files are available on GitHub.

No significant articles today

image_thumb1


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Himanshu Singh (@himanshuks) posted Real World Windows Azure: Serbian National News Agency Website Successfully Serves Visitors During Election Spikes on 11/21/2012:

imageAs part of the Real World Windows Azure series, we connected with Aleksandar Milinković, head of technical services at Tanjug to learn more about how the news agency’s website served spikes of 400,000 visitors using Windows Azure. Read Tanjug's success story here. Read on to find out what Aleksandar had to say.

Himanshu Kumar Singh: Tell me more about Tanjug?

Aleksandar Milinković: As the Serbian National News Agency, Tanjug releases about 400 news items every day on the most important political, economic, social, cultural and sports events in the country and abroad.

In addition to uploading written, photo, audio and video reports daily for website users and visitors, the Tanjug news service is an invaluable resource for all media in emergency and unexpected situations. An event certain to attract great interest from the Serbian public is elections at all levels.

HKS: What led you to evaluate cloud services?

image_thumb75_thumb5AM: Under usual circumstances, the IT infrastructure used by Tanjug runs at capacity on a daily basis. In preparation for the presidential, parliamentary and local elections scheduled for May 2012, we were on the lookout for a solution that could quickly and easily scale to handle high loads while adhering to budget restrictions. In addition, the solution would need to seamlessly integrate with the current IT infrastructure, consisting of 250 workstations and 24 servers with a heterogeneous software environment comprising Microsoft Windows and Ubuntu Linux operating systems.

HKS: How did you decide to use Windows Azure?

AM: We selected Windows Azure because of its scalability, ease of deployment and low cost, but most importantly the platform's ability to integrate with our existing infrastructure.

HKS: How does Windows Azure fit into the solution?

AM: The current website, developed with Microsoft ASP.NET technology and run on Microsoft IIS, was given a new segment entitled “Elections 2012”, the hosting of which was switched to the Windows Azure platform. Any visitor clicking on the “Elections 2012” link on the Tanjug website homepage would be automatically redirected to the website segment hosted on the Windows Azure platform.

In order to meet and evenly distribute the maximum possible loads, six small Windows Azure instances were deployed. For data storage, 500 GB were set aside in Windows Azure storage, and management of the database was entrusted to a 5 GB Windows Azure SQL Database. To ensure additional reliability, the Content Delivery Network (CDN) mechanism was used for caching and distribution according to visitor location, which was crucial for high-quality delivery of content requiring large bandwidth. The development of this segment of the Tanjug website involved Visual Studio 2010, along with ASP.NET MVC 3 technology and Entity Framework 4.0.

HKS: What are some of the operational benefits you’ve seen with Windows Azure?

AM: By introducing this cloud supplement to the existing IT structure, Tanjug managed to scale up its website in a simple and efficient way, with minimum investment, and adequately met the high loads during peaks in the election process. During the final five days of the election campaign, our website had more than 400,000 visitors, and thanks to the integration of the existing and cloud infrastructure we managed to keep the quality of our services at the level our clients expect. Moreover, it was not necessary to invest in expensive IT infrastructure whose capacity we would not be able to make adequate use of after the elections were over.

Read how others are using Windows Azure.

image_thumb22


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Michael Washington (@ADefWebserver) explained Theming Your LightSwitch Website Using JQuery ThemeRoller in an 11/23/2012 post:

image

imageThe Visual Studio LightSwitch HTML Client uses jQuery Mobile and is compatible with http://jquerymobile.com/themeroller/. This article covers information that is contained in Beth Massi’s LightSwitch HTML Client Tutorial - Contoso Moving. However, that tutorial is currently only available as a Microsoft Word document, not a web page. The information is also available at this link, but MSDN documentation has to be translated into several languages, so it avoids screen shots. Therefore we will cover the information here.

The Application Before

imageWe will use the Contoso Moving application from Beth’s tutorial. This is the completed application (it takes about 20 minutes to get the application to this point):

image

image

image

Get The Current Theme

image

To get the current theme, we first switch to File View.

image

We open the Default.htm page and see that it is using the theme in the dark-theme.css file.

image

Open up the dark-theme.css file and copy the contents.

Using ThemeRoller

To use ThemeRoller, go to: http://jquerymobile.com/themeroller/

image

Switch to version 1.1.1 (or whatever version of jQuery Mobile is in the Default.htm file of the LightSwitch website).

image

Click on Import.

image

Paste in the contents of the dark-theme.css file and click Import.

image

You can now design the theme. You drag the little color blocks and drop them on the mock up of the sample application.

image

When you are done, click the Download button.

image

Give the theme a name and click the Download Zip button.

image

Download the .zip file and un-zip it.

image

Right-click on the Content folder and select Add then Existing Item.

image

Select the theme.

image

Update the default.html file to point to the new theme.
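Assuming the downloaded CSS file was added to the Content folder as my-custom-theme.css (the file name is whatever you chose in ThemeRoller), the stylesheet reference in default.htm changes to something like the line below, replacing the existing dark-theme.css link:

    <link rel="stylesheet" type="text/css" href="Content/my-custom-theme.css" />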

The Application After

When we run the application it has a new theme:

image

image

image


• Michael Washington (@ADefWebserver) described Creating JavaScript Using TypeScript in Visual Studio LightSwitch in an 11/22/2012 post to his Visual Studio LightSwitch Help site:

imageJavaScript is very close to C#, so the only thing I dislike about it is that it doesn’t compile; it just throws errors at run-time. Using TypeScript, I can write JavaScript in an environment that provides compile-time validation. You can download and install the TypeScript plug-in for Visual Studio 2012 from here: http://www.typescriptlang.org/#Download.

imageI am using the LightSwitch HTML Client Preview 2.

image

The Application

image

For the sample application, I created a simple one table HTML app.

image

It displays the first name of the entries.

image

It has a popup screen that allows you to create and edit entries.

The JavaScript

image

In the screen designer, we can change the List to render using a Custom Control.

image

In the Properties for the Custom Control we click the Edit Render Code button.

image

This takes us to the JavaScript code file where we can write our own code to render the contents.

This is where we could potentially create a lot of code that does not compile and only throws errors at run-time.

Using TypeScript

image

To use TypeScript we first need to switch to File View.

image

We add a New Item to the Scripts folder.

image

We add a JavaScript file, but give it an extension of .ts rather than the usual .js.

image

The file will be created, and the TypeScript editor will open.

image

When we enter the following TypeScript code and save the file, the JavaScript file (and a minified version) is automatically generated:

class FormatName {
    _firstname: string;
    _lastname: string;
    _age: number;

    constructor (firstname: string, lastname: string, age: number) {
        this._firstname = firstname;
        this._lastname = lastname;
        this._age = age;
    }

    ReturnName() {
        var formatedName =
            this._lastname.toUpperCase()
            + ", " +
            this._firstname.toLowerCase()
            + " [" + this._age.toString() + "]";

        return formatedName;
    }
}

When we create the TypeScript code we get full type checking and intellisense. You can learn more about TypeScript at: http://www.typescriptlang.org.
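For reference, the JavaScript emitted for this class looks roughly like the following (a sketch; the exact output depends on the TypeScript compiler version):

    var FormatName = (function () {
        function FormatName(firstname, lastname, age) {
            this._firstname = firstname;
            this._lastname = lastname;
            this._age = age;
        }
        FormatName.prototype.ReturnName = function () {
            var formatedName = this._lastname.toUpperCase()
                + ", " + this._firstname.toLowerCase()
                + " [" + this._age.toString() + "]";
            return formatedName;
        };
        return FormatName;
    })();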

image

Next, we want to click on the .ts file, and in Properties set the Package Action to None (otherwise we will get a message box asking us if we want to update it each time we debug).

image

We can also place the TypeScript files in their own folder.

Consume The Code

image

We open the Default.htm page and add a reference to our JavaScript file (not the TypeScript file).

image

We return to the rendering method and add the following code:

    var itemTemplate = $("<div></div>");
    var FirstName = contentItem.data["FirstName"];
    var LastName = contentItem.data["LastName"];
    var Age = contentItem.data["Age"];
    var objFomatName = new FormatName(FirstName, LastName, Age);
    var FinalName = $("<h1>" + objFomatName.ReturnName() + "</h1>");
    FinalName.appendTo($(itemTemplate));
    itemTemplate.appendTo($(element));

image

When we run the application, it works!

TypeScript Links: Videos, Websites, Documents
LightSwitch HTML and JavaScript
Thanks

A special thanks to Jan Van der Haegen and Pat Tormey for feedback and improvements.

Download Code

The LightSwitch project is available at http://lightswitchhelpwebsite.com/Downloads.aspx

(you must have HTML Client Preview 2 or higher installed to run the code)

According to a message of 10/3/2012 from Stephen Provine of the LightSwitch team in the LightSwitch HTML Client Preview forum’s TypeScript and LightSwitch HTML - a perfect match? thread:

The LightSwitch team very much has TypeScript on the radar and are looking into whether it makes sense to integrate it into the LightSwitch design experience for a future release. It's good to hear that there is some interest from the community in doing this!


Beth Massi (@bethmassi) described Building an HTML Client for a LightSwitch Solution in 5 Minutes on 11/22/2012:

imageIf you haven’t heard, we released the HTML Client Preview 2 last week which adds the ability to create HTML5/JavaScript based clients to your LightSwitch solutions. I thought it would be a good exercise to add an HTML companion application to my Contoso Construction sample to demonstrate just how quickly you can build these clients.

imageContoso Construction is a sample that uses LightSwitch extensions and slightly more advanced coding techniques in order to integrate with mail servers, automate Word and Excel, create map visualizations, provide advanced data filtering, and connect to OData sources. The Silverlight client is meant to run on the desktop and is used by the construction company to manage construction projects.

One of the features of the application is the ability to add photos of the construction site.

image

imageRight now, however, someone on the site has to bring the pictures back to the office to upload them, or the crew needs to carry around a laptop to run the rich desktop application. Instead, we want to provide the ability to upload photos from any modern mobile device. So let’s see how we can build this companion client in minutes.

Add a Client

First thing we need to do is add a new mobile client to our current project. (You will need to first install the LightSwitch HTML Client Preview 2 in order to get this functionality. Keep in mind that this release is still in preview so make sure you don’t do this to any production applications – please use a copy ;)).

Right-click on the project node and select “Add Client…”

image

Then select HTML Client and name the client “MobileClient”.

image

Once you do this, LightSwitch will upgrade your project to support the new features in Preview 2. Keep in mind, once you upgrade the project, you will not be able to open it on another machine unless you have the Preview 2 installed.

Add a “Home” Screen

When users first open the application we want them to see a list of current projects, just like they see on the Home screen on the desktop app. To add a screen, right-click on the MobileClient (which is now set as your startup project for debugging) and select “Add Screen…”.

image

This will open up the Add New Screen dialog. Select the Browse Data Screen template, then select the CurrentProjects query as the Screen Data and name the screen “Home”. Then click OK.

image

The Screen Designer will open. Because this particular query has an optional parameter that retrieves all projects in the system, LightSwitch will place the parameter input on the screen as well. In this case, we want only the current projects displayed so delete this control.

image

If you hit F5 to run the app at this point, we’ll now see the list of current projects on the home screen.

image

Add a View Details Screen

Now we want to display a screen that shows the project details as well as the photos. So right-click on the MobileClient and “Add Screen…” again. This time select the View Details Screen template. Select Project as the screen data and then make sure to include the related Pictures.

image

On this screen we also want the mobile user to be able to update the notes field if they need to. So change the Notes field to a TextArea control.

image

Wire Up the Tap Event

In order to open the details screen we’ll need to wire up the tap event of the list box on the Home screen. Open the Home screen, select the Current Project list and in the Properties window under Actions, add a Tap action by clicking the link.

image

This will pop up a dialog that lets us call up the details screen. Select “Choose an existing method”, then select “showViewProjectDetail”. Finally, you will need to specify that CurrentProject.selectedItem should be sent to the details screen.

image

Now if we hit F5 to run this we can tap on a project in the list and the View Project Detail screen will appear. Notice that the child relation to the Pictures is automatically displayed on a separate tab.

image

Displaying Images in Tiles

Next we want to display the photos in a tiled list when the user opens the Pictures tab. Open the ViewProjectDetail screen and first set the Picture summary control to a Rows layout. This will display the image & the notes controls.

image

Next change the List control to a Tile List control.

image

Hit F5 to run the application, select the first project then select the Pictures tab and you will see the tiled list of picture thumbnails.

image

Add a Dialog to Upload Pictures

Now it’s time to add the main part of functionality to this app. We want to allow users to upload new photos. Right now using Preview 2 we need to add some custom code to do this. Luckily all the code we need is part of the HTML Client tutorial. There are a couple files we need from this project to incorporate into ours:

Sample Resources\image-uploader.js
Sample Resources\image-uploader-base64-encoder.aspx

We need to add these to our client project manually. In order to do this simply flip to “File View” which allows you to see the physical layout of the solution.

image

Next, open up the MobileClient node and add an existing file to the \Scripts node.

image

Select the image-uploader.js and the image-uploader-base64-encoder.aspx. Next open up default.htm located in the root by double-clicking on it and add the following reference at the end of the script block:

<script type="text/javascript" src="Scripts/image-uploader.js"></script>

Flip back to Logical View and then open the ViewProjectDetail screen. Select the Dialogs node at the bottom, and then select Add Dialog. In the Properties window, change the dialog’s Name to ImageUploader.

image

From the left-hand side of the Screen Designer (the view model), drag the SelectedItem node of Pictures into the new dialog. Delete the Project item.

image

Next select the newly-added Selected Item node. In the Properties window, set both Width and Height to Fit to Content. Change Note’s type to Text Area.

image

Next, the fun part -- writing some JavaScript code :). Switch the dialog’s Picture field from an Image to a Custom Control.

image

In the Properties window, choose the Edit Render Code hyperlink or select the “Write Code” button at the top right of the designer and select the _render method.

image

Add the following code (in bold):

myapp.ViewProjectDetail.Image1_render = function (element, contentItem) {
    // Write code here.
    createImageUploader(element, contentItem, "max-width: 300px; max-height: 300px");
};
The call redirects all of the heavy-lifting to the image-uploader.js file we added. The customizable styling statement ("max-width: 300px; max-height: 300px") specifies the image preview size. Now all we need to do is call up this dialog. Let’s add a button on the screen tab to do this. Select the Pictures tab and select Add, New Button.

image

In the Add Button dialog box, choose the Pictures.AddAndEditNew method, and Image Uploader as the “Navigate To” dialog.

image

Set Data Default Value

Next we need to add code to set the “Updated” field automatically when a picture is uploaded. This is a specific requirement of Contoso Construction’s schema: the Picture.Updated field is required, but it’s internal, so it’s not displayed on the screen. Select the Projects entity to open the Data Designer. You will notice at the bottom of the Data Designer there are now different perspectives. The Server perspective is where all your business logic resides; this is the middle-tier data service, where “the buck stops here” code like data validation and complex data processing lives. This is why we can add an HTML client to an existing project with minimal code: most of your code lives here.

In order to add default values on our HTML client, select the “MobileClient” view and then select the Updated field. Then drop down the “Write Code” button and select “created” method. This allows you to set defaults on the JavaScript client.

image

Write this code (in bold):

myapp.Picture.created = function (entity) {
    // Write code here.
    entity.Updated = new Date();
};

RUN IT! When we press F5 to run our application, navigate to a project, and then select the Pictures tab, we can click the “Add Picture” button at the bottom of the screen to add a new image and some notes.

image

Clicking the checkmark at the top right will enter the new picture into the list. When you’re done uploading images, click the Save button on the top right of the screen to save them all.

image

Note that if the user tries to navigate away from the screen, they will be prompted to save or discard changes just like you would expect.

Add the Company Logo

Finally, let’s add the company logo to the mobile client so this looks professional and company branded. Flip to file view in Solution Explorer. Then expand the MobileClient project node and look under Content/Images and you will see a user-logo.png. Simply replace that file with the logo you want.

image

You can also replace the user-splash-screen.png in order to display a different image when the application is loading. Now when we run the application, we’ll see the Contoso Construction icon in the upper left of the Home screen.

image

Deploy to Azure Website

Now that we have our HTML client application we can deploy it in 3 minutes to an Azure Website like I showed here:
Easily Deploy Your LightSwitch Apps to Azure Websites in Minutes

I also encourage you to work through the HTML Client tutorial to see end-to-end how to build an HTML companion client & deploy it to Azure. Once the app is deployed to the internet, the construction crew can now use the HTML client app on their smart phones and tablets.

Wrap Up

As you can see, LightSwitch makes it super simple and fast to build HTML clients. You only need to know a little HTML5 and JavaScript to customize the UI like we did. LightSwitch has always focused the developer on writing code that provides the business value (custom controls, business rules) without having to worry about all the plumbing (data access, service implementations, etc.). This still holds true with the addition of the HTML client.

Try it out and let us know what you think.

For the details of deploying a LightSwitch HTML Client app to Windows Azure and SharePoint, see my LightSwitch HTML Client Preview 2: OakLeaf Contoso Survey Application Demo on Office 365 SharePoint Site post of 11/20/2012.


Andy Kung described Building a LightSwitch HTML Client: eBay Daily Deals in an 11/21/2012 post:

imageWith the release of the LightSwitch HTML Client Preview 2, I’d like to take this opportunity to showcase the simple steps to get an HTML client up and running. The holiday season is now upon us. In this tutorial, we will create a simple HTML client to display daily deals from eBay. Something like this:

clip_image001

Make sure you have the LightSwitch HTML Client Preview 2 installed on your machine. Ready? Let’s start!

Create a project

Launch Visual Studio 2012. Create a LightSwitch HTML Application project by going to File, New Project. You can choose a VB or C# project. Name the project DailyDeals.

clip_image003

Start with data

After the project is created, LightSwitch will prompt you to start with data. As you know, LightSwitch supports producing and consuming OData feeds in Visual Studio 2012. I found an eBay OData feed listed on the Open Data Protocol website that we can use in our example.

Let’s attach to the OData service. Click Attach to external Data Source.

clip_image004

In the Attach Data Source Wizard, select OData Service, then click Next.

clip_image005

Enter the URL for the eBay OData feed (http://ebayodata.cloudapp.net/). Select None for authentication (since the feed doesn’t require any authentication). Click Next.

clip_image006

The wizard finds all the data entities from the data service. In our example, we’re only interacting with the Deals table. Select Deals and click Finish.

clip_image007

The Deals table now appears in the Solution Explorer. We have successfully attached to the eBay OData service.

clip_image008

Next, let’s create a screen for Deals. In Solution Explorer, right click on the Client node and select Add Screen.

clip_image009

In the Add New Screen dialog, select Browse Data Screen template. Name the screen DailyDeals. Select Deals as the screen data. Click OK.

clip_image011

The DailyDeals screen is created and opened in the Screen Designer.

clip_image012

Let’s see what we’ve got so far. Run the application by pressing F5. The HTML client is now running in your default browser!

clip_image013

We’ve got a list of daily deals from eBay. It is a good starting point, but there are some things we can make better. For example:

  • The screen says Daily Deals and the list also has a Deals header. It seems redundant.
  • The information in each deal doesn’t make much sense to me. What I really want is a list of product pictures and a way to drill in for more information about the deal.

Let’s see how we can go about to address these issues. Close the browser and return to the Visual Studio IDE.

Turn off list header

Since the screen title already says Daily Deals, it is redundant for the list to show a header. Select Deals in the screen content tree and uncheck Show Header in the Properties window.

clip_image014

Use a tile list

I’d like to present the deals as image thumbnails instead of a vertical list of text. LightSwitch provides a Tile List control just for that! In the screen content tree, select Deals and change its control from List to Tile List.

clip_image015

Once the Tile List is selected, the Deal node underneath it will expand to show all the fields available.

clip_image016

You can think of the Deal node as a tile. By default, it is a 150x150 pixel tile. You can customize the size via the Properties window. We will keep it 150x150 in our example.

Delete everything but Picture Url under Deal. Set it to use the Image control. Note that the built-in Image control works with both binary data and an image URL. In the Properties window, set the Width and Height of the image to 150x150. The image will take up the entire tile.

clip_image018

Press F5 and run the application. We’ve got a list of product images! Next, let’s allow the user to find information about the deal by tapping on an image. Close the application and return to the Visual Studio IDE.

clip_image019

Create a detail dialog

In the Screen Designer, drag Selected Item from the left pane to create a new dialog under the Dialogs node.

clip_image020

Change the Display Name of the dialog to Daily Deal via Properties window. Also,

  • Change Label Position to None, since we don’t want to show labels of the fields in the dialog.
  • Check Use read-only controls, since we don’t allow users to edit the data.

In the dialog, delete everything but Picture Url, Title, and Converted Current Price.

  • Use Image control for Picture Url.
  • Move the Picture Url to the front.

clip_image021

Next, let’s make it open the dialog when the user tap on a deal in the tile list. Select the Deals node (Tile List) and find the Item Tap action in the Properties window. Click None.

clip_image023

Configure the tap action as follows. This indicates that we want to show the Daily Deal dialog with the Item Tap event on the list items. Click OK.

clip_image024

Press F5 to run the application. Tap on a deal to launch the deal dialog!

clip_image025

Alright, we’re getting closer! We can certainly make this dialog look a little better. For example:

  • Make the picture bigger to fill up the dialog
  • Add a currency symbol for the price
  • Emphasize the title and price with some formatting

Close the application and return to the Visual Studio IDE.

Customize the dialog

In the Screen Designer:

clip_image026

  • Select Daily Deal (Dialog), set its Width to Fixed Width of 400 pixels.
  • Select Picture Url, set the size to 400 x 350 via Properties window.
  • Select Title, set the Font Style to Strong via Properties window.
Create a custom control

Next, let’s add a currency symbol to the Converted Current Price. Since the Text control doesn’t know that it’s a currency, we will write our own HTML to visualize the value.

Select Converted Current Price and choose Custom Control. Then, select Edit Render Code via the Properties window. This will launch the code editor.

clip_image027

Write the following code to visualize the price value with a currency symbol in an h1 tag.

myapp.DailyDeals.ConvertedCurrentPrice_render = function (element, contentItem) {
    var itemTemplate = $("<div></div>");
    var title = $("<h1>$" + contentItem.value + "</h1>");
    title.appendTo($(itemTemplate));
    itemTemplate.appendTo($(element));
};

Add a button

Finally, we need to enable the user to go to the actual site to purchase the item. Let’s add a button in the dialog to do that.

Right click on the Daily Deal dialog and select Add Button.

clip_image028

Create a method called ViewDeal and click OK.

clip_image029

A button is now added inside the dialog. Double-click the View Deal button to edit the method in the code editor.

clip_image030

Write the following code to open a browser window.

myapp.DailyDeals.ViewDeal_execute = function (screen) {
    window.open(screen.Deals.selectedItem.DealUrl, "mywindow");
};

Press F5 to run the application. Tap a product image to see the much improved dialog!

clip_image031

Run the application on a device

So far we’ve been using the application in the debug mode with the default desktop browser. I encourage you to publish the application to Azure and view it on your favorite mobile devices.


The Visual Studio LightSwitch Team (@VSLightSwitch) posted LightSwitch HTML Client Tutorial - Contoso Moving to the MSDN Code Library on 11/12/2012 (missed when published):

imageThe new HTML5 and JavaScript-based client is an important companion to our Silverlight-based desktop client. This tutorial walks through building out the mobile client used by Contoso Movers’ planning specialists.

Download: VB.NET (6.1 MB)

imageThe new HTML5 and JavaScript-based client is an important companion to our Silverlight-based desktop client that addresses the increasing need to build touch-oriented business applications that run well on modern mobile devices.

In this tutorial, we’ll build a touch-first, modern experience for mobile devices. To help ground the tutorial, we’ve created a fictional company scenario that has a need for such an application.

Helpful resources

As you walk through this tutorial, please bear in mind that there are useful resources available to help you should you get stuck or have a question:

The Contoso Moving Application

Contoso Moving is an application that’s used by Contoso Movers, Inc. to take the inventory of customers’ residences prior to moving. The data collected via the application helps Contoso Movers determine the resources required to move a particular client’s belongings—how many trucks, people, boxes, etc. need to be allocated. The application is comprised of two clients that serve distinct business functions:

  1. Schedulers use a desktop application to service new customer requests and create appointments. This application is a rich desktop application primarily geared towards heavy data entry with the keyboard and mouse, since Schedulers are on the phone with customers a lot and need to enter quite a bit of information into the system during the course of a day.
  2. Planning Specialists use a tablet device to quickly take inventory—on location—of each residence on the specialist’s schedule for the day. Taking inventory involves detailing each room in the residence, its size and entry requirements (if any), and listing its contents. Pictures are often taken of each room so the movers have a point of reference when they arrive. Secondarily, Planning Specialists may make notes about parking restrictions for the move team (i.e., where they can park the truck during the move).

This tutorial walks through building out the mobile client used by Contoso Movers’ planning specialists.

Unfortunately, Planning Specialists can’t use an Android tablet, such as Google’s Nexus 7, with apps using LightSwitch HTML Client Preview 2. Maybe later.


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Kristian Nese (@KristianNese) described Creating a management infrastructure in Windows Azure in an 11/22/2012 post:

imageThis blog post will only provide some guidance on how to get started with IaaS in Windows Azure; it will not explain the wider aspects of the Azure architecture, nor the common pitfalls, since I have described these previously over the years.

imageThis is the project:

“We have a large infrastructure on-premise, and many locations around the world. We want a reliable monitoring infrastructure, using System Center to tell us what’s going on, also if the entire location goes under water etc. We have some spare resources, but we want this operation to be totally separated from the bits and bytes we use on a day-to-day basis”.

This brings Windows Azure to the table.

First things first: since this should be completely separated from the wide diversity of AD topologies in the business, we must start by creating a new Active Directory forest in Azure.

  1. Log on to the Windows Azure portal with your account.
  2. Create a Virtual Network and a Storage account in the same affinity group (important).
  3. Download and install the PowerShell module for Windows Azure from http://go.microsoft.com/?linkid=9811175&clcid=0x409
  4. Run Windows Azure PowerShell as an Administrator on your computer/server and execute the following cmdlets, one by one.

Set-ExecutionPolicy RemoteSigned

Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"

Get-AzurePublishSettingsFile

The last cmdlet will direct you to Windows Azure where you should already be signed in, and let you download the settings for your account. Save this file to a folder on one of your HDDs.

Run the following command:

powershell_ise

This will start Windows Azure PowerShell ISE, where you can deploy your domain controllers and your virtual machines.

It’s really important that you perform these operations with PowerShell so that your domain controller can survive servicing in Azure without losing any data. (OK, I’ll explain why in another blog post, but generally speaking, the HA mechanism in Azure is not similar to the one you may be familiar with from Hyper-V and Failover Clustering on-premises.)

Paste the following script into PowerShell ISE:

Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"
Import-AzurePublishSettingsFile 'C:\azure\TheSubscriptionFileYouDownloaded.publishsettings'
Set-AzureSubscription -SubscriptionName '3-Month Free Trial' -CurrentStorageAccount 'YourStorageAccount'
Select-AzureSubscription -SubscriptionName '3-Month Free Trial'
#Deploy the Domain Controller in a virtual network
#-------------------------------------------------
#Specify my DC's DNS IP (127.0.0.1)
$myDNS = New-AzureDNS -Name 'myDNS' -IPAddress '127.0.0.1'
$vmname = 'VMName'
# OS Image to Use
$image = 'MSFT__Windows-Server-2012-Datacenter-201210.01-en.us-30GB.vhd'
$service = 'ServiceName'
$AG = 'YourAffinityGroup'
$vnet = 'YourVirtualNetworkName'
#VM Configuration
$MyDC = New-AzureVMConfig -name $vmname -InstanceSize 'Small' -ImageName $image |
Add-AzureProvisioningConfig -Windows -Password 'Password' |
Set-AzureSubnet -SubnetNames 'TheSubnetYouCreatedinYourVirtualNetwork'
New-AzureVM -ServiceName $service -AffinityGroup $AG -VMs $MyDC -DnsSettings $myDNS -VNetName $vnet

This should start the deployment of your first VM, ready to be provisioned as a domain controller.
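
The post doesn’t show how to check on the deployment; as a minimal sketch (my addition, reusing the $service and $vmname variables from the script above), you can poll the VM from the same session and wait for InstanceStatus to report ReadyRole:

#Sketch only: check the provisioning status of the new VM (continue once InstanceStatus is ReadyRole)
Get-AzureVM -ServiceName $service -Name $vmname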

Once it's completed, attach two empty virtual hard disks to this VM (one disk for AD and one disk for backup; you can specify the sizes you'd like), and log on.
Depending on which OS you are running, go ahead and configure those newly attached disks so that they are ready to be used by the VM.
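
The post leaves the disk attachment itself to you; as a minimal sketch (my assumption, with arbitrary sizes and labels, reusing the $service and $vmname variables from the script above), the same step could be scripted like this:

#Sketch only: attach two empty data disks to the DC, one for the AD database and one for backup
Get-AzureVM -ServiceName $service -Name $vmname |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel 'ADData' -LUN 0 |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel 'Backup' -LUN 1 |
Update-AzureVM
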

Create an NTDS folder on one of the disks for AD.

Once this is done, go ahead and install Active Directory on your virtual machine, and place the AD settings in the NTDS folder you created.
When your VM is installed with AD DS, perform a backup of the server OS to the other VHD you created.

So far, so good.

To deploy virtual machines to your newly created domain, use this script:

Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"
Import-AzurePublishSettingsFile 'C:\azure\TheSubscriptionFileYouDownloaded.publishsettings'
Set-AzureSubscription -SubscriptionName '3-Month Free Trial' -CurrentStorageAccount 'YourStorageAccount'
Select-AzureSubscription -SubscriptionName '3-Month Free Trial'
#Deploy a new VM and join it to the domain
#-------------------------------------------
#Specify my DC's DNS IP (192.168.0.4) <-- this is just an example. use your newly created DC IP
$myDNS = New-AzureDNS -Name 'myDNS' -IPAddress '192.168.0.4'
# OS Image to Use
$image = 'MSFT__Windows-Server-2012-Datacenter-201210.01-en.us-30GB.vhd'
$service = 'NewServiceName'
$AG = 'YourAffinityGroup'
$vnet = 'YourVirtualNetworkName'
$pwd = 'Password'
$size = 'Small'
#VM Configuration
$vmname = 'VMName'
$MyVM1 = New-AzureVMConfig -name $vmname -InstanceSize $size -ImageName $image |
Add-AzureProvisioningConfig -WindowsDomain -Password $pwd -Domain 'corp' -DomainPassword 'Password' -DomainUserName 'Administrator' -JoinDomain 'FQDN' |
Set-AzureSubnet -SubnetNames 'TheSubnetYouCreatedinYourVirtualNetwork'
New-AzureVM -ServiceName $service -AffinityGroup $AG -VMs $MyVM1 -DnsSettings $myDNS -VNetName $vnet

This should deploy a VM, ready to use in your newly created domain in Windows Azure.

Once ready, attach some disks here as well for your data partitions (see the sketch below).
You are now ready to install System Center Operations Manager 2012 in this VM, which is a member of the domain you have created.
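
As with the domain controller, the disk attachment can be scripted; the following is only a sketch (my assumption on size and label, reusing the $service and $vmname variables from the second script), with one way to open an RDP session so you can install Operations Manager:

#Sketch only: add a data disk for the Operations Manager data and connect over RDP
Get-AzureVM -ServiceName $service -Name $vmname |
Add-AzureDataDisk -CreateNew -DiskSizeInGB 50 -DiskLabel 'Data' -LUN 0 |
Update-AzureVM
Get-AzureRemoteDesktopFile -ServiceName $service -Name $vmname -Launch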


The StorageServer blog posted Gartner report claims that cloud computing PaaS revenue will reach $1.2 billion in 2012 on 11/19/2012:

imageGartner predicts that cloud-based Platform-as-a-Service (PaaS) revenue is set to reach $1.2 billion by the end of 2012, up from $900 million in 2011. As per the research firm’s prediction, PaaS revenue will reach $1.5 billion in 2013 and $2.9 billion by 2016.

Cloud computing based Platform-as-a-Service includes suites of application infrastructure services, like application platforms as a service (aPaaS) and integration platform-as-a-service (iPaaS); special application infrastructure services like database platform as a service, business process management PaaS, messaging as a service and further functional kinds of middleware offered as a cloud service.

A technological analyst from Gartner says that, out of all the cloud technological aspects, IaaS and SaaS platforms are the most established and traditional cloud forms from a competitive viewpoint, while PaaS is the least advanced.

For this reason, competition in the PaaS market will intensify in the coming years, and the battle between vendors and products will increase. Many new players will also enter the market in order to capture some of the market share and gain prominence in this period.

The largest segments within the PaaS market spending are application platform services with 34.5% of investment, accounting as a platform which occupied 31% of total PaaS spending, application lifecycle management services which occupied 12% of spending, BPM platform services which occupied 11.6% of spending, and cloud integration services which occupied 11.4% of revenue.

Gartner has predicted that the potential spending will average $360 million/year from 2011 to 2016.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

image_thumb75_thumb7No significant articles today


<Return to section navigation list>

Cloud Security and Governance

Xath Cruz analyzed Data Protection and Privacy Issues in Latin America for the CloudTimes blog on 11/21/2012:

latin america cloud 300x249 Data Protection and Privacy Issues in Latin America

Due to the speed with which technology – particularly IT and cloud computing – develops, as well as its tendency to cross international borders and jurisdictions, there is mounting pressure on the legal systems of the countries concerned to create new policies or modify existing ones in order to adapt to the ever-changing IT landscape, particularly with regard to data transfer, data collection, and data privacy. Different countries have different (and sometimes conflicting) laws and policies, so international data transfers that are perfectly legal in their originating countries can violate the laws of the countries the data passes through, or of the country where the data ends up stored.

imageThe International collection, transfer, and storage of data and its conflicts with various international laws were already a problem in the early days of the Internet, but the rise of cloud computing has further exacerbated the phenomenon. All the various kinds of cloud applications and services – from cloud storage, to web based email, to cloud-based tools such as office suites – result in data being accessed, transferred, and stored in different countries on a regular basis.

It is therefore important that questions about the legal model that should be responsible for these kinds of transactions be posed, and that the most appropriate models for regulation and processing in cloud computing be the adoption of contractual stipulations aimed at preserving data integrity and privacy, while preventing it from turning into a loophole that cyber criminals or malicious individuals can use as a safe haven or protection from the law.

Personal Data Protection and Privacy

One of the key things about the data protection system in Latin America is that it differs greatly from the European and US models. Unlike Europe, there is no international treaty or supranational regional body of rules that regulates the protection of personal data nor their transfer. Unlike the US, a large number of Latin American countries and legal systems have data protection provisos enshrined in their Constitution. However, the problem with Latin American legal systems is that in spite of having provisions regarding protection of personal data in their constitutions, most of the countries have no legal rules building on said constitutional precepts. At best, the countries in question have had a delayed legislative development.

Argentina on Personal Data Protection

In Argentina, the regulations on personal data protection are developed out of a series of principles established in sections 4 to 12 of Law 25326. Argentina’s more proactive approach to safeguard the personal data through legislation has led to the country being recognized by the European Commission as the only Latin American country with an adequate level of protection, resulting in the country becoming the main recipient of personal data transferred from Spain to other Latin American countries.

In order to protect personal data, Argentina has created various principles of purpose, data quality, proportionality, transparency in processing, safety, modification, access, and opposition as well as restriction of successive transfers to outside countries. The only exclusions to Argentina’s laws regarding personal data are judicial and police collaboration, the fight against terrorism, the exchange of medical information, stock exchange and bank transfers, or transfers agreed within the framework of an international treaty.

Argentina’s laws provide no special provisions or rules on the right to privacy when it comes to the Internet. The country’s courts tend to treat cases regarding privacy on the Internet the same as those in other media, such as TV and print. Their laws also lump the Internet, and the services provided through it, in with “files, databases, or other technical media for data processing.”

Colombia on Personal Data Protection

In Colombia, while there is a constitutional rule on the right to privacy, honor and good name, as well as legal protection granted to habeas data, no legal regulations developing those rights have been issued for more than 17 years. The right to privacy in Colombia was developed mainly on the basis of previous decisions, focusing on Constitutional rulings that started to define its essential core.

Colombia’s model regarding international transfer and data processing is extremely protectionist; any data processing requires previous, express, and informed consent. This is an extremely high standard that can’t even be found in European models.

When it comes to access to personal information via the Internet, Colombia’s draft statutory law contains a prohibition stating that “personal data, except for public information, shall not be available on the Internet.” This odd stipulation treats the Internet only as a means of communication, overlooking the fact that there are numerous services provided through the Internet that would require privacy protection.

The above means that Colombia prohibits the transfer or storage of personal data via Internet and cloud computing systems. In fact, their laws can be stretched and interpreted in a way that prohibits disclosure of private information like biographies or personal information on social networks. The key takeaway is that Colombia’s laws are not meant for a world with Internet, as they are made up of obsolescent rules.

Chile on Personal Data Protection

In Chile, the processing of personal data in registries or databases maintained by public or private bodies is ruled by Law 19628 on the Protection of Private Life or protection of personal data. Law 19628 governs over data processing that consists of personal information collection, processing, transfer and storage, and is applied to processing, collection, and storage of data over the Internet. This is because Chile has not yet created specific rules custom-tailored to the IT sector, and has been using or extending preexisting legislation to deal with cases.

When it comes to transfer of personal data internationally, Chile’s laws do not establish a specific pattern. They consider international transfer of data as included in the concept of processing, and is allowed as long as it complies with the provisions of their law for data processing. However, in the original text submitted before their Chamber of Deputies, which was not approved by the law makers, transfers to countries or third parties are prohibited if they don’t have the same level of protection as those prevailing in Chile.

Chile’s law is the subject of controversy and criticism due to its lack of principle of purpose with regard to personal data processing, which means it lacks legal effect. Currently, their legislative branch is working on a bill that will incorporate the principle of purpose in Law 19628, which is a step in the right direction but not enough, as the law in question uses broad strokes and can still be subject to loopholes, especially due to the rapidly-developing nature of cloud computing technology.

Mexico on Personal Data Protection

Mexico is fairly late when it comes to crafting laws on personal data protection, having passed one only in 2010. As with other countries, Mexico’s laws on personal data hinge on principles of purpose and consent, establishing that any processing of personal data is subject to the owner’s consent, with the stipulation that there are different ways of expressing consent, and that there are specific cases where consent may be bypassed, such as anything that concerns national security.

Because they are relatively young, having been made when the cloud and IT industry was already in full swing, Mexico’s rules regarding personal data processing are much better suited to personal data processing over the Internet and in the cloud. For instance, provided that the data transfer is accompanied by a privacy notice, the cybernaut’s behavior implies that they have accepted the conditions set forth by such notice (that is, implicit consent). This opt-out system is conducive to smooth data transfers over the Internet and in the cloud.

The same amount of protection is afforded to international data transfers. For instance, Section 36 gives authorization to international data transfers as long as they are carried out in accordance with the privacy notice. Their laws also anticipate scenarios in which international data transfers are authorized even if the owner of the information has not given their consent, either intentionally or due to lack of foreknowledge. The events in question tend to cover a broader scope than those covered by the legislations of other countries in the region. In cases like this, Mexican law seems flexible enough and capable of adapting to the changes in IT trends in a reasonable speed and manner.

Data Retention: Conflicts Regarding the Right to Privacy

Personal data retention means the storage of personal information – from telephone call records and Internet traffic logs to communication content – by public entities or business companies. Many rules regarding personal data retention consider the protection of personal information as another form of data processing.

The majority of legal systems do not allow the violation of communications privacy, a principle associated with the very origins of the liberal state in the 18th century. However, a series of recent events has set precedents for nations to create restrictions on said principle, allowing communications to be intercepted and rights to privacy to be withheld in the interest of national security or criminal prosecution. One of the main reasons for the shift is the terrorist attacks that hit New York on September 11, 2001; others are those that took place in Madrid on March 11, 2004, and those that occurred in London on July 7, 2005. The attacks led the European and US authorities to consider data retention an effective means of predicting, anticipating, and preventing terrorist attacks, as well as a great help in the fight against organized crime.

The legal developments concerning the above were not met with praise, though, as the measures were deemed exaggerated and a danger to the right to privacy. They are also criticized for being disproportionate, as the effectiveness of the measures does not compensate for the damage and limitations sustained by the right to privacy: the results achieved by the measures are not considered enough justification for the extremely high levels of privacy limitation and intrusion. Thus, modifications to the scope of said measures have been proposed in order to meet halfway and achieve a more balanced scenario.

Latin America on Retention of Personal Information

Latin America currently doesn’t have any rules comparable to the European directive on personal data conservation or retention. It is worth noting that Latin American legal systems usually have frameworks regulating the interception of communications (particularly telecommunications) when there is a prior judicial order.

However, different countries tend to have different rules when it comes to imposing obligations in terms of information retention, especially regarding credit and financial information and the obligations of the data banks that store this kind of information and their reports. Pundits attribute this to the fact that the majority of Latin American countries have not had to deal with an attack from outside forces that could have motivated their governments to redesign their laws on communications and Internet traffic information in an effort to help police authorities access pertinent information.

Conclusions and Recommendations

Due to Europe’s strong influence on Latin American legislation with regard to rules on personal data protection, Latin America has laws, policies, and rules regarding personal data protection in the cloud that are up to par with European standards. While they’re not excellent by any means and still have room for improvement, they can be considered capable of providing adequate protection. This is why the last 11 years have seen a lot of Latin American countries undergoing a transition from the habeas data model to general legislation with stringent and specific data protections.

Generally, Latin American legislations tend to be stringent and based on their European counterparts, but it is fortunate that many of their laws are flexible and capable of acknowledging changes and developments in the IT and cloud computing industry. This ensures that there is a balanced scenario; privacy and personal data protection on the cloud is a fundamental right that must be protected, but the protection should be done in a reasonable, flexible, and proportionate manner. At the end of the day, Latin American countries would all benefit from working together on a joint policy to establish reasonable and balanced protection standards appropriate to contemporary privacy issues, which will ensure the adoption of a fair and adaptable regulatory framework designed to respond to the challenges posed by the Internet and the cloud.

Related Articles:

image_thumb2I was surprised to find Brazil missing from the lists of countries surveyed.

No significant security articles today


<Return to section navigation list>

Cloud Computing Events

Microsoft’s 2013 MVP Global Summit will take place 2/18 through 2/21/2013 in Bellevue and Redmond, Washington, according to the event’s site, which went live on 11/23/2012:

image

The Summit hotels appear to be the same as last year. The agenda and session list weren’t posted as of 11/24/2012.


•• Himanshu Singh (@himanshuks) posted Windows Azure Community News Roundup (Edition #46) on 11/23/2012:

imageWelcome to the latest edition of our weekly roundup of the latest community-driven news, content and conversations about cloud computing and Windows Azure. Here are the highlights for this week.

Articles, Videos and Blog Posts
imageUpcoming Events and User Group Meetings

North America

Europe

Rest of World/Virtual

Recent Windows Azure MSDN Forums Discussion Threads

Recent Windows Azure Discussions on Stack Overflow

Send us articles that you’d like us to highlight, or content of your own that you’d like to share. And let us know about any local events, groups or activities that you think we should tell the rest of the Windows Azure community about. You can use the comments section below, or talk to us on Twitter @WindowsAzure.


See the Alon Shachar and Nir Channes of SAP will present a Develop and consume RESTful services using the new Eclipse OData modeler session to EclipseCon Boston 2013 on 3/26/2013 article in the Marketplace DataMarket, Cloud Numerics, Big Data and OData section above.


Chris Klug (@ZeroKoll) posted Code from the Sweden Azure User Group presentation this week on 11/22/2012:

imageOk guys, here is the code that I used during the SWAG presentation this week. I promised to get it on-line before the end of the week, and it seems like that actually happened. Yay!

It includes the client end of it, as well as the scripts needed on the server end. Just remember that you need to configure your account details in App.xaml.cs, sign up for SendGrid if you want to send e-mails, and configure the storage account settings in the images insert script.

image_thumb75_thumb8Any questions? Just ask!

Code: SWAG Code.zip (76.73 kb)


<Return to section navigation list>

Other Cloud Computing Platforms and Services

‡ Sophie Curtis (@SCurtisss) wrote an ICANN reveals objections to proposed top-level domains story, which PC World published on 11/24/2012:

imageA panel representing about 50 of the world's national governments has revealed a list of the proposed generic top-level domain (gTLD) names to which there have been objections.

Back in May, the ICANN registration process for new gTLDs finally drew to a close, and in June ICANN published a list of which domain names had been applied for and by whom. A total of 1930 applications were received for suffixes such as .cloud, .music, .docs and .lol.

imageICANN said at the time that anyone who objected to an application and believed they had the grounds to do so could file a formal objection within seven months.

In August it was revealed that Saudi Arabia had objected to a variety of new gTLDs including .gay, which it said promotes homosexuality and could be offensive to societies that consider it to be contrary to their culture.

People from other countries also complained about some of the proposed gTLDs, for example about the use of .patagonia, which is said to be the name of a geographical region that should not be assigned to a private company.

imageNow the Governmental Advisory Committee (GAC), which provides advice to ICANN on issues of public policy, has filed 242 "Early Warnings" on applications that are thought to be controversial or sensitive.

Early Warnings mainly consist of requests for information, or requests for clarity on certain aspects of an application. They are intended to give the applicant an opportunity to withdraw their application and recover the bulk of their $185,000 (£116,300) registration fee.

Applicants have 21 days to respond to the Early Warning. If the matter is not resolved amicably, the GAC can lodge a formal complaint in April. …

Read more.


You can read ICANN’s GAC Early Warning List here. Following are Early Warning entries for the TLD .cloud:

Application | ID Number | Applicant | Filing GAC Member | Early Warning
cloud | 1-1315-79670 | Amazon EU S.à r.l. | Australia | Cloud-AU-79670.pdf
.クラウド [cloud] | 1-1318-69604 | Amazon EU S.à r.l. | Australia | CloudIDN-AU-69604.pdf
cloud | 1-1099-17190 | Charleston Road Registry Inc. | Australia | Cloud-AU-17190.pdf
cloud | 1-1027-19707 | Symantec Corporation | Australia | Cloud-AU-19707.pdf

Following is a copy of page 1 of Australia’s Early Warning for ID Number 1-1315-79670:

image

It’s clear from Amazon’s proposal to monopolize the .cloud TLD (see below) that the firm would allow no third-party access to the TLD. Sandeep Ramchandani reported that Amazon has applied for 75 TLDs in his 00:21:36 The New gTLD Opportunity - Sandeep Ramchandani, Business Head, Radix Registry YouTube video.


Following is a copy of my article of 11/11/2012 for the Other Cloud Computing Platforms and Services section of my Windows Azure and Cloud Computing Posts for 11/9/2012+ post:

Reuven Cohen (@ruv) reported The Battle For The Cloud: Amazon Proposes ‘Closed’ Top-Level .CLOUD Domain in an 11/6/2012 article for Forbes.com:

imageAccording to a new proposal document uncovered by the website newgtldsite.com, Amazon.com is proposing a closed registry for the new .CLOUD generic top-level domain (gTLD). In the Amazon .CLOUD application it states “All domains in the .CLOUD registry will remain the property of Amazon. .CLOUD domains may not be delegated or assigned to third party organizations, institutions, or individuals.”

What this means is that, unlike other top-level domains such as .com, .net, .tv, etc., no individuals, organizations or businesses will be able to register and use a .CLOUD name for their website if the Amazon proposal ultimately wins control of the .CLOUD registry.

image_thumb11Amazon claims this is to help prevent abuse saying in its proposal “Amazon EU S.à r.l. and its registry service provider, Neustar, recognize that preventing and mitigating abuse and malicious conduct in the .CLOUD registry is an important and significant responsibility. Amazon EU S.à r.l. will leverage Neustar’s extensive experience in establishing and implementing registration policies to prevent and mitigate abusive and malicious domain activity within the proposed .CLOUD space. .CLOUD will be a single entity registry, with all domains registered to Amazon for use in pursuit of Amazon’s business goals. There will be no re-sellers in .CLOUD and there will be no market in .CLOUD domains. Amazon will strictly control the use of .CLOUD domains.”

imageAmazon describes its intended use of the top level .CLOUD “to provide a unique and dedicated platform for Amazon while simultaneously protecting the integrity of its brand and reputation.”

Amazon further outlines its .CLOUD strategy saying;

A .CLOUD registry will:

  • Provide Amazon with additional controls over its technical architecture, offering a stable and secure foundation for online communication and interaction.
  • Provide Amazon a further platform for innovation.
  • Enable Amazon to protect its intellectual property rights

When asked what the goal of the proposed gTLD is in terms of areas of specialty, service levels or reputation, the company answered by saying “Amazon responses noted that it intends for its new .CLOUD gTLD to provide a unique and dedicated platform for stable and secure online communication and interaction. The .CLOUD registry will be run in line with current industry standards of good registry practice.”

Also interesting: when asked to describe whether and in what ways Amazon will provide outreach and communications to help achieve its projected benefits, it said “There is no foreseeable reason for Amazon to undertake public outreach or mass communication about its new gTLD registry because domains will be provisioned in line with Amazon’s business goals.”

Amazon isn’t alone in wanting the .CLOUD top-level domain for itself, but currently Amazon is said to be a front runner in attempting to control the .CLOUD gTLD. …

Read more about other .CLOUD applicants and their plans.

Yet another attempt to monopolize (i.e., abuse) the term after Dell Computer’s aborted attempt to trademark “cloud computing” in 2008. Would Amazon be a benevolent dictator of the .CLOUD TLD? Not likely.

Following is a Twitter conversation on the subject between @rogerjenn, @samj and @rvmNL:

image

Remco van Mook is Director of Interconnection, EMEA at Equinix and his Twitter profile claims he’s an “Internet numbers bigwig.”


• Jeff Barr (@jeffbarr) posted AWS Marketplace - Additional EC2 Operating System Support (FreeBSD, Debian, CentOS) on 11/23/2012:

imageWe're working hard to make the AWS Marketplace even more flexible and to make sure that it contains the operating systems, tools, and other resources that you need. Today we are adding support for three new open source operating systems: FreeBSD, CentOS, and Debian. We are also making it easier for you to find software that runs on the operating system of your choice.

Expanded Operating System Support
imageYou can now launch three new operating systems from within the AWS Marketplace:

FreeBSD® is an advanced operating system for modern server, desktop and embedded computer platforms. FreeBSD provides advanced networking, impressive security features, and world class performance, and is used by some of the world's busiest websites.

Debian is a popular and influential Linux distribution. The current stable release includes support for over 29,000 packages.

CentOS is a free, Enterprise-class Linux distribution derived from publicly available sources. CentOS conforms fully with the upstream vendor's redistribution policy and aims to be 100% binary compatible with their offering.

These operating systems come directly from the Open Source community and are available at no charge other than the usual cost for the EC2 instances. You can find them in the Operating Systems section of the Marketplace.

Improved Searching
You can now search for software that's running on the operating system of your choice:

I hope that you enjoy these new additions to the Marketplace.


Barb Darrow (@gigabarb) asserted Amazon’s dead serious about the enterprise cloud in an 11/21/2012 post to GigaOm’s Cloud blog:

imageAs wildly successful as Amazon Web Services have been, there’s still a lot of noise about how big enterprises don’t want to put their precious workloads on this public cloud infrastructure. The Amazon cloud is not safe or reliable enough for these important workloads, some say.

image_thumb11Here’s a news flash: Big companies may or may not be wary of Amazon’s cloud, but they’re already using it. And this despite multiple snafus at Amazon’s US-East data center complex in the past year. It’s a pretty safe bet that virtually every Fortune 1000 company is running workloads beyond test and dev in Amazon’s cloud and that means trouble for incumbent IT providers like IBM, HP, Dell and others which are scrambling to respond.

Case in point: Cloudyn CEO Sharon Wagner, whose company helps businesses make best use of AWS, told me that 30 percent of its AWS customers are large enterprises. And while their applications vary, they do include business-critical workloads, and not just development and testing, he said.

Ken Ziegler, CEO of Logicworks, a New York City-based cloud computing and managed hosting provider, agreed that big accounts aren’t just fooling around with AWS.

“Many of the most cited barriers to cloud adoption have been addressed at this point and it’s getting more difficult for territorial IT decision-makers to defend managing infrastructure in-house. You’d be surprised just how many companies have already made the move. It’s not just Netflix.”

imageAmazon is pressing its first mover advantage to reinforce the notion that it is “the” brand in cloud. “As Kleenex is to tissue, Amazon is to cloud,” Ziegler said. To capitalize on that sentiment, Logicworks this week launched a new managed service that will enable it to manage business customers’ AWS deployments.

Over the past year, AWS unveiled an array of more enterprise-like support and service options. Expect Amazon execs — including CEO Jeff Bezos (pictured above), AWS Senior Vice President Andy Jassy, and CTO Werner Vogels to talk more about this market at the AWS Re:invent show next week in Las Vegas. The show also flaunts a pretty robust enterprise IT conference track. The timing is good: An array of competitive public cloud offerings are now coming online from Rackspace, Hewlett-Packard, and others.

AWS girds for more competition

Amazon is nothing if not proactive. Just as it rolls out services before announcing them, now it’s prepping for more intense competition for enterprise workloads. Rivals say they are better suited for enterprise needs than Amazon. Rackspace says its customer support sets it apart; HP says its enterprise service level agreements (SLAs) will win enterprise customers over.

Sources say that Amazon now offers special deals including discounts to enterprise companies doing as little as $250,000 a year in AWS business. Six months ago, it only offered such deals to companies doing at least $1 million of business annually. Why the change now? One thing that IBM and HP have that Amazon does not is long-term ties to big customers. Amazon did not respond to requests for comment on its discounting practices.

Said one AWS partner: “AWS feels that IBM entering with SmartCloud and HP with its public cloud may take away enterprise customers because [those older vendors] have much better relationships with them.” Developments like Telefonica’s joint public cloud offering with Joyent are also a problem for Amazon, given that telcos also have tight enterprise relationships and telcos “own the network edge,” he said.

A stealth attack on enterprise IT

Some AWS partners said the company prefers to work under the radar in general and that stealth mode hid what they say is an escalated enterprise sales push. AWS has hired sales engineers and others from enterprise-focused companies like HP, SunGard and EMC.

“One of the senior AWS guys told us ‘we like that our competitors don’t think we’re active in the enterprise. When they find out it’ll be too late,’” he said.

One thing’s for sure, Amazon has a huge head start in public cloud services. The total net sales attributed to the company’s “other” category – which largely consists of AWS – were $608 million in Amazon’s third quarter ending September 30. For the nine months preceding that, “other” sales totaled $1.582 billion. So to say AWS is now a $2-billion-a-year business is not a stretch.

Amazon’s problem is that it’s had that field much to itself so far. That won’t be true going forward.

Windows Azure has a ways to go to catch up to AWS.

Full disclosure: I’m a registered GigaOm analyst.


<Return to section navigation list>

0 comments: