Wednesday, April 25, 2012

Windows Azure and Cloud Computing Posts for 4/20/2012+

A compendium of Windows Azure, Service Bus, EAI & EDI, Access Control, Connect, SQL Azure Database, and other cloud-computing articles.


Updated 4/27/2012 at 5:00 PM PDT with new articles marked by Himanshu Singh, Karsten Januszewski, Andrew Edwards

• Updated 4/25/2012 at 5:00 PM PDT with new articles marked by Michael Washam, Greg Oliver, Jeff Barr, Alan Smith, Brian Loesgen, Bruno Terkaly, Jim O’Neil, Jan Van der Haegen, Derrick Harris, Michael Simons, Tim Anderson, Adam Hoffman, Carl Nolan, Windows Azure Operations Team and Me.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:


Azure Blob, Drive, Table, Queue and Hadoop Services

•• Andrew Edwards reported CloudDrive: Possible Data Loss when calling Create() or CreateIfNotExist() on existing drives in a 4/26/2012 post to the Windows Azure Storage Team blog:

Windows Azure Drive is in Preview, and we recently identified a timing bug in the CloudDrive Client Library (SDK 1.6 and earlier) which can cause your CloudDrive to be accidentally deleted when you call ‘Create()’ or ‘CreateIfNotExist()’ on an existing drive. For your existing drive to be accidentally deleted, there must be a period of unavailability of your Windows Azure Storage account during the call to ‘Create()’ or ‘CreateIfNotExist()’.

Your service is more likely to hit this bug if you frequently call ‘Create()’, which is sometimes done if you use the following pattern where you call ‘Create()’ before you call ‘Mount()’ to ensure that the drive exists before you try to mount it:

try
{
    drive.Create(...);
}
catch(CloudDriveException)
{
    ...
}

drive.Mount(...);

Another common pattern can occur when using the new ‘CreateIfNotExist()’ API followed by a ‘Mount()’ call:

drive.CreateIfNotExist(...);
drive.Mount(...);

We will fix this timing bug in SDK 1.7.

To avoid this timing bug now, you should add a test for the existence of the blob before attempting to create it using the following code:

CloudPageBlob pageBlob =
    new CloudPageBlob(drive.Uri.AbsoluteUri, drive.Credentials);

try
{
    pageBlob.FetchAttributes();
}
catch (StorageClientException ex)
{
    if (ex.ErrorCode.Equals(StorageErrorCode.ResourceNotFound))
    {
        // Blob not found, try to create it
        drive.Create(...);
    }
}
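If you also need to mount the drive, a minimal sketch of the full check-then-mount sequence might look like the following. It reuses the Storage Team's existence check above; the cache resource name, drive size, and cache size are illustrative assumptions of mine, not part of the official guidance:

// Sketch: 'drive' is the CloudDrive instance from the surrounding code.
// Assumes a local storage resource named "DriveCache" is declared in the service definition.
LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

CloudPageBlob pageBlob =
    new CloudPageBlob(drive.Uri.AbsoluteUri, drive.Credentials);

try
{
    pageBlob.FetchAttributes();          // succeeds only if the drive's page blob already exists
}
catch (StorageClientException ex)
{
    if (ex.ErrorCode.Equals(StorageErrorCode.ResourceNotFound))
    {
        drive.Create(1024);              // create a 1 GB drive only when the blob is absent
    }
}

// Create() is never called on an existing drive, so the timing bug is avoided.
string drivePath = drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);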

•• Andrew Edwards also reported CloudDrive::Mount() API takes a long time when the drive has millions of files in a 4/26/2012 post to the Windows Azure Storage Team blog:

Windows Azure Drive is in Preview, and we have identified an issue with the CloudDrive::Mount() API where it will take 5 to 20 minutes to mount a drive that contains millions of files. In these cases, the majority of time used by CloudDrive::Mount is spent updating the ACLs (access control lists) on all the files on the drive. The Mount() API attempts to change these ACLs on the root of the drive so that lower privileged roles (web and worker roles) will be able to access the contents of the drive after it is mounted. However, the default setting for ACLs on NTFS is to inherit the ACLs from the parent, so these ACL changes are then propagated to all files on the drive.

The workaround for this issue is to mount the drive once the slow way, and then permanently break the ACL inheritance chain on the drive. At that point, the CloudDrive::Mount() API should always take less than one minute to mount the drive.

To break the ACL inheritance chain, perform the following steps:

  1. Mount the drive
  2. Open a command shell
  3. Run the following commands (assuming that z: is where the drive is mounted):
    z:
    cd \
    icacls.exe * /inheritance:d
  4. icacls.exe will print out a list of files and directories it is processing, followed by some statistics:
    processed file: dir1
    processed file: dir2
    processed file: dir3
    processed file: dir4
    processed file: dir5
    Successfully processed 5 files; Failed processing 0 files
  5. Finally you should unmount the drive.

Once you have done the above, subsequent calls to CloudDrive::Mount will be faster.
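If you would rather script this one-time step than run it by hand, a small helper like the following sketch could shell out to icacls.exe after the first mount. The class, method, and error handling are illustrative assumptions of mine, not part of the Storage Team's guidance:

using System;
using System.Diagnostics;

public static class DriveAclHelper
{
    // Sketch: break ACL inheritance at the root of a freshly mounted drive by invoking icacls.exe.
    // 'drivePath' is assumed to be the path returned by CloudDrive.Mount() (for example, "z:\\").
    public static void BreakAclInheritance(string drivePath)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = "icacls.exe",
            Arguments = "* /inheritance:d",
            WorkingDirectory = drivePath,   // run against the root of the mounted drive
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (Process icacls = Process.Start(startInfo))
        {
            icacls.WaitForExit();
            if (icacls.ExitCode != 0)
            {
                throw new InvalidOperationException(
                    "icacls.exe exited with code " + icacls.ExitCode);
            }
        }
    }
}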


• Carl Nolan (@carl_nolan) posted .Net Hadoop MapReduce Job Framework - Revisited on 4/25/2012:

If you have been using the Framework for Composing and Submitting .Net Hadoop MapReduce Jobs you may want to download an updated version of the code:

http://code.msdn.microsoft.com/Framework-for-Composing-af656ef7

The biggest change in the latest code is the modification of the serialization mechanism. Formerly, data was written out of the mapper and combiner as a string. This has now been changed to use a binary formatter. This means that the input into the mappers and reducers is no longer a string but rather an object, which can then be cast directly to the expected type. Here are the new Combiner and Reducer base classes:

[F# classes elided for brevity.]

C# Combiner

[AbstractClass]
public abstract class CombinerBase : MapReduceBase
{
    protected CombinerBase();
    public abstract override IEnumerable<Tuple<string, object>> Combine(string key, IEnumerable<object> values);
}

C# Reducer

[AbstractClass]
public abstract class ReducerBase : MapReduceBase
{
    protected ReducerBase();
    public abstract override IEnumerable<Tuple<string, object>> Reduce(string key, IEnumerable<object> values);
}

Here is a sample of implemented Map and Reduce types:

C#

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using MSDN.Hadoop.MapReduceBase;

namespace MSDN.Hadoop.MapReduceCSharp
{
    public class MobilePhoneMinMapper : MapperBaseText
    {
        private Tuple<string, object> GetLineValue(string value)
        {
            try
            {
                string[] splits = value.Split('\t');
                string devicePlatform = splits[3];
                TimeSpan queryTime = TimeSpan.Parse(splits[1]);
                return new Tuple<string, object>(devicePlatform, queryTime);
            }
            catch (Exception)
            {
                return null;
            }
        }

        public override IEnumerable<Tuple<string, object>> Map(string value)
        {
            var returnVal = GetLineValue(value);
            if (returnVal != null) yield return returnVal;
        }
    }

    public class MobilePhoneMinCombiner : CombinerBase
    {
        public override IEnumerable<Tuple<string, object>> Combine(string key, IEnumerable<object> value)
        {
            yield return new Tuple<string, object>(key, value.Select(timespan => (TimeSpan)timespan).Min());
        }
    }

    public class MobilePhoneMinReducer : ReducerBase
    {
        public override IEnumerable<Tuple<string, object>> Reduce(string key, IEnumerable<object> value)
        {
            yield return new Tuple<string, object>(key, value.Select(timespan => (TimeSpan)timespan).Min());
        }
    }
}

The changes are subtle, but they simplify the processing in the combiner and reducer, removing the need for any string processing. To keep the data coming out of the reducer visible, a string format is still used there.

The other change is around support for multiple key output from the mapper. Let's start with a sample showing how this is achieved:

// Extracts the QueryTime for each Platform Device
type StoreXmlElementMapper() =
    inherit MapperBaseXml()

    override self.Map (element:XElement) =
        let aw = "http://schemas.microsoft.com/sqlserver/2004/07/adventure-works/StoreSurvey"
        let demographics = element.Element(XName.Get("Demographics")).Element(XName.Get("StoreSurvey", aw))
        seq {
            if not(demographics = null) then
                let business = demographics.Element(XName.Get("BusinessType", aw)).Value
                let bank = demographics.Element(XName.Get("BankName", aw)).Value
                let key = Utilities.FormatKeys(business, bank)
                let sales = Decimal.Parse(demographics.Element(XName.Get("AnnualSales", aw)).Value) |> box
                yield (key, sales)
        }

// Calculates the Total Revenue of the store demographics
type StoreXmlElementReducer() =
    inherit ReducerBase()

    override self.Reduce (key:string) (values:seq<obj>) =
        let totalRevenue =
            values |>
            Seq.fold (fun revenue value -> revenue + (value :?> decimal)) 0M
        Seq.singleton (key, box totalRevenue)

Using multiple keys from the Mapper is a two-step process. First, the Mapper needs to be modified to output a string-based key in the correct format. This is done by passing the set of string key values into the Utilities.FormatKeys() function, which concatenates the keys using the necessary tab character. Second, the job has to be submitted specifying the expected number of keys:

MSDN.Hadoop.Submission.Console.exe -input "stores/demographics" -output "stores/banking"
-mapper "MSDN.Hadoop.MapReduceFSharp.StoreXmlElementMapper, MSDN.Hadoop.MapReduceFSharp"
-reducer "MSDN.Hadoop.MapReduceFSharp.StoreXmlElementReducer, MSDN.Hadoop.MapReduceFSharp"
-file "%HOMEPATH%\Projects\MSDN.Hadoop.MapReduce\Release\MSDN.Hadoop.MapReduceFSharp.dll"
-nodename Store -format Xml -numberKeys 2

One final note: in the Document Classes folder there are two versions of the Streaming jar, one for running in Azure and one for running locally. The difference is that they have been compiled with different versions of Java. Just remember to use the appropriate version (dropping the -local and -azure prefixes) when copying to your Hadoop lib folder.

Hopefully you will find these changes useful.


• Adam Hoffman (@stratospher_es) posted Windows Azure Storage for the ASP.NET Developer to the US DPE Azure Connection blog on 4/25/2012:

This post is part of a series called “Windows Azure for the ASP.NET Developer” written by Rachel Appel, Adam Hoffman (that's me), and Peter Laudati. You can see the complete list of posts in the series at the US DPE Azure Connection site.

As an ASP.NET developer, you’ve undoubtedly built applications that have requirements for persistent data storage. Anything but the most trivial application likely has this need. Depending on the scale of your needs, you might have turned to:

  • storing your data on the local file system,
  • storing your data on a shared directory,
  • storing your data in a relational database like SQL Server (this could be either the full SQL Server editions, or the more modest SQL Server Compact edition)
Different Names for the Same Thing

In the cloud, you have comparable options, but using different methods. Let’s go through them.

Local File System

Local disk is available to each Windows Azure role, but it is not persistent. If your role process goes down, it may be restarted on another node, so the local disk is not for persistent data. If you want to translate your use of the local file system to Azure, you can use Windows Azure Drives (also known as XDrives), which is really the use of a “blob” in the cloud, mounted as a virtual hard drive. See http://aka.ms/XDrives for more details. Note that this solution is simple, but doesn’t scale to multiple writer instances, as Windows Azure Drives are currently limited to allow only one instance at a time to mount a Windows Azure Drive for writing (although multiple instances can read simultaneously).

Shared Directory

If you’re currently storing your data in a shared directory, Windows Azure Drives might provide a quick alternative as well, provided your needs are modest and you can live with a single instance (or at least a single writer instance). Many applications can make minor modifications and take quick advantage of Windows Azure Drives. For a thorough review of creating, mounting and otherwise working with XDrives, see the Working with Drives lab in the Windows Azure Training Kit at http://aka.ms/AzureDrives. Also, it turns out that there is a workaround if you need to allow multiple writers to a single drive - see http://aka.ms/XDriveMultiWrite for a creative solution to the problem. It takes a little glue code, but might well work for your situation.

Relational Databases

If you have been using SQL Server for your data persistence needs, there’s a very straightforward transition to the cloud using SQL Azure or SQL Azure Federations (if you have lots and lots of data). For the most part, this is really just SQL Server in the cloud, and moving to it is very straightforward. For complete coverage of using SQL Azure, see Rachel Appel’s post in the series. Additionally, if you’re looking for tools to help you migrate your SQL Server data to SQL Azure, you should look at George Huey’s SQL Azure Migration Wizard, or SQL Azure Federation Migration Wizard. Since the SQL Azure topic is so well covered in Rachel’s post, we’ll not discuss it further here.

Now, you might think that we’ve pretty much covered all the methods that you need to know about for persistent data storage in the cloud, but it turns out that Azure has a couple of other tricks up its sleeve. Let’s take a look now at the other Windows Azure Storage components – Tables, Blobs and Queues.

(Not such) Tiny Vessels

The Windows Azure Storage services are broken into four pieces, each one specialized for a different purpose. Each of them allows for storage of vast amounts of data, but their capabilities vary by service. They are:

  1. Tables
  2. Blobs
  3. Queues, and,
  4. Drives

We’ve already covered Drives above, so now let’s take a look at our other choices.

Tables

Windows Azure Storage Tables are extremely scalable, and can hold billions of rows (or objects) and terabytes of data. Azure will take care of keeping these tuned, and can potentially spread them over thousands of machines in the data center. Additionally, your data is replicated over two data centers, which provides for disaster recovery in the very unlikely event that an entire data center is disabled. The details of this are at http://aka.ms/StorageGeoReplication.

Tables are both similar to and different from SQL Server tables, and can take some getting used to, but for non-relational data needs, they’re hard to beat. In order to understand Tables, you’ll need to understand partition keys (which support scalability of tables) and row keys (which are the unique key per partition, very similar to primary keys in SQL Server). For an excellent overview of Tables, see Julie Lerman’s MSDN article at http://aka.ms/AzureTables. To really understand the nitty-gritty around choosing partition and row keys, take a look at Jai Haridas’ session from PDC09 at http://www.microsoftpdc.com/2009/svc09.

As an ASP.NET developer, it might take some time to get used to not relying (only) on SQL Server (or SQL Azure) for all of your tabular persistent data needs, but in cases where relational consistency isn’t the high-order bit, and scalability and performance are, take a serious look at moving part (or all) of your data to Azure Table storage.
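As a rough illustration, here is a minimal sketch of defining an entity and inserting it with the Windows Azure StorageClient library that ships with the 1.x SDKs. The table name, entity properties, and use of development storage are illustrative assumptions of mine, not from this post:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// PartitionKey groups related rows for scale-out; RowKey is unique within a partition.
public class OrderEntity : TableServiceEntity
{
    public OrderEntity() { }                              // required for serialization

    public OrderEntity(string customerId, string orderId)
        : base(customerId, orderId) { }

    public double Total { get; set; }
}

public static class TableSample
{
    public static void InsertOrder()
    {
        // Development storage shown for brevity; use CloudStorageAccount.Parse(...) in the cloud.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("Orders");

        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("Orders", new OrderEntity("customer-42", "order-0001") { Total = 19.95 });
        context.SaveChangesWithRetries();
    }
}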

Blobs

Windows Azure Storage Blobs provide the ability to store and serve large amounts of unstructured data anywhere in the world. They support security, so you can expose data publicly or with fine grained control, over HTTP or HTTPS. Additionally, they can handle huge amounts of data. A single blob can be up to a terabyte in size, and a single storage account in Azure can handle up to 100 terabytes of data! They are handy for all sorts of uses, including the serving of relatively static content from a website (images, stylesheets, etc.), media objects, and much more.

As an ASP.NET developer, you’ve likely grown used to hosting your website assets on your web servers, or maybe on a file server shared amongst your web servers. This is a pretty typical pattern, but Blobs provide you with the opportunity to move these resources. Pick up your images, stylesheets and scripts, and move them to Blob storage. Why should you move them? There are at least three good reasons.

  1. Moving them to Blob storage instead of your web roles reduces load on your valuable web servers, freeing them up to serve more traffic more quickly.
  2. Because you can’t easily update folders on your Web Roles without a redeployment, moving them to Blob storage allows you to make changes to your assets without redeploying.
  3. Moving them to Blob storage allows you to easily enable the Azure Content Delivery Network (CDN) if you want to really push the performance needle. Learn more about the CDN at http://aka.ms/CDN.

For a thorough walkthrough of all aspects of working with blobs, see the guide at http://aka.ms/Blobs. For a thorough overview of Blobs, Brad Calder’s PDC talk is hard to beat. See it at http://www.microsoftpdc.com/2009/SVC14.
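As a quick taste, here is a minimal sketch of pushing a stylesheet into a publicly readable container with the 1.x StorageClient library. The container name, blob name, local file path, and use of development storage are illustrative assumptions:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class BlobSample
{
    public static void PublishStylesheet()
    {
        // Development storage shown for brevity; use CloudStorageAccount.Parse(...) in the cloud.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        CloudBlobContainer container = blobClient.GetContainerReference("assets");
        container.CreateIfNotExist();

        // Make the container's blobs publicly readable so browsers can fetch them directly.
        container.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });

        CloudBlob blob = container.GetBlobReference("site.css");
        blob.Properties.ContentType = "text/css";
        blob.UploadFile(@"Content\site.css");
    }
}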

Queues

Windows Azure Storage Queues are a straightforward, reliable queue message storage system. They provide an infrastructure to store up work for an army of worker roles that you can create. Think of these as the orders coming into a very busy kitchen, staffed by as many cooks as you want to spin up worker roles. Additionally, the Queue has the smarts to be sure that the cook who takes one of the orders actually does his job and delivers back the order – if he doesn’t (in a time that you specify), then the order will go back into the queue so that one of your more reliable employees can get it done.

For a great overview of some of the new features of queues (including larger message sizes and the ability to schedule messages into the future) see http://aka.ms/NewQueueFeatures.

There’s additional great information about Windows Azure Storage Queues contained in Jai Haridas’ PDC09 session as well at http://www.microsoftpdc.com/2009/svc09. Finally, for a quick lab using Queues and Blob storage, you can see my post at http://www.stratospher.es/blog/post/getting-started-with-loosely-coupled-applications-using-azure-storage-and-queues.

As an ASP.NET developer, Queues offer you a chance to handle complex information processing needs in a very elegant way. Perhaps you’ve found yourself needing to do some sort of long running process from your web application, like uploading and processing images, or the like. In the past, you either would synchronously handle the processing in the UI thread on the server, keeping the user waiting for completion, or roll your own storage, queuing and processing. With Azure Queues, there are much more elegant, reliable and performant methods available to you. For an example of taking your processing off the UI thread, and improving user satisfaction, see my previous post at http://www.stratospher.es/blog/post/getting-started-with-loosely-coupled-applications-using-azure-storage-and-queues.
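To make that concrete, here is a minimal sketch of enqueuing work from a web role and draining it from a worker role with the 1.x StorageClient library. The queue name, message payload, and visibility timeout are illustrative assumptions of mine:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class QueueSample
{
    // Development storage shown for brevity; use CloudStorageAccount.Parse(...) in the cloud.
    static readonly CloudQueueClient QueueClient =
        CloudStorageAccount.DevelopmentStorageAccount.CreateCloudQueueClient();

    // Web role side: drop an "order" into the queue.
    public static void Enqueue(string imageBlobName)
    {
        CloudQueue queue = QueueClient.GetQueueReference("image-orders");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(imageBlobName));
    }

    // Worker role side: take an order, do the work, then delete it.
    // If the worker dies before DeleteMessage, the message reappears after the visibility timeout.
    public static void ProcessOne()
    {
        CloudQueue queue = QueueClient.GetQueueReference("image-orders");
        queue.CreateIfNotExist();

        CloudQueueMessage message = queue.GetMessage(TimeSpan.FromMinutes(5));
        if (message == null) return;                 // nothing to do right now

        // ... process message.AsString here ...

        queue.DeleteMessage(message);
    }
}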

To understand why Windows Azure offers these new methods of storing persistent data, it’s important to remember that Azure is a Platform as a Service, as opposed to Infrastructure as a Service. As a result of this, it can offer additional platform services that are useful for application developers in common application scenarios. The Windows Azure team has codified several of these services in the Windows Azure Storage services, which consist of Tables, Blobs and Queues.

For What Reason

So you might be thinking, “hey, it’s great that these choices exist, but why would I choose them?” After all, you already have SQL Azure available, which is very straightforward to work with – why take the time to write different code and use Tables, Blobs and Queues instead?

One possible reason is cost. Despite the fact that SQL Azure is extremely reasonably priced, if you have lots and lots of data, Table storage is available for a fraction of the cost. In fact, at the time of this writing, a gigabyte of data in a SQL Azure database costs (a very reasonable) $9.99 per month. A gigabyte of data in table storage, however, would only cost $0.125 (yes, that’s 12.5 cents) per month. Prices vary as you go up a bit (SQL Azure is discounted for each gigabyte over the first – a 10 GB database is only $45.96 per month), but in general, Table storage is a tiny fraction of the cost of SQL Azure storage.

OK, so it’s cheaper, but how do you decide between the options from a capabilities standpoint? Table storage is cheaper, but if it doesn’t have the capabilities we need, we’ll still need SQL Azure, right?

Generally speaking, the choice comes down to the nature of the data. If you have relational data, and need relational data access, you’ll enjoy the full relational capabilities of SQL Azure. Windows Azure Table Storage, on the other hand, doesn’t easily lend itself to relational querying, and is less useful in this situation. However, if the nature of your data is more object based (or maybe file based), then Windows Azure Storage (either Tables or Blobs) will suit your needs, and save you money at the same time. Many applications will have a blend of these needs, and it’s not at all unreasonable to think that you’ll end up using a combination of these technologies to best cover your requirements. One of the great advantages of the rich suite of Azure technologies is that you have the luxury of choosing from multiple solutions to best suit your needs.

For a rich comparison of SQL Azure and Windows Azure Table storage, see Joseph Fultz’s MSDN Magazine article at http://aka.ms/SQLAzureVsAzureTables.

Does this all sound a little familiar? Maybe you’ve been reading up on so-called NoSQL technologies like CouchDB and MongoDB and are wondering, is this the same thing? In many ways, the answer is yes. In fact Windows Azure Tables are very much a type of NoSQL data store. For a very rich discussion of NoSQL, and Azure Table’s place within it, see the whitepaper at http://aka.ms/AzureNoSQL.

And We’d Brave Those Mountain Passes

OK, we’ve talked about a bunch of useful technologies, and you’re (probably) intrigued right about now. Maybe you’re planning on using Tables or Blobs, but have existing data that you’ll need to move in order to do so. Are there any tools that can help with our move to Windows Azure Storage?

As it turns out, there is. A company called Cerebrata makes a tool called Cloud Storage Studio which, in addition to allowing you to interrogate and work with your Tables, Blobs and Queues, has a feature for uploading data from SQL Server databases. You can read about that feature at http://aka.ms/CerebrataDataMigrate or check out the whole product at http://aka.ms/CerebrataCloudStorageStudio. Other great tools for working with your storage exist as well. You could try:

Scientist Studies

If you really want to get the deepest details on the mechanics of Windows Azure Storage, there’s a remarkable paper (and video) at http://aka.ms/StorageMadScience. Mad science, it is, and all so that you can rely on this mechanism for your storage needs.

A Movie Script Ending

So, some of you might have noticed something while you were reading this article. Did you catch it? Feel free to tweet me your guess at @stratospher_es . The first one to figure it out gets a (completely nominal) prize from me, and a hearty “congratulations”.

Ready to start developing for Azure? Use the pretty colored boxes at the top of this post to either activate your free Azure benefits if you're an MSDN subscriber, or sign up for a 90-day trial. Once you've done that, get the tools and get going!


M. Sheik Uduman Ali (udooz) described Azure Storage Services Asynchronously in Java in a 4/20/2012 post:

When performing I/O-bound operations, a program should use an asynchronous approach. This is particularly important when you access the Azure storage services. As of now, the Azure managed libraries for .NET and Java do not support asynchronous APIs. Instead, you can combine the underlying runtime's asynchronous programming facilities with the Azure storage services REST API to perform I/O-bound operations against Azure storage asynchronously.

In this post, I explain how to access Azure blob storage services asynchronously in Java. I have used the following libraries:

  • org.apache.commons.codec-1.6.jar (for base64 string encoding)
  • async-http-client-1.7.3.jar (Ning’s Async Http Client library – https://github.com/sonatype/async-http-client)
  • log4j-1.2.16.jar and slf4j-*-1.6.4.jar (Logging)
AzureStorage Class

The “AzureStorage” class contains the implementation that creates the HTTP request object for accessing the Azure RESTful resources.

public class AzureStorage...
// fields
String storageMedium;
String accountName;
byte[] secretKey;
String host;
java.util.regex.Pattern urlAbsolutePathPattern;
//ctor
AzureStorage(String accountName, String storageMedium, String base64SecretKey)
// public method
Request get(String resourcePath)
// utility method
String createAuthorizationHeader(Request request)

The constructor requires the storage account name, the storage medium (this is neither Java nor Azure terminology; it just identifies whether you want to access blob, table or queue storage) and the primary shared key of the account. In this post, I just provide a simple get() method for the GET-related Azure storage APIs. The input to the method is the resource path. Most of the REST APIs require authorization, which in turn signs the particular request with the shared key. The createAuthorizationHeader() method does this job.

The Ctor

public AzureStorage(String accountName, String storageMedium, String base64SecretKey) {
    this.accountName = accountName;
    this.storageMedium = storageMedium;
    this.host = "core.windows.net";
    secretKey = Base64.decodeBase64(base64SecretKey);
    urlAbsolutePathPattern = Pattern.compile("http(?:s?)://[-a-z0-9.]+/([-a-z0-9]*)/\\?.*");
}

The host field contains the common part of the base Azure storage URL. The primary shared key for the account is converted to a base64-decoded byte array. Since there is no AbsolutePath facility on a URL in the Java world, I have used a regular expression here. For example, the absolute path of the URL “https://myaccount.blob.core.windows.net/acontainer/?restype=container&comp=list” is “acontainer”.

The get() method

An HTTP request to access Azure storage should be made with the following details:

GET https://myaccount.blob.core.windows.net/acontainer?restype=container&comp=acl&timeout=90 HTTP/1.1
x-ms-version: 2009-09-19
x-ms-date: Fri, 20 Apr 2012 11:12:05 GMT
Authorization: SharedKey myaccount:9S/gs8jkAQKAN1Gp/y82B8jHR2r7HShZSiPdl2JSWQw=

The above request specifies the URL for the resource, the REST API version, the request time stamp, and the authorization header. The get() method here frames these request headers. For the complete details of the Azure REST API requests and responses, visit http://msdn.microsoft.com/en-us/library/windowsazure/dd179355.aspx.

public Request get(String resourcePath) {
    String RFC1123_PATTERN = "EEE, dd MMM yyyy HH:mm:ss z";
    DateFormat rfc1123Format = new SimpleDateFormat(RFC1123_PATTERN);
    rfc1123Format.setTimeZone(TimeZone.getTimeZone("GMT"));
    // remaining code
}

The rfc1123Format is used to send the request time stamp in RFC 1123 format, as shown in the HTTP request above. The code snippet below creates the com.ning.http.client.Request object.

String url = "https://" + this.accountName + "." +
    this.storageMedium + "." + this.host + "/" + resourcePath;
RequestBuilder builder = new RequestBuilder("GET");
Request request = builder.setUrl(url)
    .addHeader("content-type", "text/plain")
    .addHeader("content-length", "0")
    .addHeader(HeaderDate, rfc1123Format.format(new Date()))
    .addHeader(HeaderPrefixMS + "version", "2009-09-19")
    .build();

The code below creates the signed Authorization header.

String authHeader = "";
try {
    authHeader = createAuthorizationHeader(request);
} catch (InvalidKeyException e) {
    e.printStackTrace();
} catch (UnsupportedEncodingException e) {
    e.printStackTrace();
} catch (NoSuchAlgorithmException e) {
    e.printStackTrace();
}
request.getHeaders().add("Authorization", "SharedKey " + this.accountName
    + ":" + authHeader);
return request;

This part in turn calls the createAuthorizationHeader() method.

The createAuthorizationHeader() method
private String createAuthorizationHeader(Request request)
        throws UnsupportedEncodingException,
               NoSuchAlgorithmException, InvalidKeyException {
    FluentCaseInsensitiveStringsMap headers = request.getHeaders();
    StringBuffer stringToSign = new StringBuffer();
    stringToSign.append(request.getMethod() + "\n");
    stringToSign.append("\n\n0\n\n");
    stringToSign.append(headers.get("content-type").get(0) + "\n");
    stringToSign.append("\n\n\n\n\n\n");
    // remaining code part
}

The authorization header should look like this:

Authorization="[SharedKey|SharedKeyLite] <AccountName>:<Signature>"

The createAuthorizationHeader() method mainly creates the “<Signature>” string. The “Signature” is an HMAC-SHA256 hash computed over the following content:

GET\n /*HTTP Verb*/
\n /*Content-Encoding*/
\n /*Content-Language*/
\n /*Content-Length*/
\n /*Content-MD5*/
\n /*Content-Type*/
\n /*Date*/
\n /*If-Modified-Since */
\n /*If-Match*/
\n /*If-None-Match*/
\n /*If-Unmodified-Since*/
\n /*Range*/
x-ms-date:Sun, 11 Oct 2009 21:49:13 GMT\nx-ms-version:2009-09-19\n /*CanonicalizedHeaders*/
/myaccount/myaccount/acontainer\ncomp:metadata\nrestype:container\ntimeout:20 /*CanonicalizedResource*/

For more details about this, visit: http://msdn.microsoft.com/en-us/library/windowsazure/dd179428.aspx
The Java code above adds the string starting from GET through Range. For this demonstration, I skipped most of the headers (leaving only the newlines) and added only the content-length and content-type headers. The code below constructs the CanonicalizedHeaders.

List<String> httpStorageHeaderNameArray = new ArrayList<String>();
for (String key : headers.keySet()) {
    if (key.toLowerCase().startsWith(HeaderPrefixMS)) {
        httpStorageHeaderNameArray.add(key.toLowerCase());
    }
}
Collections.sort(httpStorageHeaderNameArray);
for (String key : httpStorageHeaderNameArray) {
    stringToSign.append(key + ":" + headers.get(key).get(0) + "\n");
}

The below code constructs the CanonicalizedResource.

java.util.regex.Matcher matcher = urlAbsolutePathPattern.matcher(request.getUrl());
String absolutePath = "";
if (matcher.find()) {
    absolutePath = matcher.group(1);
} else {
    throw new IllegalArgumentException("resourcePath");
}
stringToSign.append("/" + this.accountName + "/" + absolutePath);
if (absolutePath.length() > 0) stringToSign.append("/");
stringToSign.append("\n");

List<String> paramsArray = new ArrayList<String>();
for (String key : request.getQueryParams().keySet()) {
    paramsArray.add(key.toLowerCase());
}
Collections.sort(paramsArray);
for (String key : paramsArray) {
    stringToSign.append(key + ":" + request.getQueryParams().get(key).get(0) + "\n");
}

Finally, the whole string should be signed with the shared key.

byte[] dataToMac = stringToSign.substring(0, stringToSign.length() -1).getBytes("UTF-8");
SecretKeySpec signingKey = new SecretKeySpec(secretKey, "HmacSHA256");
Mac hmacSha256 = Mac.getInstance("HmacSHA256");
hmacSha256.init(signingKey);
byte[] rawMac = hmacSha256.doFinal(dataToMac);
return Base64.encodeBase64String(rawMac);
The Calling Side

At the calling end, when you invoke the get() method, it returns the com.ning.http.client.Request instance. You can make the request asynchronously using Ning’s async library, as shown below:

AzureStorage blobStorage = new AzureStorage("account-name", "blob|table|queue", "sharedkey");
Request request = blobStorage.get("ablobcontainer/?restype=container&comp=list");
AsyncHttpClient client = new AsyncHttpClient();
ListenableFuture<Response> response = client.executeRequest(request, new AsyncHandler<Response>() {
    private final Response.ResponseBuilder builder = new Response.ResponseBuilder();

    public STATE onBodyPartReceived(final HttpResponseBodyPart content) throws Exception {
        builder.accumulate(content);
        return STATE.CONTINUE;
    }

    public STATE onStatusReceived(final HttpResponseStatus status) throws Exception {
        builder.accumulate(status);
        return STATE.CONTINUE;
    }

    public STATE onHeadersReceived(final HttpResponseHeaders headers) throws Exception {
        builder.accumulate(headers);
        return STATE.CONTINUE;
    }

    public Response onCompleted() throws Exception {
        return builder.build();
    }

    public void onThrowable(Throwable arg0) {
        // TODO Auto-generated method stub
    }
});

Visit http://sonatype.github.com/async-http-client/request.html for more details about the above code. After issuing the request, you can continue with other computations. When you reach the point where you need the response to the asynchronous request, you can do the following:

// up to this point, other interesting computation has been done
while (!response.isDone()) {
    if (response.isCancelled()) break;
}

Response actualResponse;
try {
    actualResponse = response.get();
    System.out.println(actualResponse.getStatusCode());
    System.out.println(actualResponse.getResponseBody());
} catch (InterruptedException e) {
    e.printStackTrace();
} catch (ExecutionException e) {
    e.printStackTrace();
}

The “while” part just waits for the asynchronous operation to complete. After that, it processes the com.ning.http.client.Response instance.

You can download the complete example from http://udooz.net/file-drive/doc_download/24-asyncazureaccessfromjava.html


<Return to section navigation list>

SQL Azure Database, Federations and Reporting

No significant articles today.


<Return to section navigation list>

MarketPlace DataMarket, Social Analytics, Big Data and OData

My (@rogerjenn) Microsoft Codename “Data Transfer” and “Data Hub” Previews Don’t Appear Ready for BigData post updated 4/24/2012 begins:

Or even MediumData, for that matter:

Neither the Codename “Data Transfer” utility nor the Codename “Data Hub” application CTP would load 500,000 rows of a simple Excel worksheet saved as a 17-MB *.csv file to an SQL Azure table.

The “Data Transfer” utility’s Choose a File to Transfer page states: “We support uploading of files smaller than 200 MB, right now,” but neither preview publishes a row count limit that I can find. “Data Hub” uses “Data Transfer” to upload data, so the maximum file size would apply to it, too.

Both Windows Azure SaaS offerings handled a 100-row subset with aplomb, so the issue appears to be row count, not data structure.

Update 4/24/2012 8:15 AM PDT: A member of the Microsoft Data Hub/Transfer team advised that the erroneous row count and random upload failure problems I reported for Codename “Data Transfer” were known issues and the team was working on them. I was unable to upload the ~500,000-row files with Codename “Data Hub”; see the added “Results with Codename “Data Hub” Uploads” section at the end of the post.

Update 4/23/2012 10:00 AM PDT: Two members of Microsoft Data Hub/Transfer team reported that they could upload the large test file successfully. Added “Computer/Internet Connection Details” section below. Completed tests to determine maximum file size I can upload. The My Data page showed anomalous results but only the 200k row test actually failed on 4/23. See the Subsequent Events section.

Background

The Creating the Azure Blob Source Data section of my Using Data from Windows Azure Blobs with Apache Hadoop on Windows Azure CTP post of 4/6/2012 described the data set I wanted to distribute via a publicly accessible, free Windows Azure DataMarket dataset. The only differences between it and the tab-delimited *.txt files uploaded to blobs that served as the data source for an Apache Hive table were

  • Inclusion of column names in the first row
  • Addition of a formatted date field (Hive tables don’t have a native date or datetime datatype)
  • Field delimiter character (comma instead of tab)

Following is a screen capture of the first 20 data rows of the 500,000-row On_Time_Performance_2012_1.csv table:

Click images to display full-size screen captures.

You can download sample On_Time_Performance_YYYY_MM.csv files from the OnTimePerformanceCSV folder of my Windows Live SkyDrive account. On_Time_Performance_2012_0.csv is the 100-row sample file described in the preceding section; On_Time_Performance_2012_1.csv has 486,133 data rows.

Tab-delimited sample On_Time_Performance_YYYY_MM.txt files (without the first row of column names and formatted date) for use in creating blobs to serve as the data source for Hive databases are available from my Flight Data Files for Hadoop on Azure SkyDrive folder.

Provision of the files through a private Azure DataMarket service was intended to supplement the SkyDrive downloads.

Computer/Internet Connection Details:

Intel 64-bit DQ45CB motherboard with Core 2 Quad CPU Q9950 2.83 GHz, 8 GB RAM, 750 GB RAID 1 discs, Windows 7 Premium SP1, IE 9.0.8112.16421.

AT&T commercial DSL copper connection, Cayman router, 2.60 Mbps download, 0.42 Mbps upload after router reboot, 100-Mbps wired connection from Windows 2003 Server R&RA NAT. …

Continues with

sections and ends:

Conclusion

Neither Codename “Data Hub” nor Codename “Data Transfer” appears to be ready for prime time. Hopefully, a fast refresh will solve the problem because users’ Codename “Data Hub” preview invitations are valid only for three weeks.

Subsequent Events

Members of the Microsoft Data Transfer/Data Hub team weren’t able to reproduce my problem on 4/22 and 4/23/2012. They could process the 486,133-row On_Time_Performance_2012_1.csv file without difficulty. To determine at what file size uploading problems occurred for me, I created files of 1,000, 10,000, 100,000, 150,000, and 200,000 data rows from On_Time_Performance_2012_1.csv. I’ve uploaded these files to the public OnTimePerformanceCSV folder of my Windows Live SkyDrive account.

Results with Codename “Data Transfer” Uploads

All files appeared to upload on Monday morning, 4/23/2012, but My Data showed incorrect Last Job Status data for all but the 10,000-row set. I used Codename “Data Transfer” instead of “Data Hub” to obtain Job Status data. The data below was refreshed about 15 minutes after completion of the 2012_1.csv file; I failed to save the 1,000-row set:

[Screen capture of the Last Job Status results elided.]

The 100k file created 100,000 rows, as expected, 200k added no rows to the table, and a rerun of 2012_1 created the expected 486,133 rows:

[Screen capture of the updated job status results elided.]

Microsoft’s South Central US (San Antonio) data center hosts the e3895m7bbt database. It’s possible that problems affecting Windows Compute there on 4/19 and 4/20 (see my Flurry of Outages on 4/19/2012 for my Windows Azure Tables Demo in the South Central US Data Center post) spilled over to SQL Azure on 4/22/2012, but that’s unlikely. However, the unexpected results with the 200k table and anomalous Last Job Status data indicate an underlying problem. I’ll update this post as I obtain more information on the problem from Microsoft.

Results with Codename “Data Hub” Uploads

I was able to upload all test files (100, 1,000, 10,000, 100,000, 150,000 and 200,000 rows) but unable to upload the On_Time_Performance_2012_1.csv file to one of the four free databases with Codename “Data Hub” after three tries. The service forcibly disconnects after data upload completes, so data doesn’t transfer from the blob to the database table.

So I used the data source I created with Codename “Data Transfer” as an external data source to publish the data. None of the data fields were indexed, which caused the following error (in bold red type) to be displayed in the “My Offerings” page’s Status/Review section:

All queryable columns must be indexed: Not all queryable columns in table "On_Time_Performance_2012_1" are indexed. The columns that are not indexed are: "ArrDelayMinutes", "Carrier", "DayofMonth", "DepDelayMinutes", "Dest", "FlightDate", "Month", "Origin", "Year".

Codename “Data Transfer” doesn’t offer an option to index specific columns so I added indexes on all fields except RowId, Month, Origin and Year with SQL Server Management Studio and cleared the Queryable checkboxes for the latter three fields on the My Offerings - Data Source page.

Here’s Data Explorer’s Table View of part of the first few rows:

[Screen capture of the Data Explorer Table View elided.]

Check the Codename “Data Transfer” feedback page for my improvement suggestions:

“Data Hub” doesn’t appear to have its own feedback page.


The Astoria Team reported WCF Data Services 5.1.0-rc Prerelease on 4/20/2012:

Less than two weeks ago, we released WCF Data Services 5.0.0. Today, we are releasing 5.1.0-rc as a NuGet prerelease.

What is in the prerelease

This prerelease contains several bug fixes:

Getting the prerelease

The prerelease is only available on NuGet and must be installed using the prerelease cmdlet. Prereleases are not displayed in the Manage NuGet Packages dialog. To install this prerelease NuGet package, you can use one of the following commands in the Package Manager Console:

  • Install-Package <PackageId> -Pre
  • Update-Package <PackageId> -Pre

Our NuGet package ids are:

Call to action

If you have experienced one of the bugs mentioned above, we encourage you to try out the prerelease bits in a preproduction environment. As always, we’d love to hear any feedback you have!


<Return to section navigation list>

Windows Azure Service Bus, Access Control, Identity and Workflow

• Himanshu Singh (@himanshuks) reported on 4/26/2012 Automatic Migration of Access Control Service Version 1.0 by June 2012:

Last year, we announced the sunset of Windows Azure Access Control Service (ACS) version 1.0 and the availability of the ACS 1.0 Migration Tool that helps customers to easily migrate their ACS 1.0 namespaces to ACS 2.0.

We’re excited to share that we are able to automatically migrate many customers to ACS 2.0 in June 2012. Details regarding this have already been sent to customers with ACS 1.0 namespaces via email last month.

If you have one or more ACS 1.0 namespaces, did not receive an email, and would like more information about automatic migration to ACS 2.0, please send an email to acsmigration@microsoft.com with your Windows Azure subscription ID and ACS 1.0 namespaces.

Click here to see a list of important differences between ACS 1.0 and ACS 2.0.


• Jim O’Neil (@jimoneil) posted Fun with the Service Bus (Part 1) on 4/24/2012:

imageLast week, I filled in for Mike Benkovich on his weekly Cloud Computing: Soup to Nuts series with a presentation on the Windows Azure Service Bus. The recording should be available soon, if not already, but I thought I’d follow up with a multi-part blog series that covers the material I presented.

Service Bus example

The context for the talk was pretty simple: an application running “on-premises” and behind the firewall (your local machine, in this case) that echoes messages sent via a public website hosted on Windows Azure (but it could be anywhere). When the message arrives at the service host, a window pops up showing the message.

Get the sample code!

The ‘on-premises’ piece of this example is a very simple WPF application that listens for messages, either on a WCF endpoint or via the Windows Azure Service Bus. Messages in this context have three fields: the message text, the name of the sender, and a string indicating the color of the window that displays the message on-premises. The result looks something like the following, where the browser on the left is the client, and the simple WPF application at the bottom right is the service host. The Metro-esque windows are the result of three messages having been sent and processed by the service hosted by the WPF application.

This post covers a lot of ground, so here’s a quick outline to help you navigate through the material:

Service Contract

The contract for the WCF service is pretty simple and defined in a standalone assembly referenced by both the Web site client project and the WPF application that hosts the service endpoint.

using System.ServiceModel;

namespace RelayService.Interfaces
{
    [ServiceContract(Name = "IEchoContract", 
                     Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")]
    public interface IEchoContract
    {
        [OperationContract]
        void Echo(string sender, string text, string color);
    }

    public interface IEchoChannel : IEchoContract, IClientChannel { }
}
On-Premises WCF Sample

Before we get to the Service Bus, let’s take a quick look at the implementation of this scenario using a self-hosted WCF application.

WPF Implementation

The WPF application in the sample project can either self-host a WCF service or open an endpoint exposed on the Service Bus; a simple checkbox on the main window controls the behavior. To keep things simple, the WCF service endpoint is set up with a simple BasicHttpBinding (that means no authentication on the service, not something you’d do in production!).

ServiceHost host = default(ServiceHost);

// local WCF service end point  
host = new ServiceHost(typeof(EchoService));
host.AddServiceEndpoint("RelayService.Interfaces.IEchoContract",
     new BasicHttpBinding(),
     "http://localhost:7777/EchoService");
ASP.NET Client Implementation

The client implementation is straightforward, and here, as in the rest of the examples in the series, I’m using code (versus configuration) to provide all of the details. You can, of course, use the web.config file to specify the binding and endpoint.

ChannelFactory<IEchoContract> myChannelFactory =
    new ChannelFactory<IEchoContract>(
        new BasicHttpBinding(), "http://localhost:7777/EchoService");

IEchoContract client = myChannelFactory.CreateChannel();
client.Echo(userName, txtMessage.Text, ddlColors.SelectedItem.Text);

((IClientChannel)client).Close();
myChannelFactory.Close();

Service Bus Relay Example

When you use a Service Bus relay, you’re essentially establishing an endpoint that’s hosted in the Windows Azure cloud, secured by the Access Control Service. The main difference in the code is the endpoint configuration. But before you can configure the endpoint in your application code, you’ll need to establish that relay endpoint in the cloud, so let’s walk through that now.

Configuring a Service Bus namespace

You create Service Bus namespaces via the Windows Azure portal; it’s a six-step process.

Configuring a Service Bus endpoint

  1. Select the Service Bus, Access Control & Caching portion of the portal.
  2. Select Service Bus
  3. Select New on the ribbon to create a new namespace. This will bring up the Create a new Service Namespace dialog. The same namespace can be used for Access Control, Service Bus, and Cache, but here we need only the Service Bus.
  4. Enter a unique namespace name; the resulting Service Bus endpoint will be uniquenamespacename.servicebus.windows.net, which, being a public endpoint, must be unique across the internet.
  5. Select the Windows Azure data center you wish to host your service. As of this writing, the East and West US data centers do not yet support the Service Bus functionality, so you’ll see only six options.
  6. Create the namespace. This will set up a Service Bus namespace at uniquenamespacename.servicebus.windows.net and an Access Control Service namespace at uniquenamespacename-sb.accesscontrol.windows.net. A service identity is also created with the name of owner and the privileges (claims) to manage, listen, and send (messages) on that Service Bus endpoint.
Controlling Access to the Service Bus namespace
Access Control Service for the Service Bus

Authentication and authorization of the Service Bus are managed by the Windows Azure Access Control Service (ACS) via the namespace (the one with a -sb suffix) that is set up when you create the Service Bus namespace. You can manage the associated ACS namespace from the Windows Azure portal by highlighting the desired Service Bus namespace and selecting the Access Control Service option from the ribbon.

That will bring you to yet another part of the portal with a number of somewhat daunting options:

  • Trust relationships is where you configure identity providers, such as Windows Live ID, Google, Yahoo!, Facebook, or Active Directory Federation Services 2.0. You can use these providers to authenticate and authorize access to the Service Bus endpoint. The Service Bus endpoint itself is a relying party application, namely an application that is secured by ACS. Each relying party application has at least one set of rules associated with it; these rules essentially define what operations a given identity can perform on the relying party application.
    For a Service Bus endpoint, there are three operations that can be carried out:
    • Send – send messages to an endpoint, queue or topic.
    • Listen – receive messages from an endpoint, queue, or subscription.
    • Manage – set up queues, topics, and subscriptions.
  • Service settings is where you manage certificates and keys as well as service identities that are not managed by other identity providers (like Live ID or Google). For the Service Bus, the ACS automatically manages the certificates and keys (so you don’t ever worry about that part), and it also creates a service identity with the name owner and claims that allow owner to perform all three of the operations (send, listen, and manage). You can think of owner as the sa, root, or superuser of the ACS namespace, and from that it should follow that you’ll rarely – if ever – want to use owner as part of your application. Instead you should create additional user(s) and apply the principle of least privilege.
  • Administration is where you control who can administer the ACS endpoint. The default administrator will be the Service Administrator of the subscription under which the namespace was created. That last statement has specific implications when co-administrators of a Windows Azure subscription are in play. The co-admin can create the Service Bus namespace, but she cannot manage its access and will be greeted by the rather cryptic error below if she tries:

The remedy is to add the co-admin as a portal administrator, by logging into the ACS portal with the identity of the subscription’s Service Administrator and adding the Live ID of the co-admin:

Adding co-admin for ACS

  • The Development section provides guidance and code snippets to integrate the ACS ‘magic’ into your applications.
Adding a new Service Identity

So what’s the workflow to granting access to a new user? There are two different answers here, one for creating a service identity and the other for handling identity federation via a third-party identity provider (like Live ID). We’ll tackle the first (and easier) case here, and we’ll look at the federation scenario in a subsequent post.

First you need a new service identity,so select the Service identities option on the sidebar. (Note that the auto-generated owner identity is in place)

Service identity listing

Select the Add link, and create a new identity:

Add service identity

  1. Specify the user identity; use ‘guest’ here to mesh with the default identity assumed by the sample code.
  2. Provide an optional description visible only in the portal.
  3. Select “Symmetric Key” for the credential type. There are two other options: Password and X.509 Certificate, but both require a bit more code and configuration to implement. Consult the ACS samples on CodePlex for scenarios that make use of these alternative credential types.
  4. Generate a key for the guest identity. You’ll also need to add this key value to the web.config file for the RelayServiceClient application. While you’re at it, you may as well change the SBNamespace value to the Service Bus namespace you just created.

    web.config setting

  5. Save the new identity. By default the credentials will expire in a year.
Adding Authorization Claims for the New Service Identity

What we’ve done so far is create a new service identity with the name guest, but that identity has no privileges, so if you try using it (with code we haven’t quite looked at yet), you’d get an HTTP 403 (Forbidden) error. That’s in contrast to a 401 (Unauthorized) error that you would get when attempting to access the Service Bus with a non-existent user. So let’s set the guest user up with the ability to make calls against the Service Bus namespace.

First let’s create a new rule group. Strictly speaking we could just use the default group (which includes the authorization rules for owner), but a new group can provide a bit more flexibility and reinforce that the owner identity is “special.”

Rule groups

Enter a name for the group:

Add rule group

After you save the new group, a list of all of the rules associated with this group appears. Of course, there are none yet, so you’ll need to add at least one. There is an option to generate rules based on identity providers you’ve associated with the Service Bus namespace, but we’re using service identities, so you’ll need to use the Add link to set up the authorization rules.

Edit rule group

What you’ll be adding here is a new claim rule. The concept behind claims is actually rather simple, although the user interface belies that! A claim is some statement about a user or identity that is vouched for by a given identity provider. For instance, when you successfully login via your Google ID, your application is passed back a list of claims associated with your identity that Google is willing to vouch for, so if your relying party application is willing to trust Google, then it can trust the bits of information that are returned as claims.

Claims can be just about any statement or fact about the user/identity that the identity provider is storing and willing to share. Google, for instance, will provide a name, nameidentifier, and email address as claims; Windows Live ID provides only a nameidentifier (a tokenized version of your Live ID). The types and number of claims emitted by different identity providers vary, and that’s where claim rules come in. You want to insulate your relying application from the vagaries of all the identity provider options, so rather than tapping into the raw claims that each of those providers sends, you set up claims that are meaningful to your application and then map the various incoming claims to the contextual claims your application understands.

In this case, the relying party is the Service Bus, and it understands three claims: Send, Manage, and Listen. Our specific goal, though, is to allow the guest identity only the capability to Send messages on the Service Bus, not manage or listen, so you’ll need to create one claim that authorizes that function. Each claim rule is essentially an IF-THEN statement:

Add claim rule

  1. The input claim issuer refers to the identifying party. The dropdown contains whatever identity providers you have enabled for the ACS controlling the Service Bus endpoint (via the Identity providers option under Trust Relationships in the left sidebar (not shown)). In this case, we’re using a service identity (guest) that is managed by the Access Control Service.
  2. The input claim type refers to one of potentially many claims that the identity provider is willing to offer up about the user/identity that it has authenticated. The list of claims relevant to the selected provider is prepopulated. For ACS-controlled service identities, the user name is provided as a nameidentifier claim; you’ll need to consult your identity provider’s documentation to determine what each claim type actually contains. Claims that aren’t pre-populated can be specified manually in the Enter type: text box. If you only care that the identity provider authenticates a user, you can select Any for the claim type, which simply indicates you don’t care about the specifics of the claims.
  3. To complete the “IF statement,” you specify what the specific claim value should be in order to fire the rule. If you don’t care about the specific value a claim has (and only that the identity provider offers up the given claim), you can specify Any here. In our case, the requestor must be guest, so that’s added as a specific value. If you have multiple other identities that you want to allow access, you would create additional rules. And if you have more complex rules (with “AND” conditions) you can add a second input claim as a compound condition to govern the claim rule.

So that’s the IF part! Now you need to map that input claim to output claims that the relying party (Service Bus) is interested in. We don’t really care at this point that the user’s name is guest, but we do want to know what that user is allowed to do. Granted, you could do all that checking in code – if (user == ‘guest’) then let the user call methods, etc. – but that’s horribly hard-coded and very difficult to maintain from a code and security management perspective (what happens if the guest password is compromised? how do you revoke access?). So what you’ll do instead is map the input claim to an output claim, namely one that gives Send privileges to guest.

Claim rule output

  1. The output claim type needed here is unique to the Service Bus, so the various options in the drop down aren’t applicable, and we specify the type net.windows.servicebus.action explicitly.
  2. There are three possible values: Send, Listen, and Manage; of course, we want just Send for guest.
  3. You can specify a description for the claim here; it will appear in the list of rules for the given Rule Group. It’s a good idea to put something meaningful here, because the Rule Group page listing will show only the output claim type. If you have multiple claims for different action values, they won’t otherwise be distinguishable in the listing.
  4. Save the rule!

Rule group after adding new Send rule

Note that the Used by the following relying party application text box is still empty; that’s because we haven’t associated this rule group with the relying party (the Service Bus). There’s essentially a many-to-many relationship between relying party applications and rule groups, so we’ll need to revisit the Relying party applications entry screen to associate the new rule group with the Service Bus. Note that the default rule group created (for owner) is already selected.

Associating rule group with relying party application

The sample code associated with this blog series assumes that you’ve created a service identity with the name guest and provided the claim rule as detailed above. The code also requires a second service identity with the name wpfsample that has an output claim rule of Listen; that is the user that initiates listening on the Service Bus endpoint within the WPF client application. You can follow the same steps above to create this second identity and associate it with a new (or existing) rule group. You can use the default owner as well, but per the earlier recommendation, it’s best to reserve that service identity for administrative use and not incorporate it into your application logic or configuration.

Locking down the Service Bus with ACS was admittedly a bit of work and somewhat of a diversion from our real goal (that’s likely why so many examples just use owner, even though they shouldn’t!). Armed with the new identities guest and wpfsample and their requisite claims, let’s next look at the code for using the Service Bus endpoint instead of the WCF endpoint we started with.

Modifying WPF ServiceHost Implementation to Use Service Bus Relay

Here’s the code to have the WPF application listen on the Service Bus endpoint versus setting up a WCF endpoint on premises:

   1:  ServiceHost host = default(ServiceHost);
   2:   
   3:  // ServiceBusEndpoint
   4:  host = new ServiceHost(typeof(EchoService));
   5:  host.AddServiceEndpoint("RelayService.Interfaces.IEchoContract",
   6:      new BasicHttpRelayBinding(),
   7:      ServiceBusEnvironment.CreateServiceUri("https",
   8:          Properties.Settings.Default.SBNamespace,
   9:          "EchoService"));
  10:             
  11:  // Add the Service Bus credentials to all endpoints specified in configuration.
  12:  foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
  13:  {
  14:      endpoint.Behaviors.Add(new TransportClientEndpointBehavior()
  15:          {
  16:              TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
  17:                  "wpfsample",
  18:                  Properties.Settings.Default.SBListenerCredentials)
  19:          });
  20:  }

It starts out much like the pure WCF sample, but there are some notable differences:

  • Line 6: we use a new binding type, BasicHttpRelayBinding, which you can think of as the BasicHttpBinding tuned and adapted for the Service Bus. There are a number of other Service Bus bindings that correspond to traditional on-premises WCF bindings, as well as some specific to the cloud environment.
  • Lines 7-9: the CreateServiceUri method is the preferred mechanism for constructing the Service Bus endpoint URI; however, in this case, you could simply construct a string like “https://yoursbnamespace.servicebus.windows.net/EchoService.” The SBNamespace property in Line 8 refers to a setting you provide in the WPF application (App.config) which specifies the name of the Service Bus namespace you created earlier.
  • Lines 12–20 are where the Service Bus credentials are added to the endpoints exposed by the Service Bus. Line 17 shows the name of the service identity (hardcoded to wpfsample), and Line 18 specifies the symmetric key associated with that service identity in the ACS portal. You’ll need to copy and paste the credential from the portal to the SBListenerCredentials setting in the App.config file of the WPF application (or via the properties dialog in Visual Studio).

And that’s it: now when the Start Service button is clicked (and the Use Service Bus checkbox is selected in the WPF UI), the WPF application will open a relay listener in whatever Windows Azure data center hosts that Service Bus endpoint. Depending on the location, you may notice a few seconds of latency as the connection is established.
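As a rough illustration of that start-up step, here is a minimal sketch of what a Start Service click handler might look like. The control names and window class are assumptions, and host is the ServiceHost configured with BasicHttpRelayBinding as shown above; treat this as a sketch rather than the sample’s actual handler.

using System;
using System.ServiceModel;
using System.Windows;

public partial class MainWindow : Window
{
    private ServiceHost host; // configured elsewhere with BasicHttpRelayBinding, as shown above

    // Hypothetical Start Service click handler (control names are illustrative).
    private void StartServiceButton_Click(object sender, RoutedEventArgs e)
    {
        try
        {
            // Opening the host registers the listener with the Service Bus relay;
            // expect a few seconds of latency while the connection is established.
            host.Open();
            ((System.Windows.Controls.Button)sender).Content = "Stop Service";
        }
        catch (Exception ex)
        {
            // A failure here typically points at the 'wpfsample' service identity or its key.
            MessageBox.Show("Could not open the Service Bus endpoint: " + ex.Message);
        }
    }
}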

Now that the WPF application is listening for requests on the endpoint, let’s send some messages via the ASP.NET client.

Targeting the Service Bus Endpoint in the ASP.NET Client
   1:  // ServiceBus endpoint constructed using App.config setting
   2:  ChannelFactory<IEchoContract> myChannelFactory =
   3:      new ChannelFactory<IEchoContract>(
   4:          new BasicHttpRelayBinding(),
   5:          new EndpointAddress(ServiceBusEnvironment.CreateServiceUri(
   6:              "https", 
   7:              ConfigurationManager.AppSettings["SBNameSpace"], 
   8:              "EchoService")));
   9:   
  10:  // add credentials for user (hardcoded in App.config for demo purposes)
  11:  myChannelFactory.Endpoint.Behaviors.Add(
  12:      new TransportClientEndpointBehavior()
  13:      {
  14:          TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
  15:              userName, ConfigurationManager.AppSettings["SBGuestCredentials"])
  16:      }
  17:  );
  18:   
  19:  // traditional WCF invocation
  20:  IEchoContract client = myChannelFactory.CreateChannel();
  21:  client.Echo(txtUser.Text, txtMessage.Text, ddlColors.SelectedItem.Text);
  22:   
  23:  ((IClientChannel)client).Close();
  24:  myChannelFactory.Close();

Above is the code the client (ASP.NET) application uses to send a message to the Service Bus to be relayed to the WPF application hosted ‘on-premises.’ You can run the Web site locally (by hitting F5 in Visual Studio) and you’ll still be exercising the Service Bus in the cloud; for more fun though, you may want to deploy the ASP.NET application as a Windows Azure cloud service so others can bring up the site and send messages to your local machine. A cloud services project is provided for you, but you’ll need to configure the publication settings for your own cloud account.

  • Lines 2-8 set up the client ChannelFactory. As in the WPF application, BasicHttpRelayBinding is used, and the Service Bus endpoint is constructed via the CreateServiceUri method. You’ll need to modify the app.config file to provide your Service Bus namespace as the SBNameSpace setting.
  • Lines 11-17 add the Service Bus credentials for the identity passed in via userName (by default it’s the value guest). The credential key, which you configured in the ACS portal earlier, needs to be provided in the app.config file as the SBGuestCredentials setting.
  • Lines 18-24 are no different from the traditional WCF invocation; here the Echo method is invoked passing in the values provided by the user in the ASP.NET form.
Give it a Spin!

At this point, you’re (finally) ready to run the sample. Once you’ve loaded the solution into Visual Studio, you can just hit F5 to run it. The ASP.NET web site and the WPF application are set to be the startup projects, so you won’t (and don’t) need to run the Windows Azure emulator to see the Service Bus in action. Here’s a play-by-play of the user interaction; keep in mind that the use-case here is two different machines on completely different networks, separated by firewalls.

  1. Check the Use Service Bus checkbox; this will have the ServiceHost within the WPF application open an endpoint on the Service Bus (using the namespace you’ve set up, the name of which you’ve also specified in the App.config file).
  2. Click the Start Service button (it won’t say Stop Service until the service is running). If this fails, there’s likely some problem with the wpfsample service identity you set up (you did set that one up, right?).
  3. In the ASP.NET client, pick your favorite color from the list.
  4. Enter the message you want to send via the Service Bus.
  5. Select the option to invoke the message via the Service Bus endpoint, versus the local WCF endpoint.
  6. Send the message, and….
  7. You should see the message box appear on the same machine that’s running the WPF application.

A working example is great, but I tend to learn more from applications that fail! Here are some scenarios to consider:

  • Try to send a message under the user name of “joe”. You should see a 401 error displayed at the top of the page because there is no such identity known to the Service Bus.
  • Specify a user name of “owner.” owner does exist (you got that free when the namespace was created), but the credentials you put into the app.config won’t match. You’ll likewise get a 401 error.
  • Specify a user name of “wpfsample” and change the credentials in app.config to be the symmetric key associated with wpfsample (you can get the value from the ACS portal). You should see a FaultException indicating the Send action is not permitted; that’s because wpfsample only has Listen privileges.
  • Stop the service via the button on the WPF application, and try to send a message. In this case the client gets either a FaultException with the message “No service is hosted at the specified address,” or a ProtocolException; buried within that exception’s message is an indication that there were no active listeners registered for the endpoint.

That last bit of behavior – where the entire operation depends on both the client and the server being up and listening at the same time – can be a bit constraining, especially if you have independent entities chatting across the Service Bus. What if one of the parties is down for maintenance? It’s incumbent on the sender to catch the faults and retry later or provide some other remediation. Wouldn’t it be great if you could guarantee the messages would arrive and be processed (asynchronously) even if both parties weren’t ‘on line’ at the same time? Well you can, and that’s where Service Bus queues come in – the topic of the next installment in this series.
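Until queues enter the picture, the sender has to defend itself. Here is a minimal sketch of that catch-and-retry remediation wrapped around the Echo call from the ASP.NET client above; IEchoContract comes from the sample’s contracts assembly, and the retry count and delays are arbitrary assumptions.

using System;
using System.ServiceModel;
using System.Threading;

static class EchoSender
{
    // Minimal retry wrapper around the relay call; the numbers below are illustrative only.
    public static void SendWithRetry(IEchoContract client, string user, string message, string color)
    {
        const int maxAttempts = 3;
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                client.Echo(user, message, color);
                return; // success
            }
            catch (FaultException)     // e.g. no service is hosted at the specified address
            {
                if (attempt == maxAttempts) throw;
                Thread.Sleep(TimeSpan.FromSeconds(5 * attempt)); // simple backoff
            }
            catch (ProtocolException)  // e.g. no active listeners registered for the endpoint
            {
                if (attempt == maxAttempts) throw;
                Thread.Sleep(TimeSpan.FromSeconds(5 * attempt));
            }
        }
    }
}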


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Traffic Manager, Connect, RDP and CDN

Michael Washam (@MWashamMS) posted Automated Global Deployments with Traffic Manager and PowerShell on 4/24/2012:

On my recent appearance on Cloud Cover my demo failed :(.

Sad to say, but it happens. Especially when you don’t test your demo after rebuilding your laptop. That’s OK though. The beauty of the Internet is you can always make up for it later by posting a working example :).

So here it is in its entirety: a script that deploys a Windows Azure solution to two data centers and configures Traffic Manager to balance traffic between the endpoints.

#Select subscription first
select-Subscription azurepub

$certpath = 'C:\managementCert.pfx'
$mcpwd = 'certPassword!'

# Deployment package and configuration 
$PackageFile = 'C:\Deployment\MortgageApp.cspkg'
$ServiceConfig = 'C:\Deployment\ServiceConfiguration.Cloud.cscfg'

# Set up our variables for each data center 
$NCService = 'WoodGroveNC'
$EUService = 'WoodGroveEU'
$NCStorage = 'demoncstorage1'
$EUStorage = 'demoeustorage1'
$NCDC = 'North Central US'
$EUDC = 'North Europe'
$TMProfileName = 'WoodGroveGlobalTM'

# Created Hosted Service in North Central US
New-HostedService -ServiceName $NCService -Location $NCDC | 
	Get-OperationStatus -WaitToComplete


# Created Hosted Service in Europe

New-HostedService -ServiceName $EUService -Location $EUDC | 
	Get-OperationStatus -WaitToComplete



# Get a reference to the newly created services
$hostedServiceNC = Get-HostedService -ServiceName $NCService
$hostedServiceEU = Get-HostedService -ServiceName $EUService
	


# Add the management certificate to each hosted service 
$hostedServiceNC | Add-Certificate -CertToDeploy $certpath -Password $mcpwd |
	Get-OperationStatus -WaitToComplete
	
$hostedServiceEU | Add-Certificate -CertToDeploy $certpath -Password $mcpwd | 
	Get-OperationStatus -WaitToComplete


# Create a new storage account in each data center
New-StorageAccount -StorageAccountName $NCStorage -Location $NCDC | 
	Get-OperationStatus -WaitToComplete    
	
New-StorageAccount -StorageAccountName $EUStorage -Location $EUDC | 
	Get-OperationStatus -WaitToComplete    

# Deploy the hosted service to each data center 
$hostedServiceNC | New-Deployment -Name $NCService -StorageAccountName $NCStorage `
	-Package $PackageFile -Configuration $ServiceConfig | 
	Get-OperationStatus -WaitToComplete

$hostedServiceEU | New-Deployment -Name $EUService -StorageAccountName $EUStorage `
	-Package $PackageFile -Configuration $ServiceConfig | 
	Get-OperationStatus -WaitToComplete



# Start each service 
$hostedServiceNC | Get-Deployment -Slot Production | Set-DeploymentStatus -Status Running `
				 | Get-OperationStatus -WaitToComplete
				 
$hostedServiceEU | Get-Deployment -Slot Production | Set-DeploymentStatus -Status Running `
			     | Get-OperationStatus -WaitToComplete


# Configure Traffic Manager to enable Performance Based Load Balancing and Failover
$profile = New-TrafficManagerProfile -ProfileName $TMProfileName `
		  -DomainName 'woodgroveglobal.trafficmanager.net'

$endpoints = @()
$endpoints += New-TrafficManagerEndpoint -DomainName 'WoodGroveNC.cloudapp.net'
$endpoints += New-TrafficManagerEndpoint -DomainName 'WoodGroveEU.cloudapp.net'


# Configure the endpoint Traffic Manager will monitor for service health 
$monitors = @()
$monitors += New-TrafficManagerMonitor -Port 80 -Protocol HTTP -RelativePath /



# Create new definition
$createdDefinition = New-TrafficManagerDefinition -ProfileName $TMProfileName -TimeToLiveInSeconds 300 `
			-LoadBalancingMethod Performance -Monitors $monitors -Endpoints $endpoints -Status Enabled
			

# Enable the profile with the newly created traffic manager definition 
Set-TrafficManagerProfile -ProfileName $TMProfileName -Enable -DefinitionVersion $createdDefinition.Version

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Karsten Januszewski (@irhetoric) posted Announcing Windows Azure Achievements for Visual Studio on 4/26/2012:

Today we are pleased to announce the release of Windows Azure Achievements for Visual Studio. Building on the existing Visual Studio Achievements extension, this updated release brings 15 new Windows Azure-based achievements to the popular Visual Studio extension.

If you aren’t familiar with Visual Studio Achievements, it is an extension for Visual Studio, which enables developers to unlock badges and compete against one another for a place on a leader board based on the code they write, its level of sophistication, and the Visual Studio capabilities they use to do so. It was released in January and it has been downloaded over 80,000 times.

The extension works by launching a background thread each time code is compiled, looking for code constructs. It also listens for particular events and actions from Visual Studio. When certain criteria or actions are detected, the plug-in triggers a pop-up alert and awards a new badge, which then updates the developer’s Channel 9 profile as well as the public leader board.

Each time you earn a badge, a unique page is created with your profile picture, the badge and a description. You can tweet about achievements, share them on Facebook, and show a list of achievements on your blog using the Visual Studio Achievements Widget.

Today’s update to the extension adds 15 new achievements, all focused on Windows Azure development. For Windows Azure developers who use .NET and Visual Studio, this is a chance to show their prowess with Windows Azure tools and SDK.

The Windows Azure achievements are at once playful and pragmatic. They focus on a wide range of skills with the Windows Azure cloud platform, including publishing to Windows Azure from Visual Studio, exercising features such as Windows Azure Service Bus and using toolkits, such as the Social Gaming Toolkit for Windows Azure.

Each achievement description contains links to documentation and tutorials where developers unfamiliar with the feature can learn more.

Here are the available achievements:

Download the extension here.


Brian Loesgen (@BrianLoesgen) spun a A Tale of Two Emulators and a VM for Windows Phone and Azure on 4/24/2012:

I’m not sure how many other people on the planet may want to do this, but I developed a technique which I thought was really cool, and is somewhat non-obvious, so I thought I’d blog about it.

When I develop, out of years of habit, I always install my development environment in a virtual machine, leaving the host with just my productivity software like Outlook. However, the Windows Phone emulator is not supported in a virtual machine. This forced me into installing on my host machine, leaving me with 2 machines:

  1. Host machine, with Visual Studio 2010 Express for Windows Phone
  2. Virtual machine, with Visual Studio Ultimate, SQL Server, Azure tools

Fine. Not what I ideally would have liked, but I can live with that. I was developing a Windows Phone app that calls some RESTful services running on Azure, so my workflow was:

  1. Develop/debug/test RESTful service in VM
  2. Deploy service to Azure
  3. Work on phone app hitting that service live-on-Azure

That worked fine, until I ran into a situation where I was throwing an exception in the RESTful service (due to the data I was submitting), and I had not set up diagnostics for the Azure app. What to do next?

I reasoned that since the host and the VM were on the same network, I should be able to talk between them. However, that would mean using the Windows Phone *emulator* in my host to call into a service running in the Windows Azure *emulator* inside the virtual machine. Can that work? YES!!

Here’s how:

  1. In the VM, run Windows Advanced Firewall and set up inbound and outbound rules to allow full access to port range 81-82 (this is what I did, your mapping may vary, watch the messages as the Azure emulator starts)
  2. In the VM, run the cloud app containing your services, in debug mode with breakpoints set
  3. In the VM, attach to the Azure emulator process (DevFC)
  4. In the VM, do an IPCONFIG to see what your IP address is
  5. In the host, ping that IP address just as a reality check to ensure you can get there
  6. In the host, change your service base URI you are calling to match the VM (in my case it was http://192.168.1.7:82, remember the emulator does port remapping to avoid conflicts)
  7. In the host, run your app and call the service
  8. In the VM, notice that your breakpoint has been hit!

So there you have it: emulator-to-emulator communications with a VM hop in between. Using this technique, you can develop a Windows Phone app running against an Azure backend, debugging both sides at the same time. As an added bonus, you can do this without actually calling anything on Azure, or needing to re-deploy.
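Here is a minimal sketch of the only client-side change this technique really needs: switching the base URI the Windows Phone app calls. The IP address, port, resource path, and conditional compilation symbol are all illustrative assumptions; use whatever IPCONFIG and the compute emulator report for your own VM.

using System;
using System.Net;

public class EchoServiceClient
{
#if DEBUG_AGAINST_VM
    // The VM's IP (from ipconfig) plus the port the compute emulator actually bound.
    private const string BaseUri = "http://192.168.1.7:82/";
#else
    // The service as deployed to Windows Azure.
    private const string BaseUri = "http://yourservice.cloudapp.net/";
#endif

    // Calls a RESTful resource on the service; the relative path is hypothetical.
    public void GetStatus(Action<string> onCompleted)
    {
        var client = new WebClient();
        client.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error == null)
                onCompleted(e.Result);
        };
        client.DownloadStringAsync(new Uri(BaseUri + "api/status"));
    }
}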


• Bruno Terkaly (@brunoterkaly) described Developing Windows Azure, Cloud-Based Applications With Windows 8-What you need and how to get there on 4/12/2012:

Exercise 1: Introduction – Cloud Development on Windows 8

I am now doing most, if not all, of my development on Windows 8. Don’t get me wrong: I am a die-hard fan of Windows 7. I go weeks without rebooting – through airports, the back of a motorcycle. But now that Windows 8 is working so well, I’m just spending all my time there. Windows 8 feels like a brave new, re-imagined world, yet with some familiar, feel-at-home features that make the upgrade feel really smooth. Because I do a lot of presenting, I am hesitant to adopt Consumer Preview software. But I am perfectly at home in Windows 8 – even as I write this post.

The convincing argument was that I could easily run Visual Studio 2010 and Visual Studio 2011 at the same time, on the same machine. I've also got all my familiar databases and tooling as well.

Windows 8 Metro and Windows Azure Developer Tooling on the same machine - delightful

Yes, you heard me right. It works great to run both Visual Studio 2010 and Visual Studio 2011 at the same time. I use Visual Studio 2010 to do the back-end Azure http servers. And I use Visual Studio 2011 to do the Windows 8 Metro client application.

You can go back and forth between both environments, client and server. You can hit breakpoints in the server when a Windows 8 Metro Client makes a service request.

Can you do it? Can Windows 8 be your only machine?
For me it is working out well. I haven't been hit with any obstacles that prevent me from being 100% on Windows 8. Some of my friends are running Macs with Windows 8 and dual booting into MacOS.

I am about to show you a cloud project on Windows Azure that runs on Windows 8. I've even had multiple debugging sessions with both Visual Studio 2010 and Visual Studio 2011 running at the exact same time.

SQL Server 2008 R2 installed just fine.

My custom, hand-built utilities that I use every day came to the Windows 8 world without a single hitch. I am not going to bore you with the long list of things that are going well.


Task 1 - Migration Underway

My old projects have all opened and compiled with little or no hassle. I have some low-level API code in C++ – even keyboard hooks – that ported right over. I also have some LiveWriter plugins I've written that work great with LiveWriter on Windows 8. I am delighted at the level of compatibility. I personally see no reason not to be 100% on Windows 8.

I focus on the cloud quite a bit so I obviously want to run my previous cloud projects on Windows 8. It turns out this is working great for me. My samples just work.


Task 2 - Installing Visual Studio and Windows 8

You can develop Windows Azure applications on Windows 8. But you will need to manually install and configure your development environment.

First off, you will need to add Visual Studio 2010. This is quite easy. I haven’t tried the Express versions, but my Visual Studio 2010 Ultimate works and I expect most, if not all, versions of Visual Studio should work.

Visual Studio 2011 does not yet support the Azure SDK and tooling. But Visual Studio 2010 works great, luckily. No learning curve there, things work just as you'd expect.

  1. Install Windows 8: Download Windows 8
  2. Make sure Visual Studio 2011 is downloaded and installed: Visual Studio 11 Express Beta (and more)
  3. Open the Windows Feature configuration settings.
    • Press the Windows logo key to show the Start area in the Windows shell.

Task 3 - Adding and enabling Windows Features
  1. .NET Framework 3.5 (includes .NET 2.0 and 3.0)
    • Windows Communication Foundation HTTP Activation
  2. .NET Framework 4.5 Advanced Services
    • ASP.NET 4.5
    • WCF Services =>
      • HTTP Activation
      • TCP Port Sharing
  3. Internet Information Services
    • Web Management Tools => IIS Management Console
    • World Wide Web Services =>
      • Application Development Features =>
        • .NET Extensibility 3.5
        • .NET Extensibility 4.5
        • ASP.NET 3.5
        • ASP.NET 4.5
        • ISAPI Extensions
        • ISAPI Filters
    • Common HTTP Features =>
      • Default Document
      • Directory Browsing
      • HTTP Errors
      • HTTP Redirection
      • Static Content
    • Health and Diagnostics =>
      • Logging Tools
      • Request Monitor
      • Tracing
  4. Security => Request Filtering

Task 4 - Review Settings
Make sure when you are adding features, it looks like this.

Windows feature configuration settings required for Windows Azure SDK


Task 5 - Install SQL Server 2008 R2 Express with SP1.

It is helpful to install the server with tools.

  1. This installer is identified by the WT suffix.
    • Choose SQLEXPRWT_x64_ENU.exe or SQLEXPRWT_x86_ENU.exe.
    • Choose New Installation or Add Features.
    • Use the default install options.

Microsoft® SQL Server® 2008 R2 SP1 - Express Edition


Task 6 - Install Azure SDK and Tooling

At this writing it is version 1.6 for the Azure SDK.

  1. A couple of things to remember.
  2. First, order matters. Second, be sure to remove any prior versions of the Azure SDK and tooling you may have.
  3. Let’s be clear: Uninstall any existing versions of the Windows Azure SDK for .NET on the machine.
  4. Download and install the Windows Azure SDK for .NET - November 2011 individual components from the download center page for Windows Azure SDK - November 2011: Azure SDK and Tooling for Windows 8 (as of 4/23/2012)
  5. Note that the all-in-one installer available from the Windows Azure .NET Developer will not work in this scenario.
  6. Choose the correct list below based on your platform, and install the components in the order listed:
    • 32-bit:
      • WindowsAzureEmulator-x86.exe
      • WindowsAzureSDK-x86.exe
      • WindowsAzureLibsForNet-x86.msi
      • WindowsAzureTools.VS100.exe
    • 64-bit:
      • WindowsAzureEmulator-x64.exe
      • WindowsAzureSDK-x64.exe
      • WindowsAzureLibsForNet-x64.msi
      • WindowsAzureTools.VS100.exe

Exercise 2: Testing the Windows Azure SDK and Tools

This next section is about partially validating that you have a clean install. We cannot test all aspects of the Azure SDK and tooling, but we'll get some very important parts validated.


Task 1 - New Project, Add Code, Run Code, Be Happy
This is where the rubber meets the road.

  1. Start Visual Studio 2010 as Administrator (not version 2011)
  2. Select New Project from the Start Page
  3. Do not select Windows 8 Cloud Application.
    • It is used to build a cloud service to support rich Windows Metro style apps
    • It contains Visual Studio project templates for a sample Metro style app and a Windows Azure cloud project.
      • We will explore this later
  4. Type in a project name of ValidateInstall
  5. Click on ASP.NET Web Role and then click the >> arrows
    • Get ready to rename it
  6. Hit F2 or click to rename the web role

    • Call it ValidateInstall_WebRole
  7. You can edit Default.aspx
    • It shows up automatically after you finish adding your web role
    • Just add some text. Note that I added some (Bruno Terkaly)
  8. Go to the Debug Menu and choose Start Debugging
    • This should start up some emulators
  9. You should see the debugging environment startup
    • The compute and storage emulators should automatically start
  10. The welcome screen lets you know some basics are working
  11. You should be able to navigate to the system tray of Windows 8 to see that the emulators are running
    • Compute Emulator
    • Storage Emulator


Windows 8 and Windows Azure (Cloud) work great together

I had a great time testing all the pieces. I got both Visual Studio 2010 and 2011 to work together perfectly. I was able to query a RESTful endpoint from a Windows 8 Metro app built in Visual Studio 2011 against the RESTful service hosted from Visual Studio 2010.



Some more resources

You can learn more about running Windows Azure and Windows 8 here.



Steve Plank (@plankytronixx) posted VIDEO: Windows Azure: How Retail Manager Solutions changed their business on 4/24/2012:

imageJim Chapman, CEO of Retail Manager Solutions, talks about their move from on-premises software to the cloud and then tells us how Windows Azure has changed their business.

imageSales cycle times have been dramatically reduced, customers save money and there is less work to do to get a solution up and running for one of their customers.


Roope Astala provided a guided Tour of “Cloud Numerics” Linear Algebra Library in a 4/23/2012 post to the Microsoft Codename “Cloud Numerics” blog:

The LinearAlgebra namespace in “Cloud Numerics” is divided into 3 main classes:

  • Microsoft.Numerics.LinearAlgebra.Decompositions
  • Microsoft.Numerics.LinearAlgebra.LinearSolvers
  • Microsoft.Numerics.LinearAlgebra.Operations

In this post we’ll look at examples from each of the classes, and how they can be combined to implement interesting algorithms on matrices.

Operations

Let’s kick off the tour with an example using methods from the Operations class. We compute the largest eigenvector of a matrix by power iteration. We use Operations.MatrixMultiply to iterate the result, and Operations.Norm twice: first to scale the result to unit length, and then to check the convergence.

using System;
using System.Text;
using Microsoft.Numerics;
using msnl = Microsoft.Numerics.Local;
using msnd = Microsoft.Numerics.Distributed;
using Microsoft.Numerics.Statistics;
using Microsoft.Numerics.LinearAlgebra;

namespace MSCloudNumericsApp
{
    public class Program
    {
        static void LargestEigenVector()
        {
            long size = 20;
            var D = ProbabilityDistributions.Uniform(0d, 1d, new long[] { size, size });
            var x = ProbabilityDistributions.Uniform(0d, 1d, new long[] { size, 1 });
            Microsoft.Numerics.Local.NumericDenseArray<double> xnew;
            int maxiter = 20;
            double tol = 1E-6;
            int iter = 0;
            double norm = 1.0;
            while (iter < maxiter && norm > tol)
            {
                iter++;
                xnew = Operations.MatrixMultiply(D, x);
                xnew = xnew / Operations.Norm(xnew, NormType.TwoNorm);
                norm = Operations.Norm(x - xnew, NormType.InfinityNorm);
                x = xnew;
                Console.WriteLine("Iteration {0}:\t Norm of difference: {1}", iter, norm);
            }
            Console.WriteLine("Largest eigenvector: {0}", x);
        }
    }
}

For normalization we use the L2 norm, by passing the NormType.TwoNorm argument value.

For exact eigenvalues, the identity A*x=e*x should hold. We check this by comparing x and xnew: at convergence they should be close to one another. We perform the comparison by using NormType.InfinityNorm that gives the largest magnitude element-wise difference.

The algorithm should reach convergence after about 8 iterations.

Matrix decompositions

The decompositions, broadly speaking, fall into two categories

  1. Decompositions that facilitate solving systems of linear equations, for example LU and Cholesky
  2. Decompositions aimed at analyzing the spectrum of a matrix, for example EigenSolve and Svd

The workflow for category 1 involves computing the decomposition and passing the result object into the appropriate linear solver function. For example, one can use Cholesky decomposition as

var choleskyDecomposition = Decompositions.Cholesky(A, UpperLower.UpperTriangle);
x = LinearSolvers.Cholesky(choleskyDecomposition, b);

Because this category of methods is closely related to linear solvers, we’ll take a closer look in the next section, “Linear solvers.”

The methods from category 2 can be used to represent a matrix in a form that can be analyzed more easily. Let’s try an example: singular value decomposition to compute a low-rank approximation of a matrix. In the following code, we decompose the matrix and then reconstruct it, first by keeping only the largest singular value, and then adding second largest value, and so forth.

static void TestSvdConvergence()
{
    long rows = 20;
    long columns = 10;
    var D = ProbabilityDistributions.Uniform(0d, 1d, new long[] { rows, columns });
    var svdResult = Decompositions.SvdThin(D);
    var approxS = msnl.NumericDenseArrayFactory.Zeros<double>(new long[] { columns, columns });
    for (int i = 0; i < columns; i++)
    {
        approxS[i, i] = svdResult.S[i];
        var reconstructedD = Operations.MatrixMultiply(svdResult.U, Operations.MatrixMultiply(approxS, svdResult.V));
        double norm = Operations.Norm(D - reconstructedD, NormType.FrobeniusNorm);
        Console.WriteLine("Frobenius norm of difference with {0} singular values {1} ", i + 1, norm);
    }
}

We compare the reconstructed matrix to the original one by computing the Frobenius norm of the difference. As we increase the number of singular values, the norm gets smaller and smaller as the reconstructed matrix gets closer and closer to the original one. For all 10 singular values the result should be exactly the same, apart from small numerical noise.

Note that we are using the Decompositions.SvdThin method. This is more memory-efficient than Svd for non-square matrices. To give an example, if your data were a tall-and-thin 1 million by 100 matrix of doubles, the U matrix from the full Svd would be a huge million by million array that consumes 8 terabytes of memory, of which 999,900 columns never contribute to the product because they are multiplied by zeros. SvdThin keeps only the 100 columns that matter, producing a much more manageable 0.8 gigabyte array.

Linear solvers

There are two kinds of operations: general-purpose ones and specialized ones that can be much faster if your data has special symmetries. Consider, for example, Cholesky decomposition. If your data happens to be a symmetric positive-definite matrix, it’s much faster to compute the Cholesky decomposition first, and then use the LinearSolvers.Cholesky method to solve the linear system of equations, compared to using the general-purpose LinearSolvers.General method.

There’s another benefit to using specialized methods. Consider what LinearSolvers.General does under the covers: it first computes a matrix factorization internally and then solves the system of linear equations against the factorized matrix. Often, this internal factorization step is the expensive part of the computation. The factorization only depends on matrix A. If you have to compute linear solve many times against the same matrix - for example, in iteration A*x2 = f(x1), A*x3 = f(x2) - it would be nice to be able to re-use the same factorization. Using the specialized method allows you to do exactly that: pre-compute the Cholesky decomposition once and then re-use it as input to LinearSolvers.Cholesky.

Let’s look at an example to compare the performance of the two approaches. We’ll first need to generate a matrix that is guaranteed to be symmetric and positive-definite. First, we matrix multiply an n-by-1 and a 1-by-n random number array to create an n-by-n distributed matrix: this is an efficient way to create a large distributed matrix. Second, we add together the matrix and its transpose to create a symmetric matrix. Finally, we increase the values on the diagonal to make the matrix positive-definite: this follows from the Gershgorin circle theorem.

static msnd.NumericDenseArray<double> CreatePositiveDefiniteSymmetricMatrix(long n)
{
    msnd.NumericDenseArray<double> columnVector = ProbabilityDistributions.Uniform(0.1d, 0.9d, new long[] { n, 1 });
    msnd.NumericDenseArray<double> rowVector = ProbabilityDistributions.Uniform(0.1d, 0.9d, new long[] { 1, n });
    msnd.NumericDenseArray<double> diagVector = ProbabilityDistributions.Uniform(2.0d, 4.0d, new long[] { n });
    var A = Operations.MatrixMultiply(columnVector, rowVector);
    // Make matrix symmetric
    A = A + A.Transpose();
    // Increase the diagonal values to make sure the matrix is positive-definite
    for (long i = 0; i < n; i++)
    {
        A[i, i] = n * diagVector[i];
    }
    return A;
}

We’ll time 5 repeated computations of LinearSolvers.General.

static void TimeLinearSolversGeneral(int nIter, msnd.NumericDenseArray<double> A, msnd.NumericDenseArray<double> b, msnd.NumericDenseArray<double> delta)
{
    msnd.NumericDenseArray<double> x;
    DateTime t0 = DateTime.Now;
    for (int i = 0; i < nIter; i++)
    {
        x = LinearSolvers.General(A, b);
        x = x + delta;
    }
    Console.WriteLine("General linear solve time: {0} seconds ", (DateTime.Now - t0).TotalMilliseconds / 1000.0);
}

Next, we perform the same computation using Cholesky decomposition: we store the decomposition and re-use it as input to the LinearSolvers.Cholesky method. Note that because the matrix is assumed symmetric, the Cholesky method only uses values in its upper or lower triangle, as specified by the UpperLower argument.

static void TimeLinearSolversCholesky(int nIter, msnd.NumericDenseArray<double> A, msnd.NumericDenseArray<double> b, Microsoft.Numerics.Distributed.NumericDenseArray<double> delta)
{
    msnd.NumericDenseArray<double> x;
    DateTime t1 = DateTime.Now;
    var choleskyDecomposition = Decompositions.Cholesky(A, UpperLower.UpperTriangle);
    x = LinearSolvers.Cholesky(choleskyDecomposition, b);
    for (int i = 0; i < nIter; i++)
    {
        x = LinearSolvers.Cholesky(choleskyDecomposition, b);
        x = x + delta;
    }
    Console.WriteLine("Linear solve time using Cholesky: {0} seconds ", (DateTime.Now - t1).TotalMilliseconds / 1000.0);
}

The Cholesky code should be 6-7 times faster.

Examples as single code

This concludes our tour of the LinearAlgebra namespace. To try the examples yourself, you can create a Cloud Numerics application and copy and paste the code below:

using System;
using System.Text;
using Microsoft.Numerics;
using msnl = Microsoft.Numerics.Local;
using msnd = Microsoft.Numerics.Distributed;
using Microsoft.Numerics.Statistics;
using Microsoft.Numerics.LinearAlgebra;

namespace MSCloudNumericsApp
{
    public class Program
    {
        static void LargestEigenVector()
        {
            long size = 20;
            var D = ProbabilityDistributions.Uniform(0d, 1d, new long[] { size, size });
            var x = ProbabilityDistributions.Uniform(0d, 1d, new long[] { size, 1 });
            Microsoft.Numerics.Local.NumericDenseArray<double> xnew;
            int maxiter = 20;
            double tol = 1E-6;
            int iter = 0;
            double norm = 1.0;
            while (iter < maxiter && norm > tol)
            {
                iter++;
                xnew = Operations.MatrixMultiply(D, x);
                xnew = xnew / Operations.Norm(xnew, NormType.TwoNorm);
                norm = Operations.Norm(x - xnew, NormType.InfinityNorm);
                x = xnew;
                Console.WriteLine("Iteration {0}:\t Norm of difference: {1}", iter, norm);
            }
            Console.WriteLine("Largest eigenvector: {0}", x);
        }

        static void TestSvdConvergence()
        {
            long rows = 20;
            long columns = 10;
            var D = ProbabilityDistributions.Uniform(0d, 1d, new long[] { rows, columns });
            var svdResult = Decompositions.SvdThin(D);
            var approxS = msnl.NumericDenseArrayFactory.Zeros<double>(new long[] { columns, columns });
            for (int i = 0; i < columns; i++)
            {
                approxS[i, i] = svdResult.S[i];
                var reconstructedD = Operations.MatrixMultiply(svdResult.U, Operations.MatrixMultiply(approxS, svdResult.V));
                double norm = Operations.Norm(D - reconstructedD, NormType.FrobeniusNorm);
                Console.WriteLine("Frobenius norm of difference with {0} singular values {1} ", i + 1, norm);
            }
        }

        static msnd.NumericDenseArray<double> CreatePositiveDefiniteSymmetricMatrix(long n)
        {
            msnd.NumericDenseArray<double> columnVector = ProbabilityDistributions.Uniform(0.1d, 0.9d, new long[] { n, 1 });
            msnd.NumericDenseArray<double> rowVector = ProbabilityDistributions.Uniform(0.1d, 0.9d, new long[] { 1, n });
            msnd.NumericDenseArray<double> diagVector = ProbabilityDistributions.Uniform(2.0d, 4.0d, new long[] { n });
            var A = Operations.MatrixMultiply(columnVector, rowVector);
            // Make matrix symmetric
            A = A + A.Transpose();
            // Increase the diagonal values to make sure the matrix is positive-definite
            for (long i = 0; i < n; i++)
            {
                A[i, i] = n * diagVector[i];
            }
            return A;
        }

        static void TimeLinearSolversGeneral(int nIter, msnd.NumericDenseArray<double> A, msnd.NumericDenseArray<double> b, msnd.NumericDenseArray<double> delta)
        {
            msnd.NumericDenseArray<double> x;
            DateTime t0 = DateTime.Now;
            for (int i = 0; i < nIter; i++)
            {
                x = LinearSolvers.General(A, b);
                x = x + delta;
            }
            Console.WriteLine("General linear solve time: {0} seconds ", (DateTime.Now - t0).TotalMilliseconds / 1000.0);
        }

        static void TimeLinearSolversCholesky(int nIter, msnd.NumericDenseArray<double> A, msnd.NumericDenseArray<double> b, Microsoft.Numerics.Distributed.NumericDenseArray<double> delta)
        {
            msnd.NumericDenseArray<double> x;
            DateTime t1 = DateTime.Now;
            var choleskyDecomposition = Decompositions.Cholesky(A, UpperLower.UpperTriangle);
            x = LinearSolvers.Cholesky(choleskyDecomposition, b);
            for (int i = 0; i < nIter; i++)
            {
                x = LinearSolvers.Cholesky(choleskyDecomposition, b);
                x = x + delta;
            }
            Console.WriteLine("Linear solve time using Cholesky: {0} seconds ", (DateTime.Now - t1).TotalMilliseconds / 1000.0);
        }

        public static void Main(string[] args)
        {
            // Initialize Runtime
            NumericsRuntime.Initialize();
            LargestEigenVector();
            // Test convergence of matrix approximation from singular value decompositions
            TestSvdConvergence();
            int nIter = 5;
            long n = 2000;
            msnd.NumericDenseArray<double> A = CreatePositiveDefiniteSymmetricMatrix(n);
            msnd.NumericDenseArray<double> b = ProbabilityDistributions.Uniform(0.1d, 0.9d, new long[] { n });
            msnd.NumericDenseArray<double> delta = ProbabilityDistributions.Uniform(0.1d, 0.9d, new long[] { n });
            // Time general algorithm
            TimeLinearSolversGeneral(nIter, A, b, delta);
            // Time Cholesky
            TimeLinearSolversCholesky(nIter, A, b, delta);
            // Shut down the Microsoft Numerics runtime.
            NumericsRuntime.Shutdown();
        }
    }
}

It’s clear to me from the preceding narrative and code that solving linear algebra problems with Codename “Cloud Numerics” isn’t a piece of cake. For more details about using Codename “Cloud Numerics”, Visual Studio and Windows Azure, see my earlier tutorials.


Shreekanth Joshi wrote ISV Guest Blog Series: Persistent Systems Takes to Windows Azure– Delivers CloudNinja for Java which David Makogon posted to the Windows Azure blog on 4/23/2012:

Editor’s Note: Today’s post, written by Shreekanth Joshi, AVP Cloud Computing at Persistent Systems, describes how the company uses Windows Azure to develop and deliver Java-based applications for their ISV customers.

Persistent Systems is a global company that specializes in software product and technology services. We focus on developing best-in-class solutions in four key next-generation technology areas: Cloud Computing, Mobility, BI & Analytics, and Collaboration. Persistent Systems has been an early entrant in the cloud space and has partnered with many pioneering start-ups and innovative enterprises to help develop and deploy various cloud applications. We’ve utilized our finely-tuned product engineering processes to develop innovative solutions for more than 300 customers across North America, Europe, and Asia.

Building on our SaaS capabilities and experience, we’ve established dedicated competency centers for leading cloud platforms. As active participants in the Windows Azure community, we have released Open Source Projects including:

Persistent Systems has recently launched a new open source project, CloudNinja for Java as described below.

CloudNinja for Java

The demand for Java-based applications on Windows Azure is increasing as customers realize that the openness of Windows Azure can provide scalability and high availability for their Java applications. We get a lot of questions, ranging from how to design various project components to manage single-tenant and multi-tenant applications, to how to integrate project components with Windows Azure services. The challenge often faced by our customers while learning Windows Azure is that there is only a limited number of informative articles and code samples that cover platforms other than .NET.

Windows Azure is often perceived as being a .NET Cloud Platform, which isn’t true. This misconception is based on demos and how-to blogs that are written around Microsoft Visual Studio. As it turns out, Windows Azure provides virtual machines that are either Windows Server 2008 SP2 or Windows Server 2008 R2, meaning that most Windows-based executables or scripts can be run on Windows Azure.

To increase awareness about the openness of Windows Azure, we recently released CloudNinja for Java, a reference application to illustrate how to build a multi-tenant application for Windows Azure. CloudNinja for Java encompasses the following features and functionalities:

  • Tenant on-boarding
  • Tenant-level customization (for example, managing logos)
  • Per-tenant data isolation
  • Per-tenant metering
  • Providing support for log-in via different identity providers. For example, Yahoo!, Google, and Windows Live ID
  • General purpose task scheduler

This application is built on several common OSS libraries, such as Spring, Hibernate, Log4j and jqPlot.

The project runs in Windows Azure and was developed entirely using Windows Azure Plugin for Eclipse with Java. Here is the illustration that depicts the architecture of CloudNinja.

We utilized various Windows Azure services in the development and deployment of CloudNinja for Java. Some of the most important ones are:

We believe that CloudNinja for Java will be beneficial to the Java community and will encourage Java developers to create their own applications for Windows Azure.

When deploying a Java application for Windows Azure, there are a few things to consider:

  • Whether to bundle 3rd-party tools and runtimes with deployments
  • Monitoring an application
  • Dealing with missing SDK functionality
Whether to Bundle 3rd-party Tools and Runtimes with Deployments

Windows Azure plugin for Eclipse combines a Java application, along with a Windows Azure project, into a package that can be uploaded and deployed to Windows Azure. The Windows Azure project contains a startup script responsible for setting up the Java environment and starting the application, including:

  • Web Server setup (in our case, Apache Tomcat)
  • JRE setup
  • WAR file deployment
  • Environment variable configuration

When a Windows Azure virtual machine instance is started, the environment needs to be set up. This is done by the startup script. Since Tomcat and the JRE are not part of the Windows Server VM image, we need to provide these ourselves. It’s very easy to include them in the Windows Azure deployment package. However, adding Tomcat and the JRE adds about 70 MB to the package size and couples the runtimes to the deployment. This has a few issues:

  • It takes longer to upload the deployment package to Windows Azure.
  • If we ever want to update the version of Tomcat or the JRE, we have to redeploy the entire package.

We don’t need to include Tomcat and the JRE with our deployment package. To avoid the abovementioned issues, we stored our runtimes in Windows Azure Blob storage. The startup script downloads Apache Tomcat and the JRE from the Blob storage and then starts Apache Tomcat. The download from the Windows Azure Blob storage to the virtual machine storage is extremely fast, since our virtual machines and storage accounts are in the same data center. The package size for such Java applications was reduced by the total size of Apache Tomcat and the JRE installable.
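To make the idea concrete, here is a minimal sketch of the kind of blob download such a startup step performs, written against the .NET storage client (Microsoft.WindowsAzure.StorageClient) from the SDKs of that era. The actual CloudNinja startup script is a separate artifact, and the connection string, container, blob names, and local paths below are assumptions.

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class RuntimeDownloader
{
    static void Main()
    {
        // Connection string, container, and blob names are illustrative only.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=yourAccount;AccountKey=yourKey");
        var blobClient = account.CreateCloudBlobClient();
        var container = blobClient.GetContainerReference("runtimes");

        // Pull the web server and JRE packages onto the role instance's local disk.
        container.GetBlobReference("apache-tomcat.zip").DownloadToFile(@"C:\runtime\apache-tomcat.zip");
        container.GetBlobReference("jre.zip").DownloadToFile(@"C:\runtime\jre.zip");

        // A real startup script would then unzip these and launch Tomcat.
    }
}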

To upgrade our deployment to the latest Tomcat and JRE, we only needed to copy the new versions to Blob storage. Then, using the Windows Azure portal, we reimaged all of the virtual machine instances that host our application, and we were running the latest Tomcat/JRE version without having to redeploy any of our code!

Monitoring an Application

Windows Azure provides a diagnostics monitor that’s able to capture performance counters, event logs, file directories, Windows Azure infrastructure logs and crash dumps. The default Windows Azure project with Java doesn’t offer diagnostics configuration. For CloudNinja for Java, we created a standalone utility that sets up specific performance counters we wanted to watch (mainly CPU utilization).

We also needed to look at Tomcat access logs and Storage Analytics logs to generate usage statistics for each tenant. To capture Tomcat logs, we configured Windows Azure Diagnostics to monitor files in the directory where Tomcat stores its access log files. The diagnostics monitor pushes the access logs to blobs, where we access them from a worker process later. This works extremely well, since we’re able to access logs for each running instance of Tomcat. Since these files are stored in Windows Azure storage, it’s easy to access them from external programs as well, so we could manually inspect the log files if something does not work as per our expectation.
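As a rough sketch of what such a diagnostics setup looks like with the Windows Azure Diagnostics API of that SDK generation, the snippet below adds a CPU performance counter and a custom log directory and starts the monitor. The directory path, container name, quotas, and transfer intervals are assumptions, not CloudNinja’s actual values.

using System;
using Microsoft.WindowsAzure.Diagnostics;

static class DiagnosticsSetup
{
    public static void Configure()
    {
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Watch overall CPU utilization on the instance.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)          // illustrative sample rate
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        // Ship Tomcat's access logs to blob storage (path and container are hypothetical).
        config.Directories.DataSources.Add(new DirectoryConfiguration
        {
            Path = @"C:\tomcat\logs",
            Container = "wad-tomcat-access-logs",
            DirectoryQuotaInMB = 512
        });
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        // "DiagnosticsConnectionString" was the conventional setting name in SDK 1.x projects.
        DiagnosticMonitor.Start("DiagnosticsConnectionString", config);
    }
}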

When it comes to monitoring blob storage, the story changes a bit. In CloudNinja for Java, we store tenant logos in blob storage. We have image tags in our HTML output that point directly to these logos, and those requests don’t go through the Apache Tomcat server. To capture these blob accesses and the related bandwidth, Windows Azure provides Storage Analytics. There are two types of analytics: logging, which provides detailed statistics on every single blob access, and metrics, which provide hourly rollups. Analytics are disabled by default. For CloudNinja for Java, we enabled logging for our Windows Azure Storage account. These logs provide details such as timestamp, source IP, blob being accessed, and result status (successful or failed access). Storage Analytics is a great Windows Azure feature that was especially helpful for CloudNinja for Java, being a multi-tenant application for which we wanted to capture tenant-level storage usage.

While debugging, logging is not always enough. We enabled remote access, allowing our developers to connect to the virtual machines that host our application. Through the Windows Azure portal, we can open up a Remote Access connection to any running instance.

Dealing with Missing SDK Functionality

Some Windows Azure features are not yet provided in the Windows Azure SDK for Java. One such feature on which we really depend is the Access Control Service (ACS). We wanted to use ACS in CloudNinja for Java to allow tenants to log in using one of several identity providers, such as Google, Yahoo! and Live ID.

As it turns out, ACS has a complete REST-based management interface. With a bit of help from a REST library (Restlet), we were able to build ACS into CloudNinja for Java. The same technique could be used for any other REST-based feature in Windows Azure. Fortunately, the Windows Azure SDK for Java covers quite a bit today:

  • Service Runtime
  • Storage (blobs, queues, tables)
  • Service Bus
Summary

Windows Azure is a core technology for Persistent Systems, and we continue to build Java-based solutions for our customers. We successfully use almost every feature of Windows Azure in Java applications, without having to write .NET code. This is illustrated very well in CloudNinja for Java. For example, we interacted with the ACS Management Service even though Microsoft’s Windows Azure SDK for Java does not have API support for this service. We could successfully utilize the REST-based API to create relying party applications that interact with the ACS Management Service. This REST-based approach really opens up opportunities to develop Windows Azure applications in any programming language.
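For a sense of what that REST interaction looks like, here is a heavily simplified C# sketch (CloudNinja itself uses Java and Restlet). The endpoint shapes are recalled from the ACS 2.0 management samples of the time – an OAuth2 token request followed by a call to the management OData service – so treat the URLs, header format, and field names as assumptions to verify against the ACS documentation rather than as a reference.

using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;

class AcsManagementSketch
{
    static void Main()
    {
        const string ns = "yournamespace";  // ACS namespace (assumption)
        string baseAddress = "https://" + ns + ".accesscontrol.windows.net/";

        // 1. Request an OAuth2 token for the management service (endpoint and fields as recalled).
        var form = new NameValueCollection
        {
            { "grant_type", "client_credentials" },
            { "client_id", "ManagementClient" },
            { "client_secret", "yourManagementKey" },
            { "scope", baseAddress + "v2/mgmt/service/" }
        };
        byte[] tokenResponse;
        using (var client = new WebClient())
        {
            tokenResponse = client.UploadValues(baseAddress + "v2/OAuth2-13", form);
        }
        string token = ExtractAccessToken(Encoding.UTF8.GetString(tokenResponse));

        // 2. Call the management OData service, e.g. list relying parties.
        using (var client = new WebClient())
        {
            client.Headers.Add(HttpRequestHeader.Authorization, "Bearer " + token);
            string relyingParties = client.DownloadString(baseAddress + "v2/mgmt/service/RelyingParties");
            Console.WriteLine(relyingParties);
        }
    }

    static string ExtractAccessToken(string json)
    {
        // Crude parse of the "access_token" field; a real client would use a JSON library.
        const string key = "\"access_token\":\"";
        int start = json.IndexOf(key, StringComparison.Ordinal) + key.Length;
        int end = json.IndexOf('"', start);
        return json.Substring(start, end - start);
    }
}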

We encourage the Java community to refer to CloudNinja for Java when building a multi-tenant application for Windows Azure.

Read the blog post, “Introducing CloudNinja for Java”, on the Persistent Systems blog for more information.


Joe Panettieri (@joepanettieri) reported Avanade Cloud Services Manager: Private, Public Cloud Management? in a 4/24/2012 post to the TalkinCloud blog:

imageAvanade has launched Cloud Services Manager, an extension to Microsoft System Center 2012 that allows customers to manage virtualized environments (VMware, Citrix XenServer and Microsoft Hyper-V) across private clouds and public clouds — including Amazon Web Services and Windows Azure.

According to Avanade: “Cloud Services Manager delivers a single view of all private cloud services that is automatically customized for a person’s role, giving them the most relevant information to scale up or down a private cloud service.”

No doubt, thousands of VARs and MSPs are seeking a single dashboard to manage public and private clouds. But Avanade is an MSP (managed services provider) in its own right. I doubt the company will sell or offer Cloud Services Manager to third-party MSPs (but I’ve asked the company if that’s the case and will update this blog when I receive an answer).

Avanade was originally funded by Microsoft and Accenture in 2000 to focus on Microsoft-centric consulting services — such as Windows Server and Exchange Server deployments. These days, Avanade is a subsidiary of Accenture.

Avanade says the Cloud Services Manager solution “brings together multiple self-service activities, such as provisioning or changing new services and monitoring costs.” A drag-and-drop user interface, Avanade says, can help customers to design and validate private cloud services more quickly.

Among Avanade’s key focus areas for the tool: Delivering CRM solutions via public, hybrid or private clouds.


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

Tim Anderson (@timanderson) asked Microsoft’s Visual Studio LightSwitch: does it have a future? in a 4/25/2012 post:

imageA recent and thorough piece on Visual Studio LightSwitch [see below] prompted a Twitter discussion on what kind of future the product has. Background:

  • LightSwitch is an application generator which builds data-driven applications.
  • A LightSwitch application uses ASP.NET on the server and Silverlight on the client.
  • LightSwitch applications can be deployed to Windows Azure
  • LightSwitch apps can either be browser-hosted or use Silverlight out of browser for the desktop
  • LightSwitch is model-driven so in principle it could generate other kinds of client, such as HTML5 or Windows 8 Metro.
  • LightSwitch first appeared last year, and has been updated for Visual Studio 11, now in beta.

I have looked at LightSwitch in some detail, including a hands-on where I built an application. I have mixed feelings about the product. It was wrongly marketed, as the kind of thing a non-professional could easily pick up to generate an application for their business. In my opinion it is too complex for most such people. The real market is professional developers looking for greater productivity. As a way of building a multi-tier application which does its best to enforce good design principles, LightSwitch is truly impressive; though I also found annoyances like skimpy documentation, and that some things which should have been easy turned out to be difficult. The visual database designer is excellent.

The question now: what kind of future does LightSwitch have? Conceptually, it is a great product and could evolve into something useful, but I question whether Microsoft will stick with it long enough. Here is what counts against it:

  • The decision to generate Silverlight applications now looks wrong. Microsoft is not going to do much more with Silverlight, and is more focused on HTML5 and JavaScript, or Windows Runtime for Metro-style apps in Windows 8 and some future Windows Phone. There is some family resemblance between Windows Runtime and Silverlight, but not necessarily enough to make porting easy.
  • There is no mobile support, not even for Windows Phone 7 which runs Silverlight.
  • I imagine sales have been dismal. The launch product was badly marketed and perplexing to many.

What about the case in favour? Silverlight enthusiast Michael Washington observes that the new Visual Studio 11 version of LightSwitch generates OData feeds on the server, rather than WCF RIA Services. OData is a REST-based service that is suitable for consumption by many different kinds of client. To prove his point, Washington has created demo mobile apps using HTML5 and JQuery – no Silverlight in sight.


Washington also managed to extract this comment from Microsoft’s Steve Hoag on the future of LightSwitch, in an MSDN forum discussion:

LightSwitch is far from dead. Without revealing anything specific I can confirm that the following statements are true:

- There is a commitment for a long term life of this product, with other versions planned

- There is a commitment to explore creation of apps other than Silverlight, although nothing will be announced at this time

Hoag is the documentation lead for LightSwitch.

That said, Microsoft has been known to make such commitments before but later abandon them. Microsoft told me it was committed to cross-platform Silverlight, for example. And it was, for a bit, at least on Windows and Mac; but it is not now. Microsoft was committed to IronRuby and IronPython, once.

For those with even longer memories, I recall a discussion on CompuServe about Visual Basic for DOS. This was the last version of Microsoft Basic for DOS, a fine language in its way, and with a rather good character-based interface builder. Unfortunately it was buggy, and users were desperate for a bug-fix release. Into this discussion appeared a guy from Microsoft, who announced that he was responsible for the forthcoming update to Visual Basic for DOS and asked for the top requests.

Good news – except that there never was an update.

The truth is that with LightSwitch still in beta for Visual Studio 11, it is unlikely that any decision has been made about its future. My guess, and it is only that, is that the Visual Studio 11 version will be little used and that there will be no major update. If I am wrong and it is a big hit, then there will be an update. If I am right about its lack of uptake, but its backing within Microsoft is strong enough, then maybe in Visual Studio 12 or even sooner we will get a version that does it right, with output options for cross-platform HTML5 clients and for Windows Phone and Windows Metro. But do not hold your breath.

Related posts:

  1. Microsoft releases Visual Studio LightSwitch: a fascinating product with an uncertain future
  2. Visual Studio LightSwitch – model-driven architecture for the mainstream?
  3. Ten things you need to know about Microsoft’s Visual Studio LightSwitch

The fact that “LightSwitch applications can be deployed to Windows Azure” easily is the reason I cover Visual Studio LightSwitch in this blog.

I wrote a tongue-in-cheek Visual Basic for MS-DOS is a Killer App of All Time? post to the OakLeaf blog in December 2006. Unlike LightSwitch, Visual Basic for MS-DOS (codenamed “Escher”) was a dog. I know because I was a beta tester for both Visual Basic for Windows and MS-DOS. I stuck with the former and never regretted it. I asked in my An (Almost) Lifetime of BASIC essay of 2001 “Does anyone remember VB for DOS, a.k.a. Escher?”

Hopefully, Visual Studio LightSwitch will live long and prosper.


Michael Simons posted User Defined Relationships within Attached Database Data Sources (Michael Simons) to the Visual Studio LightSwitch blog on 4/24/2012:

The first release of Visual Studio LightSwitch (LightSwitch V1) allows users to define relationships between tables within the intrinsic/built-in data source (ApplicationData). When attaching to existing data sources, LightSwitch will import the relationships defined within the data source. In addition, LightSwitch allows users to define relationships between tables of different data sources; these are called virtual relationships. LightSwitch V1 does not, however, allow users to define relationships within any attached data source. This can be an issue when attaching to certain database data sources because it is common for the logical relationships not to be defined within the database schema as referential integrity constraints.

image_thumb1When you define relationships between your entities, LightSwitch can provide a much better experience when creating screens. Instead of manually defining queries to pull related records of data, LightSwitch can do that automatically for you based on the relationship. In addition, if you do need to define custom queries, having the relationships defined often makes it easier to filter the data as desired.

Within Visual Studio 11 Beta, a feature has been added to LightSwitch that allows users to define relationships within attached database data sources. These relationships behave just as if they were modeled within the data source.

Example

Let’s take a look at how to define a relationship within an attached database data source within LightSwitch. For this example, I have defined the following data schema in a SQL database. Notice the SalesOrder contains a Customer foreign key but there is no relationship defined between SalesOrder and Customer.

image

Because the database does not define this relationship, LightSwitch will not import it when you attach to the database. To define this relationship, you can use the Add Relationship functionality within LightSwitch.

image

From the Add New Relationship dialog, you must first pick which tables you want to relate.

image

If two tables are picked within the same attached database data source, a field mapping section will appear within the Add New Relationship dialog. You must pick the fields the relationship is defined on. This may look familiar as it is the same experience for defining a virtual relationship. Validation messages at the bottom of the dialog will guide you in defining the relationship correctly.

image

In addition to defining the field mappings, you will also be able to define the multiplicity and navigation property names.

image

Once you press OK on the Add New Relationship dialog, the relationship will be added and will appear within the table designer.

image

One thing I find useful is to uncheck the Display by Default property of all foreign key properties that are part of these user-defined relationships, because the associated reference property is displayed by default and that is what the end user is going to want to see.

image

Once the relationship is defined, it can be used just as if the relationship was defined within the data source. To illustrate this, you can define a screen that shows all SalesOrders for Customers who live in a particular state. To do this you would first need to create an Editable Grid screen for the SalesOrders EntitySet.

image

Next click on the Edit Query link of the SalesOrders screen member and define the query to look like the following.

image

Once this is done, you should navigate back to the screen designer and drag the State parameter onto the screen. Also for demonstration purposes, you can add the Customer.State property to the screen.

image

When this application is run, you should see something similar to the following.image

When you enter a state parameter value, you can see how the filter previously defined on the relationship works.

image

Detailed Behavior

As noted earlier, any relationships defined within an attached database data source can be used just as if the relationship were defined within the database schema.

  1. You can define screens that display data across the relationship.
  2. You can define queries that filter and sort data across the relationship.
  3. You can define screens that eagerly load data across the relationship. This can improve the performance of loading certain types of screens.
  4. When inserting, updating or deleting data, you don’t need to worry about persistence ordering because LightSwitch will handle all of this for you. For example, suppose a ChangeSet is constructed that contains a new Customer instance and a new SalesOrder instance for that Customer. If the Customer table has an automatically assigned identity within the database, then LightSwitch will ensure the Customer is inserted first so its identity is established. Once this is done, the Customer foreign key held by the SalesOrder will be updated by LightSwitch so that it correctly references the newly inserted Customer.

As with most features, there are always constraints.

  1. Relationships cannot be defined between tables of attached OData, SharePoint, and Custom RIA data sources. They can only be defined within the intrinsic data source as well as attached database data sources. If you are working with a custom RIA data source, then you own/wrote the data source and can define the relationship yourself within your custom data source implementation. If you are working with a credible OData or SharePoint data source, then it should have already defined the logical relationships that exist and there should be no need to define any other relationships.
  2. Relationships must be defined on the primary key of the table on the primary side of the relationship. You will not be able to define relationships that are based on shared/common values.
  3. The multiplicity of the relationship must match the nullability of the foreign key properties. If the foreign key property is required then the relationship must be a ‘One’ on the foreign key side of the relationship. Likewise if the foreign key property is nullable, then the relationship must be a ‘Zero or One’ on the foreign key side of the relationship.
  4. In some databases, nullable columns are not used to indicate an optional foreign key. Instead, alternative values, such as an empty string or 0, are used. This is commonly referred to as a sentinel value. In these situations, you will be forced to model the associations as required (1-Many instead of 0..1-Many), and updates to the data source may be blocked by the built-in validation logic if the sentinel value does not reference a valid row in the destination table.

Conclusion

With LightSwitch in Visual Studio 11 you can now define relationships in external database data sources. Hopefully you will find this feature useful and that it will allow you to more easily build rich applications with LightSwitch.


Alexandra Weber Morales wrote Flipping on LightSwitch and SD Times on the Web published it on 4/23/2012:

imageEveryone’s aiming for the “citizen developer,” but lately that elusive customer has led Microsoft on a snipe hunt. With LightSwitch, the company has a rapid application development (RAD) tool that no one—not even Redmond or its partners—seems to know quite what to do with.

When Microsoft announced the Visual Studio LightSwitch beta in August 2010, the target audience was Access users. The developer community response was predictably negative, envisioning monstrosities similar to what corporate users often created with Microsoft Access, which grew until they “became self-aware and literally devoured the entire office,” blogged Kevin Hoffman, “The .NET Addict,” in July 2010.

“Finally, as this story always ends the same way, someone from IT was tasked with converting this Access DB/Excel sheet/pile of sticky notes into a legitimate, supportable, maintainable application.”

One year later, with the first release of Visual Studio LightSwitch, Microsoft had refined its target: both end-user developers and professional developers. But several Microsoft partners who offer Silverlight controls have only cautiously tinkered with LightSwitch extensions. Developers continue to disdain the IDE despite its proclaimed utility for prototyping and productivity. And fear abounds over Microsoft’s Silverlight strategy as the market shifts away from that cross-platform plug-in toward HTML5 plus JavaScript.

In February, Microsoft announced exciting changes for LightSwitch in the Visual Studio 11 beta, which comes with a Go Live license, meaning projects built with the beta can be deployed to production. With major enhancements such as OData support, the IDE is poised for greatness, according to its small but vocal fan club. Amid all the obfuscation, will LightSwitch finally shine?

Doing the right thing
“There’s a lot of misinformation around LightSwitch,” said Beth Massi, Microsoft’s senior program manager for the Visual Studio community. “This is definitely a developer tool. It is in Visual Studio. We’re targeting an end-user developer. It is RAD, not a code generator. You are describing an app, and it is making technology choices for you under the hood. Basically, the only code you write is the only code you could write, which is the business logic.”

Like dBase, Visual FoxPro and Visual Basic before it, LightSwitch is designed for quickly building data-centric apps. Unlike these earlier dev tools, however, it abstracts today’s .NET stack for three-tier applications with:

  • Presentation in Silverlight
  • Logic in C# or VB .NET, and data via OData and the Entity Framework, both hosted in ASP.NET
  • Data storage in SQL Server Express (included in LightSwitch), SQL Server or, in the cloud, SQL Azure

“It’s designed to launch users down a path of success,” praised Rich Dudley, developer evangelist for ComponentOne, a maker of Windows and .NET controls. “If you start your app in LightSwitch, later you can deploy it to a Web server where the department can consume it, and you can scale and maintain it,” he said.

Unlike Access, he said, “Here’s a tool that will get you 80 to 90% of what you need and lighten the load off your developers. Microsoft always likes to highlight that it’s forcing the Model View ViewModel pattern...plus the Entity Framework, OData services for the communication layer, and Silverlight for the UI.” His colleague, product manager Dan Beall, concurred: “It’s actually forcing you to do something right.”

Productive, portable and scalable
The promise of LightSwitch, then, is for quickly building line-of-business applications that manipulate structured data, such as customers and orders, while being portable and scalable.

From a UI standpoint, LightSwitch screen design is hierarchical and declarative, replacing the typical WYSIWYG form designer with a process whereby the detail in the data model itself allows a UI to be interpreted and generated (not to mention easily re-skinned or updated without altering a line of code).

From a data standpoint, the ADO.NET Entity Framework lets you program against a conceptual application model instead of a specific relational storage schema, which avoids hard-coded dependencies. LightSwitch is designed to sidestep the typical context shifting that slows citizen developers down when they must switch from specifying application behavior to defining regular expressions. Business Types, such as Email or PhoneNumber, come with appropriate input masks and UI behavior.

Authentication, permissions and security are also built into LightSwitch’s underlying technologies, including IIS and ASP.NET, and developers have the opportunity to make important access control decisions at each application layer.

Finally, when it comes to deployment, LightSwitch applications, which are generated in Silverlight, can run on desktops, inside the browser, or in the cloud. A SQL Server Express database is included in LightSwitch. Scalability is built-in, unlike those Access or shared Excel files business users are so fond of. And the MVVM pattern at the heart of the LightSwitch application is contained in an ApplicationDefinition.lsml (XML-compliant LightSwitch Markup Language) file, detaching it somewhat from platform specifics. …

Read more: Pages 2, 3, 4


<Return to section navigation list>

Windows Azure Infrastructure and DevOps

• The Windows Azure Operations Team reported [Windows Azure Compute] [North Central US] [Yellow] Windows Azure Service Management Degradation on 4/24/2012:

Apr 24 2012 10:00PM We are experiencing a partial service management degradation with Windows Azure Compute in the North Central US sub region. At this time some customers may experience errors while deploying new hosted services. There is no impact to any other service management operations. Existing hosted services are not affected and deployed applications will continue to run. Storage accounts in this region are not affected either. We are actively investigating this issue and working to resolve it as soon as possible. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.

Apr 24 2012 11:30PM We are still troubleshooting this issue, and capturing all the data that will allow us to resolve it. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.

Apr 25 2012 1:00AM We are working on the repair steps in order to address the issue. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.

Apr 25 2012 1:40AM The repair steps have been executed successfully, the partial service management degradation has been addressed and the resolution verified. We apologize for any inconvenience this caused our customers.

Following is a snapshot of the Windows Azure Service Dashboard details pane on 4/25/2012 at 9:00 AM PDT:

image

The preceding report was similar to the one I reported for the South Central US data center in my Flurry of Outages on 4/19/2012 for my Windows Azure Tables Demo in the South Central US Data Center post of 4/19/2012. That problem affected hosted services (including my Windows Azure tables demo app). I also encountered problems creating new Hadoop clusters on 4/24/2012. Apache Hadoop on Windows Azure runs in the North Central US (Chicago) data center, so I assume services hosted there were affected, too.

Update 4/25/2012 9:30 AM PDT: The problem continues with this notice:

Apr 25 2012 3:12PM We are experiencing a partial service management degradation with Windows Azure Compute in the North Central US sub region. At this time some customers may experience errors while carrying out service management operations on existing hosted services. There is no impact to creation of new hosted services. Deployed applications will continue to run. Storage accounts in this region are not affected either. We are actively investigating this issue and working to resolve it as soon as possible. Further updates will be published to keep you apprised of the situation. We apologize for any inconvenience this causes our customers.


• Michael Washam (@MWashamMS) appeared in CloudCover Episode 77 - Windows Azure Powershell Updates on 4/23/2012:

Join Nate and Nick each week as they cover Windows Azure. You can follow and interact with the show at @CloudCoverShow.

imageIn this episode, we are joined by Michael Washam — Senior Technical Evangelist for Windows Azure — who shows us some updates to the Windows Azure PowerShell Cmdlets. Michael walks us through a variety of scenarios that will help developers and administrators with management, deployment, and diagnostics on Windows Azure.

In the News:

In the Tip of the Week, we discuss a blog post by our good friend Wade Wegner. Wade shows us a quick tip on handling table storage queries that return no results using the .NET SDK.

Follow @CloudCoverShow
Follow @cloudnick
Follow @ntotten

"I just keep this up all day because I miss him." - @cloudnick //cc @WadeWegner twitter.com/ntotten/status…

— Nathan Totten (@ntotten) April 6, 2012


Greg Oliver (GoLiveMSFT) described Windows Azure Diagnostics–From the Ground Up on 4/21/2012:

image(The information in this post was created using Visual Studio 2010 and the Windows Azure SDK 1.6 in April of 2012. All tests were done in the compute emulator.)

I think of Azure diagnostics in two broad categories: the stuff that you get more-or-less for free and the stuff that you log with statements similar to Trace.WriteLine. There is more – quite a lot, actually – but with these two as the basis, the rest should flow more easily from the documentation on MSDN.

imageStarting with Visual Studio 2010, create a new cloud project and add a worker role. Delete the Diagnostics plug-in from your CSDEF file and the Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString setting from all CSCFG files (they have to match). Run the project. Take a look at the contents of the new GUID/directory/DiagnosticStore directory as depicted below. Put the lines back in the config files and run it again, examining the DiagnosticStore directory. Explore the Monitor directory deeply. You should see some pretty interesting things. This page gives the details:
http://msdn.microsoft.com/en-us/library/windowsazure/hh411546.aspx

What we’ve just demonstrated (with zero code) is the level of diagnostics information that you get more-or-less for free with the worker role and web role templates. This MSDN page details what we just did and what it means:
http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.windowsazure.diagnostics.diagnosticmonitor.aspx

Before:

image

After:

image

The information that’s being gathered is in a binary format, so you can’t view it easily. To do that, you need to set up a periodic transfer of the information to Azure table storage. This not only converts the information into a readable form, but also puts it into a location that is easy to access.

There is a singleton DiagnosticMonitor associated with each role instance (web or worker) in your service. This object defines how much, how often and what is pushed out to table storage. There is a factory default configuration embedded in the object that says “push nothing to table storage”. There are three straightforward ways of changing this. One of them is to add code to your role; OnStart is the typical location for this code, and putting it there affects all instances of the role. Here’s an example (#1): (information logged with this example appears in the Azure WADDiagnosticInfrastructureLogsTable table)

image
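
The code in the screenshot above isn’t copy-and-paste friendly, so here is a sketch of what example #1 typically looks like. This is a sketch only, not the literal code in the image, and it assumes the default Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString setting name that the role templates add:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the factory default configuration...
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // ...and schedule diagnostic infrastructure logs to be pushed to the
        // WADDiagnosticInfrastructureLogsTable table once a minute.
        config.DiagnosticInfrastructureLogs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        config.DiagnosticInfrastructureLogs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}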

Another way (#2) is to edit a rendition of the default configuration file that is put into blob storage when your instances start up. There is one of these files for each instance. If you edit this file, the DiagnosticMonitor will pick up the change and adjust its behavior accordingly. So where are you going to find these files? In a container called “wad-control-container”. Which storage account? The one designated by the Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString setting in your CSCFG file. Seriously, try it! Don’t put the code above into OnStart; run your role, then edit the configuration file. Wait a couple of minutes and voilà! Diagnostics information will start dropping into table storage.

This page gives another code sample and some explanation of this:
http://msdn.microsoft.com/en-us/library/windowsazure/microsoft.windowsazure.diagnostics.diagnosticmonitorconfiguration.aspx

This page talks more about the various methods of altering DiagnosticMonitor’s configuration:
http://msdn.microsoft.com/en-us/library/windowsazure/hh411551.aspx
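
One of the methods that page covers is updating the wad-control-container configuration remotely through the diagnostics management API, which amounts to a programmatic version of the hand-editing described above. A rough sketch only: the storage connection string, deployment ID and role name below are placeholders, and the types come from Microsoft.WindowsAzure.Diagnostics.Management in the 1.x SDK:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;

class RemoteDiagnosticsUpdate
{
    static void Main()
    {
        // The same storage account your Diagnostics connection string points to.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("<diagnostics storage connection string>");

        // The deployment ID is shown in the portal (or the compute emulator UI).
        var deployment = new DeploymentDiagnosticManager(account, "<deployment ID>");

        foreach (var instanceManager in
                 deployment.GetRoleInstanceDiagnosticManagersForRole("WorkerRole1"))
        {
            DiagnosticMonitorConfiguration config = instanceManager.GetCurrentConfiguration();

            // Start pushing trace logs to the WADLogsTable every minute.
            config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
            config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

            // Writes the updated XML back to wad-control-container; the monitor
            // on each instance picks it up within a polling interval.
            instanceManager.SetCurrentConfiguration(config);
        }
    }
}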

So much for the more-or-less free stuff. What about sending trace output to the Azure logs?

This is almost as free. You have to write the Trace.WriteLine statements. And you have to tinker with the DiagnosticMonitorConfiguration a bit as well, but no more than you did for the previous stuff. Instead of DiagnosticMonitorConfiguration.DiagnosticInfrastructureLogs as in the example above, set values for the properties of DiagnosticMonitorConfiguration.Logs. You can do this in any of the three ways noted above (I only gave examples of two, but you get the point). The information shows up in WADLogsTable.
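
As a rough sketch of that change (the same OnStart pattern as example #1, again assuming the default connection-string setting name; add using System.Diagnostics; for Trace):

DiagnosticMonitorConfiguration config =
    DiagnosticMonitor.GetDefaultInitialConfiguration();

// Push basic (trace) logs to the WADLogsTable once a minute.
config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Information;
config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1.0);

DiagnosticMonitor.Start(
    "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

// Anywhere in the role code after that, trace statements flow to table storage:
Trace.WriteLine("Queue drained; sleeping for 10 seconds.", "Information");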

An important aspect of diagnostics is trace listeners. Listeners basically hook up information sources (IIS, your Trace.WriteLine code, etc.) with output mechanisms (a text file, the output window in Visual Studio, etc.). The web and worker role templates also include a listener for Azure diagnostics: Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener. This is configured in <system.diagnostics> in web.config or app.config. If you remove the listener, nothing breaks, but you won’t get any output, either. Try it! Just remove the listener from app.config, run your project, then take a look in the Resources directory (detailed above). It’s pretty empty.
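
If you would rather wire the listener up in code than in the config file, a minimal sketch looks like this (typically done once, early in the role’s startup; add using System.Diagnostics; and using Microsoft.WindowsAzure.Diagnostics;):

// Register the Azure diagnostics listener programmatically instead of in
// <system.diagnostics>; Trace.WriteLine output then flows to the local buffer.
Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
Trace.AutoFlush = true;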

Cost

Logging isn’t free! It’s using Windows Azure Storage, both blobs and tables. There are about 2.6 million seconds in a month. If you log trace information once per second and use one storage transaction per record, that’s about $2.60 / month at $0.01 per 10,000 storage transactions. Now multiply that by the number of things you’re logging. We are careful about writing multiple records at a time to minimize costs, but we can only do so much. Since you can configure logging so easily, be judicious about the amount and frequency of logging.

Tools

I used a couple of tools to assist with these tests. I used Azure Storage Explorer to access my developer storage directly – blobs/tables/queues. Cerebrata Azure Diagnostics Manager v2 did a really nice job of formatting the diagnostic information that was pushed into table storage. It allows you to look at it from offline, historical and live sources.


<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

image

No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

image_thumbNo significant articles today.


<Return to section navigation list>

Cloud Computing Events

• I’m (@rogerjenn) reporting that the Azure Insiders group has started a public Windows Azure Events section on Lanyrd. Following are a few of the new entries to the feed on 4/24/2012:



• Jan Van der Haegen (@janvanderhaegen) asked POLL: what #LightSwitch session would you attend at CommunityDays? on 4/22/2012:

imageO yes, thanks to Gill Cleeren, Community Days is the place to be in Belgium on June 21st, for the 6th time in a row! And what’s even better, at least one of the sessions will be a LightSwitch session, presented by yours truly.

image_thumb1I proposed two possible talks to Gill, and he just confirmed that at least one will definitely be on the agenda! Which is awesome news, but also implies I have to prioritize. Tell me, if you are / would be attending Community Days, which session would you prefer? …

See original post to vote.

Which LightSwitch session would you prefer if you are / would be attending Community Days 2k12?

1. Visual Studio LightSwitch: Rise of the citizen developer (300)

Gartner reports that by 2014, citizen developers will build at least 25% of new business applications. With Visual Studio LightSwitch, Microsoft’s new kid on the block, they now have the tooling to meet the needs of today’s digital world – in the cloud, mobile and social. As professional software engineers, it is our responsibility to guide them through the more complex aspects of developing line-of-business applications. This session will teach you the LightSwitch advanced customization techniques you need to know to lead the citizen developers to glory, or to join their ranks and build applications faster than you could ever imagine.

2. Visual Studio LightSwitch: Under the hood (400)

Visual Studio LightSwitch is the fastest racing car on the line-of-business application track: you’ve seen it race, you’ve seen it win, maybe you even took it for a spin yourself. This session offers you a front-row ticket to the pit stop, and gives you a glimpse of what’s under the hood. Whether you’re interested in the race car’s metadata-driven architecture, the DRY principle, code generation techniques, MEF, or how to add a new preset to the radio tuner, this is the session where you’ll get the exclusive insights you’re looking for.


• Alan Smith announced Stockholm, Friday 27th April: Hybrid Applications with Clemens Vasters + Win a Hot Air Balloon Ride! on 4/20/2012 (missed when posted):

The Sweden Windows Azure Group (SWAG) will hold a session on “Hybrid Applications – Building Solutions that span On-Premises Assets and the Cloud” with Clemens Vasters, principal technical lead for the Windows Azure Service Bus. It should be a great session with a lot of inside information from the guy behind the technology.

BaloonSmallMicrosoft Sweden has been kind enough to give us three “Windows Azure Hot Air Balloon” ride tickets, which we were tempted to use ourselves, but will raffle to three lucky attendees at the meeting. A great way to “move to the cloud…”.

Date: Friday 27th April, 17:30 – 20:30
Location: knowit Stockholm: Klarabergsgatan 60 4tr

Register here: http://swag10.eventbrite.com/

See you there.


Adam Hoffman (@stratospher_es) asked Don't have time for a full day Azure Bootcamp? How about taking the afternoon off? on 4/24/2012:

imageUSCloud

The Azure Bootcamps that we're giving right now are an awesome learning opportunity, and not to be missed if you can spare a day to your higher education (heh, heh, a cloud joke...). But maybe you don't have the time to commit a whole day to a full Azure bootcamp? Can you maybe just duck out early for the afternoon? If so, the Hands On Azure events might be just the ticket for you. They're already in full swing, but there's still a few left. St. Louis will be delivered by my good buddy Clint Edmonson, and the last 4 (Indy, Chicago, Waukesha and Downers Grove) will be delivered by yours truly.

imageSo, come on - play hooky and get some learnings... If you want to get a head start on the cloud, make sure to activate your MSDN Azure benefits or a 90-day trial before you get to class so that we can hit the ground running.


Kristian Nese (@KristianNese) reported that he’ll be presenting at Microsoft System Center Boot Camp in Tønsberg, Norway on 5/14 and 5/15/2012:

imageAre you a Microsoft Partner who wants to dig into the great new content that ships with System Center 2012?

Then you should sign up for the System Center Boot Camp in Tønsberg (Norway) at http://blogs.technet.com/b/partnerbloggen/archive/2012/03/01/er-du-klar-for-fremtiden-bli-med-p-229-lanseringsbootcamp-p-229-system-center-2012.aspx.

I’ll have quite a few sessions during the boot camp where I’ll be touching on Data Protection Manager, App Controller and...behold... Virtual Machine Manager.
At least, that’s my agenda, but I’ll also show the components in the context of a private cloud that also includes Orchestrator, Operations Manager and Service Manager - it’s all part of my demo environment, so I can give the audience the demos they need and expect, and a true understanding of how to increase the value they’re already giving to their customers and clients.

By the way, I’m also counting the days until next week, when I’ll talk about Windows Server 2012 and Hyper-V 3.0 for hosters, as well as SC 2012, during the "Hosting Day" at Microsoft HQ at Lysaker.
Alright, I have some slides to polish.


<Return to section navigation list>

Other Cloud Computing Platforms and Services

• Derrick Harris (@derrickharris) reported VMware buys big data startup Cetas in a 4/24/2012 post to the GigaOm Pro blog:

imageVMware has acquired Cetas, a Palo Alto, Calif.-based big data startup that provides analytics atop the Hadoop platform. Terms of the deal haven’t been disclosed yet, but Cetas is an 18-month-old company with tens of paying customers, including within the Fortune 1000, that didn’t need to rush into an acquisition. So, why did VMware make such a compelling offer?

Because VMware is all about applications, and big data applications are the next big thing. Hypervisor virtualization is the foundation of everything VMware does, but it’s just that — the foundation. VMware can only become the de facto IT platform within enterprise data centers if applications can run atop those virtualized servers.

That’s why VMware bought SpringSource, GemStone and WaveMaker, then actual application providers Socialcast and SlideRocket. It’s why VMware developed vFabric and created the Cloud Foundry platform-as-a-service project and service to make it as easy as possible to develop and run applications.

Cetas deployed on-premise

Cetas is the logical next step, a big data application that’s designed to run on virtual resources – specifically Amazon Web Services and VMware’s vSphere. In fact, Co-Founder and CEO Muddu Sudhakar told me, its algorithms were designed with elasticity in mind. Jobs consume resources while they’re running and then the resources go away, whether the software is running internally or in the cloud. There’s no need to sell physical servers along with the software.

It doesn’t hurt, either, that Cetas can help VMware compete on bringing big data to bear on its own infrastructure software. As Splunk’s huge IPO illustrated, there’s a real appetite for providing analytics around operational data, coming from both virtual machines and their physical hosts. In this regard, Cetas will be like the data layer that sits atop virtual servers, application platforms and the applications themselves, providing analytics on everything.

Sudhakar said this type of operational analysis is one of Cetas’ sweet spots, along with online analytics a la Google and Facebook, and enterprise analytics. The product includes many algorithms and analytics tools designed for those specific use cases out of the box (it even gives some insights automatically), but also allows skilled users to build custom jobs.

Going forward, Sudhakar said Cetas will continue to operate as a startup under the VMware umbrella — which means little will change for its customers or business model — while also working to integrate the software more tightly with the VMware family.


Microsoft is taking the home-grown approach with SQL Azure Labs app previews and the Apache Hadoop on Windows Azure framework.


• Jeff Barr (@jeffbarr) reported DynamoDB Scales Out to Three New Locations in a 4/24/2012 post:

imageEffective immediately, Amazon DynamoDB is available in three additional AWS Regions. Here's the complete list:

  • US East (Northern Virginia)
  • EU (Ireland)
  • Asia Pacific (Tokyo)
  • US West (Northern California) - New
  • US West (Oregon) - New
  • Asia Pacific (Singapore) - New

You can find the complete list of HTTP and HTTPS service endpoints here.

imageFor more information about DynamoDB, check out the DynamoDB Developer Guide. You may also find Dave Lang's recent post on Libraries, Mappers, and Mock Implementations to be of interest.

AWS Evangelist Matt Wood is planning to host a webinar on Building Applications with Amazon DynamoDB next month. Matt is a very highly regarded speaker and I'm really looking forward to watching his webinar. The webinar will address the use of time series data, avoiding hot spots, modeling of complex data structures, automatic scaling, CloudWatch metrics, and more.

As you may be able to tell from today's release and the recent release of the BatchWriteItem API, the DynamoDB team is definitely moving fast. If you are ready to move fast, consider these positions on the team:


Stuart J. Johnston reported AWS Marketplace offers Amazon EC2 customers one-click cloud in a 4/24/2012 post to the SearchCloudComputing.com blog:

The new AWS Marketplace gives IT customers a quick and easy way to launch cloud-based software in EC2 environments -- similar to Amazon.com’s one-click shopping experience.

The company launched Amazon Web Services (AWS) Marketplace on Thursday. Customers can pick an application from a menu of software and SaaS products and launch it with a single click. Billing is on an hourly or monthly basis and appears on the customer's regular AWS bill.

The Marketplace is a natural expansion of Amazon’s public cloud service, and it fits well with the company’s existing retail business model, said Al Hilwa, program director for applications development software at IDC, an analyst firm in Framingham, Mass.

"We have seen marketplaces for everything, and so why not one for cloud services?" Hilwa added.

Initial offerings available via the AWS Marketplace include a selection of development and business software, including software infrastructure, developer tools and business applications, Werner Vogels, CTO of Amazon.com, said in a blog post.

"Once you find software you like, you can deploy that software to your own EC2 [Elastic Compute Cloud] instance with one click ... or using popular management tools like the AWS Console," he added.

Customers can choose offerings from 10gen Inc., CA Technologies, Canonical, Couchbase, Check Point Software Technologies, IBM, Microsoft, Red Hat, SAP and Zend Technologies. To participate in the Marketplace, developers upload their software as an Amazon Machine Image and specify the hourly cost; AWS does the rest.

Even though there are similarities to other so-called "marketplaces," analysts said the offering is not a clone.

"It boils down to Amazon's emergence as a key distribution point for services and technology that reaches well beyond peddling infrastructure on demand and really brings [Amazon] much closer to something like Google," said Carl Brooks, analyst for infrastructure and cloud computing at Tier1 Research, a division of 451 Research based in New York.

While the AWS Marketplace doesn't make Amazon a direct competitor to the App Store, "this move brings AWS much closer to the model of centralized distribution that Google occupies," Brooks added. Hilwa agreed.

"It appears [Amazon now has] critical mass to be such a purveyor," he added. "The marketplace will itself be an attraction for additional vendors to use AWS."

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Joe Brockmeier (@jzb) posted Inside Amazon's AWS Marketplace: The Cloud in a Shopping Cart to the ReadWriteCloud blog on 4/19/2012:

imageAmazon is trumpeting the launch of its AWS Marketplace, but what does the storefront really offer that Amazon Web Services customers didn't have before? While the Marketplace is another boost for AWS, it's a lot less exciting than some folks are making it out to be.

imageIf you've missed the hoopla over the marketplace, here's the skinny: Amazon is putting all the Amazon Machine Images (AMIs) from vendors in a one-stop shop with pricing, reviews and a simple interface for finding the software you want. Basically, Amazon has taken a virtual application marketplace to a slightly higher level to let users pick virtual appliances and launch them on EC2.

The Good

Amazon is making it much easier for users to find AMIs that they might want to use. It's very convenient to be able to search for, say, an AMI with Node.js pre-configured and be able to launch it right away. So if you're in need of a single-server instance of a specific application, the AWS Marketplace is pretty nifty. But it's really more of a convenience store than a full-blown marketplace, at least right now.

You get a full description of the AMI, its customer rating (if it's been rated; few have been so far), base OS and so forth. You see what level of support is included with the image (if any), and in some cases you also see the estimated monthly cost.

This puts Amazon a little bit ahead of other cloud providers, which don't have a big "marketplace" of supported applications. It's a good move for Amazon and companies like BitNami that want to provide support for applications, or that want to offer software at a premium above the EC2 instance cost. Billing is simplified, so customers just pay through Amazon and the partners get their money from Amazon.

Why It's Underwhelming

This is all good, as far as it goes. Unfortunately, it doesn't go very far yet. Virtual appliances are nothing new. This makes it simpler to consume single-server applications, but still leaves a lot of configuration to the end users.

Consider, for instance, deploying Sharepoint with Amazon Virtual Private Cloud (VPC). If you look at the whitepaper that Amazon published on this (PDF), the architecture is much less simple than a single EC2 image. If your needs go further than a single EC2 image, the marketplace (at least right now) has nothing for you.

What would be really impressive is to see CloudFormation templates that provide highly available applications or more complex applications using Amazon RDS, and so on. We suspect that will come down the pike eventually, but the marketplace as it stands right now doesn't really tap the potential of AWS.

But it does plant a flag for Amazon, and gives the company yet another feature over the competition. While its competitors are still working on catching up to Amazon on features, Amazon is rallying partners to deliver services on its infrastructure.

Does Amazon have a big leg up with the AWS Marketplace? Are you more likely to use the AWS cloud thanks to the market? We would love to hear your thoughts about this in the comments.


<Return to section navigation list>
