Tuesday, February 15, 2011

Windows Azure and Cloud Computing Posts for 2/14/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the section links, first click the post’s title to display the individual article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Adron Hall (@adronbh) suggested that you Put Stuff in Your Windows Azure Junk Trunk – Windows Azure Worker Role and Storage Queue in a 2/14/2011 post:

Click on Part 1 and Part 2 of this series to review the previous examples and code. First and foremost, have the existing code base created in the other two examples open and ready in Visual Studio 2010. Next, I’ll just start rolling ASAP.

In the JunkTrunk.Storage project, add the following class file and code to the project. This will get us going for anything else we need to do for the application from the queue perspective.

// Note: inside these methods, 'Queue' resolves to the CloudQueue reference exposed by the
// JunkTrunkBase base class (presumably set up in Part 1 of the series), not to this wrapper class.
public class Queue : JunkTrunkBase
{
    public static void Add(CloudQueueMessage msg)
    {
        Queue.AddMessage(msg);
    }

    public static CloudQueueMessage GetNextMessage()
    {
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }

    public static List<CloudQueueMessage> GetAllMessages()
    {
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }

    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}

Once that is done, open the FileBlobManager.cs file in the Models directory of the JunkTrunk ASP.NET MVC web application. In the PutFile() method, add the following line of code toward the very end. The method, with the added line, should look like this:

public void PutFile(BlobModel blobModel)
{
    var blobFileName = string.Format("{0}-{1}", DateTime.Now.ToString("yyyyMMdd"), blobModel.ResourceLocation);
    var blobUri = Blob.PutBlob(blobModel.BlobFile, blobFileName);

    Table.Add(
        new BlobMeta
            {
                Date = DateTime.Now,
                ResourceUri = blobUri,
                RowKey = Guid.NewGuid().ToString()
            });

    Queue.Add(new CloudQueueMessage(blobUri + "$" + blobFileName));
}

Now that we have something adding to the queue, we want to process those queue messages. Open up the JunkTrunk.WorkerRole project and make sure you have the following references in the project.

Windows Azure References

Next create a new class file called PhotoProcessing.cs. First add a method to the class titled ThumbnailCallback with the following code.

public static bool ThumbnailCallback()
{
    return false;
}

Next add another method with a blobUri string and filename string as parameters. Then add the following code block to it.

private static void AddThumbnail(string blobUri, string fileName)
{
    try
    {
        var stream = Repository.Blob.GetBlob(blobUri);

        if (blobUri.EndsWith(".jpg"))
        {
            var image = Image.FromStream(stream);
            var myCallback = new Image.GetThumbnailImageAbort(ThumbnailCallback);
            var thumbnailImage = image.GetThumbnailImage(42, 32, myCallback, IntPtr.Zero);
            // Save the thumbnail to its own stream; writing it back into the source stream
            // would append the thumbnail after the original image bytes.
            using (var thumbnailStream = new MemoryStream())
            {
                thumbnailImage.Save(thumbnailStream, ImageFormat.Jpeg);
                thumbnailStream.Position = 0;
                Repository.Blob.PutBlob(thumbnailStream, "thumbnail-" + fileName);
            }
        }
        else
        {
            Repository.Blob.PutBlob(stream, fileName);
        }
    }
    catch (Exception ex)
    {
        Trace.WriteLine("Error", ex.ToString());
    }
}

Last method to add to the class is the Run() method.

public static void Run()
{
    var queueMessage = Repository.Queue.GetNextMessage();

    while (queueMessage != null)
    {
        var message = queueMessage.AsString.Split('$');
        if (message.Length == 2)
        {
            AddThumbnail(message[0], message[1]);
        }

        Repository.Queue.DeleteMessage(queueMessage);
        queueMessage = Repository.Queue.GetNextMessage();
    }
}

Now open up the WorkerRole.cs file, add the following code to the existing methods, and add the additional event handler method below.

public override void Run()
{
    Trace.WriteLine("Junk Trunk Worker entry point called", "Information");

    while (true)
    {
        PhotoProcessing.Run();

        Thread.Sleep(60000);
        Trace.WriteLine("Working", "Junk Trunk Worker Role is active and running.");
    }
}

public override bool OnStart()
{
    ServicePointManager.DefaultConnectionLimit = 12;
    DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed += (sender, arg) =>
        {
            if (arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                .Any((change) => (change.ConfigurationSettingName == configName)))
            {
                if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                {
                    RoleEnvironment.RequestRecycle();
                }
            }
        };
    });

    Storage.JunkTrunkSetup.CreateContainersQueuesTables();

    return base.OnStart();
}

private static void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    if (!e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange)) return;

    Trace.WriteLine("Working", "Environment Change: " + e.Changes.ToList());
    e.Cancel = true;
}

At this point everything needed to kick off photo processing, using a Windows Azure Storage queue as the tracking mechanism, is ready. I’ll be following up these blog entries with some additional entries regarding refactoring and streamlining what we have going on. I might even go all out and add some more functionality or some such craziness! So I hope that was helpful and keep reading. I’ll have more bits of rambling and other trouble coming down the blob pipeline soon! Cheers!


<Return to section navigation list> 

SQL Azure Database and Reporting

Bill Zack asked and answered Feeling constrained by the 50GB SQL Azure Limit? Try Sharding in a 2/14/2011 post to the Ignition Showcase blog:

Database sharding is a way to partition data across multiple SQL Azure databases to get around the 50 GB maximum database size limit.  Using this technique you can implement a scale-out network of SQL Server databases that can handle massive capacity requirements.
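
Neither post reproduces sharding code itself, so here is a minimal sketch of the application-side idea in C#: keep one connection string per SQL Azure member database and route each customer key to a shard by a hash. The names, connection strings and the modulo scheme below are illustrative assumptions, not taken from the tutorial mentioned below.

using System;
using System.Data.SqlClient;

public static class ShardRouter
{
    // One connection string per SQL Azure member database; each database stays under the 50 GB cap.
    private static readonly string[] ShardConnectionStrings =
    {
        "Server=tcp:shard0.database.windows.net;Database=Orders0;User ID=user@shard0;Password=...;Encrypt=True;",
        "Server=tcp:shard1.database.windows.net;Database=Orders1;User ID=user@shard1;Password=...;Encrypt=True;"
    };

    // Route a customer key to a shard. A production system would use a stable, explicit hash
    // (GetHashCode can differ between runtimes), but the routing idea is the same.
    public static SqlConnection OpenConnectionFor(string customerKey)
    {
        int shard = Math.Abs(customerKey.GetHashCode()) % ShardConnectionStrings.Length;
        var connection = new SqlConnection(ShardConnectionStrings[shard]);
        connection.Open();
        return connection;
    }
}

SQL Azure Federations aims to move this kind of routing into the database tier; until then, the fan-out lives in application code along these lines.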

Michael Heydt from Sungard has produced an outstanding tutorial on the subject. 

In the tutorial he also devotes some coverage to the upcoming SQL Azure Federation service, which will make scale-out via sharding even easier.

I’d say v1 of the SQL Azure Federation Services CTP will make scale-out via sharding possible; v2 will make it easier. Check Cihan Biyikoglu’s MSDN blog for the latest in SQL Azure Federation news.

Stay tuned for my cover article about SQL Azure sharding coming in the March 2011 issue of Visual Studio Magazine.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Alex James (@adjames) posted Even more Any and All to the OData blog on 2/15/2011:

I was talking to a few people last week and one of them suggested another change in the any/all syntax. The more I looked at it the more I liked it. In fact, when I asked a few people over Twitter, it seemed everyone else liked it too.

The suggestion was to replace this:

~/Movies/?$filter=any(Actors a: any(a/Awards))

With this:

~/Movies/?$filter=Actors/any(a: a/Awards/any())

There are a couple of things I think are nice about this:

  • It looks a lot like code you might write in something like C# after replacing '.'s with '/'s - as is the OData idiom - which hopefully will make it easier to learn (see the sketch after this list).
  • It separates the collection variable (i.e. 'Actors') from the range variable (i.e. 'a'). I think this helps because, using the old syntax, people might be conditioned by the language they use every day to write any(Actor a: …) instead of any(Actors a: …).
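
For comparison, here is roughly the same filter expressed as a LINQ query in C# (a sketch assuming Movies, Actors and Awards navigation properties on a hypothetical data service context):

using System.Linq;

// ~/Movies/?$filter=Actors/any(a: a/Awards/any()) — the same shape in LINQ:
var moviesWithAwardedActors = context.Movies
    .Where(m => m.Actors.Any(a => a.Awards.Any()));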

So I think we should do this.

But what do you think?


Marcelo Lopez Ruiz described datajs support for DateTime and DateTimeOffset in a 2/15/2011 post:

Yesterday we updated the datajs codebase to support DateTime and DateTimeOffset. Here are some development notes on this that may be interesting.

Recognizing DateTime and DateTimeOffset values

ATOM has a pretty straightforward representation for DateTime and DateTimeOffset, so it's really a matter of recognizing these on the wire and parsing them / serializing them.

JSON uses a convention where /Date(nnn)/ is used, where nnn is the number of milliseconds as per the Date(number) constructor. On the wire, this is actually serialized as \/Date(nnn)\/. The catch is that those backslashes escape the forward slashes even though escaping is not required, so it makes it harder to mis-recognize a string as a date value. However, because we use the browser JSON parser, we don't get to see this escape mechanism and it's very hard to produce, so currently the library produces the ATOM format that some implementations recognize (including the WCF Data Services one).

Now, in the ATOM case this is unambiguous, because the type information comes out of line in an XML attribute. But in the JSON case, someone could "poison" the data by passing in a value that looks like a Date format, so by default datajs won't convert string values into Dates even if they look like they might be. This way if your page relies on string properties being strings and not something else, a malicious user can't break your site.

There are two ways to have datajs produce Date objects for JSON responses:

  • If you provide metadata, the library knows for certain what is and isn't a date, and uses that information.
  • If you don't provide metadata, but you trust that the server won't hand out "dangerous" date (or your page won't have a problem with it), you can set the recognizeDates property to true in the OData.jsonHandler object.

Representing DateTime and DateTimeOffset values

Representing DateTime values is pretty straightforward - we just use a Date object. You can use the UTC methods to work with this value "from the wire", or you can of course use the local values to adjust it to the local user's timezone.

For DateTimeOffset, we use a Date object with the time adjusted to UTC so you can use the UTC APIs without  worrying about the local timezone of the user, or again adjust it as necessary (and without the offset getting in the way). We annotate it with an __edmType field with the value "DateTimeOffset", and we annotate the offset value in an __offset field. The whole thing round-trips, and you can play with the values as needed.


Marius Oiaga (@mariusoiaga) announced New Joomla Extensions: Bing Maps, Windows Live ID, OData and Silverlight Pivot Viewer in a 2/15/2011 post to the Softpedia blog:

Four new Joomla extensions are now available allowing customers that leverage the CMS to seamlessly integrate various Microsoft technologies into their projects.

The Redmond company provided Schakra and MindTree with the necessary funding for them to put together the following Joomla extensions: Bing Maps, Windows Live ID, OData and the Silverlight Pivot Viewer.

Undoubtedly, some web developers might already be familiar with these resources, although not in connection with the Joomla content management system.

In the second half of January 2011, the launch of Drupal 7 also brought into the limelight the interoperability work that Microsoft poured into making sure that the open source CMS would play nice with its technologies and Cloud platform.

At that time the software giant announced the availability of four new Drupal modules, namely Bing Maps, Windows Live ID, OData and the Silverlight Pivot Viewer, which are now also offered to those taking advantage of Joomla.
I myself find Joomla a tad friendlier than Drupal, and I tend to prefer it, so I welcome the fact that the extensions mentioned above have been tailored to both content management systems.

Gianugo Rabellino [see below], Senior Director of Open Source Communities for Microsoft, provided a quick overview of the extensions:

"Bing Maps extension (http://joomlacode.org/gf/project/bingmaps/):
With this extension, Joomla! users can easily include customized Bing Maps into the content they are publishing, and administrator can preconfigure how the map should look, and where it can be added

Silverlight Pivot viewer extension (http://joomlacode.org/gf/project/pivotviewer/):
With this extension Joomla! users can visually navigate with the Silverlight Pivot viewer through large amount of data. Administrators define what is the data source using a set of preconfigured options like OData, RSS, media files, etc.

Windows Live ID extensions(http://joomlacode.org/gf/project/windowsliveid/):
With this extension Joomla! users can associate their Joomla! account to their Windows Live ID, and then to login on Joomla! with Windows Live ID.

OData extension (http://joomlacode.org/gf/project/odata/):
With this extension Joomla! administrator can provide users with quick access to any OData source, like the Netflix catalog (check the list of live OData services), and let them include these in any content type (such as articles). The generic extension includes a basic OData query builder and renders data in a simple HTML Table."


Gianugo Rabellino included a link to the Joomla OData Extension in his Relationships… It’s Complicated! post of 2/14/2011 to the Interoperability @ Microsoft blog:

Isn't Valentine's Day a perfect occasion to think about relationships? Other than my family, the relationship I care most about nowadays is the one between Microsoft and the Open Source communities which, to put it mildly, have been interesting in the past. As I'm learning my way through this new adventure, I have been considering our track record from the early stages and, more importantly, thinking about the future.

Make no mistake. Relationships are hard and high in maintenance – especially when there is some history to them. Entering the state where water is really under the bridge is tough, and the one and only remedy I can think of is to build those bridges one stone at a time, and show that you really care. I firmly believe Microsoft is on the right track here: first as an outsider, then as a partner, and finally as an employee. For the past few years I saw the tide turning, and Microsoft becoming increasingly more open. We are building those bridges, and we are doing it in the one and only way Open Source communities care: by showing commitment, and contributing code.

We understand that we are far from being done, which is why I have started looking outside of Microsoft and reaching out to communities to continue the ongoing conversation, and to show the world how much we have changed and become more open. But showing the whats and the hows is not enough: we want to get to the next step, and delve into the reasons leading us to steer the ship towards open water. The story will unfold as I start touring the world to meet as many communities as I can and gather the feedback we need so much to move forward and have a productive relationship.

Speaking of travel, I just came back from my European tour, where I visited Italy, Germany, the UK and Belgium. This was my first “toe in the water”, and it was a priceless learning experience, where I managed to reconnect with old friends and meet new people from the Open Source world. In Italy I had a chance to see how HTML5 is going to play a huge part in the future of the Web (you don’t want to miss the upcoming “HTML, ci siamo” event). In Germany I walked away with a miniature model of the “we love developers” double decker Microsoft bus that is making the rounds to show all the effort Microsoft is putting into enrolling developers. In the UK, I was blown away by the amount of information, tutorials, interviews and other good stuff the fine folks at www.ubelly.com are doing. And in Belgium I had a great meeting with some of the most well respected PHP developers, who are constructively discussing how to improve their experience on Azure and helping to tap the community’s creativity with a contest (if you live in Europe, and grok PHP, you should definitely sign up!).

On top of that, I spent my last day in Europe visiting and attending FOSDEM, the largest Free Software event in Europe. There, I had the pleasant surprise of a day packed with casual encounters in the hallways which turned into extremely practical conversations on how Microsoft and the FLOSS communities can move on and work together on real problems, real projects and real code.

And code definitely does matter, so let me finish by announcing some newly released projects, freshly baked and wrapped in a proper Valentine's day chocolate box. Today we announced the availability of four new extensions for Joomla! that allow Joomla! administrators/developers to provide users with the following integrated features: Bing Maps, Windows Live ID, OData and the Silverlight Pivot Viewer. These extensions are developed and contributed by Schakra and MindTree, with funding provided by Microsoft. Here’s a quick overview of the extensions:

Bing Maps extension (http://joomlacode.org/gf/project/bingmaps/):
With this extension, Joomla! users can easily include customized Bing Maps in the content they are publishing, and administrators can preconfigure how the map should look and where it can be added.

Silverlight Pivot Viewer extension (http://joomlacode.org/gf/project/pivotviewer/):
With this extension Joomla! users can visually navigate through large amounts of data with the Silverlight Pivot Viewer. Administrators define what the data source is, using a set of preconfigured options like OData, RSS, media files, etc.

Windows Live ID extension (http://joomlacode.org/gf/project/windowsliveid/):
With this extension Joomla! users can associate their Joomla! account with their Windows Live ID, and then log in to Joomla! with Windows Live ID.

OData extension (http://joomlacode.org/gf/project/odata/):
With this extension Joomla! administrators can provide users with quick access to any OData source, like the Netflix catalog (check the list of live OData services), and let them include these in any content type (such as articles). The generic extension includes a basic OData query builder and renders data in a simple HTML table.

Code speaks, content matters. To close on Joomla!, we’ve also just published a new tutorial explaining how to get Joomla! up and running on Windows Azure using the Windows Azure Companion. And by the way we will be at J-and-Beyond conference May 6th-8th, to showcase more Joomla! and Microsoft technologies interop.

As always I look forward to your comments and feedback.

Gianugo is Microsoft’s Senior Director of Open Source Communities.


Marcelo Lopez Ruiz explained A bit about how datajs is run in a 2/14/2011 post:

Today's post simply discusses a bit about how the datajs project is run, and why we think it makes sense.

The landscape for web developers is changing pretty fast. Browsers improve and introduce new capabilities, cloud systems bring new possibilities for products and business, user expectations change every other month (everyone wants everything to be faster, more reliable, more polished, and accessible from more devices).

We believe that to succeed in this environment, we need to be able to adjust quickly and keep an eye on delivering working software that makes our customers happy.

As such, here are some principles that guide how we run datajs.

  • Transparency. datajs is an open source project so developers are able to look under the hood and see how things get done. They can figure out whether and how the code fits their needs. We'll discuss feedback and issues in public, provide sensible updates to the codebase frequently, and generally develop the project "in the open".
  • Simplicity. I'm a firm believer in the idea that "if debugging is 50% harder than coding, and you code to the maximum of your ability, you don't have the ability to debug what you just wrote". We strive to keep our code obvious. There is a long list of benefits to keeping things simple: fewer errors, easier to predict how things behave, easier to reuse and adapt code, etc.
  • Humility. datajs is not going to be the one and only library you'll ever need. Many other libraries are great at what they do, and web developers should be free to pick what works best for them and not have to sort through a mess of overlapping or conflicting functionality. This principle applies to other aspects of how we run the project and links back to transparency in some ways, but more on that some other day.
  • Speed. We should be able to add and change things fast in the project, and iterate to explore and discover - that's one dimension. The library should also be fast - it's those user expectations again!
  • Value. At the end of the day, datajs has got to deliver some useful functionality to developers and end users. Thankfully, we're not short on ideas, and the changing landscape won't leave us without opportunities to innovate anytime soon. But tying this back to the 'humility' thing, we don't think we have all the answers, and we're looking forward to working with the datajs community to figure out where we can get the biggest bang for the buck.

As always, if you have any questions or want to discuss anything, feel free to comment on this blog or start a conversation on the http://datajs.codeplex.com/discussions page.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Claims-Based Identity Team announced the decision not to ship Windows CardSpace 2.0 and to release a CTP of U-Prove instead in a Beyond Windows CardSpace post of 2/15/2011:

For several years Microsoft has advocated the claims based identity model for more secure access and use of online applications and services. With enhancements to our existing platform, such as Active Directory Federation Services 2.0 and Windows Identity Foundation, we’ve made progress in that initiative. Claims-based identity is used widely inside Microsoft and is now part of many Microsoft products, such as SharePoint, Office 365, Dynamics CRM, and Windows Azure.

Microsoft has been a leading participant in the identity community and an active contributor to emerging identity standards. We have increased our commitment to standardization activities and added support into our products for the SAML 2.0, OpenID 2.0, OAuth WRAP and OAuth 2.0 protocols.

There is one component of our identity portfolio where we have recently decided to make a change. Windows CardSpace was initially released and developed before the pervasive use of online identities across multiple services. Perhaps more importantly, we released the user component before we and others had delivered the tools for developers and administrators to easily create claims-ready services. The identity landscape has changed with the evolution of tools and cloud services. Based on the feedback we have received from partners and beta participants, we have decided not to ship Windows CardSpace 2.0.

Claims-based identity remains a central concept for Microsoft’s identity strategy, and its role in our overall strategy continues to grow. Furthermore, we are not abandoning the idea of a user agent for exchanging claims. As part of our work on claims-based identity we are releasing a new technology preview of U-Prove. This release of U-Prove will take the form of a user agent that takes account of cloud computing realities and takes advantage of the high-end security and privacy capabilities within the extended U-Prove cryptographic technology.

Mary Jo Foley (@maryjofoley) adds more background to the demise of CardSpace 2.0 in her RIP, Windows CardSpace. Hello, U-Prove post of 2/15/2011 to ZDNet’s All About Microsoft blog:

For a while, I had been wondering when Microsoft would ship CardSpace 2.0, the last, undelivered piece of its Geneva set of security wares. The answer, it turns out, is never.

CardSpace, which got its start as “Windows InfoCard,” attempted to represent an individual’s digital identity that the user could use to communicate with a third party entity.

From a February 15 post on the Microsoft “Claims-Based Identity” blog (which I found via a tweet from @Carnage4Life):

“Windows CardSpace was initially released and developed before the pervasive use of online identities across multiple services. Perhaps more importantly, we released the user component before we and others had delivered the tools for developers and administrators to easily create claims-ready services. The identity landscape has changed with the evolution of tools and cloud services.  Based on the feedback we have received from partners and beta participants, we have decided not to ship Windows CardSpace 2.0.”

According to the blog post, in spite of the elimination of CardSpace, Microsoft is still a big proponent of claims-based identity concepts, and the company has baked support for these identity solutions into SharePoint, Office 365, Dynamics CRM, and Windows Azure.

“Microsoft has been a leading participant in the identity community and an active contributor to emerging identity standards.  We have increased our commitment to standardization activities and added support into our products for the SAML 2.0, OpenID 2.0, OAuth WRAP and OAuth 2.0 protocols,” the blog post noted.

Microsoft also is putting its weight behind a new Microsoft claims technology called U-Prove, according to the post. U-Prove is “an advanced cryptographic technology that, combined with existing standards-based identity solutions, overcomes this long-standing dilemma between identity assurance and privacy,” according to the test page.

Microsoft has made available to testers for download a second Community Technology Preview build (via the Connect site) for its U-Prove Agent. The Agent is “software that acts as an intermediary between websites and allows sharing of personal information in a way that helps protect the user’s privacy,” the U-Prove Frequently Asked Questions (FAQ) document explains. U-Prove is based on technology that Microsoft bought when it acquired Credentica in 2008.

“Geneva” was the codename for a number of Microsoft identity wares. It became the codename for the most recently delivered version of Active Directory Federation Services (ADFS) and Windows CardSpace, as well. The programming framework supporting the current version of ADFS originally was codenamed “Zermatt,” then, later, also took on the “Geneva” codename.


Eugenio Pace (@eugenio_pace) explained ACS as a Federation Provider – Claims transformation in a 2/14/2011 post:

To work properly, a-Order needs a number of claims to be supplied:

  1. User name
  2. Organization
  3. Role

The "Organization” claim is used to filter orders belonging to a specific customer of Adatum. For example, Litware users (like Rick) will eventually end up with a token containing a claim with “Organization=Litware”. All this is done in step 3 here in the diagram below:

[diagram]

Adatum’s FP takes whatever token it gets from the outside world and “normalizes” it to whatever the application needs. ADFS, for example, ships with a powerful language to define these transformations. In our sample, we ship a simple “simulation” of a real STS, so our rules are all coded in C# and are obviously not “production”:

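The sample’s transformation rules appear only as a screenshot in the original post. As a rough illustration of the kind of mapping such a simulated issuer performs, a WIF-based sketch could look like this (claim-type URIs, the role value and the method shape are illustrative assumptions, not the actual guide code):

using System;
using System.Linq;
using Microsoft.IdentityModel.Claims;

public static class AdatumClaimMapper
{
    // Illustrative only: turn whatever identity arrives into the claims a-Order expects.
    public static IClaimsIdentity Normalize(IClaimsIdentity input)
    {
        var output = new ClaimsIdentity();
        string email = input.Claims.First(c => c.ClaimType == ClaimTypes.Email).Value;

        output.Claims.Add(new Claim(ClaimTypes.Name, email));

        // In production this lookup would come from Adatum's master customer records.
        if (email.Equals("Mary@Gmail.com", StringComparison.OrdinalIgnoreCase))
        {
            output.Claims.Add(new Claim("http://schemas.adatum.com/claims/organization", "Mary Inc"));
            output.Claims.Add(new Claim(ClaimTypes.Role, "Order Tracker"));
        }

        return output;
    }
}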

With ACS in the picture there are 2 places where transformation could happen though:

  1. In ACS
  2. In Adatum’s FP

In this scenario Adatum owns both (its own on-premises issuers and an instance in ACS), so it has full control of either component. Which one to use depends on many factors and there’s no “single right” way of doing it. Let’s consider one reason to keep mappings on Adatum’s side. For this we’ll pick one of the transformations required: the simple rule of associating “Mary@Gmail.com” with Organization=“Mary Inc”.

This rule would be fixed in ACS; there is no dynamic discovery or lookup code that ACS can execute at this time. It is likely that Adatum keeps a master record of all the companies it works with and the contact information associated with them. If that is the case, it’s probably better to have ADFS call a component or the master record database using the built-in SQL integration capabilities. If Mary changes her e-mail, everything would just work. If the rule was in ACS, it would require Adatum to update the rule every time there’s an update.

Of course, ACS does provide an API for updating the configuration. So you could achieve something similar by just automating the update. Different companies will be more or less comfortable with one approach or the other.

The highest order bit in this situation is that the app remains completely isolated from these changes, as CodingOutLoud mentioned in his comment in a previous post.


Maarten Balliauw (@maartenballiauw) described how to Authenticate Orchard users with AppFabric Access Control Service in a 2/14/2011 post:

From the initial release of Orchard, the new .NET CMS, I have been wondering how difficult (or easy) it would be to integrate external (“federated”) authentication like Windows Azure AppFabric Access Control Service with it. After a few attempts, I managed to wrap up a module for Orchard which does just that: Authentication.Federated.

After installing, configuring and enabling this module, Orchard’s logon page is replaced with any SAML 2.0 STS that you configure. To give you a quick idea of what this looks like, here are a few screenshots:

[Screenshots: Orchard Log On link is being overridden; Orchard authentication via AppFabric]

Orchard authenticated via SAML - Username is from the username claim

As you can see from the sequence above, Authentication.Federated does the following:

  • Override the default logon link
  • Redirect to the configured STS issuer URL
  • Use claims like username or nameidentifier to register the external user with Orchard. Optionally, it is also possible to configure roles through claims.

Just as a reference, I’ll show you how to configure the module.

Configuring Authentication.Federated – Windows Azure AppFabric side

In my tests, I’ve been using the AppFabric LABS release, over at https://portal.appfabriclabs.com. From there, create a new namespace and configure Access Control Service with the following settings:

Identity Providers
  • Pick the ones you want… I chose Windows Live ID and Google
Relying Party Applications

Add your application here, using the following settings:

  • Name: pick one :-)
  • Realm: The http(s) root URL for your site. When using a local Orchard CMS installation on localhost, enter a non-localhost URL here, e.g. https://www.example.org
  • Return URL: The root URL of your site. I chose http://localhost:12758/ here to test my local Orchard CMS installation
  • Error URL: anything you want
  • Token format: SAML 2.0
  • Token encryption: none
  • Token lifetime: anything you want
  • Identity providers: the ones you want
  • Rule groups: Create new rule group
  • Token signing certificate: create a Service Namespace token and upload a certificate for it. This can be self-signed. Ensure you know the certificate thumbprint as we will need this later on.
Edit Rule Group

Edit the newly created rule group. Click “generate” to generate some default rules for the identity providers chosen, so that nameidentifier and email claims are passed to Orchard CMS. Also, if you want to be the site administrator later on, ensure you issue a roles claim for your Google/Windows Live ID, like so:

Add a role claim for your administrator

Configuring Authentication.Federated – Orchard side

In Orchard, download Authentication.Federated from the modules gallery and enable it. After that, you’ll find the configuration settings under the general “Settings” menu item in the Orchard dashboard:

Authentication.Federated configuration

These settings speak for themselves mostly, but I want to give you some pointers:

  • Enable federated authentication? – Enables the module. Ensure you’ve first tested the configuration before enabling it. If you don’t, you may lose access to your Orchard installation unless you do some database fiddling…
  • Translate claims to Orchard user properties? – Will use claims values to enrich user data.
  • Translate claims to Orchard roles? – Will assign Orchard roles based on the Roles claim
  • Prefix for federated usernames (e.g. "federated_") – Just a prefix for federated users.
  • STS issuer URL – The STS issuer URL, most likely the root for your STS, e.g. https://<account>.accesscontrol.appfabriclabs.com
  • STS login page URL – The STS’ login page, e.g. https://<account>.accesscontrol.appfabriclabs.com:443/v2/wsfederation
  • Realm – The realm configured in the Windows Azure AppFabric Access Control Service settings
  • Return URL base – The root URL for your website
  • Audience URL – Best to set this identical to the realm URL
  • X509 certificate thumbprint (used for issuer URL token signing) – The token signing certificate thumbprint


See Jeff Barr announced AWS Identity and Access Management Users Can Now Log in to the AWS Management Console in a 2/14/2011 post to the Amazon Web Services blog in the Other Cloud Computing Platforms and Services section below.

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Christian Weyer posted iAzure: Mobile HTML apps Windows Azure Storage on 2/15/2011:

Currently I am researching how to design and build mobile HTML-based apps for various device platforms (iOS, Android and Windows Phone 7). For this I was looking for a good development and deployment strategy – and I think I found it: Windows Azure!

The idea is the following:

  • use Windows Azure Blob Storage to host the HTML, CSS, JS and resource files
  • use a 3rd party tool to map my blob storage account as a folder in my local Windows Explorer
  • just open the files from this mounted folder in Visual Studio and work with them
  • save the files ‘locally’ (which actually saves them to Windows Azure Blob Storage)
  • access the HTML ‘app’ from your device with a custom DNS domain name mapped to your Windows Azure Blob Storage

Ok, one step after the other.

If you are looking for a tool to use various cloud storage providers (like Amazon S3 or Windows Azure Blob Storage) and mount your storage accounts locally in Windows Explorer, then I would recommend looking into Gladinet Cloud Desktop.

So, let’s mount one of my Windows Azure Blob Storage accounts via Cloud Desktop. In the Virtual Directories Manager navigate to Virtual Directories and click the Mount symbol:

[screenshot]

Select the Windows Azure Blob Storage provider and give it a name (this will be the name of the mounted folder in Windows Explorer):

[screenshot]

On the next page you need to specify the storage credentials:

[screenshot]

And voila! We have our blob storage account mapped locally (OK, I already placed two files in there…):

[screenshot]

As this is now a Windows Explorer mounted drive and folder we can simply open up the HTML file in Visual Studio, change it, and save it:

[screenshot]

OK – on to finishing the prerequisites to finally access the HTML page.
I do not want to expose and use the original Windows Azure Blob Storage URL and therefore map it to one of my domains like this (details of this process can be found here):
[screenshot]

With the custom domain mapping set up I can now access the HTML page from my mobile device:

[screenshot]


Note:
this sample HTML page uses JavaScript to request data from a WCF ‘REST’ service which is exposed via the Windows Azure AppFabric Service Bus.
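
The service side isn’t shown in this post. For orientation, a stripped-down sketch of exposing a WCF REST-style endpoint through the AppFabric Service Bus relay might look like the code below; the service namespace, path, contract and issuer credentials are all placeholder assumptions, not Christian’s actual service.

using System;
using System.ServiceModel;
using System.ServiceModel.Web;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEpisodeService
{
    [OperationContract]
    [WebGet(UriTemplate = "episodes", ResponseFormat = WebMessageFormat.Json)]
    string[] GetEpisodes();
}

public class EpisodeService : IEpisodeService
{
    public string[] GetEpisodes()
    {
        return new[] { "Episode 1", "Episode 2" };
    }
}

class Program
{
    static void Main()
    {
        // Public relay address: https://<namespace>.servicebus.windows.net/episodes
        Uri address = ServiceBusEnvironment.CreateServiceUri("https", "mynamespace", "episodes");

        var host = new WebServiceHost(typeof(EpisodeService), address);
        var endpoint = host.AddServiceEndpoint(typeof(IEpisodeService), new WebHttpRelayBinding(), address);

        // Authenticate the listener to the Service Bus with the namespace issuer credentials.
        var credentials = new TransportClientEndpointBehavior();
        credentials.CredentialType = TransportClientCredentialType.SharedSecret;
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "<issuer-key>";
        endpoint.Behaviors.Add(credentials);

        host.Open();
        Console.WriteLine("Listening on {0}", address);
        Console.ReadLine();
        host.Close();
    }
}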


Christian Weyer described iPush-up: notifying mobile HTML apps from your server code on 2/15/2011:

In my previous post I talked about ‘hosting’ HTML pages (and later apps) in Windows Azure Blob Storage. Now I am going to explain what the sample HTML page shown there can do:

when the page is loaded in the browser of our mobile device we will be able to push data/messages from any application through a Cloud service into the device.

How can this be achieved?
First, I am going to use a Cloud service called Pusher. Pusher enables large-scale push-style communications from any code directly into our HTML pages/apps (including mobile devices like the iPhone, iPad or Android devices).

Note: where it says ‘Server’ we could have just any kind of application.

Pusher tries to use WebSockets, and if they are not available or the connection does not succeed it falls back to a Flash-based socket communication. Have a look here to see which browsers are supported.

For the purpose of this blog post, we will need two code artifacts:

  • JavaScript in an HTML page running on a mobile device browser
  • sample Windows Forms application pushing messages through Pusher into the devices

The good news is that Pusher offers a JavaScript object model which provides access to their public REST API. And the JS library is really very easy to use – just create a Pusher object, subscribe to a channel and bind to events, e.g.:

<head>
    <title>thinktecture tecTeacher</title>
    <script src="http://js.pusherapp.com/1.7/pusher.js" type="text/javascript"></script>
    <script type="text/javascript">
        // Enable pusher logging - don't include this in production
        Pusher.log = function () {
            if (window.console) window.console.log.apply(window.console, arguments);
        };

        // Flash fallback logging - don't include this in production
        WEB_SOCKET_DEBUG = true;

        var pusher = new Pusher('MY_APP_KEY');
        pusher.subscribe('tecteacher_episodes');
        pusher.bind('new_episode_available',
          function (data) {
              alert("Pusher: " + data);
          });
    </script>

Easy going.

On the server/pushing side we want to have a similar experience. And we are in luck, there is PusherDotnet which we can use in our .NET code:

var provider = new PusherProvider(applicationId, applicationKey, applicationSecret);       
var request = new SimplePusherRequest(
    textBoxChannel.Text, 
    textBoxEvent.Text, 
    textBoxPushText.Text);

provider.Trigger(request);

I am sure you understand this code snippet without any further comment. :-)

Now, when running the Windows Forms pushing app…


… tadaaa!
We can see the notification message being delivered to the HTML page (which is still being served from Windows Azure Blob Storage, BTW) running on my iPhone (in Mobile Safari, actually):

[screenshot]

Very powerful stuff!


The Windows Azure Team posted a Windows Azure Supports Australian Flood and Cyclone Relief Efforts case study brief on 2/15/2011:

imageIn January 2011, the Australian state of Queensland experienced a devastating flooding event, with three quarters of the state declared a disaster zone and thousands of homes inundated with floodwaters. As the floodwaters peaked and the scale of the disaster became apparent, businesses across the country started pledging their time, goods and services to help with clean up and recovery efforts. But without a system in place to capture these pledges of support and match them to the pressing community needs, the Queensland Government had to rely on inefficient manual processes.

As part of its response to the flood recovery efforts, Microsoft Australia offered to build and host a site to address this urgent need. Utilizing Microsoft's Solutions Development Centre, a development team comprising Microsoft staff and partners from Devtest, Readify and Oakton designed, developed, tested and deployed the Queensland Floods Business Pledges site that allowed businesses to search through existing community needs and pledge their support, and allowed Queensland Government staff to manage and coordinate the recovery efforts.

Built on Windows Azure, the site was first released to the public after four days of development, with a fully featured release out just four days later. With no existing customer infrastructure in place to support such a solution and uncertainty about how many users would be accessing the system, Windows Azure allowed a solution to be deployed rapidly and scaled up and down as needed.

As the team was putting the finishing touches on the final production release, the news hit that Tropical Cyclone Yasi was heading towards the Queensland coast - the only part of the state unaffected by the floods. In less than one day, the team was able to develop and deploy an additional site on Windows Azure to assist with the disaster recovery efforts.

Glenn Walker, the Executive Director of Information and Communication Services and CIO of the Department of Community Safety, had the following to say about the solution: "By leveraging Windows Azure Cloud Services the Queensland Government was able to deploy an agile and scalable web capability... the ability of the cloud to rapidly absorb additional users ensured that services were always available and performed at levels beyond our expectations."

Please click here to read more case studies of Windows Azure in action.


Avkash Chauhan observed After uploading VM Role VHD you may be unable to login VHD over RDP and role status shows “Starting...” in a 2/15/2011 post:

It is possible that after you have successfully uploaded your VHD to your subscription using CSUPLOAD, you will see two issues:

1. You can verify that your VHD is working fine (using its URL to verify that IIS is running, or using some diagnostic data), however in the Windows Azure Portal you will see the service status as "Starting...."

2. If you try to log in to your VM using the portal-based RDP settings, you will not be able to log in.

Here are more details in this regard.

One known issue could be the root cause of this problem. The problem we found is related to the “Windows Azure Remote Forwarder Service”, which can cause the remote desktop client to cut off suddenly or take a long time to “secure the connection”, depending on how the problem manifests. It is also possible that the above problems could cause the forwarder service to eventually crash.

The possible solution is to delay the “Windows Azure Remote Forwarder Service” startup in order to avoid the race condition. To do so you can try any of the following:

1. Reboot the VMs from the portal. The reboot should shake loose the race so that the forwarder service works properly. This may require multiple reboots, but requires no change to the VM image.

2. Add a worker role and run the remote forwarder service there. (Please keep in mind that this is not a good option, as you may incur the additional cost of having an extra role.) The good thing is that this configuration will remove the race condition due to a different initialization code path.

3. Set the remote forwarder service start to Manual and use a scheduled task to start the service a minute after system boot. This should avoid the race entirely at the cost of having to wait an additional minute in Ready before you can remote in.

I personally like option #3 and have described it in detail below:

1. Open your VHD in Hyper-V and go to Administrative Tools -> Services.

2. Go to the “Windows Azure Remote Forwarder Service” and set its “Startup type” to “Automatic (Delayed Start)”.

3. Your Windows Azure related services in the VM should look as shown below:

4. Now please shut down your VM properly and then use CSUPLOAD to upload it again.


Peter Neubauer and Magnus Mårtensson posted Announcing Neo4j on Windows Azure on 2/14/2011:

Neo4j has a ‘j’ appended to the name. And now it is available on Windows Azure? This proves that in the most unlikely of circumstances sometimes beautiful things can emerge. Microsoft has promised Java to be a valued “first class citizen” on Windows Azure. In this blog post we will show that it is no problem at all to host a sophisticated and complex server product such as the Neo4j graph database server on Windows Azure. Since Neo4j has a REST API over HTTP you can speak to this server from your regular .NET (or Java) applications, inside or outside of the cloud, just as easily as you speak to Windows Azure Storage.

Intro

This first version (1.0 "JFokus") of our deployment is a bit simplified in some areas. Still, it is a complete and fully functioning deployment of Neo4j to Windows Azure. We are already working on the next major release (2.0), which will be much more turn-key: just upload the application to Windows Azure and launch.
Furthermore we have serious plans to use this approach, Neo4j in Windows Azure, on a live project where we are backing a server application with complex graph calculations. We will layer spatial and social graphs in combined searches on the server side and serve condensed search results to the client applications outside of the Cloud.
This project is not a toy; it’s the real deal, and it runs very smoothly – Java runs with little or no hassle on Windows Azure!

If you are a .NET developer reading this post

What we have enabled for You, dear .NET developer, is to leverage a really powerful graph database and make it available in Your Windows Azure applications!

You can think of Neo4j as a high-performance schema-free graph engine with all the features of a mature and robust database. The programmer works with an object-oriented, flexible network structure rather than with strict and static tables — yet enjoys all the benefits of a fully transactional, enterprise-strength database.
The data model consists of Nodes, typed Relationships between Nodes and Key-Value pairs on both Nodes and Relationships, called Properties. This is how the Matrix characters and their relationships could look in a Neo4j data model:

How to communicate with it? It is very straightforward: Neo4j communicates using a REST-based API over HTTP. This means that you can communicate with it just as easily as you can with standard Windows Azure Storage.
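
To make that concrete for .NET developers, here is a minimal sketch that creates a node over the REST API using nothing but WebClient (the property values are made up; /db/data/node is the standard endpoint shown later in this post):

using System;
using System.Net;

class CreateNodeSample
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            client.Headers[HttpRequestHeader.Accept] = "application/json";

            // POST to /db/data/node creates a node; the JSON body supplies its properties.
            string json = "{ \"name\": \"Thomas Anderson\", \"profession\": \"Hacker\" }";
            string response = client.UploadString("http://localhost:7474/db/data/node", json);

            // The response includes the new node's URI ("self") plus its data.
            Console.WriteLine(response);
        }
    }
}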

What we have done

The fact of the matter is that Neo4j has been running on Windows for a long time. What we have done in this project is to host it on Windows Azure. We have taken into account such things as dynamic port allocation, and the subsequent version will also automatically handle storage backups. The following steps are involved in the deployment of version 1.0:

  • Upload a Java Runtime Environment (JRE) to Windows Azure Blob Storage.
  • Upload Neo4j to Windows Azure Blob Storage.
  • Upload the deployment of the Neo4j Windows Azure hosting project to Windows Azure – which will launch the install automatically.

The install will:
  • Download both the JRE and the Neo4j server from Windows Azure Blob Storage to our Windows Azure server instance and deploy them.
  • Configure diagnostics on the Windows Azure server instance to also include the Neo4j logs in the diagnostics collections.
  • Modify the configuration of Neo4j to listen to a run time assigned port, to point to the database storage location and to know the location of the JRE etc.

That completes the install. Next Windows Azure will launch Neo4j – and we receive MAGIC!

Brief comments

This version has a few too many manual deployment steps, which we will mitigate in the subsequent versions of this project.
Diagnostics in Windows Azure could not be simpler; Neo4j logs its activity, as most servers do, to a configurable directory. Windows Azure can include custom directories in the standard diagnostics collections, which is easily configurable on the machine at startup. This means you can reach the Neo4j diagnostics output for debugging and monitoring.
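
In worker role code, wiring a custom log folder into the diagnostics transfer only takes a few lines. The sketch below is illustrative (the local resource name, container name and transfer period are assumptions, not the values the hosting project actually uses):

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Neo4jDiagnostics
{
    public static void Start()
    {
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Ship the Neo4j log folder (declared as a local resource in the service definition)
        // to a blob container every five minutes.
        config.Directories.DataSources.Add(new DirectoryConfiguration
        {
            Path = RoleEnvironment.GetLocalResource("Neo4jLogs").RootPath,
            Container = "wad-neo4j-logs",
            DirectoryQuotaInMB = 128
        });
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
    }
}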

We will also store the data files of the graph database in a blob in Windows Azure Storage. This will make the database automatically triple-redundantly backed up with automatic fail over. This is built into Windows Azure with no extra effort on our part.
Let’s go into a bit more technical detail below. If this is not your cup of tea, scroll to the end for the summary!

How we have done it
Solution

There is much less code in this solution than you might think. All we need is a hosting project which will host Neo4j in Windows Azure. It also takes care of downloading, installing and configuring Neo4j.
Apart from the tests in our solution we have (in alphabetical order from the screen shot):

  • CollectDiagnosticsData: A small project to trigger diagnostics transfer from our Cloud instance to Cloud storage. This is only used for debug purposes and is not a part of the deployed solution. The trigger is fired from a console window on your local machine when and if you want to view the logs of the application.
  • Diversify.WindowsAzure.ServiceRuntime: A general library that enhances testability in the Windows Azure SDK.
  • Neo4j.Azure.Server: The Windows Azure deployment definition project. This is the thing that is packed up and deployed to Windows Azure. It acts as a bag with configuration for the projects that make up the application.
  • Neo4jServerHost: A Windows Azure Worker Role project that hosts Neo4j.

Configuration

Having the application configuration settings separate from your code in Windows Azure is key. The way we have coded our solution is to extract all external links and configuration settings from the code and put it in the Service Definition file* of our Windows Azure Solution. When we have done that we can specify the associated configuration values in the Service Configuration file*.
This gives us the ability to, for instance, upgrade the version of Neo4j simply by replacing the zip-file in blob storage by modifying a few configuration values. No code change required.

As a general rule of thumb you want to make your Windows Azure deployments as configurable as possible to enable easy in place upgrading of your service in the future.
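
Reading such a setting from role code is then trivial; for example, fetching a hypothetical setting that points at the Neo4j package in blob storage (the setting name is illustrative):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class Neo4jSettings
{
    // "Neo4jZipBlobUrl" would be declared in ServiceDefinition.csdef and given its value in
    // ServiceConfiguration.cscfg, so it can change without rebuilding or redeploying the code.
    public static string PackageUrl
    {
        get { return RoleEnvironment.GetConfigurationSettingValue("Neo4jZipBlobUrl"); }
    }
}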

Installation

This is the bit that is more complex in version 1.0 than we’d like. ;~)
The installation of Neo4j involves manually uploading the artifacts of Neo4j and the JRE to Windows Azure Blob Storage before deployment. Sure, it’s a fairly normal approach for this type of deployment, but it can be made more accessible for a demo application such as this. Again, this project is a complete and fully functioning version of Neo4j in Windows Azure, but there exists no application that cannot be improved. We want the next version (2.0) to be turn-key in the sense that you should be able to just download Neo4j and launch it for full function!
Please note that you can also use another approach for installation in Windows Azure, which is to use a so-called startup task.

Running the server

When the solution is installed we are ready to launch Neo4j. A batch file is executed through a standard Process.Start() call.
There should perhaps be more to say here about the launch, but there really isn’t. It is this simple.
The hosting application kicks off the Neo4j server instance in Windows Azure. All of the configuration of the server is done in the installation steps prior to starting the server.
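
Condensed to its essence, the launch reads the dynamically assigned endpoint and starts the batch file; the sketch below shows the idea (the endpoint name, script path and argument handling are illustrative, not the project’s actual code):

using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class Neo4jLauncher
{
    public static Process Launch(string neo4jHome)
    {
        // The input endpoint declared in ServiceDefinition.csdef; Windows Azure assigns the port.
        int port = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["Neo4jEndpoint"].IPEndpoint.Port;

        // The real project rewrites the Neo4j configuration files with this port before starting;
        // here we simply pass it to a startup script as an argument.
        var startInfo = new ProcessStartInfo
        {
            FileName = Path.Combine(neo4jHome, @"bin\Neo4j.bat"),
            Arguments = port.ToString(),
            WorkingDirectory = neo4jHome,
            UseShellExecute = false
        };

        return Process.Start(startInfo);
    }
}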

The Web administration

When the server is running, head over to http://localhost:7474/ to see the web administration:

It gives you access to the main performance measures, a data browser, a scripting console using the Gremlin graph scripting language to test out ideas, and monitoring details regarding the server.

The port on which an application runs in your local Development Emulator is dynamically assigned. 7474 is the default Neo4j port in the configuration files for the server. The Windows Azure hosting project will dynamically read the allocated port and set it in the config before it launches our server. In my case (Magnus), on my local dev machine the dynamic port was 5100, so for me the link http://localhost:5100/ was correct. Try that, or read which port your instance launches on from the console output when you are running the demo. Fortunately the dynamic port selected by the Compute Emulator on the local machine seems to stay the same over time.

How do I connect - The Neo4j REST API

The REST API of the Neo4j server is built to be self-explanatory and easy to consume, and is normally mounted at http://localhost:7474/db/data. You can find the docs here. A basic request to the data root URI of your new Neo4j server using cURL looks like this:

curl -H Accept:application/json http://localhost:7474/db/data/

and gives the response:

{
  "node" : "http://localhost:7474/db/data/node",
  "node_index" : "http://localhost:7474/db/data/index/node",
  "relationship_index" : "http://localhost:7474/db/data/index/relationship",
  "reference_node" : "http://localhost:7474/db/data/node/0",
  "extensions_info" : "http://localhost:7474/db/data/ext",
  "extensions" : {
  }
}

This describes the whole database and gives you further URLs to discover indexes, the reference data node, extensions and other good information. A REST representation of the first node (without any properties) looks like:

curl http://localhost:7474/db/data/node/0

{
  "outgoing_relationships" : "http://localhost:7474/db/data/node/0/relationships/out",
  "data" : {
  },
  "traverse" : "http://localhost:7474/db/data/node/0/traverse/{returnType}",
  "all_typed_relationships" : "http://localhost:7474/db/data/node/0/relationships/all/{-list|&|types}",
  "property" : "http://localhost:7474/db/data/node/0/properties/{key}",
  "self" : "http://localhost:7474/db/data/node/0",
  "properties" : "http://localhost:7474/db/data/node/0/properties",
  "outgoing_typed_relationships" : "http://localhost:7474/db/data/node/0/relationships/out/{-list|&|types}",
  "incoming_relationships" : "http://localhost:7474/db/data/node/0/relationships/in",
  "extensions" : {
  },
  "create_relationship" : "http://localhost:7474/db/data/node/0/relationships",
  "all_relationships" : "http://localhost:7474/db/data/node/0/relationships/all",
  "incoming_typed_relationships" : "http://localhost:7474/db/data/node/0/relationships/in/{-list|&|types}"
}

In order to get started, please go over to the main Neo4j wiki page. For the server, there is a good getting started guide, or look at some of the projects using Neo4j.

What can I do with it?
Building applications with the Neo4j Server is really easy. Either you can just use the raw REST API to insert and update your data, or use one of the bindings to Ruby, .NET, PHP and other languages to start interacting with Neo4j.
Neo4j really shines when it comes to deep traversals of your data and analysis of different aspects of your domain. The flexibility of a graph really helps in a lot of scenarios, not only social networking as in the following example.
As a small example - this is what you do to build a sample LinkedIn-like social network, execute a shortest path query against it and make a recommendation engine based on that (taken from Max de Marzi’s Neography Ruby bindings for the Neo4j Server). Install them with

gem install neography

A small Ruby example (let’s say in a file called linkedin.rb):

view sourceprint?

require 'rubygems'
require 'neography'

@neo = Neography::Rest.new

def create_person(name)
  @neo.create_node("name" => name)
end

def make_mutual_friends(node1, node2)
  @neo.create_relationship("friends", node1, node2)
  @neo.create_relationship("friends", node2, node1)
end

def suggestions_for(node)
  @neo.traverse(node, "nodes", {"order" => "breadth first",
    "uniqueness" => "node global",
    "relationships" => {"type" => "friends", "direction" => "in"},
    "return filter" => {
      "language" => "javascript",
      "body" => "position.length() == 2;"},
    "depth" => 2})
end

johnathan = create_person('Johnathan')
mark = create_person('Mark')
phill = create_person('Phill')
mary = create_person('Mary')
luke = create_person('Luke')

make_mutual_friends(johnathan, mark)
make_mutual_friends(mark, mary)
make_mutual_friends(mark, phill)
make_mutual_friends(phill, mary)
make_mutual_friends(phill, luke)

puts "Johnathan should become friends with #{suggestions_for(johnathan).map{|n| n["data"]["name"]}.join(', ')}"

After executing this code with Ruby:


ruby linkedin.rb

You should get the resulting recommendation:


Johnathan should become friends with Mary, Phill

You can of course see the increase of data in the Web dashboard at http://localhost:7474, too.
There are a number of other cool examples, for instance an IMDB simulation with recommendations against a Neo4j server instance. Enjoy!

.NET Client library
If you want to talk to a Neo4j instance from your .NET code, you will of course need a client library that knows how to communicate with the REST API. The blog post Neo4j .NET Client over HTTP using REST and JSON discusses this concept and what would be required to create such a client library. There is also an existing library, Neo4RestNet, which is certainly a very good place to start if you want to communicate this way.
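If you just want to see what the raw HTTP conversation looks like from .NET before committing to a library, a few lines of WebClient code are enough. This is only a minimal sketch against the default local server URL shown above; the node property value is made up for illustration:

using System;
using System.Net;

class Neo4jRestSample
{
    // Base URL of the Neo4j REST API (assumes the default local install on port 7474).
    private const string DataRoot = "http://localhost:7474/db/data/";

    static void Main()
    {
        using (var client = new WebClient())
        {
            // Discover the service root, just like the curl example above.
            client.Headers[HttpRequestHeader.Accept] = "application/json";
            Console.WriteLine(client.DownloadString(DataRoot));

            // Create a node with a single "name" property by POSTing JSON to the node URI.
            client.Headers[HttpRequestHeader.ContentType] = "application/json";
            string created = client.UploadString(DataRoot + "node", "POST", "{ \"name\" : \"Johnathan\" }");
            Console.WriteLine(created);
        }
    }
}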
Note: It would be nice to teach Neo4j to use another form of communication more easily consumed by .NET code, where perhaps the library pieces are more evolved. We are currently looking into this and will keep you posted.
I want to play with it. Where can I get it?

Glad you like it and happy that you want to give it a spin!
If you want to look at our Windows Azure solution, you only need to:

  • Download the Visual Studio 2010 Neo4j Windows Azure hosting project.

If you are aiming to test-run our solution, either locally on your machine or in the cloud, you need a few more pieces of the puzzle. (Again, this is version 1.0 and it involves a few more manual steps than we’d like.)

  • Download Neo4j.
  • Download a Java Runtime Environment.
  • Upload Neo4j and the JRE to Windows Azure Blob Storage (or just use your local Development Storage Emulator to test this on your local machine).
  • Launch the hosting project in Visual Studio.
  • Configure the solution with your own Windows Azure Storage credentials.
  • Deploy Neo4j to your Windows Azure account (or hit F5 to run it in your local Development Fabric Emulator).
The source of the Service Definition files, Service Configuration files, Development Storage and Development Fabric Emulators are part of the Windows Azure Visual Studio tools project for Neo4j that you can download and install from here.
Summary
During the coding and testing of this project a few experiences are inescapable:
  • Java runs very well on Windows Azure. In fact, if you are able to run your Java application on a regular Windows Server, it will run on a Windows Azure instance, with a little tweaking and fiddling to make this happen, of course.
  • Fiddling with folders and paths in your Windows Azure applications so that everything can find everything else takes some getting used to. Extracting configuration settings is an absolute must! You have to handle this well in order to make run-time configuration changes down the road.
  • It is advisable to pack the JRE alongside the Java application you are deploying, to reduce the number of steps required to install the server application on startup.

In version 2.0 of this project we hope to make the Visual Studio solution much more turn-key. All you should need to do to test-drive this application is download the solution and launch it; instantly you should have a running Neo4j server! We intend to do this by downloading the JRE and the Neo4j server directly from http://neo4j.org. We will also look into securing the database files and adding multiple server instances collaborating together. This last bit, in cloud lingo, is called "scaling out".
Another thing on our list is to make this Java server bark in a different tongue. ;~) But more about this will come down the line.

If you do look at this project and have comments or feedback feel free to contact us @noopman and @peterneubauer. Hope you will enjoy this new and shiny toy as much as we do!
Cheers,

Magnus Mårtensson – Business Responsible Cloud @ Diversify
Peter Neubauer – VP Product Management @ Neo Technology

image Magnus: As a .NET Architect and Cloud specialist I am continuously searching for new tools for my toolbox. There are enormous amounts of great tools out there – and Neo4j is one that outshines the bulk of them. Having the power of a graph database at your fingertips is fantastic. With this easy deployment to Windows Azure, graph data is no longer a stranger in the .NET field.

image Peter: The Neo4j community has seen a lot of interest from the .NET developer community lately. Working with Azure as a Platform-as-a-Service hosting environment for Neo4j finally gives .NET developers the possibility to use all the great features and performance gains of Neo4j on Microsoft-supported infrastructure. The prospect of a solid NoSQL offering in the graph database space is very exciting for the project.
It has been a pleasure for Diversify and Neo4j to work in collaboration with Microsoft on this project, and we are very thankful for this opportunity to have fun with a great and unexpected technology combination.


Buck Woody posted Windows Azure Use Case: Web Applications on 2/14/2011:

This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description:

image Many applications have a requirement to be located outside of the organization's internal infrastructure control. For instance, the company website for a brick-and-mortar retail company may need to serve not only static but also interactive content to its external customers, without giving those customers access inside the organization's firewall.

imageThere are also cases of pure web applications used for a great many of the internal functions of the business. This allows for remote workers, shared customer/employee workloads and data and other advantages. Some firms choose to host these web servers internally, others choose to contract out the infrastructure to an “ASP” (Application Service Provider) or an Infrastructure as a Service (IaaS) company.

In any case, the design of these applications often resembles the following:

WebAppsWeb

In this design, a server (or perhaps more than one) hosts the presentation function, providing HTTP or HTTPS access to the application, and this same system may hold the computational aspects of the program. Authorization and access are controlled programmatically, or are more open if this is a customer-facing application. Storage is placed on the same or other servers, hosted within an RDBMS or NoSQL database, or a combination of the options, all coded into the application.

High availability within this scenario is often the responsibility of the application's architects and is achieved by purchasing more hosting resources, which must be built, licensed, configured, and manually added as demand requires. Some IaaS providers have a partially automatic method to add nodes for scale-out, if the architecture of the application supports it. Disaster recovery is the responsibility of the system architect as well.

Implementation:

In a Windows Azure Platform as a Service (PaaS) environment, many of these architectural considerations are designed into the system.

WebAppsAzure

The Azure “Fabric” (not to be confused with the Azure implementation of Application Fabric - more on that in a moment) is designed to provide scalability. Compute resources can be added and removed programmatically based on any number of factors. Balancers at the request-level of the Fabric automatically route http and https requests. The fabric also provides High-Availability for storage and other components. Disaster recovery is a shared responsibility between the facilities (which have the ability to restore in case of catastrophic failure) and your code, which should build in recovery.

In a Windows Azure-based web application, you have the ability to separate out the various functions and components. Presentation can be coded for multiple platforms like smart phones, tablets and PCs, while the computation can be a single entity shared between them. This makes the applications more resilient and more object-oriented, and lends itself to a SOA or Distributed Computing architecture.

It is true that you could code up a similar set of functionality in a traditional web farm, but the difference here is that the components are built into the very design of the architecture. The APIs and DLLs you call in a Windows Azure code base contain these components as first-class citizens. For instance, if you need storage, it is simply called within the application as an object.  Computation has multiple options and the ability to scale linearly.
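To make that concrete, here is a minimal sketch of how blob storage is addressed as an object from Azure code using the StorageClient library that ships with the SDK; the connection string (development storage here) and the container and blob names are placeholders:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class StorageAsAnObject
{
    public static void SaveGreeting()
    {
        // "UseDevelopmentStorage=true" targets the local storage emulator; swap in real account credentials as needed.
        CloudStorageAccount account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");

        // Storage is simply an object in the code base: a client, a container, and a blob reference.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("samples");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference("greeting.txt");
        blob.UploadText("Hello from Windows Azure storage");
    }
}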

You also gain another component that you would either have to write or bolt-in to a typical web-farm: the Application Fabric. This Windows Azure component provides communication between applications or even to on-premise systems. It provides authorization in either person-based or claims-based perspectives.

SQL Azure provides relational storage as another option, and can also be used or accessed from on-premise systems. It should be noted that you can use all or some of these components individually.
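As a quick illustration, reaching SQL Azure from .NET, whether from a role or from an on-premises system, looks like ordinary ADO.NET; the server, database, and credentials below are placeholders:

using System;
using System.Data.SqlClient;

class SqlAzureSample
{
    static void Main()
    {
        // Placeholder server/database/credentials; SQL Azure expects the user@server login form and an encrypted connection.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;User ID=myuser@myserver;Password=mypassword;" +
            "Trusted_Connection=False;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM sys.tables", connection))
        {
            connection.Open();
            int tableCount = (int)command.ExecuteScalar();
            Console.WriteLine("Tables in the database: " + tableCount);
        }
    }
}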

Resources:

Design Strategies for Scalable Active Server Applications - http://msdn.microsoft.com/en-us/library/ms972349.aspx

Physical Tiers and Deployment  - http://msdn.microsoft.com/en-us/library/ee658120.aspx


Philipp Aumayr (@paumayr) explained Automated deploying and testing of services in Azure in a 2/14/2011:

image We have been experimenting with automated testing of our services against Windows Azure lately. Our goal was to deploy a service in the build process, run a unit test, and then undeploy the service. Assuming that you have a hosted service ready (something.cloudapp.net) and a solution with a unit test that runs against a locally hosted service, there are a few steps you need to take in order to test against an Azure-deployed instance of the service instead:

  • Enable the build machine to deploy to your subscription
  • Build the cspkg file during the build process
  • Deploy to azure using the windows azure cmdlets
  • Run the unit test against the newly deployed service
  • Remove the deployed service after running the unit test
Enabling the build machine to deploy to Azure

imageWindows Azure provides a web service for managing your subscription programmatically. It allows you to create roles, change instance counts, create storage accounts and queues, and so on. Since sending passwords back and forth is not a really safe idea, the service uses client certificates. This basically means that you create a certificate on the client machine and upload the public key of the certificate to Azure. This way you tell your Azure subscription to trust requests from the machine where you created the certificate. So, to let the build machine do this, log in to the build machine with the user that runs the TFS build agent. Open up cmd.exe, navigate to a directory where you feel comfortable creating a certificate (a new folder, e.g.) and execute the following command (I found this here):

makecert -r -pe -a sha1 -n "CN=Windows Azure Authentication Certificate" 
         -ss My -len 2048 
         -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" 
         -sy 24 testcert.cer

This creates a testcert.cer file containing the public key of a newly created certificate. Remember that this file is nothing critical; it only contains the public key of the certificate. The private key is locked up in the key store of the machine and is not exportable, which implies that you have to create a new certificate for every machine you want to deploy from. You can upload the .cer file to the Windows Azure management portal to allow the machine to manage the subscription (using the recently created certificate). None of these mechanisms is new; they are quite commonplace in other secure applications, and SSH with password-less login does exactly the same thing. Credit to Microsoft for doing things this way (and making it the only way to do it!). It really helps prevent passwords from being checked into source trees. Now that we can deploy to Azure from the build machine, we can go use the Azure cmdlets. But, uhm, we need something to deploy first, so let's enable packaging of the service in the build process.
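If you want to sanity-check the certificate before wiring up the whole build, a few lines of .NET code can call the Service Management API directly with the certificate attached. This is only a sketch: the subscription ID and thumbprint are placeholders, and the x-ms-version value is an assumption that should match the API version you target:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ManagementApiCheck
{
    static void Main()
    {
        const string subscriptionId = "00000000-0000-0000-0000-000000000000"; // placeholder
        const string thumbprint = "YOUR-CERT-THUMBPRINT";                     // placeholder

        // Find the management certificate in the current user's store by thumbprint.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2 cert = store.Certificates
            .Find(X509FindType.FindByThumbprint, thumbprint, false)[0]; // throws if the cert is missing
        store.Close();

        // List the hosted services in the subscription; the client certificate authenticates the call.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices");
        request.Headers.Add("x-ms-version", "2010-10-28"); // assumed API version
        request.ClientCertificates.Add(cert);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}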

Building the cspkg in the build process

This one is actually quite easy: a Cloud project, as created with the Azure SDK and VS 2010, is basically an MSBuild script based upon the Azure MSBuild template/tasks file. It therefore comes with a Publish target that creates a Publish folder in the output directory and packages the cspkg/cscfg files into that directory. All we have to do is make sure that the target is called. Open up the build definition, navigate to Process and, within the Advanced settings, set the MSBuild Arguments property to "/t:Build;Publish".

This tells MSBuild to execute the Build and the Publish targets. When you execute the build with those arguments, a Publish folder containing the cspkg and cscfg files will be copied to the drop location. The folder is also available after the solution has compiled, which means we can access it during the build process. Now that we have our config and the package, let's get to the real meat and deploy the service in the build process.

Deploying to Azure using Windows Azure Cmdlets

Now, Windows Azure having a REST service API is all great; we can access it using any language that can do a web request. But honestly, working solely with the REST API can be quite cumbersome: in order to deploy a service you have to upload a package to blob storage and then do a deploy from there. The Windows Azure cmdlets are extensions to PowerShell that make life a lot easier. First, install them on the build machine; you can find them here. Of course they require PowerShell, but I think you could have guessed that (you can find PowerShell 2.0 here). Installing the Azure cmdlets is quite straightforward, so I won't describe it here.

The windows azure cmdlets allows you to write commands like this:

Get-HostedService "MyService" -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot 'Staging' |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

This gets the service "MyService" using the certificate $cert and the subscription $sub. How do we fill those variables? The certificate can be found by its thumbprint:

$cert = Get-Item cert:\CurrentUser\My\$certTP

where $certTP is the thumbprint of the certificate, as presented by the Azure management portal. $sub is just the subscription ID, a GUID, also exactly the way it is presented in the Azure management portal. The first command returns a hosted service object that is piped to the Get-Deployment cmdlet, which returns a deployment that is in turn piped to the Set-DeploymentStatus cmdlet. Set-DeploymentStatus returns an asynchronous operation object, which is piped to the Get-OperationStatus cmdlet. The -WaitToComplete flag makes sure that the command waits until the asynchronous operation has finished. Unfortunately, this does not mean that the service is available once the operation status has completed, so we have to poll until the role status is ready. Altogether we need a script that deploys a package to staging, swaps it to production, and removes the (old) staging deployment. I think it is pretty self-explanatory, so I won't comment too much:

# certificatethumb subscriptionId servicename package config
$certTP = $args[0]
$cert = Get-Item cert:\CurrentUser\My\$certTP
$sub = $args[1]
$storageAccount = $args[2]
$servicename = $args[3]
$package = $args[4]
$config = $args[5]
$label = $args[6]

Add-PSSnapin AzureManagementToolsSnapIn

New-Deployment -serviceName $servicename -storageserviceName $storageAccount `
    -subscriptionId $sub -certificate $cert -slot 'Staging' `
    -package $package -configuration $config -label $label |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot 'Staging' |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

Get-Deployment staging -subscriptionId $sub -certificate $cert -serviceName $servicename |
    Move-Deployment |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot 'Staging' |
    Set-DeploymentStatus 'Suspended' |
    Get-OperationStatus -WaitToComplete

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot 'Staging' |
    Remove-Deployment |
    Get-OperationStatus -WaitToComplete
   
Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot 'Production' |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete
   
$ready = $False
while(!$ready)
{
    $d = Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
            Get-Deployment -Slot 'Production'
    $ready = ($d.RoleInstanceList[0].InstanceStatus -eq "Ready") -and ($d.Label -eq $label)
}

So, now that we have a script that does our deployment, let's plug it into the build process! TFS Build 2010 (or TFS 2010 Build?) is based upon Workflow 4.0, so we get a nice UI for editing our build process. The place where we want to deploy our package is right between building and testing. So navigate to Sequence -> Run On Agent -> Try Compile, Test and Associate Changesets and Work Items -> Sequence -> Compile, Test, and Associate Changesets and Work Items -> Try Compile and Test -> Compile and Test -> For each configuration in BuildSettings.PlatformConfigurations -> Compile and Test for Configuration. Compile and Test for Configuration looks something like this:

Compile And Test For Configuration with Deploy to Azure

As you can see, I added a Sequence called "Deploy to Azure" in there, it looks like this

Deploy to Azure Workflow 4.0 Sequence

First it runs two FindMatchingFiles activities, one for the config and one for the cspkg file. The arguments to the FindMatchingFiles activities are "String.Format("{0}\Publish\*.cscfg", BinariesDirectory)" and "String.Format("{0}\Publish\*.cspkg", BinariesDirectory)" respectively. This prevents us from having to pass arguments to the build process and therefore hard-"coding" the name of the service. Note that currently we only deploy the first package found. The sequence also builds a path to the PowerShell script to deploy. The "Invoke PS to upload and publish" activity is the part where the magic happens: it invokes the PowerShell script and passes the arguments. The Arguments property is a bit more complex, so I'll post it here:

String.Format("-File ""{0}"" {1}",
  deployScriptPath,
  String.Format("""{0}"" ""{1}"" ""{2}"" ""{3}"" ""{4}"" ""{5}"" ""{6}""",
    AzureCertificateThumbprint,
    AzureSubscriptionID,
    AzureStorageName,
    AzureHostedServiceName,
    packagePath.Single(),
    configPath.Single(),
    LabelName))

AzureCertificateThumbprint, AzureSubscriptionID, AzureStorageName and AzureHostedServiceName are arguments to the build process, so we can set them in the build definition. packagePath and configPath are the enumerations returned by the FindMatchingFiles activities, and LabelName is a variable from the default template containing the name of the build. This way we get our version/build number into the deployment name.

Running the unit test against the newly deployed service

This one is quite easy again: all we have to do is change the endpoint in the configuration file for the unit test that we are executing. Point it to the URL where the hosted service is running and run the build again. This currently is a bit weak, as we have to change the configuration if we want to execute the unit tests locally. I'm still investigating what would be the best way to make the endpoint configurable, but argument passing is definitely not a strength of MSTest.
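One workable compromise (just a sketch; the environment variable name and local URL are made up for illustration) is to have the test read its endpoint from an environment variable that the build workflow sets, falling back to the local address when the variable is absent:

using System;
using System.Net;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ServiceEndpointTests
{
    // The build workflow could set SERVICE_BASE_URL (a made-up variable name) before invoking MSTest;
    // when it is not set, the test falls back to the locally hosted service.
    private static string ServiceBaseUrl
    {
        get
        {
            return Environment.GetEnvironmentVariable("SERVICE_BASE_URL") ?? "http://localhost:81/";
        }
    }

    [TestMethod]
    public void ServiceRespondsAtConfiguredEndpoint()
    {
        using (var client = new WebClient())
        {
            string body = client.DownloadString(ServiceBaseUrl);
            Assert.IsFalse(string.IsNullOrEmpty(body));
        }
    }
}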

Anyhow, once you start a new build, the service should be deployed to Azure and the unit test should pass just fine. When you open up the build log, you will see a task that took about 11 minutes to execute:

WorkFlow 4.0 build log of deploying to Azure

Yes, it takes Azure quite a while to get a service up and running, but hey, it's firing up a new machine for the role you are deploying, so not too bad after all.

Remove the deployed service after running the unit test

Since we don't want to hog resources in Azure, and we want to save some money, we want to make sure that we delete the deployment once the tests have passed. All we need to do is add another Invoke Process activity to execute a second PowerShell script that undeploys from Azure. The script looks like this:

# certificatethumb subscriptionId servicename
$certTP = $args[0]
$cert = Get-Item cert:\CurrentUser\My\$certTP
$sub = $args[1]
$servicename = $args[2]


Add-PSSnapin AzureManagementToolsSnapIn

Get-HostedService $servicename -Certificate $cert -SubscriptionId $sub |
    Get-Deployment -Slot 'Production' |
    Set-DeploymentStatus 'Suspended' |
    Get-OperationStatus -WaitToComplete
 
Remove-Deployment -Slot 'Production' -ServiceName $servicename -SubscriptionId $sub -Certificate $cert |
    Get-OperationStatus -WaitToComplete

You'll have to plug it in right after the "If Not Disable Tasks" activity in the build process:

Workflow 4.0 deploy/undeploy from azure

I think you can figure out the required arguments yourself. Just make sure you don't take down your production service by mistake; I recommend using different subscriptions or the like :) As you can see, the Windows Azure cmdlets really make things a lot easier. It would be nice to have Workflow activities to manage Azure instances, but quite frankly, with the current (perceived) speed of the Workflow editor I'd rather type a PowerShell script and only do the boilerplate things in activities.


<Return to section navigation list> 

Visual Studio LightSwitch

image22242222No significant articles today.

 


<Return to section navigation list> 

Windows Azure Infrastructure

The Windows Azure OS Updates Team reported Windows Azure Guest OS 1.10 (Release 201101-01) on 2/15/2011:

Now deploying to production for guest OS family 1 (compatible with Windows Server 2008 SP1).

The Windows Azure OS Updates Team posted Windows Azure Guest OS 2.2 (Release 201101-01) on 2/15/2011:

Now deploying to production for guest OS family 2 (compatible with Windows Server 2008 R2).


Cory Fowler described Golden Eggs of Windows Azure ‘Cloud’ Best Practices in a 2/15/2011 post:

Remembering back to when I was in the Audience in the Community, the one thing I always wanted to find out about was Best Practices for the Particular Technology I was listening about. I’m sure a number of people feel the same way.

Best Practices are hard to introduce in a talk as you're typically only speaking to 10% (if that) of your audience and leaving the other 90% scratching their heads. For this reason, I am providing this blog post as a resource to allow people interested in Best Practices to proactively seek them out with a little bit of guidance.

Getting in the Know

Let's face it, you may want to know the Best Practices from the beginning, as you think you're doing yourself a favour by knowing how to be a smooth operator. STOP. Let's take a moment to step back and think about this.

Just because something has been outlined as a best practice doesn't guarantee that it is one for your particular situation. Part of identifying a Best Practice is knowing your options for a particular situation. Once you know you have the right fit, you can extend your implementation to leverage a Best Practice and guarantee that you've solidified your feature in its best possible functioning implementation.

My First Tip for seeking out best practices is to Know your Platform & Your Options.

There are a number of resources for getting to know Windows Azure on my blog [which I’ve recently installed Microsoft Translator to provide great content for all to read] and the Windows Azure Team Blog.

Further Research is Necessary

As good as the content you read online is, you will want to turn to a number of printed [or electronic] books. Here are a few books that I would suggest.

Manning Books – Azure In Action

AzureInAction

This book is not about Best Practices. However, it provides the best explanation of the Windows Azure Internals to date. The first few chapters provide insight into the Fabric Controller and the Windows Azure Fabric.

I would consider this an initial “Deep Dive” Starter read to get into Microsoft’s Cloud Computing initiative and an understanding of Windows Azure’s offerings in the Platform as a Service (PaaS) Cloud Space.

Microsoft Patterns & Practices

cat

My most recent read was Moving Applications to the Cloud by the Microsoft Patterns & Practices team. This book was very insightful as to some of the practices that Microsoft has been implementing while moving into the cloud, obviously obfuscated through a hypothetical company, Adatum, and its expense tracking system, aExpense.

This book got me thinking about a number of great architecture concepts and some great value add code that can be re-used over a number of Projects.

Developing-Applications-for-the-Cloud-on-Windows-Azure

I enjoyed the previous book so much that I will be picking up the other guidance book from Microsoft, Developing Applications for the Cloud.

I’m going out on a limb to say that based on the previous book, I’m betting that this book will be rather insightful, hopefully providing more guidance on Architecting Applications for Windows Azure.

The Cloud Developer Tool Belt Reference Guide

This next resource might not be something you will read end to end. I would say that this is definitely an item you should be referring to when designing your Cloud Architecture.

AzureScope: Benchmarking and Guidance for Windows Azure is an all-encompassing guide to best practices, benchmarks, and code samples, next to which my Essential Guide for Getting Started with Windows Azure post looks like one of those rings you pick up at the cash register of your local dollar store.

XCG_Sharepoint_Header

Conclusion

Hopefully this post will help those in my audiences whom I am unable to reach with that Best Practice deep dive. I have no doubt that cloud computing is the future of application deployment and hosting, and I believe that Microsoft is putting forth a very strong offering with Windows Azure. Regardless of your technology, the Best Practices resources listed here will give you some thought-provoking reading material, which, after all, is most likely one of the main contributing factors in choosing software development as a career.


Stephane Boss posted Need Scalability? Take it to the Cloud to the Partner Side Up blog on 2/15/2011:

This is a snapshot of the Industry Partner Communiqué I send out every month. Each month I focus on a specific topic among Cloud, CRM and Application Platform, and once in a while I cover a related topic such as Microsoft Dynamics in the industry (to be released in February).
The goal of this communiqué is to give a snapshot of what I think are big opportunities for our customers and partner ecosystem to embrace. All content is public and everyone has access to it, even our dear friends the competitors.
You can "easily" subscribe (search for Microsoft Industry Partner Communiqué) or read related stories. Feedback is always welcome, and I respond to all requests and questions by email.

For the cloud Communiqué, I have interviewed several industry subject matter experts including Bindia Hallauer, Director of Technology Strategy, Worldwide Financial Services at Microsoft.

Bindia Hallauer: “While cloud computing offers a number of compelling advantages, when it comes to financial services companies (and those hoping to serve them), the most important benefit is quite clear: the ability to scale on demand without procuring intensive, expensive infrastructure.

Bindia Hallauer

Think about the sheer quantity of data that financial services organizations need to manage on a daily basis. Insurance or capital market companies, for example, rely heavily on computational metrics, which involves running simulations to develop models and pricing information. These companies run millions of computations per second, and the more simulations a company can process, the more effectively it can determine the value of a product. Or consider companies that run millions upon millions of risk analysis simulations in a single day.


These are just a few of the data-intensive processes that are the bread and butter of the financial services world. The infrastructure needed to support these activities is daunting, and during peak times, such as at the end of a trading day, many firms cannot keep up with demand.

Financial services partners are leveraging Windows Azure to address scalability and other issues in exciting new ways. Partners are relying on the power and flexibility of the cloud to extend capabilities, shorten time to market, and minimize costs. Read about how partners are using Windows Azure to improve delivery of core banking systems, provide real-time NASDAQ stock quotes, or enhance response to trade and bargain issues.”

Thanks and don’t forget to read the Industry Partner Communiqué to learn more about what the cloud has to offer in specific industries

Stephane Boss -

Technorati Tags: Cloud,Insurance,infrastructure,Financial,Windows,Azure,Partners,response

Full Disclosure: I’m a registered member of the Microsoft Partner Network.


David Linthicum asserted “A foolish pack mentality and poor assumptions about cloud computing could saddle IT with bad options down the line” in a deck for his How VCs are leading us down the wrong path for cloud computing article of 2/15/2011 for InfoWorld’s Cloud Computing blog:

image Cloud computing seems like a safe bet. I mean, any startup or existing company that does cloud computing and needs capital must provide a great near-term ROI for investors, right? Maybe not.

A wise venture capitalist once told me that those in the VC community move like flocks of birds. When they see the others moving in a certain direction, they all seem to follow. Cloud computing is another instance of that behavior. And if VCs and investors are naïve about cloud computing, IT will face business pressures to use cloud computing based on that naïveté, creating serious issues down the line.

image The trouble is that cloud computing is both ill-defined and broadly defined. There is a lot of confusion about what's real cloud computing and what is not. I have to admit that I spend a good deal of my day trying to figure that out as well.

What are the top three mistakes that VCs and other investors will make as they move into the cloud computing space? See below.

Cloud computing investor mistake No. 1: Assume a sustainable business model
The idea behind cloud computing is that renting is better than buying, but that assumes your purchases will be more expensive than your rentals. As I work through ROI and cost models for various enterprises, it's clear to me that the high subscription fees many cloud computing providers charge have to fall over time as the prices for enterprise hardware and software fall as well. That means the expected ROI may not be there for cloud computing -- and even if it's there at first, it could degrade significantly over time.

Cloud computing investor mistake No. 2: Have little understanding of the technology in the context of an emerging market
The trouble with new technology is that you need to understand how it works and plays with other technologies that are emerging around it. I'm finding that many details in the cloud computing space appear to be innovative and unique but turn out to be neither when you consider the larger market. The trouble is, you have to keep up with the technology details to come to these conclusions, which is very difficult in these fast-moving, hype-ridden days.

Cloud computing investor mistake No. 3: Buy into cloud-washed technology
Anyone with any significant enterprise technology offerings has quickly learned how to spin things as the cloud. Indeed, anything and everything that you could once find in a data center is suddenly cloud computing technology (the so-called private cloud), whether or not anything is new there. The market will figure this out at some point, and suddenly investors will abandon the faux-cloud companies whose technology you relied on.

Go cloud -- but do it wisely, not by following the VC flock.


James Urquhart lists three of Cloud computing's killer applications in a 2/14/2011 post to C|Net News’ Wisdom of Clouds blog:

The year 2010 will probably be remembered as the year that cloud computing "shaped" itself into a tangible concept, at least amongst those of us who care. 2011, on the other hand, will likely be the year in which IT figures out how to actually use cloud concepts.

Of course there are success stories dating back two or more years, but what is happening so far in 2011 is a growing body of businesses, data, and applications that were born and cultivated in the cloud. Add to that the online and conference communities forming around cloud and new application categories and we are starting to get a clear picture of what the "killer applications" of cloud really are--at least as of today.

I want to step through the three that I think are the most impactful application categories for cloud and show some examples of why cloud enables these applications to exist:

  1. Data collection and analytics
    There is no doubt that one of the biggest revolutions enabled by the cloud is the ability to store and--perhaps more importantly--process large sets of data, either as very large scale fixed-length jobs or in real time. The economics are simple: without the large upfront capital expense, the total cost of using a large number of systems for the period of time it takes to process the data is significantly reduced; and "downtime" costs nothing more than the cost of storing the data.

    I attended O'Reilly Media's Strata conference in Santa Clara, Calif., last week, which covered "big data" and the various applications of that concept to a variety of problems. I walked away extremely impressed. The innovations on display were mind-blowing. It really felt like a Web conference circa 1999, with talk of almost endless entrepreneurial opportunities for those with the right ideas.

    I attended sessions on data journalism (which demonstrated how data analysis can be used to find key facts regarding the news of the day) and the application of data capture and analysis to government and academia. Both sessions demonstrated applications that amazed me, including several that used public data sets to give new insights into how we think and act as a society.

    Personal data analysis was also on display, such as Strata start-up showcase winner Billguard, a service that will analyze your credit card bills for potentially abnormal activity by combining your statement data with other data sources and "wisdom of crowds" tools. Services like Billguard lead me to believe that consumers will benefit most from "big data" innovations.

    The cloud has clearly changed the game for data, and it looks like data may in turn play a central role in the future of cloud computing.

  2. Online commerce and communities
    This is probably the category that most followers of cloud computing would think of first when they think of killer applications for cloud computing. There is no doubt that cloud has changed the game for Web applications and services. Online services and communities are appearing at a dizzying pace today in large part due to the economics of cloud computing.

    How is that, you might ask? Well, again, it's because the cost of failure is so low in the cloud. Want to try a new business idea, or play with a new online service concept? You have plenty of options for building, hosting, and scaling your concept, all of which are on a "pay-per-use" fee schedule--which ultimately means that if you fail, you stop paying and don't have sunk capital costs to offload or absorb.

    This is why the Silicon Valley--and likely worldwide--venture capital community has so quickly changed the nature of financing for online start-ups. It is now almost impossible to get a VC firm to buy you a server. Rather, they'll tell you to use cloud services to build and test your business concept.

    If your idea shows some signs of success, they'll then tell you to scale the business in the cloud--again, to avoid sunk capital costs. If at some point your business is wildly successful, and a private data center or fixed hosting agreement starts to make more sense, then your VC board members will probably be happy to tell you to take some of your own cash flow and do what you need to do.

    The point is that services such as Google App Engine, Rackspace Cloud Files and Cloud Servers, Amazon Web Services, and a laundry list of others have made delivering new software and business concepts to the online market much, much easier, and the associated risks much, much cheaper.

  3. Context vs. core
    What may surprise many is the increasing adoption of cloud and hosted options for "context" systems (in the "core vs. context" paradigm introduced by Geoffrey Moore in his management books). In short, context systems are those that may or may not be mission critical, but add no real value to the distinct business that they support. Some examples of context systems include e-mail, telephony, and document management.

    In my role at Cisco, I've seen an increasing number of customers declare goals to freeze or reduce the number of wholly-owned data centers that support their businesses, choosing instead to find online services to meet much of their needs. Where they all seem to be starting is those context systems, with vendors like Salesforce.com, Microsoft BPOS, and several smaller SaaS vendors benefiting.

    So-called "core" functions--which are those functions that a business' competitors would find difficult to replicate--are taking much longer to move to the cloud. This isn't surprising, considering the sensitive nature of the data and code associated with these applications, as well as the investment made into existing systems by enterprises.

    While I believe core systems will move to cloud models in the coming years, I suspect that legal, financial, and technical issues will make that a much slower transition than context applications. That said, familiarity with cloud through those context apps will probably accelerate the rate at which enterprises become comfortable with cloud models, so who knows.

    The other "context" function greatly benefiting from the move to cloud models is storage. Backups, disaster recovery, object storage, content management, and many more storage-centric functions are moving to the cloud at a rapid rate. While security remains a consideration, the introduction of encryption and strong authentication technologies are rapidly expanding the market for storage services.

While I have no doubt that other forms of enterprise computing will move to the cloud over the coming year, these are the three categories that stand out for me. What about you? What do you see as the "killer apps" for cloud computing models?

(Graphics credit: Flickr/Michael Gray)


Dimitry Sotnikov [pictured below] explained What Satya Nadella Means to Azure's Future in a 2/14/2011 post:

image With the recent changes in the leadership of one of Microsoft's key business units – Server and Tools – from Bob Muglia to Satya Nadella, one can't help speculating what this means for the business unit and how it will affect Microsoft's cloud strategy, specifically Windows Azure – Microsoft's platform as a service.

image Here's my uneducated guess, based on the assumption that, given a new task, humans tend to use the same approaches that worked well for them last time, and that Satya [pictured at right] definitely got this post as recognition for successfully rolling out Bing and transforming Microsoft's search business from nothing into a competitor that really frustrates Google.

Here’s what I think Satya will bring to Microsoft’s Server and Tools Business:

  • imageMore focus on online (Azure) than on Windows Server: Bob Muglia made the Windows Server business a success; it was his kid, while Windows Azure (one could argue) was kind of a step-child, imposed on him and added to his business during a re-org. Satya will likely feel much different: for the last few years he has been "living in the cloud" leading Bing, and Steve Ballmer very explicitly made lack of cloud focus the reason for changing the business unit leadership.
  • Compete against the market leader: Bing clearly was developed to compete against Google. I guess this means that now Azure development will become aggressively anti-Amazon.
  • Acquisitions and partnerships: so far Azure has really been a ground-up effort by Microsoft engineers. The Bing team tried to buy Yahoo, and when this did not work, hired a lot of top talent from Yahoo and finally essentially acquired its search and ad business. Satya was directly involved in these efforts. So who is a runner-up in the IaaS business whom Microsoft could acquire to get more visible in that space? Rackspace? Savvis? One could argue, though, that market share was more relevant in the search advertising business, in which the big get bigger (why even bother advertising with small players?), and this advantage of scale is not as relevant in hosting, so acquisitions might not be as effective. We will see…
  • Not sure if Azure appliance emphasis will persist: Azure appliance made a lot of sense under old leadership. Server and Tools Business knows how to sell to enterprises, so let’s turn Azure into an appliance which we can sell to our existing biggest partners and customers. Will Satya feel the same? I don’t think Bing folks were paying much attention to Microsoft’s search appliance strategy leaving this all to SharePoint/FAST and concentrating on pure cloud play…

There were speculations after Ray Ozzie left that Azure might get de-emphasized – after all, Azure was one of Ray's pet projects. With Satya's appointment, I would say that we should expect Azure to only gain priority at Microsoft. We'll see how applicable the Bing experience will be for making Windows Azure a top player in the cloud platform space.


Jason Maynard [pictured below] asserted “Beyond executive shakeups, Microsoft needs to acknowledge this truth: Apps drive infrastructure” in a deck for his Microsoft Needs Apps To Win In The Cloud post of 2/14/2011 to InformationWeek’s Global CIO blog:

image The past week Microsoft announced that Satya Nadella, SVP of R&D for the Online Services Division, would replace Bob Muglia as the President of the Server & Tools Division. Nadella is a respected technical leader and has done good work in getting Microsoft back in the search game. It is worth noting that prior to his stint as Bing product head he ran Microsoft Business Solutions, which includes its enterprise software.

imageIn addition, the company announced the departure of another executive in the Server and Tools division, Amitabh Srivastavam, who co-led the build-out of Windows Azure. Srivastavam was the technical lead for Azure and is credited with putting together its very solid PaaS offering.

The media has speculated that Muglia's departure was tied to the lack of revenue traction in the server virtualization market and the transition to cloud computing via the PaaS (platform as a service) market with Azure. We think the server virtualization criticism is fair, but the shots at Azure are a little off base, since the PaaS market is still in its infancy. While transitioning to the cloud is a big issue, we believe there is a larger and more pressing strategic question that needs to be addressed concerning the overall business application and infrastructure product strategy.

Despite the leadership changes, we see the need for overall strategy adjustments in Microsoft’s Server & Tools Division. We contend the issue is not just about repositioning for private and public cloud computing. The applications and infrastructure market is increasingly meshing together into integrated solution sets, and Microsoft needs to position for that reality. As on-premises software transitions to the cloud, we think this is going to become even more prevalent.

We think there is a very simple truth in software that says applications drive infrastructure. It is very hard to be a winning platform vendor if there aren’t apps standing on top of the infrastructure. It doesn’t matter if this is a consumer, a small business, government, or the Fortune 500 market. One of the key reasons we believe Microsoft has minted billions of dollars for Windows is due to the best-selling Office productivity suite.

A great deal of Apple’s mobile success in our view has to do with the simple fact that it has a killer app in music, and every developer now writes to iOS. Oracle has skyrocketed to a dominant position in the enterprise software industry, in our view, because it's a legitimate enterprise class business applications vendor that can leverage the R&D and scale of its application server and database teams.

Even Salesforce.com couldn’t roll out its platform strategy until it had built critical mass with its popular sales force automation application. SAP seems to have also found this religion as it bought Sybase and launched its own internally developed data warehousing appliance called Hana.

Finally, IBM is the only major vendor that isn't directly in the applications market, although we would argue that is largely semantics. IBM doesn't compete in the traditional enterprise applications market with ERP and CRM applications; it partners extensively with those vendors and leverages its global services implementation capabilities. It is worth noting how IBM's product portfolio has morphed through the acquisition of products that walk, talk, and look like applications. IBM bought FileNet for document management, Cognos for BI, SPSS for predictive analytics, and a number of smaller niche plays for online marketing and customer analytics. In other words, we believe IBM is adding application-like capabilities in order to deliver a whole product solution that is augmented by its vertical industry services expertise.

We find Microsoft's business applications strategy fragmented and incoherent. First off, it's a mish-mash of products focused on small and midsized businesses from the Great Plains, Navision, Axapta, and Solomon acquisitions, which together are called Microsoft Dynamics. While Microsoft Dynamics has over 300,000 customers, we believe it will lose share in the long run to other vendors with more comprehensive SaaS solutions, such as NetSuite.

One of NetSuite’s key differentiators is that it offers a seamless, holistic, cloud-based solution, whereas even within the Microsoft Dynamics product line, it is still difficult to integrate the multiple pieces like financials, CRM and e-commerce. Microsoft gave up on "Project Green" back in 2007 and hasn't really had a compelling story for years in this market. When Microsoft says they are “all in” on the cloud, it is not entirely clear how applications fit into this plan.

Next up on the business intelligence front, the company has nice products with SQL Server and SharePoint, but in our view, could better leverage its vast resources. We think BI ought to be a bigger part of the overall story rather than an add-on to the server products. Gartner recently placed Microsoft in the leader quadrant for BI, validating its offering as a functional and affordable solution. We believe that Microsoft should take a more expansive and visionary approach around a broader information management story.

Page 2:  A Better BI Strategy: IBM's Smarter Planet



<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Nicholas Mukhar reported Cloud, Wireless Dominate 2011 RSA Conference Offerings in a 2/15/2011 post to the MSPMentor blog:

The 2011 RSA Conference kicks off today in San Francisco, attracting IT security professionals and entrepreneurs for five days of the latest in security and related technologies for on-premise, SaaS and the cloud.

Indeed, the cloud will play a starring role during the event, which kicks into high gear Tuesday with a keynote by RSA Chairman and executive VP Art Coviello titled, “Trust in the Cloud: Proof Not Promises.” And judging from the number of cloud-based products and services expected to make their debut at RSA Conference 2011, solution providers working in the ether will have a veritable banquet of new offerings to choose from.

image This year’s event will host more than 350 vendors expected to unveil their latest technology innovations and disclose their plans for 2011 and beyond. The list of exhibitors is pretty extensive, each with something new to offer. Some of the more notable technologies expected to make their debut at the conference include:

1. Motorola AirDefense Services Platform — Dedicated Sensing: It uses collaborative intelligence with dedicated sensors that work with a purpose-built appliance to monitor wireless traffic live. The new software can support small businesses that have a single office or large corporations with many international locations.

2. Astaro Log Management: Its  policy protection, error tracking and automated alerts can reduce troubleshooting by as much as 80 percent and help companies comply with system regulations. With this device, companies can store data in a central location and analyze data from all of their systems and applications.

3. Solera OS 5.0: Using this latest platform, clients will have insight into all of their own network activity. Its new features include protection against unknown threats, an easy-to-navigate interface, traffic views and pattern alerts to easily detect suspicious traffic.

imageThe RSA Conference is well-known for its glut of product announcements, and this  year’s event should be no different. Stay tuned for the latest and greatest coming out of the show, especially those right for the channel.


Sourya Biswas posted Big 4 Auditing Firm Ernst & Young Joins Cloud Security Alliance to the Cloud Tweaks blog on 2/14/2011:

Cloud computing's legitimacy as a mainstream business option just received a shot in the arm with Ernst & Young, one of the Big Four of auditing, joining the Cloud Security Alliance (CSA), a non-profit organization formed "to promote the use of best practices for providing security assurance within cloud computing and provide education on the uses of cloud computing in connection with all other forms of computing."

Ernst & Young, popularly known as E&Y, has an Information Technology Risk and Assurance practice that assists organizations in enhancing their information frameworks and IT risk management. The company said that cloud computing is of high interest on the corporate agenda with senior level management and the C-suite executives giving it considerable attention.

“We’re excited to join the CSA and look forward to working with the other member organizations to help shape policy and develop cloud security and assurance guidelines,” said Bernie Wedge, E&Y’s Americas Information Technology Risk and Assurance leader. “The existence of the CSA and its mission speaks volumes about the evolution of cloud computing and its future. We decided to join the CSA not only to have a seat at the table, but also to provide our clients with leading edge insight on technology and risk management so they may move to the cloud safely and securely.”

The firm had undertaken a survey late last year that said 45% of companies are expected to use cloud computing in the next 12 months. However, it also mentioned that organizations needed to be careful about addressing potential risks before they move their business applications to the cloud. The survey revealed that almost half of the participants were already in the process of evaluating or planning to use cloud-based solutions.

In a press release, Jose Granado, E&Y’s Americas Information Security Services leader, said, “Cloud computing makes an IT investment more efficient, flexible and faster, and allows access to data anytime, anywhere, any place and with any electronic device. The more connected we become, the more exposed we are. Instant access to information presents critical security risks and the CSA is helping address those risks through education, research, and by developing guidelines for operating in the cloud.”

The Cloud Security Alliance was formed in November 2008 and comprises many subject matter experts from a wide variety of disciplines, with the stated objectives of:

  • Promoting a common level of understanding between the consumers and providers of cloud computing regarding the necessary security requirements and attestation of assurance.
  • Promoting independent research into best practices for cloud computing security.
  • Launching awareness campaigns and educational programs on the appropriate uses of cloud computing and cloud security solutions.
  • Creating consensus lists of issues and guidance for cloud security assurance.

Ernst & Young is one of the largest professional services firms in the world and one of the Big Four auditors, along with Deloitte, KPMG and PricewaterhouseCoopers (PwC). It has a global headcount of 144,000 and provides assurance, tax, transaction and advisory services to hundreds of companies across the globe.


<Return to section navigation list> 

Cloud Computing Events

Nancy Medica (@NancyMedica) announced on 2/15/2011 a Webinar: Drive IT Cost Down with Windows Azure to be held on 3/17/2011 at 9:00 AM PST:

image The Cloud Computing era is here, with Windows Azure as a vital part of the CIO's toolkit. Common Sense CEO Cesar D'Onofrio will present the new webinar "Drive IT Cost Down with Windows Azure" on March 17th at 11 a.m. CST.

Register for free and learn:

    image

  • What is Windows Azure and its Economics.
  • Windows Azure versus on-premise solutions.
  • Azure Case Studies.

The webinar is intended for: CIOs, CTOs, IT Managers, IT Developers, Lead Developers.

Cesar D'Onofrio is CEO at Common Sense and an Information Systems Engineer. He has over 15 years of leadership and hands-on experience in software development, usability and interface design, from DB design and system architecture to analysis and user experience.

Register for free here! http://webinar.getcs.com/


Wolfgang Gentzsch announced the ISC Cloud’11 Conference, September 26–27, Dorint Hotel in Mannheim, Germany in a 2/15/2011 post to the HPC in the Cloud blog:

image These days, High Performance Computing (HPC) is increasingly moving into the mainstream. With commodity off-the-shelf hardware and software and thousands of sophisticated applications optimized for parallel computers, every engineer and scientist today can perform complex computer simulations on HPC systems--small and large. However, the drawback in practice is that these systems are monolithic silos, application licenses are expensive and they are often either not fully utilized, or they are overloaded. Besides, there is a long procurement process and a need to justify the expenses including space, cooling, power, and management costs that go into setting up an HPC cluster.

image With the rise of cloud computing, this scenario is changing. Clouds are of particular interest given the growing tendency to outsource HPC, increase business and research flexibility, reduce management overhead, and extend existing, limited HPC infrastructures. Clouds reduce the barrier for service providers to offer HPC services with minimum entry costs and infrastructure requirements. Clouds allow service providers and users to experiment with novel services and to reduce the risk of wasting resources.

image Rather than having to rely on a corporate IT department to procure, install and wire HPC servers and services into the data center, there is the notion of self-service, where users access a cloud portal, request servers with specific hardware or software characteristics, and have them provisioned automatically in a matter of minutes. When no longer needed, the underlying resources are put back into the cloud to service the next customer. This notion of disposable computing dramatically reduces the barrier for research and development! Clouds will surely revolutionize how HPC is applied because of their utility-style usage model. Clouds will make HPC genuinely mainstream.

The ISC Cloud’11 conference will help you to understand all the details of this massive trend: the conference will focus on compute- and data-intensive applications, their resource needs in the cloud, and strategies for implementing and deploying cloud infrastructures. It will address members of the HPC community, especially decision makers in small, medium, and large enterprises and in research (chief executives, IT leaders, project managers, senior scientists, and so on). Speakers will be world-renowned experts in the field of HPC and cloud computing. They will undoubtedly present solutions, guidelines, case studies, success stories, lessons learned, and recommendations to all attendees.

The remarkable success of the first international ISC Cloud’10 Conference held last October in Frankfurt, Germany, has motivated ISC Events to continue this series and organize a similar cloud computing conference this year, with an even more profound focus on the use of clouds for High Performance Computing (HPC).

Following the recommendations of last year’s participants, this year we will be inviting more expert speakers with real, hands-on end-user experience reporting on advanced topics, thus providing all attendees with insightful details. This conference is highly valuable for members of the HPC community who want to understand this massive trend and mainstream HPC. For sponsors, we have an additional goody this year: table-top exhibition space!

ISC Cloud’11 will be held at the Dorint Hotel in Mannheim on September 26–27. The ISC Events team and I look forward to welcoming you.
http://www.isc-events.com/cloud11/

Wolfgang Gentzsch
ISC Cloud General Chair


Eric Nelson (@ericnel) posted on 2/14/2011 Slides and links from [the UK] ISV Windows Azure briefing on the 14th February 2011:

image A big thank you from David, Eric, and Steve to everyone who attended yesterday’s “informal” briefing on the Windows Azure Platform.

Remember – sign up for the LinkedIn group to stay connected, and join Microsoft Platform Ready (read why) to take advantage of the Cloud Essentials Pack (read what it is).

Slides:

Links:


Jim O’Neill reported that it’s Cloudy @ NERD This Week in a 2/13/2011 post:

image There are a few special events coming up this week at the Microsoft Research and Development Center (aka NERD) in Cambridge to which I wanted to call your attention. All three involve cloud computing: two approach the cloud and Windows Azure head on, and the third, on Windows Phone 7 development, includes a segment on how the cloud is involved in developing mobile applications. As usual, all events are free, but please RSVP at the links below.

To the Cloud

Boston Azure Hackathon, February 16th, 4–9 p.m. This is a hands-on event sponsored by the Boston Azure User Group (which also meets monthly at NERD). There are limited spaces available, so please register as soon as possible, and then follow the event on Twitter via the hash tag: #bostonazurehack

Making Windows Azure Cool: Cloud Hackathon, February 17th, 6:30–9:30 p.m. This ‘Hackathon’ is the first meeting of the new Hack-the-Cloud Meetup being organized by Kyle Quest and focusing on hands-on use of various cloud technologies. The topic for this first meeting is using Erlang and Node.js with Azure, but you’ll also have the ability to help shape the group’s charter and suggest future topics.

WebsiteSpark Meetup presents: Windows Phone 7 Development with Silverlight, February 17th, 6–8:30 p.m. At this special meeting of the Boston WebsiteSpark group, Wintellect Senior Consultant John Garland introduces the fundamental concepts needed for attendees to start developing their own applications with Silverlight for the new Windows Phone 7 platform.


The Windows Azure Team recommends in a 2/14/2011 post that you Reserve Your Space For This Friday's Academy Live Session, "Integrating SharePoint and Windows Azure: Why They're Better Together":

image

SharePoint is one of Microsoft's fastest-growing collaboration technologies for the enterprise and Windows Azure is Microsoft's Cloud Services Platform. But how do these two technologies work together? And what solutions can you build and sell by using these technologies together?

image To hear the answers to these questions and more, join Microsoft experts in an Academy Live Session this Friday, February 18, 2011 from 2:00–3:00 PM PST. In this session you will learn how these two technologies complement one another and how you can get started building, deploying and selling your own solutions. Also, find out how Windows Azure and the cloud can enhance what you may already be doing with SharePoint.

This session will feature discussion and customer technical demos and is appropriate for many audiences including, but not limited to, developers and IT staff, and business decision-makers.

Learn more and register for the session here.


Shlomo Swidler described his CloudConnect 2011 Platforms and Ecosystems BOF session in a 2/14/2011 post:

On March 7–10, 2011, I will be in Santa Clara for the CloudConnect conference. There are many reasons you should go too, and I’d like to highlight one session that you should not miss: the Platforms and Ecosystems BOF, on Tuesday, March 8 from 6:00 to 7:30 PM. Read on for a detailed description of this BOF session and why it promises to be worthwhile.

[Full disclosure: I'm the track chair for the Design Patterns track and I'm running the Platforms and Ecosystems BOF. The event organizers are sponsoring my hotel for the conference, and like all conference speakers my admission to the event is covered.]

CloudConnect 2011 promises to be a high-quality conference, as last year’s was. This year you will be able to learn all about design patterns for cloud applications in the Design Patterns track I’m leading. You’ll also be able to hear from an all-star lineup about many aspects of using cloud: cloud economics, cloud security, culture, risks, and governance, data and storage, devops and automation, performance and monitoring, and private clouds.

But I’m most looking forward to the Platforms and Ecosystems BOF because the format of the event promises to foster great discussions.

The BOF Format

The BOF will be conducted as a… well, it’s hard to describe in words only, so here is a picture:

BOF Overview

At three fixed points around the outside of the room will be three topics: Public IaaS, Public PaaS, and Private Cloud Stacks. There will also be three themes which rotate around the room at each interval; these themes are: Workload Portability, Monitoring & Control, and Avoiding Vendor Lock-in. At any one time there will be three discussions taking place, one in each corner, focusing on the particular combination of that theme and topic.

Here is an example of the first set of discussions:

BOF Session 1

And here is the second set of discussions, which will take place after one “turn” of the inner “wheel”:

BOF Session 2

And here are the final set of discussions, after the second turn of the inner wheel:

BOF Session 3

In all, nine discussions are conducted. Here is a single matrix to summarize:

image 

Anatomy of a Discussion

What makes a discussion worthwhile? Interesting questions, focus, and varied opinions. The discussions in the BOF will have all of these elements.

Interesting Questions

The questions above are just “seeder” questions. They are representative questions related to the intersection of each topic and theme, but they are by no means the only questions. Do you have questions you’d like to see discussed? Please, leave a comment below. Better yet, attend the BOF and raise them there, in the appropriate discussion.

Focus

Nothing sucks more than a pointless tangent or a single person monopolizing the floor. Each discussion will be shepherded by a capable moderator, who will keep things focused on the subject area and encourage everyone’s participation.

Varied Opinions

Topic experts and theme experts will participate in every discussion.

Vendors will be present as well – but the moderators will make sure they do not abuse the forum.

And interested parties – such as you! – will be there too. In unconference style, the audience will drive the discussion.

These three elements all together provide the necessary ingredients to make sure things stay interesting.

At the Center

What, you may ask, is at the center of the room? I’m glad you asked. There’ll be beer and refreshments at the center. This is also where you can conduct spin-off discussions if you like.

The Platforms and Ecosystems BOF at CloudConnect is the perfect place to bring your cloud insights, case studies, anecdotes, and questions. It’s a great forum to discuss the ideas you’ve picked up during the day at the conference, in a less formal atmosphere where deep discussion is on the agenda.

Here’s a discount code for 25% off registration: CNXFCC03. I hope to see you there.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jo Maitland (@JoMaitlandTT) posted a Top 10 Cloud Providers of 2011 slideshow to SearchCloudComputing.com on 2/16/2011:

image Spring is here (in San Francisco, at least), and that means it's time to blow the cobwebs off our list of the top 10 cloud computing service providers. Much has happened since last year's top 10, and we've got a new list for 2011. Our rankings are based on customer traction, solid technical innovation and management track record.

For a video version of the list, watch Jo Maitland and Carl Brooks (@eekygeeky) run down the top 10 on their new weekly TV show at CloudCoverTV.com.

image Jo Maitland is Executive Editor for TechTarget’s SearchCloudComputing.com, SearchServerVirtualization.com, SearchVMware.com and SearchVirtualDataCentre.co.uk.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Judith Hurwitz asked HP’s Ambitious Cloud Computing Strategy: Can HP Emerge as a Power? in a 2/15/2011 post:

image To comprehend HP’s cloud computing strategy, you first have to understand HP’s Matrix Blade System. HP announced the Matrix system in April 2009 as a prepackaged fabric-based system. Because Matrix was designed as a packaged environment, it has become the linchpin of HP’s cloud strategy.

image So, what is Matrix? Within this environment, HP has pre-integrated servers, networking, storage, and software (primarily orchestration to customize workflow). In essence, Matrix is a unified computing system that supports both physical blades and virtual configurations. It includes a graphical command center console to manage resource pools, physical and virtual servers, and network connectivity. On the software side, Matrix provides an abstraction layer that supports workload provisioning and workflow-based policy management that can determine where workloads will run. The environment supports the VMware hypervisor, open source KVM, and Microsoft’s Hyper-V.

HP’s strategy is to combine this Matrix system, which it has positioned as its private cloud, with a public compute cloud. In addition, HP is incorporating its lifecycle management software and its security acquisitions as part of its overall cloud strategy. It is leveraging HP Services (formerly EDS) to offer a hosted private cloud and traditional outsourcing as part of an overall plan, and it hopes to leverage its services expertise in running large enterprise packaged software.

There are three components to the HP cloud strategy:

  • CloudSystem
  • Cloud Services Automation
  • Cloud Consulting Services

CloudSystem. What HP calls CloudSystem is, in fact, based on the Matrix blade system. The Matrix Blade System uses a common rack enclosure to support all the blades produced by HP. Matrix is a packaging of what HP calls an operating environment, which includes provisioning software, virtualization, a self-service portal and management tools to manage resource pools. HP considers its public cloud services to be part of the CloudSystem. To provide a hybrid cloud computing environment, HP will offer public compute cloud services similar to what is available from Amazon EC2. When combined with the outsourcing services from HP Services, HP contends that it provides a common architectural framework across public cloud, private cloud, virtualized servers, and outsourcing. It also includes what HP calls cloud maps: configuration templates based on HP’s acquisition of Stratavia, a database and application automation software company.

Cloud Service Automation. The CloudSystem is intended to make use of service automation software called Cloud Service Automation (CSA). The components of CSA include a self-service portal that manages a service catalog; the catalog describes each service that is intended to be used as part of the cloud environment and defines the required service level for it. In addition, CSA can meter the use of services and provide visibility into the performance of each service. A second capability is a cloud controller, based on the orchestration technology from HP’s Opsware acquisition. A third component, the resource manager, provides provisioning and monitoring services. The objective of CSA is to provide end-to-end lifecycle management of the CloudSystem.
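
To make the service catalog idea concrete, here is a purely conceptual sketch of the kind of information a CSA-style catalog entry would carry: the service definition, its required service level, and the data the metering function would record. These types are hypothetical illustrations for this discussion, not part of any HP product API.

// Conceptual sketch only: a generic service-catalog entry of the kind a cloud
// automation layer manages. Hypothetical types, not an HP API.
public class ServiceCatalogEntry
{
    public string Name { get; set; }                 // e.g. "Standard web tier"
    public string Description { get; set; }
    public ServiceLevel RequiredServiceLevel { get; set; }
    public decimal PricePerHour { get; set; }        // basis for metering and chargeback
}

public class ServiceLevel
{
    public double AvailabilityPercent { get; set; }  // e.g. 99.9
    public int MaxProvisioningMinutes { get; set; }  // how quickly the service must be provisioned
}

public class ServiceUsageRecord                      // what the metering function records
{
    public string ServiceName { get; set; }
    public string Consumer { get; set; }
    public double HoursUsed { get; set; }
}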

Cloud Consulting Services. HP is taking advantage of EDS’s experience in managing computing infrastructure as the foundation for its cloud consulting services offerings. HP also leverages its consulting services that were traditionally part of HP as well as services from EDS.  Therefore, HP has deep experience in designing and running Cloud seminars and strategy engagements for customers.

From HP’s perspective, it is taking a hybrid approach to cloud computing. What does HP mean by hybrid? Basically, HP’s hybrid strategy combines the CloudSystem (a hardware-based private cloud), its own public compute services, and traditional outsourcing.

The Bottom Line. Making the transition to becoming a major cloud computing vendor is complicated. The market is young and still in transition. HP has many interesting building blocks that have the potential to make it an important player. Leveraging the Matrix Blade System is a pragmatic move since it is already an integrated and highly abstracted platform. However, HP will have to provide more services that increase the ability of its customers to use the CloudSystem to create an elastic and flexible computing platform. Cloud Service Automation is a good start but still requires more evolution; for example, it needs to add more capabilities to its service catalog, and leveraging its Systinet registry/repository as part of that catalog would be advisable. I also think that HP needs to package its security offerings to be cloud-specific, both in the governance and compliance area and in identity management.

Just how much HP plans to compete in the public cloud space is uncertain. Can HP be effective in both markets? Does it need to combine its offerings or create two different business models?

It is clear that HP wants to make cloud computing the cornerstone of its “Instant-On Enterprise” strategy announced last year. In essence, Instant-on Enterprise is intended to make it easier for customers to consume data center capabilities including infrastructure, applications, and services.  This is a good vision in keeping with what customers need.  And plainly cloud computing is an essential ingredient in achieving this ambitious strategy.


Jeff Barr (@jeffbarr) announced AWS Identity and Access Management Users Can Now Log in to the AWS Management Console in a 2/14/2011 post to the Amazon Web Services blog:

image The AWS Management Console now recognizes Users created via AWS Identity and Access Management (IAM). IAM users can now log in to the console and manage resources within an AWS account. IAM Users can be assigned individual Multi-Factor Authentication (MFA) devices to provide additional security when they access the console. IAM can also be used to give permission for a particular User to access resources, services, and APIs.

image Here's a quick recap of the major features of IAM:

  • Create User Identities - Add Users (unique identities that can interact with AWS services) to your AWS account. A User can be an individual, a system, or an application with a need to access AWS services.
  • Assign and Manage Security Credentials - Assign security credentials such as access keys to each User, with the ability to rotate or revoke these credentials as needed.
  • Organize Users in Groups - Create IAM Groups to simplify the management of permissions for multiple Users.
  • Centrally Control User Access - Control the operations that each User can perform, including access to APIs for specific AWS Services and resources.
  • Add Conditions to Permissions - Use conditions such as time of day, source IP address, or protocol (e.g. SSL) to control how and when a User can access AWS.
  • View a Single AWS Bill - Receive a single bill which represents the activity of all of the Users within a single AWS account.

Put it all together and what's the result? It is now much easier for multiple people to securely share access to an AWS account. This should be of interest to everyone -- individual developers, small companies, and large enterprises. I am currently setting up individual IAM Users for each of my own AWS applications.
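
To make this concrete, here is a minimal sketch of how an account owner might create an IAM User, give that User a console password, and attach a permission policy with a condition, along the lines of the features listed above. It assumes the AWS SDK for .NET; treat the exact client and request type names as assumptions to verify against the SDK documentation, and note that the user name, password, bucket, and policy name are hypothetical examples.

// Hedged sketch: create an IAM User who can sign in to the AWS Management Console
// and work with a single (hypothetical) S3 bucket only over SSL.
// Assumes the AWS SDK for .NET; verify type and method names against the SDK docs.
using Amazon.IdentityManagement;
using Amazon.IdentityManagement.Model;

class IamUserSetup
{
    static void Main()
    {
        var iam = new AmazonIdentityManagementServiceClient("ACCESS_KEY", "SECRET_KEY");

        // 1. Create the User identity (maps to the CreateUser API action).
        iam.CreateUser(new CreateUserRequest { UserName = "alice" });

        // 2. Give the User a password so she can sign in to the console
        //    (maps to the CreateLoginProfile API action).
        iam.CreateLoginProfile(new CreateLoginProfileRequest
        {
            UserName = "alice",
            Password = "initial-password-change-me"
        });

        // 3. Attach an inline policy. The Condition block illustrates the
        //    "Add Conditions to Permissions" feature (here: require SSL).
        const string policy = @"{
          ""Statement"": [{
            ""Effect"": ""Allow"",
            ""Action"": ""s3:*"",
            ""Resource"": ""arn:aws:s3:::example-bucket/*"",
            ""Condition"": { ""Bool"": { ""aws:SecureTransport"": ""true"" } }
          }]
        }";

        iam.PutUserPolicy(new PutUserPolicyRequest
        {
            UserName = "alice",
            PolicyName = "example-s3-ssl-only",
            PolicyDocument = policy
        });
    }
}

With the login profile in place, “alice” signs in to the console with her own user name and password rather than the AWS account’s root credentials, and the attached policy bounds what she can do once she is there.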

IAM is a really powerful feature and I'll have a lot more to say about it over the next couple of weeks. I've got the following blog posts in the pipeline:

  • A more detailed introduction to IAM.
  • A step-by-step guide to using the IAM CLI to enable sharing of a limited set of files within an Amazon S3 bucket.
  • A walkthrough to show you how IAM Users can access the AWS Management Console.
  • A walkthrough on the use of the AWS Access Policy Language for more advanced/conditional control of permissions.

Let me know if you'd like me to cover any other topics and I'll do my best to oblige. In the meantime, check out the IAM Getting Started Guide, the IAM API Reference, and the IAM Quick Reference Card (there's even more documentation here). Also, don’t forget to refer to my previous blog post on the AWS Policy Generator for help creating policies that control permissions for your users.

A number of applications and development tools already include support for IAM. Here's what I know about (leave a comment if you know of any others):

The AWS Identity and Access team is hiring, so let us know if you’re interested in joining the team:


NephoScale is a new Silicon Valley entrant in the IaaS cloud-provisioning rat race that offers low-priced Linux and Windows shared servers, as well as dedicated servers. An annual membership fee entitles users to a 50% discount on hourly cloud server costs:

You can create cloud servers by choosing from several different Linux and Windows operating systems using our pre-packaged and up-to-date server images. You can load your own applications and set up your own security policies – after that you are off and running. If you want to set up several servers at one time, you can do this by making a single API call using our CloudScript.

Details:

  • On-demand Windows and Linux cloud servers created in minutes
  • Control via web interface or by API
  • Full root/administrator access
  • On/off and reboot control
  • Share private network with your NephoScale On-Demand Dedicated Servers and NephoScale Object-Based Cloud Storage
  • 99.95% uptime service agreement
  • Use by the hour, cancel at any time
  • Usage based, pay only for what you use
  • Low priced Membership plan available

Linux Cloud Server Images

  • CentOS 5.5 32-bit – base image
  • CentOS 5.5 64-bit – base image
  • Debian 5.0.5 32-bit – base image
  • Debian 5.0.5 64-bit – base image
  • Ubuntu Server 10.04 32-bit – base image
  • Ubuntu Server 10.04 64-bit – base image

Microsoft Cloud Server Images
  • Windows Server 2008 Std. 64-bit – base image

More Images Coming Soon!

Cloud Server Types

You can instantly deploy any of the following cloud servers:

image

*8GB and 16GB cloud servers are only supported with 64-bit cloud server images.

Cloud Server Pricing

There are two pricing options with NephoScale Cloud Servers.

Each cloud server is billed by the hour as listed under the “Hourly Rate” column. There are no long-term commitments.

You also have the option to sign up any individual cloud server under an annual membership. By making the low one-time payment listed under the “Annual Membership Fee” column, you may run your cloud server at the discounted “Membership Hourly Rate” for an entire year. Each annual membership is tied to a particular cloud server type and applies to only one cloud server at a time. Hence, if you provision a cloud server with an annual membership, delete that cloud server, and later decide to provision another cloud server of the same type, you will still enjoy the discounted hourly rate for that new cloud server. There are no limits on how many annual memberships you can purchase.

image

*These rates apply only to cloud server pricing and do not include any software licenses that may apply.
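
As a rough back-of-the-envelope check on the membership pricing described above, the break-even point is simply the one-time fee divided by the per-hour saving. The rates below are purely hypothetical placeholders, since NephoScale’s actual prices appear in the pricing table image; plug in the real numbers for your server type.

// Hypothetical break-even calculation for a NephoScale annual membership.
// The rates are made-up placeholders; substitute the real values from the pricing table.
class MembershipBreakEven
{
    static void Main()
    {
        const decimal annualFee = 50m;       // hypothetical one-time membership fee
        const decimal hourlyRate = 0.06m;    // hypothetical standard hourly rate
        const decimal memberRate = 0.03m;    // hypothetical 50%-discounted member rate

        decimal breakEvenHours = annualFee / (hourlyRate - memberRate);
        System.Console.WriteLine(
            "Membership pays for itself after {0:N0} hours (~{1:N0} days of continuous use).",
            breakEvenHours, breakEvenHours / 24);
    }
}

With these made-up numbers, a server that runs around the clock recoups the fee in a little over two months; a server used only a few hours a week may never reach break-even, so the standard hourly rate would be the better choice.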


<Return to section navigation list> 
