Tuesday, February 08, 2011

Windows Azure and Cloud Computing Posts for 2/7/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Adron Hall (@adronbh) explained how to Put Stuff in Your Windows Azure Junk Trunk – Repository Base in a 2/7/2011 post, the first of a three-part series:

Alright, so the title is rather stupid, but hey, it’s fun!  :)

This project I set up to provide some basic functionality with Windows Azure Storage.  I wanted to use each of the three mediums: Table, Blob, and Queue, and this example will cover each of these things.  The application will upload and store images, provide a listing, some worker processing, and deletion of the images & associated metadata.  This entry is part 1 of this series, with the following schedule for subsequent entries:

  • Part 1:  Today (this entry)
  • Part 2:  The Junk Trunk ASP.NET MVC 2 Web Application (Publishing on February 10th)
  • Part 3:  Windows Azure Worker Role and Storage Queue (Publishing on February 11th)

Title aside, schedule laid out, description of the project completed, I’ll dive right in!

Putting Stuff in Your Junk Trunk

Create a new Windows Azure Project called PutJunkInIt.  (Click any screenshot for the full size, and also note some of the text may be off – I had to recreate a number of these images)

Windows Azure PutJunkInIt

Next select the ASP.NET MVC 2 Web Application and also a Worker Role and name the projects JunkTrunk and JunkTrunk.WorkerRole.

Choosing Windows Azure Projects

In the next dialog choose to create the unit test project and click OK.

Create Unit Test Project

After the project is created, the following projects are set up within the PutJunkInIt Solution.  There should be a JunkTrunk, JunkTrunk.WorkerRole, JunkTrunk Windows Azure Deployment Project, and a JunkTrunk.Tests Project.

Solution Explorer

Next add a Windows Class Library Project and title it JunkTrunk.Storage.

Windows Class Library

Add a reference to the Microsoft.WindowsAzure.ServiceRuntime and Microsoft.WindowsAzure.StorageClient assemblies to the JunkTrunk.Storage Project.  Rename the Class1.cs file and class to JunkTrunkBase.  Now open up the JunkTrunkBase.cs file in the JunkTrunk.Storage Project.  First add the following fields and constructor to the class.

public const string QueueName = "metadataqueue";
public const string BlobContainerName = "photos";
public const string TableName = "MetaData";
static JunkTrunkBase()
{
    CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
        RoleEnvironment.Changed
            += (sender, arg) =>
                    {
                        if (!arg.Changes.OfType<RoleEnvironmentConfigurationSettingChange>()
                                .Any(change => (change.ConfigurationSettingName == configName)))
                            return;
                        if (!configSetter(RoleEnvironment.GetConfigurationSettingValue(configName)))
                        {
                            RoleEnvironment.RequestRecycle();
                        }
                    };
    });
}

After that add the following blob container and reference methods.

protected static CloudBlobContainer Blob
{
    get { return BlobClient.GetContainerReference(BlobContainerName); }
}
private static CloudBlobClient BlobClient
{
    get
    {
        return Account.CreateCloudBlobClient();
    }
}

Now add code for the table & queue client and reference methods.

protected static CloudQueue Queue
{
    get { return QueueClient.GetQueueReference(QueueName); }
}
private static CloudQueueClient QueueClient
{
    get { return Account.CreateCloudQueueClient(); }
}
protected static CloudTableClient Table
{
    get { return Account.CreateCloudTableClient(); }
}
protected static CloudStorageAccount Account
{
    get
    {
        return
            CloudStorageAccount
            .FromConfigurationSetting("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString");
    }
}

This class now provides the basic underpinnings needed to retrieve the appropriate information from the configuration.  This base class can then provide that connection information to connect to the table, queue, or blob mediums.

Next step is to create some initialization code to get the containers created if they don’t exist in Windows Azure.  Add a new class file to the PutJunkInIt Project.

JunkTrunkSetup

public class JunkTrunkSetup : JunkTrunkBase
{
    public static void CreateContainersQueuesTables()
    {
        Blob.CreateIfNotExist();
        Queue.CreateIfNotExist();
        Table.CreateTableIfNotExist(TableName);
    }
}

Next add the System.Data.Services.Client Assembly to the project.  After adding the assembly add two new classes and name them BlobMeta.cs and Table.cs. Add the following code to the Table.cs Class.

public class Table
{
    public static string PartitionKey;
}

Next add another class file and name it BlobMetaContext.cs and add the following code.

public class BlobMetaContext : TableServiceContext
{
    public BlobMetaContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials)
    {
        IgnoreResourceNotFoundException = true;
    }
    public IQueryable<BlobMeta> Data
    {
        get { return CreateQuery<BlobMeta>(JunkTrunkBase.TableName); }
    }
    public void Add(BlobMeta data)
    {
        data.RowKey = data.RowKey.Replace("/", "_");
        BlobMeta original = (from e in Data
                                where e.RowKey == data.RowKey
                                    && e.PartitionKey == Table.PartitionKey
                                select e).FirstOrDefault();
        if (original != null)
        {
            Update(original, data);
        }
        else
        {
            AddObject(JunkTrunkBase.TableName, data);
        }
        SaveChanges();
    }
    public void Update(BlobMeta original, BlobMeta data)
    {
        original.Date = data.Date;
        original.ResourceUri = data.ResourceUri;
        UpdateObject(original);
        SaveChanges();
    }
}

Now add the following code to the BlobMeta Class.

public class BlobMeta : TableServiceEntity
{
    public BlobMeta()
    {
        PartitionKey = Table.PartitionKey;
    }
    public DateTime Date { get; set; }
    public string ResourceUri { get; set; }
}

At this point, everything should build. Give it a go to be sure nothing got keyed in wrong (or copied in wrong). Once assured the build is still solid, add the Blob.cs Class to the project.

public class Blob : JunkTrunkBase
{
    public static string PutBlob(Stream stream, string fileName)
    {
        var blobRef = Blob.GetBlobReference(fileName);
        blobRef.UploadFromStream(stream);
        return blobRef.Uri.ToString();
    }
    public static Stream GetBlob(string blobAddress)
    {
        var stream = new MemoryStream();
        Blob.GetBlobReference(blobAddress)
            .DownloadToStream(stream);
        return stream;
    }
    public static Dictionary<string, string> GetBlobList()
    {
        var blobs = Blob.ListBlobs();
        var blobDictionary =
            blobs.ToDictionary(
                listBlobItem => listBlobItem.Uri.ToString(),
                listBlobItem => listBlobItem.Uri.ToString());
        return blobDictionary;
    }
    public static void DeleteBlob(string blobAddress)
    {
        Blob.GetBlobReference(blobAddress).DeleteIfExists();
    }
}

After that finalize the Table Class with the following changes and additions.

public class Table : JunkTrunkBase
{
    public const string PartitionKey = "BlobMeta";
    public static void Add(BlobMeta data)
    {
        Context.Add(data);
    }
    public static BlobMeta GetMetaData(Guid key)
    {
        return (from e in Context.Data
                where e.RowKey == key.ToString() &&
                e.PartitionKey == PartitionKey
                select e).SingleOrDefault();
    }
    public static void DeleteMetaDataAndBlob(Guid key)
    {
        var ctxt = new BlobMetaContext(
            Account.TableEndpoint.AbsoluteUri,
            Account.Credentials);
        var entity = (from e in ctxt.Data
                        where e.RowKey == key.ToString() &&
                        e.PartitionKey == PartitionKey
                        select e).SingleOrDefault();
        ctxt.DeleteObject(entity);
        Repository.Blob.DeleteBlob(entity.ResourceUri);
        ctxt.SaveChanges();
    }
    public static List<BlobMeta> GetAll()
    {
        return (from e in Context.Data
                select e).ToList();
    }
    public static BlobMetaContext Context
    {
        get
        {
            return new BlobMetaContext(
                Account.TableEndpoint.AbsoluteUri,
                Account.Credentials);
        }
    }
}

The final file to add is the Queue.cs Class File. Add that and then add the following code to the class.

public class Queue : JunkTrunkBase
{
    public static void Add(CloudQueueMessage msg)
    {
        Queue.AddMessage(msg);
    }
    public static CloudQueueMessage GetNextMessage()
    {
        return Queue.PeekMessage() != null ? Queue.GetMessage() : null;
    }
    public static List<CloudQueueMessage> GetAllMessages()
    {
        var count = Queue.RetrieveApproximateMessageCount();
        return Queue.GetMessages(count).ToList();
    }
    public static void DeleteMessage(CloudQueueMessage msg)
    {
        Queue.DeleteMessage(msg);
    }
}

This now gives us a fully functional class that utilizes the Windows Azure SDK. In Part 2 I’ll start building on top of that using the ASP.NET MVC 2 Web Project. Part 2 will be published tomorrow, so stay tuned.
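To see how these pieces fit together before Part 2 arrives, here is a minimal console-style smoke test (my addition, not part of Adron’s post). The local file path, the console harness, and the using directive for the project’s storage namespace are assumptions; the method names come straight from the classes above.

// Hypothetical smoke test for the JunkTrunk.Storage classes defined above.
// Assumes a using directive for the project's storage namespace and a
// resolvable storage connection string for JunkTrunkBase.
using System;
using System.IO;
using Microsoft.WindowsAzure.StorageClient;

class JunkTrunkSmokeTest
{
    static void Main()
    {
        // Create the container, queue, and table if they don't already exist.
        JunkTrunkSetup.CreateContainersQueuesTables();

        // Upload a blob and record its metadata in table storage.
        string blobUri;
        using (var stream = File.OpenRead(@"C:\temp\sample.jpg"))
        {
            blobUri = Blob.PutBlob(stream, "sample.jpg");
        }

        Table.Add(new BlobMeta
        {
            RowKey = Guid.NewGuid().ToString(),
            Date = DateTime.UtcNow,
            ResourceUri = blobUri
        });

        // Drop a message on the queue for the worker role (Part 3) to pick up.
        Queue.Add(new CloudQueueMessage(blobUri));

        // List what is in the trunk so far.
        foreach (var uri in Blob.GetBlobList().Keys)
        {
            Console.WriteLine(uri);
        }
    }
}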


<Return to section navigation list> 

SQL Azure Database and Reporting

The SQL Azure Team reported Temenos Moves Banks To the Cloud With T24 on Azure in a 2/7/2011 post:

Temenos, the global provider of banking software, today announced the availability of TEMENOS T24 (T24) on the Windows Azure Platform.  This announcement makes T24 the first core banking system to go into production with customers on Windows Azure and SQL Azure.

The combination of T24 and the Windows Azure Platform, including SQL Azure, allows banks to move operations to a consumption based pricing model and enables them to scale resources effectively and increase volume according to customer demand.

As part of this launch, Temenos has embarked on its first cloud migration to move a network of 12 Mexican financial institutions from a traditional hosted environment onto the Windows Azure Platform.  Five of these - Sofol Tepeyac, Grupo Agrifin, Findeca, Soficam and C.Capital Global - are in the first phase of migration, which is currently underway and is expected to be completed by Q2 2011.

To learn more about this announcement, you can read the press release here.



<Return to section navigation list> 

MarketPlace DataMarket and OData

Pete Warden (@petewarden) hypothesized “Data acquisition for a site like CrunchBase may not carry the costs some assume” in the answer to his Will data be too cheap to meter? post of 2/8/2011 to the O’Reilly Radar blog:

Last week at Strata I got into an argument with a journalist over the future of CrunchBase. His position was that we were just in a "pre-commercial" world, that creating the database required a reporter's time, and so after the current aberration had passed we'd return to the old status quo where this kind of information was only available through paid services. I wasn't so sure.

When I explain to people why the Big Data movement is important — why it's a real change instead of a fad — I point to price as the fundamental difference between the old and new worlds. Until a few years ago, the state of the art for doing meaningful analysis of multi-gigabyte data sets was the data warehouse. These custom systems were very capable, but could easily cost millions of dollars. Today I can hire a hundred machine Hadoop cluster from Amazon for just $10 an hour, and process thousands of gigabytes a day.

This represents a massive discontinuity in price, and it's why Big Data is so disruptive. Suddenly we can imagine a group of kids in their garage building Google-scale systems practically on pocket money. While the drop in the cost of data storage and transmission has been less dramatic, it has followed a steady downward trend over the decades. Now that processing has become cheap too, a whole universe of poverty-stricken hackers, academics, makers, reporters, and startups can do interesting things with massive data sets.

What does this have to do with CrunchBase? The reporter had some implicit assumptions about the cost of the data collection process. He argued that it required extra effort from the journalists to create the additional value captured in the database. To paraphrase him: "It's time they'd rather spend at home playing with their kids, and so we'll end up compensating them for their work if we want them to continue producing it." What I felt was missing from this is that CrunchBase might actually be just a side-effect of work they'd be doing even if it wasn't released for public consumption.

Many news organizations are taking advantage of the dropping cost of data handling by heavily automating their news-gathering and publishing workflows. This can be as simple as Google Alerts or large collections of RSS feeds to scan, using scraping tools to gather public web data, and there's a myriad of other information-processing techniques out there. Internally there's a need to keep track of the results of manual or automated research, and so the most advanced organizations are using some kind of structured database to capture the information for future use.

That means that the only extra effort required to release something like CrunchBase is publishing it to the web. Assuming that there are some benefits to doing so (that TechCrunch's reputation as the site-of-record for technology company news is enhanced, for example) and that there are multiple companies with the data available, then the low cost of the release will mean it makes sense to give it away.

I actually don't know if all these assumptions are true (CrunchBase's approach may not be sustainable), but I hope it illustrates how a truly radical change in price can upset the traditional rules. Even on a competitive, commercial, free-market playing field it sometimes makes sense to behave in ways that appear hopelessly altruistic. We've seen this play out with open-source software. I expect to see pricing forces do something similar to open up more and more sources of data.

I'm usually the contrarian guy in the room arguing that information wants to be paid, so I don't actually believe (as Lewis Strauss famously said about electricity) all data will be too cheap to meter. Instead I'm hoping we'll head toward a world where producers of information are paid for adding real value. Too many "premium" data sets are collated and merged from other computerized sources, and that process should be increasingly automatic, and so increasingly cheap. Give me a raw CrunchBase culled from press releases and filings for free, then charge me for your informed opinion on how likely the companies are to pay their bills if I extend them credit. Just as free, open-source software has served as the foundation for some very lucrative businesses, the new world of free public data will trigger a flood of innovations that will end up generating value in ways we can't foresee, and that we'll be happy to pay for.


Glenn Gailey (@ggailey777) reported Trying to Avoid Buffering the Entire BLOB in a WCF Data Service Client? Well You Can’t on 2/7/2011:

I wrote a series of blog posts on the WCF Data Services team blog that walks you through the (fairly complicated) process of accessing BLOB data (such as media files, documents, anything too big to return in the feed) from an OData service as a stream. My second post in this series showed how to use the WCF Data Services client to access and change BLOB data as a stream, in this case the Photo Streaming Data Service Sample.

In the process of creating this sample, I was dismayed to discover that the client was buffering the entire BLOB stream before sending it to the data service in a POST request. As you know from reading those posts, one of the big benefits of using streaming is to avoid having to buffer an entire BLOB in memory. The solution to this seemed obvious--I would just set AllowWriteStreamBuffering = false in the HttpWebRequest from the client (which I can get ahold of by handling the SendingRequest event). Ah, but the WCF Data Services client outsmarted me yet again. Apparently, the client sets properties on HttpWebRequest after the SendingRequest event fires, and it sets AllowWriteStreamBuffering = true on every request (grrrrrr).
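For readers who haven’t seen the pattern Glenn is describing, here is a hedged sketch of the attempted workaround (the PhotoClient wrapper and its name are hypothetical; SendingRequest and AllowWriteStreamBuffering are the real APIs). The point of his post is that the assignment in the event handler has no lasting effect, because the client resets the property after the event fires.

// Hypothetical illustration of the workaround Glenn describes (which does not
// work in the current release): handle SendingRequest on the DataServiceContext
// and try to turn off write-stream buffering on the underlying HttpWebRequest.
using System.Data.Services.Client;
using System.Net;

public class PhotoClient
{
    private readonly DataServiceContext _context;

    public PhotoClient(DataServiceContext context)
    {
        _context = context;
        _context.SendingRequest += OnSendingRequest;
    }

    private static void OnSendingRequest(object sender, SendingRequestEventArgs e)
    {
        var request = e.Request as HttpWebRequest;
        if (request != null)
        {
            // The client sets this back to true after the event fires,
            // which is the behavior Glenn filed as a bug.
            request.AllowWriteStreamBuffering = false;
        }
    }
}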

I filed this little nasty as a bug with the product team, and it looks like it is being fixed for the next release (huzzah!).

Additional reading on streaming and OData:



<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Itai Raz continues his series with Introduction to Windows Azure AppFabric blog posts series – Part 2: The Middleware Services of 2/7/2011:

In the previous post we discussed the 3 main concepts that make up Windows Azure AppFabric:

1. Middleware Services - pre-built services that provide valuable capabilities developers can use when developing applications. This reduces the time and complexity when building the application, and allows the developer to concentrate on the core application logic.

2. Building Composite Applications - capabilities that enable you to assemble, deploy, and manage a composite application that is made up of several different components, as a single logical entity.

3. Scale-out Application Infrastructure - capabilities that make it seamless to get the benefit of the cloud, such as: elastic scale, high availability, high density, multi-tenancy, etc.

In this post we will start discussing the first concept, the pre-built services we provide as part of the Middleware Services.

The Middleware Services include 5 services:

1. Service Bus - provides secure connectivity and messaging

2. Access Control - provides identity and access control capabilities to web applications and services

3. Caching - provides a distributed, in-memory application cache

4. Integration - provides common integration and business user enablement capabilities

5. Composite App - enables building applications that are made up of a composite of services, components, web services, workflows, and existing applications

These services are open and interoperable across languages (.NET, Java, Ruby, PHP...) and give developers a powerful pre-built "class library" for next-generation cloud applications. Developers can use each of these as stand-alone services, or combine them as part of a broader composite solution.

Focus on Service Bus and Access Control

In this post we cover the Service Bus and the Access Control services which are already available as production services with full Service Level Agreements (SLA). The Caching service is available as a Community Technology Preview (CTP), and the Integration and Composite App services will also be released in CTPs later in 2011.

Service Bus

The Service Bus provides a solution for connectivity and messaging.  Connectivity is about enabling connections between different components and services. Once you are able to connect the different components, Messaging enables you to support different messaging protocols and patterns.

The Service Bus facilitates the construction of distributed and disconnected applications in the cloud, as well as hybrid applications composed of both on-premises and cloud services. In addition, the Service Bus provides these capabilities in a secure manner. You do not have to open your firewall or install anything inside your network. It also removes the need for the developer to worry about delivery assurance, reliable messaging and scale.
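As a rough illustration of that programming model (this sketch is mine, not Itai’s), relaying an existing WCF service through the Service Bus amounts to adding a relay binding and an endpoint behavior with the namespace credentials. The API names follow the AppFabric SDK V1.0 samples; the service namespace, issuer name, and key below are placeholders.

// Hedged sketch: hosting a WCF endpoint on the AppFabric Service Bus relay.
// "mynamespace", "owner", and the issuer key are placeholder credentials.
using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IEchoContract
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoContract
{
    public string Echo(string text) { return text; }
}

class RelayHostSketch
{
    static void Main()
    {
        // Address of the endpoint as projected onto the Service Bus.
        Uri address = ServiceBusEnvironment.CreateServiceUri(
            "sb", "mynamespace", "EchoService");

        // Shared-secret credentials issued for the service namespace.
        var credentials = new TransportClientEndpointBehavior
        {
            CredentialType = TransportClientCredentialType.SharedSecret
        };
        credentials.Credentials.SharedSecret.IssuerName = "owner";
        credentials.Credentials.SharedSecret.IssuerSecret = "base64IssuerKeyGoesHere";

        var host = new ServiceHost(typeof(EchoService));
        var endpoint = host.AddServiceEndpoint(
            typeof(IEchoContract), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(credentials);

        host.Open();
        Console.WriteLine("Listening on " + address);
        Console.ReadLine();
        host.Close();
    }
}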

The image below illustrates the central role of the Service Bus as it connects hybrid cloud and on-premises environments using different protocols and patterns, as well as initiating direct connections between two different on-premises endpoints for increased efficiency:

As noted earlier, the Service Bus is a production service, supported by a full SLA. But this is a good opportunity to mention our LABS/Preview environment.

The LABS/Preview Environment: Feedback Wanted

In addition to the production services that are supported by the full SLA, the LABS/Preview  environment gives you a preview of the enhancements and capabilities we are planning to add to the production services in the near future. We want to get your feedback through this environment. Make sure to provide us with feedback on our CTPs in the Windows Azure AppFabric CTP Forum.

It is important to note that the services in this environment are free to use and have no SLA.

For instance, in the latest CTP release of Service Bus we are showcasing improvements and enhancements that will be released in the next update of the production service, such as: Durable Messaging, Load Balancing, and improved management. You can find more details here.

The next update to the production service, which will include these capabilities, will be in a few months.

For a more in-depth and technical overview of the Service Bus, please use the following resources:

Access Control

The Access Control service provides identity and access control solutions for web applications and services. The service provides integration with standards-based identity providers, including enterprise directories such as Active Directory, and web identities such as Windows Live ID, Google, Yahoo! and Facebook.

Instead of the user having to create a new username and password for each application, and instead of the developer having to write different sets of code to support all these different identities, the service enables the developer to write the code once to work with the Access Control service, and the service takes care of federating with all these different identities.
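To make that concrete (again, a sketch of mine rather than part of Itai’s post), once ACS and Windows Identity Foundation are configured for a web application, the application code only sees a normalized set of claims, whichever identity provider issued the token. The page class below and its claim handling are assumptions based on the WIF object model; the claim-type URI is the standard e-mail address claim.

// Hedged sketch: reading claims in an ASP.NET page protected by ACS + WIF.
using System;
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public partial class ProfilePage : System.Web.UI.Page
{
    private const string EmailClaim =
        "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress";

    protected void Page_Load(object sender, EventArgs e)
    {
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null || !identity.IsAuthenticated)
        {
            return; // WIF redirects unauthenticated users to ACS / the identity provider.
        }

        // The same code runs whether the token came from Windows Live ID, Google,
        // Yahoo!, Facebook, or ADFS - ACS normalizes them into one set of claims.
        string email = identity.Claims
            .Where(c => c.ClaimType == EmailClaim)
            .Select(c => c.Value)
            .FirstOrDefault();

        Response.Write(Server.HtmlEncode("Signed in as: " + (email ?? identity.Name)));
    }
}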

Identity and access control are complicated to implement in any application, but if you want to support various types of identities across both on-premises and cloud it becomes even more complicated. The image below illustrates how the Access Control service abstracts all this logic from the application itself, allowing the developer to focus on the application logic:

Like the Service Bus, the Access Control is a production service supported by an SLA.

In the latest CTP release of the Access Control service we added powerful capabilities such as: integration with Windows Identity Foundation (WIF), out-of-the-box support for Windows Live ID, Google ID, Yahoo! and Facebook, out-of-the-box support for ADFS v2.0, etc. You can find more details here.

The next update to the production Access Control service that will include all the enhancements will also be in a few months.

For a more in-depth and technical overview of the Access Control service, please use the following resources:

If you want to check out our CTP services in the LABS/Preview environment and get a preview of what is planned to be released in the near future, just visit https://portal.appfabriclabs.com/, sign up and get started.

As illuminated in this post, the Windows Azure AppFabric Middleware Services help make the work of a developer a lot easier by providing ready-to-use services that solve complicated problems. 

Tune in to the next post in this series to learn about the other Middleware Services and how they also help solve the challenges faced by developers.

Other places to learn more on Windows Azure AppFabric are:

If you still haven't already taken advantage of our free trial offer, click on the image below and start using Windows Azure AppFabric already today!


Manu Cohen-Yashar reported Azure AppFabric Cache can't work together with Windows Appfabric on 2/6/2011:

Azure AppFabric’s distributed cache is very important for managing state in distributed applications (I will talk about it in detail in my next post), but it has a little unpleasant surprise.

It does not work side by side with Windows Server AppFabric !

If you previously installed Windows Server AppFabric, you must uninstall it before installing the AppFabric CTP SDK. These on-premise and cloud versions of AppFabric are not compatible at this time.

When both are installed, the Windows Azure AppFabric SDK will use the Windows Server AppFabric Microsoft.ApplicationServer.Caching.xxx assemblies installed in the GAC instead of the original SDK assemblies.

The configuration will not be parsed correctly. The API will throw MethodNotFound exceptions… nothing works.
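For context (this snippet is mine, not Manu’s), the API surface in question is the DataCacheFactory/DataCache client in Microsoft.ApplicationServer.Caching; here is a minimal sketch of the kind of calls where those MethodNotFound exceptions surface when the wrong assemblies are resolved from the GAC. The cache client configuration (endpoint and authentication token) is assumed to be in app.config or web.config.

// Hedged sketch of the caching API affected by the assembly conflict.
using System;
using Microsoft.ApplicationServer.Caching;

class CacheSketch
{
    static void Main()
    {
        // Reads the dataCacheClient section from configuration.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // With the on-premise assemblies loaded from the GAC, calls like these
        // are where the failures show up.
        cache.Put("greeting", "Hello from the Azure AppFabric cache");
        Console.WriteLine(cache.Get("greeting"));
    }
}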

I was really unhappy to uninstall Windows Server AppFabric. I hope in the near future both will be able to live in peace together on my machine.


Eugenio Pace (@eugenio_pace) reported about Our next project – Claims based Identity and Access Control in a 2/4/2011 post:

Not surprisingly maybe, security in general, and authentication & authorization in particular, is a consistently highly rated concern for our customers. These concerns are especially elevated with those considering the cloud, because they don’t have as much control over the cloud as they would typically have in their own datacenters. Sometimes, one could argue, for their own benefit, but that is a different discussion.

The “Claims Identity Guide,” published in December 2009, was a foundational component in our “Cloud series” that followed it: Moving Applications to the Cloud, Developing Applications for the Cloud and the recently released Windows Phone 7 Developer Guide. The identity content in all of them is essentially based on the core scenarios and design principles described in the claims guide.

With the Claims guide we also pioneered a new style and design in our books, and it was very well received! We’ve got some great feedback from you on the content and the approach. Exciting things are happening in the identity space and we want to continue to help you create great solutions using these new components.

Our next project then is an extension to this guide that will address two new areas:

    1- Access Control Service (ACS) V2, in the Windows Azure Platform, will be available in production soon. ACS opens the doors to advanced identity management scenarios including federation, interop with popular identity standards such as OpenId, OAuth, SWT and SAML, and use of popular social identity providers such as Facebook, Windows Live ID and Google. All of this is available today in labs.

    2- SharePoint 2010 is “claims enabled”, meaning that it natively supports advanced identity management based on WS-Federation.

Interestingly (or not maybe), the core scenarios remain the same but the implementation details change and new interesting things can now be done much more easily. More or less our scope now looks like this:


The “blue” line is the existing content, “green” and “black” are the new chapters. Notice that they almost mirror what’s covered today. News and updates (including drafts, early samples, etc) will be published on http://claimsid.codeplex.com

As usual, we welcome feedback very much!

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Wesy reported the availability of a New Azure Case Study: Public Transit Data Community in a 2/7/2011 post to the US ISV Evangelism blog:

What is PTDC?  It’s a transit data aggregation platform, based on Windows Azure, that collects and hosts all kinds of heterogeneous transit data (even real-time data) and then provides developer APIs to the data and other value-added services (like route planning).  Think of it as a “transit data cloud” that metros/towns/cities can use to provide transit data to developers and consumers.

PTDC is a cloud-based SaaS solution for publishing mass transit data on the web in a variety of open formats for consumption by people and applications. The software and application programming interface (API) developed by EastBanc runs in Windows Azure and allows developers to easily integrate transit data sets from PTDC into original software applications that can be served to the public through the cloud for consumption on personal computers or mobile devices.

Here are some quotes:

“WMATA’s legacy trip-planning tools were not as effective because they only processed data from WMATA’s own transit system, and they couldn’t integrate data from outside operators such as Virginia Railway Express [VRE] or local bus lines, for example,” Conforti says.

“WMATA is in the business of running a transportation network,” Popov explains. “They’re not in the business of hosting data and fostering a developer community. PTDC enables transportation authorities to take advantage of a common service for publishing their data, so they can focus on their core business.”

The case study on the Public Transit Data Community developed by EastBanc Technologies has just been published to Microsoft.com. Here is the URL: http://www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000009148

This case study addresses the needs of municipal transit agencies that struggle to provide citizens with a comprehensive multiagency and inter-jurisdictional service on bus, train, and tram schedules, as well as actual status of routes. EastBanc Technologies used the Windows Azure platform to develop the Public Transit Data Community, a cloud-based software-as-a-service solution that independent developers can use to build special applications for transit services. The benefits of the system include:

  • Integrated transit schedules
  • Rapid development platform
  • Enhanced services for citizens

If you haven’t had a chance to try the Windows Azure Platform, we have a Free 30-Day Trial with no credit card required.

To qualify please use this Promo Code: DPWE01. Link: http://www.windowsazurepass.com/?campid=BB8443D6-11FC-DF11-8C5D-001F29C8E9A8


Avkash Chauhan described a workaround for Windows Azure VM Role: CSUPLOAD Exception System.BadImageFormatException: an attempt was made to load a program with an incorrect format in a 2/6/2011 post:

When you run the CSUPLOAD tool on a 32-bit Windows 7 machine, you will receive the following error:

An unexpected error occurred:  An attempt was made to load a program with an incorrect format (Exception from HRESULT: 0x8007000B)

Exception: System.BadImageFormatException: an attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

The detailed Exception looks as below:

C:\Azure>csupload Add-VMImage -Connection "SubscriptionId=xxxxx; CertificateThumbprint=xxxxxx; -Description "Base image Windows Server 2008 R2" -LiteralPath "c:\Azure\baseimage.vhd" -Name baseimage.vhd -Location "South Central US"

Windows(R) Azure(TM) Upload Tool 1.3.0.0

for Microsoft(R) .NET Framework 3.5

Copyright (c) Microsoft Corporation. All rights reserved.

Successfully passed the verification tests.

The mounted size of the VM image is 30 GB. This image can be used with the following Windows Azure VM sizes: Small, Medium, Large, ExtraLarge

Windows(R) Azure(TM) VHD Preparation Tool. 1.3.0.0

for Microsoft(R) .NET Framework 3.5

Copyright (c) Microsoft Corporation. All rights reserved.

Compressing VHD from c:\Azure\baseimage.vhd to c:\Azure\20110121133856Z4483690FF0694173BFE39998C4FF64B8.preped...

An unexpected error occurred:  An attempt was made to load a program with an incorrect format (Exception from HRESULT: 0x8007000B)

Exception: System.BadImageFormatException: an attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

Microsoft.WindowsAzure.Tools.CsVhdPrep.NativeMethods.BCCompressFile(String InputFilename, String OutputFilename, UInt32 Flags, PFN_RDBCAPI_PROGRESS Callback)

   Microsoft.WindowsAzure.Tools.CsVhdPrep.BlockCompress.CompressFile(String inputFile, String outputFile)

   Microsoft.WindowsAzure.Tools.CsVhdPrep.DoActions.Prepare(FileInfo inputVhd, FileInfo outputFile, FileInfo outputFileDigest)

   Microsoft.WindowsAzure.Tools.CsVhdPrep.ProgramCommands.PrepareVhdAction(IList`1 unnamedArgs, IDictionary`2 switches)

   Microsoft.WindowsAzure.Internal.Common.CommandParser.ParseCommand(CommandDefinition cmdDef, IEnumerable`1 commandArgs)

   Microsoft.WindowsAzure.Internal.Common.CommandParser.ParseCommand(IEnumerable`1 commandLine)

   Microsoft.WindowsAzure.Tools.CsVhdPrep.Program.ExecuteActions(String[] args)

   Microsoft.WindowsAzure.Tools.CsVhdPrep.Program.Main(String[] args)

Cannot prepare VHD c:\Azure\baseimage.vhd.

What actually happens is that CSUPLOAD calls csvhdprep which returns the following exception:

"An unexpected error occurred: an attempt was made to load a program with an incorrect format.  Cannot prepare VHD {0}". 

So the actual problem is caused by CSVHDPREP.EXE.

Solution:

To solve this problem you should run Windows Azure SDK 1.3 on a 64-bit Windows 7 or Windows Server 2008 R2 machine to successfully use the CSUPLOAD tool along with the VM Role BETA.

During the Windows Azure VM Role BETA phase, the CSUPLOAD tool is supported only on 64-bit Windows; however, things may change later with respect to 32-bit support included with the VM Role tools.


Avkash Chauhan described another workaround for Windows Azure VM Role: CSUPLOAD Error - This Tool is not suported on the current OS on 2/6/2011:

When you run the CSUPLOAD tool on a machine other than 64-bit Windows 7 or Windows Server 2008 R2, you might receive the following error:

This is because when the CSUPLOAD tool runs on a machine it checks the machine’s operating system, and if the machine does not satisfy the OS requirement, it emits the above error.

You have two options to resolve this error:

  1. Use Windows Azure SDK 1.3 on either 64-bit Windows 7 or Windows Server 2008 R2 (Preferred)

  2. Use the “-SkipVerify” option with CSUPLOAD to bypass this error. (I am not a big fan of using “-SkipVerify” as it might mask some problems on your VHD while uploading it to the Azure portal, which I encountered during my extensive use of the VM Role.)


Jim O’Neill described Azure Startup Tasks and Powershell: Lessons Learned in a 2/6/2011 post:

Last weekend, I sat down to write the next blog post in my Azure@home series, covering the use of startup tasks to replace some tedious file copying code in the Worker Role.  Well, it turned out to be an adventure, and while segment 15 of the series is forthcoming, I thought I’d enumerate some of the not-so-obvious things I discovered here in this stand-alone blog post.

Before you read further, I want to thank both Steve Marx and Adam Sampson for helping me understand some of the nuances of startup tasks.  Steve’s blog articles, Windows Azure Startup Tasks: Tips, Tricks, and Gotchas as well as Introduction to Windows Azure Startup Tasks should be required reading and were my starting points – some of his pointers are repeated or expanded in my article as well.  Adam, one of the developers on the Azure team and the man behind the new 1.3 Diagnostics, RemoteAccess, and RemoteForwarder modules, helped clear up the primary ‘inexplicable’ behavior I was noticing.

If you’re looking for a walkthrough on startup tasks, this post isn’t it; I’ll take a more didactic approach in the next Azure@home article (or check out this Cloud Cover episode).  For sake of example, here’s a rather simple scenario that I’ll use to illustrate my own private gotchas below!  The result of this startup task is to write a new file, sometext.txt, in the E:\approot directory of the VM housing the deployed web role – whether that’s useful or not, I won’t comment!

Setup task files in WebRole project

setup.cmd

@echo off
powershell -command "Set-ExecutionPolicy Unrestricted" 2>> err.out 
powershell .\script.ps1 2>> err.out

script.ps1

Start-Transcript -Path transcript.txt
New-Item -path .. -name sometext.txt -type "file" `
   -value "I was written by a Startup Task!"

Stop-Transcript

Here’s a quick link list to the rest of the article, covering some of the distinct points to be aware of when using Azure startup tasks and/or Powershell:

Dude, Where’s My Script

When you specify the path to your batch file in the Task element keep in mind that the path will be relative to the approot directory in a worker role and the approot/bin directory in a web role.   It’s pretty easy to get mixed up with the relative paths, so consider using the rather cryptic %~dp0, which expands to the full path of the location where your batch file is running, whenever you need to reference additional files.  Obviously, I didn’t take my own advice here, but this sample is pretty simple.

Copy Always

I can’t tell you how many times I’ve stumbled over this one!  In the sample above, script.ps1 and setup.cmd aren’t really part of the project, they’re just tagging along with the deployment so they’ll be available in the VM on the cloud for the Azure Fabric to kick into gear.   When you add an existing external file (or create a new one) in Visual Studio, the properties are set to not copy the file to the output directory.  As a result such files won’t get packaged into the .cspkg file or delivered to Azure.   Make sure you visit the Properties dialog of the script files you do add, and set Copy to Output Directory to “Copy Always”.

Two words, ExecutionPolicy

By default, Powershell will not run untrusted scripts, which is precisely what script.ps1 is.  The first Powershell command in setup.cmd is there to set the execution policy to allow the next command to successfully run script.ps1.  If the script runs under PowerShell 2.0 (that is, you’re deploying your role with an OSFamily setting of 2), you can get away with the following single command (which sets the policy for that one command versus globally):

powershell -ExecutionPolicy Unrestricted .\script.ps1

Q: But Steve’s blog post says to use reg add HKLM\
A: Ultimately it’s the same result, and I like the fact I’m not poking directly into the registry and I can use the same script for both OSFamily values in Windows Azure.

Q: Setting that policy makes me nervous; is there no other way?
A: Scott Hanselman wrote an extensive post on how to sign a Powershell script so it can run in a remote environment (like in your Windows Azure Web Role).  That’s a bit out of the scope of what I want to cover here, so read it at your leisure. [Disclaimer:  that post was written over four years ago, and I presume it’s still accurate, but I’ve not tried it in the context of Windows Azure.]

Logging is Your Friend

It may seem like a throwback, but logging each line of your scripts to figure out what went wrong and when it went wrong is about all you can do once you’re running in the cloud.  In the setup.cmd file, you’ll notice I’ve (nod to Steve) used the stderr redirection 2>> to capture any errors when running the Powershell command itself.  And in the Powershell script I’m using the Start-Transcript cmdlet to capture the output of each command in that script.   The location of these log files is relative to the directory in which the script is run, which in the case above is /approot (worker role) or /approot/bin (web role).   The next question you’ll ask is “how do I get to them?”  Read on!

Remote Desktop is Your BEST Friend

While you probably could figure out a way to push your log files to Azure storage or make them accessible via a WCF service call, or some other clever way, I say go straight to the source – the live VM running in Azure.  With Remote Desktop access it’s simple to set up, and you’ll thank yourself for the visibility it gives you into diagnosing issues.  Once you’re in the VM you can poke around at the various log files you’ve left behind and get a better idea of where things went awry.

“Simple” isn’t always easy

By “simple”, I mean the taskType of your Task element: simple, background, or foreground, as shown in the excerpt below:

<ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="WebRole1">
    <Startup priority="1">
      <Task commandLine="setup.cmd" 
            executionContext="elevated" 
            taskType="simple"/>
    </Startup>
    ...

Now, chances are you’ll want to use “simple;” it executes your task synchronously and doesn’t put your role in the ready state until the startup task completes.  But, what happens if you have a logic error, say your task raises an exception or has an infinite loop?  Since the role is never in the ready state, you won’t be able to remote into your instance to debug it.

I’d recommend running your tasks in “background” while you’re testing and debugging, and then switch back to “simple,” when you’re confident the code is rock-solid.  As for “foreground”, that also runs your code asynchronously, but prevents the role from being recycled until the task completes, so if you do have a runaway task, you’ll have to hunt around in Task Manager to kill it first before you can restart your role and deploy your updated implementation.

Your Task is Not Alone!

If you’ve read up on some of the changes in the 1.3 SDK, you may be aware of a new plugin architecture, which makes enhanced capabilities – such as Diagnostics and RemoteAccess – easy to add or remove from your roles.  You can see these module references in your ServiceDefinition.csdef file, and they are typically added by choices you’ve made in the properties of your roles, like selecting the Enable Diagnostics checkbox on the Role property sheet or clicking the link to configure Remote Desktop connections on the Cloud project publish dialog.

The module references that then appear in the Service Definition document refer to plugins that are installed locally as part of the Azure 1.3 SDK, in the bin/plugins folder (see right).

If you open one of the csplugin files, you’ll notice it has a familiar look to it, essentially encapsulating behavior of an ancillary service or process you’re going to spin up in the cloud.  It’s a separate process from your web and worker roles, but runs in the same VM and has many of the same parameters.  Below is the code for the RemoteAccess module, which is required to be part of every web and worker role for which you want to enable Remote Desktop Access.

<?xml version="1.0" ?>
<RoleModule 
  xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
  namespace="Microsoft.WindowsAzure.Plugins.RemoteAccess">
  <Startup priority="-1">
    <Task commandLine="installRuntimeSnapIn.cmd" executionContext="elevated" taskType="background" />
    <Task commandLine="RemoteAccessAgent.exe" executionContext="elevated" taskType="background" />
    <Task commandLine="RemoteAccessAgent.exe /blockStartup" executionContext="elevated" taskType="simple" />
  </Startup>
  <ConfigurationSettings>
    <Setting name="Enabled" />
    <Setting name="AccountUsername" />
    <Setting name="AccountEncryptedPassword" />
    <Setting name="AccountExpiration" />
  </ConfigurationSettings>
  <Endpoints>
    <InternalEndpoint name="Rdp" protocol="tcp" port="3389" />
  </Endpoints>
  <Certificates>
    <Certificate name="PasswordEncryption" storeLocation="LocalMachine" storeName="My" permissionLevel="elevated" />
  </Certificates>
</RoleModule>

Note there’s a series of three tasks that are run on startup, two of which run asynchronously (taskType=background), and one of which is synchronous (taskType=simple).  These tasks, along with tasks that you specify in your ServiceDefinition.csdef document are all thrown at the Azure Fabric to start up as it works to bring up your web or worker role.  The priority in the Startup element here is –1, which means these tasks will start before your own tasks (since we left priority off and it defaults to 0, or perhaps 1?).

Now here’s where things get VERY interesting.  The inclusion of the RemoteAccess module in the web role example above means that these three tasks will start before our own setup.cmd, but there is no guarantee the first two will complete before setup.cmd because they are marked as “background” (asynchronous).

Let’s take a look at what’s in that first installRuntimeSnapIn.cmd file now:

rem Run both the 32-bit and 64-bit InstallUtil
IF EXIST %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe %SystemRoot%\Microsoft.NET\Framework\v2.0.50727\InstallUtil.exe Microsoft.WindowsAzure.ServiceRuntime.Commands.dll
IF EXIST %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe Microsoft.WindowsAzure.ServiceRuntime.Commands.dll
rem Add the snapin to the default profile
echo Add-PSSnapIn Microsoft.WindowsAzure.ServiceRuntime >> %SystemRoot%\system32\WindowsPowerShell\Profile.ps1
powershell -command set-executionpolicy allsigned

What this command file does is install a Powershell snap-in that gives some access to the runtime environment of the running role – more on that later.   The very last action it takes looks kind of like the second line in our setup.cmd file, only it sets the policy to allsigned versus unrestricted… and

It’s a Race (Condition)


Our startup task (setup.cmd) ends up setting the policy to unrestricted to run the unsigned script (script.ps1) while the RemoteAccess script above may still be running in the background!  Both of these Set-ExecutionPolicy commands are ultimately updating a global registry entry, so the last one wins!

As a result, you can see a considerable variation in behavior.  In my testing, I saw it work fine; I saw the Powershell script not even invoked (because the execution policy got reset between lines 2 and 3 of setup.cmd); and I saw my Powershell script start, only to choke in the middle because the execution policy had changed midstream, and one of the Powershell commands was requesting me to confirm – interactively - that it was ok to run!

The “fix” is easy – eliminate the race condition by setting the taskType of the installRuntimeSnapIn.cmd to “simple” rather than “background”.  I suppose deleting that last line from the .cmd file would work as well, but someone put it there for a reason, and I didn’t feel confident questioning that.   In terms of switching to “simple,” I’m fine with it.  Maybe it takes a wee bit longer for my role to start up, but that’s nothing compared to the loss of two days and a bit of sanity I otherwise incurred.

Powershell Guru: “Hey, you know there’s an easier way?”   If you read Steve’s post, you’ll note he calls out the following for invoking a Powershell script from his .cmd file:

powershell -ExecutionPolicy Unrestricted ./myscript.ps1

but he also adds that this works for osFamily=”2”, which is an Azure OS based off of Windows Server 2008 R2 and so comes with Powershell 2.0.  The default osFamily is “1”, and that provides an image based off of Windows Server 2008 SP2, which comes with Powershell 1.0, and you guessed it, the -ExecutionPolicy switch wasn’t introduced until Powershell 2.0.   That switch also affects the session and not the local machine setting, so there is no race condition created there to fix!

Snap-in At Your Own Risk

One of the tips Steve provides is leveraging the Azure Service Runtime from Powershell, using the nifty snap-in that’s installed from the Remote Access module.  That’s a great idea in theory, but after some back and forth, I’d recommend against it.   Here’s my rationale:

  • The snap-in is only installed as part of the RemoteAccess module, it’s not part of the VM image in Azure.  Unless you’re planning to always deploy your roles with RemoteAccess enabled (which I wouldn’t advise given the additional attack vector it may provide) then you wouldn’t have the plug-in available.
  • As explained to me, the primary scenario for the snap-in is to be able to peer into (and perhaps fix) misbehaving instances, making its correlation with RemoteAccess clear.  Its use with your own Powershell startup scripts isn’t currently supported.
  • You can still access many of the Azure service runtime methods (after all, the snap-in really just provides syntactic sugar).  For example, the following sets of Powershell commands are equivalent:

Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
Get-LocalResource -Name FoldingClientStorage

[Reflection.Assembly]::LoadWithPartialName("Microsoft.WindowsAzure.ServiceRuntime")
[Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment]::GetLocalResource("FoldingClientStorage")

<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@bethmassi) did her own PR in a dnrTV Show #185: Beth Massi on Visual Studio LightSwitch Beta 1 post of 2/7/2011:

image Check it out, I’m on dnrTV talking about LightSwitch: Beth Massi on Visual Studio LightSwitch Beta 1

In this episode I show my version of the Vision Clinic Walkthrough and build an application from scratch that federates multiple databases and SharePoint data. I also show how to work with entity business rules, screen code, and walk through a variety of other cool features of LightSwitch Beta 1.

It’s always fun to do these shows with Carl and sometimes they don’t always go as smoothly as you’d like. In this one, my phone battery dies about 40 minutes into the show so I have to call back on a speakerphone. So you may notice my voice sounding funky at the end. It’s still me I promise ;-)


Arthur Vickers continued his Entity Framework Feature CTP5 series with Using DbContext in EF Feature CTP5 Part 12: Automatically Detecting Changes of 2/6/2011:

Introduction

In December we released ADO.NET Entity Framework Feature Community Technology Preview 5 (CTP5). In addition to the Code First approach this CTP also contains a preview of a new API that provides a more productive surface for working with the Entity Framework. This API is based on the DbContext class and can be used with the Code First, Database First, and Model First approaches.

This is the last post of a twelve part series containing patterns and code fragments showing how features of the new API can be used. Part 1 of the series contains an overview of the topics covered together with a Code First model that is used in the code fragments of this post.

The posts in this series do not contain complete walkthroughs. If you haven’t used CTP5 before then you should read Part 1 of this series and also Code First Walkthrough or Model and Database First with DbContext before tackling this post.

Automatically detecting changes

When using most POCO entities the determination of how an entity has changed (and therefore which updates need to be sent to the database) is made by detecting the differences between the current property values of the entity and the original property values that are stored in a snapshot when the entity was queried or attached. By default, the Entity Framework does this detection automatically when the following methods are called:

  • DbSet.Find
  • DbSet.Local
  • DbSet.Remove
  • DbSet.Add
  • DbSet.Attach
  • DbContext.SaveChanges
  • DbContext.GetValidationErrors
  • DbContext.Entry
  • DbChangeTracker.Entries
Disabling automatic detection of changes

If you are tracking a lot of entities in your context and you call one of these methods many times in a loop, then you may get significant performance improvements by turning off detection of changes for the duration of the loop. For example:

using (var context = new UnicornsContext())
{
    try
    {
        context.Configuration.AutoDetectChangesEnabled = false;

        // Make many calls in a loop
        foreach (var unicorn in myUnicorns)
        {
            context.Unicorns.Add(unicorn);
        }
    }
    finally
    {
        context.Configuration.AutoDetectChangesEnabled = true;
    }
}

Don’t forget to re-enable detection of changes after the loop—I used a try/finally to ensure it is always re-enabled even if code in the loop throws an exception.

An alternative to disabling and re-enabling is to leave automatic detection of changes turned off at all times and either call context.ChangeTracker.DetectChanges explicitly or use change tracking proxies diligently. Both of these options are advanced and can easily introduce subtle bugs into your application so use them with care.
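If you take the explicit route, a minimal sketch looks like the following (it reuses the UnicornsContext and Unicorn entity from Part 1 of the series); the key point is to call DetectChanges yourself before SaveChanges so that snapshot comparison still happens:

using (var context = new UnicornsContext())
{
    // Leave automatic change detection off for this context instance.
    context.Configuration.AutoDetectChangesEnabled = false;

    var unicorn = context.Unicorns.Find(1);
    unicorn.Name = "Franky";

    // Nothing has noticed the change yet, so tell the change tracker
    // to compare against its snapshot before saving.
    context.ChangeTracker.DetectChanges();
    context.SaveChanges();
}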

Summary

In this part of the series we looked at when DbContext automatically detects changes in your tracked entities and gave some guidance as to when this automatic change detection should be turned off.

As always we would love to hear any feedback you have by commenting on this blog post.

For support please use the Entity Framework Pre-Release Forum.


Arthur Vickers explained Using DbContext in EF Feature CTP5 Part 11: Load and AsNoTracking in a 2/6/2011 post to the ADO.NET Team blog:

Introduction

In December we released ADO.NET Entity Framework Feature Community Technology Preview 5 (CTP5). In addition to the Code First approach this CTP also contains a preview of a new API that provides a more productive surface for working with the Entity Framework. This API is based on the DbContext class and can be used with the Code First, Database First, and Model First approaches.

This is the eleventh post of a twelve part series containing patterns and code fragments showing how features of the new API can be used. Part 1 of the series contains an overview of the topics covered together with a Code First model that is used in the code fragments of this post.

The posts in this series do not contain complete walkthroughs. If you haven’t used CTP5 before then you should read Part 1 of this series and also Code First Walkthrough or Model and Database First with DbContext before tackling this post.

Load

In several of the parts in this series we have wanted to load entities from the database into the context without immediately doing anything with those entities. A good example of this is loading entities for data binding as described in Part 7. One common way to do this is to write a LINQ query and then call ToList on it, only to immediately discard the created list. The Load extension method works just like ToList except that it avoids the creation of the list altogether.

Here are two examples of using Load. The first is taken from a Windows Forms data binding application where Load is used to query for entities before binding to the local collection, as described in Part 7:

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
    
    _context = new ProductContext();

    _context.Categories.Load();
    categoryBindingSource.DataSource =
        _context.Categories.Local.ToBindingList();
}

The second example shows using Load to load a filtered collection of related entities, as described in Part 6:

using (var context = new UnicornsContext())
{
    var princess = context.Princesses.Find(1);

    // Load the unicorns starting with B related to a given princess
    context.Entry(princess)
        .Collection(p => p.Unicorns)
        .Query()
        .Where(u => u.Name.StartsWith("B"))
        .Load();
}
No-tracking queries

Sometimes you may want to get entities back from a query but not have those entities be tracked by the context. This may result in better performance when querying for large numbers of entities in read-only scenarios. A new extension method AsNoTracking allows any query to be run in this way. For example:

using (var context = new UnicornsContext())
{
    // Query for all unicorns without tracking them
    var unicorns1 = context.Unicorns.AsNoTracking();

    // Query for some unicorns without tracking them
    var unicorns2 = context.Unicorns
                        .Where(u => u.Name.EndsWith("ky"))
                        .AsNoTracking()
                        .ToList();
} 

In a sense Load and AsNoTracking are opposites. Load executes a query and tracks the results in the context without returning them. AsNoTracking executes a query and returns the results without tracking them in the context.
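As a quick sketch of that contrast (again assuming the Unicorns model from Part 1; this is not code from the original post), you can inspect the change tracker after each call:

// Requires: using System.Data.Entity; using System.Linq;
using (var context = new UnicornsContext())
{
    // Load: runs the query and tracks the results; nothing is returned.
    context.Unicorns.Where(u => u.Name.StartsWith("B")).Load();
    var tracked = context.ChangeTracker.Entries().Count();      // > 0 if any unicorns matched

    // AsNoTracking: runs the query and returns the results; nothing new is tracked.
    var untracked = context.Unicorns.AsNoTracking().ToList();
    var stillTracked = context.ChangeTracker.Entries().Count(); // unchanged by the no-tracking query
}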

Summary

In this part of the series we looked at the Load extension method for loading entities into the context and the AsNoTracking extension method for returning entities without tracking them.

As always we would love to hear any feedback you have by commenting on this blog post.

For support please use the Entity Framework Pre-Release Forum.


<Return to section navigation list> 

Windows Azure Infrastructure

Toddy Mladenov (@toddysm) explained How to Deploy Windows Azure VM Role in a 2/6/2011 post:

For a project that I started in Windows Azure I needed access to the VM Role functionality, so I decided to test the experience from a customer's point of view. Here are the steps I followed:

  1. Request access to the VM Role functionality in Windows Azure
  2. Prepare the VHD for the VM Role
  3. Upload the VHD to the VM image repository on Windows Azure
  4. Create a hosted service to host the VM Role
  5. Upload the Windows Azure Hosted Service Certificate
  6. Create a package and deploy the VM Role to the hosted service
  7. Remote Desktop to the VM Role and install additional software if needed

Request Access to VM Role Functionality on Windows Azure

As you may already know, the Windows Azure VM Role feature is still in beta, and in order to use it you need to request access. You can do this using the following steps:

  1. Go to Windows Azure Management Portal at http://windows.azure.com
  2. Log in using your Live ID, and click on Beta Programs in the left-side navigation
  3. In the main pane you will see the list of Beta Programs that are available as well as checkboxes for each one
  4. Click on the checkbox next to VM Role and click on the button Join Selected

Windows Azure Beta Programs

After you complete this workflow you need to wait until your request gets approved. I had to wait approximately two weeks before I received notification that my request had been approved. Keep in mind that each Beta Program has its own wait time because they have different quotas and are approved by different teams within Windows Azure. Those times can also change based on the number of requests received.

Important: Please read the notification email carefully! In the email you will find information on how to enable the VM Role features in the Visual Studio development environment. In essence you need to run one of the scripts provided with the email to add a new registry key, or simply set the following DWORD registry value:
HKEY_CURRENT_USER\SOFTWARE\Wow6432Node\Microsoft\Windows Azure Tools for Microsoft Visual Studio 2010\1.0\VirtualMachineRoleEnabled = 1

These scripts enable the Add New Virtual Machine Role option in the context menu in Visual Studio.
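The scripts themselves aren't reproduced in this excerpt; as a rough sketch (mine, not Toddy's), the same DWORD value could also be set from C# with the Microsoft.Win32 registry API; treat the key path as illustrative and prefer the scripts from the notification email:

using Microsoft.Win32;

class EnableVmRoleTooling
{
    static void Main()
    {
        // Key path copied from the registry value quoted above; verify it against the email.
        const string keyPath =
            @"SOFTWARE\Wow6432Node\Microsoft\Windows Azure Tools for Microsoft Visual Studio 2010\1.0";

        using (var key = Registry.CurrentUser.CreateSubKey(keyPath))
        {
            // Creates (or updates) the DWORD that unlocks the Add New Virtual Machine Role option.
            key.SetValue("VirtualMachineRoleEnabled", 1, RegistryValueKind.DWord);
        }
    }
}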

Preparing the VHD for Windows Azure VM Role

One important point I would like to emphasize when you prepare the VHD is the need to install the Windows Azure Integration Components. The MSDN Library documentation is very explicit about how to do that using Hyper-V Manager.

Of course I don’t have Hyper-V Manager on my Windows 7 machine, and preparing the 64-bit image turned out to be a bit problematic. (Note: At the moment Windows Azure supports only 64-bit VM Roles based on Windows Server 2008 R2.)

The workaround I used was to install the open-source VirtualBox software and prepare the image with it. However, you will need hardware virtualization support in order to create a 64-bit VHD.

Upload VHD Image to Windows Azure

The next thing you need to do is upload the VHD file into the Windows Azure image repository. I created a very simple VHD file with Windows 7 Ultimate using Virtual PC on Windows 7.

In order to upload the VHD into the image repository you need to use the csupload.exe tool delivered as part of Windows Azure .Net SDK 1.3 or later. Here is how to use the tool:

  1. Open Windows Azure SDK Command Prompt as Administrator

    image
  2. Execute the following command
    csupload Add-VMImage -Connection "SubscriptionId=[subscription_id]; CertificateThumbprint=[certificate_thumbprint]" -Description "[description]" -LiteralPath "[vhd_location]" -Name [vhd_filename] -Location "[azure_subregion]" -TempLocation %TEMP% -SkipVerify
    image
    Where:
    subscription_id is the ID of the Windows Azure subscription where you want your VHD to be placed
    certificate_thumbprint is the thumbprint of the management certificate uploaded to the above-mentioned subscription. Note: This is the Management Certificate used to manage Windows Azure services for this subscription and NOT the Hosted Service certificate you will need later on when you deploy the VM Role.
    description is the user-friendly description you want to use
    vhd_location is the location of the VHD on your local machine or network
    vhd_filename is the name of the VHD file
    azure_subregion is the Windows Azure sub-region (you can find the names of the sub-regions in every offer description on Windows Azure web site – look for Data Transfer Details and expand the section)
    A few notes related to the csupload.exe tool:
    1.) Make sure that you have write permissions to the vhd_location folder, or the command will fail during verification. If you do not have write permissions to that folder, you can use the -SkipVerify option or specify -TempLocation as I did above.
    2.) At the time of this writing there was an issue with verification on 32-bit Windows 7, so if you are trying to upload the VHD from a 32-bit Windows 7 machine you may encounter some errors. In this case use the -SkipVerify option.

Depending on the size of your VHD and the upstream speed of your Internet connection, you may need to wait a while until the upload completes.

Once the VHD image is uploaded to Windows Azure you will see it in the image repository in the Management Portal.

image

Create a hosted service to host the VM Role

You create the hosted service for a VM Role the same way you create a hosted service for a Web or Worker role. One important thing to remember is to create the hosted service in the same location where you uploaded the VHD. If you forget to do so, you will receive the following (very explanatory) error message at deploy time:

HTTP Status Code: 400
Error Message: A parameter was incorrect. Details: One of the specified images cannot be used for creating the deployment. When you want to create a deployment that is based on a machine image, one of the following constraints must be met: (1) the hosted service and the image belong to the same affinity group, or (2) neither the hosted service nor the image belong to an affinity group, but their location constraints are the same, or (3) the hosted service belongs to an affinity group and the image has a location constraint equal to the location constraint of the affinity group. Here are the details about the current deployment: Image [image_name] does not belong to an affinity group. Its location constraint is [vhd_location]. Hosted service [hosted_service_name] is not in an affinity group. Its location constraint is [hosted_service_location].
Operation Id: [some_operation_id]

I am not going to describe the hosted service creation process in detail – you can do this quite easily from the Windows Azure Management Portal.

Upload the Hosted Service Certificate

Before deploying the VM Role you will need to upload the Hosted Service Certificate (also used to enable SSL communication to your endpoint). Here is how to do that:

  1. In Windows Azure Management Portal select the Hosted Services, Storage Accounts and CDN tab and then Hosted Services in the top left navigation
  2. In the main pane, expand the hosted service into which you want to upload the certificate and select Certificates
    image
  3. Click on the Add Certificate button in the ribbon
    image
  4. Browse for the certificate locally and type the password for it
Create Package and Deploy the VM Role to Windows Azure

In order to deploy the VM Role to Windows Azure you need to create a CSPKG package and a CSCFG file for your application and deploy those to the hosted service you created in the previous step. Here is how to do that:

  1. In Visual Studio 2010 create a new Windows Azure Project
    image
  2. In the next step in the wizard it is important that you DO NOT create any Windows Azure Role – thus the project will only have CSPKG and CSCFG files created and no separate projects for the roles
    image
  3. Now that you have the project and have changed the Visual Studio configuration to show the Add->New Virtual Machine Role option you can click on the Roles node (or the Project node) in the project and add the VM Role
    image
  4. If you don’t have your Windows Azure account configured in Visual Studio you will need to do that first. You will need to use the same subscription you used to upload the VHD file for this configuration.
    image
  5. If you have your Windows Azure account already configured you can select the VHD from the drop down menu
    image
  6. In the configuration tab you can change the number of instances you want to have for the VM Role
  7. You can also add endpoints for the VM Role to make it accessible from outside. I personally enabled ports 80 and 8080 for my test role
    image
  8. Once you are done with the configuration right-click on the project and select Publish
    image
    You will see the publish dialog: 
    image
  9. Configure Remote Desktop for all the instances by clicking on the link right above the buttons
    image
    Note: The certificate you choose in this window MUST be the same you uploaded to your Hosted Service in the previous section.
  10. Click on the OK button and wait until the deployment is complete 
    image
  11. Once the deployment is done you can see your VM Role running in Windows Azure Management Portal
    image
Remote Desktop to VM Role

You can Remote Desktop to the Windows Azure VM Role from the Windows Azure Management Portal by selecting the instance you want to connect to and clicking the Connect button in the ribbon:

image

You will be asked for the username and password – you should use the ones that you set up in step 9 in the section Create Package and Deploy the VM Role above. You may need to specify the machine name before the user name in order to log in to the system – if you don’t have the actual machine name, you can use the hosted service name and it will work:

image

I already had all the software installed on my VM and didn’t need to do any other installations.

Hope this step-by-step guide will help you bootstrap your Windows Azure VM Role deployment.



Rajesh Ramchandani published Every Cloud Has a Silver Lining: This Time It's Come to PaaS, a whitepaper, to Cloudbook’s Vol. 2, No. 1:

Every Cloud Has a Silver Lining: This Time It's Come to PaaS

Cloud computing (Infrastructure-as-a-Service) has already proved its value to some businesses and specific applications. It provides a way to deploy and access massive amounts of IT resources, on demand, in real time. It drives better utilization of data center resources, reducing capital expenditures and operating expenses. Most important, it provides the scalability and agility to adapt to changing business needs.

However, challenges remain with the Infrastructure-as-a-Service (IaaS) cloud computing deployment model. For enterprises that have evaluated and deployed on-premise IaaS clouds or those that use public clouds, it’s obvious that IaaS clouds require application developers and IT practitioners to install, configure, customize, optimize, and manage their deployment environments – manual tasks somewhat counter to the promise of cloud computing’s “agility” value proposition. Also, IT administrators and application developers have to maintain deep technical knowledge of multiple software components, and monitor and manage them.

Platform-as-a-Service frees application developers from infrastructure issues
With Platform-as-a-Service (PaaS), enterprises and developers get higher value. Software stacks are pre-configured and pre-integrated in PaaS and can be available within minutes. The PaaS model abstracts the application layer from the application infrastructure; this step eliminates the need to manage infrastructure software and provides an easy-to-manage, standardized, integrated stack and multi-tenant deployment platform. PaaS technologies provide monitoring, management and auto-scaling engines that resize the resources allocated to each application in real time, taking full advantage of the scalability of the cloud. Additional platform services can be dynamically added to the PaaS globally.
The ultimate benefit is simple: developers can develop. No longer are they responsible for managing, monitoring and dynamic resource scaling. That task belongs to the PaaS platform, managed by a centralized IT department.
In addition, the PaaS model offers flexibility to build and deploy a standard set of shared components so all applications are deployed with a consistent set of software versions and releases. Centralized IT executives have full control to maintain a homogeneous and standardized development and deployment platform across the enterprise, simplifying IT operations and reducing the time needed to deliver IT resources or platforms to their departmental constituents.
Choices for enterprises for applications in the cloud
As IT organizations prepare their applications and infrastructure for cloud deployment, they must deal with enormous complexity. They must consider issues such as new deployment architectures, management and monitoring of cloud resources, application lifecycle management, software support on clouds, licensing, security, scalability, and the thorny problem of migrating existing custom applications to the cloud. Based on the application data security and compliance needs and their goals in adopting cloud computing models, enterprises have the following options to move existing applications or develop new applications:
  • Acquire similar applications from SaaS vendors such as Salesforce.com, NetSuite, SuccessFactors, or RightNow. This option best suits database-centric enterprise applications such as ERP, CRM, etc. and may not apply to business-specific custom applications. However, this approach means writing off investments in existing applications and software licenses and can lead to vendor lock-in.
  • Move custom applications to a public PaaS, such as Salesforce.com’s force.com, Microsoft Azure or Google App Engine; however, this option also involves a high degree of re-write and vendor lock-in, as public PaaS providers require the use of proprietary SDKs and data models. Current offerings also may not be suitable to some enterprises that must adhere to privacy and compliance standards. Further, these vendors could be a poor fit because of their limited functionality.
  • Do-it-yourself PaaS. While technically doable, it is complex and could take six months to a year.
  • Off-the-shelf PaaS. Use a PaaS-enablement solution vendor to build a PaaS using the set of application infrastructure components that are currently used within an enterprise and on a choice of private and public clouds.
Enterprise Java PaaS Requirements

Because of Java's ubiquity in the enterprise, IT departments have invested heavily to develop and deploy Java applications on software stacks from Oracle, IBM and Red Hat. Ideally, enterprises that wish to move applications to clouds should be able to leverage their investments in skill sets, application code and infrastructure software, without re-writing applications.

Enterprises should also be able to create their own standards-based Java PaaS on public clouds such as Amazon EC2, or private clouds such as VMware, Eucalyptus or Cloud.com using the middleware they already have.

In other words, enterprises should look for a PaaS solution that is truly architected to put IT in the driver’s seat. It must allow enterprises that prefer to run their private PaaS on Amazon EC2 to set up an enterprise-wide master account with access controls, quotas, and hard/soft limits on cloud resources for users and departments to be able to manage capacity, usage, security and compliance. Application developers and QA/testing teams in turn have the flexibility of an on-demand PaaS while maintaining security and compliance, as well as controlling expenses in a manageable fashion. The same functionality should be available for on-premise private clouds.

Opportunities for Cloud Service Providers
As data center outsourcing to hosting providers continues to be one of the primary initiatives in enterprises, hosting providers are evolving their offerings to accommodate IT and management requirements for new SaaS business applications and legacy custom applications. Hosting providers should consider providing Java PaaS solutions as a service to their independent software vendors and enterprise customers. Such standard PaaS solutions give enterprises and ISVs a common platform to deploy new applications and easily migrate their existing applications to the cloud. Hosting providers can also integrate a Java PaaS solution into their own IaaS environments and offer a more valuable, more complete platform solution to customers. Additionally, a Java PaaS offering would enable service providers to compete with proprietary cloud offerings in the marketplace.
Additional Resources
CumuLogic provides a comprehensive solution for enterprises and Cloud Service Providers to build their own standards-based private PaaS for Amazon EC2, Cloud.com, Eucalyptus and VMware infrastructure clouds:
  • CumuLogic’s Cloud Application Management Platform can consolidate and deploy existing and new enterprise Java applications on public, private and hybrid clouds.
  • The CumuLogic solution includes dynamic application resource management, application lifecycle management, user management and autoscaling capabilities.
  • CumuLogic software leverages all infrastructure and security services provided by such IaaS environments, transforming them into PaaS environments.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Lori MacVittie (@lmacvittie) asserted “Public cloud computing is about capacity and scale on-demand; private cloud computing, however, is not” in an introduction to her Focus of Cloud Implementation Depends on the Implementer post of 2/7/2011 to F5’s DevCentral blog:

image Legos. Nearly every child has them, and nearly every parent knows that giving a child a Lego “set” is going to end the same way: the set will be put together according to instructions exactly once (usually by the parent) and then the blocks will be incorporated into the large collection of other Lego sets to become part of something completely different.


This is a process we actually encourage as parents – the ability to envision an end-result and to execute on that vision by using the tools at hand to realize it. A child “sees” an end-product, a “thing” they wish to build, and they have no problem with using pieces from disparate “sets” to build it. We might call that creativity, innovation, and ingenuity. We are proud when our children identify a problem – how do I build this thing – and are able to formulate a plan to solve it.

So why is it when we grow up and start talking about cloud computing that we suddenly abhor those same characteristics in IT?

RESOURCES as BUILDING BLOCKS

That’s really what’s happening right now within our industry. Cloud computing providers and public-only pundits have a set of instructions that define how the building blocks of cloud computing (compute, network, and storage resources) should be put together to form an end-product. But IT, like our innovative and creative children, has a different vision; they see those building blocks as capable of serving other purposes within the data center. They are the means to an end, a tool, a foundation. 

Judith Hurwitz recently explored the topic of private clouds in “What’s a private cloud anyway?” and laid out some key principles of cloud computing:

There are some key principles of the cloud that I think are worth recounting:

1. A cloud is designed to optimize and manage workloads for efficiency. Therefore repeatable and consistent workloads are most appropriate for the cloud.

2. A cloud is intended to implement automation and virtualization so that users can add and subtract services and capacity based on demand.

3. A cloud environment needs to be economically viable.

Why aren’t traditional data centers private clouds?  What if a data center adds some self-service and  virtualization? Is that enough?  Probably not. 

-- “What’s a private cloud anyway?”, Judith Hurwitz’s Cloud-Centric Weblog

What’s common to these “key principles” is that they assume an intent that may or may not be applicable to the enterprise. Judith lays this out in key principle number two and makes the assumption that “cloud” is all about auto-scaling services. Herein lies the disconnect between public and private cloud computing. While public cloud computing focuses on providing resources as a utility, private cloud computing is more about efficiency in resource distribution and processes.

The resource model, the virtualization and integrated infrastructure supporting the rapid provisioning and migration of workloads around an environment are the building blocks upon which a cloud computing model is built. The intended use and purpose to which the end-product is ultimately put is different. Public cloud puts those resources to work generating revenue by offering them up affordably to other folks, while private cloud puts those resources to work generating efficiency and time savings for enterprise IT staff.

IS vs DOES
What is happening is that the focus of cloud computing is evolving; it’s moving from “what is it” to “what does it do”.

And it is the latter that is much more important in the big scheme of things than the former. Public cloud provides resources-on-demand, primarily compute or storage resources on demand. Private cloud provides flexibility and efficiency and process automation. Public cloud resources may be incorporated into a private cloud as part of the flexibility and efficiency goals, but it is not a requirement. The intent behind a private cloud is in fact not capacity on demand, but more efficient usage and management of resources. 

The focus of cloud is changing from what it is to what it does and the intention behind cloud computing implementations is highly variable and dependent on the implementers. Private cloud computing is implemented for different reasons than public cloud computing. Private cloud implementations are not focused on economy of scale or cheap resources, they are focused on efficiency and processes.

Private cloud implementers are not trying to be Amazon or Google or Salesforce.com. They’re trying to be a more efficient, leaner version of themselves – IT as a Service. They’ve taken the building blocks – the resources – and are putting them together in a way that makes it possible for them to achieve their goals, not the goals of public  cloud computing. If that efficiency sometimes requires the use of external, public cloud computing resources then that’s where the two meet and are often considered “hybrid” cloud computing.

image The difference between what a cloud “is” and what it “does” is an important distinction especially for those who want to “sell” a cloud solution. Enterprises aren’t trying to build a public cloud environment, so trying to sell the benefits of a solution based on its ability to mimic a public cloud in a private data center is almost certainly a poor strategy. Similarly, trying to “sell” public cloud computing as the answer to all IT’s problems when you haven’t ascertained what it is the enterprise is trying to do with cloud computing is also going to fail. Rather we should take a lesson from our own experiences outside IT with our children and stop trying to force IT into a mold based on some set of instructions someone else put together and listen to what it is they are trying to do.

The intention of a private cloud computing implementation is not the same as that of a public cloud computing implementation. Which ultimately means that “success” or “failure” of such implementations will be measured by a completely different set of sticks.



We’ll debate private cloud and dig into the obstacles (and solutions to them) enterprises are experiencing in moving forward with private cloud computing in the Private Cloud Track at CloudConnect 2011. Hope to see you there!


Mark Kromer (@mssqldude) described The SQL Server Private Cloud in a 2/7/2011 post:

I am currently working on two projects: one is a proof of concept and the other is an ongoing, two-year project by one of our largest Microsoft customers here on the East Coast. In both cases, these customers are implementing what Microsoft and the IT industry now refer to as “private cloud”. I’m not sure that I feel that term is 100% a good fit:

First, when people hear “Cloud” today, they immediately think of public Internet-based cloud computing. Private cloud is based on local, on-premises infrastructure for the most part. It is a reconfiguring of your data center practices and infrastructure to create an agile, cost-effective factory that can quickly provision and expand or collapse capacity (i.e., it is elastic) based on end-user (customer) demand. Some of the features that constitute a “private cloud” are listed below. Self-service, metered billing and virtualized workloads are key to private cloud, too.

Second, it says very little about what it actually does. “Cloud” is an overloaded and ill-defined term in general right now. That being said, I don’t think I have a better term for it yet, so I’m just throwing stones! Typically when talking to IT shops about comprehensive data center efficiencies such as “Private Cloud”, we will discuss “Optimized Infrastructure”. But I think that terminology also falls short of what is being proposed in Private Clouds.

That being said, let me take a few minutes of your time to quickly lay out what “private cloud” means in the context of this blog and SQL Server databases, and then link you to further reading to provide deep-dive detail into each area:

  1. Deploy applications and databases as virtual machines
  2. Utilize commodity hardware and load-balance VMs
  3. Provide self-service portals to allow end-users (customers) to request new, expanded or smaller databases
  4. Constantly monitor server & DB usage and sizes and dynamically (automatically) resize and migrate databases to least-used servers
  5. No idle stand-by-only servers
  6. Implement workflow to approve user requests and kick-off provisioning scripts
  7. Automatically provision users & databases from scripting (PowerShell)

Here are the Microsoft Self-Service Portal home page, the Microsoft Virtual Machine Manager, and the SCOM monitoring tools that enable a fully Microsoft-enabled private cloud. Notice there is not a lot of SQL Server database-centric material there. Private cloud is an infrastructure to enable flexibility and elasticity in your environment.


Phil Wainewright posted Building a halfway house to the cloud to ZDNet’s Software as Services blog on 2/7/2011:

image Several private clouds are now coming to market based on the Vblock technology developed by VCE, a joint venture forged by Cisco, EMC and VMWare. Last week I groaned inwardly as I saw not one, but two announcements plop into my inbox. First came Sungard’s “fully managed cloud offering”, and then a couple of days later CSC got in touch to brief me about the launch of CSC BizCloud, “the industry’s first on-premise private cloud billed as a service.”

image It’s entirely predictable of course that we’ll see a surge of fake cloud roll-outs this year, and I shouldn’t be surprised to find the usual suspects eager to host them. It’s a lucrative business when, as I highlighted last year when quoting a Microsoft white paper, “private clouds are feasible but come with a significant cost premium of about 10 times the cost of a public cloud for the same unit of service.”

There are occasions, though, when even I’ll admit that implementing private cloud can make sense as a stepping stone on the way to a fully native, cloud-scale infrastructure. In the past, I’ve framed this largely in terms of the technology challenges. In conversation last week with CSC’s vice president, emerging markets, Brian Boruff, I learnt that there’s also an important cultural angle. It’s simply too much of a mindset adjustment for many organizations to move directly to cloud computing from where they’re starting right now. [This paragraph updated 14:56 PST to change Brian Boruff's job title from VP cloud computing and software services to VP emerging markets].

“I don’t think cloud computing is a technology issue. The technology’s there,” Boruff told me. “It’s a people and a labor and a business issue. BizCloud — think of it as a sandbox. They can bring it inside the data center and start playing with it.

“Two years from now,” he continued, “I think you’ll see workloads that have moved into BizCloud moving into the public cloud — but it’s a journey.”

Some organizations of course are way ahead of the crowd. Boruff spoke of three generations that differ dramatically in their attitudes to outsourcing and subcontracting. At one extreme are the first-generation outsourcers, he said. “Some people that have never outsourced are scared to death of this cloud computing thing.” Others have been doing it for twenty years or more and it’s second nature to them. “We have one client,” he revealed, “that is a $35bn multinational whose strategy over the next three years is to move everything they do into an as-a-service model.”

For those who aren’t yet ready to go all the way into the cloud, halfway-house platforms like BizCloud provide an opportunity to get some of the benefits of virtualization and automation immediately while taking time to adapt to the wider impact of full-blown cloud computing, he explained. Since CSC offers fully multi-tenant public cloud infrastructure built on the same platform as BizCloud, it will be much easier, he assured me, to move to a public cloud infrastructure from BizCloud than it would be from a classic enterprise IT environment. In the meantime, IT management buys time to transition its workforce to the new realities of cloud.

“It’s not just capital investment. Think about all the people, all the labor investment of people that are running around managing highly inefficient workloads,” said Boruff. “If you’re the VP of infrastructure and somebody’s telling you to move to the public cloud, what does the future of your career look like?

“BizCloud is a way inside of someone’s data centers to say, instead of three people doing that workload, maybe you only need two or one. Let’s retrain them to run these highly virtualized data centers and then go after some of the applications.”

While it may still cost more than a true public cloud implementation, the cost savings compared to the existing enterprise infrastructure can still be huge. Telecoms billing provider Cycle 30, an early Sungard cloud customer, is said in its press release to have “saved millions of dollars” by adopting a cloud solution, albeit without specifying how the savings were calculated.

Nor is CSC holding back customers from moving all the way to the cloud if they’re ready — whatever the hit to its own revenues. Boruff cited the example of Britain’s national postal service, Royal Mail Group, which CSC helped move from an in-house implementation of Lotus Notes to Microsoft’s cloud-hosted BPOS suite (soon to be known as Office 365). “We were charging Royal Mail Group a lot of money to run Lotus Notes for them,” he said. “We had 40 people on site. We had to get rid of their jobs.”

That kind of story probably doesn’t help make IT decision makers any more eager to accelerate their progress cloudwards. If a hybrid cloud strategy buys a bit more time to allay staff fears and manage retraining and redeployment, maybe it’s not such a bad thing after all.

Phil’s newest role as an industry advocate is vice-president of EuroCloud.

image

No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

David Linthicum recommended When Thinking SOA Governance, Think Macro and Micro in a 2/7/2011 post to ebizQ’s Where SOA Meets Cloud blog:

Policies, as related to governance, are declarative electronic rules that define the correct behaviors of services. However, they can also be rules that are not electronically enforced. An example would be policies created by IT leaders that everyone must follow, but that are not automated. Or, they can be policies that enforce the proper behavior during service execution, typically enforced electronically using governance technology. Both are important, which is why we discuss policies as things that may exist inside or outside of governance technology.

For our purposes, we can call policies that are more general in nature macro policies, and policies that are specific to a particular service micro policies.

Macro Policies

image Macro policies are those policies that IT leaders, such as the enterprise architect, typically create to address larger sweeping issues that cover many services, the data, the processes, and the applications. Examples of macro policies include:

  • All metadata must adhere to an approved semantic model, on-premise and cloud computing-based.
  • All services must return a response in .05 seconds for on-premise and .10 for cloud computing-based.
  • Changes to processes have to be approved by a business leader.
  • All services must be built using Java.

The idea is that we have some general rules that control how the system is developed, redeveloped, and monitored. Thus, macro policies do indeed exist as established simple rules, such as the ones listed above, or as set processes that must be followed. For example, there could be a process to address how the database is changed, including 20 steps that must be followed, from initiation of the change to acceptance testing. Another example is the process of registering a new user on the cloud computing platform. Or, any process that reduces operational risks.

Many have a tendency to roll their eyes at these kinds of controls that are placed around automation. I'm sure you have many that exist within your IT shop now. They may also push back on extending these governance concepts to cloud computing. However, the core value of implementing macro policies is to reduce risk and save money.

The trick is to strike a balance between too many macro policies that hurt productivity, and too few that raise the chance that something bad will happen. Not an easy thing, but a good rule of thumb is that your IT department should spend approximately 5 percent of its time dealing with issues around macro policies. If you spend more time than that, perhaps you're over-governing. Less than that, or if you have disaster after disaster happen, perhaps you can put in more macro policies to place more process around the management of IT resources, on-premise or cloud computing-based.

Micro Policies

Micro, or service-based, policies typically deal with a policy instance around a particular service, process, or data element. They are related to macro policies in that macro policies define what needs to be done, whereas micro policies define how a policy is carried out at the lowest level of granularity.

Examples of micro policies include:

  • Only those from HR can leverage Get_Sal_Info services.
  • No more than 1 application, service, or process at a time can access the Update_Customer_Data service.
  • The Sales_Amount data element can only be updated by the DBA, and not the developers.
  • The response time from the get_customer_credit service must be less than .0001 seconds.

Micro policies are very specific, and typically destined for implementation within service governance technology that can track and implement these types of policies.


Dana Gardner reported “The Open Group today launched a public survey that will examine the measurable business drivers” and advised you to Measure the True Enterprise Impact of Cloud Computing in a 2/7/2011 post written by Dave Lounsbury, Chief Technical Officer at The Open Group (pictured below):

Everyone in the IT industry knows by now that cloud computing is exploding. Gartner said cloud computing was its number-one area of inquiry in 2010, and hype for the popular computing movement peaked last summer according to its 2010 Hype Cycle for Cloud Computing.
Regardless of what the media says, cloud is now a real option for delivery of IT services to business, and organizations of all sizes need to determine how they can generate real business benefit from using cloud. Industry discussion about its benefits still tends to be more focused on IT advantages such as capacity and utilization rather than business impacts such as competitive differentiation, profit margin, etc. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

The Open Group’s Cloud Work Group has created a series of White Papers to help clarify the business impacts of using cloud and, as a next step, The Open Group at our San Diego Conference today launched a public survey that will examine the measurable business drivers and ROI to be gained from the cloud. We encourage you to spend a few minutes completing the online survey.

We’re specifically looking for input from end-user organizations about their business requirements, outcomes and initial experience measuring ROI around their cloud projects.

The survey builds on the work already done by the Cloud Work Group, and its results will help guide the group's future work on the financial and business impact of cloud computing.
The survey will be open until Monday, March 7, after which we’ll publish the findings. Please help us spread the word by sharing the link -- http://svy.mk/ogcloud -- to our survey with others in your network who are either direct buyers of cloud services or have influence over their organization’s cloud-related investments.



<Return to section navigation list> 

Cloud Computing Events

Neudesic announced on 2/7/2011 a live Webcast, AppFabric Series: AppFabric for Everyone, to be held on 2/9/2011 from 10:00 AM to 12:00 PM PST:

image Presented by: Rick Garibay, GM of Connected Systems Practice, Neudesic

Registration: http://www.clicktoattend.com/?id=152702 

More Information: Windows Server AppFabric extends the core capabilities of IIS by providing many of the cloud benefits on-premise, including elastic scale and robust hosting capabilities. Learn how Windows Server AppFabric can benefit your approach to building and supporting composite application services via enhanced lifetime management, tracking, persistence of long-running workflow services, and caching for performance optimization.


1105 Media posted descriptions of the first set of sessions in the Cloud Computing track at Visual Studio Live!, to be held 4/18 to 4/22/2011 in Las Vegas, NV:

Technologies covered include:

  • Windows Azure
  • SQL Azure
  • Windows Azure DataMarket

T3 Azure Platform Overview Tuesday April 19 9:15 AM - 10:30 AM by Vishwas Lele, Architect, AIS:

image The Windows Azure platform provides a comprehensive set of services for building and running scalable applications targeted at the cloud. In this introductory session, we’ll explore these new concepts and show the basics of how to get started today with the Windows Azure platform.

During the course of the session, we will also see how web applications can use the scalable compute and storage services with Windows Azure, secure connectivity with Service Bus and Access Control Service, and a relational database with Microsoft SQL Azure.

T7 Building Azure Applications Tuesday April 19 10:45 AM - 12:00 PM by Vishwas Lele, Architect, AIS:

Cloud-based computing offers serious financial savings for companies that want a flexible approach to building applications. Microsoft Azure provides a very compelling platform for building cloud-based services. Since Azure applications can be built with .NET, you can reuse your existing skills.

Nonetheless, building a distributed cloud application is not the same as building a desktop app or even a conventional hosted application. Latency and bandwidth considerations alone change the way you structure data and pose design constraints on the relational database model. Asynchronous processing is often needed to build applications that are scalable. Since messages can be lost, or retransmitted by the sender, services have to handle redundant messaging. Data security in a public cloud is different from privately hosted applications.

This session will introduce the basic tools of Azure and will illustrate the architectural and design tradeoffs that must be made with cloud applications.

T11 Building Compute-Intensive Apps in Azure Tuesday April 19 2:30 PM - 3:45 PM by Vishwas Lele, Architect, AIS:

Monte Carlo is a computation method that relies on repeated random sampling. It is essentially a non-recursive divide-and-conquer algorithm that can take advantage of the massive amount of parallelism offered by the Windows Azure Platform. In this session, we will build a Monte Carlo simulator from scratch. This will include the following functions: Submit (the ability to submit calculation jobs), Monitor (the ability to monitor the progress of calculation jobs queued for execution), and Analyze (using a Silverlight-based UI, visualize the results of the calculation stored in Azure Tables).

You will learn:

  • Elasticity offered by Azure Worker Roles
  • Scalability offered by Azure Tables
  • Guaranteed delivery offered by Azure Queues
  • Silverlight based application for rich visualization of calculation results stored in Azure Tables
  • MVC2 application hosted inside Azure Worker Role

T15 Using C# and Visual Basic to Build a Cloud Application for Windows Phone 7 Tuesday April 19 4:00 PM - 5:15 PM by Srivatsn Narayanan, Developer, C# and VB Compiler Team , Microsoft Corporation and Lucian Wischik, Specification Lead for Visual Basic, Microsoft Corporation:

image Are you a C# or Visual Basic developer, interested in building Windows Phone 7 applications that scale with your user base? Come to this demo-packed session and learn tips to develop these applications. You’ll start with an idea for a killer Windows Phone 7 app. It needs an Azure backend because it’s online and connected and is going to scale to millions of users.

You want to sell it on the Marketplace. How will you develop it? This demo shows technologies that will help you pull it off! It highlights some existing C# and Visual Basic language features like LINQ to XML, and some upcoming language features like Async for ASP.NET, that will turbo-charge your development. The only thing it doesn’t cover is what kind of yacht to buy when your app hits the big time.

TH20 Windows Azure and PHP Thursday April 21 3:00 PM - 4:15 PM by Jeffrey McManus, CEO, Platform Associates

image In this session, you'll see some real-world demonstrations of using Windows Azure cloud services with PHP Web applications. Whether you support a hybrid development environment, are planning to migrate PHP code to Azure, or are just curious about how PHP works, we'll provide examples of using Azure from PHP code to perform both compute and storage tasks.

You will learn:

    • The advantages and drawbacks of PHP as a server-side Web development language
    • How Azure supports and hosts PHP Web applications

Full disclosure: I’m a contributing editor for 1105 Media’s Visual Studio Magazine and occasionally write for their Redmond Developer New[letter].


Jim O’Neill reported in a 2/7/2011 post that he’ll be giving a Windows Azure/PowerBuilder presentation entitled When Worlds Collide–PowerBuilder User Group on 2/10/2011 in Cheshire, CT:

Many of you know that I came to Microsoft after working for nearly twelve years at Sybase (now an SAP Company), almost exclusively with the PowerBuilder developer community. While PowerBuilder is not the shiny object it was when it spawned the client-server development phenomenon of the early to mid-90s, it’s continued to steadily evolve.

At version 12, the ‘classic’ version supports building native Win32 applications and deploying that same code (with a few tweaks here and there) as ASP.NET Web Forms and WinForms. A new companion IDE (based on the Visual Studio shell) shipped along with the classic version and additionally brings PowerBuilder applications into the XAML world by enabling the deployment of WPF applications.

Since PowerBuilder and PowerBuilder developers are a part of the overall Microsoft ecosystem, I thought it might be fun to merge my roots with my latest penchant, Windows Azure and Cloud Computing, so I’ll be presenting on PowerBuilder and the Cloud at the Connecticut PowerBuilder User Group meeting this Thursday.  The abstract goes like this:

Over the past year or so, if you’re a technologist, you’ve been assailed on all sides by yet another revolutionary paradigm – cloud computing. Is it all that revolutionary? What do you need to know about it? Why would you use it? And, most importantly, how does it affect you as a PowerBuilder developer? Join Microsoft Developer Evangelist (and former Sybase Principal Systems Consultant for PowerBuilder), Jim O’Neil, for a hype-less look at cloud computing, focusing on Microsoft Azure and potential integration points with PowerBuilder. Attendees will also get an access code for a 30-day trial period of Windows Azure and SQL Azure to do their own experimentation in the cloud.

Here are all the particulars, and please RSVP so we have an accurate count for pizza, which is on me!

Oh yeah, I guarantee that nothing will blow up when my worlds collide!

PowerBuilder is still around? I remember using Visual Basic 3.0, Access 1.1 and ODBC to fend off competition from PowerBuilder consultants and client proponents many years ago.


RBA Consulting announced the Twin Cities Cloud Computing User Group - February 10, 2011, 3:00 to 5:00 PM at Microsoft’s Bloomington, MN office:

This month's topic: Advanced Roles in Windows Azure

REGISTER HERE

image So, maybe you've dabbled with Azure. Maybe you've built a small app or two. Now what? In this session, Adam Grocholski joins us to walk us through some of the more advanced features of Windows Azure web and worker roles. Areas to be discussed include:

  • Service Model Enhancements
  • Administrative Access and Startup Tasks
  • Remote Desktop
  • Local Storage
  • Input & Internal Endpoints
  • Windows Azure Connect

Thursday, February 10, 2011, 3:00 PM to 5:00 PM   
Microsoft's Bloomington Office   
8300 Norman Center Drive, Suite 950
Bloomington, MN 55437

About Adam Grocholski
image Adam Grocholski is a Technical Evangelist at RBA Consulting. You can usually find him talking to anyone who will listen about Silverlight, Windows Phone, and Windows Azure. Last spring Adam was named a Microsoft MVP for his commitment to the Microsoft development community, and most recently, has been touring the country as a speaker and facilitator for Microsoft's Azure Boot Camps. From founding the Twin Cities Cloud Computing User Group to speaking at local user groups and code camps as well as local, regional, and national conferences, Adam is committed to building a great community of well-educated Microsoft developers. When not working he enjoys spending time with his three awesome daughters and amazing wife. You can follow his thoughts on technology at thinkfirstcodelater.com.

Visit the Twin Cities Cloud Computing User Group here.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

SearchCloudComputing.com posted my (@rogerjenn) first article as a contributor, How much are free cloud computing services worth?, to the TechTarget blog on 2/8/2011. From the RSS blurb:

image Free trials from cloud computing providers like Microsoft and Google are one way to lead new users toward cloud adoption. Our expert breaks down the value of each offering.


Matt posted Rack and the Beanstalk to the Amazon Web Services blog on 2/7/2011:

AWS Elastic Beanstalk manages your web application via Java, Tomcat and the Amazon cloud infrastructure. This means that in addition to Java, Elastic Beanstalk can host applications developed with languages compatible with the Java VM.

This includes tools such as Clojure, Scala and JRuby - in this post we start to think out of the box, and show you how to run any Rack based Ruby application (including Rails and Sinatra) on the Elastic Beanstalk platform. You get all the benefits of deploying to Elastic Beanstalk: autoscaling, load balancing, versions and environments, with the joys of developing in Ruby.

Getting started

We'll package a new Rails app into a Java .war file which will run natively through JRuby on the Tomcat application server. There is no smoke and mirrors here - Rails will run natively on JRuby, a Ruby implementation written in Java.

Java up

If you've not used Java or JRuby before, you'll need to install them. Java is available for download, or via your favourite package repository and is usually already installed on Mac OS X. The latest version of JRuby is available here. It's just a case of downloading the latest binaries for your platform (or source, if you are so inclined), and unpacking them into your path - full details here. I used v1.5.6 for this post.

Gem cutting

Ruby applications and modules are often distributed as Rubygems. JRuby maintains a separate Rubygem library, so we'll need to install a few gems to get started including Rails, the Java database adaptors and warbler, which we'll use to package our application for deployment to AWS Elastic Beanstalk. Assuming you added the jruby binaries to your path, you can run the following on your command line:

  • jruby -S gem install rails
  • jruby -S gem install warbler
  • jruby -S gem install jruby-openssl
  • jruby -S gem install activerecord-jdbcsqlite3-adapter
  • jruby -S gem install activerecord-jdbcmysql-adapter

To skip the lengthy documentation generation, just throw '--no-ri --no-rdoc' on the end of each of these commands.

A new hope

We can now create a new Rails application, and set it up for deployment under the JVM application container of Elastic Beanstalk. We can use a preset template, provided by jruby.org, to get us up and running quickly. Again, on the command line, run:

  • jruby -S rails new aws_on_rails -m http://jruby.org/rails3.rb

This will create a new Rails application in a directory called 'aws_on_rails'. Since it's so easy with Rails, let's make our example app do something interesting. For this, we'll need to first set up our database configuration to use our Java database drivers. To do this, just define the gems in the application's Gemfile, just beneath the line that starts gem 'jdbc-sqlite3':

  • gem 'activerecord-jdbcmysql-adapter', :require => false
  • gem 'jruby-openssl'

Now we set up the database configuration details - add these to your app's config/database.yml file.

development:  
  adapter: jdbcsqlite3
  database: db/development.sqlite3
  pool: 5
  timeout: 5000

production:
  adapter: jdbcmysql
  driver: com.mysql.jdbc.Driver
  username: admin
  password: <password>
  pool: 5
  timeout: 5000
  url: jdbc:mysql://<hostname>/<db-name>

If you don't have a MySQL database, you can create one quickly using the Amazon Relational Database Service. Just log into the AWS Management Console, go to the RDS tab, and click 'Launch DB instance'. You can find more details about Amazon RDS here. The hostname for the production settings above is listed in the console as the database 'endpoint'. Be sure to create the RDS database in the same region as Elastic Beanstalk (us-east), and set up the appropriate security group access.

Application

We'll create a very basic application that lets us check in to a location. We'll use Rails' scaffolding to generate a simple interface, a controller and a new model.

  • jruby -S rails g scaffold Checkin name:string location:string

Then we just need to migrate our production database, ready for the application to be deployed to Elastic Beanstalk:

  • jruby -S rake db:migrate RAILS_ENV=production

Finally, we just need to set up the default route. Add the following to config/routes.rb:

root :to => "checkins#index"

This tells Rails how to respond to the root URL, which is used by the Elastic Beanstalk load balancer by default to monitor the health of your application.

Deployment

We're now ready to package our application, and send it to Elastic Beanstalk. First of all, we'll use warble to package our application into a Java war file.

  • jruby -S warble

This will create a new war file, named after your application, located in the root directory of your application. Head over to the AWS Management Console, click on the Elastic Beanstalk tab, and select 'Create New Application'. Set up your Elastic Beanstalk application with a name, URL and container type, then upload the Rails war file.

After Elastic Beanstalk has provisioned your EC2 instances, load balancer and autoscaling groups, your application will start under Tomcat's JVM. This step can take some time but once your app is launched, you can view it at the Elastic Beanstalk URL.

Congrats! You are now running Rails on AWS Elastic Beanstalk.

By default, your application will launch under Elastic Beanstalk in production mode, but you can change this and a wide range of other options using the warbler configuration settings. You can adjust the number of instances and autoscaling settings from the Elastic Beanstalk console.

Since Elastic Beanstalk is also API driven, you can automate the configuration, packaging and deployment as part of your standard build and release process.


James Urquhart described Open source and the network's role in the cloud in a 2/5/2011 post to C|Net News’ The Wisdom of Clouds blog:

image The announcement of the latest release from open-source cloud-management software project OpenStack is remarkable in many ways. The rapidly growing OpenStack community is gaining ground on a mature platform--this release adds image management and support for unlimited object sizes in its object storage service software--and there were a number of new IT vendors added to the list of supporters.

ZDNet UK covered the basics of the announcement, so I won't pick it apart here. Rather, I want to focus on one of the most interesting aspects of many of the vendors announcing their participation with this release.

Namely, several of them are networking vendors, including my employer Cisco Systems, Extreme Networks and--according to Stephen Spector, the OpenStack communications manager, soon after the OpenStack press release went out--Arista Networks. These names are added to those of already active participants, such as Citrix and Dell, that bring their own networking technologies and perspectives to the table.

(Disclosure: I am associated with Cisco's OpenStack participation, but I don't use this blog to promote my day job. I'm not here to tout Cisco. These opinions are mine and mine alone.)

What is important here is that the networking industry is showing up at all, and what that means to the future of networking in cloud computing. To me, the way the network's relevance will be defined--or at least standardized--will be through open-source efforts. And OpenStack may be the effort where the industry comes together and hammers out one or more solutions.

How will cloud services and applications utilize network capabilities such as routing, bandwidth management, VPN creation and configuration, network segmentation and isolation, and so on? And, given that most developers couldn't care less about networking, or at least would like to be able to ignore it if they can, how do you present network concepts in a form that is meaningful for application development and operations? (Hint: do we have the right network abstractions?)

Furthermore, how will OpenStack itself take advantage of networking environments? There is certainly a debate in the industry about how much intelligence and architecture is needed within a single data-center environment, but there are still network management elements that would allow dynamic optimization of a network for mixed workloads or traffic types.

Also, large clouds will be globally distributed. How will OpenStack support that from a networking perspective? How will VPNs, VRFs, VLANs, MPLS connections, and so on be provisioned and operated in a highly distributed, dynamic environment--if at all? There is a huge opportunity for both service providers and infrastructure vendors to innovate in this space.

And that, ultimately, is why the networking effort in OpenStack will be important to the industry as a whole. Yes, there are other open-source cloud management systems worth considering for your cloud system, and yes OpenStack still has a long way to go before it reaches the maturity of several of those other offerings.

But, for whatever reason, the industry is converging on OpenStack for now, and that portends great things happening in the coming years, especially when it comes to the role of networking in cloud services.

James is a market strategist for cloud computing at Cisco Systems and an adviser to EnStratus.

image

No significant articles today.


<Return to section navigation list> 
