Monday, December 13, 2010

Windows Azure and Cloud Computing Posts for 12/13/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).
Read the detailed TOC here (PDF) and download the sample code here.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:
  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”
HTTP downloads of the two chapters are available at no charge from the book's Code Download page.

Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

Kristofer Anderson (@KristoferA) explained Query Profiling SQL Azure when using Entity Framework or LINQ-to-SQL on 12/13/2010:
One slight shortcoming in Microsoft’s SQL Azure (the SQL Server 2008 flavor that is hosted in Microsoft’s cloud platform) is that users don’t have the trace privileges needed to use SQL Profiler to profile query and performance behavior. Fortunately there are other ways to extract performance data for individual queries; SQL Azure exposes IO statistics, timings, and execution plans in the same way as normal non-cloud editions of SQL Server.
I have recently done some testing against SQL Azure using my tools. Huagati DBML/EDMX Tools needed some minor adjustments to work against SQL Azure as outlined in a previous blog post: http://huagati.blogspot.com/2010/12/sql-azure-support-in-huagati-dbmledmx.html
Next up was testing the profiling / logging components for Entity Framework 4 and Linq-to-SQL in Huagati Query Profiler against SQL Azure. Fortunately I can announce that it works just fine; the techniques used by the Huagati Query Profiler’s logging components for capturing server-side timings, I/O statistics, and query execution plans are all supported in SQL Azure. The log entries will look the same, and the same performance data and filtering options that work against local SQL Server instances can be used against SQL Server in the cloud.
One interesting thing I noticed while testing is that the roundtrip times against the SQL Azure instance I am accessing are only 100-150ms higher than accessing a database on my local LAN. That is impressive considering I am in Thailand and the SQL Azure instance is in Singapore, and faster than what I have seen mentioned in some forums and blog posts from US SQL Azure users.
To get started, download and install the Huagati Query Profiler from http://huagati.com/l2sprofiler/ and take a look at these blog posts and/or the sample code that is installed together with the profiler:
Linq-to-SQL: http://huagati.blogspot.com/2009/06/profiling-linq-to-sql-applications.html
Entity Framework: http://huagati.blogspot.com/2010/06/entity-framework-support-in-huagati.html
Next, you can either try the sample projects that ship with the profiler against your own SQL Azure instance, or add profiling support to your own Entity Framework 4 or Linq-to-SQL based projects against a SQL Azure database.
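For readers who want to see the underlying mechanism Kristofer relies on, here is a minimal sketch (mine, not from his post) of capturing SQL Azure’s server-side I/O and timing statistics over plain ADO.NET; the connection string and query are placeholders you would replace with your own:

using System;
using System.Data.SqlClient;

class SqlAzureStatisticsDemo
{
    static void Main()
    {
        // Placeholder connection string - substitute your own SQL Azure server, database and credentials.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=yourdb;" +
            "User ID=youruser@yourserver;Password=yourpassword;Encrypt=True;";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            // SET STATISTICS output arrives as informational messages, not as a result set.
            connection.InfoMessage += (sender, e) => Console.WriteLine(e.Message);
            connection.Open();

            using (SqlCommand command = connection.CreateCommand())
            {
                // Ask for the same I/O and timing statistics SQL Profiler would surface on-premises.
                command.CommandText = "SET STATISTICS IO ON; SET STATISTICS TIME ON;";
                command.ExecuteNonQuery();

                // Any query generated by Entity Framework or LINQ to SQL can be measured the same way.
                command.CommandText = "SELECT COUNT(*) FROM sys.objects;";
                Console.WriteLine("Object count: {0}", command.ExecuteScalar());
            }
        }
    }
}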

The SQL Azure Team updated the SQL Azure site’s main page last week.

<Return to section navigation list> 

Marketplace DataMarket and OData

Jeff Kresbach described Restful Services, oData, and Rest Sharp in this 12/12/2010 post:
After a great presentation by Jason Sheehan at MDC about RestSharp, I decided to implement it.
RestSharp is a .Net framework for consuming restful data sources via either Json or XML.
My first step was to put together a Restful data source for RestSharp to consume.  Staying entirely within .Net, I decided to use Microsoft's oData implementation, built on System.Data.Services.DataServices.  Natively, these support Json, or atom+pub xml.  (XML with a few bells and whistles added on)
There are three main steps for creating an oData data source:
1)  override CreateDSPMetaData
This is where the metadata is returned.  The metadata defines the structure of the data to return.  The structure contains the relationships between data objects, along with what properties the objects expose.  The metadata can and should be cached so that the structure is not rebuilt with every data request.
2) override CreateDataSource
The context contains the data the data source will publish.  This method is the conduit which will populate the metadata objects to be returned to the requestor.
3) implement static InitializeService
At this point we can set up security, along with setting up properties of the web service (versioning, etc)
Here is a web service which publishes stock prices for various Products (stocks) in various Categories.
namespace RestService
{
    public class RestServiceImpl : DSPDataService<DSPContext>
    {
        private static DSPContext _context;
        private static DSPMetadata _metadata;
        /// <summary>
        /// Populate traversable data source
        /// </summary>
        /// <returns></returns>
        protected override DSPContext CreateDataSource()
        {
            if (_context == null)
            {
                _context = new DSPContext();
                Category utilities = new Category(0);
                utilities.Name = "Electric";
                Category financials = new Category(1);
                financials.Name = "Financial";
                IList products = _context.GetResourceSetEntities("Products");
                Product electric = new Product(0, utilities);
                electric.Name = "ABC Electric";
                electric.Description = "Electric Utility";
                electric.Price = 3.5;
                products.Add(electric);
                Product water = new Product(1, utilities);
                water.Name = "XYZ Water";
                water.Description = "Water Utility";
                water.Price = 2.4;
                products.Add(water);
                Product banks = new Product(2, financials);
                banks.Name = "FatCat Bank";
                banks.Description = "A bank that's almost too big";
                banks.Price = 19.9; // This will never get to the client
                products.Add(banks);
                IList categories = _context.GetResourceSetEntities("Categories");
                categories.Add(utilities);
                categories.Add(financials);
                utilities.Products.Add(electric);
                utilities.Products.Add(water);
                financials.Products.Add(banks);
            }
            return _context;
        }
        /// <summary>
        /// Setup rules describing published data structure - relationships between data,
        /// key field, other searchable fields, etc.
        /// </summary>
        /// <returns></returns>
        protected override DSPMetadata CreateDSPMetadata()
        {
            if (_metadata == null)
            {
                _metadata = new DSPMetadata("DemoService", "DataServiceProviderDemo");
                // Define entity type product
                ResourceType product = _metadata.AddEntityType(typeof(Product), "Product");
                _metadata.AddKeyProperty(product, "ProductID");
                // Only add properties we wish to share with end users
                _metadata.AddPrimitiveProperty(product, "Name");
                _metadata.AddPrimitiveProperty(product, "Description");
                EntityPropertyMappingAttribute att = new EntityPropertyMappingAttribute("Name",
                    SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                product.AddEntityPropertyMappingAttribute(att);
                att = new EntityPropertyMappingAttribute("Description",
                    SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                product.AddEntityPropertyMappingAttribute(att);
                // Define products as a set of product entities
                ResourceSet products = _metadata.AddResourceSet("Products", product);
                // Define entity type category
                ResourceType category = _metadata.AddEntityType(typeof(Category), "Category");
                _metadata.AddKeyProperty(category, "CategoryID");
                _metadata.AddPrimitiveProperty(category, "Name");
                _metadata.AddPrimitiveProperty(category, "Description");
                // Define categories as a set of category entities
                ResourceSet categories = _metadata.AddResourceSet("Categories", category);
                att = new EntityPropertyMappingAttribute("Name",
                    SyndicationItemProperty.Title, SyndicationTextContentKind.Plaintext, true);
                category.AddEntityPropertyMappingAttribute(att);
                att = new EntityPropertyMappingAttribute("Description",
                    SyndicationItemProperty.Summary, SyndicationTextContentKind.Plaintext, true);
                category.AddEntityPropertyMappingAttribute(att);
                // A product has a category, a category has products
                _metadata.AddResourceReferenceProperty(product, "Category", categories);
                _metadata.AddResourceSetReferenceProperty(category, "Products", products);
            }
            return _metadata;
        }
        /// <summary>
        /// Based on the requesting user, can set up permissions to Read, Write, etc.
        /// </summary>
        /// <param name="config"></param>
        public static void InitializeService(DataServiceConfiguration config)
        {
            config.SetEntitySetAccessRule("*", EntitySetRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
            config.DataServiceBehavior.AcceptProjectionRequests = true;
        }
    }
}
The objects prefixed with DSP come from the samples on the oData site: http://www.odata.org/developers
The products and categories objects are POCO business objects with no special modifiers.
Three main options are available for defining the MetaData of data sources in .Net:
1) Generate Entity Data model (Potentially directly from SQL Server database).  This requires the least amount of manual interaction, and uses the edmx WYSIWYG editor to generate a data model.  This can be directly tied to the SQL Server database and generated from the database if you want a data access layer tightly coupled with your database.
2) Object model decorations.  If you already have a POCO data layer, you can decorate your objects with attributes to statically inform the runtime how the objects are related (see the sketch after this list).  The disadvantage is there are now tags strewn about your business layer that need to be updated as the business rules change.
3) Programmatically construct metadata object.  This is the approach illustrated above in CreateDSPMetaData.  This puts all relationship information into one central programmatic location.  Here business rules are constructed when the DSPMetaData response object is returned.
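To make option 2 concrete, here is a hedged sketch (not from Jeff's project) of POCO classes decorated for the WCF Data Services reflection provider; the class shapes mirror the Product/Category sample above, and the attribute usage shown is illustrative:

using System.Collections.Generic;
using System.Data.Services;          // IgnorePropertiesAttribute
using System.Data.Services.Common;   // DataServiceKeyAttribute

// [DataServiceKey] tells the reflection provider which property identifies each entity;
// [IgnoreProperties] hides members (such as Price) from the published feed.
[DataServiceKey("ProductID")]
[IgnoreProperties("Price")]
public class Product
{
    public int ProductID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public double Price { get; set; }
    public Category Category { get; set; }      // reference navigation property
}

[DataServiceKey("CategoryID")]
public class Category
{
    public Category() { Products = new List<Product>(); }

    public int CategoryID { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public List<Product> Products { get; set; } // collection navigation property
}

The reflection provider would then expose these classes through IQueryable<Product> and IQueryable<Category> properties on a context class passed to DataService<T>.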
Once you have your service up and running, you can consume it with RestSharp (which handles both XML and JSON) as well as with the native Microsoft client library.  There are currently some differences between the XML that RestSharp expects and the atom+pub XML the service emits, so I found better results for now with the JSON implementation - modifying the RestSharp XML parser into an atom+pub parser is fairly trivial though, so use whichever implementation works best for you.
I put together a sample console app which calls the RestSvcImpl.svc service defined above (and assumes it to be running on port 2000).  I used both RestSharp as a client, and also the default Microsoft oData client tools.
namespace RestConsole
{
    class Program
    {
        private static DataServiceContext _ctx;
        private enum DemoType
        {
            Xml,
            Json
        }
        static void Main(string[] args)
        {
            // Microsoft implementation
            _ctx = new DataServiceContext(new System.Uri("http://localhost:2000/RestServiceImpl.svc"));
            var msProducts = RunQuery<Product>("Products").ToList();
            var msCategory = RunQuery<Category>("/Products(0)/Category").AsEnumerable().Single();
            var msFilteredProducts = RunQuery<Product>("/Products?$filter=length(Name) ge 4").ToList();
            // RestSharp implementation
            DemoType demoType = DemoType.Json;
            var client = new RestClient("http://localhost:2000/RestServiceImpl.svc");
            client.ClearHandlers(); // Remove all available handlers
            // Set up handler depending on what situation dictates
            if (demoType == DemoType.Json)
                client.AddHandler("application/json", new RestSharp.Deserializers.JsonDeserializer());
            else if (demoType == DemoType.Xml)
            {
                client.AddHandler("application/atom+xml", new RestSharp.Deserializers.XmlDeserializer());
            }
            var request = new RestRequest();
            if (demoType == DemoType.Json)
                request.RootElement = "d"; // service root element for json
            else if (demoType == DemoType.Xml)
            {
                request.XmlNamespace = "http://www.w3.org/2005/Atom";
            }
            // Return all products
            request.Resource = "/Products?$orderby=Name";
            RestResponse<List<Product>> productsResp = client.Execute<List<Product>>(request);
            List<Product> products = productsResp.Data;
            // Find category for product with ProductID = 1
            request.Resource = string.Format("/Products(1)/Category");
            RestResponse<Category> categoryResp = client.Execute<Category>(request);
            Category category = categoryResp.Data;
            // Specialized queries
            request.Resource = string.Format("/Products?$filter=ProductID eq {0}", 1);
            RestResponse<Product> productResp = client.Execute<Product>(request);
            Product product = productResp.Data;
            request.Resource = string.Format("/Products?$filter=Name eq '{0}'", "XYZ Water");
            productResp = client.Execute<Product>(request);
            product = productResp.Data;
        }
        private static IEnumerable<TElement> RunQuery<TElement>(string queryUri)
        {
            try
            {
                return _ctx.Execute<TElement>(new Uri(queryUri, UriKind.Relative));
            }
            catch (Exception ex)
            {
                throw;
            }
        }
    }
}
Feel free to step through the code a few times and to attach a debugger to the service as well to see how and where the context and metadata objects are constructed and returned.  Pay special attention to the response object being returned by the oData service - there are several properties of the RestResponse that can be used to help troubleshoot when the structure of the response is not exactly what you expected; a small helper along those lines is sketched below.
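As a hedged illustration (again, not part of Jeff's sample), a small helper such as the following could dump the RestResponse members that matter most when the deserialized Data property comes back null or incomplete; the property names used are standard RestSharp RestResponse members:

using System;
using RestSharp;

// Hypothetical troubleshooting helper for inspecting a RestSharp response.
static class ResponseDumper
{
    public static void Dump<T>(RestResponse<T> response)
    {
        Console.WriteLine("Status code:    {0} {1}", (int)response.StatusCode, response.StatusDescription);
        Console.WriteLine("Response state: {0}", response.ResponseStatus);  // Completed, Error, TimedOut, ...
        Console.WriteLine("Content type:   {0}", response.ContentType);
        Console.WriteLine("Raw content:    {0}", response.Content);         // the raw JSON or atom+pub payload
    }
}

Calling ResponseDumper.Dump(productsResp) right after client.Execute<List<Product>>(request) makes it easy to see whether the problem is the HTTP call itself or the deserialization step.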
You might find Jeff’s earlier RESTful Applications and the Open Data Protocol post of interest also.

The Microsoft Case Studies team published Analytics Firm [Alteryx] Broadens Customer Choice, Growth Avenues with Online Data Market on 12/10/2010 (missed when published):
Organization Profile: Alteryx provides geographical business intelligence software used by companies in many industries to make sense of vast amounts of data. The Orange, California–based firm has 110 employees.
Business Situation: Alteryx wanted to simplify the process of incorporating its data into third-party technology. It also wanted to give customers more flexibility in selecting data sets and using Alteryx technology.
Solution: Alteryx is using DataMarket (a section of Windows Azure Marketplace) to give developers an easier way to consume its technology in conjunction with a broad set of third-party data.
Benefits:
  • Faster time-to-market for customer developers
  • Reduced development work and costs
  • Opportunity to expand offerings
Software and Services
  • Windows Azure Platform
  • Windows Azure Marketplace DataMarket

Vertical Industries: Software Engineering
Country/Region: United States
Read the complete, four-page case study here.

<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Windows Azure AppFabric Team reported 12/13 - Updated IP addresses for AppFabric Data Centers to its blog on 12/13/2010:
Today (12/13/2010) the Windows Azure platform AppFabric updated the IP ranges on which the AppFabric nodes are hosted.  If your firewall restricts outbound traffic, you will need to perform the additional step of opening your outbound TCP ports and IP addresses for these nodes. Please see the 1/28/2010 “Additional Data Centers for Windows Azure platform AppFabric” post, which was updated today to include the new set of IP addresses.

The Windows Azure AppFabric Team reported new AppFabric Special Offers and Pricing Information on or about 12/13/2010:
Introductory Special: Free Trial Windows Azure AppFabric
For a limited time, new customers can sign up for the Windows Azure Platform and get a base amount of services per month – including 100,000 Access Control transactions and 2 Service Bus connections – at no charge.
We ask for your credit card information when you accept this offer, as any additional usage per month will be billed at the standard rate.
This is a great way to try Windows Azure AppFabric – and the Windows Azure platform – without the risk. Offer Details >
Comparison chart of Offers: In order to cater to a multitude of customer needs, Windows Azure Platform provides a few offers, namely, Introductory Special, Development Accelerator Core, SQL Azure Development Accelerator Core, Development Accelerator Extended, and Consumption.
View all of the Windows Azure Platform offers in a side-by-side comparison table.
Get Details >
All Windows Azure Platform offers: Windows Azure AppFabric, and the entire Windows Azure platform, gives you two different ways to pay. You can choose to pay as you go for what you consume each month, or you can subscribe to one of our special offers and enjoy even bigger savings!

MSDN Subscriptions
You're entitled to complimentary usage of the Azure Platform, worth over $2,500! Get it now
Special savings for members of Microsoft Partner Network
Microsoft Partner Network members can take advantage of special programs and exclusive savings when purchasing any Azure Platform offer. Get it now
New to Windows Azure AppFabric?
Start with a few short videos that show why it’s important and how it works.
Watch Now
Read the updated whitepapers to learn Windows Azure AppFabric in depth.
Read Now
Start your Free Trial or review all offers to find the right one for your business.
Offer Details

<Return to section navigation list>

Windows Azure Virtual Network, Connect, and CDN

The MSDN Library recently added a Tutorial: Setting up Windows Azure Connect topic:
[This topic contains preliminary content for the current release of Windows Azure.]
Windows Azure Connect (codename Sydney) enables Windows Azure users to setup secure, IP-level network connectivity between their Windows Azure services and local (on-premises) resources. This document provides a step-by-step walkthrough of how to setup Windows Azure Connect.
Prerequisites:
Overview
In this document, we’ll walk through how to set up network connectivity between a Windows Azure service (that is, its Roles and underlying VM instances) and a set of local machines using Connect.
This document is organized into the following sections:
  • Request access to the Windows Azure Connect CTP
  • Enable Connect service for your Azure subscription
  • Enable your Windows Azure service for Connect
  • Using VM role and Connect
  • Enable your local machines for Connect
  • Configure / manage connectivity
  • Connectivity experience
  • Additional Connect functionality
Request access to the Windows Azure Connect CTP
The Windows Azure Connect CTP is currently invitation-only. To request access to the CTP, please follow the steps below.
  1. Sign in to Windows Azure Management Portal (http://windows.azure.com).
  2. In the navigation pane, click on “Home” and then click on “Beta Programs”, you will see “Windows Azure Connect” is listed as one of the new beta features.
    Select Beta Programs
  3. Check the box next to “Windows Azure Connect” and click “Join selected” to request access. If you have more than one Windows Azure subscription, you will need to specify which subscription you would like to enable for Connect. After you perform this step, your opt-in status will become “pending”, indicating that your request has been sent to the Windows Azure team.
    Congratulations for Beta Participation
  4. When your request is approved, we will send email to the address you registered for your Windows Azure subscription.
    Please note that during the CTP period, you can use Windows Azure Connect free of charge. There is no Service Level Agreement for the CTP, so we recommend that you do not use Windows Azure Connect for any mission-critical, high-availability services.
  5. After you receive approval email indicating that your subscription has been enabled, proceed to next step below.
Enable Connect service for your Azure subscription
  1. Sign in to Windows Azure portal (http://windows.azure.com). Click on “Virtual Network” tab.
  2. Under the “Connect” tab in the left-hand pane, click on the subscription you would like to enable for Connect.
    Enable Windows Azure Connect for a Subscription
  3. Confirm you would like to enable Connect service by clicking on OK
    Windows Azure Management Portal Connect Success
You are now ready to start using Windows Azure Connect.
Windows Azure Management Portal Subscription Ready

Enable your Windows Azure service for Connect

To use Connect to connect local resources with your Windows Azure service, you need to enable one or more of its roles. You do this by provisioning the role with the Connect plug-in that is part of the SDK 1.3 release. Only roles of the service provisioned with the Connect plug-in will be able to connect to local resources.
  1. First you need to retrieve the activation token for your Connect service. Enter the Connect portal (by clicking the “Virtual Network” tab, and then selecting the Azure subscription that you enabled for Connect), and click the “Get Activation Token” button on the tool bar. In the dialog box that pops up (see below), copy the activation token -- you will need this in the next step.
    Windows Azure Management Portal Get Token
  2. Open your Windows Azure project in Visual Studio 2010, double click on the role you are enabling for Connect to bring up the properties page. Click on the “Virtual Network” tab, check the “Activate Windows Azure Connect” checkbox, and paste the activation token from the previous step as shown below.
    Windows Azure Management Portal Connect Activation
    At this point, you will notice that Visual Studio has updated your ServiceDefinition.csdef file and .cscfg file. In the ServiceDefinition.csdef file, for the role that you are enabling for Connect, Visual Studio will have added a line about importing the Connect plugin in the <Imports> section:
    <Imports>
       <Import moduleName="Connect" />
    </Imports>
    And in the ServiceConfiguration.cscfg file, VS will have created Connect-specific settings in the <ConfigurationSettings> section for the role you are enabling for Connect. One of these settings is “ActivationToken”.

    <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="your_ activation_token_guid" />
  3. Once you “Publish…” your service, you can then deploy it to Azure. You can deploy the Service to either the Production or Staging deployment slots.
  4. Once your Azure roles are successfully deployed and they are running in a “Ready” state, within a few seconds you should see the Connect enabled Azure roles appear in the Connect portal. In the “Roles & Groups” tab, your Azure roles and the instances associated with each role will appear in a hierarchical fashion, with information about the associated name, service name, and deployment slot. In the “Activated Endpoint” tab you will see Connect enabled role instances. This is illustrated in the following screenshots which show a single enabled Azure role named “SydneyCustomersWebRole” that has two role instances.
    Windows Azure Management Portal Groups and Roles
    Windows Azure Management Portal Active Endpoints
The topic continues with more details for using Windows Azure Connect.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Wes Yanaga reminded Azure developers on 12/13/2010 to get their Windows Azure Platform 30 Day Pass–No Credit Card Required (US Developers):
Microsoft Platform Ready is offering US developers free Windows Azure for a month.
We're offering a Windows Azure platform 30 day pass, so you can put Windows Azure and SQL Azure through their paces. No credit card required.
Use this link: Microsoft Platform Ready Free Azure Pass.
Use this passcode: promo code = DPEWE01
[Click the Submit button]
For details about the offer, including the number of compute instances and SQL Azure sizes, see FREE 30-day Azure developer accounts! Hurry!
Other Microsoft Platform Ready Benefits
Sign up for Microsoft Platform Ready for free support and marketing benefits when you profile your application. Earn the Powered By Windows Azure stamp for your application too.
Windows Azure Training, Tools
Use the pass with the online training below to learn more about the Windows Azure Platform:
Get the Tools
Dig Deeper into the Windows Azure Platform
The promo code looks familiar to me. It’s the same one I used to get my 30-day pass three weeks ago.
I received the following message when I selected United States and pasted the promo code:
[Screenshot of the refusal message]
You might have better luck with The Code Project’s offer that uses CP001 as the promo code. Apparently, you only receive one 30-day trial per Windows Live ID. Your request will be refused if your name and company name are the same as that used for a prior trial.

Jim O’Neill continued his Azure@home series with Azure@home Part 12: The Move to SDK 1.3 on 12/13/2010:
This post is part of a series diving into the implementation of the @home With Windows Azure project, which formed the basis of a webcast series by Developer Evangelists Brian Hitney and Jim O’Neil.  Be sure to read the introductory post for the context of this and subsequent articles in the series.
It’s been about a month since my last post in which I finished the deep-dive into the implementation of the Azure@home client application, and it’s been an eventful month!  At PDC 2010 at the end of October, a host of upcoming features, betas, and Community Technology Previews (CTP) were announced, and just after Thanksgiving, the first wave of those hit with the release of Version 1.3 of the SDK and the new Silverlight Windows Azure Portal.  Given the updates, I thought it would be a good time to start looking at some of the new capabilities by revisiting our old friend, Azure@home.
Over the next few posts, we’ll start looking at Remote Desktop capabilities, App Fabric Caching, and the VM Role, but before we get there, we’ll need to update the existing application to leverage the 1.3 SDK and the concomitant modifications in Windows Azure itself.
Step 1 – Download and install the SDK.
You know the drill!
Step 2 – “Migrate” the Azure@home solution.
Once you re-open the Azure@home solution after installing the 1.3 SDK, you’ll note the Visual Studio Conversion Wizard appears!  If you’re like me, your first reaction is to Cancel, close Visual Studio, and double-check you have the right project.  After all, this is just an SDK update, not a move from Visual Studio 2008 to 2010!
Despite it seeming a bit of overkill, I confirmed it’s a legit ‘feature’ of the upgrade.  Just be sure you read the fine print, which essentially says there ain’t no goin’ back.
So the next question is probably, what exactly happens during this conversion?  If you look at the conversion report (or simply do a before-and-after diff of the solution directory), you’ll find there were only two changes:
  • ProductVersion tag in azureathome.ccproj was updated to 1.3.0 from 1.0.0.  This isn’t typically a file you’d poke into anyway.
  • The Service Definition file (servicedefinition.csdef), specifically the section for the WebRole, goes from this
      <WebRole name="WebRole">
        <InputEndpoints>
          <InputEndpoint name="HttpIn" protocol="http" port="80" />
        </InputEndpoints>
        <ConfigurationSettings>
          <Setting name="DiagnosticsConnectionString" />
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
      </WebRole>
    to this
      <WebRole name="WebRole">
        <Sites>
          <Site name="Web">
            <Bindings>
              <Binding name="HttpIn" endpointName="HttpIn" />
            </Bindings>
          </Site>
        </Sites>
        <ConfigurationSettings>
          <Setting name="DiagnosticsConnectionString" />
          <Setting name="DataConnectionString" />
        </ConfigurationSettings>
        <Endpoints>
          <InputEndpoint name="HttpIn" protocol="http" port="80" />
        </Endpoints>
      </WebRole>
After a moment or two of comparison, the change you’ll note is that an entirely new tag, Sites, has been added to the WebRole, and the InputEndpoint has been rearranged a bit, but is essentially still there.
As you might expect, Sites (plural) implies that there may be more than one web site within your role, and that’s indeed a new feature enabled in 1.3.  Previously, when you deployed a Web Role to Windows Azure, your web application was actually served up by the Hosted Web Core (HWC), a subset of full IIS.  One of the restrictions of HWC is that it can only serve up a single Web application, which in turn means that a Web Role could only host one web application.
That may not seem so awful until you realize that a Web Role (and Worker Role) correlates to a single VM instance running in Azure.  In many cases, a single web site may not be busy enough to really warrant the resources of a complete VM instance, so to save money on compute cycles, it’s reasonable to want to host multiple sites within the same VM instance – unfortunately, you couldn’t do that, until now.
With full IIS in your Azure Web Role, there are a number of other features you can take advantage of as well, including virtual directories, application warm-up, and installing HTTP handlers.
Step 3 – Run the application.
When you fire up the application, you’ll probably be met with a build error, the source of which is an incorrect reference to the Microsoft.WindowsAzure.StorageClient assembly within the AzureAtHomeEntities project.  The reference property, Specific Version, is set to true, and so it’s looking for the Azure SDK 1.2 version of that assembly, which is what was set up when the project was first created.  Switching that property from true to false will resolve the reference and take care of the problem.  A bit of foresight on my part when creating the original implementation would have made this step unnecessary.
Try executing again, and whoa, another problem
InvalidOperationException - SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used
right at the start of Page_Load for the default.aspx page:
My 'exceptional' code at work!
But this is the same code as before, and a quick look at the WebRole.cs file (which, by the way, was generated for us when we created the application under the 1.2 SDK), shows that indeed SetConfigurationSettingPublisher is called in the OnStart method:
public override bool OnStart()
{
    DiagnosticMonitor.Start("DiagnosticsConnectionString");

    // use Azure configuration as setting publisher
    CloudStorageAccount.SetConfigurationSettingPublisher(
        (configName, configSetter) =>
    {
        configSetter(RoleEnvironment.GetConfigurationSettingValue
             (configName));
    });

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.
    RoleEnvironment.Changing += RoleEnvironmentChanging;

    return base.OnStart();
}
So what gives?  Well, with the move to full IIS from Hosted Web Core, the execution model of the web role code changes significantly (as covered in detail on the Windows Azure blog).  In a nutshell, with HWC all of your web role code – your RoleEntryPoint code (in WebRole.cs) and the web site itself – ran within the same process, WaWebHost.exe.  With full IIS, the RoleEntryPoint code runs under WaIISHost.exe, and the web site itself runs under w3wp.exe, as a regular IIS process, each in their own app domain.
Note that FromConfigurationSetting (as well as SetConfigurationSettingPublisher in OnStart) is a static method of CloudStorageAccount.  That worked before, because both methods were executed as part of the same process and the same app domain.  With the switch to full IIS, the methods are called in two different processes and two different app domains, the net effect of which is that SetConfigurationSettingPublisher (in WebRole.cs) and FromConfigurationSetting (in Default.aspx) aren’t dealing with the same object.  There are a couple of different ways to handle this:
  1. Make the call to SetConfigurationSettingPublisher within the web application itself.  A good place to set this up is in Application_Start; a sketch follows these two options.
  2. Use the Parse method on CloudStorageAccount in Default.aspx (and Status.aspx) instead:
        cloudStorageAccount = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
    
    Be aware though that this does not provide exactly the same functionality as before.  The configuration setting publisher mechanism is an abstraction that helps you more efficiently reuse code inside and outside of Windows Azure.  In Azure, the configuration would come from ServiceConfiguration.cscfg, and outside of Azure you might set it up to come from, say, the web.config file.  If you’re willing to forego that and are targeting Azure alone, then Parse will work just fine.  This, by the way, is the route we’ve taken with the updated code files on http://distributed.azure.com.
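Here is a minimal sketch of option 1, assuming a Global.asax (with code-behind) is added to the web role project; the namespace and class names are illustrative rather than taken from the Azure@home source:

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace AzureAtHome.WebRole   // illustrative namespace
{
    public class Global : System.Web.HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // The web site now runs in w3wp.exe, a different process (and app domain) from the
            // RoleEntryPoint code in WebRole.cs, so register the setting publisher here so that
            // CloudStorageAccount.FromConfigurationSetting works inside the pages.
            CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
            {
                configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
            });
        }
    }
}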

The Windows Azure Team posted Real World Windows Azure: Interview with Girish Kundlani, Program Manager at MAQ Software on 12/13/2010:
As part of the Real World Windows Azure series, we talked to Girish Kundlani, Program Manager at MAQ Software, about using the Windows Azure platform for the company's enterprise resource management applications. Here's what he had to say:
MSDN: Tell us about MAQ Software and the services you offer.
Kundlani: MAQ Software is a digital marketing, analytics, and technology solutions company that helps customers serve their markets effectively. We are a Microsoft Gold Certified Partner and a Microsoft Preferred Vendor, so we use the latest Microsoft technologies for data analytics, cloud computing, and mobile platforms to help customers succeed.

MSDN: What were the biggest challenges that you faced prior to implementing the Windows Azure platform?
Kundlani: We experienced challenges maintaining our Employee Information Portal. The existing portal relied on manual processes, had outgrown its capacity, and lacked development flexibility. The portal was not scalable enough to be easily updated to meet changing business needs. Also, the system's infrastructure had grown obsolete, so it required frequent support and maintenance, which was costly. However, we didn't want to buy and build a new infrastructure that would require expensive hardware; we wanted a cost-effective way to develop a scalable portal.
MSDN: Can you describe the portal solution you built to address your need for cost-effective scalability? 
Kundlani: Microsoft products and technologies are a natural choice for our company, so we chose to host our enterprise resource management applications on the Windows Azure platform. These applications include: a timesheet application to facilitate easy tracking, resource allocation, project estimates, and reporting; an Employee Leave Tracker to automate and record time-off requests and approval; and an online test tracker to measure employees' results from training exercises. We also host the public-facing corporate portal on Windows Azure, which our customers can use to reach us from any part of the world. In addition to Windows Azure for hosting, we use Microsoft SQL Azure for our relational database needs.

Supervisors across all engineering centers located in Redmond, Washington, and Mumbai and Hyderabad, India, can access the Employee Leave Tracker to approve and monitor employees' vacation requests.
MSDN: What makes MAQ Software unique?
Kundlani: Unlike other companies that typically depend on local setup for location-based intranet solutions, we deployed ours to the cloud and can ensure that one solution is available across all of our dispersed locations.
MSDN: What kinds of benefits are you realizing with the Windows Azure platform?
Kundlani: We now have solutions that aren't limited by a physical infrastructure and offer us the scalability that we need for continued growth. We don't have to buy servers to deploy; we can take advantage of Windows Azure and Microsoft data centers to add more computing resources for improved performance and more storage as our capacity needs increase-plus, with 99.99 percent up-time, we have the high availability we need. We have also reduced our hosting costs from up to U.S.$400 per application to only $66 per application each month and, at the same time, were able to bring our applications to market quickly.
To read more Windows Azure customer success stories, visit:  www.windowsazure.com/evidence

Michael Neuwirth reported Kentico CMS 5.5 R2 Released on 12/13/2010 with support for Windows Azure deployment:
We have just released version 5.5 R2! This release brings brand new Enterprise 2.0 Intranet features and Cloud computing adoption! Read this blog post for more details.
What's new in Kentico CMS 5.5 R2
Kentico Intranet Solution: Kentico Intranet Solution is a ready-to-use intranet site you can easily install, configure and start using within a few hours. It's a perfect solution for those who need an out-of-the-box intranet or who want to have some starting point they can build on.
It combines all intranet-related features of Kentico CMS, including:
  • content management
  • document libraries
  • project management
  • workgroups
  • social networking features
  • and others
It comes with 3 pre-defined themes and you can easily create your own.
Learn more ...
Download Intranet User's guide
We are working on the Intranet Admin's guide which will be ready at the end of 2010.
Document Management
The Document Management module is based on the Document Library web part/widget, which allows you to manage documents on the live site. You can use the following document management features:
•    Direct editing using Microsoft Office or other applications via WebDAV
•    File management – Copy/Delete/Open/Edit/Upload
•    Permissions management per file and per document library
•    Check-in/check-out
•    Workflow
•    Document archiving
•    Version history
Learn more ...
Project and Task Management: The Project Management module allows you to manage projects and tasks. It allows you to easily monitor the progress of all tasks and projects.
Each project and task has a status, estimated time, priority, deadline, progress and other important properties. Based on the properties, tasks and projects are graphically represented on the website.
Learn more ...
Workgroups: The Workgroups module allows you to create a place where your employees or partners can collaborate on a project, share ideas or upload documents.
Workgroups can be created and managed directly by end users. Access to workgroup content can be limited to site members or only to workgroup members. The workgroup members may invite other users to join their workgroup.
Learn more ...
WebDAV Support: Kentico CMS comes with built-in WebDAV support which allows you to open, edit and save documents from Kentico CMS using Microsoft Office and some other applications. It means you do not have to save the document on the disk, make changes locally and then upload them. So it greatly simplifies the way you work with documents published on your website or intranet.
Learn more ...
Microsoft SharePoint 2010 Support: With the release of 5.5 R2, we can connect the website with SharePoint 2007, SharePoint 2010, SharePoint Services 3.0 and SharePoint Foundation 2010 via SharePoint web parts.
Windows Azure Support: From this release, Kentico CMS supports Cloud Computing and we can start to deploy our solution to the Microsoft Windows Azure platform. Learn more ...
Where to Learn More:
Download: As always, you can download Kentico CMS 5.5 R2 at http://www.kentico.com/download.aspx.
A deployment package for the Windows Azure Platform is available for download here.
Where Do I Find the Upgrade Procedure? The upgrade procedure will be available for download later this week at http://www.kentico.com/Download/Upgrades.aspx
Pricing and Licensing: Kentico CMS 5.5 R2 brings a new Document Management Package that contains:
  • Document Management
  • Project Management
  • Task Management
  • WebDAV Support
It is available for the same price as the other packages (from $1,499). Look at the Feature Matrix here.
With the release of 5.5 R2, we are going to drop the Cloud License from our price list. The change will make our licensing simpler and more affordable for those who want to use Windows Azure or other cloud platforms. Look for more details about Simplified Cloud Licensing.
Kentico Intranet Solution, the ready-to-use intranet site, is included for free, but you will need to purchase the appropriate license for the features you want to use.
The Document Management Package is also included in the Ultimate License. And here's the good news: there's no change to the Ultimate License price, so you actually get more features for the same price!
Click here for more details about the Kentico CMS 5.5 R2 licensing.

Angel “Java” Lopez (@ajlopez) posted Azure: Multithreads in Worker Role, an example on 12/13/2010:
In my previous post, I implemented a simple worker role, consuming and producing numbers from/to a queue. Now, I have a new app:

The worker role implements the generation of a Collatz sequence. See:
http://mathworld.wolfram.com/CollatzProblem.html
http://en.wikipedia.org/wiki/Collatz_conjecture
http://www.ericr.nl/wondrous/
You can download the solution from my AjCodeKatas Google project. The code is at:
http://code.google.com/p/ajcodekatas/source/browse/#svn/trunk/Azure/AzureCollatz
The initial page is simple:

The number range is sent to the queue:
protected void btnProcess_Click(object sender, EventArgs e)
{
    int from = Convert.ToInt32(txtFromNumber.Text);
    int to = Convert.ToInt32(txtToNumber.Text);
    for (int k=from; k<=to; k++)
    {
        CloudQueueMessage msg = new CloudQueueMessage(k.ToString());
        WebRole.Instance.NumbersQueue.AddMessage(msg);
    }
}
The worker role gets each of these messages and calculates the Collatz sequence.

I added a new feature in Azure.Library: a MessageProcessor that consumes messages from a queue in its own thread:
public MessageProcessor(CloudQueue queue, Func<CloudQueueMessage, bool> process)
{
    this.queue = queue;
    this.process = process;
}
public void Start()
{
    Thread thread = new Thread(new ThreadStart(this.Run));
    thread.Start();
}
public void Run()
{
    while (true)
    {
        try
        {
            CloudQueueMessage msg = this.queue.GetMessage();
            if (this.ProcessMessage(msg))
                this.queue.DeleteMessage(msg);
        }
        catch (Exception ex)
        {
            Trace.WriteLine(ex.Message, "Error");
        }
    }
}
public virtual bool ProcessMessage(CloudQueueMessage msg)
{
    if (msg != null && this.process != null)
        return this.process(msg);
    Trace.WriteLine("Working", "Information");
    Thread.Sleep(10000);
    return false;
}
Then, the worker role launches a fixed number (12) of MessageProcessors. In this way, each instance is dedicated to processing many messages. I guess this is not needed in this example, but it was an easy “proof of concept” to test the idea. Part of the Run method in the worker role:
QueueUtilities qutil = new QueueUtilities(account);
CloudQueue queue = qutil.CreateQueueIfNotExists("numbers");
CloudQueueClient qclient = account.CreateCloudQueueClient();
for (int k=0; k<11; k++)
{
    CloudQueue q = qclient.GetQueueReference("numbers");
    MessageProcessor p = new MessageProcessor(q, this.ProcessMessage);
    p.Start();
}
MessageProcessor processor = new MessageProcessor(queue, this.ProcessMessage);
processor.Run();
The ProcessMessage is in charge of the real work:
private bool ProcessMessage(CloudQueueMessage msg)
{
    int number = Convert.ToInt32(msg.AsString);
    List<int> numbers = new List<int>() { number };
    while (number > 1)
    {
        if ((number % 2) == 0)
        {
            number = number / 2;
            numbers.Add(number);
        }
        else
        {
            number = number * 3 + 1;
            numbers.Add(number);
        }
    }
    StringBuilder builder = new StringBuilder();
    builder.Append("Result:");
    foreach (int n in numbers)
    {
        builder.Append(" ");
        builder.Append(n);
    }
    Trace.WriteLine(builder.ToString(), "Information");
    return true;
}
The code of this example is in my ajcodekatas Google Code site.
Next steps: more distributed apps (genetic algorithm, web crawler…)
Keep tuned!

CDC Software announced CDC Software Launches its Cloud Discrete ERP Solution on Microsoft Azure Platform in China on 12/13/2010:
CDC Software Corporation, a global provider of hybrid enterprise software applications and services, announced today the launch of e-M-POWER On Demand, the first cloud discrete ERP solution offered on the Microsoft Windows Azure platform in China, and one of only three enterprise software solutions currently available on the Azure platform in China.
The launch of e-M-POWER On Demand is the latest move in CDC Software’s plans to expand its growing portfolio of cloud-based solutions and increase recurring software-as-a-service (SaaS) revenue significantly over the next few years. In addition to its cloud acquisitions, CDC Software is also developing its SaaS solutions internally. As a partner of Microsoft, CDC Software plans to develop additional cloud applications on the Windows Azure platform. As previously announced, CDC Software also plans to launch its enterprise complaint management solution, CDC Respond, as a SaaS solution on the Azure platform early next year. The Windows Azure platform is a set of cloud computing services that can be used together, or independently, that enables developers to develop cloud applications.
“The launch of e-M-POWER On Demand, using Windows Azure technology, illustrates the commitment of CDC Software to Microsoft Cloud Services,” said Kathy Lee, Platform Strategy advisor, Microsoft Corporation. “e-M-POWER is the first ERP manufacturing solution using Windows Azure technology available in China. We are excited on our partnership with CDC Software and look forward to working with them on future deliverables on Windows Azure.”
Hong Kong-based Union Energy Industries Limited, a manufacturer of watch components and metal parts with a factory in mainland China, recently implemented e-M-POWER On Demand. According to Louis Li, manager, Corporate Business Development at Union Energy Industries, “We believe that e-M-POWER On Demand provides us with the most up-to-date, sophisticated solution addressing our manufacturing needs while eliminating the burden of costly IT overhead and maintenance. With an On Demand model, we can quickly leverage new ERP features and functionality while maintaining our low cost of ownership.”
“Access to affordable and scalable ERP solutions, like CDC’s e-M-POWER On Demand, is a key contributor to success when it comes to driving to a more efficient business,” says Dan Miklovic, Chief of Research for Sustainable Collaborations Group, a Green and Clean Tech market research firm. “On demand access is giving small to mid-sized firms access to ERP capabilities previously only available to the largest enterprises. SaaS also opens the door to better sustainability for any firm in China. Business efficiency is key to small to medium-sized businesses in China as they strive to become greener and operate in a sustainable way.”
“I am very proud to announce the availability of e-M-POWER On Demand for our China customers,” said Peter Yip, CEO of CDC Software. “Just six months ago, I participated at Microsoft’s press event with Steve Ballmer in Delhi, India where I proclaimed our commitment to developing cloud applications on Windows Azure. Now, we have delivered the first ERP manufacturing SaaS solution utilizing Windows Azure for the China market. Soon, we expect to deliver our cloud-based CDC Respond solution on Windows Azure. These deliverables not only confirm our commitment to Windows Azure, but the strength of our partnership with Microsoft. The e-M-POWER On Demand solution further enhances our already strong SaaS product portfolio, which we believe strengthens our position as a provider of hybrid enterprise solutions offering customers on-premise and cloud deployment options.”
“We also believe that by developing applications using Windows Azure and leveraging our Agile development methodology, we will be able to launch flexible, reliable and scalable cloud applications quickly to the market, and e-M-POWER illustrates that,” said Hilton Law, managing director, China for CDC Software. “By utilizing Azure, CDC Software has the ability to develop SOA-based, multi-tenant architectures with the tools we are already using, and deploy them to a robust platform with staged production, failure-resilience, elastic scalability and self-service provisioning. Our goal is to continue to provide specialized enterprise solutions that directly address our customers’ unique strategic requirements, and at the same time, offer a full range of deployment options, ranging from on premise to on demand and mixed deployments, to suit their current and future needs.”
e-M-POWER
e-M-POWER On Demand is a SaaS ERP solution specifically tailored for the needs of small- and medium-size discrete manufacturers in China. Its product suite includes Sales, Purchasing, Inventory, Production, Mold Management, and Finance, with full integration to ACCPAC, a popular accounting system in China. Customers for e-M-POWER include industries such as electronics, watch, toys and furniture. …
<Return to section navigation list>

Visual Studio LightSwitch

Bruno Terkaly explained Generating Proxies for Silverlight accessing a Silverlight-Enabled WCF Service on 12/12/2010:
Generating Proxies
I am writing this post because there just isn’t good information on generating your own proxies for Silverlight accessing a WCF Service. The problem is that svcutil.exe is the wrong tool. It will lead you to the front door of success but not let you into the building.
The point is this: I wanted to understand how to connect to a WCF service without right-clicking and choosing “Add Service Reference.”


Why do it?
First, I like knowing what "Add Service Reference" really does. It adds references to System.ServiceModel. It adds a proxy. It creates a configuration file.
Second, I'd like to be able to re-use the output files in other projects without having to always add a service reference.
Third, there are only two files to look at. If you add a service reference, a multitude of files are generated, which I think is very confusing.
Finally, I had some errors with Windows Phone 7 clients that went away using the proxy.

SLSVCUTIL
The real trick is to use SLSvcUtil.exe. Did you know that?
Here is the path on my 64-bit Windows 7 copy of SLSvcUtil.exe:
C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Tools\SLSvcUtil.exe
For Windows Phone 7, use this path:
C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v7.0\Tools\SlSvcUtil.exe
Do not use this one for Silverlight clients. It just doesn’t work.
C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\SvcUtil.exe

The Silverlight-Enabled WCF Service
Here is the project we are trying to connect to from the Silverlight project. Note that we do not want to “Add Service Reference.”

Before connecting the Silverlight project to the Silverlight–enabled WCF Service, set some references
Before actually generating the proxies and the configuration files, you need to first set some references as seen below.
Notice we have System.Runtime.Serialization and System.ServiceModel.

Start the Visual Studio Command Prompt as administrator
Here is the command we are going to run, assuming SLSVCUTIL is in your path. 
slsvcutil http://localhost/WCFService/ServiceWCF.svc /out:Proxy.cs
The final command window looks like this:
Notice the following:
(1) Two files were generated: Proxy.cs and ServiceReferences.ClientConfig
(2) SLSVCUtil was the command that was called
(3) The url for the SL-enabled WCF service is http://localhost/WCFService/ServiceWCF.svc
If SLSVCUTIL is not in your path, invoke it with its full path (C:\Program Files (x86)\Microsoft SDKs\Silverlight\v4.0\Tools\SLSvcUtil.exe).

The key step is upon us – to add the two generated files to the project.
That’s it. That is the magic recipe that solved a couple of hours of torture.
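To round this out, here is a hedged sketch of how the generated proxy might be used from a Silverlight page. The class and operation names (ServiceWCFClient, DoWorkAsync / DoWorkCompleted) are hypothetical stand-ins for whatever SLSvcUtil generates from your service contract in Proxy.cs:

using System.Windows;
using System.Windows.Controls;

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();

        // Hypothetical proxy usage - substitute the client class and operation names
        // that SLSvcUtil generated for your service.
        var client = new ServiceWCFClient();

        client.DoWorkCompleted += (sender, e) =>
        {
            if (e.Error == null)
            {
                // Silverlight proxies are asynchronous only; results arrive in the Completed event.
                MessageBox.Show(e.Result);
            }
        };

        client.DoWorkAsync();
    }
}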
A bit off topic, but might be germane for LightSwitch developers.

<Return to section navigation list> 

Windows Azure Infrastructure

Rob Hirschfeld (@Zehicle, pictured below left) and Dave McCrory (@McCrory, below right) listed Microsoft Azure Cloud – Top 20 Lessons Learned about MSFT’s PaaS on 12/13/2010:
Two weeks ago, Rob Hirschfeld (@Zehicle) and I (@McCrory) had the benefit of intensive Azure training at Microsoft HQ to support Dell’s Azure Stamp.

We’ve assembled a top 20 list of things to know about programming for Azure (and really any PaaS leaning cloud):
  1. If you want performance, optimize to reduce fees. Azure (and any cloud) is architected to penalize you if you use their resources poorly. The challenge is to fix this before your boss gets the tab for your unenlightened design decisions.
  2. Coding .NET on Azure is easy; architecting for Azure requires learning. Clouds put things in different places than you are used to and the rules are different. Expect a learning curve.
  3. Partitioning = parallelism. Learn to love partitions in all their forms, because your app will be throttled if you throw everything into a single partition! On the upside, each partition operates in parallel and even better, they usually don’t cost extra (SQL is the exception).
  4. Roles are flexible. You can run web servers (Apache, etc) on a worker and worker tasks on a web role. This is a good way to save some change since you pay per role instance. It’s counter to separation of concerns, but financially you should also combine workers into a single role where possible.
  5. Understand walking deployments. You can (and should) have simultaneous versions of the code operating against the same data so that you can roll upgrades (ala Timothy Fitz/Eric Ries) to reduce risk without reducing performance. You should expect your data schema to simultaneously span multiple code versions.
  6. Learn about Update Domains (UDs). Update domains allow rolling upgrades and changes to applications and services. They are part of how you partition your overall application. If you’re planning a VIP swap deployment, then you won’t care.
  7. Each role = ONE external IP. You can have many VMs backing each role and Azure will load balance between them so you can scale out each role. Think of each role as a clonable entity: there will be at least 1 and more can be added if you want to scale.
  8. Understand the difference between VIP and DIP. VIPs stand for Virtual IPs and are external, public, and metered. DIPs are internal, private, and load balanced. Azure provides an API to discover your DIPs – do not assume you know them because they are DYNAMIC IPs. Azure won’t let you see other DIPs inside the system.
  9. Azure has rich diagnostics, but beware. Azure leverages the existing diagnostics built into their system, but has to get the data off box since instances are volatile. This means that problems can be hard to isolate while excessive logging can impact performance and generate fees. Microsoft lets you target individual systems for elevated diagnostic levels. You can also Terminal Server to a VM for troubleshooting (with caution).
  10. The new Azure admin console rocks. Take your pick between Silverlight or MMC Snap-in.
  11. Everything goes into Azure Storage. Learn to love it. Queues -> storage. Tables -> storage. Blobs -> storage. Logging -> storage. Code Repo -> storage. vDisk -> storage. SQL -> SQL (they march to their own drummer).
  12. Queues are essential, but tricky. Learn the meaning of idempotent because using queues requires you to handle failures and timeouts. The scary part is that it will work nicely until you exceed some limits and then you’ll experience cascading failure. Whee! Oh yeah, and queues require polling (which stinks as a notification model). A minimal sketch of the poll/process/delete cycle appears after this article.
  13. SQL Azure is just mostly like MS SQL. Microsoft did a smart thing in keeping Cloud SQL highly compatible with Local SQL. The biggest caveat is the limited partition size. If you embrace the size limits you will get better performance. So stop pushing BLOBs into databases and start sharding.
  14. Duplicating data in tables will improve performance. This has to do with how partitions and keys operate but is an interesting architecture for NoSQL – stage data for use. Don’t be afraid to stage the same data in multiple ways. It may be faster/cheaper to write data twice if it becomes easier to find when you search it 1000s of times.
  15. Table data can be “warmed up.” Storage has logic that makes frequently accessed items faster (sort of like a cache ;) . If you can anticipate load spikes then you should warm the data just before the spike.
  16. Storage billing is both amount and transactions. You can get burned on a small, but busy set of data. Note: you will pay even if you 404 a request.
  17. Azure has a CDN. Leveraging Microsoft’s Content Delivery Network (CDN) will improve performance for your users with small, low latency, high request items. You need to change your URLs for those assets. Best practice is to use some versioning in the URI so that you can force changes. Remember, CDN is SLOWER for the first hit when the data is not in cache so avoid CDN for low volume assets!
  18. Provisioning time is not instant. Azure needs anywhere from 1-3 minutes to spin a new instance of a role. Build this lag into your architecture and dynamic scale plans. New databases and partitions are fast.
  19. The VM Role is maintained by YOU. Using the VM role is a handy shortcut, but has a long list of gotchas. Some of note: 1) the VM can be “reset” to the last VM image state that you uploaded, 2) you are responsible for VM OS upgrades and patches, 3) VMs must be clonable because they will operate in parallel.
  20. Azure supports more than .NET. You can set up anything in a worker (and now VM) role, but there are nuances to doing this effectively. You really need to understand how Azure works and had better be ready to crack open Visual Studio for some things even if you’re writing in Java.
We hope this list helps you navigate Azure deployments. No matter what cloud you use, understanding Azure’s architecture will help you write better cloud scale applications.
We’d love to hear your suggestions and recommendations!
Mirrored on both blogs: Dave McCrory’s Blog & Rob Hirschfeld’s Blog.
Great advice!
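Tips 11 and 12 deserve a concrete illustration. Below is a minimal sketch (C#, using the StorageClient library that ships with Windows Azure SDK 1.3) of the poll/process/delete cycle against a Windows Azure queue. The queue name, the poison-message threshold, and the ProcessOrder helper are hypothetical placeholders, not part of Rob and Dave's post.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class QueueWorker
{
    public void PollQueue(CloudStorageAccount account)
    {
        // "orders" is a hypothetical queue name.
        CloudQueue queue = account.CreateCloudQueueClient()
                                  .GetQueueReference("orders");
        queue.CreateIfNotExist();

        while (true)
        {
            // The message stays invisible for 1 minute; if this worker crashes
            // before DeleteMessage, the message reappears and is processed again,
            // which is why ProcessOrder must be idempotent.
            CloudQueueMessage msg = queue.GetMessage(TimeSpan.FromMinutes(1));
            if (msg == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // queues require polling
                continue;
            }

            if (msg.DequeueCount > 3)
            {
                // Poison message: stop retrying and park or discard it.
                queue.DeleteMessage(msg);
                continue;
            }

            ProcessOrder(msg.AsString); // must tolerate being called more than once
            queue.DeleteMessage(msg);
        }
    }

    private void ProcessOrder(string payload)
    {
        // Hypothetical idempotent work goes here.
    }
}

Because a message that is read but not deleted reappears after its visibility timeout, the processing step must tolerate running more than once – that is the idempotency the authors warn about in tip 12.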

Buck Woody published Windows Azure Learning Plan - SQL Azure on 12/13/2010:
image This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with SQL Azure.

Overview and Training
Overview and general  information about SQL Azure - what it is, how it works, and where you can learn more.
General Overview (sign-in required, but free)
General Guidelines and Limitations
Microsoft SQL Azure Documentation
Samples and Learning
Sources for online and other SQL Azure Training
Free Online Training
60-minute Overview (webcast)
Architecture
SQL Azure Internals and Architectures for Scale Out and other use-cases.
SQL Azure Architecture
Scale-out Architectures
Federation Concepts
Use-Cases
SQL Azure Security Model (video)
Administration
Standard Administrative Tasks and Tools
Tools Options
SQL Azure Migration Wizard
Managing Databases and Login Security
General Security for SQL Azure
Backup and Recovery
More Backup and Recovery Options
Syncing Large Databases to SQL Azure
Programming
Programming Patterns and Architectures for SQL Azure systems.
How to Build and Manage a Business Database on SQL Azure
Connection Management
Transact-SQL Supported by SQL Azure

Lori MacVittie (@lmacvittie) added Convergence, consolidation, and common-sense as a preface to her Like Load Balancing WAN Optimization is a Feature of Application Delivery post of 12/13/2010 to F5’s DevCentral blog:
image When WAN optimization was getting its legs under it as a niche in the broader networking industry it got a little boost from the fact that remote/branch office connectivity was the big focus of data centers and C-level execs in the enterprise. Latency and congested WAN links between corporate data centers and remote offices around the globe were the source of lost productivity. The obvious solution – get thee a fatter pipe – was at the time far too expensive a proposition and, in some cases, not a feasible option. We’d had bandwidth management and other asymmetric solutions in the past and while they worked well enough for web-based content the problem now was fat files and the transfer of “big data” across the WAN.
We needed something else. 
TOO MUCH DATA? JUST MAKE LESS of IT
The problem, it was posited, was simply that there was too much data to traverse the constrained network links tying organizations to remote offices and thus the answer, logically, was to do away with trying to juggle it all in some sort of priority order and simply make less data. A sound proposition, one that was nearly simultaneously gaining traction on the consumer side of the equation in the form of real-time web application data compression.
Here we are, many years later, and the proposition is still sound: if the problem is limited bandwidth in the face of applications and their ever growing  data girth, then it behooves the infrastructure to reduce the size of that data as much as possible. This solution – whether implemented through traditional compression techniques or data deduplication or optimizing of transport and application protocols – is effective. It produces faster response times and thus the appearance, at least, of more responsive applications. As the specter of intercloud and cloud computing and the need to transport ginormous data sets (“big data”) in the form of data and virtual machine images continues to loom large on the horizon of most organizations it makes sense that folks would turn to solutions that by definition are focused on the reduction of data as a means to improve performance and success in transfer across increasingly constrained networks.
No argument there.
#GartnerDC Major IT Trend #2 is: 'Big Data - The Elephant in the Room'. Growth 800% over next 5 years - w/80% unstructured. Tiering critical
@ZimmerHDS Harry Zimmer
The argument begins when we start looking at the changes in connectivity between then and now. The “internet” is the primary connectivity between users and applications today, even when they’re working from a “remote office.” Cloud computing changes the equation from which the solution of WAN optimization was derived and renders it a less than optimal solution on its own because it does not fit the connectivity paradigm upon which cloud computing is based - one that is increasingly unmanageable on both ends of the pipe. Luckily, decreasing data size is just one of many other methods that can be used to improve application performance and should be used in conjunction with those other methods based on context.
INCREASINGLY IRRELEVANT to APPLICATION CONSUMERS
Because of the way in which WAN optimization solutions work (in pairs) they are generally the last hop in the corporate network and the first hop into the remote network. This is a static implementation, one that leaves little flexibility. It also assumes the existence of a matching WAN optimization solution – whether hardware or software deployed – on the other end of the pipe. This is not a practical implementation for the most constrained and growing   environments – mobile devices – because as an organization you have very little control over the endpoint (device) in the first place (consider the consumerization of IT) and absolutely no control over the network on which it operates.
A traditional WAN optimization solution may be able to help specific classes of mobile devices if the user has installed the appropriate “soft client” that allows the WAN optimization solution to do its data deduplication trick. That’s feasible for corporate users over whom you have control. What about the millions of end-users out there on iPhones, BlackBerries, and tablets over whom you do not have control? They are just as important, and it is on performance that your organization/offering/solution will be judged by them. They’re an impatient lot, according to both Amazon and Google, and there are no studies to indicate that those conclusions are wrong. Meanwhile, these devices have garnered enough mindshare to be awarded the right to run even the most stolid of enterprise applications:
Senior IT executives plan to make CRM, ERP and proprietary apps available to mobile devices
Ellen Messmer, Network World
Roughly 75% of senior IT executives plan to make internal applications available to employees on a variety of smartphones  and mobile devices, according to new research from McAfee's Trust Digital unit.
In particular, 57% of respondents said they intend to mobilize beyond e-mail and make CRM, ERP and proprietary in-house applications available to mobile devices. In addition, 45% are planning to support the iPhone and Android smartphones due to employee demand, even though many of these organizations already support BlackBerry devices.
Even if the end-user is not using a mobile device, it’s likely that their connection to the Internet exhibits very different characteristics than those experienced by corporate end-users. While download “speeds” have been increasing in the consumer market, we know there’s a difference between throughput and bandwidth, and that there is a relationship between ability of the servers to serve and consumers to consume. That relationship is often impeded by congestion, packet loss, endpoint resource constraints, and the shared nature of broadband networks.  It is simply no longer the case that we can assume ownership of any kind over the endpoint and certainly not over the network on which it resides.
And then you’ve got cloud. Cloud, oh cloud, wherefore art thou cloud? If you can deploy WAN optimization as a virtual network appliance then you have to be careful to choose a cloud that supports whatever virtualization platform the vendor currently supports. If you’ve already invested time and effort in a cloud provider and only later determined you need WAN optimization to improve the increased traffic between you and the provider (over the open, unmanaged Internet) you may be in for an unpleasant surprise.
CONTEXT. IT ALWAYS COMES BACK to CONTEXT
But the even larger problem with WAN optimization as an individual solution is that it loses context. It assumes that it will always need to do its thing on the data. It’s generally automatic, with very little intelligence built into it. The architecture on which such point solutions were developed is not the same data center architecture we’re working with today. As we continue to push the envelope of cloud computing and how it integrates with our data center architectures we find that it may be the case that a user on the LAN is directed to a cloud-hosted application while a user on the WAN is directed to the local corporate data center. In both cases it is (today) difficult to leverage a symmetric WAN optimization solution because in the first case you have little control over the infrastructure deployment and in the latter you probably have no control over the user’s network endpoint.
What you need is a solution that is aware it is symmetric when it is and asymmetric when it isn’t. Atop that, you need a solution that can simultaneously service both users while providing the best possible response time by applying the appropriate optimization and acceleration policies to their responses. That’s context, on-demand. It’s about the application and the user and the network; it’s a collaborative, integrated, unified method of applying delivery policies in real-time. It’s not about simply decreasing the amount of data. That’s just one of many varied techniques and methods of improving performance. Like compression, it’s possible that introducing WAN optimization into the flow of data might impede performance because the task of deduplication may require more cycles than it would to just transfer the data across a LAN. It’s possible that, given the network conditions, decreasing size isn’t enough; you may need to apply TCP optimization and acceleration to the session to improve the transfer at that time.
WAN OPTIMIZATION is PART of A BIGGER PICTURE
WAN optimization techniques are just that – techniques – and they should be applied on-demand, as necessary and dictated by the conditions under which requests are made and responses must be delivered.
It’s beneficial to examine the relative importance (and applicability) of WAN optimization solutions in the context of the “big picture”. That picture includes transport and application layer impedances that also need to be addressed, as well as the generalized difficulties in deploying such solutions in the increasingly mobile and virtual environments from which applications are being deployed. A unified approach to application delivery, which encompasses WAN optimization as a service rather than an individual solution, is better suited to interpreting context and applying the appropriate policies in a way that makes sense.
Cloud really brings to the fore the architectural issues with most WAN optimization solutions. Because of the way they work, they must be paired (not impossible to overcome at all) and they must be the “last hop”, which makes multi-tenant support an interesting proposition. Contrast that with WAN application delivery services, which recognize that WAN links (any higher latency, constrained link really) requires different behavior at the network and transport and even application layers in terms of delivery, and you’ll find that the latter makes much more sense in the dynamic, services-oriented cloud computing environments currently available today. It’s just part of the bigger picture – the application delivery picture – and it has to become more integrated if it’s going to be useful for multi-tenant, dynamic environments like cloud computing.
Just as load balancing is no longer a solution of its own, WAN optimization has become a feature of a broader, holistic unified application delivery solution.

Rachel Collier recommended that you Get up to speed with the latest Windows Azure features in an 12/10/2010 post to the UK Tech Net blog:
image At PDC10 last month, Microsoft announced several enhancements for the Windows Azure Platform. Today, we're happy to announce that several of these enhancements are ready for you to try as a Beta or Community Technology Preview (CTP). You can find more details about these new features in the announcement on the Windows Azure team blog or in the webcast “Getting Started with the Windows Azure November 2010 Release”.
image If you are new to Windows Azure, you might want to read this post first, which contains guidance on getting you up to speed with the platform.
Step 1 – Download the Tools
Download the Windows Azure SDK and Windows Azure Tools for Visual Studio release 1.3 and the new Windows Azure Platform Developer Training Kit today, and start exploring the new services and features in the Windows Azure Platform.
Step 2 – Get Access to Windows Azure
Developers and IT pros have several cost-effective ways to try the Windows Azure Platform, including the Introductory Special with 25 hours per month at no charge. In addition to these offers, MSDN Premium and Ultimate subscribers are eligible for 16 months of free access for development and testing purposes. Learn more about your MSDN Windows Azure Benefits and how to activate your account today.
Step 3 – Learn and Explore the Platform
Now that you have installed the tools and activated your Windows Azure account you can use the following resources to learn and get hands-on experience on some of the key services and features on the Windows Azure Platform.
Learn (webcast) | Explore (hands-on lab) | New service or feature
Building, deploying & managing Windows Azure Apps | Introduction to Windows Azure HOL | Full IIS; New Developer Portal; Remote Desktop; Apps Deployment; Apps Diagnostics
Migrating and Building Apps for Windows Azure | Advanced Web And Worker Roles HOL | Windows Azure Virtual Machine role; Elevated Privileges for Web & Worker roles; Full IIS; Remote Desktop
Inside WA Virtual Machines | Virtual Machine Role HOL | Windows Azure Virtual Machine role
Understanding Windows Azure Connect | Connecting Apps with Windows Azure Connect HOL | Windows Azure Connect
Building High Performance Web Applications with the Windows Azure Platform | Windows Azure CDN HOL | CDN Dynamic Content Caching; Full IIS; Elevated Privileges for Web and Worker roles
Additional Resources
If you would like to learn more about the Windows Azure Platform and the new features that we're making available, please watch on-demand sessions from PDC10 that dive deeper into many of these Windows Azure features; check here for a full list of sessions. For all other questions, please refer to the latest FAQ.

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Andi Mann (@AndiMann) concludes “hybrid cloud will be the dominant choice” in his Risk and Reward in the Cloud post of 12/13/2010:
Another day, another public cloud failure.
imageSometime on Friday December 10, 2010, claims content delivery network (CDN) provider SimpleCDN, two of its major upstream infrastructure providers, SoftLayer and Hosting Services, summarily terminated service for much of SimpleCDN’s infrastructure in Dallas, Seattle and Washington D.C. (I found out Saturday on Twitter through @Beaker).
According to SimpleCDN’s status page (reproduced below), this infrastructure constitutes the majority of SimpleCDN’s delivery network for its value services, including on-demand and live streaming services.
At the time of writing, SimpleCDN’s status page told their story:
From http://admin.simplecdn.com/ , visited 2010-12-11 13:30
So is this a big deal, or not a big deal?
I think it is a little of both.
It is clear that SimpleCDN customers – including Web hosting sites, application developers, streaming media providers, media production, game providers, and news distribution organizations – are suffering major service outages because of this failure. For them, there is no doubt that this is a very big deal.
It is a big deal for SimpleCDN too, of course. At the time of writing, it is unclear whether SimpleCDN will continue to provide services to remaining customers. Some competitors are making hay of these problems, offering switchover discounts while declaring (with what I believe is a disingenuously snide question mark) “SimpleCDN going out of business?”
For its part, SimpleCDN seems to be doing the right thing, working tirelessly through its Twitter stream (@SimpleCDN) to address customer concerns, and even working with erstwhile competitors like MaxCDN and Amazon CloudFront to provide alternative services. It should also be noted that SimpleCDN says it has started to take legal action against SoftLayer and Hosting Services Inc. for what it calls:
“a deliberate attempt to cripple SimpleCDN’s current service offering [that] constitute[s] a conspiracy to remove us, and many other corporations affected by their reckless actions, from the marketplace”
However, in the bigger scheme of things – for consumers of public cloud services in general – this is really not a big deal at all.
It is just another risk to evaluate and manage.
As I have already published – and it clearly remains exceedingly relevant – downtime is endemic in the public cloud – but it is not unique to public cloud. Private cloud, and even plain old legacy infrastructure, suffers downtime and other outages to more or less the same degree as public cloud. Some public cloud providers even offer better uptime guarantees than the average privately owned and operated infrastructure.
Of course, it is possible to reduce your risk by opting out of some public services. For example, you can run your own Web site, lease private lines, buy your own fibre, or host your own DNS servers – just as many businesses still own their own real estate, trucking fleet, payroll, or generators. It is simple logic that the less a business relies on public/outsourced services – whether that is real estate, trucking, staffing, electricity, or information technology – the less risk it carries that those services will cease to be available, for whatever reason.
If you own your own trucks, no other private business can take them off the roads
After all, if you own your own fleet of trucks, you have more control over those vehicles, and no other private business can summarily take them off the roads. Similarly, if you own your own Web servers – or CDN – then no other private business can summarily take them off the Internet.
However, this does not mean owned infrastructure is an inherently better choice than shared infrastructure.
It is all simply a balance of risk and reward.
Leasing your fleet is no more or less valid than buying it outright. Even running your own power generation is no more or less valid than taking power off the public grid. Similarly, private cloud is not inherently a more or less valid choice than public cloud.
For each business, there are risks and rewards, costs and benefits. Each business must make a choice according to its own appetite for risk and reward. Moreover, within each business, individual business units (BUs) may have different levels of risk aversion, leading to different decisions on how much to outsource any given service.
The diversity of this risk-reward decision is the fundamental driver for hybrid cloud. Some businesses – and some BUs – will inevitably find the right balance in public cloud. Others will find it in private cloud. However, especially in large enterprises with multiple (often independent) BUs, it is most likely that each service group will choose independently, and not at all homogeneously.
Every enterprise will at some stage need to use some 3rd-party infrastructure
Indeed, it is inevitable that every enterprise will at some stage need to use some 3rd-party infrastructure. Company-owned trucks will eventually use public roads. Privately owned IT will eventually use some 3rd-party service, such as Web hosts, Internet service providers, domain name services, or packet routing.
So it is really just a question of how much 3rd-party infrastructure each business, business unit, or individual service or activity will use.
Which certainly bolsters my opinion that hybrid cloud will be the dominant choice, at least in the near future, for most businesses. It is simply a question of what is the best balance of risk and reward.



<Return to section navigation list> 

Cloud Security and Governance

image
No significant articles today.

<Return to section navigation list> 

Cloud Computing Events

Microsoft World Wide Events announced on 12/13/2010 Windows Azure Platform Acceleration Discover to be held 12/16/2010 from 9:00 AM to 1:00 PM PST at the Microsoft Los Angeles office:
  • Event ID: 1032468988
  • Microsoft LA Office, 333 S Grand Ave, Los Angeles California 90071-1504, United States
  • Register by Phone: 1-877-673-8368
  • Language(s): English.
  • Product(s): Other and Windows Azure.
  • Audience(s): Architect, IT Decision Maker and Tech Influencing BDM.
Event Overview
image
Microsoft would like to invite you to a special event specifically designed for ISVs interested in learning more about the Windows Azure Platform. The “Windows Azure Platform Discover Events” are half-day events that will be held worldwide with the goal of helping ISVs understand Microsoft’s Cloud Computing offerings with the Windows Azure Platform, discuss the opportunities for the cloud, and show ISVs how they can get started using Windows Azure and SQL Azure today.
The target audience for these events includes BDMs, TDMs, Architects, and Development leads.  The sessions are targeted at the 100-200 level with a mix of business focused information as well as technical information.
Register with a Windows Live™ ID
Thursday, December 16, 2010, 9:00 AM to 1:00 PM Pacific Time (US & Canada)
Welcome Time: 8:30 AM
Register Online

Todd Hoff reported on 12/13/2010 that there’s Still Time to Attend My Webinar Tomorrow: What Should I Do? Choosing SQL, NoSQL or Both for Scalable Web Applications:

It's time to do something a little different and for me that doesn't mean cutting off my hair and joining a monastery, nor does it mean buying a cherry red convertible (yet), it means doing a webinar!
  • On December 14th, 2:00 PM - 3:00 PM EST, I'll be hosting What Should I Do? Choosing SQL, NoSQL or Both for Scalable Web Applications.
  • The webinar is sponsored by VoltDB, but it will be completely vendor independent, as that's the only honor preserving and technically accurate way of doing these things.
  • The webinar will run about 60 minutes, with 40 minutes of speechifying and 20 minutes for questions.
  • The hashtag for the event on Twitter will be SQLNoSQL. I'll be monitoring that hashtag if you have any suggestions for the webinar or if you would like to ask questions during the webinar.

MSDN Worldwide Events announced MSDN Webcast: Windows Azure Boot Camp: Worker Roles (Level 200) will be held 12/13/2010 (today) at 11:00 AM PST:
  • Event ID: 1032470870
  • Language(s): English.
  • Product(s): Windows Azure.
  • Audience(s): Pro Dev/Programmer.
Event Overview
image
While Web Roles provide an Internet Information Services (IIS) environment to applications, the key to scalability and performance is understanding Worker Roles and when to use them. This webcast covers what Worker Roles are, endpoints, how Worker Roles use local storage, and the function of Worker Roles in processing a queue.
image Technology is changing rapidly, and nothing is more exciting than what's happening with cloud computing. Join us for this webcast series, and get up to speed on developing for Windows Azure, the broad-based Microsoft business solution that helps you meet market demands. This series brings the in-person, two-day Windows Azure Boot Camp online. Each webcast is a stand-alone resource, but the series gives you a complete picture of how to get started with this platform.
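If you want a feel for the territory before the webcast, here is a minimal worker role skeleton (C#, Windows Azure SDK 1.3) touching two of the topics it covers: the Run() loop and local storage. The "ScratchSpace" local resource is a hypothetical name that would need to be declared in ServiceDefinition.csdef; this is a sketch, not the presenters' sample code.

using System;
using System.IO;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Local storage is per-instance scratch disk, not durable storage;
        // anything you must keep belongs in Azure Storage instead.
        LocalResource scratch = RoleEnvironment.GetLocalResource("ScratchSpace");
        string workFile = Path.Combine(scratch.RootPath, "work.tmp");

        while (true)
        {
            // Placeholder for real work (for example, draining a queue).
            File.WriteAllText(workFile, DateTime.UtcNow.ToString("o"));
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }

    public override bool OnStart()
    {
        // One-time initialization (diagnostics, endpoints) goes here.
        return base.OnStart();
    }
}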
Presenters: Mike Benkovich, Senior Developer Evangelist, Microsoft Corporation and Brian Prince, Senior Architect Evangelist, Microsoft Corporation
Energy, laughter, and a contagious passion for coding: Mike Benkovich brings it all to the podium. He's been programming since the late '70s when a friend brought a Commodore CPM home for the summer. Mike has worked in a variety of roles including architect, project manager, developer, and technical writer. Mike is a published author with WROX Press and APress Books, writing primarily about getting the most from your Microsoft SQL Server database. Since appearing in Microsoft's DevCast in 1994, Mike has presented technical information at seminars, conferences, and corporate boardrooms across America. This music buff also plays piano, guitar, and saxophone, but not at his MSDN Events. For more information, visit www.BenkoTIPS.com.
Expect Brian Prince to get (in his own words) "super excited" whenever he talks about technology, especially cloud computing, patterns, and practices. Brian is a senior architect evangelist at Microsoft and has more than 13 years of expertise in information technology management. Before joining Microsoft in 2008, Brian was the senior director of technology strategy for a major Midwest Microsoft partner. Brian has exceptional proficiency in the Microsoft .NET framework, service-oriented architecture, building enterprise service buses (ESBs), and both smart client and web-based applications. Brian is the cofounder of the non-profit organization CodeMash (www.codemash.org), and he speaks at various regional and national technology events, such as TechEd. For more information, visit www.brianhprince.com.

The UK ISV Evangelism Team reported on 12/13/2010 an upcoming Windows Azure (ALM Perspective) LiveMeeting- Free and from the comfort of your own desk??!! on 1/19/2011:
image
Wed 19th Jan 2011, 10-11am [GMT]. The application lifecycle management features of Visual Studio 2010 and Team Foundation Server 2010 can be beneficial for any development project, but with many developers now targeting Windows Azure as their platform of choice, this session will look at how we can utilize those features to increase productivity and software quality in the cloud.
Register here: http://www.microsoft.com/visualstudio/en-gb/visual-studio-events

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr, pictured below) reported FreeBSD on Amazon EC2 in a 12/13/2010 post to his Amazon Web Services blog:
Colin Percival (developer of Tarsnap) wrote to tell me that the FreeBSD operating system is now running on Amazon EC2 in experimental fashion.
image According to his FreeBSD on EC2 blog post, version 9.0-CURRENT of FreeBSD is now available in the US East (Northern Virginia) region and can be run on t1.micro instances. Colin expects to be able to expand to other regions and EC2 instance types over time.
image The AMI is stable enough to be able to build and run Apache under light load for several days. FreeBSD 9.0-CURRENT is a bleeding-edge snapshot release. Plans are in place to back-port the changes made to this release to FreeBSD 8.0-STABLE in the future.
Congratulations to Colin and to the rest of the FreeBSD team for making this happen. I have received a number of requests for this operating system over the years and I am happy to see that this community-driven effort has made so much progress.

<Return to section navigation list> 
