Monday, September 13, 2010

Windows Azure and Cloud Computing Posts for 9/13/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the sections that follow.



Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

Both chapters also are available for HTTP download at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Clemens Vasters analyzes The Magical Input Queue Of T (aka InputQueue<T>) deeply in this 9/12/2010 post:

This post explains an essential class for asynchronous programming that lurks in the depths of the WCF samples: InputQueue<T>. If you need to write efficient server-side apps, you should consider reading through this and adding InputQueue<T> to your arsenal.

Let me start with: This blog post is 4 years late. Sorry! – and with that out of the way:

The WCF samples ship with several copies of a class that’s marked as internal in the System.ServiceModel.dll assembly: InputQueue<T>. Why are these samples – mostly those implementing channel-model extensions – bringing local copies of this class with them? It’s an essential tool for implementing the asynchronous call paths of many aspects of channels correctly and efficiently.

If you look closely enough, the WCF channel infrastructure resembles the Berkeley Socket model quite a bit – especially on the server side. There’s a channel listener that’s constructed on the server side, and when that is opened (usually under the covers of the WCF ServiceHost) that operation is largely equivalent to calling ‘listen’ on a socket – the network endpoint is ready for business. On sockets you’ll then call ‘accept’ to accept the next available socket connection from a client; in WCF you call ‘AcceptChannel’ to accept the next available (session) channel. On sockets you then call ‘receive’ to obtain bytes; on a channel you call ‘Receive’ to obtain a message.
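
To make the analogy concrete, here is a minimal server-side sketch of that listen/accept/receive sequence written against the raw channel layer. This is an illustration only; the channel shape depends on the binding (net.tcp yields IDuplexSessionChannel), and the address is a placeholder:

using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

class ChannelListenerSketch
{
    static void Main()
    {
        Binding binding = new NetTcpBinding();

        // 'listen' – build and open a channel listener on the endpoint
        IChannelListener<IDuplexSessionChannel> listener =
            binding.BuildChannelListener<IDuplexSessionChannel>(
                new Uri("net.tcp://localhost:9000/sketch"));
        listener.Open();

        // 'accept' – take the next available (session) channel
        IDuplexSessionChannel channel = listener.AcceptChannel(TimeSpan.MaxValue);
        channel.Open();

        // 'receive' – obtain a message from the accepted channel
        Message message = channel.Receive(TimeSpan.FromMinutes(1));
        Console.WriteLine(message.Headers.Action);

        channel.Close();
        listener.Close();
    }
}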

Before and between calls to ‘AcceptChannel’ made by the server-side logic, client-initiated connections – and thus channels – may be coming in and queue up for a bit before they are handed out to the next caller of ‘AcceptChannel’, or the asynchronous equivalent ‘Begin/EndAcceptChannel’ method pair. The number of channels that may be pending is configured in WCF with the ‘ListenBacklog’ property that’s available on most bindings.

I wrote ‘queue up’ there since that’s precisely what happens – those newly created channels on top of freshly accepted sockets or HTTP request channels are enqueued into an InputQueue<T> instance and (Begin-)Accept is implemented as a dequeue operation on that queue. There are two particular challenges here that make the regular Queue<T> class from the System.Collections.Generic namespace unsuitable for use in the implementation of that mechanism: Firstly, the Dequeue method there is only available as a synchronous variant and also doesn’t allow for specifying a timeout. Secondly, the queue implementation doesn’t really help much with implementing the ListenBacklog quota where not only the length of the queue is limited to some configured number of entries, but accepting further connections/channels from the underlying network is also suspended for as long as the queue is at capacity and needs to resume as soon as the pressure is relieved, i.e. a caller takes a channel out of the queue.
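
To see why that shape matters, here is a minimal sketch of the core enqueue/dequeue handshake such a queue performs (my illustration, not the real InputQueue<T>, which adds timeouts, Begin/End async pairs, dispatch semantics, and the backlog throttling described above):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Minimal sketch of the InputQueue<T> idea: a dequeue completes
// asynchronously when an item arrives instead of blocking a thread.
public class MiniInputQueue<T>
{
    private readonly object syncRoot = new object();
    private readonly Queue<T> items = new Queue<T>();
    private readonly Queue<TaskCompletionSource<T>> waiters =
        new Queue<TaskCompletionSource<T>>();

    public void Enqueue(T item)
    {
        TaskCompletionSource<T> waiter = null;
        lock (syncRoot)
        {
            if (waiters.Count > 0)
                waiter = waiters.Dequeue(); // hand the item to a pending dequeue
            else
                items.Enqueue(item);        // nobody waiting; buffer the item
        }
        if (waiter != null)
            waiter.SetResult(item);         // completes the caller's Task
    }

    public Task<T> DequeueAsync()
    {
        lock (syncRoot)
        {
            if (items.Count > 0)
            {
                var done = new TaskCompletionSource<T>();
                done.SetResult(items.Dequeue());
                return done.Task;           // item was buffered; complete at once
            }
            var pending = new TaskCompletionSource<T>();
            waiters.Enqueue(pending);
            return pending.Task;            // completes on a later Enqueue
        }
    }
}

A consumer chains a continuation onto DequeueAsync() rather than blocking a thread, which is exactly the property an asynchronous Begin/EndAcceptChannel implementation needs.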

To show that InputQueue<T> is a very useful general purpose class even outside of the context of the WCF channel infrastructure, I’ve lifted a version of it from one of the most recent WCF channel samples, made a small number of modifications that I’ll write about later, and created a little sample around it that I’ve attached to this post.

The sample I’ll discuss here simulates parsing/reading IP addresses from a log file and then performing a reverse DNS name resolution on those addresses – something that you’d do in a web-server log analyzer or as the background task in a blog engine while preparing statistics.

Reverse DNS name resolution is quite interesting since it’s embarrassingly easy to parallelize and each resolution commonly takes a really long time (4-5 seconds) – whereby all the work is done elsewhere. The process issuing the queries is mostly sitting around idle waiting for the response. Therefore, it’s a good idea to run a number of DNS requests in parallel, but it’s a terrible idea to have any of these requests execute as a blocking call, burning a thread. Since we’re assuming that we’re reading from a log file that requires some parsing, it would also be a spectacularly bad idea to have multiple concurrent threads compete for access to that file and get into each other’s way. And since it is a file and we need to lift things up from disk, we probably shouldn’t do that ‘just in time’ as a DNS resolution step is done; there should rather be some data readily waiting for processing. InputQueue<T> is enormously helpful in such a scenario.
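
As an aside, the fan-out part of that call pattern is easy to see in isolation. Here is a small sketch (mine, not Clemens’s sample code) that issues several reverse lookups in parallel without blocking on any single one:

using System;
using System.Net;
using System.Threading;

class ReverseDnsSketch
{
    static void Main()
    {
        string[] addresses = { "8.8.8.8", "208.67.222.222", "4.2.2.1" };
        var pending = new CountdownEvent(addresses.Length);

        foreach (string address in addresses)
        {
            // Start the lookup asynchronously; the caller does not wait
            // for the 4-5 seconds each resolution can take.
            Dns.BeginGetHostEntry(address, result =>
            {
                string ip = (string)result.AsyncState;
                try
                {
                    IPHostEntry entry = Dns.EndGetHostEntry(result);
                    Console.WriteLine("{0} -> {1}", ip, entry.HostName);
                }
                catch (Exception e) // unknown hosts cause an exception
                {
                    Console.WriteLine("{0} -> ? ({1})", ip, e.Message);
                }
                finally
                {
                    pending.Signal();
                }
            }, address);
        }

        pending.Wait(); // only the main thread waits, once, at the end
    }
}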

Clemens continues with details of the sample code and adds a link to download it:

And last but not least – here’s teh codez; the project file is for VS2010 – throw the files into a new console app for VS2008.

UsingInputQueue.zip (13.85 KB)

or if you'd rather have a version of InputQueue that is using the regular thread pool, download the WCF samples and look for InputQueue.cs.

[The sample code posted here is subject to the Windows SDK sample code license]

Looks to me like a possibility for a future enhancement to Azure Queues.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Ayende Rahien (@ayende) asks how you would go about Implementing CreateSequentialUuid() in this 9/13/2010 post:

We ran into an annoying problem in Raven regarding the generation of sequential guids. Those are used internally to represent the etag of a document.

For a while, we used the Win32 method CreateSequentialUuid() to generate that. But we ran into a severe issue with it: it creates sequential guids only as long as the machine is up. After a reboot, the guids are no longer sequential. That is bad, but it also means that two systems calling this API can get drastically different results (duh! that is the point, pretty much, isn’t it?). Which wouldn’t bother me, except that we use etags to calculate the freshness of an index, so we have to have an always-incrementing number.

How would you implement this method?

public static Guid CreateSequentialUuid()

A few things to note:

We really actually care about uniqueness here, but only inside a single process, not globally.

The results must always be incrementing.

The always incrementing must be consistent across machine restarts and between different machines.

Yes, I am fully aware of NHibernate’s implementation of guid.comb that creates sequential guids. It isn't applicable here, since it doesn't create truly sequential guids, only guids that sort near one another. [Emphasis Ayende’s.]
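
For comparison, here is one naive way to satisfy those constraints (my sketch, not Ayende’s eventual answer): prefix the value with the UTC clock and break ties with an in-process counter. It assumes machine clocks move forward across restarts and are roughly synchronized between machines, and it requires comparing the etags as big-endian byte strings rather than with Guid.CompareTo, which uses a different field ordering:

using System;
using System.Threading;

public static class SequentialUuid
{
    private static long counter;

    public static Guid CreateSequentialUuid()
    {
        byte[] ticks = BitConverter.GetBytes(DateTime.UtcNow.Ticks);
        byte[] count = BitConverter.GetBytes(Interlocked.Increment(ref counter));
        if (BitConverter.IsLittleEndian)
        {
            Array.Reverse(ticks); // big-endian so the bytes sort chronologically
            Array.Reverse(count);
        }
        byte[] bytes = new byte[16];
        Array.Copy(ticks, 0, bytes, 0, 8); // clock survives restarts
        Array.Copy(count, 0, bytes, 8, 8); // counter breaks ties within a tick
        return new Guid(bytes);
    }
}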

This is of interest to SQL Azure developers because, as John Paul Cook notes below, SQL Azure doesn’t support the newsequentialid function:

Msg 40511, Level 15, State 1, Line 32
Built-in function 'newsequentialid' is not supported in this version of SQL Server.


John Paul Cook (@JohnPaulCook) updated and expanded his SQL Azure differences post on 9/12/2010:

If you prefer wizards and designers, SQL Azure may cause you some frustration because they’re not there. But if you like T-SQL scripting, you’ll be in your element with SQL Azure. Here’s what happens in SQL Azure when you try to create a new table:


Since it’s likely that existing SQL Server scripts will be used to create tables in SQL Azure, I decided to put a script to the test. In SQL Server 2008 R2, I created a table that has one column for every SQL Server data type. I scripted the table and tried to run the script on SQL Azure.

CREATE TABLE dbo.allDatatypes(
abigint bigint NULL,
abinary binary(50) NULL,
abit bit NULL,
achar char(10) NULL,
adate date NULL,
adatetime datetime NULL,
adatetime2 datetime2(7) NULL,
adatetimeoffset datetimeoffset(7) NULL,
adecimal decimal(18, 0) NULL,
afloat float NULL,
ageography geography NULL,
ageometry geometry NULL,
ahierarchyid hierarchyid NULL,
aimage image NULL,
aint int NULL,
amoney money NULL,
anchar nchar(10) NULL,
antext ntext NULL,
anumeric numeric(18, 0) NULL,
anvarchar50 nvarchar(50) NULL,
anvarcharMAX nvarchar(max) NULL,
areal real NULL,
asmalldatetime smalldatetime NULL,
asmallint smallint NULL,
asmallmoney smallmoney NULL,
asql_variant sql_variant NULL,
atext text NULL,
atime time(7) NULL,
atimestamp timestamp NULL,
atinyint tinyint NULL,
auniqueidentifier uniqueidentifier NOT NULL,
avarbinary50 varbinary(50) NULL,
avarbinaryMAX varbinary(max) NULL,
avarchar50 varchar(50) NULL,
avarcharMAX varchar(max) NULL,
axml xml NULL
) ON PRIMARY TEXTIMAGE_ON PRIMARY

Msg 156, Level 15, State 1, Line 40
Incorrect syntax near the keyword 'PRIMARY'.

It didn’t work because SQL Azure abstracts away filegroups, which probably doesn’t bother people who only use the defaults for filegroups. Removing ON PRIMARY TEXTIMAGE_ON PRIMARY allows the script to run successfully on SQL Azure.

Although geography, geometry, and hierarchyid are implemented with the CLR, user-defined CLR is not allowed in SQL Azure. As we can see from running the corrected script, SQL Azure supports all SQL Server data types, but that doesn’t mean everything works exactly the same way as it does on SQL Server. If you need to insert guids, newsequentialid() is very helpful in preventing index page splits during inserts. It’s not supported in SQL Azure, so you’ll have to use newid() instead.

auniqueidentifier uniqueidentifier NOT NULL default newsequentialid(),

Msg 40511, Level 15, State 1, Line 32
Built-in function 'newsequentialid' is not supported in this version of SQL Server.
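
The working SQL Azure equivalent of that column definition simply swaps in newid(), at the cost of the non-sequential values (and potential page splits) John describes above:

auniqueidentifier uniqueidentifier NOT NULL DEFAULT newid(),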

Also, ROWGUIDCOL is not supported.

auniqueidentifier uniqueidentifier NOT NULL ROWGUIDCOL,

Msg 40514, Level 16, State 6, Line 1
'ROW GUID COLUMN' is not supported in this version of SQL Server.

You can’t use extended properties to document a table in SQL Azure. Most people don’t do this, so it’s not a big deal.

EXEC sp_addextendedproperty N'MS_Description', N'This column is used as the primary key.'
, 'SCHEMA', N'dbo'
, 'TABLE', N'allDataTypes'
, 'COLUMN', N'_uniqueidentifier'
GO
Msg 2812, Level 16, State 62, Line 1
Could not find stored procedure 'sp_addextendedproperty'.

Given this background, you know to expect some differences between the SQL Server version of the AdventureWorksLT database and the SQL Azure version. Since we are so familiar with AdventureWorks databases and there are so many examples written based on them, it’s important to know how similar the SQL Azure version is. This isn’t such an easy thing to do today because of a lack of SQL Azure compatible tools. Using a special build of SQL Compare that Red Gate provided to me, I was able to get a nice visual representation of the differences.


We can see several differences including a lack of triggers and foreign keys in the SQL Azure version of AdventureWorks. This could lead you to the erroneous conclusion that triggers and foreign keys are not supported in SQL Azure. Rest assured, they are supported. I ran the scripts to create the triggers and foreign keys in AdventureWorksLTAZ2008R2 and they worked just fine.

Other differences between SQL Server and SQL Azure can be seen in right-click menu options. I think Arnie and Allen are probably upset about the lack of PowerShell support.


Full-Text Indexes, Policies, and Facets are not supported in SQL Azure.


Patrick Butler Monterde listed SQL Azure Best Practices and Code in this 9/11/2010 post:

SQL Azure Retry Code Update (MSDN Blog)

Link: http://blogs.msdn.com/b/bartr/archive/2010/06/20/sql-azure-retry-code-update.aspx


Development Best Practices and Patterns for Using Microsoft SQL Azure Databases (PDC 2009 – Presentation)

SQL Azure provides a fully relational database service that is based on Microsoft SQL Server and familiar concepts such as T-SQL, schema-based tables, and stored procedures. Learn patterns and best practices for developing resilient applications that allow you to take full advantage of the scale and elasticity of SQL Azure Database Service.

Link: http://www.microsoftpdc.com/2009/P09-08


Microsoft SQL CAT Best Practices Code Project

This resource provides a set of general best practices for Microsoft platform developers including SQL Server, SQL Azure and other major technologies in the relational space and beyond.

Link: http://code.msdn.microsoft.com/sqlcat


Best Practices for Building Reliable SQL Azure Database Client Applications

Blog post explaining how to use the SQL CAT best practices project

Link: http://sqlcat.com/technicalnotes/archive/2010/06/17/best-practices-for-building-reliable-sql-azure-database-client-applications.aspx


The ADO.NET Team posted a paean to EF Caching with Jarek Kowalski's Provider on 9/13/2010:

Jarek Kowalski built a great caching and tracing toolkit that makes it easy to add these two features onto the Entity Framework. His code sample plugs in as a provider that wraps the original store provider you intend to use. This guide will help you get started using Jarek’s tools to enable caching and/or tracing in your application.

Caching can be especially valuable in an application that repeatedly retrieves the same data, such as in a web application that reads the list of products from a database on each page load. Tracing allows you to see what SQL queries are being generated by the Entity Framework and determine when these queries are being executed by writing them to the console or a log file. When these two features are used together, you can use the tracing output to determine how caching has changed your application’s pattern of database access.

Although the Entity Framework currently does not ship with caching or tracing “in the box,” you can add support for these features by following this walkthrough. These instructions cover only the essentials of enabling caching and tracing using Jarek’s provider samples. More advanced details about how to configure these samples are available in the resources Jarek provides online.

Below is a diagram that represents the change we will be making to the query execution pathway in order to plug in the caching and tracing features. The extensibility point we will be using is at the store provider level of the Entity Framework.

In addition, below is a diagram that should elucidate why it is referred to as a “wrapping” provider and what it looks like when there is a cache hit versus a cache miss during query execution. As you can see below, the providers conceptually wrap around your existing provider.
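
Schematically, the decision the wrapping provider adds is a cache-aside lookup in front of the wrapped store provider. Here is a small illustrative sketch of that logic (mine, not Jarek’s actual classes, which also handle cache invalidation and expiration policies):

using System;
using System.Collections.Generic;

// Schematic cache-aside logic: on a hit, the wrapped store provider
// never sees the query; on a miss, the query is delegated and the
// results are remembered for next time.
public class QueryCache
{
    private readonly Dictionary<string, object> entries =
        new Dictionary<string, object>();

    public object GetOrExecute(string cacheKey, Func<object> executeOnStore)
    {
        object results;
        if (entries.TryGetValue(cacheKey, out results))
            return results;             // cache hit

        results = executeOnStore();     // cache miss: run against the store
        entries[cacheKey] = results;
        return results;
    }
}

Here cacheKey would be derived from the command text and parameter values; the real provider also tracks which entity sets a query touches so that updates can evict stale entries.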

Gathering Resources and Setting Up

Let’s get started. Here are the new resources you will need to follow along with this walkthrough:

First, install the Visual Studio extension my colleague Matt Luebbert created that will automatically generate certain boilerplate code for you. A .vsix file like this is an installable Visual Studio extension that adds functionality to your IDE. This particular .vsix adds a T4 template that generates a class that extends your existing entity container so that it can work with the new caching and tracing features. Note, however, that it also has dependencies on adding both of Jarek’s caching and tracing resources to your project. This does not necessarily mean that to use caching you must use tracing, or vice versa, but you do have to provide your project with references to both resources in order to use this extra tool we have provided for convenience. If you do not want to use this tool, you can also generate the necessary code by hand according to Jarek’s instructions here.

To install the extension, double-click the “ExtendedContext.vsix” file you downloaded to your desktop. Then click “Install.”

Click "Close" after the installation completes successfully.

Now open your project in Visual Studio. If you would like a sample project to get started, you can use the same one I am using, which I created in a previous walkthrough, the Absolute Beginner’s Guide to Entity Framework. For this particular sample, I am using the same models as in that walkthrough but a simpler version of the executable code, which also demonstrates the benefits of caching more obviously. Here is the code in the file Program.cs that I am starting with this time.

Several feet of C# source code excised for brevity.

Download: ExtendedContext.vsix.txt


No significant articles today.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Maciej Skierkowski (a.k.a. "Ski") explained Calling a Service Bus HTTP Endpoint with Authentication using WebClient in a 9/12/2010 post:

This article demonstrates using WebClient in C# to make a client call to a secure Service Bus endpoint exposed using WebHttpRelayBinding. There are two key take-aways from this that mightn’t be covered by existing documentation and samples: (1) learn how to place an Access Control STS-issued token into the Authorization header, and (2) make a client call to the service without the dependency on the SB client library. The second point is particularly important because non-.NET languages (e.g. PHP, Python, Java) have analogs to WebClient; therefore understanding this example enables you to easily build clients in other, non-.NET languages.

Before getting started you’ll first need to create a WCF service exposed via WebHttpRelayBinding. Below are four files you can copy-paste into your own solution and replace the values for the issuer name, key, and service namespace. The picture below illustrates where you can get the service namespace, default issuer name, and default issuer key to use out-of-the-box with these samples. The page is a screenshot of ServiceNamespace.aspx.


App.config

<?xml version="1.0"?>
<configuration>
  <system.serviceModel>
    <bindings>
      <!-- Application Binding -->
      <webHttpRelayBinding>
        <binding name="default">
          <security relayClientAuthenticationType="RelayAccessToken"></security>
        </binding>
      </webHttpRelayBinding>
    </bindings>

    <services>
      <service behaviorConfiguration="default" name="Microsoft.ServiceBus.Samples.TextService">
        <endpoint address="" behaviorConfiguration="sharedSecretClientCredentials"
          binding="webHttpRelayBinding" bindingConfiguration="default"
          name="RelayEndpoint" contract="Microsoft.ServiceBus.Samples.ITextContract" />
      </service>
    </services>

    <behaviors>
      <endpointBehaviors>
        <behavior name="sharedSecretClientCredentials">
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="owner" issuerSecret="-key-"/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>
      </endpointBehaviors>
      <serviceBehaviors>
        <behavior name="default">
          <serviceDebug httpHelpPageEnabled="false" httpsHelpPageEnabled="false"/>
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

TextContract.cs

namespace Microsoft.ServiceBus.Samples
{
    using System;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Web;
    using System.IO;

    [ServiceContract(Name = "TextContract", Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")]
    public interface ITextContract
    {
        [OperationContract, WebGet]
        String GetText();
    }

    public interface IImageChannel : ITextContract, IClientChannel { }
}

TextService.cs

namespace Microsoft.ServiceBus.Samples
{
    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using Microsoft.ServiceBus.Web;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Web;

    [ServiceBehavior(Name = "TextService", Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")]
    class TextService : ITextContract
    {
        public String GetText()
        {
            WebOperationContext.Current.OutgoingResponse.ContentType = "text/plain";
            return "Hello World";
        }
    }
}

Program.cs

namespace Microsoft.ServiceBus.Samples
{
    using System;
    using System.ServiceModel;
    using System.ServiceModel.Description;
    using Microsoft.ServiceBus;
    using System.ServiceModel.Web;

    class Program
    {
        static void Main(string[] args)
        {
            string serviceNamespace = "-servicenamespace-";
            
            Uri address = ServiceBusEnvironment.CreateServiceUri("https", serviceNamespace, "Text");

            WebServiceHost host = new WebServiceHost(typeof(TextService), address);
            host.Open();

            Console.WriteLine("Copy the following address into a browser to see the image: ");
            Console.WriteLine(address + "GetText");
            Console.WriteLine();
            Console.WriteLine("Press [Enter] to exit");
            Console.ReadLine();

            host.Close();
        }
    }
}

Once you have the above service up and running, creating a client using WebClient is straightforward.

Program.cs

namespace Client
{
    using System;
    using Microsoft.AccessControl.Client;
    using System.Net;

    class Program
    {
        static void Main(string[] args)
        {
            string serviceNamespace = "-";
            string issuerName = "-";
            string issuerKey = "-";
            
            string baseAddress= string.Format("http://{0}.servicebus.windows.net/",serviceNamespace);
            string serviceAddress = string.Format("https://{0}.servicebus.windows.net/Text/GetText", serviceNamespace);

            TokenFactory tf = new TokenFactory(string.Format("{0}-sb",serviceNamespace), issuerName, issuerKey);
            string requestToken = tf.CreateRequestToken();
            string returnToken = tf.GetACSToken(requestToken, baseAddress);

            WebClient client = new WebClient();
            
            client.Headers[HttpRequestHeader.Authorization] = string.Format("WRAP access_token=\"{0}\"", returnToken);
            
            string returnString = client.DownloadString(new Uri(serviceAddress));
            
            Console.WriteLine(returnString);

            Console.ReadLine();
        }

    }
}

First you’ll notice that I am using TokenFactory, which comes from my “Requesting a Token from Access Control in C#”. The base address is the address of the Relying Party. This address maps to the Scope in your Access Control Service configuration. A token granted to /foo in Service Bus will give you access to /foo/bar and /foo/baz because Service Bus uses longest-prefix matching on the path. This is why we can get a token for the root of the path and use that token to access a resource (i.e. endpoint) at its child address.

Also notice that the service namespace used when instantiating TokenFactory is postfixed with “-sb”. This is because the Access Control service has two independent instances of an STS running: one at http://servicenamespace.accesscontrol.windows.net and the other at http://servicenamespace-sb.accesscontrol.windows.net/ – as you can see, they differ only by “-sb”. The latter is a special STS that is just for the Service Bus. I won’t go into the details of this design decision, but from the application’s perspective we only have to make sure to use the right STS for issuing tokens to be used by the Service Bus.

Once the requesting token is generated by the TokenFactory, it is sent to the Access Control Service STS via the GetACSToken() method, which returns a SWT token issued by the ACS STS.

The ACS-issued token is then added to the HTTP Authorization header. Make sure the header value has the form WRAP access_token="<token>". This informs the Service Bus endpoint of the structure of the token.

F5 and you are golden.


Maciej Skierkowski (a.k.a. "Ski") described Requesting a Token from Access Control Service in C# in this 9/12/2010 post:

The AppFabric SDK V1.0 July Update has a number of Access Control Service examples demonstrating the requesting of a token from the Access Control Service; however, I find myself needing a small snippet to insert into other samples (e.g. Service Bus) just to craft a request token and get a token to authenticate with the Service Bus. As such, I’m posting this “TokenFactory” code that I’ve been re-using. This is fundamentally the same functionality I’ve demonstrated in previous posts in PHP, Java, and Python.

namespace Microsoft.AccessControl.Client
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;
    using System.Net;
    using System.Collections.Specialized;

    public class TokenFactory
    {
        private static string acsHost = "accesscontrol.windows.net";
        string serviceNamespace;
        string issuerName;
        string signingKey;

        public TokenFactory(string serviceNamespace, string issuerName, string signingKey)
        {
            this.serviceNamespace = serviceNamespace;
            this.issuerName = issuerName;
            this.signingKey = signingKey;
        }

        public string CreateRequestToken()
        {
            return this.CreateRequestToken(new Dictionary<string, string>());
        }

        public string CreateRequestToken(Dictionary<string, string> claims)
        {
            // build the claims string
            StringBuilder builder = new StringBuilder();
            foreach (KeyValuePair<string, string> entry in claims)
            {
                builder.Append(entry.Key);
                builder.Append('=');
                builder.Append(entry.Value);
                builder.Append('&');
            }

            // add the issuer name
            builder.Append("Issuer=");
            builder.Append(this.issuerName);
            builder.Append('&');

            // add the Audience
            builder.Append("Audience=");
            builder.Append(string.Format("https://{0}.{1}/WRAPv0.9/&", this.serviceNamespace, acsHost));

            // add the expires on date
            builder.Append("ExpiresOn=");
            builder.Append(GetExpiresOn(20));

            string signature = this.GenerateSignature(builder.ToString(), this.signingKey);
            builder.Append("&HMACSHA256=");
            builder.Append(signature);

            return builder.ToString();
        }

        private string GenerateSignature(string unsignedToken, string signingKey)
        {
            HMACSHA256 hmac = new HMACSHA256(Convert.FromBase64String(signingKey));

            byte[] locallyGeneratedSignatureInBytes = hmac.ComputeHash(Encoding.ASCII.GetBytes(unsignedToken));

            string locallyGeneratedSignature = HttpUtility.UrlEncode(Convert.ToBase64String(locallyGeneratedSignatureInBytes));

            return locallyGeneratedSignature;
        }

        private static ulong GetExpiresOn(double minutesFromNow)
        {
            TimeSpan expiresOnTimeSpan = TimeSpan.FromMinutes(minutesFromNow);

            DateTime expiresDate = DateTime.UtcNow + expiresOnTimeSpan;

            TimeSpan ts = expiresDate - new DateTime(1970, 1, 1, 0, 0, 0, 0);

            return Convert.ToUInt64(ts.TotalSeconds);
        }

        public string GetACSToken(string swt, string appliesTo)
        {
            // request a token from ACS
            WebClient client = new WebClient();
            client.BaseAddress = string.Format(@"https://{0}.{1}/", serviceNamespace, acsHost);

            NameValueCollection values = new NameValueCollection();
            values.Add("wrap_assertion_format", "SWT");
            values.Add("wrap_assertion", swt);
            values.Add("wrap_scope", appliesTo);

            string response = null;

            byte[] responseBytes = client.UploadValues("WRAPv0.9/", values);
            response = Encoding.UTF8.GetString(responseBytes);
            return HttpUtility.UrlDecode(response
                .Split('&')
                .Single(value => value.StartsWith("wrap_access_token=", StringComparison.OrdinalIgnoreCase))
                .Split('=')[1]);
        }
    }
}

Here is a sample that uses the above to get a token from ACS. In this example I am using it specifically for Service Bus (hence “-sb” in the service namespace).

string serviceNamespace = "-";
string issuerName = "-";
string issuerKey = "-";
            
string baseAddress= string.Format("http://{0}.servicebus.windows.net/",serviceNamespace);
string serviceAddress = string.Format("https://{0}.servicebus.windows.net/Text/GetText", serviceNamespace);

TokenFactory tf = new TokenFactory(string.Format("{0}-sb",serviceNamespace), issuerName, issuerKey);
string requestToken = tf.CreateRequestToken();
string returnToken = tf.GetACSToken(requestToken, baseAddress);

Console.WriteLine(requestToken);
Console.WriteLine(returnToken);
Console.ReadLine();

Maggie Myslinska’s Windows Azure Platform AppFabric Overview COS320 session video from Tech*Ed Australia 2010 is available for viewing:

Come learn how to use Windows Azure AppFabric (with Service Bus and Access Control) as building block services for Web-based and hosted applications, and how developers can leverage services to create applications in the cloud and connect them with on-premises systems...

Speaker(s): Maggie Myslinska
Event: Tech·Ed Australia
Year: 2010
Track: Cloud Computing and Online Services

Thanks to Clemens Vasters (@clemensv) for the heads up.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team posted on 9/12/2010 a list of links to New Windows Azure Videos Now Available on Channel 9:

Want to watch the experts discuss the Windows Azure platform and hear their tips to help you get started?  Then you'll want to take a look at these four videos that were just posted to Bytes by MSDN on Channel 9.

Bytes by MSDN: Adam Grocholski and Mickey Williams discuss Cloud Computing

In this video, Adam Grocholski and Mickey Williams talk about Thor, one of Adam's latest projects designed to help people get real-world experience using Windows Azure. Adam is a Microsoft .NET consultant, an early adopter of new technologies such as Silverlight and Windows Azure, and the founder of the Twin Cities Cloud Computing User Group. Mickey is vice president of the Technology Platform Group at Neudesic and a frequent contributor on Channel 9.

Bytes by MSDN: Chip Aubry and Mike Benkovich discuss Silverlight and Windows Azure

Watch Microsoft Senior Developer Evangelist Mike Benkovich and Chip Aubry, director of technology at Tribal DDB, talk about how DDB client McDonald's leverages Silverlight and Windows Azure to create and host their promotions.

Bytes by MSDN: Dianne O'Brien and Joe Healy discuss Windows Azure

In this video, Microsoft Developer Evangelist Joe Healy and Dianne O'Brien, senior director of Business Planning and Operations for Windows Azure, chat about cloud economics and associated benefits.

Bytes by MSDN: Bill Lodin and Mickey Williams discuss Windows Azure Resources

In this short interview with Mickey, Bill Lodin, Vice President of IT Mentors, shares his advice about how to delve into Windows Azure and suggests several useful resources to check out.


Adron Hall posted AWS, WordPress, Windows Azure, Clouds, and More… (UPDATED) on 9/12/2010:

A few weeks ago I published a comparison between hosting a WordPress blog on Windows Azure vs. Amazon Web Services (AWS). A major new feature at AWS has made the price shift even FURTHER in Amazon's favor. The release of micro-instances now makes it even cheaper, even on a pay-as-you-go setup.

The basic price difference is now about $100+ a month on Windows Azure and about $5-10 a month on AWS. If you're as wowed as I am by the awesomeness that is cloud computing feature sets and technology for a measly $5-10 a month, go check it out yourself.

Amazon Releases Micro-Instances


J. D. Meier described and diagrammed three Windows Azure App Types on 9/11/2010:

When you’re building applications, it helps to have the lay of the land so you can see the forest for the trees. This is a quick visual tour of some whiteboard solutions and common application patterns for Windows Azure.

Canonical Windows Azure Application
Here is a simple and common Azure application pattern:


One App, Multiple Clients
Here is an example of a whiteboard solution for one cloud application, with multiple clients:


Mapping to the Application Architecture Guide
Here’s an example of mapping an application to the canonical application architecture in the Microsoft Application Architecture Guide, second edition:



J. D. Meier extended his whiteboard diagramming above to 11 Windows Azure Application Patterns on 9/11/2010:

This is a quick visual tour of some whiteboard solutions and common application patterns for Windows Azure.  It’s a look at some of the most common whiteboard solutions for Web applications, Web services, and data on Windows Azure. 

Here are the app patterns at a glance:

  • Pattern #1 - ASP.NET Forms Auth to Azure Tables
  • Pattern #2 - ASP.NET Forms Auth to SQL Azure
  • Pattern #3 - ASP.NET to AD with Claims
  • Pattern #4 - ASP.NET to AD with Claims (Federation)
  • Pattern #5 - ASP.NET to WCF on Azure
  • Pattern #6 - ASP.NET On-Site to WCF on Azure
  • Pattern #7 - ASP.NET On-Site to WCF on Azure with Claims
  • Pattern #8 - REST with AppFabric Access Control
  • Pattern #9 - ASP.NET to Azure Storage
  • Pattern #10 - ASP.NET to SQL Azure
  • Pattern #11 - ASP.NET On-Site to SQL Azure Through WCF

Web Applications
Here are some common Web application patterns on Windows Azure:

Pattern #1 - ASP.NET Forms Auth to Azure Tables

ASP.NET Forms Auth to Azure Tables

Pattern #2 - ASP.NET Forms Auth to SQL Azure

ASP.NET Forms Auth to SQL Azure

Pattern #3 - ASP.NET to AD with Claims

ASP.NET to AD with Claims

Pattern #4 - ASP.NET to AD with Claims (Federation)

ASP.NET to AD with Claims - Federation

Web Services (SOAP) App Patterns on Windows Azure
Here are common Web service app patterns on Windows Azure:

Pattern #5 - ASP.NET to WCF on Azure

ASP.NET to WCF on Azure

Pattern #6 - ASP.NET On-Site to WCF on Azure

ASP.NET On-Site to WCF on Azure

Pattern #7 - ASP.NET On-Site to WCF on Azure with Claims

ASP.NET On-Site to WCF on Azure with Claims

REST App Pattern on Windows Azure
Here is a common REST application pattern on Windows Azure:

Pattern #8 - REST with AppFabric Access Control

REST with AppFabric Access Control

Data Patterns on Windows Azure
Here are common data application patterns on Windows Azure:

Pattern #9 - ASP.NET to Azure Storage

ASP.NET to Azure Storage

Pattern #10 - ASP.NET to SQL Azure

ASP.NET to SQL Azure

Pattern #11 - ASP.NET On-Site to SQL Azure Through WCF

ASP.NET On-Site to SQL Azure Through WCF

Contributors / Reviewers
Many thanks to the following folks for contribution, collaboration, and review:

  • External contributors and reviewers: Adam Grocholski; Andy Eunson; Bill Collette; Christopher Seary; Jason Taylor; John Daniels; Juval Lowy; Kevin Lam; Long Le; Michael Smith; Michael Stiefel; Michele Leroux Bustamante; Norman Headlam; Rockford Lhotka; Rudolph Araujo; Sarang Kulkarni; Steven Nagy; Terrance Snyder; Will Clevenger
  • Microsoft contributors and reviewers:  Akshay Aggarwal; Alik Levin; Andreas Fuchsberger; Babur Butter; Bharat Shyam; Dave Brankin; Danny Cohen; Diego Dagum; Don Willits; Eugenio Pace; Gabriel Morgan; Jeff Mueller; John Steer; Julian Gonzalez; Mark Curphey; Mohit Srivastava; Pat Filoteo; Rahul Verma; Raul Rojas; Scott Densmore; Sesha Mani; Serena Yeoh; Sriram Krishnan; Stefan Schackow; Steve Marx; Stuart Kwan; Terri Schmidt; Tobin Titus; Varun Sharma; Vidya Vrat Agarwal; Vikram Bhambri; Yale Li


<Return to section navigation list> 

Visual Studio LightSwitch

Jonathan Allen described Advanced Scenarios for LightSwitch in this 9/13/2010 post to the InfoQ blog:

LightSwitch brings together a number of technologies, including Silverlight, the Managed Extensibility Framework, and WCF RIA Services. If LightSwitch becomes popular, developers who understand these technologies will have a significant advantage over those who simply wire forms together using the design surfaces.

LightSwitch, Microsoft’s answer to lightweight CRUD-style applications, has recently been released in two parts. The first is the LightSwitch Beta 1, which is matched with a Training Kit. We have already reported on the core functionality, which is the ability to quickly create simple business applications using a variety of backend technologies. Today’s piece looks at some of the advanced features.

The first scenario covered by the Training Kit is building a custom control. When doing normal programming with WPF or Silverlight, creating custom controls is more or less just something to do if you happen to need one. With LightSwitch they take on much more significance. In effect you have two classes of developers, those who simply wire together controls and those who define them.

Control creation for LightSwitch isn’t exactly easy: five projects are needed for a new control. First there is the control itself, the “Client” project, which is coded in Silverlight. This is paired with a “Common” project, also in Silverlight, that has metadata about the control. Next up is the “Designer” project, which is used by the LightSwitch design surfaces inside Visual Studio. Since this code is run directly by Visual Studio it must be in CLR 4. Part of the reason for these projects is that you need to hook into extension points exposed via the Managed Extensibility Framework.

Once all of the code is done you still need to package the control. There are two packaging projects, one is specific to LightSwitch while the other is a normal VSIX project. If you don’t recall, that is the package format for any Visual Studio extension. To test your control library you must install the VSIX package, at which point you can start using the new control in your LightSwitch application.

The second “advanced” scenario really shouldn’t be. LightSwitch accesses non-SQL-based data sources via WCF RIA Services. These are built like normal RIA services, optionally with the client access part turned off so that only LightSwitch applications can access them. The normal RIA operations such as initialize, submit, query, insert, update, and delete will be needed. While building RIA services is strangely more difficult than it seems like it should be, at least there are no VSIX packages to worry about.
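
For reference, the skeleton of such a domain service is compact. Below is a minimal sketch assuming a hypothetical Customer entity backed by an in-memory list; a real service would target a data store, and the update and delete operations follow the same naming pattern:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.ServiceModel.DomainServices.Server;

// Sketch of a WCF RIA Services domain service that LightSwitch can
// attach to as a data source. Omit [EnableClientAccess] if only
// LightSwitch applications should be able to reach it.
[EnableClientAccess]
public class CustomerDomainService : DomainService
{
    private static readonly List<Customer> store = new List<Customer>();

    [Query(IsDefault = true)]
    public IQueryable<Customer> GetCustomers()
    {
        return store.AsQueryable();
    }

    [Insert]
    public void InsertCustomer(Customer customer)
    {
        store.Add(customer);
    }
}

public class Customer
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
}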

A warning: the training kit does not install itself in the normal places such as Program Files or the Start menu. So if you can’t find it later, look for a folder called “LightSwitchTK”. On my test machine this was installed under the root directory.

Tim Anderson (@timanderson) counters rumors that Microsoft’s continued investment in Silverlight, which provides the underpinnings of VSLS, is threatened by the potential of HTML 5.0, in his Microsoft’s Scott Guthrie: We have 200+ engineers working on Silverlight and WPF post of 9/13/2010:

Microsoft is countering rumours that WPF (Windows Presentation Foundation) or Silverlight, a cross-platform browser plug-in based on the same XAML markup language and .NET programming combination as WPF, are under any sort of threat from HTML 5.0.

“We have 200+ engineers right now working on upcoming releases of SL and WPF – which is a heck of a lot.”

says Corporate VP .NET Developer Platform Scott Guthrie in a Twitter post. Other comments include this one:

“We are investing heavily in Silverlight and WPF.”

and this one:

“We just shipped Silverlight for Windows Phone 7 last week, and WPF Ribbon about 30 days ago: http://bit.ly/aB6e6X.”

In addition, Microsoft has been showing off IIS Media Services 4.0 at the International Broadcasting Conference, which uses Silverlight as the multimedia client:

“Key new features include sub-two-second low-latency streaming, transmuxing between H.264 file formats and integrated transcoding through Microsoft Expression Encoder 4. Microsoft will also show technology demonstrations of Silverlight Enhanced Movies, surround sound in Silverlight and live 3-D 1080p Internet broadcasting using IIS Smooth Streaming and Silverlight technologies.”

No problem then? Well, Silverlight is great work from Microsoft: powerful, flexible, and surprisingly small and lightweight for what it can do. Combined with ASP.NET or Windows Azure it forms part of an excellent cloud-to-client .NET platform. Rumours of internal wrangling aside, the biggest issue is that Microsoft seems reluctant to grasp its cross-platform potential, leaving it as a Windows and desktop Mac solution just at the time when iPhone, iPad and Android devices are exploding in popularity. [Emphasis added.] 

I will be interested to see if Microsoft announces Silverlight for Android this autumn, and if it does, how long it will take to deliver. The company could also give more visibility to its work on Silverlight for Symbian – maybe this will come more into the spotlight following the appointment of Stephen Elop, formerly of Microsoft, as Nokia CEO.

Apple is another matter. A neat solution I’ve seen proposed a few times is to create a Silverlight-to-JavaScript compiler along the lines of GWT (Google Web Toolkit) which converts Java to JavaScript. Of course it would also need to convert XAML layout to SVG. Incidentally, this could also be an interesting option for Adobe Flash applications.

As for WPF, I would be surprised if Microsoft is giving it anything like the attention being devoted to Silverlight, unless the Windows team has decided to embrace it within the OS itself. That said, WPF is already a mature framework. WPF will not go away, but I can readily believe that its future progress will be slow.

<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie (@lmacvittie) claimed Too often software design patterns are overlooked by network and application delivery network architects, but these patterns are equally applicable to addressing a broad range of architectural challenges in the application delivery tier of the data center in a preface to her Applying Scalability Patterns to Infrastructure Architecture post of 9/13/2010:

The “High Scalability” blog is fast becoming one of my favorite reads. Last week did not disappoint, with a post highlighting a set of scalability design patterns that was, apparently, inspired by yet another High Scalability post on “6 Ways to Kill Your Servers: Learning to Scale the Hard Way.”

This particular post caught my attention primarily because although I’ve touched on many of these patterns in the past, I’ve never thought to call them what they are: scalability patterns. That’s probably a side-effect of forgetting that building an architecture of any kind is at its core computer science, and thus algorithms and design patterns are applicable to both micro- and macro-architectures, such as those used when designing a scalable architecture.

This is actually more common than you’d think, as it’s rarely the case that a network guy and a developer sit down and discuss scalability patterns over beer and deep-fried cheese curds (hey, I live in Wisconsin and it’s my blog post so just stop making faces until you’ve tried it). Developers and architects sit over there and think about how to design a scalable application from the perspective of its components – databases, application servers, middleware, etc… Network architects sit over here and think about how to scale an application from the perspective of network components – load balancers, trunks, VLANs, and switches. The thing is that the scalability patterns leveraged by developers and architects can almost universally be abstracted and applied to the application delivery network – the set of components integrated as a means to ensure availability, performance, and security of applications. That’s why devops is so important and why devops has to bring dev into ops as much as it’s necessary to bring some ops into dev. There needs to be more cross-over, more discussion, between the two groups, if not an entirely new group, in order to leverage the knowledge and skills that each has in new and innovative ways.

ABSTRACT and APPLY

So the aforementioned post is just a summary of a longer and more detailed post, but for purposes of this post I think the summary will do, with the caveat that the original, “Scalability patterns and an interesting story...” by Jesper Söderlund, is a great read that should definitely be on your “to read” list in the very near future.

For now, let’s briefly touch on the scalability patterns and sub-patterns Jesper described, with some commentary on how they fit into scalability from a network and application delivery network perspective. The original text from the High Scalability blog is in red(dish) text.

  • Load distribution - Spread the system load across multiple processing units
    This is a horizontal scaling strategy that is well-understood. It may take the form of “clustering” or “load balancing” but in both cases it is essentially an aggregation coupled with a distributed processing model. The secret sauce is almost always in the way in which the aggregation point (strategic point of control) determines how best to distribute the load across the “multiple processing units.”  
    • load balancing / load sharing - Spreading the load across many components with equal properties for handling the request
      This is what most people think of when they hear “load balancing”; it’s just that at the application delivery layer we think in terms of directing application requests (usually HTTP, but it can be just about any application protocol) to equal “servers” (physical or virtual) that handle the request. This is a “scaling out” approach that is most typically associated today with cloud computing and auto-scaling: launch additional clones of applications as virtual instances in order to increase the total capacity of an application. The load balancing distributes requests across all instances based on the configured load balancing algorithm.
    • Partitioning - Spreading the load across many components by routing an individual request to a component that owns that specific data
      This is really where the architecture comes in and where efficiency and performance can be dramatically increased in an application delivery architecture. Rather than each instance of an application being identical to every other one, each instance (or pool of instances) is designated as the “owner”. This allows devops to tweak configurations of the underlying operating system and web and application server software for the specific type of request being handled. This is also where the difference between “application switching” and “load balancing” becomes abundantly clear, as “application switching” is used as a means to determine where to route a particular request, which is/can be then load balanced across a pool of resources. It’s a subtle distinction but an important one when architecting not only efficient and fast but resilient and reliable delivery networks.
        • Vertical partitioning - Spreading the load across the functional boundaries of a problem space, separate functions being handled by different processing units
          When it comes to routing application requests we really don’t separate by function unless that function is easily associated with a URI. The most common implementation of vertical partitioning at the application switching layer will be by content. Example: creating resource pools based on the Content-Type HTTP header: images in pool “image servers” and content in pool “content servers”. This allows for greater optimization of the web/application server based on the usage pattern and the content type, which can often also be related to a range of sizes. This also, in a distributed environment, allows architects to leverage say cloud-based storage for static content while maintaining dynamic content (and its associated data stores) on-premise. This kind of hybrid cloud strategy has been postulated as one of the most common use cases since the first wispy edges of cloud were seen on the horizon.
        • Horizontal partitioning - Spreading a single type of data element across many instances, according to some partitioning key, e.g. hashing the player id and doing a modulus operation, etc. Quite often referred to as sharding. (A short code sketch follows this list.)
          This sub-pattern is in line with the way in which persistence-based load balancing is accomplished, as well as the handling of object caching. This also describes the way in which you might direct requests received from specific users to designated instances that are specifically designed to handle their unique needs or requirements, such as the separation of “gold” users from “free” users based on some partitioning key, which in HTTP land is often a cookie containing the relevant data.
    • Queuing and batch - Achieve efficiencies of scale by processing batches of data, usually because the overhead of an operation is amortized across multiple requests
      I admit defeat in applying this sub-pattern to application delivery. I know, you’re surprised, but this really is very specific to middleware and aside from the ability to leverage queuing for Quality of Service (QoS) at the delivery layer this one is just not fitting in well. If you have an idea how this fits, feel free to let me know – I’d love to be able to apply all the scalability patterns and sub-patterns to a broader infrastructure architecture.
      • Relaxing of data constraints - Many different techniques and trade-offs with regards to the immediacy of processing / storing / access to data fall in this strategy
        This one takes us to storage virtualization and tiering and the way in which data storage and access is intelligently handled with varying properties based on usage and prioritization of the content. If one relaxes the constraints around access times for certain types of data, it is possible to achieve a higher-efficiency use of storage by subjugating some content to secondary and tertiary tiers that may not have the same performance attributes as your primary storage tier. And make no mistake, storage virtualization is a part of the application delivery network – has been since its inception – and as cloud computing and virtualization have grown so has the importance of a well-defined storage tiering strategy.
        We can bring this back up to the application layer by considering that a relaxation of data constraints with regards to immediacy of access can be applied by architecting a solution that separates data reads from writes. This implies eventual consistency, as data updated/written to one database must necessarily be replicated to the databases from which reads are, well, read, but that’s part of relaxing a data constraint. This is a technique used by many large, social sites such as Facebook and Plenty of Fish in order to scale the system to the millions upon millions of requests it handles in any given hour.
      • Parallelization - Work on the same task in parallel on multiple processing units
        I’m not going to be able to apply this one either, unless it was in conjunction with optimizing something like MapReduce and SPDY. I’ve been thinking hard about this one, and the problem is the implication that “same task” is really the “same task”, and that processing is distributed. That said, if the actual task can be performed by multiple processing units, then an application delivery controller could certainly be configured to recognize that a specific URL should be essentially sent to some other proxy/solution that performs the actual distribution, but the processing model here deviates sharply from the request-reply paradigm under which most applications today operate.
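
Before moving on, here is the horizontal-partitioning rule above reduced to code, as promised: hash the partitioning key and take a modulus to pick the pool that owns the data. This is a minimal sketch; a production system would use a stable hash, since string.GetHashCode() is not guaranteed consistent across processes or runtime versions:

using System;

public static class Shard
{
    public static int SelectPool(string partitionKey, int poolCount)
    {
        int hash = partitionKey.GetHashCode() & 0x7fffffff; // clear the sign bit
        return hash % poolCount; // e.g. SelectPool("player-42", 4) is stable per key
    }
}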
DEVOPS CAN MAKE THIS HAPPEN

I hate to sound off too much on the “devops” trumpet, but one of the primary ways in which devops will be of significant value in the future is exactly this type of practical implementation. Only by recognizing that many architectural patterns are applicable to not only application but also infrastructure architecture can we start to apply a whole lot of “lessons that have already been learned” by developers and architects to emerging infrastructure architectural models. This abstraction and application of well-understood patterns from application design and architecture will be invaluable in designing the new network: the next iteration of network theory and implementation that will allow it to scale along with the applications it is delivering.

Photo credit: Michael Chow/azcentral.com


    Audrey Watters tries Defining "Platform" and "Platform-as-a-Service" in this 9/12/2010 post to the ReadWriteCloud blog:


    A couple of weeks ago, Alex Williams asked on the ReadWriteCloud weekly poll what people thought were the "worst terms" in cloud computing. The results were inconclusive. Or rather, there are a number of terms we dislike.

    "Cloud-in-a-box," "cloudstorming," and "cloudburst" led the pack with the most votes, the latter two suggesting that we may be tiring of weather metaphors in cloud marketing.

    But one of the terms that has recently been on the receiving end of criticism didn't make it onto Alex's list: platform-as-a-service. Or even just "platform."

    What is a Platform?

    Investor Brad Feld penned a rant about the term, noting that "platform" is becoming a buzzword bandied about without much concern for meaning. Feld writes, "In my little corner of the world, the word "platform" is a lot more precious. There are very few platforms. You aren't a platform until you have a zillion users (well, at least 100 million). Until then, feel free to call yourself a "junior platform" or an "aspiring platform." Or, call yourself an "application", which is what you most likely are."

    By extension, Gartner Research Fellow David Smith has also railed against the term platform-as-a-service, asking "Is PaaS passe yet?" "What really put me over the edge," he says, "was a conversation about another vendor's offering of "PaaS as a Product". To me this is as absurd as General Motors, who builds cars that can be used as taxis (and could be called 'car as a service' (or CaaS)) referring to their cars as "CaaS as a car" or "CaaS as a product". Pretty absurd isn't it? That's just the beginning of the absurdity that the PaaS term leads to."

    Smith questions both the "service" and the "platform" portions of the term, arguing in the first case, that most often PaaS offerings are "cloud enabling products," not services.

    Everything's a Platform?

In the case of platform, Smith writes, "Platform refers to an extensible entity, something that is built upon. It resides on top of infrastructure and below applications (also relative terms). At Gartner, we have referred to all of the levels (not just "PaaS") as types of web/cloud platforms. After all, don't many of us also refer to Amazon as an IaaS platform when we are talking about building on top of it? What if what we are building is a "PaaS" on top of it? What would we call that? A PaaS Platform? Here we go getting absurd again." He suggests that for many of the products that are labeled as PaaS, a better description might be "middleware for the cloud."

    Of course, at some point, we will probably wear out the anything-as-a-service (which makes a rather fitting acronym, I'd say) phrase. But in the meantime how do you define platform and platform-as-a-service? And does it matter that there's so much annoyance - and confusion - at the term?

    Photo credits: KISS lead singer Paul Stanley's platform shoes, plus duct tape. Photograph by Gil Kaufman, via Boing Boing


    Vincent-Philippe Lauzon lauded Microsoft Support for Windows Azure: A+ on 9/2/2010 (missed when posted):

    The other day I found myself having an issue while finalizing a proof of concept on Windows Azure.

    I was unable to deploy my solution.  It was stuck in an initializing / stopping / pending cycle forever.

    After a while (a few days), having tried different combinations of staging / production, manual / Visual Studio deployment, etc., I went looking for help online, then to support.

    I filed a support request and gave all the details I thought were pertinent.  I mentioned it wasn’t urgent, i.e., no business process was impacted (proofs of concept rank pretty low among business processes).

    I let it sleep.  I thought I might get an email in 1-2 weeks.

    The next morning, I got a call from their support!  The fellow opened a ticket and asked my permission to look at my account.  In the discussion I discovered the origin of this “code 18”:  I had forgotten to copy the WIF assemblies locally, so they were not deployed to the cloud and hence the application was unable to start (the web.config could not be loaded, I suppose, since one of its sections is defined in those assemblies).

    But my point is that the bloke didn’t drop the ball at any time.  He suggested a resolution, was ready to open my account and look into the problem himself, followed up by email, etc.  Just excellent service!

    I don’t know if it will continue like that over time, but so far, it has boosted my confidence in the platform.  Knowing that such enterprise-class support exists as a safety net is quite reassuring!  I’ve dealt with third-party hosting facilities and never came across anything like it.

    So I give Windows Azure support an A+.  Well done!

    I’m sure the Azure Team appreciates Vincent-Philippe’s post.
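
    The root cause Vincent-Philippe describes is common enough to be worth illustrating. Windows Azure guest VMs didn’t include the Windows Identity Foundation runtime at the time, so the WIF assembly has to travel inside the service package, which means flipping Copy Local to True on the reference. A minimal, illustrative project-file fragment follows; the version and public key token shown are those of the WIF 1.0 runtime assembly, but check your own install:

        <!-- Illustrative .csproj fragment: Private (Copy Local) = True emits the
             WIF assembly into the deployment package. Without it,
             Microsoft.IdentityModel is missing in the cloud VM and the role
             never finishes starting. -->
        <Reference Include="Microsoft.IdentityModel, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35">
          <Private>True</Private>
        </Reference>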


    <Return to section navigation list> 

    Windows Azure Platform Appliance (WAPA)

    Matt Prigge claimed “Private clouds are getting a lot of buzz, but where do they actually fit in today's IT landscape?” in a deck for his “Finding a home for the private cloud” article of 8/13/2010 for InfoWorld:

    image Like it or not, the private cloud is hot. It was the talk of VMworld 2010 -- in fact, it was nearly the only topic of conversation. But it's increasingly clear that many enterprises, while intrigued by the concept, aren't at all sure about the benefits. Making sense of where the private cloud fits into the IT landscape isn't as straightforward as many would have us believe.

    image First off, it's important to step back and take a good look at what a "private cloud" actually is. As I've explored in some of my more critical posts about the cloud in general, much of what makes up the cloud isn't new at all. The same is largely true of private clouds. In many ways, the private cloud is a relabeling of what many large enterprises have already been doing -- yet there is a difference.

    The private cloud unites virtual and sometimes physical server management behind a framework of management software that implements automated self-service resource provisioning and chargeback. These features are not new in any sense -- though they have tended to be limited to high-end implementations, due to the complexity of configuring them in heterogeneous environments.

    So here's how the private cloud is different: It's built to mimic the functionality of a public, multitenant cloud rather than simply automating a collection of privately managed server resources. This difference is subtle, yet very important.

    Stand-alone provisioning and chargeback systems, as deployed in certain high-end environments today, increase IT efficiency and more accurately reflect the cost of IT services. Those same services delivered as part of a private cloud shift the IT organization into the role of a cloud service provider -- creating an even greater degree of abstraction between resource consumers and the actual server, storage, and networking resources.

    Personally, I think that's great. The more separation there is between the applications that run the business and the infrastructure that powers them, the better. Too often I've seen the perceived needs of a single application drive outlandish infrastructure investments that would have been far better spent elsewhere. By stuffing everything into an externally opaque cloud and charging back on a per-usage basis, such lopsided investments can be neatly avoided -- allowing IT to invest in infrastructure based on need.
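
    To make Matt's per-usage chargeback point concrete, here is a minimal C# sketch -- every name and rate is invented for illustration, not drawn from any vendor's product -- of metering consumption and billing each business unit only for what it actually used:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical usage record emitted by the private cloud's metering layer.
        class UsageRecord
        {
            public string BusinessUnit;
            public double CpuHours;
            public double StorageGbDays;
            public double GbTransferred;
        }

        static class Chargeback
        {
            // Illustrative unit rates; a real system would read these from policy.
            const decimal CpuHourRate = 0.12m;
            const decimal StorageGbDayRate = 0.01m;
            const decimal TransferGbRate = 0.08m;

            static decimal Cost(UsageRecord u)
            {
                return (decimal)u.CpuHours * CpuHourRate
                     + (decimal)u.StorageGbDays * StorageGbDayRate
                     + (decimal)u.GbTransferred * TransferGbRate;
            }

            static void Main()
            {
                var usage = new List<UsageRecord>
                {
                    new UsageRecord { BusinessUnit = "Marketing", CpuHours = 1200, StorageGbDays = 9000, GbTransferred = 150 },
                    new UsageRecord { BusinessUnit = "Finance",   CpuHours = 300,  StorageGbDays = 2000, GbTransferred = 20 }
                };

                // Bill each unit for actual consumption instead of allocating
                // infrastructure spend up front to whoever shouted loudest.
                foreach (var group in usage.GroupBy(u => u.BusinessUnit))
                    Console.WriteLine("{0}: {1:C}", group.Key, group.Sum(Cost));
            }
        }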


    Links to a WMV high-def video and slide deck of Gareth James’ MGT310 Building the Private Cloud - Part 1 - An Overview TechEd Australia 2010 breakout session are available for download:

    • Session Type: Breakout Session
    • Track: Cloud Computing and Online Services, Management, Windows Server
    • Speaker(s): Gareth James
    • WMV High-Def Video
    • Slides

    imageThis session explores the Dynamic Infrastructure Toolkit for System Center (DIT-SC): a free, extensible turnkey solution that enables data centers to dynamically pool, allocate, and manage resources, delivering Infrastructure as a Service (IaaS). This solution integrates with System Center and provides ready-to-use components such as an Administration Portal, a Self-Service Provisioning Portal and a lightweight Provisioning Engine that allows for the seamless integration of the Dynamic Data Center components. This session illustrates how DIT-SC enables IT implementers to create agile, virtualized IT infrastructures via easy configuration and allocation of resources based on an automated Business Unit IT (BUIT) on-boarding process. This session also demonstrates how the DIT-SC self-service portal allows the BUIT to self-provision infrastructure to deploy their application environments.


    Links to a WMV high-def video and slide deck of Gareth James’ MGT311 Building the Private Cloud - Part 2 - Diving Deeper TechEd Australia 2010 breakout session are available for download:

    • Session Type: Breakout Session
    • Track: Cloud Computing and Online Services, Management, Virtualisation
    • Speaker(s): Gareth James
    • WMV High-Def Video
    • Slides

    imageA deep dive into the extensibility engine of SCVMM SSP 2.0, using post-OS deployment with MDT and SCCM as an example.


    Links to a WMV high-def video and slide deck of Corey Adolphus’ MGT312 Private Cloud – Part 3 – Storage Efficiency and the Dynamic Data Centre TechEd Australia 2010 breakout session are available for download:

    • Session Type: Breakout Session
    • Track: Cloud Computing and Online Services, Management, Virtualisation, Windows Server
    • Speaker(s): Corey Adolphus
    • WMV High-Def Video
    • Slides

    imageThere's been a lot of discussion of cloud technology over the last couple of years, but very little in the way of tangible steps that you can follow for cloud deployment. NetApp has been collaborating with Microsoft to develop integrated technologies that simplify the process of cloud deployment. If you've already moved from a traditional data centre model to a virtualised data centre, the next logical step might be a private cloud. The main thing that differentiates a private cloud from a virtualised data centre is a management layer that allows you to treat your IT infrastructure as a pool of resources from which users or business units can request and receive resources automatically. Attend this presentation to learn how you can move to a private cloud with NetApp Storage integrated with Microsoft’s Dynamic Datacenter Toolkit (DDTK), System Center Operations Manager (SCOM) and System Center Virtual Machine Manager (SCVMM).


    <Return to section navigation list> 

    Cloud Security and Governance

    David Linthicum discussed The Rise of Cloud Governance in a brief 9/12/2010 post to ebizQ’s Where SOA Meets Cloud blog:

    As we get into more complex cloud computing solutions, the need for governance is back on the radar screen.

    image Although most have not noticed, there has been a focus lately on management and governance of clouds. The approach is logical: provide the ability to set policies within a cloud computing system -- private, public, or hybrid -- such as policies that drive security, proper use, or, most importantly, compliance.

    image We're beginning to see these types of cloud governance systems within most private cloud computing technology offerings, including those from VMware and IBM. However, the larger story here is the rise of smaller players that provide this functionality in more open and heterogeneous ways.

    There are two flavors here. First is the old-guard SOA governance technology, such as solutions from Layer 7 or AmberPoint (now part of Oracle). These provide runtime SOA governance, meaning that they both create and enforce the policies that exist around services, cloud or not. While not created specifically for the cloud, they have morphed in that direction just as SOA has morphed toward the cloud.

    On the new-guard side, we have solutions such as Abiquo, which provides a cloud management and governance solution able to span many different types of virtualization systems/hypervisors. Again, it supports creating and enforcing policies from the cloud management product, as well as providing other management features exposed through a well-designed abstraction layer.

    This addresses a huge hole in cloud computing. Indeed, without a strong governance and management strategy -- and the technology to enable it -- the move to cloud computing won't be possible.
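
    As a rough illustration of what "create and enforce policies" can mean in practice, here is a hedged C# sketch of a tiny policy engine. Every class and rule here is hypothetical -- real products such as those David mentions are far richer -- but it shows the shape of the idea: each policy inspects a self-service request and can veto it:

        using System;
        using System.Collections.Generic;

        // Hypothetical self-service provisioning request.
        class CloudRequest
        {
            public string User;
            public string Region;   // where the workload's data will live
            public int VmCount;
        }

        interface IGovernancePolicy
        {
            string Name { get; }
            bool Allows(CloudRequest request);
        }

        // Compliance: data may only be placed in approved regions.
        class DataResidencyPolicy : IGovernancePolicy
        {
            static readonly HashSet<string> Approved = new HashSet<string> { "EU", "US" };
            public string Name { get { return "DataResidency"; } }
            public bool Allows(CloudRequest r) { return Approved.Contains(r.Region); }
        }

        // Proper use: cap self-service requests to prevent runaway provisioning.
        class QuotaPolicy : IGovernancePolicy
        {
            public string Name { get { return "Quota"; } }
            public bool Allows(CloudRequest r) { return r.VmCount <= 10; }
        }

        static class PolicyEngine
        {
            static readonly List<IGovernancePolicy> Policies = new List<IGovernancePolicy>
            {
                new DataResidencyPolicy(),
                new QuotaPolicy()
            };

            static bool Authorize(CloudRequest request)
            {
                foreach (var policy in Policies)
                    if (!policy.Allows(request))
                    {
                        // A real engine would also write an audit trail here.
                        Console.WriteLine("Denied by policy: " + policy.Name);
                        return false;
                    }
                return true;
            }

            static void Main()
            {
                var request = new CloudRequest { User = "alice", Region = "APAC", VmCount = 4 };
                Console.WriteLine(Authorize(request));   // False: region not approved
            }
        }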

    William Vambenepe (@vambenepe) posted The PaaS Lament: In the Cloud, application administrators should administrate applications on 9/12/2010:

    image Some organizations just have “systems administrators” in charge of their applications. Others call out an “application administrator” role, but it is usually overloaded: it doesn’t separate the application platform administrator from the true application administrator. The first is in charge of the application runtime infrastructure (e.g., the application server, SOA tools, MDM, IdM, message bus, etc.). The second is in charge of the applications themselves (e.g., Java applications and the various artifacts that are used to customize the middleware stack to serve the application).

    imageIn effect, I am describing something close to the split between the DBA and the application administrators. The first step is to turn this duo (app admin, DBA) into a triplet (app admin, platform admin, DBA). That would be progress, but such a triplet is not actually what I am after, as it is too strongly tied to a traditional 3-tier architecture. What we really need is a first-order separation between the application administrator and the infrastructure administrator (not the plural). And then, if needed, a second-order split between a handful of different infrastructure administrators, one of which may be a DBA (or a DBA++, having expanded to all data storage services, not just relational), another of which may be an application platform administrator.

    There are two reasons for the current unfortunate amalgam of the “application administrator” and “application platform administrator” roles. A bad one and a good one.

    The bad reason is a shortcoming of the majority of middleware products. While they generally do a good job on performance, reliability and developer productivity, they generally do a poor job of cleanly separating the performance/administration functions that are relevant to the runtime from those that are relevant to the deployed applications. Their usual role definitions are structured more along the lines of what actions you can perform than of which entities you can perform them on. From a runtime perspective, the applications are not well isolated from one another either, which means that in real life you have to consider the entire system (the middleware and all deployed applications) if you want to make changes in a safe way.
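
    A toy C# fragment (all names invented, no particular middleware's API) may make the action-versus-entity distinction concrete: an action-scoped role lets an "Operator" restart any deployed application, while an entity-scoped grant pairs the action with a specific application -- exactly the isolation the application-administrator split requires:

        using System;
        using System.Collections.Generic;

        // Toy illustration: middleware roles usually answer "what actions may
        // this user perform?" when a clean app-admin split needs "on which
        // applications may they perform them?"
        class User
        {
            public HashSet<string> Roles = new HashSet<string>();
            public HashSet<Tuple<string, string>> Grants = new HashSet<Tuple<string, string>>();
        }

        static class Demo
        {
            // Action-scoped: an "Operator" may restart ANY deployed application.
            static bool MayRestartActionScoped(User u)
            {
                return u.Roles.Contains("Operator");
            }

            // Entity-scoped: the grant names both the action and the application,
            // confining an application administrator to the apps they own.
            static bool MayRestartEntityScoped(User u, string application)
            {
                return u.Grants.Contains(Tuple.Create("Restart", application));
            }

            static void Main()
            {
                var appAdmin = new User();
                appAdmin.Grants.Add(Tuple.Create("Restart", "PayrollApp"));

                Console.WriteLine(MayRestartEntityScoped(appAdmin, "PayrollApp"));  // True
                Console.WriteLine(MayRestartEntityScoped(appAdmin, "BillingApp"));  // False
            }
        }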

    The good reason for the current lack of separation between application administrators and middleware administrators is that middleware products have generally done a good job of supporting development innovation and optimization. Frameworks appear and evolve to respond to the challenges encountered by developers. Knobs and dials are exposed which allow heavy customization of the runtime to meet the performance and feature needs of a specific application. With developers driving what middleware is used and how it is used, it’s a natural consequence that the middleware is managed in tight correlation with how the application is managed.

    Just like there is tension between DBAs and the “application people” (application administrators and/or developers), there is an inherent tension in the split I am advocating between application management and application platform management. The tension flows from the previous paragraph (the “good reason” for the current amalgam): a split between application administrators and application platform administrators would have the downside of dampening application platform innovation. Or rather it redirects it, in a mutation not unlike the move from artisans to industry. Rather than focusing on highly-specialized frameworks and highly-tuned runtimes, the application platform innovation is redirected towards the goals of extreme cost efficiency, high reliability, consistent security and scalability-by-default. These become the main objectives of the application platform administrator. In that perspective, the focus of the application architect and the application administrator needs to switch from taking advantage of the customizability of the runtime to optimize local-node performance towards taking advantage of the dynamism of the application platform to optimize for scalability and economy.

    Innovation in terms of new frameworks and programming models takes a hit in that model, but there are ways to compensate. The services offered by the platform can be at different levels of generality. The more generic ones can be used to host innovative application frameworks and tools. For example, a highly-specialized service like an identity management system is hard to use for another purpose, but on the other hand a JVM can be used to host not just business applications but also platform-like things like Hadoop. They can run in the “application space” until they are mature enough to be incorporated in the “application platform space” and become the responsibility of the application platform administrator.

    The need to keep a door open for innovation is part of why, as much as I believe in PaaS, I don’t think IaaS is going away anytime soon. Not only do we need VMs for backward-looking legacy apps, we also need polyvalent platforms, like a VM, for forward-looking purposes, to allow developers to influence platform innovation, based on their needs and ideas.

    Forget the guillotine, maybe I should carry an axe around. That may help get the point across, that I want to slice application administrators in two, head to toe. PaaS is not a question of runtime. It’s a question of administrative roles.

    Related posts:

    1. The necessity of PaaS: Will Microsoft be the Singapore of Cloud Computing?
    2. Desirable technical characteristics of PaaS
    3. PaaS portability challenges and the VMforce example
    4. The battle of the Cloud Frameworks: Application Servers redux?
    5. Cloud platform patching conundrum: PaaS has it much worse than IaaS and SaaS
    6. Generalizing the Cloud vs. SOA Governance debate

    David Kearns reported “More than 80% of company frauds in the study were committed by individuals in accounting, operations, sales, executive/upper management, customer service or purchasing” in his Survey highlights need for access controls article of 9/10/2010 for NetworkWorld Security blog:

    We can't seem to get far from data/identity governance issues this month. Dave McKee, from Aveksa (he's actually director, senior media strategy at Schwartz Communications doing PR for Aveksa) just sent me a link to a new survey from the Association of Certified Fraud Examiners (ACFE) entitled "Who is Most Likely to Commit Fraud at Your Company? Profile of a Fraudster Drawn from ACFE Survey of Experts."

    The ACFE's 2010 Report to the Nations on Occupational Fraud and Abuse is based on data compiled from a study of 1,843 cases of occupational fraud that occurred worldwide between January 2008 and December 2009.

    The report is only available to members, but McKee did share some interesting highlights that call out the need for access controls, including:

    • More than 80% of the frauds in the study were committed by individuals in one of six departments: accounting, operations, sales, executive/upper management, customer service or purchasing.
    • More than 85% of fraudsters in the study had never been previously charged or convicted for a fraud-related offense.
    • Organizations tend to over-rely on audits. External audits were the control mechanism most widely used by the victims in the survey, but they ranked comparatively poorly in both detecting fraud and limiting losses due to fraud. Audits are clearly important and can have a strong preventative effect on fraudulent behavior, but they should not be relied upon exclusively for fraud detection.
    • A lack of internal controls, such as segregation of duties, was cited as the biggest deficiency in 38% of the cases.

    I find the first point, about the departments housing fraud perpetrators, very interesting. For the past 25 years, the most frequent question I've been asked is how to control the sys admin/network admin's access to sensitive/proprietary data. Yet IT isn't even in the top six of the departments where fraudsters were found. Of course, it could be that IT types cover their tracks better.

    I can't emphasize enough that audits, while necessary, are of little value in stopping fraud. Audits tend to discover last year's fraud (or even earlier), not what's happening today. Only good data/identity governance apps can do that.
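
    For developers wondering what the "segregation of duties" control cited in the survey looks like in application code, here is a minimal, hypothetical C# sketch (all names invented): the person who raises a transaction can never be the person who approves it:

        using System;

        // Hypothetical payment request flowing through an approval workflow.
        class PaymentRequest
        {
            public string RequestedBy;
            public decimal Amount;
            public string ApprovedBy;
        }

        static class SodControl
        {
            public static void Approve(PaymentRequest request, string approver)
            {
                // The control the ACFE report found missing in 38% of cases:
                // no single individual may hold both ends of a transaction.
                if (string.Equals(request.RequestedBy, approver, StringComparison.OrdinalIgnoreCase))
                    throw new InvalidOperationException(
                        "Segregation of duties: requester may not self-approve.");

                request.ApprovedBy = approver;
                Console.WriteLine("{0:C} approved by {1}", request.Amount, approver);
            }

            static void Main()
            {
                var req = new PaymentRequest { RequestedBy = "alice", Amount = 9500m };
                Approve(req, "bob");     // fine
                // Approve(req, "alice"); // would throw: requester == approver
            }
        }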


    <Return to section navigation list> 

    Cloud Computing Events

    imageI updated on 9/13/2010 my List of Cloud Computing and Online Services Sessions at TechEd Australia 2010 of 8/24/2010 with links to WMV high-definition videos and/or slide decks for all Tech*Ed Australia 2010 sessions except one.

    Rachfcollier asked Will you be attending our free online [UK] cloud conference? on 9/13/2010:

    image Want to win a copy of Windows Azure Platform: Articles from the Trenches Volume One, by editor/copy and paste guru, Eric Nelson (and 15 authors smarter than him)?  Then read on. We’ll send a book to the first 20 people who register for our October TechDays Online Azure Conference in London, UK, detailed below.

    imageMake a date in your diary for the first free TechDays online conference on 8th October, showcasing everything you should know about the Windows Azure Platform.

    imageYou’ll have access to three virtual rooms full of the information you’ve been looking for on the Windows Azure Platform, SQL Azure and a lot of other awesome cloud stuff. Settle yourself in the Cirrus Room for a high-level summary of the key technologies from Microsoft UK rock stars, including the one and only David Gristwood. Visit the Altocumulus Room to learn how these technologies are being used in the real world, or get comfy in the Stratocumulus Room for a deep dive into the low-level details with Community experts.

    Sign up and take a look at the full agenda here

    <Return to section navigation list> 

    Other Cloud Computing Platforms and Services

    Alex Popescu posted Pomegranate: A Solution for Storing Tiny Little Files to his myNoSQL blog on 9/13/2010:

    Storing small files is a problem that many (file) systems have tried to solve with varying degrees of success. Hadoop had to tackle the problem too, and came up with the Hadoop archive format. Pomegranate is a new distributed file system that focuses on improving the performance of storing and accessing small files:

    • It handles billions of small files efficiently, even in one directory;
    • It provides a separate, scalable caching layer, which can be snapshotted;
    • Its storage layer uses a log-structured store to absorb small-file writes and utilize the full disk bandwidth;
    • It builds a global namespace for both small files and large files;
    • It uses columnar storage to exploit temporal and spatial locality;
    • It uses a distributed extendible hash to index metadata (a toy sketch of the idea follows this list);
    • Its snapshot-able and reconfigurable caching increases parallelism and tolerates failures;
    • Pomegranate should be the first file system built over tabular storage, and the building experience should be valuable to the file system community.
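
    For readers unfamiliar with extendible hashing, here is a toy, single-node C# sketch of the idea (names and sizes invented; Pomegranate actually distributes the structure across metadata servers). The point is that when a bucket overflows, only that bucket's entries are rehashed and the directory of bucket pointers doubles cheaply, which is what lets one directory grow to billions of entries without global reshuffling:

        using System;
        using System.Collections.Generic;

        // Toy extendible hash: a growable directory of pointers to fixed-capacity
        // buckets, indexed by the low-order bits of the key's hash.
        class Bucket
        {
            public int LocalDepth;
            public Dictionary<string, string> Entries = new Dictionary<string, string>();
        }

        class ExtendibleHash
        {
            const int BucketCapacity = 4;
            int globalDepth = 1;
            readonly List<Bucket> directory = new List<Bucket>
            {
                new Bucket { LocalDepth = 1 },
                new Bucket { LocalDepth = 1 }
            };

            static int Hash(string key) { return key.GetHashCode() & 0x7fffffff; }
            int Slot(string key) { return Hash(key) & ((1 << globalDepth) - 1); }

            public void Put(string key, string value)
            {
                Bucket bucket = directory[Slot(key)];
                bucket.Entries[key] = value;
                if (bucket.Entries.Count <= BucketCapacity) return;

                // Overflow: double the directory (pointers only, no data) if this
                // bucket is already as deep as the directory, then split just it.
                if (bucket.LocalDepth == globalDepth)
                {
                    directory.AddRange(directory.ToArray());
                    globalDepth++;
                }
                bucket.LocalDepth++;
                var split = new Bucket { LocalDepth = bucket.LocalDepth };
                int highBit = 1 << (bucket.LocalDepth - 1);

                // Directory slots whose new distinguishing bit is 1 now point
                // at the split bucket.
                for (int i = 0; i < directory.Count; i++)
                    if (ReferenceEquals(directory[i], bucket) && (i & highBit) != 0)
                        directory[i] = split;

                // Rehash only the overflowing bucket's entries between the pair.
                foreach (var kv in new List<KeyValuePair<string, string>>(bucket.Entries))
                    if ((Hash(kv.Key) & highBit) != 0)
                    {
                        split.Entries[kv.Key] = kv.Value;
                        bucket.Entries.Remove(kv.Key);
                    }
                // A full implementation re-checks both buckets for overflow here.
            }

            public bool TryGet(string key, out string value)
            {
                return directory[Slot(key)].Entries.TryGetValue(key, out value);
            }
        }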

    A diagram of the Pomegranate architecture:

    [Pomegranate architecture diagram]

    Make sure you also read the ☞ post on Pomegranate by Jeff Darcy, who graciously answered my call for comments:

    • I can see how the Pomegranate scheme efficiently supports looking up a single file among billions, even in one directory (though the actual efficacy of the approach seems unproven). What’s less clear is how well it handles listing all those files, which is kind of a separate problem similar to range queries in a distributed K/V store.
    • Another thing I wonder about is the scalability of Pomegranate’s approach to complex operations like rename. There’s some mention of a “reliable multisite update service” but without details it’s hard to reason further. This is a very important issue because this is exactly where several efforts to distribute metadata in other projects – notably Lustre – have foundered. It’s a very very hard problem, so if one’s goal is to create something “worthy for [the] file system community” then this would be a great area to explore further.

    <Return to section navigation list> 
