Tuesday, November 30, 2010

Windows Azure and Cloud Computing Posts for 11/30/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are also available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Jai Haridas reported Changes in Windows Azure Storage Client Library – Windows Azure SDK 1.3 in an 11/30/2010 post to the Windows Azure Storage Team blog:

We recently released an update to the Storage Client library in SDK 1.3. We wanted to take this opportunity to go over some breaking changes that we have introduced and also list some of the bugs we have fixed (compatible changes) in this release.

Thanks for all the feedback you have been providing us via forums and this site, and as always please continue to do so as it helps us improve the client library.

Note: We have used Storage Client Library v1.2 to indicate the library version shipped with Windows Azure SDK 1.2 and v1.3 to indicate the library shipped with Windows Azure SDK 1.3.

Breaking Changes

1. Bug: FetchAttributes ignores blob type and lease status properties

In Storage Client Library v1.2, a call to FetchAttributes never checks whether the blob instance is of a valid type, since it ignores the BlobType property. For example, if a CloudPageBlob instance refers to a block blob in the blob service, FetchAttributes will not throw an exception when called.

In Storage Client Library v1.3, FetchAttributes records the BlobType and LeaseStatus properties. In addition, it will throw an exception when the blob type returned by the service does not match the type of the class being used (i.e. when a CloudPageBlob is used to represent a block blob or vice versa).
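
Here is a minimal sketch of the new behavior, assuming a CloudBlobClient named client as in the other snippets in this post and a hypothetical container "photos" that holds a block blob named "report.txt":

CloudBlobContainer container = client.GetContainerReference("photos");

// Deliberately use the wrong class: a CloudPageBlob that points at a block blob.
CloudPageBlob wrongTypeBlob = container.GetPageBlobReference("report.txt");

// v1.2: succeeds silently because BlobType is ignored.
// v1.3: throws, because the BlobType returned by the service (BlockBlob)
// does not match the CloudPageBlob class being used.
wrongTypeBlob.FetchAttributes();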

2. Bug: ListBlobsWithPrefix can display the same blob prefixes multiple times

Let us assume we have the following blobs in a container called photos:

  • photos/Brazil/Rio1.jpg
  • photos/Brazil/Rio2.jpg
  • photos/Michigan/Mackinaw1.jpg
  • photos/Michigan/Mackinaw2.jpg
  • photos/Seattle/Rainier1.jpg
  • photos/Seattle/Rainier2.jpg

Now to list the photos hierarchically, I could use the following code to get the list of folders under the container “photos”. I would then list the photos depending on the folder that is selected.

IEnumerable<IListBlobItem> blobList = client.ListBlobsWithPrefix("photos/");

foreach (IListBlobItem item in blobList)
{
    Console.WriteLine("Item Name = {0}", item.Uri.AbsoluteUri);
}

The expected output is:

  • photos/Brazil/
  • photos/Michigan/
  • photos/Seattle/

However, assume that the blob service returns Rio1.jpg through Mackinaw1.jpg along with a continuation marker that an application can use to continue the listing. The client library then continues the listing with the server using this continuation marker and receives the remaining items. Since the prefix photos/Michigan is repeated again for Mackinaw2.jpg, the client library incorrectly duplicates this prefix. If this happens, the result of the above code in Storage Client v1.2 is:

  • photos/Brazil/
  • photos/Michigan/
  • photos/Michigan/
  • photos/Seattle/

Basically, Michigan would be repeated twice. In Storage Client Library v1.3, we collapse this to always provide the same result for the above code irrespective of how the blobs may be returned in the listing.

3. Bug: CreateIfNotExist on a table, blob or queue container does not handle container being deleted

In Storage Client Library v1.2, when a deleted container is recreated before the service’s garbage collection finishes removing the container, the service returns HttpStatusCode.Conflict with StorageErrorCode ResourceAlreadyExists and extended error information indicating that the container is being deleted. This error code is not handled by Storage Client Library v1.2; it instead returns false, giving the perception that the container already exists.

In Storage Client Library v1.3, we throw a StorageClientException with ErrorCode = ResourceAlreadyExists and the exception’s ExtendedErrorInformation error code set to XXXBeingDeleted (ContainerBeingDeleted, TableBeingDeleted or QueueBeingDeleted). This exception should be handled by the client application and the operation retried after a period of 35 seconds or more.

One approach to avoid this exception while deleting and recreating containers/queues/tables is to use dynamic (new) names when recreating instead of using the same name.
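
A hedged sketch of the retry pattern described above (the container name is illustrative, and the 35-second wait follows the guidance in this post):

CloudBlobContainer container = client.GetContainerReference("photos");

bool done = false;
while (!done)
{
    try
    {
        // Returns false if the container already exists, which is fine here.
        container.CreateIfNotExist();
        done = true;
    }
    catch (StorageClientException ex)
    {
        if (ex.ErrorCode == StorageErrorCode.ResourceAlreadyExists &&
            ex.ExtendedErrorInformation != null &&
            ex.ExtendedErrorInformation.ErrorCode == "ContainerBeingDeleted")
        {
            // The old container is still being garbage collected; wait and retry.
            System.Threading.Thread.Sleep(TimeSpan.FromSeconds(35));
        }
        else
        {
            throw;
        }
    }
}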

4. Bug: CloudTableQuery retrieves up to 100 entities with Take rather than the 1000 limit

In Storage Client Library v1.2, using CloudTableQuery limits the query results to 100 entities when Take(N) is used with N > 100. We have fixed this in Storage Client Library v1.3 by setting the limit appropriately to Min(N, 1000), where 1000 is the server-side limit.
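
A quick sketch of the difference, assuming a CloudStorageAccount named account; the "Orders" table and OrderEntity class are hypothetical (OrderEntity is assumed to derive from TableServiceEntity):

CloudTableClient tableClient = account.CreateCloudTableClient();
TableServiceContext context = tableClient.GetDataServiceContext();

// v1.2 silently capped this query at 100 entities; v1.3 honors Take(N)
// up to the 1000-entity server-side limit.
CloudTableQuery<OrderEntity> query =
    (from o in context.CreateQuery<OrderEntity>("Orders")
     select o).Take(500).AsTableServiceQuery();

foreach (OrderEntity order in query)
{
    Console.WriteLine(order.RowKey);
}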

5. Bug: CopyBlob does not copy metadata set on destination blob instance

As noted in this post, Storage Client Library v1.2 has a bug in which metadata set on the destination blob instance in the client is ignored, so it is not recorded with the destination blob in the blob service. We have fixed this in Storage Client Library v1.3, and if the metadata is set, it is stored with the destination blob. If no metadata is set, the blob service will copy the metadata from the source blob to the destination blob.
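
A hedged sketch of the fixed behavior (blob names are illustrative, and the example assumes a client variable as in the other snippets; CopyFromBlob is the client-side copy method in this library):

CloudBlobContainer container = client.GetContainerReference("photos");
CloudBlob sourceBlob = container.GetBlobReference("original.jpg");
CloudBlob destinationBlob = container.GetBlobReference("copy.jpg");

// v1.2 ignored this metadata; v1.3 stores it with the destination blob.
// If no metadata is set, the service copies the source blob's metadata instead.
destinationBlob.Metadata["approved"] = "true";
destinationBlob.CopyFromBlob(sourceBlob);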

6. Bug: Operations that return PreconditionFailed and NotModified report BadRequest as the StorageErrorCode

In Storage Client Library v1.2, PreconditionFailed and NotModified errors lead to a StorageClientException with StorageErrorCode mapped to BadRequest.

In Storage Client Library v1.3, we have correctly mapped the StorageErrorCode to ConditionFailed.

7. Bug: CloudBlobClient.ParallelOperationThreadCount > 64 leads to NotSupportedException

ParallelOperationThreadCount controls the number of concurrent block uploads. In Storage Client Library v1.2, the value can be between 1 and int.MaxValue, but when a value greater than 64 was set, the UploadByteArray, UploadFile and UploadText methods would start uploading blocks in parallel but eventually fail with a NotSupportedException. In Storage Client Library v1.3, we have reduced the maximum to 64. A value greater than 64 now causes an ArgumentOutOfRangeException immediately when the property is set.
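
A short sketch of the new bounds check, assuming a CloudStorageAccount named account (the thread count values are illustrative):

CloudBlobClient client = account.CreateCloudBlobClient();
client.ParallelOperationThreadCount = 8;    // valid in both v1.2 and v1.3

// In v1.3 the following line throws an ArgumentOutOfRangeException as soon
// as the property is set, instead of failing later during the upload:
// client.ParallelOperationThreadCount = 128;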

8. Bug: DownloadToStream implementation always does a range download, so the md5 header is not returned

In Storage Client Library v1.2, DownloadToStream, which is used by other variants – DownloadText, DownloadToFile and DownloadByteArray, always does a range GET by passing the entire range in the range header “x-ms-range”. However, in the service, using range for GETs does not return content-md5 headers even if the range encapsulates the entire blob.

In Storage Client Library v1.3, we no longer send the “x-ms-range” header in the above-mentioned methods, which allows the content-md5 header to be returned.
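
A hedged sketch of checking the returned hash after a full download (it assumes a container variable as in the other snippets, that the blob was uploaded with an MD5 value, and that System.IO and System.Security.Cryptography are imported; Properties.ContentMD5 is only populated when the service returns the header as described above):

CloudBlob blob = container.GetBlobReference("photos/seattle.jpg");

using (MemoryStream stream = new MemoryStream())
{
    blob.DownloadToStream(stream);   // full-blob GET; v1.3 now receives Content-MD5

    using (MD5 md5 = MD5.Create())
    {
        string localHash = Convert.ToBase64String(md5.ComputeHash(stream.ToArray()));
        Console.WriteLine("Service MD5: {0}", blob.Properties.ContentMD5);
        Console.WriteLine("Local MD5:   {0}", localHash);
    }
}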

9. CloudBlob retrieved using container’s GetBlobReference, GetBlockBlobReference or GetPageBlobReference creates a new instance of the container.

In Storage Client Library v1.2, the blob instance always creates a new container instance that is returned via the Container property in CloudBlob class. The Container property represents the container that stores the blob.

In Storage Client Library v1.3, we instead use the same container instance which was used in creating the blob reference. Let us explain this using an example:

CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("blobtypebug");
container.FetchAttributes();
container.Attributes.Metadata.Add("SomeKey", "SomeValue");

CloudBlob blockBlob = container.GetBlockBlobReference("blockblob.txt");

Console.WriteLine("Are instances same={0}", blockBlob.Container == container);
Console.WriteLine("SomeKey value={0}", blockBlob.Container.Attributes.Metadata["SomeKey"]);

For the above code, in Storage Client Library v1.2, the output is:

Are instances same=False
SomeKey metadata value=

This signifies that the blob creates a new container instance that is returned when the Container property is referenced. Hence the “SomeKey” metadata set on the original container instance is missing on that new instance until FetchAttributes is invoked on it.

In Storage Client Library v1.3, the output is:

Are instances same=True
SomeKey metadata value=SomeValue

We set the same container instance that was used to get a blob reference. Hence the metadata is already set.

Due to this change, any code relying on the instances to be different may break.

10. Bug: CloudQueueMessage is always Base64 encoded allowing less than 8KB of original message data.

In Storage Client Library v1.2, the queue message is Base64 encoded which increases the message size by 1/3rd (approximately). The Base64 encoding ensures that message over the wire is valid XML data. On retrieval, the library decodes this and returns the original data.

However, we want to provide an alternative in which data that is already valid XML is transmitted and stored in raw format, so an application can store a full-size 8KB message.

In Storage Client Library v1.3, we have provided a flag, “EncodeMessage”, on the CloudQueue that indicates whether it should encode the message using Base64 encoding or send it in raw format. By default, we still Base64 encode the message. To store a message without encoding, one would do the following:

CloudQueue queue = client.GetQueueReference("workflow");
queue.EncodeMessage = false;

Be careful when using this flag so that your application does not turn off message encoding on an existing queue with encoded messages in it. The other thing to ensure when turning off encoding is that the raw message contains only valid XML data, since PutMessage is sent over the wire in XML format and the message is delivered as part of that XML.

Note: When turning off message encoding on an existing queue, one can prefix the raw message with a fixed version header. Then when a message is received, if the application does not see the new version header, then it knows it has to decode it. Another option would be to start using new queues for the un-encoded messages, and drain the old queues with encoded messages with code that does the decoding.
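
A hedged sketch of the version-prefix idea from the note above (the "v2|" marker and the message payload are arbitrary choices for illustration; client is a CloudQueueClient as in the snippet above, and System.Text is assumed to be imported):

const string RawPrefix = "v2|";

CloudQueue queue = client.GetQueueReference("workflow");
queue.EncodeMessage = false;

// New producers tag raw (un-encoded) messages with the version prefix.
queue.AddMessage(new CloudQueueMessage(RawPrefix + "<order id=\"42\" />"));

// Consumers check the prefix to decide whether the payload still needs
// Base64 decoding (older messages) or can be used as-is (newer ones).
CloudQueueMessage message = queue.GetMessage();
if (message != null)
{
    string body = message.AsString;
    string payload = body.StartsWith(RawPrefix)
        ? body.Substring(RawPrefix.Length)
        : Encoding.UTF8.GetString(Convert.FromBase64String(body));
    Console.WriteLine(payload);
}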

11. Inconsistent escaping of blob name and prefixes when relative URIs are used.

In Storage Client Library v1.2, the rules used to escape a blob name or prefix provided to APIs like the constructors, GetBlobReference, ListBlobXXX, etc., are inconsistent when relative URIs are used. The relative URI for the blob or prefix passed to these methods is treated as an escaped or un-escaped string depending on the input.

For example:

a) CloudBlob blob1 = new CloudBlob("container/space test", service);
b) CloudBlob blob2 = new CloudBlob("container/space%20test", service);
c) ListBlobsWithPrefix("container/space%20test");

In the above cases, v1.2 treats the first two as "container/space test". However, in the third, the prefix is treated as "container/space%20test". To reiterate, relative URIs are inconsistently evaluated as seen when comparing (b) with (c). (b) is treated as already escaped and stored as "container/space test". However, (c) is escaped again and treated as "container/space%20test".

In Storage Client Library v1.3, we treat relative URIs as literal, basically keeping the exact representation that was passed in. In the above examples, (a) would be treated as "container/space test" as before. The latter two i.e. (b) and (c) would be treated as "container/space%20test". Here is a table showing how the names are treated in v1.2 compared to in v1.3.

[Table image: how blob names and prefixes are treated in v1.2 versus v1.3]

12. Bug: ListBlobsSegmented does not parse the blob names with special characters like ‘ ‘, #, : etc.

In Storage Client Library v1.2, folder names with certain special characters like #, ‘ ‘, :, etc. are not parsed correctly when listing blobs, leading to files not being listed. This gave the perception that blobs were missing. The problem was in parsing the response, and we have fixed it in v1.3.

13. Bug: DataServiceContext timeout is in seconds but the value is set as milliseconds

In Storage Client Library v1.2, the timeout is incorrectly set as milliseconds on the DataServiceContext; DataServiceContext treats the integer as seconds, not milliseconds. We now correctly set the timeout in v1.3.

14. Validation added for CloudBlobClient.WriteBlockSizeInBytes

WriteBlockSizeInBytes controls the block size when using block blob uploads. In Storage Client Library v1.2, there was no validation done for the value.

In Storage Client Library v1.3, we have restricted the valid range for WriteBlockSizeInBytes to be [1MB-4MB]. An ArgumentOutOfRangeException is thrown if the value is outside this range.
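
A brief sketch of the validated range, assuming a CloudStorageAccount named account (the block size values are illustrative):

CloudBlobClient client = account.CreateCloudBlobClient();
client.WriteBlockSizeInBytes = 2 * 1024 * 1024;    // 2MB blocks - within the valid range

// In v1.3 the following line throws an ArgumentOutOfRangeException because
// the value is outside the 1MB-4MB range:
// client.WriteBlockSizeInBytes = 8 * 1024 * 1024;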

Other Bug Fixes and Improvements

1. Provide overloads to get a snapshot for blob constructors and methods that get reference to blobs.

The CloudBlob, CloudBlockBlob and CloudPageBlob constructors, as well as CloudBlobClient’s GetBlobReference, GetBlockBlobReference and GetPageBlobReference methods, all have overloads that take a snapshot time. Here is an example:

CloudStorageAccount account = CloudStorageAccount.Parse(Program.ConnectionString);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlob baseBlob = client.GetBlockBlobReference("photos/seattle.jpg");

CloudBlob snapshotBlob = baseBlob.CreateSnapshot();      

// save the snapshot time for later use
DateTime snapshotTime = snapshotBlob.SnapshotTime.Value;

// Use the saved snapshot time to get a reference to snapshot blob 
// by utilizing the new overload and then delete the snapshot
CloudBlob snapshotReference = client.GetBlobReference("photos/seattle.jpg", snapshotTime);
snapshotReference.Delete();

2. CloudPageBlob now has a ClearPages method.

We have provided a new API to clear pages in a page blob:

public void ClearPages(long startOffset, long length)
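
A minimal usage sketch, assuming a container variable as in the other snippets (the blob name is illustrative; the page blob is assumed to already exist, and offsets and lengths must be multiples of the 512-byte page size):

CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob.vhd");

// Clear the first 512-byte page of the page blob.
pageBlob.ClearPages(0, 512);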

3. Bug: Reads using BlobStream issues a GetBlockList even when the blob is a Page Blob

In Storage Client Library v1.2, a GetBlockList call is always issued, even when reading page blobs, leading to an expected exception that the code then has to handle.

In Storage Client Library v1.3, we issue FetchAttributes when the blob type is not known. This avoids an erroneous GetBlockList call on a page blob instance. If the blob type is known in your application, please use the appropriate blob type to avoid this extra call.

Example: when reading a page blob, the following code will not incur the extra FetchAttributes call since the BlobStream was retrieved using a CloudPageBlob instance.

CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob.vhd");
BlobStream stream = pageBlob.OpenRead();
// Read from stream…

However, the following code would incur the extra FetchAttributes call to determine the blob type:

CloudBlob pageBlob = container.GetBlobReference("mypageblob.vhd");
BlobStream stream = pageBlob.OpenRead();
// Read from stream…

4. Bug: WritePages does not reset the stream on retries

As we had posted on our site, in Storage Client Library v1.2 the WritePages API does not reset the stream on a retry, leading to exceptions. We have fixed this for seekable streams by resetting the stream back to the beginning before retrying. For non-seekable streams, we throw a NotSupportedException.
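
A hedged sketch of writing pages from a seekable stream, which v1.3 can rewind automatically if the request has to be retried (the blob name and sizes are illustrative, and a container variable is assumed as in the other snippets):

byte[] page = new byte[512];   // page-aligned buffer
CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob.vhd");
pageBlob.Create(512);          // size the page blob before writing

using (MemoryStream stream = new MemoryStream(page))   // MemoryStream is seekable
{
    pageBlob.WritePages(stream, 0);
}

// A non-seekable stream (for example, a raw network stream) now results in a
// NotSupportedException, so buffer such data into a MemoryStream first.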

5. Bug: Retry logic retries on all 5XX errors

In Storage Client Library v1.2, HttpStatusCode NotImplemented and HttpVersionNotSupported result in a retry, but there is no point in retrying such failures. In Storage Client Library v1.3, we throw a StorageServerException and do not retry.

6. Bug: SharedKeyLite authentication scheme for Blob and Queue throws NullReference exception

We now support SharedKeyLite scheme for blobs and queues.

7. Bug: CreateTableIfNotExist has a race condition that can lead to an exception “TableAlreadyExists” being thrown

CreateTableIfNotExist first checks for the table’s existence before sending a create request. Concurrent calls to CreateTableIfNotExist can lead to all but one of them failing with TableAlreadyExists; this error is not handled in Storage Client Library v1.2, leading to a StorageException being thrown. In Storage Client Library v1.3, we now check for this error and return false, indicating that the table already exists, without throwing the exception.
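
A short sketch of the v1.3 behavior, assuming a CloudStorageAccount named account (the table name is illustrative):

CloudTableClient tableClient = account.CreateCloudTableClient();

// Concurrent callers no longer see an exception; the losers simply get false.
bool created = tableClient.CreateTableIfNotExist("Orders");
Console.WriteLine(created
    ? "Table created by this caller."
    : "Table already existed (possibly created by a concurrent caller).");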

8. Provide property for controlling block upload for CloudBlob

In Storage Client Library v1.2, block upload is used only if the upload size is 32MB or greater. Up to 32MB, all blob uploads via UploadByteArray, UploadText, UploadFile are done as a single blob upload via PutBlob.

In Storage Client Library v1.3, we have preserved the default behavior of v1.2 but have provided a property called SingleBlobUploadThresholdInBytes, which can be set to control which blob sizes are uploaded via blocks versus a single put blob. For example, the following setting will upload all blobs of up to 8MB as a single put blob and will use block upload for blobs larger than 8MB.

CloudBlobClient client = account.CreateCloudBlobClient();
client.SingleBlobUploadThresholdInBytes = 8 * 1024 * 1024;

The valid range for this property is [1MB – 64MB).

See also The Windows Azure Team released Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio (November 2010) [v1.3.11122.0038] to the Web on 11/29/2010 in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below.


<Return to section navigation list> 

SQL Azure Database and Reporting

TechNet’s Cloud Scenario Hub team recently added How to Build and Manage a Business Database Application on SQL Azure:

Faced with the need to create a business database, when most people think "small database" they think of Microsoft Access. They might also add SQL Server Express edition or, perhaps, SQL Server Standard edition, to an Access implementation to provide users with relational features.[*]

What all these solutions have in common is that they run on-premise -- and therefore require a server to run on, a file location to live in, license management and a back-up and recovery strategy.

A new alternative to these approaches to the on-premise database is SQL Azure, which delivers the power of SQL Server without the on-premise costs and requirements.

Microsoft SQL Azure Database is a secure relational database service based on proven SQL Server technologies. The difference is that it is a fully managed cloud database, offered as a service, running in Microsoft datacenters around the globe. It is highly scalable, with built in high-availability and fault tolerance, giving you the ability to start small or serve a global customer base immediately.
The resources on this page will help you get an understanding of using SQL Azure as an alternative database solution to Access or SQL Express.

1)  An overview of SQL Azure
    This 60-minute introduction to SQL Azure explores what’s possible with a cloud database.

2)  Scaling Out with SQL Azure
    This article from the TechNet Wiki gives an IT Pro view of building SQL Azure deployments.

3)  How Do I: Calculate the Cost of Azure Database Usage? (Video)
    Max Adams walks through the process of calculating real-world SQL Azure costs.

4)  Comparing SQL Server with SQL Azure
    This TechNet Wiki article covers the differences between on-premise SQL Server and SQL Azure.

5)  Getting Started with SQL Azure
    This Microsoft training kit comes with all the tools and instructions needed to begin working with SQL Azure.

6)  Try SQL Azure 1GB Edition for no charge
    Start out with SQL Azure at no cost, then upgrade to a paid plan that fits your operational needs.

7)  Developing and Deploying with SQL Azure
    Get guidelines on how to migrate an existing on-premise SQL Server database into SQL Azure. It also discusses best practices related to data migration.

8)  Walkthrough of Windows Azure Sync Service
    This TechNet Wiki article explains how to use Sync Services to provide access to, while isolating, your SQL Azure database from remote users.

9)  SQL Azure Migration Tool v.3.4
    Ask other IT Pros about how to make SharePoint Online work with on-premise SharePoint.

10) Microsoft Learning—Introduction to Microsoft SQL Azure
    This two-hour self-paced course is designed for IT Pros starting off with SQL Azure.

11) How Do I: Introducing the Microsoft Sync Framework Powerpack for SQL Azure (Video)
    Microsoft’s Max Adams explains the Sync Framework and tools for synchronizing local and cloud databases.

12) Connection Management in SQL Azure
    This TechNet Wiki article explains how to optimize SQL Azure services and replication.

13) Backing Up Your SQL Azure Database Using Database Copy
    The SQL Azure Team Blog explains the SQL copy process between two cloud database instances.

14) Security Guidelines for SQL Azure
    Get a complete overview of security guidelines for SQL Azure access and development.

15) SQL Azure Connectivity Troubleshooting Guide
    The SQL Azure Team Blog explores common connectivity error messages.

16) TechNet Forums on Windows Azure Data Storage & Access
    Your best resource is other IT Pros working with SQL Azure databases. Join the conversation.

17) Creating a SQL Server Control Point to Integrate On-Premise and Cloud Databases
    When using multiple SQL Server Utilities, be sure to establish one and only one control point for each.

18) How Do I: Configure SQL Azure Security? (Video)
    Microsoft’s Max Adams introduces security in SQL Azure, covering logins, database access control and user privilege configuration.

[*] You don’t need SQL Server “to provide users with relational features.” Access offers relational database features unless you use SharePoint lists to host the data or publish an Access application to a Web Database on SharePoint Server.


Markus ‘maol’ Perdrizat posted SQL In the Cloud on 11/30/2010:

CloudBzz has a little intro to SQL In the Cloud, and by that they mean the public cloud. It’s a good overview of existing offerings, with the suggestion that:

Cloud-based DBaaS options will continue to grow in importance and will eventually become the dominant model. Cloud vendors will have to invest in solutions that enable horizontal scaling and self-healing architectures to address the needs of their bigger customers.

I can only agree. What I wonder, though, is how much of these DBaaS offerings are based on traditional RDBMSs such as MySQL, and what kind of changes we’ll eventually see to better support the elasticity requirements of cloud DBs. For example, I hear anecdotally that in SQL Azure every cloud DB is just a table in the underlying SQL Server infrastructure (see also Inside SQL Azure) [*], and the good folks at Xeround are also investing heavily in their Virtual Partitioning scheme.

Related posts:

  1. Windows Azure Platform Appliance
  2. Small Data SQL Cloud
  3. Thought Leaders in the Cloud: Cassandra’s Jonathan Ellis
  4. Loading data to SQL Azure the fast way
  5. What’s Essential – And What’s Not – In Big Data Analytics

[*] The way I read the SQL Azure tea leaves: “Every cloud DB is just an SQL Server database with replicated data in a customized, multi-tenant SQL Server 2008 instance.”

Markus is a Database Systems Architect at one of the few big Swiss banks.


<Return to section navigation list> 

Marketplace DataMarket and OData

See The Windows Azure Team reported Just Released: Windows Azure SDK 1.3 and the new Windows Azure Management Portal on 11/30/2010 in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below for new Windows Azure Marketplace DataMarket features.


Julian Lai of the WCF Data Services team described a new Entity Set Resolver feature in this 11/29/2010 post:

Problem Statement:

In previous versions, the WCF Data Services .NET and Silverlight client libraries always assumed that all collections (aka entity sets) had the same base URI. This assumption was in place because the library used the URI provided in the DataServiceContext constructor to generate collection URIs using the OData addressing conventions. For example:

  • Base URI: http://localhost:1309/NorthwindDataService.svc/
  • Sample collection: Customers
  • Collection URI by convention: http://localhost:1309/NorthwindDataService.svc/Customers

This simple convention based approach works when all collections belong to a common root, but a number of scenarios, such as partitioning, exist where two collections may have different base URIs or the service needs more control over how URI paths are constructed. For example, imagine a case where collections are partitioned across different servers for load balancing purposes. The “Customers” collection might be on a different server from the “Employees” collection.

To solve this problem, we added a new feature we call an Entity Set Resolver (ESR) to the WCF Data Services client library. The idea behind the ESR feature is to provide a mechanism allowing the client library to be entirely driven by the server (URIs of collections in this case) response payloads.

Design:

For the ESR feature, we have introduced a new ResolveEntitySet property to the DataServiceContext class. This new property returns a delegate that takes the collection name as input and returns the corresponding URI for that collection:

public Func<string, Uri> ResolveEntitySet 
{ 
    get; set; 
} 

This new delegate will then be invoked when the client library needs the URI of a collection. If the delegate is not set or returns null, then the library will fall back to the conventions currently used. For example, the API on the DataServiceContext class to insert a new entity looks like:

public void AddObject(string entitySetName, Object entity)

Currently this API inserts the new entity by taking the collection/entitySetName provided and appending it to the base URI passed to the DataServiceContext constructor. As previously stated, this works if all collections are at the same base URI and the server follows the recommended OData addressing scheme. However, as stated above, there are good reasons for this not to be the case.

With the addition of the ESR feature, the AddObject API will no longer first try to create the collection URI by convention. Instead, it will invoke the ESR (if one is provided) and ask it for the URI of the collection. One possible way to utilize the ESR is to have it parse the server’s Service Document -- a document exposed by OData services that lists the URIs for all the collections exposed by the server. Doing this allows the client library to be decoupled from the URI construction strategy used by an OData service.

It is important to note that since the ESR will be invoked during the processing of an AddObject call, the ESR should not block. Building upon the ESR parsing the Service Document example above, a good way is to have the collection URIs stored in a dictionary where the key is the collection name and the value is the URI. If this dictionary is populated during the DataServiceContext initialization time, users can avoid requesting and parsing the Service Document on demand. In practice this means the ESR should return URIs from a prepopulated dictionary and not request and parse the Service Document on demand.

The ESR feature is not scoped only to insert operations (AddObject calls). The resolver will be invoked anytime a URI to a collection is required and the URI hasn’t previously been learnt by parsing responses to prior requests sent to the service. The full list of client library methods which will cause the client library to invoke the ESR are: AddObject, AttachTo, and CreateQuery.

Mentioned above, one of the ways to use the ESR is to parse the Service Document to get the collection URIs. Below is sample code to introduce this process. The code returns a dictionary given the Service Document URI. For the dictionary, the key is the collection name and the value is the collection URI. To see the ESR in use, check out the sources included with the OData client for Windows Live Services blog post.

public Dictionary<string, Uri> ParseSvcDoc(string uri) 
{ 
    var serviceName = XName.Get("service", "http://www.w3.org/2007/app"); 
    var workspaceName = XName.Get("workspace", "http://www.w3.org/2007/app"); 
    var collectionName = XName.Get("collection", "http://www.w3.org/2007/app"); 
    var titleName = XName.Get("title", "http://www.w3.org/2005/Atom"); 
    var document = XDocument.Load(uri); 
    return document.Element(serviceName).Element(workspaceName).Elements(collectionName) 
     .ToDictionary(e => e.Element(titleName).Value, e => new Uri(e.Attribute("href").Value, UriKind.RelativeOrAbsolute));        
}  

While initializing the DataServiceContext, the code can also parse the Service Document and set the ResolveEntitySet property. In this case, ResolveEntitySet is set to the GetCollectionUri method:

DataServiceContext ctx = new DataServiceContext(new Uri("http://localhost:1309/NorthwindDataService.svc/")); 
ctx.ResolveEntitySet = GetCollectionUri; 
Dictionary<string, Uri> collectionDict = new Dictionary<string, Uri>(); 
collectionDict = ParseSvcDoc(ctx.BaseUri.OriginalString);   

The GetCollectionUri method is very simple. It just returns the value in the dictionary for the associated collection name key. If this key does not exist, the method will return null and the collection URI will be constructed using OData addressing conventions. 

public Uri GetCollectionUri(string collectionName) 
{ 
    Uri retUri; 
    collectionDict.TryGetValue(collectionName, out retUri); 
    return retUri; 
}  

For those of you working with Silverlight, I’ve added an async version of the ParseSvcDoc. It parses the document in the same manner but doesn’t block when retrieving the Service Document. Check out the attached samples for this code.

Attachment: SampleAsyncParseSvcDoc.cs


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Bill Zack charts The Future of Windows Azure AppFabric in this brief reference of 11/30/2010 to a Gartner research report:

Windows Azure AppFabric has an interesting array of features today that can be used by ISVs and other developers to architect hybrid on-premise/in-cloud applications.  Components like the Service Bus, Access Control Service and the Caching Service are very useful in their own right when used to build hybrid applications.


Other announced features such as the Integration Service and Composite Applications are coming features that will make AppFabric even more powerful as an integration tool.

It is important to know where AppFabric is going in the next 2-3 years so that you can take advantage of the features that exist today while making sure to plan for features that will emerge in the future. This research paper from Gartner is an analysis of where they believe Windows AppFabric is headed.  Remember that this is an analysis and opinion from Gartner and not a statement of policy or direction from Microsoft.

Note: Windows Azure and Cloud Computing Posts for 11/29/2010+ included an article about the Gartner research report.


The Claims-Based Identity Team (a.k.a. CardSpace Team) posted Protecting and consuming REST based resources with ACS, WIF, and the OAuth 2.0 protocol on 11/29/2010:

ACS (Azure Access Control Service) recently added support for the OAuth 2.0 protocol. If you haven’t heard of it, OAuth is an open protocol being developed by members of the identity community to solve the problem of letting users allow 3rd party applications to access their data without providing their passwords. In order to show how this can be done with WIF and ACS, we have posted a sample on Microsoft Connect that shows an end-to-end scenario.

The scenario in the sample is meant to be as simple as possible to show the power of the OAuth protocol to enable web sites to access resources on behalf of a user without the user providing his or her credentials to that site. In our scenario, Contoso has a web service that exposes customer information that needs to be protected. Fabrikam has a web site and wants users to be able to view their Contoso data directly on it. The user doesn’t have to log in to the Fabrikam site, but gets redirected to a Contoso-specific site in order to log in and give consent to access data on their behalf.

The Contoso web service requires OAuth access tokens from ACS to be attached to incoming requests. The necessary protocol flow for the Fabrikam web site (in OAuth terms – the web server client), including redirecting the user to login and give consent, requesting access tokens from ACS, and attaching the token to outgoing requests to the service is taken care of under the covers. The sample contains a walkthrough that describes the components in more detail.

Try it out here, and tell us what you think!


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

See The Windows Azure Team reported Just Released: Windows Azure SDK 1.3 and the new Windows Azure Management Portal on 11/30/2010 in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section below for the new Windows Azure Connect CTP.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Robert Duffner posted Thought Leaders in the Cloud: Talking with Roger Jennings, Windows Azure Pioneer and Owner of OakLeaf Systems to the Windows Azure Team blog on 11/30/2010:

Roger Jennings is the owner of OakLeaf Systems, a software consulting firm in northern California. He is also a prolific blogger and author, including a book on Windows Azure. Roger is a graduate of the University of California, Berkeley.

In this interview we discuss:

  • What gets people to have the "Ah-ha" moment to understand what the cloud is about
  • Moving to the cloud to get out of the maintenance business
  • Expected growth of cloud computing
  • The suitability of the cloud for applications with compliance requirements
  • How the cloud supports entrepreneurism

Robert Duffner: Roger, tell us a little bit about yourself and your experience with cloud computing and development.

Roger Jennings: I've been consulting and writing articles, books, and what have you about databases for about the past 15 years. I was involved with Azure from the time it started as an alpha product a couple of years ago, before the first PDC 2008 presentation.

I wrote a book for Wrox called Cloud Computing with the Windows Azure Platform, which we got out in time for the 2009 PDC meeting.

Robert: You've seen a lot of technology come and go. What was it about Windows Azure that got your attention and made it something that you really wanted to stay on top of?

Roger: My work with Visual Studio made the concept of platform-as-a-service interesting to me. I had taken a look at Amazon web services; I had done some evaluations and simple test operations. I wasn't overwhelmed with AWS Simple DB because of its rather amorphous design characteristics, lack of schema, and so on, which was one of the reasons I wasn't very happy with Windows Azure's SQL Data Services (SDS) or SQL Server Data Services (SSDS), both of which were Entity-Attribute-Value (EAV) tables grafted to a modified version of SQL Server 2008.

I was one of those that were agitating in late 2008 for what ultimately became SQL Azure. And changing from SDS to SQL Azure was probably one of the best decisions the Windows Azure team ever made.

Robert: Well, clearly you're very well steeped in the cloud, but probably the majority of developers haven't worked hands-on with the cloud platform. So when you're talking about cloud development, what do you say that helps people have that "Aha!" moment where the light bulb turns on? How do you help people get that understanding?

Roger: The best way for people to understand the process is that, when I'm consulting with somebody on cloud computing, I'll do an actual port of a simple application to my cloud account. The simplicity of migrating an application to Windows Azure is what usually sells them.

Another useful approach is to show people how easily they can move between on-premises SQL Server and SQL Azure in the cloud. The effort typically consists mainly of just changing a connection string, and that simplicity really turns them on.

I do a lot of consulting for very large companies who are still using Access databases, and one of the things they like the feel of is migrating to SQL Azure instead of SQL Server as the back end to Access front ends.

Robert: It's one thing to move an application to the cloud, but it's another thing to architect an application for the cloud. How do you think that transition's going to occur?

Roger: I don't see a great deal of difference, in terms of a cloud-based architecture as discrete from a conventional, say, MVC or MVVM architecture. I don't think that the majority of apps being moved to the cloud are being moved because they need 50, 100 or more instances on order.

I think most of them will be moderate-sized applications that IT people do not want to maintain -- either the databases or the app infrastructure.

Robert: That's a good point, even though an awful lot of the industry buzz about the cloud is around the idea of scale-out.

Roger: I'm finding that my larger clients are business managers who are particularly interested in cloud for small, departmental apps. They don't want to have departmental DBAs, for example, nor do they want to write a multi-volume justification document for offshoring the project. They just want to get these things up and running in a reliable way without having to worry about maintaining them.

They can accomplish that by putting the SQL Azure back end up and connecting an Azure, SharePoint or Access front end to it, or else they can put a C# or VB Windows app on the front of SQL Azure.

Robert: According to the Gartner Hype Cycle, technologies go through inflated expectations, then disillusionment, and then finally enlightenment and productivity. They have described cloud computing as having already gone over the peak of inflated expectations, and they say it's currently plummeting to the trough of disillusionment.

Roger: I don't see it plummeting. If you take a look at where the two cloud dots are on the Gartner Hype Cycle, they're just over the cusp of the curve, just on the way down. I don't think there's going to be a trough of disillusionment of any magnitude for the cloud in general.

I think it's probably just going to simply maintain its current momentum. It's not going to get a heck of a lot more momentum in terms of the acceleration of acceptance, but I think it's going to have pretty consistent growth year over year.

The numbers I have seen suggest 35 percent growth, and something like compounded growth at an annual rate in the 30 to 40 percent range does not indicate to me running down into the trough of disillusionment. [laughs]

Robert: So how have Azure and cloud computing changed the conversations you have with your customers about consulting and development?

Roger: They haven't changed those conversations significantly, as far as development is concerned. In terms of consulting, most customers want recommendations as to what cloud they should use. I don't really get into that, because I'm prejudiced toward Azure, and I'm not really able to give them an unbiased opinion.

I'm really a Microsoft consultant. My Access book, for instance, has sold about a million copies overall in English, so that shows a lot of interest. Most of what I do in my consulting practice with these folks is make recommendations for basic design principles, and to some extent architecture, but very often just the nitty-gritty of getting the apps they've written up into the cloud.

Robert: There was a recent article, I think back in September in "Network World," titled "When Data Compliance and Cloud Computing Collide." The author, a CIO named Bernard Golden, notes that the current laws haven't kept up with cloud computing.

Specifically, if a company stores data in the cloud, the company is still liable for the data, even though the cloud provider completely controls it. What are your thoughts on those issues?

Roger: Well, I've been running a campaign on that issue with the SQL Azure guys to get transparent data encryption (TDE) features implemented. The biggest problem is, of course, the ability of a cloud provider's employees to snoop the data, which presents a serious compliance issue for some clients.

The people I'm dealing with are not even thinking about putting anything that is subject to HIPAA or PCI requirements in a public cloud computing environment until TDE is proven by SQL Azure. Shared secret security for symmetrical encryption of Azure tables and blobs is another issue that must be resolved.

Robert: The article says that SaaS providers should likely have to bear more of the compliance load that cloud providers provide, because SaaS operates in a specific vertical with specific regulatory requirements, whereas a service like Azure, Amazon, or even Google can't know the specific regulations that may govern every possible user. What are your thoughts on that?

Roger: I would say that the only well-known requirements are HIPAA and PCI. You can't physically inspect the servers that store specific users' data, so PCI audit compliance is an issue.  These things are going to need to be worked out.

I don't foresee substantial movement of that type of data to the cloud immediately. The type of data that I have seen being moved up there is basically departmental or divisional sales and BI data. Presumably, if somebody worked at it hard enough, they could intercept it. But it would require some very specific expertise to be an effective snooper of that data.

Robert: In your book, you start off by talking about some of the advantages of the cloud, such as the ability for a startup to greatly reduce the time and cost of launching a product. Do you think that services like Azure lower the barrier to entrepreneurship and invention?

Roger: Definitely for entrepreneurship, although not necessarily for invention. It eliminates that terrible hit you take from buying conventional data center-grade hardware, and it also dramatically reduces the cost of IT management of those assets. You don't want to spend your seed capital on hardware; you need to spend it on employees.

Robert: You've probably seen an IT disaster or two. Just recently there was a story where a cloud failure was blamed for disrupting travel for 50,000 Virgin Blue airline customers.

Roger: I've read that it was a problem with data replication on a noSQL database.

Robert: What are some things you recommend that organizations should keep in mind as they develop for the cloud, to ensure that their applications are available and resilient?

Roger: Automated testing is a key, because the cloud is obviously going to be good for load testing and automated functional testing. I'm currently in the midst of a series of blog posts about Windows Azure load testing and diagnostics.

Robert: What do you see as the biggest barriers to the adoption of cloud computing?

Roger: Data privacy concerns. Organizations want their employees to have physical possession of the source data. Now, if they could encrypt it, if they could do transparent data encryption, for instance, they would use it, because people seem to have faith in that technology, even though not a lot of people are using it. Cloud computing can eliminate, or at least reduce, the substantial percentage of privacy breaches resulting from lost or stolen laptops, as well as employee misappropriation.

Robert: That's all of the questions I had for you, but are there some interesting things you're working on or thinking about that we haven't discussed?

Roger: I've been wrapped up in finishing an Access 2010 book for the past three to four months, so other than writing this blog and trying to keep up with what's going on in the business, I haven't really had a lot of chance to work with Azure recently as much as I'd like. But my real long-term interest is the synergy between mobile devices and the cloud, and particularly Windows Phone 7 and Azure.

I'm going to be concentrating my effort and attention for at least the next six months on getting up to speed on mobile development, at least far enough that I can provide guidance with regard to mobile interactions with Azure.

Robert: People think a lot about cloud in terms of the opportunity for the data center. You mentioned moving a lot of the departmental applications to the cloud as kind of the low hanging fruit, but you also talked about Windows Phone. Unpack that for us a little bit, because I think that's a use case that you don't hear the pundits talking about all the time.

Roger: IBM has been the one really promoting this, and they've got a study out saying that within five years, I think it's sixty percent of all development expenditures will be for cloud-to-mobile operations. And it looked to me as if they had done their homework on that survey.

Robert: There are a lot of people using Google App Engine as kind of a back-end for iPhone apps and things like that. It seems like people think a lot about websites and these department-level apps running in the cloud. Whether it's for consumers or corporate users, though, they are all trying to get access to data through these devices that people carry around in their pockets. Is that the opportunity that you're seeing?

Roger: Yes; I think that's a key opportunity, but it's for all IaaS and PaaS providers, not just Google. A Spanish developer said "Goodbye Google App Engine" in a November 22, 2010 blog post about his problems with the service that received 89,000 visits and more than 120 comments in a single day.

Robert: Well, I want to thank you for taking the time to talk today.

Roger: It was my pleasure.


The Windows Azure Team reported Just Released: Windows Azure SDK 1.3 and the new Windows Azure Management Portal on 11/30/2010:

At PDC10 last month, we announced a host of enhancements for Windows Azure designed to make it easier for customers to run existing Windows applications on Windows Azure, enable more affordable platform access and improve the Windows Azure developer and IT Professional experience. Today, we're happy to announce that several of these enhancements are either generally available or ready for you to try as a Beta or Community Technology Preview (CTP).  Below is a list of what's now available, along with links to more information.

The following functionality is now generally available through the Windows Azure SDK and Windows Azure Tools for Visual Studio release 1.3 and the new Windows Azure Management Portal:

  • Development of more complete applications using Windows Azure is now possible with the introduction of Elevated Privileges and Full IIS. Developers can now run a portion or all of their code in Web and Worker roles with elevated administrator privileges. The Web role now provides Full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules.
  • Remote Desktop functionality enables customers to connect to a running instance of their application or service in order to monitor activity and troubleshoot common problems.
  • Windows Server 2008 R2 Roles: Windows Azure now supports Windows Server 2008 R2 in its Web, worker and VM roles. This new support enables you to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0.
  • Multiple Service Administrators: Windows Azure now supports multiple Windows Live IDs to have administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs.
  • Better Developer and IT Professional Experience: The following enhancements are now available to help developers see and control how their applications are running in the cloud:
    • A completely redesigned Silverlight-based Windows Azure portal to ensure an improved and intuitive user experience
    • Access to new diagnostic information including the ability to click on a role to see role type, deployment time and last reboot time
    • A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure.
    • New scenario based Windows Azure Platform forums to help answer questions and share knowledge more efficiently.

The following functionality is now available as beta:

  • Windows Azure Virtual Machine Role: Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. Customers can move more existing applications to Windows Azure, reducing the need to make costly code or deployment changes.
  • Extra Small Windows Azure Instance, which is priced at $0.05 per compute hour, provides developers with a cost-effective training and development environment. Developers can also use the Extra Small instance to prototype cloud solutions at a lower cost.

Developers and IT Professionals can sign up for either of the betas above via the Windows Azure Management Portal.

  • Windows Azure Marketplace is an online marketplace for you to share, buy and sell building block components, premium data sets, training and services needed to build Windows Azure platform applications. The first section in the Windows Azure Marketplace, DataMarket, became commercially available at PDC 10. Today, we're launching a beta of the application section of the Windows Azure Marketplace with 40 unique partners and over 50 unique applications and services. 

We are also making the following available as a CTP:

  • Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources, is the first Windows Azure Virtual Network feature that we're making available as a CTP. Developers and IT Professionals can sign up for this CTP via the Windows Azure Management Portal.

If you would like to see an overview of all the new features that we're making available, please watch an overview webcast that will happen on Wednesday, December 1, 2010 at 9AM PST. You can also watch on-demand sessions from PDC10 that dive deeper into many of these Windows Azure features; check here for a full list of sessions.  A full recap of all that was announced for the Windows Azure platform at PDC10 can be found here.  For all other questions, please refer to the latest FAQ.


Ryan Dunn (@dunnry) delivered a workaround for Using Windows Azure MMC and Cmdlet with Windows Azure SDK 1.3 on 11/30/2010:

If you haven't already read it on the official blog, Windows Azure SDK 1.3 was released (along with Visual Studio tooling).  Rather than rehash what it contains, go read that blog post if you want to know what is included (and there are lots of great features).  Even better, go to microsoftpdc.com and watch the videos on the specific features.

If you are a user of the Windows Azure MMC or the Windows Azure Service Management Cmdlets, you might notice that stuff gets broken on new installs.  The underlying issue is that the MMC snapin was written against the 1.0 version of the Storage Client.  With the 1.3 SDK, the Storage Client is now at 1.1.  So, this means you can fix this in two ways:

  1. Copy an older 1.2 SDK version of the Storage Client into the MMC's "release" directory.  Of course, you need to either have the .dll handy or go find it.  If you have the MMC installed prior to SDK 1.3, you probably already have it in the release directory and you are good to go.
  2. Use Assembly Redirection to allow the MMC or Powershell to call into the 1.1 client instead of the 1.0 client.

To use Assembly Redirection, just create a "mmc.exe.config" file for MMC (or "Powershell.exe.config" for cmdlets) and place it in the %windir%\system32 directory.  Inside the .config file, just use the following xml:

<configuration>
   <runtime>
      <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
         <dependentAssembly>
            <assemblyIdentity name="Microsoft.WindowsAzure.StorageClient" publicKeyToken="31bf3856ad364e35" culture="neutral" />
            <bindingRedirect oldVersion="1.0.0.0" newVersion="1.1.0.0" />
         </dependentAssembly>
         <dependentAssembly>
            <assemblyIdentity name="Microsoft.WindowsAzure.StorageClient" publicKeyToken="31bf3856ad364e35" culture="neutral" />
            <publisherPolicy apply="no" />
         </dependentAssembly>
      </assemblyBinding>
   </runtime>
</configuration>
   

Bill Zack reported Windows Azure Platform Training Kit – November Update with Windows Azure SDK v1.3 updates on 11/30/2010:

There is a newly updated version of the Windows Azure Platform Training Kit that matches the November 2010 SDK 1.3 release.  This release of the training kit includes several new hands-on labs for the new Windows Azure features and the new/updated services we just released.


The updates in this training kit include:

  • [New lab] Advanced Web and Worker Role – shows how to use admin mode and startup tasks
  • [New lab] Connecting Apps With Windows Azure Connect – shows how to use Project Sydney
  • [New lab] Virtual Machine Role – shows how to get started with VM Role by creating and deploying a VHD
  • [New lab] Windows Azure CDN – simple introduction to the CDN
  • [New lab] Introduction to the Windows Azure AppFabric Service Bus Futures – shows how to use the new Service Bus features in the AppFabric labs environment
  • [New lab] Building Windows Azure Apps with Caching Service – shows how to use the new Windows Azure AppFabric Caching service
  • [New lab] Introduction to the AppFabric Access Control Service V2 – shows how to build a simple web application that supports multiple identity providers
  • [Updated] Introduction to Windows Azure - updated to use the new Windows Azure platform Portal
  • [Updated] Introduction to SQL Azure - updated to use the new Windows Azure platform Portal

In addition, all of the HOLs have been updated to use the new Windows Azure Tools for Visual Studio version 1.3 (November release).   We will be making more updates to the labs in the next few weeks.   In the next update, scheduled for early December, we will also include presentations and demos for delivering a full 4-day training workshop. 

You can download the November update of the Windows Azure Platform Training kit from here:  http://go.microsoft.com/fwlink/?LinkID=130354

Finally, we’re now publishing the HOLs directly to MSDN to make it easier for developers to review and use the content without having to download an entire training kit package.  You can now browse to all of the HOLs online in MSDN here:  http://go.microsoft.com/fwlink/?LinkId=207018


Wade Wegner (@WadeWegner) described Significant Updates Released in the BidNow Sample for Windows Azure for the Windows Azure SDK v1.3 in an 11/30/2010 post to his personal blog:

There have been a host of announcements and releases over the last month for the Windows Azure Platform.  It started at the Professional Developers Conference (PDC) in Redmond, where we announced new services such as Windows Azure AppFabric Caching and SQL Azure Reporting, as well as new features for Windows Azure, such as VM Role, Admin mode, RDP, Full IIS, and a new portal experience. 

BidNowSampleYesterday, the Windows Azure team released the Windows Azure SDK 1.3 and Tools for Visual Studio and updates to the Windows Azure portal (for all the details, see the team post).  These are significant releases that provide a lot of enhancements for Windows Azure.

One of the difficulties that come with this many updates in a short period of time is keeping up with the changes and updates.  It can be challenging to understand how these features and capabilities come together to help you build better web applications.

This is why my team has released sample applications like BidNow, FabrikamShipping SaaS, and myTODO.

What’s in the BidNow Sample?

Just today I posted the latest version of BidNow on the MSDN Code Gallery.  BidNow has been significantly updated to leverage many pieces of the Windows Azure Platform, including many of the new features and capabilities announced at PDC and that are a part of the Windows Azure SDK 1.3.  This list includes:

  • Windows Azure (updated)
    • Updated for the Windows Azure SDK 1.3
    • Separated the Web and Services tier into two web roles
    • Leverages Startup Tasks to register certificates in the web roles
    • Updated the worker role for asynchronous processing
  • SQL Azure (new)
    • Moved data out of Windows Azure storage and into SQL Azure (e.g., categories, auctions, bids, and users)
    • Updated the DAL to leverage Entity Framework 4.0 with appropriate data entities and sources
    • Included a number of scripts to refresh and update the underlying data
  • Windows Azure storage (update)
    • Blob storage only used for auction images and thumbnails
    • Queues allow for asynchronous processing of auction data
  • Windows Azure AppFabric Caching (new)
    • Leveraging the Caching service to cache reference and activity data stored in SQL Azure
    • Using local cache for extremely low latency
  • Windows Azure AppFabric Access Control (new)
    • BidNow.Web leverages WS-Federation and Windows Identity Foundation to interact with Access Control
    • Configured BidNow to leverage Live ID, Yahoo!, and Facebook by default
    • Claims from ACS are processed by the ClaimsAuthenticationManager such that they are enriched with additional profile data stored in SQL Azure
  • OData (new)
    • A set of OData services (i.e., WCF Data Services) provides an independent services layer to expose data to different clients
    • The OData services are secured using Access Control
  • Windows Phone 7  (new)
    • A Windows Phone 7 client exists that consumes the OData services
    • The Windows Phone 7 client leverages Access Control to access the OData services

Yes, there are a lot of pieces to this sample, but I think you’ll find that it mimics many real world applications.  We use Windows Azure web and worker roles to host our application, Windows Azure storage for blobs and queues, SQL Azure for our relational data, AppFabric Caching for reference and activity data, Access Control for authentication and authorization, OData for read-only services, and a Windows Phone 7 client that consumes the OData services.  That’s right – in addition to the Windows Azure Platform pieces, we included a set of OData services that are leveraged by a Windows Phone 7 client authenticated through Access Control.

High Level Architecture

Here’s a high level look at the application architecture:

image

Compute: BidNow consists of three distinct Windows Azure roles – a web role for the website, a web role for the services, and a worker role for long-running operations.  The BidNow.Web web role hosts Web Forms that interact with the underlying services through a set of WCF Service clients.  These services are hosted in the BidNow.Services.Web web role.  The worker role, BidNow.Worker, is in charge of handling the long-running operations that could hurt the website's performance (e.g. image processing).

Storage: BidNow uses three different types of storage.  There’s a relational database, hosted in SQL Azure, that contains all the auction information (e.g. categories, auctions, and bids) and basic user information, such as Name and Email address (note that user credentials are not stored in BidNow, as we use Access Control to abstract authentication).  We use Windows Azure Blob storage to store all the auction images (e.g. the thumbnail and full image of the products), and Windows Azure Queues to dispatch asynchronous operations.  Finally, BidNow uses Windows Azure AppFabric Caching to store reference and activity data used by the web tier. 

Authentication & Authorization: In BidNow, users are allowed to navigate through the site without any access restrictions.  However, to bid on or publish an item, authentication is required.  BidNow leverages the identity abstraction and federation aspects of Windows Azure AppFabric Access Control to validate the user's identity against different identity providers – by default, BidNow leverages Windows Live ID, Facebook, and Yahoo!.  BidNow uses Windows Identity Foundation to process identity tokens received from Access Control, and then leverages claims from within those tokens to authenticate and authorize the user.

Windows Phone 7: In today’s world it’s a certainty that your website will have mobile users.  Often, to enrich the experience, client applications are written for these devices.  The BidNow sample also includes a Windows Phone 7 client that interacts with a set of OData services that run within the web tier.  This sample demonstrates how to build a Windows Phone 7 client that leverages Windows Azure, OData, and Access Control to extend your application.

Of course, there's a lot more that you'll discover once you dig in.  Over the next few weeks, I'll write and record a series of blog posts and webcasts that explore BidNow in greater depth.

Getting Started

To get started using BidNow, be sure you have the following prerequisites installed:

As you've come to expect, BidNow includes a Configuration Wizard that includes a Dependency Checker (for all the items listed immediately above) and a number of Setup Scripts you can walk through to configure BidNow.  At the conclusion of the wizard you can literally hit F5 and go.

image

Of course, don’t forget to download the BidNow Sample.  For a detailed walkthrough on how to get started with BidNow, please see the Getting Started with BidNow wiki page.

If you have any feedback or questions, please send us an email at bidnowfeedback@microsoft.com.


Steve Plank (@plankytronixx) summarized new Windows Azure SDK v1.3 features in his Windows Azure 1.3 SDK released post of 11/30/2010:

image The long-awaited 1.3 SDK has arrived, with support for the following (from the download page):

  • Virtual Machine (VM) Role (Beta): Allows you to create a custom VHD image using Windows Server 2008 R2 and host it in the cloud.
  • Remote Desktop Access: Enables connecting to individual service instances using a Remote Desktop client.
  • Full IIS Support in a Web role: Enables hosting Windows Azure web roles in an IIS hosting environment.
  • Elevated Privileges: Enables performing tasks with elevated privileges within a service instance.
  • Virtual Network (CTP): Enables support for Windows Azure Connect, which provides IP-level connectivity between on-premises and Windows Azure resources.
  • Diagnostics: Enhancements to Windows Azure Diagnostics enable collection of diagnostics data in more error conditions.
  • Networking Enhancements: Enables roles to restrict inter-role traffic and to use fixed ports on InputEndpoints.
  • Performance Improvement: Significant performance improvement for local machine deployment.

It could be quite easy to get carried away and think "oh great, I can use Windows Azure Connect or VM Role now". But note that these are the features of the SDK, not the entire Windows Azure infrastructure. You still need to be accepted into the VM Role and Windows Azure Connect CTP programs.

You can download the SDK from here.


The Windows Azure Team released the Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio (November 2010) [v1.3.11122.0038] to the Web on 11/29/2010:

imageWindows Azure Tools for Microsoft Visual Studio, which includes the Windows Azure SDK, extends Visual Studio 2010 to enable the creation, configuration, building, debugging, running, packaging and deployment of scalable web applications and services on Windows Azure.

Overview

Windows Azure™ is a cloud services operating system that serves as the development, service hosting and service management environment for the Windows Azure platform. Windows Azure provides developers with on-demand compute and storage to host, scale, and manage web applications on the internet through Microsoft® datacenters.

Windows Azure is a flexible platform that supports multiple languages and integrates with your existing on-premises environment. To build applications and services on Windows Azure, developers can use their existing Microsoft Visual Studio® expertise. In addition, Windows Azure supports popular standards, protocols and languages including SOAP, REST, XML, Java, PHP and Ruby. Windows Azure is now commercially available in 41 countries.

Windows Azure Tools for Microsoft Visual Studio extend Visual Studio 2010 to enable the creation, configuration, building, debugging, running, packaging and deployment of scalable web applications and services on Windows Azure.

New for version 1.3:

  • Virtual Machine (VM) Role (Beta): Allows you to create a custom VHD image using Windows Server 2008 R2 and host it in the cloud.
  • Remote Desktop Access: Enables connecting to individual service instances using a Remote Desktop client.
  • Full IIS Support in a Web role: Enables hosting Windows Azure web roles in an IIS hosting environment.
  • Elevated Privileges: Enables performing tasks with elevated privileges within a service instance.
  • Virtual Network (CTP): Enables support for Windows Azure Connect, which provides IP-level connectivity between on-premises and Windows Azure resources.
  • Diagnostics: Enhancements to Windows Azure Diagnostics enable collection of diagnostics data in more error conditions.
  • Networking Enhancements: Enables roles to restrict inter-role traffic and to use fixed ports on InputEndpoints.
  • Performance Improvement: Significant performance improvement for local machine deployment.
Windows Azure Tools for Microsoft Visual Studio also includes:
  • C# and VB Project creation support for creating a Windows Azure Cloud application solution with multiple roles.
  • Tools to add and remove roles from the Windows Azure application.
  • Tools to configure each role.
  • Integrated local development via the compute emulator and storage emulator services.
  • Running and Debugging a Cloud Service in the Development Fabric.
  • Browsing cloud storage through the Server Explorer.
  • Building and packaging of Windows Azure application projects.
  • Deploying to Windows Azure.
  • Monitoring the state of your services through the Server Explorer.
  • Debugging in the cloud by retrieving IntelliTrace logs through the Server Explorer.

Repeated from yesterday’s post due to importance.


The Microsoft Case Studies Team added a four-page Custom Developer [iLink] Reduces Development Time, Cost by 83 Percent for Web, PC, Mobile Target case study on 11/29/2010:

image Organization Profile: iLink Systems, an ISO and CMMI certified global software solutions provider, offers system integration and custom development. Based in Bellevue, Washington, it has 250 employees.

imageBusiness Situation: iLink wanted to enter the fast-growing market for cloud computing solutions, and wanted to conduct a proof of concept to see if the Windows Azure platform could live up to expectations.

Solution: iLink teamed with medical content provider A.D.A.M. to test Windows Azure for a disease assessment engine that might see millions of hits per hour should it go into production use.

Benefits:

  • Multiplatform approach reduced development time, cost by 83 percent
  • Deployments can be slashed from “months to minutes”
  • Updates implemented quickly, easily by business analysts
  • Confidence in business direction gained from trustworthy platform

Software and Services:

  • Windows Azure
  • Microsoft Visual Studio 2010 Ultimate
  • Microsoft Expression Blend 3
  • Microsoft Silverlight 4
  • Microsoft .NET Framework 4
  • Microsoft SQL Azure

Vertical Industries: High Tech and Electronics Manufacturing

Country/Region: United States

Business Need: Business Productivity

IT Issue: Cloud Services


Srinivasan Sundara Rajan described “Multi-Tiered Development Using Windows Azure Platform” in his Design Patterns in the Windows Azure Platform post of 11/29/2010:

image While most cloud platforms are viewed in terms of virtualization, hypervisors, elastic instances and other infrastructure-related flexibility that enables a dynamic infrastructure, Windows Azure is a complete development platform where the scalability of multi-tiered systems can be enabled through the use of ‘Design Patterns'.

Below are the common ‘Design Patterns' that can be realized using Windows Azure as a PaaS platform.

Web Role & Worker Role
Windows Azure currently supports the following two types of roles:

  • Web role: A web role is a role that is customized for web application programming as supported by IIS 7 and ASP.NET.
  • Worker role: A worker role is a role that is useful for generalized development, and may perform background processing for a web role.

A service must include at least one role of either type, but may consist of any number of web roles or worker roles.

Role Network Communication
A web role may define a single HTTP endpoint and a single HTTPS endpoint for external clients. A worker role may define up to five external endpoints using HTTP, HTTPS, or TCP. Each external endpoint defined for a role must listen on a unique port.

Web roles may communicate with other roles in a service via a single internal HTTP endpoint. Worker roles may define internal endpoints for HTTP or TCP.

Both web and worker roles can make outbound connections to Internet resources via HTTP or HTTPS and via Microsoft .NET APIs for TCP/IP sockets.
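
As a small illustrative sketch (the "OrderWorker" role name and "InternalTcp" endpoint name below are hypothetical and would need to be declared in the service definition), a web role could discover a worker role's internal endpoint at run time roughly like this:

// Minimal sketch: discovering another role's internal endpoint from code.
// "OrderWorker" and "InternalTcp" are hypothetical names from ServiceDefinition.csdef.
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class EndpointLookup
{
    public static IPEndPoint GetWorkerEndpoint()
    {
        RoleInstance instance = RoleEnvironment.Roles["OrderWorker"].Instances[0];
        return instance.InstanceEndpoints["InternalTcp"].IPEndpoint;
    }
}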

So we can have an ASP.NET front-end application hosted on a VM that is a web role and a WCF Service hosted on a VM that is a worker role, and the following design patterns can be applied.

Session Facade
A Worker Role VM acting as a facade is used to encapsulate the complexity of interactions between the business objects participating in a workflow. The Session Facade manages the business objects and provides a uniform, coarse-grained service access layer to clients.

The below diagram shows the implementation of Session Façade pattern in Windows Azure.

image

Business Delegate / Service Locator
The Business Delegate reduces coupling between presentation-tier clients and business services. The Business Delegate hides the underlying implementation details of the business service, such as lookup and access details of the Azure / Worker role architecture.

The Service Locator object abstracts server lookup and instance creation. Multiple clients can reuse the Service Locator object to reduce code complexity, provide a single point of control, and improve performance by providing a caching facility.

These two patterns together provide valuable support for dynamic elasticity and load balancing in a cloud environment. We can add logic to these roles (Business Delegate, Service Locator) so that the virtual machines with the least load are selected to service requests, providing higher scalability; a minimal sketch of such a locator appears below.
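
As an illustrative sketch only (the role name, endpoint name, and the load metric below are hypothetical, not part of the platform APIs), a simple Service Locator in this arrangement might select the least-loaded worker instance:

// Minimal sketch of a Service Locator that picks the least-loaded worker instance.
// "OrderWorker", "InternalTcp" and GetCurrentLoad are hypothetical placeholders.
using System.Linq;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

public class ServiceLocator
{
    public IPEndPoint LocateLeastLoadedWorker()
    {
        return RoleEnvironment.Roles["OrderWorker"].Instances
            .OrderBy(instance => GetCurrentLoad(instance))
            .Select(instance => instance.InstanceEndpoints["InternalTcp"].IPEndpoint)
            .First();
    }

    private double GetCurrentLoad(RoleInstance instance)
    {
        // In a real implementation this might read load metrics persisted by each
        // instance (for example via Windows Azure Diagnostics); stubbed out here.
        return 0.0;
    }
}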

Typical Activities that can be loaded to a Business Delegate Worker Role are:

  • Monitoring Load Metrics using APIs for all the other Worker Role VMs
  • Gather and Persist Metrics
  • Rule Based Scaling
  • Adding and Removing Instances
  • Maintain and Evaluate Business Rules for Load Balancing
  • Auto Scaling
  • Health Monitoring
  • Abstract VM migration details from Web Roles

Other Patterns
Windows Azure's coarse-grained, asynchronous architecture of Web Roles and Worker Roles facilitates several other common design patterns, mentioned below, which makes this a robust enterprise development platform and not just an infrastructure virtualization enabler.

  • Transfer Object Assembler
  • Value List Handler
  • Transfer Object
  • Data Access Object
  • Model View Controller Architecture
  • Front controller
  • Dispatcher View

Summary
Design patterns help in modularizing the software development and deployment process, so that the building blocks can be developed independently and yet tied together without much tight coupling between them. Utilizing them for Windows Azure development will complement the benefits already provided by the cloud platform.


<Return to section navigation list> 

Visual Studio LightSwitch

Bruce Kyle recommended that you Get Started with LightSwitch at New Developer Center in an 11/30/2010 post to the US ISV Evangelism blog:

image LightSwitch is a rapid development tool for building business applications. LightSwitch simplifies the development process, letting you concentrate on the business logic and doing much of the remaining work for you. By using LightSwitch, an application can be designed, built, tested, and in your user’s hands quickly.

In fact, it is possible to create a LightSwitch application without writing a single line of code. For most applications, the only code you have to write is the code that only you can write: the business logic.

An updated LightSwitch Developer Center on MSDN shows you how to get started. The site includes links to the download site, How-Do I videos, tutorials, the developer training kit and forums.

LightSwitch allows ISVs to create [their] own shells, themes, screen templates and more. We are working with partners and control vendors to create these extensions so you will see a lot more available as the ecosystem grows.

Also check out the team blog that describes how you can get started with LightSwitch:

Basics:

Advanced:


<Return to section navigation list> 

Windows Azure Infrastructure

Zane Adam described the NEW! Windows Azure Platform Management Portal in an 11/30/2010 post:

We have launched the new Windows Azure Platform Management Portal, and with it the database manager for SQL Azure (formerly known as Project Houston).  Now, you can manage all your Windows Azure platform resources from a single location – including your SQL Azure database.

clip_image001

The management portal seamlessly allows for complete administration of all Windows Azure platform resources, streamlines the creation and management of SQL Azure databases, and allows for ultra-efficient administration of SQL Azure.

The new management portal is 100% Silverlight and features getting started wizards to walk you through the process of creating subscriptions, servers, and databases along with integrated help, and a quick link ribbon bar.

The database manager (formerly known as Project “Houston”) is a lightweight and easy to use database management tool for SQL Azure databases. It is designed specifically for web developers and other technology professionals seeking a straightforward solution to quickly develop, deploy, and manage their data-driven applications in the cloud. Project “Houston” provides a web-based database management tool for basic database management tasks like authoring and executing queries, designing and editing a database schema, and editing table data.

Major features in this release include:

  • Navigation pane with object search functionality
  • Information cube with basic database usage statistics and resource links
  • Table designer and table data editor
  • Aided view designer
  • Aided stored procedure designer
  • T-SQL editor

To reach the new management portal you need go to: http://windows.azure.com; and for the time being the old management portal can be reached at: http://sql.azure.com.

Along with the new Management Portal the Windows Azure team has shipped the Windows Azure SDK V1.3 (November 2010) , for more information see the announcement on the Windows Azure blog.

And yes, remember that you can get started with SQL Azure using the free trial offer.

image


David Lemphers (@davidlem) described Why the Windows Azure Nov 2010 SDK Rocks!! in this 11/20/2010 post:

imageSo I’ve finally dug myself out of email and was able to sit down last night and check out the new Windows Azure Nov 2010 SDK, and ooowee, there are some humdingers in there.

Let's start with Virtual Machine (VM) Role, which is, as it says, the ability for you to create a VHD image that you can host in the cloud! Now, don't get this confused with something like Amazon's EC2 service, because it's not. What it is, though, is a way to get your application VMs to a pre-configured OS state onto which you can deploy your service model. This is important, for example, if you are migrating an existing application to Windows Azure, and that application has requirements/dependencies either on the way the OS is configured (turning on/off certain features) or on a specific set of software libraries that need to be pre-installed before the app runs. All of these steps can be done offline in a VHD, then uploaded to Windows Azure, to be used by your service. You still need to use the service model though, but you get to select your own pre-configured VM image to run it on. You also can't bank on things like persistent storage on your VM, as the VM still has to abide by all the rules a normal OS image does in Windows Azure, such as being restarted or torn down and redeployed somewhere else. Either way, this is an awesome feature.

Next on the list of coolness is Remote Desktop Access, and this is one that is very close to my heart as it's one of the first hacks I wrote when I was on the Windows Azure engineering team, so I am rapt to see it has now become a full-fledged feature. Nothing much needs to be said about this feature: you're essentially able to RDP to a Windows Azure compute instance using your TS client, which is especially useful for troubleshooting and on-the-box debugging.

Another feature I’m over the moon about is Elevated Privileges, and again, one that’s close to my heart. The key scenario here is where you have a part of your application that needs to run for a set period of time (generally during a particular function) at a higher level of privilege. Take for example, if I have a part of my application that needs to invoke an administration function, like update a file in a secure location, when a user executes a particular feature. I don’t want to be at Admin level for the whole time my app is running, but I want to be able to elevate just for that period of time to execute my critical function, then drop back to steady state. This happens a lot in automated, virtual environments, when you don’t know what the environment looks like until run-time, so having this feature is a boon.

And finally, my most favorite feature, Virtual Network! This is the crux of how Windows Azure and Microsoft are going to be able to provide a converged cloud experience for customers. It’s an IP level technology where you can seamlessly connect your cloud assets with your on-premise assets to create a unified cloud solution. Think of being able to access a mission-critical backend database within your corporate firewall, from your highly-scalable Windows Azure instances!? Awesome!

So there you have it, just a quick round-up of my favorite parts of the new SDK, and while there are many more, I really recommend you spend some time checking the ones above out. Also, to get the VM Role and Virtual Network/Connect stuff running in the cloud, you’ll need to logon to the Windows Azure Management Portal, and sign-up for the beta.

 


Jim O’Neill summarized new Windows Azure SDK v1.3 features in his Windows Azure: Shiny and New post of 11/30/2010:

If you attended PDC or have watched some of the sessions (all of which are available on demand – how cool is that?!), you know there were a LOT (four pages worth) of announcements of upcoming functionality for Windows Azure, some of which were to be available by the end of this year, some in 2011.  Well as you were finishing off the Thanksgiving leftovers, the Azure team has been busy setting up downloads for the first wave of releases mentioned at PDC.  

Today, the 1.3 version of the Windows Azure SDK and Tools for Visual Studio is available for download.  Coinciding with this release is the general availability of 

  • elevated privileges and remote desktop functionality,
  • full IIS support in Web roles, enabling multiple web sites per role and the ability to install IIS modules,
  • Windows Server 2008 R2 for your compute instances,
  • support for multiple LiveIDs as Azure account administrators (previously you had to share a LiveID among those on your team requiring administrative privileges), and
  • a new Silverlight Windows Azure Management Portal, providing a more intuitive user experience and access to additional diagnostic information regarding deployed role instances.

Windows Azure Portal

Also released in beta form:

Finally, Windows Azure Connect (previously known as Project “Sydney”) is now available as a CTP (Community Technology Preview).

To sign-up for the beta and CTP offerings, visit the Windows Azure Management Portal.


Maarten Balliauw described Windows Azure Remote Desktop Access in this 11/30/2010 post:

image The latest release of the Windows Azure platform, portal and tools (check here) includes support for one of the features announced at PDC last month: remote desktop access to your role instances. This feature is pretty easy to use and currently allows you to deploy a preconfigured VM with IIS where you can play with the OS. No real application needed!

Here’s how:

  1. Create a new Cloud Service and add one Web Role. This should be the result:
    image
  2. Once that is done, right click the Cloud Service and select “Publish…”
  3. In the publish dialog, click "Configure Remote Desktop connections…"
  4. Create (or select) a certificate; make sure you also export the private key for it.
  5. Enter some credentials and set the expiration date for the account to a date far in the future.
  6. Here's an example of how that can look:
    image
  7. Don’t publish yet!
  8. Navigate to http://windows.azure.com and create a new Hosted Service. In this hosted service, upload the certificate you just created:
    image
  9. Once that is done, switch back to Visual Studio, hit the Publish button and sit back while your deployment is being executed.
  10. At a given moment, you will see that deployment is ready.
  11. Switch back to your browser, click your instance and select “Connect” in the toolbar:
    image
  12. Enter your credentials, prefixed with \. E.g. “\maarten”. This is done to strip off the Windows domain from the credentials entered.
  13. RDP happyness!
    image


Brian Madden prognosticated Microsoft’s secret plans for RemoteFX: Azure-based desktops, apps, and Xbox games from the cloud? in this 11/30/2010 post:

image By now everyone should be familiar with Microsoft's upcoming "RemoteFX" extension to RDP, which will be available as part of Service Pack 1 for Windows 7 and Windows Server 2008 R2. RemoteFX promises to offer a near perfect remote display experience, including multiple displays, 3D, multimedia, and Aero glass.

The downside of RemoteFX is that it will require some serious host-side GPU processing, and that in the first release it will only be targeted for LAN scenarios.

It's no secret, though, that Microsoft plans for RemoteFX to be as ubiquitous as possible. In fact they'd like it to become as popular as H.264, with RemoteFX becoming the de facto standard for live-generated interactive content (screens, apps, games, etc.), while H.264 will be used for pre-rendered video content (movies, tv shows, youtube, etc.).

One of the qualities of RemoteFX is that while the remote host encoding requires quite a bit of horsepower, the client-side decoding is relatively simple--something that can be done via a very low-cost chip or even by adding a couple hundred thousand logic gates to existing system-on-chip designs.

And to that end, Microsoft has already announced deals with LG where LG is building displays that have RemoteFX decoding capabilities built right in. So when you buy your fancy new LG 42" flat screen TV, the Ethernet port on the back will allow it to connect to a network to essentially become a huge RemoteFX thin client.

Of course this shouldn't be surprising. Teradici has already done a similar deal with Samsung who now has a line of displays with PC-over-IP decoding chips built-in. And just about every TV you buy nowadays has advanced capabilities which let it play video-on-demand from various websites, so the idea that TV makers would add RemoteFX or PCoIP capabilities as standard offerings in the next few years is not too far fetched.

imageSo what's this have to do with Azure?

Ok, so far, so good. But what's the point? Well so far I don't feel like we've really gotten a good answer from Microsoft as to why they bought Calista and developed RemoteFX.

Does Microsoft really care about enabling a great remoting experience for Windows? I mean they've been fine to let companies like Citrix and Quest extend and enhance RDP for the past decade--why the sudden urge now for Microsoft to have to do this themselves?

From a public standpoint, Microsoft has stated the goal of RemoteFX is to "push Hyper-V sockets," which means "since Hyper-V is required for RemoteFX, we want to make this awesome RemoteFX thing so that everyone will want to use Hyper-V." And I gotta say, for the past year or so, I believed that. I believed the reason they created RemoteFX was to push Hyper-V seats.

But recently it hit me. "Wait.. What?!? Does that actually make sense? Does Microsoft really want to embed RemoteFX capabilities into endpoints and clients and TVs around the world just to push Hyper-V?"

Last weekend I visited friend-of-the-site Benny Tritsch in his home outside of Frankfurt, and the topic of RemoteFX came up. "Come on..." Benny said, "RemoteFX is not about Hyper-V, it's about Azure!"

Of course! Benny mentioned this just a few days after Microsoft announced their Azure-based infrastructure as a service (IaaS), which is their Amazon EC2-like offering where you can pay a few cents an hour to run Windows Server instances in the Azure cloud. (Gabe wrote about this last week.)

The more Benny and I discussed this, the more I felt he was right. The current "v1" of the Azure IaaS offering doesn't offer GPU support in the VMs, and thus doesn't offer RemoteFX support, but that's ok because the v1 of RemoteFX is not going to be aimed at the WAN anyway. But think about Microsoft's direction. Let's assume that in a few years the server vendors and GPU vendors actually have datacenter-tuned GPU offerings. And let's assume that Azure offers more control over individual VMs (and even support for Win7 VMs). And let's assume RemoteFX v2 works better on the WAN. (Well, and let's assume that every home user has multi-Mbps bandwidth and is no more than 30-40ms from an Azure datacenter.)

If those assumptions come true, then you have a pretty compelling framework for desktops and applications on-demand from Microsoft via Azure. (And hey, guess what! When this is all real we also have RemoteFX decode capabilities built-in to lots of different TVs, and the stand-alone RemoteFX thin clients are available at Best Buy for $99.)

How far away is that future? 2012? 2014? (Maybe we don't need to build the desktop on demand like I thought in 2015. Maybe Chetan is right?)

And by the way, don't think this stops with the Windows desktop and apps. Don't forget about Xbox from Azure. You'll just plug the Xbox controllers right into your TV. Or your $99 thin client. Games on demand. Apps on demand. Your whole life in Azure. So let Google and VMware chase these new-fangled built-from-scratch Java/HTML5/whatever apps in the cloud. Microsoft will give you full rich Windows apps and games from the cloud, thanks to Azure and a little company called Calista that they bought almost three years ago.

Sounds interesting to me.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA) and Hyper-V or Hybrid Clouds

Suren Machiraju described Hybrid Cloud Solutions With Windows Azure AppFabric Middleware in an 11/30/2010 post:

Abstract

image Technical and commercial forces are causing Enterprise Architects to evaluate moving established on-premises applications into the cloud – the Microsoft Windows Azure Platform.

image This blog post will demonstrate that there are established application architectural patterns that can get the best of both worlds: applications that continue to live on-premises while interacting with other applications that live in the cloud – the hybrid approach. In many cases, such hybrid architectures are not just a transition point but a requirement, since certain applications or data are required to remain on-premises largely for security, legal, technical and procedural reasons.

Cloud is new, hybrid cloud even newer. There are a bunch of technologies that have just been released or announced so there is no one book or source for authentic information, especially one that compares, contrasts, and ties it all together. This blog, and a few more that will follow, is an attempt to demystify and make sense of it all.

This blog begins with a brief review of two prevalent deployment paradigms and their influence on architectural patterns: On-premises and Cloud. From there, we will delve into a discussion around developing the hybrid architecture.

In authoring this blog posting we take an architect’s perspective and discuss the major building block components that compose this hybrid architecture. We will also match requirements against the capabilities of available and announced Windows Azure and Windows Azure AppFabric technologies. Our discussions will also factor in the usage costs and strategies for keeping these costs in check.

The blog concludes with a survey of interesting and relevant Windows Azure technologies announced at Microsoft PDC 2010 - Professional Developer's Conference during October 2010.

On-Premises Architectural Pattern

On-premises solutions are usually designed to access data (client or browser applications) repositories and services hosted inside of a corporate network. In the graphic below, Web Server and Database Server are within a corporate firewall and the user, via the browser, is authenticated and has access to data.

Figure 1: Typical On-Premises Solution (Source - Microsoft.com)

The Patterns and Practices Application Architecture Guide 2.0 provides an exhaustive survey of the on-premises Applications. This Architecture Guide is a great reference document and helps you understand the underlying architecture and design principles and patterns for developing successful solutions on the Microsoft application platform and the .NET Framework.

The Microsoft Application Platform is composed of products, infrastructure components, run-time services, and the .NET Framework, as detailed in the following table (source: .NET Application Architecture Guide, 2nd Edition).

Table 1: Microsoft Application Platform (Source - Microsoft.com)

The on-premises architectural patterns are well documented (above), so in this blog post we will not dive into prescriptive technology selection guidance; however, let's briefly review some core concepts since we will reference them while elaborating hybrid cloud architectural patterns.

Hosting The Application

Commonly* on-premises applications (with the business logic) run on IIS (as an IIS w3wp process) or as a Windows Service Application. The recent release of Windows Server AppFabric makes it easier to build, scale, and manage Web and Composite (WCF, SOAP, REST and Workflow Services) Applications that run on IIS.

* As indicated in Table 1 (above), Microsoft BizTalk Server is capable of hosting on-premises applications; however, for the sake of this blog, we will focus on Windows Server AppFabric.

Accessing On-Premises Data

Your on-premises applications may access data from the local file system or network shares. They may also utilize databases hosted in Microsoft SQL Server or other relational and non-relational data sources. In addition, your applications hosted in IIS may well be leveraging Windows Server AppFabric Cache for your session state, as well as other forms of reference or resource data.

Securing Access

Authenticated access to these data stores is traditionally performed by inspecting certificates, user name and password values, or NTLM/Kerberos credentials. These credentials are either defined in the data sources themselves, in heterogeneous repositories such as SQL Logins or local machine accounts, or in directory services (LDAP) such as Microsoft Windows Server Active Directory, and are generally verifiable within the same network, but typically not outside of it unless you are using Active Directory Federation Services - ADFS.

Management

Every system exposes a different set of APIs and a different administration console to change its configuration – which obviously adds to the complexity; e.g., Windows Server for configuring network shares, SQL Server Management Studio for managing your SQL Server Databases; and IIS Manager for the Windows Server AppFabric based Services.

Cloud Architectural Pattern

Cloud applications typically access local resources and services, but they can also interact with remote services running on-premises or in the cloud. Cloud applications are usually hosted by a ‘provider-managed' runtime environment that provides hosting services, computational services, storage services, queues, management services, and load balancers. In summary, cloud applications consist of two main components: those that execute application code and those that provide data used by the application. To quickly acquaint you with the cloud components, here is a contrast with popular on-premises technologies:

imageTable 2: Contrast key On-Premises and Cloud Technologies

NOTE: This table is not intended to compare and contrast all Azure technologies. Some of them may not have an equivalent on-premise counterpart.

The graphic below presents an architectural solution for ‘Content Delivery'. While content creation and management are handled by on-premises applications, content storage (Azure Blob Storage) and delivery are handled in the cloud via the Azure Platform infrastructure. In the following sections we will review the components that enable this architecture.

Figure 2: Typical Cloud Solution

Running Applications in Windows Azure

Windows Azure is the underlying operating system for running your cloud services on the Windows Azure AppFabric Platform. The three core services of Windows Azure in brief are as follows:

  1. Compute: The compute or hosting service offers scalable hosting of services on 64-bit Windows Server platform with Hyper-V support. The platform is virtualized and designed to scale dynamically based on demand. The Azure platform runs web roles on Internet Information Server (IIS) and worker roles as Windows Services.

  2. Storage: There are three types of storage supported in Windows Azure: Table Services, Blob Services, and Queue Services. Table Services provide storage capabilities for structured data, whereas Blob Services are designed to store large unstructured files like videos, images, and batch files in the cloud. Table Services are not to be confused with SQL Azure; typically you can store the high-volume data in low-cost Azure Storage and use (relatively) expensive SQL Azure to store indexes to this data. Finally, Queue Services are the asynchronous communication channels for connecting between services and applications not only in Windows Azure but also from on-premises applications. Caching Services, currently available via Windows Azure AppFabric LABS, is another strong storage option.

  3. Management: The management service supports automated infrastructure and service management capabilities to Windows Azure cloud services. These capabilities include automatic and transparent provisioning of virtual machines and deploying services in them, as well as configuring switches, access routers, and load balancers.

A detailed review on the above three core services of Windows Azure is available in the subsequent sections.

Application code execution on Windows Azure is facilitated by Web and Worker roles. Typically, you would host Websites (ASP.NET, MVC2, CGI, etc.) in a Web Role and host background or computational processes in a Worker role. The on-premises equivalent for a Worker role is a Windows Service.

This architecture leaves a gray area: where should you host WCF Services? The answer is – it depends! Let me elaborate: when building on-premises services, you can host these WCF Services in a Windows Service (e.g., in BizTalk, WCF Receive Locations can be hosted within an in-process host instance that runs as a Windows Service), and in Windows Azure you can host the WCF Services in Worker roles. While you can host a WCF Service (REST or SOAP) in either a Worker role or a Web role, you will typically host these services in a Web role; the one exception is when your WCF service specifically needs to communicate via TCP/IP endpoints. Web roles are capable of exposing HTTP and HTTP(S) endpoints, while Worker roles add the ability to expose TCP endpoints.
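
As a hedged sketch only (the contract, service class, and the "OrderTcp" endpoint name are hypothetical, and the endpoint would need to be declared in the service definition), self-hosting a WCF service over TCP in a worker role might look roughly like this:

// Minimal sketch: self-hosting a WCF service on a worker role's TCP endpoint.
// IOrderService, OrderService and the "OrderTcp" endpoint name are hypothetical.
using System;
using System.ServiceModel;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IOrderService { [OperationContract] string Ping(); }

public class OrderService : IOrderService { public string Ping() { return "pong"; } }

public class WorkerRole : RoleEntryPoint
{
    private ServiceHost host;

    public override bool OnStart()
    {
        // Look up the TCP endpoint declared for this role instance.
        var endpoint = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["OrderTcp"].IPEndpoint;

        host = new ServiceHost(typeof(OrderService));
        host.AddServiceEndpoint(
            typeof(IOrderService),
            new NetTcpBinding(SecurityMode.None),
            String.Format("net.tcp://{0}/OrderService", endpoint));
        host.Open();
        return base.OnStart();
    }

    public override void Run()
    {
        // Keep the role instance alive while the ServiceHost listens.
        while (true) Thread.Sleep(10000);
    }
}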

Storing & Retrieving Cloud Hosted Data

Applications that require storing data in the cloud can leverage a variety of technologies including SQL Azure, Windows Azure Storage, Azure AppFabric Cache, Virtual Machine (VM) instance local storage, and Azure Drive (XDrive). Determining the right technology is a balancing exercise in managing trade-offs, costs, and capabilities. The following sections will attempt to provide prescriptive guidance on what usage scenario each storage option is best suited for.

The key takeaway is: whenever possible, co-locate data and the consuming application. Your data in the cloud, stored in SQL Azure or Windows Azure Storage, should be accessible via the ‘local' Azure "fabric" to your applications. This has positive performance and a beneficial cost model when both the application and the data live within the same datacenter. Co-location is enabled via the 'Region' you choose for the Microsoft Data Center.

SQL Azure

SQL Azure provides an RDBMS in the cloud, and functions for all intents and purposes similarly to your on-premises version of SQL Server. This means that your applications can access data in SQL Azure using the Tabular Data Stream (TDS) protocol; or, in other words, your application uses the same data access technologies (e.g. ADO.NET, LINQ to SQL, EF, etc.) used by on-premises applications to access information on a local SQL Server. The only thing you need to do is change the SQL Client connection strings you've always been using, so that the values point to the server and database hosted by SQL Azure. Of course, I am glossing over some details: the SQL Azure connection string contains other specific settings such as encryption, an explicit username (in the very strict format username@servername) and password – I am sure you get the drift here.
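
For illustration only (the server, database, table, login, and password values below are placeholders, not taken from the original article), connecting to SQL Azure over TDS with plain ADO.NET might look like this:

// Minimal sketch: querying SQL Azure over TDS with ADO.NET.
// Server, database, login, password and the dbo.Orders table are placeholders.
using System;
using System.Data.SqlClient;

class SqlAzureSample
{
    static void Main()
    {
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydatabase;" +
            "User ID=mylogin@myserver;" +   // note the username@servername format
            "Password=<password>;" +
            "Encrypt=True;";                // SQL Azure requires encrypted connections

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", connection))
        {
            connection.Open();
            Console.WriteLine("Rows: {0}", (int)command.ExecuteScalar());
        }
    }
}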

SQL Azure Labs demonstrate an alternative mechanism for allowing your application to interact with SQL Azure via OData. OData provides the ability to perform CRUD operations using REST and can retrieve query results formatted as AtomPub or JSON.

Via .NET programming, you can interact at a higher level using a DataServiceContext and LINQ.
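
As a rough sketch (the service URI, the "Orders" entity set, and the Order class are hypothetical placeholders), consuming such an OData feed from .NET might look like this:

// Minimal sketch: querying an OData feed with the WCF Data Services client.
// The URI, the "Orders" entity set, and the Order class are hypothetical.
using System;
using System.Data.Services.Client;
using System.Linq;

public class Order
{
    public int Id { get; set; }
    public string Status { get; set; }
}

class ODataSample
{
    static void Main()
    {
        var context = new DataServiceContext(
            new Uri("https://odata.example.com/OData.svc/myserver/mydatabase"));

        // CreateQuery<T> builds a query against a named entity set; the Where clause
        // is translated into an OData $filter expression on the wire.
        var openOrders = context.CreateQuery<Order>("Orders")
                                .Where(o => o.Status == "Open")
                                .ToList();

        foreach (var order in openOrders)
            Console.WriteLine(order.Id);
    }
}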

Securing Access

When using TDS, access is secured using SQL Accounts (i.e., your SQL Login and password must be in the connection string), however there is one additional layer of security you will need to be aware of. In most on-premises architectures, your SQL Server database lives behind a firewall, or at most in a DMZ, which you would rarely expose. However, no matter where they are located, these repositories are not directly accessible to applications living outside the corporate boundaries. Of course even for on-premises solutions you can use IPSEC to restrict the (range of) machines that can access the SQL Server machine. Eventually, the access to these databases is mediated and controlled by web services (SOAP, WCF).

What happens when your database lives on Azure, in an environment effectively available to the entire big bad world? SQL Azure also provides a firewall that you can configure to restrict access to a set of IP address ranges, which you would configure to include the addresses used by your on-premises applications, as well as any Microsoft Services (data center infrastructure which enable access from Azure hosted services like your Web Roles). In the graphic below you will notice the ‘Firewall Settings’ tab wherein the IP Address Range is specified.

Figure 3: SQL Azure Firewall Settings

When using OData, access is secured by configuration. You can enable anonymous access where the anonymous account maps to a SQL Login created on your database, or you can map Access Control Service (ACS) identities to specific SQL Logins, whereby ACS takes care of the authentication. In the OData approach, the Firewall access rules are effectively circumvented because only the OData service itself needs to have access to the database hosted by SQL Azure from within the datacenter environment.

Management

You manage your SQL Azure database the same way you might administer an on-premises SQL Server database, by using SQL Server Management Studio (SSMS). Alternately, and provided you allowed Microsoft Services in the SQL Azure firewall, you can use the Database Manager for SQL Azure (codename “Project Houston”) which provides the environment for performing many of the same tasks you would perform via SSMS.

The Database Manager for SQL Azure is a part of the Windows Azure platform developer portal refresh, and will provide a lightweight, web-based database management and querying capability for SQL Azure databases. This capability allows customers to have a streamlined experience within the web browser without having to download any tools.

Figure 4: Database Manager for SQL Azure (Source - PDC 10)

One of the primary technical differences from the traditional SSMS management option is the required use of port 1433, whereas "Project Houston" is web-based and leverages port 80. An awesome demo on the capabilities of the Database Manager is available here.

Typical Usage

SQL Azure is best suited for storing data that is relational, specifically where your applications expect the database server to perform the join computation and return only the processed results. SQL Azure is a good choice for scenarios with high transaction throughput (view case studies here) since it has a flat-rate pricing structure based on the size of data stored and additionally query evaluation is distributed across multiple nodes.

Windows Azure Storage (Blobs, Tables, Queues)

Windows Azure Storage provides storage services for data in the form of metadata-enriched blobs, dictionary-like tables, and simple persistent queues. You access data via HTTP or HTTP(S) by leveraging the StorageClient library, which provides three different classes (CloudBlobClient, CloudTableClient, and CloudQueueClient) to access the respective storage services. For example, CloudTableClient provides a DataServices context enabling the use of LINQ for querying table data. All data stored by Windows Azure Storage can also be accessed via REST. AppFabric CAT (team) plans to provide code samples via this blog site to demonstrate this functionality – stay tuned.
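
As a hedged sketch (the account credentials, container name, queue name, and file path are placeholders), basic blob and queue access through the StorageClient library might look like this:

// Minimal sketch: blob upload and queue message using the StorageClient library.
// The account name/key, "images" container, "worker-tasks" queue and file path are placeholders.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class StorageSample
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");

        // Blobs: upload an image into a container.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("images");
        container.CreateIfNotExist();
        container.GetBlobReference("products/widget.jpg").UploadFile(@"C:\temp\widget.jpg");

        // Queues: enqueue a pointer to work for a worker role to pick up.
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("worker-tasks");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage("images/products/widget.jpg"));
    }
}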

Securing Access

Access to any of the Windows Azure Storage blobs, tables, or queues is provided through symmetric key authentication, whereby an account name and account key pair must be included with each request. In addition, access to Azure blobs can be secured via the mechanism known as a Shared Access Signature.
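
A hedged sketch of the latter (account, container, and blob names are placeholders): generating a Shared Access Signature that grants time-limited, read-only access to a single blob without revealing the account key.

// Minimal sketch: creating a read-only Shared Access Signature for a blob.
// Account credentials, container and blob names are placeholders.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SasSample
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");
        CloudBlob blob = account.CreateCloudBlobClient()
            .GetContainerReference("images")
            .GetBlobReference("products/widget.jpg");

        // Grant read-only access for the next 30 minutes.
        string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30)
        });

        // Hand this URL to the client; the account key itself is never exposed.
        Console.WriteLine(blob.Uri.AbsoluteUri + sas);
    }
}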

Management

Currently there is no single tool for managing Windows Azure Storage. There are quite a few ‘samples’ as well as third-party tools. The graphic below is from the tool – Azure Storage Explorer available for download via Codeplex.

Figure 5: Azure Storage Explorer (Source – Codeplex)

In general, a Windows Azure storage account (which wraps access to all three forms of storage services) is created through the Windows Azure Portal, but is effectively managed using the APIs.

Typical Usage

Windows Azure storage provides unique value propositions that make it fit within your architecture for diverse reasons.

Although Blob storage is charged both for the amount of storage used and for the number of storage transactions, the pricing model is designed to scale transparently to any size. This makes it best suited to the task of storing files and larger objects (high-resolution videos, radiology scans, etc.) that can be cached or are otherwise not frequently accessed.

Table storage is billed the same way as Blob Storage, but its unique approach to indexing at large scale makes it useful in situations where you need to efficiently access data from datasets larger than the 50 GB per database maximum of SQL Azure, and where you don’t expect to be performing joins or are comfortable enabling the client to download a larger chunk of data and perform the joins and computations on the client side.

Queues are best suited for storing pointers to work that needs to be done, due to their limited storage capacity of a maximum of 8 KB per message, in a manner ensuring ordered access. Often, you will use queues to drive the work performed by your Worker roles, but they can also be used to drive the work performed by your on-premises services. Bottom line: Queues can be effectively used by a Web Role to exchange control messages in an asynchronous manner with a Worker Role running within the same application, or to exchange messages with on-premises applications.
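
As a hedged sketch (the configuration setting name, queue name, and ProcessBlob helper are hypothetical), a worker role draining such a queue might look roughly like this:

// Minimal sketch: a worker role polling a queue of work-item pointers.
// "DataConnectionString", "worker-tasks" and ProcessBlob are hypothetical placeholders.
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class QueueWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("worker-tasks");
        queue.CreateIfNotExist();

        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message == null)
            {
                Thread.Sleep(5000);            // nothing to do; back off briefly
                continue;
            }

            ProcessBlob(message.AsString);     // e.g., generate a thumbnail
            queue.DeleteMessage(message);      // delete only after successful processing
        }
    }

    private void ProcessBlob(string blobPath) { /* hypothetical work item */ }
}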

Azure AppFabric Cache

The Azure AppFabric Cache, currently in CTP/Beta as of November 2010, should give you (technology is pre-release and hence the caveat!) the same high-performance, in-memory, distributed cache available with Windows Server AppFabric, as a hosted service. Since this is an Azure AppFabric Service, you are not responsible for managing servers participating in the data cache cluster. Your Windows Azure applications can access the cached data through the client libraries in the Windows Azure AppFabric SDK.

Securing Access

Access to a cache is secured by a combination of generated authentication tokens and authorization rules (e.g., Read/Write versus Read only) as defined in the Access Control Service.

Figure 6: Access Control for Cache

Management

At this time, creation of a cache, as well as securing access to it, is performed via the Windows Azure AppFabric labs portal. Subsequent to its release this will be available in the commercial Azure AppFabric portal.

Typical Usage

Clearly the cache is best suited for persisting data closer to your application, whether it's data stored in SQL Azure, Windows Azure Storage, the result of a call to another service, or a combination of all of them. Generally, this approach is called the cache-aside model, whereby requests for data made by your application first check the cache and only query the actual data source, subsequently adding the result to the cache, if it's not present (a minimal cache-aside sketch follows the list below). Typically we are seeing cache used in the following scenarios:

  • A scalable session store for your web applications running on ASP.NET.

  • Output cache for your web application.

  • Objects created from resultsets from SQL or costly web service calls, stored in cache and called from your web or worker role.

  • Scratch pad in the cloud for your applications to use.
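
Here is a minimal cache-aside sketch, assuming the AppFabric Caching client is configured in the application's configuration file; the cache key format, the Order type, and the LoadOrderFromSqlAzure call are hypothetical.

// Minimal sketch of the cache-aside pattern with the AppFabric Caching client.
// The key format, Order type and LoadOrderFromSqlAzure are hypothetical placeholders.
using System;
using Microsoft.ApplicationServer.Caching;

public class OrderRepository
{
    // DataCacheFactory reads the cache endpoint and security settings from configuration.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetDefaultCache();

    public Order GetOrder(int orderId)
    {
        string key = "order:" + orderId;

        // 1. Try the cache first.
        var order = Cache.Get(key) as Order;
        if (order != null) return order;

        // 2. On a miss, load from the authoritative store (SQL Azure here)...
        order = LoadOrderFromSqlAzure(orderId);

        // 3. ...and populate the cache for subsequent readers.
        Cache.Put(key, order);
        return order;
    }

    private Order LoadOrderFromSqlAzure(int orderId)
    {
        // Hypothetical data access; cached objects must be serializable.
        return new Order { Id = orderId };
    }
}

[Serializable]
public class Order { public int Id { get; set; } }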

Using VM local storage & Azure Drives

When you create an instance in Windows Azure, you are creating an instance of a Virtual Machine (VM). The VM, just like its on-premises counterpart, can make use of locally attached drives or network shares for storage. Windows Azure Drives are similar to, but not exactly the same as, network shares. They are NTFS-formatted drives stored as page blobs in Windows Azure Storage, and accessed from your VM instance by drive letter. These drives can be mounted exclusively to a single VM instance as read/write or to multiple VM instances as read-only. When such a drive is mounted, it caches data read in local VM storage, which enhances the read performance of subsequent accesses for the same data.
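
As a hedged sketch only (the local resource name, configuration setting, container, VHD name, and sizes are placeholders), creating and mounting a drive from role code might look roughly like this:

// Minimal sketch: creating and mounting a Windows Azure Drive from a role instance.
// "DriveCache", "DataConnectionString", the "drives" container and the sizes are placeholders.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class DriveSample
{
    public static string MountReferenceDrive()
    {
        // Reserve part of the instance's local storage as the drive's read cache.
        LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
        CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        blobClient.GetContainerReference("drives").CreateIfNotExist();

        CloudDrive drive = account.CreateCloudDrive(
            blobClient.GetContainerReference("drives")
                      .GetPageBlobReference("reference.vhd").Uri.ToString());

        try { drive.Create(512); }                  // size in MB; fails if the drive exists
        catch (CloudDriveException) { /* already created; ignore */ }

        // Returns the path (drive letter) where the drive is mounted.
        return drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);
    }
}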

Securing Access

An Azure Cloud Drive is really just a façade on top of a page blob, so access to the drive effectively amounts to having access to the Blob store by Windows Storage Account credentials (e.g., account name and account key).

Management

As for Blobs, there is limited tooling for managing Azure drives outside of the StorageClient API’s. In fact, there is no portal for creating Azure Cloud drives. However, there are samples on Codeplex that help you with creating and managing the Azure drives.

The Windows Azure MMC Snap-In is available and can be downloaded from here – and the graphic below provides you a quick peek into the familiar look/feel of the MMC.

Figure 7: Windows Azure MMC Snap-In (Source - MSDN)

Typical Usage

Cloud drives have a couple of unique attributes that make them interesting choices in a hybrid architecture. To begin with, if you need drive-letter based access to your data from your Windows Azure application and you want it to survive VM instance failure, a cloud drive is your best option. Beyond that, you can mount a cloud drive to instances as read-only. This enables sharing of large reference files across multiple Windows Azure roles. What's more, you can create a VHD, for example, using Windows 7's Disk Management MMC and load this VHD to blob storage for use by your VM instances, effectively cloning an on-premises drive and extending its use into the cloud.

Hybrid Architectural Pattern: Exposing On-premises Services To Azure Hosted Services

Hybrid is very different – unlike a ‘pure' cloud solution, hybrid solutions keep significant business processes, data stores, and ‘Services' as on-premises applications, possibly due to compliance or deployment restrictions. A hybrid solution is one in which some applications of the solution are deployed in the cloud while other applications remain deployed on-premises.

For the purposes of this article, and as illustrated in the diagram, we will focus on the scenario where your on-premises applications are services hosted in Windows Server AppFabric that communicate with other portions of your hybrid solution running in the cloud.

Figure 8: Typical Hybrid Cloud Solution

Traditionally, to connect your on-premises applications to off-premises applications (cloud or otherwise), you would enable this scenario by "poking" a hole in your firewall and configuring NAT routing so that Internet clients can talk to your services directly. This approach has numerous issues and limitations, not the least of which are the management overhead, security concerns, and configuration challenges.

Connectivity

So the big question here is: How do I get my on-premises services to talk to my Azure hosted services?

There are two approaches you can take: You can use the Azure AppFabric Service Bus, or you can use Windows Azure Connect.

Using Azure AppFabric Service Bus

If your on-premises solution includes WCF Services, WCF Workflow Services, SOAP services, or REST services that communicate via HTTP(S) or TCP, you can use the Service Bus to create an externally accessible endpoint in the cloud through which your services can be reached. Clients of your solution, whether they are other Windows Azure hosted services or Internet clients, simply communicate with that endpoint, and the AppFabric Service Bus takes care of relaying traffic securely to your service and returning replies to the client. (A minimal relay-hosting sketch appears after the list of benefits below.)

The graphic below (from http://www.microsoft.com/en-us/appfabric/azure/middleware-services.aspx#ServiceBus) demonstrates the Service Bus functionality.

Figure 9: Windows Azure AppFabric Service Bus (Source - Microsoft.com)

The key value proposition of leveraging the Service Bus is that it is designed to transparently communicate across firewalls, NAT gateways, or other challenging network boundaries that exist between the client and the on-premises service. You get the following additional benefits:

  • The actual endpoint address of your services is never made available to clients.

  • You can move your services around, because clients are bound only to the Service Bus endpoint address, which is a virtual rather than a physical address.

  • If both the client and the service happen to be on the same LAN and could therefore communicate directly, the Service Bus can set them up with a direct link that removes the hop out to the cloud and back, thereby improving throughput and latency.

Securing Access to Service Bus Endpoints

Access to the Service Bus is controlled via the Access Control Service. Applications that use the Windows Azure AppFabric Service Bus are required to perform security tasks for configuration/registration or for invoking service functionality. Security tasks include both authentication and authorization using tokens from the Windows Azure AppFabric Access Control service. When permission to interact with the service has been granted by the AppFabric Service Bus, the service has its own security considerations that are associated with the authentication, authorization, encryption, and signatures required by the message exchange itself. This second set of security issues has nothing to do with the functionality of the AppFabric Service Bus; it is purely a consideration of the service and its clients.

There are four kinds of authentication currently available to secure access to Service Bus:

  • SharedSecret, a slightly more complex but easy-to-use form of username/password authentication.

  • SAML, which can be used to interact with SAML 2.0 authentication systems.

  • SimpleWebToken, which uses the OAuth Web Resource Authorization Protocol (WRAP) and Simple Web Tokens (SWT).

  • Unauthenticated, which enables interaction with the service endpoint without any authentication behavior.

Selection of the authentication mode is generally dictated by the application connecting to the Service Bus. You can read more on this topic here.

Cost considerations

Service Bus usage is charged by the number of concurrent connections to the Service Bus endpoint. When an on-premises or cloud-hosted service registers with the Service Bus and opens its listening endpoint, that counts as one connection; when a client subsequently connects to that endpoint, it counts as a second connection. One very important point falls out of this that may affect your architecture: you will want to minimize concurrent connections to the Service Bus in order to keep subscription costs down. In practice, this means you are more likely to use the Service Bus from the middle tier, much like a VPN to on-premises services, rather than allowing unlimited clients to connect through the Service Bus to your on-premises service.

To reiterate, the key value proposition of the Service Bus is Service Orientation; it makes it possible to expose application Services using interoperable protocols with value-added virtualization, discoverability, and security.

Using Windows Azure Connect

Recently announced at PDC 10, and expected for release by the end of 2010, Windows Azure Connect is an alternative means of connecting your cloud services to your on-premises services. It effectively offers IP-level, secure, VPN-like connections from your Windows Azure hosted roles to your on-premises systems. The service is not yet generally available, and pricing details have not been released. From the information available to date, you can conclude that Windows Azure Connect will be used for connecting middle-tier services, rather than public clients, to your on-premises solutions. While the Service Bus is focused on exposing on-premises services as Azure endpoints without having to deal with firewall and NAT setup, Windows Azure Connect provides broad connectivity between your Web/Worker roles and on-premises systems such as SQL Server, Active Directory, or LOB applications.

Securing the solution

Given the distributed nature of a hybrid (on-premises and cloud) solution, your approach to security should match it: your architecture should leverage federated identity. This essentially means that you are outsourcing authentication and possibly authorization.

If you want to flow your authenticated on-premises identities, such as domain credentials, into Azure hosted web sites or services, you will need a local Identity Provider Security Token Service (IP-STS), such as Active Directory Federation Services 2.0 (ADFS 2.0), that can issue claims for those identities. Your services, whether on-premises or in the cloud, can then be configured to trust credentials, in the form of claims, issued by the IP-STS. Think of the IP-STS as simply the component that can tell whether a username and password are valid. In this approach, clients authenticate against the IP-STS (for example, by sending their Windows credentials to ADFS) and, if valid, receive claims they can subsequently present to your websites or services for access. Your websites or services only have to evaluate these claims when executing authorization logic; Windows Identity Foundation (WIF) provides these facilities.
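
For illustration only, here is a minimal sketch of what that claims-based authorization check might look like in a service using WIF 1.0. The "Manager" role value is an assumption, and the claim types your IP-STS issues may differ.

```csharp
// Sketch: inspecting claims issued by the IP-STS inside a WIF-enabled service.
// The "Manager" role value below is an assumption for illustration.
using System.Linq;
using System.Security;
using System.Threading;
using Microsoft.IdentityModel.Claims;   // WIF 1.0

public static class OrderAuthorization
{
    public static void EnsureCanApproveOrders()
    {
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null)
            throw new SecurityException("Request was not authenticated by the STS.");

        IClaimsIdentity identity = principal.Identities[0];

        // WIF maps the incoming token onto the identity; check the role claim
        // instead of consulting a local user store.
        bool isManager = identity.Claims.Any(c =>
            c.ClaimType == ClaimTypes.Role && c.Value == "Manager");

        if (!isManager)
            throw new SecurityException("Caller lacks the required role claim.");
    }
}
```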

For additional information, review this session - SIA305 Windows Identity Foundation and Windows Azure for Developers. In this session you can learn how Windows Identity Foundation can be used to secure your Web Roles hosted in Windows Azure, how you can take advantage of existing on-premises identities, and how to make the most of the features such as certificate management and staged environments.

In the next release (ACS v2), a similar approach can be taken by using the Windows Azure AppFabric Access Control Service (ACS) as your IP-STS, thereby replacing ADFS. With ACS, you can define service identities, which are essentially logins secured by either a username and password or a certificate. The client calls out to ACS first to authenticate, and thereafter presents the claims received from ACS in its calls to your service or website.
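
To give a feel for that flow, the sketch below has a client request a token from ACS over the WRAP protocol and present the returned Simple Web Token to a REST service. The namespace, service identity name, password, realm and service URL are all placeholders, and the endpoint format follows the WRAPv0.9 convention of the time; your ACS version may differ.

```csharp
// Sketch: a client authenticating with an ACS service identity (WRAP) and
// presenting the returned SWT to a protected REST service.
// Namespace, identity name, password, realm and service URL are placeholders.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Web;   // HttpUtility (requires a reference to System.Web)

class AcsClientSketch
{
    static void Main()
    {
        const string tokenEndpoint =
            "https://yournamespace.accesscontrol.windows.net/WRAPv0.9/";

        var form = new NameValueCollection
        {
            { "wrap_name", "myserviceidentity" },                  // ACS service identity
            { "wrap_password", "<identity-password>" },
            { "wrap_scope", "http://yourservice.example.com/" }    // relying-party realm
        };

        string token;
        using (var web = new WebClient())
        {
            byte[] raw = web.UploadValues(tokenEndpoint, "POST", form);
            string body = Encoding.UTF8.GetString(raw);

            // The response is form-encoded; ParseQueryString also URL-decodes the values.
            token = HttpUtility.ParseQueryString(body)["wrap_access_token"];
        }

        // Present the SWT to the protected service in the Authorization header.
        var request = (HttpWebRequest)WebRequest.Create("http://yourservice.example.com/orders");
        request.Headers[HttpRequestHeader.Authorization] =
            "WRAP access_token=\"" + token + "\"";

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Service returned {0}", response.StatusCode);
        }
    }
}
```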

Finally, more advanced scenarios can be put into place that use ACS as a Relying-Party Security Token Service (RP-STS), acting as the authority for all of your apps, both on-premises and in the cloud. ACS is then configured to trust identities issued by other identity providers and to convert them to the claims expected by your applications. For example, you can take this approach to enable clients to authenticate using Windows Live ID, Google, and Yahoo, while still working with the same set of claims you built prior to supporting those providers.

Port Bridge, a point-to-point tunneling solution, is a good fit when the application is not exposed as a WCF service or doesn’t speak HTTP; it works well for protocols such as SMTP, SNMP, POP, IMAP, RDP, TDS and SSH. You can read more about it here.

Optimizing Bandwidth – Data transfer Costs

One huge issue to reconcile in hybrid solutions is bandwidth: because traffic regularly crosses the datacenter boundary, data transfer costs are a concern unique to hybrid architectures.

The amount of data transferred in and out of a datacenter is billable, so one approach to optimizing your bandwidth cost is to keep as much traffic as possible from leaving the datacenter. This translates into architecting designs that transfer most data across the Azure “fabric” within affinity groups and minimize traffic between disparate datacenters and external endpoints. An affinity group ensures that your cloud services and storage are hosted together on the Windows Azure infrastructure. Windows Azure roles, Windows Azure storage, and SQL Azure services can be configured at creation time to live within a specific datacenter. By ensuring that the Azure hosted components of your hybrid solution that communicate with each other live in the same datacenter, you effectively eliminate those data transfer costs.

The second approach to optimizing bandwidth costs is to use caching. This means leveraging Windows Server AppFabric Cache on-premises to minimize calls to SQL Azure or Azure storage. Similarly, it also means utilizing the Azure AppFabric Cache from Azure roles to minimize calls to SQL Azure, Azure storage, or on-premises services via the Service Bus.
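
As a small sketch of the cache-aside pattern with the AppFabric caching API, assuming the cache client is already configured in the application configuration file (the expiration time and the data-access call are placeholders):

```csharp
// Sketch: cache-aside lookup using the AppFabric caching API to avoid
// repeated round trips to SQL Azure. Expiration and the loader are placeholders.
using System;
using Microsoft.ApplicationServer.Caching;

public class ProductCatalog
{
    // DataCacheFactory reads endpoint/security settings from configuration;
    // it is expensive to create, so hold on to a single instance.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    private static readonly DataCache Cache = Factory.GetDefaultCache();

    public Product GetProduct(string productId)
    {
        var product = Cache.Get(productId) as Product;
        if (product == null)
        {
            product = LoadProductFromSqlAzure(productId);          // the expensive call
            Cache.Put(productId, product, TimeSpan.FromMinutes(10));
        }
        return product;
    }

    private Product LoadProductFromSqlAzure(string productId)
    {
        // Placeholder for the actual SQL Azure query.
        return new Product { Id = productId };
    }
}

[Serializable]
public class Product { public string Id { get; set; } }
```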

Optimizing Resources

One of the most significant cost-optimization features of Windows Azure roles is their support for dynamic scale-up and scale-down: adding more instances as your load increases and removing them as the load decreases. Load-based dynamic scaling can be accomplished using the Windows Azure Service Management API together with metrics stored in SQL Azure. While there is no out-of-the-box support for this, there is a fair amount of guidance on how to implement it within your solutions (see the Additional References section for one such example). The typical approach is to configure your Azure Web and Worker roles to log health metrics (e.g., CPU load, memory usage) to a table in SQL Azure. Another Worker role then periodically polls this table and, when certain thresholds are reached or exceeded, increases or decreases the instance count for the monitored service using the Service Management API, as sketched below.
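
A rough sketch of that monitoring loop follows. It is not an auto-scaling framework; QueryAverageCpu, GetCurrentInstanceCount and SetInstanceCount are hypothetical helpers you would implement over your metrics table and the Service Management REST API (Get/Change Deployment Configuration, authenticated with a management certificate), and the thresholds are illustrative.

```csharp
// Sketch of the scaling worker's polling loop. QueryAverageCpu, GetCurrentInstanceCount
// and SetInstanceCount are hypothetical helpers: the first two would read the metrics
// table and deployment configuration, and the last would call the Windows Azure
// Service Management REST API to change the role's instance count.
using System;
using System.Threading;

public class ScalingWorker
{
    public void Run()
    {
        while (true)
        {
            double avgCpu = QueryAverageCpu("WebRole", TimeSpan.FromMinutes(5));
            int current = GetCurrentInstanceCount("WebRole");

            if (avgCpu > 75 && current < 8)
                SetInstanceCount("WebRole", current + 1);   // scale up
            else if (avgCpu < 25 && current > 2)
                SetInstanceCount("WebRole", current - 1);   // scale down

            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }

    // Hypothetical helpers -- implementations omitted in this sketch.
    double QueryAverageCpu(string roleName, TimeSpan window) { throw new NotImplementedException(); }
    int GetCurrentInstanceCount(string roleName) { throw new NotImplementedException(); }
    void SetInstanceCount(string roleName, int count) { throw new NotImplementedException(); }
}
```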

Monitoring & Diagnostics

For on-premises services, you can use Windows Server AppFabric Monitoring which logs tracking events into your on-premises instance of SQL Server, and this data can be viewed both through IIS Manager and by querying the monitoring database.

For Azure roles in production you will leverage Windows Azure Diagnostics, which has your instances periodically transfer batches of diagnostic data to Azure storage. For SQL Azure, you can examine query performance by querying the related Dynamic Management Views.  
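
For example, a role typically sets up those scheduled transfers in its OnStart method. The sketch below uses the SDK 1.x diagnostics API; the connection-string setting name, the sampled performance counter and the transfer periods are illustrative assumptions.

```csharp
// Sketch: configuring Windows Azure Diagnostics in a role's OnStart (SDK 1.x API).
// The setting name, sampled counter and transfer periods are illustrative choices.
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Ship trace logs to table storage every minute.
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        // Sample CPU usage every 30 seconds and transfer it every five minutes.
        config.PerformanceCounters.DataSources.Add(new PerformanceCounterConfiguration
        {
            CounterSpecifier = @"\Processor(_Total)\% Processor Time",
            SampleRate = TimeSpan.FromSeconds(30)
        });
        config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(5);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);
        return base.OnStart();
    }
}
```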

IntelliTrace is very useful for debugging solutions. With IntelliTrace debugging, you can log extensive debugging information for a Role instance while it is running in Windows Azure. If you need to track down a problem, you can then use the IntelliTrace logs to step through your code from within Visual Studio as though it were running in Windows Azure. In effect, IntelliTrace records key code execution and environment data while your service is running, and allows you to replay the recorded data from within Visual Studio. For more information on IntelliTrace, see Debugging with IntelliTrace.

Summary

Based on the discussion above, let’s consolidate our findings into the following easy-to-read table.

Table 3: In Summary

image

It’s amply clear that the hybrid cloud architectural pattern is well suited to connecting on-premises appliances and applications with their cloud counterparts. The pattern is attractive because it leverages the best of both worlds: traditional on-premises applications and new distributed/multi-tenant cloud applications.

Let’s Make Plans To Build This Out Now!

In the sections above we presented a survey of the technologies available for building out hybrid applications; the recent announcements at PDC 10 are worth considering before you move forward.

First, you should consider the Composition Model within the Windows Azure composite application environment. The Composition Model ties together all of your Azure hosted services (roles, storage, SQL Azure, etc.) as one composite application, enabling you to see the end-to-end picture in a single designer in Visual Studio. The Composition Model is a set of .NET Framework extensions that builds on the familiar Azure Service Model concepts and adds new capabilities for describing the application infrastructure, its components, and the relationships among the components, in order to treat the composite application as a single logical identity.

Second, if you were making calls back to an on-premises instance of Windows Server AppFabric, note that Windows Workflow is fully supported on Azure as part of the Composition Model, so you may choose to move the relevant workflows directly into the cloud. This is huge, because it enables you to develop once and deploy as needed!

Third, there is a new role in town: the Windows Azure VM Role. It enables third-party and legacy line-of-business applications to participate fully in a hybrid cloud solution, without a major code do-over.

And last but not least, Windows Azure now provides Remote Desktop functionality, which enables you to connect to a running instance of your application or service to monitor activity and troubleshoot common problems. Remote Desktop access, while most critical for the VM Role, is also available to Web and Worker roles. This in turn enables another related new feature: the availability of full IIS, instead of just the hosted web core previously available to web roles.

Additional References

While web links for additional background information are embedded in the text, the following references are provided as resources to move forward on your journey.

Stay Tuned!

This is just the beginning; our team is currently working on the next blog post, which builds on this topic and will take a scenario-driven approach to building a hybrid Windows Azure application.

Acknowledgements

Significant contributions from Paolo Salvatori, Valery Mizonov, Keith Bauer and Rama Ramani are acknowledged.

The PDF Version of this document is attached: HYBRID CLOUD SOLUTIONS WITH WINDOWS AZURE APPFABRIC MIDDLEWARE.pdf


David Lemphers (@davidlem) asserted Public versus Private Cloud is a NOOP Cloud Conversation! in this 11/29/2010 post:

image If you’re about to have a cloud conversation with someone (a customer, an internal colleague), the worst thing you can do is dichotomize the conversation into public versus private cloud. And here’s why.

Most folks looking to adopt cloud computing are looking to do so from a transformative perspective. For example, suppose your organization can’t compete because it is too slow to respond to new opportunities or changing market conditions, and when you trace that back to the root cause, it’s because:

  • IT still takes six weeks to stand up a server, let alone get your app services up and running for the new business opportunity within a month
  • The sales team needs the IT app up and running before it can add new customer or pricing information
  • The supply team needs the IT folks and the sales folks to be done before it can add the new orders to the ERP system with the new SKUs and pricing data for the new customers

Then tackling the cloud conversation purely in terms of public or private is really not addressing the key issue, which is: how does the organization as a whole become an on-demand, always-on, 24/7 business?

When you ask this question, the answer to private versus public cloud becomes: both. You need a strategy that spans internal and external, because the way to deal with rapidly changing customer, market and internal business needs is to have an on-demand, self-service and supple internal IT capability that connects to a flexible, on-demand business organization and can extend past the firewall.

So make sure you cover all aspects of becoming an always-on organization, including how to leverage cloud computing internally and externally and how it can serve as a transformative function for other business units, or you’re getting only a small part of all the goodness cloud can offer.


<Return to section navigation list> 

Cloud Security and Governance

Jeff Jedras posted Mafiaboy sees cloud computing vulnerability to Network World’s Security blog on 11/24/2010 (missed when posted):

image Michael Calce, the reformed hacker from Montreal who will forever be known as Mafiaboy, told a group of IT professionals Tuesday that he has serious concerns about the inherent vulnerabilities in the latest evolution of information technology: cloud computing.

Calce was a guest-speaker at storage vendor Hitachi Data Systems' (HDS) (NYSE: HIT) annual Information Forum event in Toronto. He came to fame in 2000 when, as a teenager, he launched a series of denial of service attacks that crippled the Web sites of companies such as CNN, Amazon, Dell and Yahoo, leading to a manhunt by the RCMP and the FBI and his eventual arrest.

Having completed his sentence and matured beyond the "misguided youth with too much power at his fingertips," Calce is speaking out about IT security and the inherent vulnerabilities in the way the Internet is constructed, which he said still haven't been addressed. And Calce has some serious concerns about the latest craze sweeping the IT industry: cloud computing.

"These businesses are a lot more at risk today than ever before. So much data is available, and being put into the cloud," said Calce. "It's one of the reasons I've decided to break my silence."

While he understands the practicality behind the cloud architecture, agreeing it's what the Internet was really designed to be, Calce said he worries if we're ready for the security risks and are willing to address them.

"Raising awareness of security is very critical to where the cloud is moving," said Calce. "It's like sex-ed, we need IT security education in school. At this point, everyone is destined in their lives to touch technology."

In an interview with CDN, Calce said that while everyone is focused on the cloud, his biggest concern is that we're not even secure with our current infrastructure, and here we are putting our data all in one bubble. It's a great concept, but he said security seems to be an afterthought.

"It's hard to patch-up holes when so many bullets have already been fired. I know I'll never get businesses to agree but we need to slow down technology, and stop reinventing without fixing the predecessor first," said Calce. "We're always taking on security as an afterthought. We should redesign the protocols behind the Internet to make it less exploitable.

Calce isn't suggesting we back away from new models of computing like the cloud. But he does stress we need to make security a priority, not an afterthought.

"Imagine building a new house with a crap foundation, how long will it last?" asked Calce. "Why not build a new foundation first?"

While he doesn't share Calce's level of concern with cloud computing, Chris Willis, senior director of solutions consulting for HDS Canada, said they brought Calce in as a speaker because they wanted to raise the awareness of security issues around cloud computing and IT in general.

Willis said Hitachi works with its partners such as Brocade to build security into its systems and architectures, and offers features such as data encryption both at rest and in flight.



<Return to section navigation list> 

Cloud Computing Events

James Conard (@jamescon) and Ryan Dunn (@dunnry) will present a one-hour Getting Started with the Windows Azure November 2010 Release (DPE110CAL) Webcast at 9:00 AM on 12/1/2010 (requires registration):

image Just a few short weeks ago at the Microsoft Professional Developers Conference we announced several new features for Windows Azure. These enhancements are now available, so you can start using them to build Windows Azure applications. They include the Virtual Machine Role and Elevated Privileges, which can make it easier to move existing applications to the cloud; Windows Azure Connect, which enables you to connect on-premises systems to the cloud; and core enhancements such as the new Windows Azure platform portal experience. In this webcast you will see several demos of these brand-new Windows Azure features and see firsthand how to start building Windows Azure applications with the new Windows Azure SDK and tools.

Presented by:

  • James Conard, Sr Director, Evangelism, Microsoft
  • Ryan Dunn, Sr. Technical Evangelist, Microsoft


The South Bay .NET User Group announced Introduction to Visual Studio LightSwitch as the subject of their 12/1/2010 6:00 PM PST meeting at Microsoft - Mt. View Campus, 1065 La Avenida Street, Bldg. 1, Mountain View, CA 94043:

image LightSwitch is a new product in the Visual Studio family aimed at developers who want to easily create business applications for the desktop or the cloud. It simplifies the development process by letting you concentrate on the business logic while LightSwitch handles the common tasks for you.

image In this demo-heavy session, you will see, end-to-end, how to build and deploy a data-centric business application using LightSwitch. We’ll also go beyond the basics of creating simple screens over data and demonstrate how to create screens with more advanced capabilities.

You’ll see how to extend LightSwitch applications with your own Silverlight custom controls and RIA services. We’ll also talk about the architecture and additional extensibility points that are available to professional developers looking to enhance the LightSwitch developer experience.

Speaker's Bio:

image Beth Massi is a Senior Program Manager on the Visual Studio BizApps team at Microsoft, which builds the Visual Studio tools for Azure, Office, and SharePoint, as well as Visual Studio LightSwitch. Beth is a community champion for business application developers and is responsible for producing and managing online content and community interaction for the BizApps team.

Sign up here.


<Return to section navi

David Pallman reported on 11/30/2010 three New Windows Azure Features Webcasts for 12/3/2010, 12/15/2010 and 1/11/2011:

image Last month at PDC2010 Microsoft announced the imminent availability of many exciting new features and services, some long awaited. These capabilities have just come online for use this week--some as released features and some as community previews you have to sign up for. This includes a completely new management portal and an updated SDK, so there's a lot to get used to.

imageI'll be covering (and demoing) the new features in a 3-part webcast series, the first of which is this Friday 12/3. Below are the dates, topics covered, and registration links:

Hope you can join us!


Ben Day announced on 11/30/2010 Beantown .NET Meeting Thursday, 12/2/2010: Jason Haley, “Migration to the Cloud -- Amazon vs. Azure”:

A quick reminder…

image Beantown .NET is going to be meeting Thursday, 12/2/2010. This month we have Jason Haley presenting “A Simple Web Site Migration to the Cloud – Amazon vs. Azure”.

As always, our meeting is open to everyone so bring your friends and co-workers – better yet, bring your boss.  Please RSVP by email (beantown@benday.com) by 3pm on the day of the meeting to help speed your way through building security and to give us an idea how much pizza to order.

Future meetings:

  • January 6 – Richard Hale Shaw, “On Time and Under Budget: How to Stop Missing – and Start Meeting – Software Project Deadlines”
  • February 3 – Bob German, “SharePoint 2010 Development for .NET Developers”
  • April 7 – TBA

When: Thursday, 12/2/2010, 6p – 8p
Where:
Microsoft NERD
1 Memorial Drive
Cambridge, MA

Directions: http://microsoftcambridge.com/About/Directions/tabid/89/Default.aspx

Parking: Paid parking is available in 1 Memorial Drive. 

Title: A Simple Web Site Migration to the Cloud - Amazon vs. Azure

Abstract:
Are you interested in moving your web site to the cloud? How complicated can it be? How much is it going to cost? How do you get started? Should you use Amazon Web Services or Microsoft Azure? As always, the answer to these questions is 'It depends'. In this session I'll walk through some of the options and scenarios you will face in migrating a site to the cloud and give you some pointers so you can decide what is best for your situation. This talk is meant to provide enough information for you to get started in looking at your migration to the cloud - it is not an advanced 'how to' talk.

Bio:
Jason Haley has been working with Microsoft technologies for the past 15 years in various settings – mostly in the New England and Seattle areas. Last year (2009), he decided to become an independent consultant and started Jason Haley Consulting. Now, after almost a year of being gainfully unemployed, he is building his client base in the New England area and enjoying the opportunities of working with multiple clients instead of a single full-time employer.


Adron Hall reminds .NET Developers about CloudCamp Seattle! at 6:00 PM PST on 12/1/2010 at Amazon HQ, 426 Terry Avenue North (At South Lake Union), 2nd Floor Conference Room, Seattle, WA 98109:

image Tomorrow is the big day! So be sure to come check out CloudCamp Seattle! We’re going to have a lot of great attendees, some rock-star lightning talks and more. Make sure to get registered ASAP (click on the CloudCamp image above).

Location:
Amazon HQ
426 Terry Avenue North (At South Lake Union)
2nd Floor Conference Room
Seattle, WA 98109

Final Schedule:
6:00pm Registration, Networking w/ Food & Drinks
6:30pm Welcome and Thank yous
6:45pm Lightning Talks (5 minutes each)
Tony Cowan – WebSphere CloudBurst/Hypervisor Editions
Mithun Dhar – Microsoft Azure
Steve Riley – Amazon Web Services
Sundar Raghavan – Skytap
Josh Wieder - Atlantic.net
Margaret Dawson - Hubspan
Patrick Escarcega – “Managing Fear – Transitioning to the Cloud”
7:30pm Unpanel
8:00pm Begin Unconference (organize the unconference)
8:15pm Unconference Session 1
9:00pm Unconference Session 2
9:45pm Wrap-up Session
10:00pm Raffle Books: “Host your website in the cloud” by Jeff Barr
10:15pm Drinks at 13coins sponsored by Clear Wireless Internet

NW Cloud

Local Organizers:
- Jon Madamba of http://www.sawsug.com
- Shy Cohen
- Krish Subramanian of Krishworld
- Adron Hall (Me)
- Dave Nielsen of CloudCamp


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Derrick Harris reported Red Hat Buys Makara, Adds PaaS to Its Cloud Mix in an 11/30/2010 post to GigaOm’s Structure blog:

The cloud computing world is in for yet another shakeup: Red Hat has acquired platform-as-a-service (PaaS) startup Makara. The purchase immediately vaults Red Hat into the role of cloud provider (Makara offers an on-demand service hosted atop Amazon (s amzn) EC2), but, more importantly, gives Red Hat the means to sell its PaaS vision across the cloud landscape. Red Hat’s cloud strategy has been about choice since the beginning, and Makara is icing on the cake.

image Makara is widely cited as a JBoss PaaS provider, but that’s not entirely accurate. It actually specializes in JBoss and PHP applications, but also supports standard Java EE, Spring and Tomcat, as well as LAMP. According to Red Hat’s Bryan Che, the company won’t change the current focus on JBoss and PHP, and actually will expand integration between Makara and other components of the JBoss portfolio. This isn’t a surprising plan at all given that Red Hat bought JBoss in 2006, and it actually mirrors Microsoft’s Windows Azure approach of providing an optimal platform for .NET while supporting a variety of alternative languages.

image But unlike Windows Azure – or any other popular PaaS offering, for that matter – Makara isn’t relegated to running in any particular environment. Yes, the service is hosted atop Amazon EC2, but Makara customers can implement the platform atop pretty much any virtualized infrastructure. Che said that means Red Hat Enterprise Virtualization-, Xen, or VMware-based (s vmw) internal or public clouds, as well as internal IaaS software such as Eucalyptus or Cloud.com’s CloudStack. Makara can serve as the platform for hybrid cloud environments, too.

As I pointed out yesterday while reporting on CloudBees’ funding for its Java PaaS offering, the Java PaaS space is getting crowded. Makara is a meaningful player made even more formidable with the addition of Red Hat’s financial backing and strong product lineup to tie into. Apart from JBoss middleware at the platform level, Red Hat has positioned Red Hat Enterprise Linux and Red Hat Enterprise Virtualization as core infrastructure building blocks, along with various systems management products and its open source Deltacloud API. It recently packaged these components, along with consulting services, into the Red Hat Cloud Foundations portfolio. In true PaaS form, Makara adds a layer of automation and abstraction above all of these lower-level components.

Microsoft and VMware probably have the most to fear from Red Hat’s ceaseless cloud push because they’re the only other large software vendors targeting both internal and public IaaS and PaaS, and they’ve both been cited for promoting lock-in to some degree. Even without the addition of Makara, JBoss already supports PHP, Perl, Python, Ruby, OCAML, C/C++, Java, Seam, Hibernate, Spring, Struts and Google Web Toolkit applications, and the Deltacloud API gives users basic management capabilities across pretty much every major public infrastructure cloud. This means Red Hat customers seeking IaaS have a large degree of choice in terms of how they develop and where they deploy their applications.

image By contrast, VMware’s PaaS strategy is tied to Spring, and its IaaS strategy is tied largely to vCloud. Microsoft’s public PaaS and IaaS strategies are both tied largely to Windows Azure, while its somewhat disconnected internal strategy revolves around its System Center management product. Of course, VMware will correctly tell you that there’s definite value to be had by running Spring applications on either VMware-based internal clouds or Google App Engine and Force.com, and Windows Azure is a PaaS standout in its own right, lock-in concerns or not. [Emphasis added.]

Until Makara, Red Hat hasn’t had a cloud offering designed entirely with the cloud in mind, so its presence as a cloud vendor is now a lot stronger. And, until now, no major software vendor has presented this degree of choice in its cloud strategy – something customers regularly cite as important. Will cloud computing be the area where Red Hat finally takes a market leadership position instead of acting as a thorn in the side of Microsoft and VMware?

Related content from GigaOM Pro (sub req’d):


Gartner claimed “By 2015, 20 percent of non-IT Global 500 companies will be cloud service providers” in its Gartner Reveals Top Predictions for IT Organizations and Users for 2011 and Beyond press release of 11/30/2010:

Predictions Show Clear Linkage of IT Investments and Business Results Becoming an Imperative for IT Organizations

image STAMFORD, Conn., November 30, 2010 —   Gartner, Inc. has revealed its top predictions for IT organizations and users for 2011 and beyond. Analysts said that the predictions highlight the significant changes in the roles played by technology and IT organizations in business, the global economy and the lives of individual users.

More than 100 of the strongest Gartner predictions across all research areas were submitted for consideration this year. This year's selection process included evaluating several criteria that define a top prediction. The issues examined included relevance, impact and audience appeal.

"With costs still under pressure, growth opportunities limited and the tolerance to bear risk low, IT faces increased levels of scrutiny from stakeholders both internal and external," said Darryl Plummer, managing vice president and Gartner fellow. "As organizations plan for the years ahead, our predictions focus on the impact this scrutiny will have on outcomes, operations, users and reporting. All parties expect greater transparency, and meeting this demand will require that IT become more tightly coupled to the levers of business control."

"Gartner's top predictions showcase the trends and disruptive events that will reshape the nature of business for the next year and beyond," said Brian Gammage, vice president and Gartner fellow. "Selected from across our research areas as the most compelling and critical predictions, the developments and topics they address this year focus on changes in the roles that technologies and IT organizations play: in the lives of workers, the performance of businesses and the wider world."

Last year's theme of rebalancing supply, consumer demand and regulation is still present across most of the predictions, but the view has shifted further toward external effects. This year's top predictions highlight an increasingly visible linkage between technology decisions and outcomes, both economic and societal. The top predictions include:

By 2015, a G20 nation's critical infrastructure will be disrupted and damaged by online sabotage.
Online attacks can be multimodal, in the sense of targeting multiple systems for maximum impact, such as the financial system (the stock exchange), physical plant (the control systems of a chemical, nuclear or electric plant), or mobile communications (mobile-phone message routers). Such a multimodal attack can have lasting effects beyond a temporary disruption, in the same manner that the Sept. 11 attacks on the U.S. had repercussions that have lasted for nearly a decade. If a national stock market was rendered unavailable for several weeks, there would be lasting effects even if there was no change in government, although it is also possible that such disruptive actions could eventually result in a change in leadership.

By 2015, new revenue generated each year by IT will determine the annual compensation of most new Global 2000 CIOs.
Four initiatives — context-aware computing, IT's direct involvement in enterprise innovation development efforts, Pattern-Based Strategies and harnessing the power of social networks — can potentially directly increase enterprise revenue. Executive and board-level expectations for realizing revenue from those and other IT initiatives will become so common that, in 2015, the amount of new revenue generated from IT initiatives will become the primary factor determining the incentive portion of new Global 2000 CIOs' annual compensation.

By 2015, information-smart businesses will increase recognized IT spending per head by 60 percent.
Those IT-enabled enterprises that successfully navigated the recent recession and return to growth will benefit from many internal and external dynamics. Consolidation, optimization and cost transparency programs have made decentralized IT investments more visible, increasing "recognized" IT spending. This, combined with staff reduction and freezes, will reward the leading companies within each industry segment with an IT productivity windfall that culminates in at least a 60 percent increase in the metric for "IT spending per enterprise employee" when compared against the metrics of peer organizations and internal trending metrics.

By 2015, tools and automation will eliminate 25 percent of labor hours associated with IT services.
As the IT services industry matures, it will increasingly mirror other industries, such as manufacturing, in transforming from a craftsmanship to a more industrialized model. Cloud computing will hasten the use of tools and automation in IT services as the new paradigm brings with it self-service, automated provisioning and metering, etc., to deliver industrialized services with the potential to transform the industry from a high-touch custom environment to one characterized by automated delivery of IT services. Productivity levels for service providers will increase, leading to reductions in their costs of delivery.

By 2015, 20 percent of non-IT Global 500 companies will be cloud service providers.
The move by non-IT organizations to provide non-IT capabilities via cloud computing will further expand the role of IT decision making outside the IT organization. This represents yet another opportunity for IT organizations to redefine their value proposition as service enablers — with either consumption or provision of cloud-based services. As non-IT players externalize core competencies via the cloud, they will be interjecting themselves into value chain systems and competing directly with IT organizations that have traditionally served in this capacity.

By 2014, 90 percent of organizations will support corporate applications on personal devices.
The trend toward supporting corporate applications on employee-owned notebooks and smartphones is already under way in many organizations and will become commonplace within four years. The main driver for adoption of mobile devices will be employees — i.e., individuals who prefer to use private consumer smartphones or notebooks for business, rather than using old-style limited enterprise devices. IT is set to enter the next phase of the consumerization trend, in which the attention of users and IT organizations shifts from devices, infrastructure and applications to information and interaction with peers. This change in view will herald the start of the postconsumerization era.

By 2013, 80 percent of businesses will support a workforce using tablets.
The Apple iPad is the first of what promises to be a huge wave of media tablets focused largely on content consumption, and to some extent communications, rather than content creation, with fewer features and less processing power than traditional PCs and notebooks or pen-centric tablet PCs. Support requirements for media tablets will vary across and within enterprises depending on usage scenario. At minimum, in cases where employees are bringing their own devices for convenience, enterprises will have to offer appliance-level support with a limited level of network connectivity (which will likely include access to enterprise mail and calendaring) and help desk support for connectivity issues.

By 2015, 10 percent of your online "friends" will be nonhuman.
Social media strategy involves several steps: establishing a presence, listening to the conversation, speaking (articulating a message), and, ultimately, interacting in a two-way, fully engaged manner. Thus far, many organizations have established a presence, and are mostly projecting messages through Twitter feeds and Facebook updates that are often only an incremental step up from RSS feeds. By 2015, efforts to systematize and automate social engagement will result in the rise of social bots — automated software agents that can handle, to varying degrees, interaction with communities of users in a manner personalized to each individual.

Additional details are in the Gartner report "Gartner's Top Predictions for IT Organizations and Users, 2011 and Beyond: IT's Growing Transparency" which is available on Gartner's website at http://www.gartner.com/resId=1476415.

Mr. Gammage and Mr. Plummer will be hosting upcoming webinars "Gartner Top Predictions for 2011 and Beyond" on November 30 and December 15. To register for the November 30 webinar, please visit http://my.gartner.com/portal/server.pt?open=512&objID=202&mode=2&PageID=5553&resId=1447016&ref=Webinar-Calendar. To register for the December 15 webinar, please visit http://my.gartner.com/portal/server.pt?open=512&objID=202&mode=2&PageID=5553&resId=1462334&ref=Webinar-Calendar.


Jinesh Varia reported New Whitepapers on Cloud Migration: Migrating Your Existing Applications to the AWS Cloud in an 11/29/2010 post to the Amazon Web Services blog:

image Developers and architects looking to build new applications in the cloud can simply design the components, processes and workflow for their solution, employ the APIs of the cloud of their choice, and leverage the latest cloud-based best practices for design, development, testing and deployment. In choosing to deploy their solutions in a cloud-based infrastructure like Amazon Web Services (AWS), they can take immediate advantage of instant scalability and elasticity, isolated processes, reduced operational effort, on-demand provisioning and automation.

At the same time, many businesses are looking for better ways to migrate their existing applications to a cloud-based infrastructure so that they, too, can enjoy the same advantages seen with greenfield application development.
One of the key differentiators of AWS’s infrastructure services is flexibility: it gives businesses the freedom to choose the programming models, languages, operating systems and databases they are already using or familiar with. As a result, many organizations are moving existing applications to the cloud today.

In that regard, I am very excited to release our series of whitepapers on cloud migration.

Download Main Paper : Migrating your existing applications to the AWS cloud (PDF)

This whitepaper will help you build a migration strategy for your company. It discusses steps, techniques and methodologies for moving your existing enterprise applications to the AWS cloud. There are several strategies for migrating applications to new environments; in this paper, we share several that have helped enterprise companies take advantage of the cloud. We discuss a phase-driven, step-by-step strategy for migrating applications to the cloud (see below).

Cloudmigration-jineshvaria
To illustrate the step-by-step strategy, we provide three distinctly different scenarios, listed in the table below. Each scenario discusses the motivation for the migration, describes the before and after application architecture, details the migration process, and summarizes the technical benefits of migration:

image

  1. Migration Scenario #1: Migrating web applications to the AWS cloud (PDF)
  2. Migration Scenario #2: Migrating batch processing applications to the AWS cloud (PDF)
  3. Migration Scenario #3: Migrating backend processing pipelines to the AWS cloud (PDF)

As always we are hungry for customer feedback. Let us know whether this paper is helpful to you. If you have moved an existing application to the AWS cloud, we would love to get your feedback too. Send us more details at evangelists at amazon dot com.


GetApp.com posted The Cloud Computing Revolution in Images on 11/26/2010, including a 22.5% market percentage figure for Windows Azure:

The term Cloud Computing has now been around for a few years, driven mainly by the pioneers Amazon, Google and Salesforce. More recently, competition has been growing, with more and more cloud providers flooding the market with new offerings that enable ISVs and developers to launch new services very quickly and efficiently.

But how many SMBs are really jumping on the cloud computing train? Spiceworks estimates that 14% of SMBs are currently using cloud solutions, and another 10% plan to deploy cloud services over the next six months. Small businesses are more aggressive with cloud adoption than larger businesses. Unproven technology topped security as the main concern among those IT pros who have no plans to deploy cloud solutions over the next six months.

Below are three recent infographics that give an interesting overview of the current landscape around cloud computing and SaaS.

If you have more infographics to share with the community, please add your pointers!

From Zenoss:

Cloud-Computing-Infographic-2010

Note: The preceding is the first published market penetration figure (~22.5%) for Windows Azure that I’ve seen.

From Cloud HyperMarket:

Cloud-Hypermarket-Infographic-2010

From Wikibond:


<Return to section navigation list> 
