Wednesday, July 06, 2011

Windows Azure and Cloud Computing Posts for 7/5/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Azure Blob, Drive, Table and Queue Services

Avkash Chauhan answered What if "Access Denied or Not sufficient permission" error occurred while accessing Windows Azure Storage (Blob, Table or Queue)? on 7/4/2011

Whenever you get an "Access Denied" or "Not sufficient permission" error with Windows Azure Storage, the cause is usually the same, whether you are:

  • Creating, accessing or modifying tables or table entries in Windows Azure Table Storage
  • Creating containers or modifying files in Windows Azure Blob Storage
  • Creating queues or queue items in Windows Azure Queue Storage

The problem will show different behavior with different kinds of applications.

For example, creating containers in Azure Blob Storage returned the following:

The account being accessed does not have sufficient permissions to execute this operation.
View diagnostic information

For example, you may get the following error when programmatically creating a table in Windows Azure Table Storage:


Such a problem usually means there is an issue with your Windows Azure subscription. Go to the Windows Azure Management Portal, where you may be able to verify that your account is disabled.

You can contact the Windows Azure Billing Team using the link below to get your subscription issue sorted out; after that, this problem will be resolved.

<Return to section navigation list>

SQL Azure Database, BI and Reporting

Nicholas Mukhar posted Panorama Prepares BI for Windows Azure Cloud to the TalkinCloud blog on 7/6/2011:

A few weeks ago I introduced MSPmentor readers to Panorama Software’s Necto — which connects the dots between business intelligence (BI) and social media. More recently, I spoke with Panorama Software CEO Eynav Azarya to find out how Necto fits in with the company’s future plans — which include support for the Windows Azure cloud.

First, some background on Panorama and Necto. “Contextual relevance is the name of the game,” said Azarya. “The Necto is unique because of two things: social intelligence and automated insight.” Panorama Necto uses algorithms to help Panorama’s mid-market and enterprise customers, in Azarya’s words, “find data you don’t know that you don’t know… Companies are looking for speed of thought, decision and action.” So Necto’s algorithms take a user’s search history and then suggest relevant content to read and people with whom to connect.

The Microsoft Angle

The Panorama Software–Microsoft Corp. partnership runs deep. Panorama developed OLAP (online analytical processing) software when the company was founded in 1993 and then continued to develop it through 1996 before selling the solution to Microsoft.

“Our CTO Roni Ross flew to Boston to meet with Howard Dresner of Gartner, Inc. But Dresner didn’t show up. So Roni decided to meet with Microsoft instead. She ended up selling OLAP to Microsoft over lunch.”

OLAP is now the platform for most of Microsoft’s BI solutions today. And the Panorama–Microsoft relationship has taken off since then. Panorama will soon reveal its first BI solution to run on Microsoft Windows Azure. Panorama is also set to launch a BI solution for SMBs at the Microsoft Worldwide Partner Conference (Los Angeles, July 10-14). “There is a need for this type of BI in the SMB space,” said Azarya. “It’s much harder to be an SMB than an enterprise when it comes to competing. So there are many new announcements and developments coming up. All of this is based on Necto.”

We’ll be watching for updates from Microsoft Worldwide Partner Conference.

Raja Bavani wrote SQL: More than three decades old and still thriving! for SD Times on the Web on 6/2/2011, but it appeared in their RSS feed on 7/6/2011:

In June 1970, Dr. E.F. Codd published the seminal paper, "A Relational Model of Data for Large Shared Data Banks," for the ACM. This paper laid the foundation of relational databases, and Codd's model was accepted as the definitive model for relational database management systems across research institutes around the world.

In 1974, at IBM’s San Jose Research Center, Donald Chamberlin and Raymond Boyce invented Structured English Query Language (SEQUEL) to implement Codd's model in the System R project, which was aimed at developing the first SQL implementation. It also seeded IBM's relational database technology.

Over the next three years, SEQUEL became SQL (still pronounced "sequel," but some people spell it out as S-Q-L). IBM conducted the beta testing of System/R at customer test sites and demonstrated the usefulness and practicality of the system. As a result, IBM developed and sold commercial products that implemented SQL based on their System R prototype, including SQL/DS, introduced in 1981, and DB2 in 1983.

Meanwhile, in 1979, Relational Software introduced the first commercially available implementation of SQL, called Oracle Version 2. The company later changed its name to match its flagship product. Several other vendors, such as Ingres (based on the Ingres project led by Michael Stonebraker and Eugene Wong at the University of California, Berkeley during the 1970s) and Sybase, also introduced database products using SQL.

For the past three decades, SQL has been accepted as the standard RDBMS language. With Moore’s Law holding true, and with continuous research and upgrades to SQL implementations by some of the top vendors in the RDBMS arena, SQL has come a long way.

There are three key reasons for the longevity of SQL:

  1. Simplicity. SQL is simple. It is simple to learn and implement. It can be used for data definition, data manipulation and data control. It is declarative: you tell it what you want using simple language constructs. While the lack of procedural constructs was a handicap, programming extensions such as PL/SQL and Pro*C bridged the gap and provided the ability to build programming logic too.
  2. Mathematical foundation. SQL is built on concepts such as relational algebra and tuple calculus. This enabled its success in all databases that supported relational models to store, retrieve and manipulate data. Even today, a vast majority of data of businesses across the world resides on relational databases that support SQL.
  3. Multi-vendor support. SQL has been implemented by many vendors, and even though there were issues related to the standardization of SQL, the fundamental principles and constructs remained universal. This encouraged database users at all levels to leverage the strengths of SQL.
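The declarative simplicity described in point 1 is easy to demonstrate. The sketch below uses Python's built-in sqlite3 module purely as a convenient stand-in for any SQL engine; the table and values are invented for illustration. Note that the same language covers definition, manipulation and retrieval, with no access paths spelled out:

```python
import sqlite3

# In-memory database; any SQL-style engine behaves similarly.
conn = sqlite3.connect(":memory:")

# Data definition: declare *what* the table looks like.
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)"
)

# Data manipulation: state the rows, not the storage mechanics.
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("ALFKI", 19.50), ("ANATR", 42.00), ("ALFKI", 7.25)],
)

# Retrieval: describe the result set; the engine plans the access path.
rows = conn.execute(
    "SELECT customer, SUM(total) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('ALFKI', 26.75), ('ANATR', 42.0)]
```

The query never says *how* to group or sort — that is exactly the handicap-turned-strength the article describes.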

The power of SQL increased tremendously over these three decades. While the initial versions of relational databases lacked support for basic concepts such as referential integrity, the latter versions offered support for several advanced concepts such as two-phase commits, partitioned tables and object orientation, to name a few. Because of this evolution, SQL emerged as the de facto standard for relational databases.

SQL came across several challenges during these three decades. One of them was unpredictable scalability, which challenged not only the product engineering community during the Internet era, but also end users and other stakeholders. RDBMS implementations that shined during the late 1990s started facing scalability and performance issues when applications and products migrated to the Web.

Databases had to store long text strings, images and other types of data to support Web-based products and applications. Also, the number of simultaneous users became unpredictable and seasonal. Clustering and load-balancing solutions provided little relief. Gradually, database administrators and system administrators found ways to handle such issues.

These times raised expectations of RDBMSes (and hence SQL), which led to issues related to user satisfaction. Product teams wanted to find alternate ways to access data. Frameworks that supported object-relational mapping came into play. Meanwhile, the need for processing and mining the data available in social networking forums and other collaboration platforms on the Internet resulted in new disciplines such as sentiment analysis. Also, paradigms such as virtualization and software-as-a-service opened up a new arena called cloud computing.

What about NoSQL?
The NoSQL movement began in 1998, and it supported database implementations that are non-relational data stores of several categories, such as document stores, graph databases, key-value stores and multi-valued databases. While there are several flavors of NoSQL implementations such as Oracle’s Berkeley DB, Google’s BigTable, Apache’s Cassandra and Amazon’s SimpleDB, SQL continues to thrive today.

In his blog post titled “The NoSQL discussion has nothing to do with SQL,” Michael Stonebraker articulates the performance overheads on OLTP systems, and he provides his views on optimizing performance using concepts such as horizontal partitioning. He concludes that resolving scalability and performance issues is possible either in a SQL context or some other context.

It is evident that SQL and the NoSQL movement are complementary. RDBMS and SQL implementations will continue to thrive in ecosystems that require OLTP support on relational data stores.

However, a critical challenge for the software industry has become visible with the plethora of NoSQL database implementations and the waves of evolutionary ideas and concepts for implementing cloud databases. Years ago, our industry made considerable progress in standardizing SQL, and even then the success rate was not a perfect 10/10.

At present, are we even attempting to invent something similar to SQL for cloud databases? Or are we digressing with multiple approaches while increasing the risk of inducing issues related to data consistency, concurrency, integration and accuracy? Time alone will tell.

Raja Bavani heads delivery for MindTree’s Software Product Engineering group in Pune, India, and also plays the role of SPE evangelist.

<Return to section navigation list>

MarketPlace DataMarket, Big Data and OData

Glenn Gailey (@ggailey777) Described Uploading Data to a Service Operation in a 7/5/2011 post:

When you upload data using HTTP, you typically include the data being uploaded in the body of the POST request. However, service operations in the Open Data Protocol (OData) work a bit differently, in that input may only be passed to a service operation by using parameters. Consider a service operation that takes an entity as an input, creates a property-wise clone of this entity in the data source, and returns the cloned entity. This scenario requires us to actually upload entity data (and not just something simple, like an integer). In this post, I will work through just this sort of service operation.


Web programmers know that, in HTTP, the GET method is used to request data from the Web server and the POST method is used to submit data, so uploading data by using a POST makes sense. The OData spec also seems to come down in favor of POST for uploading data to a service operation:

“Server implementations should only allow invocation of Service Operations through GET when the operations will not cause side-effects in the system.”

It seems to me that in scenarios where you are uploading data to a service operation, there is likely going to be a chance of creating a “side effect” in the system, which I take to mean something that affects the results of queries, such as a CUD operation—the stuff for which one typically uses POST. However, as I will demonstrate, we can’t actually use POST requests as OData wants us to because of limitations in our client libraries. The primary limitation is that the WCF Data Services client (designed specifically for OData) doesn’t support sending POST requests to OData service operations, as I described in my previous post Calling Service Operations Using POST. This pretty much leaves us with using GET to upload data.

Use Parameters to Pass Data

Because OData requires that data sent to a service operation be included as parameters in the query URI, the data service ignores any data in the body of the request. This is the case with either GET or POST requests. Not only that, but WCF Data Services provides no way to even get to this message body data should you choose to send it, short of implementing the IDispatchMessageInspector interface to intercept the message before it gets handed off to the data service. That means that if we want to pass data to a service operation, we need to do so by supplying it as parameters to the service operation, as prescribed by OData.
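To see what "data as parameters" means on the wire, here is a minimal sketch in Python (not the WCF Data Services client) that percent-encodes a string value into a service operation URI. The operation name, parameter name and payload are illustrative, not taken from a real service:

```python
from urllib.parse import quote

def service_op_uri(base, operation, **params):
    """Build an OData service operation URI, percent-encoding each
    string parameter and wrapping it in single quotes per OData."""
    encoded = "&".join(
        "%s='%s'" % (name, quote(value, safe=""))
        for name, value in params.items()
    )
    return "%s/%s?%s" % (base.rstrip("/"), operation, encoded)

uri = service_op_uri(
    "http://myserver/Northwind/Northwind.svc",
    "CloneCustomer",
    serializedCustomer='<Customer id="1"/>',
)
print(uri)
# http://myserver/Northwind/Northwind.svc/CloneCustomer?serializedCustomer='%3CCustomer%20id%3D%221%22%2F%3E'
```

Everything — even an XML-serialized entity — ends up in the query string, which is why URI length becomes an issue later in this post.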

Serialize Non-Primitive Types

Passing data to a service operation works great for things like integers, strings, and Booleans, since service operation parameters must be of primitive types (as per the spec). But as we have already discussed, we might need to supply more complex data to the service operation, such as entities or graphs of entities. Consider the following service operation CloneCustomer, which clones a client-supplied Customer entity and then returns the new entity:

public Customer CloneCustomer(string serializedCustomer)
{
    NorthwindEntities context = this.CurrentDataSource;
    XmlSerializer xmlSerializer =
        new System.Xml.Serialization.XmlSerializer(typeof(Customer));

    TextReader reader = new StringReader(serializedCustomer);

    Customer clone = null;
    try
    {
        // Get a customer created with a property-wise clone
        // of the supplied entity, with a new ID.
        // Note that this bypasses the service ops restrictions.
        clone = CloneCustomer(xmlSerializer.Deserialize(reader) as Customer);
    }
    catch (Exception ex)
    {
        throw new DataServiceException(
            "The Customer could not be cloned.", ex.GetBaseException());
    }
    return clone;
}

Security note: Using a service operation to upload entity data bypasses the built-in entity set access rules, which are used to restrict the ability of clients to do things like insert data. Because an operation like this one may act on the data source directly, you must make sure to implement your own authorization checks on the operation and not rely on entity set or service operation access settings in the data service configuration.

Notice that when the client calls this service operation, the Customer entity to clone is passed to the method as a string, which is an XML-serialized representation of the Customer entity object. In this example, a client that can call CloneCustomer is able to insert new entities into the data source, which is often a restricted privilege.

Batch Requests with Large Parameters

You can see that this serialization of uploaded data can easily end up creating some very long URIs that are going to get truncated by most Web servers, which limit the length of request URIs. This is the main reason why we must use the WCF Data Services client (despite its lack of POST support) instead of something like HttpWebRequest. The batching functionality provided by OData lets us “package” multiple requests (or in this case just one request), which may have long URIs, into a single request to the $batch endpoint of the data service. This batching functionality is only available to us by using an OData-aware client.
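The shape of such a batch is simple enough to sketch by hand. This Python fragment is an illustration of the wire format only (not a working OData client): it wraps a single long GET inside the multipart/mixed body that is POSTed to $batch, which is what the WCF Data Services client does for you. The URI is a placeholder:

```python
import uuid

def build_batch_body(get_uri):
    """Wrap one GET request in an OData $batch (multipart/mixed) payload."""
    boundary = "batch_" + str(uuid.uuid4())
    body = "\r\n".join([
        "--" + boundary,
        "Content-Type: application/http",
        "Content-Transfer-Encoding: binary",
        "",
        "GET %s HTTP/1.1" % get_uri,
        "",
        "--" + boundary + "--",
        "",
    ])
    content_type = "multipart/mixed; boundary=" + boundary
    return content_type, body

ct, body = build_batch_body(
    "http://myserver/Northwind/Northwind.svc/"
    "CloneCustomer?serializedCustomer='...'"
)
# The payload is then POSTed to http://myserver/Northwind/Northwind.svc/$batch
```

Because the long URI rides inside the POST body rather than on the request line, the Web server's URI length limit no longer applies.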

The following code on the client uses an XmlSerializer to serialize a Customer object, which is supplied to the CloneCustomer service operation by using a typed DataServiceRequest sent in a batch:

// Create the DataServiceContext using the service URI. 
NorthwindEntities context = new NorthwindEntities(svcUri2);

Customer clonedCustomer = null;

// Get a Customer entity. 
var customer = (from cust in context.Customers 
                    select cust).FirstOrDefault(); 
// Serialize the Customer object. 
XmlSerializer xmlSerializer = 
    new System.Xml.Serialization.XmlSerializer(typeof(Customer)); 
TextWriter writer = new StringWriter(); 
xmlSerializer.Serialize(writer, customer); 
var serializedCustomer = writer.ToString();

// Define the URI-based data service request. 
DataServiceRequest<Customer> request = 
    new DataServiceRequest<Customer>(new Uri( 
        "CloneCustomer?serializedCustomer='" + serializedCustomer + "'", 
        UriKind.Relative));

// Batch the request so we don't have trouble with the long URI. 
DataServiceRequest[] batchRequest = new DataServiceRequest[] { request };

// Define a QueryOperationResponse. 
QueryOperationResponse response;

try 
{ 
    // Execute the batch query and get the response-- 
    // there should be only one. 
    response = context.ExecuteBatch(batchRequest) 
        .FirstOrDefault() as QueryOperationResponse;

    if (response != null) 
    { 
        // Get the returned customer from the QueryOperationResponse, 
        // which is tracked by the DataServiceContext. 
        clonedCustomer = response.OfType<Customer>().FirstOrDefault();

        // Do something with the cloned customer. 
        clonedCustomer.ContactName = "Joe Contact Name"; 
        context.SaveChanges(); 
    } 
    else 
    { 
        throw new ApplicationException("Unexpected response type."); 
    } 
} 
catch (DataServiceQueryException ex) 
{ 
    QueryOperationResponse error = ex.Response; 
} 
catch (DataServiceRequestException ex) 
{ 
    // Handle a failure of the batch request itself. 
}

And here’s what the batched HTTP request that gets sent to the CloneCustomer operation looks like. The batch request is a POST to the $batch endpoint, and the body of the request contains the GET request with a very long URI that is the serialized representation of a customer object:

POST http://myserver/Northwind/Northwind.svc/$batch HTTP/1.1 
User-Agent: Microsoft ADO.NET Data Services 
DataServiceVersion: 1.0;NetFx 
MaxDataServiceVersion: 2.0;NetFx 
Accept: application/atom+xml,application/xml 
Accept-Charset: UTF-8 
Content-Type: multipart/mixed; boundary=batch_11d9a5ac-1e92-446d-b6da-22d7e1bbba53 
Host: myserver 
Content-Length: 1067 
Expect: 100-continue

--batch_11d9a5ac-1e92-446d-b6da-22d7e1bbba53 
Content-Type: application/http 
Content-Transfer-Encoding: binary

GET http://myserver/Northwind/Northwind.svc/CloneCustomer?serializedCustomer='%3C?xml%20version=%221.0%22%20encoding=%22utf-16%22?%3E%0D%0A%3CCustomer%20xmlns:xsd=%22' HTTP/1.1


Note that the client correctly encodes the serialized entity XML in the URI. The service operation returns the cloned entity in a response body, which is also batched (because the request was batched). We get the first QueryOperationResponse in the batch response, which contains the cloned customer returned by the service operation. This object is already attached to the DataServiceContext (which happened on materialization), so we can make immediate updates and send them back to the data service.

Marcelo Lopez Ruiz (@mlrdev) described datajs samples in a 7/5/2011 post:

One of the things that we care about a lot in datajs is being practical and enabling better productivity. As such, we don't think you should spend a lot of time poring over the library documentation to figure out how to put it to use.

Instead, we build some high-level pages with important information, and put together a set of samples that you can use to get up and running quickly. The samples cover using OData, the storage API, and the cache API. They try to show some good best practices, like how to organize UI, in-memory objects and server-data representations so you have cleaner, more maintainable code.

But of course you don't have to rebuild things from the ground up to use datajs to make your site better. You can add a cache on a page to make some lookups faster. You can add preferences locally so the user doesn't have to round-trip to the server to keep them around. You can have input controls on a page send structured data to your server, or query from it, without having to start from scratch - there is no "page model" associated with datajs, so you can simply pick and choose what you want to get started with and then take it from there.

The Silverlight Show blog reported on 7/5/2011 that Michael Crump will be speaking on OData at devLINK 2011 in Chattanooga, TN on 8/17-19/2011:

devLINK Technical ConferenceSilverlightShow author and webinar presenter Michael Crump will have the honour to speak at the devLink Technical Conference in Chattanooga, TN this August.

devLINK is one of the most cost-effective technical conferences available and is aimed not only at Developers, but also at Project Managers, IT Pros and beyond.

imageMichael will be presenting a session on OData - a topic he already covered through his SilverlightShow article series, and eBook "Producing and Consuming OData in a Silverlight and WP7 App".

NOTE: Both the article series and the eBook will be soon updated to reflect the recent Mango updates. Everyone who purchased the eBook will be emailed the updated copy too.

For more information on this and other upcoming events, please visit our Events page.

Alex Popescu (@al3xandru) posted Big Data: Volume vs Variety According to McKinsey and Gartner to his MyNoSQL blog on 7/5/2011:

[From TechTarget’s blog:] Big Data: Volume vs Variety According to McKinsey and Gartner:

The value of “big data” lies less in its volume than in its variety. This is the gist of a recent report from the McKinsey Global Institute, Big data: The next frontier for innovation, competition, and productivity.


On the vendors’ side of the house, Stephen Brobst, Teradata’s chief technology officer, has told us: “It’s not actually the size of the data that matters as much as the diversity. One important factor is the transition from transactions to interactions. This creates big data.”

I think the value of Big Data resides in the insights it is hiding—volume is essential to get these—and in the multitude of possibilities to enhance it with either metadata and/or other data sets. Emphasizing only some of these aspects will just diminish Big Data's value.

Full disclosure: I’m a paid contributor to, which is a sister site to TechTarget’s

Brian Harry described OData Access to TFS in a 7/4/2011 post:

I’m criminally late in blogging about this, and for that I apologize. Several months ago, the Microsoft platform evangelism team came to me and said they wanted to create a really good OData sample that would show people all the stuff you can do with OData and how easy it is. They told me that they thought creating an OData service running on Windows Azure and providing a public data feed for TFS on CodePlex would make for a great sample. They asked if I had any objections. Of course, I said no; that sounds like a great idea. A few months ago they published the service, and now I’d like to tell you a little about it.

OData is a convenient protocol for managing structured data. It’s a relatively simple XML format that’s easy to parse, and there are a lot of tools that already understand it. It’s something we’ve been looking at for a while to add to TFS, and I was glad to see this experiment. It will give us valuable learning for when we bite off adding it as an official part of the product.

Having OData feeds is particularly useful in constrained environments (like phones) where you don’t have as complete a software stack as you might on a server or desktop machine. The simplicity of the protocol/data format is a real advantage. In creating the OData sample, the evangelism team created a Windows Phone 7 app to demonstrate how easy it is to use OData in that context.

If you just want to kick the tires, then you can get access to or create a CodePlex project and use the Azure hosted TFS OData service against it. It’s really simple. You can just type urls in the browser and see the feed results. Here’s the url to the Codeplex OData service. The initial page tells you everything you need to know about getting started:

Here’s an example from a CodePlex project I have access to. Here’s the url I typed in the browser:'tfsadmin')/Changesets

And here’s the results that come back:


‎Sunday, ‎January ‎09, ‎2011, ‏‎6:15:32 AM

Fixed Team Foundation Server object model error message during setup


‎Sunday, ‎December ‎19, ‎2010, ‏‎8:51:18 AM

TFS Administration Tool 2.1 release branch


‎Saturday, ‎December ‎04, ‎2010, ‏‎10:59:55 AM

Updated version number and fixed a minor bug. Getting ready for release.


‎Saturday, ‎August ‎28, ‎2010, ‏‎10:46:46 AM

Added tracing information to SharePoint and Reporting Services detection

truncated for brevity…

And here’s what the XML looks like:

<?xml version="1.0" encoding="utf-8" standalone="yes"?> 
<feed xml:base="" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns="http://www.w3.org/2005/Atom"> 
  <title type="text">Changesets</title> 
  <link rel="self" title="Changesets" href="Changesets" /> 
  <entry m:etag="W/&quot;datetime'2011-01-09T11%3A15%3A32.7%2B00%3A00'&quot;"> 
    <title type="text">vstfs:///VersionControl/Changeset/83047</title> 
    <summary type="text">Fixed Team Foundation Server object model error message during setup</summary> 
    <author> 
      <name /> 
    </author> 
    <link rel="edit" title="Changeset" href="Changesets(83047)" /> 
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Changes" type="application/atom+xml;type=feed" title="Changes" href="Changesets(83047)/Changes" /> 
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/WorkItems" type="application/atom+xml;type=feed" title="WorkItems" href="Changesets(83047)/WorkItems" /> 
    <category term="Microsoft.Samples.DPE.ODataTFS.Model.Entities.Changeset" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> 
    <content type="application/xml"> 
      <m:properties> 
        <d:Id m:type="Edm.Int32">83047</d:Id> 
        <d:Comment>Fixed Team Foundation Server object model error message during setup</d:Comment> 
        <d:CreationDate m:type="Edm.DateTime">2011-01-09T11:15:32.7+00:00</d:CreationDate> 
        <d:Branch m:null="true" /> 
      </m:properties> 
    </content> 
  </entry> 
  <entry m:etag="W/&quot;datetime'2010-12-19T13%3A51%3A18.317%2B00%3A00'&quot;"> 
    <title type="text">vstfs:///VersionControl/Changeset/82090</title> 
    <summary type="text">TFS Administration Tool 2.1 release branch</summary> 
    <author> 
      <name /> 
    </author> 
    <link rel="edit" title="Changeset" href="Changesets(82090)" /> 
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Changes" type="application/atom+xml;type=feed" title="Changes" href="Changesets(82090)/Changes" /> 
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/WorkItems" type="application/atom+xml;type=feed" title="WorkItems" href="Changesets(82090)/WorkItems" /> 
    <category term="Microsoft.Samples.DPE.ODataTFS.Model.Entities.Changeset" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> 
    <content type="application/xml"> 
      <m:properties> 
        <d:Id m:type="Edm.Int32">82090</d:Id> 
        <d:Comment>TFS Administration Tool 2.1 release branch</d:Comment> 
        <d:CreationDate m:type="Edm.DateTime">2010-12-19T13:51:18.317+00:00</d:CreationDate> 
        <d:Branch m:null="true" /> 
      </m:properties> 
    </content> 
  </entry> 
  <entry m:etag="W/&quot;datetime'2010-12-04T15%3A59%3A55.183%2B00%3A00'&quot;"> 
    <title type="text">vstfs:///VersionControl/Changeset/81133</title> 
    <summary type="text">Updated version number and fixed a minor bug. Getting ready for release.</summary> 
    <author> 
      <name /> 
    </author> 
    <link rel="edit" title="Changeset" href="Changesets(81133)" /> 
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/Changes" type="application/atom+xml;type=feed" title="Changes" href="Changesets(81133)/Changes" /> 
    <link rel="http://schemas.microsoft.com/ado/2007/08/dataservices/related/WorkItems" type="application/atom+xml;type=feed" title="WorkItems" href="Changesets(81133)/WorkItems" /> 
    <category term="Microsoft.Samples.DPE.ODataTFS.Model.Entities.Changeset" scheme="http://schemas.microsoft.com/ado/2007/08/dataservices/scheme" /> 
    <content type="application/xml"> 
      <m:properties> 
        <d:Id m:type="Edm.Int32">81133</d:Id> 
        <d:Comment>Updated version number and fixed a minor bug. Getting ready for release.</d:Comment> 
        <d:CreationDate m:type="Edm.DateTime">2010-12-04T15:59:55.183+00:00</d:CreationDate> 
        <d:Branch m:null="true" /> 
      </m:properties> 
    </content> 
  </entry> 
Again, snipped for brevity…
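Because the feed is plain Atom with OData's well-known namespaces, it is easy to consume without any TFS- or OData-specific library. A small Python sketch (the sample XML is abbreviated from the feed above; a real client would fetch the full feed over HTTP):

```python
import xml.etree.ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
D = "http://schemas.microsoft.com/ado/2007/08/dataservices"
M = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"

feed_xml = """<feed xmlns="%s" xmlns:d="%s" xmlns:m="%s">
  <entry>
    <content type="application/xml">
      <m:properties>
        <d:Id m:type="Edm.Int32">83047</d:Id>
        <d:Comment>Fixed Team Foundation Server object model error message during setup</d:Comment>
      </m:properties>
    </content>
  </entry>
</feed>""" % (ATOM, D, M)

root = ET.fromstring(feed_xml)
# Pull (changeset id, comment) pairs out of each entry's m:properties.
changesets = [
    (int(props.find("{%s}Id" % D).text), props.find("{%s}Comment" % D).text)
    for props in root.iter("{%s}properties" % M)
]
print(changesets)
# [(83047, 'Fixed Team Foundation Server object model error message during setup')]
```

This is the same kind of lightweight parsing a phone client can afford, which is Brian's point about OData in constrained environments.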

If you want to see a sample client or if you want to set up the service to point at your own TFS server (rather than CodePlex), you can use this download:

Brian Keller put together a great video showing how to use this stuff:

I need to adjust/clarify a few of the things in his intro. OData is a very cool way to access TFS – because it is lightweight and there are a lot of tools that already support it. This particular implementation is terrific for simple TFS browsing applications – show me a list of my work items, show me recent build status, etc. It is not the same thing as the TFS object model – which not only provides access to the TFS server but also provides quite a lot of TFS client logic for managing workspaces, etc. Further, we now provide both .NET and Java versions of the TFS object model, making it available from virtually any platform you choose.

Lastly, I want to comment on “support” of the TFS web services. They are supported. We’ve had third parties use them and we’ve supported them. We don’t change them willy-nilly because we provide backwards compatibility between new TFS servers and old object model implementations. It is true that we discourage people from using the web services directly. They are much harder to use than the client object model (and much harder to use than OData). There is very little documentation on them – not because we don’t believe in it but rather because we just haven’t gotten to it. At some point in the future, I expect we’ll move our web services from SOAP to REST, at which point we’ll probably put in the effort to document them.

I’m aware of one project where someone has picked up the OData support here and built a more serious phone app. It’s called TFS On The Road and is available for free. Go to the WP7 Marketplace and check it out. Here are a few screenshots of it:



And you can read about Pedro’s experience building it here:

Anyway enjoy playing with the OData service for TFS and let me know what you think of it.

<Return to section navigation list>

Windows Azure AppFabric: Access Control, WIF and Service Bus

Itai Raz (pictured below) reported a Nice post covering Windows Azure AppFabric Applications in a 7/5/2011 post to the AppFabric Team blog:

Neil Mackenzie, a Windows Azure MVP, has written a great blog post regarding Windows Azure AppFabric Applications, which were introduced as part of the Windows Azure AppFabric June CTP.

Neil summed up the concept very nicely in his post:

I am a great advocate of the PaaS model of cloud computing. I believe that AppFabric Applications represents the future of PaaS for use cases involving the composition of services that can benefit from the cost savings of running in a multi-tenanted environment. AppFabric Applications reduces the need to understand the service hosting environment, allowing the focus to remain on the services. That is what PaaS is all about.

If you would also like to start using the June CTP here is what you need to do:

1. To request access to the Application Manager follow these steps:

  • Sign in to the AppFabric Management Portal at
  • Choose the entry titled “Applications” under the “AppFabric” node on the left side of the screen.
  • Click on the “Request Namespace” button on the toolbar on the top of the screen.
  • You will be asked to answer a few questions before you can request the namespace.
  • Your request will be in a “pending” state until it gets approved and you can start using the Application Manager capabilities.

2. In order to build applications you will need to install the Windows Azure AppFabric CTP SDK and the Windows Azure AppFabric Tools for Visual Studio. Even if you don’t have access to the Application Manager you can still install the tools and SDK to build and run applications locally in your development environment.

Please don’t forget to visit the Windows Azure AppFabric CTP Forum to ask questions and provide us with your feedback.

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

The Windows Azure Connect Team reported HPC Pack 2008 R2 SP2 uses Windows Azure Connect for hybrid Cloud on 7/6/2011:

The recently released HPC Pack 2008 R2 SP2 is focused on providing customers with a great experience when expanding their on-premises clusters to Windows Azure. New features include a tuned MPI stack for the Windows Azure network, support for the Windows Azure VM role (currently in beta), and automatic configuration of the Windows Azure Connect preview to allow Windows Azure-based applications to reach back to enterprise file servers and license servers via virtual private networks.

Andy Cross (@andybareweb) posted Azure VM Role Tips from the CoalFace on 7/6/2011:

This post shows a series of tips from my experience of deploying Windows Azure VM Roles. Some of these may be obvious to you, but it’s sometimes the obvious ones that catch you out! So here we go … in no particular order …

1. Specialize with differencing disks

Create a base image that is as close to vanilla Windows 2008 as possible and then use differencing disks to modify the base image. This allows you to spin up a VM role of your vanilla Windows 2008 Server and then test things such as connectivity to IIS or using Windows Azure Connect.


Technet about differencing disks:

2. Use the right edition!

This is a fundamental requirement, but one that has caught me out. The only editions of Windows Server 2008 that are supported are Standard and Enterprise. You cannot use Windows Web Server 2008 – if you try the VM Role will never start in Azure – it will freeze at “Preparing Windows for first use”.

3. Use InputEndpoint LocalPort with care

When you specify an InputEndpoint with localPort=”*”, you are instructing the load balancer to expose an external endpoint on one port, but translate it to a different port when delivering messages to your VM Role. This can cause your applications to not receive messages, even though it seems that the endpoint exists.

Instead, set the localPort to the port you intend explicitly, or omit this optional parameter.

<InputEndpoint name="Endpoint1" protocol="http" port="80" localPort="80" />

or:

<InputEndpoint name="Endpoint1" protocol="http" port="80" />

4. Enable RDP (Remote Access and Remote Forwarder)

It is the most flexible way of debugging things.

5. Consider using fixed disk sizes

When creating a VHD for use in Azure, you are limited to what VM Role instance size you can use based on the size of your VHD. If you exceed the number shown below, you cannot use that VM Size. If you exceed 65GB, you cannot use Azure AT ALL. For this reason, I prefer using fixed disk sizes when creating the VHD, rather than dynamic sizes – which could “creep up” beyond the 65GB mark.
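The 65GB ceiling is easy to encode as a pre-upload sanity check. A minimal Python sketch using only the overall limit cited above (the per-instance-size limits from the missing table are not reproduced here, so this only tells you whether Azure is usable at all):

```python
AZURE_VHD_MAX_GB = 65  # overall ceiling cited above; beyond this the VHD cannot be used in Azure at all

def vhd_fits_azure(vhd_size_gb):
    """Return True if a VHD of this (fixed) size is usable as a VM Role image at all."""
    return vhd_size_gb <= AZURE_VHD_MAX_GB

print(vhd_fits_azure(30))  # True  -> safe to upload
print(vhd_fits_azure(70))  # False -> too large for any VM size
```

With a fixed-size VHD this check only needs to run once; the point of avoiding dynamic disks is that the answer cannot silently change later.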


6. Enable Diagnostics!

Rather than having to RDP to Azure, it is useful to be able to check things such as event logs remotely. To do this, create a diagnostics.wadcfg and place it in the %ProgramFiles%\Windows Azure Integration Components\v1.0\Diagnostics folder.

My post shows how to do this, and use intellisense to make it a little easier on you!
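For reference, a minimal diagnostics.wadcfg that collects the Windows event logs might look roughly like the following (the attribute values here are illustrative; verify element names and quotas against the diagnostics configuration schema for your SDK version):

```xml
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <!-- Transfer the Application and System event logs to storage every 5 minutes -->
  <WindowsEventLog bufferQuotaInMB="256" scheduledTransferPeriod="PT5M">
    <DataSource name="Application!*" />
    <DataSource name="System!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>
```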

7. Remember to sysprep

Before uploading your differencing disks or base image, always remember the last action you should have done is a sysprep. Make sure you specify “shutdown” as the reboot action, otherwise you’ll undo your hard work automatically.


I’ve pulled these tips together from the pile of notes I made when I developed a solution using VM Roles. I will post any more I find over the coming weeks.

The Windows Azure Connect group updated the MSDN Library Appendix 1: Create a Local Endpoint Group for Windows Azure Connect on 6/29/2011 (missed when updated):

This appendix describes the general steps to create a local endpoint group to use Windows Azure Connect features in your Windows HPC Server 2008 R2 Service Pack 2 (SP2) or later cluster. Windows Azure Connect allows IPsec-protected connections between the on-premises computers in your local network and the Windows Azure nodes that you deploy in a Windows Azure subscription. To make these connections, the on-premises computers must be individually configured with local endpoint software and then configured as a group of endpoints (a local endpoint group). You perform these configuration steps by accessing the Windows Azure Management Portal from the computers in your on-premises network that you want to be members of a local endpoint group.

Note: To use Windows Azure Connect and create a local endpoint group, your Windows Azure subscription must be enabled for the Windows Azure Connect features. For more information, see Requirements for Windows Azure Nodes in Windows HPC Server 2008 R2 in this guide.

In this appendix:

  1. Install the local endpoint software on the on-premises computers
  2. Create and configure an endpoint group
  3. Enable Windows Azure Connect in a Windows Azure node template
  4. Validate the Windows Azure Connect configuration
  5. Additional references

1. Install the local endpoint software on one or more computers

The first step is to install the local endpoint software on each computer in the on-premises network that you want to be a member of a Windows Azure Connect local endpoint group. For example, you might want to create a local endpoint group that consists of the head node of the cluster and a file server or a licensing server in the Active Directory domain. You must install the local endpoint software on each computer separately.

To install the local endpoint software for Windows Azure Connect

  1. On the computer on which you want to install the local endpoint software, in your browser, open the Windows Azure Management Portal.

  2. On the lower left, click Virtual Network.

    The Windows Azure Connect interface appears.

  3. In the console tree, click the subscription in which you want to configure this local endpoint. If you are prompted to enable the subscription for Windows Azure Connect, click OK.

  4. With the subscription still selected, at the top, click Install Local Endpoint.

  5. Click Copy Link to Clipboard and paste the link into a new browser window.

  6. In the resulting dialog box, click Run. (You cannot save the installation software and run it later or run it on another computer, because it includes an activation token.)

  7. Follow the instructions in the wizard, which allows you to review and accept the terms of use and privacy statement before you install the software.

  8. Repeat the above steps for each computer that you want to configure as a local endpoint.

Important: You may have to adjust the security settings of your browser to access the Windows Azure Management Portal and to download and run the local endpoint software. For example, if you are running Internet Explorer in Windows Server® 2008 R2, the default Enhanced Security Configuration settings can prevent you from downloading and running the local endpoint software. If you need to change the Enhanced Security Configuration settings to perform tasks with the Windows Azure Management Portal, you can use Server Manager.

Additional considerations

  • If you need to join your Windows Azure nodes to your Active Directory domain, you must install the local endpoint software on a domain controller that is also a DNS server.
  • Windows Azure Connect uses HTTPS, which uses port 443. Therefore, ensure that the TCP 443 outbound port is open on all local endpoints. This port should already be open on the head node of your cluster to allow communication with the Windows Azure subscription, but you may need to configure it on other local endpoints. In addition, configure program or port exceptions needed by your applications or tools. For more information about firewall settings for Windows Azure Connect, see Overview of Firewall Settings Related to Windows Azure Connect.

2. Create and configure a local endpoint group

After you have installed the local endpoint software on one or more computers, you must add the local endpoints to a local endpoint group.


  • You can perform this procedure by accessing your subscription in the Management Portal from any compatible computer.
  • The following procedure describes how to create a new local endpoint group. However, you can also add a local endpoint to an existing endpoint group.

To create and configure a group of endpoints

  1. Confirm that you have installed the Windows Azure Connect endpoint software on the computers in your local network that you want to include in the endpoint group.

  2. Open the Windows Azure Management Portal.

  3. On the lower left, click Virtual Network.

    The Windows Azure Connect interface appears.

  4. In the console tree, click the subscription in which you want to create the local endpoint group. If you are prompted to enable the subscription for Windows Azure Connect, click OK.

  5. With the subscription still selected, at the top, click Create Group.

    The Create a New Endpoint Group dialog box appears.

  6. In Group Name, specify a name that will help you recognize the group when you view it in the Windows Azure interface. This name will also appear in HPC Cluster Manager in the node template interface used to enable Windows Azure Connect, after you provide the credentials for your subscription.

  7. Optionally, in Description, specify a description.

  8. Under Connect from, use the Add button to browse for and add local endpoints. Click OK.

    If you click the Add button under Connect from, but do not see a local endpoint that you expect to see, it might already be assigned to a group.

  9. Select or clear the check box labeled Allow connections between endpoints in group.

    When this check box is selected, local endpoints in the group can connect to each other through connections in Windows Azure Connect (not just through your local network).

  10. Optionally, under Connect to, use the Add button to browse for and add endpoint groups that you have already created.

    Important: Do not manually add the names of Windows Azure nodes to this list. These names are added automatically to the group after you deploy Windows Azure nodes using a node template in which Windows Azure Connect is enabled.

  11. When the list of endpoints is complete, click Create.

Additional considerations

  • An on-premises computer can belong to only one local endpoint group.
  • You can remove a local endpoint from a group and move it to another.
  • A local endpoint group is defined in a Windows Azure subscription. Therefore, you cannot use the same local endpoint group in more than one subscription.

3. Enable Windows Azure Connect in a Windows Azure node template

To enable connectivity between a local endpoint group and Windows Azure nodes that you will deploy, you must configure Windows Azure Connect settings in a Windows Azure node template. You can do this when you create a Windows Azure node template, or you can configure the settings by editing the template at a later time. Settings for Windows Azure Connect are on the Windows Azure Connect page of the Create Node Template wizard or the Node Template Editor. For procedures to create and edit a Windows Azure node template, see the topic earlier in this guide.


  • In the Create Node Template wizard or Node Template Editor, to retrieve a list of available local endpoint groups and to enable Windows Azure Connect, you must provide a Windows Live ID and password to access your subscription in the Management Portal.
  • To retrieve the local endpoint groups, you must use a Windows Live ID that is based on a Windows Live account. You cannot use a Windows Live ID that is a federated ID.
  • You can configure Windows Azure Connect to use the same local endpoint group in more than one Windows Azure node template.

4. Validate the Windows Azure Connect configuration

After you have deployed Windows Azure nodes using a node template in which Windows Azure Connect is enabled, you can validate the configuration of Windows Azure Connect. You can do this in one of several ways, including the following:

  • Use the Windows Azure Connect diagnostic tool, which is installed on each local endpoint. To open the Windows Azure Connect diagnostic tool, in the system tray, click the Windows Azure Connect tray icon, and on the menu click Diagnostics. The tool runs a set of tests for Windows Azure Connect that can confirm connectivity as well as detect common configuration and network problems.
  • Confirm that a local endpoint has an IPv6 address that was assigned by Windows Azure Connect. This is an address that begins with 2a01:111. You can view the IPv6 address of each local endpoint in the Management Portal, in Virtual Network. In the console tree, expand the subscription, click Activated Endpoints, and then select an endpoint. The IPv6 address appears in Properties, under Addresses.
  • Use standard network diagnostic tools such as ping to confirm that you can connect from a Windows Azure node to a local endpoint, or from a local endpoint to a Windows Azure node, using an IPv6 address or host name.

    Important: To use ping, you must ensure that the firewall on both computers allows inbound Internet Control Message Protocol version 6 (ICMPv6) traffic.

    Example: To ping a deployed Windows Azure role instance from an on-premises node

    1. Open the Windows Azure Management Portal.

    2. On the lower left, click Hosted Services, Storage Accounts & CDN.

    3. Click Hosted Services, and then click Production Deployments.

    4. Under the name of the hosted service that you used for deployment, click a role instance. Then, at the top, click Connect.

    5. Type the Remote Desktop credentials that you configured in the node template for the Windows Azure nodes.

      This establishes a remote connection to the Windows Azure role instance.

      Note: You can only establish a remote connection if you configured the Remote Desktop credentials in the Windows Azure node template.

    6. In the remote connection to the role instance, open an elevated Command Prompt window. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

    7. To ensure that Windows Firewall is configured to allow inbound ICMPv6 traffic, type the following command:

      netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6
    8. In Server Manager in the role instance, find the Full Computer Name. Make a note of it.

    9. Log on to an on-premises computer that is configured as a local endpoint in the endpoint group.

    10. Open a Command Prompt window. Click Start, point to All Programs, click Accessories, and then click Command Prompt.

    11. Type the following command:

      ping <ComputerName>


      where <ComputerName> is the full computer name of the Windows Azure role instance.

      Review the ping statistics to confirm the connection from the local endpoint to the Windows Azure role instance.

Additional references

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Rinat Abdullin (@abdullin) asserted Lokad.CQRS Can Make Windows Azure Cheaper for You in a 7/6/2011 post:

Development of cloud computing systems can be a bit expensive these days. Expenses come from the development costs (and difficulties of finding good "cloud" developers) along with the actual pricing of the computing, storage and network capacities.

For example, traditionally the smallest distributed web application in the cloud requires a separate worker and web role. This can cause even small projects to cost quite a bit when deployed to Windows Azure (to be precise, Microsoft recommends having at least two web role instances in order to achieve the SLA).

If we are using Extra Small VMs at a price of $0.05 per hour, this sums to 72 USD per month just in compute expenses. That's a bit too much for small projects. Yet every large project (and that's where Azure shines with its elastic scalability) starts small.
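The arithmetic is easy to check. A small Python sketch, using the $0.05/hour Extra Small rate from the post and the common assumption of a 720-hour month:

```python
HOURS_PER_MONTH = 720    # 30-day month, as commonly used in hosting estimates
EXTRA_SMALL_RATE = 0.05  # USD per instance-hour, per the post

def monthly_compute_cost(instances, rate=EXTRA_SMALL_RATE, hours=HOURS_PER_MONTH):
    """Monthly compute cost in USD for a given number of role instances."""
    return instances * rate * hours

# Separate web + worker roles (or the two-instance SLA minimum):
print(monthly_compute_cost(2))  # 72.0 USD/month
# A single role hosting both halves the bill:
print(monthly_compute_cost(1))  # 36.0 USD/month
```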

Lokad.CQRS framework offers three primary options to save the money while targeting Windows Azure Cloud.

The first option is to run an instance of the Lokad.CQRS App Engine within the Web Role. This way we halve our expenses from the start, paying 36 USD per month instead of 72 USD for compute resources.

Sample code for that is available in the gist.

The approach of reducing costs by 50% was tried out by Vsevolod Parfenov of Lokad (also a new contributor to Lokad.CQRS, who currently pushes forward the Tape Storage).

We have a system already in production at Windows Azure. That's CallCalc, which is also explained in greater detail in Lokad.CQRS PDF (see case studies).

The second option comes from the portability side and involves deploying a Lokad.CQRS solution outside of Windows Azure until it grows large enough to justify the cloud. You can run systems locally or on hosted servers, using the cheapest Windows Server hosting to run the web UI in IIS and the App Engine as a Windows service or a console application.

Note that, theoretically, we could go even cheaper and use Linux with Mono. But research and development in this direction has not been a priority for Lokad for the time being.

As the system grows, it should be possible to switch it to Windows Azure by reconfiguring it to use cloud-scalable storage and transport. This is the scenario in which we have started using Lokad.CQRS for development purposes, since the Azure Development Fabric (Storage and Compute emulators) adds too much friction (you need to run VS as Administrator) and performance overhead.

For example, I get 15000 messages per second processed sequentially by Lokad.CQRS using in-memory queues, while Azure Queues give me just 7.4 messages per second. This test scenario involves sequential processing, where message handler keeps sending messages to itself. This tests full roundtrip overhead of a system and uses just a single thread for message processing.
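The flavor of that test is easy to reproduce off-Azure. Here is a hypothetical single-threaded roundtrip benchmark against an in-memory queue, where each handled message re-enqueues the next one (absolute numbers will vary wildly by machine, so no figure is promised here):

```python
import queue
import time

def roundtrip_throughput(message_count=10000):
    """Sequentially process messages where each handler 'sends a message to
    itself', mimicking a single-threaded full-roundtrip throughput test."""
    q = queue.Queue()
    q.put(0)
    start = time.perf_counter()
    while True:
        n = q.get()
        if n + 1 < message_count:
            q.put(n + 1)  # handler re-enqueues the next message
        else:
            break
    elapsed = time.perf_counter() - start
    return message_count / elapsed

print(f"{roundtrip_throughput():.0f} messages/second")
```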

Obviously, I could boost the out-of-Azure numbers by using multiple threads and also optimizing the reflection code. Yet for the time being there has been no real need, since Lokad.CQRS forces systems to be designed in a partitionable way. With it, you improve performance not by over-optimizing code (which makes it more complex and involves expensive development effort) but by just throwing in more processing power (which is generally cheaper).

If we do the math: an additional "Extra Small" instance on Windows Azure costs 36 USD per month. I would imagine that any performance optimization, deployed to production, would generally be more expensive than that:

  • Costs would involve a few hours of development and testing work, followed by QA and deployment. Besides, good luck finding an experienced Azure developer for less than 36 USD per hour.
  • Added code complexity is likely to add future penalty on new development along with the potential risks of breaking something (system that needs to scale must be already quite stressed and important for the company).
  • You need to have enough qualified developers and time to do that (opportunity costs are involved).
  • There is always a limit to how deeply you can optimize the code (the deeper you go, the more expensive it becomes).

I'd say it's much cheaper and easier to just provision a few more processing units, as needed.

This is the third way to save money on Azure - using the flexibility of elastic cloud to avoid doing risky and expensive development to handle scalability issues. Instead of going deep into the guts of Windows Azure, we just stay simple, wide and almost-infinitely-scalable.

BTW, if you are interested in design and performance of Lokad.CQRS and Windows Azure - please, check out Lokad community. It might already hold answers to some of your questions. Besides, there always is CQRS starting point.

Joe Panettieri (@joepanettieri) asked Office 365 Marketplace and Windows Azure: Killer Combo? in a 7/5/2011 post to the TalkinCloud blog:

When Microsoft Office 365 officially debuted June 28, Talkin’ Cloud couldn’t help but notice that Microsoft didn’t say much about the Office 365 Marketplace and the Windows Azure cloud platform. Short term all three cloud solutions remain in their infancy. But long term we suspect Microsoft is going to help partners connect the dots between Office 365, the Office 365 Marketplace and Windows Azure.

No doubt, Microsoft has its hands full right now with Office 365 — which includes Exchange Online, SharePoint Online, Lync Online and Microsoft Office. Monthly per-user prices start at about $2, though Microsoft’s SMB push starts at about $6 per user per month.

Even as Microsoft markets Office 365 in airports and on billboards, the company’s technology team is taking a conservative approach to Office 365 — even blocking established BPOS (Business Productivity Online Suite) customers from migrating to Office 365. BPOS is Office 365's predecessor.

Strangely Silent

Meanwhile, Microsoft said little — if anything — about the Office 365 Marketplace during the Office 365 launch in June. That surprised Talkin’ Cloud. Within the Office 365 Marketplace, software partners and integrators can promote their solutions to end-customers. As of June 21, the Office 365 Marketplace had grown to about 174 partners. Today (July 5, 2011), the marketplace has 305 companies — including 229 listings for professional services partners and 218 listings for application partners.

Now here’s where things get extra interesting. Microsoft also is developing Windows Azure, a platform-as-a-service (PaaS) that can host third-party applications. From giants like CA Technologies to upstarts like Quosal, numerous companies are launching their SaaS applications in the Windows Azure Cloud.

Near term, I believe Windows Azure faces an uphill battle. In some cases, Microsoft is paying ISVs (independent software vendors) to port their applications into Windows Azure, Talkin’ Cloud has heard. But we do expect Azure to gain a critical mass of third-party SaaS applications.

SaaS Triple Play?

For Microsoft partners it’s getting easier to see potential synergies between Microsoft’s various cloud platforms. One simple example: Launch a SaaS application on Azure, promote it in the Office 365 Marketplace, and offer integration services to the core Office 365 suite.

We wonder if or when Microsoft will begin promoting those potential synergies to channel partners. And we’ll go searching for answers starting July 10 at the Microsoft Worldwide Partner Conference 2011 (WPC11) in Los Angeles.

The Windows Azure Team (@WindowsAzure) announced Content Update: Windows Azure Diagnostics API and Service Runtime API References Improved with New Descriptions and Code Examples in a 7/5/2011 post:

We have recently made significant improvements to the content for the Windows Azure Diagnostics API and Service Runtime API, which should make it easier for you to create your Windows Azure applications.

Included are the following types of changes:

  • Descriptions that explain how the API elements work and how they relate to each other.
  • Code examples that demonstrate how to use API elements.
  • Links that point you to related conceptual content.

Take a look, and let us know what you think by using the on-page rating/feedback or by sending email to

Andy Cross (@andybareweb) posted 3 Windows Azure Powershell Resources on 7/5/2011:

1. Use the Microsoft.WindowsAzure.ServiceRuntime commands.

1.1 Where are they?

It is possible that these are not available on a vanilla machine. You will know, because if you try to add the snap-in, it will fail:

PS C:\> Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
Add-PSSnapin : The Windows PowerShell snap-in 'Microsoft.WindowsAzure.ServiceRuntime' is not installed on this machine.
At line:1 char:13
+ Add-PSSnapin <<<<  Microsoft.WindowsAzure.ServiceRuntime
+ CategoryInfo          : InvalidArgument: (Microsoft.WindowsAzure.ServiceRuntime:String) [Add-PSSnapin], PSArgume
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

The way to get around this is to install it using InstallUtil. This is normally done at startup of an Azure role, but won’t be done locally unless you specifically enable RemoteAccess. Since this isn’t necessary locally, you may find that you can’t get this runtime command assembly to run.

To manually install this, run:

IF EXIST %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe %SystemRoot%\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe "C:\Program Files\Windows Azure SDK\v1.4\bin\plugins\RemoteAccess\Microsoft.WindowsAzure.ServiceRuntime.Commands.dll"

Once you have done this, try adding the PSSnapIn again:

PS H:\> Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
PS H:\> Add-PSSnapin Microsoft.WindowsAzure.ServiceRuntime
Add-PSSnapin : Cannot add Windows PowerShell snap-in Microsoft.WindowsAzure.ServiceRuntime because it is already added.
Verify the name of the snap-in and try again.
At line:1 char:13
+ Add-PSSnapin <<<<  Microsoft.WindowsAzure.ServiceRuntime
+ CategoryInfo          : InvalidArgument: (Microsoft.WindowsAzure.ServiceRuntime:String) [Add-PSSnapin], PSArgume
+ FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

1.2 What can they do?

The Powershell snapin provides some key functions:

  • Get-ConfigurationSetting
  • Get-LocalResource
  • Get-RoleInstance
  • Set-RoleInstanceStatus

Here’s the documentation I refer to for these functions:

2. Use Windows Azure Powershell cmdlets

This project provides a series of excellent and helpful cmdlets that fall into the following categories:

  • Windows Azure Hosted Services
  • Windows Azure Storage
  • Windows Azure Affinity Groups
  • Windows Azure Service Certificates
  • Windows Azure Diagnostics

There are a great many useful cmdlets in this set, one of my favourites is: Get-DiagnosticAwareRoleInstances – which is great for a quick sanity check that your diagnostic code is working.

3. Use Cerebrata Azure Management Cmdlets

If you need even more functionality from your PowerShell, the best offering is from Cerebrata. For the modest sum of $69.99 you get the widest set of PowerShell cmdlets available (close to 100).

My favourite: Backup-StorageAccount, a cmdlet that downloads the contents of a storage account (tables and/or blobs) and saves them as files.

<Return to section navigation list>

Visual Studio LightSwitch

Beth Massi (@bethmassi) reported Cranking out more LightSwitch “How Do I” Videos for the Big Day! on 7/6/2011:

The past couple weeks I’ve been recording more LightSwitch How Do I videos for the Developer Center in preparation for the Visual Studio LightSwitch release on July 26th! I’ve been announcing them on the LightSwitch Team blog but if you’ve missed them here’s a rollup of what I got out there so far. I’m doing more each week so check this page often:

Watch all the LightSwitch How Do I videos

Here are some of the latest ones I’ve done:

#13 - How Do I: Deploy a LightSwitch Application to Azure?
#14 - How Do I: Modify the Navigation of Screens in a LightSwitch Application?
#15 - How Do I: Open a Screen After Saving Another Screen in a LightSwitch Application?
#16 - How Do I: Connect LightSwitch to an Existing Database?
#17 - How Do I: Connect LightSwitch to SharePoint Data?
#18 - How Do I: Save Data from Multiple Data Sources on the Same Screen?

I just completed two more that should release soon. I’m aiming to have 30 done by the 26th but I really want to take a couple days of vacation while the weather is so awesome here so we’ll see how many I get out there in time. Rest assured I’ll be creating these beyond launch and rolling out content every week on the Developer Center and we have a lot of site updates planned for the 26th so stay tuned!

Stay up to date by signing up for e-mail launch updates.

The Visual Studio LightSwitch Team (@VSLightSwitch) reported a new Using Custom Controls to Enhance LightSwitch Application UI (Karol Zadora-Przylecki) story in a 7/6/2011 post:

Check it out, this month’s CoDe Magazine is featuring Karol’s article on how to enhance your LightSwitch applications with custom controls:

Using Custom Controls to Enhance LightSwitch Application UI

Creating custom controls for LightSwitch is as simple as creating Silverlight controls, and Karol shows a few examples of how to incorporate them into your LightSwitch screens. This is really part 2 of the blog post he wrote a while back on custom controls, so if you’ve been anticipating that one, here it is!

The Visual Studio LightSwitch Team (@VSLightSwitch) repeated its Visual Studio LightSwitch 2011 is Launching July 26th! post on 7/5/2011:


Microsoft Visual Studio LightSwitch gives you a simpler and faster way to create high-quality business applications for the desktop and the cloud. LightSwitch is a new addition to the Visual Studio family and is launching on July 26th!

Learn more about what LightSwitch can do for you:

Learn how to build LightSwitch business applications with these tutorials, samples & videos:

Ask questions and chat with the community in the LightSwitch forums:

And follow us on Facebook:

We’re planning a lot of updates to all these places for launch so stay up to date by signing up for e-mail notifications.

Return to section navigation list>

Windows Azure Infrastructure and DevOps

My (@rogerjenn) Video Sessions and Short/Ignite Talks Archives from DevOps Days 2011 Mountain View, June 17 - 18, 2011 post of 7/5/2011 includes links to all videos:

DevOps Days claims to be “The conference that brings development and operations together.” DevOps Days 2011 Mountain View was held June 17-18, 2011 with a sold-out crowd at LinkedIn’s Mountain View headquarters. [The post has] links to video archives of the welcome address, panel discussions, short talks, and lightning talks.

For links to related DevOps videos, see my Video Keynote and Lightning Demo Archives from O’Reilly Media’s Velocity 2011 Conference, June 22-24, 2011, Giga Om Structure Conference 2011 - Links to Archived Videos for 6/23/2011 and Giga Om Structure Conference 2011 - Links to Archived Videos for 6/22/2011 posts.

The Windows Azure Team (@WindowsAzure) posted a Windows Azure Deployments and the Virtual IP reminder on 7/6/2011:


With any deployment in Windows Azure, a single public IP address, known as a virtual IP address (VIP), is assigned to the service for the customer to use for all input endpoints associated with roles in the service. Even if the service has no input endpoints specified in the model, the VIP is still allocated and used as the source address assigned to outbound traffic coming from each role.

Throughout the lifetime of a deployment, the VIP assigned will not change, regardless of the operations on the deployment, including updates, reboots, and reimaging the OS. The VIP for a given deployment will persist until that deployment is deleted. When a customer swaps the VIP between a stage and production deployment in a single hosted service, both deployment VIPs are persisted but swapped. Consequently, even when the customer swaps the VIP, the production application VIP is still persisted as part of the hosted service, it is simply transferred to the other deployment slot.

A VIP is associated with the deployment and not the hosted service. When a deployment is deleted, the VIP associated with that deployment will return to the pool and be re-assigned accordingly, even if the hosted service is not deleted. Windows Azure currently does not support a customer reserving a VIP outside of the lifetime of a deployment.

Lori MacVittie (@lmacvittie) asserted It’s kind of like thinking globally but acting locally… and It’s a job for devops in her Forget Hyper-Scale. Think Hyper-Local Scale. post of 7/6/2011:

While I rail against the use of the too-vague and cringe-inducing descriptor “workload” with respect to scalability and cloud computing, it is perhaps at least bringing to the fore an important distinction that needs to be made: that of the impact of different compute resource utilization patterns on scalability.

hyper scalability

What categorizing workloads has done is to separate “types” of processing and resource needs: some applications require more I/O, some less. Others are CPU hogs while others chew up memory at an alarming rate. Applications have different resource utilization needs across the network, storage and compute spectrum that have a profound impact on their scalability. This leads to models in which some applications scale better horizontally and others vertically. Unfortunately, there are very few “pure” applications that can be dissected down to a simple model in which it is simply a case of providing more “X” as a means to scale. It is more often the case that some portions of the application are more network intensive while others require more compute. Functional partitioning is certainly a better option for scaling out such applications, but is an impractical design methodology during development as the overhead resulting from separation of duties at the functional level requires a more service-oriented approach, one that is not currently aligned with modern web application development practices.

Yet we see the need on a daily basis for hyper-scalability of applications. Applications are being pushed to their resource limits with more users, more devices, more environments in which they must perform without delay. The “one size fits all” scalability model offered by cloud computing providers today is inadequate as a means to scale out rapidly and nearly infinitely without overrunning budgets. This is because along with resource consumption patterns come constraints on concurrency. Cloud computing offers an easy button for this problem – auto-scalability. Concurrency demands are easily met: just spin up another instance. While certainly one answer, it can be an expensive one, and it’s absolutely an inefficient model of scalability.

The lesson we should learn from cloud computing and hyper-scalability demands is that different functional processing scales, well, differently. The resource consumption patterns of one functional process may differ dramatically from another, and both need to be addressed in order to efficiently scale the application.

If it is impractical to functionally separate “workloads” in the design and development process, then it is necessary to do so during the deployment phase leveraging those identifying contextual clues indicating the type of workload being invoked, i.e. hyper-local scale.

Hyperlocal scalability requires leveraging scalability domains, those functional workload divisions that are similar in nature and require similar resources to scale. Scalability domains allow functional partitioning as a scalability pattern to be applied to an application without requiring function level separation (and all the management, maintenance and deployment headaches that go along with it). Scalability domains are discrete pools of similar processing workloads, partitioned as part of the architecture, that allow specific configuration and architectural techniques to be applied to the underlying network and platform that specifically increase the performance and scalability of those workloads.

This is the notion of hyperlocal scalability: an architectural scaling strategy that leverages scalability domains to isolate similar functional partitions requiring hyper-scale from those partitions that do not. In doing so, highly efficient scalability domains can be used to scale up or out those functional partitions requiring it while allowing other functional partitions to scale at a more nominal rate, incurring therefore less costs. Consider the notion a form of offload, where high resource impact processing is offloaded to another instance, thereby increasing the available resources on the first instance which results in higher concurrency. The offloaded processing can hyper-scale as necessary in a purpose-configured instance at higher efficiency, resulting in better concurrency. Where a traditional scalability pattern – effectively replication – may have required ten instances to meet demand, a hyper-localized scalability pattern may require only six or seven, with the plurality of those serving the high resource consuming processing. Fewer instances results in lower costs whilst simultaneously improving performance and efficiency.

Hyper-localized scalability architectures can leverage a variety of infrastructure scalability patterns, but the fact that they are dependent upon infrastructure and its ability to perform application layer routing and switching is paramount to success.
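As a concrete illustration of the application-layer routing such architectures depend on, the sketch below classifies requests into scalability domains by URL path. The path prefixes, pool names, and instance counts are hypothetical; a real deployment would express this policy in an application delivery controller or load balancer rather than application code.

```python
# Sketch: application-layer routing of requests into scalability domains.
# Each domain is a pool of instances tuned for one workload type; the
# prefixes and pool sizes here are illustrative assumptions.

SCALABILITY_DOMAINS = {
    "/search":  {"pool": "cpu-intensive",    "instances": 4},
    "/upload":  {"pool": "io-intensive",     "instances": 3},
    "/reports": {"pool": "memory-intensive", "instances": 2},
}
DEFAULT_DOMAIN = {"pool": "general", "instances": 2}

def route(request_path):
    """Pick the scalability domain for a request by inspecting its path."""
    for prefix, domain in SCALABILITY_DOMAINS.items():
        if request_path.startswith(prefix):
            return domain
    return DEFAULT_DOMAIN

# A high-resource request lands in its purpose-configured pool, while
# everything else scales at the nominal rate of the general pool.
domain = route("/search?q=cloud")
```

The point of the sketch is that only the hot partitions hyper-scale: growing the `cpu-intensive` pool leaves the `general` pool, and its cost, untouched.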

Today’s demanding business and operational requirements cannot be met with the simple scalability strategies of yesterday. Not only are legacy strategies based on infinite resources and budgets, they are inherently based on legacy application design in which functional partitioning was not only difficult, it was nearly impossible without the aid of methodologies like SOA.

Web applications are uniquely positioned such that they are perfectly suited to partitioning strategies whether at the functional or type or session layers. The contextual data shared by web applications with infrastructure capable of intercepting, inspecting and acting upon that data means more modern, architectural-based scaling strategies can be put into play. Doing so affords organizations the means to achieve higher efficiency and utilization rates, while in turn improving the performance, resiliency and availability of those applications.

These strategies require infrastructure services and an understanding of the resource needs and usage patterns of the application as well as the ability to implement that architecture using infrastructure services and platform optimization.

It’s a job for devops.

Richard Seroter (@rseroter) posted Interview Series: Four Questions With … Pablo Cibraro on 7/5/2011:

Hi there and welcome to the 32nd interview in my series of chats with thought leaders in the “connected technology” space. This month, we are talking with Pablo Cibraro who is the Regional CTO for innovative tech company Tellago, Microsoft MVP, blogger, and regular Twitter user.

Pablo has some unique perspectives due to his work across the entire Microsoft application platform stack. Let’s hear what he has to say:

Q: In a recent blog post you talk about not using web services unless you need to. What do you think are the most obvious cases when building a distributed service makes sense? When should you avoid it?

A: Some architects tend to move application logic to web services for the simple reason of distributing load on a separate layer or because they think these services might be reused in the future for other systems. However, these facts are not always true. You typically use web services for providing certain integration points in your system but not as a way to expose every single piece of functionality in a distributed fashion. Otherwise, you will end up with a great number of services that don’t really make sense and a very complicated architecture to maintain. There are, however, some exceptions to this rule when you are building distributed applications with a thin UI layer and all the application logic running on the server side. Smart client applications, Silverlight applications or any application running on a device are typical examples of applications with this kind of architecture.

In a nutshell, I think these are some of the obvious cases where web services make sense:

  • You need to provide an integration point in your system in a loosely coupled manner.
  • There are explicit requirements for running a piece of functionality remotely on a specific machine.

If you don’t have any of these requirements in the application or system you are building, you should really avoid them. Otherwise, web services will add an extra level of complexity to the system, as you will have more components to maintain and configure. In addition, calling a service represents a cross-boundary call, so you might introduce another point of failure in the system.

Q: There has been some good discussion (here, here) in the tech community about REST in the enterprise. Do you think that REST will soon make significant inroads within enterprises or do you think SOAP is currently better suited for enterprise integration?

A: REST is seeing great adoption for implementing services with massive consumption on the web. If you want to reach a great number of clients running on a variety of platforms, you will want to use something everybody understands, and that’s where HTTP and REST services come in. All the public APIs for cloud infrastructure and services are based on REST services as well. I do believe REST will start getting some adoption in the enterprise, but not as something happening in the short term. For internal developments in the enterprise, I think developers are still very comfortable working with SOAP services and all the tooling they have. Even though integration is much simpler with REST services, designing REST services well requires a completely different mindset, and many developers are still not prepared to make that switch. All the things you can do with SOAP today can also be done with REST. I don’t buy some of the excuses that developers have for not using REST services (for example, that REST services don’t support distributed transactions or workflows), because most of them are not necessarily true. I’ve never seen a WS-Transaction implementation in my life.

Q: Are we (and by “we” I mean technology enthusiasts) way ahead of the market when it comes to using cloud platforms (e.g. Azure AppFabric, Amazon SQS, PubNub) for integration or do you think companies are ready to send certain data through off-site integration brokers?

A: Yes, I still see some resistance in organizations to moving their development efforts to the cloud. I think Microsoft, Amazon and other cloud vendors are pushing hard today to break that barrier. However, I do see a lot of potential in this kind of cloud infrastructure for integrating applications running in different organizations. All the infrastructure you had to build yourself in the past for doing the same is now available to you in the cloud, so why not use it?

Q [stupid question]: Sometimes substituting one thing for another is ok. But “accidental substitutions” are the worst. For instance, if you want to wash your hands and mistakenly use hand lotion instead of soap, that’s bad news. For me, the absolute worst is thinking I got Ranch dressing on a salad, realizing it’s Blue Cheese dressing instead and trying to temper my gag reflex. What accidental substitutions in technology or life really ruin your day?

A: I don’t usually let simple things ruin my day. Bad decisions that will affect me in the long run are the ones that concern me most. The fact that I will have to fix something or pay the consequences of that mistake is what usually pisses me off.

Clearly Pablo is a mellow guy and makes me look like a psychopath. Well done!

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Larry Grothaus (pictured below) posted Changing the Conversation - Server Virtualization is the Overture, Not the Finale, a guest post from Brad Anderson that celebrates Microsoft as a leader in Gartner’s new Server Virtualization Infrastructure Magic Quadrant, on 7/5/2011:

I wanted to provide a guest blog post from Brad Anderson, Microsoft Corporate Vice President, in which he shares the news that Gartner has named Microsoft a leader in their 2011 Magic Quadrant for x86 Server Virtualization Infrastructure*. In his post, Brad covers topics such as private cloud computing and the role that virtualization plays in it, as well as touching on public cloud solutions.


Please check out Brad’s entire post below and you can visit the following links for more information on Microsoft’s private cloud or Windows Azure public cloud offerings. Thanks and let me know if you have any questions or comments. Larry

Gartner just published the 2011 Magic Quadrant for x86 Server Virtualization Infrastructure* and I’m very happy to report that Microsoft is listed among the leaders. Coming on the heels of InfoWorld’s Virtualization Shootout and a Microsoft-commissioned lab test by Enterprise Strategy Group, the Magic Quadrant rounds out a trifecta of independent recognition for Windows Server Hyper-V’s readiness in the enterprise. Added to this, a growing number of customers like Target and Lionbridge are running their businesses on Microsoft’s virtualization technologies.

What does this mean for you and your business? For one thing, it means the conversation about virtualization has changed for good. Now you can base your decision on value and which partner has the most compelling vision and strategy for the next logical step—private cloud computing. Private clouds provide elasticity, shared hardware, usage-based self-service—plus unique security, control and customization on IT resources dedicated to a single organization.

Throughout our industry, virtualization has become widely accepted as a means to a bigger end. In order to get the full advantage of cloud computing you need to have world-class management capabilities that deeply understand the virtualized infrastructure—but more importantly have an in-depth understanding of the applications that are running virtualized. Microsoft’s management solutions provide that insight. System Center 2012 will offer the simplest solution to build private clouds at the lowest price, using the infrastructure you are already familiar with and integrating seamlessly across the common virtualization platforms. “Concero,” a new capability in System Center 2012, empowers the consumers of cloud-based applications to deploy and manage those apps on private and public cloud infrastructures, helping IT managers deliver greater flexibility and agility to their business teams.

But customers don’t have to wait for System Center 2012 to get started with private cloud. Microsoft and its partners—including Dell, Fujitsu, Hitachi, HP, IBM and NEC—already offer a range of private cloud solutions (custom, pre-configured, or hosted) built on top of Windows Server Hyper-V and System Center 2010. These solutions pool hardware, storage, and compute resources so you can deploy applications quickly and easily. With Microsoft’s private cloud solutions, IT can empower their business groups to deploy applications and ensure those applications perform reliably.

And our private cloud solution is the only one in the industry that builds a bridge between your existing investments—in both infrastructure and skills—and the public cloud. For many large enterprises, the best solution will be to adopt both public and private clouds, often using them in tandem as a “hybrid cloud.” Microsoft customers will be able to do this seamlessly with a common set of familiar tools—including development, management and identity solutions—that span the entire spectrum, allowing IT to manage their public and private clouds from a single pane of glass and to adapt the mix easily to changing business needs.

If IT’s primary role is to deliver applications that move the business forward, then an application-centric approach will help you stay focused on what drives business value. It’s this unique combination—private and public clouds built and managed with one set of tools—that enables Microsoft’s customers to focus on the applications rather than the underlying technology. As business needs evolve over time, you maintain control and flexibility over how you create, consume, deploy and manage applications in the cloud. With Microsoft’s comprehensive approach your applications drive the resources, not the other way around.

Thanks for your time. Brad

*The Magic Quadrant is copyrighted 2011 by Gartner, Inc. and is reused with permission. The Magic Quadrant is a graphical representation of a marketplace at and for a specific time period. It depicts Gartner's analysis of how certain vendors measure against criteria for that marketplace, as defined by Gartner. Gartner does not endorse any vendor, product or service depicted in the Magic Quadrant, and does not advise technology users to select only those vendors placed in the "Leaders" quadrant. The Magic Quadrant is intended solely as a research tool, and is not meant to be a specific guide to action. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

<Return to section navigation list>

Cloud Security and Governance

Jason Bloomberg (@TheEbizWizard) posted Cloud Computing: Legal Quagmire to the Zap Think blog on 7/5/2011:

If you don’t realize by now that Cloud Computing has its risks, then, well, you must have your head in the clouds. But then again, without risk there is no reward. When you place a bet on the Cloud, you know you’re betting on an emerging set of capabilities. And in any case, there are risks everywhere in business. Why should the Cloud be any different?


Even if you are willing to take on the risks of the Cloud, you must still do whatever you can to mitigate those risks. And unfortunately, risk means liability, and that means lawyers. To help make sure you and your lawyer are up to speed on all the legal ramifications of Cloud Computing, we’ve assembled the following list of concerns. Ignore the items on this list at your own peril.

Liabilities related to the geographic location of your data in the cloud

  • Legal jurisdiction – Where your Cloud provider is physically located may impact the legal jurisdiction that applies to your contract with the provider. How will you know which laws apply to your data if you don’t know what country or state your data currently reside in?

  • Regulatory Compliance – There may be regulatory constraints that limit where you locate your data. There’s no guarantee your Cloud provider will locate your data in your country—unless, of course, you pay them for that guarantee.

  • Disputes – If you need to arbitrate with or sue your provider, where do you do that? The business location of the provider may not be the same as the physical location of the data, complicating this issue.

  • Moving data across borders – The European Union is very particular about this rule. You can be held liable for moving customer information across borders without their permission.

Third-party access to your data

  • Search warrants – If a law enforcement agency has a search warrant for the server or hard drive that hosts your data, then they can remove the hardware from the provider’s data center and put it into evidence. For a long time. If you’re up to no good, that’s one thing, but they may be going after suspected criminal activity of another one of the provider’s customers that happens to share space with you on the same physical server or drive.

  • PATRIOT Act seizures – If the FBI or another US federal agency suspects terrorist activity, they don’t even need a search warrant. They’ll simply walk into the provider’s data center and take whatever equipment they want. Think you’ll see your data again? Not likely. Does this sort of thing only happen in the US? I wouldn’t count on it.

  • eDiscovery/subpoenas – Even if no one suspects criminal activity, if you or someone else on the same server is party to a lawsuit, the opposing counsel can subpoena the data on the server. And just as with a search warrant, it may be many months before they return the hardware to the provider. Another question for your provider: what is the nature of their response to a subpoena? Do they need to inform you when a subpoena affects your data? What are your responsibilities in the face of a subpoena? For example, it may be illegal for you to delete data, even if the subpoena doesn’t explicitly specify such a restriction.

  • Provider employee access – What access do employees of the Cloud provider have to your data or machine instances? They have some level of responsibility for administering your account, but does that mean they have access to your data?

  • Trade secret & attorney/client privilege protection – If you have privileged information in the Cloud, either trade secrets or attorney communications, then making that information available to a third party can remove the privilege—even if the third party in question is just an admin at the provider backing up a server.

  • Liability of rogue employee – Employees of your Cloud provider aren’t the only risk. What if one of your own employees uses your Cloud account for illegal purposes? How much liability does your company have, and how do you mitigate such risks?

Responsibility and how to allocate it

  • Insurance in case of disaster – Do you have the proper insurance? What sort of disasters would be covered under your provider’s insurance, and which ones do you need to insure against yourself?

  • Liability for breach of privacy – Somehow your confidential data are leaked to the Internet. Under what circumstances is your provider liable for such a breach?

  • Liability for commingling with illegal data – Sharing hardware with criminals and other unsavory types can lead to those pesky search warrants and subpoenas, but you should also understand your liability for having your data in close proximity to illegal data. Innocence may be no excuse when the feds find child pornography on the same server as your machine instances.

  • Liability for hacking – Hackers compromise your data or your machine instances. The weakness they targeted may have been your provider’s fault, but then again, maybe your own people misconfigured your machine instances, allowing the bad guys in. How do you determine the liability? What if the hackers installed a botnet in your machine instance that they used to penetrate the security of another company, who now wants to sue. Can they sue you?

  • Risk allocation – In those situations where perhaps you’re partly to blame for a disaster or a breach, how do you allocate the risk between your company and the Cloud provider? And will your insurance company pay a claim if you are partly to blame?

Logging and auditing requirements and risks

  • Supporting legal requirement for logging – Some regulations provide for specific logging and auditing requirements. For example, HIPAA requires you to maintain an audit log of everyone who accesses an electronic health record—even if it’s an admin at the Cloud provider. Make sure you communicate your specific logging and auditing requirements to your provider and include those requirements in your contract.

  • Privacy of logs – Sometimes the audit logs themselves contain confidential information. You must contract with your provider to properly encrypt that information, and you also need to mitigate the risk that such encryption is inadequate, allowing the logs to be compromised.
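To make the logging requirement concrete, the sketch below assembles one access-log entry of the kind HIPAA-style rules contemplate, with a digest to make later tampering evident. The field names and digest scheme are illustrative assumptions, not anything mandated by regulation; production systems would also chain entries together and encrypt the log at rest, per the point above about log privacy.

```python
# Sketch: one tamper-evident access-audit entry for a protected record.
# Field names and the digest scheme are illustrative assumptions only.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id, record_id, action, source_ip):
    """Build a single access-log entry capturing who touched what, when."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,      # includes provider admins, not just app users
        "record": record_id,
        "action": action,     # e.g. "read", "update"
        "source_ip": source_ip,
    }
    # A digest over the serialized entry helps detect later tampering; a
    # real deployment would chain digests across entries as well.
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    return entry

rec = audit_record("admin@provider", "patient-1234", "read", "203.0.113.7")
```

Note that the `user` field must cover everyone who touches the record, which is exactly why the requirement has to be written into the contract with the Cloud provider: its administrators generate entries too.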

Other regulatory compliance issues

  • Regulations specific to your industry – The web of regulations is both extraordinarily complex and entirely arbitrary. It is your responsibility to ensure you don’t run afoul of any regulations that pertain to storing, moving, or using data in the Cloud.

  • Risk of regulatory change – For the most part, today’s regulations that apply to the Cloud were around before the notion of Cloud Computing took off. Once regulators get a handle on the issues Cloud presents, however, you can expect new regulations to follow—and of course, it’s impossible to fully plan for them.

  • Requirement for provider audits and security certifications – You may also have regulatory priorities that require your Cloud provider to conduct its own internal audits or obtain security certifications. As regulations develop, expect such certifications to proliferate as well.

What if your Cloud provider declares bankruptcy?

  • Salvage rights to data – One day everything seems to be fine, but the next your provider is out of business and liquidating its assets. That means the servers that held your precious data are now on eBay, and they’ll soon belong to the highest bidder. To avoid this nightmare scenario, you’ll need to put in place some ironclad protections that will survive even a liquidation bankruptcy.

  • Escrow of provider data, code, and configurations – Your own data aren’t the only things you might want to protect should your Cloud provider go belly up. Depending on how you’re using the Cloud, you may want to require your provider to escrow its own data, code, or configuration files, in the admittedly slender hope that if their servers go on the auction block, there’s some way to rebuild your Cloud application without starting from scratch.

The ZapThink Take

You probably picked up on the general assumption that this article is discussing Public Clouds in particular. That assumption is generally true, but it’s important to realize that Private Clouds have many of the same risks. You must still comply with regulations, deal with rogue employees, and potentially even respond to subpoenas or search warrants, after all. The list goes on.

Instead of focusing your efforts on ensuring you’ve put together an ironclad agreement with a third-party Cloud provider, you must now serve as provider as well as customer if you’re building a Private Cloud. Yes, you have greater visibility and control, but you also have even greater responsibility and liability than if you are working with a Public Cloud provider. After all, having one throat to choke is no consolation when the only throat available is your own!

Elizabeth White asserted “At Cloud Expo New York, PerspecSys CTO Terry Woloszyn discussed how to defend against attacks on sensitive data in the clouds” in a deck for her War in the Clouds: Are You Ready? post of 7/6/2011 to the SYS-CON Media blog:

As cloud application adoption becomes pervasive throughout the enterprise, concerns around cloud data privacy, residency, and security continue to grow. A number of enterprises are slowing, and even reversing, their cloud application adoption until they can address the concerns stemming from regulatory compliance requirements, industry standards, or internal policies surrounding sensitive data management.

In his general session at Cloud Expo New York, Terry Woloszyn, Founder/CTO of PerspecSys Inc., explored the "war on your cloud data" and what the enterprise can do to defend against the attacks on sensitive data in the clouds. With these defenses in place, the enterprise can move forward with cloud application adoption more securely.

Click Here to View This General Session Now!

Petri I. Salonen asked Who owns your data in the cloud and do you care if the vendor uses it to derivate work? in a 7/5/2011 post:

Last week we could read about a security breach at Dropbox, where a bug made file passwords optional for a few hours. This break in security led to an outcry in the user community, and with reason. What if you had your personal and very private information stored at Dropbox and now suddenly it was open to anybody? Dropbox said that less than 1% of users logged in during this time, but if the company has 25 million users, even one percent is a considerable number, specifically if the breach caused users an issue.

A few days later Dropbox was in the news again, this time for changing its Terms and Conditions of use; the new terms give Dropbox the authority to use your information via the following statement:

We sometimes need your permission to do what you ask us to do with your stuff (for example, hosting, making public, or sharing your files). By submitting your stuff to the Services, you grant us (and those we work with to provide the Services) worldwide, non-exclusive, royalty-free, sublicenseable rights to use, copy, distribute, prepare derivative works (such as translations or format conversions) of, perform, or publicly display that stuff to the extent reasonably necessary for the Service. This license is solely to enable us to technically administer, display, and operate the Services. You must ensure you have the rights you need to grant us that permission.

The question I now have is whether Dropbox can sell my content to search engine vendors to index and use for targeted marketing. The timing of this topic is pretty interesting, as I was reading last night a book by Eli Pariser called The Filter Bubble: What the Internet Is Hiding from You. The book really opened my eyes concerning personalization of search results based on YOU and your profile. If you assume the search results are the same for everybody, I want you to think again… you and I will have different search results even if we use the same search terms.

The Dropboxes of the world have valuations based on future expectations, and according to Cnet News, the company now has more than 25 million users who are using the service for free. According to TechCrunch, the rumored valuation of the company today is as high as $1.5 to $2 billion. But this valuation is based on people trusting the service, as TechCrunch concludes in their blog entry.

My question has always been: can we expect anything if we get things for free? If the only idea for your business is to take venture capital, run the business at a huge loss, and then capitalize on valuation expectations like many other companies have, then I do get it. But if you build a software business with the idea of being around for a while and having a sustainable and profitable business, I can’t see a free model working. I am sure that even Dropbox is considering using the content in the “free accounts” to drive ad revenue, as indexing the content will enable targeted marketing for the end user of the “free service”.

Michael Krigsman from ZDNet concludes that Dropbox is unlikely to read your “Stuff,” but he suggests discontinuing use of the product for applications where privacy and confidentiality are mission critical. I believe this has nothing to do with the bug or security breach, but more with how the terms and conditions are laid out for users. You need to be your own judge when you use the service and decide whether you feel it is OK to give Dropbox the authority over your data that the terms suggest. Dropbox has responded to the outcry over the change in terms and conditions in their blog, so you can judge from the response how you want to view the change.

My primary offline storage is on Windows Live SkyDrive. I dropped my Dropbox account after their security mishap. If I hadn’t already been a former user by the time I read Petri’s post, I would have dropped it then.

<Return to section navigation list>

Cloud Computing Events

Cory Fowler (@SyntaxC4) reported a Cloud Startup Challenge [Powered By Microsoft] on 7/5/2011:

A few weeks ago, I was involved with a Cloud Startup Challenge at Microsoft Canada’s Meadowvale Campus. The Cloud Startup Challenge invited startups from across Canada to submit their business plans. Out of the 500 entries, only 5 companies were selected to participate. My role in the Cloud Startup Challenge was Cloud Mentor, responsible for helping the startups get their applications up and running on Windows Azure.

The Participants

Presentation: Windows Azure for Startups


Cloud Startup Challenge Outcome

The winner of the Cloud Startup Challenge was YourVirtualButler, a service that organizes Co-Work Spaces [like my friends at threefortynine* in Guelph]. This Software as a Service [SaaS] application inventories and manages shared-space environments, keeping track of available rooms as well as equipment and refreshments.

Congrats! I look forward to seeing how YourVirtualButler takes off in the coming months.

* ThreeFortyNine is not a Client of YourVirtualButler

<Return to section navigation list>

Other Cloud Computing Platforms and Services

Alex Handy included Heroku, Cloud Foundry, App Harbor, CloudBees and Dot Cloud, but not Windows Azure in his The top five platform-as-a-service offerings you should know about article of 7/5/2011 for the SD Times on the Web blog:


If all you need to run your application is a Ruby runtime and some Node.js support, Heroku is head-and-shoulders above all other platform-as-a-service providers. Though it is now owned by Salesforce, much to the chagrin of VMware, Heroku remains the next-generation cloud platform. You don't worry about spinning up servers or about scaling your application. Heroku just makes your app work in the cloud and for your users.

Obviously, the limit of what's available is the biggest downfall for Heroku. We can only hope that Salesforce is putting resources into expanding Heroku to include Java, .NET, Python, C, C++, and all the other languages and stacks an enterprise would use. Heroku is a bit like the Apple of PaaS: It does one thing, and does it very, very well, but you won't be hosting your trading applications there. For that, you might want to take a look at VMware.


Cloud Foundry

Cloud Foundry has the best pedigree on this list, but it's also a bit of a dark horse in the existing marketplace. Whether it stays a dark horse depends on VMware getting Cloud Foundry into shape.

The idea behind Cloud Foundry is great: host Java, Ruby and Node.js in a platform that can be run just about anywhere. Whereas Heroku is hosted in the cloud, Cloud Foundry is designed to run in your data center. As a result, your enterprise Spring applications can be migrated over to this modular platform, then spun up in any data center that needs them and runs Cloud Foundry. And with SpringSource behind the platform, you can be sure the Java stacks will be fast, lean and optimized for enterprises.

Of course, there are still doubts around the platform. It did come out of nowhere, with almost no warning from VMware. Previously, Cloud Foundry was just a name, but it was snatched up a few days before SpringSource was purchased by VMware. Rumors claim that Cloud Foundry pushed that acquisition over the edge and was the deciding factor for VMware to purchase SpringSource.

However, it's been two years now, and Cloud Foundry still isn't here in full. It's almost as if VMware missed out entirely on Heroku, then turned around and decided to amend its existing plans to building a Heroku competitor. If Cloud Foundry and its MicroCloud desktop test deployment environment come to fruition, they could be quite useful and powerful. Keep an eye out for the final public release of MicroCloud later this summer, or give Cloud Foundry a try now by asking for an invite at its site.
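For readers who do get an invite, the beta's command-line client is the `vmc` Ruby gem; a minimal session looks roughly like the following sketch (the app name is illustrative, and the exact prompts may differ):

```shell
gem install vmc                      # Cloud Foundry's command-line client
vmc target api.cloudfoundry.com      # point at the hosted beta (or your own instance)
vmc login                            # prompts for the credentials from your invite
vmc push my-spring-app               # interactively binds URL, memory and services, then deploys
```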


CloudBees

At first glance, CloudBees is all about Jenkins. You know, the Java continuous integration suite that forked away from Hudson due to Oracle's meddling? CloudBees is entirely about Jenkins, and that's why it's the single most developer-focused PaaS in this list. Instead of worrying about catchall administration and anti-Amazon pricing, CloudBees is about helping to make build and deploy as easy as possible.
Of course, once you move your code to the cloud and build it there, it only makes sense to just go ahead and host it there. Why build and test in one cloud, then deploy to another? With CloudBees, you don't have to. If you've got a big Java application and it's constantly evolving, CloudBees may be the best place for it to live.


AppHarbor

All this focus on Java and Ruby in the cloud could make the .NET folk feel like they're left out in the cold. But the .NET kids like to use the cool tools too. That's why AppHarbor is based around rapidly building, testing and deploying .NET code. It's so in tune with the cool kids that it even offers a quick and easy way to push your code into AppHarbor from Git.

There's a reason the slogan for AppHarbor is “Azure Done Right.” Instead of slowly pushing out stack additions and new tools over time, AppHarbor is fully focused on making the build and deploy time faster for developers, and thus enabling more agility for programmers working in Microsoft's environments.
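The Git-based deployment mentioned above amounts to adding AppHarbor as a remote and pushing. A sketch, assuming an application has already been created in the AppHarbor portal (the repository URL is illustrative; the portal displays the real one per application):

```shell
git remote add appharbor https://appharbor.com/myapp.git
git push appharbor master   # AppHarbor compiles the solution, runs its tests, then deploys
```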


Dot Cloud

Out of all the platforms listed here, there is only one that is truly, top to bottom, getting it right. Dot Cloud started as a Y Combinator company. That's a startup funding and incubation group in the Valley that is very well known for producing hip, cool new companies founded by young, visionary developers.
The Dot Cloud way of doing things is not tied to any stack. The company is relentlessly focused on building out stacks for every situation. Once a Ruby on Rails stack was built, everyone could use it. The same goes for Java and Python. In the end, Dot Cloud's goal is to allow developers to run anything, written in any language, using any stack. Instead of hoarding all the Ruby developers, or focusing exclusively on Java, Dot Cloud aims to be a platform you can run anywhere with anything inside. Naturally, this also includes management tools that will work just about anywhere as well.

Dot Cloud is by far the most interesting PaaS offering out there right now. It's the one to watch, especially when it comes to its long-term plans.
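Dot Cloud's "any stack" pitch shows up in its build file: a single YAML description lists each service in the application and its type, and the platform assembles the matching stack for each one. A hedged sketch (the service names and types here are illustrative; consult Dot Cloud's documentation for the supported type values):

```yaml
# dotcloud.yml - one entry per service in the application
www:
  type: python      # a web front end on the Python stack
db:
  type: mysql       # a managed MySQL service
```

Pushing the project with the `dotcloud` CLI then builds and deploys each service on its declared stack.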

<Return to section navigation list>