Thursday, March 10, 2011

Windows Azure and Cloud Computing Posts for 3/10/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database and Reporting

Rob Tiffany (@robtiffany) continued his series with Reducing SQL Server Sync I/O Contention :: Tip 5 on 3/9/2011:

Today’s tip is an easy one.

When it comes to delivering server solutions with Windows Server and SQL Server, speed is your friend (as we used to say in the submarine service). More speed means more things can happen in a given period of time. If more things can happen in a given period of time, then you can derive greater scalability. Duh, Winning!

Okay, this stuff is obvious so let’s move on.

Have you ever noticed that when you’ve finished installing your shiny new Windows Server 2008 R2 box, the default Power Plan is set to “Balanced?” Guess what kind of performance and scalability you get when you decide to “Go Green” and save the world with a “Balanced” power plan? Needless to say, you’re not making the most of the high-powered CPUs you just paid big money for.

So how does this relate to SQL Server and reducing I/O contention?

Would it surprise you to know that the amount of time your CPUs spend processing your queries could actually double with a Balanced plan? If it takes more CPU time to execute a query, then imagine all those Merge Agent processes locking and blocking each other as they try to perform DML operations on the change tracking tables.

So what’s the takeaway here?

Set your Windows Server 2008 R2 power plan to High Performance! If you’re part of a Windows Domain and you need to make this setting stick, have your sys admin enforce this setting on all your SQL Servers via Group Policy.
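If you prefer to script that change rather than click through Control Panel, the built-in powercfg utility can switch the active plan from an elevated command prompt. This is a one-line sketch; the scheme_min alias maps to the High Performance plan on Windows Server 2008 R2, but verify the plans available on your servers with powercfg /list first:

powercfg /setactive scheme_min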

Go fast or go home because your users care about performance.


<Return to section navigation list> 

MarketPlace DataMarket and OData

My (@rogerjenn) Upsizing the Northwind Web Database to an Updated Hosted SharePoint 2010 Server post of 3/10/2011 begins as follows:

Chapter 23 of my Microsoft Access 2010 In Depth book, “Sharing Web Databases with SharePoint Server 2010,” includes a “Signing Up for and Testing a Trial Access Hosting Account” section. After the book was published, Access Hosting upgraded their shared hosting plan in January 2011 with the following new features:

    • Support for English, Spanish, German, and French language packs on our shared hosting plans …
    • Support for the OData service entry point into all Access Web Databases (see the “What is OData and Why Should I Care?” section below) [Emphasis added.]
    • A new and improved control panel for self-service administration of user account passwords
    • Support for permissive handling of all file types including PDF
    • Support for SharePoint Designer outside of Access Web Databases

I was interested in giving SharePoint’s Open Data Protocol (OData) Service a try, so I requested and received an upgrade to my Access Hosting test account. To simplify the initial testing of OData representation of data delivered by SharePoint Server’s Access Services, I downloaded the Northwind Traders Web Database template from Office Online to create a local NorthwindWebDatabase.accdb (NWWDB) Access 2010 database.

The post continues with details of upsizing the sample Northwind Web Database to a multi-tenanted NorthwindTraders SharePoint site and analyzing SharePoint’s OData feed for the database. Here’s the feed for a single Products item in IE9’s template:


And here’s the Products list in a customized Microsoft Excel PowerPivot table:


Marcelo Lopez Ruiz recommended Uplevel NuGet in a 3/10/2011 post:

In the wake of the recent datajs on NuGet announcement, it's very much worth noting that you can significantly uplevel your experience - just take a look at Scott's post on this.

Glenn Gailey (@ggailey777) reported March 2011 CTP of WCF Data Services for OData v3 is Live on 3/9/2011:

There is a new CTP out for WCF Data Services (v3), which features the long-awaited support for properties on derived types (yay!).
You can get it from here.

The following proposed OData version 3.0 functionality is also supported by WCF Data Services in this release:

  • Multi-Valued Data Types
    OData now enables you to define multi-valued types in the data model of your data service. These multi-valued types contain unordered collections of primitive or complex types.
  • Named Resource Streams
    OData now enables you to define named resource streams for a given entity. This gives you the ability to have more than one binary data stream associated with a given entity. The .NET Framework client library now enables you to access named resource streams.
  • PATCH Requests
    A new PATCH method has been added to the HTTP standard. OData now supports this new HTTP method. WCF Data Services handles PATCH requests in the same way that it handles MERGE requests. The .NET Framework client library now enables you to request that updates be sent to the data service by using a PATCH request.
  • Prefer Header Support
    OData now supports the ability for clients to request whether or not a payload is returned in response to a POST, PUT, MERGE, or PATCH request. This client preference is indicated in the request by the value of the Prefer header. When using the .NET Framework client, this preference is managed by the DataServiceContext. (A sample request illustrating PATCH and the Prefer header follows this list.)
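To make the PATCH and Prefer items above concrete, an update request on the wire might look roughly like the following sketch. The service URI, entity set, and property are hypothetical; Prefer: return-no-content asks the service not to echo the updated entity back in the response:

PATCH /MyService.svc/Products(1) HTTP/1.1
Host: services.example.com
Content-Type: application/json
Prefer: return-no-content

{ "Price": 19.99 }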

The following additional functionality is also provided by WCF Data Services in this release:

  • Properties on Derived Types
    You can now explicitly create relationships between derived types. Previously, you could only do this on the base type.
  • Expanded Support for Feed Customization
    Feed customization now supports mapping to the atom:link and atom:category elements and mapping multi-valued properties. You can now also define conditional criteria when mapping to the attributes of specific elements in the returned feed.
  • Support for Entity Sets with Different Base URIs
    The OData protocol allows for a data service to expose entity sets as collections that have different base URIs. Previously, the .NET Framework client assumed that all entity sets shared the same base URI defined in the DataServiceContext. Now, you can define a delegate that is used by the context to resolve URIs for entity sets that do not share a base URI.
  • Including Relationship Links in the Response
    The OData protocol defines a method for addressing relationships between entities by using the $links operator in a URI. WCF Data Services now enables you to request that the data service include these links in entry elements in the response. This behavior is controlled by the IncludeRelationshipLinksInResponse configuration property. (A configuration sketch follows this list.)
  • The .NET Framework client library now uses relationship links when constructing URIs that address related entities, when they are present in the response.
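As referenced in the list above, here is a minimal sketch of a service configuration that opts in to the relationship-links behavior. The NorthwindService and NorthwindEntities names are hypothetical, and the sketch assumes the CTP exposes IncludeRelationshipLinksInResponse on DataServiceBehavior as described in this release:

using System.Data.Services;

// Hypothetical data service over an Entity Framework context named NorthwindEntities.
public class NorthwindService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Allow read access to all entity sets (tighten this for production services).
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);

        // Ask the runtime to emit relationship ($links) links in each entry of the response.
        config.DataServiceBehavior.IncludeRelationshipLinksInResponse = true;
    }
}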

Marcelo Lopez Ruiz announced datajs formats for OData flavored with simplicity in a 3/9/2011 post:

Alex has just uploaded the intermediate formats on the datajs CodePlex Wiki. These describe the shape of results provided by reading OData as well as the expected format to send data back to the server.

Like I mentioned before, these values do not have any classes or prototypes associated with them - you can simply recognize them by their shape, access them through fields, and create them with literals or in whichever way you prefer.

Another thing that we did when designing these was to keep them as close as possible to the on-the-wire representation for OData JSON payloads. This way, if you are using a tool like Fiddler to look at the data going back and forth, you will have a very good idea about how datajs is going to represent this in memory.

Ahmed Moustafa posted Announcing WCF Data Services March 2011 CTP2 for .NET4 & SL4 on 3/9/2011:

Today we are releasing CTP2 of the next version of the WCF Data Services libraries.  This release targets .NET 4 and Silverlight 4 and includes new client and server features in addition to those that shipped as part of the Oct 2010 CTP1.   Below is a brief summary of the features available in this CTP.  Subsequent blog posts will discuss each feature in more detail and provide examples of how to use each.

Properties on derived types:  The WCF Data Services framework is designed to make it possible to expose a model that has an inheritance hierarchy for its entities; however, a current limitation is that only operations on properties that exist on the base type associated with the set are supported. This feature has been among our customers' top asks, since the lack of support makes exposing models with rich, well-defined inheritance hierarchies impossible. To enable such scenarios, WCF Data Services now supports both exposing and consuming models which have properties (primitive, complex & navigation) defined on subtypes of the base type associated with the set.

Frequently Asked Questions

Q1: What are the prerequisites?

A1: See the download center page for a list of prerequisites, supported operating systems, etc.

Q2: Does this CTP install side-by-side with Oct 2010 CTP1 that is currently on my development machine?

A2: Installation of CTP2 will result in setup automatically uninstalling CTP1 if it is installed on the machine. 

Q3: Does this CTP install side-by-side with the .NET 4 and Silverlight 4 versions that are currently on my development machine?

A3: By and large this install is side-by-side with existing .NET 4 and SL4 bits; however, that was not possible in all cases, so some VS files will be modified by the CTP installer to enable the Add Service Reference gesture in Visual Studio 2010 to make use of the new features in this CTP.  The files should be restored to their original state during uninstall of this CTP. 

Q4: Does this CTP include support for Windows Phone 7?

A4: No. You can download the OData Windows Phone 7 client separately, but the Windows Phone 7 client does not yet support new features like properties on derived types.

Giving Feedback

A dedicated forum is available for feedback on "pre-release" versions of data services such as this CTP. Please direct all your questions about the release to that forum. 

Note: The forum intended for questions on currently shipping versions of ADO.NET Data Services is still available as well.

We look forward to hearing your thoughts on the release!

Ahmed is a Program Manager, WCF Data Services

Ahmed Moustafa posted Introduction to Derived Properties on 3/9/2011:

What is a derived property?

A derived property is a property that does not exist on the EntityType associated with the EntitySet; rather it exists on a type that derives from the base type of the entity set. This feature has been among our customers' top asks, since the lack of support makes exposing models with rich, well-defined inheritance hierarchies impossible. To enable such scenarios, WCF Data Services now supports both exposing and consuming models which have properties (primitive, complex & navigation) defined on subtypes of the base type associated with the set.

How can I specify derived properties in CSDL?

As you know, WCF Data Services exposes a metadata document ($metadata endpoint) which describes the data model exposed by the service.  Below is an example of a metadata document that defines derived properties. No new extensions have been added to support this functionality. The protocol changes needed to support derived properties can be found here.
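The metadata example appears as a screenshot in the original post; as a stand-in, a minimal CSDL sketch for the Person/Employee model used in the client code below might look like this (the namespace, types, and properties are illustrative):

<Schema Namespace="Model" xmlns="http://schemas.microsoft.com/ado/2008/09/edm">
  <EntityType Name="Person">
    <Key>
      <PropertyRef Name="ID" />
    </Key>
    <Property Name="ID" Type="Edm.Int32" Nullable="false" />
    <Property Name="FirstName" Type="Edm.String" />
    <Property Name="LastName" Type="Edm.String" />
  </EntityType>
  <!-- The derived type simply declares a BaseType; its additional properties live only on the subtype. -->
  <EntityType Name="Employee" BaseType="Model.Person">
    <Property Name="Building" Type="Edm.String" />
    <!-- The Reports and Manager navigation properties are omitted here because they
         also require Association elements, which would add noise to this sketch. -->
  </EntityType>
</Schema>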


How are derived properties used on the client?

A representation of the above example in a .NET type would be:

public class Person
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class Employee : Person
{
    public string Building { get; set; }
    public IList<Employee> Reports { get; set; }
    public Employee Manager { get; set; }
}

Two new LINQ operators are now supported on the client:

  • OfType<T>:  used when the type specification needs to happen in the path segment.
  • As: used when the type appears in a query option (e.g. $filter, $select, $orderby)

Below are some examples of how to query derived properties on the client:

PeopleData ctx = new PeopleData(serviceUri);

// returns only Employee instances
foreach (Employee e in ctx.People.OfType<Employee>())
     . . . 

// returns a narrow projection that mixes content from Person and Employee
var q = from c in ctx.People
select new EmployeeBuilding { Name = c.FirstName, Building = (c as Employee).Building };
foreach (EmployeeBuilding s in q)
      . . .

// returns only entities that are Employees working in building 18
var q2 = from p in ctx.People
where (p.FirstName == "Bill" && ((p as Employee).Building == "18"))
select p;
foreach (Employee e in q2)
     . . .

// returns entities of type Person, expanding derived Reports navigation property on the employee instances
foreach (Person p in ctx.People.Expand(p => (p as Employee).Reports))
     . . .

Your feedback is appreciated.

Todd Hoff reported Google and Netflix Strategy: Use Partial Responses to Reduce Request Sizes in a 3/9/2011 post to the High Scalability blog:

This strategy targets reducing the amount of protocol data in packets by sending only the attributes that are needed. Google calls this Partial Response and Partial Update.

Netflix posted about adopting this strategy in their recent Netflix API redesign. We've seen previously how Netflix improved performance by creating less chatty protocols.

As a consequence packet sizes rise as more data is being stuffed into each packet in order to reduce the number of round trips. But we don't like large packets either (memory usage and packet processing overhead), so we have to think of creative ways to shrink them back down.

The change Netflix is making is to conceptualize their API as a database. What does this mean?

<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

The Windows Azure AppFabric Team announced New and Improved Windows Azure AppFabric Silverlight Portal Released! on 3/9/2011:

Today we released the new and improved Windows Azure AppFabric Silverlight-based portal experience.

The new Silverlight-based portal provides the same great experience as the Windows Azure and SQL Azure Silverlight portals.

The new Silverlight portal has the same capabilities as the old portal with an improved Silverlight experience.

We will have both the old and the new portals available in parallel for a while, but we will remove the old portal in the near future.

The possible error message that we announced in this blog post will not occur in the new portal, but might still occur in the old portal.

If you already opted in to use the new Silverlight-based portal for Windows Azure or SQL Azure, the next time you open the portal you will automatically see the new Silverlight AppFabric portal.

If you use the old URL to open the AppFabric portal, you will be directed to the old portal. You can continue using the old portal until it is removed, but you will now see a banner at the top of the page with a message and a link directing you to a page at which you can opt in to use the new Silverlight portal as your default view.

You can learn more about how to use Windows Azure AppFabric, including the management aspects of the portal, in our Windows Azure AppFabric SDK Documentation.

As always, please share your feedback with us and raise any questions you have in our Windows Azure Platform forums.

<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Maarten Balliauw (@maartenballiauw) analyzed Windows Azure CDN updates in a 3/9/2011 post:

The Windows Azure team has just put out the new Windows Azure SDK 1.4 for download. Next to that, I noticed some interesting new capabilities for the CDN (Content Delivery Network):

  • Windows Azure CDN for Hosted Services
    Developers can use the Windows Azure Web and VM roles as “origin” for objects to be delivered at scale via the Windows Azure Content Delivery Network. Static content in your website can be automatically edge-cached at locations throughout the United States, Europe, Asia, Australia and South America to provide maximum bandwidth and lower latency delivery of website content to users.
  • Serve secure content from the Windows Azure CDN
    A new checkbox option in the Windows Azure management portal to enable delivery of secure content via HTTPS through any existing Windows Azure CDN account.

That first one looks very interesting: before today, if you wanted to use the CDN feature, you’d have to upload all static content that should be served by the CDN to your blob storage account. Today, you can just use any hosted service as your CDN “source data” provider. This means you can deploy your application on Windows Azure and have its static content (or cachable dynamic content) cached in the CDN and delivered from edge locations all over the world.

Using the Windows Azure CDN with a hosted service

As with the blob storage based CDN, the management portal will give you a CDN domain name containing a unique identifier. This is the CDN endpoint that will serve content you specify for caching on the CDN. Of course, a prettier domain name can be linked to this URL as well. The source for this data will come from your hosted service's "cdn" subfolder: all content under that folder will be cached on the CDN and served from the corresponding path under the CDN endpoint. It's even possible to cache by query string.

One closing hint here: make sure to specify correct cache control headers for content. This will greatly improve your end users' CDN experience and reduce bandwidth costs between your source (blob or hosted service) and the CDN in many cases.
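To make that hint concrete, here is one way a web role might stamp cache-control headers onto everything it serves from a /cdn folder, using a handler in Global.asax. This is a minimal sketch; the folder name, the twelve-hour max-age, and the choice of the PreSendRequestHeaders event are assumptions to adapt to your own application:

using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
    {
        // Only content under /cdn is pulled through the Windows Azure CDN in this setup.
        if (Request.Path.StartsWith("/cdn/", StringComparison.OrdinalIgnoreCase))
        {
            // Public cacheability plus a max-age lets the CDN edge nodes serve the content
            // themselves instead of returning to the origin role on every request.
            Response.Cache.SetCacheability(HttpCacheability.Public);
            Response.Cache.SetMaxAge(TimeSpan.FromHours(12)); // tune per content type
        }
    }
}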

And one closing question for the Windows Azure team: it would be great if I could use my current blog as the CDN source. It's not on Windows Azure, yet I would want to use the CDN with my current host's data. This feature would also fit into the "cloud is not all or nothing" philosophy. Vote for this here :-)

<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Andy Cross (@andybareweb) announced Windows Azure SDK V1.4 Released and provided details of its contents in a 3/10/2011 post:

Microsoft have just released the Windows Azure SDK v1.4.

I will be going through my samples and upgrading anything I find to have changed. So far, from my initial investigations the changes are not breaking changes and focus on resolving issues within v1.3.

The release notes state the following:

  • Resolved an issue that caused full IIS to fail when the web.config file was set to read-only.
  • Resolved an issue that caused full IIS packages to double in size when packaged.
  • Resolved an issue that caused a full IIS web role to recycle when the diagnostics store was full.
  • Resolved an IIS log file permission issue which caused diagnostics to be unable to transfer IIS logs to Windows Azure storage.
  • Resolved an issue preventing csupload to run on x86 platforms.
  • User errors in the web.config are now more easily diagnosable.
  • Enhancements to improve the stability and robustness of Remote Desktop to Windows Azure Roles.

From this, I take that a number of the more annoying bugs in SDK v1.3 (IIS Logs and Web.Config being read only) have been resolved. This means I will revisit my previous blog posts on SDK 1.3 workarounds and SDK 1.3 Diagnostics.

The release notes do not detail any new or extended functionality around the core of the SDK and so I expect all existing code (for example all my blog samples) to remain compatible. I will be performing some compatibility testing later on today; I will post any results I can share.

The new features in the SDK are around Windows Azure Connect and Windows Azure Content Delivery Network (CDN). These features look like they’ve had some significant work done.  For more details see the Windows Azure Team Blog.

Riccardo Becker (@riccardobecker) asserted A Generic Worker beats all in a 3/9/2011 post:

Windows Azure will charge you for the amount of time your application is up (and maybe running). In order to fully utilize the resources that are at your disposal, you better be efficient with your worker roles in general. This blogpost will be the first in a row showing you how to gain maximum efficiency and still have your workerrole scalable.

The magic behind this is a generic worker that can handle different tasks.
Consider a worker role as a program that, in the beginning, does absolutely nothing but listen to a queue in your storage environment. This queue is the broker for your worker role and will be fed by the outside world (your own apps, or apps of others you will serve). The message contains a description of the task that the generic worker has to fulfill, including the location of an uploaded assembly in blob storage, parameters, and other information needed for the specific task.

After noticing the message on the queue, the worker role will look for the assembly in blob storage, load it in an app domain, and start executing the task described in the message. It can be a long-running calculation or a task that listens to another queue where it will be fed task-specific messages. It can be a single task executed just once or a task that will run forever (as long as the app is deployed, of course). The worker role loads the different assemblies and starts executing the different tasks on configurable intervals or when a new message arrives.
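As a rough, hypothetical sketch of what that dispatch loop could look like (the ITask contract, the queue and blob helper interfaces, and the message format are all assumptions; error handling, poison-message handling, and the app domain isolation mentioned above are omitted for brevity):

using System;

// Hypothetical contract that every uploaded task assembly implements.
public interface ITask
{
    void Execute(string parameters);
}

// Assumed thin wrappers over a storage queue and blob storage.
public interface IQueueClient { string GetNextMessage(); }
public interface IBlobClient { byte[] Download(string blobUri); }

public class GenericWorker
{
    private readonly IQueueClient queue;
    private readonly IBlobClient blobs;

    public GenericWorker(IQueueClient queue, IBlobClient blobs)
    {
        this.queue = queue;
        this.blobs = blobs;
    }

    public void Run()
    {
        while (true)
        {
            // Assumed message format: "blobUri|taskTypeName|parameters"
            string message = queue.GetNextMessage();
            if (message == null)
            {
                System.Threading.Thread.Sleep(5000); // nothing to do yet
                continue;
            }

            string[] parts = message.Split('|');
            byte[] assemblyBytes = blobs.Download(parts[0]);

            // Load the uploaded assembly and instantiate the requested task type.
            var assembly = System.Reflection.Assembly.Load(assemblyBytes);
            var task = (ITask)Activator.CreateInstance(assembly.GetType(parts[1]));

            // Long-running or queue-listening tasks could be started on their own threads.
            task.Execute(parts.Length > 2 ? parts[2] : null);
        }
    }
}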

Remember, this is a rough description of a generic worker that can be utilized up to 100%. That's what you need; after all, you are paying for it. Don't worry about the CPU getting hot!

To keep this worker role scalable, new instances of the role will need to preload the assemblies already available in the first instance. This requires some administration but hey, that's why we have storage at our disposal. Imagine a generic worker role that has tens of tasks running. One task is to provide elasticity to itself! When overrunning a certain limit (CPU, max number of threads, a steeply growing number of messages in queue(s)), it will scale itself up! Now that's what I call magic.

The next blog post will show you what the bare generic worker will look like.

Bruce Kyle reported a new ISV Video: Crowd-Sourcing Public Sector App for Windows Phone, Azure post of 3/9/2011:

A new ISV video on Channel 9 shows Blue Dot’s Advanced Mobile 311 crowd-sourcing solution for Public Sector that enables citizens to quickly and easily create non-emergency Service Requests through their Smart Phones. Leveraging the power of Visual Studio 2010 Express for Windows Phone, Blue Dot has developed a ‘Mobile 311’ application that is freely available on the Windows Phone Marketplace.

ISV Video: Crowd-Sourcing Public Sector App for Windows Phone, Azure

Randy Starr, VP of Technology at Blue Dot Solutions, demonstrates the Advanced Mobile 311 solution with ISV Architect Evangelist Bruce Kyle. Randy explains how Blue Dot Solutions used Windows Phone and Windows Azure to build out the system.

This application allows a citizen to report the actual GPS location of a problem or issue and the appropriate Service Code, and to optionally include a photograph and text comments related to the Service Request.  The Mobile 311 application interacts with a set of Windows Azure based Web Services that route the service requests to the appropriate municipality based upon the Service Request location or the selected Jurisdiction ID.  Windows Azure provides the 7x24x365 uptime, dynamic scalability, and affordability required by Blue Dot for the Advanced Mobile 311 solution as well as the opportunity to utilize the Microsoft .NET skills and enterprise mobile IP developed over the last several years.

About Blue Dot Solutions

Blue Dot Solutions is a leading provider of Packaged Mobile Line of Business Applications for ISVs specializing in enterprise solutions for asset intensive industries. Blue Dot mobile LOB solutions are developed using the Advanced Mobile Platform, a Microsoft .NET-based enterprise mobility platform that provides on-premise or cloud-based data communications services and data transformation and integration services for a variety of industry leading EAM, ERP, and T&L systems.

Free Assistance

Join Microsoft Platform Ready for free assistance in developing and marketing your Windows Phone 7 applications.

Other ISV Videos

For videos on developing for Windows Phone 7, see:

For videos on Windows Azure Platform, see:

For other videos about independent software vendors (ISVs):

Matt Stroshane explained Using NuGet with Windows Phone Applications in a 3/9/2011 post:

The first time that I heard about NuGet, it was in the context of ASP.NET, so I mistakenly assumed that NuGet was only for Web development. Now that I’ve done a little research, I’m happy to report that NuGet is for Windows Phone developers too. In fact, all .NET developers who use Visual Studio 2010 or Visual Web Developer 2010 can benefit from NuGet.

This post describes NuGet and shows how you can use it when developing a Windows Phone application. To demonstrate NuGet, we use it to install Shawn Wildermuth’s PhoneyTools project and then write a small app that uses PhoneyTools classes.

What is NuGet?

From the NuGet Web site:

“NuGet is a Visual Studio extension that makes it easy to install and update open source libraries and tools in Visual Studio. When you use NuGet to install a package, it copies the library files to your solution and automatically updates your project…”

NuGet is many things:

  • An open-source software project: coordinated, developed, and edited by a group of people on CodePlex in cooperation with the Outercurve Foundation.
  • A PowerShell-based Visual Studio console: named Package Manager Console, which helps simplify the process of incorporating third-party libraries into a .NET application during development.
  • A Visual Studio dialog: named Add Library Package Reference which does essentially the same thing as the console, but with the flair of a GUI.
  • A Web site named the NuGet Gallery, which provides a browser-friendly way to learn about NuGet and the third-party packages that are available via NuGet.

In summary, NuGet is a tool that installs, updates, and uninstalls NuGet packages in your Visual Studio project. A NuGet package is a unit of work. Installing a package prepares your Visual Studio project to use a specific set of libraries/tools that are defined by the package.

Installing NuGet

The easiest way to install NuGet is right from Visual Studio’s Extension Manager in the Tools menu. NuGet has great documentation. For details about this installation approach, see Using the Extension Manager to Install the Library Package Manager (NuGet).
After it’s installed, you’ll see it in Extension Manager as NuGet Package Manager:

NuGet Package Manager in Visual Studio Extension Manager

Using Add Library Package Reference

If you like GUIs, Add Library Package Reference is the dialog for you. To open it, right-click your Visual Studio project in Solution Explorer and click Add Library Package Reference. For the full details on how to install a package with it, see Finding and Installing a Package.

Despite the name, you can also use Add Library Package Reference for removing a package and updating a package.

NuGet's Add Library Package Reference in Visual Studio

Using the Package Manager Console

If you like a console experience, and want to take advantage of the benefits that PowerShell brings with it, the Package Manager Console is the way to go. You open the console from the Tools menu by clicking Tools | Library Package Manager | Package Manager Console. Great documentation is located at Finding and Installing a Package, Removing a Package, and Updating a Package.

Package Manager Cmdlets

A cmdlet is a self-contained single-function command-line tool that runs inside of PowerShell. Essentially, a cmdlet is a PowerShell command. Here are the Package Manager cmdlets that you’ll most likely use when developing Windows Phone applications:

  • get-help package – see all the cmdlets having "package" in the name
  • get-help install-package -full – see all the help about the install-package cmdlet
  • get-package – see the packages installed in the current project
  • get-package -remote phone – search all available packages for the word "phone"
  • install-package PhoneyTools – install the package having id=PhoneyTools
  • update-package PhoneyTools – update the package having id=PhoneyTools
  • uninstall-package PhoneyTools – uninstall the package having id=PhoneyTools

PowerShell Shortcuts

When using Package Manager Console, the following PowerShell shortcuts might come in handy:

  • (up arrow) previous command
  • (down arrow) next command (in history)
  • (tab) autocomplete for cmdlet names and parameters
  • cls - clears the console screen
Example: Using Phoney Tools

Shawn’s Phoney Windows Phone 7 project was one of the first Windows Phone NuGet packages that I heard of, so I thought it would be a fitting package to use as an example for this post. Because PowerShell is so very cool, this example will demonstrate the PowerShell-based Package Manager Console.

The code in this example is a VB single-page app that uses the PhoneyTools FadingMessage class to display the type of network connection that the phone is using. The network type is obtained using the PhoneyTools PhoneNetworking class.


This example assumes you have a professional version of Visual Studio 2010 or Visual Web Developer 2010, a baseline setup for Windows Phone development, and have installed NuGet in Visual Studio.

Before continuing, open up a new Windows Phone Application project in Visual Studio and make sure that you have Internet access.

1. Open the Package Manager Console

We open the console from the Tools menu by clicking Tools | Library Package Manager | Package Manager Console. As shown in the following image, note the Default project in the top-right of the console. We can use the console to install packages in other projects too. When we don’t specify a project in a cmdlet, then the project specified in Default project is used.

Visual Studio Package Manager Console

2. Query for Phoney Tools

The first thing we want to do is query for Phoney Tools. We know the name will have “phoney” in it, so we will use that as the search keyword with the get-package cmdlet. Remote searches the Web for available packages. Filter defines the keyword that will be used in the search.

PM> get-package -remote -filter phoney

Id Version Description
-- ------- -----------
PhoneyTools 0.1 Phoney Tools for Windows Phone...
PhoneyTools 0.2 Phoney Tools for Windows Phone...
PhoneyTools 0.3 Phoney Tools for Windows Phone...
PhoneyTools 0.3.1 Phoney Tools for Windows Phone...


Here we can see there are four versions of Phoney Tools. Note: by default, install-package installs the latest version.

3. Install Phoney Tools

Now that we know the Id of the package, we can use the name PhoneyTools to install the package in our current project. Id is the only required parameter, so we don’t need to specify the parameter name when it’s the only parameter we’re using. If we wanted a different version, or wanted to install it in a different project, we would need to use those parameters.
This example demonstrates the simple use of install-package:

PM> install-package PhoneyTools
Successfully installed 'PhoneyTools 0.3.1'.
Successfully added 'PhoneyTools 0.3.1' to NuGetPhoneyToolsExample.

4. Explore Phoney Tools

Now that PhoneyTools is installed in the project, we can view it in Solution Explorer. If you have a Visual Basic project, first click the Show All Files button in Solution Explorer. Then expand the References folder to show the PhoneyTools assembly, named AgiliTrain.PhoneyTools. These two steps are shown in the following image:

Showing All Files and References in Visual Studio Solution Explorer

To learn more about the PhoneyTools assemblies, right-click on the assembly name, AgiliTrain.PhoneyTools, and then click View in Object Browser. From Object Browser, you can explore the PhoneyTools API:

PhoneyTools API in Visual Studio Object Browser

5. Set Up the XAML in MainPage.xaml

Replace your LayoutRoot grid element with the following XAML in the MainPage.xaml file:

<!--LayoutRoot is the root grid where all page content is placed-->
<Grid x:Name="LayoutRoot" Background="Transparent">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>

    <!--TitlePanel contains the name of the application and page title-->
    <StackPanel x:Name="TitlePanel" Grid.Row="0" Margin="12,17,0,28">
        <TextBlock x:Name="ApplicationTitle" Text="Phoney Tools Demo"
                   Style="{StaticResource PhoneTextNormalStyle}"/>
        <TextBlock x:Name="PageTitle" Text="network type" Margin="9,-7,0,0"
                   Style="{StaticResource PhoneTextTitle1Style}"/>
    </StackPanel>

    <!--ContentPanel - place additional content here-->
    <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
        <Button Content="Display Network Type"
                HorizontalAlignment="Left" Margin="0,0,0,0"
                Name="Button1" VerticalAlignment="Top" />
    </Grid>
</Grid>

The only changes made to the default XAML file were modifying the Text values in the text blocks and adding a Button element in the ContentPanel grid element.

Note: Double-clicking Button1 in the designer will instruct Visual Studio to automatically create the event-handler method named Button1_Click.

6. Add Code to MainPage.xaml.vb

The code in this example uses two different PhoneyTools classes, PhoneNetworking and FadingMessage. Before adding code for the button click event handler, add the following Imports statements to the MainPage.xaml.vb file:

Imports AgiliTrain.PhoneyTools 'for FadingMessage class
Imports AgiliTrain.PhoneyTools.Net 'for PhoneNetworking class

Then add/replace the following event handler method to the MainPage class in the MainPage.xaml.vb file:

Private Sub Button1_Click(ByVal sender As System.Object,
                          ByVal e As System.Windows.RoutedEventArgs) _
                          Handles Button1.Click

    'get network type as a PhoneyTools NetworkType
    Dim CurrentNetType = PhoneNetworking.GetNetworkType()

    'build message string for PhoneyTools FadingMessage
    Dim CurrentNetMsg =
        String.Format("your current network type is {0}",
                      CurrentNetType.ToString())

    'show PhoneyTools FadingMessage for 2 seconds
    FadingMessage.Show(CurrentNetMsg, 2000)

End Sub

In this method, first we get the network type from the GetNetworkType() method and save it to CurrentNetType, which is of type NetworkType. Then we build the message string and display the network type with the ToString() method of CurrentNetType. Finally, we display the message for two seconds by calling the Show() method of the FadingMessage static class.

7. Run Your Project

After you’ve copied the code to your project, run the project (start debugging) by pressing F5. When the application comes up, clicking the Display Network Type button triggers the FadingMessage to appear as shown in the following image:

Example: Windows Phone app using Phoney Tools


NuGet can save you a lot of time when you are setting up your Windows Phone project, so check to see if a library has been NuGet-ified before you download it the old-fashioned way. There are several NuGet packages for Windows Phone, and more are appearing all the time.

For example, Coding4Fun and Silverlight each have NuGet-friendly toolkits available for Windows Phone. I’ve listed links to those and several others on my downloads page under Notable Downloads for Windows Phone Development; just click the NuGet links for more information.

See Also: NuGet

NuGet Documentation
Outercurve Foundation
NuGet project page at Outercurve
NuGet at CodePlex
NuGet forums
Using the Extension Manager to Install the Library Package Manager (NuGet)
Finding and Installing a NuGet Package Using the Package Manager Console
Package Manager Console Commands

Creating a NuGet Package in 7 easy steps – Plus using NuGet to integrate ASP.NET MVC 3 into existing Web Forms applications

See Also: Phoney Tools

Phoney Windows Phone 7 Project Now Available!
[video] Phoney’s FadingMessage Class
Phoney Tools Updated (WP7 Open Source Library)
PhoneyTools on CodePlex

See Also: PowerShell

PowerShell Documentation
Using Cmdlets

<Return to section navigation list> 

Visual Studio LightSwitch

No significant articles today.


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

My (@rogerjenn) How DevOps brings order to a cloud-oriented world post of 3/10/2011 carries this deck:

Today's evolved IT model magnifies the divide between developers and IT operatives, but DevOps pushes to increase collaboration among these diverging departments.

The article also includes a list of “monitoring and management tools for public and private Infrastructure as a Service and Platform as a Service clouds to choose from.”

Full disclosure: I’m a paid contributor to the site.

Steve Plank (@plankytronixx) recommended Cloud Computing Framework from Microsoft Research: Orleans in a 3/10/2011 post:

Grains and Silos. Promises are resolved if they are either kept or broken. It’s a bit of an academic read (well, it is from Microsoft Research), but it shows some very interesting thinking and ways of approaching the classes of problems the cloud gives us.

I wouldn’t say it’s for the faint-hearted, but it is very interesting.


Click here to read the paper (PDF).


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Robert Duffner (@rduffner) posted Thought Leaders in the Cloud: Talking with Srinivasan Sundara Rajan, Senior Solution Architect at Hewlett Packard to the Windows Azure Team blog on 3/10/2011:

Srinivasan Sundara Rajan works at Hewlett Packard as a Senior Solution Architect. His primary focus is on enabling service-oriented architecture (SOA) through legacy modernization for automobile industries. He worked as a consultant for Compuware, Verizon, and other organizations in the earlier parts of his career. All the views expressed here are Srinivasan's independent analysis of industry and solutions and not necessarily those of his current or past organizations.

Robert Duffner: Could you please take a moment to introduce yourself?

Srinivasan Sundara Rajan: I currently work as a senior solutions architect for HP Enterprise Services, catering to the larger customer accounts. I have more than 19 years of experience in the industry. I write regularly in Cloud Computing Journal on enterprise cloud adoption.

Robert: On the Supply Chain and Technology blog, the author says that, "SMBs are frantically moving to the cloud. Enterprises are not ready for it." What are your thoughts on that?

Srinivasan: I think that is correct to an extent, in the sense that enterprises are still contemplating. For an enterprise, infrastructure as a service is useful, but that is not a business transformation enabler. The real challenge for them is moving to the cloud so that their time to market improves, and software as a service is an ultimate goal for them, although they can never really arrive at the perfect software-as-a-service (SaaS) platform in the short run.

The issue for them is that the big ERP packages like SAP or Oracle Financials require a lot of customization. With that in mind, no SaaS offering can straightaway satisfy most of the larger enterprise needs, so the best bet for the enterprises is to get to a platform-as-a-service (PaaS) platform on top of your SaaS building blocks. In other words, you satisfy your basic needs on the SaaS, but you extend on top of it  with PaaS to satisfy your enterprise needs.

That type of platform is still evolving, and enterprises are not finding a ready-made answer to get into the cloud. Even though a lot of the standard concerns like security and availability have been addressed, I feel that the business capability enablement has yet to mature so that enterprises can fully adapt to the cloud.

Robert: How do you recommend that enterprises with the full spectrum of both very old legacy systems and new applications approach the cloud?


Srinivasan: I always choose the Microsoft platform because of its PaaS and strong integration tools with the on premises applications, and my approach is to recommend a hybrid environment. For the foreseeable future, a 100% cloud-only solution is not going to be viable for large enterprises.

Enterprises will continue to have their own data centers to support certain legacy applications and calculations that are not available as SaaS; for various reasons, enterprises are not going to move these critical applications to the cloud.

Still, there is very good potential for some 30 to 40 percent of the workload to move into the cloud.

Let us take an example in the very specific case of a data warehousing scenario. From a business intelligence perspective, there is an operational data store, which is a consolidation of all your transaction systems in near real time.

Near-real-time consolidation of all your operational systems may not move to the cloud, because they have to be near to your data centers to accumulate data as quickly as possible. On the other hand, operations such as enterprise reporting and data marts focus on historical, staggered accumulation of data.

You could always wait a week to accumulate your operational data store into the data warehouse, so the latency of cloud may be very compatible with that scenario. On the other hand, enterprise reporting raises issues in terms of licensing and standardization of the reports. For big enterprises facing those kinds of problems, SaaS or PaaS would be ideal for reporting. That's why the overall solution tends toward a hybrid delivery.

Robert: I am glad you mentioned the hybrid cloud environment, which is something we are finding a great deal of interest in among customers interested in Windows Azure. You just recently wrote an article about how data warehousing fits in a hybrid cloud environment.

Srinivasan: A data warehouse is not just one database. You have a transactional system, which may be like any kind of a legacy system. It could involve mainframes, ERPs, and other stuff. And traditionally you should have an operational data store, which is at the transaction level. You want to capture a real-time, or near-real-time, consolidated view of all this data.

You may have 15 different technologies like AS/400, mainframes, Oracle, and ERP in your transactional system and want to consolidate them in near real time into an ODS. Because of that near-real-time requirement and having so many adapters in place and so forth, you may want to keep your operational data store inside your data center, because of the latency and availability factors.

Data warehousing typically works on a variable load pattern. For example, a telecom company may load all its billing data to a data warehouse, so naturally during the billing cycles, they would have a very high level of activity, and then it might be cool for the next 10 days. And when the new customer file comes, they may want to do lot of address validation, address cleansing, and so on.

That variable load and the fact that data accumulates over a period, rather than requiring real-time response, as well as the availability of a common SaaS platform for reporting and a PaaS platform for extending that reporting, make the rest of the components a very good candidate for moving to the cloud.

In this scenario, certain pieces like the operational data store and transactional system continue to be on premises. The data warehouse itself and the compute resources that handle the monthly or weekly differential load can move to the cloud. That combination can benefit the enterprise  in terms of operational and other expenses while also providing the benefits of a platform tool.

Robert: Are there any other scenarios where you see hybrid cloud as being particularly important, beyond those that you've already mentioned?

Srinivasan: Product lifecycle management is becoming more and more collaborative. Big manufacturers make certain components of a large product such as an automobile, airplane, or industrial equipment, and you want a lot of suppliers to collaborate with the design. You also want to enable connected parts purchasing, quality assurance, and several other areas.

These manufacturers may use very specialized systems that house a lot of IP, such as CAD systems for engineering drawings, and they may not want to expose everything to the cloud. Still, cloud provides a very good platform to collaborate with the appropriate security controls.

For example, Azure has got Windows Active Directory Federation and other tools. These give you a very good platform for multiple people to collaborate over the cloud while still keeping certain systems isolated. Please also refer to my article, Federated Security in Windows Azure.

Hybrid cloud can also be very useful in terms of reducing dependence on legacy systems. For example, you may want to reduce the amount of processing you depend on mainframes for over time. Mainframes are very good for transactions, especially real-time transactions.

You won't be able to get an equivalent level of performance for the foreseeable future in the cloud, but reporting doesn't have to coexist with the transaction system. That can be moved to the cloud, and there are quite a few scenarios like that where I could see a good amount of cross usage between on-premises and cloud systems.

Robert: Can you talk a little bit about how you see technologies like SQL Azure and DataSync playing a role?

Srinivasan: In a data warehousing scenario, say you have your ODS and your current data warehouse on premises, and you have decided to move your data warehouses to the cloud. Naturally, you cannot use your REST-based service invocations to move those enormous amounts of data, so first of all, you need appropriate tools. SQL Azure is exciting here, because it provides you lot of native tools for the movement of large data. Of course, Informatica and other third-party vendors also support it.

Whenever you are talking about this kind of a hybrid delivery, you also need to address continuous integration, because it is not just one time movement of data. There you've got multiple choices. You can use queues for a publish-and-subscribe approach, which will give you asynchronous access. You can also use SQL Azure for more of an ETL type of data transformation, so you set some kind of a source and a target. This is especially so if your source database is in SQL Server, because ETL plays a major part in moving the data from one place to another.

You are never going to have the data 'as is' getting transformed from a source to a target. You'll have to use multiple transformations, so you need an automated tool to support movement from an on-premises system to the cloud. That means you need several things to be taken care of.

First of all, you need the adapters. You need to have a common tool that will talk to your on-premises data sources and to the cloud. The second consideration is security, including how your credentials, tokens, access accounts, and passwords are integrated. The third concern is about ensuring a rich set of functionalities. Since you have built it on TSQL, you have a rich set of functionality for transformation. Those tools will definitely have a larger role on a hybrid kind of a platform.

Robert: In the article, "Improving the Business Value of SaaS Applications," you talk about the need to link cloud application authentication back to enterprise directories. How do you see SaaS vendors and enterprises handling this requirement today?

Srinivasan: I may be wrong, but what I have seen in Amazon is that you need to go and register yourself  as a user. Consider how untenable that is in an enterprise of 150,000 or 200,000 users. It's not like all the users are going to access that same application, but you will need consistency.

Suppose you are using a workflow kind of CRM application, where a service request  needs to go to your  manager, and it may need to be escalated further, depending upon the requirements of the individual transaction. You can't set up all that authentication data or your organizational directory data into the cloud, because that would require a huge amount of synching and potential chaos from data mismatches.

Therefore, I need certain things to be tied back to my enterprise directory. I still use a CRM application over the SaaS, but when my customer sends me a complaint that I don't handle in a timely fashion, it should automatically be forwarded on to my manager.

Such a synergy is possible only if the authentication and authorization is tied back to the enterprise directories. Of course, it will put a lot on the security mechanisms, but I don't really feel a need to duplicate the security inside the cloud. It may be good for individual users but not for entire enterprises.

Robert: You spend a lot of time on legacy modernization. Is there a fit for cloud in this regard?

Srinivasan: Yes, in a few different ways. First of all, most legacy modernization is targeted at server consolidation or platform migration. I might move a system from multiple mainframes into a single DB2 system on commodity servers.

Suppose the mainframe or other legacy system is implementing content management or document management. You could certainly move that to Office 365, SharePoint, or a similar content management system on the cloud.

This opportunity is not just about moving one to one. You could establish certain dynamic scaling properties on top of it, for example. Moreover, if you really think about your business processes with a holistic approach, you might investigate what other business processes are redundant on the legacy systems and consider whether those can also be moved to the Cloud. It may also be that you would be able to extend SaaS offerings using PaaS.

Robert: You also wrote an article comparing cloud sourcing versus outsourcing. Can you talk us through your thinking on that?

Srinivasan: I have worked both in the US and in India, so I think I know both business environments to some extent. I am not pointing at any single company, but I believe that, under the current outsourcing model, companies spend most of their time on maintenance, which means that they get relatively less value for their business capability.

Say I, as an enterprise, wanted to create a new report to satisfy a federal compliance requirement. What we currently see in the outsourcing world is that we go to the business unit that requested it and tell them that it would have cost them about $10k, but because their system is on an unsupported platform or an older version of the database, it is going to cost more like $100k.

That has been a relatively common scenario, and while it is not really a problem of outsourcing, outsourcing is clearly not able to solve that situation. You can't really plan in advance for things like that, and with outsourcing, you generally have a multi-year contract, which provides less room for dynamically adding and removing resources.

If it's executed well, cloud gives you a lot of choice that increases your capability to concentrate on business capability versus maintenance.

Security and trust also come into play. When you move your data to your outsourcing provider or let that provider access your databases, there is a very similar set of considerations as when you move the same data to a trusted cloud provider. Therefore, that security aspect remains much the same between outsourcing versus cloud sourcing.

Because there is the potential for a huge benefit in terms of business capability if executed correctly, outsourcing may evolve into a combination of cloud sourcing plus outsourcing plus data center maintenance. The outsourcing companies may also help companies move towards cloud sourcing.

Robert: The cloud is clearly a no brainer for startups because you can pretty much trade your capital expenses for more operating expenses. Do you see any corresponding best-kept secrets in terms of cloud advantages for big enterprises?

Srinivasan: One very interesting concept there is the community cloud perspective. Think about an automobile manufacturer that needs to comply with a large number of separate regulatory bodies from all over the world. It could carry substantial cost to create separate sets of reports from scratch for each of those worldwide governments.

Of course, those reports are largely standard in nature, such as emissions reports that need to be completed by a large number of international car makers. Therefore, each individual car company stands to benefit in terms of the cost and time requirements to create those reports if they band together to create a common SaaS model, which is not available today.

Another aspect of the community cloud concept is an industry exchange for information. For example, certain in-car controls created by big automakers collect and track information about collisions on the highway. If I get into an accident, information about it is automatically passed on to a help desk that handles my automated request for help. Aggregating the support for those information stores among multiple car companies represents substantial cost savings, as well as the ability to get better geographical coverage.

So community cloud can create some valuable synergies among different companies, even though they may be competitors. In that sense, this idea is similar to relationships that are commonplace in other areas. For example, Microsoft, IBM, HP and Oracle collaborate on projects like SOA standards or security standards, even though they are competitors in many market segments.

Robert: Large enterprises traditionally have to spend a significant amount of time thinking about things like business continuity and disaster recovery. How do you think those considerations are impacted by the cloud?

Srinivasan: Cloud provides you things like standby server support, replication, and log shipping out of the box, although there is of course the issue of how trusted they are. Large providers are providing very good SLA commitments, and I have written on the topic of infrastructure-as-a-service acting as a disaster recovery platform.

Disaster recovery and that sort of thing follow naturally from some of the basic tenets of cloud, since things are automatically copied and replicated, for example. And the tools are also available as part of the platform to get back your old data from your replicated versions.

Of course, enterprises have to go through certain studies and proofs of concept there, to figure out how cloud suits their particular scenarios, but for applications that are already hosted on the cloud, there is definitely a simple alternative to conventional backup and disaster recovery.

Robert: People have speculated that server shipments might decline as a result of virtualization innovation and now the cloud, but that hasn't happened. How do you see these technologies impacting the need for servers?

Srinivasan: Workloads are continuing to increase, and in fact, the presence of the cloud helps to accelerate that growth. Data volumes are growing, and what we want to do with them is also. What cloud is really giving you is that you don't have to spend a lot of your time thinking about how much compute power to have in reserve to avoid going over capacity and losing business.

Robert: That concludes my planned questions for today. Is there anything else that you would like to talk about?

Srinivasan: To touch back on the community cloud concept, it is very difficult to achieve SaaS using a common package that suits every enterprise. You need customization, which is going to be available in the form of PaaS. How well you provide integration from your on-premises systems will be a key differentiator as businesses try to focus on generating business value instead of spending their resources on maintenance.

A lot of the new offerings are pretty exciting, but we still need a lot of case studies and proofs of concept to illuminate the path for enterprises to follow in adopting cloud. There are still a lot of issues to work through, including vendor lock-in and many others, but ultimately the effort will be worth it, and businesses on the whole will benefit.

Robert: Thanks for taking the time to talk today, Srini. I really appreciate your insights.

Srinivasan: Thank you.

<Return to section navigation list> 

Cloud Security and Governance

Matthew Weinberger reported Intel Unveils Trusted Cloud Security Software in a 3/10/2011 article for the TalkinCloud blog:

In another dispatch from this week’s Cloud Connect Conference, Intel has taken the lid off Intel Expressway Cloud Access 360, a software tool designed to bring trusted client-to-cloud access to the public cloud. Chip giant Intel’s not exactly known for its software, but the company has been making a lot of noise around building a better cloud of late.

Expressway Cloud Access aims to provide cloud account provisioning, federated single sign-on for SaaS, two-factor authentication using soft tokens, and access control using existing Intel Identity Protection Technology clients. It supports the SAML, OAuth, OpenID and XACML standards for data and application cloud connectivity, according to the announcement.
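The soft-token second factor mentioned above is typically a time-based one-time password (TOTP). As a rough illustration of the general mechanism only (not Intel's actual Expressway Cloud Access API), here is a minimal Python sketch that assumes the third-party pyotp library; the account name and issuer are invented:

```python
# Minimal sketch of a time-based soft-token (TOTP) second factor, the general
# mechanism behind "two-factor authentication using soft tokens".
# Assumes the third-party pyotp library; account name and issuer are invented.
import pyotp

# Shared secret provisioned to the user's soft-token app at enrollment time.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# URI the user would scan into an authenticator app.
print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleGateway"))

# At sign-on, the gateway checks the six-digit code the user types in.
code = totp.now()                  # in real use this comes from the user's device
print("Code accepted:", totp.verify(code))
```

The gateway never stores passwords from the SaaS applications themselves; it only has to keep the per-user shared secret and check the current code at sign-on.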

Developers have already signed on, with Nordic Edge's Opacus leveraging Expressway Cloud Access 360 for its identity-management-as-a-service solution aimed at SMBs. And channel organizations and cloud integrators like L & T InfoTech, Acumen Solutions, and Qontex are apparently using it to move customers from restrictive private clouds to public cloud solutions — while keeping a consistent level of security all the way.

Intel is on the record as stating that a lack of trusted computing is a major problem the cloud will need to overcome in the years to come. But Intel seems to be putting its money where its mouth is with the development of technologies and platforms to help partners address the issue.


<Return to section navigation list> 

Cloud Computing Events

Jo Maitland dated her Ten cloud computing startups in 10 minutes article for 3/11/2011 (tomorrow):

Weekly cloud computing update

If you missed this week's Cloud Connect conference in Santa Clara, don't worry; we've got you covered. The big guns were present as Netflix and eBay told dramatic stories of moving operations to the cloud: the online auction leader discussed dropping 1,200 physical servers in favor of Infrastructure as a Service burst capacity while Netflix cloud architect Adrian Cockcroft laid out how his company rewrote their entire operations for Amazon's cloud.

But what we focused on was the bevy of cloud startups present. Of the 70 companies exhibiting at the conference, we picked a handful of newcomers that we believe will be useful to IT pros ready to make the leap into cloud computing.

We've boiled it all down for you in this week's episode of Cloud Cover TV by interviewing 10 cloud computing startups in 10 minutes. In just 60 seconds, each company's representative had to explain what they do and why enterprise IT departments should care.

For example, Nimbula presented its build-your-own-cloud offering while Skydera pitched its cloud deployment tools and Oxygen Cloud sold its file system for enterprise storage management. Some of them have actually been around for a while, like cloud platform Abiquo. Sure, they've gone through a few rounds of fundraising already, but they still like to pitch.

And others might even be a new breed of cloud company. Drew Bartkiewicz heads CloudInsure, which says it'll insure your cloud services. How, and for what? We don't know, but it's definitely something we hadn't seen before.

You probably haven't heard of these startups just yet, but if their products are as valuable as their pitches made it seem, you will. And if you've already got thoughts on what they're selling, we have a comments section for you over on Feel free to tell us what you think of our selections and whether any of the companies sound interesting to you.

Jo is the Senior Executive Editor of

Full disclosure: I’m a paid contributor to

Alex Williams (@alexwilliams) answered Weekly Poll: Did the Web Win? in a 3/9/2011 post to the ReadWriteCloud blog:

The keynotes yesterday from Cloud Connect provided some highlights that remind us how different the universe can look when viewed from the Web.

It's a far different landscape from the one you see when viewing the world from the enterprise.

On the Web, the apps are built on top of simple, commoditized stacks. Cloudscaling CEO Randy Bias said in his keynote that Amazon Web Services (AWS) is the one enterprise providers need to beat. Cisco's Cloud CTO Lew Tucker said in his keynote that the Web won.

AWS is winning because it picked the winning architecture. Bias says its costs are six to eight times lower than what enterprise cloud providers spend to build their infrastructures, a gap he attributes to the fundamentally different architecture required to support legacy apps. Enterprise providers are building cloud services for legacy apps, and the effort now is to move those apps to the cloud, which increases expenses considerably because it means recreating an environment that was built for on-premises systems.

Moving legacy apps to the cloud creates an online silo; Web apps, by contrast, are built for the Web.

What do you think?

Brenda Michelson reported Cloud Connect 2011: Keynotes, Day 2 on 3/9/2011:

Once again, the day opens with short keynotes.

Starting us off is James Staten of Forrester. James gives us two words to think about with respect to cloudonomics: Down & Off. When the application (resource) isn’t in use, you can turn it off. When you turn it off, you aren’t paying.

James says to write applications in components that are as small as possible. Only turn on what you need, when you need it. James is describing cloud applications that have a zero footprint until a request is made. [Sounds event-driven to me. I like it.]

In the same respect, monitor performance thresholds to determine when an instance can be turned off.  Make “down” and “off” part of your design. 

[Of course, need to be smart in “systems management” with a hyper distributed, small component architecture.  Need to understand/monitor/manage the collective, to complete transactions].
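A minimal sketch of what "down and off" can look like in practice, assuming hypothetical monitoring and provisioning hooks (get_cpu_utilization and stop_instance are stand-ins, not any particular cloud's API):

```python
# Hypothetical sketch of "make down and off part of your design": poll a
# utilization metric and release instances that stay below an idle floor.
# get_cpu_utilization() and stop_instance() are stand-ins for whatever
# monitoring and provisioning APIs a given cloud actually exposes.

IDLE_THRESHOLD = 10.0   # percent CPU
IDLE_PERIODS = 3        # consecutive low readings before an instance is turned off

def scale_down(instances, get_cpu_utilization, stop_instance):
    """Turn off instances whose recent utilization stays under the threshold."""
    for inst in instances:
        readings = [get_cpu_utilization(inst) for _ in range(IDLE_PERIODS)]
        if all(r < IDLE_THRESHOLD for r in readings):
            stop_instance(inst)    # a stopped instance stops accruing compute charges
```

The same threshold logic, run in reverse against rising load, drives the "on only when needed" half of the design.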

Scott Baker, Director of Systems Engineering and Operations, Eventbrite

Scott is buzzing through a history of building, and loving, datacenters. However, he’s learned to love the cloud. A crowd favorite: “To have agile development, you need to have agile operations.” The cloud, according to Scott, provides agile operations.

Oriol Vinyals, PhD Student, UC Berkeley, Microsoft Research Fellow: Building the Overmind: AI and Cloud Computing

Oriol is looking at artificial intelligence, Starcraft and the Cloud.  Starcraft, according to Oriol, is a strategy game.  To win, you need to gather resources, produce units and attack your opponent.  Starcraft, apparently, is a good AI problem to solve.

Challenges: long horizon, concurrent, partially observable (don’t see the complete board), and real-time.

Connections to cloud computing: resources, tasks, opponents & weapons (users & servers).

Oriol shows videos of swarms and defenders responding; you can see parallels to cloud computing and effective management of resources.

Marvin Wheeler, Chief Strategy Officer, Terremark, on the Open Data Center Alliance.  From the website:

“The Open Data Center Alliance is an independent consortium comprised of leading global IT managers in a wide range of vertical segments, who have come together to provide a unified vision for long-term data center requirements.

In support of its mission, the Alliance is developing and delivering an Open Data Center Usage Model Roadmap, which defines Usage Model requirements to resolve key IT challenges and fulfill cloud infrastructure needs into the future. This vendor-agnostic roadmap serves as the foundation for member planning of future data center deployments, and relies on open, interoperable, industry-standard solutions.”

Marty Kagan, President and Co-Founder, Cedexis on Cloud Performance Data

Cloud Bakeoff: Amazon EC2, Google App Engine, Joyent, Rackspace and Windows Azure. Bitcurrent is publishing a study. EC2 East wins for HTTP requests, but you need to factor in request origin for best performance.
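A bake-off like the one Kagan describes boils down to timing the same request against each provider from a given origin. A rough Python sketch of that measurement follows; the endpoint URLs are placeholders, and a real study needs many samples from many geographies:

```python
# Rough sketch of timing an HTTP GET against several cloud endpoints.
# Endpoint URLs are placeholders; a real bake-off needs repeated samples
# from multiple request origins, as the results above note.
import time
import urllib.request

ENDPOINTS = {
    "ec2-east": "http://example-east.example.com/ping",
    "app-engine": "http://example-gae.example.com/ping",
    "azure": "http://example-azure.example.com/ping",
}

def time_request(url, timeout=5):
    """Return the wall-clock time for one GET of the URL, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    for name, url in ENDPOINTS.items():
        try:
            print(f"{name}: {time_request(url):.1f} ms")
        except OSError as err:
            print(f"{name}: unreachable ({err})")
```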

Neal Sample, Vice President, Architecture, Technology Product Management, Developer Program, eBay

Neal walked through models and curves on cloud bursting. eBay has moved from 2,000 servers down to 800 servers, with excess requests bursting to the public cloud. By reducing infrastructure costs, eBay has been able to redirect that investment into business intelligence, providing additional value to the business.
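The bursting model Sample describes is at heart a routing decision: serve work from the fixed on-premises pool until it is saturated, then overflow to rented public-cloud capacity. A hypothetical sketch of that dispatch logic (the capacity figure and the handler functions are invented for illustration):

```python
# Hypothetical sketch of cloud bursting: work overflows to public-cloud
# capacity only when the fixed on-premises pool is saturated.
# The capacity figure and the two handlers are invented for illustration.

ON_PREM_CAPACITY = 800   # illustrative ceiling on concurrent on-premises work

def dispatch(request, in_flight_on_prem, handle_on_prem, handle_in_cloud):
    """Route on-premises while there is headroom; otherwise burst to the cloud."""
    if in_flight_on_prem < ON_PREM_CAPACITY:
        return handle_on_prem(request)
    return handle_in_cloud(request)   # excess demand bursts to the public cloud
```

The economics follow from the last line: the baseline pool is sized for typical load, and peak capacity is paid for only while the peak lasts.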

Related posts:

  1. @ Cloud Connect: Tuesday Morning 10 Minute Keynotes
  2. @ Cloud Connect 2011: Colin Clark introduces Cloud Event Processing
  3. @ Cloud Connect: Opening Keynotes

Cade Metz asserted "Redmond makes like Redmond" as the deck for his Microsoft: 'No one cares about Google's dev cloud' post of 3/8/2011 about Cloud Connect for The Register from Santa Clara, CA:


Cloud Connect Microsoft developer and platform general manager Matt Thompson has claimed that among startups across the United States, interest in Google's App Engine is "almost nonexistent" and that only a "tiny number" have an eye on Salesforce's Force.com. Citing a recent Microsoft survey, Thompson said that if startups are interested in building and deploying apps on a so-called public cloud, they're interested in Amazon Web Services and, yes, Microsoft Azure.

image"Salesforce has built a huge business. [It's] very successful," he said. "But if you go to the leading startups who are building apps today, Salesforce is an edge case. We see [interest in] Amazon [EC2], host-your-own setups, and Azure. Those are the top three."

Asked about Microsoft's claims, Salesforce told us that there are currently 340,000 members of the developer community, including startups such as Appirio, Financial Force, DocuSign, ServiceMax, GreatVines, and Model Metrics. Google went even further. "Every week, more than 150,000 applications, including many developed by start-ups such as Simperium (developer of Simplenote), Farmigo, and others, are active on Google App Engine," a Google spokeswoman said.

"These applications have been developed by the more than 100,000 developers at companies large and small who are working on apps powered by Google App Engine. Everyday, the Google App Engine platform powers more than one billion pageviews across all App Engine applications."

The company also cited a survey of 150 businesses, published today, that ranked Google App Engine as the second leading cloud offering behind Amazon Web Services.

Microsoft's Thompson took the stage for a panel discussion this morning at the annual Cloud Connect conference in Santa Clara, California, and in typical Microsoft fashion, he wasn't shy about taking aim at the competition. At one point, he even made a crack about people who wear jackets – with fellow panelist Mathew Lodge, senior director of cloud services at VMware, sitting beside him in a jacket. "I'm a geek. That's what I do," Thompson said, when the talk turned to the developer tools that run atop Azure. "That's why I'm not wearing a jacket."

Azure is Microsoft's so-called platform cloud, a service that lets developers build and host applications via the web. Google's App Engine and Force.com are similar services. Unlike Amazon's EC2 "infrastructure cloud", these platform clouds let you build and host applications without juggling virtual-machine instances and other raw infrastructure resources. EC2 gives you access to those raw resources, providing a bit more flexibility than the platform clouds – but potentially more hassle as well.

Currently, Azure is a single public service hosted by Microsoft. But the company is working with partners to build appliances that will allow third parties to build their own Azure services. Asked if it was difficult to balance a public cloud with Microsoft's traditional business, where it's selling software for private hosting situations, Thompson said "no". Naturally.

"We haven't see that conflict yet," said Thompson, who runs Microsoft developer relations in the US. "We're finding solutions for each [individual customer]. Could it happen? Possibly. But so far, not yet."

Just as naturally, Thompson said that private clouds will be "very important going forward". This is in stark contrast to an outfit like Amazon or Salesforce, which believes that the world should move to a model in which all applications are hosted on remote servers. For Amazon CTO Werner Vogels and Salesforce boss Marc Benioff, private cloud is an oxymoron.

"You run into services companies and you ask about the public cloud, and they say 'Why would I take the most important asset I have and put it in a place where I no longer manage it directly?'," Thompson explained. That asset, he said, is the customer list held by these companies.

Thompson did not acknowledge that Microsoft's Azure appliances are well behind schedule. They were supposed to arrive before the end of last year, but no one – neither Microsoft nor its partners HP, Dell, and Fujitsu – will say when they'll arrive.

Update: This story has been updated with comment from Google and Salesforce.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Dana Gardner asserted it “Offers enterprise-class open source data virtualization” in a deck for his Red Hat Introduces JBoss Enterprise SOA Platform 5.1 post of 3/10/2011 to his Dana Gardner BriefingsDirect blog:

Red Hat has announced the availability of JBoss Enterprise SOA Platform 5.1, which includes new extensions for data services integration.

JBoss Enterprise Data Services Platform 5.1, a superset of JBoss Enterprise SOA Platform 5.1, is an open source data virtualization and integration platform that includes tools to create data services out of multiple data stores with different formats, presenting information to applications and business processes in an easy-to-use service. These data services become reusable assets across the enterprise.

We're beginning to see a real marketplace for open source-based integration and middleware, and in many ways the open source versions are advancing the value and variety of these services beyond where the commercial products can quickly tread. The advantages of community development and open source sharing really shine when multiple and fast-adapting integrations are involved.

What's more, as cloud and SaaS services become more common, ways of integrating data and applications assets -- regardless of origins -- will need to keep pace. Standardization and inclusiveness of integration points and types may be much better served by a community approach, and open source licenses, than waiting for a commercial product upgrade, or costly custom integrations.

I also see enterprises, SMBs, ISVs and cloud providers working to elevate the concept of "applications" more to the business processes level. And that means that decomposing and re-composing and orchestrating of services -- dare I say via SOA principles -- becomes essential, again, regardless of services, data and assets origins.

Lastly, the interest and value in Big Data benefits is also roiling the landscape. The integration of data then becomes tactical, strategic, imperative and at the heart of what drives an agile and instant-on enterprise.

“Being able to integrate and synchronize useful information out of a wide range of disparate data sources remains a serious stumbling block to the enterprise,” said Craig Muzilla, vice president and general manager, Middleware Business Unit at Raleigh, N.C.-based Red Hat. “JBoss Enterprise Data Services Platform 5.1 is a flexible, standards-based integration and data virtualization solution built on JBoss Enterprise SOA Platform that delivers more efficient and cost-effective application and data integration techniques, allowing enterprises to more fully realize the value of their data.”

All businesses draw upon many different data sources and formats to run their applications. In many cases these data sources are hardwired into applications through data access frameworks that reduce agility and make control and compliance difficult. This data architecture counters the agility and cost-savings benefits delivered by service-oriented architectures (SOA) by forcing redundant data silos for each application.

Multiple data stores
Data Services Platform 5.1 aims to address these problems by virtualizing multiple data stores simultaneously, delivering data services consumable by multiple applications and business processes. By leveraging the integrated JBoss Enterprise SOA Platform 5.1, the information delivered using data virtualization can more easily be integrated into the business via the enterprise service bus (ESB) included with the platform.
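In general terms, this kind of data virtualization puts a single query interface in front of several differently shaped back-end stores and merges their results. A toy Python sketch of the idea follows; the two in-memory dictionaries stand in for, say, a relational database and a billing web service, and nothing here uses the actual JBoss APIs:

```python
# Toy sketch of data virtualization: one "data service" facade that federates
# records from two differently shaped back-end stores. The in-memory dicts
# stand in for, e.g., a relational database and a REST billing service; this
# does not use the JBoss Enterprise Data Services APIs.

CRM_DB = {"1001": {"cust_name": "Contoso", "region": "EMEA"}}
BILLING_SVC = {"1001": {"customerName": "Contoso", "openBalance": 2500.0}}

def customer_data_service(customer_id):
    """Present one canonical record regardless of where each field lives."""
    crm = CRM_DB.get(customer_id, {})
    billing = BILLING_SVC.get(customer_id, {})
    return {
        "id": customer_id,
        "name": crm.get("cust_name") or billing.get("customerName"),
        "region": crm.get("region"),
        "open_balance": billing.get("openBalance", 0.0),
    }

print(customer_data_service("1001"))
```

Because every consuming application calls the same facade, the underlying stores can be moved or reshaped without touching the applications, which is the reusability claim made above.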

JBoss Enterprise SOA Platform 5.1 includes:

  • Apache CXF web services stack
  • JBoss Developer Studio 4.0, which features updated SOA tooling for ESB and data virtualization
  • A technology preview of WS-BPEL, which delivers service orchestration
  • A technology preview of Apache Camel Gateway, which is a popular enterprise integration pattern framework that brings an expanded set of adapters to JBoss Enterprise SOA Platform
  • Updated certifications -- Red Hat Enterprise Linux 6, Windows 2008, and IBM JDK, among others


JBoss Enterprise SOA Platform follows the JBoss Open Choice strategy of offering a choice of integration architectures, messaging platforms, and deployment options. Also, both JBoss Enterprise SOA Platform 5.1 and JBoss Enterprise Data Services Platform 5.1 are designed to leverage past and present solutions, such as SOA integration, through the ESB, event-driven architecture (EDA) and data virtualization, while building a foundation to support future integration paradigms, such as integrating cloud, hybrid, and on-premise data, services and applications.

Along with JBoss Enterprise SOA Platform 5.1, Red Hat is offering a new two-day training course, JBoss Enterprise SOA Platform – ESB Implementation, which is focused on developing and deploying ESB providers and services using JBoss Developer Studio and JBoss Enterprise SOA Platform.


Sean Michael Kerner asserted “The OpenStack cloud computing platform, backed by Rackspace, NASA, Cisco, Dell and others, ramps up commercial support options for cloud and enterprise users” as a deck for his Open Source Cloud Computing Platform OpenStack Goes Commercial article of 3/9/2011 for

The OpenStack open source cloud computing project isn't just for hobbyists and bleeding-edge developers anymore.

OpenStack began as a joint effort between Rackspace and NASA in July of 2010. The project has since expanded to over 40 partners and is now gearing up commercial support services in an effort to help grow production deployments.

"Since we started OpenStack we have seen a ton of momentum, but one thing we've heard is that while people love open source software, they really want some to stand behind it," Mark Collier, vice president of marketing and business development at Rackspace told

Another item that is sometimes a concern with open source software adoption is legal indemnification as part of a services or support engagement. Collier was unsure as to whether or not Rackspace would be providing any form of indemnification for OpenStack.

That said, Rackspace is only one part of the commercial services ecosystem surrounding OpenStack.

"From a software perspective we are not creating software distributions, we're working with partners like Canonical and Citrix who will be making software distributions," Collier said. "What we're doing is more on the services side, in terms of helping people get setup with operational support, as opposed to traditional software support which will be handled by the people building the software distributions."

Canonical is the lead commercial sponsor behind the Ubuntu Linux distribution and joined OpenStack earlier this year.

Collier sees demand for OpenStack coming from both service providers that are looking to build their own clouds as well as enterprises looking to build private clouds. Part of the go-to-market strategy for OpenStack commercialization includes leveraging services and hardware from Dell to help enable cloud or enterprise deployments. Collier explained that Dell will be providing a package of hardware, networking and the services needed to stand up an OpenStack deployment.

From a management perspective, Collier noted that partners are also building out new tools. One of the new tools comes from Opscode, which is integrating its Chef configuration management framework with OpenStack.

"There is also an open source control panel as part of the project," Collier said. "Customers will have a lot of choice and we aren't saying it has to 100 percent open source or proprietary, users will have their choice of options."

Moving forward, from the core OpenStack project perspective, Collier noted that there are discussions around providing some kind of long-term supported releases of OpenStack. The OpenStack Bexar release debuted in February and the Cactus release is set to debut in April of this year.

Sean is a senior editor at, the news service of

<Return to section navigation list>