Wednesday, August 10, 2011

Windows Azure and Cloud Computing Posts for 8/10/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list>

SQL Azure Database and Reporting

No significant articles today.

<Return to section navigation list>

MarketPlace DataMarket, AppsMarket and OData

Jonathan Rozenblit (@jrozenblit) described Your SaaS Applications and the Windows Azure Marketplace in an 8/10/2011 post:

There has been a lot of buzz about the Windows Azure Marketplace since the announcements at the Worldwide Partner Conference in Los Angeles that the Windows Azure Marketplace has been expanded to feature and sell not only data subscriptions, but application subscriptions as well.

Effectively, you can now use the Marketplace to sell your Windows Azure applications, services, and building block components to a global online market. When your application is published to the Marketplace, you instantly get access to new customers, new markets, and new revenue opportunities, all backed by Microsoft to ensure quality and service.

Read more about the benefits of the Windows Azure Marketplace >>

Getting Started

To get started selling subscriptions to your Windows Azure-based SaaS (software-as-a-service) application in the Marketplace, your application will need to know how to handle the events of the subscription lifecycle, such as subscribing, registering, accessing, using, and unsubscribing. With the August 2011 refresh of the Windows Azure Platform Training Kit (Download, Online), a new hands-on lab has been added to walk you through the changes you’ll need to make in order to be able to publish to and interact with the Marketplace.


The lab takes an application through the full provisioning process across five exercises. First you’ll learn about SaaS subscription scenarios. You’ll then modify an existing application to support Windows Azure Marketplace subscriptions. Once that’s done, you’ll go through the registration of the application in the Marketplace and test it using the Marketplace’s Dev Playground. When your application can support new subscriptions, you’ll then modify it again to support unsubscribing. With those changes in place, the application is ready to be published to the Marketplace. You’ll walk through how to do that as well.

Download the Windows Azure Platform Training Kit >>
Go through the Introduction to Windows Azure Marketplace for Applications Hands on Lab >>

When Your Application is Ready

After completing the hands-on lab, you’ll have everything you need to make the changes to your application. However, before you start making changes, head over to the Windows Azure Marketplace Publishing page to get the paperwork you need to get involved with the Marketplace. Take care of that first and, when everything is done on that end, go ahead and make the changes to your application, test them, and publish your application.

Publish your application to the Marketplace >>

This article also appears on Musings of a Developer Evangelist from Microsoft Canada.

David Ebbo (@davidebbo) explained How an OData quirk slowed down NuGet, and how we fixed it in an 8/9/2011 post:

Update: my terminology in this post is not quite correct. Whenever I refer to the server part of OData, I really mean to say ‘WCF Data Services’. OData is the protocol, and WCF Data Services is the specific implementation. So the ‘quirk’ we ran into is a WCF Data Services thing and not an OData thing.

As you may know, NuGet uses an OData feed for its packages. Whenever you install packages, or search for packages from Visual Studio, it goes through this feed, which you can find at

If you’re a NuGet user, you may also have noticed that the perf of NuGet searches from Visual Studio had been quite bad in recent months. You’d go to the NuGet package dialog and type a search string, and it would take 10 or more seconds to give you results. Ouch! :(

It turns out that the perf issue was due to a nasty OData quirk that we’ve since worked around, and I thought it might be interesting to share this with others. I’m partly doing this as you might run into this yourself if you use OData, partly to poke a little fun at OData, and also to poke a little fun at ourselves, since we really should have caught that from day one.

A whole stack of query abstractions

When you make an OData query from a .NET client, you go through a whole bunch of abstraction layers before a SQL query is made. Let’s say for example that you’re looking for packages that have the string ‘T4MVC’ in their description. It would roughly go through these stages:

First, in your .NET client, the OData client library would let you write something like:

var packages = context.Packages.Where(p => p.Description.Contains("T4MVC"));

Second, this code gets translated by the OData client LINQ provider into a URL with a query string that looks like this:
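The query string itself didn’t survive the copy above; with WCF Data Services, a Contains-style filter is expressed using the substringof function, roughly along these lines (the host and path are omitted; this is an illustration, not copied from the original post):

```
?$filter=substringof('T4MVC',Description)
```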


Third, this is processed by the OData server, which turns it back into a LINQ expression, which in theory will look similar to what you had on the client, which was:

var packages = context.Packages.Where(
    p => p.Description.Contains("T4MVC"));

Of course, the ‘context’ here is a very different beast from what it was in step 1, but from a LINQ expression tree point of view, there shouldn’t be much difference.

And finally, the Entity Framework LINQ provider turns this into a SQL query, with a WHERE clause that looks something like:

WHERE Description LIKE N'%T4MVC%'

And then it executes nice and fast (assuming a proper index), and all is well.

When the abstractions break down

Unfortunately, that clean sequence was not going as planned, resulting in much less efficient queries, which started to get really slow as our package count started to get large (and we’re already at over 7000 as of writing this post!).

So which of these steps went wrong? For us, it turned out to be the third one, where the OData server code was creating a very complex LINQ expression.

To understand why, let’s first briefly discuss OData providers. When you write an OData DataService<T>, you actually have the choice between three types of providers:

  1. An Entity Framework provider which works directly over an EF ObjectContext
  2. A reflection provider which works on an arbitrary context that exposes entity sets that are not tied to a specific database technology
  3. A custom provider, which is something so hard to write that almost no one has ever done it (maybe a slight exaggeration, but not by much!)

Given that we’re using EF, #1 seems like the obvious choice. Unfortunately, the EF provider is very inflexible, as it doesn’t let you use any calculated properties on your entities. In other words, it only works if the only thing you want on your OData feed are fields that come straight from the database. So for most non-trivial apps, it’s not a very usable option, and it wasn’t for us (we have some calculated fields like ReportAbuseUrl).

So we ended up using the reflection provider, and wrapping the EF objects with our own objects which exposed whatever we wanted.

Functionally, this worked great, but what we didn’t realize is that the use of the reflection provider causes OData to switch to a different LINQ expression tree generator which does ‘crazy’ things. Specifically, it makes the bad assumption that when you use the reflection provider, you must be using LINQ to Objects.

So it protects you by using some ‘null propagation’ logic which makes sure that when you write p.Description.Contains("T4MVC"), it won’t blow up if the Description is ever null. It does this by inserting some conditional checks in the LINQ expression. This is very useful if you are in fact using LINQ to Objects, but it’s a perf disaster if you are using LINQ to EF!
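To make the null propagation concrete, the rewrite conceptually works along these lines (a simplified illustration of the idea, not the exact expression tree the provider builds):

```csharp
// What the client wrote:
var packages = context.Packages.Where(
    p => p.Description.Contains("T4MVC"));

// Roughly what the reflection provider produces, guarding against a
// null Description as if the query were running under LINQ to Objects:
var guarded = context.Packages.Where(
    p => (p.Description == null
            ? false
            : p.Description.Contains("T4MVC")));
```

Under LINQ to Objects the guard is harmless, but EF has to translate that conditional into SQL, which is where the slow CASE-based WHERE clause below comes from.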

Now, when translated into SQL, what should have been the simple WHERE clause above was in fact becoming something like this:


WHERE 1 = (CASE
    WHEN ( Description LIKE N'%T4MVC%' ) THEN cast(1 as bit)
    WHEN ( NOT ( Description LIKE N'%T4MVC%' ) ) THEN cast(0 as bit)
END)

which was running significantly slower. Note that in reality, we’re querying for multiple fields at once, so the final SQL statement ended up being much scarier than this. I’m just using this simple case for illustration. And to make things worse, we learned that there was no way of turning off this behavior. What to do?

The solution: use some LINQ ninja skills to restore order

LINQ ninja David Fowler found this an irresistible challenge, and came up with a fix that is both crazy and brilliant: he wrote a custom LINQ provider that analyzes the expression tree generated by the OData LINQ provider, searches for the unwanted conditional null-check pattern, and eliminates it before the expression gets handed off to the EF LINQ provider.

If you want to see the details of his fix, it’s all on github, split into two projects:

QueryInterceptor ( is a helper library that makes it easier to write this type of query modification code.

ODataNullPropagationVisitor ( builds on QueryInterceptor and specifically targets the removal of the unwanted null check.

Naturally, these are available via NuGet (with the second depending on the first). After importing those packages, all that’s left to do is add one small call to your IQueryable<T>, e.g.

query = query.WithoutNullPropagation();

and your expression trees will be given a gardener’s special pruning :)
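To give a flavor of what such a pruning visitor does, here is a simplified sketch (this is illustrative only, not the actual ODataNullPropagationVisitor code, which handles more patterns):

```csharp
using System.Linq.Expressions;

// Simplified idea: walk the expression tree and strip the conditional
// null guards that the reflection provider injects, keeping only the
// real predicate so EF can translate it to a simple WHERE clause.
public class NullGuardStripper : ExpressionVisitor
{
    protected override Expression VisitConditional(ConditionalExpression node)
    {
        // Look for the pattern: (x == null) ? <fallback> : <realPredicate>
        var test = node.Test as BinaryExpression;
        if (test != null && test.NodeType == ExpressionType.Equal)
        {
            var constant = test.Right as ConstantExpression;
            if (constant != null && constant.Value == null)
            {
                // Drop the guard and keep visiting the real predicate.
                return Visit(node.IfFalse);
            }
        }
        return base.VisitConditional(node);
    }
}
```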

Lesson learned: always check your SQL queries

Some might conclude that all those query abstractions are just too dangerous, and we should just be writing raw SQL instead, where this never would have happened. But I think that would be way too drastic, and I certainly wouldn’t stop using abstractions because of this issue.

However, the wisdom we learned is that no matter what query abstractions you’re using (LINQ, OData, or other), you should always run a SQL query analyzer on your app to see what SQL statements get run in the end. If you see any queries that don’t completely make sense based on what your app is doing, get to the bottom of them and address it!

Of course, this is really ‘obvious’ advice, and the fact that we never did that is certainly a bit embarrassing. Part of the problem is that our tiny NuGet team is mostly focused on the NuGet client, and that the server hasn’t been getting enough love. But yes, these are just bad excuses, and in the end, we messed that one up. But now it’s fixed :)

<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Avkash Chauhan explained How to setup IIS Application Pool for running Windows Azure App Fabric Application to pass Proxy Server settings in an 8/10/2011 post:

Recently I was working on an App Fabric Application which connected to the App Fabric Service Bus while hosted inside IIS. The application worked fine with the App Fabric Service Bus when my machine was on a network with no proxy. However, when my machine was moved to a network behind a proxy server, the same application returned the following error:

Unable to reach via TCP (9351, 9352) or HTTP (80, 443) 
Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.ServiceModel.CommunicationException: Unable to reach via TCP (9351, 9352) or HTTP (80, 443)

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. …

I tried the App Fabric Server sample application iishostedcalculatorservice, which resulted in the same error.

I tried to modify web.config with http/nettcp bindings, however none of the binding settings worked with the proxy. I used another App Fabric SDK sample, the ECHO app, which worked with both http and tcp settings. This indicates that a command line application could connect to the Service Bus, while the application hosted in IIS & Application Pool could not. So most of the attention went towards the Application Pool.

After some digging here and there, I found that the application was running inside an IIS Application Pool (based on .NET 4.0), however the Application Pool was not able to use access credentials properly to pass the proxy server, which ultimately caused this error. I was able to solve this problem using the following steps:

  1. Add a new AppPool based on .NET 4.0. Be sure to assign proper credentials to it. If the machine is domain joined then please use your Active Directory credentials, and if the machine is not domain joined then use local machine admin credentials (servername\username + Pwd)
  2. After that, set up your IIS application to use the newly created Application Pool
  3. Please don't forget to reset IIS, otherwise the new settings will not take effect
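For those who prefer the command line, the same setup can be sketched with appcmd (the pool name, site path, and credentials below are placeholders, not values from the article):

```bat
REM Create a new .NET 4.0 application pool (name is illustrative)
%windir%\system32\inetsrv\appcmd add apppool /name:"AppFabricPool" /managedRuntimeVersion:v4.0

REM Run the pool under an account whose credentials can pass the proxy
%windir%\system32\inetsrv\appcmd set apppool "AppFabricPool" /processModel.identityType:SpecificUser /processModel.userName:"DOMAIN\user" /processModel.password:"password"

REM Point the application at the new pool, then reset IIS
%windir%\system32\inetsrv\appcmd set app "Default Web Site/MyApp" /applicationPool:"AppFabricPool"
iisreset
```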

After these changes, the application worked without any further hitch.

<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team (@WindowsAzure) reported JUST ANNOUNCED: Intuit and Microsoft Expand Partnership with Latest Windows Azure SDK for Intuit Partner Platform in an 8/10/2011 post:

Opportunities for developers in the cloud continue to multiply. Today Intuit provided yet another great example of this trend with the announcement that the company is increasing its investment to more tightly integrate the Intuit Partner Platform with Windows Azure. This will provide developers and channel partners the tools to deliver solutions that reach the millions of people that use QuickBooks and other Intuit financial software.

This move creates a few really exciting opportunities for developers.

  • Simplified Process: If you are already writing QuickBooks based solutions, targeting the cloud just got easier. Intuit will soon release version 2.0 of Intuit’s Windows Azure SDK for Intuit Partner Platform, providing support for one-click publishing & authentication of federated IPP and Intuit Anywhere apps hosted on Windows Azure.
  • Reduced costs: Most developers want to focus on building innovative apps, rather than dealing with the infrastructure costs that prohibit these apps from reaching market quickly. Using Windows Azure means there are no up-front expenses and developers pay only for the resources they use.
  • Expanded reach: For small business developers, the ability to expand your reach is just as important as the ability to develop a quality app. Developers looking to expand their solutions to reach small and medium businesses have a well-established channel along with a terrific and easy way to build the apps that they envision for their users. Intuit’s 26 million and growing SMB customers represent a great opportunity to provide value-added services.

Whether you are already working with Intuit, or you are a software developer looking for new markets, the expanded partnership between Microsoft and Intuit is worth exploring. Check out Intuit’s Alex Chriss’ blog for a deeper dive into the investments Microsoft and Intuit are making together.

Steve Marx (@smarx) explained Redirecting to HTTPS in Windows Azure: Two Methods in an 8/10/2011 post:

Using HTTPS is a good idea. It means communication between the browser and your web application is encrypted and thus safe from eavesdropping. The only reason I rarely use HTTPS in my applications is that I don’t want to buy an SSL certificate.

If you do provide HTTPS support for your application in Windows Azure, you may want to redirect users to the HTTPS version (even if they came in via HTTP). In this blog post, I’ll show you two ways to accomplish this redirect. You can test the second method at (Note that I’m using a self-signed certificate, so you’ll have to click through a browser warning.)

ASP.NET MVC RequireHttps Attribute

Since version 2, ASP.NET MVC includes a RequireHttps attribute that can be used to decorate a controller or a specific action. If you’re using ASP.NET MVC 3, this is by far the simplest way to force an HTTPS redirect. Usage is trivial:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }

    [RequireHttps]
    public ActionResult Secure()
    {
        return View();
    }
}
This will automatically redirect users to the HTTPS version of your site when they try to browse to the “Secure” controller action. Note that the redirect will always use port 443 (the default HTTPS port), so if you’re testing locally under the compute emulator and you’re getting an HTTPS port other than 443 (because, for example, that port is already taken by another web site), this won’t work well. I don’t know of a good workaround for this. Just make sure port 443 is available.

IIS URL Rewrite

If you’re not using ASP.NET MVC but are using IIS (as in a web role), you can use the URL Rewrite module, which is installed by default. Using this module to redirect to HTTPS is fairly trivial and documented elsewhere, but getting everything to work properly in the local compute emulator is non-trivial. In particular, most examples you’ll find assume that HTTP traffic will always be on port 80, which isn’t always the case when testing under the compute emulator. Here’s a rule that seems to work locally and in the cloud:

      <rule name="Redirect to HTTPS">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          <add input="{URL}" pattern="/$" negate="true" />
          <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        </conditions>
        <action type="Redirect" url="https://{SERVER_NAME}/{R:1}" redirectType="SeeOther" />
      </rule>
I’ve put in two exceptions (the negate="true" lines). The first is for the root of the domain. Here I just wanted to show how to add an exception for certain URLs (like maybe a login screen). If your entire site supports HTTPS, there’s little reason for this. The second exception is for actual files (as opposed to controller actions). This allows JavaScript and CSS files to be loaded even over HTTP, making the first page work.

The standard rule you’ll see uses something like url="https://{HTTP_HOST}/{R:1}". The reason I didn’t do that is because of local ports in the compute emulator. {HTTP_HOST} actually includes the port, so this will result in rewriting to URLs like…, which won’t work at all. (Note the double port.)

[UPDATE 10:30am PDT] I had a more complicated server rule here, but @tpettijohn pointed me to his post about HTTPS rewriting that had a better rule using {SERVER_NAME}, a variable I wasn’t aware of.

The trick I used is to apply a regular expression to the {HTTP_HOST} variable and extract just the host name itself (without the port). This is then captured and available in the {C:1} variable in the rewrite URL. Doing it this way mimics the behavior you see from the RequireHttps attribute in ASP.NET MVC, where the port is stripped and HTTPS traffic always goes to port 443 (the default).

The above URL Rewrite rule is what’s in use at

[UPDATE 10am PDT] It may be obvious to some, but I do want to point out that using either of the above methods in Windows Azure means you’ll need to first declare an HTTPS endpoint and supply a certificate (configured locally by thumbprint and uploaded via the Windows Azure portal to make available in the cloud). See the MSDN topic “How to Configure an SSL Certificate on an HTTPS Endpoint.”
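As a reminder of what that declaration involves, here is a minimal sketch of the relevant ServiceDefinition.csdef fragment (the role, endpoint, and certificate names are placeholders, not taken from the post; the certificate thumbprint itself goes in the service configuration):

```xml
<WebRole name="WebRole1">
  <Endpoints>
    <!-- HTTPS endpoint bound to a certificate declared below -->
    <InputEndpoint name="HttpsIn" protocol="https" port="443" certificate="SslCert" />
  </Endpoints>
  <Certificates>
    <!-- Looked up by thumbprint from ServiceConfiguration.cscfg -->
    <Certificate name="SslCert" storeLocation="LocalMachine" storeName="My" />
  </Certificates>
</WebRole>
```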

AppDynamics posted Sign up now for your AppDynamics .NET free trial! on 8/10/2011:

We'd like to give you the opportunity to try out AppDynamics' .NET solution for yourself. Please fill out the form below and a sales rep will be in touch with you in one business day to set up your SaaS account and begin your 30-day free trial.


AppMan reported .NET Application Performance delivered with AppDynamics 3.3 in an 8/9/2011 post to the AppDynamics blog:

It’s official: AppDynamics support for Microsoft .NET and Windows Azure is finally here! We’ve got the same Kick Ass Product with the same Secret Sauce–but now it sports a shiny new CLR agent. So whether your apps are Java, .NET or hybrid, with AppDynamics you have the best of both worlds when it comes to managing application performance.

We thought it was only fair to share our secret sauce and love with the Microsoft community, given that 40,000+ users of the Java community have been enjoying it for over 18 months. Our mission is to simplify the way organizations manage their agile, complex, and distributed applications. For .NET, this means that AppDynamics supports the latest and greatest technologies from Microsoft, including their new PaaS platform Windows Azure.

So, what does this mean for organizations with .NET or Azure apps? Let me summarize:

#1 You get to visualize in real-time what your .NET application actually looks like, along with its health and performance across any distributed production environment. It’s the 50,000 foot view that shows how your application and your business performs in your data center or Windows Azure.

#2 Ability to track all business transactions that flow through your .NET application. This gives you insight into business activity, health, and impact in the event that a slowdown or problem occurs in your production environment. This unique context and visibility helps you troubleshoot through the eyes of the business, so you can see their pain instantly in production and resolve it in minutes. We auto-discover and map every business transaction automatically–so don’t worry about configuration. We’ve got that taken care of.

#3 Deep diagnostic information on how your business transactions actually execute through your CLRs and/or JVMs (if you’ve got a hybrid app). This means complete call stacks of code execution with latency breakdowns across all your namespaces, classes and methods, which your business transactions invoke. You get maximum visibility in production with zero configuration, allowing you to perform root cause analysis in minutes.

#4 Ability to plot, correlate, and trend any CLR or OS metric over time–whether it’s logical thread counts, garbage collection time, or simply how much CPU your application CLR is burning. We let you report and analyze all this so you understand your CLR run-time and OS system resources.

Don’t believe us? Sign up for our free 30-day trial and we’ll provision you a SaaS login. You can then download and install our lightweight agents and see for yourself just how easy it can be!

As well as our .NET support, we’ve also crammed in some great innovation into the 3.3 release.

Real-Time JVM MBean Viewer:

In addition to trending standard JMX metrics from the JVM, users can now discover and trend any MBean attributes on the fly for short term analysis in real-time. Our new UI dialogue allows the user to browse through hundreds of available metrics which are automatically discovered and reported at the touch of a button. If the user wishes to convert any MBean attribute into a standard JMX metric they can just click “Create Metric” and AppDynamics will collect and report that metric as standard in the JMX Metrics viewer.

Search Business Transactions by their content/payload:

For example, you might have launched a new product on your application or website and need to understand its performance by looking at all business transactions that interact with that product. With AppDynamics v3.3 users can now search business transactions by any transaction payload. For example, the below screenshot shows how a user can search for all business transactions that relate to the book “Harry Potter”.

Additional Platform Support:

  • Auto-discovery and mapping of LDAP, SAP and JavaMail tiers to business transaction flows for increased visibility.
  • MongoDB support allowing users to see BSON queries and associated latency for calls made from Java applications.
  • Enhanced support for WebSphere on z/OS with automatic JVM naming pools to help customers identify and manage short-lived and dynamic JVM run-times.

All in all, another great release packed full of innovation from the AppDynamics team. Stay tuned over the next few weeks for more information on specific 3.3 features.

Janet Tu reported Nokia's plans for Windows Phone win in North America in an 8/10/2011 post to the Seattle Times Business/Technology blog:

A lot is riding on Nokia's partnership with Microsoft as the Finnish phone-manufacturing company produces its first Windows Phone, expected later this year. Nokia chose Windows Phone as the OS for all its smartphones going into the future but both it and Microsoft face big challenges going up against Android and Apple phones.

Matt Rosoff at Business Insider has an interview today with Chris Weber, who used to be with Microsoft and now runs Nokia's North American sales group. Weber talked about his plans for how Nokia and Windows Phone can win.

Nokia plans to work with all the major mobile phone operators in North America (which it used to not do) to sell subsidized phones, Weber said in the interview. Nokia also plans to release a variety of Windows Phones in various forms and price points, and make sure they are prominently displayed and demonstrated by clerks in retail stores - something that has not happened in the past and may have hurt Windows Phone sales.

Nokia will also apparently make a push for Windows Phone among business users, since it will be integrated with Office 365 (Microsoft's new cloud service for email, Word, Excel, PowerPoint and unified communication), according to the interview.

Of course, a primary Windows Phone 7 connection will be with Windows Azure storage and SQL Azure.

Rob Tiffany (@RobTiffany) described Consumerization of IT Collides with MEAP: Windows > Cloud in an 8/9/2011 post:

In my Consumerization of IT Collides with MEAP article last week, I described how to connect a Windows 7 device to Microsoft’s On-Premises servers. Whether you’re talking about a Windows 7 tablet or laptop, I showed that you can follow the Gartner MEAP Critical Capabilities to integrate with our stack in a consistent manner. Remember, the ability to support multiple mobile apps across multiple mobile platforms, using the same software stack, is a key tenet of MEAP. It’s all about avoiding point solutions.

If you need a refresher on the Gartner MEAP Critical Capabilities, check out:

In this week’s scenario, I’ll use the picture below to illustrate how Mobile versions of Windows 7 in the form of slates, laptops, and tablets utilize some or all of Gartner’s Critical Capabilities to connect to Microsoft’s Cloud infrastructure:


As you can see from the picture above:

  1. For the Management Tools Critical Capability, Windows 7 uses Windows Intune for Cloud-based device management and software distribution.
  2. For both the Client and Server Integrated Development Environment (IDE) and Multichannel Tool Critical Capability, Windows 7 uses Visual Studio. The Windows Azure SDK plugs into Visual Studio and provides developers with everything they need to build Cloud applications. It even includes a Cloud emulator to simulate all aspects of Windows Azure on their development computer.
  3. For the cross-platform Application Client Runtime Critical Capability, Windows 7 uses .NET (Silverlight/WPF/WinForms) for thick clients. For thin clients, it uses Internet Explorer 9 to provide HTML5 + CSS3 + ECMAScript5 capabilities. Offline storage is important to keep potentially disconnected mobile clients working and this is facilitated by SQL Server Compact + Isolated Storage for thick clients and Web Storage for thin clients.
  4. For the Security Critical Capability, Windows 7 provides security for data at rest via BitLocker, data in transit via SSL, & Authorization/Authentication via the Windows Azure AppFabric Access Control Service (ACS).
  5. For the Enterprise Application Integration Tools Critical Capability, Windows 7 can reach out to servers directly via Web Services or indirectly through the Cloud via the Windows Azure AppFabric Service Bus to connect to other enterprise packages.
  6. The Multichannel Server Critical Capability to support any open protocol is handled automatically by Windows Azure. Cross-Platform wire protocols riding on top of HTTP are exposed by Windows Communication Foundation (WCF) and include SOAP, REST and AtomPub. Cross-Platform data serialization is also provided by WCF, including XML, JSON, and OData. Cross-Platform data synchronization is provided by the Sync Framework. These Multichannel capabilities support thick clients making web service calls as well as thin web clients making Ajax calls. Distributed caching to dramatically boost the performance of any client is provided by Windows Azure AppFabric Caching.
  7. As you might imagine, the Hosting Critical Capability is knocked out of the park with Windows Azure. Beyond providing the most complete solution of any Cloud provider, Windows Azure Connect provides an IPSec-protected connection with your On-Premises network and SQL Azure Data Sync can be used to move data between SQL Server and SQL Azure. This gives you the Hybrid Cloud solution you might be looking for.
  8. For the Packaged Mobile Apps or Components Critical Capability, Windows 7 runs cross-platform mobile apps include Office/Lync/IE/Outlook/Bing.
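As a rough illustration of the Multichannel Server point above, WCF lets a single service contract be exposed over SOAP or REST+JSON purely through bindings and attributes; a minimal sketch (the contract and type names here are illustrative, not from the article):

```csharp
using System.ServiceModel;
using System.ServiceModel.Web;

// Hypothetical contract showing WCF's multichannel story: the same
// operation can be bound to SOAP (BasicHttpBinding) or to REST + JSON
// (WebHttpBinding) without changing the service implementation.
[ServiceContract]
public interface IPackageService
{
    [OperationContract]
    [WebGet(UriTemplate = "packages/{id}",
            ResponseFormat = WebMessageFormat.Json)]
    Package GetPackage(string id);
}

public class Package
{
    public string Id { get; set; }
    public string Description { get; set; }
}
```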

As you can see from this and last week’s article, Windows 7 meets all of Gartner’s Critical Capabilities whether it’s connecting to Microsoft’s On-Premises or Cloud servers and infrastructure. The great takeaway from the picture above is that Windows 7 only needs to know how to integrate its apps with WCF in exactly the same way as it does in the On-Premises scenario. Windows developers can focus on Windows without having to concern themselves with the various options provided by Windows Azure. Cloud developers just need to provide a WCF interface to the mobile clients.

When an employee walks in the door with a wireless Windows 7 Slate device, you can rest assured that you can make them productive via Windows Azure without sacrificing any of the Gartner Critical Capabilities.

Next week, I’ll cover how Windows Phone connects to an On-Premises Microsoft infrastructure.

Rob Tiffany is a Mobility Architect at Microsoft focused on designing and delivering the best possible Mobile solutions for his global customers.

<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

The Visual Studio LightSwitch Team (@VSLightSwitch) reported an Updated WPI Feed for LightSwitch Server Runtime in an 8/10/2011 post:

We've updated the Windows Platform Installer (WPI) feeds for the Visual Studio LightSwitch server runtime components. Previously there was only a single option that would install the server runtime components along with SQL Server Express, which we considered to be the primary deployment scenario for LightSwitch applications. We've received feedback to make this optional for scenarios where a full version of SQL Server is installed or the database may be hosted on a different tier. In the latest WPI feed you will now see two options for the LightSwitch server runtime: one that includes SQL Express (the same as the previous option) and one that does not include SQL Express.

So... for scenarios where SQL may already be installed or hosted on another computer, choose the option that does not include Local SQL support. See the LightSwitch deployment guide for complete instructions on publishing LightSwitch applications.

The Visual Studio LightSwitch Team (@VSLightSwitch) wants you to Submit Your Ideas for Future Versions of Visual Studio LightSwitch according to this 8/10/2011 post:

We just launched a new feedback site called UserVoice that makes it easy for you to tell us what you want to see in the next versions of Visual Studio LightSwitch. It gathers and sorts ideas by popularity, giving us an immediate view of the most requested features.

Just head to the site and then click the “Submit Idea” button.


This will take you to the LightSwitch UserVoice site where you can sign up with email, Facebook, or a Google account. Once registered, you can view ideas submitted by other people and vote on these ideas, or enter new ideas. Each person is given 10 votes to apply across the list of ideas. This way we can tell what’s really important for people.


You are free to move your votes around over time, and when an idea is closed out all votes for that idea are returned to you, so you can apply them to other ideas as we accept or decline the suggestions. Anyone can comment on the ideas submitted. The site is lightweight and designed for making it a pleasant, efficient experience to submit ideas. We’ll be looking at these feature suggestions often so that it influences our feature priorities for future versions of LightSwitch.

Let us know how we can improve Visual Studio LightSwitch!

Rowan Miller posted EF Releases & Versioning - Call for Feedback on 8/10/2011 to the ADO.NET Data Services blog:

The EF team has been working toward being more agile and having a higher release frequency. We are also releasing more previews and making them available earlier in the release cycle so that your feedback can help shape the product. This effort has seen us publishing a much higher number of releases.

In the last 6 months we have released: EF 4.1 Release Candidate, EF 4.1, EF Power Tools CTP1, EF June 2011 CTP, EF 4.1 Language Packs, EF 4.1 Update 1 and Code First Migrations August 2011 CTP. That is a pretty confusing list of releases, especially given there are some incompatibilities and dependencies within the list. Just to add to the confusion, each release is available in one or more places including Microsoft Download Center, NuGet & Visual Studio Gallery.

It’s become clear that we need to rationalize how we name, distribute and talk about releases. This problem isn’t just affecting our team; Scott Hanselman recently posted about the need for better versioning across all our products.

This post will walk you through the changes we are planning to make. Nothing is locked in yet so we welcome your feedback.

What We Ship

There are two logical parts to the Entity Framework: the core components that ship inside the .NET Framework and Visual Studio, and the ‘out of band’ components that we can update on a much more frequent schedule. We are currently looking at how we can update the core components on a more frequent cadence as well. The ‘EF June 2011 CTP’ was our first attempt at shipping the core components more frequently, and it’s become clear we’re just not technically ready to do this yet.

Core components include:

  • Core EF Runtime (System.Data.Entity.dll & System.Web.Entity.dll)
  • EF Designer

Out of band components include:
(We will likely ship more out of band components in the future)

  • The DbContext API & Code First (EntityFramework.dll)
  • T4 Templates for using DbContext API with Model First & Database First
  • EF Power Tools
  • Code First Migrations

Where We Ship

As mentioned above, the core components will remain part of the .NET Framework and Visual Studio.

The out of band components will be primarily available via NuGet and Visual Studio Gallery.

  • The DbContext API & Code First will continue to be available as the EntityFramework NuGet package. We no longer plan to provide a stand-alone installer for these components. If you want to GAC or distribute the EntityFramework assembly it can easily be taken from the NuGet package.
  • T4 Templates for DbContext API will become available on Visual Studio Gallery. In the past we have distributed other T4 templates, such as our POCO templates, using this mechanism.
  • EF Power Tools will continue to be available on Visual Studio Gallery.
  • Code First Migrations will continue to be available via NuGet. We will likely also release via another mechanism to support team build and deployment scenarios; we are still working through the logistics of this.

NuGet will eventually have built-in support for pre-release versions of packages but in the meantime we will introduce a .Preview version of each package. For example the EntityFramework package will be the latest fully supported runtime, EntityFramework.Preview will be the latest preview. For the period between an RTM release and the next preview both packages will include the same build. We will ensure that the non-preview packages always work together (i.e. EntityFramework will always be compatible with EntityFramework.Migrations etc.). We will try to ensure that the latest pre-release versions work together but this may not always be possible.
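To picture how the dual-package scheme described above behaves, here is a small illustrative sketch (Python). The package behavior is taken from the post; the resolution logic itself is mine and is not NuGet's actual algorithm, and the version numbers are hypothetical:

```python
# Builds in release order; True marks a pre-release (preview) build.
builds = [
    ("4.1.0", False),
    ("4.1.1", False),      # hypothetical stable update
    ("4.2.0-beta", True),  # hypothetical preview
]

def latest_stable(builds):
    """The EntityFramework package: newest fully supported build."""
    return [version for version, preview in builds if not preview][-1]

def latest_any(builds):
    """The EntityFramework.Preview package: newest build of any kind."""
    return builds[-1][0]

assert latest_stable(builds) == "4.1.1"
assert latest_any(builds) == "4.2.0-beta"

# Between an RTM release and the next preview, both package ids
# resolve to the same build, as the post says:
after_rtm = [("4.2.0", False)]
assert latest_stable(after_rtm) == latest_any(after_rtm) == "4.2.0"
```

The point of the sketch is simply that the `.Preview` id always tracks the newest build, while the plain id never advances past the last supported one.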

Version Numbers

The version numbers of the core components will be governed by the .NET Framework & Visual Studio release that they are part of.

For our out of band components we will version using principles from Semantic Versioning.

  • The DbContext API & Code First is currently at 4.1.10715.0 and is called ‘EF 4.1 Update 1’. The next update to this package will bump the version number to 4.2.0 and be called EF 4.2; we will follow semantic versioning from that point forwards. At this stage EF 4.2 will just include some bug fixes. This is the primary out of band component and we will use its version number to describe what version of EF is the latest (i.e. the next release will be EF 4.2).
  • T4 Templates for DbContext API are currently included in the EF 4.1 installer. We will move these to Visual Studio Gallery with a version number of 1.0.0 and follow semantic versioning from that point forwards.
  • EF Power Tools are currently in preview mode and available on Visual Studio Gallery with a version number of 0.5.0. We will continue to follow semantic versioning with this component.
  • Code First Migrations was initially released on NuGet with a version number of 0.5.10727.0. The next preview will be 0.6.0 and then we will continue to follow semantic versioning with this component.

Once a component has had an RTM release (i.e. reached version 1.0.0) all subsequent previews will use the target final release number with an ‘alpha’, ‘beta’, etc. special version. For example we will release EntityFramework.Preview package with a version number of ‘4.2.0beta’ before releasing EntityFramework with a version number of 4.2.0.
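As a concrete check of the precedence rule just described, the following sketch (Python, purely illustrative and not part of any EF or NuGet tooling) shows how a version with a special suffix such as ‘4.2.0beta’ sorts before its final release:

```python
import re

def parse_version(v):
    """Split a version like '4.2.0beta' into numeric parts and an
    optional special-version suffix, e.g. ((4, 2, 0), 'beta')."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)([A-Za-z].*)?$", v)
    if not m:
        raise ValueError("unrecognized version: " + v)
    nums = tuple(int(m.group(i)) for i in (1, 2, 3))
    return nums, m.group(4)

def precedes(a, b):
    """True if version a precedes version b under semver-style rules:
    numeric parts compare first; at the same numeric version, a
    pre-release (e.g. 'beta') precedes the final release."""
    na, pa = parse_version(a)
    nb, pb = parse_version(b)
    if na != nb:
        return na < nb
    return pa is not None and pb is None

assert precedes("4.2.0beta", "4.2.0")   # preview ships before RTM
assert precedes("4.1.0", "4.2.0beta")   # lower numeric version first
assert not precedes("4.2.0", "4.2.0beta")
```

The real Semantic Versioning specification orders pre-release identifiers in more detail (e.g. ‘alpha’ before ‘beta’); this sketch only captures the pre-release-before-release rule the post relies on.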

Please let us know what you like and what you think we should do differently.

Rowan Miller, Program Manager, ADO.NET Entity Framework

Martin Beebe (pictured below) posted LightSwitch in the Real World to the MSDN UK Team Blog on 8/8/2011 (missed when published):

Tim Leung interviews Garth Henderson about how he uses LightSwitch.

Last week saw a major milestone in the development of Microsoft LightSwitch, a new addition to the Microsoft Visual Studio family.

After being in development for more than 2 years, the product has finally been released. Many of those in the LightSwitch community have eagerly awaited this release for a very long time.

For those unfamiliar with LightSwitch, it enables you to quickly build data driven Silverlight applications for both the desktop and the web. A major selling feature is that applications are built using well-known patterns and best practices such as n-tier application layering and MVVM, as well as technologies such as Entity Framework and RIA Services.

It is possible to create entire applications using just the graphical designers and not have to resort to writing code. It appeals strongly to non-developers and to those from Access, Excel VBA and FoxPro backgrounds.

But being a version 1 product, is LightSwitch really all that it's hyped up to be? Many of the sample applications are perhaps simplified 'academic' demonstrations of how an application might be. Is it really possible to write practical, large scale, production quality applications using LightSwitch?

In order to find out, I've been speaking to Garth Henderson from Vanguard Business Technology. Garth is an early adopter of LightSwitch and one of the most knowledgeable experts in the field. His recent work has been pushing LightSwitch to its limits.

Tim (pictured at right): What does Vanguard do and what kind of applications are you building with LS?

Garth (pictured below): Vanguard Business Technology (VBT) is a division of a USA company formed to write affordable business applications for resale and hosting with current OO technology. Our initial research led to a decision that .NET was a better platform for our goals than Oracle/Java. We think that Microsoft continues to provide the global community of developers with excellent collaborative stewardship for .NET.

We do plan to integrate with systems written in Java and/or Oracle (or IBM) databases.

We started with Ajax based ASP.NET development. In 2010, we migrated to Silverlight 4 with RIA services. When LightSwitch Beta 1 was released in August 2010, we switched development to LS. By October 2010, we were running three LS applications in a company owned by our investors. We expanded our development efforts when B2 was released last March. All of our LS apps are now running with RTM. There were no problems with the upgrade.

The first LS products we will release support the construction and property management industries. Our applications are designed on an ERP foundation that we use to build industry niche solutions. We have been building the core as we migrated from ASP.NET, to Silverlight, to LS.

We have a broad perspective of software technology development as our core management group has worked with all major business app platforms as they emerged starting with IBM mainframes. Our joint experience serves as a pretty good “crystal ball” when selecting appropriate technologies. Microsoft has been working on .NET since 1998 when .NET was called DNA. Just now we are finally getting a .NET RAD biz app solution. Point: We see LS as a beginning of a technology that will quickly gain consensus and evolve.

Tim: What do you like most about LS?

Garth: The LS product comes closest to providing the RAD functionality and short development timeframes that I’m used to. To make LS even better, I’d like to help figure out a way to build LS apps dynamically. This is an open topic in the LS forum.

The extensibility and openness of LS is definitely at the top of my list of benefits. We are looking forward to using 3rd party commercial controls and extensions to help build our LS application products.

The most remarkable fact about LS is that it is a Visual Studio product. Development with LS is a true pleasure. Everything we need is built into VS. VS development with LS connects everything together. Intellisense is right there for us. We can drill down through extremely complex hierarchies of relational entities.

The LS product is pure genius with the widespread use of T4 and MEF built into the architecture of LS. As we type code or change Designer properties, LS is automatically, with every keystroke, detecting if some related code needs to be changed. And . . . when some related code needs to be changed or generated, LS within VS will automatically change the code.

LS is written in Silverlight. It runs with the same version of application code in a web browser or on the Desktop. A lot of developers tell me that Silverlight will not survive. This is a crazy assumption. In 2010, MS released VS written in WPF. WPF is THE desktop technology for .NET. Silverlight is the web version of WPF. WP7 is the phone version of WPF technology. Yes, WPF and its dependents will evolve.

Silverlight uses XAML as the markup language. LS has its own tuned version of XAML called LSML to expedite business app development. It is very powerful. One of the immediate opportunities is for 3rd party vendors to develop utilities that manage LSML.

The architectural support for business logic at the Model, ViewModel, and View is very well laid out. The Designers have context intelligent dropdown menus that take us exactly to the override method that we need to work with. We can even go directly to an Entity Property Changed method in the Screen Designer by right clicking on the Collection property in the left hand ViewModel list. Now that is a real timesaver.

LS allows the developer to continually stay focused on application development. A lot of thought and hard work went into making LS the great product that it is for RAD biz app programming.

Tim: How easy or hard has it been to learn LS? Are there any parts that you've found particularly challenging?

Garth: Business application development is quite different than boutique web site development. To date, most .NET development has been involved with ASP.NET web sites. I consider web site development to be a completely different industry than LOB/ERP development. LS is designed specifically for biz app development.

LS is very easy to use. You can do a whole lot without any programming code. It is a VS tool that a non-programmer can use. Yet, when you need to dig in and use code, it is available. LS bridges the gap between programmers and business process experts. Business experts can harmlessly hack around with prototypes and turn it over to a pro as a pretty good articulation of business requirements.

For professional developers adept with LS, the RAD Designers allow us to do some incredible real time prototypes using a projector in front of customers. In fact, LS is so good that I can use it as a requirements “take down” tool while interviewing our client’s subject experts.

If the business process has existing data, a development copy of the database (or Excel spreadsheets) can be used as a Data Source. It is a quick task to use SQL Server Manager to import Excel spreadsheets into a prototype database table.

For me, the most challenging part was waiting until July 26, 2011 for a final release. There have been significant improvements between Beta 1, Beta 2, and the LS 2011 RTM.

Tim: One of the key features of LS is Rapid Application Development. Compared to how you were working before, has LS cut down on your development time?

Garth: I’d have to answer that question in respect to previous win forms, ASP.NET, and Silverlight app development. LS still has a way to go to match the RAD development that we had with our Linux/C++ system.

The biggest time saving was in not having to write all of the Model View code and custom wiring for an SL application. LS is a dream come true when it comes to maintenance. I can make all maintenance and/or new functionality changes without ever having to worry about rewriting all of the extremely tedious View Model code.

Dropdown lookup controls are also a huge time saver and the GUI setup within the Screen Designer also shaves hours off of complex UI tasks with the use of Groups.

The debug runtime “Customize” designer flips from the Group/Property layout paradigm to a true WYSIWYG format. Yep, just run your app and there is a “Design Screen” icon in the upper right hand corner that switches to the Customize WYSIWYG Designer. But wait, it gets better: the WYSIWYG designer displays your running app with live data. Since this is running with Visual Studio, all changes that you make during your WYSIWYG design session are saved back to the LSML file.

Additional time saving tools can be created with the LS Extensibility Toolkit, but one of the true pleasures of LS development is the integrated LINQ Lambda syntax. LINQ syntax works with Screen Collections and the automatically generated Entity Collections with their respective methods to retrieve records. The LS Entity Designer manages a DataRuntime.cs (or .vb) code file that has the collections and a direct method to bring back a single record from the database based on the Primary Key value. LINQ object (dot) notation is supported in all code methods.

About the toughest part of writing an intelligent client application is the communication between the Client and the Server. LS does an exemplary job of supporting application level methods at both levels. I’ve run side-by-side tests of processing large data updates on the LS server side compared with Winforms technology. The results were identical. LS gives the developer the ability to decide what should be run on the Client and what should be run on the Server.

LS will scale if designed correctly. The current release of LightSwitch requires that a large system be designed in modules rather than one mammoth application.

When we consider the big picture of all of the LS RAD features/functionality, it seems to be a straightforward conclusion that LS is ready for primetime. Keep in mind that new LS technology is just the top 2% with 98% of the work being done by existing proven .NET technology – most of which has been in production for over 3-5 years now.

Tim: Having completed a LightSwitch project, is there anything you would have done differently?

Garth: No, I don’t think so. I depended heavily on working with a few friends to figure everything out. We split up the load and shared information.

Hehe . . . but there are a lot of things that I wish others would have done differently. It is a chicken and the egg scenario. Tools developers need to have great apps to study in order to build better tools and app developers need to have advanced tools to build great apps. We are getting there.

Initially, we all had to radically change the way that we thought about building a .NET app – especially an ASP.NET app. It has almost been a year now for me, and thinking in terms of LightSwitch is second nature. However, at first, it was really a challenge to dig in and figure out how LS worked. There wasn’t much documentation. We all kind of dug in and posted back what worked for us and what we liked best.

Everyday I’ve done something differently with LS. LS is a very rich environment. I’m constantly getting new ideas. Just today I figured out a pretty cool way to handle one-to-one table relationship support thinking in terms of the “LightSwitch way.”

Understanding the unique features and functionality of Screen Collections was probably the most useful part of LS development for me.

It is logical that LS would start to fuel a new generation of business development technology. LS is the first RAD OO system that is built using .NET technology. We can use LS (as it evolves) to improve global business and government. We are far from having the best systems for business and governments. Everything needs to be reengineered – with deeper considerations this time around. The point here is that biz app developers are people that solve real world problems with software. It isn’t about the technology, it is about improving the life quality for people everywhere. So, there is always more to do and more to discuss.

I can’t wait for .NET Workflow and SharePoint document management to integrate with LS biz app development. I don’t expect that it will take too much to make this happen.

Tim: What recommendations would you give those who are just starting out?

Garth: I’d say pick up a good book on LS. The book you want should teach you how to develop apps in LS. Many technical books are little more than a reference manual of content that is already available on MSDN – and those aren’t going to be much help. The best books will walk through a professional methodology explaining the process of building an LS application starting with requirements.

Until that kind of LS book is available, read through the MSDN documentation. Grab a copy of LS and build a few screens. See how easy it is. Then, look at some of the examples to see what kind of additional features and functions you want to accomplish. Join the forum and talk with us.

Use LS in your business. Put together and deploy a useful LS app for a client. Bottom line: Start making money with LS – both for yourself and the businesses your efforts support.

Tim: Garth – thanks for giving us a fascinating insight into the way that you work with LightSwitch and for sharing your experiences with us.

So there we have it. We've learnt that you can construct solid, scalable, real life enterprise solutions by using Microsoft LightSwitch. We've also seen how you can build a viable business model around LightSwitch and attract investors who will also see the benefit of rapid application development.

LightSwitch is architecturally solid and there are many areas throughout the product that enable you to be more productive and to reduce development timescales.

If you haven’t seen the light yet, join the vibrant community that Garth speaks about and make a start by downloading LightSwitch.

<Return to section navigation list>

Windows Azure Infrastructure and DevOps

Lori MacVittie (@lmacvittie) reported The University of Washington adds a cloud computing certificate program to its curriculum in her Cloud Computing Goes Back to College post of 8/10/2011 to F5’s DevCentral blog:

It’s not unusual to find cloud computing in a college environment. My oldest son was writing papers on cloud computing years ago in college, before “cloud” was a marketing term thrown about by any and everyone pushing solutions and products hosted on the Internet. But what isn’t often seen is a focus on cloud computing on its own; as its own “area of study” within the larger context of computer science. That could be because when you get down to it, cloud computing is merely an amalgamation of computer science topics and is more about applying processes and technology to modern data center problems than it is a specific technology itself. But it is a topic of interest, and it is a complex subject (from the perspective of someone building out a cloud or even architecting solutions that take advantage of the cloud) so a program of study may in fact be appropriate to provide a firmer foundation in the concepts and technologies underpinning the nebulous “cloud” umbrella.

The University of Washington recently announced the addition of a cloud computing certificate program to its curriculum. This three-course program of study is intended to explore cloud computing across a broad spectrum of concerns, from IaaS to PaaS to SaaS, with what appears to be a focus on IaaS in later courses. The courses and instructors are approved by the UW Department of Computer Science, and are designed for college-level and career professionals. They are non-credit courses that will set you back approximately $859 per course. Those of us not in close proximity may want to explore the online option, if you’re interested in such a certificate to hang upon your wall. This is one of the first certificates available, so it will be interesting to see whether it’s something the market is seeking or whether it’s just a novelty.

In general, the winter course appears to really get into the meat and serve up a filling course. While I’m not dismissing the first course offered in the fall, it does appear light on the computer science and heavy on the market, which, in general, seems more appropriate for an MBA-style program than one tied to computer science. The spring selection looks fascinating – but may be crossing too many IT concerns at one time. There are very few folks who are as comfortable on a switch command line as they are with the programmatic intricacies of data-related topics like Hadoop, HIVE, MapReduce and NoSQL. My guess is that the network and storage network topics will be a light touch given the requirement for programming experience and the implicit focus on developer-related topics. The focus on databases and lack of a topic specifically addressing scalability models of applications is also interesting, though given the inherent difficulties and limitations on scaling “big data” in general, it may be necessary to focus more on the data tier and less on the application tiers.

Of course I’m also delighted beyond words to see the load testing component in the winter session, as it cannot be stressed enough that load testing is an imperative when building any highly scalable system and it’s rarely a topic discussed in computer science degree programs.

The program is broken down into a trimester style course of study, with offerings in the fall, winter and spring.

Fall: Introduction to Cloud Computing
  • Overview of cloud (IaaS/PaaS/SaaS, major vendors, market overview)
  • Cloud Misconceptions
  • Cloud Economics
  • Fundamentals of distributed systems
  • Data center design
  • Cloud at startup
  • Cloud in the Enterprise
  • Future Trends
Winter: Cloud Computing in Action
  • Basic Cloud Application Building
  • Instances
  • Flexible persistent storage (aka EBS)
  • Hosted SQL
  • Load testing
  • Operations (Monitoring, version control, deployment, backup)
  • Small Scaling
  • Autoscaling
  • Continued Operations
  • Advanced Topics (Query optimization, NoSQL solutions, memory caching, fault tolerance, disaster recovery)
Spring: Scalable and Data-Intensive Computing in the Cloud
  • Components of scalable computing
  • Cloud building topics (VLAN, NAS, SAN, Network switches, VMotion)
  • Consistency models for large-scale distributed systems
  • MapReduce/Big Data/NoSQL Systems
  • Programming Big Data (Hadoop, HIVE, Pig, etc)
  • Database-as-a-Service (SQL Azure, RDS, …)

Apposite to the view that cloud computing is a computer science related topic, not necessarily a business-focused technology, are the requirements for the course: programming experience, a fundamental understanding of protocols and networking, and the ability to remotely connect to Linux instances via SSH are expected to be among the skill set of applicants. The requirement for programming experience is an interesting one, as it seems to assume the intended users are or will be developers, not operators. The question becomes: is the scripting often leveraged by operators and admins to manage infrastructure considered “programming experience?” Looking deeper into the courses, it later appears to focus on operations and networking, diving into NAS, SAN, VLAN and switching concerns; a focus in IT which is unusual for developers.

That’s interesting because in general computer science as a field of study tends to be highly focused on system design and programming, with some degree programs across the country offering more tightly focused areas of expertise in security or networking. But primarily “computer science” degrees focus more on programmatic concerns and less on protocols, networking and storage. Cloud computing, however, appears poised to change that – with developers needing more operational and networking fu and vice-versa. A focus of devops has been on adopting programmatic methodologies such as agile and applying them to operations as a means to create repeatable deployment patterns within production environments. Thus, a broad overview of all the relevant technologies required for “cloud computing” seems appropriate, though it remains to be seen whether such an approach will provide the fundamentals really necessary for its attendees to successfully take advantage of cloud computing in the Real World™.

Regardless, it’s a step forward for cloud computing to be recognized as valuable enough to warrant a year of study, let alone a certificate, and it will be interesting to hear what students of the course think of it after earning a certificate.

You can learn more about the certificate program at the University of Washington’s web site.

David Linthicum (@DavidLinthicum) asserted “Gartner's 2011 hype cycle shows that cloud computing is entering the trough of disillusionment as everyone claims to be cloud-centric, but few are” in a deck for his It's official: 'Cloud computing' is now meaningless article of 8/10/2011 for InfoWorld’s Cloud Computing blog:

I have to credit my good friend and fellow blogger Brenda Michelson for relaying to me that yet another Gartner hype cycle report is now out. You can expect to see its accompanying graphic (below) used in every vendor's presentation from now on. (There must be a law or something.)

Some of the better analyses of the report came from Louis Columbus, whose abstracts were pretty spot-on regarding the report's issues. This included the fact that "Gartner states that nearly every vendor who briefs them has a cloud computing strategy, yet few have shown how their strategies are cloud-centric. Cloud-washing on the part of vendors across all 34 technology areas is accelerating the entire industry into the trough of disillusionment."

Everyone out there is promoting their product as "cloud-centric" when they have very little or nothing that appears cloudlike. The concept of private clouds compounds the problem; it's much easier to spin any on-premise technologies into the cloud. That's old news, but lately it's getting much worse.

Here's the dilemma: If everything is promoted as cloud-centric, no matter whether the vendors actually changed the technology to support cloud computing concepts, then cloud computing is all-encompassing. Therefore, cloud computing is no longer emerging, but stands as the state of all things computing. Right?

Clearly, the term "cloud computing" has lost most of its meaning and core attributes. This occurred not by anybody redefining what it is, but by billions of marketing dollars that simply shout down the thought leaders in this space who call BS on all the cloud-washing.

I think we've officially lost the war on defining the core attributes of cloud computing so that businesses and IT can make proper use of it. It's now in the hands of marketing organizations and PR firms who, I'm sure, will take the concept on a rather wild ride over the next few years.

Trough of disillusionment, indeed.

Time to change the name of your column, David?

I also quoted from Louis Columbus’ Gartner Releases Their Hype Cycle for Cloud Computing, 2011 post of 7/27/2011 in my Microsoft's, Google's big data plans give IT an edge article of 8/8/2011. The four points of general interest were:

  • There continues to be much confusion with clients relative to hybrid computing. Gartner’s definition is as follows: “Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution.” They provide examples of joint security and management, workload/service placement and runtime optimization, and others to further illustrate the complex nature of hybrid computing.
  • Big Data is also an area of heavy client inquiry activity that Gartner interprets as massive hype in the market. They are predicting that Big Data will reach the apex of the Peak of Inflated Expectations by 2012. Due to the massive amount of hype surrounding this technology, they predict it will be in the Trough of Disillusionment eventually, as enterprises struggle to get the results they expect.
  • By 2015, those companies who have adopted Big Data and extreme information management (their term for this area) will begin to outperform their unprepared competitors by 20% in every available financial metric. Early use cases of Big Data are delivering measurable results and strong ROI. The Hype Cycle did not provide any ROI figures however, which would have been interesting to see.
  • PaaS is one of the most highly hyped terms Gartner encounters on client calls, one of the most misunderstood as well, leading to a chaotic market. Gartner does not expect comprehensive PaaS offerings to be part of the mainstream market until 2015. The point is made that there is much confusion in the market over just what PaaS is and its role in the infrastructure stack.


Windows Azure/SQL Azure certainly is a “comprehensive PaaS offering” by any reasonable interpretation of the term. Google App Engine and Heroku could make similar claims. Microsoft is emphasizing hybrid cloud computing as a marketing ploy for 2012 versions of System Center Virtual Machine Manager (SCVMM), System Center Orchestrator (SCorch) and System Center App Controller (formerly Project “Concero”).

Louis’ two big-data abstracts played a major role in that article. See my Links to Resources for my “Microsoft's, Google's big data [analytics] plans give IT an edge” Article post of 8/8/2011 for more details.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.

<Return to section navigation list>

Cloud Security and Governance

Richard Ross posted Compliance in the Cloud to the Nubifer Blogs on 8/10/2011:

Cloud computing seems simple in concept, and indeed, simplicity of operation, deployment and licensing are its most appealing assets. But when it comes to questions of compliance, once you scratch the surface you’ll find more questions than you asked in the first place, and more to think about than ever before.

Compliance covers a lot of ground, from government regulations such as Sarbanes-Oxley and the European Union Data Protection Act, to industry regulations such as PCI DSS for payment cards and HIPAA for healthcare information. You may have internal controls in place, but moving to a public-cloud infrastructure platform, a cloud-based application suite or something in between will mean giving up some controls to the cloud vendor.

That’s a position many auditors—and CIOs and CEOs—find themselves in today. They want to know how to leap into cloud computing in a way that preserves their good standing in regulatory compliance. Here are four tips for keeping tabs on compliance in the cloud, from analysts, vendors and consultants.

Challenges the Cloud May Add to Your Workload

When you evaluate cloud vendors, start by looking for sound practices and strategies for user identity and access management, data protection and incident response. These are baseline compliance requirements. Then, as you map specific compliance requirements to your prospective cloud vendor’s controls, you’ll likely face some cloud-specific challenges.

Data location is one. The EU Data Protection Act, for example, strives to keep personal information within the European Union. To comply, your cloud vendor should keep your European customer data on servers located in Europe.

Multi-tenancy and de-provisioning also pose challenges. Public cloud providers use multi-tenancy to optimize server workloads and keep costs down. But multi-tenancy means you’re sharing server space with other businesses, so you should know what safeguards your cloud provider has in place to prevent any compromise. Depending on how critical your data is, you may also want to use encryption. HIPAA, for example, requires that all user data, both moving and at rest, be encrypted.

User de-provisioning is an issue that will become more challenging as password-authentication methods grow in complexity and volume. Federated identity management schemes will make it easier for users to log on to multiple clouds, and that will make de-provisioning much trickier.

Ever-Changing Standards

Like it or not, you’re an early adopter. Your decisions about what applications to move to the cloud and when to move them will benefit from an understanding of new and/or modified standards that are now evolving for cloud computing.

Today you can look for SAS 70 Type II and ISO 27001 certifications for general compliance with controls for financial and information security typically required by government and industry regulations, but these don’t guarantee that your company’s processes will comply.

Bringing visibility to users is a major goal of the Cloud Security Alliance, a three-year-old organization fast gaining popularity among users, auditors and service providers. A major goal of the CSA is development of standardized auditing frameworks to facilitate communication between users and cloud vendors.

Well underway, for example, is a governance, risk and compliance (GRC) standards suite, or stack, with four main elements: the Cloud Trust Protocol, Cloud Audit, Consensus Assessments Initiative and the Cloud Controls Matrix. The Cloud Controls Matrix includes a spreadsheet that maps basic requirements for major standards to their IT control areas, such as “Human Resources Employment Termination,” while the Consensus Assessments Initiative offers a detailed questionnaire that maps those control areas to specific questions that users and auditors can ask cloud vendors.

Efforts of the CSA and other alliances, plus those of industry groups and government agencies, are bound to produce a wealth of standards in the next several years. The CSA has formal alliances with ISO, ITU and NIST, so that its developments can be used by those groups as contributions to standards they’re working on. And a 2010 Forrester Research report counted 48 industry groups working on security-related standards in late 2010.

Importance of an SLA

Regardless of your company’s size or status, don’t assume your cloud vendor’s standard terms and conditions will fit your requirements. Start your due diligence by examining the vendor’s contract.

Your company’s size can give you leverage to negotiate, but a smaller business can find leverage, too, if it represents a new industry for a cloud vendor that wants to expand its market. In any case, don’t be afraid to negotiate.


To best understand your potential risk, as well as your benefits, you should bring your security team into the conversation at the earliest possible opportunity, says Forrester.

Moving to the cloud may offer an opportunity to align security with corporate goals in a more permanent way by formalizing the risk-assessment function in a security committee. The committee can help assess risk and make budget proposals to fit your business strategy.

You should also pay attention to the security innovations coming from the numerous security services and vendor partnerships now growing up around the cloud.

For more information regarding compliance and security in the Cloud, contact a Nubifer representative today.

<Return to section navigation list>

Cloud Computing Events

Bruce Kyle recommended that you Learn to Build Your Private Cloud at TechNet Sessions in Western US in an 8/10/2011 post to the US ISV Evangelism Blog:

    Cloud Power! What are the options? Public Cloud, Hybrid Cloud, Private Cloud? Which one is right for your business?

    Join us as we discuss the basics of cloud infrastructures and the details of how to build your own private cloud.

    In 4 hours we will build a private cloud with you! We will talk about Hyper-V, Windows Azure, System Center Virtual Machine Manager (SCVMM) and the Self Service Portal. We will demonstrate how to use these building blocks to build your own private cloud environment to host your own IT applications and services. We will also show you how to connect Public Cloud components to your Private Cloud in order to maximize the unique competitive benefits of each environment.

    Before this session is over you will have an understanding of the ins and outs of Microsoft’s Private Cloud offerings. All sessions begin at 8:30 am local time and run until 12:30 pm local time.

    City | Date | Registration Link (event ID)
    Irvine, CA | Aug 11 | 1032489811
    Tempe, AZ | Aug 16 | 1032489849
    Denver, CO | Aug 18 | 1032489850
    San Francisco, CA | Aug 23 | 1032489851
    Bellevue, WA | Aug 25 | 1032489852
    Lehi, UT | Aug 30 | 1032489853

    Joe Panettieri (@joepanettieri) reported HostingCon 2011: Four Trends for Cloud Services Providers in an 8/9/2011 post to the TalkinCloud blog:

    Hosting providers and cloud services providers (CSPs) are in San Diego this week for HostingCon 2011. Hardly surprising, the big themes involve hosting providers pushing beyond managed servers toward comprehensive cloud services. But what are the key takeaways for established and emerging channel partners? Here are four key HostingCon highlights plus reality checks from Talkin’ Cloud.

    1. Enabling Cloud Storefronts: Parallels unveiled Parallels Automation for Cloud Applications, which allows service providers to launch and manage cloud storefronts for SMB customers.

    Reality Check: Parallels has quietly emerged as one of the top SaaS-enablement software platforms for hosting companies. Over the past year, Parallels has recruited several Microsoft veterans to the company, and there are signs that Parallels hopes to ultimately become a $1 billion company — though I believe revenues are closer to $100 million or so at the moment.

    2. Emerging Cloud Help Desks: The Parallels effort includes white-label help desk relationships with Global Mentoring Solutions and Bobcares. In theory, the relationships will help service providers improve support for SaaS applications.

    Reality Check: In recent months, we’ve noticed a growing number of MSPs and VARs also trying to leverage new approaches to help desk services — so we’re curious to see if channel partners get cozy with Global Mentoring Solutions and Bobcares.

    3. More Cloud Backup: Arkeia Software launched a multi-tenant cloud storage solution for hosting providers and managed services providers. Arkeia says the solution will protect customer data both in the cloud and on-site at customers’ offices.

    Meanwhile, R1Soft says more than 1,000 cloud service providers and hosting providers now leverage R1Soft’s CDP (Continuous Data Protection) software to back up more than 200,000 servers. The disk-to-disk server backup software is designed for Windows and Linux servers.

    Reality Check: It’s good to see continued cloud storage innovations. But we continue to watch the cloud backup market closely; it seems like there are dozens of rivals seeking to build closer relationships with hosting providers and MSPs… so something’s got to give over the long haul.

    4. Hosted PBX Revisited?: Infratel launched a cloud-based voice service for small and midsize businesses, though Infratel didn’t say if the solution is a complete hosted PBX or something slightly different. According to Infratel, the hosted solution “provides a wide array of features for SMBs including the ability to: create unique inbound calling paths for better customer service; deliver calls to any available phone; route calls based on time of day or other business rules; allow users to set do-not-disturb status; and set-up unique mailboxes for the business and end-users.”

    Infratel’s core offering, Infra Call Center, is a SIP-based application built on Microsoft Windows Server. Its corporate telephony solution, Infra CommSuite, is a Windows Server-based IP PBX solution.

    Reality Check: Talkin’ Cloud has heard plenty of hosted PBX buzz this year. Parallels announced a major hosted PBX push during its own conference earlier this year. And Intermedia has been promoting a white label hosted PBX solution to VARs and MSPs. However, we believe it will be a few years before businesses fully trust and embrace hosted PBXes.

    What’s Next?

    We’ll continue to track key trends at this week’s HostingCon. We’re particularly interested to see what moves, if any, Microsoft makes at the conference.

    Read More About This Topic

    <Return to section navigation list>

    Other Cloud Computing Platforms and Services

    The SD Times Newswire reported Red Hat is First to Deliver Java EE 6 via Platform-as-a-Service with OpenShift in an 8/10/2011 post:

    Red Hat, Inc. (NYSE: RHT), the world's leading provider of open source solutions, today announced that the Red Hat OpenShift Platform-as-a-Service (PaaS) now supports Java Enterprise Edition 6, powered by the company’s JBoss application server, one of the leading open source Java Enterprise Edition (EE) application servers that forms the basis for JBoss Enterprise Application Platform. With this new integration, OpenShift becomes the first PaaS in the industry to deliver Java Enterprise Edition (EE) 6, simplifying how application developers build and deploy Java in the cloud.

    OpenShift is a free PaaS for developers who leverage open source. Developers looking for a faster on-ramp to the cloud with built-in management and auto-scaling capabilities can use OpenShift so they can focus on coding mobile, social and enterprise applications while leaving stack setup, maintenance and operational concerns to a trusted hosted service. First announced at the Red Hat Summit in May 2011, OpenShift redefined the PaaS space by offering a broad choice of supported languages, frameworks, databases and clouds, including Ruby, Python, Perl, PHP, Java EE, Spring, MySQL, SQLite, MongoDB, MemBase and Memcache, all open source, helping developers avoid getting locked into any particular technology or platform.

    OpenShift Java EE 6 support is based on JBoss Application Server 7, an open source JBoss Community project. Red Hat’s JBoss application server forms the foundation for the company’s JBoss Enterprise Application Platform 6, the next major release of the application platform, which is scheduled for release in early 2012. JBoss application servers are Java EE-certified, enabling a cloud-ready architecture with a lightweight footprint and dynamic container model to better support multi-core processing and multi-tenancy.

    The combination of OpenShift with JBoss application server now allows Java EE to be more easily scaled, managed and monitored in the cloud. By delivering JBoss in OpenShift, developers can take advantage of Java EE 6, one of the biggest advancements in Java in over ten years. Java EE 6 includes Contexts and Dependency Injection (CDI), a standards-based, modern programming framework that makes it easier for developers to build dynamic applications and picks up where some proprietary frameworks left off. CDI offers a more robust set of capabilities including eventing support and typing, delivering optimal flexibility for programmers. Additionally, as an open standard, more vendors support and contribute to the specification, allowing developers to have their choice of programming approaches without vendor lock-in.

    "While developers and enterprises have long been interested in the time to market advantages offered by PaaS platforms, the difficulty of migrating existing applications to incompatible frameworks has slowed adoption," said Stephen O'Grady, principal analyst and co-founder of RedMonk. "With EE6 available by integrating the JBoss application server technology, Red Hat's OpenShift platform is aimed at allowing enterprises to transition their existing Java EE applications and skills to the cloud with zero friction."

    "OpenShift today provides differentiation in the industry as the first on-ramp to get Java EE 6 applications into the cloud," said Brian Stevens, CTO and vice president, Engineering at Red Hat. "With this, Red Hat has solved multi-tenant problems with its expertise in providing full-stack support. Combining our technology expertise from the hypervisor through the operating system and middleware to the cloud, our technology is integrated to allow for easier development of applications with free Java EE in PaaS. This is an efficiency unique to Red Hat through OpenShift today."

    To learn more about this announcement, join Red Hat executives for a webcast to be broadcast on August 10, 2011 at 12pm ET. To join the live webcast or to watch the replay, visit

    To access OpenShift, visit To download JBoss application servers, including JBoss Application Server 7, visit

    For more information about Red Hat, visit For more news, more often, visit

    Todd Hoff described LevelDB - Fast and Lightweight Key/Value Database From the Authors of MapReduce and BigTable in an 8/10/2011 post to his High Scalability blog:

    LevelDB is an exciting new entrant into the pantheon of embedded databases, notable both for its pedigree, being authored by the makers of the now mythical Google MapReduce and BigTable products, and for its emphasis on efficient disk based random access using log-structured-merge (LSM) trees.

    The plan is to keep LevelDB fairly low-level. The intention is that it will be a useful building block for higher-level storage systems. Basho is already investigating using LevelDB as one of its storage engines.

    In the past many systems were built around embedded databases, though most developers now use database servers accessed via RPCs. An embedded database is a database distributed as a library and linked directly into your application. The application is responsible for providing a service level API, sharding, backups, initiating consistency checking, initiating rollback, startup, shutdown, queries, etc. Applications become the container for the database and the manager of the database.

    Architectures using embedded databases typically never expose a raw database abstraction at all. They have a service API, and the services call the embedded database library transparently behind the scenes. Often an embedded database will provide multiple access types, like indexed access for key-value uses and btrees for range queries and cursors.
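    A toy sketch of that dual access model in self-contained Python (the names are hypothetical, not any real embedded database's interface): a hash map gives O(1) point lookups, while a sorted key index supports cursor-style range scans, roughly the two access types an embedded library typically offers.

    ```python
    import bisect

    class EmbeddedStore:
        """Toy embedded key/value store: point lookups plus ordered range scans."""

        def __init__(self):
            self._data = {}   # key -> value, hash-style point access
            self._keys = []   # sorted keys, backs range/cursor queries

        def put(self, key, value):
            if key not in self._data:
                bisect.insort(self._keys, key)   # keep the key index sorted
            self._data[key] = value

        def get(self, key):
            return self._data.get(key)           # indexed (point) access

        def scan(self, lo, hi):
            """Cursor-style range query: yield (key, value) for lo <= key < hi."""
            i = bisect.bisect_left(self._keys, lo)
            while i < len(self._keys) and self._keys[i] < hi:
                k = self._keys[i]
                yield k, self._data[k]
                i += 1

    store = EmbeddedStore()
    store.put("user:2", "bob")
    store.put("user:1", "alice")
    store.put("user:3", "carol")
    print(store.get("user:2"))                   # point lookup -> bob
    print(list(store.scan("user:1", "user:3")))  # ordered scan of user:1, user:2
    ```

    A real embedded engine would back both paths with on-disk structures, but the shape of the API the application links against is much the same.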

    BerkeleyDB is one well-known example of an embedded database, SQLite is another, the file system is perhaps the most commonly used database, and there have been many other btree libraries in common use. I've used C-tree on several projects. In a battle of old versus new, a user named IM46 compared LevelDB to BerkeleyDB and found that LevelDB solidly outperforms BerkeleyDB for larger databases.

    Programmers usually thought doing this stuff was easy, wrote their own failed on-disk btree library (raises hand), and then looked around for a proven product. It's only relatively recently that databases have gone up-market and included a network layer and higher-level services.

    Building a hybrid application/database architecture is still a very viable option when you want everything to be just so. If you are going to load balance requests across sharded application servers anyway, using a heavyweight external database infrastructure may not be necessary.

    The LevelDB mailing list started off very active and has died down a bit, but is still nicely active and informative. Here are some excellent FAQish tips, performance suggestions, and porting issues extracted from the list:

    • Largest tested database: 1 billion entries with 16 byte keys and 100 byte values (roughly 100 GB of raw data, about half that after compression).
    • LevelDB has been Open Sourced.
    • Relationship between LevelDB and BigTable: The implementation of LevelDB is similar in spirit to the representation of a single Bigtable tablet (section 5.3). However the organization of the files that make up the representation is somewhat different and is explained [in source code comments]. They wanted to put together something like the BigTable tablet stack that had minimal dependencies and would be suitable for open sourcing, and also would be suitable for use in Chrome (for the IndexedDB implementation). LevelDB has the same general design as the BigTable tablet stack, but does not share any of the code. [Emphasis added.]
    • Didier Spezia on log-structured-merge (LSM) trees: They are mostly useful to optimize random I/Os at insertion/delete time at the price of a slightly degradation of read access time. They are extremely efficient at indexing data in random order stored on rotational disks (i.e. better than b-trees).
    • Optimized for random writes. TokyoCabinet could be filled with a million 100-byte writes in less than two seconds if writing sequentially, but the time ballooned to ~2000 seconds when writing randomly. The corresponding slowdown for [LevelDB] is from ~1.5 seconds (sequential) to ~2.5 seconds.
    • In the tradition of BerkeleyDB it's a library you embed in your program, it's not a server. You will have to add the network layer, sharding, etc., if a single process won't suffice.
    • Quite appropriately threading decisions are left to the application, the library is not thread safe. Threads sharing iterators, for example, will need to lock.
    • Data is written in sorted order.
    • C++ only.
    • Variable sized keys are used to save memory.
    • What [LevelDB] does differently from B+trees is that it trades off write latency for write throughput: write latency is reduced by doing bulk writes, but the same data may be rewritten multiple times (at high throughput) in the background due to compactions.
    • Log-Structured Merge Trees offer better random write performance (compared to btrees). LevelDB always appends to a log file, or merges existing files together to produce new ones. So an OS crash will cause a partially written log record (or a few partially written log records). LevelDB recovery code uses checksums to detect this and will skip the incomplete records.
    • Search performance is still O(lg N) with a very large branching factor (so the constant factor is small and number of seeks should be <= 10 even for gigantic databases).
    • One early user found performance degraded at around 200 million keys.
    • Bigger block sizes are better, increasing the block size to 256k (from 64k).
    • Batching writes increases performance substantially.
    • Every write will cause a log file to grow, regardless of whether or not you are writing to a key which already exists in the database, and regardless of whether or not you are overwriting a key with the exact same value. Only background compactions will get rid of overwritten data. So you should expect high CPU usage while you are inserting data, and also for a while afterwards as background compactions rearrange things.
    • LevelDB Benchmarks look good:
      • Using 16 byte keys at 100 byte values:
        • Sequential Reads: LevelDB 4,030,000 ops/sec; Kyoto TreeDB 1,010,000 ops/sec; SQLite3 186,000 ops/sec
        • Random Reads: LevelDB 129,000 ops/sec; Kyoto TreeDB 151,000 ops/sec; SQLite3 146,000 ops/sec
        • Sequential Writes: LevelDB 779,000 ops/sec; Kyoto TreeDB 342,000 ops/sec; SQLite3 26,900 ops/sec
        • Random Writes: LevelDB 164,000 ops/sec; Kyoto TreeDB 88,500 ops/sec; SQLite3 420 ops/sec
      • Writing large values of 100,000 bytes each: LevelDB is even with Kyoto TreeDB. SQLite3 is nearly 3 times as fast. LevelDB writes keys and values at least twice.
      • A single batch of N writes may be significantly faster than N individual writes.
      • LevelDB's performance improves greatly with more memory, a larger write buffer reduces the need to merge sorted files (since it creates a smaller number of larger sorted files).
      • Random read performance is much better in Kyoto TreeDB because the database is cached in RAM.
      • View many more results by following the link, but that's the gist of it.
    • InnoDB benchmarks as run by Basho.
      • LevelDB showed a higher throughput than InnoDB and a similar or lower latency than InnoDB.
      • LevelDB may become a preferred choice for Riak users whose data set has massive numbers of keys and therefore is a poor match with Bitcask’s model.
      • Before LevelDB can be a first-class storage engine under Riak it must be portable to all of the same platforms that Riak is supported on.
    • LevelDB vs. Kyoto Cabinet: My Findings. Ecstortive says wait a minute here: Kyoto is actually faster.
    • A good sign of adoption, language bindings are being built: Java, Tie::LevelDB on CPAN
    • Comparing LevelDB and Bitcask: LevelDB is a persistent ordered map; bitcask is a persistent hash table (no ordered iteration). Bitcask stores a fixed size record in memory for every key. So for databases with large number of keys, it may use too much memory for some applications. Bitcask can guarantee at most one disk seek per lookup I think. LevelDB may have to do a small handful of disk seeks. To clarify, [LevelDB] stores data in a sequence of levels. Each level stores approximately ten times as much data as the level before it. A read needs one disk seek per level. So if 10% of the db fits in memory, [LevelDB] will need to do one seek (for the last level since all of the earlier levels should end up cached in the OS buffer cache). If 1% fits in memory, [LevelDB] will need two seeks. Bitcask is a combination of Erlang and C.
    • Writes can be lost, but that doesn't trash the data files: [LevelDB] never writes in place: it always appends to a log file, or merges existing files together to produce new ones. So an OS crash will cause a partially written log record (or a few partially written log records). [LevelDB] recovery code uses checksums to detect this and will skip the incomplete records.
    • LevelDB is being used as the back-end for IndexedDB in Chrome. For designing how to map secondary indices into LevelDB key/values, look at how the IndexedDB support within Chrome is implemented.
    • In case of a crash partial writes are ignored.
    • Possible scalability issues:
      • LevelDB keeps a separate file for every couple of MB of data, and these are all in one directory. Depending on the underlying file system, this might start causing trouble at some point.
      • Scalability is more limited by the frequency of reads and writes that are being done, rather than the number of bytes in the system.
    • Transactions are not supported. Writes (including batches) are atomic. Consistency is up to you. There is limited isolation support. Durability is a configurable option. Full blown ACID transactions require a layer on top of LevelDB (see WebKit's IndexedDB).
    • Michi Mutsuzaki compared LevelDB to MySQL as a key-value store. LevelDB had better overall insert throughput, but it was less stable (high variation in throughput and latency) than [MySQL]. There was no significant performance difference for 80% read / 20% update workload.
    • LevelDB hasn't been tuned for lots of concurrent readers and writers. Possible future enhancements:
      1. Do not hold the mutex while the writer is appending to the log (allow concurrent readers to proceed)
      2. Implement group commit (so concurrent writers have their writes grouped together).
    Related Articles

    Curt Woodward (from XConomy) reported Ex-CEO of Avanade Joins Cloud Startup, Opscode in an 8/10/2011 post:

    Opscode, a young Seattle company that sells cloud-computing management services, has added some executive firepower to help manage its growth—co-founder Jesse Robbins is handing over chief executive duties to Mitch Hill, founding CEO of tech-services company Avanade.

    Robbins, a veteran, is staying on board as chief community officer to guide work on Chef, Opscode’s open-source IT management software. And no, Robbins says, this isn’t one of those entrepreneur nightmares where the investors and board members jam a new executive down the founders’ throats.

    “I’m sleeping better at night, to be honest,” Robbins says. “It’s been pretty awesome.”

    Opscode makes it easier for companies to add cloud computing power by automating a lot of related tasks. Say your company wants to add more computing power—with big vendors like Amazon Web Services and Rackspace, that’s pretty easy. But someone still has to make sure all of those fancy new servers work with your existing computer systems. In the past, that could have meant a lot of tedious coding.

    Service providers like Opscode speed up the process by giving IT pros a standard way of plugging cloud computing power into their systems. They’re not the only ones in the field, but Opscode bases its offerings around the customizable, open-source Chef software that it developed.

    How does that make money? Opscode sells a hosted service based around the Chef software. It’s also introduced “Private Chef,” which targets companies that need to make sure their information stays on a private network. The hunger for more of those premium services, just a few months after they were introduced, is what drove the Opscode team to look for a more seasoned IT executive.

    When the Hosted Chef service became available, Robbins says, Opscode figured their medium and large business customers would buy a certain amount of service, and add slowly after the initial spike. Instead, he says, it “spread almost instantly in those companies,” leaving the small startup sprinting to keep up.

    “If I could set my wayback machine to a year ago, we would have staffed up faster to meet that enterprise demand so we could deal with those customers,” Robbins says.

    Enter Hill, who has seen this story play out before. “I’ve been working in IT for over 30 years,” he says. “We go through these cycles where we scale up and we scale out, and every time we scale out, we create a huge management problem.”

    The match was initially sparked by Bill Bryant of Draper Fisher Jurvetson, a member of the startup’s board. Hill took some time off after stepping down from Avanade in 2008, and started exploring a new career advising startups in the region during 2009 and 2010. That’s how he got …


    Having a high-power exec with a proven industry track record is a big plus for a cloud management software startup.

    <Return to section navigation list>

