Friday, April 29, 2011

Windows Azure and Cloud Computing Posts for 4/27/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: The MSDN Team conducted maintenance on MSDN blogs on Thursday afternoon. On Friday morning, many previously hidden posts appeared in my IE 9 RSS/Atom reader.

• Updated 4/29/2011 with articles marked from Michael Washington, Wade Wegner, Avkash Chauhan, Chris Hoff, David Linthicum, David Hardin, Glenn Gailey, Matt Thalman, Bruce Kyle, Robert Green, Steve Yi, Windows Azure Team, and Windows Azure AppFabric Team.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article to which you want to navigate.


Azure Blob, Drive, Table and Queue Services

• David Hardin described Configuring diagnostics.wadcfg to Capture Custom Log Files in a 3/31/2011 post (missed when posted):

Windows Azure Diagnostics (WAD) has the ability to copy files to Azure blob storage.  The feature is meant for copying custom log files but you can use it to copy any file that WAD has permission to access.  To configure this feature via diagnostics.wadcfg add appropriate <DataSource> and <DirectoryConfiguration> elements to the XML.

Within <DirectoryConfiguration> there is the option to use <Absolute> or <LocalResource>.  The local resource approach is appropriate in most situations since it is harder to define absolute paths that exist in both the DevFabric and Azure environments.  The MSDN documentation covering diagnostics.wadcfg shows the use of an absolute path to copy the logs from a %SystemRoot% location.

For custom logs use the local resource approach.  The configuration steps are:

  1. Define a <LocalStorage> element in ServiceDefinition.csdef.  Set the name attribute to any value desired.  Set the sizeInMB attribute value large enough to hold the anticipated log data.  Set cleanOnRoleRecycle to false for reasons discussed below.
  2. Add a <DirectoryConfiguration> element to diagnostics.wadcfg.  Set the container attribute value to the name of the Azure storage blob container you want WAD to use.  WAD automatically creates the blob container if it does not exist.  Set the directoryQuotaInMB attribute value to the same value used for the <LocalStorage> sizeInMB value in step 1.
  3. Add a <LocalResource> element within <DirectoryConfiguration>.  Set the name attribute to the same value used for the name attribute of the <LocalStorage> element in step 1.  Set the relativePath attribute value; a value of “.” means the root of the local storage.
    Warning:  Not specifying the relativePath attribute or setting the value to an empty string causes WAD to ignore the entire <DirectoryConfiguration> element.

Here is a snippet from within ServiceDefinition.csdef:

<LocalResources>
  <LocalStorage name="CustomLogs" cleanOnRoleRecycle="false" sizeInMB="128" />
</LocalResources>

Here is the corresponding snippet from within diagnostics.wadcfg:

<DataSources>
  <DirectoryConfiguration container="wad-custom" directoryQuotaInMB="128">
    <LocalResource name="CustomLogs" relativePath="." />
  </DirectoryConfiguration>
</DataSources>
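
To put the configuration to use, role code just writes log files into that local resource.  Here is a minimal sketch (mine, not from David’s post) that assumes the “CustomLogs” local resource name defined above:

using System;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class CustomLogWriter
{
    // Appends a line to a log file inside the "CustomLogs" local resource,
    // the directory WAD is configured above to copy to blob storage.
    public static void WriteLine(string message)
    {
        LocalResource logStorage = RoleEnvironment.GetLocalResource("CustomLogs");
        string logFile = Path.Combine(logStorage.RootPath, "custom.log");
        File.AppendAllText(logFile,
            DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
    }
}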

WAD does not delete or remove custom files; it just persists them to blob storage, so you’ll need to implement a cleanup strategy.  This is actually a good feature considering it may be inappropriate to delete the files.

Regarding setting cleanOnRoleRecycle to false, I suspect there are role recycle scenarios during which WAD will not have had time to transfer the files.  If the files are left on the disk then WAD will transfer them after the recycle.  This is another reason for implementing your own cleanup strategy.

Finally, there is some difference of opinion as to whether WAD needs an exclusive lock on the files it copies.  I believe some logging code uses file-writing techniques which prevent WAD from performing the copy.  Here are some links about the issue:

http://archive.msdn.microsoft.com/azurediag

http://blog.bareweb.eu/2011/01/implementing-azure-diagnostics-with-sdk-v1-3/

http://social.msdn.microsoft.com/Forums/en-US/windowsazure/thread/f4d880c0-3d79-4ddd-af56-2be9bba88a94/

For more posts in my WAD series:

http://blogs.msdn.com/b/davidhardin/archive/tags/wad/


Bruce Kyle explained How to Connect SharePoint 2010, Windows Azure in a 4/27/2011 post:

Windows Azure offers SharePoint developers the opportunity to apply their existing skills in the cloud. The integration between Windows Azure and SharePoint is based on:

  • Data integration and content delivery.
  • Leveraging Windows Azure Blob storage and the capability to build compelling BI dashboards.
  • Servicing multiple customers through common cloud applications and extending on-premises code. … 

For more information, see Steve Fox’s blog post SharePoint and Windows Azure: Why They’re Better Together.

How to Extend SharePoint into the Cloud

Get started now by extending your existing SharePoint business to the Windows Azure cloud.

 

  • Compute: Use Windows Azure’s internet-scale hosting to extend service-based scenarios into SharePoint’s powerful collaboration platform.
  • Storage: Use Windows Azure’s internet-scale hosting to extend service-based scenarios into SharePoint’s powerful collaboration platform.
  • Front End: Use Windows Azure to create customer-facing interfaces that easily integrate with SharePoint’s platform architecture.

How to Get Started

To get started, see the SharePoint and Windows Azure Developer Primer. This dev training kit shows how to integrate using:

  • ASP.NET and IFrame
  • Hosted Data within SharePoint
  • Custom Services including BCS connections
  • Integration of WCF on Windows Azure and SharePoint

SharePoint and Azure Overview Deck on Academy Live.

Also note the great developer resources for both SharePoint and Windows Azure development on C9/MSDN:


Brian Swan explained Sorting Azure Table Entities by Timestamp with PHP in a 4/26/2011 post:

This is a short post that describes one way to sort Windows Azure Table entities by timestamp when using the Windows Azure SDK for PHP. The problem boils down to sorting an array of objects by a timestamp property, so the information here is nothing that hasn’t been done before. However, after spending some time looking for a way to use a filter in the Windows Azure SDK for PHP API, I didn’t find one, so I’m hoping this post might save others some time. In the end, my solution simply uses the PHP array_multisort function.

To illustrate the problem and my solution, I’ll reuse some of the code I wrote in this post: Accessing Windows Azure Table Storage from PHP. The important piece in that post to keep in mind is that when you define a class that inherits from Microsoft_WindowsAzure_Storage_TableEntity, the Windows Azure SDK for PHP maps specially annotated properties to “columns” in a table that has the same name as the class name. So, the example class below maps to the Contacts table and the $Name, $Address, and $Phone properties map to “columns” in that table.

class Contact extends Microsoft_WindowsAzure_Storage_TableEntity
{
  /**
  * @azure Name
  */
  public $Name;

  /**
  * @azure Address
  */
  public $Address;

  /**
  * @azure Phone
  */
  public $Phone;
}

Also note that the Contact class inherits the $_partitionKey, $_rowKey, and $_timestamp properties, as well as the corresponding methods getPartitionKey(), getRowKey(), and getTimestamp().

With that class in place, I can execute a query like this to retrieve all contacts:

$tableStorageClient = new Microsoft_WindowsAzure_Storage_Table();

$contacts = $tableStorageClient->retrieveEntities("Contact", null, "Contact");

  • The first parameter in the call to retrieveEntities specifies the table from which entities are retrieved.
  • The second parameter is a filter for entities. Specifying null will retrieve all entities.
  • The last parameter specifies the object type entities are retrieved as.

To sort the entities by timestamp, I’ll use two functions (which I essentially copied from the comments in the array_multisort documentation):

function objSort(&$objArray,$indexFunction,$sort_flags=0)
{
     $indices = array();
     foreach($objArray as $obj)
     {
         $indices[] = $indexFunction($obj);
     }
     return array_multisort($indices,$objArray,$sort_flags);
}

function getIndex($obj)
{
     return $obj->getTimestamp();
}

Note: If you are using the Windows Azure SDK for PHP v3.0.0 or later, timestamps are returned as PHP DateTime objects. So, for this sorting solution to work, you’ll need to alter the getIndex function slightly (to call getTimestamp() on the returned DateTime object):

function getIndex($obj)
{
      return $obj->getTimestamp()->getTimestamp();
}

Now I can sort the $contacts array like this:

objSort($contacts,'getIndex');

That’s it! I love the comments in the PHP documentation. :-)

Hope this saves someone some time.


<Return to section navigation list> 

SQL Azure Database and Reporting

• Steve Yi announced TechNet Wiki- Overview of Security of SQL Azure on 4/29/2011:

TechNet has written an article that provides an overview of the security features of SQL Azure. Getting a clear understanding of security in the cloud is something we cannot encourage enough.

Click here for the article.

We covered this topic recently in another post where we shared a quick overview video and samples on how to effectively secure a SQL Azure database.


Keshava Kumar reported SQL Server Migration Assistant v5.0 is now available in a 4/28/2011 post to the SSMA blog:

Migrate from Sybase, Oracle, MySQL and Microsoft Access to SQL Azure and SQL Server with ease!

Microsoft announced today the release of SQL Server Migration Assistant (SSMA) v5.0, a family of products to further simplify the user experience in automating the migration of Oracle, Sybase, MySQL and Microsoft Access databases to SQL Server or SQL Azure.

In this release, the SSMA family of products has been improved further to reduce the cost and the risk of migration from Sybase, Oracle, MySQL and Microsoft Access. Our recent SSMA survey showed that 94% of SSMA downloaders would recommend the tool to others. SSMA products are available for free download and are supported for free over email.

In recent months, there has been an uptick in migration to SQL Azure and/or SQL Server; Microsoft has observed more than 8,000 downloads per month of the SSMA family of products.

Dollar Thrifty Auto Group, PEMEX, Eli Lilly, Lockheed Martin, CSR Limited, Florida Department of Education, Forest Oil, The Wyoming Department of Health Vital Statistics Services Program, Landratsamt Landshut, Horowhenua District Council, and Volvo Aero are just a few of the customers that have downloaded and used the SQL Server Migration Assistant toolkit to migrate to Microsoft SQL Server.

What’s New in this Release?

With this wave of releases, customers can now migrate to any edition of SQL Server, including the free Express edition.


  • Support for migrating to SQL Server “Denali”
  • Multi-thread data migration for improved scale and performance.
  • Globalization support for migrating non-English databases to SQL Server.
  • Support for installation of the SSMA Extension Pack on clustered SQL Server environments.

Sybase migration enhancements:

  • Support for migration to SQL Azure
  • Extended data access connectivity to Sybase ASE ADO.NET and ASE ODBC providers
  • Support for conversion of case-sensitive Sybase databases to case-sensitive SQL Server
  • Extended support for conversion of Non-ANSI joins for DELETE and UPDATE statements
  • Removed dependency on separate SYSDB database on target SQL Server

Oracle migration enhancements:

  • Report use of Oracle’s User Defined Type in database objects and inside PL/SQL

Free Downloads:

Customers and partners can provide feedback or obtain free SSMA product support from Microsoft Customer Service and Support (CSS) at ssmahelp@microsoft.com.

Resources:


Brian Moran posted Database Cloud Computing: Cloudy with a Chance of Meatballs to the SQL Server Magazine site on 4/27/2011:

I usually like to avoid writing about things that I don’t know too much about—especially when I’m being paid to write it! But this week I gave myself the liberty to explore cloud database ideas, even though I don’t consider myself to be an expert (at least not yet).

I think we all recognize that cloud technology will change the IT business in countless ways. However, I think we also see a tremendous amount of thunder and lightning when it comes to the cloud that doesn’t necessarily translate to reality. We see the storm on the horizon: It’s big, and it produces a lot of noise, flash, and excitement—but it’s still dry where we’re standing. Database-oriented cloud computing is especially subject to this weather phenomenon.

Are you actively working with database cloud technology in your job? Probably not. Do you know many people actively working with database cloud technology? Probably not. What does that mean? Well, it means that most people aren’t doing projects with databases in the cloud, which means that cloud databases are still a teeny tiny part of the market compared with hosted and on-premises database solutions.

I think we all sort of agree that cloud technology, including data-oriented clouds, is profoundly cool in many ways. Instant scale-out and back again on demand? Yeah, I want that. Dramatically reduced administration? Yep, I want that too. All the other stuff the cloud gives us? Sure; I’ll take some. But let’s ignore the thunder and lightning in two key areas of database cloud technology and look at reality for a minute, especially when it comes to SQL Azure.

Thunder and Lightning: “The cloud is super scalable.”
Dry Land: A few thousand dollars buys me a 4-core laptop with 8GB of memory and a 300GB+ SSD. 50GB from SQL Azure in the cloud is scalable? Really. Yeah. I know all about database sharding, but no matter how you slice it, SQL Azure probably isn’t what you want today if you need super high-end scaling.

Thunder and Lightning: “The cloud is super redundant, so you don’t need to worry about availability.”
Dry Land: Really? Did you happen to read about Amazon’s recent outage that took down many highly visible sites, such as Foursquare and HootSuite, while other high-end sites, such as Netflix and Zynga, escaped unscathed? To be fair, it was the first major high-profile outage in 5 years. That’s better than most of us can do on our own budgets. But still. Always there? Really?

Lately, I tend to think of the movie Cloudy with a Chance of Meatballs when I think about database cloud technology. Food falling from the sky sounds sort of cool at first. But honestly, it’s a bit different when a giant meatball lands on you. Maybe it’s not exactly what you needed or wanted after all—at least not in its current form. (Unless, of course, you’re a SQL Server rockstar, and the meat falling from the sky is bacon.)

I’m not anti-cloud. I honestly think cloud technology is the future of IT. But the future of cloud computing, database in particular, is still a bit cloudy. In fact, I suspect that in the short term, it won’t be traditional mainstream line of business database apps that move to the cloud. I think that database cloud technology will enable (as it’s already doing) an entirely new class of applications and people doing things from their proverbial garage on a low budget. I’m not sure that we’ve even seen the “Aha, so this is how you’re supposed to use a cloud database” design pattern emerge yet. I think we’ll know how to use cloud databases when we see it, and I think that eventually cloud databases will mature to the point where we consume database functionality like we do power from the grid: Plug in and use what we want. But that’s still off on the horizon. For now, I declare the short-term database cloud technology future as “cloudy with a chance of meatballs.”

I wonder what “a 4-core laptop with 8GB of memory and a 300GB+ SSD” has to do with SQL Azure’s scalability. SQL Azure offers “Plug in and use what we want” capabilities now. See my Build Big-Data Apps in SQL Azure with Federation article for Visual Studio Magazine’s March 2011 issue regarding forthcoming SQL Azure sharding features.


Steve Yi reported the availability of MSDN Article: Guidelines for Connecting to a SQL Azure Database in a 4/27/2011 post to the SQL Azure blog:

The flexibility of SQL Azure Database allows you to work with a myriad of applications, from familiar Microsoft products to other programming languages including PHP and Java. MSDN has written an article on some of the guidelines that developers should be aware of when connecting to SQL Azure Database. I suggest that everyone read this article, just to be familiar with some of the basics.

Click here for the article on MSDN.

The brief article includes a multitude of links to related SQL Azure topics.


<Return to section navigation list> 

MarketPlace DataMarket and OData

• Glenn Gailey (@ggailey777) described Getting Entity Data into Your Web Application in a 4/29/2011 post:

A friend said to me the other day “so, now that I have all of my data modeled as entity data using the Entity Framework, how do I get it into my web application?” While I will admit to not being an ASP.NET expert (despite having written some of the EntityDataSource web server control documentation), I have recently been investigating support for displaying OData feeds in web apps, so I thought that I would take a stab at enumerating the options that I have found for displaying entity data on the web. 

Silverlight Application

Displaying entity data in a Silverlight application is probably my personal favorite, since it leverages OData and I now know Silverlight pretty well. Silverlight is great (very “Flash”-y) and supports nice stuff like data binding. Since it’s a browser control, it behaves more like a client app (great for non-web devs like me). Plus it even runs out of the browser now. To consume entity data into a Silverlight application, you have two basic options:

OData Feeds

When you expose entity data via WCF Data Services, data is accessed by using the Open Data Protocol (OData). Silverlight includes a rich client for consuming OData feeds, the WCF Data Services client for Silverlight. Create an OData service when you want your data to be available to applications other than just Silverlight. Also, an OData service will work across domains. This quickstart shows how to consume an OData feed in a Silverlight client application (or watch this video). To see how to create the OData service that exposes Entity Framework data by using WCF Data Services, see this quickstart (or watch this video).
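
As a quick illustration (my sketch, not from the quickstart), consuming an OData feed from a Silverlight client looks roughly like this; NorthwindEntities, Customer, and the service URI are placeholders for the types generated by Add Service Reference against your own OData service:

using System;
using System.Data.Services.Client;
using System.Linq;

// Hypothetical sketch: NorthwindEntities and Customer stand in for your
// generated WCF Data Services client types.
public class CustomerLoader
{
    private readonly NorthwindEntities context =
        new NorthwindEntities(new Uri("http://localhost/Northwind.svc/"));

    public void LoadCustomers()
    {
        var query = (DataServiceQuery<Customer>)
            (from c in context.Customers
             where c.Country == "USA"
             select c);

        // Silverlight only supports asynchronous execution of OData queries.
        query.BeginExecute(result =>
        {
            // Marshal back to the UI thread before binding the results.
            foreach (Customer customer in query.EndExecute(result))
            {
                // Add to an ObservableCollection, log, etc.
            }
        }, null);
    }
}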

WCF RIA Services

Rich Internet Application (RIA) Services is a Silverlight-specific data access strategy that involves generating WCF endpoints for all CRUD operations against an entity data model. It is designed for same-domain, “end-to-end” Silverlight applications. This quickstart shows you how to access Entity Framework data in an Silverlight application by using WCF RIA Services.
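
For comparison, here is a rough sketch of a domain service (again mine, not from the quickstart); NorthwindEntities is a placeholder for an Entity Framework ObjectContext in your own project, and Customer for one of its entities:

using System.Linq;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;

// Hypothetical domain service exposing Entity Framework entities to Silverlight.
[EnableClientAccess]
public class NorthwindDomainService : LinqToEntitiesDomainService<NorthwindEntities>
{
    // Query method that the generated Silverlight client code can call.
    public IQueryable<Customer> GetCustomers()
    {
        return this.ObjectContext.Customers.OrderBy(c => c.CompanyName);
    }
}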

JavaScript and OData

As I mentioned earlier, OData is an excellent way to expose entity data in the web. Since OData is accessed via HTTP and returns XML or JSON, it is also perfect to use in a web site that uses JavaScript. The following libraries can be used to access an OData service to display entity data in a web page by using JavaScript:

AJAX

You can use the ASP.NET AJAX library to access an OData service. There are quite a few examples of how to do this on the ASP.NET site.

JQuery

You can also use JQuery to access OData, as Shawn Wildermuth demonstrates in his informative blog post: WCF Data Services and jQuery.

datajs

A newcomer on the scene, the datajs library is designed specifically for OData and HTML5, with some OData examples here.

ASP.NET EntityDataSource Web Server Control

Use the ASP.NET EntityDataSource web server control to display data by directly binding entity data to controls in an ASP.NET web page. This control enables you to access the entity data model exposed by Entity Framework directly and bind the data directly to web controls. This quickstart shows how to use the EntityDataSource web server control, and there is another useful EntityDataSource tutorial on the ASP.NET site.

ASP.NET MVC

You can use an Entity Framework data source with an ASP.NET MVC application, as demonstrated in this tutorial. I’ve never tried to use MVC, but I applaud the model-view-controller pattern as I use the related model-view-view-model (MVVM) pattern in my Windows Phone 7 apps.

ASP.NET Dynamic Data

While I don’t know much about ASP.NET Dynamic Data, apparently it uses this same EntityDataSource control in the data scaffolding. This walkthrough shows how to use Entity Framework with Dynamic Data.

. . . . .

Please let me know if I left anything out.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

• Wade Wegner (@wadewegner) posted Cloud Cover Episode 45 - Windows Azure AppFabric Caching with Karandeep Anand on 4/29/2011:

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show at @CloudCoverShow.

In this episode, Karandeep Anand joins Steve to discuss the newly-released Windows Azure AppFabric Caching service and show you exactly how to get started using Caching in your application.

In the news:

Get the full source code for http://cloudcovercache.cloudapp.net.


The Windows Azure AppFabric Team announced Windows Azure AppFabric Caching Service Released! as a commercial service on 4/28/2011:

Recently, at the MIX conference we announced the release of a new version of the Access Control service and the upcoming release of the Caching service.

Today we are excited to announce that the Caching service has been released as a production service.

The Caching service is a distributed, in-memory application cache service that accelerates the performance of Windows Azure and SQL Azure applications by allowing you to keep data in memory, saving you the need to retrieve that data from storage or a database.
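
The service is consumed through the Microsoft.ApplicationServer.Caching client API. Here is a minimal cache-aside sketch of my own (not from the announcement), which assumes the cache endpoint and authentication token are configured in the application's configuration file; Product and LoadProductFromDatabase are placeholders:

using System;
using Microsoft.ApplicationServer.Caching;

public class ProductCache
{
    // Reads the cache client settings from the application's configuration file.
    private static readonly DataCacheFactory CacheFactory = new DataCacheFactory();
    private static readonly DataCache Cache = CacheFactory.GetDefaultCache();

    public Product GetProduct(string productId)
    {
        // Try the in-memory cache first.
        var product = Cache.Get(productId) as Product;
        if (product == null)
        {
            // Cache miss: load from SQL Azure (or another store) and cache it.
            product = LoadProductFromDatabase(productId);
            Cache.Put(productId, product);
        }
        return product;
    }

    private Product LoadProductFromDatabase(string productId)
    {
        // Placeholder for a real data-access call.
        return new Product { Id = productId };
    }
}

[Serializable] // Cached objects must be serializable.
public class Product
{
    public string Id { get; set; }
}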

We provide 6 different cache size options for you to choose from, varying from 128MB to 4GB.

In order for you to be able to start using the service and evaluate your needs, we are running a promotion period in which we will not be charging for the service for billing periods prior to August 1, 2011.

If you are signed up to one of the Windows Azure Platform Offers you might be eligible to get the 128MB cache option for free for a certain period of time depending on the specific offer. You can find more details on the Windows Azure Platform Offers page.

The service is billed on a monthly basis, based on the cache size you sign up for. You can also sign up for more than one cache and use these multiple caches in order to get a total cache size that is different or bigger than the standard cache sizes we provide.

You should read the Windows Azure AppFabric FAQ on MSDN in order to understand the pricing details and the advantages you get from the Caching service compared to alternatives.

The prices of the different cache sizes are the following:

  • 128 MB cache for $45.00/month
  • 256 MB cache for $55.00/month
  • 512 MB cache for $75.00/month
  • 1 GB cache for $110.00/month
  • 2 GB cache for $180.00/month
  • 4 GB cache for $325.00/month

To learn more about the Caching service please use the following resources:

The service is already available in our production environment at: http://appfabric.azure.com.

For questions on the Caching service please visit the Windows Azure Storage Forum.

Take advantage of our free trial offer to get started with the Caching service and Windows Azure AppFabric. Just click on the image below and get started today!



Scott Densmore reported Another Release for Claims Identity and Access Control - Drop 3 on 4/28/2011:

We just posted another drop for the Claims Based Identity Guidance.  This will be our last drop before we release. We would love any feedback. This new drop includes:

  • All samples for ACS (updated to ACS production version).
  • A sample for using WP7 as a client for REST services
  • All previous samples refactored and cleaned-up
  • A new Dependency Checker (much simplified).
  • A new sample with ACS as a Federation Provider
  • All ACS chapters
  • Two appendices on protocols and authorization strategies

What are you waiting for? Go download the new drop and give your feedback!


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

The Windows Azure Connect Team reported a problem with the Windows Azure SDK 1.4 Refresh on 4/25/2011:

We have found an issue with the new Windows Azure SDK 1.4 refresh which causes Windows Azure Connect endpoints to fail to deploy on Windows Azure Roles built using this SDK release. We have since fixed the issue and updated the Web Platform Installer feed.

If you downloaded and installed the Windows Azure SDK 1.4 refresh from the Windows Azure web site before 4pm PST 4/25/2011, you will have to uninstall and reinstall it.

1. Go to Control Panel\Programs\Programs and Features, uninstall SDK 1.4 refresh.


2. Install Windows Azure SDK 1.4 refresh from the Windows Azure web site.

We apologize for any inconvenience this issue may have caused.

I wonder if the team meant PDT?


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The ADO.NET Team explained MVC + Code-First + Azure Part 2: Deployment in a 4/14/2011 post (missed when posted):

In the previous article [see post below], you learned how to create a basic MVC application using Code First. In the second and final section you will learn how to prepare and deploy your application to Windows Azure using Visual Studio.

Deploying to Windows Azure
Preparing Application

Having tested our app locally, it is time to set up Azure to deploy our app. Although this process may seem long, all the steps except for deployment only have to be done once per application.

The first thing we must do before deploying is ensure our project has access to all of the assemblies we are referencing. Windows Azure machines run .NET 4.0, so we have to copy locally any newer assemblies we may be using. We do this by going to the Solution Explorer in Visual Studio and expanding the References folder. Now we right-click on EntityFramework and select Properties from the context menu. Next, in the Properties window, we set Copy Local equal to ‘True’.

Repeat this process for:

  • Microsoft.WindowsAzure.Diagnostics
  • Microsoft.WindowsAzure.StorageClient
  • System.ComponentModel.DataAnnotations
  • System.Web.Mvc
  • System.Web.Routing
New Storage Account

We need to create a storage account through the Management Portal. Go to the ‘Hosted Services, Storage Accounts, & CDN’ section, and click on Storage Accounts. Select New Storage Account on the Ribbon, and on the dialog box enter a URL name. Make this name unique for your app. In this example we will use codefirstazure.

Now create an affinity group. We do this so that both the storage account and the hosted service reside within the same data center, which is good for application performance. Name the affinity group WorkoutsAffinityGroup, and select a location near you.

Note the properties pane contains some information about the storage account. We will use some of this information later to deploy our application.

New Hosted Service

Creating Hosted Service
On the Azure portal, click on ‘Hosted Services, Storage Accounts & CDN’ and then click on New Hosted Service. Enter a name for your service in the new window. We will call ours WorkoutsAzureApp. Enter workoutsazureapp in the URL Prefix textbox. Now click on the radio button to ‘Create or choose an affinity group’ and select the affinity group we created in the previous step. Under Development options, select ‘Do not deploy’, then hit OK.

Publishing Application
In Visual Studio, right-click WorkoutsAzureApp and click Publish… Under Credentials, hit <Add…>.

Now, on the new dialog, create a new certificate for authentication by clicking on the dropdown and selecting <Create…>. Enter a friendly name for the certificate. We will call ours WorkoutsCert. Click on ‘Copy the full path’ on the authentication window, then go back to the Management Portal and upload the certificate. To do this, click on Hosted Services, Storage Accounts & CDN. Then click on Management Certificates on the left hand menu, and click on Add a Certificate on the Ribbon.

On the new dialog window, click browse, then paste the path currently on your clipboard onto the ‘File name’ textbox. Now click Open, then click Done.

Now, on the Management Portal, copy the subscription ID that shows up on the properties pane on the left. Then go back to Visual Studio and paste it into the textbox asking for the subscription ID on the Windows Azure Project Management Authentication dialog. Name the credentials ‘WindowsAzureAccount’. Click OK.

At this stage, the Deploy Windows Azure project should be all filled in. Click OK to begin deployment.

The app will begin deployment. You can see the progress on the Windows Azure Activity Log.

When deployment completes, go to the Windows Azure Portal and click on the Deployment node. Next, click on the link under DNS name. This will bring up our application:

Maintaining the App

Development won’t stop after you deploy the first time. If you would like to make more changes to the application, the fastest way to do so is by upgrading the deployment, rather than redeploying from scratch. To do so, right click on WorkoutsAzureApp on Visual Studio and select Publish…. Now select ‘Create Service Package Only’ and click OK. A windows explorer window will open, showing you a file named ‘WorkoutsAzureApp.cspkg’ and another one named ‘ServiceConfiguration.cscfg’. Now go back to the Management Portal, go to the ‘Hosted Services, Storage Accounts & CDN’ section, and then click on the Hosted services folder. Now select your deployment, and click Upgrade on the Ribbon. On the new pop-up, click Browse Locally… next to the Package location textbox, and browse to the location of the two files created by publishing. Select the WorkoutsAzureApp file, and then repeat this task for the Configuration file. With the two files selected, click OK and the deployment will begin. When it finishes, your application should be ready to be used again.

Conclusion

In this exercise, you have learned how to build an MVC web app that is hosted in Windows Azure and SQL Azure, using Code First. We covered the basics of Code First, and showed you the simplest way to deploy your applications to the cloud. At this point, you should be able to write and deploy your own apps to Azure from Visual Studio. We hope that you find this article helpful. Please be sure to leave your feedback in the comments section.


• Pedro Ardilla described MVC + Code-First + Azure Part 1: Local Development in a 4/13/2011 post (missed when posted):

This entry will guide you through the process of creating a simple workout tracker application using ASP.NET MVC and the Code First development approach. MVC stands for Model View Controller; you can find more information on MVC here. The application will reside in Windows Azure, and it will be backed by a SQL Azure database. By the end of the article, we hope you understand how to use Code First with MVC, and how to test and deploy an application using MVC, Code First, SQL Azure, and Windows Azure.

Here is an overview of what we will cover:

  • How to develop an MVC application using Code First
  • Seeing some of the Code First basics at work
  • How to seamlessly use a SQL Azure database
  • Testing the application locally using the Windows Azure Tools for Visual Studio
  • Deploying the app to a Windows Azure staging Deployment using Visual Studio
  • Moving the application to a Production Deployment through the Azure Portal

Each topic will not be covered in depth in this article. However, you will find links wherever further explanation is required. Also, be aware that the MVC application we are building is for demonstration purposes only so we have taken the liberty to break conventions for the sake of brevity. We hope you find this post useful, and encourage you to leave any questions or comments at the bottom of the post!

Entity Framework and Azure

Entity Framework 4.1 and Azure work quite nicely together, as long as the configuration settings are correct. Here are some of the key things to have in mind while deploying a CF app to Azure. If any of this is not clear to you, please continue reading, as each bullet is explained in the appropriate context below:

  • Add PersistSecurityInfo=true to the connection string to allow Code First to create the database in SQL Azure. Make sure to remove PersistSecurityInfo from the connection string after the database is created.
  • Ensure that any assembly referenced in your project that is newer than .NET 4.0 has Copy Local = true
  • Make sure all third-party assemblies are signed
  • Uploading the security certificate is key for Visual Studio to communicate with Windows Azure
  • Make sure to set customErrors in Web.config to RemoteOnly so that custom errors are only shown to remote clients.
  • Use the ‘Upgrade’ option on the Windows Azure Management Portal whenever you make changes to the application. It is faster than re-deploying the app from Visual Studio.
  • System.Transactions transactions cannot be used with SQL Azure. Please see this article for General SQL Azure Guidelines and Limitations
  • You may run into connection retry issues. Check out this blog post for troubleshooting options; a minimal retry sketch follows this list.
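
Here is a minimal sketch of that kind of retry logic (my own, under simplifying assumptions): it uses a fixed delay and treats every SqlException as potentially transient, whereas production code should inspect the specific error numbers.

using System;
using System.Data.SqlClient;
using System.Threading;

public static class SqlRetryHelper
{
    // Executes an action against SQL Azure, retrying a few times on SqlException.
    public static void ExecuteWithRetry(Action action, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                action();
                return;
            }
            catch (SqlException)
            {
                // Simplified: retry on any SqlException up to maxAttempts.
                if (attempt >= maxAttempts)
                {
                    throw;
                }
                Thread.Sleep(TimeSpan.FromSeconds(2));
            }
        }
    }
}

Typical usage would be wrapping a save call, for example SqlRetryHelper.ExecuteWithRetry(() => context.SaveChanges());
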
Pre-Requisites

To complete this exercise, you must:

  1. Download and Install the Windows Azure SDK and Windows Azure Tools for Microsoft Visual Studio.
  2. Download and install the latest version of the NuGet Package Manager.
  3. Steps 1 and 2 will allow you to create and deploy a Windows Azure app locally. To deploy the application to the cloud, you must obtain a Windows Azure account. Click here to sign up for Windows Azure or here to obtain a free trial (free trial is only available through June 30th, 2011).
  4. Create a SQL Azure account. For that, you can follow the steps here.
Getting Started: Creating a Windows Azure Project

After fulfilling the list above, we can get started with development. The first step is to create a new Windows Azure Project in Visual Studio. To do so, press Ctrl + Shift + N and on the New Project window, click on Cloud under the Visual C# node, then select Windows Azure Project. Name your project WorkoutsAzureApp, and click OK.

A window called New Windows Azure Project will come up. We will use the window to add the MVC Role to our application. Double-click on ASP.NET MVC 2 Web Role, then click OK. Note that we could also use MVC 3; however, we are using MVC 2 since it is the latest version offered through the ‘New Windows Azure Project’ dialog at the time of writing.

A new window will come up asking you if you would like to create unit tests. Select No to skip the creation of unit tests, and then click OK. We skip creation of unit tests for the sake of simplicity, however, you should strongly consider using unit tests in your application. After these steps, we will get a project that we can immediately begin working with.

Before we can use code first, we must bring in the Entity Framework assembly. We can get it from NuGet by right clicking the references node in the Solution Explorer, and selecting Add Library Package Reference… When the window opens, select Online from the menu on the left, then select EFCodeFirst from the list, and click Install. A dialog will show up asking to accept the License terms. Click ‘I Accept’ and then close the Add Library Package Reference window.

Creating the Model

We will begin by creating the model, which is where we will have most of our interaction with code first. We will create a new model by right-clicking on the Models folder on the Solution Explorer, then going to Add, and selecting New Item… On the new screen, create a new class called WorkoutsModel.cs

We can start creating the model for our workouts. Within the namespace, we will have two classes that represent our entities, named Workout and Gear. Feel free to delete the WorkoutsModel class as we will not use it. We will also have a class named WorkoutsContext which will inherit from DbContext, and will help us keep track of our entities. Note that it would be best practice to have your POCO classes and your context in different files. We are keeping them in the same file to keep our walkthrough short.

Here is what our classes will look like:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Data.Entity.Database;

namespace MvcWebRole1.Models
{
    public class Workout
    {
        public Workout()
        {
            Gear = new List<Gear>();
        }

        public int Id { get; set; }

        [StringLength(50, MinimumLength = 3)]
        public string Name { get; set; }

        public TimeSpan? Duration { get; set; }
        public decimal? Distance { get; set; }
        public virtual ICollection<Gear> Gear { get; set; }
    }

    public class Gear
    {
        public Gear()
        {
            Workouts = new List<Workout>();
        }

        public int Id { get; set; }
        public string Brand { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Workout> Workouts { get; set; }
    }

    public class WorkoutsContext : DbContext
    {
        public DbSet<Workout> Workouts { get; set; }
        public DbSet<Gear> Gear { get; set; }

        protected override void OnModelCreating(
            System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Workout>().ToTable("Workouts");
            modelBuilder.Entity<Gear>().ToTable("Gear");
            modelBuilder.Entity<Workout>().Property(c => c.Name).IsRequired();
        }
    }
}

The first class will be for workouts. Each workout will have an ID, name, duration, distance, and a collection of gear associated with the workout. The second class is called Gear and it will contain an Id and two strings for the brand and name respectively. Observe some of the conventions at work. For instance, Code First will use the Id property in Workouts as the primary key based on convention. Additionally, data annotations are used to shape the data. The StringLength annotation above the Workout’s Name property is an example of this.

Creating the Controller

Create a controller by right clicking on the Controllers node on the solution explorer, selecting Add…, and clicking on Controller…. Enter WorkoutsController on the textbox, and select the checkbox to add action methods for all CRUD scenarios. Now, press Add.

By selecting the checkbox above, MVC scaffolding provides us with a set of methods we can fill in to complete our controller. Some of the noteworthy methods will look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using MvcWebRole1.Models;

namespace MvcWebRole1.Controllers
{
    public class WorkoutsController : Controller
    {
        WorkoutsContext db = new WorkoutsContext();

        //
        // GET: /Workouts/
        public ActionResult Index()
        {
            return View(db.Workouts.ToList());
        }

        //
        // GET: /Workouts/Details/5
        public ActionResult Details(int id)
        {
            Workout w = db.Workouts.Find(id);
            return View(w);
        }

        //
        // POST: /Workouts/Create
        [HttpPost]
        public ActionResult Create(Workout w)
        {
            try
            {
                if (ModelState.IsValid)
                {
                    db.Workouts.Add(w);
                    db.SaveChanges();
                    return RedirectToAction("Index");
                }
                return View(w);
            }
            catch
            {
                return View();
            }
        }

        //
        // GET: /Workouts/Edit/5
        public ActionResult Edit(int id)
        {
            Workout w = db.Workouts.Find(id);
            return View(w);
        }

        //
        // POST: /Workouts/Edit/5
        [HttpPost]
        public ActionResult Edit(Workout w)
        {
            try
            {
                if (ModelState.IsValid)
                {
                    var workout = db.Workouts.Find(w.Id);
                    UpdateModel(workout);
                    db.SaveChanges();
                    return RedirectToAction("Index");
                }
                return View(w);
            }
            catch
            {
                return View();
            }
        }
    }
}

Note that at the top we created an instance of WorkoutsContext named db. We read from it when we query for workouts on the Index and Details methods, and we change it when we create or edit our workouts.

Creating Views

We need to build the project to be able to create strongly typed views from our controller methods (you can build the project by pressing Ctrl+Shift+B). We create views for each method by right-clicking on the method name in the code itself and selecting Add View. In the new dialog, select ‘Create a strongly-typed view’ and pick MvcWebRole1.Models.Workout from the View Data Class dropdown. We will use the same class for all of our views, as they will all be for workouts. The view content will be unique for each view: for the Index view, select List from the View Content dropdown; for the Details method, select Details. Follow this pattern for the Create and Edit methods.

We will only need to slightly alter the views. We will make sure to pass in the item keys to the ActionLink methods. Here is what they will look like:

<td>
<%: Html.ActionLink("Edit", "Edit", new { id=item.Id }) %> |
<%: Html.ActionLink("Details", "Details", new { id = item.Id })%>
</td>

In the above example, we have removed the ‘Delete’ action as we won’t be implementing it on this walkthrough.

Database Initializer

Code First allows us to plant seed data into our database by using a database initializer. We will add a new class to our Workouts Model for the sake of simplicity. Alternatively you could create this in a separate file under the Models folder. Here is what the initializer looks like:

public class WorkoutsInitializer : DropCreateDatabaseAlways<WorkoutsContext>
{
    protected override void Seed(WorkoutsContext context)
    {
        Gear Bicycle = new Gear { Name = "P2", Brand = "Cervelo" };
        Gear Wetsuit = new Gear { Name = "Torpedo", Brand = "TYR" };
        Gear Watch = new Gear { Name = "310xt", Brand = "Garmin" };

        Workout swim = new Workout { Name = "Swim", Distance = 3800, Duration = new TimeSpan(1, 10, 0), Gear = new List<Gear> { Wetsuit } };
        Workout bike = new Workout { Name = "Bike Ride", Distance = 112, Duration = new TimeSpan(6, 15, 0) };
        Workout run = new Workout { Name = "Run", Distance = 26, Duration = new TimeSpan(4, 20, 0) };

        bike.Gear = new List<Gear> { Bicycle, Watch };
        run.Gear = new List<Gear> { Watch };

        context.Gear.Add(Bicycle);
        context.Gear.Add(Wetsuit);
        context.Workouts.Add(swim);
        context.Workouts.Add(bike);
        context.Workouts.Add(run);
        context.SaveChanges();
    }
}

We can ensure the initializer gets called by adding the following setting to Web.config, within the <configuration> node:

<appSettings>
  <add key="DatabaseInitializerForType MvcWebRole1.Models.WorkoutsContext, MvcWebRole1" value="MvcWebRole1.Models.WorkoutsInitializer, MvcWebRole1" />
</appSettings>

Before we move to local deployment, we must edit the Global.asax to ensure our application immediately reroutes to our Workouts page. This is done by editing routes.MapRoute() so that it uses the Workouts controller as opposed to the Home controller. The final result is:

routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Workouts", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

Deploying Locally Using SQL Azure

Creating a Valid Connection String

It will only take a few more steps to deploy our workouts application locally using SQL Azure as the backend. To leverage SQL Azure, all we have to do is change the database connection string to target our SQL Azure database. A valid connection string can be obtained through the Windows Azure Portal, under the Database section. Once there, expand the subscriptions tree until you can see the ‘master’ database. Click on master on the tree on the left, and then click on View… under ‘Connection Strings’ on the Properties pane on the right. This will bring up a dialog with valid connection strings for ADO.NET, ODBC, and PHP. The string will look like the following:

Server=tcp:<YourServer>.database.windows.net,1433;Database=master;User ID=<YourUserID>;Password=myPassword;Trusted_Connection=False;Encrypt=True; 

Copy this string to your clipboard and go back to Visual Studio, then open Web.config under ‘MvcRole1’. Once there, look for the <connectionStrings> node and create a new connection string named WorkoutsContext. Set the connection string equal to the one obtained in the Windows Azure Portal, then replace the placeholders with your real username, password, and desired database name.

Lastly, in order to allow CodeFirst to create the database on SQL Azure, we must append PersistSecurityInfo=True; to our connection string. The final connection string will be similar to the one immediately below:

<add name="WorkoutsContext"
     connectionString="Server=tcp:<YourServer>.database.windows.net,1433;Database=WorkoutsDB;User ID=<YourUserID>;Password=myPassword;Trusted_Connection=False;Encrypt=True;PersistSecurityInfo=True;"
     providerName="System.Data.SqlClient" />

Remember to remove PersistSecurityInfo=True once the database has been created to ensure your credentials are not left in memory.

Changes To Web.config
We will make some changes to our app configuration so that it runs smoothly on Windows Azure. Remove the following nodes from <system.web>:

  • <authentication>
  • <membership>
  • <profile>
  • <roleManager>

While testing locally, we will use impersonation to make sure we can create the database. This will only work if you have admin access to your computer. To use impersonation, type <identity impersonate="true" /> inside the <system.web> node.

Now, press F5 to deploy your application. Your default browser should show your application:

We can verify that Code First created the database using SQL Server Management Studio or the Windows Azure Portal. You can test the app by pressing ‘Create New’ and adding a new workout. Make sure to remove the identity impersonation before continuing to part 2 of the series.

Conclusion

At this point, our MVC application is running perfectly on our machine thanks to the Windows Azure SDK. In the second and final part of this series you will find out how to deploy your application to Windows Azure. Please let us know if you found this article helpful by leaving a comment below!


The Windows Azure Team posted Real World Windows Azure: Interview with Sebastian Iglesias, Business Development Manager, Tata Consultancy Services on 4/28/2011:

As part of the Real World Windows Azure series, we talked to Sebastian Iglesias, Business Development Manager at Tata Consultancy Services, about using the Windows Azure platform to deliver services. Here's what he had to say:

MSDN: Tell us about Tata Consultancy Services.

Iglesias: Tata Consultancy Services (TCS) is a global IT services, business solutions, and outsourcing organization. We offer an integrated portfolio of IT and IT-enabled services delivered through our Global Network Delivery Model, which is recognized as a benchmark of excellence in software development. A part of the Tata group, one of India's largest industrial conglomerates, TCS employs more than 174,000 people in 42 countries. The company generated revenues of more than [U.S.]$6.3 billion for the fiscal year that ended March 31, 2010.

MSDN: How did Tata Consultancy Services get started as a cloud services provider?

Iglesias: When the Windows Azure platform was released, the Microsoft Technology Excellence Group at TCS started experimenting with the technology. In mid-2009, we developed a proof-of-concept application that we successfully demonstrated at various events. By integrating three technologies-Microsoft ASP.NET MVC 3, jQuery, and Ajax-we developed a code generator that provides developers with easy-to-use templates and wizards that can create source code quickly and efficiently. This results in a 50 to 60 percent increase in productivity by automating repetitive tasks, and it cuts in half the time-to-market. The code generator can generate simple data entry screens. We also have a few software-as-a-service products, including a banking solution known as BaNCS.

MSDN: What kinds of cloud services do you offer to customers?

Iglesias: We offer three cloud services. Firstly, we provide Cloud Advisory Services, in which we focus on analysing cloud readiness and recommending target states. This includes planning application migration, calculating costs and risks, identifying business drivers, and developing business cases. Secondly, we offer Cloud Migration and Development Services, which includes migrating applications and databases to the cloud, re-engineering applications, and developing and deploying hybrid applications. Thirdly, we offer Deploy and Manage Services, which mainly focuses on managing service level agreements and implementing governance infrastructure.

MSDN: Describe some of your solutions.

Iglesias: One of our customers had several public and intranet websites hosted in its own IT infrastructure. This company's websites experienced peak demand a few weeks a year. It needed a solution that would lower infrastructure costs while handling usage fluctuations. TCS developed a methodology called Common Adoption Framework that can be used to migrate a portfolio of applications to the Windows Azure platform in an iterative manner.

Another company, a U.S.-based paint and chemical manufacturer, had several public websites hosted in its own IT infrastructure, and these needed to scale to meet business demands at certain times. The company's IT professionals were pressured to quickly roll out new content reflecting the newest products. TCS recommended the Windows Azure platform and applied its proven ProTeam methodology to implement the project. This included analysing existing technology, submitting a detailed design document to the customer, and migrating the relevant contents to the Windows Azure platform.

MSDN: Explain a few of your customers' challenges and how you address them with a cloud solution.

Iglesias: While migrating the public website of a customer, we needed to move the house-built authentication and authorization systems to the cloud. We solved this by using Windows Identity Foundation, which allows developers to use a single, simplified identity model based on claims. We have also moved a huge number of scripts, images, videos, and rich media applications to Blob Storage, a part of Windows Azure that provides persistent and durable storage in the cloud for text and binary data.

MSDN: What benefits have TCS customers realized by using the Windows Azure platform?

Iglesias: One of the main benefits of Windows Azure is its capacity to scale on demand. Also, by using the Windows Azure platform, customers can be freed from the grip of internal IT barriers. Customers have also realized significant administrative cost reductions by hosting applications in the cloud instead of in their on-premises infrastructures. A solution hosted on the Windows Azure platform is easier to maintain, which also contributes to reducing overall IT costs. Also, customers can improve time-to-market and roll out new features or fixes more rapidly.

To read more Windows Azure customer success stories, visit: www.windowsazure.com/evidence.


• Avkash Chauhan described Ruby on Rails in Windows Azure - Part 1 - Setting up Ruby on Rails in Windows 7 Machine with test Rails Application on 4/26/2011:

Let's start with the main component installation:

  1. Ruby 1.9.2
  2. Rails 3.0.7

Ruby Installation:

http://rubyforge.org/frs/?group_id=167

I have installed it at C:\Applications\Ruby\Ruby192

Gems (Ruby Package Manager) Installation

http://rubyforge.org/frs/?group_id=126

Unzip the above zip file into the C:\Applications\Ruby\ folder.

To start the gem installation, use the command ruby <Path_to>\setup.rb as below:

C:\Applications\Ruby>dir
Volume in drive C has no label.
Volume Serial Number is 8464-7B7C

Directory of C:\Applications\Ruby

04/25/2011  01:30 PM    <DIR>          .
04/25/2011  01:30 PM    <DIR>          ..
03/28/2011  04:23 PM            14,996 HelloWorld.docx
03/23/2011  05:04 PM               122 help.txt
03/23/2011  04:45 PM         4,823,453 ruby-1.8.7-p174.tar.gz
03/23/2011  04:48 PM    <DIR>          Ruby192
04/05/2011  02:17 PM    <DIR>          rubygems-1.7.2
03/23/2011  04:45 PM        12,502,696 rubyinstaller-1.9.2-p180.exe
               4 File(s)     17,341,267 bytes
               4 Dir(s)  36,662,747,136 bytes free

C:\Applications\Ruby>ruby rubygems-1.7.2\setup.rb
RubyGems 1.7.2 installed

=== 1.7.2 / 2011-04-05

* 1 Bug Fix:
  * Warn on loading bad spec array values (ntlm-http gem has nil in its cert
    chain)

------------------------------------------------------------------------------

RubyGems installed the following executables:
        C:/Applications/Ruby/Ruby192/bin/gem

C:\Applications\Ruby>

Now I add C:\Applications\Ruby\Ruby192\bin to the system PATH so I can run gem.

C:\Applications\Ruby>gem
RubyGems is a sophisticated package manager for Ruby.  This is a
basic help message containing pointers to more information.

  Usage:
    gem -h/--help
    gem -v/--version
    gem command [arguments...] [options...]

  Examples:
    gem install rake
    gem list --local
    gem build package.gemspec
    gem help install

  Further help:
    gem help commands            list all 'gem' commands
    gem help examples            show some examples of usage
    gem help platforms           show information about platforms
    gem help <COMMAND>           show help on COMMAND
                                   (e.g. 'gem help install')
    gem server                   present a web page at
                                 http://localhost:8808/
                                 with info about installed gems
  Further information:
    http://rubygems.rubyforge.org

C:\Applications\Ruby>gem -v
1.7.2
Now we can install Rails as below:

C:\Applications\Ruby>gem install rails
Fetching: activesupport-3.0.7.gem (100%)
Fetching: builder-2.1.2.gem (100%)
WARNING: builder-2.1.2 has an invalid nil value for @cert_chain
Fetching: i18n-0.5.0.gem (100%)
Fetching: activemodel-3.0.7.gem (100%)
Fetching: rack-1.2.2.gem (100%)
Fetching: rack-test-0.5.7.gem (100%)
Fetching: rack-mount-0.6.14.gem (100%)
Fetching: tzinfo-0.3.26.gem (100%)
Fetching: abstract-1.0.0.gem (100%)
WARNING: abstract-1.0.0 has an invalid nil value for @cert_chain
Fetching: erubis-2.6.6.gem (100%)
Fetching: actionpack-3.0.7.gem (100%)
Fetching: arel-2.0.9.gem (100%)
Fetching: activerecord-3.0.7.gem (100%)
Fetching: activeresource-3.0.7.gem (100%)
Fetching: mime-types-1.16.gem (100%)
Fetching: polyglot-0.3.1.gem (100%)
Fetching: treetop-1.4.9.gem (100%)
Fetching: mail-2.2.17.gem (100%)
Fetching: actionmailer-3.0.7.gem (100%)
Fetching: thor-0.14.6.gem (100%)
Fetching: railties-3.0.7.gem (100%)
Fetching: bundler-1.0.12.gem (100%)
Fetching: rails-3.0.7.gem (100%)
Successfully installed activesupport-3.0.7
Successfully installed builder-2.1.2
Successfully installed i18n-0.5.0
Successfully installed activemodel-3.0.7
Successfully installed rack-1.2.2
Successfully installed rack-test-0.5.7
Successfully installed rack-mount-0.6.14
Successfully installed tzinfo-0.3.26
Successfully installed abstract-1.0.0
Successfully installed erubis-2.6.6
Successfully installed actionpack-3.0.7
Successfully installed arel-2.0.9
Successfully installed activerecord-3.0.7
Successfully installed activeresource-3.0.7
Successfully installed mime-types-1.16
Successfully installed polyglot-0.3.1
Successfully installed treetop-1.4.9
Successfully installed mail-2.2.17
Successfully installed actionmailer-3.0.7
Successfully installed thor-0.14.6
Successfully installed railties-3.0.7
Successfully installed bundler-1.0.12
Successfully installed rails-3.0.7
23 gems installed
Installing ri documentation for activesupport-3.0.7...
Installing ri documentation for builder-2.1.2...
Installing ri documentation for i18n-0.5.0...
Installing ri documentation for activemodel-3.0.7...
Installing ri documentation for rack-1.2.2...
Installing ri documentation for rack-test-0.5.7...
Installing ri documentation for rack-mount-0.6.14...
Installing ri documentation for tzinfo-0.3.26...
Installing ri documentation for abstract-1.0.0...
Installing ri documentation for erubis-2.6.6...
Installing ri documentation for actionpack-3.0.7...
Installing ri documentation for arel-2.0.9...
Installing ri documentation for activerecord-3.0.7...
Installing ri documentation for activeresource-3.0.7...
Installing ri documentation for mime-types-1.16...
Installing ri documentation for polyglot-0.3.1...
Installing ri documentation for treetop-1.4.9...
Installing ri documentation for mail-2.2.17...
Installing ri documentation for actionmailer-3.0.7...
Installing ri documentation for thor-0.14.6...
Installing ri documentation for railties-3.0.7...
Installing ri documentation for bundler-1.0.12...
Installing ri documentation for rails-3.0.7...
Installing RDoc documentation for activesupport-3.0.7...
Installing RDoc documentation for builder-2.1.2...
Installing RDoc documentation for i18n-0.5.0...
Installing RDoc documentation for activemodel-3.0.7...
Installing RDoc documentation for rack-1.2.2...
Installing RDoc documentation for rack-test-0.5.7...
Installing RDoc documentation for rack-mount-0.6.14...
Installing RDoc documentation for tzinfo-0.3.26...
Installing RDoc documentation for abstract-1.0.0...
Installing RDoc documentation for erubis-2.6.6...
Installing RDoc documentation for actionpack-3.0.7...
Installing RDoc documentation for arel-2.0.9...
Installing RDoc documentation for activerecord-3.0.7...
Installing RDoc documentation for activeresource-3.0.7...
Installing RDoc documentation for mime-types-1.16...
Installing RDoc documentation for polyglot-0.3.1...
Installing RDoc documentation for treetop-1.4.9...
Installing RDoc documentation for mail-2.2.17...
Installing RDoc documentation for actionmailer-3.0.7...
Installing RDoc documentation for thor-0.14.6...
Installing RDoc documentation for railties-3.0.7...
Installing RDoc documentation for bundler-1.0.12...
Installing RDoc documentation for rails-3.0.7...

C:\Applications\Ruby>

Now let's run the rails command to verify the installation:

C:\Applications\Ruby>rails
Usage:  rails new APP_PATH [options]

Options:
  -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                              # Default: C:/Applications/Ruby/Ruby192/bin/ruby.exe
  -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                              # Default: sqlite3
  -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
  -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
      [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
      [--edge]                # Setup the application with Gemfile pointing to Rails repository
      [--skip-gemfile]        # Don't create a Gemfile
  -O, [--skip-active-record]  # Skip Active Record files
  -T, [--skip-test-unit]      # Skip Test::Unit files
  -J, [--skip-prototype]      # Skip Prototype files
  -G, [--skip-git]            # Skip Git ignores and keeps

Runtime options:
  -f, [--force]    # Overwrite files that already exist
  -p, [--pretend]  # Run but do not make any changes
  -q, [--quiet]    # Supress status output
  -s, [--skip]     # Skip files that already exist

Rails options:
  -v, [--version]  # Show Rails version number and quit
  -h, [--help]     # Show this help message and quit

Description:
    The 'rails new' command creates a new Rails application with a default
    directory structure and configuration at the path you specify.

Example:
    rails new ~/Code/Ruby/weblog

    This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
    See the README in the newly created application to get going.

Rails uses SQLite by default, via the sqlite3-ruby gem, so let's install it now:

C:\Applications\Ruby>gem install sqlite3-ruby
Fetching: sqlite3-1.3.3-x86-mingw32.gem (100%)

=============================================================================

  You've installed the binary version of sqlite3.
  It was built using SQLite3 version 3.7.3.
  It's recommended to use the exact same version to avoid potential issues.

  At the time of building this gem, the necessary DLL files where available
  in the following download:

  http://www.sqlite.org/sqlitedll-3_7_3.zip

  You can put the sqlite3.dll available in this package in your Ruby bin
  directory, for example C:\Ruby\bin

=============================================================================

Fetching: sqlite3-ruby-1.3.3.gem (100%)

#######################################################

Hello! The sqlite3-ruby gem has changed it's name to just sqlite3.  Rather than
installing `sqlite3-ruby`, you should install `sqlite3`.  Please update your
dependencies accordingly.

Thanks from the Ruby sqlite3 team!

<3 <3 <3 <3

#######################################################

Successfully installed sqlite3-1.3.3-x86-mingw32
Successfully installed sqlite3-ruby-1.3.3
2 gems installed
Installing ri documentation for sqlite3-1.3.3-x86-mingw32...
Installing ri documentation for sqlite3-ruby-1.3.3...
Installing RDoc documentation for sqlite3-1.3.3-x86-mingw32...
Installing RDoc documentation for sqlite3-ruby-1.3.3...

As suggested by the sqlite3-ruby installation message, let's download sqlite3.dll from the link below:

http://www.sqlite.org/sqlitedll-3_7_3.zip

Unzip the DLL, place it in the Ruby\bin folder, and verify as below:

C:\Applications\Ruby>dir Ruby192\bin\SQL*.dll
Volume in drive C has no label.
Volume Serial Number is 8464-7B7C

Directory of C:\Applications\Ruby\Ruby192\bin

10/07/2010  10:37 PM           546,205 sqlite3.dll
               1 File(s)        546,205 bytes
               0 Dir(s)  36,548,767,744 bytes free

Now we have all the pieces needed to create our first application. Let's create a new Ruby on Rails application named “RubyonAzure” as below:

C:\Applications\Ruby>rails new rubyonazure
      create
      create  README
      create  Rakefile
      create  config.ru
      create  .gitignore
      create  Gemfile
      create  app
      create  app/controllers/application_controller.rb
      create  app/helpers/application_helper.rb
      create  app/mailers
      create  app/models
      create  app/views/layouts/application.html.erb
      create  config
      create  config/routes.rb
      create  config/application.rb
      create  config/environment.rb
      create  config/environments
      create  config/environments/development.rb
      create  config/environments/production.rb
      create  config/environments/test.rb
      create  config/initializers
      create  config/initializers/backtrace_silencers.rb
      create  config/initializers/inflections.rb
      create  config/initializers/mime_types.rb
      create  config/initializers/secret_token.rb
      create  config/initializers/session_store.rb
      create  config/locales
      create  config/locales/en.yml
      create  config/boot.rb
      create  config/database.yml
      create  db
      create  db/seeds.rb
      create  doc
      create  doc/README_FOR_APP
      create  lib
      create  lib/tasks
      create  lib/tasks/.gitkeep
      create  log
      create  log/server.log
      create  log/production.log
      create  log/development.log
      create  log/test.log
      create  public
      create  public/404.html
      create  public/422.html
      create  public/500.html
      create  public/favicon.ico
      create  public/index.html
      create  public/robots.txt
      create  public/images
      create  public/images/rails.png
      create  public/stylesheets
      create  public/stylesheets/.gitkeep
      create  public/javascripts
      create  public/javascripts/application.js
      create  public/javascripts/controls.js
      create  public/javascripts/dragdrop.js
      create  public/javascripts/effects.js
      create  public/javascripts/prototype.js
      create  public/javascripts/rails.js
      create  script
      create  script/rails
      create  test
      create  test/fixtures
      create  test/functional
      create  test/integration
      create  test/performance/browsing_test.rb
      create  test/test_helper.rb
      create  test/unit
      create  tmp
      create  tmp/sessions
      create  tmp/sockets
      create  tmp/cache
      create  tmp/pids
      create  vendor/plugins
      create  vendor/plugins/.gitkeep

C:\Applications\Ruby>

The above command creates the Rails application skeleton for us using the standard Rails directory layout. You can check the application folder as below:


C:\Applications\Ruby\rubyonazure>dir
Volume in drive C has no label.
Volume Serial Number is 8464-7B7C

Directory of C:\Applications\Ruby\rubyonazure

04/25/2011  01:58 PM    <DIR>          .
04/25/2011  01:58 PM    <DIR>          ..
04/25/2011  01:51 PM                36 .gitignore
04/25/2011  01:51 PM    <DIR>          app
04/25/2011  01:51 PM    <DIR>          config
04/25/2011  01:51 PM               161 config.ru
04/25/2011  01:51 PM    <DIR>          db
04/25/2011  01:51 PM    <DIR>          doc
04/25/2011  01:51 PM               743 Gemfile
04/25/2011  01:58 PM             1,642 Gemfile.lock
04/25/2011  01:51 PM    <DIR>          lib
04/25/2011  01:51 PM    <DIR>          log
04/25/2011  01:51 PM    <DIR>          public
04/25/2011  01:51 PM               271 Rakefile
04/25/2011  01:51 PM             9,126 README
04/25/2011  01:51 PM    <DIR>          script
04/25/2011  01:51 PM    <DIR>          test
04/25/2011  01:51 PM    <DIR>          tmp
04/25/2011  01:51 PM    <DIR>          vendor
               6 File(s)         11,979 bytes
              13 Dir(s)  36,548,046,848 bytes free

C:\Applications\Ruby\rubyonazure>

Once satisfied, let's launch the sample application by running “rails server” in the application folder:

C:\Applications\Ruby\rubyonazure>rails server
=> Booting WEBrick
=> Rails 3.0.7 application starting in development on http://0.0.0.0:3000
……

During startup, the following Security Alert will be displayed; accept it:

Finally, you will see the following log at the command prompt, showing that the application is running at localhost:3000:

C:\Applications\Ruby\rubyonazure>rails server
=> Booting WEBrick
=> Rails 3.0.7 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2011-04-25 13:58:57] INFO  WEBrick 1.3.1
[2011-04-25 13:58:57] INFO  ruby 1.9.2 (2011-02-18) [i386-mingw32]
[2011-04-25 13:58:57] INFO  WEBrick::HTTPServer#start: pid=9236 port=3000

Now you can open the browser at http://localhost:3000 to test the application as below:

Now click “About your application’s environment” as below:

Part 2: http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/26/ruby-on-rails-in-windows-azure-part-2-creating-windows-azure-sdk-1-4-based-application-to-host-ruby-on-rails-application-in-cloud.aspx

See below for Part 2.


• Wade Wegner (@wadewegner) posted Cloud Cover Episode 44 - Umbraco and Windows Azure on 4/22/2011 (missed when posted):

Join Wade and Steve each week as they cover the Windows Azure Platform. You can follow and interact with the show @CloudCoverShow.

In this episode, Vittorio Bertocci joins Wade as they explore the Umbraco Accelerator for Windows Azure. The accelerator is designed to make it easy to deploy Umbraco applications into Windows Azure, allowing users to benefit from automated service deployment, reduced administration, and the high availability & scalability provided by Windows Azure. Additionally, Vittorio discusses last week's launch of the Access Control Service 2.0.

In the news:

Get the Sesame Data Browser from Vittorio's tip


The Windows Azure Team announced NOW AVAILABLE: Windows Azure Platform Training Kit - April Update on 4/28/2011:

The April 2011 update of the Windows Azure Platform Training Kit is now available for download.  This version has been updated to the Windows Azure SDK 1.4 and Visual Studio 2010 SP1.  It also includes new and updated hands-on labs (HOLs) and a demo for the new Windows Azure AppFabric portal.

The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform.

Included in the April 2011 update of the Windows Azure Platform Training Kit are:

  • NEW: Authenticating Users in a Windows Phone 7 App via ACS, OData Services and Windows Azure HOL
  • NEW: Windows Azure Traffic Manager HOL
  • NEW: Introduction to SQL Azure Reporting Services HOL
  • UPDATED: Connecting Apps with Windows Azure Connect HOL (updated for Windows Azure Connect refresh)
  • UPDATED: Windows Azure CDN HOL (updated for the Windows Azure CDN refresh)
  • UPDATED: Introduction to the AppFabric ACS 2.0 HOL (updated to the production release of ACS 2.0)
  • UPDATED: Use ACS to Federate with Multiple Business Identity Providers HOL (updated to the production release of ACS 2.0)
  • UPDATED: Introduction to Service Bus HOL (updated to latest Windows Azure AppFabric portal)
  • UPDATED: Eventing on the Service Bus HOL (updated to latest Windows Azure AppFabric portal)
  • UPDATED: Service Remoting HOL (updated to latest Windows Azure AppFabric portal)
  • UPDATED: Rafiki demo (updated to latest Windows Azure AppFabric portal)
  • UPDATED: Service Bus demos (updated to latest Windows Azure AppFabric portal)

Click here to learn more and to download.


Robin Shahan wrote Migrate a Web App to Windows Azure, which DevPro Connections posted on 4/27/2011:

In my role as director of engineering for GoldMail, I was fortunate enough to be given an opportunity to do a major migration to Windows Azure and SQL Azure last year. As I tend to do, I took a long running jump and leaped off the cliff with no bungee cord. (My mother says, "No brains, no headaches," but my father says, "No guts, no glory." I've always liked my father better.)

There is a lot of information out there about Azure and a lot of theory, snippets of code, blogs full of code, and so on, but it seemed like I could never find exactly what I was looking for. So in this article, I will share the knowledge I've gained by discussing the bits that I use over and over again in my Azure-related projects:

  • how to turn a web application into an Azure web role
  • how to migrate a SQL Server database to SQL Azure
  • how to set up and use trace logging
  • how to handle SQL Azure connectivity issues

Tips Before You Get Started
To migrate a web application, you first need to confirm that your application does not require any software installed on the web server and see if you have any special Microsoft IIS configurations. If you have either of those conditions, you will need to figure out a way to live without it or figure out how to replace it. For example, I was using some third-party software that was installed on the web server. I could not install that software on my instance in the cloud, so I had to change my application to take up the slack and handle it.

I had completely forgotten about that software until I published my web application. The Azure instance would start up and then fail, start up and then fail, and so on, not unlike my first lesson driving a car with a standard transmission. When I replaced the component, it published the first time without fail.

I also used URL rewrites, but in this case, I found I could install the URL Rewrite Module and add the rewrite configuration to my web.config file.

With Azure, you have to publish your entire web application every single time, no matter how small the modification. Change a word, change a page, or add 20 pages, it doesn't matter—you have to publish the whole thing.

To read the rest of the article and download the code, click here

WebDeploy probably hadn’t been released when Robin wrote her articles.



Avkash Chauhan continued his series with Ruby on Rails in Windows Azure - Part 2 - Creating Windows Azure SDK 1.4 based Application to Host Ruby on Rails Application in Cloud on 4/26/2011:

In Part 1 we finished a Rails application named "RubyonAzure", as described in the link below:

http://blogs.msdn.com/b/avkashchauhan/archive/2011/04/26/ruby-on-rails-in-windows-azure-part-1-setting-up-ruby-on-rails-in-windows-7-machine-with-test-rails-application.aspx

Now, to create the Windows Azure application, I took Simon Davies' base sample from the link below:

http://archive.msdn.microsoft.com/railsonazure

I am using Simon Davies' sample as the base application and then making changes to get it working with Ruby 1.9.2 and Rails 3.0.7. The solution in VS2010 looks as below:

If you open the RR application you will see two folders:

1. RailsApp folder: This will include your Rails application.

2. Ruby folder: This will include the Ruby\bin and Ruby\lib folders.

Now include the Ruby\bin and Ruby\lib files in the Ruby folder above:

Note: Please remove the following folder; it is not needed, and your package will be considerably smaller without it.

            ruby\gems\1.9.1\doc

Now copy the RubyonAzure project into the “RailsApp” folder:

Now let's include the “Ruby” folder in the VS2010 solution as below:

Note: The Ruby\lib folder is about 90 MB, so it will take a good amount of time to include it in the VS2010 solution. Even after removing \Ruby\lib\ruby\gems\1.9.1\doc from the original Ruby\lib folder, it takes about an hour to add all the files.

Now let's include the RailsApp folder in the solution as below:

Now Open the Service Configuration (ServiceConfiguration.cscfg) and add the following:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="RW" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="RR">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<Storage_Account_Name>;AccountKey=Storage_Account_Key” />
      <Setting name="StorageAccount" value="DefaultEndpointsProtocol=https;AccountName=<Storage_Account_Name>;AccountKey= Storage_Account_Key" />
      <Setting name="RubyFolder" value="Ruby" />
      <Setting name="AppFolder" value="RailsApp" />
      <Setting name="OutputContainer" value="testoutput" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<Storage_Account_Name>;AccountKey=Storage_Account_Key" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="avkash" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" value="***********************ENCRYPTED_PASSWORD**************************" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2011-05-08T23:59:59.0000000-07:00" />
      <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" />
    </ConfigurationSettings>
    <Certificates>
      <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="*****************************************" thumbprintAlgorithm="sha1" />
    </Certificates>
  </Role>
</ServiceConfiguration>

Now open the Service Definition (ServiceDefinition.csdef) and include the following:

In the VS2010-based solution, I have already included Ruby 1.9.2 and Rails 3.0.7, located in the Ruby folder. Your Ruby on Rails application is located in the RailsApp folder inside the project.

Worker Role Source Study:

Step 1: When the worker role starts, we copy the Ruby + Rails files and our Rails application in two sub-steps:

1. First, we copy the Ruby (bin and lib) folders to the worker role local storage folder named \Ruby.

2. Second, we copy our Ruby on Rails application to the worker role local storage folder named \RailsApp.

The code responsible for it is as below:

string rubyFolderName = RoleEnvironment.GetConfigurationSettingValue("RubyFolder");
if (localStorageRoot.FullName.EndsWith("\\") == false)
      this.rubyLocation=string.Format("{0}\\{1}",localStorageRoot.FullName,rubyFolderName);
else
      this.rubyLocation = string.Format("{0}{1}", localStorageRoot.FullName, rubyFolderName);
CopyFolder(string.Format("{0}\\{1}", this.roleRoot, rubyFolderName), this.rubyLocation);
            
string appFolderName = RoleEnvironment.GetConfigurationSettingValue("AppFolder");
if (localStorageRoot.FullName.EndsWith("\\") == false)
      this.appLocation = string.Format("{0}\\{1}", localStorageRoot.FullName, appFolderName);
else
      this.appLocation = string.Format("{0}{1}", localStorageRoot.FullName, appFolderName);
CopyFolder(string.Format("{0}\\{1}", this.roleRoot, appFolderName), this.appLocation);

this.endPoint  = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Server"].IPEndpoint;

// Start the server
StartProcess();
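
The CopyFolder helper called above is not shown in the snippet. A minimal recursive implementation (my own sketch for illustration, not necessarily the sample's actual code) might look like this:

private void CopyFolder(string sourceDir, string destinationDir)
{
    // Create the destination folder in local storage if it does not exist yet.
    System.IO.Directory.CreateDirectory(destinationDir);

    // Copy every file in the current folder.
    foreach (string file in System.IO.Directory.GetFiles(sourceDir))
    {
        string destFile = System.IO.Path.Combine(destinationDir, System.IO.Path.GetFileName(file));
        System.IO.File.Copy(file, destFile, true);
    }

    // Recurse into subfolders (for example Ruby\lib\ruby\gems).
    foreach (string dir in System.IO.Directory.GetDirectories(sourceDir))
    {
        string destDir = System.IO.Path.Combine(destinationDir, System.IO.Path.GetFileName(dir));
        CopyFolder(dir, destDir);
    }
}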

Step 2: Once the Ruby + Rails and application copy process above is complete, we launch the Rails application as below:

ProcessStartInfo spawnedProcessInfo = new ProcessStartInfo();
spawnedProcessInfo.UseShellExecute = false;
spawnedProcessInfo.WorkingDirectory = this.appLocation;
spawnedProcessInfo.CreateNoWindow = true;
//string args = String.Format(@"script\server --port {0} --binding {1}", this.endPoint.Port, this.endPoint.Address); // This is old code
// changed by Avkash to get it working
string args = String.Format(@"script\rails server --port {0} --binding {1}", this.endPoint.Port, this.endPoint.Address);
LogInfo(@"Arguments: {0}", args);
spawnedProcessInfo.Arguments = args;
spawnedProcessInfo.FileName = Path.Combine(this.rubyLocation, @"bin\Ruby ");
spawnedProcessInfo.RedirectStandardError = true;
spawnedProcessInfo.RedirectStandardOutput = true;
            
            //Run It
Process spawnedProcess = new Process();
spawnedProcess.ErrorDataReceived += new DataReceivedEventHandler(spawnedProcess_ErrorDataReceived);
spawnedProcess.OutputDataReceived += new DataReceivedEventHandler(spawnedProcess_OutputDataReceived);
spawnedProcess.StartInfo = spawnedProcessInfo;

if (spawnedProcess.Start()) // This line executes the process as >Ruby script\rails server --port <Role_Port> --binding <Role_IP_Address>
{
    this.id = spawnedProcess.Id;

    spawnedProcess.BeginErrorReadLine();
    spawnedProcess.BeginOutputReadLine();
    LogInfo("Process Id {0} Started",this.id);
}

Note: When you are running this application in the development fabric, please set the following:

      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true” />
      <Setting name="StorageAccount" value="UseDevelopmentStorage=true" />

Now run the application in the development fabric and you will see the following results in Compute Emulator UI:

Using the IP address and port shown in the Compute Emulator, we can launch the browser to check our application:

Now you can package this application and deploy it to the cloud to test, as below:

Note: Before packaging the application, please be sure to set your data connection strings to your correct Windows Azure storage account:

      <Setting name="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<Storage_Account_Name>;AccountKey=Storage_Account_Key” />
      <Setting name="StorageAccount" value="DefaultEndpointsProtocol=https;AccountName=<Storage_Account_Name>;AccountKey= Storage_Account_Key" />
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=<Storage_Account_Name>;AccountKey=Storage_Account_Key" />

After publishing the Windows Azure application to the cloud, you can verify it is running as below:

You can download the Windows Azure SDK 1.4-based VS2010 solution from the link below (the sample includes the Ruby 1.9.2 and Rails 3.0.7 packages as well):

http://rubyonrailsinazure.codeplex.com/


<Return to section navigation list> 

Visual Studio LightSwitch

Robert Green reported his Updated Post on Extending a LightSwitch Application with SharePoint Data on 3/30/2011 (missed when posted):

I have just updated Extending A LightSwitch Application with SharePoint Data for Beta 2. I reshot all the screens and have made some minor changes to both the text and the narrative. There are two primary differences. The minor change is that I need to add code to enable editing of data in two different data sources. The more significant change is in what I need to do to create a one-to-many relationship between Courses in SQL Server and KBs in SharePoint.

I have now updated all but one of my Beta 1 posts. One more to go.


• Matt Thalman explained Invoking Tier-Specific Logic from Common Code in LightSwitch in a 4/12/2011 post (missed when posted):

Visual Studio LightSwitch makes use of .NET portable assemblies to allow developers to write business logic that can be executed on both the client (Silverlight) and server (.NET 4) tiers.  In LightSwitch terminology, we refer to the assembly that contains this shared logic as the Common assembly.  In this post, I’m going to describe a coding pattern that allows you to invoke code from the Common assembly that has different implementations depending on which tier the code is running on.

In my scenario, I have a Product entity which has an Image property, and the image must be a specific size (200 x 200 pixels).  I would like to write validation code for the Image property to ensure that the image is indeed 200 x 200.  But since the validation code for the Image property is contained within the Common assembly, I do not have access to the image processing APIs that allow me to determine the image dimensions.

This problem can be solved by creating two tier-specific implementations of the image processing logic and storing them in classes that derive from a common base class defined in the Common assembly.  During the initialization of the client and server applications, an instance of the appropriate tier-specific class is created and set as a static member available from the Common assembly.  The validation code in the Common assembly can then reference that base class to invoke the logic.  I realize that may sound confusing, so let’s take a look at how I would actually implement this.

This is the definition of my Product entity:

1

I now need to add some of my own class files to the LightSwitch project.  To do that, I switch the project to File View.

2

From the File View, I add a CodeBroker class to the Common project.

3

The CodeBroker class is intended to be the API to tier-specific logic.  Any code in the Common assembly that needs to execute logic which varies depending on which tier it is running in can use the CodeBroker class. Here is the implementation of CodeBroker:

C#:

public abstract class CodeBroker
{
    private static CodeBroker current;

    public static CodeBroker Current
    {
        get { return CodeBroker.current; }
        set { CodeBroker.current = value; }
    }

    public abstract void GetPixelWidthAndHeight(byte[] image, out int width,
                                                out int height);
}

VB:

Public MustInherit Class CodeBroker
    Private Shared m_current As CodeBroker

    Public Shared Property Current() As CodeBroker
        Get
            Return CodeBroker.m_current
        End Get
        Set(value As CodeBroker)
            CodeBroker.m_current = value
        End Set
    End Property

    Public MustOverride Sub GetPixelWidthAndHeight(image As Byte(),
                                                   ByRef width As Integer,
                                                   ByRef height As Integer)
End Class

I next add a ClientCodeBroker class to the Client project in the same way as I added the CodeBroker class to the Common project.  Here’s the implementation of ClientCodeBroker:

C#:

using Microsoft.LightSwitch.Threading;

namespace LightSwitchApplication
{
    public class ClientCodeBroker : CodeBroker
    {
        public override void GetPixelWidthAndHeight(byte[] image, out int width,
                                                    out int height)
        {
            int bitmapWidth = 0;
            int bitmapHeight = 0;
            Dispatchers.Main.Invoke(() =>
            {
                var bitmap = new System.Windows.Media.Imaging.BitmapImage();
                bitmap.SetSource(new System.IO.MemoryStream(image));
                bitmapWidth = bitmap.PixelWidth;
                bitmapHeight = bitmap.PixelHeight;
            });
            width = bitmapWidth;
            height = bitmapHeight;
        }
    }
}

VB:

Imports Microsoft.LightSwitch.Threading

Namespace LightSwitchApplication
    Public Class ClientCodeBroker
        Inherits CodeBroker
        Public Overrides Sub GetPixelWidthAndHeight(image As Byte(),
                                                    ByRef width As Integer,
                                                    ByRef height As Integer)
            Dim bitmapWidth As Integer = 0
            Dim bitmapHeight As Integer = 0
            Dispatchers.Main.Invoke(
                Sub()
                    Dim bitmap = New Windows.Media.Imaging.BitmapImage()
                    bitmap.SetSource(New System.IO.MemoryStream(image))
                    bitmapWidth = bitmap.PixelWidth
                    bitmapHeight = bitmap.PixelHeight
                End Sub)
            width = bitmapWidth
            height = bitmapHeight
        End Sub
    End Class
End Namespace

(By default, my application always invokes this GetPixelWidthAndHeight method from the Logic dispatcher.  So the call to invoke the logic on the Main dispatcher is necessary because BitmapImage objects can only be created on the Main dispatcher.)

To include the server-side implementation, I add a ServerCodeBroker class to the Server project.  It’s also necessary to add the following assembly references in the Server project because of dependencies in my image code implementation: PresentationCore, WindowsBase, and System.Xaml.  Here is the implementation of ServerCodeBroker:

C#:

public class ServerCodeBroker : CodeBroker
{
    public override void GetPixelWidthAndHeight(byte[] image, out int width,
                                                out int height)
    {
        var bitmap = new System.Windows.Media.Imaging.BitmapImage();
        bitmap.BeginInit();
        bitmap.StreamSource = new System.IO.MemoryStream(image);
        bitmap.EndInit();
        width = bitmap.PixelWidth;
        height = bitmap.PixelHeight;
    }
}

VB:

Public Class ServerCodeBroker
    Inherits CodeBroker
    Public Overrides Sub GetPixelWidthAndHeight(image As Byte(),
                                                ByRef width As Integer,
                                                ByRef height As Integer)
        Dim bitmap = New System.Windows.Media.Imaging.BitmapImage()
        bitmap.BeginInit()
        bitmap.StreamSource = New System.IO.MemoryStream(image)
        bitmap.EndInit()
        width = bitmap.PixelWidth
        height = bitmap.PixelHeight
    End Sub
End Class

The next thing is to write the code that instantiates these broker classes.  This is done in the Application_Initialize method for both the client and server Application classes.  For the client Application code, I switch my project back to Logical View and choose “View Application Code (Client)” from the right-click context menu of the project.

4

In the generated code file, I then add the following initialization code:

C#:

public partial class Application
{
    partial void Application_Initialize()
    {
        CodeBroker.Current = new ClientCodeBroker();
    }
}

VB:

Public Class Application
    Private Sub Application_Initialize()
        CodeBroker.Current = New ClientCodeBroker()
    End Sub
End Class

This initializes the CodeBroker instance for the client tier when the client application starts.

I need to do the same thing for the server tier.  There is no context menu item available for editing the server application code but the code file can be added manually.  To do this, I switch my project back to File View and add an Application class to the Server project.

5

The implementation of this class is very similar to the client application class.  Since the server’s Application_Initialize method is invoked for each client request, I need to check whether the CodeBroker.Current property has already been set from a previous invocation.  Since the CodeBroker.Current property is static, its state remains in memory across multiple client requests.

C#:

public partial class Application
{
    partial void Application_Initialize()
    {
        if (CodeBroker.Current == null)
        {
            CodeBroker.Current = new ServerCodeBroker();
        }
    }
}

VB:

Public Class Application
    Private Sub Application_Initialize()
        If CodeBroker.Current Is Nothing Then
            CodeBroker.Current = New ServerCodeBroker()
        End If
    End Sub
End Class

The next step is to finally add my Image property validation code.  To do this, I switch my project back to Logical View, open my Product entity, select my Image property in the designer, and choose “Image_Validate” from the Write Code drop-down button.

6

In the generated code file, I add this validation code:

C#:

public partial class Product
{
    partial void Image_Validate(EntityValidationResultsBuilder results)
    {
        if (this.Image == null)
        {
            return;
        }

        int width;
        int height;
        CodeBroker.Current.GetPixelWidthAndHeight(this.Image, out width,
                                                  out height);
        // Report an error if either dimension is not 200 pixels.
        if (width != 200 || height != 200)
        {
            results.AddPropertyError(
                "Image dimensions must be 200x200.",
                this.Details.Properties.Image);
        }
    }
}

VB:

Public Class Product
    Private Sub Image_Validate(results As EntityValidationResultsBuilder)
        If Me.Image Is Nothing Then
            Return
        End If

        Dim width As Integer
        Dim height As Integer
        CodeBroker.Current.GetPixelWidthAndHeight(Me.Image, width, height)
        ' Report an error if either dimension is not 200 pixels.
        If width <> 200 OrElse height <> 200 Then
            results.AddPropertyError("Image dimensions must be 200x200.",
                                     Me.Details.Properties.Image)
        End If
    End Sub
End Class

This code can execute on both the client and the server.  When running on the client, CodeBroker.Current will return the ClientCodeBroker instance and provide the client-specific implementation for this logic.  And when running on the server, CodeBroker.Current will return the ServerCodeBroker instance and provide the server-specific implementation.

And there you have it.  This pattern allows you to write code that is invoked from the Common assembly but needs to vary depending on which tier is executing the logic.  I hope this helps you out in your LightSwitch development.


Michael Washington (@adefwebserver) claimed It Is Easy To Display Counts And Percentages In LightSwitch on 4/28/2011 and offered source code to prove it:

image

In many LightSwitch applications, you will want the ability to display aggregated data and percentages. While the article at this link explains a method that will work 100% of the time, with the best performance, it requires the creation of an additional project. The method described here is easier to use, yet it has limitations (for example, it will not allow you to use GroupBy).

The Call Log Application

image

First, we create a LightSwitch application with a single table called PhoneCall.

image

We click on CallType in the table, and in the Properties for the field, select Choice List.

image

We enter options for the field. This will automatically cause a dropdown list to appear for the field when we create a screen for this table.

image

In the Solution Explorer, we right-click on the Screen folder and select Add Screen.

image

We select a List and Details Screen.

image

This creates a screen for us.

image

We hit F5 to run the application.

image

We are able to enter data into our simple Call Log application.

Display Aggregate Data And Percentages

We now desire to display the number of calls by call type, and their percentage of the total calls.

When you use the LightSwitch screen designer, it may seem confusing at first, but it is actually very simple: the "stuff" (the data and the properties) is listed on the left side of the screen designer, and the right-hand side of the screen is used to display that "stuff". The right-hand side of the screen displays the data and elements in an object tree.

When we design a LightSwitch screen, we first make sure we have the “stuff” we need in the column on the left-hand side. We then indicate where in the object tree we want to display the “stuff”.

To display the count of, for example, just the Sales calls, and the percentage of calls that Sales represents, we need to add two properties to the left-hand side of the screen to hold the values. We then calculate the values for those properties in the "_InitializeDataWorkspace" method for the screen.

Lastly, we will place labels to display those properties in the object tree designer on the right-hand side of the screen.

image

On the screen designer, we click the Add Data Item button.

image

We add a property for CountOfSales, and a property for PercentOfSalesCalls.

image

The properties will show on the left-hand side of the screen.

image

We select Write Code, then the “_InitializeDataWorkspace” method.

We insert code into the method:

    partial void PhoneCallsListDetail_InitializeDataWorkspace(List<IDataService> saveChangesTo)
    {
        // Write your code here.
        int TotalCalls = PhoneCalls.Count();
        double SalesCalls = (from calls in PhoneCalls
                                where calls.CallType == "Sales"
                                select calls).Count();
        PercentOfSalesCalls = (SalesCalls / TotalCalls).ToString("p");
        CountOfSales = SalesCalls.ToString();
    }

We switch back to the screen designer (note: if you try to run the application while a code window has focus in Visual Studio, it will not run).

image

We right-click on the Phone Calls List Detail section and select Add Group.

image

Click on the newly added group, and drag it under the List Column group.

image

Change it to Columns layout.

image

In the Properties for the group, change it to Left and Top.

image

Add Count Of Sales.

image

Change it to a Label.

image

Add Percent Of Sales Calls, and also change it to a Label.

image

Hit F5 to run the application. You will see the count of sales calls and their percentage of total calls.

You must hit the Refresh button to refresh the values after adding new calls.

image

Add the additional properties shown above.

Alter the “_InitializeDataWorkspace” method to the following:

        partial void PhoneCallsListDetail_InitializeDataWorkspace(List<IDataService> saveChangesTo)
        {
            // Write your code here.
            int TotalCalls = PhoneCalls.Count();
            double SalesCalls = (from calls in PhoneCalls
                                 where calls.CallType == "Sales"
                                 select calls).Count();
            double ServiceCalls = (from calls in PhoneCalls
                                   where calls.CallType == "Service"
                                   select calls).Count();
            double OtherCalls = (from calls in PhoneCalls
                                 where calls.CallType == "Other"
                                 select calls).Count();
            PercentOfSalesCalls = (SalesCalls / TotalCalls).ToString("p");
            PercentOfServiceCalls = (ServiceCalls / TotalCalls).ToString("p");
            PercentOfOtherCalls = (OtherCalls / TotalCalls).ToString("p");
            CountOfSales = SalesCalls.ToString();
            CountOfService = ServiceCalls.ToString();
            CountOfOther = OtherCalls.ToString();
        }
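
One caveat worth noting (my own observation, not from the original article): if the PhoneCall table is empty, TotalCalls is zero and the divisions above format as "NaN". A defensive variant of one of the calculations might look like this:

            // Sketch only: guard against an empty PhoneCall table before computing a percentage.
            int TotalCalls = PhoneCalls.Count();
            double SalesCalls = (from calls in PhoneCalls
                                 where calls.CallType == "Sales"
                                 select calls).Count();
            CountOfSales = SalesCalls.ToString();
            PercentOfSalesCalls = (TotalCalls == 0)
                ? (0.0).ToString("p")
                : (SalesCalls / TotalCalls).ToString("p");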

image

Add two new groups to display the additional properties.

Set the properties for each group to match the first group (left and top).

image

Hit F5 to run the application.

The Application is complete.

Download

You can download the code on the Downloads page:

http://lightswitchhelpwebsite.com/Downloads.aspx


The Visual Studio LightSwitch Team posted Course Manager Sample Part 4 – Implementing the Workflow (Andy Kung) on 4/28/2011:

In Course Manager Sample Part 3, we set up a Desktop application with Windows authentication, created some “raw data” screens, and wrote permission logic. In this post, we will dive into the main workflows of Course Manager.

We will be covering quite a few screens in the remainder of the series. Some of them are fairly straightforward; in those cases, I will briefly highlight the concepts and refer you to the downloaded sample. Others are more interesting and require some explanation. I will walk through those examples step-by-step. Let’s begin!

Workflow

Course Manager is designed with 4 main entry points or workflows. From the Home screen, you can:

  1. Create a new student => view student detail => register a course for this student
  2. Search for an existing student => view student detail => register a course for this student
  3. Browse course catalog => view section detail => register this course for a student
  4. Register a course for a student

Therefore, the rest of the series will focus on creating the following screens:

  • Create New Student
  • Search Students
  • Register Course
  • Course Catalog
  • Student Detail
  • Section Detail
  • Home

clip_image002

Screens
Create New Student

Create a screen using “New Data Screen” template on Student table.

clip_image003

By default, the screen vertically stacks up all the controls (using Rows Layout). In our case, we’d like to show the Picture in the left column and the rest of the fields in the right column.

clip_image004

To do this, first change “Student Property” from “Rows Layout” to “Columns Layout.” Each node under a “Columns Layout” will represent a column.

clip_image005

We only want 2 columns on the screen. So essentially, we need 2 group nodes under “Columns Layout.” Each group node represents a column and contains some student fields. Right click “Student Property” and select “Add Group” to add a group node under “Student Property.” Repeat and create a 2nd group node.

clip_image006

We’d like the picture to be in the first column, so drag and drop Picture under the first group node. Set the image’s width and height to 150 and 200 via Properties to make it bigger. We don’t want the picture to display any label, so set “Label Position” to “None.” We’d also like the first column to fit tightly around the Picture, so select the first group and set its “Horizontal Alignment” to “Left” via Properties.

Drag and drop the rest of the fields under the 2nd group node to make them appear in the 2nd column.

clip_image007

Let’s hit F5 to see the screen. As expected, we now have 2 columns on the screen. The first column shows a big image editor and the 2nd column contains the rest of the student fields. We can also use “Address Editor” to display the address fields instead, as we did in Part 3.

clip_image008

Search Students

Create a screen using “Search Data Screen” template on Student table.

clip_image009

In screen designer, you will get a Data Grid of students showing all student fields. Let’s make the grid easier to read by removing some non-essential fields. Delete Picture, Street, City, State from Data Grid Row.

clip_image010

We also need a way for the user to drill into a record to see more details. One Beta 2 feature worth mentioning here is the ability to show a label-based control as a link (provided the field is part of a record). When a link is clicked, the detail screen of the related record will open. In Beta 1, this was only possible with a Summary control.

Show a label as link

Select First Name under Data Grid Row. Check “Show as Link” in Properties. It will be shown as a column of links in the running app. When a link is clicked, the corresponding Student detail screen will open. Notice you can also choose a target screen to launch in the Properties. This is useful if you have multiple customized detail screens for Student.

clip_image011

Register Course

From Part 2, we know the Enrollment table is essentially a mapping table between the Student and Section tables. To register a course is to create an Enrollment record in the database. Let’s create a “New Data Screen” called “RegisterCourse” on the Enrollment table.

clip_image012

In the screen designer, you will see the EnrollmentProperty, which is the new enrollment record we are creating, on the data list. EnrollmentProperty’s Section and Student fields are represented as 2 data pickers on the screen content tree.

clip_image013

Using a custom query for data picker

By default, the pickers on the screen will show you all available students and sections. In our case, when a student is selected, we only want to show the sections this student has not yet enrolled in.

In Part 2, we already created a custom query called AvailableSections. This query takes a StudentId as a parameter and returns a list of Sections this student has not enrolled in. This is exactly what we need! Click the “Add Data Item” button in the command bar. Use the “Add Data Item” dialog to add AvailableSections to the screen.

clip_image014

Select StudentId query parameter and bind it to EnrollmentProperty’s Student.Id field in the Properties.

clip_image015

Finally, select the Section picker on the screen. Set the Choices property to AvailableSections. The source of the picker is now set to the custom query instead of the default “select all.”

clip_image016

Adding local screen choice list properties

Now we have a Section picker that filters its list of Sections based on a Student. We’d also like to further filter it down by Academic Year and Academic Quarter. We need a choice list picker for Academic Year and a choice list picker for Academic Quarter on the screen.

LightSwitch Beta 2 adds the ability to define a choice list on a screen property (in Beta 1, you could only create a choice list on a table field). Using the “Add Data Item” dialog, add a local property of Integer type called AcademicYear. Mark it as not required since we’d like it to be an optional filter.

clip_image017

Select the newly created AcademicYear. Click “Choice List” in Properties. Enter choice list options in the dialog.

clip_image018

Create a group node on the screen content tree using “Columns Layout.”

clip_image019

Use “+ Add” button to add “Academic Year.” A picker will be added to the screen.

clip_image020

Follow similar steps. Add an optional local property of String type called AcademicQuarter. Set its choice options to Fall/Winter/Spring/Summer. Add it below the Academic Year picker.

clip_image021

Applying additional filters on custom query

Now we have Academic Year and Academic Quarter pickers on the screen. We need to wire them up to the custom query. This means that we need to add 2 additional filters to the AvailableSections query. To do this, click “Edit Query” on AvailableSections to go to the query editor.

clip_image022

Add 2 optional parameterized filters for AcademicYear and AcademicQuarter. We are marking the parameters as optional so that if they are not specified, the query still returns results.

clip_image023

Click “Back to RegisterCourse” link on top to go back to the screen designer. You will see AvailableSections now has 2 more parameters.

clip_image024

Select AcademicYear parameter, set parameter binding to AcademicYear, which is the local choice list property we just added.

clip_image025

Follow the same steps to bind AcademicQuarter.

Using optional screen parameters to pre-set screen fields

Our workflow indicates that we can also navigate to Register Course screen from a student or section screen. Wouldn’t it be nice if we could pre-populate the student or section picker in this case? To achieve this, we need to create optional screen parameters.

Using the “Add Data Item” dialog, add a local property of Integer type called StudentId. Mark it as not required since it will be used as an optional parameter.

clip_image026

In the Properties, check “Is Parameter.” Repeat the same steps to create a local property of Integer type called SectionId. Set it as an optional parameter.

Just a side note: if a screen has required screen parameters, it will not be shown on the menu of the running application. This makes sense because the screen can only be opened with parameters. In our case, we have 2 optional screen parameters, so the “Register Course” screen will still show up in the menu since it can be opened with or without screen parameters.

clip_image027

Now we write logic to handle the screen parameters if they’re available. Use the “Write Code” dropdown menu and select RegisterCourse_InitializeDataWorkspace method.

clip_image028

At the end of the method, add:

If (StudentId.HasValue) Then
    Me.EnrollmentProperty.Student = DataWorkspace.ApplicationData.Students_Single(StudentId)
End If

If (SectionId.HasValue) Then
    Me.EnrollmentProperty.Section = DataWorkspace.ApplicationData.Sections_Single(SectionId)
End If

We check if the screen is supplied with a StudentId (or SectionId) parameter. If so, we run a query to get the student (or section) record and pre-set the field of the EnrollmentProperty on the screen.
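
If you are building the Course Manager sample in C#, the equivalent initialization logic might look like the following sketch (the method signature and member names are assumed to mirror the VB sample above):

partial void RegisterCourse_InitializeDataWorkspace(List<IDataService> saveChangesTo)
{
    // Pre-set the Student picker when a StudentId screen parameter was supplied.
    if (StudentId.HasValue)
    {
        this.EnrollmentProperty.Student =
            this.DataWorkspace.ApplicationData.Students_Single(StudentId.Value);
    }

    // Pre-set the Section picker when a SectionId screen parameter was supplied.
    if (SectionId.HasValue)
    {
        this.EnrollmentProperty.Section =
            this.DataWorkspace.ApplicationData.Sections_Single(SectionId.Value);
    }
}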

Adjusting screen layout with runtime screen designer

Let’s hit F5 to run the application and make some layout adjustments on the fly. Click “Design Screen” button in the ribbon to launch the runtime screen designer.

clip_image029

Select the screen root node. Make the following tweaks and Save the design.

  • Label Position = Top
  • Horizontal Alignment = Left
  • Vertical Alignment = Top
  • Move Student picker above Section picker
  • Use Modal Window Picker control for both Student and Section

clip_image031

Ahh. Much better!

clip_image032

Conclusion

We have covered quite a few topics in this post! We created “Create New Student,” “Search Students,” and “Register Course” screens.

clip_image034

During the process, we customized screen layouts, added detail links, utilized custom queries, created screen parameters, etc. These are all very useful techniques for your own apps. We will continue the rest of the screens in Part 5.


Return to section navigation list> 

Windows Azure Infrastructure and DevOps

The Windows Azure Team announced New Content: Affinity Group and Subscription History Operations Now Available in the Windows Azure Service Management REST API in a 4/20/2011 post:

image

We have recently published content on the following new operations in the Windows Azure Service Management REST API:

With these updates, you can now programmatically create, update, and delete affinity groups, which previously were operations that could only be done using the Management Portal. In addition to these affinity group operations, there’s a new subscription history operation that lets you list all operations that have occurred within your subscription, up to 90 days in the past. You can filter these results in different ways, such as displaying only operations for a specific hosted service.

To use this updated functionality, you must specify the newest version of the Service Management API in your request header. For more information, see Service Management Versioning.
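
As a rough illustration of what the new subscription history operation looks like from code, the sketch below requests the last 30 days of operations using a management certificate. The URI, the x-ms-version value, and the certificate thumbprint are assumptions based on the Service Management API documentation of the time, so verify them against MSDN before relying on them:

// Sketch only: list recent subscription operations via the Service Management REST API.
// The subscription ID, certificate thumbprint, and x-ms-version value below are placeholders.
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListSubscriptionOperations
{
    static void Main()
    {
        string subscriptionId = "<subscription-id>";
        string uri = string.Format(
            "https://management.core.windows.net/{0}/operations?StartTime={1:s}Z&EndTime={2:s}Z",
            subscriptionId, DateTime.UtcNow.AddDays(-30), DateTime.UtcNow);

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Headers.Add("x-ms-version", "2011-02-25"); // assumed API version for these operations

        // Attach the management certificate that was uploaded to the subscription.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var certificates = store.Certificates.Find(
            X509FindType.FindByThumbprint, "<certificate-thumbprint>", false);
        store.Close();
        request.ClientCertificates.Add(certificates[0]);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // raw XML list of operations
        }
    }
}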


• Bruce Kyle reported on 4/18/2011 the availability of a Table [That] Compares Windows Azure Developer Offerings (missed when posted):

A table comparison shows developer offers that help you get started and develop on Windows Azure.

Find the right plan for you at Windows Azure Platform Offer Comparison Table.

Special Incentives

See Next 250 US ISVs Who Verify on Windows Azure Earn $250.

image

Getting Started with Windows Azure

See the Getting Started with Windows Azure site for links to videos, the developer training kit, the software development kit, and more. Get free developer tools too.

For free technical help in your Windows Azure applications, join Microsoft Platform Ready.

Learn What Other ISVs Are Doing on Windows Azure

For other videos about independent software vendors (ISVs) on Windows Azure, see:

Here’s a preview of the table’s Windows Azure section:

image

The table continues with Windows Azure AppFabric, Data Transfers, Commitment Term, Savings, and Base Unit Price sections. I have Cloud Essentials and MSDN Ultimate subscriptions.


Kevin Remde (@kevinremde) posted Serve Yourself (“Cloudy April” - Part 28) on 4/28/2011:

As you may recall from part 16 (Hyper-V Cloud), one key aspect of what we define as “a cloud” has to do with it providing some level of “self-service”.  You provide a portal or some other method of requesting and then being granted IT resources.

“Yeah, I’d love to let my users do that, Kevin.”

Do-it-yourself Clouds

Of course you would.  Rather than getting a request for a new server or servers for a business unit or group that needs to do development/test/or production hosting of some application or service, and then expecting you to get it all set up for them, wouldn’t it be easier if they could do it themselves?  How long does it take you to set up new physical servers? 

“Order, receive, install, configure… it takes weeks… sometimes months!”

That's right.  And even if you're highly virtualized, you still have a lot of work to do to set up and configure the virtual servers each time a request comes in.  But what if you could have A) a pre-defined set of machine templates, B) a well-established and easily manageable, delegated hierarchy of administrative rights, and C) a portal that allows for that delegation, plus the ability to request and be granted virtual machines and services on demand?  Wouldn't that be nice?

“Yes!”

You configure the datacenter.  You set up the virtualization platform, including the compute, networking, and storage available.  You define the machines and the administration.  And then you introduce your business to “the portal”, where they can define their own infrastructures, their own administrators and users, and finally request services which ultimately end up being virtual machines for their use.

“Sounds awesome, Kevin.  What is it?”

It’s the Self Service Portal (SSP).

“Oh.. isn’t that included in SCVMM 2008 R2?”

Well.. yes, there is a very easy-to-configure SSP in SCVMM 2008 R2.  And that’s very useful if all you want to do is grant self-service rights to users or groups so that they can directly create, use, and destroy virtual machines.  (Go here for a feature comparison of VMM SSP and VMM SSP 2.0. And check out this great two-part article on how to configure the SSP in SCVMM.)  But in many cases businesses have a more complex environment that requires more well-defined control and workflow.  For the ability to:

  • Configure the datacenter and all of its components (compute, storage, and networking),
  • Allow business unit IT admins to “onboard” their business (with an approval process),
  • Allow business unit IT admins to define their Infrastructure, and then services (and one-or-more “service roles”) that they require, along with the administrators and users who will utilize them,
  • and finally allow their users to create and use  virtual machines – without concern for where that machine is coming from or how the under-the-hood infrastructure is actually implemented,

you need the Self Service Portal 2.0 (SSP 2.0).  SSP 2.0 is a free solution accelerator from Microsoft that installs onto its own server(s) and interacts with (and drives) SCVMM 2008 R2.  It includes the web portal, the application itself, and the database (it requires SQL Server).

Example of the architecture of what is requested: Infrastructure, services, and service roles.

“If I’m using the SSP that’s included with SCVMM, can I just upgrade it?”

No.  It’s an entirely new and separate tool.  It replaces the original SSP.

So… to summarize the benefits one more time:  Your users and business units get to define and use resources in a matter of minutes rather than days or weeks.  And you (the datacenter administrator) get to sit back and monitor the process.  And all the while a record is kept of who-used-how-much compute or storage power, so that they can be charged-back accordingly.   That’s “private cloud” at its best.

Here are some related resources for you:

---

Are you considering building and providing a private cloud for your users and/or businesses?  Have you used the SSP or the new SSP 2.0?  Have you considered some other “private cloud” solution?  Give us a comment!

Tomorrow, Part 29 (we’re almost done!) will be about a new cloud-based server monitoring service. 


David Linthicum asserted “It's up to cloud users to figure out how to remove risk from their cloud implementations -- like they used to do within IT” as a deck for his The failure behind the Amazon outage isn't just Amazon's post of 4/27/2011 to InfoWorld’s Cloud Computing blog:

When Amazon.com's outage last week -- specifically, the failure of its EBS (elastic block storage) subsystem -- left popular websites and services such as Reddit, Foursquare, and Hootsuite crippled or outright disabled, the blogosphere blew up with noise around the risks of using the cloud. Although a few defenders spoke up, most of these instant experts panned the cloud and Amazon.com. The story was huge, covered by the New York Times and the national business press; Amazon.com is now "enjoying" the same limelight that fell on Microsoft in the 1990s. It will be watched carefully for any weakness and rapidly kicked when issues occur.

It's the same situation we've seen since we began to use computers: They are not perfect, and from time to time, hardware and software fails in such a way that outages occur. Most cloud providers, including Amazon.com, have spent a lot of time and money to create advanced multitenant architectures and advanced infrastructures to reduce the number and severity of outages. But to think that all potential problems are eliminated is just being naive.

Some of the blame around the outage has to go to those who made Amazon.com a single point of failure for their organizations. You have to plan and create architectures that can work around the loss of major components to protect your own services, as well as make sure you live up to your own SLA requirements.

Although this incident does indeed show weakness in the Amazon.com cloud, it also highlights liabilities in those who've become overly dependent on Amazon.com. The affected companies need to create solutions that can fail over to a secondary cloud or locally hosted system -- or they will again risk a single outage taking down their core moneymaking machines. I suspect the losses around this outage will easily track into the millions of dollars.
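
To make the "don't become a single point of failure" advice concrete, here is a minimal C# sketch of one simple mitigation: a client that tries a primary cloud endpoint and falls back to a secondary (second cloud or locally hosted) endpoint when the primary is unreachable. The URLs, timeout, and method names are assumptions for illustration; a real design would also need health checks, data replication, and DNS or load-balancer failover:

using System;
using System.Net;

// Minimal client-side failover between a primary cloud endpoint and a
// secondary (second cloud or on-premises) endpoint. The URLs are placeholders.
public static class FailoverClient
{
    private static readonly string[] Endpoints =
    {
        "http://primary.example-cloud.com/api/orders",        // primary provider
        "http://standby.example-datacenter.local/api/orders"  // secondary/failover target
    };

    public static string GetOrders()
    {
        foreach (string url in Endpoints)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Timeout = 5000; // fail fast so the fallback is tried quickly

                using (var response = (HttpWebResponse)request.GetResponse())
                using (var reader = new System.IO.StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd(); // first healthy endpoint wins
                }
            }
            catch (WebException)
            {
                // This endpoint is unreachable or erroring; try the next one.
            }
        }
        throw new InvalidOperationException("All endpoints failed; no failover target available.");
    }
}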



Kevin Remde (@kevinremde) described What’s new in SCVMM 2012 - (“Cloudy April” - Part 27) on 4/27/2011:

System Center Virtual Machine Manager (SCVMM) 2008 R2 is a powerful virtualization management tool.  It does a great job of organizing and managing your virtualization hosts, clusters, virtual machines, and libraries of resources (virtual hard disks, saved machines, machine templates, profiles for hardware and operating systems, etc.)  And it does this for the management of virtualization from either Microsoft or VMware.  But there are some really important aspects of virtualization – particularly when we start considering the delivery of IT-as-a-Service – that SCVMM 2008 R2 doesn’t manage.

Here are just a few examples of what I’m really looking forward to in SCVMM 2012, and what I think you’ll be excited about, too.

First – I think you’re going to appreciate being able to manage many more resources as important aspects of your virtualization platform.  Defining and then using things such as load balancers and storage devices in how you model virtualized services (not just servers) is a great benefit.  Logical networks, IP pools, MAC address pools, and VIP pools for load balancers all become easy to add to the virtual machines and machine templates that are used when building your “service templates”.

Which brings me to another new feature that I am very excited about: Service Templates.  You will now not only be defining templates for machines and the operating systems that run on them, but you’ll have the ability to create the definition of a service that is potentially made up of multiple machines, network objects (logical networks, load balancers, storage devices), and the relationships that they have.  For example – say you are defining a 3-tiered application, with a web front-end, middle application/logic tier, and a database cluster on the back end.  And perhaps you need to support high availability and performance that scales through load balancing your machines at the front end or middle tier.  And you may even want to define a range of machine instances for those tiers; maybe saying that I need to start with 2 web frontend servers, but I may be scaling up to as many as ten at some later time.  You can define all of this as a Service Template.  And once you’re ready then to deploy, SCVMM does intelligent placement of the new VMs based on their needs for resources (as defined in their templates) as well as the needs of the service as a whole.  Pretty amazing.. and that’s just scratching the surface.  You’ll also be able to update the machines in a service by updating the template and then replacing the old with the new template, and finally updating the machines in an automated way.
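
As a rough mental model of what a Service Template captures, here is a hypothetical C# sketch of a three-tier service definition with per-tier instance ranges. It is illustration only; the names do not correspond to the actual SCVMM 2012 object model or its PowerShell cmdlets:

using System.Collections.Generic;

// Illustrative model (not the SCVMM 2012 API) of a service template: a named
// service composed of tiers, each built from a VM template and allowed to
// scale between a minimum and maximum instance count.
public class ServiceTemplate
{
    public ServiceTemplate() { Tiers = new List<ServiceTier>(); }
    public string Name { get; set; }
    public List<ServiceTier> Tiers { get; private set; }
}

public class ServiceTier
{
    public string Name { get; set; }            // e.g. "Web front end"
    public string VmTemplate { get; set; }      // machine/OS template the tier is built from
    public string LogicalNetwork { get; set; }  // logical network / VIP pool the tier attaches to
    public bool LoadBalanced { get; set; }
    public int MinInstances { get; set; }       // e.g. start with 2 web servers...
    public int MaxInstances { get; set; }       // ...but allow scale-out to 10 later
}

// The 3-tiered application described above might be modeled like this:
// var orderEntry = new ServiceTemplate { Name = "OrderEntry" };
// orderEntry.Tiers.Add(new ServiceTier { Name = "Web", VmTemplate = "Web2008R2",
//     LoadBalanced = true, MinInstances = 2, MaxInstances = 10 });
// orderEntry.Tiers.Add(new ServiceTier { Name = "App", VmTemplate = "App2008R2",
//     LoadBalanced = true, MinInstances = 2, MaxInstances = 4 });
// orderEntry.Tiers.Add(new ServiceTier { Name = "Sql", VmTemplate = "SqlCluster2008R2",
//     MinInstances = 2, MaxInstances = 2 });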

And finally (though not really finally, because there are so many more new and exciting features that I don’t have time to describe them all here) is the Fabric Management.  The “fabric” (a term used to define the parts that make up a “cloud”, which is also a level of abstraction supported in SCVMM 2012) can be defined and configured.  Even beyond my first point of managing resources such as storage and networks, SCVMM 2012 extends capabilities for automating the creation of new virtualization hosts – even from bare metal.  It talks to the hardware controller on the motherboard and is able to boot and then deploy Hyper-V Server to new physical servers, ultimately adding them into your infrastructure as new virtualization hosts.  You can also perform automated updates of your virtualization hosts using WSUS.

“What do you mean, ‘automated’?”

Here’s an example: Let’s say you have a cluster of virtualization hosts running several highly available (HA) virtual machines; meaning that they have the ability to migrate between hosts using Live Migration (or even vMotion.. we don’t play favorites here).  But now it’s time to install updates to your hosts.  SCVMM automates the process for you by performing the updates in a way that moves around your VMs for you.. installing updates, restarting hosts, and eventually re-balancing (yes.. VMware DRS-style load re-balancing) your VM workloads between and among the hosts.  And doing this all with absolutely zero-downtime of your virtual machines and the services they are providing.
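
The orchestration described here amounts to a rolling update across the cluster. The following C# sketch shows only the order of operations, written against made-up helper methods (maintenance mode, patch, reboot, rebalance); it is not SCVMM code, which drives all of this for you:

using System.Collections.Generic;

// Pseudo-implementation of orchestrated cluster patching: for each host,
// live-migrate its VMs elsewhere, patch and reboot it, then return it to
// service; rebalance the VM load once every host has been updated.
public class ClusterPatcher
{
    private readonly IClusterFabric fabric; // hypothetical stand-in for what SCVMM/WSUS do

    public ClusterPatcher(IClusterFabric fabric) { this.fabric = fabric; }

    public void PatchCluster(IEnumerable<string> hostNames)
    {
        foreach (string host in hostNames)
        {
            fabric.PutHostInMaintenanceMode(host);    // live-migrates its VMs to other hosts
            fabric.InstallUpdates(host);              // applies the WSUS-approved updates
            fabric.RestartHost(host);                 // reboots if the updates require it
            fabric.TakeHostOutOfMaintenanceMode(host);
        }
        fabric.RebalanceVirtualMachines();            // DRS-style re-spread of the VM workloads
    }
}

public interface IClusterFabric
{
    void PutHostInMaintenanceMode(string host);
    void InstallUpdates(string host);
    void RestartHost(string host);
    void TakeHostOutOfMaintenanceMode(string host);
    void RebalanceVirtualMachines();
}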

If you were at MMS this year, you probably saw this slide several times.  It’s one that we’re using in our talks on SCVMM 2012 to introduce the main improvements in SCVMM 2012.


And from the Beta download page, here is the overview and quick list of new features:

Overview

System Center Virtual Machine Manager 2012 delivers industry-leading fabric management, virtual machine management and services deployment in private cloud environments. Virtual Machine Manager 2012 offers key new features that include hypervisor creation and management, network management, storage management, private cloud creation, self-service usage and service creation. It features deep investments in server application virtualization, service design and service modeling, all of which can be used to efficiently offer an on-premises private cloud.

Feature Summary

  • Fabric Management
    • Hyper-V and Cluster Lifecycle Management – Deploy Hyper-V to bare metal server, create Hyper-V clusters, orchestrate patching of a Hyper-V Cluster
    • Third Party Virtualization Platforms - Add and Manage Citrix XenServer and VMware ESX Hosts and Clusters
    • Network Management – Manage IP Address Pools, MAC Address Pools and Load Balancers
    • Storage Management – Classify storage, Manage Storage Pools and LUNs
  • Resource Optimization
    • Dynamic Optimization – proactively balance the load of VMs across a cluster
    • Power Optimization – schedule power savings to use the right number of hosts to run your workloads – power the rest off until they are needed
    • PRO – integrate with System Center Operations Manager to respond to application-level performance monitors
  • Cloud Management
    • Abstract server, network and storage resources into private clouds
    • Delegate access to private clouds with control of capacity, capabilities and user quotas
    • Enable self-service usage for application administrator to author, deploy, manage and decommission applications in the private cloud
  • Service Lifecycle Management
    • Define service templates to create sets of connected virtual machines, OS images and application packages
    • Compose operating system images and applications during service deployment
    • Scale out the number of virtual machines in a service
    • Service performance and health monitoring integrated with System Center Operations Manager
    • Decouple OS image and application updates through image-based servicing
    • Leverage powerful application virtualization technologies such as Server App-V

So as you can see, there is a lot to be excited about coming in SCVMM 2012.  Helping you deliver IT-as-a-Service is really what it’s all about.  Your “private cloud” just got a whole lot more cloudy.  And that’s a good thing.

Here are some more resources for you:

---

Are you as excited about SCVMM 2012 as I am?  Have you tried the beta yet?  What do you think?  Add your comments and let’s discuss it!


Steve Plank (@plankytronixx) posted Dryad: Program thousands of machines without knowing anything about parallel programming on 4/26/2011:

The guys at Microsoft Research are doing some interesting things with parallel programming. It’s not an area I’ve ever had anything to do with, but one can see how the combination of Windows Azure and Dryad offers the novice parallel programmer access to a massive amount of compute power.

Imagine the kind of power you can wield when you really know little-to-nothing about parallel programming. I can imagine a technology like this is something we might play about with, but the chance we’d ever get to try it out on thousands of computers would be slim. However, imagine running a thousand instances for an hour on Windows Azure. Quite interesting to see what you could achieve – from zero-knowledge to hero in 59 minutes?

There is a talk here from 2007 which explains the ideas behind Dryad. If you’ve ever felt like being involved in parallel programming, don’t do anything until you’ve watched the video and visited the MS Research site.

They even have an academic download of something called DryadLINQ. You can watch a 54-minute video about Dryad here.
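
To give a flavor of the programming model, here is a plain C# PLINQ example: the developer writes an ordinary declarative LINQ query and the runtime decides how to parallelize it across cores. DryadLINQ applies the same LINQ idea but distributes the query across a cluster of machines; this snippet deliberately uses only the standard .NET 4 PLINQ API, not the DryadLINQ types:

using System;
using System.Linq;

// A declarative word count: an ordinary LINQ query plus AsParallel() lets the
// runtime partition the work across cores. DryadLINQ extends the same style
// of query so it runs across many machines instead of many cores.
class WordCount
{
    static void Main()
    {
        string[] lines =
        {
            "the quick brown fox",
            "jumps over the lazy dog",
            "the dog barks"
        };

        var counts = lines
            .AsParallel()                                   // parallelize the pipeline
            .SelectMany(line => line.Split(' '))            // tokenize each line
            .GroupBy(word => word)                          // group identical words
            .Select(g => new { Word = g.Key, Count = g.Count() })
            .OrderByDescending(r => r.Count);

        foreach (var result in counts)
            Console.WriteLine("{0}: {1}", result.Word, result.Count);
    }
}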


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Mary Jo Foley (@maryjofoley) asserted “Microsoft is expected to announce yet another reorganization, maybe as soon as the week of May 1, that the company is hoping will help boost the sales of its Windows Azure cloud-computing platform” in the lead for her Microsoft to shuffle the Windows Azure deck chairs (again)? article of 4/27/2011 for ZDNet’s All About Microsoft blog:

I’m hearing from my contacts that Scott Guthrie, currently Corporate Vice President of the .Net Platform, may be moving to the Windows Azure team, as part of the shake-up.


Microsoft has attempting to play up its ability to provide software and services for customers interested in the public cloud, the private cloud and a hybrid of the two. But some of the promised components of its strategy have gone missing — like the Windows Azure Appliances (private cloud in a box) unveiled a year ago. (Microsoft execs have said they will provide an update on the appliances at the TechEd ‘11 conference in mid-May.) Some company watchers and partners with whom I’ve talked consider Microsoft’s cloud strategy to be confusing and not well articulated.

Read the entire story here. As I said in a comment to Mary Jo’s post:

ScottGu would be a welcome addition as a leader of the Windows Azure Platform team. His enthusiasm as the primary Visual Studio evangelist has been contagious.

The Windows Azure Platform Appliance has been vaporware since it was announced more than a year ago. It will be interesting to see if MSFT can resurrect it.


<Return to section navigation list> 

Cloud Security and Governance

• David Linthicum claimed his 3 dirty little cloud computing secrets were “What you need to know before committing to the cloud, whether vendors like it or not” in a 4/29/2011 post to InfoWorld’s Cloud Computing blog:

Every overhyped technology has good and bad aspects. The trouble is that few are willing to fill you in on the bad aspects. Doing so is often met with several dozen rounds of being called a hater. Cloud computing is no exception.

Here are the three major cloud computing secrets:

  1. Some public cloud computing providers are falling and will fail.
  2. Public clouds don't always save you money.
  3. Using clouds can get you fired.

Now to the details.

Dirty cloud secret 1: Some public cloud computing providers are falling and will fail. Many of the smaller cloud computing providers are not getting the traction they anticipated and are closing their doors, including some of the older firms. This is largely due to providing a far too tactical solution in a world where strategic solutions are sought. Moreover, newer providers have learned to use other clouds, such as IaaS and PaaS clouds, as their platform, whereas the older providers built their services from scratch and are paying for their own private data center spaces. The new generation of cloud providers based on cheaper back ends is pushing them out of business.

Dirty cloud secret 2: Public clouds don't always save you money. I've covered this topic before. The fact is public clouds are not cheap. If you've already invested in internal infrastructure, public clouds don't always make financial sense. You need to run the numbers.

Dirty cloud secret 3: Using clouds can get you fired. The slowest and weakest in the herd get culled, while some in the lead will take all the arrows. Many of those IT leaders who look to build private clouds or to use public clouds have made huge errors that have hurt the business. A few have been presented with walking papers. At least they have some popular technology experiences on their CV; my belief is that it's better to be an innovator than an adopter.


• Chris Hoff (@Beaker) posted On the CA/Ponemon Security of Cloud Computing Providers Study… on 4/29/2011:

Computer Associates (CA) recently sponsored the second in a series of Ponemon Institute cloud computing security surveys.

The first, released in May 2010, was focused on responses from practitioners: “Security of Cloud Computing Users – A Study of Practitioners in the US & Europe.”

The latest, titled “Security of Cloud Computing Providers Study” and released this week, examines “cloud computing providers’” perspectives on the same.

While the study breaks down the  survey in detail in Appendix 1, I would kill to see the respondent list so I could use the responses from some of these “cloud providers” to quickly make assessments of my short list of those to not engage with.

I suppose it’s not hard to believe that security is not a primary concern, but given all the hype surrounding claims of “cloud is more secure than the enterprise,” it’s rather shocking to think that this sort of behavior is reflective of cloud providers.

Let’s see why.

This survey qualifies those surveyed as such:

We surveyed 103 cloud service providers in the US and 24 in six European countries for a total of 127 separate providers. Respondents from cloud provider organizations say SaaS (55 percent) is the most frequently offered cloud service, followed by IaaS (34 percent) and PaaS (11 percent). Sixty-five percent of cloud providers in this study deploy their IT resources in the public cloud environment, 18 percent deploy in the private cloud and 18 percent are hybrid.

…and offers these most “salient” findings:

  • The majority of cloud computing providers surveyed do not believe their organization views the security of their cloud services as a competitive advantage. Further, they do not consider cloud computing security as one of their most important responsibilities and do not believe their products or services substantially protect and secure the confidential or sensitive information of their customers.
  • The majority of cloud providers believe it is their customer’s responsibility to secure the cloud and not their responsibility. They also say their systems and applications are not always evaluated for security threats prior to deployment to customers.
  • Buyer beware – on average, providers of cloud computing technologies allocate 10 percent or less of their operational resources to security, and most do not have confidence that customers’ security requirements are being met.
  • Cloud providers in our study say the primary reasons why customers purchase cloud resources are lower cost and faster deployment of applications. In contrast, improved security or compliance with regulations is viewed as an unlikely reason for choosing cloud services. The majority of cloud providers in our study admit they do not have dedicated security personnel to oversee the security of cloud applications, infrastructure or platforms.
  • Providers of private cloud resources appear to attach more importance and have a higher level of confidence in their organization’s ability to meet security objectives than providers of public and hybrid cloud solutions.
  • While security as a “true” service from the cloud is rarely offered to customers today, about one-third of the cloud providers in our study are considering such solutions as a new source of revenue sometime in the next two years.

I have so much I’d like to say with respect to these summary findings and the details within the reports, but much of it I already have.  I don’t think these findings are reflective of the larger cloud providers I interact with, which is another reason I would love to see who these “cloud providers” were beyond the breakdown of their service offerings that were presented.

In the meantime, I’d like to refer you to these posts I wrote for reflection on this very topic:


Jay Heiser recommended that you Get your head out of the cloud in a 4/27/2011 post to the Gartner blogs:

Security practitioners inevitably cloud their thinking whenever they become trapped in purist arguments over what constitutes cloud computing.

It has long been recognized that discussions about cloud security quickly degenerate into arguments over what constitutes a cloud (I’m told the same thing happens to other IT specialties).  Cloud purity is always a good excuse for a vigorous argument (and even some self-satisfying intellectual bullying). It would undoubtedly be useful if the world could agree on precisely which situations are cloudy and which are not.  However, if a more precise understanding of cloudishness eventually emerges, it will almost certainly not be heavily influenced by the security niche (my bet is on the advertising industry).

My point is not that cloud computing is not a useful concept (and who could possibly question that?).  My point is that the people responsible for assessing confidentiality, integrity, and availability risks should be focusing their attention on what is relevant to risk.

Security questions function at an abstraction level that can be almost blissfully aloof from purist arguments over blanket terminology.  Understanding the security profile requires detailed answers to questions like: Who is doing what (or wants to do what)?  Where?  How?  Using what technology?  Who controls it?  Who can access it?  And the most important question: How do you know that?

“In the cloud” can never be a useful answer to any substantive security question.


Jay Fry (@jayfry3) posted More than 7 deadly sins of cloud computing on 4/25/2011 to his Data Center Dialog blog (missed when posted):

I’m a sucker for a clever headline.

A while back I ran across an article about the “7 Deadly Sins of Cloud Computing” in Computerworld. Antony Savvas was writing about a report from the Information Security Forum (ISF) that announced it had identified those items of great IT wickedness that will turn something that sounds as angelic as it comes – cloud computing – into some sort of pit of eternal damnation.

OK, maybe I’m exaggerating, but just go with it. (Though after the beating that Amazon took from some quarters after their EC2 outage last week, maybe I’m not exaggerating by much.)

So what were those 7 cloudy yet sinful atrocities? I’ll tempt fate by listing them here, as reported by Computerworld UK:

  1. Ignorance - cloud services have little or no management knowledge or approval
  2. Ambiguity - contracts are agreed without authorization, review or security requirements
  3. Doubt - there is little or no assurance regarding providers' security arrangements
  4. Trespass - failure to consider the legality of placing data in the cloud
  5. Disorder - failure to implement proper management of the classification, storage, and destruction of data
  6. Conceit - belief that enterprise infrastructure is ready for the cloud when it's not
  7. Complacency - assuming 24/7 service availability

It’s a solid list, for sure. For each of these, you can probably recollect relevant horror stories. For example, the folks whose sites were impacted by Amazon EC2 going down for an extended period of time last week are probably guilty of the last one: complacency. They forgot to architect for failure, something they had probably done all the time in pre-cloud IT environments.

As part of the write-up on these big no-nos, Steve Durbin, ISF global vice president, explains that "with users signing up to new cloud services daily - often 'under the radar' - it's vital that organizations ensure their business is protected and not exposed to threats to information security, integrity, availability and confidentiality." No argument there from me at all.

But, security isn’t the only thing you need to be concerned about if you’re going to list out cloud computing sins. And why stop at seven? (Historical and liturgical tradition aside, of course.)

So, after talking about some additions to the list with folks on Twitter, here are a few more sins that I’ve heard suggested that I think are worthy of adding to the list:

  1. Cloudwashing (from @Staten7) – Describing something as a cloud offering that is not. Vendors get beaten up for this all the time. And they often deserve it. In his research at Forrester, James Staten points out that enterprises do this, too. So, this can apply to vendors and to enterprises who believe they’ve checked the cloud box by just doing a little virtualization work.
  2. Defensive posture (also from @Staten7) – I think this is one of the reasons an enterprise cloudwashes their own internal efforts. They are not looking at cloud computing for the agility or cost benefits, but instead are working to meet someone’s internal goal of “moving to cloud.” They’re trying to cross cloud off the list while trying to avoid breaking their own existing processes, technology, or organizational silos. Or by saying they’ve already done as much as they need to do. Pretty selfish, if you ask me. Which is a sin all its own, right?
  3. Needless complexity (from @mccrory) – The cloud is supposed to be clean, simple, and dead easy. Yet, there are cloud offerings that end up being just as complicated as the more traditional route of sourcing and operating IT. That’s sort of missing the point, I think.
  4. Too many separate security projects (from @jpmorgenthal) – Back on the security topic, JP Morgenthal tweeted this: “I’m a big supporter of security investments but I believe there are too many separate cloud security efforts.” What are the issues? For starters, the right hand needs to know what the left hand is doing. And those different efforts need to be at the right level, area of focus, and have the right buy-in. Folks like Chris Hoff at Cisco and our own security experts here at CA Technologies can help you sort through this in more detail.
  5. Worshipping false idols (from @AndiMann) – Well, this sounds a bit more like a commandment than a deadly sin, but I’m not going to split hairs at this point. This directive from on high can cover two topics, in fact: don’t get all hung up on cloud computing definitions to the exclusion of a useful approach. And, secondly, ignore silly rhetoric from vendors going on and on about “false clouds.” Yes, salesforce.com, I’m talking about you.

So there you go. Five newly minted Deadly Cloud Sins to go along with the 7 Cloud Security Sins from the ISF. All those sins, and I still didn’t figure a way to wrap in gluttony. That was always a favorite of mine.

What’s the bottom line here? Aside from a little bombast, there’s some good advice to be had in all this. Avoid these approaches and you have a fighting chance at cloud computing salvation. Ignore them and your users will be drawing pictures of you with little devil horns and pitchforks on them. That last scenario is never a path to success in my book.

Any additional sins you will admit to? There are plenty more cloud computing sins that could be added to this list, that’s for sure. Share any glaring ones that you think absolutely have to be here. Even if you’re the one that committed them. After all, confession is good for the soul.


<Return to section navigation list> 

Cloud Computing Events

Jonathan Rozenblit reported on 4/26/2011 that AzureFest Makes A Surprise Visit to Calgary This Weekend on 4/30/2011:

AzureFest is making a surprise visit to Calgary this weekend! If you haven’t yet heard of AzureFest, check out this post where AzureFest is described in full.

Remember, AzureFest is a hands-on event. This means that you’ll be following along on your own laptop and actually deploying your solution during the event. In order to get the most out of the experience, make sure to bring your laptop, a power cable if you’re going to need to plug in your laptop, and a credit card. Don’t worry, nothing will be charged to your credit card during AzureFest. Your credit card is just required for activating your Windows Azure account.


If you want to see for yourself how easy it is to move your existing application to the cloud, this is an event you don’t want to miss. Register early as space is limited.

Calgary
University of Calgary, Rm 121 ICT Building
2500 University Drive NW, Calgary, AB
Saturday, April 30, 2011
Click here to register


Steve Plank (@plankytronixx) announced Free Windows Azure in-person training (UK) at Windows Azure half-day bootcamps:

There is still time to book on to the free Windows Azure half-day bootcamp running on 4th May at the Microsoft Campus in Reading, England.  You can select either morning or afternoon.

If you’ve never played with Windows Azure, never written an app, never looked at the architecture or tried to understand SQL Azure and you want a quick and efficient way to get yourself up to speed with the basics so you can go on to bigger and better things – this is the perfect way to spend half a day.


It’ll be a fairly pacey half-day with hands-on labs included.

You’ll build a simple Windows Azure application which will involve multiple compute-instances in a web role, multiple compute-instances in a worker role, Windows Azure storage and SQL Azure. You’ll deploy it to the cloud and you can then go on to modify it at your leisure once you’ve left the bootcamp.
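
If you want a preview of the pattern behind the worker role lab, here is a hedged sketch using the StorageClient library from the Windows Azure SDK of that era: the web role drops a message on a queue and the worker role polls the same queue. The queue name and the use of development storage are assumptions for illustration; follow the lab hand-outs for the exact code:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Sketch of web-to-worker communication through a Windows Azure queue
// (SDK 1.x StorageClient). The queue name "messages" is an assumption.
public static class QueueSample
{
    // Development storage keeps the sketch self-contained; a deployed role
    // would read its connection string from the service configuration instead.
    static readonly CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;

    // Called from the web role, for example in a button-click handler.
    public static void Enqueue(string text)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("messages");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(text));
    }

    // The worker role's Run() loop polls the same queue.
    public static void ProcessLoop()
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("messages");
        queue.CreateIfNotExist();
        while (true)
        {
            CloudQueueMessage msg = queue.GetMessage();
            if (msg != null)
            {
                Console.WriteLine("Worker received: " + msg.AsString);
                queue.DeleteMessage(msg); // remove it so it is not processed again
            }
            else
            {
                Thread.Sleep(1000);       // nothing queued; back off briefly
            }
        }
    }
}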

I talked about it in a previous post. But here are the registration details:

Windows Azure Half-Day Bootcamps

If you are a UK developer looking to take advantage of cloud computing, but you haven’t yet taken the plunge, this free half-day of training is the quickest way to get up-to-speed with Microsoft’s offering; Windows Azure. We’ll take you from knowing nothing about the cloud to actually having written some code, deployed it to the cloud service and made a simple application available on the public Internet. You’ll get all the information you need to get up to speed with Windows Azure in a packaged and compressed form, ready for your consumption, without having to trawl through books, blogs and articles on your own. There will be experienced people available to guide you through each exercise. Once you have the basics in place, you’ll be off and running.

To get your applications running, you’ll need an Azure subscription, so we’ll issue you with a special free pass that will entitle you to 4 compute instances, 3 GB of storage, 2 * 1GB Web Edition databases and more. You’ll be able to do some quite elaborate learning and testing even after you’ve left the training course. You do not need a credit card to activate this free pass.

Pre-requisites:

  • A wireless-enabled laptop with either Visual Web Developer Express or Visual Studio 2010 installed along with the Windows Azure SDK and Tools version 1.4 (including the SDK pre-requisites):
  • .NET Framework 3.5 SP1
  • IIS 7.0 (with ASP.NET, WCF HTTP Activation, Static Content, IIS Management Console and optionally CGI)
  • Microsoft SQL Server 2008 R2 or Microsoft SQL Server Express 2008 R2 or Microsoft SQL Server 2008 or Microsoft SQL Server Express 2005
  • Bring the power supply; you will be using the laptop all day.
  • A basic knowledge of programming concepts and familiarity with Visual Studio
  • A basic knowledge of web-programming and how Internet applications work
  • An understanding of the Microsoft web-stack (Windows Server, IIS, basic security etc.)

When you walk away from this bootcamp, you can either de-activate the Azure application before you leave, or leave it running so that when you get home you can continue with your Windows Azure coding adventures. In any case you will walk away with the code you’ve written on your laptop and an ability to modify it, test it locally on your laptop and deploy it to your free Windows Azure subscription any time you choose.

4 hours; half a day:

  1. Registration and Coffee
  2. Windows Azure Introduction – architecture, roles, storage.
  3. First Lab: Hello World Windows Azure Web Role and deploying to your free Windows Azure subscription
  4. Second Lab: Using Windows Azure Storage
  5. Third Lab: Worker Role and how to use Windows Azure Storage to communicate between roles.
  6. Break, phone calls, coffee…
  7. Introduction to SQL Azure
  8. Lab: Using SQL Azure with a Windows Azure application
  9. Review and wrap-up

To register go to one of these URLs:


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

My (@rogerjenn) fully illustrated Test-Driving IBM’s SmartCloud Enterprise Infrastructure as a Service: Part 2 - Spring 2011 Promotion Free Trial tutorial of 4/28/2011 describes creating a Windows 2008 R1 Silver compute instance and persistent storage. From the “My initial conclusion” section:

The provisioning process [for IBM’s SmartCloud Enterprise] is simple and straightforward.  Simplicity is the result of a lack of most of the advanced options offered by Amazon Web Services (AWS).  In particular, managing instance availability and scalability (DevOps) is a complex manual process.  IBM papers on do-it-yourself high-availability topics apply only to Red Hat Enterprise Linux instances.  Unlike AWS with its Elastic Beanstalk and CloudFormation offerings, there appear to be no IBM or third-party deployment, management or monitoring tools available for SmartCloud Enterprise.

Read the entire guided tour.


Nati Shalom posted PaaS on OpenStack - Run Applications on Any Cloud, Any Time Using Any Thing on 4/28/2011:

Yesterday, I had a session during the OpenStack Summit where I tried to present a more general view on how we should be thinking about PaaS in the context of OpenStack.

The key takeaway:

The main goal of PaaS is to drive productivity into the process by which we can deliver new applications.

Most of the existing PaaS solutions take a fairly extreme approach with their abstraction of the underlying infrastructure and therefore fit a fairly small number of extremely simple applications and thus miss the real promise of PaaS.

Amazon’s Elastic Beanstalk took a more bottom-up approach, giving us a better set of tradeoffs between abstraction and control, which makes it more broadly applicable to a larger set of applications.

The fact that OpenStack is open source allows us to think differently about the things we can do at the platform layer. We can create a tighter integration between the PaaS and IaaS layers and thus come up with a better set of tradeoffs in the way we drive productivity without giving up control. Specifically, that means:

  • Anyone should be able to:
    • Build their own PaaS in a snap
    • Run on any cloud (public/private)
    • Gain multi-tenancy, elasticity… Without code changes.
  • Provide a significantly higher degree of control without adding substantial complexity over our:
    • Language choice
    • Operating System
    • Middleware stack
  • Should come pre-integrated with a popular stack:
    • Spring,Tomcat, DevOps, NoSQL, Hadoop...
    • Designed to run the most demanding mission-critical app

You can read the full story and see the demo here.


Matthew Weinberger asked Why Would VMware Acquire a Cloud Presentation Company? in a 4/28/2011 post to the TalkinCloud blog:

Despite building momentum in the private and hybrid cloud space, VMware is best known as a virtualization technology developer. That’s why I was scratching my head for a few minutes when I found out VMware acquired SlideRocket, a cloud-based presentation tool. But on further reflection, this could turn out to be an inspired move for VMware, as it prepares to compete with Microsoft SharePoint on its home turf: the slide presentation.

First off, VMware is getting into the cloud platform game with its Cloud Foundry initiative. That already puts the company into competition with the Microsoft Windows Azure PaaS platform. Being able to offer enterprise SaaS such as SlideRocket’s offering on top of it makes perfect sense. And since it’s privately hostable, it may have an edge in some use cases over public cloud offerings such as Microsoft Office 365, which includes SharePoint Online.

The dots all connect: VMware only recently made its Zimbra e-mail offering available to channel partners. And it’s easy to see how the Zimbra and SlideRocket offerings might fit together as the foundation of a suite of private cloud collaboration tools. While the official presentation announcing the acquisition wasn’t naming names, it’s pretty clear VMware is looking to take on Microsoft.

As ever, adoption rates will tell the tale. Cloud service providers: Would you be interested in a VMware-provided, privately hosted cloud collaboration suite?


Alex Williams asked the same question, VMware is a Virtualization Company - So Why is it Buying a Slide Show Service?, in a 4/27/2011 post to the ReadWriteCloud:

VMware is a virtualization company, right? Then why did it announce yesterday that it is buying SlideRocket, a services company that helps people make their presentations look all snappy?

The same reasons as last year when it bought Zimbra, the email services provider. Microsoft.

Or perhaps because collaboration may be one of the biggest opportunities in the enterprise. Really... it's both.

Here are some details on the deal that should shine light on what VMware is after:

  • SlideRocket has 20,000 customers and 300,000 users.
  • It is a Web-based service.
  • It has its own marketplace.
  • It is available across end-devices such as smartphones and tablets.

SlideRocket is a modern presentation application. It is easy to use. It does not require installation. SlideRocket goes with email, which is a good fit with Zimbra. It's also a supplement to Zimbra, which already has a presentation capability.

VMware CTO Steve Herrod writes:

Presentations are second only to email as the most commonly used business tool. Professionals rely upon presentations for critical business communication such as influencing audiences and closing deals. Yet, despite their critical role, the process of creating, delivering, and sharing presentations is still based on 25-year-old technology, so most presentations remain static, one-way documents that lack impact.

That quote supports our answer to the question. Microsoft is the only company with 25-year-old presentation technology.

Here's further proof. Herrod says collaboration is key:

Collaborating with others around presentations often involves sending large file attachments and comments in email, worrying about whether recipients have the appropriate software to review, and wasting time keeping track of who has the latest version. And, once a presentation is shared outside your company, it's impossible to make changes, or even know if someone has viewed it. The process is frustrating and the result is often miscommunication and lost productivity. Perhaps most importantly, this approach to building and sharing presentations is incompatible with our increasingly mobile business lives.

VMware is the virtualization leader. But it is increasingly focused on the top of the stack where collaborative applications are thriving. The company recently took over Mozy for Cloud Foundry, its new platform. That's further proof that VMware is serving as the stable for apps that have a Web focus. That puts VMware in a place that positions it against Microsoft, still the reigning power of enterprise productivity.


Michael Coté (@cote) announced his 00:15:19 IBM Smart Cloud, Enterprise and Enterprise+ Overview Webcast on 4/27/2011:

You’ve probably heard that IBM recently launched two new public cloud offerings, Enterprise and Enterprise+. These offerings are oriented around the requirements IBM is getting from larger companies – they’re hoping to match the feature sets to existing work-loads and application types.

While at the IBM Cloud Forum where IBM announced these offerings, I sat down with IBM’s Jan Jackman and CohesiveFT’s Craig Heimark to talk about these offerings. Jan tells us about the two Smart Clouds and the types of work-loads people are using cloud for; Craig goes over how CohesiveFT partners with IBM to secure these work-loads and help manage the stacks.

Disclosure: IBM is a client and sponsored this podcast.


Alex Williams (@alexwilliams) reported Yahoo Weighs Spinning Out Hadoop Engineering Group for $1 Billion Opportunity in a 4/26/2011 post to the ReadWriteCloud:

Yahoo has invested considerable resources into Apache Hadoop over the past several years. And now it is considering spinning out the engineering group responsible for the data analysis software into a separate company that it believes has the potential to be a $1 billion business.

Hadoop is the open source distributed data technology that is now used by Web companies and increasingly by enterprise providers. It's an analytics and optimization tool that Yahoo uses to personalize content and optimize advertising.

We've been covering Yahoo's deepening interest in Hadoop over the past year. The latest development came last week when Yahoo joined the Linux Foundation. It was another signal of its focus on Hadoop and its further commitment to the data analytics technology.

According to the Wall Street Journal, Yahoo would spin out the engineering group to form a separate business. Yahoo would not comment on the news, but there have been discussions with Benchmark Capital, a Silicon Valley venture-capital firm, about forming a Hadoop company. A partner at Benchmark sees the opportunity as one of the biggest the enterprise software world has seen in quite some time.

Yahoo would be entering an already competitive market. Cloudera is one of the most well-known startups competing in the space that markets services related to Hadoop. IBM has also invested heavily in the technology.

Hadoop is one of those technologies that has value to any organization with lots of data and a Web presence. It should be no surprise that companies are adopting it. What is fascinating is seeing Yahoo's evolution from an advertising company to one that sees services as an important aspect of its offerings.


<Return to section navigation list>