Sunday, March 27, 2011

Windows Azure and Cloud Computing Posts for 3/26/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 3/27/2011 with articles marked • from the Windows Azure Team, Firedancer, Flick Software, Alex Williams, SQL Azure Labs, Saugatuck Technology, Damir Dobric, Graham Calladine, Bruce Guptill, David Linthicum, Robert Green, Inge Henriksen and Bert Craven.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Richard Parker and Valeriano Tortola reported on 3/26/2011 the availability of Beta 2 of their FTP to Azure Blob Storage Bridge project on CodePlex:

Project Description
Deployed in a worker role, the code creates an FTP server that can accept connections from all popular FTP clients (like FileZilla, for example) for command and control of your blob storage account.
What does it do?

It was created to enable each authenticated user to upload to their own unique container within your blob storage account, and restrict them to that container only (exactly like their 'home directory'). You can, however, modify the authentication code very simply to support almost any configuration you can think of.

Using this code, it is possible to actually configure an FTP server much like any traditional deployment and use blob storage for multiple users.
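As a rough illustration of that per-user mapping (this is not the project's actual code; the connection string, naming rule and method name are assumptions), an authentication hook might resolve each user to a dedicated container along these lines:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class HomeContainerResolver
{
    // Sketch only: maps an authenticated FTP user to a blob container that acts
    // as the user's "home directory". One container per user, created on demand.
    public static CloudBlobContainer GetHomeContainer(string userName)
    {
        // Placeholder connection string; substitute your storage account credentials.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=youraccount;AccountKey=yourkey");
        CloudBlobClient client = account.CreateCloudBlobClient();

        // Container names must be lower case, so normalize the user name.
        CloudBlobContainer container =
            client.GetContainerReference(userName.ToLowerInvariant());
        container.CreateIfNotExist();
        return container;
    }
}

Restricting a user to that container then amounts to rejecting any FTP path that does not resolve beneath it.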

This project contains slightly modified source code from Mohammed Habeeb's project, "C# FTP Server" and runs in an Azure Worker Role.

Can I help out?
Most certainly. If you're interested, please download the source code, start tinkering and get in touch!


Thomas Conte posted Deploying Ruby in Windows Azure with access to Windows Azure Blobs, Tables and Queues on 3/22/2011:

You probably know already that the Windows Azure Platform supports many types of technologies; we often mention Java or PHP in that regard, but today I would like to show you how to deploy a basic Ruby application in Windows Azure. This will also give us the opportunity to see a few utilities and tips & tricks around application deployment automation in Windows Azure Roles using Startup Tasks.

First, a quick word about Windows Azure Platform support in Ruby in general: I am going to use the Open Source WAZ-Storage library that allows you to access Windows Azure Blobs, Tables & Queues from your Ruby code. This will allow us both to run our Ruby application in the cloud, and leverage our non-relational storage services.

In order to start with the most compact application possible, I am going to use the Sinatra framework that allows very terse Web application development; here is the small test application I am going to deploy, which consists of a single Ruby file:

require 'rubygems'
require 'sinatra'
require 'waz-blobs'
require 'kconv'
get '/' do
	WAZ::Storage::Base.establish_connection!(:account_name => 'foo', :access_key => 'bar', :use_ssl => true)
	c = WAZ::Blobs::Container.find('images')
	blobnames = c.blobs.map { |blob| blob.name }
	blobnames.join("<br/>")
end

Here are a few key points of this superb application:

  • It is a test application that will run within Sinatra’s “development” configuration, and not a real production configuration… But this will do for our demonstration!
  • The WAZ-Storage library is very easy to use: just replace the “foo/bar” in establish_connection() with your storage account name and key, then choose an existing container name for the call to Container.find()
  • The code will assemble the list of Blob names from the container into a character string that will be displayed in the resulting page
  • The WAZ-Storage library of course exposes a much more complete API that allows creation, modification, and deletion of Blobs and Containers… You can go to their GitHub to download the library, and you are of course welcome to contribute ;-)

Let’s see how we are going to prepare our environment to execute this application in Windows Azure.

We are going to use a Worker Role to execute the Ruby interpreter. The benefit of using the Ruby interpreter and the Sinatra framework is that their impact on the system is very light: all you need to do is install the interpreter, download a few Gems, and you will have everything you need. It’s a perfect scenario to use Startup Tasks in a Worker Role, a feature that will allow us to customize our environment and still benefit from the Platform As A Service (PAAS) features, like automatic system updates & upgrades.

I have created a new project of type Cloud Service in Visual Studio 2010, where I added a Worker Role. I then added a “startup” directory in the Worker Role project, where I will store all the files required to install Ruby. It looks like this:

[Screenshot: contents of the “startup” directory in the Worker Role project]

Let’s see what all these files do:

ruby-1.9.2-p180-i386-mingw32.7z is of course the Ruby interpreter itself. I chose the RubyInstaller for Windows version, which is ready to use. I chose the 7-Zip version instead of the EXE installer, because it will just make operations simpler (all you need to do is extract the archive contents). I could also dynamically download the archive (see below), but since it’s only 6 MB, I finally decided to include it directly in the project.

test.rb is of course my Ruby “application”.

curl is a very popular command-line utility, that allows you to download URL-accessible resources (like files via HTTP). I am not using it for the moment, but it can be very useful for Startup Tasks: you can use it to download the interpreter, or additional components, directly from their hosting site (risky!) or from a staging area in Blob Storage. It is typically what I would do for bigger components, like a JDK. I downloaded the Windows 64-bit binary with no SSL.

7za is of course the command-line version of 7-Zip, the best archiver on the planet. You will find this version on the download page.

WAUtils will be used in the startup script to find the path of the local storage area that Windows Azure allocated for me. This is where we will install the Ruby interpreter. There is no way to “guess” the path of this area, you need to ask the Azure runtime environment for the information. The source code for WAUtils is laughably simple, I just compiled it in a separate project and copied the executable in my “startup” folder.

namespace WAUtils
{
    class Program
    {
        static void Main(string[] args)
        {
            if (args.Length < 2)
            {
                System.Console.Error.WriteLine("usage: WAUtils [GetLocalResource <resource name>|GetIPAddresses <rolename>]");
                System.Environment.Exit(1);
            }
            switch (args[0])
            {
                case "GetLocalResource":
                    GetLocalResource(args[1]);
                    break;
                case "GetIPAddresses":
                    break;
                default:
                    System.Console.Error.WriteLine("unknown command: " + args[0]);
                    System.Environment.Exit(1);
                    break;
            }
        }
        static void GetLocalResource(string arg)
        {
            LocalResource res = RoleEnvironment.GetLocalResource(arg);
            System.Console.WriteLine(res.RootPath);
        }
    }
}

As you can see, I use the RoleEnvironment.GetLocalResource() method to find the path to my local storage.

Let’s now have a look in my solution’s ServiceDefinition.csdef; this is where I will add this local storage definition, called “Ruby”, for which I requested 1 GB.

<LocalResources>
  <LocalStorage name="Ruby" cleanOnRoleRecycle="false" sizeInMB="1024" />
</LocalResources>

I will also add an Input Endpoint, to ask the Windows Azure network infrastructure to expose my local port 4567 (Sinatra’s default) to the outside world as port 80.

<Endpoints>
  <InputEndpoint name="Ruby" protocol="tcp" port="80" localPort="4567" />
</Endpoints>

I will then add two Startup Tasks:

<Startup>
  <Task commandLine="startup\installruby.cmd" executionContext="elevated" />
  <Task commandLine="startup\startruby.cmd" taskType="background" />
</Startup>
  • installruby.cmd will execute with Administrator rights (even if it is not strictly required in this case… but this is typically what you would use for system configuration tasks)
  • startruby.cmd will start the Ruby interpreter, will not use Administrator rights (for security reasons), and will run in “background” mode, which means that the Worker Role will not wait for the task to complete before finishing its startup procedure.

Let’s now see these two scripts!

First, installruby.cmd:

cd %~dp0
for /f %%p in ('WAUtils.exe GetLocalResource Ruby') do set RUBYPATH=%%p
echo y| cacls %RUBYPATH% /grant everyone:f /t
7za x ruby-1.9.2-p180-i386-mingw32.7z -y -o"%RUBYPATH%"
set PATH=%PATH%;%RUBYPATH%\ruby-1.9.2-p180-i386-mingw32\bin
call gem install sinatra haml waz-storage --no-ri --no-rdoc
exit /b 0

Here are the steps:

  • We run WAUtils.exe to find the local storage path, and we save it in a variable called RUBYPATH
  • We use the Windows cacls.exe command to grant Everyone full rights on this directory
  • We extract the 7-Zip archive using 7za, in the RUBYPATH directory
  • We add Ruby’s “bin” sub-directory to the system PATH
  • Finally, we use the “gem” command to dynamically download the Ruby libraries we need

Now, startruby.cmd, which will run my application:

for /f %%p in ('startup\WAUtils.exe GetLocalResource Ruby') do set RUBYPATH=%%p
set PATH=%PATH%;%RUBYPATH%\ruby-1.9.2-p180-i386-mingw32\bin
ruby test.rb

That’s it, I hope these guidelines will help you re-create your own Ruby service in Windows Azure!

And thanks to my colleague Steve Marx for inspiration & guidance :-)


<Return to section navigation list> 

SQL Azure Database and Reporting

The SQL Azure Labs Team announced the availability of a Database Import and Export for SQL Azure CTP on 3/26/2011:

SQL Azure database users have a simpler way to archive SQL Azure and SQL Server databases, or to migrate on-premises SQL Server databases to SQL Azure. Import and export services through the Data-tier Application (DAC) framework make archival and migration much easier.

The import and export features provide the ability to retrieve and restore an entire database, including schema and data, in a single operation. If you want to archive or move your database between SQL Server versions (including SQL Azure), you can export a target database to a local export file which contains both database schema and data in a single file. Once a database has been exported to an export file, you can import the file with the new import feature. Refer to the FAQ at the end of this article for more information on supported SQL Server versions.

This release of the import and export feature is a Community Technology Preview (CTP) for upcoming, fully supported solutions for archival and migration scenarios. The DAC framework is a collection of database schema and data management services, which are strategic to database management in SQL Server and SQL Azure.

Microsoft SQL Server “Denali” Data-tier Application Framework v2.0 Feature Pack CTP

The DAC Framework simplifies the development, deployment, and management of data-tier applications (databases). The new v2.0 of the DAC framework expands the set of supported objects to full support of SQL Azure schema objects and data types across all DAC services: extract, deploy, and upgrade. In addition to expanding object support, DAC v2.0 adds two new DAC services: import and export. Import and export services let you deploy and extract both schema and data from a single file identified with the “.bacpac” extension.

For an introduction to and more information on the DAC Framework, this whitepaper is available: http://msdn.microsoft.com/en-us/library/ff381683(SQL.100).aspx .


Installation

In order to use the new import and export services, you will need the .NET 4 runtime installed. You can obtain the .NET 4 installer from the following link.
With .NET 4 installed, you will then install the following five SQL Server redistributable components on your machine. The only ordering requirement is that SQLSysClrTypes be installed before SharedManagementObjects.

  1. SQLSysClrTypes.msi
  2. SharedManagementObjects.msi
  3. DACFramework.msi
  4. SqlDom.msi
  5. TSqlLanguageService.msi

Note: Side-by-side installations with SQL Server “Denali” CTP 1 are not supported and will cause users some issues. Installations of SQL Server 2008 R2 and earlier will not be affected by these CTP components.


Usage

You can leverage the import and export services by using the following .EXE which is provided as an example only.
DacImportExportCli.zip

Sample Commands

Assume a database exists that is running on SQL Server 2008 R2, which a user has federated (or integrated security) access to. The database can be exported to a “.bacpac” file by calling the sample EXE with the following arguments:
DacImportExportCli.exe -s serverName -d databaseName -f C:\filePath\exportFileName.bacpac -x -e

Once exported, the export file can be imported to a SQL Azure database with:
DacImportExportCli.exe -s serverName.database.windows.net -d databaseName -f C:\filePath\exportFileName.bacpac -i -u userName -p password

A DAC database running in SQL Server or SQL Azure can be unregistered and dropped with:
DacImportExportCli.exe -s serverName.database.windows.net -drop databaseName -u userName -p password

You can also just as easily export a SQL Azure database to a local .bacpac file and import it into SQL Server.


Frequently Asked Questions

A FAQ is provided to answer some of the common questions about importing and exporting databases with SQL Azure.
The FAQ can be found on the TechNet Wiki.

See also Steve Yi’s Simplified Import and Export of Data post of 3/24/2011 to the SQL Azure Team blog below.


• See the Windows Azure Team’s 3/26/2011 recommendation that you Try the Windows Azure Platform Academic 180 day pass if you meet the qualifications and have a promo code in the Windows Azure Infrastructure and DevOps section below. (Users of previous Windows Azure passes need not apply.)


Steve Yi reported Simplified Import and Export of Data in a 3/24/2011 post to the SQL Azure Team blog:

Importing and exporting data between on-premises SQL Server and SQL Azure just got a lot easier, and you can get started today with the availability of the Microsoft SQL Server "Denali" Data-tier Application (DAC) Framework v2.0 Feature Pack CTP.  Let's call this the DAC framework from this point on. :-)  To learn more about DAC, you can read this whitepaper.

If you're eager to try it out, go to the SQL Azure Labs page; otherwise read on for a bit to learn more.

There are 3 important things about this update to the DAC framework:

  1. New import & export feature: This DAC CTP introduces an import and export feature that lets you store both database schema and data into a single file with a ".bacpac" file extension, radically simplifying data import and export between SQL Server and SQL Azure databases with the use of easy commands.
  2. Free: This functionality will be shipping in all editions of the next version of SQL Server "Denali" (including free SKUs) and everyone will be able to download it.
  3. The Future = Hybrid Applications Using SQL Server + SQL Azure: Freeing information to move back and forth from on-premises SQL Server and SQL Azure to create hybrid applications is the future of data. The tools that ship with SQL Server "Denali" will use the DAC framework to enable data movement as part of normal management tasks.

The Data-tier Application (DAC) framework is a collection of database schema and data management libraries that are strategic to database management in SQL Server and SQL Azure.  In this CTP, the new import and export feature allow for the retrieval and restoration of a full database, including schema and data, in a single operation. 

If you want to archive or move your database between SQL Server versions and SQL Azure, you can export a target database to a single export file, which contains both database schema and data.  Also included are logins, users, tables, columns, constraints, indexes, views, stored procedures, functions, and triggers.  Once a database has been exported, users can import the file with the new import operation.

This release of the import and export feature is a preview for fully supported archival and migration capability of SQL Azure and SQL Server databases.  In coming months, additional enhancements will be made to the Windows Azure Platform management portal. Tools and management features shipping in upcoming releases of SQL Server and SQL Azure will have more capabilities powered by DAC, providing increased symmetry in what you can accomplish on-premises and in the cloud.

How Do I Use It?

Assume a database exists that is running within an on-premises SQL Server 2008 R2 instance that a user has access to.  You can export the database to a single ".bacpac" file by going to a command line and typing:

DacImportExportCli.exe -s serverName -d databaseName -f C:\filePath\exportFileName.bacpac -x -e

Once exported, the newly created file with the extension ".bacpac" can be imported to a SQL Azure database if you type:

DacImportExportCli.exe -s serverName.database.windows.net -d databaseName -f C:\filePath\fileName.bacpac -i -u userName -p password

A DAC database running in SQL Server or SQL Azure can be unregistered and dropped with:

DacImportExportCli.exe -s serverName.database.windows.net -drop databaseName -u userName -p password

You can also just as easily export a SQL Azure database to a local export file and import it into SQL Server.

How Should I Use the Import and Export Features?

It's important to note that export is not a recommended backup mechanism for SQL Azure databases. (We're working on that, so look for an update in the near future.)  The export file doesn't store transaction log or historical data.  The export file simply contains the contents of a SELECT * for any given table and is not transactionally consistent by itself.

However, you can create a transactionally consistent export by creating a copy of your SQL Azure database and then doing a DAC export on that.  This article has details on how you can quickly create a copy of your SQL Azure database.  If you export from on-premise SQL Servers, you can isolate the database by placing it in single-user mode or read-only mode, or by exporting from a database snapshot.
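As a sketch of that copy-then-export approach (server, credential and database names below are placeholders, and the export step still uses the DacImportExportCli.exe sample shown above), the copy itself can be started with a single T-SQL statement issued against the server's master database:

using System.Data.SqlClient;

class DatabaseCopySketch
{
    static void Main()
    {
        // CREATE DATABASE ... AS COPY OF must be issued against the master database.
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "serverName.database.windows.net",
            InitialCatalog = "master",
            UserID = "userName",
            Password = "password"
        };

        using (var connection = new SqlConnection(builder.ConnectionString))
        using (var command = connection.CreateCommand())
        {
            command.CommandText = "CREATE DATABASE databaseName_copy AS COPY OF databaseName";
            connection.Open();
            command.ExecuteNonQuery();   // the copy continues asynchronously on the server
        }
    }
}

Because the copy completes asynchronously, in practice you would wait for it to finish before running the DAC export against the copy.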

We are considering additional enhancements to make it easier to export or restore SQL Azure databases with export files stored in cloud storage - so stay tuned.

Tell Us What You Think

We're really interested to hear your feedback and learn about your experience using this new functionality.  Check out the SQL Azure forum and send us your thoughts here.

Check out the SQL Azure Labs page for installation and usage instructions, and frequently asked questions (FAQ). 


Tony Bailey claimed SQL Migration Wizard “solves a big problem for me” in a 3/24/2011 post to the TechNet blogs:

My colleague George [Huey] is working hard on the SQL Azure Migration Wizard [SQLAzureMW].

Some of the tool reviews speak for themselves.

Why SQL Azure?

  • Relational database service built on SQL Server technologies
  • If you know SQL, why waste development hours trying to configure and maintain your own or your hosted SQL Server box?
  • SQL Azure provides a highly available, scalable, multi-tenant database service hosted by Microsoft in the cloud

So, use the tool to migrate a SQL database to SQL Azure. SQLAzureMW is a user interactive wizard that walks a person through the analysis / migration process.

Got a question about SQL migration?   Get free help.

Sign up for Microsoft Platform Ready: http://www.microsoftplatformready.com/us/dashboard.aspx

Chat and phone hours of operation are from 7:00 am to 3:00 pm Pacific Time.

I can attest to SQLAzureMW’s superior upgrade capabilities. See my Using the SQL Azure Migration Wizard v3.3.3 with the AdventureWorksLT2008R2 Sample Database post of 7/18/2010.

For a bit of history, read my Using the SQL Azure Migration Wizard with the AdventureWorksLT2008 Sample Database post of 9/21/2009.


Steve Yi pointed to an MSDN Article: How to Connect to SQL Azure Using sqlcmd in a 3/24/2011 post to the SQL Azure team blog:

One of the reasons that SQL Azure is such a powerful tool is that it's based on the same database engine as SQL Server.  For example, you can connect to the Microsoft SQL Azure Database with the sqlcmd command prompt utility that is included with SQL Server. The sqlcmd utility lets you enter Transact-SQL statements, system procedures, and script files at the command prompt. MSDN has written a tutorial on how to do this. Take a look and get a feel for how easy it is to get started using SQL Azure.

Click here for the MSDN article


Steve Yi pointed to another MSDN Article: SQL Azure Data Sync Update in a 3/24/2011 post to the SQL Azure Team blog:

At PDC 2010, SQL Azure Data Sync CTP2 was announced and it has been progressing quite successfully with huge interest, so much so that we are going to continue to process registrations throughout March 2011.  For those of you who are waiting for access, we appreciate your patience.  We'll be releasing CTP3 this summer which will be made available to everyone. 

While on the subject of CTP3, here are some of the new features planned:

  • A new user interface integrated with the Windows Azure management portal.
  • The ability to select a subset of columns to sync as well as to specify a filter so that only a subset of rows are synced.
  • The ability to make certain schema changes to the database being synced without having to re-initialize the sync group and re-sync the data.
  • Conflict resolution policy can be specified.
  • General usability enhancements.

CTP3 will be the last preview before the final release later this year.  We'll announce the release dates on this blog when available.

Click here for the link to the MSDN article


Bill Ramo (@billramo) explained Migrating from MySQL to SQL Azure Using SSMA in a 3/23/2011 post to the Microsoft SQL Server Migration Assistant (SSMA) Team blog:

In this blog post, I will describe how to set up your trial SQL Azure account for your migration project, create a “free” database on SQL Azure and walk through differences in the process of using SSMA to migrate the tables from the MySQL Sakila sample database to SQL Azure. For a walkthrough of how to migrate a MySQL database to SQL Server, please refer to the post “MySQL to SQL Server Migration: How to use SSMA”. This blog assumes that you have a local version of the MySQL Sakila-DB sample database already installed and that you have SQL Server Migration Assistant for MySQL v1.0 (SSMA) installed and configured using the instructions provided in the “MySQL to SQL Server Migration: How to use SSMA” blog post.

Getting Started with SQL Azure

If you don’t have a SQL Azure account, you can get a free trial special at http://www.microsoft.com/windowsazure/offers/ through June 30th, 2011. The trial includes a 1GB Web Edition database. Click on the Activate button to get your account up and running. You’ll log in with your Windows Live ID and then complete a four step wizard. Once you are done, the wizard will take you to the Windows Azure Platform portal. If you miss this trial, stay tuned for additional trial offers for SQL Azure.

[Screenshot: Windows Azure Platform Portal]

The next step is to create a new SQL Azure Server by clicking on the “Create a new SQL Azure Server” option in the Getting Started page. You will be prompted for your subscription name that you created in the wizard, the region where your SQL Azure server should be hosted, the Administrator Login information, and the firewall rules. You will need to configure the firewall rules to specify the IPv4 address for the computer where you will be running SSMA.

[Screenshot: Setup firewall rules]

Click on the Add button to add your firewall rule. Give it a name and then specify a start and end range. The dialog will display your current IPv4 address so that you can enter it in for your start and end range.

[Screenshot: Name the firewall rule]

Once you are done with the firewall rules, you can then click on your subscription name in the Azure portal to display the fully qualified server name that was just created for you. It will look something like this: x4ju9mtuoi.database.windows.net. You are going to use this server name for your SSMA project.

Using the Azure Portal to Create a Database

SSMA can create a database as part of the project, but for this blog post I’m going to walk through the process of creating the database using the portal and then use the SSMA feature to place the migrated objects in a schema within that database. Within the Azure portal, with the subscription selected, you will click on the Create Database command to start the process.

[Screenshot: Name the database]

Enter the name of the database and then keep the other options at the defaults for your free trial account. If you have already paid for a SQL Azure account, you can use the Web edition for databases up to 5 GB or switch to the Business edition for size limits between 10 GB and 50 GB. Once the database is created, you will want to click on the Connection Strings – View button to display the connection string information you will use for your SSMA project, shown below. The password value shows just a placeholder value.

[Screenshot: Connection string information]

You are now ready to setup your SSMA project.

Using SSMA to Migrate a MySQL Database to SQL Azure

Once you start SSMA, you will click on the New Project command shown in the image below, enter in the name of the project, and select the SQL Azure option for the Migration To control.

[Screenshot: New Azure Project]

You will then follow the same processes described in the “MySQL to SQL Server Migration: How to use SSMA” blog post that I will outline below.

  1. Click on the Connect to MySQL toolbar command to complete the MySQL connection information.
  2. Expand out the Databases node to expand the Sakila database and check the Tables folder as shown below.
    [Screenshot: Tables selection]
  3. Click on the Create Report command in the toolbar button. You can ignore the errors. For information about the specific errors with converting the Sakila database, please refer to the blog post “MySQL to SQL Server Migration: Method for Correcting Schema Issues”. Close the report window.
  4. Click on the Connect to SQL Azure button to complete the connection to your target SQL Azure database. Use the first part of the server name and the user name for the administrator as shown below. If you need to change the server name suffix to match your server location, click on the Tools | Project Settings command, click on the General button in the lower left of the dialog and then click on the SQL Azure option in the tree above. From there you can change the suffix value.
    [Screenshot: Connect to SQL Azure]
  5. Expand out the Databases node to see the name of the database created in the SQL Azure portal. You will see a Schemas folder under the database name that will be the target for the Sakila database as shown below.
    [Screenshot: Connected to SQL Azure DB]
  6. With the Tables node selected in the MySQL Metadata Explorer, click on the Convert Schema command to create a schema named Sakila containing the Sakila tables within the SSMA project as shown below.
    [Screenshot: Ready to Sync to database]
  7. Right click on the Sakila schema above and choose the Synchronize with Database command to write the schema changes to your SQL Azure database and then click OK to confirm the synchronization of all objects. This process creates a SQL Server schema object within your database named Sakila and then all the objects from your MySQL database go into that schema.
  8. Select the Tables node for the Sakila database in the MySQL Metadata Explorer and then issue the Migrate Data command from the toolbar. Complete the connection dialog to your MySQL database and the connection dialog to the SQL Azure database to start the data transfer. Assuming all goes well, you can dismiss the Data Migration Report as shown below.
    [Screenshot: Data Migration Report]

At this point, you now have all of the tables and their data loaded into your SQL Azure database contained in a schema named Sakila.

Validating the Results with the SQL Azure Database Manager

To verify the transfer of the data, you can use the Manage command from the Azure Portal, as shown below. Just select the database and press the Manage command.

[Screenshot: Manage a database]

This will launch the SQL Azure Database Manager program in a new browser window with the Server, Database, and Login information prepopulated in the connection dialog. If you have other SQL Azure databases you want to connect to without having to go to the portal, you can always connect via the URL - https://manage-ch1.sql.azure.com/.

Once connected, you can expand the Tables node and select a table like sakila.film to view the structure of the table. You can click on the Data command in the toolbar to view and edit the table’s data as shown below.

[Screenshot: Viewing Table data]

The SQL Azure Database Manager will also allow you to write and test queries against your database by selecting the Database command and then the New Query button in the ribbon.  To learn more about this tool, check out the MSDN topic – Getting Started with The Database Manager for SQL Azure.

Additional SQL Azure Resources

To learn more about SQL Azure, see the following resources.


Steve Yi explained Working with SQL Azure using .NET in a 3/23/2011 post to the SQL Azure Team blog:

In this quick 10 minute video, you'll get an understanding of how to work with SQL Azure using .NET to create a web application.  I'll demonstrate this using the latest technologies from Microsoft, including ASP.NET MVC3 and Entity Framework.

Although the walkthrough utilizes Entity Framework, the same techniques can be used when working with traditional ADO.NET or other third party data access systems such as NHibernate or SubSonic. In a previous walk-through, we used Microsoft Access to track employee expense reports.  We showed how to move the expense-report data to SQL Azure while still maintaining the on-premises Access application.
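For those following along with plain ADO.NET rather than Entity Framework, a minimal sketch looks like the following (the connection string and the ExpenseReports table and column names are placeholders, not the actual sample schema):

using System;
using System.Data.SqlClient;

class ExpenseReportReader
{
    static void Main()
    {
        // Placeholder SQL Azure connection string; substitute your own server and login.
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;Database=Expenses;" +
            "User ID=yourlogin@yourserver;Password=yourpassword;Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT EmployeeName, Amount FROM ExpenseReports", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Print each expense row returned from the cloud database.
                    Console.WriteLine("{0}: {1:C}", reader.GetString(0), reader.GetDecimal(1));
                }
            }
        }
    }
}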

In this video, we will show how to extend the usage of this expense-report data that's now in our SQL Azure cloud database and make it more accessible via an ASP.NET MVC web application.

The sample code and database scripts are available on Codeplex, if you want to try this for yourself.

If you want to view it in full-screen mode, go here and click on the full-screen icon in the lower right-hand corner of the playback window.

We've got several other tutorials on the SQL Azure Codeplex site; go to http://sqlazure.codeplex.com.  Other walk-throughs and samples there include ones on how to secure a SQL Azure database, programming .NET with SQL Azure, and several others.

If you haven't started experimenting with SQL Azure, for a limited time, new customers to SQL Azure get a 1GB Web Edition Database for no charge and no commitment for 90 days.  This is a great way to try SQL Azure and the Windows Azure platform without any of the risk.  Check it out here


Steve Yi announced another video about SQL Azure in Securing SQL Azure of 3/23/2011:

In this brief 10 minute video, I will demonstrate how to use the security features in SQL Azure to control access to your cloud database.  This walkthrough shows how to create a secure connection to a database server, how to manage SQL Azure database settings on the firewall, and how to create and map users and logins to control authentication and authorization.

One of the strengths of SQL Azure is that it provides encrypted SSL communications between the cloud database and the running application. It doesn't matter whether it is running on Windows Azure, on-premises, or with a hosting service. All communications between the SQL Azure Database and the application utilize encryption (SSL) at all times.

Firewall rules allow only specific IP addresses or IP-address ranges to access a SQL Azure database. Like SQL Server, users and logins can be mapped to specific database or server-level roles, which can provide complete control, restrict a user to just read-only access of a database, or grant no access at all.

I will walk you through a straightforward example here, which really shows the benefits of SQL Azure. After checking out this video, visit Sqlazure.com for more information and for additional resources. There are also database scripts and code you can download on Codeplex to follow along and re-create this example.

If you want to view it in full-screen mode, go here and click on the full-screen icon in the lower right-hand corner of the playback window.

We've got several other tutorials on the SQL Azure Codeplex site; go to http://sqlazure.codeplex.com.  Other walk-throughs and samples there include ones on how to secure a SQL Azure database, programming .NET with SQL Azure, and several others.

If you haven't started experimenting with SQL Azure, for a limited time, new customers to SQL Azure get a 1GB Web Edition Database for no charge and no commitment for 90 days.  This is a great way to try SQL Azure and the Windows Azure platform without any of the risk.  Check it out here.


<Return to section navigation list> 

MarketPlace DataMarket and OData

• Flick Software introduced an OData Sync Client for Android on 3/26/2011:

The following is an intro to the OData + Sync protocol from the Microsoft website:

"The OData + Sync protocol enables building a service and clients that work with common set of data. Clients and the service use the protocol to perform synchronization, where a full synchronization is performed the first time and incremental synchronization is performed subsequent times.

The protocol is designed with the goal to make it easy to implement the client-side of the protocol, and most (if not all) of the synchronization logic will be running on the service side. OData + Sync is intended to be used to enable synchronization for a variety of sources including, but not limited to, relational databases and file systems. The OData + Sync Protocol comprises of the following group of specifications, which are defined later in this document:”

 

  • OData + Sync: HTTP - Defines conventions for building HTTP based synchronization services, and Web clients that interact with them.
  • OData + Sync: Operations - Defines the request types (upload changes, download changes, etc…) and associated responses used by the OData + Sync protocol.
  • OData + Sync: ATOM - Defines an Atom representation for the payload of an OData + Sync request/response.
  • OData + Sync: JSON - Defines a JSON representation of the payload of an OData + Sync request/response.


Our Android OData + Sync client implements the OData + Sync: Atom protocol: it tracks changes in SQLite, incrementally transfers database changes, and resolves conflicts. It wraps up these general sync tasks so that applications can focus on implementing business-specific sync logic. Using it, developers can sync their SQLite database with almost any type of relational database on the backend server or in the cloud. It can give a boost to mission-critical business application development on Android. These types of applications use SQLite as local storage to persist data and sync data to/from the backend (cloud) when a connection is available, so the business workflow won’t be interrupted by a poor wireless signal, data loss caused by hardware/software defects will be minimized, and data integrity will be guaranteed.


Pablo Castro described An efficient format for OData in a 3/25/2011 post to the OData blog:

image The need for a more efficient format for OData has been coming up often lately, with this thread being the latest on the topic. I thought I would look into the topic in more detail and take a shot at characterizing the problem and start to drill into possible alternatives.

Discussing the introduction of a new format for OData is something that I think about with a lot of care. As we discussed in this forum in the past, special formats for closed systems are fine, but if we are talking about a format that most of the ecosystem needs to support then it is a very different conversation. We need to make sure it does not fragment the ecosystem and does not leave particular clients or servers out.

All that said, OData is now used in huge server farms where CPU cycles used in serializing data are tight, and in phone applications where size, CPU utilization and battery consumption need to be watched carefully. So I think it is fair to say that we need to look into this.

What problem do we need to solve?

Whenever you have a discussion about formats and performance the usual question comes up right away: do you optimize for size or speed? The funny thing is that in many, many cases the servers will want to optimize for throughput while the clients talking to those servers will likely want to optimize for whatever maximizes battery life, with whoever is paying for bandwidth wanting to optimize for size.

The net of this is that we have to find a balanced set of choices, and whenever possible leave the door open for further refinement while still allowing for interoperability. Given that, I will loosely identify the goal as creating an "efficient" format for OData and avoid embedding in the problem statement whether more efficient means more compact, faster to produce, etc.

What are the things we can change?

Assuming that the process of obtaining the data and taking it apart so it is ready for serialization is constant, the cost of serialization tends to be dominated by the CPU time it takes to convert any non-string data into strings (if doing a text-based format), encode all strings into the right character encoding and stitching together the whole response. If we were only focused on size we could compress the output, but that taxes CPU utilization twice, once to produce the initial document and then again to compress it.

So we need to write less to start with, while still not getting so fancy that we spend a bunch of time deciding what to write. This translates into finding relatively obvious candidates and eliminating redundancy there.

On the other hand, there are things that we probably do not want to change. Having OData requests and responses being self-contained is important as it enables communication without out of band knowledge. OData also makes extensive use of URLs for allowing servers to encode whatever they want, such as the distribution of entities or relationships across different hosts, the encoding of continuations, locations of media in CDNs, etc.

What (not) to write (or read)

If you take a quick look at an OData payload you can quickly guess where the redundancy is. However I wanted to have a bit more quantitative data on that so I ran some numbers (not exactly a scientific study, so take it kind of lightly). I used 3 cases for reference from the sample service that exposes the classic Northwind database as an OData service:

So a really small data set, a larger and wider data set, and a larger but narrower data set.

If you contrast the Atom versus the JSON size for these, JSON is somewhere between half and a third of the size of the Atom version. Most of it comes from the fact that JSON has less redundant/unneeded content (no closing tags, no empty required Atom elements, etc.), although some is actually less fidelity, such as lack of type annotations in values.

Analyzing the JSON payload further, metadata and property names make up about 40% of the content, and system-generated URLs ~40% as well. Pure data ends up being around 20% of the content. For those that prefer to see data visually:

[Chart: breakdown of the OData JSON payload into metadata/property names, system-generated URLs, and data]

There is quite a bit of variability on this. I've seen feeds where URLs are closer to 20-25% and others where metadata/property names go up to 65%. In any case, there is a strong case to address both of them.

Approach

I'm going to try to separate the choice of actual wire format from the strategy to make it less verbose and thus "write less". I'll focus on the latter first.

From the data above it is clear that the encoding of structure (property names, entry metadata) and URLs both need serious work. A simple strategy would be to introduce two constructs into documents:

  • "structural templates" that describe the shape of compound things such as records, nested records, metadata records, etc. Templates can be written once and then actual data can just reference them, avoiding having to repeat the names of properties on every row.
  • "textual templates" that capture text patterns that repeat a lot throughout the document, with URLs being the primary candidates for this (e.g. you could imagine the template being something like " http://services.odata.org/Northwind/Northwind.svc/Customers('{0}')" and then the per-row value would be just what is needed to replace "{0}").

We could inline these with the data as needed, so only the templates that are really needed would go into a particular document.
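To make the textual-template idea concrete, here is a small illustrative sketch (not part of any proposed spec; the method and variable names are mine) of how a client-side reader might expand a URL template against the short per-row values carried in a dense payload:

using System;
using System.Globalization;

static class TemplateExpansion
{
    // Expands a textual template such as
    // "http://services.odata.org/Northwind/Northwind.svc/Customers('{0}')"
    // with the per-row value(s) the payload carries instead of a full URL.
    public static string Expand(string template, params object[] rowValues)
    {
        return string.Format(CultureInfo.InvariantCulture, template, rowValues);
    }
}

// Example: Expand(customersTemplate, "ALFKI") yields
// "http://services.odata.org/Northwind/Northwind.svc/Customers('ALFKI')",
// so only "ALFKI" needs to travel with the row.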

Actual wire format

The actual choice of wire format has a number of dimensions:

  • Text or binary?
  • What clients should be able to parse it?
  • Should we choose a lower level transport format and apply our templating scheme on top, or use a format that already has mechanisms for eliminating redundancy?

Here are a few I looked at:

  • EXI: this is a W3C recommendation for a compact binary XML format. The nice thing about this is that it is still just XML, so we could slice it under the Atom serializers and be done. It also achieves impressive levels of compression. I worry though that we would still do all the work in the higher layers (so I'm not sure about CPU savings), plus not all clients will be able to use it directly, and implementing it seems like quite a task.
  • "low level" binary formats: I spent some time digging into BSON, Avro and Protocol Buffers. I call them "low level" because this would still require us to define a format on top to transport the various OData idioms, and if we want to reduce redundancy in most cases we would have to deal with that (although to be fair Avro seems to already handle the self-descriptive aspect on the structural side).
  • JSON: this is an intriguing idea someone in the WCF team mentioned some time ago. We can define a "dense JSON" encoding that uses structural and textual templates. The document is still a JSON document and can be parsed using a regular JSON parser in any environment. Fancy parsers would go directly from the dense format into their final output, while simpler parsers can apply a simple JSON -> JSON transform that would return the kind of JSON you would expect for a regular scenario, with plain objects with repeating property names and all that. This approach probably comes with less optimal results in size but great interoperability while having reasonable efficiency.

Note that for the binary formats and for JSON I'm leaving compression aside. Since OData is built on HTTP, clients and servers can always negotiate compression via content encoding. This allows servers to make choices around throughput versus size, and clients to suggest whether they would prefer to optimize for size or for CPU utilization (which may reflect on battery life on portable devices).

A dense JSON encoding for OData

Of these, I'm really intrigued by the JSON option. I love the idea of keeping the format as text, even though it could get pretty cryptic with all the templating stuff. I also really like the fact that it would work in browsers in addition to all the other clients.

This is a long write-up already, and the actual definition of the JSON encoding belongs to a separate discussion, provided that folks think this is an interesting direction. So let me just give an example for motivation and leave it at that.

Let's say we have a bunch of rows for the Title type from the Netflix examples above. We would typically have full JSON objects for each row, all packaged in an array, with a __metadata object each containing "self links" and such. The dense version would instead consist of an object stream that has "c", "m" or "d" objects for "control", "metadata" and "data" respectively. Each object represents an object in the top level array, and each value is either a metadata object introducing a new template or a data value representing a particular value for some part of the template, with values matched in template-definition order; control objects are usually first/last and indicate things like count, array/singleton, etc. It would look like this:

[Figure: sample dense JSON payload with "c" (control), "m" (metadata) and "d" (data) objects]

(there are some specifics that aren't quite right on this, take it as an illustration and not a perfect working example)

Note the two structural templates (JSON-schema-ish) and the one textual template. For a single row this would be a bit bigger than a regular document, but probably not by much. As you have more and more rows, they become densely packed objects with no property names and mostly fixed order for unpacking. With a format like this, the "100NarrowCustomers" example above would be about a third of the size of the original JSON. That's not just smaller on the wire, but a third of text that does not need to be processed at all.

Next steps?

We need a better evaluation of the various format options. I like the idea of a dense JSON format but we need to have validation that the trade-offs work. I will explore that some more and send another note. Similarly I'm going to try and get to the next level of detail in the dense JSON thing to see how things look.

If you read this far you must be really motivated by the idea of creating a new format…all feedback is welcome.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

• Bert Craven explained Implementing a REST-ful service using OpenRasta and OAuth WRAP with Windows Azure AppFabric ACS in a 3/27/2011 post:

I’ve been building prototypes again and I wanted to build a service that exposed a fairly simple, resource-based data API with granular security, i.e. some of my users would be allowed to access one resource but not another or they might be allowed to read a resource but not create or update them.

To do this I’ve used OpenRasta and created a security model based on OAuth/WRAP claims issued by the Windows Azure AppFabric Access Control Service (ACS).

The client can now make a rest call to the ACS passing an identity and secret key. In return they will be issued with a set of claims. A typical claim encompasses a resource in my REST service and the action(s) the user is allowed to perform, so their claim set might show that they were allowed to execute GET against resource foo but not POST or PUT.

In my handler in OpenRasta I add method attributes that indicate what claims are required to invoke that method, for instance in my handler for resources of type foo I might have the following method:

[RequiresClaims("com.somedomain.api.resourceaction", "GetFoo")]
public OperationResult GetFooByID(int id)
{
    //elided
}

In my solution I have created an OpenRasta interceptor which examines inbound requests, validates the claim set and then compares the claims required by the method attribute to the claims in the claim set. Only if there is a match can the request be processed.

I was going to write a long blog post about how to build this from scratch with diagrams and screen shots and code samples but I found that I couldn’t be arsed. If you’d like more info on how to do this just drop me a line. In the meantime I’ve dropped the source for the interceptor, validator and attribute as follows:
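The general shape of that attribute and the claim check looks roughly like the following sketch (the attribute name comes from the post; the helper, its claim-set representation and the comma-separated value handling are my assumptions, and the wiring into OpenRasta's interceptor pipeline is omitted):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class RequiresClaimsAttribute : Attribute
{
    public RequiresClaimsAttribute(string claimType, string claimValue)
    {
        ClaimType = claimType;
        ClaimValue = claimValue;
    }

    public string ClaimType { get; private set; }
    public string ClaimValue { get; private set; }
}

public static class ClaimCheck
{
    // Returns true only if every RequiresClaims attribute on the handler method
    // is satisfied by the claim set extracted from the ACS-issued OAuth WRAP token.
    public static bool IsAuthorized(MethodInfo handlerMethod,
                                    IDictionary<string, string> presentedClaims)
    {
        var required = handlerMethod
            .GetCustomAttributes(typeof(RequiresClaimsAttribute), true)
            .Cast<RequiresClaimsAttribute>();

        return required.All(r =>
            presentedClaims.ContainsKey(r.ClaimType) &&
            presentedClaims[r.ClaimType]
                .Split(',')
                .Select(v => v.Trim())
                .Contains(r.ClaimValue, StringComparer.OrdinalIgnoreCase));
    }
}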


Damir Dobric described My Session at WinDays 2011, to be given on 4/6/2011 at 16:20 in Rovinj, Croatia:

Welcome to all of you who will attend WinDays 2011. This year I will give a session about technologies owned by my home division, the Microsoft Business Platform Division. For all of you who do not know much about this division, just note which technologies are in focus: WCF, WF, BizTalk, Windows Server AppFabric, Windows Azure AppFabric, Service Bus, Access Control Service, etc.

This time the major focus is Windows Azure AppFabric, which includes many serious technologies for professional enterprise developers.

My personal title of the session is:

[Image: session title slide]

Here is the agenda:

[Image: session agenda slide]

Note that I reserve the right to slightly change the program (you know, like TV).


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

• Graham Calladine continued his video series with a 00:26:49 Windows Azure Platform Security Essentials: Module 5 - Secure Networking using Windows Azure Connect video segment:

Graham Calladine, Security Architect with Microsoft Services, provides a detailed description of Windows Azure Connect, a new mechanism for establishing, managing and securing IP-level connectivity between on-premises and Windows Azure resources.

You’ll learn about:

  • Potential usage scenarios of Windows Azure Connect
  • Descriptions of the different components of Windows Azure Connect, including client software, relay service, name resolution service, networking policy model
  • Management overview
  • Joining your cloud-based virtual machines to Active Directory

Related Resources:


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Inge Henriksen reported Version 1.1 of Azure Providers has been released on 3/27/2011:

I have released version 1.1 of my [Azure Membership, Role, Profile, and Session-State Providers] project on Codeplex.

New in version 1.1 is:

  • Complete Session-State Provider that uses Azure Table Storage so that you can share sessions between instances
  • All e-mail sent from the application is handled by a worker role that picks e-mails from the Azure Queue
  • Automatic thumbnail creation when uploading a profile image
  • Several bugs have been fixed
  • Added retry functionality for when accessing table storage
  • General rewrite of how table storage is accessed that requires less code and is more readable

Project home page: http://azureproviders.codeplex.com/


The Windows Azure Team recommended on 3/26/2011 that you Try the Windows Azure Platform Academic 180 day pass if you meet the qualifications and have a promo code:

[Screenshot: Windows Azure Platform Academic 180 day pass offer page]

Click the Don’t have a promo code? button and enter your Windows Live ID for more information. If you’ve received any free Windows Azure promotion previously, you’ll see a message similar to the following:

A Windows Azure platform 30 day pass has already been requested for this Windows Live ID. Limit one per account.

Limiting potential Azure developers to a single free pass, no matter how short the duration, doesn’t seem very generous to me.


Jas Sandhu (@jassand) reported on Microsoft Interoperability at EclipseCon 2011 in a 3/25/2011 post:

I've just returned from EclipseCon 2011, in wet and less than usually sunny Santa Clara, California, and it's definitely been a jam-packed and busy event with a lot of things going on. Interoperability @ Microsoft was a Bronze Sponsor for the event and we also had a session, "Open in the Cloud: Building, Deploying and Managing Java Applications on Windows Azure Platform using Eclipse” by Vijay Rajagopalan, previously an architect on our team, Interoperability Strategy, and now leading the Developer Experience work for the Windows Azure product team.

The session primarily covers the work we have done on Windows Azure to make it an open and interoperable platform which supports development using many programming languages and tools. In the session, you can get a primer on building large-scale applications in the cloud using Java, taking advantage of new Windows Azure Platform as a Service features, Windows Azure applications using Java with Eclipse Tools, Eclipse Jetty, Apache Tomcat, and the Windows Azure SDK for Java.

We have been working on improving the experience for Java developers who use Eclipse to work with Windows Azure. At this session we announced the availability of a new Community Technology Preview (CTP) of a new plugin for Eclipse which provides Java developers with a simple way to build and deploy web applications for Windows Azure. The Windows Azure Plugin for Eclipse with Java, March 2011 CTP, is an open source project released under the Apache 2.0 license, and it is available for download here. This project has been developed by Persistent Systems and Microsoft is providing funding and technical assistance. For more info in this regard please check out the post, “New plugin for Eclipse to get Java developers off the ground with Windows Azure” by Craig Kitterman and the video interview and demo with Martin Sawicki, Senior Program Manager in the Interoperability team.  Please send us feedback on what you like, or don’t like, and how we can improve these tools for you.

I would like to thank the folks at the Eclipse foundation and the community for welcoming us and I look forward to working with you all in the future and hope to see you at EclipseCon next year!

Jas is a Microsoft Technical Evangelist


Steve Peschka explained Using WireShark to Trace Your SharePoint 2010 to Azure WCF Calls over SSL in a 3/19/2011 post:

One of the interesting challenges when trying to troubleshoot remotely connected systems is figuring out what they're saying to each other.  The CASI Kit that I've posted about other times on this blog (http://blogs.technet.com/b/speschka/archive/2010/11/06/the-claims-azure-and-sharepoint-integration-toolkit-part-1.aspx) is a good example whose main purpose in life is providing plumbing to connect data center clouds together.  One of the difficulties in troubleshooting it in that case is that the traffic travels over SSL, so it can be fairly difficult to troubleshoot.  I looked at using both NetMon 3.4, which now has an Expert add-in for SSL that you can get from http://nmdecrypt.codeplex.com/, and WireShark.  I've personally always used NetMon but had some difficulties getting the SSL expert to work so decided to give WireShark a try.

WireShark appears to have had support for SSL for a couple years now; it just requires that you provide the private key used with the SSL certificate that is encrypting your conversation.  Since the WCF service is one that I wrote, it's easy enough to get that.  A lot of the documentation around WireShark suggests that you need to convert the PFX of your SSL certificate (the format that you get when you export your certificate and include the private key) into a PEM format.  If you read the latest WireShark SSL wiki (http://wiki.wireshark.org/SSL) though, it turns out that's not actually true.  Citrix actually has a pretty good article on how to configure WireShark to use SSL (http://support.citrix.com/article/CTX116557), but the instructions are way too cryptic when it comes to what values you should be using for the "RSA keys list" property in the SSL protocol settings (if you're not sure what that is, just follow the Citrix support article above).

  • IP address - this is the IP address of the server that is sending you SSL encrypted content that you want to decrypt
  • Port - this is the port the encrypted traffic is coming across on.  For a WCF endpoint this is probably always going to be 443.
  • Protocol - for a WCF endpoint this should always be http
  • Key file name - this is the location on disk where you have the key file
  • Password - if you are using a PFX certificate, this is a fifth parameter that is the password to unlock the PFX file.  This is not covered in the Citrix article but is in the WireShark wiki.

So, suppose your Azure WCF endpoint is at address 10.20.30.40, and you have a PFX certificate at C:\certs\myssl.pfx with a password of "FooBar".  Then the value you would put in the RSA keys list property in WireShark would be:

10.20.30.40,443,http,C:\certs\myssl.pfx,FooBar

Now, alternatively, you can download OpenSSL for Windows and create a PEM file from a PFX certificate.  I just happened to find this download at http://code.google.com/p/openssl-for-windows/downloads/list, but there appear to be many download locations on the web.  Once you've downloaded the bits that are appropriate for your hardware, you can create a PEM file from your PFX certificate with this command line in the OpenSSL bin directory:

openssl.exe pkcs12 -in <drive:\path\to\cert>.pfx -nodes -out <drive:\path\to\new\cert>.pem

So, suppose you did this and created a PEM file at C:\certs\myssl.pem; then your RSA keys list property in WireShark would be:

10.20.30.40,443,http,C:\certs\myssl.pem

One other thing to note here - you can add multiple entries separated by semi-colons.  So for example, as I described in the CASI Kit series I start out with a WCF service that's hosted in my local farm, maybe running in the Azure dev fabric.  And then I publish it into Windows Azure.  But when I'm troubleshooting stuff, I may want to hit the local service or the Windows Azure service.  One of the nice side effects of taking the approach I described in the CASI Kit of using a wildcard cert is that it allows me to use the same SSL cert for both my local instance as well as Windows Azure instance.  So in WireShark, I can also use the same cert for decrypting traffic by just specifying two entries like this (assume my local WCF service is running at IP address 192.168.20.100):

10.20.30.40,443,http,C:\certs\myssl.pem;192.168.20.100,443,http,C:\certs\myssl.pem

That's the basics of setting up WireShark, which I really could have used late last night.  :-)   Now, the other really tricky thing is getting the SSL decrypted.  The main problem, from the work I've done with it, seems to be that you need to make sure you are capturing during the negotiation with the SSL endpoint.  Unfortunately, with all the various caching behaviors of IE and Windows, it became very difficult to make that happen when I was trying to trace the WCF calls coming out of the browser via the CASI Kit.  In roughly two-plus hours of trying it from the browser I only ended up getting one trace back to my Azure endpoint that I could actually decrypt in WireShark, so I was pretty much going crazy.  To the rescue once again, though, comes the WCF Test Client.

The way that I've found now to get this to work consistently is to:

  1. Start up WireShark and begin a capture.
  2. Start the WCF Test Client
  3. Add a service reference to your WCF (whether that's your local WCF or your Windows Azure WCF)
  4. Invoke one or more methods on your WCF from the WCF Test Client.
  5. Go back to WireShark and stop the capture.
  6. Find any frame where the protocol says TLSV1
  7. Right-click on it and select Follow SSL Stream from the menu

A dialog will pop up that should show you the unencrypted contents of the conversation.  If the conversation is empty then it probably means either the private key was not loaded correctly, or the capture did not include the negotiated session.  If it works it's pretty sweet because you can see the whole conversation, or only stuff from the sender or just receiver.  Here's a quick snip of what I got from a trace to my WCF method over SSL to Windows Azure:

  • POST /Customers.svc HTTP/1.1
  • Content-Type: application/soap+xml; charset=utf-8
  • Host: azurewcf.vbtoys.com
  • Content-Length: 10256
  • Expect: 100-continue
  • Accept-Encoding: gzip, deflate
  • Connection: Keep-Alive
  • HTTP/1.1 100 Continue
  • HTTP/1.1 200 OK
  • Content-Type: application/soap+xml; charset=utf-8
  • Server: Microsoft-IIS/7.0
  • X-Powered-By: ASP.NET
  • Date: Sat, 19 Mar 2011 18:18:34 GMT
  • Content-Length: 2533
  • <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope" blah blah blah

So there you have it; it took me a while to get this all working so hopefully this will help you get into troubleshooting mode a little quicker than I did.


The Windows Azure Team posted a Seattle Radio Station KEXP Takes to the Cloud with Windows Azure story on 3/22/2011:

At the Nonprofit Technology Conference in Washington, D.C., last week, Akhtar Badshah, Microsoft's senior director of Global Community Affairs, shared how Microsoft and popular Seattle radio station KEXP have teamed up to give the station a "technological makeover." Leveraging Microsoft software and services such as Windows Azure, KEXP is transforming operations ranging from the station's internal communications to how listeners worldwide interact with music. The makeover will take three years to complete but listeners will start seeing dramatic improvements this spring.

KEXP, which began as a 10-watt station in 1972, grew into a musical force in Seattle and beyond with the help of a supportive membership base and Internet streaming. KEXP was the first station in the world to offer CD-quality 24/7 streaming audio and now broadcasts in Seattle on 90.3, in New York City on 91.5, and worldwide via the web at KEXP.org.

The radio station's colorful history and widespread support among musicians have made it a virtual treasure chest of musical media, from its music library to related content such as live performances, podcasts and videos, album reviews, blog posts, DJ notes and videos. KEXP's website allows users to listen to the live broadcast, studio performances and archives of recent shows, as well as read about artists and buy music. To date, much of this vast musical information has been compartmentalized, unconnected, or not available online.

That's about to change, thanks to a metadata music and information "warehouse" that KEXP has built on the Windows Azure platform. The warehouse, or "Music Service Fabric", will store all of the bits of related data and put them at listeners' fingertips. The Music Service Fabric will normalize all play data throughout the station and will be used to power the real-time playlist, streaming services and new features such as contextual social sharing.

According to Badshah, KEXP's willingness to experiment made the station a natural partner, and the station's use of Microsoft technology ranging from Windows Azure to SharePoint Online to Microsoft Dynamics CRM helps show how technology can help nonprofits in a holistic way.

To learn more about this story, please read the related posts, "Tech Makeover: KEXP Takes Its Musical Treasure Chest to the Cloud", on Microsoft News Center and, "KEXP.org: Where the Music and Technology Matter (Part 2)", on the Microsoft Unlimited Potential Blog.

I was a transmitter engineer and disc jockey for KPFA in Berkeley, CA, the first listener-sponsored (and obviously non-profit) radio station, in a prior avatar (when I was at Berkeley High School and UC Berkeley). KPFA started life in 1949 with a 250-watt transmitter. To say the least, KPFA has had a “colorful history,” which you can read about here.


<Return to section navigation list> 

Visual Studio LightSwitch and Entity Framework 4+

• Robert Green announced an Updated Post on Where Do I Put My Data Code? in a 3/27/2011 post:

imageI have just updated Where Do I Put My Data Code In A LightSwitch Application? for Beta 2. I reshot all the screens and have made some minor changes to both the text and the narrative. The primary difference is that I have moved my data code into the screen’s InitializeDataWorkspace event handler, rather than using the screen’s Created handler (which was known as Loaded in Beta 1).

image2224222222I am working on updating the rest of my posts for Beta 2. Stay tuned.


Julie Lerman (@julielerman) described Round tripping a timestamp field for concurrency checks with EF4.1 Code First and MVC 3 in a 3/25/2011 post to her Don’t Be Iffy blog:

image MVC is by definition stateless, so how do you deal with a timestamp value that needs to be persisted across postbacks in order to be used for concurrency checking?

Here’s how I’m achieving this using EF4.1 (RC), Code First and MVC 3.

I have a class, Blog.

Blog has a byte array property called TimeStamp that I've marked with the Code First data annotation, Timestamp:

[Timestamp]
public byte[] TimeStamp { get; set; }
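
For context, here's a minimal sketch of what the full Blog class might look like; the key and the other property names are inferred from the SQL trace later in the post, so treat them as assumptions rather than the actual class:

using System.ComponentModel.DataAnnotations;

public class Blog
{
    [Key]
    public int PrimaryTrackingKey { get; set; }   // non-conventional key name, so it needs [Key]

    public string Title { get; set; }
    public string BloggerName { get; set; }
    public string BlogDescription { get; set; }

    [Timestamp]
    public byte[] TimeStamp { get; set; }         // the concurrency token this post is about
}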

If you let CF generate the database, you'll get the following field as a result of the combination of the byte array and the Timestamp attribute:

[Screenshot: the generated TimeStamp column, created with the timestamp data type]

So the database will do the right thing thanks to the timestamp (aka rowversion) data type – update that property any time the row changes.

EF will do the right thing, too, thanks to the annotation - use the original TimeStamp value for updates & deletes and refresh it as needed - if it's available, that is.

But none of this will work out of the box with MVC. You need to be sure that MVC gives you the original timestamp value when it’s time to call update.

So…first we’ll have to assume that I’m passing a blog entity to my Edit view.

       public ActionResult Edit(int id)
       {
           var blogToEdit=someMechanismForGettingThatBlogInstance();
           return View(blogToEdit); 
       }

In my Edit view I need to make sure that the TimeStamp property value of this Blog instance is known by the form.

I’m using this razor based code in my view to keep track of that value:

[after posting this, Kathleen Dollard made me think harder about this and I realized that MVC 3 provides an even simpler way thanks to the HiddenFor helper…edits noted]

@Html.Hidden("OriginalTimeStamp",Model.TimeStamp)
@Html.HiddenFor(model => model.TimeStamp)
 

This will ensure that the original timestamp value stays in the timestamp property even though I’m not displaying it in the form.

When I post back, I could access that value this way (the now-superseded approach, which takes the original timestamp as a separate parameter):

        [HttpPost]
        public ActionResult Edit(byte[] originalTimeStamp, Blog blog)
        {

With the HiddenFor approach, though, Entity Framework can still get at the original timestamp value in the blog object that's passed back through model binding:

        [HttpPost]
        public ActionResult Edit(Blog blog)
        {

Then, in the code I use to attach that edited blog to a context and update it, EF will recognize the timestamp value and use it in the update. (With the superseded approach I had to grab that originalTimeStamp value and shove it into the blog instance using Entry.Property.OriginalValue, as in the commented-out line below; now there's no need to explicitly set the original value since it's already on the model.)

  db.Entry(blog).State = System.Data.EntityState.Modified;
  // Only needed with the superseded Hidden-field approach:
  // db.Entry(blog).Property(b => b.TimeStamp).OriginalValue = originalTimeStamp;
  db.SaveChanges();

After making the Dollard-encouraged modification, I verified that the timestamp was being used in the update:

set [Title] = @0, [BloggerName] = @1, [BlogDetail_DateCreated] = @2, [BlogDescription] = @3
where (([PrimaryTrackingKey] = @4) and ([TimeStamp] = @5))
select [TimeStamp]
from [dbo].[InternalBlogs]
where @@ROWCOUNT > 0 and [PrimaryTrackingKey] = @4',N'@0 nvarchar(128),@1 nvarchar(10),@2 datetime2(7),@3 nvarchar(max) ,@4 int,@5 binary(8)',@0=N'My Code First Blog',@1=N'Julie',@2='2011-03-01 00:00:00',@3=N'All about code first',@4=1,@5=0x0000000000001771
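
If another user has updated the row in the meantime, the TimeStamp in that WHERE clause won't match, no rows are affected, and SaveChanges will throw a concurrency exception. A minimal sketch of one way to handle that in the post-back action, assuming EF 4.1's DbContext API (the error message and redirect target are illustrative only):

try
{
    db.Entry(blog).State = System.Data.EntityState.Modified;
    db.SaveChanges();
}
catch (System.Data.Entity.Infrastructure.DbUpdateConcurrencyException)
{
    // The timestamp check failed: someone else updated (or deleted) this blog
    // after the edit page was rendered. Let the user decide what to do next.
    ModelState.AddModelError(string.Empty,
        "This blog was changed by another user. Reload it and try again.");
    return View(blog);
}
return RedirectToAction("Index");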

You can use the same pattern with any property that you want to use for concurrency even if it is not a timestamp. However, you should use the [ConcurrencyCheck] annotation. The Timestamp annotation is only for byte arrays and can only be used once in a given class.

If you use ConcurrencyCheck on an editable property, you'll need to use a hidden element (not HiddenFor) to retain the original value of that property separately; otherwise EF will treat the edited value coming from the form as the original. Then grab it as a parameter of the Edit post-back method and, finally, use Entry.Property.OriginalValue (all that crossed-out stuff above :-) ) to shove the value back into the entity before saving changes.
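
A minimal sketch of that pattern, assuming a hypothetical editable string property marked with [ConcurrencyCheck] (the property name, view markup and action signature are illustrative only):

[ConcurrencyCheck]
public string Title { get; set; }

@* In the Edit view: keep the original value in a separate hidden field *@
@Html.Hidden("OriginalTitle", Model.Title)
@Html.EditorFor(model => model.Title)

[HttpPost]
public ActionResult Edit(string originalTitle, Blog blog)
{
    db.Entry(blog).State = System.Data.EntityState.Modified;
    // Tell EF the value the user originally saw, not the edited value from the form
    db.Entry(blog).Property(b => b.Title).OriginalValue = originalTitle;
    db.SaveChanges();
    return RedirectToAction("Index");
}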

The next video in my series of EF4.1 videos for MSDN (each accompanied by articles) is on Annotations. It’s already gone through the review pipeline and should be online soon. It does have an example of using the TimeStamp annotation but doesn’t go into detail about using that in an MVC 3 application, so I hope this additional post is helpful.


Robert Green announced the availability of an Updated Post on Using Remote and Local Data in a 3/23/2011 post:

image I have just updated Using Both Remote and Local Data in a LightSwitch Application for Beta 2. I reshot all the screens and have made some changes to both the text and the narrative. I also added a section on the code you need to write to enable saving data in two different data sources. In Beta 1, you could have two data sources on a screen and edit both of them by default. Starting with Beta 2, the default is that you can only have one editable data source on a screen. If you want two or more, you have to write a little code. I cover that in the updated post.

image2224222222I am working on updating the rest of my posts for Beta 2. Stay tuned.

 


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

• David Linthicum asserted Cloud Integration Not as Easy as Many Think in a 3/27/2011 post to ebizQ’s Where SOA Meets Cloud blog:

image Okay, you need to push your customer data to Salesforce.com, and back again? There are dozens of technologies, cloud and not cloud, which can make this happen. Moreover, there are many best practices and perhaps pre-built templates that are able to make this quick and easy.

But, what if you're not using Salesforce.com, and your cloud is a rather complex IaaS or PaaS cloud that is not as popular and thus not as well supported with templates and best practices? Now what?

image Well, you're back in the days when integration was uncharted territory and you had to be a bit creative when attempting to exchange information between one complex and abstract system and another. This means data mapping, transformation and routing logic, adapters - many of the old-school integration concepts that seem to be a lost art these days. Just because your source or target system is a cloud and not a traditional system does not make it any easier.

The good news is that there are an awful lot of effective integration technologies around these days, most of them on-premise and a few of them cloud-delivered. But learning to use these products still requires that you approach cloud-to-enterprise integration with a project mentality, and not treat it as an afterthought, as it so often is. This means time, money, and learning that many enterprises have not dialed into their cloud enablement projects.

Many smaller consulting firms are benefiting from this confusion and are out there promoting their ability to connect stuff in your data center with stuff in the cloud. Most fall way short of delivering the value and promise of cloud integration, and I'm seeing far too many primitive connections out there, such as custom-programmed interfaces and FTP solutions. That's a dumb option these days, considering that the problem has already been solved by others.

I suspect that integration will continue to be an undervalued part of cloud computing until it becomes the cause of many cloud computing project failures. Time to stop underestimating the work that needs to be done here.


• Bruce Guptill authored Open Network Foundation Announces Software-Defined Networking as a Saugatuck Technology Research Alert on 3/23/2011 (site registration required):

image What Is Happening? On Tuesday March 22, 2011, the Open Networking Foundation (ONF) announced its intent to standardize and promote a series of data networking protocols, tools, interfaces, and controls known as Software-Defined Networking (SDN) that could enable more intelligent, programmable, and flexible networking. The linchpin of SDN is the OpenFlow interface, which controls how packets are forwarded through network switches. The SDN standard set also includes global management interfaces. See Note 1 for a current list of ONF member firms.

ONF members indicate that the organization will first coordinate the ongoing development of OpenFlow, and freely license it to member firms. Defining global management interfaces will follow next.

If sufficiently developed and uniformly adopted, SDN could significantly improve network interoperability and reliability, while improving traffic management and security, and reducing the overall costs of data networking. This could enable huge improvements in the ability of Cloud platforms and services to interoperate securely, efficiently, and cost-effectively. They could also engender massive changes in basic Internet/web connectivity and management, and raise significant concerns regarding network neutrality. …

Bruce continues with the usual “Why Is It Happening” and “Market Impact” sections.


James Hamilton (pictured below) posted Brad Porter’s Prioritizing Principals in "On Designing and Deploying Internet-Scale Services" in a 3/26/2011 post:

image Brad Porter is Director and Senior Principal engineer at Amazon. We work in different parts of the company but I have known him for years and he’s actually one of the reasons I ended up joining Amazon Web Services. Last week Brad sent me the guest blog post that follows where, on the basis of his operational experience, he prioritizes the most important points in the Lisa paper On Designing and Deploying Internet-Scale Services.

--jrh


Prioritizing the Principles in "On Designing and Deploying Internet-Scale Services"

By Brad Porter

James published what I consider to be the single best paper to come out of the highly-available systems world in many years. He gave simple practical advice for delivering on the promise of high-availability. James presented “On Designing and Deploying Internet-Scale Services” at Lisa 2007.

A few folks have commented to me that implementing all of these principles is a tall hill to climb. I thought I might help by highlighting what I consider to be the most important elements and why.

1. Keep it simple

Much of the work in recovery-oriented computing has been driven by the observation that human errors are the number one cause of failure in large-scale systems. However, in my experience complexity is the number one cause of human error.

Complexity originates from a number of sources: lack of a clear architectural model, variance introduced by forking or branching software or configuration, and implementation cruft never cleaned up. I'm going to add three new sub-principles to this.

Have Well Defined Architectural Roles and Responsibilities: Robust systems are often described as having "good bones." The structural skeleton upon which the system has evolved and grown is solid. Good architecture starts from having a clear and widely shared understanding of the roles and responsibilities in the system. It should be possible to introduce the basic architecture to someone new in just a few minutes on a white-board.

Minimize Variance: Variance arises most often when engineering or operations teams use partitioning, typically through branching or forking, as a way to handle different use cases or requirements sets. Every new use case creates a slightly different variant. Variations occur along software boundaries, configuration boundaries, or hardware boundaries. To the extent possible, systems should be architected, deployed, and managed to minimize variance in the production environment.

Clean-Up Cruft: Cruft can be defined as those things that clearly should be fixed, but no one has bothered to fix. This can include unnecessary configuration values and variables, unnecessary log messages, test instances, unnecessary code branches, and low priority "bugs" that no one has fixed. Cleaning up cruft is a constant task, but it is necessary to minimize complexity.

2. Expect failures

At its simplest, a production host or service need only exist in one of two states: on or off. On or off can be defined by whether that service is accepting requests or not. To "expect failures" is to recognize that "off" is always a valid state. A host or component may switch to the "off" state at any time without warning.

If you're willing to turn a component off at any time, you're immediately liberated. Most operational tasks become significantly simpler. You can perform upgrades when the component is off. In the event of any anomalous behavior, you can turn the component off.

3. Support version roll-back

Roll-back is similarly liberating. Many system problems are introduced on change-boundaries. If you can roll changes back quickly, you can minimize the impact of any change-induced problem. The perceived risk and cost of a change decreases dramatically when roll-back is enabled, immediately allowing for more rapid innovation and evolution, especially when combined with the next point.

4. Maintain forward-and-backward compatibility

Forcing simultaneous upgrade of many components introduces complexity, makes roll-back more difficult, and in some cases just isn't possible as customers may be unable or unwilling to upgrade at the same time.

If you have forward-and-backwards compatibility for each component, you can upgrade that component transparently. Dependent services need not know that the new version has been deployed. This allows staged or incremental roll-out. This also allows a subset of machines in the system to be upgraded and receive real production traffic as a last phase of the test cycle simultaneously with older versions of the component.

5. Give enough information to diagnose

Once you have the big ticket bugs out of the system, the persistent bugs will only happen one in a million times or even less frequently. These problems are almost impossible to reproduce cost effectively. With sufficient production data, you can perform forensic diagnosis of the issue. Without it, you're blind.

Maintaining production trace data is expensive, but ultimately less expensive than trying to build the infrastructure and tools to reproduce a one-in-a-million bug and it gives you the tools to answer exactly what happened quickly rather than guessing based on the results of a multi-day or multi-week simulation.

I rank these five as the most important because they liberate you to continue to evolve the system as time and resource permit to address the other dimensions the paper describes. If you fail to do the first five, you'll be endlessly fighting operational overhead costs as you attempt to make forward progress.

  • If you haven't kept it simple, then you'll spend much of your time dealing with system dependencies, arguing over roles & responsibilities, managing variants, or sleuthing through data/config or code that is difficult to follow.
  • If you haven't expected failures, then you'll be reacting when the system does fail. You may also be dealing with complicated change-management processes designed to keep the system up and running while you're attempting to change it.
  • If you haven't implemented roll-back, then you'll live in fear of your next upgrade. After one or two failures, you will hesitate to make any further system change, no matter how beneficial.
  • Without forward-and-backward compatibility, you'll spend much of your time trying to force dependent customers through migrations.
  • Without enough information to diagnose, you'll spend substantial amounts of time debugging or attempting to reproduce difficult-to-find bugs.

I'll end with another encouragement to read the paper if you haven't already: "On Designing and Deploying Internet-Scale Services"


Tim Anderson questioned Small businesses and the cloud: 60% have no plans to adopt? in a 3/25/2011 post:

Microsoft has released a few details from a global survey of small businesses, defined as employing up to 250 employees, and cloud computing.

The research finds that 39 percent of SMBs expect to be paying for one or more cloud services within three years, an increase of 34 percent from the current 29 percent. It also finds that the number of cloud services SMBs pay for will nearly double in most countries over the next three years.

I think this means that today 71% of small businesses do not pay for any cloud services, but that this is expected to drop to 61% in the next three years.

It is worth noting that very small businesses can get quite a long way with free services such as Google Mail. So when we read that:

The larger the business, the more likely it is to pay for cloud services. For example, 56 percent of companies with 51–250 employees will pay for an average of 3.7 services within three years.

that may mean that very small businesses mainly use free services, rather than none at all.

In my experience, many small businesses do not have clearly articulated IT strategies, so I am sceptical about this kind of survey. One day the server breaks down and at that point the business decides whether to get a new server or buy into something like Microsoft BPOS or Office 365 instead.

A business actually has to be pretty determined to embrace cloud computing in a comprehensive way. There are often a number of business-critical applications that presume a Windows server on the premises, sometimes in old versions or custom-written in Visual Basic or Access. It is easier to maintain that environment, and perhaps start using cloud-based email, CRM or even document storage alongside it.

I still find it interesting that Microsoft’s research points to larger businesses within this sector being more open to cloud computing than the smallest ones. The new Small Business Server 2011 range makes the opposite assumption, that smaller businesses (with Essentials) will be cloud-based but larger ones (with Standard) will remain on-premise. I still cannot make sense of this, and it seems to me that the company is simply unwilling to be radical with its main Small Business Server offering. It is a missed opportunity.

That said, there is clearly a lot of caution out there if, as I read the figures, 61% have no plans to pay for anything cloud-based over the next three years.

Related posts:

  1. Microsoft maybe gets the cloud – maybe too late
  2. Cloud Computing survey: more fog than cloud
  3. How will online services impact Microsoft’s partner business?

Rob Conery offered Congratulations - You Just Won MySpace in a 3/25/2011 post to his Weke Road blog:

image We’ve just completed an eventful tour of “Charlie and The Social Networking Factory” and we’ve reached the end… I take off my rainbow-colored top hat and…

Hi! I’m ROB from NIGERIA and I’m here to let you know that you’ve just won a broken-down, bruised and demoralized Social Networking App! That’s right it’s MYSPACE!

Have fuuuuunnnnn….

image What do you do now? You’re on a sinking ship and you need to figure out what’s gone wrong and how you can fix it. Scoble says it’s the stack you’re on. Your execs say your dev team eats paste and breathes through their mouths.

Everyone is pointing fingers, everyone has opinions. You need to rebuild…

Time To Hire!

You put some ads out on Craigslist. You need a couple hundred or more developers to work on your app - some senior, many junior. You need “Server People” and “DBAs” to resuscitate your broken app. It’s on the Microsoft stack - which is dandy. We know it will scale - we just need to put it together right and have the right people run it.

Let’s get to work. For now we’ll assume the hardware is fine - but we don’t want to put lipstick on a dead pig so we need to figure out what web platform we’ll use - that’s the first order of business.

Platform Choice

If we opt for WebForms we can probably fill the hundreds of “heads” that we need to build out our app. WebForms has been around for a long time - people have resolved the best way to drag a grid from the Toolbox to the designer…

If we use MVC we’ll have some flexibility and testability, but when we interview the pool will be thinner because the majority of .NET developers are still using .NET 2.0 and don’t know what MVC is, nor how to use it. If we hire a massive set of high-end geeks they’ll probably spend all day arguing about whether to use Windsor or StructureMap. Oh yah - there’s always Fubu.

WebMatrix is another possibility here. You could hire some “scripty” types that understand the web, but might not be the best programmers in the world. You could improve the UI disaster that is MySpace, but your backend might suffer…

What to do…

Interviews

Your lobby is full - 320 .NET developers are waiting for you, resume in-hand. You can see the bolded “MCSD, MVP, RD, NINJA” certifications from where you sit. Hooray for certifications…

As the new MySpace CEO you’re rebuilding. You need to make MySpace relevant again - which means a compelling application, ground-breaking and innovative ways to connect people, uptime, beautifully intuitive and simple UI.

Scoble made a quip on his blog yesterday:

Workers inside MySpace tell me that this infrastructure, which they say has “hundreds of hacks to make it scale that no one wants to touch” is hamstringing their ability to really compete…

… the .NET developers are mostly enterprise folks who don’t like working in rough-and-tumble startup world. Remember, I worked at Visual Studio magazine and had a front-row seat.

Many people have bristled at that. I have to wonder why - isn’t this something we all understand? And by “we” I mean you and me - the people who read and write on blogs, Twitter our thoughts and generally “get involved” in the community.

I kind of thought that it was common knowledge: Most .NET developers are working in jobs that use technology from 3-5 years ago. .NET 2.0 is where it’s at for most MS developers.

Does that make them bad people? Of course not. It’s also likely that many of them will respond to your Craigslist ad because they want out - and they want “in” to helping you fix MySpace.

You have an interesting task ahead of you: you have to fill your hundreds of open positions with a team that’s capable of rebuilding MySpace into an innovative, compelling web application.

How do you think you’ll do?

If you’re interested in this issue, be sure to read the comments.


Lydia Leong (@cloudpundit) posted a Cloud IaaS special report on 3/25/2011:

image I’ve just finished curating a collection of Gartner research on cloud infrastructure as a service. The Cloud IaaS Special Report covers private and public cloud IaaS, including both compute and storage, from multiple perspectives — procurement (including contracting), governance (including chargeback, capacity, and a look at the DevOps movement), and adoption (lots of statistics and survey data of interest to vendors). Most of this research is client-only, although some of it may be available to prospects as well.

image There’s a bit of free video there from my colleague David Smith. There are also links to free webinars, including one that I’m giving next week on Tuesday, March 29th: Evolve IT Strategies to Take Advantage of Cloud Infrastructure. I’ll be giving an overview of cloud IaaS going forward and how organizations should be changing their approach to IT. (If you attended my data center conference presentation, you might see that the description looks somewhat familiar, but it’s actually a totally different presentation.)

As part of the special report, you’ll also find my seven-part note, called Evaluating Cloud Infrastructure as a Service. It’s an in-depth look at the current state of cloud IaaS as you can obtain it from service providers (whether private or public) — compute, storage, networking, security, service and support, and SLAs.


Microsoft made a sponsored IDC white paper available in a Study Reveals Microsoft Partner Ecosystem Revenues of $580 Billion in 2010 post of 3/24/2011:

Today, global research firm IDC issued a new white paper which estimates that members of the worldwide Microsoft ecosystem generated local revenues for themselves of $580 billion in 2010, up from $537 billion in 2009 and $475 billion in 2007. This demonstrates strong revenue growth when total worldwide IT spending increased less than half a percent, and validates the substantial opportunities and benefits available through the Microsoft Partner Network, the program that equips Microsoft partners with training, resources and support they need to successfully compete in today's marketplace while allowing customers to easily identify the right partner for their technology needs.

Through the Microsoft Partner Network https://partner.microsoft.com, partners can extend their market reach for greater opportunities and profitability while delivering innovative solutions to help customers achieve their business goals. The IDC study estimates that for every dollar of revenue made by Microsoft Corp. in 2009, local members of the Microsoft ecosystem generated revenues for themselves of $8.70. In an additional study on Microsoft Core Infrastructure Optimization, IDC found that partners that invested in more difficult or a greater number of Microsoft competencies enjoyed 68 percent larger deals and 28 percent more revenue per employee, compared with partners that invested less.

"The Microsoft Partner Network has allowed us to extend and grow our business by delivering innovative solutions to our customers," said Tom Chew, national general manager of Slalom Consulting. "Over the past year, we've increased revenue 45 percent and built strong momentum with Microsoft technologies around cloud services, business intelligence, portals and collaboration, dynamics, and unified communications. As a Microsoft partner, we've been able to differentiate Slalom and grow rapidly in an extremely challenging business landscape."

"Microsoft and its partners make a significant impact on the global economy," said Darren Bibby, program vice president for IDC Software Channels and Alliances Research. "Microsoft does an excellent job of providing great products for partners to work with, as well as effective sales, marketing and training resources. And the number of Microsoft partners working together is growing. The result is that the Microsoft ecosystem has achieved impressive results and has a very bright future."

imageThe Microsoft-commissioned IDC report reveals that the modifications made to the Microsoft Partner Network equip Microsoft partners with the training, resources and support they need to be well-positioned in the competitive IT marketplace, both with the current lineup of Microsoft products and in the cloud. Cloud-based solutions offer Microsoft partners the opportunity to grow by extending their current businesses via cloud infrastructure-as-a-service (private and/or public), software-as-a-service (Microsoft Business Productivity Online Standard Suite and/or Office 365), platform-as-a-service (Windows Azure) or a hybrid combined with on-premise solutions. Microsoft's cloud offerings are based on products customers already use and that partners have already built their businesses around.

According to the IDC study, implementation of cloud computing is forecast to add more than $800 billion in net new business revenues to worldwide economies over the next three years, helping explain why Microsoft has made cloud computing one of its top business priorities.

"As business models continue to change, the Microsoft Partner Network allows partners to quickly and easily identify other partners with the right skill sets to meet their business needs, so Microsoft partners are set up to compete and drive profits now and in the future," said Jon Roskill, corporate vice president of the Worldwide Partner Group at Microsoft. "The data provided in IDC's study reflect the fact that the opportunities available to partners will have them poised for success now and in the future."

The IDC research paper illustrates how partner-to-partner activity within the Microsoft Partner Network has increased, both on a per-partner basis and as a whole. In a study of the International Association of Microsoft Channel Partners, IDC found that the value of partner-to-partner activities within the community rose from $6.8 billion in 2007 to $10.1 billion in 2009. Among other reasons, more partners were able to work together to differentiate themselves and shorten their time to market, increasing their reach and extending their capabilities to capture additional and larger customer opportunities.

More information about joining the Microsoft Partners Network is available at https://partner.microsoft.com, and the IDC white paper, commissioned by Microsoft, can be viewed here. …

Full disclosure: I’m a registered member of the Microsoft Partner Network.


Nubifer analyzed Cloud Computing’s Popularity with SMB’s on 3/23/2011:

image There is no simple answer as to whether or not 2010 was the year small business IT finally adopted cloud computing once and for all. On behalf of Microsoft, 7th Sense Research recently conducted a study on cloud computing in small business computing environments and found that 29% of SMBs view the cloud as an opportunity for small business IT to be more strategic. The study also found that 27% of SMBs have bought into cloud computing because it integrates with existing technology investments, while 12% of SMBs have used the cloud to start a new business.

image

Despite those figures, overall, small businesses are largely unfamiliar with cloud computing. Josh Waldo, director of SMB Marketing at Microsoft, reveals, “Roughly 20 percent of SMBs claim to know what cloud technology is.”

The numbers just don’t match up, but Waldo points out that just because people may not identify with the term cloud computing doesn’t mean they aren’t using the technology. Take Gmail or Hotmail, for example: They are both prime examples of the Software-as-a-Service (SaaS) form of cloud computing and are extremely popular—without their many users even realizing they are using cloud technology when checking their inbox.

“People might not understand what cloud is. But they are using it. They’re using it in their private life. In some cases they’re using it in their work life. But they might not necessarily identify it with the term cloud,” says Waldo.

He believes that the lack of familiarity SMB’s have with cloud computing can be an opportunity for Microsoft, Zoho and other providers of small business technology. Says Waldo, “For Microsoft, what that means is that this gives us a big opportunity to really educate SMB’s about cloud technologies and how they can benefit their business. Our goal is really going to be to help SMB’s evolve how they think about technology.”

According to Waldo, the benefits for small businesses that embrace the cloud are potentially huge: “First, SMBs can get enterprise-class technology at a fraction of the price, where you’re not purchasing on-premises technology that’s going to cost you an enormous amount upfront. Second, it really allows companies, whether you’re a development shop and you’re building software, or you’re an end customer—like a financial or insurance firm—to focus on your business rather than your IT requirements.”

By outsourcing data-center needs, for example, small business IT can eliminate building out capacity to handle potential spikes in data or transaction processing, because they buy the processing power they need when they need it. This leads to another key benefit of cloud computing: elasticity and the expectation of mobility. Waldo defines elasticity as the capability to scale up or down rapidly, based on need. While that includes processing power, it also means being able to add new users from a seasonal workforce—without having to deal with per-seat licensing associated with traditional desktop software.

When it comes to the expectation of mobility, Waldo says that today’s notebook, smartphone and tablet-totting employees want to make their work more flexible by making it mobile. SMB’s can let employees access the information and applications they need while on the go by exposing core applications as SaaS via the cloud.

Embracing Cloud Computing
Waldo recommends that SMB’s that have decided to embrace the cloud by adding cloud computing to their small business technology portfolio seek expert advice. “We really think it’s important that SMB’s choose carefully. And if they’re uncertain, they should work with a third party or a consultant or a value added reseller or some type of agent who understands the various elements of cloud technology and [who] can advise clients,” he says.

According to Chad Collins, CEO of Nubifer.com, a provider of turn-key cloud automation solutions, the first thing a small business should consider is which problem it is trying to solve: “The most important thing is that the cloud really isn’t just about infrastructure. It’s about solving problems. It should be about scalability, elasticity and economies of scale.” Collins adds, “What our enterprise clients are asking for is the ability to create virtual environments, run applications without code changes or rewrites and, most importantly, to be able to collaborate and share using a single sign-on interface.”

Collins says that the person responsible for small business IT should ask a range of questions when considering a cloud services provider. Among the most important is: Does the cloud provider allow you to run existing applications without any code rewrites or changes to code? Microsoft’s research reveals that 27% of SMBs have already bought into cloud services because it integrates with existing technology, while another 36% would be encouraged to buy into the cloud because of that fact. “Being able to migrate custom applications over to the cloud without rewrites is not only a huge cost saver but also a huge time saver for SMBs,” says Collins.

Another important question is whether the cloud provider offers granular user access and user-based permissions based on roles. Can you measure value on a per user basis? Can you auto-suspend resources by setting parameters on usage to avoid overuse of the cloud? The latter is important because although cloud services can result in immense cost savings, their pay-as-you-go nature can yield a large tab if used inefficiently.

Collins recommends paying special attention to the level of responsive support offered by a cloud provider. “I think for SMBs it’s really important. Having to log a Web form and then wait 24 to 48 hours for support can be really frustrating,” he says, adding that the provider should guarantee that a support team will respond in mere hours. Agreeing with Collins, Waldo points out that a service-level agreement with high availability and 24-hour support is key.

To discover how the power of cloud computing can benefit your SMB, please visit Nubifer.com.


Steve Lange announced a New Invoicing Option Available for Azure Benefits on MSDN on 3/22/2011:

image As you may know, as an MSDN subscriber you get access to Windows Azure for reviewing your application’s viability and resource requirements in the cloud (see Azure Benefits for MSDN Subscribers). 

If you went over the allotted computing usage while using your Azure access, you previously only had the option to pay for that overage via credit card.

image

Now, you have two options:  credit card or invoicing.  (And if you’re a volume license (VL) customer, you can use your VL Agreement number to serve as a credit check during invoicing setup.)

So if you haven’t already activated your MSDN Windows Azure benefit, it’s pretty easy to get started.

Credit Card Option – Simply go to the Windows Azure Portal and follow the instructions to activate via your MSDN subscriptions page.  For a straightforward walkthrough, try this.

Invoicing Option – Start at the Azure Invoicing information page, then complete your activation at the Windows Azure Portal.


James Urquhart (@jamesurquhart) posted Cloud, 'devops,' and 'shadow IT' on 3/17/2011 to his Wisdom of Clouds blog on C|Net News (missed when posted):

image Last week, I attended the Cloud Connect Conference and Expo in Santa Clara, Calif., which is one of the biggest gatherings of cloud thought leaders and practitioners of the year. What I took away from that week was both a firm confirmation of the concepts I have covered in the past, and a surprising revelation of the maturity of some organizations with respect to those practices.

image Most notably, there is a growing gap between the culture and practices of organizations that have embraced cloud as a primary IT model, and those that are trying to fit the cloud into their existing practices. The former has become very development- and application-driven, while the latter remains focused on infrastructure and data center operations models.

The developer-centric cloud model--a model based on the "devops" concepts I discussed in an earlier series--is one of working the code to meet the realities of the services available to it, including infrastructure services like computing and storage services. (The term "devops" is a combination of the words "developer" and "operations.")

The mantra "infrastructure as code" was spoken frequently during the conference, and the idea that infrastructure could be provisioned on demand, via an application programming interface has become a baseline assumption to these organizations. There is little argument about self-service versus controls and governance--developers are expected to develop apps that behave responsibly, and fix the bugs when they don't.

It's not just infrastructure that makes the cloud work for these organizations, however. It's also the variety of other services that can be found online, including coding platforms, payment services, notification services, data storage, and so on. These organizations tend to focus on being truly service-oriented, making the application (with some associated operations logic) be its own master.

These organizations also tend to be Web application-centric, and relatively small.

Meanwhile, the bulk of existing businesses with existing IT infrastructure and processes and budgets and...well, you get the idea...are working hard to figure out how cloud concepts are best adopted into their plans. And, as most of these IT groups tend to be infrastructure operations-centric, they are starting with the effects on that domain--not on application development or deployment, per se.

And, rightly so, there are some challenges to adopting cloud models from the infrastructure perspective. First, most existing infrastructure is not easily adopted into a cloud model. Differences in hardware profiles, network configurations, and even system density in the data center can make building a uniform cloud service from existing kit trend toward impossible. Or at least quite expensive.

So, most of these organizations begin with "greenfield" applications of cloud to their infrastructure. Sometimes that's specific applications, as in the case of a major financial firm that started with their business intelligence environment; and sometimes it's a "pilot" general purpose cloud for development and testing. But wholesale conversion to a private cloud model is rare. So, however, is wholesale adoption of public cloud as a primary model.

What is happening, however, is a realization that somehow, some way, culture and processes will have to change to make the cloud a powerful tool in the arsenal. It does no good to deploy a cloud with 5-minute provisioning times if there is still a committee with a 30-day backlog that has to approve the deployment. Having the illusion of infinite scale in terms of VMs or storage means nothing if your applications can't consume additional VMs or storage when they need to.

All of this brings me to a debate that took place on Twitter last week among some true practitioners of cloud, both in the "all in" sense and the "adoption into existing IT" sense.

See, what happened is this: Adrian Cockcroft is Netflix's "cloud architect," and a huge proponent of using Amazon Web Services or other public cloud services. Christian Reilly has built one of the first true private cloud infrastructures for a Fortune 500 company. The two of them sat down and had a long conversation one day last week, and Reilly walked away with a new appreciation for what the cloud could offer--and what it would take to get there.

When Reilly communicated this to his Twitter followers, a debate ensued with vendors, IT practitioners, and developers all chiming in. Reilly covers the core of the debate quite well in a recent post. The long and the short of it was that cloud adoption is different for those whose core business is software, versus those who use software to serve their core business. The former has the opportunity to change their culture and practices to get the most out of public cloud services, which is often the right thing to do for all but the biggest destinations.

The latter, however, must begin to adopt new practices--a painful process--as opportunity allows. The pace of adoption, technologies, and cloud models used and even evolution of development practices will be dictated by how quickly the organization as a whole can adopt --and adapt to--these changes. For most, that will take years.

Which brings me to the subject of "shadow IT"--the idea that business units working outside the realm of the IT organization and its processes cause problems for IT further down the line, and must be reined in. While the cloud certainly makes that easier--as one blogger suggests, check your employee's expense reports--I would contend that I wouldn't be so quick to squelch these projects.

The truth is, the lessons learned by teams working hard to adapt to cloud models and devops concepts will be invaluable to you regardless of what types of clouds IT ends up officially supporting--public, private, or hybrid. If you don't have the capacity to support experimentation with new models within IT, the initiative taken by "rogue" developers in this case might be the difference between getting ahead and falling behind.

Oh, and don't feel too badly if you never convert entirely to cloud infrastructure. Joe Weinman's cloud economics work demonstrates an advantage to mixed cloud and dedicated models for many applications. As my friend and colleague Chris Hoff notes in a classic post, "utility is in how you use things, not necessarily how it's offered."

You and your IT organization will be hearing a lot about online companies like Netflix, Good Data, and others building extremely agile businesses in the public cloud. If you think that you'll be asked to replicate some of that success, I bet you are right. If you think you can do that with technology alone, you are kidding yourself. Ask yourself, what are the ways in which these success stories can influence your existing IT environment.

That may be the multimillion dollar question.

Read more: http://news.cnet.com/8301-19413_3-20044335-240.html#ixzz1HjikBg44


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

Jo Maitland (@JoMaitlandTT) asserted Windows shops wake up to cloud realities and challenges in a 3/23/2011 post to the SearchCloudComputing.com blog:

image The idea of building a private cloud has grown in appeal for many IT shops during the past year. This is particularly true when compared to Microsoft’s previous annual management event, where the company introduced the idea of cloud computing only to be snubbed by Windows administrators and managers.

image The reason might have something to do with the fact that there are some tangible cloud products to discuss at this week's Microsoft Management Summit 2011.

"It was a tough pill to swallow last year," said Charlie Maurice, PC support specialist at The University of Wisconsin. "The message was really unclear, all about public cloud ... This year they are saying you don't have to use public cloud; it can be private or hybrid cloud."

Other attendees agreed. "We want to create more automation in our environment so private cloud is appealing, we're working towards that," said Michael Hough, director of data center operations at JTI-Macdonald Corp, the Canadian arm of Japanese Tobacco International. "The public cloud is not an option for us."

Microsoft received that message. Conversations about cloud appeared focused on building private clouds with IT creating services internally for business users to consume. "You are in the service provider role and your users are service consumers," said Brad Anderson, a corporate vice president of management and security at Microsoft, during his keynote.

Anderson described a scenario where IT would create service templates based on pre-defined configuration settings that would automatically deploy services across different clouds. For most Windows administration routines today, this is a high concept.

"It's tough to get your head around this, but we're doing it," said Rob Hansen, IT architect with Deloitte LLP. The multitude of subsidiaries that make up Deloitte have standardized on a set of core infrastructure components that it refers to as its "Global Windows Framework" -- one set of apps, drivers, hardware, tools, management and policies that knit it all together.

"The goal is to get a global app out the door, get it to work once and then it should work everywhere," Hansen added. The IT organization is now able to deliver services to the member firms on a subscription basis. The key driver in bringing about this degree of standardization was getting the CIOs of each subsidiary to meet on a regular basis and agree on the requirements for a common platform.

"This took the longest time," Hansen said.

Next gen System Center gets ready
imageMicrosoft’s Anderson warned IT operations teams that if they didn’t think about delivering services that make it easy for end users to consume capacity from IT, "they will go around you." Some IT shops said this wasn’t a threat to them anymore, as corporate security policies had quickly caught up with the easy accessibility of cloud services like Amazon Web Services and have strictly forbidden their use.

Meanwhile, Microsoft bets that while it might be several years behind Amazon Web Services in public cloud computing, it is much better positioned to pull the majority of IT organizations into the cloud from the inside, out.

To that end Microsoft discussed new products and features in the System Center product family, including Microsoft System Center 2012, the beta availability of Virtual Machine Manager (VMM) 2012, and project Concero, a new System Center feature that allows administrators to manage applications across public and private clouds.

According to Ananthanarayan Sundaran, cloud platform marketing manager at Microsoft, Concero will release to manufacturing in the second half of 2011, albeit with limited functionality. The initial release will let administrators move applications from one VMM cluster to another on a private cloud and from one Azure subscription to another in the public cloud as well as see applications that are running in both environments, from a single view. However it will not support moving applications between private and public clouds and there's no telling how long that feature will take to ship.

Sundaran said that the release of the Windows Azure Platform Appliance (WAPA) will be key to enabling the hybrid cloud management feature in Concero. The appliance will give IT organizations Windows Azure in a box, making it easier to connect to the Windows Azure public cloud service as the same software will be on both ends of the pipe.

However, these appliances were discussed almost a year ago and are still not shipping. Sundaran said this was because the Azure cloud software was written to run on a minimum of 900 servers and scaling that down to run on, say, 200 servers, was really hard.

"We're working on those SKUs, it's a lot of engineering and testing," he said. Ebay, Fujitsu, HP and Dell are waiting in the wings to sell the Azure appliances when they finally ship.

Still, this kind of hybrid cloud computing strategy where users manage applications across a private and public cloud computing environment is way off the map for most IT organizations today.

More from Microsoft Management Summit 2011:

Microsoft System Center opens Windows to the cloud
Microsoft is hoping to move more organizations to the cloud with its upcoming slate of System Center products,  but some IT professionals still wonder what the cloud means for them.

"This model of cloud assumes bandwidth and this can be a problem for us," said an IT architect for Chevron, who declined to be identified. "Oil is in bad places where you don't have bandwidth," he said. Moreover he said his audit team would have something to say about it if he even suggested public cloud. "Nothing can compromise network security," he said.

Softcorp, a Microsoft reseller in Brazil, echoed these sentiments. "Microsoft does not have a data center in Brazil, the closest is Mexico, so online services are limited due to latency," said Mauro Hiroshi Sato, director of services at Softcorp. He added that Amazon and Google do have data centers in Brazil, and he uses their services to host customer extranets and portals.

"Microsoft's business model is changing, but in many places of the world they are constrained by infrastructure," he said.

Jo is the Senior Executive Editor of SearchCloudComputing.com.

Full Disclosure: I’m a paid contributor to SearchCloudComputing.com.


Amr Abbas announced SCVMM 2012 Beta Available Now in a 3/22/2011 post to the Team blog of MCS @ Middle East and Africa:

image You can download it from the following link:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=e0fbb298-8f02-47e7-88be-0614bc44ee32

 Feature Summary

  • Fabric Management
    • Hyper-V and Cluster Lifecycle Management – Deploy Hyper-V to bare metal server, create Hyper-V clusters, orchestrate patching of a Hyper-V Cluster
    • Third Party Virtualization Platforms - Add and Manage Citrix XenServer and VMware ESX Hosts and Clusters
    • Network Management – Manage IP Address Pools, MAC Address Pools and Load Balancers
    • Storage Management – Classify storage, Manage Storage Pools and LUNs
  • Resource Optimization
    • Dynamic Optimization – proactively balance the load of VMs across a cluster
    • Power Optimization – schedule power savings to use the right number of hosts to run your workloads – power the rest off until they are needed
    • PRO – integrate with System Center Operations Manager to respond to application-level performance monitors
  • Cloud Management
    • Abstract server, network and storage resources into private clouds
    • Delegate access to private clouds with control of capacity, capabilities and user quotas
    • Enable self-service usage for application administrator to author, deploy, manage and decommission applications in the private cloud
  • Service Lifecycle Management
    • Define service templates to create sets of connected virtual machines, OS images and application packages
    • Compose operating system images and applications during service deployment
    • Scale out the number of virtual machines in a service
    • Service performance and health monitoring integrated with System Center Operations Manager
    • Decouple OS image and application updates through image-based servicing
    • Leverage powerful application virtualization technologies such as Server App-V

System Requirements

  • Supported Operating Systems: Windows 7 Enterprise; Windows 7 Professional; Windows 7 Ultimate; Windows Server 2008 R2; Windows Server 2008 R2 Datacenter; Windows Server 2008 R2 Enterprise
    • Windows Server 2008 R2 (full installation) Standard, Enterprise, Datacenter x64
    • Windows 7 Professional, Enterprise, Ultimate x32, x64
    • Windows Remote Management (WinRM) 2.0
    • Windows PowerShell 2.0
    • Microsoft .NET Framework 3.5 Service Pack 1 (SP1)
    • Windows Automated Installation Kit (AIK) for Windows 7
    • SQL Server 2008 or SQL Server 2008 R2 Standard, Enterprise, and Datacenter
    • Windows Deployment Services (WDS) – (Version on Windows Server 2008 R2)
    • Windows Server Update Services (WSUS) 3.0 SP2 64-bit

<Return to section navigation list> 

Cloud Security and Governance

• Firedancer described requirements for Securing Windows Azure Assets in a 3/27/2011 post:

There are several things that come to mind when thinking about the stuff that needs to be secured on Windows Azure. At the infrastructure level, I am sure it is well taken care of by Microsoft's data centers, and there are tons of whitepapers that we can find on the Microsoft Global Foundation Services website to read about it.

On the application level, we still need to practice standard security principles such as encrypting our sensitive data, hashing passwords, using HTTPS for transport where applicable, imposing proper authentication and authorization mechanisms in our applications, and securing any WCF endpoints that we expose from Windows Azure. Some people assume that cloud solutions are either a silver bullet for their security problems or that they are very insecure because "everyone can access it".
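To make one of those practices concrete, here is a minimal salted password-hashing sketch in Ruby (the same language as the storage samples earlier in this post), using PBKDF2 from the standard OpenSSL library; the iteration count, key length and salt size are illustrative assumptions, not recommendations from Firedancer's post:

require 'openssl'
require 'securerandom'

ITERATIONS = 20_000   # illustrative; tune to your hardware
KEY_LENGTH = 32       # 256-bit derived key

# Derive a salted hash that can be stored instead of the plain-text password.
def hash_password(password)
  salt = SecureRandom.hex(16)
  key  = OpenSSL::PKCS5.pbkdf2_hmac_sha1(password, salt, ITERATIONS, KEY_LENGTH)
  { :salt => salt, :hash => key.unpack('H*').first }
end

# Re-derive the key from the stored salt and compare the results.
def password_valid?(password, stored)
  key = OpenSSL::PKCS5.pbkdf2_hmac_sha1(password, stored[:salt], ITERATIONS, KEY_LENGTH)
  key.unpack('H*').first == stored[:hash]
end

record = hash_password('s3cret')
puts password_valid?('s3cret', record)   # => true

The same principle applies whether the hashed credentials end up in SQL Azure or in Windows Azure table storage.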


There is no difference at the application level: cloud or on-premises, proper security practices should be in place. From interactions with people, I have realized that the main security concerns are usually the infrastructure and the application. However, there is one concern that most people seem to overlook. The weakest point of our Windows Azure assets is neither the infrastructure nor the application; it is the Windows Live ID that is used to log in to the Windows Azure Portal. Yup! The same ID we use for Live Messenger and Xbox Live.

If the Windows Live ID is compromised, an attacker can easily delete services, change certificates, hijack administrative control, block access to data, or just provision extra instances to bomb your credit card (luckily, the default maximum instance count is only 20). It is very common for organizations to use the infrastructure manager's or CTO's Live ID for Windows Azure. This is somewhat dangerous because a Windows Live ID is a personal thing, and we cannot be sure whether it has been compromised (for example, by clicking one of those "Hey! Here is a picture of you" links in Live Messenger).

Therefore, I would suggest creating a separate "company Windows Live ID" for your organization, tying the credit card to that ID, and entrusting it to the person in charge of deploying the applications. This Windows Live ID should not be used for e-mail, chat or even Xbox.

P.S. Remember to change the password of the Windows Live ID when the person changes role or no longer works for the company.


<Return to section navigation list> 

Cloud Computing Events

• Saugatuck Technology will sponsor the one-day Cloud Business Summit 2011 to be held on May 10, 2011 at the Westin New York at Times Square hotel:

image The Cloud stretches far beyond today’s narrow vision of software, servers, and storage delivered via Public and Private networks. It creates new markets, new channels, new ways of doing business. It enables tremendous efficiencies in communication, sourcing, collaboration, sales, support and production. Yet the Cloud is not a panacea. It is constantly changing, creating new challenges, risks, and exposures, as it gets leveraged into profitable new business models.

The CLOUD BUSINESS SUMMIT is the first in-person executive conference that brings together the most Cloud-experienced senior business strategists and technology executives, combined with the most comprehensive Cloud business research, to explore what is possible, what is real, and what is not.

In this regard, the CLOUD BUSINESS SUMMIT will focus not only on how the Cloud is helping businesses drive operational efficiencies and evolve toward hybrid business application architectures, but how the Cloud is helping to transform business and drive revenue through the creation of new Cloud-enabled business services, as well as in terms of evolved supplier / partner relationships. Intersecting all of these themes will be a major focus on risk mitigation and management.

Learn from experts how the Cloud:
  • Enables the development of profitable new business services and business models
  • Facilitates more effective and efficient business processes and operations
  • Reshapes business and IT strategy, planning, and management — including a rethinking of the business application portfolio
  • Impacts business, technology, and regulatory risk and compliance

A 00:03:47 Cloud Business Summit -- Pre-conference Overview by Saugatuck CEO Bill McNee video segment of 3/25/2011 provides additional information.

Microsoft is conspicuous by its absence from the sponsor list.


Jason Milgram will present Building Casual Games on Azure to the Boston Azure Users Group on 3/31/2011 at 6:00 PM at the Microsoft NERD Center in Cambridge, MA:

image One of the fastest-growing cloud segments, casual gaming spans all age groups and regions of the world. Examine the architecture of one example of a casual gaming cloud app, review the common event logic used, learn how to add in-app purchasing and ways businesses can capitalize on these break-time diversions.


Jason Milgram is Founder and CEO of Linxter, Inc., provider of message-oriented cloud middleware. Jason is also a Windows Azure MVP.


Michael Coté described 3 Important Things from the Microsoft Management Summit 2011 in a 3/24/2011 post to his Redmonk blog:

This week in Las Vegas, Microsoft came out with some strong, confident direction in their IT Management portfolio. There were numerous products announced in beta or GA’ed and endless nuance to various stories like “what exactly is a Microsoft-based private cloud?” Rotating in my head, though, are three clusters of important offerings and concepts to keep track of, whether you’re a user of IT Management or a vendor looking to compete or frenemy with Microsoft:

  1. image IT Management delivered as SaaS – thus far, success has been the exception, not the rule in delivering IT Management as a SaaS. Service-now.com has been the stand-out success here, driving incumbents BMC, CA, HP, and IBM to start offering IT Management function as a SaaS. Others have had rockier times getting IT heads to move their tool-chain off-premise. The common sentiment as told me by one admin last year was: well, if the Internet goes down, we’re screwed. Windows Intune, GA’ed at MMS 2011, is a SaaS-based (or “cloud-based,” if you prefer) service for desktop management – keeping the Microsoft portions of desktops up-to-date for $11-12/month/desktop. It’s not hard to imagine that Microsoft would want to extend this to servers at some point, as Opscode does now. The System Center Advisor product line (covering SQL Server, Exchange, Windows Server, Dynamics, and SharePoint) is knowledge-base served up as a SaaS – something klir (RIP) and Splunk have played around with – to make this kind of collaborative IT management work, you have to layer in a strong community like Spiceworks does, something that seems missing from the Advisor line at the moment. The feel I get from this momentum is that Microsoft would like to (after a long, multi-year “eventually”) move much of its portfolio to SaaS delivery. Admins can be “special snow-flakes” when it comes to moving their tools to SaaS, but at a certain point of cost & hassle avoidance vs. the risk of the Internet going down, it starts to make sense. And, really, if the Internet goes down, many businesses would be dead-in-the-water regardless of the IT Management tools available.
  2. Private cloud is what you need – while the focus on “Cloud and Microsoft” is often the public Azure cloud, Microsoft is also amped up to provide companies with the technology needed to use cloud-based technologies and practices behind the firewall, creating private clouds. Microsoft’s Project Concero is the spearhead of this, but there are some interesting training wheels towards cloud that Microsoft wants to do with its virtualization management product. Strap on the recently released System Center Service Manager and System Center Orchestrator (formerly Opalis), and you have the self-service, highly-virtualized view of “private cloud.” The troubling aspect for Microsoft is the hardware layer. Time and time again, Microsoft executives rightly pointed out that “true” clouds need standardized hardware – at the same time they pointed out that most IT shops are far from “standardized.” When I asked them what that transformation would mean, being a software company, the answers weren’t too prescriptive. One hopes that the answer is more than “keep your eye on those Azure appliances we mentioned awhile back.” The issue is this: if private cloud means rip-n-replace of your existing hardware to get “standardized” hardware…then we’ve got some rocky budget hijinks ahead for anyone considering private vs. public.
  3. The War Against VMWare – judged by the amount of kicking in VMWare’s direction, Microsoft is freaked out about VMWare. They see VMWare’s plan as using their hypervisor dominance (which no one, including Microsoft, denies them having in the present) to infect all the way up to the application stack and, as one executive put it, “all the sudden you’re re-writing your apps in [Spring-based] Java!” While VMWare would certainly love for you to do that – as would Microsoft! – it’s unclear if that strategy is a realistic enough one to fear so much. More importantly, arguing that a VMWare-carved path to Java applications is somehow more costly, closed, and otherwise un-desirable than a complete Microsoft Hyper-V/.Net stack is a dangerous glass house to start throwing rocks in. I’d say the real enemy of both of these companies is the vast, un-quantifiable mass of open source developers out there who don’t want allegiance to any vendor – not to mention the public cloud IaaS and PaaS options out there (how exactly “cloud development” plays out with the ages-old lockin/closed source/single vendor stack architectural decisions is incredibly murky at this point). Clearly, with moves like buying WaveMaker, you can see some pitched VMW v. MSFT battles involving Lightswitch and other post-VB RAD development. As I told one Microsoft executive, if VMWare buys into an entirely different, non-Java ecosystem (e.g., Engineyard for Ruby, Zend for PHP, etc.), then it’d be time to doff the foil-hats for the real helmets.

They Did a Good Job

Overall, Microsoft did well with the announcements, products, and conversations at this MMS. Their focus on explaining how technology is used to further the goals of IT (and the evolution of those goals to align with business) stands in stark contrast to the wider, dual-messages going on at most Big 4 conferences of this type. Microsoft in the IT Management area is typically good at speaking to their products rather than pure vision.

The Visual Studio Test

The primary issue revolves around one question you can ask of any transformation story from Microsoft: does it require the use of Visual Studio? There’s nothing wrong, really, with Visual Studio, but that question shows you how much of a Microsoft shop you have to become to follow their vision, here, of a private cloud. As Microsoft, I think rightly, said, cloud is ultimately about applications: a business needs applications, not just a bunch of “services.” How you develop that application, ideally, should be very open and up to you rather than, as Microsoft criticized VMWare for lusting after, requiring the use of the application stack from the same company that provides your cloud technology. Sure, there may be time-to-market trade-offs and all sorts of other concerns, but last I checked, mono-vendor approaches were still something to be concerned about and evaluate carefully before going “all in.”

(The same test should be applied to VMWare, Amazon, IBM, Oracle, HP, and anyone else looking to be your cloud-dealer.)

Disclosure: Microsoft, VMWare, IBM, and WaveMaker are clients.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

• Alex Williams posted Forget Rivalries, Pay Attention to the Developers to the ReadWriteCloud blog on 3/26/2011:

image What do we know about developers? What can we deduce about what they don't do?

We can make the educated guess that open source developers are unfamiliar and not really interested in the "one stack fits all" approach that the enterprise giants still battle to control.

image I was reminded of this in a post by Michael Cote, recapping his time last week at the Microsoft Management Summit. Microsoft sometimes seems obsessed with VMware. Cote argues that both companies have a different foe:

I'd say the real enemy of both of these companies is the vast, un-quantifiable mass of open source developers out there who don't want allegiance to any vendor - not to mention the public cloud IaaS and PaaS options out there (how exactly "cloud development" plays out with the ages-old lockin/closed source/single vendor stack architectural decisions is incredibly murky at this point). Clearly, with moves like buying WaveMaker, you can see some pitched VMW v. MSFT battles involving Lightswitch and other post-VB RAD development. As I told one Microsoft executive, if VMWare buys into an entirely different, non-Java ecosystem (e.g., Engineyard for Ruby, Zend for PHP, etc.), then it'd be time to doff the foil-hats for the real helmets.

The citizen developer is not an enterprise creation. The platform providers understand this and are catering to this developer community in a wholly different manner than the large enterprise vendors. Granted, there is a greater push on the part of the enterprise to reach out to developers. Still, developers are autonomous and prefer the independent nature and the culture that comes with being part of a community that controls how they program.

This past week, a new startup called DotCloud raised $10 million. Yahoo Co-Founder Jerry Yang participated in the funding. DotCloud is a platform that will compete against Windows Azure, Google App Engine and EngineYard.

In an interview this week, Solomon Hykes, Co-Founder and CEO of DotCloud, talked about the issues that developers face. Often, developers are encumbered by systems administration issues such as monitoring, scaling, backups, redundancy and upgrade management. They are forced to choose one programming language. On DotCloud, the system administration work can be offloaded. Developers can choose to use Ruby for the Web front end and Java on the back end. That's different from platforms such as Heroku, which is Ruby-based, and Google App Engine, which caters to people trained in Python.

Hykes says the target is the enterprise. And that shows something larger than just DotCloud. The demand in this world will be for platforms that give developers the power to customize.

That's the sea change. Its evidence is in the developer community and the DevOps movement. It's what makes the latest round of innovation so compelling. It's not the cloud. It's the people who are innovating and developing on their own terms, with any stack they decide to choose.


David Pallman announced the start of a new series in his Amazon Web Services and Windows Azure Architectural Comparison, Part 1 post of 3/26/2011:

image At the recent Cloud Connect show in the Bay Area I attended evangelist Jinesh Varia’s talk on Amazon Web Services design patterns, Design Patterns in the Cloud: A Love Story. I wanted to learn more about how Windows Azure is similar and dissimilar to other cloud platforms, and I found Jinesh’s talk to be insightful, well delivered, and entertaining. I thought therefore it would be useful to present the same progressive scenario as it would be done in Windows Azure.


The scenario in the presentation is Thursdate, a dating website that only runs 3 hours a week on Thursday evenings. Lonely geek Andy creates this site, initially in a very modest form, and it progressively grows and scales.

image In writing this, I’ve labored to avoid spin or mischaracterization. However, I must admit my knowledge of AWS is elementary compared to Windows Azure—so please do bring any errors to my attention. Nor am I claiming that everything here is an exact equivalent; this is not the case since no two cloud platforms are identical. I’m merely showing how the same scenario with the same progression would be achieved in Windows Azure.

1. Local Deployment
The first incarnation of Thursdate is hosted locally by Andy. He uses Apache as his application server, develops in PHP, and uses a MySQL database. He backs up to tapes. His deployment looks like this:

Local Deployment

2. Initial Cloud Deployment
When Andy gets the bright idea to move things into the cloud, he minimally needs a server in the cloud, a public access point for his web site, and a way to do backups. A single server in the cloud isn’t going to provide high availability or any guarantee of data persistence--but Andy isn’t concerned about that yet.

AWS: In AWS this means an Amazon EC2 Instance, an Elastic IP, and backups to the Amazon S3 storage service.

Windows Azure: In Windows Azure, the counterpart to EC2 is Windows Azure Compute. Andy must specify a role (hosting container) and number of VM instances. Here he chooses a worker role (the right container for running Apache) and one VM instance. He uploads metadata and an application package, from which Windows Azure Compute creates a Windows Server VM instance. An input endpoint is defined which provides accessibility to the web site. The input endpoint is nominally accessible as <production-name>.cloudapp.net; for a friendlier URL, a domain or subdomain can forward to this address. Backups are made to the Windows Azure Storage service in the form of blobs or data tables.

Initial Cloud Deployment
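To make the backup step above concrete, a minimal Ruby sketch in the spirit of the waz-storage samples earlier in this post might look like the following; the container name, file path and the container.store call are assumptions based on that library's documented usage, not part of David's scenario:

require 'rubygems'
require 'waz-blobs'

# Connect with the storage account credentials (placeholders).
WAZ::Storage::Base.establish_connection!(:account_name => 'foo',
                                          :access_key => 'bar',
                                          :use_ssl => true)

# Create (or reuse) a container dedicated to nightly backups.
container = WAZ::Blobs::Container.find('backups') ||
            WAZ::Blobs::Container.create('backups')

# Upload tonight's database dump as a blob named by date.
dump = File.read('/var/backups/thursdate.sql.gz')
container.store("thursdate-#{Time.now.strftime('%Y-%m-%d')}.sql.gz",
                dump, 'application/x-gzip')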

3. Designing for Failure
Pattern #1: Design for failure and nothing will fail
Andy soon realizes that failures can and will happen in a cloud computing environment and he’d better give that some attention. VM server state is not guaranteed to be persistent in a cloud computing environment. He starts keeping application logs and static data outside of the VM server by using a cloud storage service. He also makes use of database snapshots, which can be mapped to look like drive volumes.

AWS: The logs and static data are kept in the Amazon S3 storage service. Root and data snapshot drive volumes are made available to the VM server using the Amazon Elastic Block Service (EBS).

Windows Azure: Logs and static data are written to the Windows Azure Storage service in the form of blobs or tables. For snapshots, a blob can be mapped as a drive volume using the Windows Azure Drive service. As for the root volume of the VM, this is created from the Windows Azure Compute deployment just as in the previous configuration.

Updated Deployment - Designing for Failure

4. Content Caching
Pattern #2: Edge cache static content
Andy is starting to hit it big, and there is now significant usage of Thursdate in different parts of the world. He wants to take advantage of edge caching of static content. He uses a content distribution network to serve up content such as images and video performantly based on user location.

AWS: Amazon Cloudfront is the content distribution network.

Windows Azure: The Windows Azure Content Delivery Network (CDN) can serve up blob content using a network of 24+ edge servers around the world.

Updated Deployment - Caching Static Content

5. Scaling the Database
In preparing to scale, Andy must move beyond a self-hosted database on a single VM server instance. By using a database service outside of the compute VM, he will free himself to start using multiple compute VMs without regard for data loss.

AWS: The Amazon Relational Database Service (RDS) provides a managed database. Andy can continue to use MySQL.

Windows Azure: Andy must switch over to SQL Azure, Microsoft’s managed database service. This provides him with a powerful database available in sizes of 1-50GB. Data is automatically replicated such that there are 3 copies of the database. In addition, Andy can make logical backups if he chooses--to another SQL Azure database in the cloud or to an on-premise SQL Server database.

Updated Deployment - Using a Database Service
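For a Ruby stack like Andy's, the switch to SQL Azure could look something like the sketch below. It assumes the tiny_tds gem and its :azure connection option (an assumption about that gem's API worth verifying against its documentation); the server, login and database names are placeholders:

require 'rubygems'
require 'tiny_tds'

# SQL Azure listens on TCP 1433, requires encrypted connections and
# expects logins in the form user@servername.
client = TinyTds::Client.new(
  :host     => 'myserver.database.windows.net',
  :port     => 1433,
  :username => 'andy@myserver',
  :password => 'secret',
  :database => 'thursdate',
  :azure    => true)          # assumption: enables the gem's SQL Azure settings

result = client.execute('SELECT COUNT(*) AS members FROM Profiles')
result.each { |row| puts row['members'] }
client.close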

6. Scaling Compute
Pattern #3: Implement Elasticity
With a scalable data tier Andy is now free to scale the compute tier, which is accomplished by running multiple instances.

AWS: Andy runs multiple instances of EC2 through the use of an Auto-Scaling Group. He load balances web traffic to his instances by adding an Elastic Load Balancer.

Windows Azure: Andy has had the equivalents of a scaling group and an elastic load balancer all along: we just haven’t bothered to show them in the diagram until now and he hasn’t been taking advantage of them with a single compute instance. The input endpoint comes with a load balancer. The worker role is a scale group—its instances can be expanded or reduced, interactively or programmatically. The only change Andy needs to make is to up his worker role’s instance count, a change he can make in the Windows Azure management portal.

Updated Deployment – Compute Elasticity
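As noted above, the instance count can also be changed programmatically, via the Service Management REST API's Change Deployment Configuration operation. The Ruby sketch below is a hedged illustration rather than a definitive implementation: it bumps the Instances count in a local copy of the .cscfg and posts it back, and the subscription ID, certificate files, file names and target count are all placeholders.

require 'rubygems'
require 'net/https'
require 'base64'

subscription_id = 'your-subscription-id'   # placeholder
service_name    = 'thursdate'              # placeholder hosted service name

# Bump the worker role's instance count in a local copy of the .cscfg
# (this simple regex only touches the first role in the file).
cscfg = File.read('ServiceConfiguration.cscfg')
cscfg = cscfg.sub(/<Instances count="\d+"/, '<Instances count="4"')

uri = URI.parse("https://management.core.windows.net/#{subscription_id}" \
                "/services/hostedservices/#{service_name}" \
                "/deploymentslots/production/?comp=config")

http         = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.cert    = OpenSSL::X509::Certificate.new(File.read('management-cert.pem'))
http.key     = OpenSSL::PKey::RSA.new(File.read('management-key.pem'))

body = <<XML
<?xml version="1.0" encoding="utf-8"?>
<ChangeConfiguration xmlns="http://schemas.microsoft.com/windowsazure">
  <Configuration>#{Base64.encode64(cscfg).gsub("\n", '')}</Configuration>
</ChangeConfiguration>
XML

request      = Net::HTTP::Post.new(uri.request_uri,
                 'x-ms-version' => '2010-10-28',
                 'Content-Type' => 'application/xml')
request.body = body
response     = http.request(request)
puts response.code   # 202 Accepted means the configuration change was queued

In practice Andy would more likely just edit the instance count in the management portal, as David notes; the API route matters once he wants to automate scaling.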

7. High Availability and Failover
Pattern #4: Leverage Multiple Availability Zones
Andy wants to keep his service up and running even in the face of failures. He’s already taken the first important step of provisioning redundant resources for compute and data. Now he also wants to take advantage of failover infrastructure, so that a catastrophic failure (such as a server or switch failure) doesn’t take out his entire solution.

AWS: Andy sets up a second availability domain. His Amazon RDS database in the first domain has a standby slave counterpart in the second domain. The solution can survive a failure of either availability domain.

Windows Azure: The Windows Azure infrastructure has been providing fault domains all along. Storage, database, and compute are spread across the data center to prevent any single failure from taking out all of an application’s resources. At the storage and database level, replication, failover, and synchronization are automatic. The one area where fault domains weren’t really helping Andy until recently was in the area of Compute because he was running only one instance at first. A best practice in Windows Azure is to run at least 2 instances in every role.

Updated Deployment – Fault-tolerant

Summary – Part 1
We’re halfway through the progression, and you can see that Amazon Web Services and Windows Azure have interesting similarities and differences. In both cases we have arrived at a solution that is scalable, elastic, reliable, and highly available.

My main observation is that an informed Windows Azure developer would not need to jump through all of these hoops individually (I suspect that’s also true of an informed AWS developer). Second, some of the discrete steps in this progression are automatically provided by the Windows Azure environment and don’t require any specific action to enable; that includes compute elasticity, load balancing of public endpoints, and fault domains: they're always there. The Windows Azure edition of the scenario does require Andy to change database providers in order to realize the benefits of database as a service; aside from this all of the other steps are easy.

In Part 2 we’ll complete the comparison.


Klint Finley (@klintron) asked Oracle Had a Killer Quarter - What Does That Mean for Open Source in the Cloud? in a 3/26/2011 to the ReadWriteCloud blog:

image Oracle's sales increased 37% to $8.76 billion last quarter, according to Bloomberg. Cloud computing gets some of the credit for the revenue jump, causing a surge of interest in Oracle's databases and a 29% gain in new license sales.

Earlier this year David Linthicum wrote a post titled "Amazon's Oracle move shows open source won't gain in the cloud" in response to Amazon Web Services offering Oracle Databases on RDS.

EnterpriseDB CEO Ed Boyajian disagrees. EnterpriseDB provides commercial support and management tools for Postgres databases. "We're engaged with every major cloud provider today on how they can have an open source database alternative," he says. Boyajian says a lot of customers are tired of being locked into Oracle's databases and are looking for an alternative.

And that's not even mentioning the use of NoSQL databases like Apache Cassandra and HBase in cloud environments. But most of that is on the server side. As former Canonical COO Matt Asay says, it's invisible.

And as Linthicum writes:

Migrating from Oracle is a pretty risky proposition, considering how dependent many applications are on Oracle's features and functions. Indeed, as I work the cloud-migration project circuits, I find that those companies on Oracle stay on Oracle. Although they will consider open source alternatives for some projects, most enterprises and government agencies cite the existing skill sets within the organizations and a successful track record as the reasons they are remaining with Larry.

Even in cases where NoSQL tools are adopted, traditional databases tend to remain. For example, CERN and CMS adopted Apache CouchDB and MongoDB for certain uses, but kept Oracle for others.

And the numbers out from Oracle today suggest that the company's databases are not just being used by cloud customers, but behind the scenes as well.

Oracle seems like an unstoppable juggernaut at the moment, and it's making some bold moves. Earlier this week it announced that it would stop supporting Intel Itanium processors. Forrester analyst Richard Fichera describes this mostly as an attack on HP.

"Oracle was in a position where an arguably reasonable decision on long-term strategy also had a potentially negative tactical impact on a major competitor, and as such was probably much easier to make," he writes. Fichera notes how expensive and difficult it is to move migrate from Oracle databases, but suggests that IBM and Microsoft might benefit from the uncertainty that Oracle is creating. Could open source alternatives benefit here as well?


Klint Finley (@klintron) posted Big Data Round-Up: Unbounded Data, Cloudera Gets Competition and More on 3/25/2011 to the ReadWriteCloud blog:

This week the GigaOM event Structure Big Data took place in New York. We've already told you about the announcement of DataStax's Hadoop distribution, Brisk, and about the launch of our own Pete Warden's Data Science Toolkit. Here are a few more big data stories that you may have missed this week.

Should We Replace the Term Big Data with "Unbounded Data"?

imageThis is actually from a couple weeks ago, but I think it's worthy of inclusion. Clive Longbottom of Quocirca makes the case that "Big Data" is the wrong way to talk about the changes in the ways we store, manage and process data. The term certainly gets thrown around a lot, and in many cases for talking about managing data that is much smaller than the petabytes of data that arguably defines big data. Longbottom suggests the term "unbounded data":

Indeed, in some cases, this is far more of a "little data" issue than a "big data" one. For example, some information may be so esoteric that there are only a hundred or so references that can be trawled. Once these instances have been found, analysing them and reporting on them does not require much in the way of computer power; creating the right terms of reference to find them may well be the biggest issue.
Hadapt and Mapr Take on Cloudera

Hadapt and Mapr both launched at Structure Big Data this week.

Mapr is a new Hadoop vendor and competitor to Cloudera co-founded by ex-Googler M.C. Srinivas. Mapr announced that it is releasing its own enterprise Hadoop distribution that uses its own proprietary replacement for the HDFS file system. In addition to Cloudera, Mapr will compete with DataStax and Appistry.

Hadapt is a new company attempting to bring SQL-like relational database capabilities to Hadoop. It leaves the HDFS file system intact and uses HBase.

For more about the heating up of the Hadoop market, don't miss Derrick Harris' coverage at GigaOM.

Tokutek Updates Its MySQL-based Big Database

Don't count MySQL out of big data quite yet. Tools like HandlerSocket (coverage) and Memcached help the venerable DB scale. So does TokuDB from Tokutek, a storage engine used by companies like Kayak to scale up MySQL and MariaDB while maintaining ACID compliance.

The new version adds hot indexing, for building queries on the fly, and hot column addition and deletion for managing columns without restarting the database.

The Dark Side of Big Data

Computerworld covers the relationship between surveillance and big data at the conference. "It will change our existing notions of privacy. A surveillance society is not only inevitable, it's worse. It's irresistible," Jeff Jonas, chief scientist of Entity Analytic Solutions at IBM, told Computerworld.

We covered this issue last year and asked what developers would do with access to the massive data sets that location-aware services enable. It's still an open question.

For more background on Jonas' analytics work, check out this InfoWorld piece. …

Disclosure: IBM is a ReadWriteWeb sponsor.


John C. Stame asserted Hybrid Clouds – The Best Cloud! in a 3/25/2011 post:

An article a couple of months back in VentureBeat asked if Hybrid Clouds were the path to cloud-computing nirvana. I am not sure I would go that far, but I do believe that most enterprise IT organizations will land at a place where they will have a private cloud, leverage a public cloud, and take advantage of the very compelling benefits of hybrid cloud scenarios.

Most large companies will not re-engineer their existing portfolio of applications to run in public clouds; in fact, they will continue to invest in their existing infrastructure to enable the benefits of cloud and build out their own private clouds – driving down infrastructure costs while gaining agility for their businesses. Some will also gain flexibility, greater business agility, and greater cost savings through hybrid cloud scenarios.

Hybrid cloud will enable IT in the enterprise to optimize infrastructure services and computing resources across their data centers and service providers as necessary to respond to business demands. Hybrid clouds will provide portability for workloads while maintaining consistent control and management, along with security and compliance.

A key factor in enabling the hybrid cloud capability is that IT will need to build a private cloud platform that is consistent with cloud service providers and supports the portability mentioned above. One approach that is leading the way is VMware’s vCloud Powered service providers. Check it out.


Klint Finley (@klintron) reported OpenStack Now Supports VMware vSphere in a 3/25/2011 post to the ReadWriteCloud blog:

imageThe talk about OpenStack's openness should not cloud out what is really happening. And that's the fact that there is a whole lot of innovation coming from this growing group of engineers and developers.

Simon Crosby of Citrix puts this in perspective today in a post about OpenStack's support for vSphere that Citrix played a large part in helping make happen.

image The vSphere support now brings to seven the total OpenStack compute options available: vSphere, XenServer/Xen Cloud Platform, Xen, Hyper-V, KVM, QEMU, and UML.

Why did Citrix, a huge VMware foe, decide to embark upon the effort to support the VMware hypervisor in the OpenStack community? Crosby puts it this way:

imageWe in the OpenStack community believe that you ought to have a choice. We don't think you need to throw out ESX or even vSphere. You made a rational decision to use (err, buy) it. But you ought to have a choice as to whether your cloud implementation will lock you into a single vendor model forever, with a limited set of expensive value-added services. We think there's another way, that permits innovation in cloud services and solutions, that scales massively, and that is wholly free. The answer is OpenStack Compute - a massively scalable cloud orchestration system developed by over 50 vendors, hundreds of engineers and put in production by the world's largest cloud service providers.

Wardey is one of the better bloggers out in the cloud world. He pulls no punches. But his points are quite valid.

imageOpenStack is innovating, showing its importance as an organization. Adding vSphere opens up the possibilities for customers.

And that's a good thing.


James Governor (@monkchips) reported HP Gets With the (Developer) Program. CEO pimps PaaS, NoSQL in a 3/18/2011 post to the Redmonk blog (missed when posted):

I got back yesterday from a couple of days at the HP Analyst Summit in SF. It’s been a really tough week personally – an eye injury made the trip far less fun than it might have been. Given my vision is blurry, I will try and keep this post short and to the point.

Firstly, it’s time to dust off my trusted “blimey, the CEO is pimping Ruby” meme. In his opening keynote, new HP CEO Leo Apotheker said that HP planned to build both a public cloud and, specifically, a Platform as a Service offering.

"We want to be a PaaS company"- we’ll have a complete suite for developers."

image Public cloud Infrastructure As a Service (IaaS) is to be expected from HP as a way to sell servers and storage – PaaS not so much. HP’s history in middleware is chequered to say the least, and PaaS is modern shorthand for middleware in the stack burger. Unfortunately I wasn’t able to glean any technical details at all about the PaaS offering during the event, so it might have just been CEOware – but CTO Shane Robison had a notable chart, showing not just PaaS, but multi-language PaaS. Java and .NET you’d absolutely expect from HP – Ruby, Python and Javascript not so much. In terms of customer demand Robison was clear:

“Customers asking HP for public cloud support. Customers asking for more sophisticated billling support”

Game on Amazon AWS.

But back to Leo: he also made an extremely aggressive anti-Oracle statement, which any modern web developer would absolutely recognise:

"traditional relational databases are becoming less and less relevant to the future stack". 

The call to NoSQL is a wakeup call because unlike IBM, Oracle and Microsoft, HP doesn’t have a relational database franchise to protect. Sure it sells a boatload of servers to run relational databases, but it’s not locked in from a customer information perspective. HP and VMware are in a similar situation here, and it’s worth reading my post about VMware in the post-relational era for more context.

What might the era of Oracle database offload look like? Something like this, probably – see my case study from The Guardian Newspaper. Oracle is great for transactional workloads – we all know that – but it should not be the default choice for all data storage. Oracle is overly heavyweight, and demands design-time data model decision-making which makes very little sense in an age of linked data, used and reused in new contexts. It’s also just too expensive to be used as a straightforward bucket of bits; MySQL is more appropriate for that role – but developers are moving on when it comes to graph and document databases. Check out my client Neo4J, for example, as a modern, made for purpose, graph store. But the web is churning out a host of interesting new stores – Cassandra is a speedy key value store database built and open sourced by web companies. Though I am sure HP will be aggressively pushing its own Vertica database for column-oriented apps, one of the first acquisitions of the Leo era, there are surely more acquisitions to come. It seems highly likely it will make a play for Hadoop master packager, our client Cloudera.

Anyway I seem to be getting off topic. Suffice to say HP is now having the kind of conversation that RedMonk is interested in. Less silence, more interesting.

But of course HP’s posture to developers needs to change dramatically: it needs to lean in rather than back (there is nothing that puts off developers faster than disinterest), and there are hints that this is happening. Posture comes from the CEO down, and Apotheker definitely understands the value of rich ecosystems like the SAP Developer Network. HP is going to invest heavily here, which will be good for the company. And yes – to be self-serving just for a second – it could well be good for RedMonk. HP now looks like a potential major client, in a way it really didn’t just a few months ago.

And I haven’t even mentioned Palm and webOS yet. Without developers Palm will be a total waste of shareholder money. But the webOS SDK is something developers like, and it makes porting reasonably easy for mobile apps built with some web technology.

Another thing developers like is performance. I met one of my friends from Facebook on the plane over and he dismissed Palm with a curt: “performance sucks”. Evidently he hasn’t seen the Touchpad- dear lord that thing was screaming in the demos.

I didn’t spend that long on webOS at the event, frankly, mostly because we’re already hooked up with its developer relations folks, and I already know the platform and potential plays pretty well. Talking of posture: I recently introduced one of my contacts to the team – a French entrepreneur/engineer in the XMPP messaging space – and they practically bit my hand off in setting up a meeting with him in Sunnyvale.

Man this post is getting long. Other things to mention – ALM 11 looks pretty solid. Rational is going to need to up its game, because HP is in good shape there. One way HP moved forward really quickly is by signing a deal with another RedMonk client, TaskTop Technologies – which is making ALM less painful by using pointer-based approaches to integrate with existing tools, rather than taking the traditional ALM vendor approach of forcing all development metadata into a single repository. CEO Mik Kersten is a disciple of flow, and he hates anything that gets in the developers’ way. It was funny talking to Jonathan Rende, of the old “BTO” school, the guy in charge of ALM – he was totally straightforward. He traditionally sold to ops, and didn’t really care about developers that much, but things had changed. As per his keynote:

"[I] see a collision happening between agile and ITIL." Jonathan Rende

HP is responding to the world of devops, agile and The Developer Landgrab. Developers are of course the new kingmakers.

I am going to sign off here – without writing up some of the interesting tools I saw at the event like HP Sprinter or IT-Hive (putting a face on operations) – but perhaps what struck me most clearly after the event? Leo Apotheker got the sole CEO job at SAP just as the global economy went to hell, and he paid the price. Today however, the economy is heating up, and HP has some great assets to get behind. Apotheker seems to have taken on the biggest job in tech at just the right time.

Now he just needs to get busy on sustainability, but that’s definitely a different post.

HP is not a subscription client, but paid T&E. Apache is a client – Cassandra is a project there. IBM is a client, Microsoft is too. We do some work with SAP.

Still no mention of the Windows Azure Platform Appliance (WAPA) partnership with Microsoft announced at last year’s Microsoft Worldwide Partner Conference (WPC 2010).


<Return to section navigation list>  
