Tuesday, February 01, 2011

Windows Azure and Cloud Computing Posts for 2/1/2011

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Updated my (@rogerjenn) Resource Links for SQL Azure Federations and Sharding Topics of 1/28/2011 with the following item:

••• Shaun Ziyan Xu posted a link to the source code for a Partition-Oriented Data Access (PODA) C# library to Google Code in his Happy Chinese New Year! post of 2/1/2011:

If you have heard about the new feature for SQL Azure named SQL Azure Federation, you might know that it’s a cool feature and solution about database sharding. But for now there seems no similar solution for normal SQL Server and local database. I had created a library named PODA, which stands for Partition Oriented Data Access which partially implemented the features of SQL Azure Federation.

I’m going to explain more about this project after the Chinese New Year but you can download the source code here.

Downloads from Google Code don’t support IE 8 and require Mozilla Firefox or Chrome. [Parochial, no?]

I’ll update this post when Shaun recovers from celebrating Chinese New Year (also big in Oakland) and delivers the promised explanation.


Steve Yi posted Azure Anniversary and Customer Momentum on 2/1/2011 to the SQL Azure blog:

A Customer Spotlight Press Release today reminded us that we've reached the one year anniversary of the Windows Azure platform.  Given all the exciting developments and progress we have made over the last year, it is easy to forget that it's been only a year since SQL Azure was first available as a paid service!  For us, of course, the most interesting part of this journey has been watching the innovative ways in which our customers use SQL Azure.

In January alone, we blogged about an energy management software startup which was using the cloud to reach new markets and an online marketing services company which was leveraging  SQL Azure and the cloud for massive scale.

Today's press release sheds light on a range of other interesting scenarios, including an app created by T-Mobile USA in just six weeks that lets families schedule and coordinate activities such as movie nights, and Xerox's innovative cloud-based mobile printing solution using SQL Azure and Windows Azure.

I'd like to call out the Xerox solution in particular.  In 2010, Xerox introduced Xerox Mobile Print, a solution that allows its workers to send documents to any printer in the company from a handheld device such as their smartphone. Now, when employees are traveling on business, they can do things like quickly print a presentation for tomorrow's client meeting without having to spend countless hours at a local hotel business center.

Xerox used SQL Azure to store all relational data including information related to user accounts, jobs, and devices as well as print job metadata. By deploying Xerox Cloud Print on SQL Azure, Xerox was able to get the solution to market in just four months. Eugene Shustef, Chief Engineer, Global Document Outsourcing at Xerox, mentions that his developers saw no difference between SQL Azure and SQL Server and that the familiarity offered by the platform was critical in getting to market quickly and gaining a global presence overnight.   Read more about Eugene's interview and the associated case study here.

Cory Fowler (@SyntaxC4) created the birthday cake above for the Windows Azure Platform.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Mike Ormond continues his OData/WP7 series with OData and Windows Phone 7 Part 2 of 2/1/2011:

In my last post, we walked through accessing the Twitpic OData feed to retrieve a collection of images for a particular user. We ended up with an app that simply displayed a list of ShortIds for the images concerned.

In this post we’ll jazz things up a bit and display the actual images. The picture on the right is the sort of thing we’re shooting for.

Fundamentally, nothing really needs to change. We’ve got the data; all we need to do is change the presentation a little bit.

Getting the Images

First up is getting hold of the images themselves. If you take a good look at the results of our query:

http://odata.twitpic.com/Users('zdnet')/Images?$top=20

You’ll see that each feed item (or <entry>) contains a <properties> element. The proxy classes generated by DataSvcUtil expose children of the <properties> element as properties, thus giving us easy access. So, for example, the Image type we’re using has properties ShortId, UserId, Source, Message, Views, etc.
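For illustration, the generated proxy for the Image entity might look roughly like this (the property list and types are my guess based on the properties named above; the real DataSvcUtil output also includes change-notification plumbing):

// Hypothetical, simplified shape of the DataSvcUtil-generated proxy class.
public partial class Image
{
    public string ShortId { get; set; }
    public string UserId { get; set; }
    public string Source { get; set; }
    public string Message { get; set; }
    public int Views { get; set; }

    // Points at the Twitpic HTML page for the picture, not the image itself.
    public string Url { get; set; }
}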

The Value Converter

Unfortunately, it doesn’t have an ImageUrl property or equivalent. The Url property points to an HTML document – we want a URL that points directly to the image. We’re going to have to make one ourselves. Enter the value converter. This allows us to modify data as part of the binding process. In our case, we’ll take the ShortId and transform it into an image URL.

Our value converter implementation is trivial:

[Screenshot: the value converter implementation]
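A rough sketch of such a converter (the exact Twitpic thumbnail URL format here is an assumption, not taken from the post):

using System;
using System.Globalization;
using System.Windows.Data;

// Sketch of a value converter that turns a ShortId into a thumbnail URL.
public class ThumbnailConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
                          object parameter, CultureInfo culture)
    {
        // Assumed URL format for Twitpic thumbnails.
        return string.Format("http://twitpic.com/show/thumb/{0}", value);
    }

    public object ConvertBack(object value, Type targetType,
                              object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}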

And we can just add that as a new class in the project. We’ll need an instance of that type (which we can create in XAML) to do the conversion as part of the data binding.

Modify the XAML for the PhoneApplicationPage as follows (changes are the addition of the local namespace and the creation of a ThumbnailConverter as a resource):

[Screenshot: the updated PhoneApplicationPage XAML with the ThumbnailConverter resource]

Now we just need to adjust our original binding expression slightly by changing it to:

[Screenshot: the updated binding expression]

By referencing the value converter, our rendered ListBox now looks like the image on the right.

Show Some Pictures

It should be pretty trivial to render some images now – just change the TextBlock element in the ItemTemplate to an Image.

I’ve also added a Margin and set a fixed Width / Height on the Image element as below.

[Screenshot: the Image element with Margin, Width and Height set]

Nearly There

This displays a list of images, and there are just a couple of things we need to do to replicate our “desired outcome”.

First we ought to change the ApplicationTitle and PageTitle elements (to “TWITPIC ODATA VIEWER” and “by user id” respectively).

Finally, we need to switch the ListBox to use a WrapPanel (which lives in the Silverlight for Windows Phone Toolkit) instead of a StackPanel. The easiest way to do this is to open the project in Expression Blend (right-click on the project and select “Open in Expression Blend”).

We <3 Expression Blend

Add a reference to Microsoft.Phone.Controls.Toolkit.dll (you’ll need to download and unzip the Toolkit if you don’t already have it).

Find the ListBox in the Objects and Timeline window, right-click and select Edit Additional Templates –> Edit Layout of Items (ItemsPanel) –> Create Empty and click Okay to create a new ItemsPanelTemplate resource.

[Screenshot: creating the ItemsPanelTemplate resource in Expression Blend]

In the ItemsPanelTemplate delete the StackPanel and replace it with a WrapPanel (either use the Assets selector or the Assets Window –> Controls –> Panels to find it).

[Screenshot: replacing the StackPanel with a WrapPanel]

Build and run (from Blend or save all and go back to Visual Studio and run from there) and we have our “finished” app.


Jonathan Tower explained Consuming an OData Feed in .NET 2.0 in a 2/1/2011 post to the Falafel Software (@falafelsoftware) blog:

I was recently asked to create an ASP.NET 2.0 web site that consumed an OData (WCF Data Services) data feed.  Of course, in .NET 4.0, this would have been easy given the great tool that Microsoft has provided for us in the WCF Data Services Client (System.Data.Services.Client).  However, in .NET 2.0, this presented somewhat of a challenge.

For all the examples in this blog, I’m using Chris Woodruff’s baseball stats feed located at http://baseball-stats.info/OData/baseballstats.svc/

I decided to write my own simple OData ATOM/XML parser.  But first, I wanted some classes to hold the data.  Ideally, I wanted classes that matched the entities I would get auto-generated for me in .NET 4.0.

To start, I created a .NET 4.0 project and added a service reference to my OData Service.  This creates a code-generated file containing classes for all the entities defined in the OData feed.  By highlighting the service reference, clicking Show Hidden Files, and expanding a couple of the nodes that appear beneath it, I was able to see the code-generated file Reference.cs.  I could now create a .NET 2.0 project and copy this file into it.  With a few changes, it works perfectly as a container for all the feed data I’m going to parse for myself.

[Screenshot]

The first clean-up step for this code-generated file was to remove the entirety of the first class defined in it.  For me, this class was named “BaseballStatsEntities”.  After this, I commented out or removed all the attributes appearing before the remaining classes and properties in the file.  Attributes are the lines that appear in square brackets.   After this clean-up, I was able to compile the .NET 2.0 project.

Next I created a helper class that I called EntityHelper.  This class is responsible for retrieving the OData feed as XML, parsing it, and loading it into my entity classes.

To get the OData as XML, I’m using a method called ExecuteQuery().  Here’s how it looks:

private string ExecuteQuery(string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(url));
    request.Method = "GET";
    request.Accept = "application/atom+xml";

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        using (StreamReader readStream = new StreamReader(
            response.GetResponseStream(), Encoding.GetEncoding("utf-8")))
        {
            return readStream.ReadToEnd();
        }
    }
}

Next, I load the XML string data into an XmlDocument (with the following XML namespaces set up to allow parsing):

string xml = ExecuteQuery(url);
XmlDocument xmlDoc = new XmlDocument();
XmlNamespaceManager xmlNsMgr = new XmlNamespaceManager(xmlDoc.NameTable);
xmlNsMgr.AddNamespace("atom", "http://www.w3.org/2005/Atom");
xmlNsMgr.AddNamespace("m", "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata");
xmlNsMgr.AddNamespace("d", "http://schemas.microsoft.com/ado/2007/08/dataservices");
xmlNsMgr.AddNamespace("base", "http://falafel.cloudapp.net/ConferenceODataService/");
xmlDoc.LoadXml(xml);

Now I loop through all the <entry> nodes in the file like this:

XmlNodeList elements = xmlDoc.DocumentElement.SelectNodes("./atom:entry", xmlNsMgr);
foreach (XmlNode element in elements)
{
    ...
}

For each element, I create a new entity object instance and loop through the element node’s <property> nodes:

XmlNodeList properties = element.SelectSingleNode("./atom:content/m:properties", xmlNsMgr).ChildNodes;
foreach (XmlNode property in properties)
{
    ...
}

Now for each property, I can use reflection to set the entity object’s matching property to the value contained in the property node.  Finally, after all the properties are set,  I add the entity to a list of results.
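Here is a rough sketch of what that reflection step might look like (the variable names and the Convert.ChangeType handling are my assumptions, not the author's exact code):

// Hypothetical sketch: copy a <d:PropertyName> node's value onto the
// entity's matching property ('entity' is the new object, 'property' is
// the current XmlNode from the loop above).
PropertyInfo propertyInfo = entity.GetType().GetProperty(property.LocalName);
if (propertyInfo != null && !string.IsNullOrEmpty(property.InnerText))
{
    // Nullable and date/time properties would need extra handling here.
    object value = Convert.ChangeType(property.InnerText, propertyInfo.PropertyType);
    propertyInfo.SetValue(entity, value, null);
}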

On my test page in the .NET 4.0 project, I use LINQ and the WCF Data Services client to display all the teams with “sox” in their name in 2009 like this:

BaseballStats.BaseballStatsEntities context =
    new BaseballStats.BaseballStatsEntities(
        new Uri("http://baseball-stats.info/OData/baseballstats.svc/"));
var teamQuery = from t in context.Team
                where t.name.Contains("sox") && t.yearID == 2009
                select t;
rptPlayers.DataSource = teamQuery;
rptPlayers.DataBind();

In the .NET 2.0 project, the same query looks like this (note that I limited it to just 2009 results in the EntityHelper for speed reasons):

EntityHelper context = new EntityHelper(
    "http://baseball-stats.info/OData/baseballstats.svc/");
List<Team> teams = context.Teams.FindAll(
    delegate(Team t) { return t.name.ToLower().Contains("sox"); });
rptPlayers.DataSource = teams;
rptPlayers.DataBind();

The full code of the example used in this post is available for download.



<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

Richard Seroter posted Interview Series: Four Questions With … Rick Garibay on 1/31/2011:

Welcome to the 27th interview in my series with thought leaders in the “connected systems” space.  This month, we’re sitting down with Rick Garibay who is GM of the Connected Systems group at Neudesic, blogger, Microsoft MVP and rabid tweeter.

Let’s jump in.

Q: Lately you’ve been evangelizing Windows Server AppFabric, WF and other new or updated technologies. What are the common questions you get from people, and when do you suspect that adoption of this newer crop of app plat technologies will really take hold?

A: I think our space has seen two major disruptions over the last couple of years. The first is the shift in Microsoft’s middleware strategy, most tangibly around new investments in Windows Server AppFabric and Azure AppFabric as a complement to BizTalk Server, and the second is the public availability of Windows Azure, making their PaaS offering a reality in a truly integrated manner.

I think that business leaders are trying to understand how cloud can really help them, so there is a lot of education around the possibilities and helping customers find the right chemistry and psychology for taking advantage of Platform as a Service offerings from providers like Microsoft and Amazon. At the same time, developers and architects I talk to are most interested in learning about what the capabilities and workloads are within AppFabric (which I define as a unified platform for building composite apps on-premise and in the cloud as opposed to focusing too much on Server versus Azure), how they differ from BizTalk, where the overlap is, etc. BizTalk has always been somewhat of a niche product, and BizTalk developers very deeply understand modeling and messaging so the transition to AppFabric/WCF/WF is very natural.

On the other hand, WCF has been publicly available since late 2006, but it’s really only in the last two years or so that I’ve seen developers really embracing it. I still see a lot of non-WCF services out there. WCF and WF both somewhat overshot the market, which is common with new technologies that provide far more capabilities than current customers can fully digest or put to use. Value added investments like WCF Data Services, RIA Services, exemplary support for REST and a much more robust Workflow Services story not only showcase what WCF is capable of but have gone a long way in getting this tremendous technology into developer hands who previously may have only scratched the surface or been somewhat intimidated by it in the past. With WF written from the ground up, I think it has much more potential, but the adoption of model-driven development in general, outside of the CSD community is still slow.

In terms of adoption, I think that Microsoft learned a lot about the space from BizTalk and by really listening to customers. The middleware space is so much different a decade later. The primary objective of Server AppFabric is developer and ops productivity and bringing WCF and WF Services into the mainstream as part of a unified app plat/middleware platform that remains committed to model-driven development, be it declarative or graphical in nature. A big part of that strategy is the simplification of things like hosting, monitoring and persistence while making tremendous strides in modeling technologies like WF and Entity Framework. I get a lot of “Oh wow!” moments when I show how easy it is to package a WF service from Visual Studio, import it into Server AppFabric and set up persistence and tracking with a few simple clicks. It gets even better when ops folks see how easily they can manage and troubleshoot Server AppFabric apps post deployment.

It’s still early, but I remember how exciting it was when Windows Server 2003 and Vista shipped natively with .NET (as opposed to a separate install), and that was really an inflection point for .NET adoption. I suspect the same will be true when Server AppFabric just ships as a feature you turn on in Windows Server.

Q: SOA was dead, now it’s back.  How do you think that the most recent MS products (e.g. WF, WCF, Server AppFabric, Windows Azure) support SOA key concepts and help organization become more service oriented?  in what cases are any of these products LESS supportive of true SOA?

A: You read that report too, huh? :)

In my opinion, the intersection of the two disruptions I mentioned earlier is the enablement of hybrid composite solutions that blur the lines between the traditional on-prem data center and the cloud. Microsoft’s commitment to SOA and model-driven development via the Oslo vision manifested itself into many of the shipping vehicles discussed above and I think that collectively, they allow us to really challenge the way we think about on-premise versus cloud. As a result I think that Microsoft customers today have a unique opportunity to really take a look at what assets are running on premise and/or traditional hosting providers and extend their enterprise presence by identifying the right, high value sweet spots and moving those workloads to Azure Compute, Data or SQL Azure.

In order to enable these kinds of hybrid solutions, companies need to have a certain level of maturity in how they think about application design and service composition, and SOA is the lynchpin. Ironically, Gartner recently published a report entitled “The Lazarus Effect” which posits that SOA is very much alive. With budgets slowly resuming pre-survival-mode levels, organizations are again funding SOA initiatives, but the demand for agility and quicker time-to-value is going to require a more iterative approach which I think positions the current stack very well.

To the last part of the question, SOA requires discipline, and I think that often the simplicity of the tooling can be a liability. We’ve seen this in JBOWS un-architectures where web services are scattered across the enterprise with virtually no discoverability, governance or reuse (because they are effortless to create) resulting in highly complex and fragile systems, but this is more of an educational dilemma than a gap in the platform. I also think that how we think about service-orientation has changed somewhat with the proliferation of REST. The fact that you can expose an entity model as an OData service with a single declaration certainly challenges some of the precepts of SOA but makes up for that with amazing agility and time-to-value.
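For context, the "single declaration" Rick refers to is essentially the access rule in a WCF Data Services class; a minimal sketch follows (the service and model names here are placeholders):

using System.Data.Services;
using System.Data.Services.Common;

// Placeholder names: any Entity Framework ObjectContext can stand in here.
public class NorthwindService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // One access rule is all it takes to expose every entity set
        // in the model as a read-only OData feed.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}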

Q: What’s on your personal learning plan for 2011?  Where do you think the focus of a “connected systems” technologist should be?

A: I think this is a really exciting time for connected systems because there has never been a more comprehensive, unified platform for building distributed applications and the ability to really choose the right tool for the job at hand. I see the connected systems technologist as a “generalizing specialist”, broad across the stack, including BizTalk and AppFabric (WCF/WF Services/Service Bus), while wisely choosing the right areas to go deep and iterating as the market demands. Everyone’s “T” shape will be different, but I think building that breadth across the crest will be key.

I also think that understanding and getting hands on with cloud offerings from Microsoft and Amazon should be added to the mix with an eye on hybrid architectures.

Personally, I’m very interested in CEP and StreamInsight and plan on diving deep (your book is on my reading list) this year as well as continuing to grow my WF and AppFabric skills. The new BizTalk Adapter Pack is also on my list as I really consider it a flagship extension to AppFabric.

I’ve also started studying Ruby as a hobby as it’s been too long since I’ve learned a new language.

Q [stupid question]: I find it amusing when people start off a sentence with a counter-productive or downright scary disclaimer.  For instance, if someone at work starts off with “This will probably be the stupidest thing anyone has ever said, but …” you can guess that nothing brilliant will follow.  Other examples include “Now, I’m not a racist, but …” or “I would never eat my own children, however …” or “I don’t condone punching horses, but that said …”.  Tell us some terrible ways to start a sentence that would put your audience in a state of unrest.

A: When I hear someone say “I know this isn’t the cleanest way to do it but…” I usually cringe.

Thanks Rick!  Hopefully the upcoming MVP Summit gives us all some additional motivation to crank out interesting blog posts on connected systems topics.

No significant articles today.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team reported a New TechNet Edge Series Talks with Microsoft IT About the Company's Move to the Cloud and Windows Azure on 2/1/2011:

Want to hear first-hand about Microsoft's own move to the cloud and Windows Azure?  Would you like to know more about the benefits, implications and strategies involved?  Then be sure to listen in to a new series of TechNet Edge interviews with top Microsoft IT executives as they discuss the company's move to the cloud.  The first interview, "What does the cloud mean to Microsoft CIO, Tony Scott?", aired last Wednesday and subsequent interviews will debut every Wednesday for the next few months.   See below to learn more about what Tony talked about, as well as an overview of the next show, "Microsoft IT Enterprise Architecture and the Cloud."

Wednesday, January 26, 2011: What does the Cloud mean to Microsoft CIO, Tony Scott?
Tony Scott, Microsoft CIO, is leading the Microsoft IT organization to invest in the cloud. Watch this interview to find out how it will bring new possibilities and benefits to the business.

Wednesday, February 2, 2011:  Microsoft IT Enterprise Architecture and the Cloud
Migrating an IT department to the cloud impacts more than just IT architecture, business processes, applications, employees, and facilities. In this interview, key subject matter experts will explain how Microsoft IT architecture will be integrated to the cloud and what nirvana would look like to its CTO.


Wesy described Microsoft Platform Ready Testing Videos in a 2/1/2011 post to the US ISV Evangelism blog:

If you or your company is part of the Microsoft Platform Ready (MPR) program, these videos should provide insight into the MPR Test Tools. These four videos cover Windows 7, Windows Server 2008 R2, Windows Azure/SQL Azure, and SQL Server 2008 R2.

If you are not familiar with the MPR program: MPR provides free technical, marketing, and certification support for independent software vendors (ISVs) that are developing applications on the Windows Azure platform, Windows Server 2008 R2, SQL Server 2008 R2, SharePoint 2010, Windows 7 and Windows Phone 7. The MPR program is free and it’s a great way to help get your application to market faster.

 Testing Windows 7 Readiness using the Microsoft Platform Ready Test Tool

This session discusses the steps involved getting up and running using the MPR Test Tool. It also demonstrates how to use the tool and the steps involved testing your applications. This session focuses on Windows 7.

Testing Windows 7 Readiness Using the MPR Test Tool

Testing Windows Server 2008 R2 Readiness using the Microsoft Platform Ready Test Tool

This session discusses the steps involved getting up and running using the MPR Test Tool. It also demonstrates how to use the tool and the steps involved testing your applications. This session focuses on Windows Server 2008 R2


Testing Windows Server 2008 R2 Readiness Using the MPR Test Tool

Testing Windows Azure and SQL Azure Readiness using the Microsoft Platform Ready Test Tool [Emphasis added.]

This session discusses the steps involved getting up and running using the MPR Test Tool. It also demonstrates how to use the tool and the steps involved testing your applications. This session focuses on Windows Azure and SQL Azure.


Testing Windows Azure and SQL Azure Readiness Using the MPR Test Tool

Testing SQL Server 2008 R2 Readiness using the Microsoft Platform Ready Test Tool

This session discusses the steps involved getting up and running using the MPR Test Tool. It also demonstrates how to use the tool and the steps involved testing your applications. This session focuses on SQL Server 2008 R2.


Testing SQL Server 2008 R2 Readiness Using the MPR Test Tool

You’ll need to use the MPR Testing Tool to qualify for the Windows Azure Platform Cloud Essentials for Partners benefit that my Windows Azure Compute Extra-Small VM Beta Now Available in the Cloud Essentials Pack and for General Use post, updated 1/31/2011, describes.


Buck Woody (@buckwoody) described how to use Windows Azure Emulators On Your Desktop in a 2/1/2011 post:

Many people feel they have to set up a full Azure subscription online to try out and develop on Windows Azure. But you don’t have to do that right away. In fact, you can download the Windows Azure Compute Emulator – a “cloud development environment” – right on your desktop. No, it’s not for production use, and no, you won’t have other people using your system as a cloud provider, and yes, there are some differences with Production Windows Azure, but you’ll be able to code, run, test, diagnose, watch, change and configure code without having any connection to the Internet at all. The best thing about this approach is that when you are ready to deploy the code you’ve been testing, a few clicks deploys it to your subscription when you make one.

So what deep-magic does it take to run such a thing right on your laptop or even a Virtual PC? Well, it’s actually not all that difficult. You simply download and install the Windows Azure SDK (you can even get a free version of Visual Studio for it to run on – you’re welcome) from here: http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx

This SDK will also install the Windows Azure Compute Emulator and the Windows Azure Storage Emulator – and then you’re all set. Right-click the icon for Visual Studio and select “Run as Administrator”:

[Screenshot: running Visual Studio as Administrator]

Now open a new “Cloud” type of project:

[Screenshot: creating a new Cloud project]

Add your Web and Worker Roles that you want to code:

[Screenshot: adding Web and Worker Roles]

And when you’re done with your design, press F5 to start the desktop version of Azure:

[Screenshot: the compute emulator starting]

Want to learn more about what’s happening underneath? Right-click the tray icon with the Azure logo, and select the two emulators to see what they are doing:

[Screenshots: the compute and storage emulators at work]

In the configuration files, you’ll see a “Use Development Storage” setting. You can call the BLOB, Table or Queue storage and it will all run on your desktop. When you’re ready to deploy everything to Windows Azure, you simply change the configuration settings and add the storage keys and so on that you need.
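For illustration, here is a minimal sketch of what code against development storage might look like with the StorageClient library that ships with the SDK (the container name and sample method are my own example, not from Buck's post):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class DevStorageSample
{
    static void Main()
    {
        // "UseDevelopmentStorage=true" points the client at the local storage emulator.
        CloudStorageAccount account =
            CloudStorageAccount.Parse("UseDevelopmentStorage=true");

        // The same code runs against real Windows Azure storage once the
        // connection string is swapped for one with your account name and key.
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("samples");
        container.CreateIfNotExist();
    }
}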

Want to learn more about all this?


The HPC in the Cloud blog posted T-Mobile USA and Xerox Adopt the Windows Azure Platform in the Development of Key Applications on 2/1/2011:

Microsoft Corp. announced today that T-Mobile USA Inc. and Xerox Corp. have adopted the Windows Azure platform and are creating new services and gaining greater efficiencies through cloud computing. This announcement comes as Microsoft celebrates the one-year anniversary of Windows Azure Platform availability.

image "These companies are creating new solutions and businesses in record time that reinforce the new possibilities created by the cloud," said Doug Hauger, general manager, Windows Azure, Microsoft. "A company can quickly use the power of the Windows Azure platform to scale and meet the demands of any market. Success is no longer limited by computing capacity, capability or cost, it is now a matter of imagination and execution."

The Windows Azure platform has enabled T-Mobile and Xerox to address customer needs quickly with innovative solutions that are cost-effective and reliable.

T-Mobile USA

Wireless services provider T-Mobile was preparing to create Family Room, a new mobile software application that helps families share photos and coordinate activities, like a family movie night, across multiple devices. Working under a tight deadline, the developers chose Windows Azure and SQL Azure, along with Microsoft Visual Studio 2010 and Windows Phone 7, as the solution to fit their needs.

By using the Windows Azure cloud services and integrated development environment, T-Mobile was able to create and launch Family Room in just six weeks. Using a cloud-based computing environment has also cut down on management duties and freed up T-Mobile's developers to spend more time creating new features.

"I'm confident that as we move forward and implement richer feature sets, Windows Phone 7 and the Windows Azure platform will be able to scale and support a lot of capabilities," said Joshua Lipe, product manager of Devices, T-Mobile.

Xerox

As an industry leader in managed print services (MPS), Xerox expanded its MPS capabilities in 2010 by introducing Xerox Mobile Print, a solution that allows its workers to send documents to any printer in the company from a handheld device, such as their smartphone.

Xerox has since expanded on that concept to create Xerox Cloud Print, a cloud-based printing service that allows end users to route a printing job to any available public printer directly from their mobile devices. Now, when employees are traveling on business, they can do things like quickly print a presentation for tomorrow's client meeting without having to spend countless hours at a local hotel business center. By running Windows Azure and SQL Azure, developers were able to build Xerox Cloud Print in just four months.



Steve Yi posted Real World SQL Azure: Interview with Eugene Shustef, Chief Engineer, Global Document Outsourcing, Xerox to the SQL Azure Team blog on 2/1/2011:

As part of the Real World SQL Azure series, we recently talked to Eugene Shustef, Chief Engineer in the Global Document Outsourcing group at Xerox, about using Microsoft SQL Azure to build an innovative new printing service for mobile workers. Let's listen in.

MSDN: What exactly have you created with SQL Azure?

Shustef: We've created a service called Xerox Cloud Print that allows mobile workers to print to any available public printer from their smartphones, iPads, and other mobile devices.

MSDN: Wow. I could have used that earlier today!

Shustef: That's what we hear constantly from our customers. Thousands of their employees are on the move all the time. Some don't even have permanent offices. Their smartphones are their primary computing devices, because smartphones have gotten really ... well, smart. About the only thing they don't come with is a print button.

MSDN: So you decided to add one.

Shustef: You bet. Who's better qualified than Xerox, the global leader in printing, to give you a way to print from your phone?

MSDN: How does it work?

Shustef: Your mobile phone manufacturer or wireless provider would provide Xerox Cloud Print as an app on your phone. When you get an email attachment that you want to print, you just pop open a dialog box that lets you locate a nearby printer, perhaps at a copy shop, and you route your print job there and pick it up later.

Xerox Cloud Print gives mobile workers a way to locate and print to nearby printers by routing print jobs through the Windows Azure platform.

MSDN: So where does SQL Azure come in?

Shustef: We use SQL Azure to store the user account information, print-job information, device information, and other related data. We also store the actual print files in Windows Azure Blob storage.

MSDN: Why is SQL Azure a good fit for this application?

Shustef: SQL Azure provides multitenancy, dynamic scalability, and cost effectiveness. We can securely  stack data from multiple customers in a single SQL Azure instance, which makes it very cost effective. Also, our development staff is Microsoft trained. We used Microsoft SQL Server 2008 for the predecessor to Xerox Cloud Print, a private-cloud version of the service that enables phone-based printing from inside a corporate firewall. We wanted to reuse that SQL Server development investment, and SQL Azure provided the most economical way to do so.

MSDN: Is Xerox going to do more with SQL Azure and the Windows Azure platform?

Shustef: Absolutely. Our customers demand continuous innovation, but innovation is expensive. You have to try things that may not pan out and scale quickly if an idea takes off. Building a physical infrastructure for development and deploying new ideas is cost-prohibitive. Doing development, testing, staging, and hosting in the cloud removes all those upfront costs; it lowers the barrier to innovation.

MSDN: What about the sliding payment scale?

Shustef: Oh definitely, another great benefit. With SQL Azure, we pay only for the storage resources we use; not a cent more. If 5,000 customers sign up for Xerox Cloud Print tomorrow, we just click a few buttons and increase the number of SQL Azure instances. If they all leave in droves, then we can scale back.

MSDN: You can't beat that kind of efficiency.

Shustef: You certainly can't. The Windows Azure platform makes good business sense.

Read the full story at:
www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000008986

To read more about SQL Azure, visit:
www.sqlazure.com


<Return to section navigation list> 

Visual Studio LightSwitch

Robert Green (@rogreen_ms) posted Extending A LightSwitch Application with SharePoint Data on 2/1/2011 to his Adventures in DeveloperLand blog:

In my previous posts, I have discussed my demo application that a training company's sales rep can use to manage customers and the courses they order. The application includes both remote data and local data. The remote data consists of the Courses, Customers and Orders tables in the TrainingCourses SQL Server database. The local data consists of the Visits and CustomerNotes tables in a local SQL Server Express database.

In this post I want to extend the application by adding SharePoint data.

Connect to SharePoint

Consider the following fairly typical scenario. The enterprise maintains the SQL Server database but a department wants to track additional information that is related to the information in the database. Rather than ask the database owner to add a table to the database, the department uses SharePoint instead. The department can quickly create a SharePoint site and start using it right away.

I want to show this scenario in my sample application. The Courses table in the SQL Server database contains information on courses, including name, type (online or instructor led), price, date of release and date of retirement. I want to maintain a knowledge base, which will contain tips and tricks on each course. So I created a SharePoint site and added two lists to it. The Courses list is a simple list of course names. Then I created a KB list with columns for Title, Course and Notes. The Course column in the KB list is linked to the Title column in the Courses list.

[Screenshot: the KB list in SharePoint]

There can be multiple KB entries for each course and I want to be able to view and add/edit/delete them in the LightSwitch application.

In the Solution Explorer, I can right-click on Data Sources and select Add Data Source. In the Attach Data Source Wizard, I can select SharePoint.

Figure02

I then enter the SharePoint site and specify my login credentials.

Figure03

I then specify the list I want to use (KB in this example).

Figure04

When I click Finish I am informed that two additional lists will be imported. Courses is included because the KB list is linked to the Courses list. UserInformationList is included regardless of what list I select. We’ll see why shortly.

Figure05

I click Continue and I see a new data source node in the Solution Explorer. ApplicationData represents the data I am storing locally in SQL Server Express. TrainingCoursesData represents the data in the SQL Server database. SPTrainingCoursesData represents the SharePoint data.

Figure06

The SQL Server database has a Courses table and the SharePoint site has a Courses list. I can’t have two entities both named Course, so I renamed the SharePoint entity to SPCourse.

In the Entity Designer I see the one-to-many relationship between the SharePoint Courses and KBs lists. I also see that UserInformationList was imported so that I could see the name of the person who created and modified list items.

Figure07

The reason I added the Courses list to the SharePoint site was so that I didn’t have to retype course names every time I added a KB item. So on SharePoint I need and use the relationship between the Courses and KB lists. But in the LightSwitch application, course information comes from the SQL Server Courses table. What I want is a one-to-many relationship between Courses in SQL Server and KBs in SharePoint. I will never tire of pointing out the ability of LightSwitch to create relationships across data sources. This is a killer feature.

Figure08

Now that I have the relationship established, I can create the CourseDetail screen to display both courses and KBs. This is delightfully simple in LightSwitch. I just create a Detail screen selecting Course as the screen data and ask to include KBs. The bottom row of the screen contains a DataGrid that will display KBs. By default, all of the columns in the list will appear. I will delete all of them except KB and Notes.

Figure09

Now when I run the application and select a course, I see not only the course details but also the KBs. And if I select a KB, I see the details on it.

Figure10

Figure11

I can also add, edit or delete KBs by clicking buttons in the Command Bar on the CourseDetail page. Before I deploy this application to real live users I will have to create my own screen for this. The users don’t need to see Content Type, SPCourse, CreatedBy or ModifiedBy.

Figure12

When I click Save, the KB is saved to SharePoint. So cool!!

Figure13

This is admittedly a pretty simple scenario. I will dive into this in a bit more depth in the future.


Arthur Vickers continued his Entity Framework v4 DbContext CTP series with Using DbContext in EF Feature CTP5 Part 7: Local Data on 2/1/2011:

Introduction

In December we released ADO.NET Entity Framework Feature Community Technology Preview 5 (CTP5). In addition to the Code First approach this CTP also contains a preview of a new API that provides a more productive surface for working with the Entity Framework. This API is based on the DbContext class and can be used with the Code First, Database First, and Model First approaches.

This is the seventh post of a twelve part series containing patterns and code fragments showing how features of the new API can be used. Part 1 of the series contains an overview of the topics covered together with a Code First model that is used in the code fragments of this post.

The posts in this series do not contain complete walkthroughs. If you haven’t used CTP5 before then you should read Part 1 of this series and also Code First Walkthrough or Model and Database First with DbContext before tackling this post.

Using Local to look at local data

The Local property of DbSet provides simple access to the entities of the set that are currently being tracked by the context and have not been marked as Deleted. Accessing the Local property never causes a query to be sent to the database. This means that it is usually used after a query has already been performed. The Load extension method can be used to execute a query so that the context tracks the results. For example:

using (var context = new UnicornsContext())
{
    // Load all unicorns from the database into the context
    context.Unicorns.Load();

    // Add a new unicorn to the context
    context.Unicorns.Add(new Unicorn { Name = "Linqy" });

    // Mark one of the existing unicorns as Deleted
    context.Unicorns.Remove(context.Unicorns.Find(1));

    // Loop over the unicorns in the context.
    Console.WriteLine("In Local: ");
    foreach (var unicorn in context.Unicorns.Local)
    {
        Console.WriteLine("Found {0}: {1} with state {2}",
                          unicorn.Id, unicorn.Name, 
context.Entry(unicorn).State); } // Perform a query against the database. Console.WriteLine("\nIn DbSet query: "); foreach (var unicorn in context.Unicorns) { Console.WriteLine("Found {0}: {1} with state {2}", unicorn.Id, unicorn.Name,
context.Entry(unicorn).State); } }

Using the data set by the initializer defined in Part 1 of this series, running the code above will print out:

In Local:
Found 0: Linqy with state Added
Found 2: Silly with state Unchanged
Found 3: Beepy with state Unchanged
Found 4: Creepy with state Unchanged

In DbSet query:
Found 1: Binky with state Deleted
Found 2: Silly with state Unchanged
Found 3: Beepy with state Unchanged
Found 4: Creepy with state Unchanged

This illustrates three points:

  • The new unicorn Linqy is included in the Local collection even though it has not yet been saved to the database. Linqy has a primary key of zero because the database has not yet generated a real key for the entity.
  • The unicorn Binky is not included in the local collection even though it is still being tracked by the context. This is because we removed Binky from the DbSet thereby marking it as deleted.
  • When DbSet is used to perform a query the entity marked for deletion (Binky) is included in the results and the new entity (Linqy) that has not yet been saved to the database is not included in the results. This is because DbSet is performing a query against the database and the results returned always reflect what is in the database.
Using Local to add and remove entities from the context

The Local property on DbSet returns an ObservableCollection with events hooked up such that it stays in sync with the contents of the context. This means that entities can be added or removed from either the Local collection or the DbSet. It also means that queries that bring new entities into the context will result in the Local collection being updated with those entities. For example:

using (var context = new UnicornsContext())
{
    // Load some unicorns from the database into the context
    context.Unicorns.Where(u => u.Name.StartsWith("B")).Load(); 

    // Get the local collection and make some changes to it
    var localUnicorns = context.Unicorns.Local;
    localUnicorns.Add(new Unicorn { Name = "Linqy" });
    localUnicorns.Remove(context.Unicorns.Find(1)); 

    // Loop over the unicorns in the context.
    Console.WriteLine("In Local: ");
    foreach (var unicorn in context.Unicorns.Local)
    {
        Console.WriteLine("Found {0}: {1} with state {2}",
                          unicorn.Id, unicorn.Name,
context.Entry(unicorn).State); } var unicorn1 = context.Unicorns.Find(1); Console.WriteLine("State of unicorn 1: {0} is {1}", unicorn1.Name, context.Entry(unicorn1).State); // Query some more unicorns from the database context.Unicorns.Where(u => u.Name.EndsWith("py")).Load(); // Loop over the unicorns in the context again. Console.WriteLine("\nIn Local after query: "); foreach (var unicorn in context.Unicorns.Local) { Console.WriteLine("Found {0}: {1} with state {2}", unicorn.Id, unicorn.Name,
context.Entry(unicorn).State); } }

Using the data set by the initializer defined in Part 1 of this series, running the code above will print out:

In Local:
Found 3: Beepy with state Unchanged
Found 0: Linqy with state Added
State of unicorn 1: Binky is Deleted

In Local after query:
Found 3: Beepy with state Unchanged
Found 0: Linqy with state Added
Found 4: Creepy with state Unchanged

This illustrates three points:

  • The new unicorn Linqy that was added to the Local collection becomes tracked by the context in the Added state. It will therefore be inserted into the database when SaveChanges is called.
  • The unicorn that was removed from the Local collection (Binky) is now marked as deleted in the context. It will therefore be deleted from the database when SaveChanges is called.
  • The additional unicorn (Creepy) loaded into the context with the second query is automatically added to the Local collection.
One final thing to note about Local is that because it is an ObservableCollection, performance is not great for large numbers of entities. Therefore if you are dealing with thousands of entities in your context it may not be advisable to use Local.
Using Local for WPF data binding

The Local property on DbSet can be used directly for data binding in a WPF application because it is an instance of ObservableCollection. As described in the previous sections this means that it will automatically stay in sync with the contents of the context and the contents of the context will automatically stay in sync with it. Note that you do need to pre-populate the Local collection with data for there to be anything to bind to since Local never causes a database query.

This is not an appropriate post for a full WPF data binding sample but the key elements are:

  • Setup a binding source
  • Bind it to the Local property of your set
  • Populate Local using a query to the database.

We will put up a separate post on the ADO.NET team blog describing how to do this in detail.
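Until then, here is a minimal code-behind sketch of that pattern (the window name, the unicornsViewSource resource and the event wiring are my assumptions, not part of the CTP5 API):

public partial class MainWindow : Window
{
    // Keep the context alive for the lifetime of the window so that
    // Local stays in sync with the entities being tracked.
    private readonly UnicornsContext _context = new UnicornsContext();

    public MainWindow()
    {
        InitializeComponent();
        Loaded += MainWindow_Loaded;
    }

    private void MainWindow_Loaded(object sender, RoutedEventArgs e)
    {
        // Run a query so the context is tracking some unicorns, then
        // point the CollectionViewSource declared in XAML at Local.
        _context.Unicorns.Load();
        var viewSource = (CollectionViewSource)FindResource("unicornsViewSource");
        viewSource.Source = _context.Unicorns.Local;
    }

    protected override void OnClosed(EventArgs e)
    {
        _context.Dispose();
        base.OnClosed(e);
    }
}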

WPF binding to navigation properties

If you are doing master/detail data binding you may want to bind the detail view to a navigation property of one of your entities. An easy way to make this work is to use an ObservableCollection for the navigation property. For example:

public class Princess
{
    private readonly ObservableCollection<Unicorn> _unicorns =
        new ObservableCollection<Unicorn>();

    public int Id { get; set; }
    public string Name { get; set; }

    public virtual ObservableCollection<Unicorn> Unicorns
    {
        get { return _unicorns; }
    }
}

We will put up a separate post on the ADO.NET team blog describing how you would then use this class for WPF binding.

Using Local to clean up entities in SaveChanges

In most cases entities removed from a navigation property will not be automatically marked as deleted in the context. For example, if you remove a Unicorn object from the Princess.Unicorns collection then that unicorn will not be automatically deleted when SaveChanges is called. If you need it to be deleted then you may need to find these dangling entities and mark them as deleted before calling SaveChanges or as part of an overridden SaveChanges. For example:

public override int SaveChanges()
{
    foreach (var unicorn in this.Unicorns.Local.ToList())
    {
        if (unicorn.Princess == null)
        {
            this.Unicorns.Remove(unicorn);
        }
    }

    return base.SaveChanges();
}

The code above uses LINQ to Objects against the Local collection to find all unicorns and marks any that do not have a Princess reference as deleted. The ToList call is required because otherwise the collection will be modified by the Remove call while it is being enumerated. In most other situations you can do LINQ to Objects directly against the Local property without using ToList first.

Using Local and ToBindingList for Windows Forms data binding

Windows Forms does not support full fidelity data binding using ObservableCollection directly. However, you can still use the DbSet Local property for data binding to get all the benefits described in the previous sections. This is achieved through the ToBindingList extension method which creates an IBindingList implementation backed by the Local ObservableCollection.

This is not an appropriate post for a full Windows Forms data binding sample but the key elements are:

  • Setup an object binding source
  • Bind it to the Local property of your set using Local.ToBindingList()
  • Populate Local using a query to the database

We will put up a separate post on the ADO.NET team blog describing how to do this in detail.
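Until then, a rough sketch of the Windows Forms wiring (assuming a BindingSource component named unicornsBindingSource has been dropped on the form and the Load and FormClosed handlers are wired up in the designer) might look like this:

public partial class MainForm : Form
{
    private readonly UnicornsContext _context = new UnicornsContext();

    private void MainForm_Load(object sender, EventArgs e)
    {
        // Bring some entities into the context, then hand Windows Forms an
        // IBindingList backed by the Local ObservableCollection.
        _context.Unicorns.Load();
        unicornsBindingSource.DataSource = _context.Unicorns.Local.ToBindingList();
    }

    private void MainForm_FormClosed(object sender, FormClosedEventArgs e)
    {
        _context.Dispose();
    }
}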

Getting detailed information about tracked entities

Many of the examples in this series use the Entry method to return a DbEntityEntry instance for an entity. This entry object then acts as the starting point for gathering information about the entity such as its current state, as well as for performing operations on the entity such as explicitly loading a related entity.

The Entries methods return DbEntityEntry objects for many or all entities being tracked by the context. This allows you to gather information or perform operations on many entities rather than just a single entry. For example:

using (var context = new UnicornsContext())
{
    // Load some entities into the context
    context.Unicorns.Include(u => u.Princess.LadiesInWaiting).Load();

    // Make some changes
    context.Unicorns.Add(new Unicorn { Name = "Linqy" });
    context.Unicorns.Remove(context.Unicorns.Find(1));
    context.Princesses.Local.First().Name = "Belle";
    context.LadiesInWaiting.Local.First().Title = "Special";

    // Look at the state of all entities in the context
    Console.WriteLine("All tracked entities: ");
    foreach (var entry in context.ChangeTracker.Entries())
    {
        Console.WriteLine("Found entity of type {0} with state {1}",
                    ObjectContext.GetObjectType(entry.Entity.GetType()).Name,
                    entry.State);
    }

    // Find modified entities of any type
    Console.WriteLine("\nAll modified entities: ");
    foreach (var entry in context.ChangeTracker.Entries()
                              .Where(e => e.State == EntityState.Modified))
    {
        Console.WriteLine("Found entity of type {0} with state {1}",
                    ObjectContext.GetObjectType(entry.Entity.GetType()).Name,
                    entry.State);
    }

    // Get some information about just the tracked princesses
    Console.WriteLine("\nTracked princesses: ");
    foreach (var entry in context.ChangeTracker.Entries<Princess>())
    {
        Console.WriteLine("Found Princess {0}: {1} with original Name {2}",
                          entry.Entity.Id, entry.Entity.Name,
                          entry.Property(p => p.Name).OriginalValue);
    }

    // Find any person (lady or princess) whose name starts with 'S'
    Console.WriteLine("\nPeople starting with 'S': ");
    foreach (var entry in context.ChangeTracker.Entries<IPerson>()
                              .Where(p => p.Entity.Name.StartsWith("S")))
    {
        Console.WriteLine("Found Person {0}", entry.Entity.Name);
    }
}

Using the data set by the initializer defined in Part 1 of this series, running the code above will print out:

All tracked entities:
Found entity of type Unicorn with state Added
Found entity of type Princess with state Modified
Found entity of type LadyInWaiting with state Modified
Found entity of type Unicorn with state Deleted
Found entity of type Unicorn with state Unchanged
Found entity of type Unicorn with state Unchanged
Found entity of type Princess with state Unchanged
Found entity of type LadyInWaiting with state Unchanged
Found entity of type Unicorn with state Unchanged
Found entity of type Princess with state Unchanged
Found entity of type LadyInWaiting with state Unchanged

All modified entities:
Found entity of type Princess with state Modified
Found entity of type LadyInWaiting with state Modified

Tracked princesses:
Found Princess 1: Belle with original Name Cinderella
Found Princess 2: Sleeping Beauty with original Name Sleeping Beauty
Found Princess 3: Snow White with original Name Snow White

People starting with 'S':
Found Person Special Lettice
Found Person Sleeping Beauty
Found Person Snow White

These examples illustrate several points:

  • The Entries methods return entries for entities in all states, including Deleted. Compare this to Local which excludes Deleted entities.
  • Entries for all entity types are returned when the non-generic Entries method is used. When the generic entries method is used entries are only returned for entities that are instances of the generic type. This was used above to get entries for all princesses. It was also used to get entries for all entities that implement IPerson. This demonstrates that the generic type does not have to be an actual entity type.
  • LINQ to Objects can be used to filter the results returned. This was used above to find entities of any type as long as they are modified. It was also used to find IPeople with a Name property starting with ‘S’.

Note that DbEntityEntry instances always contain a non-null Entity. Relationship entries and stub entries are not represented as DbEntityEntry instances so there is no need to filter for these.

Summary

In this part of the series we looked at how the Local property of a DbSet can be used to look at the local collection of entities in that set and how this local collection stays in sync with the context. We also touched on how Local can be used for data binding. Finally, we looked at the Entries methods, which provide detailed information about tracked entities.

As always we would love to hear any feedback you have by commenting on this blog post.

For support please use the Entity Framework Pre-Release Forum.

Arthur is a Developer on the ADO.NET Entity Framework Team


<Return to section navigation list> 

Windows Azure Infrastructure

Sharon Pian Chan (@sharonpianchan) reported Azure now has 31,000 customers in a 2/1/2011 article for the Seattle Times’ Business/Technology blog:

Microsoft's cloud computing platform Azure is a year old today, and Microsoft says it now has 31,000 customers. The new number is a 55 percent increase from the 20,000 customers Microsoft said Azure had in July.

Observers are watching Azure closely as a sign of how fast companies are moving to cloud computing. Microsoft is competing with IBM, Amazon.com, Salesforce.com, Google and others to get businesses to build applications and store data in remote data centers.

Microsoft highlighted a few different customers: travel website Travelocity has moved its analytics tracking to Azure, Xerox has built a cloud printing service on Azure, and T-Mobile USA built its Family Room application on Windows Phone 7 on Azure.

Last year, the companies that jumped into Azure were start-ups that wanted to outsource server infrastructure and niche companies with large computational needs. It appears that larger companies are now experimenting with building new projects in the cloud.

"One of the things we’re seeing is cloud computing is moving away from a buzz word to chest beating to something with real business benefits," said Amy Barzdukas, general manager in the Server and Tools business at Microsoft.

Barzdukas also said the discussion is now less about the efficiency and reduced costs that result from having a cloud provider take over server infrastructure and more about using cloud computing to create new applications.

"Remember when we had that Internet [first] coming through? I was working in the marketing and we were really, really happy because we could post the catalog on the Web instead of mailing it," Barzdukas said. "It was only once we had it up that we realized you could do e-commerce and things you could never think about. That’s what we’re seeing with the cloud."

T-Mobile's Family Room application is a social network for family members. It has a chalkboard for members to share messages with each other, a shared calendar and a shared photo space. Matchbox Mobile built the app for T-Mobile.

"We wanted to bring smaller groups of people together, in contrast with some of the social sites out there, which are basically trying to drink from a firehose and aren’t as private as a lot of people are comfortable with," said Josh Lipe, product manager for T-Mobile. The application is preloaded on every Windows Phone 7 sold through T-Mobile.

Lipe said the company went with Azure because it was faster to develop on and it had more features for development.

The main one was time frame and development cost. "We looked at Azure as being able to quickly integrate with development we were performing on the application" with Visual Studio, Lipe said. "The development kits for the other cloud services weren’t as well baked with nearly the options for development."

Also, Lipe liked that Microsoft guaranteed the platform wouldn't change for T-Mobile for 12 months, so he knew the application would not need a major update for a year. It took two months to build the application, Lipe said. "There’s no doubt in my mind that this was fastest implementation we would have put together," he said.

Lipe said Windows Phone 7 is "doing well" for T-Mobile. "We’re continuing to look at options with Microsoft how to move the platform forward. The customers are very satisfied with the experience. We’ve done well with the devices that we sold."


David Linthicum told The Truth behind Standards, SOA, and Cloud Computing in a post to ebizQ’s Where SOA Meets Cloud blog:

Many in the world of cloud computing consider it a new space that needs new standards. The fact is, most of the standards we've worked on in the world of SOA over the past several years are applicable to the world of cloud computing. Cloud computing is simply a change in platform, and the existing architectural standards we leverage should transfer nicely to the cloud computing space. You can consider SOA as something you do, and cloud computing as a place to do it.

Standards are a double-edged sword. They clearly provide some value by protecting you from vendor-specific standards (in this case, from cloud lock-in). However, they can delay things as enterprise IT shops wait for the standards to emerge. Moreover, standards may not live up to expectations when they do arrive, and may not provide the anticipated value.

Standards should be driven by existing technologies, rather than by trying to define new standards approaches for new technologies. While the latter does occasionally work, more often it leads to design-by-committee and poor technology. Past failures around standards should make this less of an issue in the world of cloud computing.

So, when considering SOA and cloud computing standards, take a few things into consideration:

• Standards should be driven by three or more technology vendors that actually plan to employ the standard. Watch out for standards that include just one vendor and many consulting organizations, or that are driven purely by marketing.

• Standards should be well-defined. This means the devil is in the details, and a true standard should be defined in detail all the way down to the code level. Conceptual standards that are nothing but white papers are worthless.

• Standards should be in wide use. This means that many projects leverage this standard and the technology that uses the standard, and they are successful with both. In many instances you'll find that standards are still concepts, and not yet leveraged by technology consumers.

• Standards should be driven by the end users, not the vendors. At least, that's the way it should be in a perfect world. While the vendors may have had a hand in creating the standards, the consumers of the technology should be the ones driving the definition and direction. Standards that are defined and maintained by vendors often fail to capture the hearts and minds, while standards maintained by technology consumers typically provide more value for the end user and thus live a longer life.


Jay Fry (@jayfry3) wrote More about the ‘people side’ of cloud: good-bye wizards, hello wild ducks and T-shaped skills on 1/31/2011:

One of the problems that has dragged down the transformational ideas around cloud computing is the impact of this new operating model on organizations and the actual people inside them. I noticed this back in my days at Cassatt before the CA acquisition. The Management Insight survey I wrote about a few weeks back pointed this out. And, if you’ve been following this blog, I’ve mentioned it a couple other times over the past few months as the hard part of this major transition in IT.

While the technical challenges of cloud computing aren’t simple, the path to solve them is pretty straightforward. With the people side of things…not so much.

However, I’m optimistic about a new trend I've noticed throughout the first month of 2011: much of the recent industry discussion is actually talking through the people issues and starting to think more directly about how cloud computing will affect the IT staff.

Now, maybe that’s just beginning-of-the-year clarity of thought. Or people sticking to some sort of resolutions.

(I have a friend, for example, who swears off alcohol for 31 days every January 1st. He does it to make up for the overly festive holiday season that has been in high-gear the previous month. But I think he also uses it as an attempt to start the year out focusing himself on the things he thinks he should be doing, rather than having an extra G&T. And it usually works. For a while, anyway.)

Whatever the reason, I thought it was worth highlighting interesting commentary that’s recently appeared in the hopes of extending (and building on) the discussion about the human impact of cloud computing.

It’s got to be you

Back in December, the folks on-stage at the Gartner Data Center Conference had a bunch of relevant commentary on IT and its role in this cloudy transition.

Gartner analyst Tom Bittman noted in his keynote that for cloud computing, IT needs to focus. The IT folks are the ones that “should become the trusted arbiter and broker of cloud in your organization.” He saw evangelism in the future of every IT person in his audience from the sounds of it. “Who is going to be the organization to tell the business that there is a new opportunity based on cloud computing? That’s got to be you.” Are people ready for that level of commitment? We’ll find out. With great power comes great responsibility, right?

Automation & staffing levels

With cloud also comes an increasing reliance on automation. That hands IT people a big reason to push back on cloud computing right there. As Gartner analyst Ronni Colville said in one of her sessions, “the same people who write the automation are the ones whose jobs are going to change.”

In another session, Disney Interactive Media Group’s CTO Bud Albers talked about how the company’s cloud-based approach to internal IT has impacted staffing levels. “No lay-offs,” he said, “but you get no more people.” That means each person you do have (and keep) is going to have to be sharpening their skills toward this transition.

T-shaped business skills

In the era of cloud computing, then, what do you want that staff to be able to do?

Gartner’s Dave Cappuccio talked about creating a set of skills that are “T-shaped” – deep in a few areas, but having broad capabilities across how the business actually works. He believes that “technology depth is important, but not as key as business breadth.” The biggest value to the business, he said, is this breadth.

So that means even more new IT titles from cloud computing

To get to what Cappuccio is proposing, organizations are going to have to create some new roles in IT, especially focusing on the business angle. A few posts ago, I rattled off a whole list of new titles that will be needed as organizations move to cloud computing. Last week, Bernard Golden's CIO.com post, "Cloud CIO: How Cloud Computing Changes IT Staffs," made some insightful suggestions (and some, I’m happy to say, matched mine). He noted the rising importance of the enterprise architect, as well as an emphasis on operations personnel who can deal with being more "hands-off." He saw legal and regulatory roles joining cloud-focused IT teams and security designers needing to handle the "deperimeterization" of the data center. And IT financial analysts become more important in making decisions.

Gartner’s Donna Scott agreed with that last one in her session on the same topic at the Gartner Data Center Conference. She believed that the new, evolved roles that would be needed would include IT financial/costing analysts. She also called out solution architects, automation specialists, service owners, and cloud capacity managers.

Wild ducks & the correct number of pizzas

So what personalities do you need to look for to fill those titles?

At the same conference, Cameron Haight discussed how to organize teams and whom to assign to them. “If you have that ‘wild duck’ in your IT shop, they’re dangerous. But they are the ones who innovate,” he said.

Haight noted that the hierarchical and inflexible set up of a traditional IT org just won’t work for cloud. What’s needed? Flatter and smaller. “Encourage the ‘wild duck’ mentality,” said Haight. “Encourage critical thinking skills and challenge conventional wisdom” in individuals.

As for organizing, “use 2-pizza teams,” he suggested, meaning groups should be no larger than 2 pizzas would feed. (He left the choice of toppings up to us, thankfully.) Groups then should support a service in its entirety by themselves. Haight believes this drives autonomy, cohesiveness, ownership, and will help infrastructure and operations become more like developers, lessening the “velocity mismatch” between agile development and slow and methodical operations teams.

To take this even farther, take a look at the Forrester write-up called "BT 2020: IT's Future in the Empowered Era." Analysts Alex Cullen and James Staten talk about a completely new mindset that's needed for IT (or, as they call it, BT – business technology) by 2020. Why? Your most important customers today won't be so important then, those new customers will want things IT doesn't yet provide, and the cost of energy is going to destroy current operating models.

Other than that, how was the play, Mrs. Lincoln?

Moving past wizardry: getting real people onboard with real changes today

Getting the actual human beings in IT to absorb and actually be a part of all these changes is hard and has to be thought through.

At a recent cloud event we held, I interviewed Peter Green, CTO and founder of Agathon Group, a cloud service provider that uses CA 3Tera AppLogic (this clip has the interview; he talks about cloud and IT roles starting at 3 min., 55 sec.). These changes are the hardest, Green said, “where IT sees its role as protector rather than innovator. They tend to view their job as wizardry.” That’s not a good situation. Time to pull back the curtain.

Mazen Rawashdeh, eBay’s vice president of technology operations, noted onstage at the Gartner conference that he has found a really effective way to get everyone pointed the same direction through big changes like this. “The moment your team understands the ‘why’ and you keep the line of communications open, a lot of the challenges will go away.” So, communicate about what you're up against, what you're working on, and how you're attacking it. A lot.

Christian Reilly posted a blog last month that I thought was a perfect example of that eyes-wide-open attitude, despite the uncertainty that all of these shifts bring. Reilly (@reillyusa on Twitter) is in IT at a very large end-user organization dealing with cloud and automation directly.

“I am under no illusion,” Reilly posted, “that in the coming months (or years)…automation, in the guise of the much heralded public and private cloud services, will render large parts of my current role and responsibility defunct. I am under no illusion that futile attempts to keep hold of areas of scope, sets of repeatable tasks or, for that matter, the knowledge I’ve collected over the years will render me irreplaceable.

“Will I shed tears? Yes. But they will be tears of joy.”

A little over the top, sure, but he gets a gold star for attitude. Green of Agathon Group thinks the cloud is actually the opportunity to bring together things that have been too separate.

“Where I see a potential, at least,” said Green, “[is] for cloud computing to act as a common area where tech and management can start to converse a little bit better.”

So, as I said, January has given me a little hope that the industry is on a good path. Let’s hope this kind of discussion continues for the rest of the year.

Jay is Strategy VP for CA Technologies' cloud business. He joined CA via its Cassatt acquisition.


JP Morgenthal (@jpmorgenthal) asserted Just Because the Cloud Offers Scalability, Doesn’t Mean That You Automatically Inherit It in a 1/31/2011 post to his Tech Evangelist blog:

In reading Vivek Kundra’s “25 Point Implementation Plan to Reform Federal Information Technology Management”, I was struck by the anecdote regarding how a lack of scalability caused outages and, ultimately, delays in processing transactions on the Car Allowance Rebate System (CARS), more commonly known as Cash-for-Clunkers.  According to the document, the overwhelming response led to outages and service disruptions.  By contrast, a multimedia company offering users the ability to create professional-quality, TV-like videos and share them over the Internet scaled to meet demand that grew from 25,000 to 250,000 users in three days and reached a peak rate of 20,000 new users every hour.

The moral of the story is that the multimedia application was able to scale from 50 to 4,000 virtual machines as needed to meet demand because it was designed on a Cloud architecture.  While true, there is a very important piece of information missing from this anecdote, and its absence could lead some to believe that the Cloud offers inherent scalability.  The missing piece is that the system you design must be able to take advantage of the opportunity to scale just as much as the underlying platform must provide the facilities that support scaling.

In the comparison offered by Kundra, it’s clear that the system was appropriately designed to scale with the rapid growth in users.  For example, they may have had to add additional load balancers to distribute the load across an increased number of web servers.  If they relied on a database, perhaps it was clustered and more nodes were added to the cluster to support the increased number of transactions.  If the data was file-based, perhaps they were using a distributed file system, such as Hadoop, and they were able to add new nodes, dispersed geographically, to limit latency.  In each of these cases, it was the selection of the components and the manner in which they were integrated that facilitated the ability to scale, not some inherent "magic" of the Cloud Computing platform.
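To make that concrete, here is a small, self-contained C# sketch of one such design decision: a consistent-hash ring that an application can use to spread keys across storage or cache nodes, so that adding a node moves only a fraction of the data. It is purely illustrative and is not drawn from Kundra's report or from either system described above.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Toy consistent-hash ring: keys map to the same node run after run, and
// adding a node relocates only roughly 1/N of the keys instead of reshuffling them all.
class HashRing
{
    private readonly SortedDictionary<uint, string> ring = new SortedDictionary<uint, string>();

    public void AddNode(string node)
    {
        // Virtual nodes (replicas) smooth out the key distribution.
        for (int replica = 0; replica < 100; replica++)
            ring[Hash(node + ":" + replica)] = node;
    }

    public string NodeFor(string key)
    {
        uint h = Hash(key);
        foreach (var entry in ring)          // first ring position at or past the key's hash
            if (entry.Key >= h) return entry.Value;
        return ring.First().Value;           // wrap around to the start of the ring
    }

    private static uint Hash(string s)
    {
        using (var md5 = MD5.Create())
            return BitConverter.ToUInt32(md5.ComputeHash(Encoding.UTF8.GetBytes(s)), 0);
    }

    static void Main()
    {
        var partitioner = new HashRing();
        for (int i = 1; i <= 4; i++) partitioner.AddNode("node-" + i);

        Console.WriteLine(partitioner.NodeFor("transaction-12345"));   // stable assignment
        partitioner.AddNode("node-5");                                 // scale out
        Console.WriteLine(partitioner.NodeFor("transaction-12345"));   // most keys stay put
    }
}

The point is the same one Morgenthal makes: the partitioning scheme lives in the application's design, and the cloud merely supplies the extra nodes.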

It’s great that Kundra is putting forth a goal that the government needs to start seeking lower-cost alternatives to building data centers, but it’s also important to note that, according to this same document, many of today's government IT application initiatives are often behind schedule and fail to deliver the promised functionality.  Given these issues, it’s hard to believe that the systems will be appropriately designed to run in a Cloud architecture and scale accordingly.  The key point here is that in addition to recommending a “Cloud First” policy, the government needs to hire contractors and employees who understand the nuances of developing an application that can benefit from Cloud capabilities.  In this way, the real benefit of Cloud will be realized and the Cloud First policy will achieve its goals.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds


No significant articles today.


<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) asserted CloudPassage & Why Guest-Based Footprints Matter Even More For Cloud Security in a 2/1/2011 post:

Every day for the last week or so after their launch, I’ve been asked left and right whether I’d spoken to CloudPassage and what my opinion was of their offering.  In full disclosure, I spoke with them when they were in stealth almost a year ago and offered some guidance, and again the day before their launch last week.

Disappointing as it may be to some, this post isn’t really about my opinion of CloudPassage directly; it is, however, the reaffirmation of the deployment & delivery models for the security solution that CloudPassage has employed.  I’ll let you connect the dots…

Specifically, in public IaaS clouds, where homogeneity of packaging, standardization of images and uniformity of configuration enable scale, security has lagged.  This is mostly because, for a variety of reasons, security itself does not scale (well).

In an environment where the underlying platform cannot be counted upon to provide “hooks” to integrate security capabilities in at the “network” level, all that’s left is what lies inside the VM packaging:

  1. Harden and protect the operating system [and thus the stuff atop it,]
  2. Write secure applications and
  3. Enforce strict, policy-driven information-centric security.

My last presentation, “Cloudinomicon: Idempotent Infrastructure, Building Survivable Systems and Bringing Sexy Back to Information Centricity” addressed these very points. [This one is a version I delivered at the University of Michigan Security Summit]

If we focus on the first item in that list, you’ll notice that generally to effect policy in the guest, you must have a footprint on said guest — however thin — to provide the hooks that are needed to either directly effect policy or redirect back to some engine that offloads this functionality.  There’s a bit of marketing fluff associated with using the word “agentless” in many applications of this methodology today, but at some point, the endpoint needs some sort of “agent” to play.*

So that’s where we are today.  The abstraction offered by virtualized public IaaS cloud platforms is pushing us back to the guest-centric-based models of yesteryear.

This will bring challenges with scale, management, efficacy, policy convergence between physical and virtual and the overall API-driven telemetry driven by true cloud solutions.

You can read more about this in some of my other posts on the topic:

Finally, since I used them for eyeballs, please do take a look at CloudPassage — their first (free) offerings are based upon leveraging small footprint Linux agents and a cloud-based SaaS “grid” to provide vulnerability management, and firewall/zoning in public cloud environments.

/Hoff

* There are exceptions to this rule depending upon *what* you’re trying to do, such as anti-malware offload via a hypervisor API, but this is not generally available to date in public cloud.  This will, I hope, one day soon change.



<Return to section navigation list> 

Cloud Computing Events

Hany Mahmoud announced on 1/29/2011 that the online Riyadh Online Community Summit February 2011 will occur on 2/10 and 2/11/2011:

Microsoft Technical Communities in Saudi Arabia (Riyadh SharePoint User Group, DevLifeStyle, Windows Phone Middle East, Dynamix Ax Brains, Dynamics CRM Experts, SQLPath and others), MVPs and Community Leads are pleased to announce the Second Riyadh Online Community Summit, being held on February 10th and 11th, 2011.

Riyadh Online Community Summit February 2011

The event will cover a wide array of topics ranging from Dynamics CRM, Dynamics AX, XNA, and Client to Mobile Programming, Database Development and Administration, all the way to SharePoint Security and Systems Integration.

The event will take place on Thursday, February 10, 2011 at 11:00 AM – Friday, February 11, 2011 at 1:00 PM (GMT+0300).


Please note that this is an ONLINE TWO Day Event.


To register, click here.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

David Linthicum claimed “Verizon and other big telecom companies could find that getting to the cloud is a bit more difficult than just writing a big check” in a deck for his The Verizon-Terremark deal: An uncertain way into the cloud article of 2/1/2011 for InfoWorld’s Cloud Computing blog:

You may have heard that Verizon has bid $1.4 billion for "cloud provider" Terremark. You may ask, What is Terremark and why would Verizon want to buy it? Terremark is all about managed infrastructure services and cloud computing services delivered from 13 data centers in the United States, Europe, and Latin America -- that is, colocation and managed services, not cloud services in the sense of Amazon Web Services or Google Apps.

Like other managed services and colocation providers, Terremark has done a good deal of repositioning itself into the cloud computing space in the last few years. In fact, it's difficult to figure out where the traditional managed services business ends and the cloud begins. Still, Terremark has enough "cloud" in its business to attract a telco like Verizon that has an even less forward-looking business: provisioning communication infrastructure.

Verizon has the same problem as many other telecommunications giants: It has fat pipes and knows how to move data, but it doesn't know how to turn its big honking networks into big honking cloud computing offerings. Indeed, Verizon had some disappointing starts in the last year or so, showing both a lack of focus and sophistication around what cloud computing means to both enterprises and government.

Will the Terremark deal change all that? It's a step in the right direction, but Verizon has many more moves to make before it really gets into the cloud market.

What this acquisition will do is put a comparable price on other cloud computing providers, such as Rackspace, GoGrid, and Savvis, that are on the shopping lists of AT&T and international telecommunication providers, I'm sure. I suspect we'll see two or three more megadeals this year, as others follow Verizon's lead.

Most large telecommunication companies are just too slow to make it to the cloud themselves by the time the market peaks, so buying their way into a market is always a safer choice. However, I suspect many big telecom companies moving to the cloud will find there is more to cloud computing than providing managed services that are just renamed "cloud services." The level of sophistication in true cloud services will rise quickly in the next year or so, and those just pushing infrastructure connected to big networks will find they are a commodity.


Joe Arnold reported OpenStack Object Storage Moves Beyond Rackspace in a 2/1/2011 post to the CloudScaling blog:

Earlier this week, one of our clients, a Tier 1 ISP, launched an object storage cloud based on OpenStack, an open source compute and storage framework created by Rackspace and NASA. The new storage cloud is the first commercial OpenStack-based storage offering in the market after Rackspace itself, which is based on the same technology. Cloudscaling assisted in developing this solution for the new product, including hardware, networking, configuration, systems integration, monitoring and management.

To understand why this is important, let’s look at why object storage is different than block storage, as well as the cases where each excels.

Back in 2006, Amazon introduced a novel approach to storage when it launched S3 (Simple Storage Service). It was tuned for web workloads that could benefit from high amounts of read/write concurrency, such as media assets, application data and user-generated content.

To take advantage of cloud-based object stores, applications had to be architected to use them rather than rely on traditional block-based or file-based storage systems. This is because object stores are not traditional file systems. For example, they are not suitable for transactional databases or running operating systems. And they can’t be accessed like a traditional file system — it’s just objects accessed via an API.
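To make "accessed via an API" concrete, here is a hedged C# sketch of storing one object against a Swift-style endpoint (the kind OpenStack and Rackspace Cloud Files expose). The storage URL and auth token are placeholders you would receive from the provider's authentication call, and the container is assumed to already exist:

using System;
using System.IO;
using System.Net;
using System.Text;

// Minimal sketch: HTTP PUT of a single object into an existing container.
// storageUrl and authToken are placeholders obtained from a prior auth request.
class ObjectStoreUpload
{
    static void Main()
    {
        string storageUrl = "https://storage.example.com/v1/AUTH_account";  // placeholder
        string authToken  = "REPLACE_WITH_TOKEN";                           // placeholder

        byte[] payload = Encoding.UTF8.GetBytes("hello, object storage");

        var request = (HttpWebRequest)WebRequest.Create(storageUrl + "/my-container/hello.txt");
        request.Method = "PUT";
        request.Headers.Add("X-Auth-Token", authToken);
        request.ContentType = "text/plain";
        request.ContentLength = payload.Length;

        using (Stream body = request.GetRequestStream())
            body.Write(payload, 0, payload.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Status: {0}", (int)response.StatusCode);     // 201 Created on success
    }
}

There is no open, seek, or lock anywhere in that exchange: the whole object is written in one request and read back the same way, which is exactly the trade-off that buys the concurrency and durability described below.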

In exchange for these limitations, users received virtually unlimited storage, a huge and flat namespace, highly scalable read/write access, and the ability to serve the content directly to clients from the cloud storage system. These storage systems were also designed to be incredibly robust; Amazon claims 99.999999999% durability of files stored on S3.

As more and more applications began using cloud-based object storage systems over the past few years, there has been tremendous growth in tools, native language bindings, and application development frameworks that conform to these same object storage APIs, making it easier and easier for developers to work with these types of data stores. As a result of the better tooling, usage has dramatically increased.

Amazon’s own object storage service (S3) has posted impressive growth. The company states that S3 grew from 40 billion objects at the end of Q4 2008 to 102 billion in Q4 2009 and 262 billion in Q4 2010. Peak requests now top 200,000 per second. Another data point: Amazon’s S3 is used by 1.48% of all websites (according to BuiltWith). It’s clear that most developers needing large pools of data content would not consider using anything but an object store.

In the summer of 2010, Rackspace open-sourced a version of their object storage system as part of the larger OpenStack project. This meant that with some extra work, others could both use and build on the work that they had done. It’s possible to deliver object storage services based on the same software as Rackspace has in production at scale, taking advantage of the same design philosophies that S3 pioneered.

We at Cloudscaling are part of the growing community of designers who believe that clouds should be built the way Google and Amazon build them: with inexpensive but reliable commodity hardware, running open source software, and leveraging high levels of automation. These combine to create clouds that perform well, both technically and economically, at web scale.

This is the same philosophy that our client adopted for its object storage cloud. OpenStack’s Object Storage service was the foundation of a design that, when combined with the appropriate hardware, resulted in a complete top-to-bottom object store. It’s a powerful storage service that they can add to their many other service offerings.

We think that OpenStack has a very bright future ahead of it and we’re happy to be working to help build that future.


Lydia Leong (@cloudpundit) reported Time Warner Cable acquires NaviSite in a 2/1/2011 post to her Cloud Pundit blog:

The dominoes continue to fall: Time Warner Cable is acquiring NaviSite, for a total equity value of around $230 million. By contrast, Verizon bought Terremark at a valuation of $1.4 billion. Terremark had about $325m in 2010 revenues; NaviSite had about $120m. So given that the two companies do substantially similar things — NaviSite has largely shed its colocation business, but does managed hosting and cloud IaaS, as well as application hosting (which Terremark doesn’t do) — the deal values NaviSite much less richly, although it still represents a handsome premium to the price NaviSite was trading at.

The really fascinating aspect of this is that it’s not Time Warner Telecom that’s doing this. It’s Time Warner Cable. TWT would have made instant and obvious sense — TWT doesn’t have a significant hosting or cloud portfolio and it arguably needs one, as it competes directly with folks like AT&T and Verizon; most carriers of any scale and ambition are eventually going to be in this business. But TWC is, well, an MSO (multiple system operator, i.e., cable company). They do sell non-consumer network services, but mostly to the true SMB (and their press release says they’ll be targeting SMBs with NaviSite’s services). But NaviSite is more of a true mid-market play; they’ve shed more and more of their small business-oriented services over time, to the betterment of their product portfolio. NaviSite has services that best fit the mid-market and the enterprise, at this point.

NaviSite has been on a real upswing over the past year — Gartner Invest clients who have talked to me about investment opportunities in the space over the last year, know that I’ve pointed them out as a company worth taking a look at, thanks to the growing coherence of their strategy, and the quality of their cloud IaaS platform. This is a nice exit for shareholders, and I wish them well, but boy… bought by a cable company. That’s a fate much worse than being bought by a carrier. I can’t remember another major MSO having bought an enterprise-class hoster in the past, and that’s going to put TWC in interesting virgin territory. And there’s no mercy for NaviSite here — TWC has announced they’re getting folded in, not being run as a standalone subsidiary.

Lest it be said that I am negative on network service providers, I will point out that I have seen plenty of these acquisitions over the years. All too often, network service providers acquire hosters and then destroy them; the only one that I’ve seen that worked out really well was AT&T and USi, and that was largely because AT&T had the intelligence to place USi’s CEO in charge of their whole hosting business and let him reform it. Broadly, the NSPs usually do benefit from these acquisitions, but value is often destroyed in the process, primarily due to the cultural clashes and failure to really understand the business they’ve acquired and what made it successful.

I wonder why TWC didn’t buy someone like GoDaddy instead — rumor has long had it that GoDaddy’s been looking for a buyer. Their portfolio much more naturally suits the small business, and they’ve got a cloud IaaS offering about to launch publicly. It would seem like a much more natural match for the rest of TWC’s business. I suppose we’ll see how this plays out. [Emphasis added.]

(I expect that we’ll be issuing a formal First Take with advice for clients once we have a chance to talk to NaviSite and TWC about the acquisition.)

Want to bet GoDaddy’s IaaS offering will be a price leader?


Jeff Barr (@jeffbarr) posted Coming Soon - Oracle Database 11g on Amazon Relational Database Service to the Amazon Web Services blog on 1/31/2011:

As part of our continued effort to make AWS even more powerful and flexible, we are planning to support Oracle Database 11g Release 2 via the Amazon Relational Database Service (RDS) beginning in the second quarter of 2011.

Amazon RDS makes it easy for you to create, manage, and scale a relational database without having to worry about capital costs, hardware, operating systems, backup tapes, or patch levels. Thousands of developers are already using multiple versions of MySQL via RDS. The RDS tab of the AWS Management Console, the Command-Line tools, and the RDS APIs will all support the use of the Oracle Database as the "Database Engine" parameter.

As with today's MySQL offering, Amazon RDS running Oracle Database will reduce administrative overhead and expense by maintaining database software, taking continuous backups for point-in-time recovery, and exposing key operational metrics via Amazon CloudWatch. It will also allow compute and storage capacity to be scaled with a few clicks of the AWS Management Console. Concepts applicable to MySQL on RDS, including backup windows, DB Parameter Groups, and DB Security Groups, will also apply to Oracle Database on RDS.
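If the Oracle engine ends up being exposed the same way MySQL is today, provisioning from code should look something like this hedged sketch using the AWS SDK for .NET; the engine identifier ("oracle-se1"), instance class, and other values are assumptions, since Amazon hadn't published the actual parameters when this was written:

using Amazon.RDS;
using Amazon.RDS.Model;

// Hedged sketch only: submits a CreateDBInstance request via the RDS API.
// "oracle-se1" is a guessed engine name; the MySQL engine uses its own identifier.
class ProvisionOracleOnRds
{
    static void Main()
    {
        var rds = new AmazonRDSClient("ACCESS_KEY", "SECRET_KEY");   // placeholder credentials

        var request = new CreateDBInstanceRequest
        {
            DBInstanceIdentifier = "mydb",
            Engine               = "oracle-se1",   // assumed engine identifier
            DBInstanceClass      = "db.m1.large",
            AllocatedStorage     = 100,            // GB
            MasterUsername       = "admin",
            MasterUserPassword   = "REPLACE_ME"
        };

        rds.CreateDBInstance(request);
        System.Console.WriteLine("CreateDBInstance request submitted; watch the status in the console.");
    }
}

The same request shape is how MySQL DB Instances are created today, which is consistent with Amazon's point below that the user experience will be similar across engines.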

You will be able to pay for your use of Oracle Database 11g in several different ways:

If you don't have any licenses, you'll be able to pay for your use of Oracle Database 11g on an hourly basis without any up-front fees or mandatory long-term commitments. The hourly rate will depend on the Oracle Database edition and the DB Instance size. You will also be able to reduce your hourly rate by purchasing Reserved DB Instances.

If you have existing Oracle Database licenses you can bring them to the cloud and use them pursuant to Oracle licensing policies without paying any additional software license or support fees.

I think that this new offering and the flexible pricing models will be of interest to enterprises of all shapes and sizes, and I can't wait to see how it will be put to use.

You can learn more about our plans by visiting our new Oracle Database on Amazon RDS page. You'll be able to sign up to be notified when this new offering is available, and you'll also be able to request a briefing from an AWS associate.

In anticipation of this offering, you can visit the Amazon RDS page to learn more about the benefits of Amazon RDS and see how you can deploy a managed MySQL database today in minutes. Since the user experience of Amazon RDS will be similar across the MySQL and Oracle offerings, this is a great way to get started with Amazon RDS ahead of the forthcoming Oracle Database offering.

Hourly access prices weren’t available when this post was written (2/1/2011).


<Return to section navigation list> 
