Thursday, June 24, 2010

Windows Azure and Cloud Computing Posts for 6/24/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Kevin Kell claims Visual Studio 2010 – The Tools Keep Getting Better! and cites the new Azure Toolkit for Visual Studio 2010’s capabilities to browse Windows Azure storage in this 6/24/2010 post:

Whatever one says about Microsoft, you have to acknowledge that they do build good tools. The developer has always been at the heart of Microsoft’s business strategy and Microsoft has always delivered well thought out development tools.

Specifically, for the Azure developer, the latest release of the Azure Toolkit for Visual Studio offers some nice features.

For one, there is the ability to browse objects in Azure storage using the Server Explorer right from within Visual Studio.

Previously it was necessary to use a third-party tool (for example, Cloud Storage Studio from Cerebrata) to achieve this functionality. Now, it comes for “free” with the toolkit.

Also, it is now possible to deploy projects directly from Visual Studio into Windows Azure.

  • Is Windows Azure always the right choice? No!
  • Is Windows Azure better than or worse than other PaaS offerings? Well, it depends!
  • Is Windows Azure worth a look, particularly if you are already a .Net developer? You Betcha!
  • Come and get a real in-depth grounding in the fundamentals …
  • Come to Learning Tree’s new course – Windows Azure Platform Introduction: Programming Cloud-Based Applications to learn more!

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Steven Forte continues his series with Part IV, Using Telerik’s new LINQ implementation with WCF RIA Services Part IV: Making an OData feed from your Domain Service, on 6/24/2010:

Read the other posts in this series:

In the previous blog posts listed above, I showed how Telerik’s new LINQ implementation works with WCF RIA Services. I showed how to build your own Domain Service, build custom query methods, and make a metadata class. In this post I will show how to expose your Domain Service as an OData feed.

The Open Data Protocol (OData) is a Web protocol for querying and updating data in a RESTful fashion. You create OData feeds when you want to set up feeds for 3rd parties to consume, typically without your knowledge. For example, Twitter has a RESTful feed of all its public tweets, and many applications consume that feed.

You may use WCF RIA Services to create your application; however, you may want to expose parts of your application as a feed for others to consume. This is easy to do. Let’s see how.

I will continue using the same project from the first three parts of this blog series. In the server (Web) project you have to do three things. First, set a reference to System.ServiceModel.DomainServices.Hosting.OData.

Next we have to configure an OData endpoint. You do this by adding the following to your web.config under the system.serviceModel node:

<domainServices>
  <endpoints>
    <add name="OData"
         type="System.ServiceModel.DomainServices.Hosting.ODataEndpointFactory,
               System.ServiceModel.DomainServices.Hosting.OData,
               Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
  </endpoints>
</domainServices>

Lastly, you have to tell RIA Services which methods of your DomainService you want to expose. The methods you expose have to return an IQueryable, be parameterless (which means the query methods in Part II are ineligible), and be decorated with the [Query(IsDefault = true)] attribute. I will expose our GetCustomers() method from Part I as shown here by adding the attribute to the method:

//enable OData
[Query(IsDefault = true)]
public IQueryable<Customer> GetCustomers()
{
    return this.DataContext.Customers
        .Where(c => c.Country == "Germany")
        .OrderBy(c => c.CustomerID);
}

Now you can run your project and view the OData feed from a browser. The format of the URL is the namespace+typename for the DomainService with dots replaced by hyphens, followed by “.svc/odata/”. (Note: I have found that this is case sensitive and requires the terminating /.)

So for example, our namespace is SilverlightApplication6.Web and our Domain Service is DomainService1, so our URL would be http://servername/SilverlightApplication6-Web-DomainService1.svc/odata/
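That naming rule is mechanical enough to capture in a few lines. Here is a Python sketch (a hypothetical helper, not part of RIA Services) that derives the feed URL from the DomainService’s namespace and type name:

```python
def odata_feed_url(host, namespace, type_name):
    """Build the OData feed URL for a WCF RIA Services DomainService.

    Dots in the fully qualified type name become hyphens, and the path
    must end with the terminating slash after 'odata' (the route is
    case sensitive).
    """
    qualified = f"{namespace}.{type_name}".replace(".", "-")
    return f"http://{host}/{qualified}.svc/odata/"

# Reproduces the example URL from the post:
print(odata_feed_url("servername", "SilverlightApplication6.Web", "DomainService1"))
# http://servername/SilverlightApplication6-Web-DomainService1.svc/odata/
```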

Now let’s explore the OData feed. Being a RESTful service, you will access the feed and each resource via HTTP. The resource in this case will be the names of your Entities. What is great is that the OData feed respects the business rules of your RIA Service (since it is using the same DomainService), so you don’t have to worry about data leakage, nor duplicate any work replicating your business rules. Let’s drill down into the CustomerSet.



That is it. You can then consume the feed from an iPhone app, .NET application, Excel PowerPivot, or any other application that supports HTTP and XML (which is pretty much anything).
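To make that concrete, a minimal consumer needs nothing beyond an HTTP client and an XML parser. This Python sketch (the feed sample is invented for illustration) pulls entry titles out of an Atom-formatted OData response:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def entry_titles(atom_xml):
    """Return the <title> of each <entry> in an OData Atom feed."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(f"{ATOM_NS}title") for e in root.iter(f"{ATOM_NS}entry")]

# A trimmed, hypothetical sample of what a CustomerSet feed might look like:
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>CustomerSet</title>
  <entry><title>ALFKI</title></entry>
  <entry><title>BLAUS</title></entry>
</feed>"""
print(entry_titles(sample))  # ['ALFKI', 'BLAUS']
```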

Vitek Karas posted Data Services Expressions – Part 6 – Key lookup in this 6/16/2010 post:

Series: This post is the sixth part of the Data Services Expressions Series, which describes expressions generated by WCF Data Services.

We will be looking into navigations in the next couple of posts. To be able to do that, we first have to explain so-called key lookups. With a query to a given entity set, the service returns all the entities in that set. We can use filters and such to limit the entities returned, but such queries will always assume that the result may contain multiple items.

In order to specify a single entity, the query must somehow identify just one instance. In WCF Data Services each entity has key properties, which serve this purpose. By specifying values for all of the key properties, any given entity instance is uniquely identified. Key lookup is a query (or part of a query) which specifies the values for the key properties. So using key lookup we can ask for just one entity. A simple key lookup looks like this:

http://host/service.svc/Products(1)

The above query returns a single entity, or a 404 response if the specified product doesn’t exist. Our Product entity has just one key property, called ID, and thus it’s not necessary to specify the name of the key property. The query above asks for the product with its key property (ID) of value 1 (integer).

When such a query is processed and translated into an expression tree, WCF Data Services will actually generate a query which asks for all products with ID equal to 1. Once that query returns its results, the service will verify that only one result was returned. So the above URL gets translated into an expression tree like this one:

System.Collections.Generic.List`1[TypedService.Product].Where(element => (element.ID == 1))

It is pretty straightforward: it’s the query root followed by a filter which picks only products with ID equal to 1.

Note that we would get the exact same expression by running a URL query like:

http://host/service.svc/Products?$filter=ID eq 1

But the response for this query is different: it returns a list with a single item instead of the item itself directly.

Now let’s see what happens if the given entity has more than one key property. We will use an entity called Record which has two key properties: PartitionID, which is an integer, and RowID, which is a string. A query for a single record looks like this:


http://host/service.svc/Records(PartitionID=1,RowID='id0')

The expression tree which is generated for this query is a bit different from what we might expect. WCF Data Services will use a separate Where call for each key property it needs to filter the results on, so it looks like this:

System.Collections.Generic.List`1[TypedService.Record]
    .Where(element => (element.PartitionID == 1))
    .Where(element => (element.RowID == "id0"))

Note again that it is expected that such a query returns at most one result. If that’s not true, the service will fail with a 500 status code.
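The lookup semantics described above (one filter per key property, a 404 when nothing matches, a 500 when more than one entity comes back) can be mimicked in a short Python sketch. This is illustrative only, not the WCF Data Services implementation:

```python
def key_lookup(entities, **keys):
    """Filter by each key property separately (one Where per key),
    then verify the lookup identified at most a single entity."""
    results = entities
    for prop, value in keys.items():  # one .Where(...) per key property
        results = [e for e in results if e[prop] == value]
    if len(results) > 1:
        raise RuntimeError("500: key lookup returned more than one entity")
    if not results:
        return None  # the service would respond with a 404
    return results[0]

records = [
    {"PartitionID": 1, "RowID": "id0"},
    {"PartitionID": 1, "RowID": "id1"},
]
print(key_lookup(records, PartitionID=1, RowID="id0"))
# {'PartitionID': 1, 'RowID': 'id0'}
```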

And that’s it; key lookups are really not complicated. In the next post we will look at our first navigations over navigation properties.

Thanks to Marcelo Lopez Ruiz for the heads up about Vitek’s post in his Key lookup in WCF Data Services post of 6/23/2010:

Vitek has another great post in his series.

There are a couple of things that I think are worth calling out.

  1. The provider doesn't know if it needs to produce many items or just one. It can figure it out by looking at the expression and seeing if there's a key filter, of course, but generally it doesn't know. This is because the difference between /Customers(1) and /Customers?$filter=ID eq 1 affects the serialization more than anything else, when choosing to return a feed or an entry. The query should behave the same way, so it's only the runtime that keeps track of this information.
  2. Each key value in a compound-key entity is a separate Where operation (mostly). Logically these can be merged into a single Where operation with an And operator for each equality, and the results should be the same - the provider is free to do that of course. You would get the longer expression if someone queried for /Customers?$filter=partitionid eq 1 and rowid eq 2 instead of /Customers(partitionid=1,rowid=2).
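Marcelo's second point is easy to check mechanically. In this Python sketch (sample data invented), two chained Where-style filters and a single merged And filter produce the same result:

```python
records = [
    {"PartitionID": 1, "RowID": "id0"},
    {"PartitionID": 1, "RowID": "id1"},
    {"PartitionID": 2, "RowID": "id0"},
]

# Two separate Where operations, one per key property...
chained = [r for r in records if r["PartitionID"] == 1]
chained = [r for r in chained if r["RowID"] == "id0"]

# ...merged into a single Where with an And:
merged = [r for r in records if r["PartitionID"] == 1 and r["RowID"] == "id0"]

print(chained == merged, chained)  # True [{'PartitionID': 1, 'RowID': 'id0'}]
```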

Thankfully, most everyone will work with an existing provider like the ADO.NET Entity Framework, so you'll never have to worry about these details.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Vittorio Bertocci posted All Will Be Revealed: ~7 Hours Recordings from the WIF Workshops on 6/23/2010:

As promised, the recordings of the main sessions of the latest WIF Workshop are now available on Channel9!

The course starts from the very high level intro you’ve seen various times, and progresses into the deepest WIF training content we’ve ever published (that is, until this bad boy is finally on the shelves).

Shortly you can expect to see a new version of the Identity Developer Training Kit, which will include the slides used during the class and embedded players pointing to those very videos; together with the usual labs, that will truly be an event-in-a-box package you’ll be able to use if you want to redeliver a WIF workshop in your area!

WIF Workshop 1: Introduction to Claims-Based Identity and WIF

This session provides a light introduction to claims-based identity: the problems it solves, the canonical authentication scenario, key concepts and terminology.
The main Windows Identity Foundation API surface for non-security developers is introduced.

Following are the two workshops devoted to Windows Azure:

WIF Workshop 9: WIF and Windows Azure

The last session of the training covers the use of WIF in Windows Azure. After a quick introduction to Windows Azure and the infrastructural differences between web roles and on-premises deployment, the session provides practical advice on aspects of distributed development such as handling NLB sessions, certificate management, dealing with volatile application URIs, handling tracing, metadata generation considerations, and so on. The discussion covers both Web roles and WCF roles.

WIF Workshop 10: Lab about WIF and Windows Azure

The last lab of the workshop covers the use of WIF on Windows Azure, demonstrating in practice how to cope with NLB sessions, volatile application URI, dynamic configuration, metadata generation, tracing and so on.

Here are the remaining seven workshops:

WIF Workshop 2: Lab on Basic Web Sites

The first lab of the workshop offers an overview of what can be achieved when using WIF with Web sites: authentication externalization, integration with IsInRole and ASP.NET authorization, customization of the application via claims, claims-based authorization.
This video introduces the viewer to the lab format and gives some advice about lab execution.

WIF Workshop 3: Scenarios and Architecture I

In this session you will learn about the difference between IP-STS and FP-STS and how to choose where to put STSes in your architecture. You will learn about federation, home realm discovery and how to leverage the WIF extensibility model in order to handle multiple identity providers.

WIF Workshop 4: Scenarios and Architecture II

This short session explores the architectural implications of using claims for authorization purposes.

WIF Workshop 5: Lab about Web Sites and STS

The second lab of the workshop explores some of the patterns discussed in the previous section. One lab demonstrates how a generic web site can be enhanced with identity provider capabilities regardless of the authentication technology it uses, simply by adding an STS page.

Another lab shows how to use an existing membership store for authenticating calls to a custom STS and sourcing claim values.

WIF Workshop 6: WIF ASP.NET Pipeline and Extensibility Points

This session explores in depth how WIF tackles the sign-in scenario.
After a general intro to the WIF configuration element, the session describes how WS-Federation is used for driving the various browser redirects which ultimately constitute the sign in experience. Most of the time is spent digging deep in how WIF leverages the ASP.NET HttpModule extensibility mechanism and its own classes & events for implementing the sign-in sequence.

WIF Workshop 7: WIF and WCF

This session describes in detail the difference between passive and active scenarios, specifically around the confirmation method for tokens (bearer vs. holder-of-key).
The WIF object model and WCF integration are discussed, with special attention to similarities to what has been seen for the ASP.NET case and differences with the traditional, WCF-only programming model.

The notion of trusted subsystem is explored at length, providing the backdrop for the introduction to WSTrustChannel, CreateChannelActingAs and CreateChannelWithIssuedToken.

WIF Workshop 8: Lab about WIF and WCF

This lab explores the idea of delegated service calls via ActAs tokens: the exercise from the Web sites lab shows how to do that from ASP.NET to a WCF backend, while the one from the WCF lab focuses on flowing identity info through a chain of service calls.
The first exercise of the WCF lab does not use an STS for authentication. It uses username & password credentials, and is designed to highlight the differences between the old WCF-only model and the enhanced model offered by WIF.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Gunther Lenz claimed CloudPoll goes viral… in this 6/24/2010 post to the US ISV Evangelism blog:

CloudPoll attracted over 5,000 individual users last month, but did you create or answer a poll on CloudPoll and share it on Facebook?! No? Then check it out now!

Our very own CloudPoll reference application, hosted in Windows Azure, enables any Facebook user to answer, create, customize, share, and evaluate polls on Facebook. We got great traction, with Outback hosting a weekly poll on CloudPoll and the Windows Azure FB team letting Facebook users vote on the most anticipated feature for the next releases:

But that is not all: the CloudPoll application is based on the Windows Azure Toolkit for Facebook, which in turn is the foundation for the Hooters Facebook application.


Poll away my friends :-)!

The Windows Azure Team posted Real World Windows Azure: Interview with Andy Harjanto, Cofounder at Guppers on 6/24/2010:

As part of the Real World Windows Azure series, we talked to Andy Harjanto, cofounder at Guppers, about using the Windows Azure platform to deliver the company's mobile service, which enables people to access business data from any mobile phone.

MSDN: Tell us about Guppers and the services you offer.

Harjanto: Guppers enables businesses to take advantage of mobility, cloud computing, and social networking to exchange business data through email and short messaging service (SMS) messages, from any mobile device, such as Windows Phone 7.

MSDN: What were the biggest challenges that Guppers faced prior to implementing the Windows Azure platform?

Harjanto: We previously used Amazon Simple Storage Service (S3) for data storage in the cloud. However, because we host our own web servers to improve performance, we had to maintain file caches locally, which was time consuming and resulted in inefficient scaling. When we initially launched our service, we had great press coverage and had a huge surge in traffic, but we couldn't copy incoming files to Amazon S3 fast enough. The servers crashed and our website went down temporarily.

MSDN: Can you describe the solution you built with Windows Azure to address your need for scalability and high performance?

Harjanto: We implemented the Windows Azure platform for both our web-based application and our storage needs. When customers use the Guppers service to exchange data, requests are added to Queue storage services where Worker roles in Windows Azure pick up and process the requests. Document files are added to Windows Azure Blob storage and messages are stored in Windows Azure Table storage. We store user and account information in Microsoft SQL Azure. We have to communicate back and forth with telecommunications providers, so we use the Windows Azure platform AppFabric Service Bus to expose our application across network boundaries. To achieve even higher bandwidth, we plan on implementing the Windows Azure Content Delivery Network to cache our blob content.
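The flow Harjanto describes is the classic queue-centric work pattern: the web tier enqueues requests and Worker roles drain the queue into the appropriate store. A minimal Python sketch of that shape (names and payloads invented; real code would use the Azure storage client):

```python
from queue import Queue

def enqueue_request(q, request):
    """Front end: add the incoming exchange request to Queue storage."""
    q.put(request)

def worker_loop(q, store):
    """Worker role: drain the queue, persisting documents to blob-like
    storage and messages to table-like storage."""
    while not q.empty():
        req = q.get()
        if req["kind"] == "document":
            store["blobs"].append(req["payload"])   # Blob storage
        else:
            store["tables"].append(req["payload"])  # Table storage

q, store = Queue(), {"blobs": [], "tables": []}
enqueue_request(q, {"kind": "document", "payload": "invoice.pdf"})
enqueue_request(q, {"kind": "message", "payload": "SMS: order shipped"})
worker_loop(q, store)
print(store["blobs"], store["tables"])  # ['invoice.pdf'] ['SMS: order shipped']
```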

Figure 1. The Guppers web-based interface. Guppers enables customers to exchange business data through email, SMS text, and any mobile device.

MSDN: What makes your solution unique?

Harjanto: A key differentiator of Guppers, in addition to the unique way we enable customers to access business data, is that we are a small company that can operate websites and services that are on par with popular, high-traffic, enterprise websites. With Windows Azure, latency is a thing of the past and scalability is something we don't even worry about.

MSDN: What kinds of benefits are you realizing with Windows Azure?

Harjanto: We are able to scale up very cost-effectively. Had we scaled up our existing data center to meet demand, we would have paid approximately U.S.$90,000 annually in salary for extra IT resources to just maintain the infrastructure, plus additional operating costs of $700 each month. By using Windows Azure, we don't have to worry as much about fluctuations in a turbulent economy because we only pay for what we use. Development was also fast, which is important to us as a small company that doesn't have unlimited resources to dedicate to projects. It took two developers just two days to migrate our service to Windows Azure, and we'll be able to develop new enhancements in the future just as fast.

Read the full story at:

To read more Windows Azure customer success stories, visit:

I remember talking a few years ago with an individual named Andy Harjanto who was a Microsoft PM. Probably him.

Bruce D. Kyle suggests that you Learn How to Migrate Real World Project to Cloud in this 6/22/2010 post:

This five part video series chronicles the migration of the MSDEV Web sites from a traditional, third party hosted, terrestrial infrastructure, to its new home in the cloud using Windows Azure. See this real world project through the eyes of the MSDEV team and their partners at Slalom Consulting as they take you through the process from planning to completion. The series is available at msdev Cloud Migration.

Episode 1. Planning. MSDEV team members discuss the reasoning behind moving the msdev sites to the Cloud, while Stephen Roger and George Ghali from Slalom Consulting step through the planning of the migration.

Episode 2. Content. MSDEV team members provide insight into the type and scope of the content on the msdev site, and Ryan Kaneshiro from Slalom Consulting takes us through moving that content to the Cloud.

Episode 3: Web Sites. msdev team members talk about the various Web sites that make up msdev. Hieu Trac and Adam McKenzie from Slalom Consulting lead us through the process of porting those different sites to Window Azure.

Episode 4: Services. Joel Forman from Slalom Consulting talks about some of the behind the scenes services running on the msdev Web sites and how those services will be implemented once the sites move to the Cloud.

Episode 5: Data. Hieu Trac and Adam McKenzie are back, along with the rest of the gang from Slalom Consulting to wrap up the series by discussing how the data from the various msdev sites handled the migration to the Cloud with Windows Azure.

<Return to section navigation list> 

Windows Azure Infrastructure

David Linthicum asserted “All the cloud hype justifies criticisms that cloud computing is the latest technology hot air, but under the hype is pragmatic value” in his Cloud skeptics are right to be wary, but not dismissive post of 6/24/2010 to InfoWorld’s Cloud Computing blog:

I enjoyed Matt Prigge's latest post, "Confessions of a cloud skeptic," and its honest look at the essence of cloud computing. As Prigge puts it: "Frankly, I've never seen what all the fuss is about. When I first started hearing rumblings about cloud infrastructure a few years ago, I actually thought I might have missed some huge technological development. It didn't take me long to figure out that at a very basic level, cloud infrastructure isn't new at all. It's the marketing and spin that's new."

I have a tendency to agree with Prigge and others who have called the hype around cloud computing into question. It's clearly an overheated space right now, and many organizations are moving into cloud computing because of the hype, not for business and technology requirements. That's dangerous.

The trouble is that most of what is said about cloud computing today comes from marketing departments and not thought leaders.

That said, there is true value within cloud computing. You just have to understand what's truly innovative and unique about it, and what's just "cloudwashing." For example, as Prigge points out, we've been doing large-scale storage and compute for many years now within enterprises. So what's truly innovative around cloud computing? Plenty.

The way I see it, cloud computing is the ability to use core infrastructure services, such as storage and compute, over the Internet as true components of architecture in highly scalable and elastic ways. Storage and compute are nothing new, but the model for consuming those types of services is new and innovative. Thus, the value is the concept of using these architectural components from an outside source, and paying for only the services you use. Moreover, there's value from the cloud's on-demand provisioning, multitenant access to resources, and economies of scale.

Neil MacKenzie provides a table of contents for The Windows Azure Platform: Articles from the Trenches in his 6/23/2010 post:

Eric Nelson of Microsoft UK edited a collection of short articles on Windows Azure into a 96-page eBook, The Windows Azure Platform: Articles from the Trenches, available as a free download. The contents are as follows:

Getting Started
  • Jason Nappi - 5 steps to getting started with Windows Azure
  • Sarang Kulkarni - The best tools for working with Windows Azure
  • Steven Nagy - Architecting for Azure - Building Highly Scalable Applications
  • Marcus Tillett - The Windows Azure Platform and Cost-Oriented Architecture
  • Simon Munro - De-risking Your First Azure Project
  • Grace Mollison - Trials & tribulations of working with Azure when there’s more than one of you
  • Grace Mollison - Using a CI build to achieve an automated deployment of your latest build
  • Rob Blackwell - Using Java with the Windows Azure Platform
Windows Azure
  • Steven Nagy - Auto-Scaling Windows Azure Compute Instances
  • Josh Tucholski - Building a content-based router service on Windows Azure
  • Steve Towler - Bing Maps Tile Servers using Azure Blob Storage
  • Neil Mackenzie - Azure Drive
  • Mark Rendle - Azure Table Service as a NoSQL database
  • Neil Mackenzie - Queries and Azure Table
  • Saksham Gautam - Tricks for storing time and date fields in Table Storage
  • Josh Tucholski - Using Worker Roles to Implement a Distributed Cache
  • David Gristwood - Logging, Diagnostics and Health Monitoring of Windows Azure Applications
  • Neil Mackenzie - Service Runtime in Windows Azure
SQL Azure
  • Juliën Hanssens - Connecting to SQL Azure In 5 Minutes
Windows Azure Platform AppFabric
  • Richard Prodger - Real Time Tracing of Azure Roles From Your Desktop

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie asserted Like most architectural decisions the choice between hardware and virtual server are not mutually exclusive in her Data Center Feng Shui: SSL article of 6/24/2010 for F5’s DevCentral blog:

The argument goes a little something like this: The increases in raw compute power available in general purpose hardware eliminate the need for purpose-built hardware. After all, if general purpose hardware can sustain the same performance for SSL as purpose-built (specialized) hardware, why pay for the purpose-built hardware? Therefore, ergo, and thusly it doesn’t make sense to purchase a hardware solution when all you really need is the software, so you should just acquire and deploy a virtual network appliance.

The argument, which at first appears to be a sound one, completely ignores the fact that the same increases in raw compute power for general purpose hardware are also applicable to purpose-built hardware and the specialized hardware cards that provide acceleration of specific functions like compression and RSA operations (SSL). But for the purposes of this argument we’ll assume that performance, in terms of RSA operations per second, is about equal between the two options.

That still leaves two very good situations in which a virtualized solution is not a good choice. 


For many industries, federal government, banking, and financial services among the most common, SSL is a requirement – even internal to the organization. These industries also tend to fall under the requirement that the solution providing SSL be FIPS 140-2 or higher compliant. If you aren’t familiar with FIPS or the different “levels” of security it specifies, then let me sum up: FIPS 140 Level 2 (FIPS 140-2) requires a level of physical security that is not part of Level 1, which requires only that hardware components be “production grade” (a bar we assume the general purpose hardware deployed by cloud providers clears).

Security Level 2 improves upon the physical security mechanisms of a Security Level 1 cryptographic module by requiring features that show evidence of tampering, including tamper-evident coatings or seals that must be broken to attain physical access to the plaintext cryptographic keys and critical security parameters (CSPs) within the module, or pick-resistant locks on covers or doors to protect against unauthorized physical access.

-- FIPS 140-2, Wikipedia

FIPS 140-2 requires specific physical security mechanisms to ensure the security of the cryptographic keys used in all SSL (RSA) operations. The private and public keys used in SSL, and its related certificates, are essentially the “keys to the kingdom”. The loss of such keys is considered quite the disaster because they can be used to (a) decrypt sensitive conversations/transactions in flight and (b) masquerade as the provider by using the keys and certificates to make more authentic phishing sites. More recently, keys and certificates (PKI, the Public Key Infrastructure) have been an integral component of providing DNSSEC (DNS Security), a means to prevent the DNS cache poisoning and hijacking that has bitten several well-known organizations in the past two years.

Obviously you have no way of ensuring or even knowing if the general purpose compute upon which you are deploying a virtual network appliance has the proper security mechanisms necessary to meet FIPS 140-2 compliance. Therefore, ergo, and thusly if FIPS Level 2 or higher compliance is a requirement for your organization or application, then you really don’t have the option to “go virtual” because such solutions cannot meet the physical requirements necessary.


A second consideration, assuming performance and sustainable SSL (RSA) operations are equivalent, is the resource utilization required to sustain that level of performance. One of the advantages of purpose-built hardware that incorporates cryptographic acceleration cards is that it’s like being able to dedicate CPU and memory resources just for cryptographic functions. You’re essentially getting an extra CPU; it’s just that the extra CPU is automatically dedicated to and used for cryptographic functions. That means the general purpose compute available for TCP connection management and the application of other security and performance-related policies is not required to perform the cryptographic functions. The utilization of general purpose CPU and memory necessary to sustain X rate of encryption and decryption will be lower on purpose-built hardware than on its virtualized counterpart.

That means while a virtual network appliance can certainly sustain the same number of cryptographic transactions it may not (likely won’t) be able to do much other than that. The higher the utilization, too, the bigger the impact on performance in terms of latency introduced into the overall response time of the application.

You can generally think of cryptographic acceleration as “dedicated compute resources for cryptography.” That’s oversimplifying a bit, but when you distill the internal architecture and how tasks are actually assigned at the operating system level, it’s an accurate if not abstracted description. 

Because the virtual network appliance must leverage general purpose compute for what are computationally expensive and intense operations, that means there will be less general purpose compute for other tasks, thereby lowering the overall capacity of the virtualized solution. That means in the end the costs to deploy and run the application are going to be higher in OPEX than CAPEX, while the purpose-built solution will be higher in CAPEX than in OPEX – assuming equivalent general purpose compute between the virtual network appliance and the purpose-built hardware.


Randy Bias reported on 6/23/2010 from the Velocity Conference Cloud: Change Management & Cloud Operations:

Our own Andrew Shafer killed it today at the Velocity Conference.  His presentation is a must-read for webops, devops, and those aspiring to build 100% uptime cloud services.

It’s hard for folks to internalize how things are changing in Internet-land, but I think you’ll get closer through this presentation.  It’s not the same-old, same-old any more. Cloud computing is the biggest change to how IT functions since the 1980s and the advent of the personal computer (and hence the rise of client-server/enterprise computing). Enjoy … (and outstanding job, Andrew!)

Change Management Velocity2010

View more presentations from Andrew Shafer.

<Return to section navigation list> 

Cloud Computing Events

GigaOM posted video archives of days 1 and 2 of its Structure 2010 conference in San Francisco on 6/23 and 6/24/2010:

Wade Wegner posted Real-World Patterns for Cloud Computing at TechEd NA 2010 on 6/23/2010:

It was an amazing TechEd NA 2010, and I admit that it took me a few days to recover.  Between the heat and humidity, great times with friends, and good food, I managed to spend a bit of time at the conference.

I had the pleasure of co-presenting with Jerome Schulist, a solutions architect at the Tribune Company.  Jerome is one of the architects who engineered the solution that has allowed the Tribune Company to store and process terabytes of data on the Windows Azure platform.  The solution involves a number of really interesting scenarios, including:

  • Parallelized upload of terabytes of digital content into Windows Azure blob storage using .NET Framework 4.0
  • Best practices for uploading a massive amount of content
  • Scaling strategy for Windows Azure blob storage through multiple storage accounts and a “round robin” pattern
  • Content reprocessing with Windows Azure worker roles
  • Automatic scale-out and scale-back of worker roles through queue lengths
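The “round robin” storage-account pattern in the list above can be sketched in a few lines. The account names are hypothetical, and plain Python stands in for the .NET code shown in the session; a real implementation would upload to provisioned Windows Azure storage accounts rather than merely selecting a name:

```python
from itertools import cycle

# Hypothetical storage account names; a real deployment would substitute
# its own provisioned Windows Azure storage accounts.
STORAGE_ACCOUNTS = ["mediastore01", "mediastore02", "mediastore03"]

# Round-robin assignment spreads blobs across accounts so that no single
# account's bandwidth or transaction limits become the bottleneck.
_next_account = cycle(STORAGE_ACCOUNTS)

def pick_account_for_blob(blob_name: str) -> str:
    """Return the storage account that should receive this blob."""
    return next(_next_account)

uploads = [pick_account_for_blob(f"video{i}.mp4") for i in range(6)]
print(uploads)
# → ['mediastore01', 'mediastore02', 'mediastore03',
#    'mediastore01', 'mediastore02', 'mediastore03']
```

Parallelized uploaders would each draw from this rotation, so terabytes of content land evenly across the accounts instead of saturating one.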

For detailed information on this solution, you can take a look at the Tribune Company’s Windows Azure case study or you can watch our TechEd NA 2010 presentation here.

As promised in the session, you can find the final code built during the session on SkyDrive here.  Just remember to update the config files with your own credentials.

The Windows Azure Team announced Microsoft Distinguished Engineer, Yousef Khalidi, to Join Cloud Panel at Structure 2010 in San Francisco this Thursday, June 24, 2010 on 6/22/2010:

Microsoft Distinguished Engineer, Yousef Khalidi will participate in the panel, "The Platform-as-a-Service (PaaS)" this Thursday, June 24, 2010, 3:40 pm at Structure 2010, GigaOM's premier thought leadership conference, which will be held June 23-24, 2010 at the Mission Bay Conference Center in San Francisco. If you can't make it to Structure 2010, the session will be available via livestream and a recap of the panel will be posted to this blog on Friday, June 25, 2010.

This should be an interesting conversation with an eclectic group of panelists who are set to debate the future of PaaS and which direction the market may ultimately take.  Other participants in the panel, moderated by GigaOM Pro's Infrastructure Curator, Derrick Harris, include Lew Moorman, president of Rackspace Hosting, Mike Piech, senior director of product marketing at Oracle, Byron Sebastian, CEO of Heroku, and Tien Tzuo, founder and CEO of Zuora.

If you do plan to attend Structure, you may also want to check out Yousef's other session, "Why the cloud? Why the Windows Azure platform?" on Wednesday, June 23, 2010 at 3:15 pm.   This session will focus on how cloud computing enables developers to play a key role in shifting the IT dynamic and driving new revenue through innovative application development.  Yousef will also drive a discussion around how Microsoft is addressing this space via the Windows Azure platform and how customers should be planning for the cloud, both today and for the future.

Sorry to be late with this, but it didn’t appear in IE8’s feeds list until today. (IE8 has problems with the new blog application’s Atom feed.) More about GigaOM’s Structure 2010 Conference follows. I’m watching Yousef’s Thursday panel session and searching for the video archive of his Wednesday session as I type this.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Bluewolf announced its Cloud Transformation Solutions for Amazon Web Services on 6/24/2010 at the GigaOm Structure 2010 conference in San Francisco:

Bluewolf's total solution suite for cutting-edge Cloud infrastructure technologies. Introducing Bluewolf Cloud Transformation Solutions for Amazon Web Services.

Enterprises are supporting hundreds - even thousands - of applications to feed the growing hunger of the business, dramatically driving up the cost of computing. The cost crunch can cripple an IT budget.

To keep up with this business app frenzy, companies are looking beyond traditional hardware to Cloud-enabled technologies for their infrastructure needs. And platforms like Amazon Web Services (AWS) have the potential to revolutionize IT departments by aligning them with the needs of today's CIO.

As an AWS Solution Provider, Bluewolf enables companies to realize the benefits of Cloud hosting and storage solutions through a complete range of consulting services.

Bluewolf offers several solutions for organizations looking to unlock the opportunities of the emerging Cloud infrastructure technologies.

Introducing Bluewolf Cloud Transformation Solutions for AWS

Each solution adapts to your organization’s IT needs, no matter how evolved you are in adopting Cloud technologies.

Bluewolf will plan and implement the migration of your infrastructure to the AWS platform and deliver advanced management for your virtualized environment in the Cloud.

  • Meet the scalability and performance requirements of your business-critical applications
  • Focus on developing the product and the market, rather than worrying about the framework and the supporting data centers

By partnering with Cloud expert Bluewolf, you can offload the monitoring and management of your systems completely and focus on driving business.

  • Database administration
  • System administration
  • Web-server administration
  • Disaster recovery

A focus on LAMP

If you've made the cost effective decision to use open source software for your business needs, take the next step by moving it to the Cloud.

Often utilized by New Media, start-up, and agile companies, the LAMP Stack (typically Linux, Apache, MySQL, PHP/Perl/Python) is popular because it's free, open source, ubiquitous, secure, easily adaptable, easy to code, and easy to deploy.

Bluewolf's sweet spot in Amazon's Cloud is the LAMP Stack. Outsource the management of your Cloud-based LAMP Stack to Bluewolf's open source and Cloud Computing experts.

Click HERE to learn more about Bluewolf Cloud Transformation Solutions for Databases.

Audrey Watters reported Zuora Announces Z-Commerce for the Cloud on 6/23/2010:

Zuora, a subscription billing company, announces today the release of Z-Commerce for the Cloud. Z-Commerce enables cloud providers to automate metering, pricing, and billing.

According to Zuora CEO Tien Tzuo, cloud computing is a disruptive technology that requires enterprises to become part of what he calls "the subscription economy." While purchasing servers and computing hardware was a marker of the "ownership model," Tzuo argues that cloud computing requires a different business model, one that emphasizes pay-as-you-go.

Pricing models for Z-Commerce for the Cloud include demand-based, reservation, location-based, and off-peak pricing, and the platform can handle both cloud-service and software-as-a-service billing. Z-Commerce also features a private cloud billing setup for IT departments that need to implement departmental charge-backs.
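A usage-rated pricing model of the kind described can be sketched briefly. The rates, peak window, and usage figures below are invented for the example and are not Zuora's pricing or API:

```python
# Illustrative metered billing with peak/off-peak rates.
# All rates and hours are assumptions made up for this sketch.

PEAK_HOURS = range(8, 20)   # 08:00-19:59 billed at the peak rate
PEAK_RATE = 0.12            # $ per instance-hour, hypothetical
OFF_PEAK_RATE = 0.05        # $ per instance-hour, hypothetical

def rate_usage(hourly_usage: dict) -> float:
    """Price one day of metered usage given {hour_of_day: instance_hours}."""
    total = 0.0
    for hour, instance_hours in hourly_usage.items():
        rate = PEAK_RATE if hour in PEAK_HOURS else OFF_PEAK_RATE
        total += rate * instance_hours
    return round(total, 2)

# Two instances running all day, plus a 5-instance-hour batch job at 02:00
# that deliberately lands in the cheaper off-peak window.
usage = {h: 2 for h in range(24)}
usage[2] += 5
print(rate_usage(usage))  # → 4.33
```

The same rating function generalizes to the demand, reservation, and location-based models the announcement lists: each is just a different rule for choosing the rate applied to a metered quantity.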

Zuora contends that the lack of a billing infrastructure for the cloud has made it nearly impossible for many providers to effectively bring their offerings to market, and sees Z-Commerce for the Cloud as filling this gap.

Zuora announced earlier this week that it would be participating in Microsoft's Azure Technology Adoption Program. "Cloud computing opens up worlds of new opportunity for commerce," said Dianne O'Brien, senior director of business strategy for Windows Azure. "But in order to seize those opportunities and thrive in this new model, we need commerce systems that enable us to meter, price, and bill for usage in the cloud."

According to Tzuo, "We've spent a full year working with cloud leaders such as EMC and VMware to understand the needs of both cloud providers and their customers. Now, we've delivered a platform with all the metering, pricing, and billing capabilities required for the cloud to fully live up to its promise."

<Return to section navigation list> 
