Thursday, April 08, 2010

Windows Azure and Cloud Computing Posts for 4/8/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

Update 4/15/2010: Corrected the spelling of Shannon Lowder’s last name (see the SQL Azure Database, Codename “Dallas” and OData section).

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 to reflect the January 4, 2010 commercial release.

Azure Blob, Table and Queue Services

Steve Marx’s Leasing Windows Azure Blobs Using the Storage Client Library of 4/8/2010 begins:

One of the unsung heroes of Windows Azure storage is the ability to acquire leases on blobs. This feature can help you solve thorny concurrency challenges, perform leader election, serialize edits, and much more. Look for more discussion of leases over on the Windows Azure Storage Team blog in the coming weeks.

A question came up on the Windows Azure forum yesterday about how to use this blob lease functionality from the storage client library that ships with the Windows Azure SDK. Unfortunately, methods to acquire, renew, break, and release leases are not yet included in the top-level Microsoft.WindowsAzure.StorageClient namespace. This makes it a bit hard to figure out how to use leases. In this post, I’ll show you what’s available in the storage client library to manage leases, and I’ll share some code to help you get going.

Using the Protocol namespace

The lease operations can be found in the Microsoft.WindowsAzure.StorageClient.Protocol namespace, which provides lower-level helpers to interact with the storage REST API. In that namespace, there’s a method called BlobRequest.Lease(), which can help you construct a web request to perform lease operations.

Here’s a simple method which attempts to acquire a new lease on a blob and returns the acquired lease ID. (For convenience, I’ve made this an extension method of CloudBlob, which allows for syntax like myBlob.AcquireLease().)

public static string AcquireLease(this CloudBlob blob)
{
    var creds = blob.ServiceClient.Credentials;
    var transformedUri = new Uri(creds.TransformUri(blob.Uri.ToString()));
    var req = BlobRequest.Lease(transformedUri,
        90,                  // timeout (in seconds)
        LeaseAction.Acquire, // as opposed to Break, Release, or Renew
        null);               // name of the existing lease, if any
    creds.SignRequest(req);  // constructs the Authorization header (if needed)
    using (var response = req.GetResponse())
    {
        return response.Headers["x-ms-lease-id"];
    }
}

The call to BlobRequest.Lease() gives me an HttpWebRequest which I can then execute. To make sure I’m using the correct URL and authorization, I’m using TransformUri() and SignRequest(). The former updates the URL with a Shared Access Signature (if needed), and the latter constructs the correct Authorization header (if needed). Doing both ensures that no matter which kind of access I’m using, I have a properly authorized HttpWebRequest.

Finally I execute the web request and read the x-ms-lease-id header to get back the newly-acquired lease (which will be in GUID format).

Steve continues with code examples for:

    • Using the lease once acquired
    • The rest of the lease methods
    • Simple usage

and provides downloadable sample code.
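The remaining operations follow the same pattern as AcquireLease, differing only in the LeaseAction and the lease ID passed to BlobRequest.Lease(). As a sketch modeled on the snippet above (not Steve’s exact code), releasing a lease might look like this:

```csharp
// Sketch of a ReleaseLease companion to the AcquireLease extension method above.
// Assumes the same namespaces (Microsoft.WindowsAzure.StorageClient and
// Microsoft.WindowsAzure.StorageClient.Protocol); the method shape is an
// assumption for illustration, not code from Steve's post.
public static void ReleaseLease(this CloudBlob blob, string leaseId)
{
    var creds = blob.ServiceClient.Credentials;
    var transformedUri = new Uri(creds.TransformUri(blob.Uri.ToString()));
    var req = BlobRequest.Lease(transformedUri,
        90,                  // timeout (in seconds)
        LeaseAction.Release, // give the lease back early
        leaseId);            // the lease ID returned by AcquireLease()
    creds.SignRequest(req);  // constructs the Authorization header (if needed)
    req.GetResponse().Close();
}
```

Renew and Break follow suit with LeaseAction.Renew and LeaseAction.Break; Steve’s downloadable sample covers all four.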

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Kevin Kline’s The NoSQL Movement: Hype or Hope? article of 4/7/2010 for Database Trends and Applications magazine’s April 2010 issue casts a jaundiced eye on the NoSQL movement’s tenets:

If you spend any time at all reading IT trade journals and websites, you've no doubt heard about the NoSQL movement.  In a nutshell, NoSQL databases (also called post-relational databases) are a variety of loosely grouped means of storing data without requiring the SQL language.  Of course, we've had non-relational databases far longer than we've had actual relational databases.  Anyone who's used products like IBM's Lotus Notes can point to a popular non-relational database.  However, part and parcel of the NoSQL movement is the idea that the data repositories can horizontally scale with ease, since they're used as the underpinnings of a website.  For that reason, NoSQL is strongly associated with web applications, since websites have a history of starting small and going "viral," exhibiting explosive growth after word gets out.

In contrast, most relational database platforms require a lot of modifications to successfully grow in scalability from small to medium to global.  For a good review of such a growth pattern, and the frequent re-designs that explosive growth requires, read the story of MySpace's evolution as a Microsoft SQL Server shop at

On the negative side, NoSQL databases circumvent the data quality assurances of relational databases, best known as the ACID (atomicity, consistency, isolation, durability) properties of transactions.  So, while NoSQL databases might be very fast and scale easily, they do not typically guarantee that a transaction will be atomic, consistent, isolated, and durable.  In other words, you could lose data, and there is no guarantee that a transaction will always complete successfully, or completely roll back.

The market for NoSQL is still very immature and crowded with many open source and proprietary products.  Well-known vendors for NoSQL databases include Google's Big Table offering and Amazon's Dynamo, both of which are available as inexpensive cloud services.  Some of the most talked about NoSQL platforms on the open source side include Apache's HBase and CouchDB; Facebook's Cassandra; and LinkedIn's Project Voldemort. …

Kevin Kline is the technical strategy manager for SQL Server Solutions at Quest Software. You might be interested in his earlier Server in the Clouds? article with this abstract:

The idea of "SQL Server in the cloud" is all the rage as I write this article. Many SQL Server experts already predict the demise of the IT data center and a complete upending of the current state of our industry, in which large enterprises can spend millions of dollars on SQL Server licenses, hardware and staff. I have to admit, when I first heard about this idea, I was ecstatic. What could be better for an enterprise than to have all the goodness of a SQL Server database with none of the hardware or staffing issues? However, on deeper examination, there is much about which to be cautious.

Shannon Lowder describes Migrating Databases to SQL Azure without current migration tools such as the SQL Server Migration Wizard or Azure Data Sync in this 4/7/2010 post:

When converting a database from an older version of Microsoft SQL Server to Azure, there will be many gotchas along the way.  I'd like to help you learn from the troubles I had, hopefully sparing you a bit of the time that I lost during my first conversion.

Getting Started

I'm going to assume you already have your account, and have already set up the database and firewall settings for your Azure server.  If you haven't, please visit and follow their getting started guide.  This will walk you through each of the steps you'll need to have completed before the following article will help.

To get started developing in Azure, you can either build a database from scratch or "export" your current database to your Azure server.  Since I have several databases that I've built throughout the years, I figured I'd start my development by upgrading an existing database to Azure.

Shannon probably could have saved considerable time by checking out these posts:

Ron Jacobs’ Using System.Web.Routing with Data Services (OData) post of 4/5/2010 answers “How do you get rid of the .svc extension with WCF Data Services?:”

So you like the new OData protocol and the implementation in WCF Data Services… but you hate having a “.SVC” extension in the URI?

How do you get rid of the .svc extension with WCF Data Services?

Simple… Just use the new ServiceRoute class. For this example, I’m using a project from my MIX10 session Driving Experiences Via Services using the .NET Framework. The sample includes a simple WCF Data Service that returns information about a conference.

As you can see I’ve put the service under the Services folder.  To access it you have to browse to http://localhost:62025/Services/Conference.svc/

By default, the URI is derived from the folder and file name in the web site.  But if you don’t like that, with .NET 4 you can just create a route.

Just add a Global.asax file to your site and add the following code [that Ron shows you].

This code creates a route for the URI http://localhost:62025/Conference that will use the DataServiceHostFactory to create our WCF Data Service class which is the type Conference.

Simple, easy and very cool…
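For reference, the route registration Ron describes boils down to a few lines in Global.asax. This is a sketch under the assumption that the data service class is named Conference, as in his sample; see his post for the exact code:

```csharp
// Global.asax.cs -- sketch of routing a WCF Data Service without the .svc extension.
// Assumes a DataService-derived class named Conference (per Ron's sample project).
using System;
using System.Data.Services;            // DataServiceHostFactory
using System.ServiceModel.Activation;  // ServiceRoute
using System.Web.Routing;              // RouteTable

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Maps http://localhost:62025/Conference to the Conference data service,
        // replacing http://localhost:62025/Services/Conference.svc/
        RouteTable.Routes.Add(new ServiceRoute(
            "Conference",                 // URI prefix, no .svc required
            new DataServiceHostFactory(), // hosts the WCF Data Service
            typeof(Conference)));         // the service type to route to
    }
}
```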

Carl and Richard interview Brad Abrams, Bob Dimpsey and Lance Olson about OData for .NET Rocks Show #519.

Brad is currently the Group Program Manager for Microsoft’s UI Framework and Services team, Bob is the Product Unit Manager for the Application Server Developer Platform, and Lance is a Group Program Manager building developer tools and runtimes for data on the SQL Server team.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Azure AppFabric team announces The Windows Azure platform AppFabric April 2010 Release is Live in this 4/7/2010 post:

The Windows Azure platform AppFabric April Release is now live. In addition to improvements in stability, scale, and performance, this release addresses two previously communicated issues in the March billing preview release. We recommend that you re-check your usage previews for April 7th or later per the instructions in the March Billing Preview announcement to ensure that you sign up for the appropriate pricing plan in preparation for AppFabric billing that will start after April 9. For more information about pricing and the billing related issues addressed in this release, please visit our pricing FAQ page.

Please refer to the release notes for a complete list of known issues and breaking changes in this release. To obtain the latest copy of the AppFabric SDK, visit the AppFabric portal or the Download Center.

Vittorio Bertocci reports in his Patria Natia Tour: keynotes di VS2010 & Basta! Italia, Community Tour a Catania & Venezia post of 4/7/2010 (in Italian) that he will deliver a pair of keynotes and two claims-based identity sessions in mid-April: 

In September 2005 I embarked on this Redmond venture, buying the domain from a good Genoese emigrant. At the time I never imagined that a few years later I would return to Italy to deliver the keynote of one of the most important product launches of our recent history! Needless to say, I am deeply honored and extremely pleased, and I'm looking forward to meeting Italian developers on tour with Lorenzo, Francesca and all the Microsoft Italy people involved in launching Visual Studio 2010 (and they are doing an amazing job). Below is the agenda:

  • 12 April, 14:30–18:00. Launch keynote for Visual Studio 2010, transmitted live directly on the event’s pages
  • 13 April, 9:30–10:15. Keynote at Basta! Italia in Rome
  • 14 April, 10:00–11:40. Keynote & claims-based identity session at the OrangeDotNet community event in Catania
  • 15 April, 10:00–11:40. Keynote & claims-based identity session at the XeDotNet community event in Venice

Great tour! Especially when you consider that in the 33 years I lived in Italy I was never in Veneto nor in Sicily ...

I'm really curious to see whether giving sessions in Italian will be like switching from a medicine ball to a supertele, or whether almost 5 years of using Italian primarily as a private channel with my wife when we shop have ruined my otherwise proverbial gift of the gab ... I hope you'll be understanding :-))

See you on tour!!!

Italian –> English translation by Microsoft (Bing) Translator.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Joshua Kurlinski from Symon Communications and Joseph Fultz from the Dallas Microsoft Technology Center deliver this 00:21:15 Scaling Web Sites with Azure and Local Cache Channel9 video segment of 4/8/2010:

In this screencast we learn how Microsoft teamed up with Symon Communications to build a scalable content delivery system for mobile devices. This Proof of Concept was created to help with Symon's digital signage network, but the caching mechanism could potentially benefit anyone looking to leverage Windows Azure for massive scale. Joshua Kurlinski from Symon talks about the problems we needed to solve, and Joseph Fultz from the Dallas MTC (Microsoft Technology Center) walks us through the solution we created in depth.

To learn more about Symon Communications, visit their Web site. If you’d like to learn more about this solution, you can read this article on Joseph’s blog.

Mike Wickstrand’s From your perspective, what is the reasonable length of time it should take to deploy an application to Windows Azure? Twtpoll survey lets you select from various deployment times:

Here were the results as of 11:00 AM on 4/8/2010:


I voted for #1.

Mike is Senior Director of Product Planning for Windows Azure.

Maarten Balliauw explains Running PHP on Windows Azure and other related topics on 4/8/2010:

Yesterday I did some talks on PHP and Windows Azure at JumpIn Camp in Zürich together with Josh Holmes. Here are the slide decks and samples we used.

Scaling Big while Sleeping Well

Josh talked on what Windows Azure is, what components are available and how you can get started with PHP and Windows Azure: Scaling Big While Sleeping Well. View more presentations from Josh Holmes.

Running PHP in the Cloud

I did not do the entire deck, but showed some slides and concepts. This is mainly the same content as Josh’s session with some additions: Running PHP In The Cloud. View more presentations from Maarten Balliauw.

Windows Azure Storage & SQL Azure

This deck talks about the different storage concepts and how to use them in PHP: Windows Azure Storage & Sql Azure. View more presentations from Maarten Balliauw.

Sample code

As a demo, I had ImageCloud, a web application similar to Flickr. Here’s the sample code: ImageCloud.rar (5.00 mb)

tbtechnet answers PHP on Windows Azure… What’s the Scoop? in this 4/7/2010 post to the Windows Azure Platform, Web Hosting and Web Services blog:

Ever wondered what’s involved to get a PHP app to work with Windows Azure? We took a crack at explaining to PHP developers how to work with Windows Azure.

PHP on Windows Azure Quickstart Guide - Creating and Deploying PHP Projects on Windows Azure

The guide shows how to develop and test PHP code on a local development machine, and then how to deploy that code to Windows Azure. The material is intended for developers who are already using PHP with an integrated development environment (IDE) such as Eclipse, so the guide doesn't cover PHP syntax or details of working with Eclipse, but experienced PHP programmers will see that it is easy to set up Eclipse to work with Windows Azure to support web-based PHP applications.

Along with the new Windows Azure Command-line Tools for PHP Developers, which provide a command-line utility for PHP developers, the guide is worth a look.

<Return to section navigation list> 

Windows Azure Infrastructure

CloudTweaks reports on Microsoft Research’s Cloud Computing Project – “Cloud Faster” in this 4/8/2010 post:

To make cloud computing work, we must make applications run substantially faster, both over the Internet and within data centers. Our measurements of real applications show that today’s protocols fall short, leading to slow page-load times across the Internet and congestion collapses inside the data center. We have developed a new suite of architectures and protocols that boost performance and the robustness of communications to overcome these problems.

About Cloud Faster

The results are backed by real measurements and a new theory describing protocol dynamics that enables us to remedy fundamental problems in the Transmission Control Protocol.

To speed up the cloud, we have developed two suites of technology:

  • DCTCP – changes to the congestion control algorithm of TCP that decreases application latency inside the data center by decreasing queue lengths and packet loss while maintaining high throughput.
  • WideArea TCP – changes to the network stack of the “last-hop server” – the last server to touch packets before they travel to the client – that reduce the latency for transferring small objects (5 to 40 KB) by working around last-mile impairments such as loss and high RTT.

We will demo the experience users will have with Bing Web sites, both with and without our improvements. The difference is stunning. We also will show visualizations of intra-data-center communication problems and our changes that fix them. This work stems from collaborations with Bing and Windows Core Operating System Networking.

DCTCP – Reducing Latency Inside the Data Center

The following videos show what happens when a server (marked 21) in a rack sends a request for information to 20 other servers in the same rack, and then waits for their responses so that it can formulate a summary. This Partition/Aggregate pattern is very common in data center applications, forming the heart of applications as diverse as Web search (querying very large indexes), ad placement (find the best ads to show with a web page), and social networking (find all a user’s friends, or the most interesting info to show to that user).

In both videos, we see a burst of activity as the request is sent out, with all servers responding at roughly the same time with a burst of packets that carries the first part of their response. This burst is known as incast, and it causes the queue at the switch to rapidly grow in size (shown as blocks extending out on a 45 degree angle).


In the case of DCTCP, senders start receiving congestion notifications much earlier than with TCP. They adjust their sending rates, and the queue never overflows. Even after the initial burst of activity, the operation of DCTCP is much smoother than TCP, with senders offering roughly equal amounts of traffic — so much that it even appears they are “taking turns.”

Watch the Videos

David Gristwood has updated his SlideShare deck to 70 slides in Understanding The Azure Platform March 2010. Dave is a Microsoft application architect.

<Return to section navigation list> 

Cloud Security and Governance

Ina Fried claims The cloud--it's not for control freaks in this 4/8/2010 post to CNet News’ Beyond Binary column:

Moving server software to the cloud has a lot of advantages. A company no longer has to worry about patches, deploying upgrades, and any number of other concerns.

But it also has one big downside--one that many CIOs are still struggling with--the loss of control.

"They do lose control, when they move to a cloud-based service, of some things," Microsoft Senior Vice President Chris Capossela said during a lunch meeting on Wednesday. "They lose control of when things get updates. They lose control of saying 'no' to some new thing."

Capossela acknowledged that many technology executives, even those who are shifting work to the cloud, see it as a mixed bag.

"On Mondays, Wednesdays, and Fridays they hate it, and on Tuesdays and Thursdays they are really excited by it," Capossela said. "What I mean by that is they see the excitement and the benefits of it and they are also scared of it."

To the end user, it doesn't make a huge difference; Microsoft's software looks basically the same whether it is running in a customer's data center or as a service from Microsoft. If anything, the service customers are happier because they get new versions more quickly.

However, to the IT department, those two scenarios look very different. When they run the software on their own, customers have to budget for upgrades, manage installations, and monitor servers. In the latter scenario, the company doesn't do any of that, but at a different cost: they have little say over which versions of the software are running.

Photo of Chris Capossela by Microsoft.

The Cloud is unsafe? Quite the opposite post of 4/8/2010 posits:

As it turns out, 45% of tech execs believe that the risks of cloud computing outweigh the benefits.

I don’t get it. Yes, the risks are obvious, and I realise that the primary concern for large enterprise in entering the cloud is the increased vulnerability of information contained within it. As with almost all cloud solutions, a company does not host its own cloud servers and is therefore not responsible for managing the security of the servers.

This is very much akin to trusting a stranger with your children. If something ever happened to them, wouldn’t you rather it happen when they were with you? After all who can love and protect them better than their own parents? My response to this is simple: If the person you were trusting your children with were a kick-ass Steven Seagal-type character, would you still not trust them to be safe? Certainly you will still love them more than Steven Seagal (maybe??), but barring some divine lifting-a-car-off-your-child type moment, you surely can’t protect them better.

This is how I view the cloud. Disregarding amateurish companies that consider the cloud as dollars for storage with little concern for security, cloud services such as Amazon’s AWS are the Steven Seagals of the cloud. It is their business to ensure that they are the best at what they do and security is absolutely paramount. As Google have stated in the past, one small error and everyone stops trusting you. They simply can’t afford any mistakes, even more so than the enterprises that use them.

With that said, how then can enterprise possibly trust their own ’secure’ non-cloud based solutions over the cloud? Their networks are still connected to the net (although very well firewalled), their information is still accessible to anyone desperate enough to retrieve it (just ask the Chinese government), and surely – SURELY – most enterprise IT experts are no more and probably less skilled than those working for the likes of Amazon. In saying that, if you were to get hacked, wouldn’t you prefer liability lie with someone else for the resulting damages? I sure would.

The cloud can in some twisted way be viewed as a form of insurance. They mess up and you sue. You mess up… ????

Slavik Markovich’s The Next Challenge for Database Security: Virtualization and Cloud Computing article of 4/7/2010 for Database Trends and Applications April 2010 issue begins:

It's hard enough to lock down sensitive data when you know exactly which server the database is running on, but what will you do when you deploy virtualization and these systems are constantly moving?  And making sure your own database administrators (DBAs) and system administrators aren't copying or viewing confidential records is already a challenge - how are you going to know when your cloud computing vendor's staff members are not using their privileges inappropriately?  These are just two of the obstacles that any enterprise must overcome in order to deploy a secure database platform in a virtual environment, or in the cloud. In some cases, these concerns have been preventing organizations from moving to virtualization or cloud computing.

Security in a Dynamic Systems Environment

Whether we're talking about your own VMware data center, or an Amazon EC2-based cloud, one of the major benefits is flexibility.  Moving servers, and adding or removing resources as needed, allows you to maximize the use of your systems and reduce expense.  But, it also means that your sensitive data resides in new instances of your databases that are constantly being provisioned (and de-provisioned). While gaining more flexibility, monitoring data access becomes much more difficult.  If the information in those applications is subject to regulations like the Payment Card Industry Data Security Standard (PCI DSS) or the Health Insurance Portability and Accountability Act (HIPAA), you need to be able to demonstrate to auditors that it is secure.

As you look at solutions to monitor these "transient" database servers, the key to success will be finding a methodology that is easily deployed on new virtual machines (VMs) without management involvement.  Each of these VMs will need to have a sensor or agent running locally - and this software must be able to be provisioned automatically along with the database software, without requiring intrusive system management, such as rebooting, for example whenever you need to install, upgrade or update the agents.  Even better, if it can automatically connect to the monitoring server, you'll avoid the need to reconfigure constantly to add/delete new servers from the management console.  The right architecture will allow you to see exactly where your databases are hosted at any point in time, and yet centrally log all activity and flag suspicious events across all servers, wherever they are running. …

Bill Brenner reports from the SaaS Connect Conference “SaaS, Security and the Cloud: It's All About the Contract” post to Network World’s Security blog of 4/7/2010:

The term Software as a Service (SaaS) has been around a long time. The term cloud is still relatively new for many. Putting them together has meant a world of hurt for many enterprises, especially when trying to integrate security into the mix.

During a joint panel discussion hosted by CSO Perspectives 2010 and SaaScon 2010 Wednesday, five guys who've been there sought to help attendees avoid the same ordeal. Perhaps the most important lesson is that contract negotiations between providers is everything. The problem is that you don't always know which questions to ask when the paperwork is being written.

Panelists cited key problems in making the SaaS-Cloud-Security formula work: SaaS contracts often lack contingency plans for what would happen if one or more of the companies involved suffer a disruption or data breach. The partners -- the enterprise customer and the vendors -- rarely find it easy getting on the same page in terms of who is responsible for what in the event of trouble. Meanwhile, they say, there's a lack of clear standards on how to proceed, especially when it comes to doing things in the cloud.

Add to that the basic misunderstandings companies have on just what the cloud is all about, said Jim Reavis, co-founder of the Cloud Security Alliance.

"It's important we understand there isn't just one cloud out there. It's about layers of services," Reavis said. "We've seen an evolution where SaaS providers ride atop the other layers, delivered in public and private clouds."

Somewhere in the mix, plenty can go wrong. …

Jay Heiser asks It’s 11PM, do you know where your data is? in this 4/7/2010 post to the Gartner blogs:

Every evening for several decades, a number of American television stations announced that it was 10pm, and asked the public service question “Do you know where your children are?”  Anyone using a cloud computing service should be asking the same question about their data.

Over the next few months, I’m going to be researching an area of cloud computing risk that hasn’t received adequate attention: data continuity and recovery.

Theoretically, the cloud computing model should be a resilient one, and a number of vendors claim that their model is built to automatically replicate data to an alternate site, protecting their customers from the risk of hardware failure, or even site failure. I have no trouble believing this.

What I do have trouble with is accepting unsubstantiated vendor claims that this is a more reliable mechanism than anything I can do for myself. There is no perfect mechanism for backing up data, but if I choose to be responsible for backing up my own data, I’ve got quite a bit of useful knowledge about the reliability of the mechanisms I choose, and the degree to which the processes are performed.  I can verify the integrity and completeness of the copies, I can store them offsite and post armed guards, and I can periodically test to ensure that restoration is possible.  None of this is foolproof, but it can be reliable to what ever degree I desire.

If I choose instead to rely on a cloud service provider, I have no ability to know where the primary data is, let alone have an ability to verify that redundant copies of all my data exist in a different site. I have no ability to know the likelihood that my provider would be able to restore my data in case of an accident, let alone restore something important that I accidentally deleted.

And if my data in the cloud  is being backed up in real time, it raises another significant question: if the original data is corrupted, won’t the same corruption affect the copy?  Mistakes and errors replicate at the speed of the cloud. What if data loss occurs as the result of some sort of cascading failure, or external attack?  Isn’t it reasonable to assume that this would affect all copies of the data?  Traditional backups are inherently more reliable in that offline data is insulated from failure modes that are inherent to realtime online redundancy models.

If you don’t know where your data is, can you confirm that it will be there when you need it most?

What evidence do you have from your provider that their proprietary technology is reliable?

<Return to section navigation list> 

Cloud Computing Events

Resource Plus announces its State of the Cloud 2010 Executive Conference to be held 4/26 through 4/27/2010 at the Seaport World Trade Center in Boston, MA:

Cloud Computing is in the news. But is it in your business plan? Some businesses are already realizing the significant benefits of Cloud Computing—slashing IT costs, boosting efficiency, and enabling new sales capabilities. Others may be reluctant to jump because it’s perceived as new, unfamiliar, and potentially risky.

The State of the Cloud Executive Conference helps you take Cloud Computing from an intriguing idea to a day-to-day business reality:

  • Separate the hype from the true business potential
  • Hear from companies that have made their move to the Cloud
  • Chart your company’s technological and financial future
  • Learn about all aspects of this critical emerging technology
  • Get objective insights from independent industry experts
  • Find out what third-party solution providers have to offer
Spend a day in the Cloud

It all happens at this convenient, one-stop Cloud Computing conference designed for CEOs, CIOs, senior IT executives, and other key decision-makers. Topics covered in our keynote presentations and panel sessions will help you better understand the current Cloud offerings, price points, and total cost of ownership. Plus, leading Cloud Computing service experts will showcase their Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) offerings.

We’ll explore:
  • How Cloud Computing can give your company a competitive edge
  • Scalability and portability for IaaS, PaaS, and SaaS
  • Application and infrastructure interoperability
  • Total cost of ownership and cost optimization scenarios
  • Cloud Computing security/privacy implications and solutions
  • Public vs. private Cloud solutions
  • Preparation and impact to your organizational workflow

Microsoft is a gold sponsor.

Claude (The Cloud Architect) reports Upcoming CloudStorm Editions in this 4/8/2010 post:

Later this month, CloudStorm will take place at Cloud Expo in New York. It will be held at 10 AM on April 19, the morning before Cloud Expo proper opens its doors at the Javits Convention Center.

CloudStorm will run from 10:00 AM to 12 noon on Monday, 19th April and includes lightning talks by Cordys, Soasta (Cloudtest), Rightscale, Virtual Ark, Amplidata, A-Server, Zenith Infotech (Smart Style Office) and more to come!

Register here if you want to attend; you’ll receive a free pass. If you want to speak at this edition, I have 2 speaking slots left: a booth at Cloud Expo is required. If you have that, the speaking slot is offered free of charge.

Also, we still have some speaking slots left at CloudStorm Düsseldorf on May 4th, CloudStorm Amsterdam on May 6th, and Cloud Expo Europe in London on June 16th. Costs are approximately €1,000 each, which provides a speaking slot, demo table, delegate bag insert, and a full list of attendees including contact details.


Rutrell Yasin reports “The Cloud Summit will turn attention to standards for data interoperability, portability and security” in his Feds, industry to hash out cloud standards at May summit article of 4/7/2010 for Federal Computer Week:

The National Institute of Standards and Technology will host a Cloud Summit on May 20 with federal agencies and the private sector with the intent to develop data interoperability, portability and security standards for cloud computing that can be applied across agencies.

Vivek Kundra, the federal chief information officer, told an audience at the Brookings Institution today that establishing such standards is essential to making full use of cloud computing's potential.

By Aug. 1, NIST officials plan to move forward with initial specifications, which will lead to the launch of a portal for cloud standards where various stakeholders can collaborate online in a cloud environment, Kundra said.

“NIST will convene people around the table, and part of what we want to do is test case studies,” Kundra said during an address on “The Economic Gains of Cloud Computing,” sponsored by Brookings in Washington, D.C. The event was moderated by Darrell West, vice president and director of Governance Studies with Brookings, which released a report entitled “Saving Money Through Cloud Computing.”

See Windows Azure and Cloud Computing Posts for 4/7/2010+ for more details on the Brookings Institution’s conference.

Simon Munro laments the canned production values of typical Windows Azure demos and announces UK AzureNET User Group Redux in his Emerging Azure Rockstars post of 4/7/2010:

I’ve just about had my fill of Azure demos and presentations. After more than a year in beta, it seemed that the only people who could stand up in front of a crowd and talk about Azure were those from Microsoft or their hymn-singing partners. It is not that the presentations and videos are bad; it is just that they are a lot of the same: either a ‘Hello cloud’ introduction to the platform, the release of new features that have been asked for, or pimping how similar Azure development is to existing .NET development.

Most of the presentations that I have seen, although many of them very good and done by really smart people, have the sheen of marketing snake oil. It all seems too perfect and simple: all clinical, clean and fashionable like CSI Miami, where everything works, rather than messy and grungy like the development world that we have to live in every day. …

Simon continues:

So it is fortunate (and about time) that the first Azure meeting of the year is going to have presentations – not by some big name brought in to attract the usual sheeple, but by people who have worked with Azure and delivered something real. I have had some chats with Simon Evans, James Broome and Grace Mollison over the last couple of months as they have developed a solution on top of Azure – a solution that would have no business case if it weren’t for the cloud model. I have heard from James as he wrestled the development fabric to fit in with his BDD style, and of the support of Grace in providing a build server for the team using a product that doesn’t have a server version. I watched from a distance as Simon got his head down and dealt with the persistence issues, and had to ignore him after the nth time that he exclaimed how cool and easy the CDN is.

Next week is a busy week for the Microsoft community. There is all the stuff that Microsoft is putting on around the launch of VS2010, SQL R2 and others. There is a Silverlight user group meeting and I will be presenting at SQLBits on Friday. Thursday night is the turn of UKAzureNet and even though it might be less central, it is being done in a cinema and our new emerging rockstars will be on a stage of sorts. We need your support and attendance to help get it as full as possible – we don’t expect it to be like the opening weekend of Avatar, but hope that we’ll have more than a handful of usual suspects throwing popcorn.

You are guaranteed to hear some interesting stories from the trenches.

You can register here: UK AzureNET User Group: Phoenix from the Flames.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Lori MacVittie claims “Stateless applications may be the long term answer to scalability of applications in the cloud, but until then, we need a solution like sticky sessions (persistence)” in her Amazon Makes the Cloud Sticky post of 4/8/2010:

Amazon recently introduced “stickiness” to its ELB (Elastic Load Balancing) offering. I’ve written before about “stickiness,” a.k.a. what we’ve called persistence for, oh, nearly ten years now, so I won’t reiterate beyond saying, “it’s about time.” A description of why sticky sessions are necessary was offered in the AWS blog announcing the new feature:

Up until now each Load balancer had the freedom to forward each incoming HTTP or TCP request to any of the EC2 instances under its purview. This resulted in a reasonably even load on each instance, but it also meant that each instance would have to retrieve, manipulate, and store session data for each request without any possible benefit from locality of reference.

-- New Elastic Load Balancing Feature: Sticky Sessions

What the author is really trying to say is that without “sticky sessions” ELB breaks applications because it does not honor state. Remember that most web applications today rely upon state (session) to store quite a bit of application and user specific data that’s necessary for the application to behave properly. When a load balancer distributes requests across instances without consideration for where that state (session) is stored, the application behavior can become erratic and unpredictable. Hence the need for “stickiness”. …

Lori continues with “WHY is THIS IMPORTANT?” and “THE NECESSARY EVIL of STATE” sections.
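The cookie-based persistence pattern Lori describes can be sketched in a few lines of Python. This is an illustrative sketch only: the cookie name and random backend-selection policy are assumptions of mine, not ELB’s actual implementation.

```python
import random

class StickyLoadBalancer:
    """Cookie-based session affinity ("stickiness") in miniature.

    A request arriving without an affinity cookie is assigned a backend
    and handed a cookie naming it; later requests carrying the cookie
    are routed to the same backend, so its in-memory session state is
    reused instead of being rebuilt elsewhere.
    """

    COOKIE = "lb-affinity"  # hypothetical cookie name, not ELB's

    def __init__(self, backends):
        self.backends = list(backends)

    def route(self, cookies):
        """Return (chosen backend, cookies the response should set)."""
        backend = cookies.get(self.COOKIE)
        if backend not in self.backends:  # new session, or backend removed
            backend = random.choice(self.backends)
        return backend, {self.COOKIE: backend}

lb = StickyLoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
first, cookies = lb.route({})   # first request: any backend may be chosen
repeat, _ = lb.route(cookies)   # follow-up request: same backend
assert first == repeat
```

Note the failure mode Lori alludes to: if the cookie’s backend disappears, the sketch silently reassigns the session, and any state stored only on the old instance is lost.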

Lydia Leong describes Cogent’s Utility Computing service in this 4/8/2010 post:

A client evaluating cloud computing solutions asked me about Cogent’s Utility Computing offering (and showed me a nice little product sheet for it). Never having heard of it before, and not having a clue from the marketing collateral what this was actually supposed to be (and finding zero public information about it), I got in touch with Cogent and asked them to brief me. I plan to include a blurb about it in my upcoming Who’s Who note, but it’s sufficiently unusual and interesting that I think it’s worth a call-out on my blog.

Simply put, Cogent is allowing customers to rent dedicated Linux servers at Cogent’s POPs. The servers are managed through the OS level; customers have sudo access. This by itself wouldn’t be hugely interesting (and many CDNs now allow their customers to colocate at their POPs, and might offer self-managed or simple managed dedicated hosting as well in those POPs). What’s interesting is the pricing model.

Cogent charges for this service based on bandwidth (on a Mbps basis). You pay normal Cogent prices for the bandwidth, plus an additional per-Mbps surcharge of about $1. In other words, you don’t pay any kind of compute price at all. (You do have to push a certain minimum amount of bandwidth in order for Cogent to sell you the service at all, though.) …
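As a rough illustration of that pricing model (the transit rate below is an assumption for the example; Cogent’s actual per-Mbps pricing varies by commit and negotiation):

```python
def monthly_cost(committed_mbps, transit_rate_per_mbps, surcharge_per_mbps=1.00):
    """Bandwidth-only pricing as described: no separate compute charge.

    transit_rate_per_mbps is whatever transit price you negotiate
    (assumed here); the roughly $1/Mbps utility-computing surcharge
    is simply added on top.
    """
    return committed_mbps * (transit_rate_per_mbps + surcharge_per_mbps)

# A 100 Mbps commit at an assumed $4/Mbps transit rate:
print(monthly_cost(100, 4.00))  # → 500.0
```

The interesting consequence is that server cost scales with traffic pushed, not with CPU or RAM consumed, which inverts the usual dedicated-hosting economics.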

Mike Kirkwood reports This Tweet is Priority 1: Salesforce.com’s Chatter is Transactional Social Media in this 4/8/2010 post to the ReadWriteCloud:

Soon, Twitter users will be in a better position to get satisfaction from the companies that they do business with. This morning, Salesforce.com is announcing that the Chatter beta developer preview has grown to 500 companies and is integrated with its popular Service Cloud offering. The company has shown its ability to leverage the disruption of social media rather than be disrupted by it.

We had a chance to review the new tools and experience what an end-to-end social-media-driven customer experience looks like. It was eye-opening for us, and it is coming soon to the 70,000-plus customers of the Salesforce platform.

The first thing we learned in our briefing with Salesforce is that the company has fully digested the reality of the new web. The company talks about how it started on a mission to bring the power of great web applications like Salesforce.com to enterprise customers. Now, ten years later, the web and the company have moved on to the new dominant engagement models on the web: Facebook, YouTube, and Twitter. …

<Return to section navigation list> 

blog comments powered by Disqus