Wednesday, May 26, 2010

Windows Azure and Cloud Computing Posts for 5/26/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry explains SQL Azure and Session Tracing ID in this 5/26/2010 post to the SQL Azure Team blog:

If you have been paying close attention, you will have noted that SQL Server Management Studio 2008 R2 has added a new property for connections to SQL Azure -- the Session Tracing ID.

A session tracing identifier is a unique GUID that is generated for every connection to SQL Azure. On the server side, the SQL Azure team tracks and logs all connections by the Session Tracing ID and any errors that arise from that connection. In other words, if you know your session identifier and have an error, Azure Developer Support can look up the error in an attempt to determine what caused it.

SQL Server Management Studio

In SQL Server Management Studio, you can get your session tracing identifier in the properties window for the connection.

[Screenshot: the connection Properties window in SQL Server Management Studio showing the Session Tracing ID]

Transact-SQL

You can also ask for your Session Tracing ID directly in Transact-SQL using this query:

SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())

C#

Alternatively, you can use this C# code:

Guid sessionId;

using (SqlConnection conn = new SqlConnection(…))
{
    // Grab the Session Tracing ID from the new connection
    using (SqlCommand cmd = conn.CreateCommand())
    {
        conn.Open();
        cmd.CommandText = "SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())";
        sessionId = new Guid(cmd.ExecuteScalar().ToString());
    }
}

It is important to note that the Session Tracing ID is per connection to the server, and ADO.NET pools connections on the client side. This means that some instances of SqlConnection will have the same Session Tracing ID, because the connection they represent is recycled from the connection pool.
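To see the pooling effect concretely, here is a minimal sketch (not part of Wayne's post; it assumes a connectionString variable and wraps the query above in a helper):

static Guid GetSessionId(SqlConnection conn)
{
    using (SqlCommand cmd = conn.CreateCommand())
    {
        cmd.CommandText = "SELECT CONVERT(NVARCHAR(36), CONTEXT_INFO())";
        return new Guid(cmd.ExecuteScalar().ToString());
    }
}

// Two connections opened back to back will usually report the same Session
// Tracing ID, because the second SqlConnection is recycled from the pool.
Guid first, second;
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    first = GetSessionId(conn);
}   // the underlying connection returns to the ADO.NET pool here
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    second = GetSessionId(conn);
}
Console.WriteLine(first == second);   // typically True

Adding Pooling=false to the connection string forces a new physical connection, and therefore a new tracing ID, for each SqlConnection.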

Summary

If you have the Session Tracing ID, the server name and the approximate time of the error when calling Azure Developer Support, you can expedite the debugging process and save yourself valuable time. Do you have questions, concerns, comments? Post them below and we will try to address them.

Azret Botash continues his OData-XPO series with OData WCF Data Service Provider for XPO - Part 1 of 5/26/2010:

In the previous post, we have introduced a WCF Data Service Provider for XPO. Let’s now look at how it works under the hood.

For a basic read-only data service provider we only need to implement two interfaces:

  • IDataServiceMetadataProvider : Provides the metadata information about your entities. The data service will query this for your resource types, resource sets etc…
  • IDataServiceQueryProvider : Provides access to the actual entity objects and their properties. Most importantly, it is responsible for the IQueryable on which data operations like $filter, $orderby, $skip are performed.

Custom provider implementations are picked up via IServiceProvider.GetService.
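As a rough sketch of that wiring (the Xpo* class names and the XpoDataSource type below are hypothetical stand-ins, not Azret's actual implementation), the service class implements IServiceProvider and hands the two providers back when the runtime asks for them:

using System;
using System.Data.Services;
using System.Data.Services.Providers;

public class XpoDataService : DataService<XpoDataSource>, IServiceProvider
{
    // Hypothetical provider implementations: one wraps the XPO metadata,
    // the other exposes an IQueryable over XPO queries.
    private readonly XpoMetadataProvider metadata = new XpoMetadataProvider();
    private readonly XpoQueryProvider query;

    public XpoDataService()
    {
        query = new XpoQueryProvider(metadata);
    }

    // WCF Data Services calls GetService to discover custom providers.
    public object GetService(Type serviceType)
    {
        if (serviceType == typeof(IDataServiceMetadataProvider))
            return metadata;
        if (serviceType == typeof(IDataServiceQueryProvider))
            return query;
        return null;   // fall back to the built-in providers for anything else
    }

    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
    }
}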

Azret continues with sample code and concludes:

What’s next?
  • Get the WCF Data Service Provider Source.
  • Learn about implementing custom data service providers from Alex. (I will only cover XPO related details.)

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Kim Cameron wrote in his Interview on Identity and the Cloud post of 5/24/2010 to the IdentityBlog:

I just came across a Channel 9 interview Matt Deacon did with me at the Architect Insight Conference in London a couple of weeks ago.  It followed a presentation I gave on the importance of identity in cloud computing.   Matt keeps my explanation almost… comprehensible - readers may therefore find it of special interest.  Video is here.


In addition, here are my presentation slides and video.

Following is Channel9’s abstract:

"The Internet was born without an identity". -Kim Cameron.
With the growing interest in "cloud computing", the subject of Identity is moving into the limelight. Kim Cameron is a legend in the identity architecture and engineering space. He is currently the chief architect for Microsoft's identity platform and a key contributor to the field at large.
For more info on Architect Insight 2010, including presentation slides and videos go to www.microsoft.com/uk/aic2010

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

msdev.com is moving to Windows Azure according to this news item of 5/26/2010:

msdev is moving into the cloud with the Windows Azure Platform. We want to share our experience with you and will be uploading a series of videos documenting our move.

You can find more information about the move to Azure below:

What's Happening?
We're moving all of our web properties from running in a hosting environment on physical servers to the Windows Azure Platform. In doing so, we hope to:
  • Reduce monthly expenses for hosting and bandwidth
  • Improve streaming and downloading experiences to our end users
The project has been divided into 3 main releases as follows:
  • March: Migrate all msdev content (approx. 60 GB): Release 1
  • April: Migrate Partner site and Channel Development site: Release 2
  • May: Migrate the msdev site and Admin Tools: Release 3
Release One: Migration of 60+ GB of msdev training content into the cloud
This will allow the incorporation of the Content Delivery Network (CDN) features of Windows Azure. This first release was completed in March 2010. The major items to address in this first release were as follows:
  • Physically moving the existing training content from their current location to Windows Azure Blob storage.
  • Updating the URLs stored in the database to point to the new location
  • Changing the way content producers provide video content to the site administrator
  • Updating the tools that allow an administrator to associate a training video with a training event
Release Two: Migration of two web properties – the Partner and Channel Development Websites
A team consisting of individuals with deep knowledge of these properties and knowledge of the Windows Azure Platform has been assembled to work on this effort. These two properties will be live on the Windows Azure Platform in April 2010.
Release Three: Migration of the msdev site and associated admin tools
This migration will occur in May 2010. With the completion of this third release, the entire suite of properties will be running on the Windows Azure Platform.
Coming Soon!
Look out for a series of videos documenting our move to the Windows Azure Platform.
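For context, the bulk of a Release 1-style migration is pushing files into blob storage and then rewriting the URLs stored in the database. A minimal sketch against the Windows Azure StorageClient library of that era might look like the fragment below (the account, container and folder names are invented for illustration, not msdev's actual configuration):

// Requires the Windows Azure SDK 1.x assemblies (Microsoft.WindowsAzure,
// Microsoft.WindowsAzure.StorageClient) plus System.IO; all names are placeholders.
CloudStorageAccount account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=msdevmedia;AccountKey=...");
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("trainingvideos");
container.CreateIfNotExist();
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Blob   // public read for browsers and the CDN
});

foreach (string path in Directory.GetFiles(@"D:\msdev\videos", "*.wmv"))
{
    CloudBlob blob = container.GetBlobReference(Path.GetFileName(path));
    blob.Properties.ContentType = "video/x-ms-wmv";
    blob.UploadFile(path);
    // ...then update the matching URL row in the content database to point at
    // http://msdevmedia.blob.core.windows.net/trainingvideos/<file name>
}

Enabling the Windows Azure CDN for the storage account then exposes the same public blobs through a CDN endpoint for edge delivery.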

Lori MacVittie cautions “Just when you thought the misconceptions regarding cloud computing couldn’t get any worse…they do” in her And That, Young Cloudwalker, Is Why You Fail post to F5’s DevCentral blog:


We have, in general, moved past the question “what is cloud” and onto “what do I need to do to move an application to the cloud?” But the question “what is cloud” appears not to have reached consensus and thus advice on how to move an application into the cloud might be based on an understanding of cloud that is less than (or not at all) accurate. The problem is exacerbated by the reality that there are several types or models of cloud: SaaS, PaaS, and IaaS. Each one has different foci that impact the type of application you can deploy in its environment. For example, moving an application to SaaS doesn’t make much sense at all because SaaS is an application; what you’re doing is moving data, not the application. That makes it difficult to talk in generalities about “the cloud”. 

This lack of consensus results in advice based on assumptions regarding cloud that may or may not be accurate. This is just another reason why it is important for any new technological concept to have common, agreed upon definitions. Because when you don’t, you end up with advice that’s not just inaccurate, it’s downright wrong. 

CLOUD ONLY SUPPORTS WHAT KIND of APPLICATIONS?

Consider this post offering up a “Practical Top Ten Checklist” for migrating applications to “cloud.” It starts off with an implied assumption that cloud apparently only supports web applications:

1. Is your app a web app? It sounds basic, but before you migrate to the web, you need to make sure that your application is a web application. Today, there are simple tools that can easily convert it, but make sure to convert it.

First and foremost, “cloud” is not a synonym for “the Internet” let alone “the web.” The “web” refers to the collective existence of applications and sites that are delivered via HTTP. The Internet is an interconnection of networks that allow people to transfer data between applications. The “cloud” is a broad term for an application deployment model that is elastic, scalable, and based on the idea of pay-per-use. None of these are interchangeable, and none of  them mean the same thing.  The wrongness in this advice is the implied assertion that “cloud” is the same as the “web” and thus a “cloud application” must be the same as a “web application.” This is simply untrue.

A 'cloud' application need not be a web application.  Using on-demand servers from infrastructure-as-a-service (IaaS) providers like Rackspace, Amazon, and GoGrid, you can operate almost any application that can be delivered from a traditional data center server.  This includes client/server architecture applications, non-GUI applications, and even desktop applications if you use Citrix, VNC or other desktop sharing software.  Web applications have obvious advantages in this environment, but it is by no means a requirement.

-- David J. Jilk, CEO Standing Cloud

As David points out, web applications have obvious advantages – including being deployable in a much broader set of cloud computing models than traditional client/server applications – but there is no requirement that applications being deployed in “a cloud” be web applications. The goodness of virtualization means that applications can be “packaged up” in a virtual machine and run virtually (sorry, really) anywhere that the virtual machine can be deployed: your data center, your neighbor’s house, a cloud, your laptop, wherever. Location isn’t important and the type of application is only important with regards to how you access that application. You may need policies that permit and properly route the application traffic applied in the cloud computing provider’s network infrastructure, but a port is a port is a port and for the most part routers and switches don’t care whether it’s bare nekkid TCP or HTTP or UDP or IMAP or POP3 or – well, you get the picture.

This erroneous conclusion might have been reached based on the fact that many cloud-based applications have web-based user interfaces. But when you dig down, under the hood, the bulk of what they actually “do” in terms of functionality is not web-based at all. Take “webmail” for example. That’s a misnomer in that the applications are mail servers; they use SMTP and POP3/IMAP to exchange data. If you take away the web interface the application still works and it could be replaced with a fat client and, in fact, solutions like Gmail allow for traditional client-access to e-mail via those protocols. What’s being scaled in Google’s cloud computing environment is a mix of SMTP, POP3, IMAP (and their secured equivalents) as well as HTTP. Only one of those is a “web” application, the rest are based on Internet standard protocols that have nothing to do with HTTP. 

REDUX: “THE CLOUD” can support virtually any application. Web applications are ideally suited to cloud, but other types of client/server applications will also benefit from being deployed in a cloud computing environment. Old skool COBOL applications running on mainframes are, of course, an exception to the rule. For now.

Lori continues with a “SCALABILITY and REDUNDANCY are WHOSE RESPONSIBILITY??” topic and concludes:

REDUX: ONE of “THE CLOUD”s fundamental attributes is that it enables elastic scalability and, through such scalability implementations, a measure of redundancy. Nothing special is necessarily required of the application to benefit from cloud-enabled scalability, it is what it is. Scalability is about protocols and routing, to distill it down to its simplest definition, and as long as it’s based on TCP or UDP (which means pretty much everything these days) you can be confident you can scale it in the cloud.

D. Johnson’s Different Types of Cloud ERP post of 5/26/2010 to the ERPCloudNews blog explains:

Cloud Infrastructure and its impact on Hosting and SaaS

Cloud technology enables SaaS and powerful new forms of hosting that can reduce the cost of service delivery. Note that cloud does not equal SaaS and cloud is not mutually exclusive from hosting.

How much cloud do you need?

Customers can purchase services with different amounts of “cloud” in the service delivery stack. Assume that we have four distinct layers of delivery: cloud infrastructure (hardware resources for the cloud), cloud platform (operating system resources for the cloud), cloud applications (application resources built for the cloud), and client resources (user interface to the cloud). This distinction helps us illustrate the way cloud services are offered in the diagram below.

The Cloud Stack

Cloud Delivery Options

In this simplified diagram, we show three types of cloud services:

  • Cloud Infrastructure (for example: Amazon, GoGrid) delivers a cloud infrastructure where you install and maintain a platform and an application.
  • Cloud Platform (for example: Windows Azure) delivers a cloud platform where you install and maintain your applications without worrying about the operating environment.
  • Cloud Application (for example: Salesforce.com) delivers a complete application, all you maintain is your client access program which is frequently a browser.
SaaS ERP and Cloud Models

Even legacy ERP vendors are moving to cloud technologies to offer software as a service to their customers. When vendors offer SaaS, the customer is only responsible for maintaining their client device (usually just a browser).

Vendors can offer SaaS utilizing all three cloud infrastructures above. Some vendors such as Acumatica offer all three types of services.

  • Offering SaaS using a cloud application is straightforward. In this case the vendor builds an application which is tightly integrated with infrastructure and hardware so that the three components cannot be separated.
  • Offering SaaS using a cloud platform means that the vendor must manage the application layer separately from the platform layer. This architecture gives the vendor the flexibility to move the application to a separate cloud platform provider.
  • Offering SaaS using a cloud infrastructure is similar to a managed hosting scenario. In this case the vendor installs and manages both an operating system and their application on top of a multi-tenant hardware infrastructure. This technique provides maximum flexibility, but may increase overhead slightly.
Comparing SaaS Offering Options
  • SaaS using a Cloud Application
    - Maximizes efficiencies for “cookie cutter” applications
    - Vendor lock-in; the customer does not have the option to move the application to a different provider
  • SaaS using a Cloud Platform
    - Mix of flexibility and savings
    - Coordination challenges – the vendor manages the application while a service provider manages the infrastructure
  • SaaS using a Cloud Infrastructure
    - Maximizes flexibility to switch providers or move on-premise
    - Some would argue this is nothing more than a hosted service with a slightly lower pricing structure
  • Multi-tenant applications
    - Multi-tenant applications can be deployed in any scenario to reduce the overhead associated with upgrading multiple customers and maintaining different versions of software. This implies that multi-tenancy reduces the flexibility to run an old version of software and limits customization and integration potential. Multi-tenant options should be priced lower to offset the loss of flexibility.
** Recommendation **

For a complex application such as enterprise resource planning (ERP), we advise selecting a vendor that can provide flexibility. ERP systems are not like CRM, email, or other cookie-cutter applications. Your ERP application needs to grow and change as your business changes.

Key questions that you need to ask:
1. Do you need significant customizations and interfaces with on-premise systems?
2. Will you need to move your ERP architecture on-premise in the future?
3. Do you need to own your operating environment and the location of your data?
4. Do you prefer to own software instead of renting it?

If you answered “yes” or “maybe” to any of these questions, you should consider the Cloud Platform or Cloud Infrastructure options. These options provide maximum flexibility as well as the option to own your software.

If you answered “no” to these questions, then a cloud application may provide price benefits that offset the vendor lock-in issues. Be careful that the price that the vendor quotes in year 1 is not going to change significantly in the future when it may be difficult to leave the platform.

Reuben Krippner updated his PRM Accelerator (R2) for Dynamics CRM 4.0 project on 5/25/2010 and changed the CodePlex license from the usual Microsoft Public License (MsPL):

The Partner Relationship Management (PRM) Accelerator allows businesses to use Microsoft Dynamics CRM to distribute sales leads and centrally manage sales opportunities across channel partners. It provides pre-built extensions to the Microsoft Dynamics CRM sales force automation functionality, including new data entities, workflow and reports. Using the PRM Accelerator, companies can jointly manage sales processes with their channel partners through a centralized Web portal, as well as extend this integration to automate additional business processes.

The accelerator installation package contains all source code, customizations, workflows and documentation.

Please note that this is R2 of the PRM Accelerator - there are a number of new capabilities which are detailed in the documentation.

Please also review the new License agreement before working with the accelerator.

According to Reuben’s re-tweet, CRMXLR8 offers “optional use of #Azure to run your partner portal; can also run your portal on-prem[ises] if you like!”

<Return to section navigation list> 

Windows Azure Infrastructure

Jonathan Feldman’s Cloud ROI: Calculating Costs, Benefits, Returns research report for InformationWeek::Analytics is available for download as of 5/25/2010:

The decision on whether to outsource a given IT function must be based on a grounded discussion about data loss risk, lock-in and availability, total budget picture, reasonable investment life spans, and an ability to admit that sometimes, good enough is all you need.

Cloud ROI: Calculating Costs, Benefits, Returns
Think that sneaking feeling of irrelevance is just your imagination? Maybe, maybe not. Our April 2010 InformationWeek Analytics Cloud ROI Survey gave a sense of how nearly 400 business technology professionals see the financial picture shaking out for public cloud services. One interesting finding: IT is more confident that business units will consult them on cloud decisions than our data suggests they should be.

Fact is, outsourcing of all types is seen by business leaders as a way to get new projects up fast and with minimal muss, fuss and capital expenditures. That goes double for cloud services. But when you look forward three or five years, the cost picture gets murkier. When a provider perceives that you’re locked in, it can raise rates, and you might not save a red cent on management in the long term. In fact, a breach at a provider site could cost you a fortune—something that’s rarely factored into ROI projections.

In our survey, we asked who is playing the Dr. No role in cloud. We also examined elasticity and efficiency. Premises systems—at least ones that IT professionals construct—are always overbuilt in some way, shape or form. We all learned the hard way that you’d better build in extra, since the cost of downtime to add more can be significant. Since redundancy creates cost, we asked about these capacity practices, flexibility requirements, key factors in choosing business systems, and how respondents evaluate ROI for these assets.

Your answers showed us that adopting organizations aren’t nearly as out to lunch as cloud naysayers think. In this report, we’ll analyze the current ROI picture and discuss what IT planners should consider before putting cloud services into production, to ensure that the fiscal picture stays clear. (May 2010)

  • Survey Name: InformationWeek Analytics Cloud ROI Survey
  • Survey Date: April 2010
  • Region: North America
  • Number of Respondents: 393

Download

Daniel Robinson reported “Ryan O'Hara, Microsoft's senior director for System Center, talks to V3.co.uk about bringing cloud resources under the control of existing management tools” in his Microsoft to meld cloud and on-premise management article of 5/26/2010 for the V3.co.uk site:

Microsoft's cloud computing strategy has so far delivered infrastructure and developer tools, but the company is now looking to add cloud support into its management platform to enable businesses to control workloads both on-premise and in the cloud from a single console.

Microsoft's System Center portfolio has focused on catching up with virtualisation leader VMware on delivering tools that can manage both virtual and physical machines on-premise, according to Ryan O'Hara, senior director of System Center product management at Microsoft.

"Heretofore we've been investing in physical-to-virtual conversion integrated into a single admin experience, and moving from infrastructure to applications and service-level management," he said.

Microsoft is now looking at a third dimension, that of enabling customers to extend workloads from their own on-premise infrastructure out to a public cloud, while keeping the same level of management oversight.

"We think that on-premise architecture will be private cloud-based architecture, and this is one we're investing deeply in with Virtual Machine Manager and Operations Manager to enable these private clouds," said O'Hara.

Meanwhile, the public cloud element might turn out to be a hosted cloud, an infrastructure-as-a-service, a platform-as-a-service or a Microsoft cloud like Azure.

The challenge is to extend the System Center experience to cover both of these with consistency, according to O'Hara. He believes this is where Microsoft has the chance to create some real differentiation in cloud services, at least from an enterprise viewpoint.

"I think, as we extend cross these three boundaries, it puts System Center and Microsoft into not just an industry leading position, but a position of singularity. I don't think there is another vendor who will be able to accomplish that kind of experience across all three dimensions," he said.

This is territory that VMware is also exploring with vSphere and vCloud, and the company signaled last year that it planned to give customers the ability to move application workloads seamlessly between internal and external clouds.

Daniel continues his story on page 2 and concludes:

Microsoft is also spinning back the expertise it has gained from Azure and its compute fabric management capabilities into Windows Server and its on-premise infrastructure, according to O'Hara. Announcements around this are expected in the next couple of months.

The next generation of Virtual Machine Manager and Operations Manager, which Microsoft has dubbed its 'VNext' releases, are due in 2011 and will have "even more robust investments incorporating cloud scenarios", O'Hara said.

Ryan Kim’s Survey: Bay Area more tech and cloud savvy article of 5/26/2010 for the San Francisco Chronicle’s The Tech Chronicles blog reports:

We here in the Bay Area are a tech-savvy lot, down with cloud computing (when we understand it) and emerging technologies.

That's the upshot of a survey by Penn Schoen Berland, a market research and consulting firm that is opening an office in San Francisco. Not necessarily ground-breaking stuff considering we're in Silicon Valley but it's still interesting to see how we stand compared to the rest of the country.

According to the survey, Bay Area residents are more excited by technology (78 percent of Bay Area respondents vs. 67 percent for the U.S.) and are more involved in technology innovations (60 percent Bay Area compared to 50 percent for the U.S.).

While only 18 percent of Americans can accurately define the cloud and cloud computing, 23 percent of Bay Area residents know what it is. Bay Area residents are more interested in using the cloud for things like applications (81 percent Bay Area vs. 65 percent U.S.), backing up computer or phone data (72 percent Bay Area vs. 64 percent U.S.) and online document collaboration (66 percent Bay Area vs. 51 percent U.S.).

When it comes to new technology, 60 percent of Bay Area respondents said they like to have the latest and greatest, compared to 50 percent for the rest of the country. People here also want to be involved in making new tech (57 percent Bay Area vs. 49 percent U.S.).

Bay Area residents are significantly more likely to use Facebook, Firefox, Gmail, iTunes and Google Chrome, Google Docs and LinkedIn than others around the country. We also really like Microsoft Office, but we're less likely to use Microsoft's Internet Explorer browser.

Penn Schoen Berland’s findings are certainly no surprise.

John Soat asks Cloud 2.0: Are We There Yet? in this 5/25/2010 post to Information Week’s Plug Into the Cloud blog (sponsored by Microsoft):

Cloud computing is still part hype, part reality. Broken down into its constituent parts (SaaS, PaaS, IaaS) it is a pragmatic strategy with a history of success. But the concept of “the cloud” still has some executives scratching their heads over what’s real, what’s exactly new about it, and how it fits into their IT plans. Is it time to move to Cloud 2.0?

Jeffrey Kaplan, managing director of ThinkStrategies, a cloud consultancy, has written an interesting column for InformationWeek’s Global CIO blog about the evolution of cloud computing and how it is poised to enter the “2.0” stage. The ubiquitous “2.0” designation passed from software development to popular culture several years ago, and it is used to express a significant advancement or shift in direction. Because of its ubiquity, the 2.0 moniker has lost some of its specificity.

Right now, public cloud computing is dominated by the “XaaS” models: software-as-a-service (SaaS), platform-as-a-service (PaaS), and infrastructure-as-a-service (IaaS). Some organizations are experimenting with private clouds, which import the public cloud’s capabilities around resource scalability and dynamic flexibility into proprietary data centers.

To me, Cloud 2.0 is exemplified in one word: hybrid. Cloud computing will offer its most compelling advantages when organizations can combine private clouds and public clouds (XaaS) in IT architectures that stretch the definition of flexibility and agility.

An example of that Cloud 2.0 direction is Microsoft’s Azure strategy. Azure is Microsoft’s platform-as-a-service offering, capable of supporting online demand for dynamic processing and services. Microsoft is also building similar automation and management capabilities into its Windows Server technology, which should enable the development of private clouds and their integration with the public cloud, specifically in this instance Azure. [Emphasis added.]

It might be a little soon to jump to the Cloud 2.0 designation just yet. But as development continues, it’s not that far away, and it’s not too soon to start figuring it into your IT strategy.

Ellen Rubin continues the public-vs-private-cloud controversy in her Private Clouds: Old Wine in a New Bottle post of 5/25/2010, which describes “The Need for Internal Private Cloud”:

I recently read a Bank of America Merrill Lynch report about cloud computing, and they described private clouds as "old wine in a new bottle." I think they nailed it!

The report points out that a typical private cloud set-up looks much the same as the infrastructure components currently found in a corporate data center, with virtualization added to the mix. While virtualization provides somewhat better server utilization, the elasticity and efficiency available in the public cloud have private clouds beat by a mile.

In short, the term "private cloud" is usually just a buzzword for virtualized internal environments that have been around for years. By replicating existing data center architectures, they also recreate the same cost and maintenance issues that cloud computing aims to alleviate.

Despite their limitations, there is still a lot of industry talk about creating internal private clouds using equipment running inside a company’s data center. So why do people consider building private clouds anyway?

To answer this question, you have to step back and examine some of the fundamental reasons why people are looking to cloud computing:

  1. The current infrastructure is not flexible enough to meet business needs
  2. Users of IT services have to wait too long to get access to additional computing resources
  3. CFOs and CIOs are tightening budgets, and they prefer operational expenses (tied directly to business performance) vs. capital expenses (allocated to business units)

In every case, the public cloud option outperforms the private cloud. Let’s examine each point:

  1. Flexibility – the ability to access essentially unlimited computing resources as you need them provides the ultimate level of flexibility. The scale of a public cloud like Amazon’s EC2 cannot possibly be replicated by a single enterprise. And that’s just one cloud – there are many others, allowing you to choose a range of providers according to your needs.
  2. Timeframes – to gain immediate access to public cloud compute resources, you only need an active account (and of course the appropriate corporate credentials). With a private cloud, users have to wait until the IT department completes the build out of the private cloud infrastructure. They are essentially subject to the same procurement and deployment challenges that had them looking at the public cloud in the first place.
  3. Budgets – everyone knows that the economic environment has brought a new level of scrutiny on expenses. In particular, capital budgets have been slashed. Approving millions of dollars (at least) to acquire, maintain and scale a private cloud sufficient for enterprise needs is becoming harder and harder to justify — especially when the "pay as you go" approach of public clouds is much more cost-effective.

There are many legitimate concerns that people have with the public cloud, including security, application migration and vendor lock-in. It is for these reasons and more that we created CloudSwitch. We’ve eliminated these previous barriers, so enterprises can take immediate advantage of the elasticity and economies of scale available in multi-tenant public clouds. Our technology is available now, and combines end-to-end security with point-and-click simplicity to revolutionize the way organizations deploy and manage their applications in public clouds.

Sir Isaac Newton may not have dreamed about clouds, but his first Law of Motion, "a body at rest tends to stay at rest", has been a good harbinger of cloud adoption until now. It is fair to expect that people will grasp for private clouds simply because it’s more comfortable (it’s the status quo). However, the rationale for public cloud adoption is so compelling that a majority of organizations will choose to embrace the likes of Amazon, Terremark, and other clouds. As adoption increases, private clouds will be used only for select applications, thus requiring far fewer resources than they currently demand. We’re also seeing the emergence of “hybrid” clouds that allow customers to toggle compute workloads between private and public clouds on an as-needed basis.

In the end, we will have new wine and it will be in a new bottle. With CloudSwitch technology, 2010 is shaping up to be a great vintage.

Rory Maher, CFA claims in his THE MICROSOFT INVESTOR: Cloud Computing Will Be A $100 Billion Market post of 5/18/2010 to the TBI Research blog:

Cloud Computing Is A $100 Billion Market (Merrill Lynch)
Merrill Lynch analyst Kash Rangan believes the addressable market for cloud computing is $100 Billion (that's about twice Microsoft's annual revenue).  This is broken down between applications ($48 Billion), platform ($26 Billion), and infrastructure ($35 Billion).  Along with Google and Salesforce.com, Microsoft is one of the few companies positioned well across all segments of the industry.  Why is this important?  "Azure, while slow to take off could accelerate revenue and profit growth by optimizing customer experience and generating cross-sell of services."  This is true, but the bigger story is if Microsoft can gain enough traction in cloud computing to offset losses in share by its Windows franchise.  At this early stage it is not looking like this will be the case.

and reports:

Microsoft Presents At JP Morgan Conference: Tech Spend Encouraging, But Cloud A Risk (JP Morgan)
Stephen Elop, President of Microsoft Business Division (MBD) presented at JP Morgan's TMT conference yesterday.  Analyst John DiFucci had the following takeaways:

  • Elop was cautious but did indicate he was seeing early signs of an increase in business spending.
  • The company is rolling out cloud-like features to its products in order to fend off competitors, but cloud products would likely decrease overall profit margins.
  • Office 2010 getting off to a strong start.  Elop noted "there were 8.6 million beta downloads of Office 2010, or three times the number of beta downloads seen with Office 2007."

<Return to section navigation list> 

Cloud Security and Governance

See Chris Hoff’s (@Beaker) “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure” Gluecon keynote in the Cloud Computing Events section below.

Lydia Leong analyzes Shifting the software optimization burden in her 5/26/2010 post to her CloudPundit: Massive-Scale Computing (not Gartner) blog:

Historically, software vendors haven’t had to care too much about exactly how their software performed. Enterprise IT managers are all too familiar with the experience of buying commercial software packages and/or working with integrators in order to deliver software solutions that have turned out to consume far more hardware than was originally projected (and thus caused the overall project to cost more than anticipated). Indeed, many integrators simply don’t have anyone on hand that’s really a decent architect, and lack the experience on the operations side to accurately gauge what’s needed and how it should be configured in the first place.

Software vendors needed to fix performance issues so severe that they were making the software unusable, but they did not especially care whether a reasonably efficient piece of software was 10% or even 20% more efficient, and given how underutilized enterprise data centers typically are, enterprises didn’t necessarily care, either. It was cheaper and easier to simply throw hardware at the problem rather than to worry about either performance optimization in software, or proper hardware architecture and tuning.

Software as a service turns that equation around sharply, whether multi-tenant or hosted single-tenant. Now, the SaaS vendor is responsible for the operational costs, and therefore the SaaS vendor is incentivized to pay attention to performance, since it directly affects their own costs.

Since traditional ISVs are increasingly offering their software in a SaaS model (usually via a single-tenant hosted solution), this trend is good even for those who are running software in their own internal data centers — performance optimizations prioritized for the hosted side of the business should make their way into the main branch as well.

I am not, by the way, a believer that multi-tenant SaaS is inherently significantly superior to single-tenant, from a total cost of ownership, and total value of opportunity, perspective. Theoretically, with multi-tenancy, you can get better capacity utilization, lower operational costs, and so forth. But multi-tenant SaaS can be extremely expensive to develop. Furthermore, a retrofit of a single-tenant solution into a multi-tenant one is a software project burdened with both incredible risk and cost, in many cases, and it diverts resources that could otherwise be used to improve the software’s core value proposition. As a result, there is, and will continue to be, a significant market for infrastructure solutions that can help regular ISVs offer a SaaS model in a cost-effective way without having to significantly retool their software.

<Return to section navigation list> 

Cloud Computing Events

The Glue 2010 conference (#gluecon) at the Omni Interlocken Resort in Broomfield (near Denver), CO is off to a roaring start on 5/26/2010 with controversial keynotes by database visionaries:

Hopefully, Gluecon or others have recorded the sessions and will make the audio and/or video content available to the public. If you have a link to recorded content, please leave a copy in a comment.

Michael Stonebraker will present a Webinar - Mike Stonebraker on SQL "Urban Myths" on 6/3/2010 at 1:00 PM - 2:00 PM PDT (advance registration required.) His Errors in Database Systems, Eventual Consistency, and the CAP Theorem post to the Communications of the ACM blog appears to be the foundation of his Gluecon 2010 keynote. (Be sure to read the comments.)

Chris Hoff (@Beaker) presented his “Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure” keynote from 11:40 to 12:10 MDT. Here’s a link to an earlier 00:58:36 Cloudincarnation from TechNet’s Security TechCenter: BlueHat v9: Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure:

Where and how our data is created, processed, accessed, stored, backed up and destroyed in what are sure to become massively overlaid cloud-based services - and by whom and using whose infrastructure - yields significant concerns related to security, privacy, compliance, and survivability. This presentation shows multiple cascading levels of failure associated with relying on cloud-on-cloud infrastructure and services, including exposing flawed assumptions and untested theories as they relate to security, privacy, and confidentiality in the cloud, with some unique attack vectors.

Presented by Chris Hoff, Director of Cloud and Virtualization Solutions, Cisco

[Photos: Eric Brewer, Michael Stonebraker, Chris Hoff]

See also BlueHat v9: Interview with Katie Moussouris and Chris Hoff:

Chris Hoff is Director of Cloud and Virtualization Solutions, Data Center Solutions at Cisco Systems. He has over fifteen years of experience in high-profile global roles in network and information security architecture, engineering, operations and management with a passion for virtualization and all things cloud.

Presented by Katie Moussouris, Senior Security Strategist, Security Development Lifecycle, Microsoft and Chris Hoff, Director of Cloud and Virtualization Solutions, Cisco

The Voices of Innovation Blog posted this Engagement in Washington: Brad Smith on Cloud Computing article about the Gov 2.0 Expo on 5/26/2010:

This morning, Microsoft Senior Vice President and General Counsel Brad Smith gave a keynote speech, "New Opportunities and Responsibilities in the Cloud," at the Gov 2.0 Expo in Washington, DC. Voices for Innovation has been covering cloud computing policy and business opportunities for several months, and earlier this year, Smith spoke with VFI after delivering a speech at the Brookings Institution. You can view that video at this link.

We were going to recap Smith's speech at Gov 2.0, but Smith himself posted a blog, "Unlocking the Promise of the Cloud in Government," on the Microsoft on the Issues blog. We have re-posted this brief essay below. One significant takeaway: engagement. Smith writes, "Microsoft welcomes governments and citizens alike to participate in shaping a responsible approach to the cloud." That's what VFI is all about. VFI members are on the front lines of technology, developing and implementing innovative solutions. You should bring your expertise to discussions when the opportunity arises. Now, from Brad Smith...

Unlocking the Promise of the Cloud in Government

By Brad Smith
Senior Vice President and General Counsel

Over the past few months, starting with my January speech at the Brookings Institution in Washington, D.C., I’ve talked a lot about the great potential for cloud computing to increase the efficiency and productivity of governments, businesses and individual consumers. To realize those benefits, we need to establish regulatory and industry protections that give computer users confidence in the privacy and security of cloud data.

Today, I returned to Washington to continue the discussion as one of the plenary speakers at the Gov 2.0 Expo 2010.

As I shared during my presentation, we are constantly seeing powerful new evidence of the value of cloud computing.

Today, for example, we announced that the University of Arizona chose Microsoft’s cloud platform to facilitate communications and collaboration among the school’s 18,000 faculty and staff.   After initially looking at various supposedly “free” online services, the institution selected Microsoft’s Business Productivity Online Suite to update its aging e-mail system and to provide new calendaring and collaboration tools.  U. of A. officials concluded that, as a research university that conducts $530 million in research annually, it needed the enterprise-level security and privacy protections that BPOS could provide, but which the alternative services could not match.

I also talked about how cloud computing offers governments new opportunities to provide more value from publicly available data. The city of Miami, for instance, is using Microsoft’s Windows Azure cloud platform for Miami311, an online service that allows citizens to map some 4,500 non-emergency issues in progress.  This capability has enabled the city to transform what had essentially been a difficult-to-use list of outstanding service requests into a visual map that shows citizens each and every “ticket” in progress in their own neighborhood and in other parts of the city.

Stories like these are increasingly common.  Across the United States, at the state and local level, Microsoft is provisioning 1.4 million seats of hosted services, giving customers the option of cloud services. 

At Microsoft, we see how open government relies heavily on transparency, particularly around the sharing of information. This means not only making data sets available to citizens, but making the information useful.  If we want to engage citizens, then the cloud can play a role in bringing government information to life in ways that citizens can use in their daily activities.

But with new opportunities come new challenges.  The world needs a safe and open cloud with protection from thieves and hackers that will deliver on the promise of open government.  According to a recent survey conducted by Microsoft, more than 90 percent of Americans already are using some form of cloud computing. But the same survey found that more than 75 percent of senior business leaders believe that safety, security and privacy are top potential risks of cloud computing, and more than 90 percent of the general populations are concerned about the security and privacy of personal data.

Given the enormous potential benefits, cloud computing is clearly the next frontier for our industry.  But it will not arrive automatically.   Unlocking the potential of the cloud will require better infrastructure to increase access.  We will need to adapt long-standing relationships between customers and online companies around how information will be used and protected.  And we will need to address new security threats and questions about data sovereignty.

The more open government we all seek depends, in part, on a new conversation within the technology industry, working in partnership with governments around the world.  Modernizing security and privacy laws is critical, and broad agreement is needed on security and privacy tools that will help protect citizens.  We need greater collaboration among governments to foster consistency and predictability.  Microsoft welcomes governments and citizens alike to participate in shaping a responsible approach to the cloud.

***

You can follow VFI on Twitter at http://twitter.com/vfiorg.

Mike Erickson posted the Agenda - Azure Boot Camp SLC on 5/25/2010:

As I stated in my last post we are holding a day of Windows Azure training and hands-on labs in Salt Lake City on June 11th. Here is the link to register:
Register for Salt Lake City Azure Boot Camp
And here is the agenda:

  • Welcome and Introduction to Windows Azure
  • Lab: "Hello, Cloud"
  • Presentation: Windows Azure Hosting
  • Presentation: Windows Azure Storage
  • Lab: Hosting and Storage
  • Presentation: SQL Azure
  • Lab: SQL Azure
  • Presentation: AppFabric
  • Wrap-up

The day will be a pretty even split of presentations and actual hands-on work. You will be given 2 days of access to the Windows Azure platform allowing you to work through the labs and have a little more time to explore yourself.

Please let me know if you have any questions mike.erickson@neudesic.com - I'm hoping to see the event fill up!

My List of 34 Cloud-Related Sessions at the Microsoft Worldwide Partner Conference (WPC) 2010 presents the results of a search for sessions returned by WPC 2010’s Session Catalog for Track = Cloud Services (19) and Key Word = Azure (15).

Scott Bekker claims “With a little more than a month until the Microsoft Worldwide Partner Conference in Washington, D.C., session descriptions are beginning to appear in earnest on the Microsoft Partner Network Portal” in a preface to his 11 Things to Know About... WPC Sessions article of 5/26/2010 for the Redmond Channel Partner Online blog:

With a little more than a month until the Microsoft Worldwide Partner Conference (WPC) in Washington, D.C., session descriptions are beginning to appear in earnest on the Microsoft Partner Network (MPN) Portal. We've scoured the available listings for some highlights.

Steve Ballmer Keynote. The CEO is confirmed for his usual keynote. Even when Ballmer doesn't have news, partners tell us they draw energy from Ballmer's WPC speeches. This year, he probably will have news about what Microsoft being "all in" on the cloud means to partners.

Kevin Turner Keynote. Partners are in the COO's portfolio, so it's always crucial to hear what he has to say. At the least, he's usually entertaining in his unbridled competitiveness.

Allison Watson Keynotes. Worldwide Partner Group CVP Watson will play her usual role, introducing the big keynotes each day and giving her own. Expect a lot of detail about cloud programs and MPN transition specifics.

Cloud Sales. If you're interested in making money with the cloud Microsoft-style, sessions galore await at the WPC. A few that caught our attention: "Best Practices: Selling Cloud-Based Solutions to a Customer" and "Better Together: The Next Generations of Microsoft Online Services + Microsoft Office 2010."

A Wide Lens on the Sky. For those with a more philosophical bent, there's "Cloud as Reality: The Upcoming Decade of the Cloud and the Windows Azure Platform."

Geeking Out in the Cloud. If you want to drill down, there are sessions like this one for ISVs: "Building a Multi-Tenant SaaS Application with Microsoft SQL Azure and Windows Azure AppFabric."

Vertical Clouds. Many sessions are geared toward channeling cloud computing into verticals, such as, "Education Track: Cloud Computing and How This Fits into the Academic Customer Paradigm."

Other Verticals. If a session like "Driving Revenue with Innovative Solutions in Manufacturing Industries" doesn't float your boat, there's probably something equally specific in your area that will.

Market Research. Leverage Microsoft's ample resources in sessions like, "FY11 Small Business Conversations."

"Capture the Windows 7 Opportunity." Judging by the adoption curve, you'll want to execute on anything you learn from this session PDQ.

"Click, Try, Buy! A Partner's Guide to Driving Customer Demand Generation with Microsoft Dynamics CRM!" How could you pass up a session with that many exclamation points?

Rethinking an SMB Competency

One of the new competencies of the Microsoft Partner Network that was supposed to go live in May was the Small Business Competency. Hold the music. Eric Ligman, global partner experience lead for Microsoft, wrote the following on the SMB Community Blog on May 6:

"In the small business segment, we are 'doubling-down' on the [Small Business Specialist Community (SBSC)] designation by making it our lead MPN offering for Partners serving the needs of small business. We are postponing the launch of the Small Business Competency and Small Business Advanced Competency in the upcoming year to further evaluate the need to have a separate offering outside of SBSC in the small business segment."

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

David Linthicum claims “Perhaps it's time these two 800-pound cloud computing gorillas got engaged” and asks Is a Salesforce.com and Google marriage in the works? in this 5/25/2010 post to InfoWorld’s Cloud Computing blog:

InfoWorld's editor Eric Knorr asked the question on everyone's mind: Will Google buy Salesforce next? He makes a good point in his blog post:

“Google I/O last week provided yet another indication of the two companies' converging interests. Just as VMware and Salesforce struck an alliance last month to enable Java applications to run on the Force.com cloud development platform, VMware and Google announced a similar arrangement for the Google App Engine platform.”

Google and Salesforce chase the same market and, thus, could provide strong channels for each other. Though Salesforce's Force.com and Google's App Engine overlap a bit, they can be combined fairly easily, and the hybrid product would be compelling to developers looking to get the best bang for the line of code.

A few more reasons:

  • Google and Salesforce have well-defined points of integration established from an existing agreement. Thus, creating additional bindings for combined offerings shouldn't be much of a learning effort.
  • Salesforce won't have its market dominance forever, which investors and maybe even the executives at Salesforce.com should understand. I suspect they'll want to sell at the top of the market, which is now.
  • What would you say to a free version of Salesforce.com driven by ad revenue? I have a feeling the market would be there to support this, and it would provide a great upsell opportunity into the subscription service.

William Vambenepe (@vambenepe) wrote in his Dear Cloud API, your fault line is showing post of 5/25/2010:

Most APIs are like hospital gowns. They seem to provide good coverage, until you turn around.

I am talking about the dreadful state of fault reporting in remote APIs, from Twitter to Cloud interfaces. They are badly described in the interface documentation and the implementations often don’t even conform to what little is documented.

If, when reading a specification, you get the impression that the “normal” part of the specification is the result of hours of whiteboard debate but that the section that describes the faults is a stream-of-consciousness late-night dump that no-one reviewed, well… you’re most likely right. And this is not only the case for standard-by-committee kind of specifications. Even when the specification is written to match the behavior of an existing implementation, error handling is often incorrectly and incompletely described. In part because developers may not even know what their application returns in all error conditions.

After learning the lessons of SOAP-RPC, programmers are now more willing to acknowledge and understand the on-the-wire messages received and produced. But when it comes to faults, there is still a tendency to throw their hands in the air, write to the application log and then let the stack do whatever it does when an unhandled exception occurs, on-the-wire compliance be damned. If that means sending an HTML error message in response to a request for a JSON payload, so be it. After all, it’s just a fault.

But even if fault messages may only represent 0.001% of the messages your application sends, they still represent 85% of those that the client-side developers will look at.

Client developers can’t even reverse-engineer the fault behavior by hitting a reference implementation (whether official or de-facto) the way they do with regular messages. That’s because while you can generate response messages for any successful request, you don’t know what error conditions to simulate. You can’t tell your Cloud provider “please bring down your user account database for five minutes so I can see what faults you really send me when that happens”. Also, when testing against a live application you may get a different fault behavior depending on the time of day. A late-night coder (or a daytime coder in another time zone) might never see the various faults emitted when the application (like Twitter) is over capacity. And yet these will be quite common at peak time (when the coder is busy with his day job… or sleeping).

All these reasons make it even more important to carefully (and accurately) document fault behavior.

The move to REST makes matters even worse, in part because it removes SOAP faults. There’s nothing magical about SOAP faults, but at least they force you to think about providing an information payload inside your fault message. Many REST APIs replace that with HTTP error codes, often accompanied by a one-line description with a sometimes unclear relationship with the semantics of the application. Either it’s a standard error code, which by definition is very generic or it’s an application-defined code at which point it most likely overlaps with one or more standard codes and you don’t know when you should expect one or the other. Either way, there is too much faith put in the HTTP code versus the payload of the error. Let’s be realistic. There are very few things most applications can do automatically in response to a fault. Mainly:

  • Ask the user to re-enter credentials (if it’s an authentication/permission issue)
  • Retry (immediately or after some time)
  • Report a problem and fail

So make sure that your HTTP errors support this simple decision tree. Beyond that point, listing a panoply of application-specific error codes looks like an attempt to look “RESTful” by overdoing it. In most cases, application-specific error codes are too detailed for most automated processing and not detailed enough to help the developer understand and correct the issue. I am not against using them but what matters most is the payload data that comes along.
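In client code that decision tree stays small. Here is a hedged C# sketch (the endpoint URL and the status-code groupings are illustrative, not prescribed by any particular API):

// Assumes System.Net and System.IO; the URL is a placeholder.
var request = (HttpWebRequest)WebRequest.Create("https://api.example.com/widgets/42");
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        string body = reader.ReadToEnd();    // success path: process the payload
    }
}
catch (WebException ex)
{
    var response = ex.Response as HttpWebResponse;
    if (response == null)
        throw;                               // network-level failure: report and fail

    int status = (int)response.StatusCode;
    if (status == 401 || status == 403)
    {
        // ask the user to re-enter credentials
    }
    else if (status == 500 || status == 502 || status == 503)
    {
        // retry, ideally after a back-off interval
    }
    else
    {
        // report the fault payload (if any) and fail
        using (var reader = new StreamReader(response.GetResponseStream()))
            Console.Error.WriteLine(reader.ReadToEnd());
    }
}

Anything more granular than these three branches generally belongs in the fault payload rather than in the status code.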

On that aspect, implementations generally fail in one of two extremes. Some of them tell you nothing. For example the payload is a string that just repeats what the documentation says about the error code. Others dump the kitchen sink on you and you get a full stack trace of where the error occurred in the server implementation. The former is justified as a security precaution. The latter as a way to help you debug. More likely, they both just reflect laziness.

In the ideal world, you’d get a detailed error payload telling you exactly which of the input parameters the application choked on and why. Not just vague words like “invalid”. Is parameter “foo” invalid for syntactical reasons? Is it invalid because inconsistent with another parameter value in the request? Is it invalid because it doesn’t match the state on the server side? Realistically, implementations often can’t spend too many CPU cycles analyzing errors and generating such detailed reports. That’s fine, but then they can include a link to a wiki or a knowledge base where more details are available about the error, its common causes and the workarounds.

Your API should document all messages accurately and comprehensively. Faults are messages too.

<Return to section navigation list> 
