Thursday, February 18, 2010

Windows Azure and Cloud Computing Posts for 2/17/2010+

Windows Azure, SQL Azure Database, Azure AppFabric and related cloud computing topics now appear in this daily series.

 
• Update 2/18/2010: SQL Azure Service Update (SU) 1 and more.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

  • Azure Blob, Table and Queue Services
  • SQL Azure Database (SADB)
  • AppFabric: Access Control, Service Bus and Workflow
  • Live Windows Azure Apps, APIs, Tools and Test Harnesses
  • Windows Azure Infrastructure
  • Cloud Security and Governance
  • Cloud Computing Events
  • Other Cloud Computing Platforms and Services

To use the above links, first click the post’s title to display the single article, then navigate using the links.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; the chapters will be updated in February 2010 for the January 4, 2010 commercial release.

Azure Blob, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

• David Robinson reports SQL Azure Database Service Update 1 is Live in a 2/17/2010 (6:24 PM) post to the SQL Azure Team blog:

Two short weeks have passed since the general availability of SQL Azure and the Windows Azure platform and our first Service Update (SU1) is live. Thank you for continuing to provide feedback; we are already incorporating some of it. In addition to a few bug fixes, we have added the following new features:

Troubleshooting and Supportability DMVs

Dynamic Management Views (DMVs) return state information that can be used to monitor the health of a database, diagnose problems, and tune performance. These views are similar to the ones that already exist in the on-premises edition of SQL Server.

The DMVs we have added are as follows:

  • sys.dm_exec_connections – This view returns information about the connections established to your database.
  • sys.dm_exec_requests – This view returns information about each request that executes within your database.
  • sys.dm_exec_sessions – This view shows information about all active user connections and internal tasks.
  • sys.dm_tran_database_transactions – This view returns information about transactions at the database level.
  • sys.dm_tran_active_transactions – This view returns information about transactions for your current logical database.
  • sys.dm_db_partition_stats – This view returns page and row-count information for every partition in the current database.
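
As a minimal sketch of how these fit together (assuming the SQL Azure views expose the same columns as their on-premises SQL Server counterparts), the following query pairs each active session with the request it is currently executing, which is handy for spotting long-running statements:

-- Pair each active session with the request it is currently executing
SELECT s.session_id, s.login_name, r.command, r.status, r.total_elapsed_time
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id;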

Ability to move between editions

One of the most requested features was the ability to move up and down between the Web and Business database editions. This gives you greater flexibility: if you approach the upper size limit of a Web edition database, you can easily upgrade with a single command. You can also downgrade if your database is below the allowed size limit.

You can now do that using the following syntax:

ALTER DATABASE database_name
{
    MODIFY (MAXSIZE = {1 | 10} GB)
}
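
For example, assuming a hypothetical database named MyAppDb that is nearing the 1 GB Web edition cap, the upgrade (and a later downgrade) is a single statement each way:

-- Move up to the 10 GB (Business edition) size limit
ALTER DATABASE MyAppDb
MODIFY (MAXSIZE = 10 GB)

-- Move back down, provided the data again fits under 1 GB
ALTER DATABASE MyAppDb
MODIFY (MAXSIZE = 1 GB)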

Idle session timeouts

We have increased the idle connection timeout from 5 to 30 minutes. This will improve your experience while using connection pooling and other interactive tools.

Long running transactions

Based on customer feedback, we have improved our algorithm for terminating long running transactions. These changes will substantially increase the quality of service and allow you to import and export much larger amounts of data without having to resort to breaking your data down into chunks.

We value and act upon the feedback that you provide to us, so please keep it coming and we will keep the updates coming.

<Return to section navigation list> 

AppFabric: Access Control, Service Bus and Workflow

Mieszko Matkowski’s Name Identifiers in SAML assertions post of 2/17/2010 to the “Geneva” Team (CardSpace) blog begins:

In this post I will show how to set up your Relying Party Trust issuance policy to create a name identifier in the assertion. For AD FS 2.0 the name identifier is yet another claim, but you may want to generate name identifiers if you plan to:

  • Use the SAML 2.0 protocol (in particular, a name identifier is necessary if you plan to take advantage of the SAML logout protocol),
  • Federate with a non-AD FS 2.0 deployment.

I will show name identifier configuration in two privacy-sensitive scenarios: the persistent identifier and the transient identifier. A persistent identifier is meant to obfuscate the real user identity, so it’s not possible to link user activities across different relying parties. At the same time, the STS guarantees that the persistent ID will remain the same each time the same user logs in again. A transient identifier has similar properties but is only valid for a single login session (i.e., it will be different each time the user authenticates again, but will stay the same as long as the user is authenticated). …

Before I start, I assume you have already configured a sample Relying Party Trust with a basic policy. In case you haven’t, here is some recommended reading:

Mieszko continues with illustrated “Persistent name identifier” and “Transient name identifier” sections.

David Kearns claims “'Provisioning-on-demand' systems among the possibilities” as a preface to his How to improve provisioning post of 2/16/2010 to NetworkWorld’s Security blog:

Last week I moderated a panel at Kuppinger Cole's virtual conference on identity, which talked about "Provisioning and Access Governance Trends" with Engiweb Security's Cris Merritt and Deepak Taneja, founder of Aveksa. You can see a replay of the session by going here and clicking the "watch now" button. …

We think of provisioning as the most mature of the IdM services (it's been with us for more than 10 years) and we may think of it now as mostly "pipes" rather than "gold fixtures" (to refer to the plumbing analogy of computing) but there is still room for improvement.

Two areas were discussed for improvement. The first is a major change in the way provisioning is done. Currently, a "connector" between the provisioning engine and the application, service or data store being provisioned has to be created. While vendors have move[d] to reusable connectors and some (notably Courion) have tried to commoditize them, Merritt suggested that they need to be abstracted into a service to be called on, as needed, by the applications or services needing the data. Further, we looked at creating "provisioning on demand" systems -- where users could, via a self-service mechanism, do their own provisioning. Subject, of course, to access governance principles. …

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Lori MacVittie says she was “Surprised? I was, but I shouldn’t have been” in her Lots of Little Virtual Web Applications Scale Out Better than Scaling Up post of 2/18/2010:

While working on other topics I ran across an interesting slide in a presentation given by Microsoft at TechEd Europe 2009 on virtualization and Exchange. Specifically the presenter called out the average 12% overhead incurred from the hypervisor on systems in internal testing. Intuitively it seems obvious that a hypervisor will incur overhead; it is, after all, an application that is executing and thus requires CPU, I/O, and RAM to perform its tasks. That led me to wonder if there was more data on the overhead from other virtualization vendors.

I ended up reading an enlightening white paper from VMware on consolidation of web applications and virtualization, in which it observes that multi-VM configurations actually outperformed, in terms of capacity and performance, a server configured with a similar number of CPUs. Note that this is specifically for web applications, though I suspect that any TCP-heavy application would likely exhibit similar performance characteristics. …

Lori continues with detailed Scale Out Virtually for Best Results and Optimal Strategy for Addressing Scalability sections.

• William Vambenepe’s Waiting for events (in Cloud APIs) post of 2/17/2010 attempts to define “an event based interface instead of a request-reply based interface”:

Events/alerts/notifications have been a central concept in IT management at least since the first SNMP trap was emitted, and probably even long before that. And yet they are curiously absent from all the Cloud management APIs/protocols. If you think that’s because “THE CLOUD CHANGES EVERYTHING” then you may have to think again. Over the last few days, two of the most experienced practitioners of Cloud computing pointed out that this omission is a real pain in the neck. RightScale’s Thorsten von Eicken was first to request “an event based interface instead of a request-reply based interface”, pointing out that “we run a good number of machines that do nothing but chew up 100% cpu polling EC2 to detect changes”. George Reese seconded and started to sketch a solution. And while these blog posts gave the issue increased visibility recently, it has been a recurring topic on the AWS Forum and other similar discussion boards for quite some time. For example, in this thread going back to 2006, an Amazon employee wrote that “this is a feature we’ve discussed recently and we’re looking at options” (incidentally, I see a post by Thorsten in that old thread). We’re still waiting.

Let’s look at what it would take to define such a feature.

I have some experience with events for IT management, having been involved in the WS-Notification family of specifications and having co-chaired the OASIS technical committee that standardized them. This post is not about foisting WS-Notification on Cloud APIs, but just about surfacing some of the questions that come up when you try to standardize such a mechanism. While the main use cases for WS-Notification came from IT (and Grid) management, it was supposed to be a generic mechanism. A Cloud-centric eventing protocol can be made simpler by focusing on fewer use cases (Cloud scenarios only). In addition, WS-Notification was marred by the complexity-is-a-sign-of-greatness spirit of the time. On this too, a Cloud eventing protocol could improve things by keeping simplicity in mind. …

William reminisced about the WS-* standards fiasco and asked Can Cloud standards be saved? from the complexity introduced by vendor control of software standards and disdain for input to the standards process from independent experts in his 2/15/2010 article noted in my Windows Azure and Cloud Computing Posts for 2/15/2010+ item.

• Toddy Mladenov explains a possible reason for stuck Azure deployments in his Windows Azure deployment - did you forget to pack your DLLs? post of 2/17/2010:

OK, back to the topic: deployment stuck in Initializing, Busy, or Stopping! One more thing you can do is to crack open your package and look inside to see if you have all the necessary DLLs. Thanks to Jim, who sent me the note and has a good post on how to look inside the service package.

Deploying Package with Missing DLLs on Windows Azure

Using the steps he describes in his post I created a simple project. The app does nothing but show static text. For the purpose of this exercise I initially set the Copy Local attribute for the following DLL to false:

Microsoft.WindowsAzure.Diagnostics 

After clicking Publish on the project I got the following in the output window:

C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\Microsoft.CloudService.targets(0,0): warning : CloudServices44 : Forcing creation of unencrypted package.

This guarantees that the package is unencrypted and that I will be able to look inside. Note: Please keep in mind that you will not see this warning in Visual Studio 2010 Beta because of a bug.

In order to look in the package I had to change the extension of the package file from CSPKG to ZIP.

Renaming CSPKG file to ZIP
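
From a command prompt that’s a one-liner along the lines of ren MyService.cspkg MyService.zip (the package name here is hypothetical); any ZIP tool can then open the package so you can confirm that each role’s DLLs made it in.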

Toddy, who’s on Microsoft’s Windows Azure Team in Redmond, continues with step-by-step details.

Joannes Vermorel announced Lokad.Cloud, an O/C (object/cloud) mapper for auto-scaling Windows Azure in this post to code.google.com of 2/17/2010. He writes in the introduction:

Elastic computing resource allocation is a cornerstone of cloud computing. Windows Azure lets you decide how many virtual machines (VMs) should be allocated to support the workload of your cloud app.

Lokad.Cloud emphasizes a design where the workload capacity of your cloud app can be incrementally improved by adding more VMs; and - the other way around - incrementally degraded by removing VMs. …

Lokad.Cloud embeds the Management API and lets you control the number of workers allocated for your cloud app. The screenshot below illustrates how the number of worker instances can be manually set from the Lokad.Cloud Console:

Mark Rendle of Net Solutions (UK) describes How to migrate ScrumWall to Azure in less than 20 minutes in this 2/17/2010 post with a link to a live Azure demo project:

This post should really be title[d] “how we (could have) migrated ScrumWall to Azure in less than 20 minutes”. But I’ll come on to that later.

I’ve been speaking about Azure at a number of events recently. As part of the talks, I have given a quick overview of the developer portal for Azure and the Visual Studio tools. I’ve wrapped this up in a demo using our ScrumWall product (http://scrumwall.cloudapp.net). …

The majority of functionality provided by ScrumWall is contained in a Silverlight application. There are a number of WCF services to provide data to this application and some fairly basic ASP.NET pages. There are also a couple of HTTP handlers to manage the authentication, which is provide[d] by Microsoft Live ID.

Overall ScrumWall contains a reasonable number of different components and is a good “test” candidate to migrate to Azure.

A step-by-step tutorial follows.

Mark Rendle’s “next post” is Working with namespaces in LINQ to XML [and Windows Azure] of the same date:

I was working on an Azure utility with Johan on Friday, which had to deal with the XML responses provided by the Service Management REST API. XML like this:

<HostedServices xmlns="http://schemas.microsoft.com/windowsazure"
                xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <HostedService>
    <Url>https://management.core.windows.net/GUID/services/hostedservices/foo</Url>
    <ServiceName>foo</ServiceName>
  </HostedService>
</HostedServices>

It seems to be required in some circles – Microsoft included – to add an xmlns declaration in any and all XML, regardless of how small or self-contained it may be. And I suppose that, in and of itself, wouldn't really annoy me that much. But, try and process the XML above using Microsoft's own LINQ to XML classes (.NET 3.5 and above) and will it work?

Mark says “No” and provides the workaround.
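
For reference, the standard LINQ to XML fix (not necessarily Mark’s exact workaround) is to construct an XNamespace for the document’s default namespace, e.g. XNamespace ns = "http://schemas.microsoft.com/windowsazure", and then qualify element names with it, querying doc.Descendants(ns + "HostedService") rather than doc.Descendants("HostedService").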

Chris Preimesberger’s 10 Cloud Computing Trends That Are Rapidly Catching On slide presentation of 2/17/2010 for eWeek details the following nine topics:

    • The Public-Private Hybrid Is Here to Stay
    • Private Cloud Deployments Will Be Fast and Furious
    • HR, Collaboration Will Get Large in the Cloud
    • Former No. 1 Obstacle—Integration—Getting Solved
    • Tier 1 IT Vendors Will Move to Cloud in a Big Way
    • Channels Will Provide New Openings for Cloud-based Apps
    • Expect to See Cloud-based Front Offices
    • Cloud-based Security Will Have Its Day, and Soon
    • Social Networks Will Become Mainstream in the Enterprise

Chris concludes that:

    • Microsoft Azure Will Become a Major Actor
      Windows Azure will cause a shift in cloud computing due to the vast number of .Net developers (more than six million). .Net developers will now finally have the chance to jump into the cloud without having to learn new platforms or tools. The cloud platform race will be a tight one between the Java camp led by Google and Amazon Web Services and the Windows Azure camp.

I’m not ready to buy Google and AWS in “the Java camp.” Google still seems dedicated to Python and AWS is OS-agnostic, but supports Windows.

John Moore chides Google Health for adopting a modified version of the Continuity of Care Record (CCR) in his CCD Standard Gaining Traction, CCR Fading post to the Chilmark Research blog of 2/17/2010:

In a number of interviews with leading HIE vendors, it is becoming clear that the clinical standard, Continuity of Care Document (CCD) will be the dominant standard in the future.  The leading competing standard, Continuity of Care Record (CCR) appears to be fading with one vendor stating that virtually no client is asking for CCR today.  This HIE vendor did state that one client did ask for CCR, but only to enable data transfer to Google Health.

CCR was created by ASTM with major involvement by AAFP with the objective to create a standard that would be far easier to deploy and use by smaller physician practices.  At the time of CCR formation, the dominant standard was HL7’s CDA, a beast of a standard that was structured to serve large hospitals and based on some fairly old technology and architectural constructs.  With competing CDA and CCR standards in the market, there was a need for some rationalization, which led to the development of CCD, a standard that combined some of the best features of CCR and CDA.

Today, CCD is seen as a more flexible standard that is not nearly as prescriptive as CCR. This allows IT staff to structure and customize their internal HIT architecture and features therein for their users and not be confined to a strict architectural definition such as that found in CCR.  (Note: such strict definitions are not always a bad thing as they can greatly simplify deployment and use, but such simplicity comes at a price: flexibility.)

Unfortunately for Google Health, which has built its system on top of a modified version of CCR, this trend will likely lead to increasing difficulty in convincing healthcare providers to provide patient health records in a CCR format.  Google would be wise to immediately begin the work necessary to bring CCD documents into their system, as the writing on the wall is getting clearer by the day.  CCR is a standard that will fade away.

Microsoft HealthVault supports both CCD and CCR data types (see Storing CCR and CCD Data in HealthVault).

The Windows Azure Team announced a New Video Series Helps Illustrate "Why Windows Azure?" on 2/16/2010:

Need help understanding - or explaining - why Windows Azure might be right for your business?  Check out Why Windows Azure?, a series of short videos with Microsoft Trainer Bill Lodin, who uses a unique whiteboard to help illustrate why people should adopt Windows Azure. Each light-hearted video is less than 15 minutes and easy to follow so we encourage you to watch them all and pass along to others who want to know more about Windows Azure.

  • What Is Windows Azure and Why Is It In the Cloud?:  Bill and his whiteboard sidekick introduce the concept of Cloud Computing and Microsoft's implementation of Cloud Computing with Windows Azure.
  • The Windows Azure Development Experience:  Bill and his helper focus on the underlying technology of Windows Azure that developers need to understand as they make the move to Cloud Computing with Microsoft.
  • Moving Existing Applications to Windows Azure:  Bill and his cheeky whiteboard friend illustrate some of the issues users may encounter when moving existing applications to Windows Azure and, more importantly, how to mitigate potential issues.
  • The Windows Azure TCO and ROI Calculator:  Bill and the ever-present whiteboard walk users through the TCO and ROI calculator and how users can use it to calculate how much it will cost them to move to Windows Azure and what kind of savings they can anticipate.
  • Front Runner:  With the help of his whiteboard, Bill will show users the benefits of the Front Runner program for the Windows Azure platform.
  • Sign Up:  This episode discusses the various offers available to users who want to get started using Windows Azure and shows users how they can get started with Windows Azure for free.

Let us know what you think - are these videos helpful?  What other resources would help you as you assess Windows Azure for your business?  We look forward to hearing from you.

<Return to section navigation list> 

Windows Azure Infrastructure

David Linthicum asserts “Any investment made in SOA carries over nicely to cloud computing, as SOA is its architectural foundation” in his Debunking the myths about SOA's relationship to the cloud post of 2/18/2010 to the InfoWorld Cloud Computing blog:

There is much confusion out there around the intersection of SOA and cloud computing, something I addressed in my latest book. The Open Group's Chris Harding has also written a great article on that topic, diving into the issues around SOA and cloud computing -- in other words, about the confusion.

This issue can be best summarized by this quote: "'I just got my people trained on SOA,' I heard someone say recently, 'and now they want to go off and learn about cloud computing. When are they ever going to do some actual work?'"

A common misconception is that cloud computing is replacing SOA, which is not at all true. The two are very different. SOA is an architectural pattern; it's something you do. Cloud computing is nothing more than a set of architectural options, including private, public, and community clouds. They go very much together; indeed, cloud computing makes SOA much easier and much more valuable. …

• John Furrier’s Bang Bang: Cisco Dumps HP As Certified Partner, HP Returns Fire With Deal with QLogic post of 2/18/2010 to his SiliconAngle blog begins:

I got confirmation today from someone inside Cisco that an internal memo went out announcing that Cisco is dumping HP as a certified partner.  Concurrent with the Cisco internal memo HP makes a move to join forces with QLogic for stackable switches. …

John concludes:

Bigger Picture - The Datacenter Operating System

The datacenter is a battleground for new models. In particular one that is very trendy right now is the optimization of components in the datacenter to construct a datacenter operating system.

HP Labs is doing some amazing research in sustainable datacenters and companies like HP, Cisco, and Juniper are vying to be a complete player in this "new datacenter operating system".

Pradeep Sindhu, founder of Juniper Networks, was just talking about this trend of the datacenter being a complete system last night at Mobile World Congress.  Going further, firms that take a complete systems view of a datacenter are poised to be more innovative, says Pradeep.

Big players who have massive datacenters or cloud platforms (e.g. Google) go to great lengths to optimize every part of the datacenter equation. So this stackable switch deal makes sense.

We were just talking on Twitter about how any small change in component performance can yield a big change in the economics and sustainability. Experts like Pradeep are saying that this notion of a whole complete datacenter is a systems problem and companies need to think holistically about every part.  It's all about getting everything to Ethernet and thinking about the datacenter as a system.

HP has the depth and field team to compete with Cisco.  We are starting to see those moves take place.  I'm sure there will be more, not less, partnering, especially as networking players become systems players and move up the stack.

Moving up the stack is something Cisco has not been strong at.

It will be interesting to see how this change plays out in HP’s recent agreement with Microsoft.

Geva Perry analyzes James Urquhart’s post (see below) in Geva’s Cloud Computing as Commodity article of 2/17/2010:

My partner in crime on the Overcast podcast, James Urquhart, published a nice blog post today titled: Cloud computing and 'commodity'. Read the post and come back. I'll wait.

Done? OK.

James seems to be responding to what is apparently some kind of controversy, which I don't understand because there is truly nothing new under the sun. Read Clayton Christensen's 300-page book Seeing What's Next and come back. I'll wait.

Done? Good.

All products are in a constant commoditization process always, or a "race to the bottom" as James refers to it. Period. Therefore, in order to maintain differentiation smart vendors continuously innovate and add additional features and "crust" capabilities, which over time will themselves be commoditized. Rinse and repeat. Therefore, in economic theory, all industries are destined to become commodity industries (known as the Industry Lifecycle).

Let's see how this plays out in cloud computing (intentionally simplified greatly to make a point):

  • Amazon is first mover with server and storage API provisioning on-demand and pay-per-use pricing
  • Everyone and their brother offers the same at similar prices
  • Some differentiation in API capabilities is created (could be richness, coverage of corner cases, ease-of-use, features such as auto-scaling, etc.)
  • A formal API standard emerges and everyone again offers exactly the same
  • Rackspace differentiates with better SLAs (Rackspace: The Avis of Cloud Computing?)
  • Everyone matches their SLAs
  • VMware and Microsoft start offering higher-level components, moving towards a platform-as-a-service
  • Salesforce.com differentiates its PaaS by building a large ecosystem around its platform
  • Amazon acquires several start-ups and offers the same...

You get the point.

Geva continues with a discussion of market equilibrium and commodity products, such as the shipping container.

James Urquhart quotes Simon Wardley in a Cloud computing and 'commodity' post of 2/17/2010 to the C|Net News’ The Wisdom of Clouds blog:

One of my favorite bloggers (and long-time cloud pundit), Simon Wardley, once wrote a short post that clarified the meanings of two words that are key to understanding the value of cloud computing:

“I thought I'd just re-iterate the distinction between [two] terms that was first identified by Douglas Rushkoff:-

  • Commodification (mid to late 1970s, Word) is used to describe the process by which something which does not have an economic value is assigned a value and hence how market values can replace other social values. It describes a modification of relationships, formerly untainted by commerce, into commercial relationships.

  • Commoditization (early to mid 1990s, Neologism) is the process by which goods that have economic value and are distinguishable in terms of attributes (uniqueness or brand) end up becoming simple commodities in the eyes of the market or consumers. It is the movement of a market from differentiated to undifferentiated price competition, from monopolistic to perfect competition.”


You should definitely read the rest of Wardley's post to get a clear sense of where each word applies, but I wanted to make sure you understood these two concepts because there are some interesting debates about how commoditization and commodification apply to cloud computing. …

James continues with the details of cloud commodification vs. commoditization.

Lori MacVittie asks What if users could specify their own SLAs? and adds “More interesting, what if you had the means to actually try to meet them?” in a 2/17/2010 post:

On the surface, Infrastructure 2.0 seems to have very little value to the end-user. It is, after all, about collaboration at the infrastructure layer. It is under the covers, as it were, of the application blanket with which end-users actually interact. But it may end up that Infrastructure 2.0 will have a direct impact on the control the user has over the way in which applications are delivered. Which is to say they might one day have some. What this means is something along the lines of taking the “choose your download mirror” capability offered by popular download sites and cranking it up about six clicks on the dial. Yes, we’re going to turn it up to eleven. First, let’s lay out some options for these fictional (but very demanding) users:

• I value speed and function equally

• I value speed over function

• I value function over speed

These aren’t your enterprise SLA definitions, granted, but when you’re presenting a user with options that ultimately must be translated into technical terms, you really can’t get too specific. And really, even this level of “SLA” is more than most users have ever had aside from “high bandwidth | low bandwidth” and “big pictures | no pictures”. Besides, this is my post, I’ll define my SLAs any way I want.

Now, how would you go about implementing the means by which such SLAs might be enforced?

If you said “context-aware global application delivery” you’re on the right track. If you added “enabled by Infrastructure 2.0” you get extra points. And a cookie.

Lori continues with a detailed analysis of how the above might be accomplished.

Brenda Michelson’s F5 Networks Cloud Computing Survey: Business Reasons, Business Influencers analyzes F5 Networks’ mid-2009 survey:

After a short break for some client work, I’m back to my cloud computing survey list.  This afternoon, I’ve reviewed the Google Communications Intelligence Report, October 2009, Rackspace’s No More Servers, November 2009 and F5 Networks’ Cloud Computing Survey, June-July 2009 [pdf].  The Google and Rackspace surveys were interesting, but small and midsize business oriented and therefore not relevant for my enterprise considerations project.

The F5 Networks survey [pdf] presented findings in 5 areas:

  • Confusion about the definition of cloud computing
  • Cloud computing has gained critical mass
  • Cloud computing is more than SaaS
  • Core technologies for building the cloud
  • Influencers go beyond IT

The section I found most interesting was the last, which covered the business drivers for public and private cloud computing adoption, as well as the organizational areas leading the adoption charge. …

Mitch Tulloch has updated his Free ebook: Understanding Microsoft Virtualization R2 Solutions from Microsoft Press, according to this 2/16/2010 post:

Here it is! Mitch Tulloch has updated his free ebook of last year. You can now download Understanding Microsoft Virtualization R2 Solutions in XPS format here and in PDF format here.

Six chapters adding up to 466 pages:

Chapter 1: Why Virtualization? This chapter provides an overview of Microsoft’s integrated virtualization solution and how it plays a key role in Dynamic IT, Microsoft’s strategy for enabling agile business. The chapter also describes the benefits businesses can achieve through virtualization and how Microsoft’s virtualization platforms, products and technologies can help these businesses move their IT infrastructures toward the goal of Dynamic IT.

Chapter 2: Server Virtualization This chapter covers the Hyper-V role of Windows Server 2008 R2 and Microsoft Hyper-V Server 2008 R2 and how these platforms can be used to manage virtualization server workloads in the datacenter. The chapter explores features of Hyper-V including the new Live Migration feature of Windows Server 2008 R2. It also describes the benefits of deploying Hyper-V, and various usage scenarios.

Chapter 3: Local Desktop Virtualization This chapter describes various Microsoft virtualization technologies that enable client operating systems and applications to run within a virtualized environment hosted on the user’s computer. The platforms and products covered in this chapter include Windows Virtual PC and the Windows XP Mode environment, Microsoft Enterprise Desktop Virtualization (MED-V), and Microsoft Application Virtualization (App-V).

Chapter 4: Remote Desktop Virtualization This chapter describes various Microsoft virtualization technologies that enable client operating systems and applications to run within a virtualized environment hosted on a server located in the datacenter. The platforms and products covered in this chapter include Remote Desktop Services in Windows Server 2008 R2, Microsoft Virtual Desktop Infrastructure (VDI), and App-V for Remote Desktop Services.

Chapter 5: Virtualization Management This chapter describes how System Center Virtual Machine Manager (VMM) 2008 can be used to centrally manage all aspects of a virtualized IT infrastructure. The chapter explains how VMM works and explores how to use the platform to manage virtual machines running on Windows Server 2008 R2 Hyper-V servers. The chapter also describes the benefits of the other members of the System Center family of products.

Chapter 6: Cloud Computing This chapter examines Microsoft’s emerging cloud computing platform, how it works, and what benefits businesses can obtain from it. The chapter examines both private and public cloud solutions including Windows Azure, and describes how Microsoft’s Dynamic Data Center Toolkit can be used to integrate cloud computing as a part of your virtualized IT infrastructure.

Following are  Chapter 6’s sections from the introduction’s “How This Book Is Organized” section:

    • What Is Cloud Computing? p. 431
    • Private vs. Public Cloud p. 432
    • Examining the Benefits of Cloud Computing p. 433
    • Benefits of Using a Private Cloud vs. a Public Cloud p. 433
    • Increasing Use of IT Resources p. 434
    • Examining Cloud-Computing Usage Scenarios p. 435
    • Understanding Microsoft’s Cloud-Computing Platform p. 435
    • Understanding Different Cloud Services p. 435
    • Implementing Cloud Services p. 437
    • Understanding the Dynamic Data Center Toolkit p. 438
    • Comparing the Toolkits p. 440
    • Understanding the Private-Cloud Architecture p. 441
    • Implementing a Private-Cloud Solution p. 443
    • Windows Azure p. 444
    • The Dynamic Data Center Alliance p. 446
    • Availability of Microsoft’s Cloud-Computing Platform p. 446
    • Additional Resources p. 447
    • Additional Resources for Microsoft’s Cloud-Computing Initiative p. 447
    • Additional Resources for Windows Azure p. 447

You’ll probably find the “Understanding the Dynamic Data Center Toolkit” and later sections to be of the most interest, but it seems to me that it’s a difficult job to cover cloud computing in 16 pages.

K. Scott Morrison analyzes Eric Knorr’s comments for InfoWorld (see below) in a The Revolution Will Not Be Televised post of 2/17/2010:

Technology loves a good fad. Agile development, Web 2.0, patterns, Web services, XML, SOA, and now the cloud—I’ve lived through so many of these I’m beginning to lose track. And truth be told, I’ve jumped on my fair share of bandwagons. But one thing I have learned is that the successful technologies move at their own incremental pace, independent of the hype cycle around them. Two well known commentators, Eric Knorr from Infoworld, and David Linthicum, from Blue Mountain Labs, both made posts this week suggesting that this may be the case for cloud computing.

Eric Knorr, in his piece Cloud computing gets a (little) more real, writes:

“The business driver for the private cloud is clear: Management wants to press a button and get what it needs, so that IT becomes a kind of service vend-o-matic. The transformation required to deliver on that promise seems absolutely immense to me. While commercial cloud service providers have the luxury of a single service focus, a full private cloud has an entire catalogue to account for — with all the collaboration and governance issues that stopped SOA (service-oriented architecture) in its tracks.”

I agree with Eric’s comment about SOA, as long as you interpret this as “big SOA”. The big bang, starting-Monday-everything-is-SOA approach certainly did fail—and in hindsight, this shouldn’t be surprising. SOA, like cloud computing, cuts hard across fiefdoms and challenges existing order. If you move too fast, if your approach is too draconian, of course you will fail. In contrast, if you manage SOA incrementally, continuously building trust and earning mindshare, then SOA will indeed work.

Successful cloud computing will follow the incremental pattern. It just isn’t reasonable to believe that if you build a cloud, they will come—and all at once, as Eric contends. We have not designed our mission critical applications for cloud deployment. Moreover, our people and our processes may not be ready for cloud deployment. Like the applications, these too can change; but this is a journey, not a destination. …

Scott goes on to discuss David Linthicum’s writing about private clouds, and concludes:

This revolution just doesn’t make good TV. The hype will certainly be there, but the actual reality will be a slow, measured, but nonetheless inevitable transition.

PS: The title [of this post], of course, is from the great Gil Scott-Heron.

Eric Knorr reports “HP rolls out a new cloud consulting practice, while Cisco takes a step toward erasing the line between the data center and the cloud” in his Cloud computing gets (a little) more real article of 2/16/2010 to InfoWorld’s Cloud Computing blog:

The business driver for the private cloud is clear: Management wants to press a button and get what it needs, so that IT becomes a kind of service vend-o-matic. The transformation required to deliver on that promise seems absolutely immense to me. While commercial cloud service providers have the luxury of a single service focus, a full private cloud has an entire catalogue to account for -- with all the collaboration and governance issues that stopped SOA (service-oriented architecture) in its tracks.

The long, slow march toward greater agility and optimization of resources is basically the story of IT. It has had many names (beginning with re-engineering in the '80s), and HP has done a pretty good job of articulating the latest cloud version. And as Kedrie points out, the abundance of horsepower and bandwidth today -- not to mention the acute pain of managing increased complexity -- could let cloud computing succeed where previous grand designs failed.

But trust me, it will be incremental. People will buy bundled solutions from HP and IBM to cloud-enable this and that service inside the firewall. Advances like Cisco's OTV will make integration with service provider offerings more feasible. Meanwhile, vendors will try their usual lock-in ploys, and enterprise IT managers will protect their turf from disruptive change. It's an old story, but the creative efforts to tell it in a new way are, after all, what keep IT moving forward.

<Return to section navigation list> 

Cloud Security and Governance

Ellen Messmer claims “Changes expected to be implemented by the fall” in her PCI Security Standards Council readying new payment-card security standard article of 2/16/2010 for NetworkWorld:

The Payment Card Industry data security standards, which influence design of networks where sensitive payment-card account data is stored, are expected to be further revised by the PCI Security Standards Council over the next few months.

Bob Russo, general manager of the PCI Security Standards Council, says that by early summer the organization expects to be able to issue a summary for a new PCI standard, which would go into effect in about October. Russo, who will speak about this topic at the upcoming RSA Conference in San Francisco, said the council is readying guidelines on technical topics that include end-to-end encryption for account data and the use of virtualization technologies, with the expectation that the new payment transaction standard will be ready.

"In May, we'll be ready with a draft revision of what the standards will look like," Russo says. "In early summer, there'll be a summary of what the changes will be." …

<Return to section navigation list> 

Cloud Computing Events

Andrew Coates reminds Australian developers of upcoming Windows Azure User Group Briefings by David Lemphers in Adelaide, Melbourne, Brisbane, Canberra and Sydney in his Windows Azure User Group Briefings post of 1/27/2010:

The one and only Dave Lemphers is coming back to Australia for a week at the end of February for a whirlwind tour of the country to coincide with the launch in Australia of Windows Azure. He'll be doing public technical briefings hosted by the user groups in 5 capital cities.

  • Mon 22 Feb 12:00-13:30, Adelaide, Microsoft Innovation Centre Level 2, Westpac House
    91 King William Street, Adelaide, SA 5000
    Register Here
  • Tue 23 Feb 18:00-20:30, Melbourne, Microsoft Theatre, Level 5, 4 Freshwater Place,
    Southbank, VIC
  • Wed 24 Feb 12:00-13:00, Brisbane, Microsoft Theatres, Level 9, Waterfront Place, 1 Eagle Street, Brisbane
  • Thu 25 Feb 12:00-13:00, Canberra, Microsoft Theatre, Level 2, Walter Turnbull Building, 44 Sydney Avenue, Barton, ACT 2600
  • Thu 25 Feb 18:30-20:30, Sydney, Microsoft Theatres, 1 Epping Rd, North Ryde, NSW 2113

If/when there are registration links for the events in the other 4 cities, I'll update them here. Otherwise, just rock up at the date/time above.

For more details about Windows Azure in Australia, see Greg Willis' post from last month.

The Open Fabrics Alliance announces OFAlliance to Host 6th Annual International Sonoma Workshop March 14-17 at The Lodge at Sonoma in this 2/17/2010 press release:

The OpenFabrics Alliance (OFA), an organization that develops, tests, licenses and distributes multi-platform, open-source software (the OpenFabrics Enterprise Distribution, OFED™) for high-performance computing and datacenter applications, today unveiled the agenda for the 6th Annual International Sonoma Workshop, which is scheduled for March 14-17 at The Lodge at Sonoma.

The agenda, registration details and hotel room-reservation information are available at www.OpenFabrics.org. The workshop starts Sunday evening, March 14, with a keynote presentation titled “Requirements for Exascale Systems” by Barney Maccabe and Steve Poole from the National Center for Computational Sciences at Oak Ridge National Laboratory, home to several of the world’s most powerful supercomputers.

The goal of the workshop is to plan the next steps in the evolution of OFED based on real-world customer experiences and vendor input. The theme of the workshop is “Exascale to Enterprise,” indicating the range of IT applications that can be accelerated with OFED’s energy and resource-efficient RDMA and kernel-bypass architectures for both high-performance computing sites and enterprise datacenters. Customer technologists, OFED developers and product managers will share their experiences and discuss future requirements for OFED during the workshop.

OFA members appear to me to be big spenders; the Lodge at Sonoma suggests that you “Endulge yourself at an exquisite Northern California Spa Resort.”

The Windows Azure Team recommends (a bit late) that you Learn More About Windows Azure at A Free, Local Event Near You in this 2/17/2010 post:

There's still time to attend a free local event to learn more about cloud computing and Windows Azure!  Join your local MSDN and TechNet teams at an upcoming session to take a deep dive into cloud computing, better understand Windows Azure and assess how you can leverage it in your work.  Attendees will get a free thumb drive with key information and demos.

The MSDN-hosted session, "Take Your Applications Sky High with Cloud Computing and the Windows Azure Platform", provides a developer-focused overview of Windows Azure Platform and the cloud computing services that can be used either together or independently to build highly scalable applications. As the day unfolds, attendees will also explore data storage, SQL Azure, and the basics of deployment with Windows Azure.

The TechNet-hosted session, "Windows 7, Virtualization and Windows Azure", provides an IT Generalist/Implementer/Manager an overview of Windows Azure including real-world examples of how companies are using Windows Azure today, guidance in thinking about applying it for customer-facing applications, and how it can add flexibility to your computing infrastructure.  This event will also touch on the future of the Windows Azure Platform.

Register today to attend a free, live session in your local area.  Click here to find the event best for you.

Attendance at initial gatherings must have been lower than expected because five of the event pairs have already taken place (in St. Louis, Dallas, Minneapolis, Chicago and Southfield, MI.)

Forrester Events announced a $300 reduction in the registration fee for its Infrastructure & Operations Forum 2010 in a 2/17/2010 e-mail. The forum will be held 3/17 to 3/18/2010 at the InterContinental Dallas hotel in Dallas TX, USA.

The forum includes a Track S: Spotlight Track: Cloud Computing:

The sessions contained in this track [are] based on content within the established Infrastructure & Operations Forum agenda. These sessions are targeted for those attendees solely interested in content focused on Cloud Computing.

Reuven Cohen lists 22 Upcoming CloudCamps Q2 2010 in this 2/17/2010 post:

CloudCamp continues its global march around the world. I wanted to take a brief moment to thank our regional organizers as well as our sponsors (which number in the hundreds). If you haven't attended a CloudCamp yet, now is your chance, and if there isn't one happening close by, then why not go ahead and help organize one for your region.

    1. February 20, 2010 in Delhi, India
    2. February 20, 2010 in CloudCamp Tour, India
    3. February 23, 2010 in Chennai, India
    4. February 25, 2010 in Hyderabad, India
    5. February 26, 2010 in Auckland, New Zealand
    6. February 27, 2010 in Pune, India
    7. February 28, 2010 in Bangalore, India
    8. March 2, 2010 in Minneapolis, USA
    9. March 4, 2010 in Sydney, Australia
    10. March 5, 2010 in Chicago, IL, USA
    11. March 6, 2010 in Chicago, IL, USA
    12. March 13, 2010 in Vancouver, BC, Canada
    13. March 16, 2010 in Philadelphia, PA, USA
    14. March 17, 2010 in Cologne, Germany
    15. March 23, 2010 in Washington, DC, USA
    16. March 26, 2010 in Wellington, New Zealand
    17. April 1, 2010 in Melbourne, Australia
    18. April 6, 2010 in Toronto, Canada
    19. April 8, 2010 in Perth, Australia
    20. April 15, 2010 in Cork, Republic of Ireland
    21. April 20, 2010 in Springfield, MA 01105, USA
    22. April 30, 2010 in Rio de Janeiro, Brazil

It’s clear that Ruv’s CloudCamp idea has gathered substantial mindshare worldwide.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Ray DePena asserts “Cloud computing continues to gain steam, but who is gaining your mind share?” in a preface to his flawed Amazon, Salesforce.com, Google Continue to Lead the Cloudsphere post of 2/18/2010:

Cloud computing continues to gain steam, and it has been some time since we first asked, who is gaining your mind share?

As previously noted in the 4Q09 article, this list is a subjective impression of Cloud Service Providers (SaaS, PaaS, IaaS) on the cloud computing radar.

As usual, large technology companies like AT&T, Dell, EMC, HP, Microsoft, Unisys, and others where cloud computing is a small part of their overall offering portfolio have been excluded.

So here is my list of the Top 25 Cloud Services Providers gaining mind share in the first quarter of 2010.

  1. Amazon Web Services (AWS), Elastic Compute Cloud (EC2), Simple Storage Service (S3), and Virtual Private Cloud (VPC)
  2. Salesforce.com / Sales Cloud 2 (CRM), Service Cloud 2 (Support), Force.com (Development Platform), Chatter (Collaboration)
  3. Google Apps (AppEngine)
  4. Citrix – XenServer (Virtualization)
  5. VMware – vSphere (Virtualization)
  6. Rackspace – Mosso
  7. NetSuite
  8. Rightscale
  9. Joyent
  10. GoGrid
  11. 3Tera – AppLogic
  12. Caspio
  13. Zuora
  14. Eucalyptus
  15. CohesiveFT
  16. Boomi
  17. Appirio – Cloud Connectors
  18. Relational Networks – LongJump
  19. AppZero
  20. Enomaly – Elastic Compute Platform (ECP)
  21. Astadia
  22. Bluewolf
  23. Intacct
  24. 3PAR
  25. Elastra

If Ray excluded “large technology companies like AT&T, Dell, EMC, HP, Microsoft, Unisys, and others where cloud computing is a small part of their overall offering portfolio” why are AWS and Google App Engine in the list? AWS and GAE are certainly small parts of Amazon’s and Google’s portfolios. Omitting Microsoft and IBM is the post’s fatal flaw, in my opinion.

R “Ray” Wang’s Quips: Salesforce.com Announces Private Beta Of Chatter post of 2/17/2010 to his A Software Insider's Point of View blog analyzes Salesforce.com’s “move into Social” Customer Relationship Management:

Announced at the 2009 Dreamforce conference, Chatter represents both a collaboration application and platform.  Software built on the Force.com platform will gain the collaboration capabilities.  Solutions in AppExchange will be able to use profiles, real time streams, and other APIs.  With 100 customers testing out user experience, scalability, and security, Salesforce.com moves from vaporware to beta.  Some key features include:

  • Aggregating streams of information. Employees can subscribe to feeds such as internal updates, social networks, and documents.
  • Automating status updates. Users can receive updates from system and user generated alerts.  Alerts can include documents and related links.
  • Enabling secure document sharing. Chatter feeds can be searched to find relevant information.  Document sharing is protected by a secure sharing model from the Force.com platform. …

Ray continues with analysis of:

    • The Bottom Line For Customers - Chatter Represents A First Step Towards Social CRM …
    • The Bottom Line For Vendors - Chatter Beta Buys Salesforce.com Time To Fend Off Best of Breed Competitors. …

Geva Perry reports Rackspace Kicks-Ass in 2009 in this 2/17/2010 post to his Thinking Out Cloud blog:

Just a day after I post Rackspace: The Avis of Cloud Computing?, the company (NYSE:RAX) announced its 2009 results, and boy did they kill it. Net revenues grew 18.4% year-over-year and EBITDA rose 33.4% from 2008.

Especially interesting was the success in the cloud computing side of the house (the rest is traditional managed hosting). Here's an excerpt from MySanAntonio.com (Rackspace is based in San Antonio, TX):

“Cloud computing now accounts for roughly 9 percent of the company's total revenue, but that's expected to grow to about 14 percent this year, according to Tier 1 Research. By 2012, Tier 1 Research estimates Rackspace's cloud computing business will generate $272 million in revenue.”

Also interesting was the following statement from CEO Lanham Napier:

“In 2009, given the extreme uncertainty in the economy, we were focused on cash and margins, and we proved we could flex our model, which helped us separate ourselves from the pack and emerge as a stronger competitor,” Napier said. “In 2010, we will shift our primary focus to growth.”

So as I said in my previous post, I continue to expect them to do well.

Salesforce.com announces that it has Launched the Private Beta of Salesforce Chatter, Bringing Enterprise Collaboration Into the Era of Cloud Computing and Social Networking in this 2/17/2010 press release that asserts “Salesforce Chatter accelerates the demise of Microsoft SharePoint and IBM Lotus Notes - liberating companies from the cost and complexity of legacy collaboration software”:

Salesforce.com (NYSE: CRM), the enterprise cloud computing (http://www.salesforce.com/cloudcomputing/) company, today announced the private beta program for Salesforce Chatter, the industry's first real-time enterprise collaboration application and platform. One hundred industry innovators from around the globe, including Reed Exhibitions, Schumacher Group, and TransUnion were chosen to empower their employees to "know it now" through enterprise collaboration in the program. In the private beta, customers will also be able to realize anytime, anywhere access to Chatter's real-time feeds via BlackBerry or iPhone mobile devices.

In the past, companies have struggled with the problem of understanding everything that's going on within their organization and they are constantly missing out on critical information because collaboration tools make users do all the work. With Chatter, salesforce.com will empower companies to break free from the cost and complexity of legacy software such as SharePoint and Lotus Notes. Chatter is easy to use and delivers relevant information to each user based on the people, documents, and apps they decide to follow. While similar to the look of popular consumer social networking sites, Chatter is the first ever trusted, secure enterprise application that allows companies to collaborate in real time through profiles, feeds and status updates.

Despite adding “Twitter for Salesforce.com,” I believe the company’s claim about “the demise of Microsoft SharePoint” is greatly exaggerated (with apologies to Sam Clemens/Mark Twain).

Kevin Jackson reports “The European Authorities do not currently recognize the European Cloud Computing industry” in his EuroCloud Expands Quickly post of 2/17/2010:

Last October I introduced EuroCloud as a pan-European business network with the goal of promoting European use of cloud computing.

In the intervening three months, the organization has grown to include representation from 16 countries with four additional ones in the pipeline! According to Bernd Becker, a EuroCloud executive, the organization has experienced strong, enthusiastic support.

Officially founded in Paris on Jan 29th, 2010, EuroCloud promotes SaaS and Cloud services across Europe. Drivers for the creation of this group include:

  • Europe has a fast growing SaaS and Cloud Computing industry, but each country is currently operating separately with few contacts in other European countries.
  • National SaaS vendors are growing and are looking to build European and international relationships through business and technological partnerships.
  • The European Authorities do not currently recognize the European Cloud Computing industry, which is an industry that can help stimulate the economic and technological environment to promote new Cloud Computing industries; and
  • Cloud Computing implies application integration into an Application-Oriented Ecosystem. Developing new application partnerships, both European and worldwide represents the next crucial step.

As an organizing tenet, EuroCloud relies on a two-level framework, combining respect for local cultures with the will to promote a real European spirit.

For more on EuroCloud visit their website.

Mark O’Neill’s How Cloud Service Brokers Enable the Cloud Marketplace post of 2/17/2010 asserts “The Cloud Service Broker also meters the connections so that there are no billing surprises” and continues:

Today, Mike Vizard from CTO Edge covers the Vordel Cloud Service Broker and mentions that:

"...customers are going to want to see an ecosystem of cloud computing services from multiple vendors that will allow them to dynamically allocate various jobs based on the capabilities and pricing offered by the cloud computing service. To accomplish that, IT organizations are going to have to deploy something that functions like a cloud computing broker at the edge of the enterprise."

This, of course, is exactly what the Vordel Cloud Service Broker provides. It mitigates the differences among the proprietary interfaces provided by multiple Cloud providers. Once these proprietary interfaces are smoothed over by the Vordel Cloud Service Broker, a number of exciting consequences follow. Mike Vizard mentions the usage of Amazon Spot Pricing as one of them. …

<Return to section navigation list> 
