Sunday, October 11, 2009

Windows Azure and Cloud Computing Posts for 10/7/2009+

Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.

••• Update 10/11/2009: Me: PassportMD SSL certificate now OK but no sync with HealthVault; Wade Wegner: Create an MVC Azure app in < 4 minutes; Radim Marek: Amazon EC2 still vulnerable to UDP flood attacks; Sam Johnston: How Open Cloud could have saved Sidekick users' skins; Ben Day: Azure: Is the Relational Database Dead?; and a few others.

•• Update 10/10/2009: John Miller: Grid Computing with Azure; Charlton Barreto: More about the DDoS attack on Bitbucket at AWS; Dana Gardner: Architects to cloud advocates: Get real.

• Update 10/9/2009: Sanjay Jain: Incubation Week for Windows Azure; Aaron Skonnard: Demo code from his VSLive! Orlando 2009 sessions; Chris Hoff (@Beaker): Security Issues with Shared AMIs; Reuven Cohen: The Canadian government’s cloud computing plans; David Pallman: Speaking on Azure Migration; Bing Maps team: Windows Azure Silverlight map updates; Jessica Scarpati: Telecom Carriers and Cloud Services; Wade Wegner: Windows Azure and ADFS v2 Federation; Windows Azure Team: Move your Azure assets from the Quincy, WA to the San Antonio, TX data center; and more.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the links above, first click the post’s title so that the entire post displays as a single article; the section links will then navigate within it.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage,” here on 9/29/2009.

See the Wrox October 2009 Newsletter with an excerpt from Chapter 4 and the first advert for the book in MSDN Magazine’s 11/2009 issue.

Azure Blob, Table and Queue Services

Deylo Woo’s Windows Azure Development Storage Setup for SQL Server 2008 post of 10/8/2009 describes in detail how to modify local configuration files to substitute a default or named instance of SQL Server 2008 Developer edition or higher for Azure Development Storage’s .\SQLEXPRESS default.

Eric Nelson posts his Slides and links from Windows Azure Platform Storage session on 10/7/2009:

I delivered a session on Windows Azure Storage and SQL Azure Database at the UK Azure user group on the 6th of October.

A big thanks to everyone who attended – and for putting up with my last minute improvisation when we realised that 2/3rd of the audience were actually brand new to Windows Azure (and presumably therefore brand new to the user group). …

In addition to Eric’s embedded player, you can view the slides on SlideShare.

Ryan Dunn’s Launching MyAzureStorage.com post of 10/6/2009 announces “a sample TableBrowser service we are hosting for developers at MyAzureStorage.com:”

We built this service using ASP.NET MVC with a rich AJAX interface.  The goals of this service were to provide developers with an easy way to create, query, and manage their Windows Azure tables.  What better way to host this than on a scalable compute platform like Windows Azure?

Create and Delete Tables

If you need to create or manage your tables, you get a nice big list of the ones you have.


Create, Edit, and Clone your Entities

I love being able to edit my table data on the fly.  Since we can clone the entity, it makes it trivial to copy large entities around and just apply updates.


Query Entities

Of course, no browser application would be complete without being able to query your data as well.  Since the ADO.NET Data Services syntax can be a little unfamiliar at first, we decided to go for a more natural syntax route.  Using simple predicates along with OR, AND, and NOT operations, you can easily test your queries.
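For readers who haven’t used the ADO.NET Data Services syntax Ryan mentions, here’s a minimal sketch (my illustration, not the MyAzureStorage.com code) of the same kind of AND/OR/NOT predicate written as a LINQ query and the $filter URL it translates to. The account name, table name, and entity properties are placeholders, and the Shared Key request signing a real Azure table query needs (the StorageClient sample library supplies it) is omitted:

```csharp
// Sketch: querying an Azure table named "Customers" with ADO.NET Data Services.
// Placeholders: account URI, table name, and the Customer properties.
using System;
using System.Data.Services.Client;
using System.Linq;

public class Customer
{
    // Every Azure table entity carries these three system properties.
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public int Age { get; set; }
}

class TableQuerySketch
{
    static void Main()
    {
        var ctx = new DataServiceContext(
            new Uri("http://youraccount.table.core.windows.net"));

        // LINQ form of a compound predicate (AND / OR / NOT-style operations):
        var query = ctx.CreateQuery<Customer>("Customers")
            .Where(c => (c.PartitionKey == "WA" || c.PartitionKey == "OR")
                        && c.Age > 30
                        && c.RowKey != "0001");

        // The client library translates it to roughly:
        // Customers()?$filter=(PartitionKey eq 'WA' or PartitionKey eq 'OR')
        //                     and (Age gt 30) and (RowKey ne '0001')
        Console.WriteLine(((DataServiceQuery<Customer>)query).RequestUri);
    }
}
```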

Display Data

Lastly, we have tried to make showing data in Windows Azure as convenient as possible.  Since data is not necessarily rectangular in nature in Windows Azure tables, we have given you some options:  First, you can choose the attributes to display in columns by partition.  Next, you expand the individual entity to show each attribute.

Please note:  during login you will need to supply your storage account name and key.  We do not store this key.  It is kept in an encrypted cookie and passed back and forth on each request.  Furthermore, we have SSL enabled to protect the channel.

The service is open for business right now and will run at least until PDC (and hopefully longer).  Enjoy and let me know through the blog any feedback you have or issues you run into.

<Return to section navigation list> 

Johnny Halife provides more details about Consuming Windows Azure Blob Storage from Ruby in this 10/6/2009 post:

Hey Folks, today I’m proud to announce my first release of the waz-blobs ruby gem, for interacting with Windows Azure Blob Storage from the Ruby programming language.

Yes, it’s 100% organic Ruby code; there’s no strange Microsoft library that you need to consume and, even better, it was written and tested on Mac OS X and Ubuntu 8.10.  This post is about the motivation and design process I’ve taken. Here you will find a minimal reference to the code; if you’re looking for the bits, go straight to http://github.com/johnnyhalife/waz-blobs/ that includes the whole API documentation. …

I loved the experience, and I’m loving each moment I spend hacking on this code; it’s probably one of the most fun projects I’ve ever faced. As a .NET Developer, I figured out that even the Storage Client shipped with the Azure SDK isn’t great (it’s not even fully implemented against the API Spec), so instead of complaining I developed my own.

In summary: I did it for fun, I did it because I like having choices (not only S3), and because I wanted to see how real Microsoft’s statement about interoperability for WAZ Services is, which ended up being completely true this time. [Emphasis Johnny’s.]

SQL Azure Database (SADB, formerly SDS and SSDS)

Jayaram Krishnaswamy demonstrates a Ground to SQL Azure migration using MS SQL Server Integration Services in this lengthy Packt Publishing article:

In this article … you will learn how to migrate a table from your ground based SQL Server 2008 to your cloud based SQL Azure instance using MS SQL Server Integration Services.

Enterprise data can be of very different kinds ranging from flat files to data stored in relational databases with the recent trend of storing data in XML data sources. The extraordinary number of database related products, and their historic evolution, makes this task exacting. The entry of cloud computing has turned this into one of the hottest areas as SSIS has been one of the methods indicated for bringing ground based data to cloud storage in SQL Azure, the next milestone in Microsoft Data Management. The reader may review my book on this site, "Beginners Guide to Microsoft SQL Server Integration Services" to get a jump start on learning this important product from Microsoft. …
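For readers who haven’t connected to SQL Azure yet, here’s a minimal sketch of the connection-string elements a SQL Azure client needs; an SSIS ADO.NET destination uses the same values. The server, database, table, and credentials below are placeholders:

```csharp
// Sketch: opening a SQL Azure connection from .NET. All names are placeholders.
using System;
using System.Data.SqlClient;

class SqlAzureConnectionSketch
{
    static void Main()
    {
        const string connStr =
            "Server=tcp:yourserver.database.windows.net;" +
            "Database=Northwind;" +
            "User ID=youruser@yourserver;" +   // SQL Azure expects user@server
            "Password=yourpassword;" +
            "Encrypt=True;" +                  // SQL Azure requires SSL
            "Trusted_Connection=False;";

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Customers", conn))
        {
            conn.Open();
            Console.WriteLine("Rows in dbo.Customers: {0}", cmd.ExecuteScalar());
        }
    }
}
```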

<Return to section navigation list> 

.NET Services: Access Control, Service Bus and Workflow

• Wade Wegner’s Passive Federation with Windows Azure and ADFS v2 (codenamed "Geneva" Server) post of 10/9/2009 offers a detailed analysis and tutorial of identity federation of Windows Azure authentication with Active Directory Federation Services v2.

One of the critical elements a company needs to consider when moving to the cloud is how they will leverage their existing identity stores.  Most companies have made significant investments in various identity solutions (i.e. providing for SSO, identity consolidation, federating with partners, etc.) and it’s imperative to ensure that applications and services in the cloud can take advantage of these resources.

The practice of enabling the portability of identity information across otherwise autonomous security domains, called identity federation, is no longer a luxury – it’s a necessity.  The inability to allow users to access resources in different datacenters, with various trading partners, or on the Web, can quickly cripple a company’s productivity (not to mention user satisfaction).  Historically, providing for the needs of Web-based single sign on (SSO) and cross-domain resource access has been very difficult to accomplish.  It may have required the replication of identity stores in a host of one-off scenarios, or even (gasp!) compromising security best practices in order to satisfy a business need.

More recently, various kinds of identity architecture have made identity federation much easier.  Practices like claims-based authentication, standards like WS-Federation, and so forth allow companies to establish trust domains between different organizations and parties much more easily.

ADFS v2 (which was previously codenamed "Geneva" Server) expands this capability by bringing claims-based identity federation to cloud-based applications that live on the web, in the enterprise, or across an organization.

You can download the Windows Identity Foundation and Windows Azure Passive Federation code for the tutorial from the MSDN Code Gallery.
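To give a flavor of what the relying-party code sees once federation is configured, here’s a minimal sketch (my illustration, not Wade’s sample) of enumerating the claims ADFS v2 issues. It assumes the Windows Identity Foundation ("Geneva Framework") Microsoft.IdentityModel assembly is referenced and the web role is already configured as a relying party:

```csharp
// Sketch: reading federated claims in a WIF-protected ASP.NET/Azure web role.
using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsInspector
{
    public static void DumpClaims()
    {
        // WIF replaces the thread principal with a claims principal after sign-in.
        var principal = Thread.CurrentPrincipal as IClaimsPrincipal;
        if (principal == null)
        {
            Console.WriteLine("No claims principal; the WIF pipeline is not active.");
            return;
        }

        foreach (IClaimsIdentity identity in principal.Identities)
        {
            foreach (Claim claim in identity.Claims)
            {
                // e.g. http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name
                Console.WriteLine("{0} = {1} (issuer: {2})",
                    claim.ClaimType, claim.Value, claim.Issuer);
            }
        }
    }
}
```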

See Aaron Skonnard’s demo code for his Programming .NET Services presentation at VSLive! Orlando 2009 in the Cloud Computing Events section:

Companies need infrastructure to integrate services for internal enterprise systems, services running at business partners, and systems accessible on the public Internet. And companies need to be able to start small and scale rapidly. This is especially important to smaller businesses that cannot afford heavy capital outlays up front. In other words, companies need the strengths of an ESB approach, but they need a simple and easy path to adoption and to scale up, along with full support for Internet-based protocols. These are the core problems that .NET Services address, specifically through the Service Bus, Access Control, and Workflow services.

Unfortunately, Workflow services won’t be available until .NET Framework 4 releases sometime next year.

<Return to section navigation list> 

Live Windows Azure Apps, Tools and Test Harnesses

••• Wade Wegner’s Webcast: Running an ASP.NET MVC Web Application in Windows Azure post of 10/11/2009 is a “… short, quick webcast that shows exactly the steps you need to take.” Wade says:

Before you try this yourself, make sure you satisfy the following dependencies:

For those of you that have no desire to watch a four minute video, and would rather have a quick walkthrough, here you go:

  1. Create a blank cloud services project.  Do not add any roles to the project.
  2. Add a new ASP.NET MVC Web Application to the solution.
  3. Add the ASP.NET MVC Web Application as a web role in the cloud services project.
  4. Add the Microsoft.ServiceHosting.ServiceRuntime.dll assembly to the ASP.NET MVC Web Application (a short sketch of using this assembly follows the list).
  5. Set the following MVC assemblies to Copy Local = True:
    • System.Web.Abstractions
    • System.Web.Mvc
    • System.Web.Routing
  6. Run the application.
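To illustrate why step 4 matters, here’s the kind of controller code that becomes possible once the web role references Microsoft.ServiceHosting.ServiceRuntime.dll. The RoleManager members shown reflect the CTP-era SDK and the "Greeting" setting is a placeholder; check the names against the SDK version you have installed:

```csharp
// Sketch: an MVC controller talking to the Azure fabric via the CTP RoleManager API.
using System.Web.Mvc;
using Microsoft.ServiceHosting.ServiceRuntime;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        if (RoleManager.IsRoleManagerRunning)
        {
            // "Greeting" is a placeholder setting defined in ServiceConfiguration.cscfg.
            string greeting = RoleManager.GetConfigurationSetting("Greeting");
            RoleManager.WriteToLog("Information", "Greeting setting: " + greeting);
            ViewData["Message"] = greeting;
        }
        else
        {
            ViewData["Message"] = "Running outside the Azure fabric.";
        }

        return View();
    }
}
```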

••• I had to update my Logins to PassportMD Personal Health Records Account No Longer Fail with Expired Certificate post three times this (10/11/2009) morning for:

  1. Initial logins with continuing expired SSL certificate failures for more than a month
  2. Login failure fixed at about 10:00 AM PT, but site performance very slow
  3. Failure to synchronize data with Microsoft HealthVault, site performance still slow

My conclusion: As Casey Jones said: Helluva way to run a railroad (or a personal health record site)

•• John Miller’s undated Grid Computing with Windows Azure post offers downloads of his presentation to the 2009.2 Northern Virginia Code Camp and demo source code, as well as a link to David Pallman’s Azure Grid framework on CodePlex:

Azure Grid is the community edition of the Neudesic Grid Computing Framework. It provides a solution template and base classes for loading, executing, and aggregating grid tasks on the Windows Azure platform. It also includes a sample grid application.

Azure Grid is documented in a 3-part article on David Pallmann's blog, also available in PDF form in the downloads area. The article describes the design pattern for Azure Grid, then proceeds to code and run a sample fraud scoring grid application.

  • Part 1: http://davidpallmann.blogspot.com/2009/04/grid-computing-on-azure-cloud-computing.html
  • Part 2: http://davidpallmann.blogspot.com/2009/04/grid-computing-on-azure-cloud-computing_25.html
  • Part 3: http://davidpallmann.blogspot.com/2009/04/grid-computing-on-azure-cloud-computing_9559.html
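To make the load/execute/aggregate pattern concrete, here’s a generic sketch of the idea; the types below are hypothetical stand-ins for illustration only, not the Azure Grid or Neudesic framework API:

```csharp
// Sketch of the grid pattern: load tasks, execute them, aggregate the results.
// In a real worker role the tasks would come from an Azure queue and the results
// would go to table storage; both are simulated in memory here.
using System;
using System.Collections.Generic;

public class GridTask
{
    public string TaskId { get; set; }
    public string Payload { get; set; }   // e.g. one record to fraud-score
}

public class GridResult
{
    public string TaskId { get; set; }
    public double Score { get; set; }
}

public static class GridWorkerSketch
{
    public static IEnumerable<GridResult> Run(IEnumerable<GridTask> tasks,
                                              Func<GridTask, double> scorer)
    {
        foreach (var task in tasks)
            yield return new GridResult { TaskId = task.TaskId, Score = scorer(task) };
    }

    public static void Main()
    {
        var tasks = new[]
        {
            new GridTask { TaskId = "1", Payload = "txn-00412" },
            new GridTask { TaskId = "2", Payload = "txn-00413" }
        };

        // A trivial "scoring" function stands in for the real grid task logic.
        foreach (var result in Run(tasks, t => t.Payload.Length * 0.1))
            Console.WriteLine("{0}: {1:F2}", result.TaskId, result.Score);
    }
}
```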

Azure Grid includes a WPF-based Grid Manager for launching and monitoring job runs.

The Task View shows the tasks and cloud workers executing them.

The Bing Maps team offers a Silverlight-powered Azure application, which displays monthly updates to Bing maps in a slideshow. Here’s a capture of an aerial map of San Miguel, El Salvador added in September 2009 (click for 1024 x 768-pixel full-size capture):

See Sanjay Jain’s post about the Microsoft BizSpark Incubation Week for Windows Azure in the Cloud Computing Events section.

Barbara Duck (@MedicalQuack) reports via a Tweet that the Microsoft H1N1 site is using Azure technology, too. Her Microsoft H1N1 Flu Response Center – Online Tool To See if You or Someone You Know May Have It post of 10/7/2009 begins:

Earlier today Microsoft announced their online tool to determine if you or somebody you know might have the H1N1 virus.  This is an online evaluation in which you are presented with a number of questions to answer.  I did a fictional analysis just to check it out and see how it worked.

As you can see in the image [to the right], I received a red box saying this person could have the H1N1 virus.  I also went to the next step and added this to my HealthVault Account just to go through the motions as I really didn’t need the evaluation right now but wanted to see how the rest of the process takes place.

I was not able to make the online blogger conference today but here’s a brief rundown of what was covered.

“A representative from the Company will discuss how you can use online resources to manage health concerns from your home, debut the service and answer any questions.
A doctor from Emory University who developed the service will also be available to answer questions about H1N1 and what people can do to help limit the spread of the disease.”

Bill Crounse, MD asserts Needed: One Hell of a Training Program on 10/7/2009 in response to a post by Barbara Duck:

… Yesterday I did a keynote for a national conference of clinical  case managers.  I’d say members of the audience were mostly female nurses between the ages of 40 and 60.  I’m sure a lot of the information I shared with them seemed more like Star Wars than anything close to the reality they work in every day.  I also encounter lots of physicians who are totally clueless that there will soon be penalties if they are not using electronic medical records.  And just like Barbara Duck has experienced, the majority of community physicians and other clinicians I meet have never heard of HealthVault, Amalga, Google Health, Keas, American Well, PatientsLikeMe, Navigenics, 23andMe, and so on.

Dr. David Blumenthal (I wonder if most docs have even heard of him) has announced a “workforce training initiative” to educate more health information management professionals with expertise in electronic health records and related technologies.  He says at least 50,000 new jobs are needed in the field.  I would add, based on what I’ve experienced, that we will also need training for perhaps ten or twenty times that number of people; i.e. most of the physicians, nurses and other clinicians who are currently practicing in offices, clinics and hospitals all over America.

It’s not that these folks have their heads in the sand. Most of them are working so hard day to day in patient care, trying to stay afloat and keep their practices from going under, that they literally don’t have time to come up for air.  So what happens when we expect them to use all of this technology and also give 45 million more people access to their services?  That is going to call for one hell of a training program!

Bill Crounse, MD is Microsoft's worldwide health senior director.

Joseph Goedert’s ONC: What do Consumers Prefer? post of 10/7/2009 to the Health Data Management Blog reports:

The Office of the National Coordinator for Health Information Technology has released a draft Consumer Preferences Requirements Document for public comment.

To date, the federal government's initiatives for a national health information technology agenda have not "formally addressed all of the interoperability considerations for the communication of consumer preferences to support the goal of patient/consumer focused healthcare and population health," according to ONC. "Therefore, the purpose of this document is to support the ONC Strategic Plan and the national HIT agenda related to standards development and harmonization process by describing business processes, information exchanges, stakeholders, functional requirements, issues and policy implications involving consumer preferences."

The draft document is here. (The article’s link is to a list of search results.)

<Return to section navigation list> 

Windows Azure Infrastructure

• Luis Alverez Martins posts the 66 slides of his Overview of Windows Azure and the Azure Services Platform presentation to CloudViews.org’s Cloud Computing Conference 2009. Slide 37 lists some of the high-volume services that Microsoft’s data centers are supporting today.

The Windows Azure Team warns in the following email, which I received on 10/9/2009, that you must move your Windows Azure production applications and storage accounts from the ‘USA-Northwest’ region (Quincy, WA) to the ‘USA – Southwest’ region (San Antonio, TX) before the end of October:

Windows Azure CTP participant,

You are receiving this mail because you have an application or storage account in Windows Azure in the “USA - Northwest” region.  Windows Azure production applications and storage will no longer be supported in the ‘USA-Northwest’ region.  We will be deleting all Windows Azure applications and storage accounts in the “USA - Northwest” region on October 31st.

To move your application/storage, first delete the project using the “Delete Service” button.  Then recreate it, choosing the “USA - Southwest” region.  (It may take a few minutes for your previous application and storage account names to become available again.)


Note that deleting your storage account will destroy all of the data stored in that account.  Copy any data you wish to preserve first.

If you would like help migrating your project or have any other concerns, please reply to this mail.

- The Windows Azure Team

I was hoping that the team would provide a simple tool to move the apps and their data (or do it for us). No such luck.

Does Microsoft intend to sow the land on which they built the Quincy, WA data center with salt? Stay tuned.

• Carl Brooks’ IDC: Cloud will be 10% of all IT spending by 2013 post to SearchCloudComputing.com notes that “IDC's updated IT Cloud Services Forecast predicts that public cloud computing will make up $17.4 billion worth of IT purchases and be a $44 billion market by 2013” and continues:

The IDC predictions presumably did not account for the estimated $19 billion, out of the U.S. government's $70 billion IT budget, that Federal CIO Vivek Kundra has vowed to spend on cloud computing. It also does not count spending on private cloud. IDC did not respond to requests for comment about its methods.

"It's increasingly apparent to everybody that this is a real phenomenon and not entirely marketing hype," said Jeff Kaplan, principal analyst at Boston-based consulting firm THINKstrategies. He said the numbers are an important indicator of the potential for cloud services.

Kaplan said that IDC had correctly forecast the economic downturn as a factor in the growth of cloud computing, but noted that there was a flip side as well. Even if buyers are attracted to the cloud pricing and consumption model, they're strapped for cash. The forecast states that actual spending is about six months behind 2008 predictions. "[IT buyers] still don't have money to spend on anything," even if there's a cheap cloud option. Another problem is persistent confusion about what constitutes cloud computing.

"There is a land grab on right now -- the truth is the market hasn't grown as fast as it could have," said Kaplan, because of the hype and overblown claims by vendors trying to cash in on the cloud label. That has left important enterprise buyers suspicious and confused, despite the growth of Amazon and Rackspace's cloud businesses. …

Carl continues with further analysis and closes with:

"Now, it's up to the industry to get out of its own way and let it happen." Kaplan said.

InformationWeek has reformatted Charles Babcock’s recent Cloud Computing: Platform As A Service feature article into a downloadable Analytics report (requires site registration). Related reports available from the same landing page include “Research: Cloud Governance, Risk and Compliance,” “The Public Cloud: Infrastructure As A Service,” and “Research: Cloud Computing: A Walk in the Clouds.”

Janet Garvin’s Perceptions of Cloud Service Providers October 2009 Market Alert for Evans Data Corporation is “A poll of the perceptions of software developers on the major players in the Cloud computing space” that tells you “Who's Who and Where in the Clouds” (site registration required):

Want to know who's going to be leading the pack in the world of cloud computing? If you're planning on developing in the cloud or deploying your apps to the cloud, it's critical to understand the likelihood of success with each vendor. A solid indicator comes from the perceptions of software developers - the very people who are using Cloud today and planning for it tomorrow. We asked developers what they think of the top competitors and they ranked them on completeness of solution, ability to execute, security, latency, reliability, scalability and vendor lock-in.

See how developers rate the top players including:

  • Amazon
  • AT&T
  • Google
  • IBM
  • Microsoft
  • HP
  • Rackspace

When planning your migration to the cloud, use some real data from real developers to choose your vendors. …

Image credit: Evans Data Corporation.

I’m surprised that the developers Evans Data interviewed rated Microsoft so far below Amazon and Google and even below IBM in “Ability to Execute” and “Completeness of Solution.” At least they rate higher than VMware, Sun, HP and Rackspace in “Completeness of Solution.”

Janet Garvin is the founder of Evans Data Corporation.

James Urquhart’s Cloud computing and the big rethink: Part 3 post of 10/7/2009 continues his series that claims:

[C]loud computing and virtualization will drive homogenization of data center infrastructure over time, and how that is a contributing factor to the adoption of "just enough" systems software. That, in turn, will signal the beginning of the end for the traditional operating system, and in turn, the virtual server.

However, this change is not simply being driven by infrastructure. There is a much more powerful force at work here as well--a force that is emboldened by the software-centric aspects of the cloud computing model. That force is the software developer. …

The Aberdeen Group’s Cloud Computing Is Democratizing Computing Power and Traditional IT Barriers of Cost, Time, Quality, Scale, and Geographic Location press release reveals “top performing companies that have adopted cloud computing have reduced IT costs 18% and data center power consumption by 16%:”

The new report, "Business Adoption of Cloud Computing: Reduce Cost, Complexity and Energy Consumption," examined the business adoption of cloud computing of 184 organizations, including small-to-medium and enterprise businesses.

The survey revealed the top business pressures driving the adoption of cloud computing include:

  • Overall cost of IT infrastructure
  • Need to enhance competitive advantage
  • Lack of flexibility in the current IT environment
  • Need to support additional services or users   

"Small businesses and startups are adopting cloud computing to break down traditional technological and financial barriers in the delivery of new categories of software innovation," said Bill Lesieur, research director, Aberdeen. "Larger businesses are cutting costs with cloud computing while embarking on a transformation of their IT service delivery models based on SOA architectures and cloud computing." …

Lori MacVittie posits that Infrastructure 2.0 Is the Beginning of the Story, Not the End in this 10/7/2009 post:

The term “Infrastructure 2.0” seems to be as well understood as the term “cloud computing.” It means different things to different people, apparently, and depends heavily on the context and roles of those involved in the conversation. This shouldn’t be surprising; the term “Web 2.0” is also variable and often depends on the context of the conversation. The use of the versioning moniker is meant, in both cases however, to represent a fundamental shift in the way the technologies are leveraged by people.

In the case of Web 2.0 it’s about the shift toward interactive, integrated web applications used to collaborate (share) data with people. In the case of Infrastructure 2.0, it’s about a shift toward interactive, integrated infrastructure used to collaborate (share) data with infrastructure. …

James E. Gaskin claims Clouds Now Strong Enough To Support Your Business in this 10/7/2009 article for NetworkWorld:

Technology makes life easier for small businesses, even if you can't see that while cursing your personal computer for some problem or another today. Not only have hardware costs dropped by an order of magnitude over the past two decades, you can now run your business quite well without any hardware beyond one laptop or netbook for every employee. The fuzzily-named “cloud” can support your business without any local hardware. And when you do want local hardware appliances, they should be tied into the cloud as well for disaster recovery support.

Let's define “cloud” as a hosted service leveraging hardware not in your location. You can have a private cloud, as many large companies do, by providing remote user services from a centralized but company owned data center. Mainframes could be called the original cloud with our definition, because few people were in the same location as their computer.

Smaller companies, even those with multiple locations, find a private cloud expensive, making them overkill when balancing cost versus benefits. Third party clouds, however, can now do everything a business needs. The smaller the company, the more they should look to hosted “cloud” providers for services ranging from marketing to customer acquisition to accounting to project management to payroll. You don't have to use hosted services for all these things, but if you do, you'll save considerable money upfront and get constant software upgrades as part of your deal.

David Linthicum describes 4 things that are driving cloud computing: “Some of the factors that are hurting cloud computing are also helping it -- the extreme hype, for example” in this 10/7/2009 post. Here are Dave’s choices:

    • The cloud computing hype
    • The cloud computing providers themselves
    • The down economy
    • Quick cloud computing wins …

Steve Lesem recommends CFOs: Questions to ask your CIO about Cloud Computing in this 10/6/2009 post:

A recent paper from Deloitte titled CFO Insights: Heading for the Clouds raises some very good points from the perspective of the CFO. It's worth a quick read.

In essence, the case is made that Cloud computing presents a significant opportunity because it allows companies to reduce the capital costs of information technology. It allows companies to convert the cost of computing from capital expenditures to primarily an operating expense. The author emphasizes that since the IT budget is often one of the largest expenses a company incurs, CFOs should ask their CIOs how they plan to leverage cloud computing to reduce costs and increase service responsiveness. In my view this is clearly a critical issue for CFOs looking to improve their financial results in a down economy.

Steve follows with a few questions CFOs should ask. …

Gaston Hiller claims Rich Services Cloud Applications Require Parallel Programming Skills in this InformationWeek article of 10/5/2009:

The interest in Rich Services Cloud Applications is growing fast. Users want responsive and immersive interactions from any location. Nowadays, you cannot think about a business application without mobility in mind. However, you cannot avoid creating a rich user experience (UX) on mobile devices whilst accessing services in the cloud. If you want to offer a really nice experience, you’ll have to use parallel programming skills everywhere. …

Microsoft Research updated its Dryad Project description page with the announcement that “An academic release of DryadLINQ is now available for public download.”

Dryad is an infrastructure which allows a programmer to use the resources of a computer cluster or a data center for running data-parallel programs. A Dryad programmer can use thousands of machines, each of them with multiple processors or cores, without knowing anything about concurrent programming. …
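DryadLINQ’s pitch is essentially “write an ordinary LINQ query and let the runtime partition it across the cluster.” The query below is plain LINQ to Objects and runs on one machine; it only illustrates the shape of such a data-parallel query, and the DryadLINQ-specific input/output types are not shown:

```csharp
// Sketch: a data-parallel-friendly word count expressed as an ordinary LINQ query.
using System;
using System.Linq;

class WordCountSketch
{
    static void Main()
    {
        string[] lines =
        {
            "the quick brown fox",
            "jumps over the lazy dog",
            "the dog barks"
        };

        var counts = lines
            .SelectMany(line => line.Split(' '))                   // map: line -> words
            .GroupBy(word => word)                                  // group by word
            .Select(g => new { Word = g.Key, Count = g.Count() })   // reduce: count per word
            .OrderByDescending(x => x.Count);

        foreach (var wc in counts)
            Console.WriteLine("{0}\t{1}", wc.Word, wc.Count);
    }
}
```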

BusinessCloud9’s Private Clouds set to dominate until 2012, says Gartner of 10/2/2009 (detailed site registration required) claims:

  • Private Clouds will be a stepping stone to public Clouds
  • A hybrid model will emerge to change the nature of the IT department

Businesses are more likely to invest in private Clouds over the next couple of years than take a leap of faith into public alternatives, but over time a hybrid model will emerge.

Research firm Gartner predicts that until at least 2012, more investment will be put into private services than public Cloud providers, although it sees a significant role emerging for such public offerings over time.

“For now, private Cloud Computing will not just be a viable term, it will be a significant strategic investment for most large organisations.” said Phil Dawson, research vice president at Gartner. “We predict that through 2012, more than 75% of organisations use of Cloud Computing will be devoted to very large data queries, short-term massively parallel workloads, or IT use by start-ups with little to no IT infrastructure.”

According to Gartner, there are some critical issues to consider before investing in private Clouds though, most notably around the area of potential lock-in.  Because private Clouds are deployed for approved members only, providing access for the general public or other businesses can be difficult. …

BusinessCloud9 is a publication of Sift Media, UK.

<Return to section navigation list> 

Cloud Security and Governance

••• Sam Johnston explains How Open Cloud could have saved Sidekick users' skins in this detailed and thoughtful paean of 10/11/2009 to open cloud formats and APIs as well as the Cloud Computing Bill of Rights:

The cloud computing scandal of the week is looking like being the catastrophic loss of millions of Sidekick users' data. This is an unfortunate and completely avoidable event that Microsoft's Danger subsidiary and T-Mobile (along with the rest of the cloud computing community) will surely very soon come to regret.

There's plenty of theories as to what went wrong - the most credible being that a SAN upgrade was botched, possibly by a large outsourcing contractor, and that no backups were taken despite space being available (though presumably not on the same SAN!). Note that while most cloud services exceed the capacity/cost ceiling of SANs and therefore employ cheaper horizontal scaling options (like the Google File System) this is, or should I say was, a relatively small amount of data.

As such there is no excuse whatsoever for not having reliable, off-line backups - particularly given Danger is owned by Microsoft (previously considered one of the "big 4" cloud companies even by myself). It was a paid-for service too (~$20/month or $240/year?) which makes even the most expensive cloud offerings like Apple's MobileMe look like a bargain (though if it's any consolation the fact that the service was paid for rather than free may well come back to bite them by way of the inevitable class action lawsuits).

Microsoft has a public-relations disaster in the works. Although the (appropriately named) Danger operation appears to be a subsidiary, not a division, the parent corporation shares responsibility for governing operational policies (including data backup).

Others climbing on the EPIC FAIL! article bandwagon include:

•• Sonoa Systems offers six short videos in its Cloud Security series - issues around PII, privacy, and audit compliance from this undated blog post:

Greg recently sat down with Ryan Bagnulo, Security Architect for ASPECT-i, to discuss a number of cloud security concerns and issues.   
We captured these discussions in six short videos, each focusing on a single topic.  Here are the first two, on PII, data filtering, and audit and regulatory concerns (see the full series here).
In this first video, Greg and Ryan set things up with discussions on:

  • Challenges in deploying cloud, starting with: should you trust your cloud administrator?
  • Good data for early cloud adoption (such as public data like news, stocks)

This 2nd short focuses on:

  • issues around PII (personally identifiable information)
  • counter-measures, such as de-identifying data with filtering, screening or access control
  • privacy and regulatory risks around data stored in the cloud
  • best practices for protecting data
  • implications of security breaches that violate privacy regulations

•• Dana Gardner advises Architects to cloud advocates: Get real in this 10/8/2009 BriefingsDirect podcast and partial transcript:

The popularity of the concepts around cloud computing has caught many IT departments off-guard.

While business and financial leaders have become enamored of the expected economic and agility payoffs from cloud models, IT planners often lack structured plans or even a rudimentary roadmap of how to attain cloud benefits from their current IT environment.

New market data gathered from recent HP workshops on early cloud adoption and data center transformation shows a wide and deep gulf between the desire to leverage cloud method and the ability to dependably deliver or consume cloud-based services.

So, how do those tasked with a cloud strategy proceed? How do they exercise caution and risk reduction, while also showing swift progress toward an “Everything as a Service” world? How do they pick and choose among a burgeoning variety of sourcing options for IT and business services and accurately identify the ones that make the most sense, and which adhere to existing performance, governance and security guidelines?

It’s an awful lot to digest. As one recent HP cloud workshop attendee said, “We’re interested in knowing how to build, structure, and document a cloud services portfolio with actual service definitions and specifications.”

Here to help better understand how to properly develop a roadmap to cloud computing adoption in the enterprise, we’re joined by three experts from HP: Ewald Comhaire, global practice manager of Data Center Transformation at HP Technology Services; Ken Hamilton, worldwide director for Cloud Computing Portfolio in the HP Technology Services Division, and Ian Jagger, worldwide marketing manager for Data Center Services at HP. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

•• Charlton Barreto’s Cloud Security issues get real post of 10/8/2009 adds substantial details to the Register’s DDoS attack rains down on Amazon cloud article of 10/5/2009 by Cade Metz:

… For Nøhr and other Bitbucket devotees, it seems odd that traffic from the net at large could bring down what should be "internal" storage resources. Nøhr speculates that Bitbucket's storage sits on the same network interface that connects the site to the outside world. He asks why the storage isn't on a separate channel - and why Amazon doesn't have methods in place to rapidly detect and combat such DDoS attacks.

"I do think they could’ve taken precautions to at least be warned if one of their routers started pumping through millions of bogus UDP packets to one IP," he wrote, "and I also think that 16+ hours is too long to discover the root of the problem."

Cloudsecurity.org's Balding is equally surprised that an outside attack could somehow "get between" EC2 and EBS. But since Amazon treats its service as a black box, he says, it's difficult to tell what actually occurred. He says it's possible that the attack came from inside EC2 - i.e. from another EC2 customer - but this is unlikely. "You'd think that Amazon could have shut down that sort of thing pretty quickly," he tells The Reg.

Amazon did not immediately respond to a request for comment, but we made contact before Pacific Coast office hours. We will update this story when the company responds.

In a security white paper (pdf) dated September 2008, the company says it uses standard DDoS-fighting techniques such as syn cookies and connection limits. It also says: "To further mitigate the effect of potential DDoS attacks, Amazon maintains internal bandwidth which exceeds its provider-supplied Internet bandwidth."

Bitbucket runs its entire site on Amazon's Elastic Compute Cloud, using the company's Elastic Block Store (EBS) for storing its database, log files, user data, and more. EBS provides persistent storage for EC2 server instances.

Mary Hayes Weier asks Is Workday's 15-Hour SaaS Outage Acceptable? in this 10/9/2009 post to InformationWeek’s Plug into the Cloud blog:

On Sept. 24, Workday's SaaS service for human resources, financial applications and payroll was down for 15 hours. That's right, not 15 minutes, not 1.5 hours, but 15 hours. Google Gmail goes down for 90 minutes, and it's as if the world has come to an end. So it begs the question: Is 15 hours' downtime for core applications such as accounting and HR acceptable?

The day after the outage, Workday Co-CEO Aneel Bhusri posted a blog explaining to customers what happened, but it wasn't until this week that the ERP-focused blogosphere and twitterers began discussing the incident. In a blog posted Thursday, software consultant Michael Krigsman took pains to point out that Workday did a nice job of damage control, daring to say that the outage was actually about "a success and not a failure." …

As I said in a comment to Mary’s post:

I don't believe that 15 hours of downtime (in a month, let alone in a 24-hour period) is acceptable performance for an online financial service, especially payroll, regardless of how the service’s management attempts to whitewash it.

• Reuven Cohen’s Canadian Government Unveils Cloud Computing Strategy post of 10/8/2009 claims “The Obama administration has provided Canada with a strong role model:”

This week I had the honor of organizing and hosting the first in a series of Global Government Cloud Computing Roundtables. This first event was held in Ottawa, Ontario and was coordinated in partnership with Jennifer Meacher of Canada's Foreign Affairs and International Trade (DFAIT) and held alongside the GTEC conference.

The purpose of this by invitation meeting was to provide an international forum for leading government CIOs and CTOs to discuss the opportunities and challenges of implementing cloud computing solutions in the public sector. Representatives from the GSA's Office of Citizen Services and Communications as well as a variety of senior officials from various Canadian government departments were in attendance. Attendees were eager to share insights into the opportunities and challenges facing cloud computing both in the Canadian Government as well as more broadly. Needless to say it was a lively discussion.

Jirka Danek, the Canadian Government's CTO (Public Works), outlined a detailed strategy for Cloud Computing within the Canadian Government (full text posted below). For those of you who are unfamiliar with Public Works and Government Services Canada (PWGSC), it is similar to the General Services Administration in the United States, with a mandate to be a common service agency for the Government of Canada's various departments, agencies and boards. …

For me one of the more exciting parts of the day was when Danek unveiled a detailed strategy for cloud computing, which I have the honor of sharing publicly below. (Download available here.)

• Phil Wainewright contends that On-Premise Proves Less Secure Than Cloud, at least for “smaller companies” in this post of 10/1/2009 to ebizQ, which I missed at the time:

Yet more evidence emerges that your data is safer with a cloud provider than it is when stored on an enterprise's own IT systems. eWeek last week reported that a survey by the Ponemon Institute and security company Imperva found that only a third of smaller companies have bothered to implement the Payment Card Industry's (PCI) Data Security Standard, introduced in 2005 by the major credit card companies to protect customers' personal information. Here's the most disturbing finding from the survey of 560 US and multinational organizations:

"According to the survey, 79 percent have experienced a data breach involving the loss or theft of credit card information and 60 percent of respondents didn't think they had sufficient resources to comply with PCI and bring about a necessary level of cardholder security."


IDG News Service reported another disturbing finding:

"Around 10 percent of the respondents who said they were PCI DSS compliant said they weren't using basic security software such as antivirus, firewalls and SSL (Secure Sockets Layers), [Amichai] Shulman, [Imperva's CTO] said ... 'I would find it very hard to explain why I'm not using SSL as part of my PCI compliance,' Shulman said. 'It seems to me that there is too much room for misinterpretation of the requirement, and companies are abusing it'."

In my personal view, none of these businesses have the least excuse for their cavalier attitude to the security of customer data. If they're not prepared to invest in adequate security, then they should move their payment processing to a SaaS provider without a moment's delay. Any reputable SaaS provider will provide robust PCI compliance as a default feature. Not using SaaS in such circumstances is a gross dereliction of duty.

I agree with Phil’s conclusion in his last paragraph.

GovInfoSecurity.com offers netForensics’ Eight Elements of an Effective Plan for FISMA Compliance white paper on 10/8/2009:

    • Complying with FISMA requirements can be tough. It's almost always time-consuming, costly, and complex, and for some agencies it seems impossible to achieve. A recent GAO congressional report says that most agencies continue to have security weaknesses in major categories of controls. This puts U.S. economic and national security interests at risk. In fact, with the growing sophistication of security attacks, we've actually seen a dramatic rise in security incidents reported by agencies over the past few years.
    • If you're one of these agencies that's still struggling to achieve FISMA compliance, maybe it's time to jumpstart your risk management program. …

Andrea DiMaio asks How Do I Know That I Am Using Government Data? in his 10/7/2009 post about the new XML version of the Federal Register to the Gartner blogs:

In a previous post I raised, amongst others, the issue of authenticity and quality of open government data. Yesterday, this came up in an interview that appeared on the O’Reilly Radar with Raymond Mosley, Director of the Office of the Federal Register (OFR), and Michael L. Wash, CIO of the Government Printing Office (GPO).

The actual news was the announcement of a freely available Federal Register in XML format.  For non-US readers, the Federal Register is “a description of the Executive branch’s doings, including 150 daily policy decisions of President and Federal agencies, such as proposed and enacted changes to federal regulations”. This is clearly an important step in the open data journey that the Obama administration is pursuing. …

David Linthicum contends Coding for data integration means getting it wrong 100 times, before you get it right once in this 10/7/2009 post to ebizQ’s Leveraging Information and Intelligence section:

Rick Sherman had a nice post entitled "The Trial-and-Error Method for Data Integration." Clearly, Rick and I are kindred spirits, and I thought he made some great points.

He points out the larger problems as:

  • "Not Developing an Overall Architecture and Workflow "
  • "Thinking that Data Quality is a Product Rather than a Process"
  • "Assuming Custom Coding is Faster than ETL Development"

"The usual development approach for data integration is to gather the data requirements, determine what data is needed from source systems, create the target databases such as a data warehouse, and then code. This is an incomplete, bottom-up approach. It needs to be coupled with a top-down approach that emphasizes an overall data integration architecture and workflow."

This is a huge issue that I see as I wander the data integration universe. There is little or no architectural thinking around data integration, and those charged with creating the solution simply attack the problem...code or buy technology first, ask questions later. The end result is a data integration architecture that has to be adjusted 5 times to meet the needs of the problem domain. That is not cost effective, and it's just not smart. …

Patrick Thibodeau’s CIA endorses cloud computing, but only internally article of 10/7/2009 for ComputerWorld claims: “While it can improve security, the agency won't be outsourcing data to Google or Amazon:”

One of the U.S. government's strongest advocates of cloud computing is also one of its most secretive operations: the Central Intelligence Agency. The CIA has adopted cloud computing in a big way, and the agency believes that the cloud approach makes IT environments more flexible and secure.

Jill Tummler Singer, the CIA's deputy CIO, said that she sees enormous benefits to a cloud approach. And while the CIA has been moving steadily to build a cloud-friendly infrastructure -- it has adopted virtualization, among other things -- cloud computing is still a relatively new idea among federal agencies.

"Cloud computing as a term really didn't hit our vocabulary until a year ago," said Singer.

But now that the CIA is building an internal cloud, Singer sees numerous benefits. For example, a cloud approach could bolster security, in part, because it entails the use of a standards-based environment that reduces complexity and allows faster deployment of patches. …

Jimmy Blake reports on his preparations for ISO27001 certification in his ISO 27001 in a cloud world post of 10/6/2009:

We’re preparing to go through our ISO 27001 certification at the moment and it struck me quite how different it is to certify as a cloud service vendor rather than as a traditional company.

Excuse my over simplification of the ISO 27001 process for those not involved in it, but effectively there are four stages:

  1. Define the organisation’s acceptable risk
  2. Work out what risk the organisation is exposed to
  3. Apply controls to reduce the residual risk to a level at or below the acceptable risk
  4. Rinse, repeat

A common method is to conduct a risk assessment, perhaps using the methodology covered in ISO 27001’s sister publication ISO 27005,  and then apply controls to manage the identified risks from another sister publication ISO 27002.

Now an organisation is normally free to choose whatever acceptable level of risk they feel the organisation is able to bear.  Often a higher level of acceptable risk is what gives an organisation a competitive advantage, allowing them to be nimble enough to take advantages that other, more risk-averse, organisations cannot. …
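To make Jimmy’s loop concrete, here’s a toy sketch of stages 1–3; the scoring scale and control effect are invented for the example and are not ISO 27001/27005 methodology:

```csharp
// Toy illustration of the accept / assess / treat loop described above.
using System;
using System.Collections.Generic;

class Risk
{
    public string Name;
    public int Likelihood;   // 1 (rare) .. 5 (almost certain) -- invented scale
    public int Impact;       // 1 (negligible) .. 5 (severe)   -- invented scale
    public double Residual() { return Likelihood * Impact; }
}

class RiskLoopSketch
{
    static void Main()
    {
        const double acceptableRisk = 6.0;           // stage 1: acceptable level
        var risks = new List<Risk>                    // stage 2: assessed exposure
        {
            new Risk { Name = "Lost backup tape",    Likelihood = 3, Impact = 4 },
            new Risk { Name = "Weak admin password", Likelihood = 4, Impact = 5 }
        };

        foreach (var risk in risks)                   // stage 3: apply controls
        {
            double residual = risk.Residual();
            if (residual > acceptableRisk)
            {
                // Model a control (encryption, password policy) as halving likelihood.
                residual = Math.Ceiling(risk.Likelihood / 2.0) * risk.Impact;
            }

            Console.WriteLine("{0}: residual {1} -> {2}", risk.Name, residual,
                residual <= acceptableRisk ? "accept" : "treat further (stage 4: repeat)");
        }
    }
}
```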

Carolyn Duffy Marsan reports Pentagon: Our cloud is better than Google's on 10/5/2009: “U.S. military says its cloud computing platform is more secure, reliable than commercial offerings:”

The U.S. Defense Department is offering cloud computing services that military officials claim are safer and more reliable than commercial providers such as Google.

At a press conference Monday, the Defense Information Systems Agency (DISA) announced that it is allowing military users to run applications in production mode on its cloud computing platform, which is called RACE for Rapid Access Computing Environment.

Since its launch a year ago, RACE has been available for test and development of new applications, but not for operations.

Military officials say RACE is now ready to deliver cutting-edge applications to military personnel.

Henry Sienkiewicz, technical program director of DISA's computing services and RACE team, says RACE is more secure and stable than commercial cloud services. Google, for example, has suffered from frequent service outages including high-profile Gmail and Google News outages in September. …

<Return to section navigation list> 

Cloud Computing Events

••• Ben Day will answer the Azure: Is the Relational Database Dead? question at the Connecticut .NET User Group’s meeting on 10/13/2009:

Ben will show you how to write an application for Windows Azure using ASP.NET and WCF on an "Internet Scale".

When: 10/13/2009 5:30 PM – 8:00 PM ET   
Where: 74 Batterson Park Road, Farmington, CT 06032, USA

••• CloudStorm lists 11 confirmed speakers for its 10/13/2009 CloudStorm London event at 17:30 BST in Inner Temple, London, UK.

When: 10/13/2009 17:30 BST
Where: Inner Temple, London, UK

David Pallman is Speaking on Azure Migration 10/15/09 at So Cal .NET Architecture Group, according to this 10/10/2009 post:

On October 15th '09 I'll be speaking at the So Cal .NET Architecture Group on Migrating .NET Applications to Azure.

Migrating .NET Applications to Azure

In our October meeting we’ll look at what’s involved in migrating .NET applications and databases over to Azure, Microsoft’s cloud computing platform. You’ll learn about the challenges and best practices around migrating databases to SQL Azure and migrating web applications to Windows Azure. Experiences doing this in the real world for early adopters will be shared. We’ll also discuss how to design applications and databases that are capable of running in the cloud or in the enterprise.

The next SoCal IASA chapter meeting will be Thursday October 15, 2009 at Rancho Santiago Community College District, 2323 N. Broadway, Santa Ana. Meeting starts at 7:00 pm, pizza and networking 6:30 pm. Meeting cost is $5 to help us cover the cost of food and beverages. RSVP please: http://www.socaldotnetarchitecture.org/.

When: 10/15/2009 6:30 PM PT   
Where: Rancho Santiago Community College District, 2323 N. Broadway, Santa Ana, CA, USA   

• Aaron Skonnard shares his demo code from four sessions he presented at VSLive! Orlando 2009 collected into a single zip file available from his Demos from my VSLive Orlando 2009 sessions post of 10/8/2009:

  • Windows Azure: A New Era of Cloud Computing
  • Programming .NET Services
  • Windows Communication Foundation and Workflow 4
  • Building RESTful Services with ADO.NET Data Services

• Sanjay Jain reports Microsoft BizSpark Incubation Week for Windows Azure @ Atlanta 09 Nov 09 in this 10/7/2009 post to the Microsoft Dynamics ISV Evangelism blog:

With several successful Microsoft BizSpark Incubation Weeks (Win7 Boston, Win7 Reston, CRM Reston, CRM Boston, Win7 Irvine, Mobility Mountain View), we are pleased to announce Microsoft BizSpark Incubation Week for Windows Azure in Atlanta, GA, during the week of 9th Nov ’09. …

Microsoft BizSpark Incubation Week for Windows Azure is designed to offer the following assistance to entrepreneurs.

  • Learning and building new applications in the cloud, or using interoperable services that run on Microsoft infrastructure to extend and enhance your existing applications, with the help of on-site advisors and an off-shore development team
  • Getting entrepreneurs coaching from guest speakers and a panel of industry experts
  • Generating marketing buzz for your brand
  • Creating the opportunity to be highlighted at the upcoming launch

We are inviting nominations from BizSpark Startups interested in Windows Azure Platform that target one or more of the following:

… The Microsoft BizSpark Incubation Week for Windows Azure will be held at Microsoft Technology Center, Atlanta, GA from Mon 11/09/2009 to Fri 11/13/2009. This event consists of ½ day of training, 3 ½ days of active prototype/development time, and a final day for packaging/finishing and reporting out to a panel of judges for various prizes.

This event is a no-fee event (plan your own travel expenses) and each team can bring 3 participants (1 business and 1-2 technical). To nominate your team, please submit the following details to Sanjay Jain (preferably via your BizSpark sponsor). Nominations will be judged according to the strength of the founding team, originality and creativity of the idea, and ability to leverage Windows Azure Scenarios.

When: 11/9 to 11/13/2009   
Where:  Microsoft Technology Center, 1125 Sanctuary Parkway, Suite 300 Alpharetta, GA 30004 USA

• Oracle Corp.’s OpenWorld conference convenes next week in San Francisco’s Moscone Center convention halls. Despite Larry Ellison’s derision, Oracle’s Focus on Cloud Computing article for the conference lists 29 sessions.

When: 10/11 to 10/15/2009   
Where:  Moscone Center, San Francisco, CA, USA

The Boston Azure User Group will hold its first meeting on 10/22/2009 from 6:30 to 8:30 PM (tentative) at the Microsoft New England R&D (NERD) Center.

For this first meeting of the Boston Azure User Group, Brian Lambert of Microsoft is the featured speaker. Brian is an engineer in Microsoft's Software + Services Concept Development team in Cambridge, MA. Brian is an Azure Ninja and will share some of his real-world experience in developing for Windows Azure, Microsoft's new Cloud platform. …

The Boston Azure User Group is @bostonazure on Twitter and its Web site is here.

Tentative Agenda:

    • 6:30 - 7:00 - Pizza, Meet & Greet
    • 7:00 - 8:15 - Brian Lambert's Talk
    • 8:15 - 8:30 - Q&A, discussion for future

When: 10/22/2009 6:30 to 8:30 PM (tentative)   
Where: Microsoft New England R&D (NERD) Center, One Memorial Drive, Cambridge, MA 02142, USA  (Directions to NERD)

James Conard describes what’s in store for PDC 2009 attendees in this 00:10:58 Channel9 video, Countdown to PDC09: Da Cloud! Da Cloud!:

Meet the owner of the PDC cloud services track, James Conard.  Yep, it’s true, general availability of Azure will be announced at PDC (buy a datacenter in the cloud, if you will) but don’t think that’s all we’ve got up our sleeves!  The Windows Azure Platform has come a long way in the last year since it was first announced at PDC last year, and we have a lot of progress to talk about, plus new innovations that we’ll unveil at PDC this year – listen to James as he drops hints as to what those announcements will be.  And speaking of hints, don’t forget, Mike’s last HardHat Challenge remains unsolved.  He drops another clue in this show, so listen carefully! …

When: 11/17 to 11/19/2009   
Where: Los Angeles Convention Center, Los Angeles, CA USA 

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

••• Radim Marek claims Amazon EC2 still vulnerable to UDP flood attacks in this 10/11/2009 post:

Unfortunate events surrounding the DDoS attack against BitBucket kicked off heated discussions about the nature of this vulnerability. While Amazon officially acknowledged this to be a single, isolated incident, many others started asking why it happened in the first place:

  • Was BitBucket’s security group configuration set to block UDP traffic?
  • How come they haven’t got better visibility of the on-going attack?
  • Is this really Amazon’s fault?

Both personal and professional interest led me to find out more. Having designed a series of tests to replicate this scenario, I started the first instance and set up the target environment.

He then goes on with demonstrations that prove his claim.

Radim is Director of Operations for Good Data.
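
If you want to reproduce a much-simplified version of this kind of test yourself, the sketch below (my own hypothetical example, not Radim’s test harness) checks whether inbound UDP datagrams actually reach an instance: run the listener on the target, run the probe from an outside host, and if nothing echoes back the traffic is being dropped somewhere along the path (security group, OS firewall or an upstream mitigation layer).

```python
# udp_probe.py - minimal UDP reachability check, not a load test.
# The host name and port below are hypothetical; adjust for your own instance.
import socket

def run_listener(port=9999):
    """Run on the instance: echo back any UDP datagram received."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    print(f"Listening for UDP on port {port} ...")
    while True:
        data, addr = sock.recvfrom(2048)
        sock.sendto(data, addr)  # echo so the probe can confirm delivery

def probe(host, port=9999, count=5, timeout=2.0):
    """Run from outside: send a few UDP probes and report how many come back."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    received = 0
    for i in range(count):
        payload = f"probe-{i}".encode()
        sock.sendto(payload, (host, port))
        try:
            data, _ = sock.recvfrom(2048)
            if data == payload:
                received += 1
        except socket.timeout:
            pass  # dropped by a security group, firewall or mitigation layer
    print(f"{received}/{count} probes echoed back")

if __name__ == "__main__":
    probe("ec2-203-0-113-10.compute-1.amazonaws.com")  # hypothetical instance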

Tim Greene’s Bitbucket's downtime is a cautionary cloud tale post of 10/6/2009 to NetworkWorld’s CloudSecurity blog claims:

Bitbucket’s weekend troubles with Amazon’s cloud services are instructive, but don’t necessarily indicate a problem with cloud security.

The major lesson to learn from Bitbucket’s experience is: don’t put all your bits in one bucket. The company, which hosts a Web-based coding environment, entrusted its entire network to Amazon’s cloud services – either its EC2 computing resources or its EBS storage service.

Any business that is going to do that needs a plan B for when things go wrong as they did last weekend when Bitbucket apparently suffered a DDoS attack that took Bitbucket and Amazon about 20 hours to diagnose and fix. Meanwhile, customers of Bitbucket couldn’t work on their own projects because they couldn’t reach Bitbucket. …

• Chris Hoff (@Beaker)’s AMI Secure? (Or: Shared AMIs/Virtual Appliances – Bot or Not?) post of 10/8/2009 questions the trustworthiness of prebuilt AMIs for Amazon Web Service EC2:

… The convenience of pre-built virtual appliances offered up for use in virtualized environments such as VMware’s Virtual Appliance marketplace or shared/community AMIs on AWS EC2 makes for a tempting reduction of the time spent getting your virtualized/cloud environments up to speed; the images are there just waiting for a quick download and then a point-and-click activation.  These juicy marketplaces will continue to sprout up with offerings of bundled virtual machines for every conceivable need: LAMP stacks, databases, web servers, firewalls…you name it.  Some are free, some cost money.

There’s a dark side to this convenience. You have no idea as to the trustworthiness of the underlying operating systems or applications contained within these tidy bundles of cloudy joy.  The same could be said for much of the software in use today, but cloud simply exacerbates this situation by adding abstraction, scale and the elastic version of the snuggie that convinces people nothing goes wrong in the cloud…until it does.

While trust in mankind is noble, trust in software is a palm-head-slapper.  Amazon even tells you so:

AMIs are launched at the user’s own risk. Amazon cannot vouch for the integrity or security of AMIs shared by other users. Therefore, you should treat shared AMIs as you would any foreign code that you might consider deploying in your own data center and perform the appropriate due diligence.

Ideally, you should get the AMI ID from a trusted source (a web site, another user, etc.). If you do not know the source of an AMI, we recommend that you search the forums for comments on the AMI before launching it. Conversely, if you have questions or observations about a shared AMI, feel free to use the AWS forums to ask or comment.

Remember that in IaaS-based service offerings, YOU are responsible for the security of your instances.  Do you really know where an AMI/VM/VA came from, what’s running on it and why?  Do you have the skills to be able to answer this question?  How would you detect if something was wrong? Are you using hardening tools?  Logging tools?  Does any of this matter if the “box” is rooted anyway? …
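
Amazon’s advice boils down to verifying an AMI’s provenance before you launch it. The sketch below shows one way to automate that due-diligence step – it assumes the AWS SDK for Python (boto3) and a hypothetical allowlist of account IDs you trust; it is not Amazon’s or Hoff’s prescribed procedure, just an illustration.

```python
# check_ami_owner.py - verify an AMI's publisher before launching it.
# Assumes boto3 is installed and AWS credentials are configured; the owner
# account IDs and the AMI ID below are hypothetical placeholders.
import boto3

TRUSTED_OWNERS = {"123456789012", "999888777666"}  # accounts you have vetted

def is_trusted_ami(image_id, region="us-east-1"):
    """Return True only if the AMI's owner is on the trusted allowlist."""
    ec2 = boto3.client("ec2", region_name=region)
    images = ec2.describe_images(ImageIds=[image_id]).get("Images", [])
    if not images:
        return False  # image not visible to this account: treat as untrusted
    image = images[0]
    print(f"{image_id}: owner={image.get('OwnerId')}, name={image.get('Name')}")
    return image.get("OwnerId") in TRUSTED_OWNERS

if __name__ == "__main__":
    ami = "ami-0123456789abcdef0"  # hypothetical shared AMI
    if not is_trusted_ami(ami):
        raise SystemExit("AMI owner is not on the trusted list - do more due diligence first.")
```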

• Jessica Scarpati reports Carriers poised to offer cloud computing services, but with some risks on 10/8/2009 in this feature-length analysis for SearchTelecom.com:

Enterprises love the flexibility that cloud computing services offer over the traditional hosting services that telecommunications carriers have enjoyed as an enterprise cash cow in recent years. Carriers must adapt or risk losing enterprise business.

Carriers must proceed cautiously, however, to ensure that they have the right cloud platform and the capacity and security that enterprises will demand from cloud computing services. Carriers should also be ready for a radically different revenue model, as cloud computing customers prefer pay-as-you-go services to being locked into contracted hosted services.

"The hosting environment is under attack, so [cloud services are] something they need to think about," said Alex Winogradoff, a vice president at Gartner Inc., and the author of Dataquest Insight: How and Why Telecommunications Carriers Must Pursue Cloud Services Opportunities Now. "If they don't go and follow suit and become cloud providers, they're going to lose any growth in the marketplace."

The cloud-based services market will be worth $150 billion by 2013, with telecommunications carriers unlikely to grab more than $5 billion to $6 billion -- about 5% -- of that pie unless they acquire existing cloud service providers, according to Gartner. …

Jessica quotes Gartner vice president Alex Winogradoff:

Everything telephone companies build is … a much higher-quality solution. By nature, it has to be. Just look at Gmail -- how many times has Gmail failed? …

She continues with Part 2 that analyzes risks carriers face when offering cloud computing services.

Guy Rosen updates his earlier estimate of the number of Amazon EC2 instances in his Amazon Usage Estimates and Updates post of 10/7/2009, which is based on RightScale data:

RightScale decided to apply the findings to the mountain of EC2 data they have – a few years worth. Firstly, this solved a few of the remaining puzzles in the ID formula, on the Series ID and Superseries ID (I’ve updated the original post to reflect this). Moreover, the wider perspective led them to estimate that the total number of instances launched is actually a whopping 15.5 million. RightScale’s full findings can be found here.

Image credit: Guy Rosen
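
The estimation technique itself is simple arithmetic once you accept the premise that EC2 resource IDs embed a sequential counter: decode the counter at two points in time and the difference gives a launch rate. The sketch below illustrates only that last step with made-up numbers; the ID-decoding formula is Guy’s and RightScale’s work and isn’t reproduced here.

```python
# instance_rate_estimate.py - back-of-the-envelope launch-rate arithmetic.
# The counters below stand in for values obtained by decoding real EC2
# resource IDs; the decoding formula itself is not reproduced here.
from datetime import datetime

def launches_per_day(sample_a, sample_b):
    """Each sample is a (timestamp, decoded_counter) pair; returns launches/day."""
    (t1, c1), (t2, c2) = sorted([sample_a, sample_b])
    days = (t2 - t1).total_seconds() / 86400.0
    return (c2 - c1) / days

if __name__ == "__main__":
    # Hypothetical decoded counters observed one week apart:
    a = (datetime(2009, 9, 30), 14_900_000)
    b = (datetime(2009, 10, 7), 15_050_000)
    print(f"~{launches_per_day(a, b):,.0f} instance launches per day")  # ~21,429 here
```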

Rich Miller reports EMC Readies Cloud Compute Service on 10/8/2009:

It wasn’t surprising that storage powerhouse EMC got into the cloud storage business. But the company is now offering a VMware-based service to offer compute capacity in the cloud, a niche currently dominated by the EC2 platform from Amazon Web Services.

The Atmos Online Compute Service will provide instant access to virtual servers running in EMC’s data centers. The cloud compute service is designed to work seamlessly with EMC’s Atmos Online Storage Service, much like the relationship between EC2 and the Amazon S3 storage cloud. EMC published a Getting Started Guide late last month, and GigaOm reports that the Atmos compute service will use software from LineSider Technologies.

The details emerging this week confirm earlier reports from Gestalt IT, which reported details of the Atmos Online Compute offering back in August.

Yet another cloud computing source (YACCS) which, because it comes from EMC, is likely to be very expensive.

GigaSpaces and GoGrid announce in this press release of 10/7/2009 that they will Join Forces to Create an Enterprise-grade PaaS Offering for Java and .NET:

GigaSpaces Technologies, developer of the industry-leading, cloud-enabled eXtreme Application Platform (XAP), and GoGrid, the Cloud Infrastructure Hosting service from ServePath Dedicated Hosting, have combined their technologies to create the most robust Platform-as-a-Service (PaaS) offering for Java and .NET available. The combination of XAP's power, interoperability, and elasticity with GoGrid's secured, hardened, scalable, and customizable environment provides a first-of-its-kind, enterprise-grade public cloud offering customers can rely on to deliver all their mission-critical applications. …

It will be interesting to see whether the two partners’ PaaS venture, whatever they decide to name it, will be competitive with Windows Azure.

Maureen O’Gara reports Adobe’s Aiming ColdFusion at Multiple Clouds in this 10/6/2009 post: “Called ColdFusion 9 in the Cloud, it's meant to run on Amazon EC2 and S3 as a hosted service:”

Adobe is nosing its flagship ColdFusion rapid web site and Internet application development platform onto multiple clouds.

Turns out it's been quietly collaborating with Alagad, a North Carolina web development services provider, on a cloud implementation of ColdFusion 9.

Adobe just released ColdFusion 9 on Monday. The cloud version of the stuff is still in private beta.

Called ColdFusion 9 in the Cloud, it's meant to run on Amazon EC2 and S3 as a hosted service.

And, besides the new Alagad approach, Adobe says its new instance-based licensing for ColdFusion 9 will let developers install the widgetry on virtual instances in the cloud environment of their choice to prototype, develop, test and host ColdFusion applications. …

Perot Systems’ Hit the Ground Running: The day of Electronic Health Records is here page starts by asking “How will the economic stimulus transform your organization?” and continues:

On February 17, 2009, the future of healthcare IT changed. The landmark American Recovery and Reinvestment Act of 2009 (ARRA) set aside significant federal funding for healthcare-related spending, including provisions for health IT. This unprecedented federal funding and support creates new opportunities   for physicians, hospitals, and other providers to adopt and use electronic health records (EHR), as well as other technology to advance the delivery of healthcare. 

What is a Health Information Exchange?

A Health Information Exchange (HIE) provides the capability to electronically move clinical information between disparate health care information systems while maintaining the meaning of the information being exchanged. The goal of HIE is to facilitate access and retrieval of clinical data to provide safer, more timely, efficient, effective, equitable and patient-centered care.

What is the value of a Health Information Exchange?

  • Facilitates use of an interoperable electronic health record for a patient comprised of information from multiple providers' medical records
  • Increases point of care information enhancing clinical decision making
  • Improves cost and quality of provider care through reduced medical errors and enhanced care coordination
  • Reduces redundancy/unnecessary utilization of lab results and images as patients move across care settings
  • Facilitates a secure, interoperable health information infrastructure to reduce cost associated with care …

The page offers numerous links to healthcare resources related to ARRA. Perot Systems’ experience in the healthcare industry was clearly one of the major incentives for its acquisition by Dell, Inc.

Mike Neil’s Microsoft and Red Hat Complete Cooperative Technical Support TechNet post of 10/7/2009 reports:

Back in February we announced our work with Red Hat to enable cooperative technical support for virtualized environments. I'm excited to announce that we've completed certification in each other's programs! Customers can now deploy Microsoft Windows Server and Red Hat Enterprise Linux, and a range of select applications, virtualized on Red Hat and Microsoft virtualization products, knowing that the combined solutions will be supported by both companies.

Here are the details:

  • Red Hat Enterprise Linux 5.2, 5.3 and 5.4 have passed certification tests when running on Windows Server 2008 Hyper-V, Microsoft Hyper-V Server 2008, Windows Server 2008 R2 Hyper-V, and Microsoft Hyper-V Server 2008 R2. See more at Red Hat's certified hardware site.
  • Windows Server 2003, Windows Server 2008 and Windows Server 2008 R2 are validated to run on Red Hat Enterprise Linux 5.4, using its KVM-based hypervisor. See more at the Microsoft Server Virtualization Validation Program site.

Beyond the OS, both companies have select applications that would receive technical support when running on certified server virtualization software. The Microsoft applications can be seen in KB article 957006. On the Red Hat side, you can now run JBoss Enterprise Middleware within a virtual machine guest on Hyper-V and receive coordinated technical support.  This is a step forward for enterprise customers, hosting providers, systems integrators, and those who want to offer their customers the top x86 operating systems to run applications. …

Mike Neil is general manager of Windows Server and Server Virtualization.

John Foley reports on a forthcoming white paper by Amazon’s Werner Vogels in an Amazon's Three Steps To Cloud Computing article of 10/6/2009 for InformationWeek’s Cloud Computing blog:

Amazon CTO Werner Vogels is preparing a paper that summarizes his views on how large companies can adopt cloud services. Here's a sneak peek at his soon-to-be-published report.

I got a chance to talk to Werner at the InformationWeek 500 conference a few weeks ago in Dana Point, Calif. The topic of our hour-long conversation, in front of a live audience, was "Can The Cloud Scale For The Enterprise?" You can view the full video, which covers a lot of ground, below.

Werner spends a good deal of his time in the market serving as a sounding board and adviser to CIOs and CTOs who are assessing where cloud services should fit into their IT plans. Based on that experience, he's outlining best practices in a paper that I expect to be released any day. At the IW500 conference, he boiled it down to a three-step process:

  1. First, all new development of Web applications gets done in the cloud.
  2. Second, IT departments plug into the cloud selectively to learn what works. This is a near-term step and you could call it proof of concept, but Vogels says it actually goes beyond experimentation for many companies. Maybe it's using the cloud for application development and testing or for SharePoint services, for example.
  3. Third, IT managers need to assess their overall IT portfolio to see what can be moved into the cloud over the long term. One level of assessment should take place around application types (HPC, knowledge management, etc.). Another around "dependencies," i.e., your installed database management software and data center hardware. Risk and ROI are part of the assessment, too. …

John’s post contains an embedded player for Werner’s presentation to the IW500 conference.

Leena Rao reports Rackspace Launches NoMoreServers.com To Tout Computing-As-A-Service in this 10/7/2009 TechCrunch post:

When Salesforce.com founder and CEO Marc Benioff launched his CRM platform in the cloud in 1999, he embarked on a “No Software” campaign to tout his “Software as a Service” agenda. Today, hosting service Rackspace is promoting a similar campaign with the launch of NoMoreServers.com, a site dedicated to the emergence of Computing-as-a-Service models (like hosting, cloud computing and SaaS) to power enterprise IT.

NoMoreServers.com is a rallying cry of the computing-as-a-service era. The site seeks to empower businesses to acknowledge the decline of in-house computing and the rise of the All Cloud Enterprise (ACE). Covering hosting, cloud computing, SaaS, and the key vendors driving them (e.g., Amazon, Google, Rackspace, Salesforce), NoMoreServers.com will feature daily commentary explaining all things cloud computing. The site will include third-party content and news about hosting and cloud computing, and will have a live community portal for visitors to engage on the topic of outsourcing computing. …

Ben Kepes describes Wolf Frameworks – Another Contender for the PaaS Crown in this 10/7/2009 post:

The other day I was given a briefing by Sunny Ghosh of Wolf Frameworks. Founded in 2006, WOLF is a 100% browser-based (Ajax, XML and .NET), standards-compliant PaaS targeting users who need to create mashable and interoperable SaaS business applications. Wolf is aiming at the SMB market, which is resource poor and concerned about quick development and application portability – this portability is one of the main strings to Wolf’s bow, as we’ll see later.

When asked what point of difference Wolf had compared to the myriad other PaaS providers, Sunny articulated it simply as “No Development PaaS” and went on to list four key points:

  1. Wolf allows users to setup a database server in their private cloud or just extract their entire relational Database with a single click.
  2. Wolf doesn’t require programming language knowledge – rather their PaaS follows design discipline in their browser-based environment.
  3. In terms of application portability and interoperability, users can work code free, and utilise web services calls from the Wolf Business Rules Action list.
  4. An application created on Wolf allows for migration of the application design (Entity Relationship, Business Rules, etc) in a portable XML Format. …

G2iX Unveils First Server Using Open Standards for Custom PaaS Creation Inside the Enterprise according to this 10/7/2009 press release:

Today at the ITU Telecom World (http://www.itu.int/WORLD2009/) conference, G2iX unveiled the Morph CloudServer (http://www.g2ix.com/morph), the first appliance using open standards to dynamically create custom Platform as a Service (PaaS) environments. Following industry-leading standards established by Amazon's (http://aws.amazon.com/solutions/case-studies/morph/) Elastic Compute Cloud (EC2 (http://aws.amazon.com/ec2/)) and Eucalyptus (http://www.eucalyptus.com/), the Morph CloudServer offers unprecedented flexibility, control and agility for developing and deploying software on a massive scale.

Businesses now have a more effective way to safely pilot new ideas, facilitate global expansion, and significantly decrease time to market. "For now, private cloud computing will not just be a viable term, it will be a significant strategic investment for most large organizations," said Gartner Research VP Phil Dawson in a recent release (http://www.gartner.com/it/page.jsp?id=1193913). "We predict that through 2012, more than 75 percent of organizations' use of cloud computing will be devoted to very large data queries, short-term massively parallel workloads, or IT use by startups with little to no IT infrastructure." …

IBM Launches New Storage Cloud Solution for the Enterprise according to this 10/6/2009 press release:

At its Information Infrastructure Analyst Summit IBM (NYSE:IBM) announced its intention to enter the cloud storage space with the launch of the IBM Smart Business Storage Cloud, IBM Information Archive and new consulting services.

Traditional storage cloud solutions have piqued the interest of many enterprise clients for their very low price points, but these systems have mostly been limited to ‘sandbox’ use cases for secondary or tertiary copies of data or for use in development and test environments in which data does not have to be frequently accessed and does not tend to grow into large scales.

The IBM Smart Business Storage Cloud is a private cloud offering that utilizes low cost components in a true scale-out clustered model not offered by its competition. Hallmarks of the solution include support for multiple petabytes of capacity, billions of files, and scale-out performance previously limited to the largest ‘high performance computing’ systems.

Industry-leading technologies like IBM’s General Parallel File System have been combined with the latest advances in storage, virtualization and server technologies like XIV and BladeCenter to provide performance and scalability all under one globally addressable namespace. The solution is highly secure and built to make use of a client’s existing security and authentication infrastructure. In addition to this, IBM offers tightly coupled services for implementation support and an optional ongoing lightweight managed service to help clients manage their cloud environment on an ongoing basis. …

My conclusion: Everyone and his dog is getting into the cloud computing act.

<Return to section navigation list> 
