Sunday, August 23, 2009

Windows Azure and Cloud Computing Posts for 8/19/2009+

Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.

•• Updated 8/22 and 8/23/2009: Live Framework and Services removed from Windows Azure, Windows Azure World Tour, and PHP Toolkit for ADO.NET Data Services
• Updated 8/20 and 8/21/2009: Windows Azure PDC pre-conference, AWS price cuts, major additions and minor edits.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use these links, click the post title to display the single article, and then use the links to navigate to the section you want.

Azure Blob, Table and Queue Services

Steve Marx explains Using Container-Level Access Policies in Windows Azure Storage in this 8/22/2009 post:

Last week, I blogged about shared access signatures, one of the new blob storage features introduced in July.  This feature lets you embed signatures in URLs to grant granular access to containers and blobs.  In the approach I took in the blog post, you embed the access policy directly in the URL, which means there’s no way to modify or revoke permission after the URL has been given out.  To limit the scope of these irrevocable privileges, explicit access policies in the URL are limited to granting permissions for at most one hour.

To grant longer-term permissions or to retain the ability to modify or revoke permissions after handing them out, you can use another new feature called container-level access policies (MSDN documentation).  These are named access policies that take the place of explicit policies in the URL.  In this blog post, I’ll walk you through a simple example of using a container-level access policy.

Steve is really on a roll! Thanks to Mike Amundsen for the heads up.
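For readers who want to see what a named policy looks like on the wire, here’s a rough sketch of the Set Container ACL call covered in the MSDN documentation Steve links. It’s illustrative only: the account and container names are placeholders, the CreateSharedKeyAuthHeader helper is hypothetical (real requests need a proper SharedKey signature), and you should confirm the x-ms-version value against the current docs.

using System;
using System.IO;
using System.Net;
using System.Text;

class ContainerPolicySketch
{
    static void Main()
    {
        // A named (container-level) access policy: read-only for 30 days.
        // Unlike an explicit policy embedded in a shared access signature URL,
        // this one can later be changed or revoked by rewriting the container ACL.
        string policyXml =
            "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
            "<SignedIdentifiers>" +
            "  <SignedIdentifier>" +
            "    <Id>readonly-30d</Id>" +
            "    <AccessPolicy>" +
            "      <Start>2009-08-22T00:00:00Z</Start>" +
            "      <Expiry>2009-09-21T00:00:00Z</Expiry>" +
            "      <Permission>r</Permission>" +
            "    </AccessPolicy>" +
            "  </SignedIdentifier>" +
            "</SignedIdentifiers>";

        // Set Container ACL: PUT .../mycontainer?restype=container&comp=acl
        var request = (HttpWebRequest)WebRequest.Create(
            "http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=acl");
        request.Method = "PUT";
        request.Headers["x-ms-version"] = "2009-07-17"; // verify against current docs
        // Hypothetical helper standing in for the SharedKey signing described on MSDN.
        request.Headers["Authorization"] = CreateSharedKeyAuthHeader(request);

        byte[] body = Encoding.UTF8.GetBytes(policyXml);
        request.ContentLength = body.Length;
        using (Stream requestStream = request.GetRequestStream())
            requestStream.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Set Container ACL returned: " + response.StatusCode);

        // A shared access signature URL can then reference the policy by name
        // (the si=readonly-30d query parameter) instead of embedding start, expiry
        // and permissions directly, which is what makes the grant revocable.
    }

    // Placeholder only; not a real implementation of SharedKey signing.
    static string CreateSharedKeyAuthHeader(HttpWebRequest request)
    {
        return "SharedKey myaccount:<signature>";
    }
}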

• Magnus Mårtensson’s Using the CloudStorage.API: The Entity Storage post of 8/20/2009 completes the three-part

[B]log series on basic usage of the Cloud Storage API (CloudStorage.API). This time we will show how to interact with the Cloud Entity Storage.

See below for links to earlier articles.

• Maarten Balliauw reports that Shared Access Signature documentation for PHP SDK for Windows Azure blob data services is available on CodePlex as of 8/20/2009.

• Rob Gillen’s SilverLight and Paging with Azure Data post of 8/20/2009 explains how “to tell the browser that you’d rather handle HTTP requests yourself:”

By simply registering the http protocol (you can actually do it at a granularity as fine as a single site) as handled by the Silverlight client, “magic” happens and you suddenly have access to the properties of the WebClient (ResponseHeaders) and HttpWebRequest (Response.Headers) objects that you would expect. The magic line you need to add prior to issuing any calls is as follows:

bool httpResult = WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
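Once the prefix is registered, the headers show up where you’d expect. Here’s a minimal Silverlight-flavored sketch of the pattern Rob describes; the endpoint URL is a made-up stand-in for his table-data proxy:

using System;
using System.Net;
using System.Net.Browser; // WebRequestCreator lives here in Silverlight

public static class ResponseHeaderSketch
{
    public static void Run()
    {
        // Route all http:// requests through the Silverlight client HTTP stack
        // so that response headers are exposed to managed code.
        bool httpResult = WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);

        var client = new WebClient();
        client.DownloadStringCompleted += (sender, e) =>
        {
            if (e.Error == null)
            {
                // Populated only when the client HTTP stack handles the request;
                // the default browser stack keeps these headers hidden.
                string contentType = client.ResponseHeaders["Content-Type"];
            }
        };
        // Hypothetical endpoint standing in for an Azure table-data proxy.
        client.DownloadStringAsync(new Uri("http://example.cloudapp.net/tabledata"));
    }
}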

• Rob Gillen continues his performance analysis of RESTful data access formats in his AtomPub, JSON, Azure and Large Datasets, Part 2 post of 8/20/2009:

Last Friday I posted some initial results from some simplistic testing I had done comparing pulling data from Azure via ATOM (the ADO.NET data services client) and JSON. I was surprised at the significant difference in payload and time to completion. A little later, Steve Marx questioned my methodology based on the fact that Azure storage doesn’t support JSON. Steve wasn’t being contrary, but rather pushing for clarification of the methodology of my testing, as well as expressing a desire to keep people from attempting to exploit the JSON interface of Azure storage when none exists. This post is a follow up to that one and attempts to clarify things a bit and highlight some expanded findings.

The platform I’m working against is an Azure account with a storage account hosting the data (Azure Tables), and a web role providing multiple interaction points to the data, as well as making the interaction point anonymous. Essentially, this web role serves as a “proxy” to the data and reformats it as necessary. After Steve’s question last week, I got to wondering particularly about the overhead (if any) the web role/proxy was introducing and if, esp[ecially] in the case of the ATOM data, it was drastically affecting the results. I also got to wondering if the delays I was experiencing in data transmission were, in some part, caused by the fact of having to issue 9 serial requests in order to retrieve the entire 8100 rows that satisfied my query.
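Some context on those nine serial requests: the Azure Table service returns at most 1,000 entities per query and signals the remainder with continuation headers, so 8,100 rows take nine round trips, and each one depends on the response to the one before it. Here’s a hedged sketch of that paging loop; the proxy URL is hypothetical, and it assumes Rob’s proxy passes the continuation headers through unchanged:

using System;
using System.Net;

class ContinuationPagingSketch
{
    static void Main()
    {
        // Hypothetical anonymous web-role proxy in front of Azure Table storage.
        const string baseUrl = "http://example.cloudapp.net/data?format=atom";
        string nextPartitionKey = null, nextRowKey = null;
        int requestCount = 0;

        do
        {
            string url = baseUrl;
            if (nextPartitionKey != null)
                url += "&NextPartitionKey=" + nextPartitionKey + "&NextRowKey=" + nextRowKey;

            var request = (HttpWebRequest)WebRequest.Create(url);
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                requestCount++;
                // The table service advertises more data with these headers; the next
                // page can't be requested until they arrive, which forces serial requests.
                nextPartitionKey = response.Headers["x-ms-continuation-NextPartitionKey"];
                nextRowKey = response.Headers["x-ms-continuation-NextRowKey"];
                // ... read and parse the response body here ...
            }
        } while (nextPartitionKey != null);

        Console.WriteLine("Total round trips: " + requestCount);
    }
}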

• Steve Lesem’s And I bet you thought Cloud Storage was just a utility computing model applied to storage... post of 8/20/2009 begins:

My last post on REST generated some attention. Since it is an important topic, I wanted to share some additional links for those who are trying to improve their understanding of REST.

Magnus Mårtensson continues his “blog series on using the Cloud Storage API (CloudStorage.API) by showing how it interacts with Cloud Blob Storage. As in the previous post [he uses] a short example focusing on basic usage.”

The implementation we use for the API is developed against Azure but the API should be reusable for any type of Cloud Storage. While doing this our main goals are to:

  • Enable testability
  • Abstract away storage
  • Create an extensible and easy-to-evolve application that supports good development practices

This is the second of three posts showing functionality of the API. The first explained how to use the Message Queue and the last one will explain the Entity Storage.

Here are other posts in this series:

Note: We are about to publish all of these samples on Azure Contrib. (Soonish…)
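To make the testability and abstraction goals concrete, an interface along the following lines lets application code bind to an abstraction that Azure blob storage backs in production and an in-memory fake backs in unit tests. The names are my own illustration, not the actual CloudStorage.API types:

using System.Collections.Generic;

// Hypothetical abstraction in the spirit of the CloudStorage.API goals:
// callers program against the interface, never against Azure directly.
public interface IBlobStore
{
    void Put(string containerName, string blobName, byte[] content);
    byte[] Get(string containerName, string blobName);
}

// The production implementation would wrap the Azure blob REST API or the
// StorageClient sample library; it is omitted here.

// The in-memory fake makes any code that depends on IBlobStore unit-testable
// without a storage account or network access.
public class InMemoryBlobStore : IBlobStore
{
    private readonly Dictionary<string, byte[]> blobs = new Dictionary<string, byte[]>();

    public void Put(string containerName, string blobName, byte[] content)
    {
        blobs[containerName + "/" + blobName] = content;
    }

    public byte[] Get(string containerName, string blobName)
    {
        byte[] content;
        return blobs.TryGetValue(containerName + "/" + blobName, out content) ? content : null;
    }
}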

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

Jeff Currier explains why the SQL Azure team is sending out batches of invitation tokens in his SQL Azure: Invite codes, traffic patterns and feedback post of 8/21/2009:

We've sent out the first couple batches of invite codes this week for SQL Azure (apologies if you haven't yet received yours).  It's important to keep in mind that in general services tend to do this kind of thing for good reason.  So, what are the reasons? There are many but these are the two that I look at the most:

1) Traffic patterns - How are the newly on-boarded people using the service?  Are they using it in the manner that we thought they would, or are they doing things we totally didn't expect?  Is there some constraint that we hadn't previously observed in our own in-house load/stress testing?  Thus far, this hasn't been the case for us :-)

2) Load - The second thing we're trying to understand is how many people that we send invite codes to are actually using the service (thereby increasing the load on the system).  People consume resources in the system, and ultimately there is a finite amount of them.  Therefore, this needs to be monitored carefully. 

So far, we're looking good in these departments, but my bet, if folks are like me, is that we will see an increase in both of these things this weekend as people get more play time with the service.

Jeff also notes that the SQL Azure team is seeking feedback on the portal experience and SQL Azure Database usage in the SQL Azure – Getting Started forum from those users (like me) who have received a token.

Zach Skyles Owens observes that the USE <database> statement is NOT supported in SQL Azure CTP1 in this 8/19/2009 post:

So, now that you’ve got a token and can connect to SQL Azure, you probably want to create a database and start using it.  There’s one thing that can cause a ton of heartache…  USE <database> is not supported!

The reason USE <database> isn’t support[ed] is that when you connect to one database you are essentially being tied to a particular server cluster via the SQL Azure TDS Gateway.  Your database[s] aren’t all on the same physical machine; therefore, you must specify the database when you connect.  Does that make sense?
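It does. The workaround is simply to name the target database in the connection string and open a separate connection per database rather than issuing USE. A minimal sketch; the server, database and credentials are placeholders:

using System;
using System.Data.SqlClient;

class SqlAzureConnectSketch
{
    static void Main()
    {
        // Specify the database up front; switching databases means opening a new connection.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;" +
            "Database=mydatabase;" +
            "User ID=myuser@myserver;" +
            "Password=myP@ssw0rd;" +
            "Trusted_Connection=False;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT DB_NAME();", connection))
        {
            connection.Open();
            Console.WriteLine("Connected to: " + command.ExecuteScalar());
        }
    }
}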

• Jeffrey Schwartz reports SQL Azure CTP Released But Wait in Line for A Token in this post to Redmond Developer News’ Data Driver blog. Jeff quotes me:

"They are being very hazy on how long it's going to take them for people who had SSDS [SQL Server Data Services] or SDS [SQL Data Services] accounts tokens for SQL Azure," Jennings said in an interview. In fact Microsoft's Zach Owens warned Jennings it could take up to two weeks to get tokens.

"Over the next week or two everyone who has already signed up for a SQL Azure Invitation Code should be receiving an email sent to the address associated with your Live ID containing the token and a link to redeem it," Owens wrote in a blog posting today. [See below.]

I received my token yesterday (Thursday, 8/20/2009) morning and have been running tests with SQL Server Management Studio (SSMS) against SQL Azure Database with variations on the classic Instnwind.sql script for several hours so far. I’ll update this section with a post later today (Friday, 8/21) or tomorrow with preliminary results.

Zach Skyles Owens says in his SQL Azure Invitation Codes post of 8/19/2009:

Over the next week or two everyone who has already signed up for a SQL Azure Invitation Code should be receiving an email sent to the address associated with your Live ID containing the token and a link to redeem it.  We understand that everyone would like their tokens yesterday but we need to work through the list and ramp up the service.

Once the list of current requests has been processed, new requests will be fulfilled within a day or two.

We are working on integrating the SQL Azure and Windows Azure provisioning experience.  We realize that it is very inconvenient to have to make requests for two different tokens from different places. 

What about customers who already have an account on the previous version of SQL Data Services/SQL Server Data Services which had an ACE model with a REST API?  When will they get tokens?  We will be providing all of those users with a token, but in the meantime I’d recommend that all of those users sign up for the CTP.

If you haven’t already done so, please sign up for the CTP today!
http://go.microsoft.com/fwlink/?LinkID=149681&clcid=0x09.

@gsdean and @abbott report in 8/19/2009 Tweets: “[Y]ou can't register for [A]zure in either [C]hrome or [S]afari” browsers.

Doug Hauger explains SQL Azure Database’s business side in Doug Hauger: Inside the Windows Azure Platform Business Model, a 15-minute Channel9 interview of 8/17/2009:

Meet Doug Hauger, Azure General Manager. Doug owns the business side of the Azure Platform equation. How was the pricing determined? Are there different plans for "garage innovators" versus large enterprise customers? What does it all really mean? Would we be able to finish the conversation in under 15 minutes? Of course, the complexity of the Azure business model would determine the time it takes to explain it (and the thinking behind it). Well, as you can see by the length of the interview, apparently the Azure people constructed a pricing model that is greatly simplified compared to some of our other business pricing models from years past. The overall simplicity of the plan is impressive.

The Data Platform Insider blog announces SQL Server StreamInsight and SQL Azure Database CTP Availability on 8/18/2009:

SQL Azure Database August CTP Available Today

Also available today is the first community technology preview of SQL Azure Database, a cloud-based relational database service built on Microsoft SQL Server technologies. With SQL Azure Database, you can easily provision and deploy relational database solutions to the cloud, and take advantage of a globally distributed data center that provides enterprise-class availability, scalability, and security with the benefits of built-in data protection, self-healing and disaster recovery. To register for the free trial, visit http://msdn.microsoft.com/en-us/sqlserver/dataservices/default.aspx. To learn more about SQL Azure, visit http://www.microsoft.com/azure/sql.mspx.

David Robinson of the SQL Azure Team has a more detailed announcement, Try SQL Azure Database CTP Today, which also has links to the CTP for the SQL Server Driver for PHP.

If you registered for SSDS or SDS, completed the survey(s), and received an invitation from the Microsoft Connect site, you’re already registered and can’t register again. According to a reply by Dave in the SQL Azure – Getting Started forum to Jamie Thomson’s Try SQL Azure Database CTP Today thread:

Connect is where you go to register. Since it is showing you as registered, just hang tight. The invites are on their way. Just be patient, it's going to take some time for the backlog to get processed.

You can get additional information about SQL Azure invitations here.

Repeated from Windows Azure and Cloud Computing Posts for 8/17/2009+ due to importance.

<Return to section navigation list> 

.NET Services: Access Control, Service Bus and Workflow

• Eric Chabow’s brief NIST Initiates Cryptographic Key Framework post of 8/19/2009 reports:

The National Institute of Standards and Technology issued Wednesday a draft summary report on its cryptographic key management workshop held June 8 and 9 at its headquarters in suburban Washington.

According to NIST, the Cryptographic Key Management (CKM) workshop was initiated by its Computer Security Division to identify and develop technologies that would allow organizations to leap ahead of normal development lifecycles to vastly improve the security of future sensitive and valuable computer applications.

• Eugenio Pace says Welcome to the Enterprise Line, our next stop will be Station #1: “SSO”. Mind the gap in this 8/19/2009 post and explains:

The themes for our first “enterprise” [claims-based authentication] scenario are:

  1. Intranet and extranet Web SSO
  2. Using claims for user profile information
  3. RBAC with Claims
  4. Single Sign Off
  5. Single company
  6. No federation

Variations in the scenario:

  1. Hosting on Windows Azure

Brent Stineman’s .NET Service Bus (Part 2 continued) – Hands on Queues post of 8/18/2009 continues his study of .NET Service Bus Queues:

If you read my last post, you may recall that I said that .NSB queues were more robust [than Windows Azure Queues] and I pointed out a few key differences. Well, I was understating things just a bit. They are actually significantly more robust and also require a few different techniques when dealing with them. In this update I will explore those differences in a bit more detail as well as cover the basics of sending and receiving messages via .NET Service Bus queues.

<Return to section navigation list> 

Live Windows Azure Apps, Tools and Test Harnesses

Elisa Flasko reports in her Announcing the PHP Toolkit for ADO.NET Data Services [proper case caps added] post of 8/21/2009:

This morning the Microsoft Interoperability team announced the release of a new project that bridges PHP and .NET: the PHP Toolkit for ADO.NET Data Services. The toolkit makes it easier for PHP developers to connect to and take advantage of services built using ADO.NET Data Services. The PHP Toolkit for ADO.NET Data Services is an open source project and is available today on CodePlex at phpdataservices.codeplex.com.

For an overview and quick demo of the toolkit check out the Channel9 video with Pablo Castro and Claudio Caldato, Senior Program Manager with the Interoperability Technical Strategy team.

For more information on the toolkit, check out the Interoperability Team’s blog post and phpdataservices.codeplex.com.

Not sure what was with Elisa’s lower case “php toolkit” and “ado.net data services” in the post’s title. Glad to see Pablo Castro posting again about Astoria. (His last preceding post to his personal blog was dated 3/13/2009.) Raju Phani piped in with his PhP and Astoria = PHASTORIA !! post on the same topic.

• Glenn Laffel, MD’s HHS a Lock to Certify EHRs post of 8/21/2009 to the Practice Fusion EHR Bloggers blog reports:

Last week, the Department of Health and Human Services announced it would almost certainly assume responsibility for deciding which electronic health record systems qualify for bonus payouts under Medicare, as called for by ARRA, the economic stimulus program that went into effect last winter.

The announcement amounted to a very bad day for the Certification Commission for Healthcare Information Technology (CCHIT), a private certification group that had monopolized the EHR certification process since its inception in 2004.

CCHIT has been under withering attack since last spring when it came to peoples’ attention that it was derived from a trade group that was run by members of the very EHR vendors that stood to gain most from the Federal largesse. …

See also Robert Rowley, MD’s post in the Cloud Security and Governance section. 

David Worthington reports PreEmptive's Dotfuscator instruments Azure applications in this 8/19/2009 post to SDTimes:

PreEmptive Solutions has updated its Dotfuscator .NET runtime intelligence product to work with Windows Azure. The company says that runtime intelligence is a useful way to gauge how much a hosted application will cost to run.

On July 14, the update became available for free to customers who have current license agreements. Dotfuscator instruments Azure application assemblies and allows signals to emerge at any endpoint, said Sebastian Holst, chief marketing officer for PreEmptive.

Dotfuscator can protect source code from being reverse-engineered, track application feature use, enable time limits on application use, and detect and defend against tampering. It can also stream alerts and runtime data to Web-based services.

Its feature- and session-tracking capabilities would be most useful to Azure developers, Holst said. "Organizations that are thinking about targeting Azure have a couple of questions front and center. Does it work the way that I expect it to? And how are bandwidth and storage behaving?" he said.

<Return to section navigation list> 

Windows Azure Infrastructure

Mary Jo Foley reports Microsoft to cut Live Framework and Services from its Azure cloud platform in this 8/21/2009 post to ZDNet’s All About Microsoft blog:

… Via an August 21  blog post by Corporate Vice President of Live Services David Treadwell, Microsoft officials shared the news that they are shifting gears. [According to a LiveSide post by Kip Kniskern of the same date,] Microsoft is rejiggering its Live Framework and Live Services platform, somehow making it part of the Windows Live Wave 4 set of consumer-focused Web services that is expected to go to testers in the coming months. (Microsoft is telling developers it will provide specifics about how it plans to integrate the Live Framework into Windows Live in the coming months.) …

Microsoft is telling testers that Live Mesh, its online synchronization and collaboration service, won’t be affected by the change — at least in terms of its availability to testers. Back at PDC, Microsoft officials positioned the Live Services platform as the underpinning for the Live Mesh developer stack… but it seems the two aren’t as tightly joined as Microsoft execs may have hinted.

Retaining Live Mesh within Windows Azure makes sense as it appears to be related to the forthcoming SQL Azure Data Hub service that’s based on the Microsoft Sync Framework.

According to Kip’s quotation from an Angus Logan e-mail message, SharePoint and CRM Services also have been removed from the Windows Azure Platform.

Dana Gardner posits ITIL 3 leads way in helping IT transform into mature business units via the 'reset economy' in this Briefings Direct podcast and transcription of 8/23/2009:

… To help unlock the secrets behind ITIL 3, and to debunk some confusion about ITIL v3, I recently gathered three experts on ITIL for a sponsored podcast discussion on how IT leaders can best leverage ITSM.

Please welcome David Cannon, co-author of the Service Operation Book for the latest version of ITIL, and an ITSM practice principal at HP; Stuart Rance, service management expert at HP, as well as co-author of ITIL Version 3 Glossary; and Ashley Hanna, business development manager at HP and also a co-author of ITIL Version 3 Glossary. …

The post also includes this link:

Free Offer: Get a complimentary copy of the new book, Cloud Computing For Dummies, courtesy of Hewlett-Packard (a $30 value) at www.hp.com/go/cloudpodcastoffer.

Lydia Leong is planning a Gartner Cloud IaaS adoption survey, according to her post of 8/23/2009. She says:

My colleagues and I are planning to field a survey about cloud computing adoption (specifically, infrastructure as a service), both to assess current attitudes towards cloud IaaS as well as ask people about their adoption plans. The target respondents for the survey will be IT buyers.

We have some questions that we know we want to ask (and that we know our clients, both end-users and vendors, are curious about), and some hypotheses that we want to test, but I’ll ask in this open forum, in an effort to try to ensure the survey is maximally useful: What are the cloud-adoption survey questions whose answers would cause you to change your cloud-related decision-making? (You can reply in a comment, send me email, or Twitter @cloudpundit.)

I expect survey data will help vendors alter their tactical priorities and may alter their strategic plans, and it may assist IT buyers in figuring out where they are relative to the “mainstream” plans (useful when talking to cautious business leadership worried about this newfangled cloud thing). …

Reuven Cohen’s A Public Cloud by Any Other Name is Private post of 8/22/2009 analyzes the interplay between Appirio’s corporate blog post “Rise and Fall of the Private Cloud” and the comments made by [Chris] Hoff in response, and concludes with another private cloud definition:

In a nutshell a private cloud is the attempt to build an Amazon or Google or even a Microsoft style web centric data center infrastructure in your own data center on your own equipment. For me, and the customers we typically deal with at Enomaly -- a private cloud is about applying the added benefits of elasticity, rapid scale (internal or external), resource efficiency and utilization flexibility that you gain by managing your infrastructure as a multi tenant service. So at the end of the day one person's public cloud is another's private cloud; it just depends on your point of view.

Steve Nagy’s Considerations for Migrating To Azure of 8/23/2009 opines about the types of applications (SOA, SaaS and S+S) for which Windows Azure is suited and the techniques for moving such applications to the Azure cloud.

Peter Mell and Tim Grance report that the Draft NIST Working Definition of Cloud Computing v15 (8/19/2009) document and Presentation on Effectively and Securely Using the Cloud Computing Paradigm v25 (8/12/2009) presentation updates are available from the NIST Web site:

NIST is posting its working definition of cloud computing that serves as a foundation for its upcoming publication on the topic (available below). Computer scientists at NIST developed this draft definition in collaboration with industry and government. It was developed as the foundation for a NIST special publication that will cover cloud architectures, security, and deployment strategies for the federal government.

NIST’s role in cloud computing is to promote the effective and secure use of the technology within government and industry by providing technical guidance and promoting standards. To learn more about NIST's cloud efforts, join the NIST cloud computing announcement mailing list (very low volume) by sending an email to "listproc@nist.gov" with "subscribe cloudlist" in the message body text.

This material is public domain although attribution to NIST is requested. It may be freely duplicated and translated.

Dion Hinchcliffe’s What Does Cloud Computing Actually Cost? An Analysis of the Top Vendors post of 8/22/2009 begins:

Earlier this week Amazon lowered the costs of their Elastic Compute Cloud (EC2) reserved instances for organizations prepared for 1 to 3 year terms of usage (and that properly understand their ceiling on steady-state compute capacity). This brings the cost of a continuously available cloud computing instance down by 30% to as little as 4.3 cents an hour (or a 3.0 cents/hour simple rate). …

Microsoft is a hair cheaper (by $0.005/hour) than Amazon for Windows instances for the time being. When Azure launches, Microsoft says it will offer subscriptions that "provide payment predictability and price discounts that reflect levels of usage commitment." At that point, Microsoft will likely be much cheaper than Amazon for Windows instances; the latter's commitment to Windows in the cloud seems fairly uncertain at the moment. On the other hand, you can't run Unix/Linux on Microsoft's cloud at all at the moment. Interestingly, both Amazon and Microsoft offer three and a half nines of service level availability (99.95%), making comparing the service prices even easier.
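The arithmetic behind Dion’s 4.3-cent figure is simple to reproduce. The sketch below assumes a US$350 three-year up-front fee and a $0.03/hour usage rate for a small instance, which is my reading of the post-cut price list, so verify the inputs against Amazon’s current pricing:

using System;

class ReservedInstanceMath
{
    static void Main()
    {
        // Assumed post-cut pricing for a small reserved instance (illustrative only).
        const double upFrontFee = 350.00;      // three-year reservation, paid once
        const double hourlyUsageRate = 0.03;   // charged per hour the instance runs
        const double termHours = 3 * 365 * 24; // 26,280 hours in the three-year term

        // Effective hourly cost if the instance runs continuously for the full term.
        double effectiveHourly = upFrontFee / termHours + hourlyUsageRate;
        Console.WriteLine("Effective rate: {0:F4} dollars/hour", effectiveHourly); // ~0.0433
    }
}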

To get a 99.95% uptime SLA for Azure, you must run two compute instances. Dion provides “[Five] Lessons From Today's Cloud Computing Value Propositions” and concludes:

I'm hoping to cover additional cloud computing providers and their pricing/value propositions more here this fall to round out our understanding of the competitive advantages that their cost models can provide businesses. In the meantime, expect few changes from the big players until after Azure launches and limited ones from the smaller players, which often use services such as EC2 to provide their own offerings. As Andrew McAfee recently pointed out [in his “The Cloudy Future of Corporate IT” essay of 8/21/2009], the transition to new economic models is long but usually inevitable and cloud computing is no exception.

McAfee’s essay is a great read, as are some of the comments. “McAfee is currently a principal research scientist at the Center for Digital Business in the MIT Sloan School of Management, and a fellow at Harvard’s Berkman Center for Internet and Society.”

 Rich Breuckner reports “Those guys over at Oracle have just published a white paper on Architectural Strategies for Cloud Computing” in a Sun HPC Watercooler blog post of 8/22/2009:

For IT departments in larger enterprises, developing a private cloud often makes the most financial and business sense. When developing the architectural vision, an enterprise architect should bear in mind the characteristics of cloud computing as well as consider some of the organizational and cultural issues that might become obstacles to the adoption of the future architecture.

When moving ahead, decisions must be made on whether the future-state technical architecture should emphasize compatibility with the current standard or start from scratch to minimize cost. Future state systems architecture designs involve trade-offs between lower cost/operational efficiency and greater flexibility. Using an Enterprise Architecture framework can help enterprise architects navigate these trade-offs and design a system that accomplishes the business goal. Download White Paper

Steve Marx claims Clouds are Always Moving: A Glimpse Inside Windows Azure Planning in this 8/21/2009 post:

In the world of shrink-wrapped products, new releases are shipped on a regular basis, and customers need to wait until the new version comes out to benefit from improvements.  A service like Windows Azure is a completely different world.  Our service is continuously improving, and our users benefit immediately from the work that we do.  You could say that we ship every day, or you could say that we never “ship” at all.

This has implications on the way we plan the future evolution of Windows Azure.  A process that works for a version of a product isn’t necessarily the right process for planning a service with no discrete releases.  There are three facts about Windows Azure that in my opinion have to shape our planning process:

  1. Quality is our number one priority.  Given the choice between delivering a new feature or maintaining the high quality of our service, we’ll always choose quality.  Sometimes this introduces uncertainty in our plans (particularly around schedules).
  2. Feedback shapes our product.  Because we’re constantly making improvements and adding functionality, we’re constantly getting new feedback.  This in turn helps us refine our plans.
  3. Agility is critical.  Competition changes, technology changes, the economy changes, etc., all in real-time.  We need to continuously and rapidly adapt to these changing circumstances.

(By the way, another interesting effect of the above is that we tend to talk less than a lot of product teams about specific future features.  We try to only talk about those specifics that you need to know because they might affect your plans.)

My take is that the team should talk more than other teams about specific future features, especially for SQL Azure, because SADB v1 has serious limitations. One serious question is whether the 10-GB size limit on databases will be lifted and, if so, when? Another issue is the status of full-text search, which was promised for SSDS and SDS.

• Randy Bias’ Subscription Modeling & Cloud Performance essay of 8/21/2009 covers oversubscription of cloud-computing resources:

An infrequently talked about, but very important aspect of cloud computing performance is ‘oversubscription’.  Oversubscribing is the act of selling more resources than you actually have to customers [under] the assumption that the average usage will be equal to or less than the actual resources on hand.  This is the de facto practice within the hosting and service provider market and has been from the get-go.

For example, typically most Internet Service Providers (ISP) oversubscribe their backbone bandwidth.  You won’t notice because most of the time you’ll get your full bandwidth.  Why?  Because even during peak traffic times not everyone is using the network.  Perhaps many or most of the folks in a given area might at any one time and service providers try to develop a subscription model that is as efficient as possible, giving most users their full bandwidth, most of the time, while allowing the provider to keep down their costs.

What happens when a service provider is wrong?  Ever been to some kind of major event where a huge number of people converge in a very small area?  Have you been unable to get or receive cell phone calls?  That’s what happens.  Even telecommunications companies and wireless providers oversubscribe.

• David Linthicum asks Are Consultants Killing Cloud Computing? in this 8/20/2009 post to InformationWeek:

It's clear that hype-driven cloud computing translates into dollars given to consultants who promise to lead enterprises to the Promised Land of "as-a-service." The coordinates being set by some consultants could lead enterprises to the wrong clouds with the wrong applications, and cost enterprises millions more than expected with no savings and increased risk.

So, what are they doing wrong? The key issues include:

  • Following the hype.
  • Picking the wrong battles.
  • Not considering the business.
  • Being a bit too chummy with providers.

• Foon Rhee reports Biden announces medical record grants in the amount of $1.2 billion from the economic stimulus package to “accelerate the use of electronic health records” (EHR) in this 8/20/2009 article for the Boston Globe. Rhee continues:

The grants include $598 million to establish 70 Health Information Technology Regional Extension Centers, which will provide hospitals and doctors with hands-on technical assistance, and $564 million to states and agencies to support information sharing with a nationwide system of networks.

• Brett Winterford’s Stress tests rain on Amazon's cloud post of 8/20/2009 says “Availability an issue for Amazon EC2, Google AppEngine and Microsoft Azure.” The post reports that:

Stress tests conducted by Sydney-based researchers have revealed that the infrastructure-on-demand services offered by Amazon, Google and Microsoft suffer from regular performance and availability issues.

The team of researchers, led by the University of New South Wales (UNSW) and in collaboration with researchers at NICTA (National ICT Australia) and the Smart Services Cooperative Research Centre (CRC), have spent seven months stress testing Amazon's EC2, Google's AppEngine and Microsoft's Azure cloud computing services.

The analysis simulated 2000 concurrent users connecting to services from each of the three providers, with researchers measuring response times and other performance metrics. …

Liu will present the findings and offer developers advice on how to build robust applications to withstand the cloud's limitations at the Australian Architecture Forum in Sydney on Monday, August 24.

• David Linthicum analyzes Liu’s stress tests in his Which cloud platforms deliver reliable service? post of 8/21/2009 to InfoWorld’s Cloud Computing blog: “New stress-testing results for Amazon, Google, and Microsoft show uneven performance -- and surprising optimization for some tasks.”

… The researchers created stress tests that simulated 2,000 concurrent users connected to applications hosted on the Amazon EC2, Google AppEngine, and Microsoft Azure cloud computing platforms. As always with these types of tests, there is some good news and some bad news:

  • The good news is that the testing did confirm that these cloud computing platforms were able to scale as needed and responded dynamically to an increasing demand load. In essence, when the demand increased, the cloud computing systems dynamically provided the additional capacity required to support the demand.
  • The bad news is that performance varied greatly. Indeed, according to the researchers, response times during the tests differed by a factor of 20, depending on the time of day the testing occurred. This is consistent with my experience and is perhaps due to the fact that multitenant, on-demand infrastructures are, well, multitenant, thus serving many users simultaneously, the number of which rises and falls during the day. …

• Lori MacVittie cautions “There are three "gotchas" associated with deploying applications into a load balanced environment” in her Coding In the Cloud article of 8/14/2009 in Dr. Dobbs Journal. She continues:

While there's a lot of talk about cloud computing these days -- why you should use it, why you shouldn't, and so on -- there's little discussion on the ramifications of cloud computing and really any on-demand infrastructure on application development. So before you jump into the cloud with both feet, it's important to understand how an on-demand environment like cloud computing affects the way applications execute because there are some gotchas that can come back to haunt you after deployment that don't appear until after the application is deployed. …

• Cyrus Golkar’s The Cloud Computing Tsunami: Gartner Predictions - Efficiency and Cost Control Will Transform the IT Industry post of 8/19/2009 relies on Gartner predictions:

I start with listing 4 of the Gartner top 10 IT predictions for the next three to five years for cloud computing, software as service (SaaS), data center power/cooling efficiency and open source software. All of these predictions indicate that data center efficiency and cost containment will transform the IT industry over the next 5 years.

Key Gartner predictions for the data center for the next 5 years:

  • By 2011, early technology adopters will forgo capital expenditures and instead purchase 40 percent of their IT infrastructure as a service.
  • By 2012, at least one-third of business application software spending will be as service subscription instead of as product license.
  • By 2009, more than one-third of IT organizations will have one or more environmental criteria in their top six buying criteria for IT-related goods. Initially, the motivation will come from the wish to contain costs. Enterprise data centers are struggling to keep pace with the increasing power requirements of their infrastructures.
  • By 2012, 80 percent of all commercial software will include elements of open source technology.

• Lori MacVittie dispels The Myth of 100% IT Efficiency and claims “Idle resources will always need to exist, especially in a cloud architecture” in this 8/19/2009 post:

With IT focused on efficiency – for reduction in operating expenses and in the interests of creating a greener computing center – there’s a devil danger that we’ll attempt to achieve 100% efficiency. You know, the data center in which no compute resources are wasted; all are applied toward performing some task – whether administrative, revenue generating, development cycles, or business-related – and no machine is allowed to sit around idle.

Because, after all, idleness is the devil’s playground, isn’t it?

Steve Yi explains Partnering With Azure Services in this one-hour session from Microsoft’s Worldwide Partners Conference (WPC) 2009:

Attend this session to see how partners can take advantage of the Azure services to help advance their business. You will have the opportunity to learn about the basic partnering models, how partners can make money, scenarios by partner type and hear from partners about their experiences in working with Windows Azure, SQL Azure, and .NET Services.

Disregard the above title conflict.

Lydia Leong says Gartner’s client inquiries about cloud computing are increasing in her Cloudy inquiry trends post of 8/19/2009:

… With the economy picking up a bit, and businesses starting to return to growth initiatives rather than just cost optimization, and the approach of the budget season, the flow of client inquiry around cloud strategy has accelerated dramatically, to the point where cloud inquiries are becoming the overwhelming majority of my inquiries. Even my colocation and data center leasing inquiries are frequently taking on a cloud flavor, i.e., “How long more should we plan to have this data center, rather than just putting everything in the cloud?”

Organizations have really absorbed the hype — they genuinely believe that shortly, the cloud will solve all of their infrastructure issues. Sometimes, they’ve even made promises to executive management that this will be the case. Unfortunately, in the short term (i.e., for 2010 and 2011 planning), this isn’t going to be the case for your typical mid-size and enterprise business. There’s just too much legacy burden. Also, traditional software licensing schemes simply don’t work in this brave new world of elastic capacity.

The enthusiasm, though, is vast, which means that there are tremendous opportunities out there, and I think it’s both entirely safe and mainstream to run cloud infrastructure pilot projects right now, including large-scale, mission-critical, production infrastructure pilots for a particular business need (as opposed to deciding to move your whole data center into the cloud, which is still bleeding-edge adopter stuff). Indeed, I think there’s a significant untapped potential for tools that ease this transition. (Certainly there are any number of outsourcers and consultants who would love to charge you vast amounts of money to help you migrate.) …

Devan Sabaratnam’s The Challenge of Offline SaaS Revisited post of 8/19/2009 begins:

Paul Michaud wrote a post here a couple of days ago on the challenge of allowing offline usage in a SaaS-based system.  In his comprehensive discussion, he used the example of a SaaS-based contact management system and the complexities involved in allowing users to take their data, manipulate it offline, and then synchronise it all back up to the Cloud again.

A challenge it certainly is, even for a (dare I say it) simple system such as contact management.  This reminds me of a long and convoluted discussion we had amongst our development team and clients here last year over trying to create a Cloud based ERP system which allowed offline access.  Here was our 'perfect world' scenario:

The entire company would run their accounting system in 'the Cloud', with users scattered across various locations being able to use a web browser on any sort of PC or operating system to update customers, suppliers, stock and invoices.  In addition, field sales agents could download a copy of the inventory database to their laptops, or even smartphones, and be able to create invoices for customers whilst sitting in their offices or warehouses.  No need for expensive wireless or roaming data connections.

Sounds simple?  We thought so too, initially. 

One of Devan’s problems is that no legitimate business allows salesmen to issue invoices when placing sales orders. Commercial practice is to issue invoices only for items shipped to the customer.

Pat Helland, who has worked for Microsoft twice and Amazon once, has written and presented at length about guessing and partial knowledge for years. His Memories, Guesses, and Apologies post of 5/15/2007 is typical of his methods of dealing with uncertainty and concurrency conflicts in order processing.

David Linthicum claims “Nick Carr is wrong: There will be no 'big switch.' Cloud computing will instead phase in over time” in his The cloud computing revolution will not be televised post of 8/19/2009 to InfoWorld’s Cloud Computing blog:

People are asking me a lot these days about the impending "death of the datacenter" as predicted by those looking at cloud computing. Driving this notion are books such as Nick Carr's "The Big Switch," which I addressed in a previous blog post, that claim the movement to cloud computing will be sudden. Sorry, but that revolution won't happen. We don't make any sudden movements in this business.

I believe the shift away from the traditional datacenter model will be more like the proverbial frog in the boiling water. That is, if a frog is placed in boiling water, it will jump out, but if it is placed in cold water that is slowly heated, it will not perceive the danger and will be cooked to death. The use of cloud computing will come into the enterprise over time, and while the datacenter will indeed change, the gradual move to cloud computing won't be a "big switch."

Jay Fry’s Isn't IT automation inherently evil? (I mean, you saw 'The Terminator,' right?) post of 8/18/2009 casts an approving eye on autonomic computing:

While the general take on CloudWorld in San Francisco last week may have been that it was merely a shadow of what industry attendees were expecting, at least one presentation seems to have registered on the "worthy-of-discussion" meter. Lew Tucker from Sun was written up by Larry Dignan of ZDNet, Reuven Cohen, The Register, and others, for Lew's commentary on self-provisioning applications and "future cloud apps that won't need humans."

A couple things strike me about this "humanless computing" (as Reuven put it): first, whether people really think it through or not, this kind of automation is absolutely required for cloud computing. The types of dynamic infrastructures that businesses are hoping to get from the cloud just can't have a human in the minute-by-minute IT operations loop. (See also: human telephone switch operators.)

Automation is the key to the future success of the Windows Azure Platform and SQL Azure Database.

<Return to section navigation list> 

Cloud Security and Governance

• Robert Rowley, MD answers Is my data safe with Practice Fusion? in this 8/20/2009 post:

We recently reviewed the question of putting medical data in the Internet “cloud” from the standpoint of safety (guarding against loss of data), and of security (guarding against theft of data). The discussion was a general overview of the issues involved in paper vs. local EHR deployment vs. hosting in the Internet “cloud” – but what about Practice Fusion? How safe is my medical data on that platform?

As noted in the previous posts, medical information (specifically, Protected Health Information, or PHI – which is subject to HIPAA Privacy Rules) in a paper-based environment is the least safe and secure. Local disasters can lead to wholesale, irretrievable loss (like a building fire, hurricane, etc), and individual charts can be lost or looked at inappropriately with relative ease. Office policy is supposed to be in place to address these concerns, but in reality the implementation of this is hit-and-miss across the landscape. …

As an aside, I regularly receive paper PHI intended for another Roger Jennings who lives in the San Francisco Bay Area.

• Bruce Guptill claims Data Breaches Belie Security Concerns Regarding On-Premise vs. SaaS and Cloud in this 8/19/2009 Saugatuck Research Alert:

User executives continue to question the security of SaaS solutions, while massive security breaches continue to be suffered by significant on-premise user data systems. Yet to date (to our knowledge), not one SaaS vendor has reported any significant security breach or data loss.

Over the four years that Saugatuck has been executing its annual survey of user executives regarding SaaS adoption, the number-one concern cited regarding SaaS has been data security and privacy.

As shown by Bruce’s “User Executives’ Top SaaS Concerns, 2006 – 2009” table. [Site registration required.]

• Ben Kepes’ SaaS Certainty – Escrow is the Answer post of 8/19/2009 reports:

I got an email the other day from Escrow Associates, a provider of software escrow services that has just today announced the release of their SaaS software escrow agreement. For those not accustomed to escrow services, they are a contract whereby the IP of a product is held by a third party and is released to the counterparty in the contract if the owner of the IP goes out of business or cancels the product – in other words, they guarantee ownership of data – be it user data, application data or integration data. …

Rich Miller analyzes PCI-DSS compliance for applications running on Amazon EC2 with S3 storage in his A PCI-Compliant Cloud? Not at Amazon post of 8/19/2009:

There’s an ongoing debate about the ability of cloud computing services to meet enterprise regulatory compliance requirements, including the Payment Card Industry Data Security Standard (PCI DSS) standard that is essential for e-commerce. Martin McKeay at the Network Security Blog recently highlighted the admission by one of the most popular cloud services, Amazon Web Services, that it does not support the highest levels of PCI compliance.

“From a compliance and risk management perspective, we recommend that you do not store sensitive credit card payment information in our EC2/S3 system because it is not inherently PCI level 1 compliant,” an Amazon representative told a customer in an exchange that was posted on an AWS web forum. A key issue is that PCI auditors are unable to inspect Amazon’s data centers. …

Rich’s conclusions are similar to Lori MacVittie’s in her Amazon Compliance Confession About Customers, Not Itself post of 8/18/2009.

James Urquhart’s Two cloud standardization efforts made public post of 8/19/2009 points to several groups claiming standards status for cloud computing:

The last several days have seen two standardization-related events that I think are worthy of note. Standardization, of course, is a critical element to creating fluid markets for compute, development and application services in the cloud. There are several efforts already underway, including the Distributed Management Task Force (DMTF) Open Cloud Standards Incubator, the Open Grid Forum's Open Cloud Computing Interface working group, and the Storage Network Industry Association Cloud Storage Technical Work Group. A great resource to see the spectrum of cloud standards activity can be found at the OMG's cloud-standards.org WIKI.

The standardization effort that was officially announced this week is already listed on that WIKI: the Open Group Cloud Work Group. …

The other effort that made itself public this week was Chris Hoff's call for participants in developing a standard for security assessment and management.

Krishnan’s post below spells out Chris Hoff’s current A6 activities.

Krishnan Subramanian says A6 Workgroup On The Way Soon in this 8/19/2009 post:

Sometime in late June, I wrote a post about a dilemma faced by customers of Cloud based services and pointed to an elegant solution by Craig Balding of CloudSecurity.org. Unlike traditional on-premise hosting, where it is possible to conduct external vulnerability scanning to check the security state of the computing environment, Cloud based on-demand hosting poses a unique dilemma due to the multi-tenant nature of the Clouds.

“In the world of Cloud Computing, multi-tenancy is fast becoming the important keyword. In this new era of shared resources, businesses are increasingly facing the dilemma of not being able to do the scan. They can't do the penetration testing to make sure that their environment is secure. In fact, most of the Cloud providers explicitly prohibit such a scan through their terms because any scan employed by a business has the potential to disrupt the services for other customers.”

In the same post, I also pointed out an elegant solution by Craig where he suggested an API call offered by cloud providers that a customer can call with parameters for conveying source IP address(es) that will perform the scanning, and optionally a subset of their Cloud hosted IP addresses, scan start time and/or duration.

Christofer Hoff, building on Craig Balding's idea [in Follow-On: The Audit, Assertion, Assessment, and Assurance API (A6)], suggested that such an API, along with the vulnerability scanning, should provide a standardized way to do configuration management, asset management, patch remediation, compliance, etc. He concluded that:

“This way you win two ways: automated audit and security management capability for the customer/consumer and a streamlined, cost effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.”

He named it "The Audit, Assertion, Assessment, and Assurance API (A6)" taking some help from his colleagues on Twitter. Slowly, the idea gained some traction. He, along with a few other security gurus including Ben from Canada, started working on a RESTful interface for accessing the A6 API.

Something like the A6 API might be the only hope for PCI-DSS Level 1 compliance in the cloud.
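Because A6 is still only a proposal, any concrete interface is speculative. Purely as a hypothetical illustration of the kind of RESTful call Craig and Hoff describe, a customer might register an upcoming scan window with its provider along these lines; the endpoint, resource name and fields are all invented:

using System;
using System.IO;
using System.Net;
using System.Text;

class A6ScanRequestSketch
{
    static void Main()
    {
        // Entirely hypothetical A6-style request: the customer declares which source
        // addresses will perform vulnerability scanning, against which hosted addresses,
        // and when, so the provider can tell an authorized scan from an attack.
        string body =
            "<ScanWindow>" +
            "  <SourceAddresses>203.0.113.10,203.0.113.11</SourceAddresses>" +
            "  <TargetAddresses>10.0.0.0/28</TargetAddresses>" +
            "  <Start>2009-08-25T02:00:00Z</Start>" +
            "  <DurationMinutes>120</DurationMinutes>" +
            "</ScanWindow>";

        var request = (HttpWebRequest)WebRequest.Create(
            "https://provider.example.com/a6/scan-windows");
        request.Method = "POST";
        request.ContentType = "application/xml";

        byte[] payload = Encoding.UTF8.GetBytes(body);
        request.ContentLength = payload.Length;
        using (Stream requestStream = request.GetRequestStream())
            requestStream.Write(payload, 0, payload.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Provider acknowledged scan window: " + response.StatusCode);
    }
}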

Michael Cooney’s FTC's electronic health record breach rule sparks debate post of 8/18/2009 to NetworkWorld’s Layer 8 blog asks “Protecting healthcare privacy at core of FTC rules but are they enough?”

Trying to get a handle on what most certainly will be an explosion of digitization of medical records, the Federal Trade Commission today issued the final rules requiring "certain Web-based businesses to notify consumers" when the security of their electronic health information is breached.

But are the rules meaty enough or will they merely offer more fuel to the already burning healthcare fire?

First, let's understand what's happening.  Congress this spring told the FTC to issue the breach rule as part of the American Recovery and Reinvestment Act of 2009. The rule applies to vendors of personal health records - which provide online repositories that people can use to keep track of their health information - and entities that offer third-party applications for personal health records. …

The proposed notification rules are similar to California’s five-year-old SB 1386 (Cal. Civ. Code 1798.82 and 1798.29) for payment-card privacy breaches. According to the National Conference of State Legislatures, “Forty-five states, the District of Columbia, Puerto Rico and the Virgin Islands have enacted legislation requiring notification of security breaches involving personal information.” Electronic medical records, in general, are covered by federal HIPAA privacy rules.

Chris Hoff (@Beaker) comments On Appirio’s Prediction: The Rise & Fall Of Private Clouds in this 8/18/2009 post:

I was invited to add my comments to Appirio’s corporate blog in response to my opinions of their 2009 prediction “Rise and Fall of the Private Cloud,” but as I mentioned in kind on Twitter, debating a corporate talking point on a company’s blog is like watch[ing] two monkeys trying to screw a football; it’s messy and nobody wins.

However, in light of the fact that I’ve been preaching about the realities of phased adoption of Cloud — with Private Cloud being a necessary step — I thought I’d add my $0.02.  Of course, I’m doing so while on vacation, sitting on an ancient lava flow with my feet in the ocean in Hawaii, so it’s likely to be tropical in nature.

<Return to section navigation list> 

Cloud Computing Events

Steve Marx announces the Windows Azure World Tour 2009 in his 8/21/2009 post, which links to The Hub:

Cloud computing looks like the biggest change to hit our industry in many years. The advent of cheap, scalable computing power available over the Internet will affect almost everybody who works in IT. But taking advantage of this shift requires understanding this new approach and how to exploit it.

In this session aimed at decision makers, David Chappell looks at the Windows Azure platform and what it means for ISVs, custom software development firms and enterprises. The topics he’ll cover include:

  • An overview of the Windows Azure platform: Technology and business model
  • The cloud platform context: Google, Amazon, Salesforce.com, and more
  • Using the Windows Azure platform: Application scenarios

When: 9/18 through 10/21/2009   
Where: Boston, Chicago, Reston, Dallas, Mountain View, London, Munich, Paris, Tokyo, Bangalore, and Mumbai.

• David Pallman reports that the O.C. Azure User Group August 2009 Meeting on Migration to the Cloud will occur on 8/27/2009 from 6:00 to 8:00 PM at Quickstart Intelligence:

At the August meeting we'll be looking at how to migrate Enterprise applications over to the Azure cloud computing platform. We'll discuss and show what's involved in moving web sites, web services, databases, and security to the cloud. We'll also discuss the business aspects of migration to the cloud including ROI.

As usual, we'll also have pizza, beverages, and give-aways.

RSVP at https://www.clicktoattend.com/invitation.aspx?code=140025

When: 8/27/2009 from 6:00 to 8:00 PM    
Where: QuickStart Intelligence, 16815 Von Karman Ave., Suite 100, Irvine, CA 92606, USA

Steve Marx announces Real World Azure: Coming to a City Near You! in this 8/20/2009 post:

[Real World Azure tour banner image listing the scheduled cities]

Windows Azure’s hitting the road throughout [the] central US.  If you’re near one of the above cities, check out the full tour schedule.  In each city, there’s a half-day event in the morning presented by TechNet for IT professionals and a half-day event in the afternoon presented by MSDN for developers and architects.

• Chris Auld will present Architecting and Developing for Windows Azure as a PDC 2009 preconference workshop on 11/16/2009:

The workshop will focus on equipping attendees with the skills to architect and develop real world applications using Windows Azure. Going beyond ‘demo-ware’ we will examine the theory and technical implementation of large scale elastic applications. It is expected that attendees will have some prior experience with Windows Azure and the Azure Services Training Kit is a recommended pre-requisite.

During this full-day workshop, we will discuss approaches to delivering the best raw performance from our Windows Azure applications, and how to achieve linear scale-out through the use of additional instances. We will also discuss data management approaches using Windows Azure and SQL Azure’s partitioning capabilities. Lastly, we will examine patterns for deploying Windows Azure applications reliably and with minimal or no impact on the end user experience, and the security environment within which Windows Azure operates, along with ways to provide a bridge between on-premises and cloud based identity assets and applications.

Registration is US$395 with PDC registration, or US$495 for workshops only.

When: 11/16/2009   
Where: Los Angeles Convention Center (PDC 2009), Los Angeles, CA, USA

• Robert Hess interviews Chris Auld for Channel9 about his PDC workshop presentation in Chris Auld - PDC09 Architecting and Developing for Windows Azure of 8/19/2009 [see above.]

Cloudera announces the Hadoop World: NYC conference to be held in New York City on 10/2/2009.

Preliminary Agenda: Hadoop is Everywhere

While we are still working out a few details, we are happy to share the following tentative agenda. We got so many submissions, we had to break the afternoon out into three tracks, and we still had to turn down some talks with a lot of potential. Please stay tuned for Keynotes, schedule details, and additional breakouts / sessions.

Hadoop Training: Developers, Administrators and Managers

We will be offering a full schedule of Hadoop training prior to the event with separate tracks for Developers (3 days), Administrators (1 day), and Managers (1 day). …

Sam Dean’s Cloudera Announces Hadoop World, and Hadoop Marches On post of 8/19/2009 includes more details about Cloudera and the conference.

When: 10/2/2009   
Where: Roosevelt Hotel, New York, NY, USA (venue updated 8/20/2009)

For more details about Cloudera and Hadoop, see the Other Cloud Computing Platforms and Services section.

Nasscom Events announces that EmergeOut Conclave will have “Cloud Computing - A wave of opportunities for SMEs” as its theme when it opens 8/28/2009 at the Meridien Hotel in New Delhi, India:

Keynote: The Future of Software as a Service (SaaS) and Cloud Computing

  • How SaaS and Cloud Computing are changing the IT industry and the opportunities they present
  • India market and the SaaS/Cloud Computing landscape
  • Some case studies of companies that have adopted SaaS
  • Demystifying SaaS, PaaS, IaaS business
  • Small is BIG: Re-inventing business models which will work for India
  • VAS leveraging the SaaS platform
  • Showcasing the NASSCOM EMERGE 50

Valedictory session: The Cloud with the SILVER lining

You can register here. Registration ranges from US$45 to $55 (INR 2,000 to 2,500), depending on Nasscom or TiE membership.

When: 8/28/2009   
Where: Meridien Hotel, New Delhi, India

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Bruce Eric investigates Dell's New Cloud Evangelist Barton George - What Does It Mean? in this 8/21/2009 post:

You may have seen that Dell has hired a cloud evangelist -- Barton George. As is often the case in a large organization, I learned about his joining the company on Twitter and on Barton's blog announcing his move. I'm thrilled to have Barton join the Dell team, and he's already shown that he is prolific and understands the dynamics of communicating through a blog. That Dell has hired a Cloud Evangelist -- what does this mean? It means we've hired a cloud evangelist. That's all I can say. …

In his first interview, Barton talks to James Urquhart, who gave a talk at [CloudWorld] on "Virtualization to Cloud." In his second interview, he talks with Brian Aker, lead architect for Drizzle, about the use of MySQL for the Web. He's got a slew of other interviews that you'll see next week on his blog (I've encouraged him to keep his blog alive on WordPress, and we'll soon be adding it to our blogroll), and you'll also be able to read his content here on Inside Enterprise IT.

Lydia Leong’s The Magic Quadrant, Amazon, and confusion post of 8/20/2009 begins:

Despite my previous clarifying commentary on the Magic Quadrant for Web Hosting and Cloud Infrastructure Services (On Demand), posted when the MQ was published, and the text of the MQ itself, there continues to be confusion around the positioning of the vendors in the MQ. This is an attempt to clarify, in brief.

This MQ is not a pure cloud computing MQ. It is a hosting MQ. Titling it as such, and making it such, is not some feeble attempt to defend the traditional way of doing things. It is designed to help Gartner’s clients select a Web hoster, and it’s focused upon the things that enterprises care about. Today, our clients consider cloud players as well as traditional players during the selection process. Cloud has been highly disruptive to the hosting industry, introducing a pile of new entrants, revitalizing minor players and lifting them to a new level, and forcing successful traditional players to revise their approach to the business. [Emphasis Lydia’s.]

She then goes on to answer “The most common question asked by outsiders who just look at the chart and nothing more is, ‘Why doesn’t Amazon score higher on vision and execution?’”

• Jeff Barr announces Lower Pricing for Amazon EC2 Reserved Instances in this 8/19/2009 post to the Amazon Web Services (AWS) blog:

Given the many ways that our customers have already put them to use, I am happy to tell you that we’ve lowered the prices for newly purchased Amazon EC2 Reserved Instances! On a three-year term, you can now get an m1.small instance for an effective hourly rate of just $0.043 (4.3 cents). The new pricing is now in effect.
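
That “effective hourly rate” blends the one-time Reserved Instance fee, amortized over the term, with the per-hour usage charge. Here is a minimal sketch of the arithmetic in C#, assuming for illustration a three-year up-front fee of about US$350 and a US$0.03/hour usage charge for an m1.small Linux instance (the actual figures are on the AWS pricing page):

using System;

class ReservedInstanceMath
{
    static void Main()
    {
        // Illustrative figures only -- substitute the actual fees from the AWS pricing page.
        const double upFrontFee = 350.00;         // assumed three-year Reserved Instance fee (US$)
        const double usageRate  = 0.03;           // assumed per-hour usage charge (US$)
        const double termHours  = 3 * 365 * 24;   // hours in a three-year term

        // Effective hourly rate = amortized up-front fee + per-hour usage charge
        double effectiveRate = upFrontFee / termHours + usageRate;

        Console.WriteLine("Effective hourly rate: ${0:F4}", effectiveRate); // prints roughly $0.0433
    }
}

With those assumed numbers, the amortized fee comes to about $0.013 per hour, which plus the $0.03 usage charge lands at the quoted $0.043.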

If the Windows Azure team continues to consider AWS its primary competitor, it will need to rethink Azure “subscription” pricing to remain competitive.

• Kevin Jackson says in his US Navy Experiments With Secure Cloud Computing post of 8/20/2009:

This week in San Diego, CA, the US Navy held the initial planning conference for Trident Warrior '10. The Trident Warrior series is the premier annual FORCEnet Sea Trial Event sponsored by Naval Network Warfare Command (NETWARCOM). FORCEnet’s experimental results are incorporated into a definitive technical report used to develop Military Utility Assessment (MUA) recommendations. This report is provided to the Sea Trial Executive Steering Group (STESG) for consideration and acquisition recommendations. …

Working with Amazon Web Services and Security First Corporation, the Dataline-led team will explore the ability of cloud computing technologies to support humanitarian assistance and disaster relief military missions. As currently planned, the test scenario will simulate the secure use of a cloud-based collaboration environment. Both synchronous and asynchronous collaboration technologies will be leveraged. Information and data access among multiple operational groups will be dynamically managed based on simulated ad-hoc mission requirements. …

Doug Gourlay lands the Vice President of Marketing position at Arista Networks, according to his A Smaller Company, a Big Job, a Great Opportunity post of 8/19/2009:

As many people guessed on the poll that … ran in early July on NetworkWorld, I could not relax for too long -- today I started my new job as the vice president of marketing at Arista Networks. In the spirit of any good marketing executive, I prepared a bit of a messaging doc and FAQ sheet to help me out in case I get questions from industry press, friends, my favorite bloggers, or the legions of Twitter-bots that follow my life's twists and turns...

Arista’s Jayshree Ullal says in her Arista Networks Welcomes New Vice Presidents post of the same date:

As VP of Marketing, Doug is responsible for Arista’s market leadership in 10GbE and Cloud Networking. Doug is an industry luminary in the field of networking and data centers. He is indeed a talented and strategic executive. I have had the pleasure of working with him as a friend, mentor and boss for a decade. His rare combination of skills in technology, customer and strategic marketing was instrumental in driving several key Cisco initiatives. Doug has a keen aptitude for taking complex topics and deconstructing them for customer and mass consumption. As an industry professional, he is an articulate blogger and a savvy speaker.

Savio Rodrigues asks if Cloudera is to Hadoop as Kleenex is to facial tissues? in this 8/19/2009 post to InfoWorld’s Cloud Computing blog:

Organizing the inaugural Hadoop World Conference and delivering Hadoop training could position Cloudera as the go-to Hadoop company.

Cloudera is making a credible play to become the commercial brand associated with Apache Hadoop. Not only did Hadoop founder Doug Cutting recently join Cloudera from Yahoo, but Cloudera is also set to announce the inaugural Hadoop World Conference, scheduled for Oct. 2 in New York City.

The conference is being organized by Cloudera founder Christophe Bisciglia, and the tentative agenda has presentations from, amongst others, Cloudera, Yahoo, Facebook, IBM, Microsoft, eBay, Visa, About.com, The New York Times, and JPMorgan Chase.

According to Christophe, "Hadoop is changing the way that users manage and process ever-increasing volumes of data.  Hadoop World in New York City will showcase this powerful new open source technology, with special focus on how traditional enterprises use it to solve real business problems."

For more details, see the Cloud Computing Events section.

Carl Brooks’ A new breed of cloud provider? Real servers, EC2 prices post of 8/19/2009 says:

NewServers is a curious new breed of cloud computing provider that eschews the frugal, high-efficiency, open source virtualized infrastructure used by Amazon, Rackspace and others. Instead, NewServers sells dedicated physical servers in exactly the same way Amazon sells virtual ones: by the hour, by the server, on the Web or with an application programming interface (API) -- and at comparable prices.

In 2004, CEO J.P. Gagne founded hosting company NewServers and, in 2007, was bitten by the cloud bug. Here Gagne explains [in a Q&A session] why he went real rather than virtual.

David Linthicum claims “The Open Group is looking at architecture and interoperability issues in a good way. Let's hope they succeed in helping the cloud evolve as it should” in his A new champion for preventing vendor lock-in in the cloud post of 8/18/2009 for InfoWorld’s Cloud Computing blog:

… I find the Open Group's thinking around the use of cloud computing in traditional enterprise architecture frameworks and in SOA more advanced than that of the other working groups and standards organizations I deal with.

What's desperately needed now is that we slow down on defining the "what" with cloud computing and focus more on the "how." The Open Group seems focused on the "how," which is encouraging.

Business Wire announces SpringSource Launches the Enterprise Java Cloud in this 8/19/2009 press release in which SpringSource claims its new Cloud Foundry product enables the firm to be:

[T]he first vendor to offer a self-service, pay-as-you-go, public cloud deployment platform for full-feature Java web applications that unifies the entire build, run and manage application lifecycle for Java. …

Cloud Foundry is based on SpringSource’s recent acquisition of Cloud Foundry, Inc., an Oakland, Calif.-based software company. SpringSource Cloud Foundry is built on the innovative open-source Cloud Tools project and extends SpringSource’s solutions for building, deploying and managing Java applications to take full advantage of the power of elastic cloud computing. Cloud Foundry, now incorporated into SpringSource’s product line, launches and automatically scales Java web applications in the cloud with a few clicks of the mouse. [Emphasis added.]

<Return to section navigation list> 
