Windows Azure and Cloud Computing Posts for 9/21/2009+
Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.
Tip: Copy •, •• or ••• to the clipboard, press Ctrl+F and paste into the search text box to find updated articles.
•• Update 9/23/2009: Rob Gillen’s Azure with Large Data Sets presentation and live demo, Jay Fry’s review of the 451 Group’s “Cloud in Context” Event, CloudSwitch leaves stealth mode, Mary Hayes Weier says subscription-based pricing for Oracle products is “on Safra’s desk,” Linda McGlasson on “The Future of PCI,” Chris Hoff warns about patches to IaaS and PaaS services, Gartner’s Tom Bittman proposes recorded music as A Better Cloud Computing Analogy than water or electricity, two Johns Hopkins cardiologists recommend standardizing EHR/PHR on VistA.
• Update 9/22/2009: Zend Simple Cloud API and Zend Cloud, Ruv on OpenCloud APIs, John Treadway on Cloud Computing and Moore’s Law, Lori MacVittie on Cloud Computing versus Cloud Data Centers, Andrea DiMaio and the Government 2.0 Hype Cycle, and more.
Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon.
Read the detailed TOC here (PDF). Download the sample code here. Discuss the book on its WROX P2P Forum.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database (SADB)
- .NET Services: Access Control, Service Bus and Workflow
- Live Windows Azure Apps, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use these links, first click the post’s title to display it as a single article; the section links will then navigate within the post.
Azure Blob, Table and Queue Services
•• Rob Gillen (@argodev) delivered a Windows Azure: Notes from the Field presentation to the Huntsville [AL] New Technology Users group (HUNTUG.org) on 9/14/2009 that demonstrates methods for processing large earth science datasets with Azure tables. See the Live Windows Azure Apps, Tools and Test Harnesses section for details.
• Zend Technologies reduces cloud-storage vendor lock-in anxiety with its Simple Cloud API of 9/22/2009 for Windows Azure, Amazon Web Services, Nirvanix and RackSpace storage services. See the Live Windows Azure Apps, Tools and Test Harnesses section for details.
Simon Munro’s Catfax project on CodePlex demonstrates moving SQL data to and from the cloud using SQL CLR, Azure WCF and Azure Storage:
Catfax is a demonstration project which shows how rows can be uploaded and retrieved from the cloud in a manner that is well integrated with SQL Server using SQL-CLR. The application has a SQL-CLR stored procedure that calls a WCF service hosted on Azure. The Azure web role stores the data in Azure Tables which can be retrieved later from SQL by executing a similar SQL-CLR sproc.
A more detailed description can be found [in this] blog post: http://blogs.conchango.com/simonmunro/archive/2009/07/08/catfax-sql-clr-wcf-and-windows-azure.aspx.
Simon’s project for Azure Tables is similar to George Huey’s for SQL Azure databases (see below), but Simon’s is a two-way street.
<Return to section navigation list>
SQL Azure Database (SADB, formerly SDS and SSDS)
My original Using the SQL Azure Migration Wizard with the AdventureWorksLT2008 Sample Database post is updated for George Huey’s new SQL Azure Migration Wizard v.0.2.7, which now handles T-SQL scripts for exporting schema and data from local SQL Server 2005+ databases to SQL Azure in the cloud.
<Return to section navigation list>
.NET Services: Access Control, Service Bus and Workflow
No significant new posts on this topic today.
<Return to section navigation list>
Live Windows Azure Apps, Tools and Test Harnesses
•• Rob Gillen (@argodev) delivered a Windows Azure: Notes from the Field presentation to the Huntsville [AL] New Technology Users group (HUNTUG.org) on 9/14/2009 that demonstrates methods for processing large earth science datasets with Azure tables. Here’s the session’s description:
Come learn about Microsoft's Azure platform (and cloud computing in general) as we look at an application built to assist in the processing and publishing of large-scale scientific data. We will discuss architecture choices, benchmarking results, issues faced as well as the work-arounds implemented.
Rob is a developer who has focused on Microsoft technologies for over ten years, working in the service provider (hosting) marketplace as well as with federal and corporate customers. Rob specializes in application and service provisioning, identity management, SharePoint, and is currently working on the intersection of traditional HPC and the commercial “cloud”. Rob has spent the last two years working on the applications team at Oak Ridge National Laboratory and is currently working in the Computer Science Research Group at ORNL studying the role of cloud computing in our portfolio of scientific computing.
Slides 19 through 21 describe Rob’s processing of a 1.2 GB NetCDF (Network Common Data Form) file of 20th-century climate data stored in Azure tables as a flattened view. Rob documents varying methods of loading the tables in slides 22 through 25. Here’s his live Silverlight visualization of one set of data, shown in slide 26 (click for full-size image, 700 KB).
Rob’s blog offers a series of detailed articles posted while he was testing the data processing and visualization techniques described in his HUNTUG presentation and the live demo.
His slide 28 observes that “ATOM is *very* bloated (~9 MB per time period, average of 55 seconds over 9 distinct serial calls)” whereas “JSON is better (average of 18.5 seconds and 1.6 MB).” I’ve been raising this issue periodically since Pablo Castro initially adopted the AtomPub format for ADO.NET Data Services. See Rob’s AtomPub, JSON, Azure and Large Datasets, Part 2 of 8/20/2009 and AtomPub, JSON, Azure, and Large Datasets of 8/14/2009.
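The size difference is easy to reproduce: serialize the same entity as an AtomPub <entry> and as plain JSON, then compare how much envelope XML each row carries. Here’s a minimal Python sketch; the entity shape and property names are hypothetical, not Rob’s actual schema:

```python
import json

# Hypothetical climate-table row; Rob's actual schema differs.
entity = {"PartitionKey": "1900-01", "RowKey": "lat38_lon-84",
          "Temperature": 284.2, "Pressure": 1013.1}

# AtomPub wraps every row in an <entry> envelope, as ADO.NET Data
# Services does for Azure Table responses.
atom = """<entry xmlns="http://www.w3.org/2005/Atom"
  xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices"
  xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
  <id>https://acct.table.core.windows.net/Climate(...)</id>
  <title/><updated>2009-09-23T00:00:00Z</updated><author><name/></author>
  <content type="application/xml"><m:properties>
    <d:PartitionKey>{PartitionKey}</d:PartitionKey>
    <d:RowKey>{RowKey}</d:RowKey>
    <d:Temperature m:type="Edm.Double">{Temperature}</d:Temperature>
    <d:Pressure m:type="Edm.Double">{Pressure}</d:Pressure>
  </m:properties></content>
</entry>""".format(**entity)

compact = json.dumps(entity)
print(len(atom), "bytes as Atom vs", len(compact), "bytes as JSON")
# The roughly 5x per-row envelope overhead, repeated across thousands
# of rows, matches the ~9 MB Atom vs ~1.6 MB JSON payloads Rob measured.
```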
•• Suzanne Fedoruk of the Physicians Wellness Network reported on 9/23/2009 that PWN Announces That Consumers Can Now Store webLAB Test Results In Their Microsoft HealthVault Account:
Each day more consumers are turning to webLAB to save time and money by ordering general wellness lab tests online. Physicians Wellness Network (PWN) today announced that consumers can now store, track and trend their webLAB test results in their Microsoft HealthVault account. HealthVault is an open, Web-based platform designed to empower consumers by putting them in control of their health information.
Informed Consumers Make Better Health Choices
"PWN physicians know that informed consumers make better health choices. Storing webLAB test results in a Microsoft HealthVault account makes this possible," said Brent Blue, M.D., president of PWN. "When people can track and trend important numbers, such as their cholesterol levels, they are armed with information to manage their health choices." …
I’m still waiting for Quest Diagnostics and Walgreens Pharmacy to create their promised links to HealthVault.
•• Two Johns Hopkins Medical Institutions cardiologists recommend adopting the Veterans Administration’s VistA EHR application in their Zakaria and Meyerson: How to Fix Health IT article of 9/17/2009 for the Washington Post:
… Most currently available electronic medical record software is unwieldy and difficult to quickly access, and there is still no vehicle for the timely exchange of critical medical data between providers and facilities. The stimulus bill included $50 billion dollars to promote uniform electronic record standards, but it will be difficult and costly to construct new systems ensuring interoperability of all current hospital software.
A cheaper and more effective solution is to adopt a standard electronic record-keeping system and ask that all health information software interface with it. In fact, a proven system already exists. The software is called the Veterans Health Information Systems and Technology Architecture (VistA), which the Veterans Affairs Department developed. VistA requires minimal support, is absolutely free to anyone who requests it, is much more user-friendly than its counterparts, and many doctors are already familiar with it. … [Wikipedia link added.]
• Zend Technologies reduces cloud-storage vendor lock-in anxiety with its Simple Cloud API of 9/22/2009 for Windows Azure, Amazon Web Services, Nirvanix and RackSpace storage services.
My take: Zend's Simple Cloud API is a set of interfaces for RESTful file storage, document storage, and simple queue services with implementations for Amazon Web Services, Windows Azure storage services, Nirvanix Storage Delivery Network and Rackspace Cloud Files. Identical, or at least similar, implementations for major cloud storage providers will reduce IT managers' widely publicized apprehension of cloud vendor lock-in.
Zend will deliver the PHP implementation for the open source Zend Framework as the "Zend Cloud," which follows in the footsteps of other "OpenCloud" APIs, such as those from Sun Microsystems and GoGrid, as well as earlier Rackspace APIs. The TIOBE Programming Community Index for September 2009 reports that PHP is now #3 in programming language popularity, up from #5 in September 2008, so the Zend Cloud implementation has a large potential audience among developers for Amazon, Nirvanix, and Rackspace storage.
Google is conspicuous by its absence as a Zend contributor. However, that's not surprising because Google offers "Python as a Service" for the Web and doesn't emphasize cloud storage in its marketing materials.
Windows Azure is a .NET Platform as a Service (PaaS) offering but Microsoft (in conjunction with RealDolmen) released CTP3 of the Windows Azure SDK for PHP (PHPAzure) on 9/8/2009 as an open-source "commitment to Interoperability." The relative benefits of PHPAzure, Simple Cloud API and Zend Cloud to IT managers and developers remain to be seen. PHPAzure takes advantage of Azure-specific features, such as transactions on members of the same entity group, whereas the Simple API/Zend adapters offer least-common-denominator features of the four supported services.
My conclusion: Windows Azure developers will continue to program in C# and use the sample StorageClient libraries to integrate Azure .NET Web and Worker projects with RESTful Azure storage services. Zend’s initiative might convince the Azure team to formalize StorageClient as an official supplement to its RESTful storage APIs.
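The least-common-denominator point is easier to see in code. Here is a minimal Python sketch of the adapter pattern such an API implies: an interface limited to operations every provider supports, with swappable backends. The class and method names are my own illustrations, not the Simple Cloud API’s actual (PHP) interfaces:

```python
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Least-common-denominator blob interface: only the operations
    all four providers (S3, Azure Blob, Nirvanix, Cloud Files) share."""
    @abstractmethod
    def put(self, name: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, name: str) -> bytes: ...
    @abstractmethod
    def delete(self, name: str) -> None: ...

class InMemoryAdapter(StorageAdapter):
    """Stand-in backend; a real adapter would wrap provider REST calls."""
    def __init__(self):
        self._blobs = {}
    def put(self, name, data):
        self._blobs[name] = data
    def get(self, name):
        return self._blobs[name]
    def delete(self, name):
        del self._blobs[name]

# Application code depends only on the interface, so switching
# providers means swapping the adapter, not rewriting the app.
store: StorageAdapter = InMemoryAdapter()
store.put("report.csv", b"a,b\n1,2\n")
print(store.get("report.csv"))
```

Anything provider-specific, such as Azure’s entity-group transactions, necessarily falls outside the shared interface, which is exactly the trade-off noted above.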
• Vijay Rajagopalan, Principal Architect on the Interoperability Technical Strategy team at Microsoft, gives an overview of the Simple API for Cloud Application Services and details the initial contribution from Microsoft in this 00:06:42 Channel9 video of 9/22/2009.
• Maarten Balliauw describes his Zend Framework: Zend_Service_WindowsAzure Component Proposal in detail on this Zend wiki page:
Zend_Service_WindowsAzure is a component that allows applications to make use of the Windows Azure API's. Windows Azure is a Microsoft platform which allows users to store unstructured data (think: files) and structured data (think: database) in a cloud service. More on http://www.microsoft.com/Azure.
The current proposal targets all 3 Windows Azure storage services. These services are:
- Blob Storage
- Table Storage
- Queue Service
An example implementation of this can be found on CodePlex: PHP SDK for Windows Azure and in the ZF SVN laboratory.
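To see how little the Blob service demands of a client, note that a container configured for public (anonymous) read access can be fetched with a single HTTP GET. A hedged Python sketch with made-up account, container and blob names (private containers additionally require a SharedKey-signed Authorization header):

```python
import urllib.request

# Hypothetical storage account and a container whose access policy
# permits anonymous reads; no credentials are needed for the GET.
url = "http://myaccount.blob.core.windows.net/public-container/readme.txt"
with urllib.request.urlopen(url) as resp:
    print(resp.status, resp.read()[:80])
```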
• Mary Jo Foley adds her insight on the topic with Zend, Microsoft, IBM join forces to simplify cloud-app development for PHP coders on 9/22/2009:
Zend Technologies and a number of its partners — including Microsoft — unveiled on September 22 another cloud-interop initiative. This one is aimed at developers who are writing new cloud-centric apps in PHP.
All the right buzzwords are part of the newly unveiled Simple API for Cloud Application Services. It’s an open-source initiative that currently includes Zend, Microsoft, IBM, Nirvanix, Rackspace and GoGrid as the founding members. (No Google and no Amazon, however.) It’s all about interoperability and community and dialogue.
For developers of new “cloud-native” applications, “this is a write once and run anywhere” opportunity, said Zend CEO Andi Gutmans. …
• Maureen O’Gara chimes in with IBM, Microsoft, Others in Lock-Picking Cloud API Push of 9/22/2009:
Half the apps on the Internet are written in PHP. That gives Zend Technologies, the PHP house, a stake in the cloud.
So it’s rounded up cloud merchants Microsoft, IBM, Rackspace, GoGrid and Nirvanix and has gotten them to support its new open source drive to create a so-called Simple API for Cloud Application Services that developers can write to – or, Zend thinks as likely, rewrite to – to get native cloud apps.
These apps in turn promise to break the lock on closed clouds like Amazon’s, making it possible to move applications and their data in and out of clouds, migrating them around virtually all the major nebulae.
The trick will be in creating Simple Cloud API adapters.
Zend cloud strategist Wil Sinclair – that’s right, Wil – says both Amazon and Google were asked to join the initiative.
Google’s widgetry is based on Python so it’s got an excuse for not joining. Anyway, the in-house Google Data Liberation Front is at least promising to cut the shackles that condemn captive users to remain customers of Google services because their data is held hostage, as it already has with Google App Engine.
• See Doug Tidwell’s Cloud computing with PHP, Part 1: Using Amazon S3 with the Zend Framework tutorial in the Other Cloud Computing Platforms and Services section.
• Eric Nelson’s Using IIS to generate a X509 certificate for use with the Windows Azure Service Management API – step by step of 9/22/2009 is a detailed tutorial:
This is one of a series of posts on my preparations for sessions on Azure and ORMs at Software Architect 2009.
One of the things that has been added to Windows Azure while I have been “elsewhere” is the Service Management API, which the team introduced on the 17th of this month (Sept 2009).
This is a REST-based API which allows:
- Deployments – Viewing, creating, deleting, swapping, modifying configuration settings, changing instance counts, and updating the deployment.
- Listing and viewing properties for hosted services, storage accounts and affinity groups
It uses X509 client certificates for authentication. You can upload any valid X509 certificate in .cer format to the Windows Azure developer portal and then use it as a client certificate when making API requests.
But… you need an X509 certificate. If you have the Windows SDK installed then you can use makecert (details on the original post). An alternative is to use IIS 7. I decided to use IIS to get my X509 but it turned out a little less obvious than I expected. Hence a step by step is called for. …
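Once a certificate is uploaded to the portal, calling the API comes down to presenting the matching client certificate over HTTPS. A minimal Python sketch, assuming the requests library and a PEM-format certificate/key pair exported from the certificate IIS generated; the endpoint shape and x-ms-version header follow the Service Management API documentation of the time:

```python
import requests

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # yours here

# List hosted services. Authentication is the X509 client certificate
# previously uploaded to the Windows Azure developer portal.
resp = requests.get(
    f"https://management.core.windows.net/{SUBSCRIPTION_ID}"
    "/services/hostedservices",
    cert=("mycert.pem", "mykey.pem"),        # client cert + private key
    headers={"x-ms-version": "2009-10-01"},  # required versioning header
)
resp.raise_for_status()
print(resp.text)  # XML list of hosted services
```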
• Jonathan Lindo ruminates about Fixing Bugs in the Cloud in this 9/22/2009 post:
… One of the essential elements of success is getting a solid, scalable application online and running smoothly and securely. But there just hasn’t been a lot of innovation here.
Being able to quickly identify, respond to and resolve issues in a SaaS application is critical, because if one server has a bad day, it’s not one customer that feels pain, it’s hundreds or thousands. And that’s bad. SaaS acts like a big hairy amplifier on any defect or scalability issue that might be lurking in your app.
Technologies like Introscope, Patrol, Vantage, Snort and my software debugging company Replay are starting to address the needs, but our customers are still pioneering and forging the landscape as they increasingly feel the pains of this new software paradigm we find ourselves in. …
Msdevcon will offer six new SQL Azure training courses starting on 9/28/2009 in its Microsoft SQL Azure series:
- Microsoft SQL Azure Overview for the Technical Decision Maker 9/28/2009
- Microsoft SQL Azure Overview for Developers 9/28/2009
- Microsoft SQL Azure RDBMS Support 9/28/2009
- Microsoft SQL Azure Programmability 9/28/2009
- Microsoft SQL Azure Tooling 10/5/2009
- Microsoft SQL Azure Security Model 10/5/2009
The above are in addition to the many courses in its Azure Services for Developers series.
Sara Forrest writes Bosworth wants you to take charge of your health in her 9/21/2009 post to ComputerWorld:
Adam Bosworth is asking you to take your health into your own hands (or at least into your computer). The former head of Google Health, Bosworth is now working on a new start-up, Keas Inc., which is dedicated to helping consumers take charge of their own health data. His work focuses on making individual health records easily accessible, thus preventing overtreatment and overspending through proper patient education.
While attending the Aspen Health Forum this summer, he took a few minutes to explain the importance of public access to health data.
Asked how he got to where he is today, Bosworth replied: “I worked for Citicorp in the distant past, Borland building Quattro, Microsoft for 10 long years building what I now call Lego blocks for adults, BEA Systems for three years, Google, and three of my own start-ups.
I decided about five years ago that I'd spend the next 25 trying to improve health care and help bring it into the 21st century. I went to Google with that in mind and got sidetracked for 18 months running and building what are generally called Google Apps today before getting to work on Google Health. Keas, my current company, is in some way the culmination of everything I've learned in computing, applied to how to improve health care.” …
Adam also is known as the “father of Microsoft Access.”
Howard Anderson reports that healthcare providers are Weighing EHR/PHR Links in this 9/21/2009 post:
Provider organizations have to address several critical issues when launching personal health records projects, one consultant says. Among those issues, he says, is whether to enable patients to access a complete electronic health record and export it to a PHR--a step that John Moore, managing partner of Chilmark Research, Cambridge, Mass., advocates.
Hospitals and clinics also must decide what data elements are most essential to a PHR. Although many agree that medication lists and allergies must be in a PHR, providers are pondering whether to include all lab tests as well as diagnostic images, Moore notes.
Providers also must determine whether to enable patients to add their own notes to data imported from an EHR to a PHR, such as to question a doctor's findings, the consultant says. Plus, they must determine whether those patient notes will then flow into the EHR.
A strong advocate of two-way links between EHRs and PHRs, Moore also says practice management systems should be added to the mix to help enable patients to use a PHR to, for example, schedule an appointment. …
Carl Brooks’ Public sector drags its heels on cloud post of 9/18/2009 cites examples of foot-dragging by public agencies:
As firms experiment with pay-as-you-go computing infrastructures and an ever-broadening constellation of services and technologies, cloud computing is all the rage in the private sector. But the public sector -- a vast technology consumer in the U.S. with different spending habits, requirements and obligations -- is dragging its heels.
Public-sector IT departments, for instance, aren't rewarded for investing in the latest technology and for reducing costs; instead, they're expected to keep systems working far past standard technology lifecycles. …
Reuven Cohen analyzes Public Cloud Infrastructure Capacity Planning in this 9/21/2009 post:
In the run of a day I get a lot of calls from hosting companies and data centers looking to roll out public cloud infrastructures using Enomaly ECP. In these discussions there are a few questions that everyone seems to ask.
- How much is it going to cost?
- What are the minimum resources/capacity required to roll out a public cloud service?
Both questions are very much related. But to get an idea of how much your cloud infrastructure is going to cost, you first need to fully understand what your resource requirements are and how much capacity (minimum resources) will be required to maintain an acceptable level of service and hopefully turn a profit.
In a traditional dedicated or shared hosting environment, capacity planning is typically a fairly straightforward endeavor (a high allotment of bandwidth and a fairly static allotment of resources): a single server (or slice of a server) with a static amount of storage and RAM. If you run out of storage, or get too many visitors, well, too bad. It is what it is. Some managed hosting providers offer more complex server deployment options, but generally rather than one server you're given a static stack of several; the concept of elasticity is not usually part of the equation.
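Reuven’s two questions reduce to simple arithmetic once you pick a target utilization and a price point. A back-of-envelope Python sketch in which every number is hypothetical:

```python
# Hypothetical planning inputs; substitute your own measurements.
servers         = 20      # physical hosts in the initial pool
vms_per_server  = 8       # VM slots per host after overcommit
target_util     = 0.60    # fraction of slots you expect to sell
server_cost_mo  = 450.0   # amortized hardware + power + space, $/month
price_per_vm_mo = 65.0    # what you charge per VM-month

capacity = servers * vms_per_server
sold_vms = capacity * target_util
revenue  = sold_vms * price_per_vm_mo
cost     = servers * server_cost_mo
print(f"{sold_vms:.0f} VMs sold -> ${revenue:,.0f} revenue vs "
      f"${cost:,.0f} cost; margin ${revenue - cost:,.0f}/month")
# Elasticity is the wrinkle: unlike static hosting, the unsold
# headroom (1 - target_util) is what absorbs bursts within the SLA.
```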
Is it problems with capacity planning that are holding back adoption of cloud computing by government agencies?
<Return to section navigation list>
Windows Azure Infrastructure
•• Krishnan Subramanian asks Does Private SaaS Make Any Sense? and says “Maybe” in this 9/23/2009 post:
Last week, I had a Twitter discussion with James Watters of Silicon Angle about the idea of Private SaaS. He is of the strong opinion that Private SaaS is meaningless. Even though I share his opinion on it, I am not religious about having multi-tenancy as the requirement in the definition of SaaS.
The biggest advantage of SaaS is the huge cost savings it offers due to the multi-tenant architecture. However, enterprises are reluctant to embrace SaaS applications due to concerns about reliability, security, privacy, etc. But the other advantages of SaaS, like low resource overhead, centralized control of user applications, and simplified security and patch management, are very attractive to the enterprises. In order to capture the enterprise markets, some of the vendors are shifting toward a Private SaaS approach.
•• Tom Bittman proposes recorded music as A Better Cloud Computing Analogy than water or electricity in this 9/22/2009 post to the Gartner blogs. Radio delivered “music as a service” (MaaS?) but “on-premises” music hasn’t died.
• John Treadway explains the relationship between Moore’s Law and the Cloud Inflection in IT Staffing in this 9/21/2009 post:
I was in a meeting last week with Gartner’s Ben Pring, and he made an interesting observation that cloud computing is, in the end, just a result of Moore’s law. The concept is fairly simple and charts a path of increasingly distributed computing from mainframes, to minicomputers, to workstations and PCs (which resulted in client/server), then on to the Internet, mobile computing, and finally to cloud computing. But cloud computing is not an increase in distribution of computing — it’s actually the reverse. Sure, there are more devices than ever. But since internet application topologies have replaced client/server, the leveraging of computing horsepower has migrated back to the data center.
The explosion in distributed computing brought on by ever faster processors (coupled by lower prices on CPUs, memory and storage) allowed for the client/server revolution to push workloads onto the client and off of the server. Today, much of the compute power of edge devices (PCs, laptops and smart phones) is not used for computing, but for presentation. Raw workload processing is happening on the server to an increasing degree. …
Until the cloud, Moore’s law resulted in a steady increase in demand for skilled systems and network administrators. At some point, the economies of scale and concentrating effects of cloud computing – particularly in the area of IT operations – will be visible as a measurable decline in the demand for these skills.
John is the newly appointed Director, Cloud Computing Portfolio for Unisys.
• Lori MacVittie’s Cloud Computing versus Cloud Data Centers post of 9/21/2009 contends: “Isolation of resources in ‘the cloud’ is moving providers toward hosted data centers:”
Isolation of resources in “the cloud” is moving providers toward hosted data centers and away from shared resource computing. Do we need to go back to the future and re-examine mainframe computing as a better model for isolated applications capable of sharing resources?
James Urquhart in “Enterprise cloud computing coming of age” gives a nice summary of several “private” cloud offerings; that is, isolated and dedicated resources contracted out to enterprises for a fee. James ends his somewhat prosaic discussion of these offerings with a note that this “evolution” is just the beginning of a long process.
But is it really? Is it really an evolution when you appear to be moving back toward what we had before? Because the only technological difference between isolated, dedicated resources in the cloud and an “outsourced data center” appears to be the way in which the resources are provisioned. In the former they’re mostly virtualized and provisioned on-demand. In the latter those resources are provisioned manually. But the resources and the isolation are the same. …
• The new Tech Hermit reports More Bad News for Microsoft Data Center Program on 9/21/2009:
Following on the terrible blow that Debra Chrapaty is leaving Microsoft for greener pastures at Cisco, the program received another huge blow with the resignation of Joel Stone, who was responsible for the operations of all North America-based facilities. Moreover, he is taking a prominent position at Global Switch overseeing worldwide data center operations and will be based out of the United Kingdom. …
Many of the mails we have received here at Tech Hermit suggest that these resignations have more to do with a failed, or at least troubled, integration of the various Yahoo executives into the program. As you may know, Dayne Sampson and Kevin Timmons from Yahoo recently joined the Microsoft GFS organization, the latter having responsibility for Data Center Operations previously run by General Manager Michael Manos.
One thing is clear: after the departure of Manos, the only real voice from Microsoft around infrastructure leadership was Chrapaty. With her departure, and now key operations leadership as well, we have to ask: is Microsoft’s data center program done for?
• Rich Miller’s Tech Hermit Blog Returns post of 9/22/2009 reports on the reincarnation of the Tech Hermit brand and the Digital Cave blog, which has offered many insights into Microsoft’s data center operations.
• Jake Sorofman reads the crystal ball in DATACENTER.NEXT: Envisioning the Future of IT of 9/21/2009:
These days, there’s a lot of time spent defining cloud computing. If you believe the pundits, its definition remains a mystery—a cryptic riddle waiting to be deciphered.
Personally, I’m not that interested in defining cloud.
What is far more interesting to me is defining the future of IT, which almost certainly embodies aspects of what most people would recognize as cloud computing. Whether the future of IT is cloud itself is a silly tautological question since we haven’t defined cloud in the first place.
What we do know is that IT is facing a fundamental transformation—a transformation forced by technological, economic, and competitive forces. Technologically, enterprises are recognizing that IT has become unthinkably complex. Economically, enterprises are under pressure to slash budgets and do more with less. And competitively, enterprises are recognizing that IT has become core to business, and the delay of yesterday’s IT creates serious competitive risk. …
Jake Sorofman is Vice President of Marketing at rPath.
Kara Swisher reports Top Microsoft Infrastructure Exec Chrapaty Heads to Cisco in this 9/20/2009 post to D | All Things Digital:
One of Microsoft’s top execs, Debra Chrapaty, who heads its infrastructure business, is leaving the software giant to take a top job at Cisco (CSCO), sources said.
Chrapaty–whose title is corporate VP of Global Foundation Services–is also one of increasingly few top women tech execs at Microsoft (MSFT), where she has worked for seven years.
The job put her in charge of, as a Microsoft site notes, “strategy and delivery of the foundational platform for Microsoft Live, Cloud and Online Services worldwide including physical infrastructure, security, operational management, global delivery and environmental considerations. Her organization supports over 200 online services and web portals from Microsoft for consumers and businesses.”
James Hamilton’s Here’s Another Innovative Application post of 9/21/2009 begins:
Here’s another innovative application of commodity hardware and innovative software to the high-scale storage problem. MaxiScale focuses on 1) scalable storage, 2) distributed namespace, and 3) commodity hardware.
Today's announcement: http://www.maxiscale.com/news/newsrelease/092109.
They sell software designed to run on commodity servers with direct attached storage. They run N-way redundancy with a default of 3-way across storage servers to be able to survive disk and server failure. The storage can be accessed via HTTP or via Linux or Windows (2003 and XP) file system calls. The latter approach requires a kernel-installed device driver and uses a proprietary protocol to communicate back with the filer cluster, but has the advantage of directly supporting local O/S read/write operations.
MaxiScale’s approach sounds similar to that used to provide redundancy for Windows Azure tables and SQL Azure databases.
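The N-way redundancy idea is simple to sketch: write each object to N distinct servers, and a read succeeds while any one replica survives. A toy Python illustration of the concept, not MaxiScale’s actual implementation:

```python
import hashlib

SERVERS = [f"storage-{i:02d}" for i in range(12)]  # hypothetical pool
N = 3  # default replication factor, per MaxiScale's announcement

def replica_set(key: str, n: int = N) -> list:
    """Pick n distinct servers deterministically from the key's hash."""
    start = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(SERVERS)
    return [SERVERS[(start + i) % len(SERVERS)] for i in range(n)]

def read_from(key: str, failed: set) -> str:
    """A read survives any N-1 server failures within the replica set."""
    for server in replica_set(key):
        if server not in failed:
            return server
    raise IOError(f"all {N} replicas of {key!r} are down")

print(replica_set("climate/1900-01.nc"))  # the 3 servers holding the object
print(read_from("climate/1900-01.nc", failed={"storage-03", "storage-04"}))
```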
<Return to section navigation list>
Cloud Security and Governance
•• Chris Hoff (@Beaker) brings up issues about updating IaaS and PaaS cloud services for the second time in his Redux: Patching the Cloud post of 9/23/2009:
… What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models?)
How does one negotiate the process for determining when and how a patch is deployed? Where does the cloud operator draw the line? If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service? If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.
I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing. We see wholesale changes that can be instantiated on a whim by Cloud providers that could alter service functionality and service availability such as this one from Google (Published Google Documents to appear in Google search) — have you thought this through? …
•• Linda McGlasson begins a series on “The Future of PCI” with The Future of PCI: 4 Questions to Answer on 9/22/2009:
It's been an interesting year for the Payment Card Industry Data Security Standard (PCI DSS, or just PCI).
On one hand there were the Heartland Payment Systems (HPY) and Network Solutions data breaches, after which at least one industry analyst declared "Let's stop pretending that PCI is working."
On the other, there is the State of Nevada, which has passed a new law requiring businesses to comply with PCI when collecting or transmitting payment card information.
In the middle is a debate among payment card companies, banking institutions, merchants, industry groups and even congressional leaders, questioning the merit of the standard and all hinting at the same open question: What is the future of PCI?
PCI stakeholders are gathering this week for the 2009 PCI Security Standards Council Community meeting in Las Vegas, NV. … [PCI link added.]
Linda continues with the four questions.
David Linthicum’s Should Failures Cast Shadows on Cloud Computing? post to InformationWeek’s Intelligent Enterprise blog of 9/21/2009 posits:
The Gmail outage last week left many asking about the viability of cloud computing, at least, according to PC World and other pundits.
"Tuesday's Gmail outage was not only an inconvenience it calls into question -- yet again -- the feasibility of present-day cloud computing. One popular prediction is that future computers won't need huge hard drives because all our applications and personal data (photos, videos, documents and e-mail) will exist on remote servers on the Internet (otherwise known as 'cloud computing')." Every time Twitter goes out, or, in this case, a major free email system goes down, everyone uses the outage as an opportunity to cast shadows on cloud computing. I'm not sure why. In many cases its apples versus oranges, such as Twitter versus Amazon EC2. Also, systems go down, cloud and enterprise, so let's get over that as well.
Joseph Goedart reports Baucus Wants Tighter HIPAA Standards in this 9/21/2009 post to the Health Data Management site:
The health care reform plan issued by Senate Finance Committee chair Sen. Max Baucus (D-Mont.) calls for mandated adoption of "operating rules" that would significantly tighten the standards of HIPAA administrative/financial transactions. It also would increase the number of transaction sets.
The "operating rules" referenced in the plan are those developed under the voluntary CORE initiative under way for several years. CORE is the Committee on Operating Rules for Information Exchange within CAQH, a Washington-based payer advocacy group. The initiative seeks to build industry consensus on tightening of the HIPAA standards to facilitate health care financial/administrative transactions and offer more information to providers. …
<Return to section navigation list>
Cloud Computing Events
•• Jay Fry processes customer feedback about cloud computing in his Making cloud computing work: customers at 451 Group summit say costs, trust, and people issues are key post of 9/22/2009:
A few weeks back, the 451 Group held a short-but-sweet Infrastructure Computing for the Enterprise (ICE) Summit to discuss "cloud computing in context." Their analysts, some vendors, and some actual customers each gave their own perspective on how the move to cloud computing is going -- and even what's keeping it from going. [Link to ICE added.]
The customers especially (as you might expect) came up with some interesting commentary. I'm always eager to dig into customer feedback on cloud computing successes and roadblocks, and thought some of the tidbits we heard at the event were worth recounting here.
Jay’s topics include:
- Clouds under the radar
- Customers: Some hesitate to call it cloud
- Cloud: It's (still) not for the faint of heart
- Biggest pain: impact on the people and the organization
- Need to move beyond just virtualization
- Can I drive your Mercedes while you're not using it?
- Are we making progress on cloud computing?
Where: Grand Hyatt Hotel, San Francisco, CA, USA
Brent Stineman’s Twin Cities Cloud Computing – August Meeting Recap post of 9/20/2009 reviews an unscheduled visit by David Chappell to the Twin Cities Cloud Computing User group’s August 2009 meeting:
David’s presentation was divided into two portions. The first and most lengthy was a detailing of what the Windows Azure Platform is. It’s obvious that David has spent a significant amount of time with the Windows Azure product team. Not only does he have a great understanding of the product’s past and present, but it seemed like he knew more than he was letting on about its future. The most important take-away I had from this was understanding the target audience for each of the components of the Windows Azure Platform.
Windows Azure, the application hosting platform, was intended to allow someone to build the next Facebook or Twitter. That’s why its database is a horizontally scalable system that is not based on traditional RDBMS models. This is also why it includes features and a price tag unlike contemporary co-location-type hosting packages, which are targeted at simpler hosting needs. On the flip side is SQL Azure, a vertically scaling database that provides full RDBMS support. This component is less focused on scalability than on providing a targeted cloud-based database solution.
<Return to section navigation list>
Other Cloud Computing Platforms and Services
•• Mary Hayes Weier reports Oracle Contemplates Huge Shift: Subscription-Based Pricing in this 9/23/2009 post to InformationWeek’s Plug into the Cloud blog:
Oracle, it seems, is trying to hammer out a strategy to more heavily embrace the most radical faction of the SaaS movement, one that completely upends the traditional software vendor profit model: Subscription-based pricing. If what Oracle said yesterday in a Web event is true, this could be a huge shift for the software giant.
Oracle launched the virtual Web event, around midmarket software announcements, with a live video keynote address featuring some Oracle execs and a presentation about what's new. There it was in the preso: new pricing options will include "subscription-based pricing."
As noted in a story posted earlier today, that means Oracle will offer SaaS beyond the two apps (On Demand CRM and Beehive) it now offers, for all or some of the business applications it sells to midsize companies. The question is how exactly it plans to do that. When I asked Mark Keever, the Oracle VP who heads up midmarket apps, about subscription-based pricing in a follow-up call Tuesday, he didn't have any more details he could share with me right now. But his group did have permission to say that subscription-based pricing would be available for midsize companies.
Just for laughs, Larry Ellison goes bonkers over cloud computing at the Churchill Club while Ed Zander looks on in this 00:03:13 YouTube video.
•• CloudSwitch claims to be a “fast-growing cloud computing company backed by Matrix Partners, Atlas Venture and Commonwealth Capital Ventures, currently in stealth-mode” in this initial appearance of their Web site and blog on 9/23/2009:
We're building an innovative software appliance that delivers the power of cloud computing seamlessly and securely so enterprises can dramatically reduce cost and improve responsiveness to the business.
With CloudSwitch, enterprises are protected from the complexity, risks and potential lock-in of the cloud, turning cloud resources into a flexible, cost-effective extension of the corporate data center.
We're led by seasoned entrepreneurs from BMC, EMC, Netezza, RSA, SolidWorks, Sun Microsystems and other market-leading companies, and we're building a world-class team with proven expertise in delivering complex enterprise solutions.
•• Ellen Rubin asks Moving to the Cloud: How Hard is it Really? and notes “Today's cloud providers impose architectures that are very different from those of standard enterprise applications” in a 9/23/2009 post to the CloudSwitch blog:
Many IT managers would love to move some of their applications out of the enterprise data center and into the cloud. It's a chance to eliminate a whole litany of costs and headaches: in capital equipment, in power and cooling, in administration and maintenance. Instead, just pay as you go for the computing power you need, and let someone else worry about managing the underlying infrastructure.
But moving from theory into practice is where things get complicated. It's true that a new web application built from scratch for the cloud as a standalone environment can be rolled out quickly and relatively easily. But for existing applications running in a traditional data center and integrating with a set of other systems, tools and processes, it's not nearly so simple.
• Doug Tidwell explains Cloud computing with PHP, Part 1: Using Amazon S3 with the Zend Framework in this detailed IBM developerWorks tutorial of 9/22/2009:
Cloud computing promises unlimited disk space for users and applications. In an ideal world, accessing that storage would be as easy as accessing a local hard drive. Unfortunately, the basic APIs of most cloud storage services force programmers to think about protocols and configuration details instead of simply working with their data. This article looks at classes in the Zend Framework that make it easy to use Amazon's S3 cloud storage service as a boundless hard drive.
I’m unsure why IBM promotes Amazon Web Services; perhaps it’s because AWS isn’t Microsoft or Google.
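For readers who prefer Python to Doug’s PHP, the same “boundless hard drive” pattern looks like this with the boto library standing in for Zend_Service_Amazon_S3; the bucket name and credentials are placeholders:

```python
from boto.s3.connection import S3Connection

# Placeholder credentials; boto was the de facto Python S3 client
# of the era, playing the role Zend_Service_Amazon_S3 does in PHP.
conn = S3Connection("AWS_ACCESS_KEY", "AWS_SECRET_KEY")
bucket = conn.create_bucket("example-boundless-drive")

key = bucket.new_key("docs/hello.txt")
key.set_contents_from_string("stored in the cloud, read like a file")
print(key.get_contents_as_string())

for k in bucket.list(prefix="docs/"):  # enumerate like a directory
    print(k.name, k.size)
```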
• Reuven Cohen asks What is an OpenCloud API? in this 9/14/2009 post:
When it comes to defining Cloud Computing I typically take the stance of "I know it when I see it". Although I'm half joking, being able to spot an Internet centric platform or infrastructure is fairly self evident for the most part. But when it comes to an "OpenCloud API" things get a little more difficult.
Lately it seems that everyone is releasing their own "OpenCloud APIs"; companies like GoGrid and Sun Microsystems were among the first to embrace this approach, offering their APIs under friendly, open Creative Commons licenses. The key aspect in most of these CC-licensed APIs is the requirement that attribution is given to the original author or company. Personally I would argue that a CC license isn't completely open because of this attribution requirement, but at the end of the day it's probably open enough.
Ruv concludes:
This brings us to what exactly is an OpenCloud API?
A Cloud API that is free of restrictions, be it usage, cost or otherwise.
and offers his $0.02 on Zend’s cloud initiative with New Simple Cloud Storage API Launched of 9/22/2009.
Andrea DiMaio reports Open Data and Application Contests: Government 2.0 at the Peak of Inflated Expectations on 9/22/2009:
Government 2.0 is rapidly reaching what we at Gartner call the peak of inflated expectations. This is the highest point in the diagram called the “hype cycle,” which constitutes one of our most famous branded deliverables to our clients and often features in the press.
Almost all technologies and technology-driven phenomena go through this point, at variable speed. A few die before getting there, but many stay there for a while and then head down toward what we call the “trough of disillusionment”, i.e. the lowest point in that diagram, to then climb back (but never as high as at the peak) toward the so-called “plateau of productivity”, where they deliver measurable value.
If one looks at what is going on around government 2.0 these days, there are all the symptoms of a slightly (or probably massively) overhyped phenomenon. Those that were just early pilots one or two years ago are becoming the norm. New ideas and strategies that were developed by a few innovators in government are now being copied pretty much everywhere. …
Anthony Ha’s Dell buying Perot Systems for $3.9B post of 9/21/2009 to the Deals&More blog summarizes the purchase:
Dell announced today that it’s acquiring Perot Systems, the IT services provider founded by former presidential candidate H. Ross Perot, for $3.9 billion.
Perot Systems has more than 1,000 customers, including the Department of Homeland Security and the US military, according to the Associated Press, with health care and government customers accounting for about 73 percent of its revenue. In the last year, the companies say they made a combined $16 billion in enterprise hardware and IT services.
Dell is buying Perot stock for $30 a share, and says it plans to turn Perot into its services unit. The deal should help Dell sell its computers to Perot customers. It’s expected to close in the November-January quarter.
Last year, Dell competitor Hewlett Packard bought another Perot-founded services company, Electronic Data Systems.
As reported in an earlier OakLeaf post, Perot Systems was Dell’s pre-purchase choice for hosting cloud-based EMR/EHR applications. According to Perot CEO Peter Altabef, Perot Systems is one of the largest services companies serving the health-care sector, from which it derives about 48 percent of its revenue; around 25 percent of revenue comes from government customers.
More commentary on Dell’s acquisition of Perot:
- Dell and Perot: What this means by Brian Sommer for ZDNet’s Software and Services Safari blog
- Dell: Perot Systems purchase an 'anchor' acquisition; More deals likely by Larry Dignan for ZDNet’s Between the Lines blog
- Update: Dell agrees to buy Perot Systems for $3.9B: Perot will become Dell's global services unit by Peter Sayer for IDG News Service
Rich Miller reports Amazon EC2 Adding 50,000 Instances A Day in this 9/21/2009 post:
Amazon doesn’t release a lot of detail about the growth and profitability of its Amazon Web Services (AWS) cloud computing operation. But a recent analysis found that Amazon EC2 launched more than 50,000 new instances in a 24-hour period in just one region. Cloud technologist Guy Rosen analyzed activity on EC2 using Amazon resource IDs, and estimates that the service has launched 8.4 million instances since its debut. …
The new analysis follows up on previous research by Rosen on the number of web sites hosted on EC2 and other leading cloud providers. He noted that the data is a one-day snapshot, and could be skewed by a number of factors, but says the numbers are “impressive, to say the least.”
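Rosen’s technique rests on the observation that EC2 resource IDs increment over time, so sampling the IDs of your own instances a day apart bounds how many were issued in between. A hedged sketch of the arithmetic, with illustrative IDs and the simplifying assumption that the hex field behaves as a plain counter (Rosen’s actual decoding was more involved):

```python
# EC2 instance IDs of the era looked like "i-31a74258": an opaque
# hex field observed to grow roughly monotonically across launches.
def id_counter(instance_id: str) -> int:
    return int(instance_id.split("-")[1], 16)

# Hypothetical IDs sampled by launching a probe instance 24 hours apart.
monday  = id_counter("i-31a74258")
tuesday = id_counter("i-31a805a8")

print(f"~{tuesday - monday:,} IDs consumed in 24h in this region")
# Caveat: IDs may be shared with other resource types or strided,
# so this yields an estimate, not an exact launch count.
```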
Maureen O’Gara reports Citrix Aims To Cripple VMware’s Cloud Designs on 9/12/2009 (missed when posted):
Citrix is going to try to bar VMware from getting its hooks deep in the cloud by developing the open source Xen hypervisor, already used by public clouds like Amazon, into a full-blown, cheaper, non-proprietary Xen Cloud Platform (XCP).
It intends to surround the Xen hypervisor with a complete runtime virtual infrastructure platform that virtualizes storage, server and network resources. It’s supposed to be agnostic about virtual machines and run VMware’s, which currently run only on its own infrastructure.
<Return to section navigation list>