Windows Azure and Cloud Computing Posts for 8/13/2009+
Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.
••• Updated 8/16/2009: Azure Storage API, REST APIs for storage, SQL Server 2008 R2 virtualization
•• Updated 8/15/2009: New Azure UK apps review, electronic health records
• Updated 8/14/2009: PHPAzure supports Azure Blob’s new Shared Access Signature feature, ScrumWall case study, Azure Riviera sample app code, cloud-related events, other additions.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database (SADB)
- .NET Services: Access Control, Service Bus and Workflow
- Live Windows Azure Apps, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use these links, click a post title to display just the article you want to read.
Azure Blob, Table and Queue Services
••• Magnus Mårtensson’s Introducing the Cloud Storage API post of 8/15/2009 describes a new Windows Azure project template, which:
- Enables testability
- Abstracts away storage and delivers persistence ignorance
- Is extensible and easy to evolve during development
… This post is the first in a brief series of posts that take us that second step toward our lofty goal somewhere beyond the clouds: This is the step that makes a Cloud Application Persistence Ignorant and enables testing of these applications without depending on storage. …
You can download CloudStorage.API from CodePlex. (The API extends Magnus’ earlier combination of Windows Azure with the Managed Extensibility Framework, “Windows Azure + Managed Extensibility Framework (MEF) = true. The purpose of that post was to enable extensibility on Azure and to break out of the dependency imposed by the RoleManager static class in the Windows Azure SDK.”)
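As a rough C# sketch of the pattern Magnus describes (storage hidden behind an interface so role code can be unit-tested without any storage dependency, wired up with MEF rather than the RoleManager static class), something like the following works. The interface and class names are illustrative and are not taken from the CloudStorage.API source.

```csharp
using System.Collections.Generic;
using System.ComponentModel.Composition;          // MEF (preview bits in 2009; ships in .NET 4)
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

// Hypothetical persistence-ignorant contract; role code depends only on this.
public interface ISimpleStore
{
    void Save(string key, string value);
    string Load(string key);
}

// Production part: would wrap Azure Table/Blob calls (omitted here).
[Export(typeof(ISimpleStore))]
public class AzureTableStore : ISimpleStore
{
    public void Save(string key, string value) { /* call the Azure storage REST API */ }
    public string Load(string key) { return null; /* call the Azure storage REST API */ }
}

// Test double: lets unit tests run with no cloud or dev-fabric dependency.
public class InMemoryStore : ISimpleStore
{
    private readonly Dictionary<string, string> _data = new Dictionary<string, string>();
    public void Save(string key, string value) { _data[key] = value; }
    public string Load(string key) { string v; _data.TryGetValue(key, out v); return v; }
}

public class WorkerLogic
{
    [Import] public ISimpleStore Store { get; set; }            // resolved by MEF in the cloud...

    public WorkerLogic() { }
    public WorkerLogic(ISimpleStore store) { Store = store; }   // ...or injected directly in tests

    public void Compose()
    {
        var container = new CompositionContainer(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()));
        container.ComposeParts(this);
    }
}
```

In a test you construct WorkerLogic with the InMemoryStore; in the cloud, Compose() lets MEF supply the Azure-backed part.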
••• Steve Leem promotes REST-style Web services APIs for storage in his And Now, The REST of the Story... post of 8/15/2009:
Most of us in the Cloud Storage industry strongly believe that a key capability of a storage cloud is the REST style Web Services API. Many of the most popular storage cloud services include or exclusively use REST, including SoftLayer's CloudLayer, Amazon S3, Nirvanix SDN and Rackspace Cloud Files. [Emphasis Steve’s.]
Other access methods most often associated with Cloud Storage include CIFS, NFS and WebDAV. NFS and CIFS are not particularly usable over an Internet connection and are therefore useless in public cloud offerings. WebDAV is more useful over an Internet connection, but it is similarly limited in that all three protocols support traditional file operations such as store and retrieve, versus the robust set of services that Web Services APIs can deliver.
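To make the contrast concrete, here is a minimal C# sketch of what REST-style access buys you: one stateless HTTP request fetches a blob, with no mount points or stateful file handles involved. The account, container, and blob names are placeholders, and the request only succeeds against a container configured for public read access.

```csharp
using System;
using System.IO;
using System.Net;

class BlobGet
{
    static void Main()
    {
        // Placeholder URL: <account>.blob.core.windows.net/<container>/<blob>;
        // works as-is only for blobs in a container marked for public read access.
        string url = "http://myaccount.blob.core.windows.net/public-container/readme.txt";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";                       // plain HTTP verb, no client library needed

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine("HTTP {0}", (int)response.StatusCode);
            Console.WriteLine(reader.ReadToEnd());    // blob contents
        }
    }
}
```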
••• Benjamin Carlyle posits that MIME types are holding REST back in this 8/16/2009 essay:
With the increasing focus on REST within enterprise circles, the practice of how REST gets done is becoming more important to get right outside of the context of the Web. A big part of this is the choice of application protocol to use, the "Uniform Contract" exposed by the resources in a given architecture. Part of this problem is simple familiarisation. Existing enterprise tooling is built around conventional RPC mechanisms, or layered on top of HTTP in SOAP messages. However, another part is a basic functional problem with HTTP that has not yet been solved by the standards community. …
A significant weakness of HTTP in my view is its dependence on the MIME standard for media type identification and on the related iana registry. This registry is a limited bottleneck that does not have the capacity to deal with the media type definition requirements of individual enterprises or domains. Machine-centric environments rely on a higher level of semantics than the human-centric environment of the Web. In order for machines to effectively exploit information, every unique schema of information needs to be standardised in a media type and for those media types to be individually identified. The number of media types grows as machines become more dominant in a distributed computing environment and as the number of distinct environments increases.
•• Rob Gillen analyzes the differences in data overhead and performance between transmission of Azure Table data in AtomPub and JSON formats in his AtomPub, JSON, Azure, and Large Datasets post of 8/14/2009.
Responding to a Tweet, Rob notes that “@smarx pointed out that this post is a bit misleading (my word) in that Azure storage doesn’t support JSON. I have a web role in place that serves the data, which, upon reflection, could be introducing some time delays into the Atom feed. I will test further and update this post.”
I’ve made several, less detailed posts here on the issue of XML bloat of RESTful data retrieval using the AtomPub format. The early chapters of my book also deal with this issue.
•• Fahed Bizzari’s The Definitive Guide to GET vs POST article of 8/14/2009 offers four rules for using HTTP GET or POST methods for requests:
- Rule #1: Use GET for safe actions and POST for unsafe actions.
- Rule #2: Use POST when dealing with sensitive data.
- Rule #3: Use POST when dealing with long requests.
- Rule #4: Use GET in AJAX environments.
Of course, Fahed embellishes his rules with explanations.
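As a hedged C# illustration of Rules #1 through #3 (the URLs and form fields are placeholders): a safe, repeatable lookup rides on the query string with GET, while sensitive or lengthy data travels in a POST body, ideally over HTTPS.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class GetVsPost
{
    static void Main()
    {
        // Rule #1: GET for a safe, side-effect-free lookup (cacheable, bookmarkable).
        var get = (HttpWebRequest)WebRequest.Create(
            "http://example.com/products?category=books&page=2");
        get.Method = "GET";
        using (var resp = (HttpWebResponse)get.GetResponse())
            Console.WriteLine("GET returned {0}", resp.StatusCode);

        // Rules #2 and #3: POST for sensitive or large payloads; the data rides in the body,
        // so it stays out of browser history, proxy logs, and URL length limits.
        var post = (HttpWebRequest)WebRequest.Create("https://example.com/account/change-password");
        post.Method = "POST";
        post.ContentType = "application/x-www-form-urlencoded";
        byte[] body = Encoding.UTF8.GetBytes("oldPassword=...&newPassword=...");
        post.ContentLength = body.Length;
        using (Stream s = post.GetRequestStream())
            s.Write(body, 0, body.Length);
        using (var resp = (HttpWebResponse)post.GetResponse())
            Console.WriteLine("POST returned {0}", resp.StatusCode);
    }
}
```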
• Brent Stineman’s .NET Service Bus (Part2) – Hands on with Queues post of 8/14/2009 tackles .NET Service Bus (NSB) queues:
… Both [Azure Data Services and .NSB Queues] are based on RESTful services that require authorization for access. And with both you can create, delete, and list queues as well as put and retrieve messages from them. The .NSB version of Queues, however, has a fully supported API for .NET access (the StorageClient API for Azure storage is only a sample right now). You can define policies for .NSB queues and control access a bit more granularly. Finally, we have the ability to monitor for queue exceptions (possibly caused by policy violations). …
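To make Brent’s point about “RESTful services that require authorization” concrete, here is a hedged C# sketch of enqueuing a message over raw HTTP. The solution name, queue path, and token value are placeholders; the real URI scheme and the Access Control Service token handshake are defined by the .NET Services SDK, so treat this as the shape of the call rather than working code against the live service.

```csharp
using System;
using System.IO;
using System.Net;
using System.Text;

class ServiceBusQueueSend
{
    static void Main()
    {
        // Hypothetical queue URI: https://<solution>.servicebus.windows.net/<queuePath>
        string queueUri = "https://mysolution.servicebus.windows.net/orders";

        // In a real client the token comes from the .NET Access Control Service;
        // here it is just a placeholder string.
        string acsToken = "<token issued by the Access Control Service>";

        var request = (HttpWebRequest)WebRequest.Create(queueUri);
        request.Method = "POST";                               // enqueue = POST to the queue URI
        request.ContentType = "text/plain";
        request.Headers.Add("Authorization", acsToken);        // authorization is mandatory

        byte[] body = Encoding.UTF8.GetBytes("Hello from a REST client");
        request.ContentLength = body.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(body, 0, body.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Enqueue returned HTTP {0}", (int)response.StatusCode);
    }
}
```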
• Maarten Balliauw reports in a Tweet that the PHP SDK for Windows Azure, which is available for download from CodePlex, now supports Azure Blob’s new Shared Access Signature feature “in a cool way.” He also reports that documentation is coming.
For more information on Shared Access Signature and other new blob features, see New Windows Azure Blob Features – August 2009 of 8/11/2009 and Steve Marx’s New Storage Feature: Signed Access Signatures of 8/11/2009 for a sample application and live demo.
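In rough C# terms, the feature lets the account owner sign a blob URL so that anyone holding the URL can read that one blob until the signature expires, without ever seeing the account key. The query-string parameter names and the string-to-sign layout below are illustrative; see the storage team’s post and the documentation for the exact format.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SharedAccessSignatureSketch
{
    static void Main()
    {
        // Placeholders; the string-to-sign composition follows the Azure Blob documentation.
        string accountKey = "<base64 storage account key>";
        string blobUrl    = "http://myaccount.blob.core.windows.net/pictures/photo.jpg";

        // The service validates an HMAC-SHA256 over a canonical "string to sign"
        // (permissions, start/expiry times, resource) computed with the account key.
        string stringToSign = "r\n2009-08-16T00:00Z\n2009-08-17T00:00Z\n/myaccount/pictures/photo.jpg\n";
        string signature;
        using (var hmac = new HMACSHA256(Convert.FromBase64String(accountKey)))
        {
            signature = Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
        }

        // The signed URL can be handed to any client; no account key is needed to use it.
        string signedUrl = blobUrl +
            "?st=2009-08-16T00%3A00Z&se=2009-08-17T00%3A00Z&sr=b&sp=r" +
            "&sig=" + Uri.EscapeDataString(signature);

        Console.WriteLine(signedUrl);
        // new System.Net.WebClient().DownloadFile(signedUrl, "photo.jpg");  // read access until expiry
    }
}
```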
<Return to section navigation list>
SQL Azure Database (SADB, formerly SDS and SSDS)
••• Brent Ozar describes SQL Server 2008 R2: Virtualization for Databases in this 8/10/2009 post:
SQL Server 2008 R2 introduces the concept of the SQL Server Utility: a pool of resources (instances) that host applications (databases).
The Utility is managed by a Utility Control Point: a single SQL Server instance that gathers configuration and performance data.
All of this is visualized through SQL Server Management Studio’s Utility Explorer:
This dashboard shows some basic metrics about CPU use and storage use at both the instance level and the application level.
Down the road – years down the road – this might provide DBAs with the same level of fast reaction times that virtualization admins currently enjoy. Got a database running out of space? Move it to a server with more space. Got an application burning up too much CPU power? Slide it over to one that’s got the latest and greatest processors.
What we need to know is how SQL Azure Database uses SQL Server 2008 R2 virtualization, if it does. As soon as I get my new 64-bit server system running, I’ll give the SQL Server Utility a try.
Lori MacVittie’s The Business Intelligence--Cloud Paradox post of 8/13/2009 carries this deck: “Simultaneously one of the best use-cases for cloud as well as the worst. What’s IT to do?” The paradox:
Thought #1: That’s a perfect use case for cloud! Business intelligence processing is (a) compute intensive and (b) interval based (periodic processing; BI processing and reports are generally scheduled occurrences). It is a rare organization that builds OLAP cubes and runs ETL processes and generates reports all the time. But when they do, look out, they can bring a server (or three) to their knees.
Thought #2: That’s a horrible use case for cloud! The data, though it may be anonymized to remove personally identifiable information/account data, is still sensitive. It’s not any one field or combination of fields that are sensitive, it’s the whole data set. That combination of data is used to make business decisions, analyze business performance, and provides a great view of an organization’s current operating and financial status. That kind of data is not something you want shared outside the organization. [Emphasis Lori’s.]
Gavin Clarke reports Open and proprietary ISVs discover love in the cloud: Fluffy BI in this 8/13/2009 post to The Register:
Software companies from opposite sides of the open-source track have found common ground on the cloud: business intelligence.
JasperSoft, Talend, Vertica, and RightScale have announced a stack they said will automate the job of integrating and setting up online BI to eliminate manual integration work.
The companies said customers could build and use analytic sandboxes and disposable data marts to gain intelligence on a wide range of data, including brief or seasonal projects. It's a pay-as-you-go service that comes with a 30-day trial here.
We’re still waiting for the SQL Azure team to describe its forthcoming BI and Reporting Services features.
Marc Friedman’s Understanding SQL Server Fast_Forward Server Cursors post to the Tips, Tricks, and Advice from the SQL Server Query Processing Team of 8/12/2009 explains the differences between fast_forward and traditional firehose (read_only forward_only) cursors:
SQL Server's server cursor model is a critical tool to many application writers. Fast_forward cursors are very popular as an alternative to read_only forward_only cursors, but their inner workings are not well-publicized. So I thought I'd give it a go.
A server cursor is a cursor managed by the SQL engine. It consists of a query execution and some runtime state, including a current position. SQL clients can use a server cursor to fetch results of a query one-at-a-time, or in batches of N-at-a-time, rather than as the usual all-at-a-time, firehose, default query result set. Many applications and client libraries have iteration models, but server cursors are the only way for the SQL Server engine to do incremental query processing. …
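For readers who have only ever used the default firehose result set, here is a minimal C# sketch (connection string, table, and columns are placeholders) that declares a FAST_FORWARD cursor in a T-SQL batch and fetches rows one at a time; each FETCH comes back to the client as its own result set.

```csharp
using System;
using System.Data.SqlClient;

class FastForwardCursorDemo
{
    static void Main()
    {
        // Placeholder connection string and table; the T-SQL batch is what matters here.
        using (var conn = new SqlConnection("Server=.;Database=Sample;Integrated Security=true"))
        {
            conn.Open();
            const string batch = @"
                DECLARE c CURSOR FAST_FORWARD FOR
                    SELECT Id, Name FROM dbo.Customers ORDER BY Id;
                OPEN c;
                FETCH NEXT FROM c;              -- each FETCH returns one row to the client
                WHILE @@FETCH_STATUS = 0
                    FETCH NEXT FROM c;
                CLOSE c;
                DEALLOCATE c;";

            using (var cmd = new SqlCommand(batch, conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Each FETCH produces its own result set; NextResult() walks through them.
                do
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                } while (reader.NextResult());
            }
        }
    }
}
```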
<Return to section navigation list>
.NET Services: Access Control, Service Bus and Workflow
Eugenio Pace’s Claims based Authentication & Authorization Guide – The design of the book post of 8/12/2009 begins:
As I mentioned in my previous post, we are going to use a “case study” approach in this book in which we will be presenting a series of concrete scenarios, each of which will introduce some very specific requirements. Then we will be showing and discussing possible solutions in that context.
The intent is that each chapter would be more or less self contained, but with references to other sections of the book as needed. The content model for each chapter is roughly this:
So in the solution space, we will go all the way from design to a complete running example.
There’s some implicit roadmap hinted in the “tube map” for all scenarios. Our intention is to create “learning paths”, so you can choose what to read and in which sequence based on your specific needs. Like taking one train and then a connection somewhere else. Kind of Cortazar’s “Hopscotch”, but without the magic realism. :-)
Matias Woloski adds his commentary to Eugenio Pace’s patterns & practices post of 8/11/2009 in this Claims based Authentication & Authorization: The Guide post dated 8/14/2009:
This is not a new topic as Eugenio suggests in his blog, but it’s getting more and more attention because:
- Technology is more mature, hence it’s easier to implement claim-based identity
- Enterprises are failing to control the amount of different identity repositories, leading to higher provisioning/deprovisioning costs, security problems, etc.
- End users want simpler user experiences and fewer passwords
- The cloud makes all these even more challenging
Aaron Skonnard’s My latest whitepaper on WCF 4.0 post of 8/12/2009 describes his latest whitepaper on MSDN:
It covers the various new features found in WCF 4.0. The following abstract will give you a better idea of the main areas I cover in this 60+ page paper: A Developer’s Introduction to Windows Communication Foundation (WCF) .NET 4 Beta 1.
The .NET Framework 4 comes with some compelling new features and welcomed improvements in the area of Windows Communication Foundation (WCF). These WCF enhancements focus primarily on simplifying the developer experience, enabling more communication scenarios, and providing rich integration with Windows Workflow Foundation (WF) by making “workflow services” a first-class citizen moving forward. [Emphasis added.]
Dana Gardner wants your take on ESBs, according to his Got middleware? Got ESBs? Take this survey, please post to ZDNet’s BriefingsDirect blog of 8/13/2009:
… I’m hoping that you users and specifiers of enterprise software middleware, SOA infrastructure, integration middleware, and enterprise service buses (ESBs) will take 5 minutes and fill out my BriefingsDirect survey. We’ll share the results via this blog in a few weeks.
We’re seeking to uncover the latest trends in actual usage and perceptions around these technologies — both open source and commercial.
How middleware products — like ESBs — are used is not supposed to change rapidly. Enterprises typically choose and deploy integration software infrastructure slowly and deliberately, and they don’t often change course without good reason. …
<Return to section navigation list>
Live Windows Azure Apps, Tools and Test Harnesses
•• Robert O’Harrow Jr’s HHS Takes On Health Records of 8/15/2009 for the Washington Post claims “Private Sector Previously Certified Firms Set to Get Stimulus”:
The Department of Health and Human Services is almost certain to take on responsibility for creating the criteria used to decide what health records technologies qualify for billions of dollars in reimbursements to medical offices under a new stimulus program, officials said Friday.
The decision represents a significant restriction of the role played by a private certification group, begun several years ago by the technology industry, which until recently had served as the government's gatekeeper for endorsing systems designed to improve the sharing of medical records.
The Certification Commission for Healthcare Information Technology, or CCHIT, came under sharp criticism in May after a Washington Post story showed that it has close ties to a trade group whose members stand to receive billions as a result of the stimulus legislation. …
•• John Moore asks Has Government Set EHR Goals Too High? in this 8/14/2009 post to InformationWeek’s HealthCare blogs: “Despite the pending $36.3 billion that the U.S. government plans to spend over the next several years to drive physician adoption of electronic health record software, the market is at a standstill.”
Why?
It's really quite simple and logical. That $36.3 billion targeted for EHR adoption was part of the massive stimulus package signed by President Obama in early 2009. Within that package was the $36.3 billion for the "meaningful use of certified EHRs." Funny thing, though, no one was quite sure what "meaningful use" meant or what a "certified EHR" was. Defining those terms was left to the Department of Health and Human Services' Office of the National Coordinator for Healthcare IT (ONC).
Physicians may receive as much as $48,000 in reimbursement under Medicare (hospitals may see up to $11 million) for meaningful use of certified EHRs. However, without a clear understanding of what was expected under meaningful use, let alone what a certified EHR really is, physicians and hospital CIOs have wisely waited for a clear signal from the administration and the ONC.
John continues with a detailed analysis of the effects of subsequent updates to the definition of “meaningful use.”
•• CMIO reports AHRQ set to disburse $300M for comparative effectiveness projects on 8/14/2009:
The Agency for Healthcare Research and Quality (AHRQ) is asking for grant and contract proposals for comparative effectiveness research projects that will be funded through the American Recovery and Reinvestment Act (ARRA) of 2009.
Of the $300 million appropriated by ARRA to comparative effectiveness research, $148 million in grants will be provided for evidence generation. This includes $100 million for the Clinical and Health Outcomes Initiative in Comparative Effectiveness and $48 million for the establishment of national patient registries that can be used for researching longitudinal effects of different interventions and collecting data for under-represented populations. …
CMIO claims to be a “new magazine focused on educating and empowering physician leaders and healthcare executives to advance, integrate and leverage clinical information systems to enhance best practices, knowledge resources and operational benefits across healthcare enterprises and networks.”
•• Microsoft’s UK ISV Developer Evangelism Team posted People are doing some interesting things with Windows Azure in the UK on 8/12/2009 to highlight “some companies that were doing interesting things with Azure and find out a little more about why they chose Azure, and what their experiences had been.”
The post offers videos of demonstrations at a Microsoft UK Azure Day by:
- Active Web Solutions
- EasyJet.com
- DotNetSolutions (see below)
- PensionDCisions
- TBS Enterprise Mobility
and a list of additional reviews in the IT trade press:
- http://www.v3.co.uk/computing/news/2246103/easyjet-flies-clouds-azure
- http://www.v3.co.uk/computing/analysis/2246022/q-mark-taylor-microsoft-uk
- http://www.computerweekly.com/Articles/2009/07/14/236876/microsoft-azure-goes-online-priced-to-compete-with-amazon.htm
- http://www.cio.co.uk/opinion/veitch/2009/07/14/microsoft-lays-out-ts-and-cs-for-windows-azure-cloud-vision/?intcmp=ARP4
- http://www.itpro.co.uk/612736/need-to-know-windows-azure
• Microsoft Case Studies’ under-the-radar description of Dot Net Solutions’ ScrumWall service hosted on Windows Azure is available for review in this Systems Integrator Launches Innovative Software with Minimal Capital Investment article of 7/7/2009:
To improve its own development process, Dot Net Solutions created a virtual project-collaboration application. When the software, called ScrumWall, drew great interest from customers, the company used Windows® Azure™ to offer it as a hosted service. The solution made it possible for Dot Net Solutions to bring its new product to market quickly, with minimal investment costs, while offering superior performance to users. …
The case study quotes Dan Scarfe, Chief Executive Officer, Dot Net Solutions:
Windows Azure enables us to move into the realm of the ISV. We’re already experts at delivering custom software for customers. We can now take these skills and build a software product, delivering it to a potentially massive user base—but without the risk of hosting it on our own infrastructure.
• Cumulux, a Windows Azure Cloud ISV Partner, and the Platform Evangelism Group have developed Announcing Riviera - Windows Azure Reference Application, source code for v1 of which is available for download from the MSDN Code Gallery as of 8/7/2009:
Project Riviera is a comprehensive code sample to demonstrate how to develop a multi-tenant, highly scalable line-of-business application on the Windows Azure Platform. This sample is developed by the Global Partner Architecture Team in the Developer & Platform Evangelism group at Microsoft in collaboration with Cumulux - our Cloud ISV partner. Riviera uses a Customer Loyalty Management scenario for illustration purposes but many building blocks are applicable to a range of line-of-business applications.
Click here to view a screencast of Riviera, architecture details and other related information.
Key features of Riviera
- Multi-tenant data store based on Azure Table Storage as well as SQL Azure.
- Per tenant customization of data model
- Per tenant customization of business logic (using Windows Workflow in Windows Azure)
- Per tenant customization of user interface using Silverlight 3.0. Customization can be multi-level – custom theme, custom XAML, and custom XAP.
- Automated tenant provisioning
- Windows Azure web role->Azure Queue->worker role pattern for high volume transaction processing that can scale on demand (a sketch of this pattern follows the list)
- Claims aware web service and web application using Geneva Framework
- Active and Passive Federation using Geneva Framework, Geneva Server and .NET Access Control Service (ACS)
- Windows Live ID authentication for consumer facing web site
- Use of Patterns & Practices Enterprise Library Caching and Logging application blocks in Windows Azure
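The web role->Azure Queue->worker role item above is the workhorse pattern for on-demand scale, so a sketch is worth including. The ICloudQueue abstraction below is hypothetical, standing in for Azure Queue storage rather than quoting the Riviera source or the StorageClient sample; only the shape of the pattern is the point: the web role enqueues cheap messages and returns immediately, while any number of worker instances poll, process, and delete.

```csharp
using System;
using System.Threading;

// Hypothetical queue abstraction; in Riviera this role is played by Azure Queue storage.
public interface ICloudQueue
{
    void AddMessage(string content);
    QueueMessage GetMessage(TimeSpan visibilityTimeout);
    void DeleteMessage(QueueMessage message);
}

public class QueueMessage
{
    public string Id { get; set; }
    public string Content { get; set; }
}

// Web role side: accept the request, enqueue the work, return immediately.
public class OrderWebRole
{
    private readonly ICloudQueue _queue;
    public OrderWebRole(ICloudQueue queue) { _queue = queue; }

    public void SubmitOrder(string orderXml)
    {
        _queue.AddMessage(orderXml);          // cheap and fast; the heavy lifting happens elsewhere
    }
}

// Worker role side: poll, process, delete. More worker instances = more throughput.
public class OrderWorkerRole
{
    private readonly ICloudQueue _queue;
    public OrderWorkerRole(ICloudQueue queue) { _queue = queue; }

    public void Run()
    {
        while (true)
        {
            QueueMessage msg = _queue.GetMessage(TimeSpan.FromMinutes(2));
            if (msg == null) { Thread.Sleep(1000); continue; }   // back off when the queue is empty

            ProcessOrder(msg.Content);        // if this instance dies, the message reappears
            _queue.DeleteMessage(msg);        // after the visibility timeout and is retried
        }
    }

    private void ProcessOrder(string orderXml) { /* business logic */ }
}
```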
Notes
- Project Riviera is not a product or solution from Microsoft. It is comprehensive sample code developed for evangelism purposes.
- Riviera includes an implementation of a Security Token Service (STS) using the Geneva Framework in Windows Azure. We would like to emphasize that this scenario is currently not supported (at the time of the July 2009 CTP). This is primarily because of the lack of certificate store support in Windows Azure at this time. So although the implementation works in Windows Azure, we advise against doing so in a production environment until the scenario can be supported on Windows Azure and the product group provides guidance to do so. [Emphasis added.]
• Laith Noel describes Building Twtmug, a Windows Azure service that “look[s] up your public photos based on your user name, and then post[s] them to Twitter with a short link.”
[S]ince then Smugmug themselves created a Twitter integration within their website, which mean[s] Twtmug is no [longer] useful for most people :(.
[S]o since I spent a good time on it trying to iron things out to work, I decided to publish the code, so that other people may use it for their own ideas.
[T]he code is not perfect, but it did work.
Laith’s post includes screen captures, links to the C# libraries used and downloadable source code.
Jeffrey Schwartz offers more details on the PHP vs. ASP.NET performance flap in his PHP Versus ASP.NET Benchmarks Drive Debate post of 8/13/2009 to the Visual Studio Magazine site. (Full disclosure: I am a contributing editor of Visual Studio Magazine and have written articles for the magazine and its predecessors for about 15 years.)
Joe Stagner’s PHP versus ASP.NET – Windows versus Linux – Who’s the fastest ? post of 8/10/2009 started a dustup with reports that Web apps written with ASP.NET outperformed those written with PHP.
Joe’s Comments on my recent benchmarks. post of the same date says:
Overall I’ve been pretty impressed with the reactions to my first round of PHP/Linux/Windows/ASP.NET performance tests. I’d like to comment on the comments. First, while I appreciate the enthusiasm of my .NET friends, the point of my exercise was to give me (and other folks at Microsoft) a starting place to understand some things about the performance of PHP on Windows and of ASP.NET. If your faith in ASP.NET is incrementally sustained by virtue of this data then I’m happy. If I didn’t think that .NET was as good as or better than any other technology for building web applications...
His PHP Linux Windows ASP.NET Performance – Redux ! post of 8/13/2009 adds op code caching:
Andi [Gutmans], Brandon [Savage], and other[s] pointed out that in business application deployment, most PHP shops use op code caching, so I installed it on Linux and Windows and re-ran most of the tests.
Some items showed a small improvement in performance, some stuff up to 25% faster, but overall it was far less than I expected.
Some things ran slower with APC [Alternative PHP Cache] but I attribute that to simple machine variance. (Note, the numbers in the table are NOT the first page run. I loaded the page, refreshed twice to make sure to hit the cache and then took the numeric results.)
Joe didn’t include comparative test data for PHPAzure.
<Return to section navigation list>
Windows Azure Infrastructure
•• Alan Leinward wonders Do Enterprises Need a Toll Road to the Cloud? in this 8/14/2009 post to GigaOm and suggests:
This new provider — let’s call it CloudNAP (Cloud Network Access Point) — would solely be in the business of providing a toll road between the enterprise and the public cloud providers. The business of selling connectivity to the Internet, or transit, is a common ISP offering. The CloudNAP transit service would be different, however, in that it would be focused on delivering connectivity solely between enterprises and cloud services providers and not between enterprises or between clouds. In order to make network connectivity to the toll road cost-effective for an enterprise, CloudNAP would offer POPs (point-of-presence) in multiple geographies. Each CloudNAP POP would have dedicated leased lines to the networks of the major cloud services providers such as Amazon Web Services, Microsoft Azure, Google AppEngine, the Rackspace Cloud, etc.
• Kim A. Terry asks for More SaaS please, but easy on the clouds in this 8/14/2009 essay that expands on the following points:
- Using a generic cloud platform implies much about the overall service levels and security of the SaaS application itself.
- There is no such thing as a general analysis of SaaS security, reliability or performance since most vendors do not run on generic cloud platforms.
- As a prospective customer, each SaaS application you consider requires its own due diligence.
Kim A. Terry is president of Terrosa Technologies.
• Lori MacVittie’s Putting the Cloud Before the Horse post of 8/14/2009 asserts “Without processes the cloud is not a cloud” and:
The on-demand piece of your little private cloud is almost entirely managed by human beings, which means you aren’t getting nearly the efficiencies you could be getting if you’d taken the next step: automation.
IT ISN’T REALLY A CLOUD UNLESS IT’S AUTOMATED [Emphasis Lori’s]
Another vote of confidence for the Windows Azure Platform’s approach.
• Sam Gross claims Cloud Computing Isn’t About Cost of Hardware or Software in this 8/13/2009 Tweet: “It’s about instant provisioning & service models that refocus IT on the app layer.”
That’s why the Windows Azure Platform emphasizes simple provisioning, automated elasticity, and leveraging developers’ .NET, C# and VB proficiency.
Sam also Tweets Infrastructure Exists on Behalf of Apps: “Clients of cloud computing should be businesses more interested in the destination than the path” on the same date.
Sam Gross is Vice President of Global IT Outsourcing Solutions at Unisys Corporation and is well worth following for his on-target Twitter aphorisms about cloud computing.
James Watter’s Cloud Collision: Epic Public vs. Private Debates Begin of 8/10/2009 contends:
Public vs. private is the hottest topic going in the cloud blogosphere. The reason is simple: customization is the biggest market in IT—and keeping things somewhat custom is in a lot of people’s interest. …
The best argument for private cloud development is a quick glance at the status quo. A market willing to spend 400B++ a year customizing, won’t suddenly conclude a bare bones public cloud is a cheap and cheerful alternative. So yes, to all of the private cloud proponents, you are right, they will have a huge role in the future of IT spending. The overall net cost of applications today is simply too massive to make the potential risks of simultaneously migrating to a new architecture and an outside provider worth it.
But, although the change may occur over geologic time with the status quo being favored–new application development and consolidation are the wind/water/tectonic metronome of geologic time in IT. So to accurately articulate the coming impact and segmentation of the cloud we should carefully study both new apps and consolidation trends. …
I find the ability to build an application with world-scale built in pretty exciting—but [Tom Siebel] is right. Replacing what we have isn’t the exciting part, using the virtues and open communities building up around public clouds and their architectures to build what’s next is.
Andrea DiMaio’s Former Government Official Calls Cloud Computing and Web 2.0 “Fads” post of 8/13/2009 debunks Michael Daconta’s contentions that:
- Cloud computing is a red herring
- Web 2.0 is not pixie dust
Daconta is “currently CTO at Accelerated Information Management LLC and former Metadata Program Manager for the Department of Homeland Security.”
<Return to section navigation list>
Cloud Security and Governance
• Wesley Higaki’s Is Federal Accreditation Enough For Enterprise Cloud Computing? post of 8/14/2009 discusses reports that Google is seeking FISMA accreditation of its cloud services, presumably including Google App Engine:
The argument that cloud computing systems could be more secure than traditional IT systems assumes that those customers who would use Google Apps or Amazon Web Services have more confidence that Google and Amazon would do a better job of securing their systems than those customers would. They have more confidence in Google and Amazon to “do the right things” to secure their systems and networks. Confidence, however, is more than just “doing the right things”; it is also PROVING that you are doing the right things. It was recently reported that Google is trying to get FISMA accreditation of its services to prove that their systems meet government standards, presumably at the request of the General Services Administration (GSA). Given that the GSA is responsible for the Federal government’s “SmartBUY” program, this is probably a good move by Google. GSA approval would presumably allow any Federal agency to purchase Google’s cloud services without further certification or accreditation.
What does FISMA (i.e. NIST SP 800-37 and SP 800-53) compliance mean to Google and other (non-Federal) customers? To expedite the proliferation of cloud services, the GSA is saying that their accreditation of cloud services is good enough for all Federal agencies. The GSA implies that they understand the types and levels of risk for all agencies and can assess cloud services to determine an acceptable level of risk. FISMA compliance should give the Federal customers some sense that the Google Apps meet the same acceptance criteria as any other system in the government. Extending this way of thinking beyond the Federal government, would you trust the GSA to understand your enterprise’s risks?
Paul Enfield, J.D. Meier, and Prashant Bansode of the Microsoft patterns & practices Azure Security team invite you to their survey for Azure Security guidance at http://www.zoomerang.com/Survey/?p=WEB229HQAL433P:
It's brief (11 questions on one page), and it helps influence our priorities and focus.
To set the stage, we're in early exploration. This is where we gather the stories, questions, and tasks for a guidance project. In terms of what we’re trying to accomplish:
- Pave a path of prescriptive guidance for Azure security arch/design/dev practices.
- Create a guide along the lines of our WCF Security Guide - http://www.codeplex.com/WCFSecurity (it includes checklists, guidelines, how tos, and end-to-end application scenarios)
Note - a few customers have let us know that they want us to model after the WCF Security Guide. If there are other examples you'd like us to model from, we'd like the feedback.
Alex Meisel describes “Why a Traditional Web Application Firewall Will Not Work” and explains why a distributed Web Application Firewall (dWAF) is required for multitenant public clouds in his Safety in the Cloud(s): 'Vaporizing' the Web Application Firewall to Secure Cloud Computing post of 8/13/2009.
Alex is CTO of Art of Defense, GmbH.
Jon Pescatore’s On The Internet, No One Knows If You Are Really Just a Dozen Lines of Code post of 8/13/2009 posits:
Determining if a “visitor” is a human being or just a piece of software is a tougher problem. This is different from the problem of moving beyond passwords for registered users. We’re still in the parking lot here - we can’t limit access to ticket holders yet. Many have thrown CAPTCHA screens at the problem, but everyone hates those things. Some decent solutions are starting to show up, from scripts on load balancer/application delivery controllers to smarter DDoS detection algorithms to web application firewall filters.
The problem is not that much different from the email spam problem when you get right down to it. So, I think the most effective solutions will show up in “security as a service” offerings. From carriers offering “clean bits” in their pipes to your web sites, to “good guy man in the middle” services like the Akamais and the Dasients of the world have started to offer, the accuracy of determining if an inbound HTTP connection is human or not can be much higher when there is a broad view of where things are coming from and what else they have been doing.
Bottom line - when you are looking at upgrading your approach to protecting your corporate web servers, include requirements for taming those parking lots, too.
<Return to section navigation list>
Cloud Computing Events
• Symplified is sponsoring a Secrets to Success in the Cloud live lunch event in Santa Clara, CA on 8/20/2009 from 7:30 AM to 12:30 PM covering:
- What your cloud security and collaboration plan should be.
- What are the barriers to Cloud computing and how are enterprises overcoming them?
- What are people using SaaS and the Cloud for today and how is it impacting the business?
- What are the security and compliance issues the Cloud creates and how can they be solved?
Speakers are Accenture’s Eric Ashdown, SAP’s Todd Rowe, and Symplified’s Jonti McLaren.
Register here.
When: 8/20/2009 from 7:30 AM to 12:30 PM PDT
Where: Santa Clara Marriott Hotel, 2700 Mission College Boulevard, Santa Clara, CA, USA
• Bryan Ott, Vice President Global Service Lines, Systems and Technology, Unisys Corporation, will present a Moving Applications to the Cloud – Determining What Applications Make Sense for Your Business Webinar on 9/19/2009 at 8:00 AM to 9:00 AM PDT:
- If you’re seeking insight and real world guidance on how to approach your entry into the cloud, then this webinar is for you.
- Hear … key trends and implications of cloud computing based on research from leading industry analysts
- Learn … approaches to understand which applications make sense in the clouds and why?
- Understand … what specific steps you should take today to begin the journey to the clouds for those applications
Register here.
When: 9/19/2009 at 8:00 AM to 9:00 AM PDT
Where: The Internet (GoToMeeting Webinar)
• Rich Miller’s Day 2 Roundup: CloudWorld and NGDC post to the Data Center Knowledge blog of 8/14/2009 provides links to articles and a press release:
The Register covered the keynote by Sun cloud computing CTO Lew Tucker, who predicted that web applications will become self-provisioning. “Whereas previously, it seems like only viruses and bots on the net have been able to take over computers and use them for their own purposes, now we’re actually seeing that applications themselves respond to increased demand or load and are able to provision services,” Tucker said. A provocative idea, but problematic as well, as Tucker acknowledges by making a Skynet reference.
Larry Dignan at ZDNet has a video excerpt from Wednesday’s well-reviewed panel on the challenges and opportunities in mainstream adoption of cloud computing.
UK tech pub V3 (formerly vnunet) profiles a session in which Asurion’s Robert Lefkowitz advanced the notion of “IT delis” able to swiftly process custom orders for services. “People think that people in IT are bozos,” he said. “One of the ways you can dispel that is to do simple stuff quickly.” …
• Anil John’s Cloud Computing Thoughts from Catalyst09 post of 8/14/2009 links to Chris Haddad’s notes on the Burton APS Blog and offers this observation:
But the message that I often hear from Cloud vendors is:
- We want to be an extension of your Enterprise
- We have deep expertise in certain competencies that are not core to your business, and as such you should let us integrate what we bring to the table into your Enterprise
... and variations on this theme. …
VMware announces vmworld 2009: hello freedom, to be held 8/31 to 9/3/2009 at San Francisco’s Moscone Center. Following are sessions (as of 8/13/2009) carrying the Cloud subtopic:
- TA3286 Applications in the Cloud: Getting off the ground - Breakout Session
- TA3326 Building an Internal Cloud-the Journey and the Details - Breakout Session
- TA3576 Early vSphere Deployment Stories - Breakout Session
- TA3882 The Cloud- What is it and why should I care - Breakout Session
- TA3901 Security and the Cloud - Breakout Session
- TA4100 Internal Clouds: Customer perspective and implementations - Panel Session
- TA4101 Buying the Cloud: Customer perspective and considerations on what you should send to an external cloud - Panel Session
- TA4102 Unveiling New Cloud Technologies - Breakout Session
- TA4103 Engineering the Cloud-The Future of Cloud - Panel Session
- TA4820 What Keeps Clouds Up? - Breakout Session
- TA4902 IBM’s Cloud Computing Solutions - Breakout Session
- TA4940 Navigating the Cloud: IT Management Challenges and Opportunities - Breakout Session
If the organizers get around to making the preceding links useful, I’ll update this item.
When: 8/31 to 9/3/2009
Where: Moscone Center, San Francisco, CA, USA
The 451 Group is a sponsor of the Infrastructure Computing for the Enterprise (ICE) Cloud in Context event to be held 9/3/2009 at the Hyatt Regency San Francisco. The agenda consists primarily of presentations by 451 Group executives.
When: 9/3/2009
Where: Hyatt Regency, San Francisco, CA, USA
Kevin Jackson reports in his GSA To Present On Cloud Initiative at NCOIC Plenary post of 8/13/2009:
A General Services Administration (GSA) representative is now scheduled to provide a briefing on the agency's cloud computing initiative during a "Best Practices for Cloud Initiatives using Storefronts" session on September 21, 2009 in Fairfax, VA. The session, part of the Network Centric Operations Industry Consortium (NCOIC) Plenary, is expected to foster an interactive dialog on interoperability and portability standards for Federal cloud computing deployments.
Through the recent release of an Infrastructure-as-a-Service (IaaS) Request for Quote (RFQ), the GSA has positioned itself as a significant participant in the federal government's move toward the use of cloud computing technologies. Casey Coleman, GSA CIO, has previously stated that cloud computing is the best way for government technology to move forward. To support this effort, the agency is encouraging an active dialog with industry on possible future standardization issues such as:
- Interfaces to Cloud Resources supporting portability of PaaS tools and SaaS applications;
- Interfaces to Cloud Resources supporting interoperability across Clouds;
- Sharing and/or movement of virtual computational resources across Clouds;
- Data sharing and movement across Clouds;
- Authentication and authorization across Clouds;
- Messaging to and from Clouds; and
- Metering, monitoring and management across Clouds.
When: 9/21/2009
Where: Hyatt Fair Lakes, 12777 Fair Lakes Circle, Fairfax, VA 22033, USA
Dan Kuznetsky gives his initial impressions of OpenSource World/Next Generation Data Center/CloudWorld in his OpenSource World/NGDC/CloudWorld Experiences post of 8/12/2009 to ZDNet:
- The event is significantly smaller than any I can remember in the past.
- The vendor area is open a much shorter time, it is much smaller than ever before and fewer vendors are represented.
- Most of the people I spoke with come from San Francisco. The primary exceptions are vendor representatives, media representatives, representatives of research firms, and speakers, who came to the event from all over. This used to be an international event.
- The folks from IDG World Expo were as friendly and helpful as ever.
Kuznetsky continues with more details of the event.
When: 8/12-8/13/2009
Where: Moscone Center, San Francisco, CA, USA
Jay Fry’s Inklings of what you really need for cloud management post of 8/12/2009 analyzes presentations by David Linthicum, Gordon Haff, James Urquhart (Cisco), Sam Charrington (Appistry), CA's Stephen Elliot, and Joe Weinman of AT&T at OpenSource World/Next Generation Data Center/CloudWorld and concludes:
… CA is starting to weave cloud capabilities into its entire (very broad) product line. Since CA started talking about its activity in the cloud space last November, they've now brought data center automation, application performance management, database management, and service management into the mix. And systems management is underway.
Moves like what CA is doing are starting to answer a lot of the "thorny questions" that customers are asking. Sure, there are more solutions and capabilities to deliver. But, if the industry discussion and conference chatter is any indication, people are starting to ask questions about how they cover a lot of the messier areas to make cloud computing work. And the first steps at answering those questions are appearing. …
The Windows Azure Platform’s sweet spot, of course, is cloud management.
When: 8/12-8/13/2009
Where: Moscone Center, San Francisco, CA, USA
<Return to section navigation list>
Other Cloud Computing Platforms and Services
••• James Urquhart claims Telecoms are missing their cloud opportunity in this 8/16/2009 post:
For some time now I've been advocating to my core network provider customers (and anyone else who would willingly listen) a concept that I think is both central to the future of cloud computing and one of the great opportunities this market disruption presents. The idea is simple: who will be the cloud service aggregator to enterprises, large and small? …
However, just imagine for a moment that AT&T and their peers put aside the desire to "out enterprise" Savvis or Terremark or even Amazon or Google, and instead focused on doing what they do best: being the provider of services that act as a gateway to a much wider market of services. Imagine those services are sold much like mobile phone plans, with monthly base charges for a certain level of service, and overages charged by CPU hour or whatever other metric makes the most sense. The service, in turn, provides a single face to almost the entire cloud marketplace. …
•• Gary Orenstein’s How Yahoo, Facebook, Amazon and Google Think About Big Data post of 8/15/2009 to GigaOm begins:
Collectively, Yahoo, Facebook, Amazon and Google are rewriting the handbook for big data. Startups intending to reach these proportions must also change their thinking about data, and enterprises need this model for internal deployments as a way to retain an economic edge.
The four leading web giants have designed systems from scratch, evidence that workloads have altered, business models are different, and economies have changed — all demanding a new approach. Yahoo revealed a few weeks ago how it approaches unstructured data on an Internet scale with MObStor, the technology that “grew out of Yahoo Photos” but now serves the unstructured storage needs across the company. Earlier this year, Facebook unveiled Haystack, its solution to managing its growing photo collection (which could reach 100 billion photos in 2009 if it continues with current growth rates). In 2007, Amazon outlined Dynamo, an “incrementally scalable, highly available key-value storage system.” All of these were predated by The Google File System, presented as a research paper in October 2003.
See below for the GFS: Evolution on Fast-forward interview in ACM Queue magazine.
•• Reuven Cohen reports Amazon Adds Data Portability With New Import/Export Service in this 8/14/2009 post that includes a “When to consider AWS import/export” decision chart based on data transfer quantity in GB or TB.
•• Cade Metz claims Sun hails rise of self-scaling software in this post of 8/13/2009 to The Register:
CloudWorld Lew Tucker envisions a world in which web applications can scale up their own hardware resources. Apps will not only run in the proverbial cloud, he says, they'll have the power to grab more cloudiness whenever they need it.
"As we look into the future, we're going to see that applications are going to be increasingly responsible for self-provisioning," Sun's cloud-computing chief technology officer told a sparsely attended CloudWorld conference in downtown San Francisco this morning. "As a computer scientist, I think that is an area of cloud computing that's most interesting.
Microsoft certainly appears to agree with the last sentence.
•• Ian Foster’s What's faster--a supercomputer or EC2? post of 8/5/2009 engendered a summary review by HPCwire’s Michael Feldman in a Slow Moving Clouds [Are] Fast Enough for HPC post of 8/10/2009:
Ian Foster penned an interesting blog last week comparing the utility of a supercomputer to that of public cloud for HPC applications. Foster pointed out that while the typical supercomputer might be much faster than a generic cloud environment, the turnaround time might actually be much better for the cloud. He argues that "the relevant metric is not execution time but elapsed time from submission to the completion of execution." …
• Marshall Kirk McKusick interviews Sean Quinlan about GFS: Evolution on Fast-forward, the evolution of the Google File System from the single-master to the distributed-master model, for the Association for Computing Machinery (ACM) Queue magazine on 8/7/2009. The detailed discussion covers the origin and evolution of the GFS.
• Maureen O’Gara claims CA Teams with Amazon and “They intend to co-market the integrated widgetry to joint customers” (whatever that means) in this 8/14/2009 post:
With an eye over its shoulder to see what rivals HP and IBM are doing, CA is making a cloud play in concert with Amazon EC2.
Both are hunting large enterprise accounts because - in the words of Willie Sutton - "that's where the money is."
They are telling them to manage the cloud as a critical component of their IT architectures - "a simple extension of their enterprise IT infrastructure," as Amazon puts it - which then spells the need for comprehensive management capabilities between existing internal infrastructure and the cloud.
So CA's business-driven automation, service management, application performance management and database management solutions now support the Amazon cloud. …
• Wesley Higaki’s Is Federal Accreditation Enough For Enterprise Cloud Computing? post of 8/14/2009 discusses reports that Google is seeking FISMA accreditation of its cloud services, presumably including Google App Engine. See the original entry in the Cloud Security and Governance section above.
• Maureen O’Gara reports Hadoop Co-Creator [Doug Cutting] to Leave Yahoo and “He’s bound for Hadoop commercializer Cloudera” in this 8/14/2009 post:
As predictably as night follows day, Hadoop co-creator Doug Cutting is leaving Yahoo bound for Hadoop commercializer Cloudera, which took in a $6 million second round in June to take Hadoop to the enterprise.
According to Cloudera's blog he starts September 1 and will apparently continue working on the Avro data serialization sub-project.
Hadoop's other founder Mike Cafarella is technically at Cloudera but only as a consultant since he'll start teaching computer science at the University of Michigan in December.
• Charles Babcock claims VMware Got What It Paid For: A Vision Of The Future in this 8/13/2009 InformationWeek post:
VMware's acquisition of SpringSource is not a match made in heaven. It's going to take an effort by both parties to make this marriage work. Still, it looks like one of the few responses VMware could make to counter Microsoft (NSDQ: MSFT)'s dangerous invasion of its turf.
These thoughts were prompted by an exchange with Salil Deshpande, a general partner of Bay Partners, one of the three largest investors in SpringSource. "I don’t think VMware can remain simply a virtualization vendor. Virtualization is just table-stakes, at this point," he wrote in an email response to my query on why VMware was making its $362 million investment.
Deshpande didn't mention Microsoft and he has no special knowledge of VMware's intentions. But with Microsoft offering Hyper-V as a feature of the operating system, what, in the long run, was VMware supposed to do? Stand by and watch its $1.3 billion virtualization empire get commoditized? How could it take advantage of current computing trends and ward off an incipient invasion of its customer base? …
Promise: This [probably] is my last VMware/SpringSource item.
• Jeff Barr describes Pig Latin - High Level Data Processing with Elastic MapReduce in this 8/11/2009 post to the Amazon Web Services blog:
Amazon Elastic MapReduce now includes support for the Pig Latin programming language.
A product of the Apache Software Foundation, Pig Latin is a SQL-like data transformation language. You can use Pig Latin to run complex processes on large-scale compute clusters without having to spend time learning the MapReduce paradigm. Pig Latin programs are built around efficient high-level data types such as bags, tuples, maps, and fields and operations like LOAD, FOREACH, and FILTER.
Robert Rowley’s Medical Data in the Internet “Cloud” (part 1) – Data Safety essay of 8/13/2009 begins:
The question of data security in a “brave new world” of cloud-based Electronic Health Records (EHRs), Personal Health Records, and iPhone and other smartphone apps that could transmit personal health information, has attracted the attention of many. Web-based services – so-called “cloud computing” – are not inherently secure.
Such technology is focused more on widespread reach and interconnectedness rather than on making sure that the connections and the data are foolproof. Yet much of our personal information, such as banking information, is housed electronically and accessed through the web – we have become so accustomed to it that we seldom think very much about it. Personal health information, moreover, is protected by law: HIPAA, which is focused around physician and hospital-centered recordkeeping, and now ARRA, which extends HIPAA-like protection to patient-centered Personal Health Records as well.
And concludes:
Of course, the more centralized the data becomes, the bigger the target it becomes (“why do you rob banks? – because that’s where the money is!”). Creating good “locks” to secure the data becomes a focus of “cloud”-based vendors. Data security – making sure that data exchange across the Internet is safe, and that data storage is sufficiently fragmented and encrypted to minimize the risk of hacking – is the focus of HIPAA and ARRA regulation, and is the focus of the next installment in this series.
Robert Rowley, MD, is Chief Medical Officer, Practice Fusion Inc.
Katie Hoffman reports IBM Seeks Stimulus Money Through Cloud Computing (Update2) in this 8/13/2009 Bloomberg article:
Aug. 13 (Bloomberg) -- International Business Machines Corp. aims to grab a piece of the more than $1 trillion in global stimulus spending by pitching cloud-computer projects for health care and energy.
The world’s biggest computer-services provider is talking to those customers about deals, said Erich Clementi, who leads the cloud business. Cloud computing lets clients store data on someone else’s computer servers so they don’t have to maintain their own.
The U.S. government’s stimulus plan will put more than $100 billion toward health-care networks, energy grids and other technology projects, according to researcher IDC. IBM may benefit from that spending because cloud technology can help those operations run more efficiently, said Frank Gens, an analyst at Framingham, Massachusetts-based IDC.
“Uncle Sam is coming down with funding,” Gens said. “Cloud computing’s coming at a very good time.” Total cloud spending will top $40 billion by 2012, almost triple last year, according to the researcher.
The U.S. health-care industry will receive about $21.1 billion in technology funding through the stimulus package, while energy will get $77.6 billion, according to IDC.
The U.S. government’s plan to improve health-care records will just be “the tip of the iceberg” in spending for that industry, Clementi said in an interview. IBM declined to identify potential customers. …
Rich Miller offers a First Look at Yahoo’s New Design on 8/13/2009 for “the Yahoo Computing Coop design for its new data center in Lockport, N.Y. The design features an unusual roof design to allow hot air to escape from the facility:”
Yahoo has broken ground on its new data center in Lockport, N.Y. The $150 million project features a new design called the Yahoo Computing Coop, which emphasizes free cooling and air flow management. The Yahoo Computing Coops will be metal prefabricated structures measuring 120 feet by 60 feet. The company plans to use five of these structures in its Lockport complex.
It certainly looks like a coop. The question is: Where are the chickens?
B. Guptill’s VMware to Acquire SpringSource: Obvious, Strategic, and not Necessarily Open Research Alert of 8/12/2009 for Saugatuck Research is a two-page analysis of the acquisition (site registration required):
… VMware is buying SpringSource to appeal to leading-edge developers while building its own capabilities to become an enterprise-class data center software provider. This includes on-premise virtualizations, applications, software development environments and tools, as well as Cloud-based development platforms, applications, infrastructure, virtualization and management. To paraphrase VMware’s own press release, the acquisition strengthens the company’s ability to provide developers with “a more integrated, application-centric position.”
VMware has made no secret of this strategy over the years, and has regularly indicated that it would move beyond virtualization (please see Strategic Perspective MKT-347, “VMware Analyst Event Indicates Technology, Offering and Competitive Direction,” published 17May07, and QuickTake QT-377, “VMware: Making x86 Server Virtualization Real,” published 21Aug07). In such a strategic context, the SpringSource acquisition makes perfect sense. …
The deal obviously suggests threats to such software development icons as IBM, Microsoft, Oracle and others (including a variety of emerging PaaS providers). These competitors all have established, strong presences in accounts and markets directly affected by the acquisition. And they can be expected to ramp up relevant marketing and sales activities to take advantage of a period of uncertainty while the deal is consummating. We expect to see a series of escalating marketing battles for the hearts and minds of software developers. …
Dan Kuznetsky adds an even more detailed analysis of the VMware/SpringSource marriage in his VMware's vision: Clouds everywhere post of 8/12/2009 to ZDNet’s Virtually Speaking blog:
While I was making my way out to San Francisco for OpenSource World/Next Generation Datacenter/Cloudworld, VMware issued a press release stating that they were acquiring Spring Source, a supplier of Platform as a Service offerings. This, I believe, opens the door for VMware to offer a nearly complete, end-to-end solution for those wanting a Cloud Computing offering. The only big pieces missing are the datacenters themselves and the final mile from the network to the customer. …
VMware sees a future of PaaS and Cloud Computing as a whole that is bright enough that it was willing to pay $420 million for Spring Source. This is an amazing sum when one considers Spring Source’s current projected revenues and the fact that the 451 Group projects that revenues for the entire Cloud Computing market are going to be in that same ballpark in 2009. Let’s see if this was a wise decision.
Dan is a member of the senior management team of The 451 Group.
<Return to section navigation list>