Sunday, June 14, 2009

Windows Azure and Cloud Computing Posts for 6/8/2009+

Windows Azure, Azure Data Services, SQL Data Services and related cloud computing topics now appear in this weekly series.

••• Update 6/13 and 6/14/2009: Mary Jo Foley on xRM and the Azure Services Platform, and other Additions
•• Update 6/11 and 6/12/2009: Additions
• Update 6/9/2009: Additions

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections.

To use these links, click the post title to display the single article to which you want to navigate.

“Cloud Security and Governance” section added 6/8/2009.

Azure Blob, Table and Queue Services

<Return to section navigation list> 

••• Simon Davies reports that the blob and table storage services will require “the x-ms-version to be specified by all non-anonymous commands” in his 6/12/2009 post, Updated Windows Azure Table and Blob Storage Whitepapers. Simon quotes the updated whitepapers:

By PDC 2009, we plan to require the x-ms-version to be specified by all non-anonymous commands. Until then, if no version [is] specified for a given request, we assume that the version of the command the request wants to execute is the CTP version of the Windows Azure Storage APIs from PDC 2008. If a request comes in [with] an invalid x-ms-version, it will be rejected.
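If you’re calling the REST API directly, sending the header is trivial; the hard part is the request signature. Here’s a minimal Python sketch of a signed table-service request carrying x-ms-version, assuming the SharedKeyLite signing scheme the table service accepts; the account name, key, and version string are placeholders, not values from the whitepapers:

    # A minimal sketch (not production code) of sending x-ms-version on an
    # authenticated Windows Azure Table storage request.
    import base64
    import hashlib
    import hmac
    import http.client
    from email.utils import formatdate

    ACCOUNT = "myaccount"                       # hypothetical storage account
    KEY = base64.b64decode("c2VjcmV0LWtleQ==")  # replace with your account key
    date = formatdate(usegmt=True)

    # SharedKeyLite for the table service signs the date and the canonicalized
    # resource (/account/resource) with HMAC-SHA256 over the account key.
    string_to_sign = f"{date}\n/{ACCOUNT}/Tables"
    signature = base64.b64encode(
        hmac.new(KEY, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode("ascii")

    conn = http.client.HTTPConnection(f"{ACCOUNT}.table.core.windows.net")
    conn.request("GET", "/Tables", headers={
        "x-ms-date": date,
        "x-ms-version": "2009-04-14",  # placeholder version; omit it and the PDC 2008 CTP behavior applies
        "Authorization": f"SharedKeyLite {ACCOUNT}:{signature}",
        "Accept": "application/atom+xml",
    })
    print(conn.getresponse().status)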

Alin Irimie’s Using Paging and Leverage Concurrency in Windows Azure Tables, Sync Between Devices and the Cloud with FeedSync reports three new Azure “how-do-I” video segments in this 6/12/2009 post; a REST-level sketch of the paging and concurrency mechanisms follows the list:

    • How Do I: Use Paging in Windows Azure Tables?
      To improve application usability, many applications need to support viewing data page-by-page. In this screencast, you’ll learn how Windows Azure table storage provides a built-in mechanism that allows you to efficiently page through query results.
    • How Do I: Sync Between Devices and the Cloud with FeedSync?
      Syncing the cloud and a growing world of devices is a fundamental need in today’s world. In this video, you will learn how to use FeedSync feeds to synchronize Live Framework data between a device and the cloud.
    • How Do I: Leverage Concurrency in Windows Azure Table Storage?
      Windows Azure table storage is designed to support many users at the same time. In this session, you’ll learn how Windows Azure table storage supports concurrency, and you’ll learn a few strategies to help you deal with any concurrency violations.
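As promised above, here’s a sketch of what the paging and concurrency screencasts cover at the REST level. It assumes a hypothetical table_request() helper that signs and sends requests (as in the earlier sketch); the continuation headers and the If-Match usage follow the documented Table service conventions:

    # Paging: the service returns partial results plus continuation tokens in
    # response headers; echo them back as query-string parameters to get the
    # next page.
    def fetch_all_pages(table_request, table="Customers", page_size=100):
        params = f"?$top={page_size}"
        while True:
            resp = table_request("GET", f"/{table}(){params}")
            yield resp.read()  # one Atom feed of up to page_size entities
            next_pk = resp.getheader("x-ms-continuation-NextPartitionKey")
            next_rk = resp.getheader("x-ms-continuation-NextRowKey")
            if not next_pk:
                break  # no continuation token: that was the last page
            # URL-encode the tokens in real code
            params = (f"?$top={page_size}&NextPartitionKey={next_pk}"
                      f"&NextRowKey={next_rk}")

    # Concurrency: each entity carries an ETag; an update conditioned on that
    # ETag fails with 412 if someone else changed the entity in the meantime.
    def update_entity(table_request, path, atom_entry, etag):
        resp = table_request("MERGE", path, body=atom_entry,
                             headers={"If-Match": etag,
                                      "Content-Type": "application/atom+xml"})
        if resp.status == 412:  # Precondition Failed: concurrency violation
            raise RuntimeError("Entity changed underneath us; re-read and retry")
        return resp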

SQL Data Services (SDS)

<Return to section navigation list> 

••• Eugenio Pace describes his First experiments with (new) SQL Data Services in this 6/12/2009 post. Eugenio writes:

After some initial “hello world-ish” tests, I wanted to try something more interesting so I decided to port IssueTracker into SDS.

As you know, IssueTracker was originally designed for SDS’ previous ACE model (Authority, Container, Entity), so my first task was to re-write the data access layer to use SQL Server.

One of my goals in this experiment was to test SDS “impedance match” with on-premises SQL Server. Also, I wanted to develop independently of the availability of SDS. Not that SDS is unreliable, but currently it is available only inside Microsoft’s corporate network. I didn’t want to VPN into corpnet for this when working from home.

So I chose to develop exclusively against my local SQL Express instance first and then make a switch to the real SDS. …
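Eugenio’s approach boils down to a one-setting switch. Here’s a minimal Python/pyodbc sketch of the idea, on the assumption that the new SDS accepts ordinary TDS connections so that only the connection string changes; the SDS server name and the Issues table are invented for illustration:

    import os
    import pyodbc

    LOCAL = ("Driver={SQL Server Native Client 10.0};"
             "Server=.\\SQLEXPRESS;Database=IssueTracker;Trusted_Connection=yes;")
    SDS = ("Driver={SQL Server Native Client 10.0};"
           "Server=tcp:myserver.sds.example.net;Database=IssueTracker;"
           "Uid=myuser;Pwd=" + os.environ.get("SDS_PASSWORD", "") + ";")

    # Develop against local SQL Express first; flip one environment variable
    # to point the same data access code at SDS later.
    conn = pyodbc.connect(SDS if os.environ.get("USE_SDS") else LOCAL)
    for issue_id, title in conn.execute("SELECT IssueId, Title FROM Issues"):
        print(issue_id, title)

The same trick is what makes the “impedance match” test meaningful: if the only difference really is the connection string, the two stores are interchangeable.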

• Matthew Aslett’s On the opportunities for cloud-based databases and data warehousing post on the 451 Group’s blog concentrates on the recent Greenplum announcement, but concludes:

[W]e are confident that Greenplum’s won’t be the last announcement from a data management [vendor] focused on enabling private cloud computing deployments. While much of the initial focus around cloud-based data management was naturally focused on the likes of SimpleDB, the ability to deliver flexible access to, and processing of, enterprise data is more likely to be taking place behind the firewall while users consider what data and which applications are suitable for the public cloud.

The post includes a link to a report on SDS’s metamorphosis to a fully-relational Database as a Service in the cloud, but “You must be a current client, or active trialist of The 451 Group's services to view [it.]”

• David Robinson announces on 6/9/2009 One more TechEd Video - The New Face of Microsoft SQL Data Services, as the session and Tech*Talk videos slowly dribble out of the Tech*Ed Online forum. The PAN66 video is here, not where the link in Dave’s blog points, and it carries the following description:

The SQL Data Services team has recently announced an acceleration of its product roadmap and is now providing the world's first relational database as a service. Watch members of the product team as they discuss the new features of SDS including the development model, tools support, and how the services interoperate with the other building block services of the Azure Services Platform.

Speaker(s): David Robinson, Nino Bice, Rick Negrin, Zhongwei Wu

Alex Popescu makes the case for A Schema-less Relational Database in this 6/8/2009 post:

A relational database management system (RDBMS) imposes a fixed schema, so why would I use schema-less and relational database in the same sentence? [Emphasis by Alex.]

Prospective Azure developers, who compared the schemaless SQL Server Data Services (SSDS) tables that used the Entity-Attribute-Value (EAV) model with Azure’s EAV tables, asked the same question. However, the SQL Server team came up with a different answer than Alex did:

They abandoned the approach in favor of a fully relational, schema-centric "SQL Server in the clouds" version at the demand of potential Azure developers in early 2009.

I'd say the plain old EAV model (used by Azure Tables) is the way to go if you don't want to be bound by RDBMS constraints.

The above two paragraphs are from my comment to Alex’s post. (Alex is Co-founder and CTO of InfoQ.com, Founder of DailyCloud.net.)
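To make the EAV point concrete, here’s a trivial illustration (invented entities, not code from either post) of why Azure Tables are called schema-less: two entities in the same table share nothing but their key properties:

    # Entities in one Azure table need only PartitionKey and RowKey in common;
    # every other property is a per-entity name/typed-value pair (EAV style).
    issue = {
        "PartitionKey": "project-42", "RowKey": "issue-001",
        "Title": "Login fails", "Severity": 2,
    }
    comment = {
        "PartitionKey": "project-42", "RowKey": "comment-001",
        "Author": "eugenio", "Body": "Repros on build 1203.",
    }
    # A relational table would force one fixed column set (and NULLs) or
    # separate, schema-declared tables; here the "schema" is per entity.
    for entity in [issue, comment]:
        print(entity["RowKey"], sorted(k for k in entity if not k.endswith("Key")))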

Maureen O’Gara says “Enterprise Data Clouds solve three key problems facing the data warehouse market” as an introduction to her Petabyte-Scale Data Analytics Moving to the Cloud post of 6/8/2009. Maureen writes:

Forrester analyst Jim Kobielus has predicted that data warehousing will evolve into a “virtualized, cloud-based, supremely scalable distributed platform.”

Greenplum, the massively parallel open source data warehouse company, says it’s already happening and that companies like Fox Interactive Media, Zions Bank and Future Group, the big Indian retailer, have already built early iterations of so-called Enterprise Data Clouds (EDC) using its latest widgetry.

It also figures that the Enterprise Data Cloud will displace the data warehouse appliance architectures that Oracle is so fond of, one of the reasons it’s supposedly buying Sun.

Greenplum claims that Oracle is already way behind and playing catch-up with its relatively new Exadata data warehouse appliance; Greenplum and Netezza have been offering appliances for years.

Look for Microsoft to detail its Business Intelligence (BI) offerings for SDS after its CTP this fall.

.NET Services: Access Control, Service Bus and Workflow

 <Return to section navigation list>

••• The .NET Services Team warns that Workflow Services will be taken down in its Upcoming Important Changes to Microsoft .NET Workflow Service post of 6/12/2009:

One of the comments that we’ve consistently heard about the .NET Workflow Service is that you want the Workflow Service to be built on .NET Framework 4’s workflow engine. This is currently not the case, since we are prior to the release date of .NET Framework 4.

As the direct result of user feedback, we will hold off further releases of the Workflow Service until after .NET Framework 4 ships. Since there will be important changes to the Workflow Service before it goes to full production, we are planning to take down the existing Workflow Service as part of service improvements in the month of July. This means any solutions that currently rely on the Workflow Service will have to be modified on or before July 1 in order to continue functioning smoothly. …

The sixth iteration of Microsoft’s .NET Services will still include the Microsoft .NET Access Control Service and the Microsoft .NET Service Bus. …

A rather strange description of “service improvements,” n'est-ce pas? A description of the “modifications” required by July 1 would be appreciated, too.

I assume that the temporary demise of .NET Workflow Services is due to the team’s desire to run WF on Dublin.

You can learn more about WF and WCF changes in .NET 4.0 and download sample WF apps from links in MSDN’s Upcoming Changes to .NET Framework 4: Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) page. The .NET Framework 4: Workflow Foundation - Beta 1 and WF 4 Migration Guidance documents and samples also are useful.

••• The .NET Services Team’s .NET Services June 18th 2009 QFE Pre-Announcement and Scheduled Maintenance post of 6/12/2009 announces:

.NET Services Team is going to release June 2009 QFE on 6/18/2009 (Thursday). Users will have NO access to .NET Services Portal and .NET Services during the scheduled maintenance down time from 6/18/2009 noon to 6/18/2009 5pm.

Queues and Routers data will NOT be persisted and restored after the maintenance. Users will need to back up their data if they wish to restore them after the QFE release.

The QFE provides a fix for:

It is not possible to retrieve [a] message (read and delete) from a Service Bus Queue through [a] REST client such as Silverlight

Vittorio Bertocci says “Geneva Framework in Windows Azure is not supported” in his 6/10/2009 response to Stephane Gunet’s ACS passive federation in a webrole : with source and demo (updated for Geneva beta 2) thread in the .NET Services - Technical Discussions forum. The full text of Vibro’s message:

As of today, the use of Geneva Framework in Windows Azure is not supported and it requires hacks that may expose your application to serious security risks (a bit more background here). The product teams are working on a strategic solution for this.

The question is “when will the strategic solution be available to Azure devs?”

Live Windows Azure Apps, Tools and Test Harnesses

<Return to section navigation list> 

••• Mary Jo Foley reports Microsoft takes off its xRM platform-as-a-service gloves by “building a new platform based on its CRM core engine” in this 6/12/2009 article. According to Mary Jo:

Two years ago, Microsoft CEO Steve Ballmer mentioned for the first time publicly that Microsoft was building a new platform based on its CRM core engine. Since that time, it’s been tough to get the Softies to provide any real specifics on that technology, once known as the “Titan” platform, and now known as xRM.

But with Salesforce.com stepping up its Force.com push, Microsoft seems to have decided it’s finally time to talk turkey about xRM and how Microsoft plans to position a Microsoft-hosted version of it as part of its Azure cloud platform. …

xRM is an “anything relationship management” platform, upon which hundreds of partners, ISVs and customers already have written line-of-business apps (LOBs) using the core stack that powers Microsoft Dynamics CRM, explained Bryan Nielson, Microsoft’s Director of Worldwide Product Marketing for Dynamics CRM and XRM. The kinds of LOBs built using XRM range from a seeing-eye-dog application to complex human-resource-management systems, he said.

Remember Microsoft’s fall 2008 rollout of Azure, when execs briefly outlined forthcoming “SharePoint Services” and “Dynamics CRM Services” that would be part of the services layer that sits on top of the Red Dog (Windows Azure OS)? (See the Azure architectural diagram if you’re feeling dazed and confused.) It turns out that mysterious “Dynamics CRM Services” box is actually “xRM application services,” the team told me this week, and said Microsoft’s slideware department would be updating their diagrams accordingly.

What that means to customers and partners is when Microsoft launches the first final release of Azure this fall, the ability to host relationship-management applications in Microsoft’s cloud — something to which Ballmer alluded vaguely two years ago — will be live, too. This hosted xRM offering is Microsoft’s answer to Force.com and other “platform-as-a-service” competitors.

There’s an xRM Virtual Users group at http://www.xrmvirtual.com, which is “powered by Dynamics CRM Online and Windows Azure.”

••• Steve Marx describes his latest Azure demo, The CIA Pickup, in his 6/12/2009 post, Actually, I’m a CIA Agent, as demonstrated in this Video: The CIA Pickup - a Windows Azure Sample. A mashup with Twilio handles the telephony.

•• Shankar Pal’s Exchange Hosted Archive - A True Testament of Scalability post of 6/8/2009 to the SQL Data Services Team Blog describes how Microsoft runs its Exchange Hosted Archive (EHA) “on the same relational database service infrastructure as SDS.”

I wanted to share with you some of my experiences on the scalability of SQL Data Services and how this is best exemplified by one of our online services, the Microsoft Exchange Hosted Archive (EHA). This is a very rich service for e-mail archive, e-Discovery and regulatory compliance for corporate customers and large organizations. The next generation EHA uses the same relational database service infrastructure as SDS. I will focus on the section of the service pertaining to the scale aspects of the workload, and discuss how the relational database service addresses the scale requirements of EHA. [Emphasis added.]

Notice that Shankar did not say that EHA runs on SDS.

Barb Mosher describes the Visual Studio 2010 Extensions: Windows Azure Tools for Cloud Services as they relate to VS 2010 in this brief post. The post is brief because new features are few.

Gaurav Mantri started an Azure Database Upload Utility - Available on CodePlex thread on 6/8/2009 in the Windows Azure forum. Gaurav writes:

We have released a utility on CodePlex just now which will allow you to import data from SQL Server databases (Version 2008/2005) and export it into your Azure Table Storage.
Feature Highlights:

  • Supports SQL Server 2008/2005
  • Supports both SQL and Integrated Authentication
  • As a data source you can select one of the existing tables, views or write your own select query.
  • Allows you to selectively import columns.
  • Allows you to map column data types to Azure data types.

This project was developed by Interns @ Cerebrata Software as a part of their college curriculum. More details about the project (source code, binary, user guide) can be found by visiting the project site on CodePlex at:
http://azuredatabaseupload.codeplex.com/
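The interesting part of any such utility is the type mapping, because Azure Tables support only a handful of property types (Edm.String, Edm.Int32, Edm.Int64, Edm.Double, Edm.Boolean, Edm.DateTime, Edm.Binary, Edm.Guid). Here’s a rough Python sketch of that core step (my illustration, not the interns’ code; the database, table, and column names are invented):

    import datetime
    import decimal
    import pyodbc

    def to_azure_value(v):
        """Coerce a SQL Server value to a type Azure Tables can store."""
        if isinstance(v, decimal.Decimal):
            return float(v)        # no Edm.Decimal: map to Edm.Double (lossy!)
        if isinstance(v, (str, int, float, bool, bytes, datetime.datetime)):
            return v               # maps directly onto an Edm type
        return str(v)              # everything else falls back to Edm.String

    conn = pyodbc.connect("Driver={SQL Server};Server=.\\SQLEXPRESS;"
                          "Database=Northwind;Trusted_Connection=yes;")
    cursor = conn.execute("SELECT CustomerID, CompanyName, Country FROM Customers")
    columns = [c[0] for c in cursor.description]

    entities = []
    for row in cursor:
        entity = {"PartitionKey": "Customers", "RowKey": str(row[0])}
        entity.update({col: to_azure_value(val) for col, val in zip(columns, row)})
        entities.append(entity)   # next step: POST each as an Atom entry to table storage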

Azure Infrastructure

<Return to section navigation list> 

••• Pete Boden’s Securing Microsoft’s Cloud Infrastructure: Part 2 post of 6/8/2009 discusses Microsoft’s history in online services and security. Pete continues:

… Microsoft focuses on three key areas to provide a trustworthy cloud:

• Utilizing a risk-based information security program that assesses and prioritizes security and operational threats to the business

• Maintaining and updating a detailed set of security controls that mitigate risk

• Operating a compliance framework that ensures controls are designed appropriately and are operating effectively

Microsoft’s Information Security Program defines the compliance framework and how our security team operates.  The program has been independently certified by British Standards Institute (BSI) Management Systems America as being compliant with ISO/IEC 27001:2005.

The framework that enabled Microsoft to earn the ISO 27001:2005 accreditation and SAS Type I and Type II attestations for our cloud infrastructure also sets the stage for product and service delivery teams to more efficiently obtain additional certifications and attestations as appropriate. Microsoft’s independently certified programs help to demonstrate the continued relevance of these programs to the evolution of challenges and opportunities in the online services marketplace. … 

Pete Boden is Microsoft’s General Manager, Online Services Security & Compliance, Global Foundation Services.

•• Dmitry Sotnikov claims Gartner finds a ‘Killer App’ for Cloud Computing in his 6/10/2009 post, which provides a summary of the contents of the $495 “APaaS: A Step to a ‘Killer App’ for Cloud Computing?” report.

•• David Deans claims “[c]onfusion continues regarding the true meaning of cloud-based services” in his Demand for Cloud Infrastructure as a Service post of 6/12/2009:

Some market studies continue to identify confusion regarding the true meaning of cloud-based services, and the apparent benefits derived by the early-adopters. One recent example comes from a survey of financial professionals in the UK.

However, there is already growing demand from informed executive business and IT decision makers that are eager to move forward with various forms of cloud service deployments.

In fact, Forrester Research has embarked on a new “Cloud and Virtualization Survey Data” series that offers key insights on where the market demand is developing, and they also debunk several stereotypes.

He then goes on to describe Forrester’s conclusion:

Forrester reached an interesting conclusion from their market assessment -- that’s contrary to conventional wisdom regarding the initial demand for cloud services. Enterprises are leading the adoption, not small and medium sized businesses (SMBs). Moreover, they have different technology preferences and comfort levels with virtualization.

Forrester also believes that early adopters of IaaS service offerings are driven by the instant provisioning of servers and the pay-per-use pricing model. Furthermore, the enterprise IT operations buyers, unlike developer buyers, may want to integrate their on-premise infrastructure with anything they deploy to a service provider, either temporarily or permanently.

•• Ari Rabkin of UC Berkeley’s RAD Lab compares clouds and peer-to-peer in this 6/11/2009 post to the Above the Clouds blog. Topics include:

    • Definitions
    • Limitations
    • Opportunities

•• Simon Munro’s well-turned essay of 6/11/2009, Azure’s Unplayed Private Cloud Card, predicts that Microsoft will “publish ‘Microsoft Windows 20xx Azure Edition’ as a SKU on their price lists for their well established channel to take to market. We just don’t know what the ‘xx’ is.”

Microsoft has steadfastly denied any intention to join the private cloud market because Azure depends on specific, proprietary hardware running in Microsoft data centers. Simon argues:

[I]f Microsoft sees that the market is turning against the private cloud and if their Excel spreadsheet shows that there is more money to be made out of private Azure than public Azure then all that they need to do is package their Azure fabric as a product.  Azure runs on Windows on commodity hardware and could (probably) easily be worked into a package that can be marketed to enterprise customers.  I am sure that there is some code that will need to be reworked and some wizards added, but Microsoft is reasonably good at writing software, so that shouldn’t be a problem.  What Microsoft does have is a distribution channel and a piece of PaaS software that runs on Windows.  Google would never be able to (or want to) extricate their PaaS from their commodity hardware, networks and secret sauce and  Amazon doesn’t have a distribution channel for enterprise software – leaving Microsoft as a viable cloud provider that could have both a private and public cloud offering.  So as competitors, yet again, predict the collapse of Microsoft – this time based on the private vs public cloud – I suggest that they don’t poke the sleeping giant too much.

I agree that Azure will become another edition of Windows 20xx Server. Like Simon, I’m a bit cloudy on the value of xx.

•• Tom Vanderbilt’s five-page Data Center Overload post to the New York Times’ “Magazine Preview” section on 6/8/2009 is a great read for upper management types who don’t understand the relationship between publicly accessible data centers and cloud computing.

James Urquhart posits Cloud Must Balance Innovation With Operational Excellence in this 6/9/2009 post to the Cisco Data Center Networks blog. James begins:

One of Cisco’s internal mantras of late is the need to balance innovation with operational excellence. Our corporate CTO, Padmasree Warrior, laid out this argument as a part of a recent series predicting the future of collaboration. Essentially, a critical organizational debate is changing focus from “what takes precedence—innovation OR operational excellence” to “how do we balance the two?” Coming from the Silicon Valley, with its history of “fresh out of college” entrepreneurship and collapsing bubbles, this is a much needed discussion. …

• Lydia Leong’s Wading into the waters of cloud adoption post of 6/9/2009 observes:

I’ve been pondering the dot write-ups that I need to do for Gartner’s upcoming Cloud Computing hype cycle, as well as my forthcoming Magic Quadrant on Web Hosting (which now includes a bunch of cloud-based providers), and contemplating this thought:

We are at the start of an adoption curve for cloud computing. Getting from here, to the realization of the grand vision, will be, for most organizations, a series of steps into the water, and not a grand leap. …

Two client questions have been particularly prominent in the inquiries I’ve been taking on cloud (a super-hot topic of inquiry, as you’d expect): Is this cloud stuff real? and What can I do with the cloud right now? Companies are sticking their toes into the water, but few are jumping off the high dive. What interests me, though, is that many are engaging in active vendor discussions about taking the plunge, even if their actual expectation (or intent) is to just wade out a little. Everyone is afraid of sharks; it’s viewed as a high-risk activity.

In my research work, I have been, like the other analysts who do core cloud work here at Gartner, looking at a lot of big-picture stuff. But I’ve been focusing my written research very heavily on the practicalities of immediate-term adoption — answering the huge client demand for frameworks to use in formulating and executing on near-term cloud infrastructure plans, and in long-term strategic planning for their data centers. The interest is undoubtedly there. There’s just a gap between the solutions that people want to adopt, and the solutions that actually exist in the market. The market is evolving with tremendous rapidity, though, so not being able to find the solution you want today doesn’t mean that you won’t be able to get it next year.

William Hurley asks Will 'enterprise data clouds' reinvent data warehouses? in this 6/8/2009 post to InfoWorld’s Cloud Computing blog, and gives this answer in the deck:

Greenplum looks to use commoditized hardware to manage multiple warehouses, jumping into a cloud trend.

Hurley continues:

Today database vendor Greenplum unveiled a "solution for enterprise data clouds." The company claims this represents a shift in how enterprise data is managed, and it's aiming to displace data warehouse appliances in large enterprises all together. Fox Interactive Media, T-Mobile, Zions Bank, and others are already working with Greenplum to build early iterations of enterprise data clouds (EDCs).

So what the heck are EDCs? It turns out that it's just jargon for using cloud computing to create and manage multiple data warehouses on a common pool of commoditized hardware. I wanted the real scoop, so I sat down with Scott Yara, Greenplum's cofounder and president, to talk about this new offering and the future of enterprise-class data warehousing. [Link added.]

See also Maureen O’Gara’s post in the SQL Data Services (SDS) section.

Lori MacVittie says “Automating components is easy. It’s automating processes that’s hard” in her And the Killer App for Private Cloud Computing Is post of 6/8/2009. Lori writes:

The premise that if you don’t have an infrastructure comprised solely of Infrastructure 2.0 components then you cannot realize an automated, on-demand data center is, in fact, wrong. While the capabilities of modern hardware that come with Infrastructure 2.0 such as a standards-based API able to be leveraged by automation systems certainly makes the task all the more simple, it is not the only way that components can be automated. In fact, “legacy” infrastructure has been automated for years using other mechanisms that can certainly be incorporated into the dynamic data center model.

When it’s time to upgrade or purchase new solutions, those components enabled with standards-based APIs should certainly be considered before those without, but there’s no reason that a hybrid data center replete with both legacy and dynamic infrastructure components cannot be automated in such a way as to form the basis for a “private cloud.” The thought that you must have a homogeneous infrastructure is not only unrealistic it’s also indicative of a too-narrow focus on the individual components rather than systems – and processes - that make up data center operations.

Krishnan Subramanian posits SaaS Vendors Need A Mental Shift in this 6/8/2009 post:

In April, I wrote a post about how SaaS users need a mental shift to do computing differently in this Cloud era. I have argued that they have to adjust themselves psychologically to play the Cloud game where they need to let go some control over their data in order to get access from anywhere in the world. In today's post, I am going to wear a user's hat and talk about how SaaS vendors need to change the way they do business in this era. This post is a result of my recent experience with two of my favorite SaaS applications.

John Markoff and Clay Shirky talk to David Gelernter in this Edge Roundtable video, Lord of the Cloud of 6/8/2009:

In June, 2000 Edge published David Gelernter's audacious "The Second Coming: A Manifesto", in which he wrote: "Everything is up for grabs. Everything will change. There is a magnificent sweep of intellectual landscape right in front of us". Publication of the manifesto led to one of the most vibrant and interesting Edge discussions, with contributions from many of the leading Edge thinkers in the area of computation, from Stewart Brand to Freeman Dyson to W. Daniel Hillis.

From the abstract by David Gelernter:

The central idea we were working on was this idea of de-localized information — information for which I didn't care what computer it was stored on. It didn't depend on any particular computer. I didn't know the identities of other computers in the ensemble that I was working on. I just knew myself and the cybersphere, or sometimes we called it the tuplesphere, or just a bunch of information floating around. We used the analogy — we talked about helium balloons. We used a million ways to try and explain this idea.

Dan Chenok issues a Call for Privacy Act to Catch Up with IT in this 6/5/2009 podcast of an interview by Eric Chabrow:

The law rarely keeps pace with advancements in information technology, and the 35-year-old federal Privacy Act has failed to provide the proper framework needed to protect the privacy of citizens.

Dan Chenok chaired the federal Information Security and Privacy Advisory Board that issued a report entitled Toward a 21st Century Framework for Federal Government Privacy Policy that calls for the creation of a federal chief privacy officer as well as chief privacy officers in major federal agencies and a federal Chief Privacy Officers' Council. The panel also recommended steps Congress and the Obama administration should take to change federal laws and regulations to allow the government to more efficiently use specific technologies, such as cookies, while maintaining citizens' privacy.

Chenok, the one-time highest ranking non-political IT official in the Office of Management and Budget and now a senior vice president at IT services provider Pragmatics, spoke with Information Security Media Group's Eric Chabrow and explains how changing the way privacy is governed will enhance protection for American citizens.

Bill Stempf’s Economics of Cloud computing presentation for the ACM post of 6/5/2009 describes his presentation:

I presented a paper last month for the ACM and IEEE that will be published in the Cloud Computing Journal next month.  Thought I would post a few links here for those who are interested in cloud - I did cover Azure.  I'll do a blog post for Azure and VB when I manage to upload SHARP to the cloud, like I plan to. [Emphasis added.]

The post contains links to the slides, presentation video, and post on Ulitzer.com.

Cloud Security and Governance

<Return to section navigation list>

••• Pete Boden’s Securing Microsoft’s Cloud Infrastructure: Part 2 post of 6/8/2009, which discusses Microsoft’s history in online services and security, appears in the Azure Infrastructure section.

••• Dave Greenfield says Cloud computing security to grow in 2009 in his 6/14/2009 post to ZDNet’s Team Think blog:

As I mentioned the other day, I’ve just completed a report with Osterman Research on the messaging security market.  What we found will be good news for cloud computing providers.

While enterprise users continue to spend a large percentage of their workday involved with messaging activities, the Internet remains a dangerous place for users. Websense, for example, reported that 57 percent of attacks are delivered via the Web. Commtouch found that SPAM accounted for 72 percent of all email traversing the Internet in the first quarter of 2009. …

As such, while server-based solutions will continue to dominate the messaging security market, cloud-based solutions will constitute a growing percentage of purchases.  The number of respondents who deployed hosted security services grew by nine percentage points since last year.  Over the next 12 months hosted anti-spam services, such as those offered by Kaspersky, Trend Micro and more recently Microsoft, are also expected to show their greatest growth.

••• Chris Hoff’s Cloud Computing Security: (Orchestral) Maneuvers In the Dark? post of 6/14/2009 analyzes Kevin Jackson’s Cloud Computing: The Dawn of Maneuver Warfare in IT Security post. @Beaker writes:

Kevin’s essay is an interesting — if not hope-filled — glimpse into what IT Security could be as enabled by Cloud Computing and virtualization, were one to be able to suspend disbelief due to the realities of hefty dependencies on archaic protocols and broken trust models let alone huge gaps in technology and operational culture. Readers of my blog will certainly recognize this from “The Four Horsemen of the Virtualization Security Apocalypse” and “The Frogs Who Desired a King: A Virtualization and Cloud Computing Security Fable.”

and continues with a detailed essay on cloud computing security.

Chris’s Hey, Uh, Someone Just Powered Off Our Firewall Virtual Appliance… post of 6/11/2009 discusses this scenario:

Since virtual appliances (VAs) are just virtual machines (VMs) what happens when a SysAdmin spins down or moves one that happens to be your shiny new firewall protecting your production VMs behind it, accidentally or maliciously?  Brings new meaning to the phrase “failing closed.”

and eats a big slice of humble pie in his Dear Mr. Schneier, I Was A Jackass & I’m Sorry… post of 6/10/2009.

•• Kevin Jackson expands on his earlier post on the subject (see below) in Expanding Maneuver Warfare in IT of 6/12/2009, which begins:

Earlier this week I published "Cloud Computing: The Dawn of Maneuver Warfare in IT Security" via Ulitzer. In publishing the article my intent was to explore the more dynamic approach to information security offered by cloud computing. Although the conversation continues in earnest, today I would like to highlight Ben's thoughts from Iron Fog:

“What about managing virus outbreaks, patch deployment and vulnerability detection?” …

•• Dana Gardner’s latest BriefingDirect of 6/11/2009, Analysts define growing requirements for how governance supports corporate cloud computing, features panelists David A. Kelly, president of Upside Research; Joe McKendrick, independent analyst and ZDNet blogger; and Ron Schmelzer, senior analyst at ZapThink.

The post contains extended excerpts from panelists views and links to a full transcript of the discussion. You can also Listen to the podcast, Download the podcast, or find it on iTunes/iPod and Podcast.com.

•• Alexander Howard’s Gartner and CA on addressing compliance requirements in cloud computing post of 6/11/2009 points to:

Linda Tucci’s excellent new SearchCIO.com article … : “Addressing compliance requirements in cloud computing contracts.”

In the piece, Tucci reports on interviews with Debra Logan, an enterprise content management analyst at Stamford, Conn.-based Gartner Inc., and Tom McHale, vice president of product management for CA’s GRC Manager suite, to gain answers to the following questions:

  • Who has access to sensitive data in the cloud?
  • Data backup: How often, how long, how well?
  • How will you manage e-discovery requests and satisfy different retention laws?

“Even before price negotiations begin, CIOs must understand that data backup and storage in the cloud does not remove a company’s responsibility for the legal, regulatory and audit obligations attached to that information,” Tucci writes. “CIOs should be ready with a list of compliance questions for cloud vendors. But don’t expect their answers to suffice.”

Alex continues with analyses of the Gartner recommendations for minimizing cloud security risks by other analysts.

Alan Wilensky’s Even the mighty shall sometimes cloudfail post of 6/11/2009 analyzes Amazon EC2’s partial outage due to a lightning strike and recommends:

… But when the cloud fails, your alternatives have to be in place. Such as: POS systems might have a set of distributed machines to capture inbound records and route card transactions. Rapid Replenishment systems might capture transaction logs for instant replications once your cloud host comes back. You might have a set of managed APIs that broker to another cloud and then reconcile the resynch.

Many paths. However, there are some businesses that can tolerate the outages that are sure to occur as more move to remote services. One thing is for sure: The single point of failure is not just the cloud infrastructure and platform providers. The land rush to get the mid market onto PAAS solutions has been somewhat willfully blind regarding the following fact – most small /med biz has only one high speed connection, and most have not thought through the issues of hot comms failover at multiple sites. …

Reuven Cohen provides additional background on the Amazon outage in his Amazon EC2 gets Zapped Overnight post of 6/11/2009.

I thought lightning arresters and surge suppressors were designed to handle most lightning strikes.

Stoledano provides 3 Reasons Why Encryption is Overrated in this 6/10/2009 post to the Cleversafe blog:

    • Future processing power
    • Key management
    • Disclosure laws

Instead, he recommends dispersal rather than encryption:

With full disclosure – Cleversafe’s storage solution is based on Dispersal – consider its security benefits. Dispersed Storage technology divides data into slices, which are stored in different geographies. Each slice contains too little information to be useful, but any threshold [number of slices] can be used to recreate the original data. Translation – a malicious party cannot recreate data from a slice, or two, or three, no matter what the advances in processing power. And Dispersal does not require the time and energy of re-encryption to sustain data protection.

Maybe encryption alone is “good enough” in some cases now - but Dispersal is “good always” and represents the future.

Mario Santana’s Cloud Computing and Security Issues article for Computer Technology Review is an essay that claims:

The cloud brings with it a layer of additional security considerations, in terms of both technology and process.

This layer of additional security isn’t necessarily scary or complicated.  But right now, trust in the security of cloud computing is the number one impediment to its growth.  This article takes a look at the cloud from various points of view.  I will compare real-world examples to look at security implications of the Cloud, and show how they integrate with traditional security processes.

The article is interesting primarily because Mario Santana is director of Secure Information Services at Terremark Worldwide, Inc.

• Kevin Jackson asserts “IT security can now use maneuver concepts for enhance[d] defense” in his Cloud Computing: The Dawn of Maneuver Warfare in IT Security post of 6/9/2009:

Until now, IT security has been akin to early 20th century warfare. After surveying and carefully cataloging all possible threats, the line of business (LOB) manager and IT professional would debate and eventually settle on appropriate and proportional risk mitigation strategies. The resulting IT security infrastructures and procedures typically reflected a “defense in depth” strategy, eerily reminiscent of the French WWII Maginot line. Although new threats led to updated capabilities, the strategy of extending and enhancing the protective barrier remained. Often describe[d] as an “arms race”, the IT security landscape has settled into ever escalating levels of sophisticated attack versus defense techniques and technologies. Current debate around cloud computing security has seemed to continue without the realization that there is a fundamental change now occurring. Although technologically, cloud computing represents an evolution, strategically it represents the introduction of maneuver warfare into the IT security dictionary.

• Andrew Lavers explains Enterprise Policy for Zero-click Sign-in Using Information Cards with Geneva Beta 2 in this 6/9/2009 post. Topics include:

    • Reducing your login steps one click at a time
    • How Jerry the domain administrator can pick out cards for his users automatically
    • What constitutes a Card Usage Policy
    • Application patterns and hostname wildcards in a Card Usage Policy

It remains to be seen if zero-click login will work for Azure projects using .NET Access Control Services. According to Microsoft’s Yi-Lun Lao in a May 19 response to my Details of CardSpace Credentials for WSHttp Bindings Simple (WSHttpRelayEchoService) thread in the .NET Services - Technical Discussions forum:

Also note currently the Geneva CardSpace is not compatible with Windows CardSpace. [Emphasis added.]

Krishnan Subramanian’s SaaS Security - Tell Me More post of 6/9/2009 recommends concentrating on “people access” to your data:

… It is the responsibility of the SaaS vendors to educate users about their people centric security practices. It is the responsibility of the SaaS users to get to know these details from the vendors. As I have emphasized several times in this space, SaaS requires a mental shift on the part of the users. To make these adjustment comfortable for them, SaaS vendors should be more forthcoming about their security practices regarding the handling of data. In fact, some of the SaaS vendors are already doing this. For example, Google explains their practices clearly in this document. …

Cloud Computing Events

<Return to section navigation list>

••• Enterprise 2.0 Conference will hold an Evening in the Cloud event on 6/22/2009 at the Westin Boston Waterfront:

[C]ome face-to-face with top purveyors of cloud-based computing for a lively debate hosted by David Berlind, and a vibrant discussion you won't soon forget.

The US$195 registration fee “[i]ncludes access to all keynotes, the Expo Pavilion and sponsored sessions. Please note that space at the Evening in the Cloud event is limited and seats will only be reserved for the first 300 people to register.”

•• Chenxi Wang and Eric Olden will present a Forrester Research Webcast, The Enterprise Edge & The Cloud: Securely Integrating Enterprise Identities to SaaS and the Cloud, on 6/25/2009 at 1:00 PM EDT. The post says attendees will learn:

    • Why the enterprise must focus on extending the enterprise network to the Cloud
    • The role of Cloud Identity Gateways, their relation to Web Security Proxies and how to use these solutions together
    • How to extend systems like Active Directory and LDAP to SaaS apps for access control, SSO, user account management and unified auditing
    • How Windows network sessions can be seamlessly extended through a Cloud Identity Gateway into SAML federated SaaS apps
    • How to de-provision user access inside the firewall and have that propagate across the Cloud to SaaS apps

The Webcast requires registration at the above link.

When: 6/25/2009 10:00 AM PDT 
Where: The Internet

•• Alan Williamson recommends Register[ing] Today for SOA World and Attend SOA & Cloud Bootcamp for Free in his lengthy SOA & Cloud Bootcamp: Who Ya Gonna Call? Cloudbusters! essay of 6/11/2009.

The Expo plus Bootcamp discounted price of $200 for online registration expires today (6/12/2009).

When: 6/22 – 6/23/2009 
Where: The Roosevelt Hotel, NY, NY 

SYS-CON announces Cloud Computing Expo 2009 West: Call for Papers Deadline June 30 on 6/11/2009:

The Call for Papers for the 4th International Cloud Computing Conference & Expo, which will be held at the Santa Clara Convention Center in Santa Clara, CA, on November 2-4, 2009, is now open. The Conference Theme in Santa Clara is "Bringing the Economics of the Web to Enterprise IT Through Cloud Computing."

Our aim with each conference is to showcase breakout sessions from members of every layer of the Cloud ecosystem. The submission process is 100% online and the submissions URL is here.

When: 11/2 – 11/4/2009 
Where: Santa Clara Convention Center, Santa Clara, CA

•• Gartner, Inc.’s Gartner Outlines Seven Practical Ways to Save Costs in the Data Center post covers “Key Issues for Data Center Professionals to Be Discussed at Gartner Data Center Conferences October 5-6 in London and December 1-4 in Las Vegas:”

  1. Rationalize the Hardware
  2. Consolidate Data Center Sites
  3. Manage Energy and Facilities Costs
  4. Renegotiate Contracts
  5. Manage the People Costs
  6. Sweat the Assets
  7. Virtualization

Additional information is available in the Gartner report "How to Cut Your Data Center Costs." The report is available on Gartner's Web site.

Gartner analysts will discuss the key issues for the data center during the Gartner Data Center Conferences taking place October 5-6 at the Royal Lancaster Hotel in London, and December 1-4 at Caesars Palace in Las Vegas. The event provides data center professionals with real-world perspectives and strategies to transform their data centers.

When: 10/5 – 10/6/2009 
Where: Royal Lancaster Hotel, London, UK

When: 12/1 – 12/4/2009 
Where: Caesars Palace, Las Vegas, NV

• David Pallmann will speak about Silverlight and Azure at the San Diego Azure User Group meeting on 6/11/2009:

You'll learn how to create rich Silverlight applications that are Azure-hosted and take advantage of cloud services. We'll build an Azure-hosted Silverlight application from the ground up that utilizes web services and cloud storage.

Click here to register.

When: 6/11/2009, 6:00 to 8:00 PM PDT 
Where:  AMN Healthcare, 12400 High Bluff Dr # 100, San Diego, CA 92130

• Mike Amundsen will present a "Programming the Cloud with HTTP/REST" session at CodeStock 2009 on 6/26 – 6/27/2009 in Knoxville, TN. CodeStock includes several other Azure and cloud-related sessions. It’s likely that Aaron King’s Data Syncing using SQL Server Data Services presentation really covers today’s SQL Data Services (SDS), but the latter will undergo a major transformation starting in late July.

When: 6/26 – 6/27/2009 
Where: Lamar Alexander Bldg., Pellissippi State, 10915 Hardin Valley Rd., Knoxville, TN 37933 

Kevin Jackson’s Vivek Kundra to Speak at NRO Showcase post of 6/8/2009 notes:

According to the event website, the current rise in the use of Web 2.0 social media has spawned innovative and unorthodox approaches to meet the demands of today's global and highly mobile workforce. The data-driven web has morphed into a user-centric web, with increasingly empowered users who are accustomed to and demand self-service and highly customizable experiences. An expected highlight will be a demonstration of "Apps for Democracy" by Vivek Kundra, Federal CIO.

Michele Weslander Quaid, Chief Technology Officer at the National Reconnaissance Office, announces the Unleashing the Crowd in the Cloud: Igniting the Innovation Insurgency conference to be held 6/17 – 6/18/2009 at the National Reconnaissance Office in Chantilly, VA. Keynote and special guest speakers include:

  • Vivek Kundra, Federal CIO
  • Jeff Jonas, IBM
  • Vint Cerf, Google
  • Dion Hinchcliffe, Hinchcliffe & Associates
  • David Stephenson, author of Democratizing Data

It’s unfortunate that the Agenda page doesn’t include the topics the speakers intend to discuss and that you need an Intelligence Community (IC) badge or a TS/SCI clearance to attend.

When: 6/17 – 6/18/2009 
Where: National Reconnaissance Office, 14675 Lee Road, Chantilly, VA 20151-1715

Other Cloud Computing Platforms and Services

<Return to section navigation list> 

••• Dion Hinchcliffe’s Cloud computing and open source face-off post of 6/14/2009 claims:

[O]pen source has become a key enabler for cloud computing by providing both cheap inputs (as in free) as well as rich capabilities to providers of cloud services. The writing, however, is beginning to appear on the wall: the cloud computing industry will use open source as leverage for a new generation of proprietary platforms-as-a-service, very much like the established Web 2.0 services in the consumer space have used open source platforms to capture and create lock-in around data.

I agree. PaaS clouds will create vendor lock-in whether based on open-source or proprietary stacks.

••• James Hamilton discusses Erasure Coding and Cold Storage in this 6/13/2009 post:

Erasure coding provides redundancy for greater than single disk failure without 3x or higher redundancy. I still like full mirroring for hot data but the vast majority of the worlds data is cold and much of it never gets referenced after writing it: Measurement and Analysis of Large-Scale Network File System Workloads. For less-than-hot workloads, erasure coding is an excellent solution. Companies such as EMC, Data Domain, Maidsafe, Allmydata, Cleversafe, and Panasas are all building products based upon erasure coding.

At FAST 2009 in late February, A Performance Evaluation and Examination of Open-Source Erasure Coding Libraries For Storage was presented. This paper looks at 5 open source erasure coding systems and compares their relative performance. The open source erasure coding packages implement Reed-Solomon, Cauchy Reed-Solomon, Even-Odd, Row-Diagonal Parity (RDP), and Minimal Density RAID-6 codes.

James then summarizes the authors’ findings.
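If erasure coding is new to you, the simplest member of the family (single XOR parity, as in RAID-5) shows the mechanics in a few lines of Python. This toy sketch tolerates only one lost block per stripe; the codes the paper evaluates generalize the same idea to survive two or more failures without resorting to 3x mirroring:

    # Toy erasure code: k data blocks plus one XOR parity block per stripe.
    # Any single missing block can be rebuilt by XOR-ing the survivors.

    def encode(blocks):
        """Append one parity block: the XOR of all equal-length data blocks."""
        parity = bytes(len(blocks[0]))
        for b in blocks:
            parity = bytes(x ^ y for x, y in zip(parity, b))
        return blocks + [parity]

    def recover(stripe, lost_index):
        """Rebuild the single missing block from the surviving blocks."""
        survivor = next(b for b in stripe if b is not None)
        rebuilt = bytes(len(survivor))
        for i, b in enumerate(stripe):
            if i != lost_index:
                rebuilt = bytes(x ^ y for x, y in zip(rebuilt, b))
        return rebuilt

    data = [b"cold", b"data", b"here"]    # three 4-byte data blocks
    stripe = encode(data)                 # four blocks across four "disks"
    stripe[1] = None                      # one disk dies
    assert recover(stripe, 1) == b"data"  # 1.33x storage instead of 3x mirroring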

••• John Foley questions the need for Holyoke, MA’s proposed data center in his Cloud Computing Vs. $100 Million Data Center post of 6/12/2009 to InformationWeek:

Citizens of Holyoke, Mass., had reason to celebrate this week as the governor of Massachusetts and other dignitaries--including Cisco CEO John Chambers, EMC CEO Joe Tucci, and the presidents of Boston University, MIT, and the University of Massachusetts--announced plans to build a $100 million data center in their town. It's an ambitious proposal, but is it necessary? …

The drawback to a cloud services approach is that Holyoke wouldn't get the new construction and other related jobs that local politicians and residents understandably hope for. But job creation is a bad rationale for building a data center, anyway. Better to focus on research, innovation, and the myriad business opportunities created by on-demand IT resources regardless of how they're delivered.

I wonder if the funding is from a federal stimulus project.

•• Maureen O’Gara says in her Oracle’s Secret Plan for Sun post of 6/12/2009 that “Oracle CEO Larry Ellison is reportedly determined to turn Oracle-Sun into a cloud-based company”:

Oracle CEO Larry Ellison, who's orchestrating things even if co-president Chuck Phillips is nominally in charge of integration from the Oracle side, is reportedly determined to exploit the cloud phenomenon and turn Oracle-Sun into a cloud-based company by making sure he's got Oracle's whole software stack on network-based appliances built by Sun.

He's also reportedly quite taken with Sun's container-based Modular Datacenter, expecting to do better with it than Sun has.

Maureen didn’t disclose in the post who reported Ellison’s intention.

•• Jana Technology Services’ Using Amazon EC2 for PCI DSS compliant applications post of 4/29/2009 says:

Many of these [12 PCI-DSS] requirements can’t be met strictly by a datacenter provider, but in Amazon’s case, they will be able to provide an SAS70 Type 2 Audit Statement in July that will provide much of the infrastructure information needed to meet PCI DSS certification.

The post continues with a list of the Control Objectives that the Amazon Audit will address.

•• Amazon Web Services published an Overview of Security Services whitepaper dated June 2009 which continues to assert:

To provide customers with assurance of the security measures implemented, AWS is working with a public accounting firm to ensure continued Sarbanes Oxley (SOX) compliance, and attain certifications and unbiased Audit Statements such as recurring Statement on Auditing Standards No. 70: Service Organizations, Type II (SAS70 Type II).

AWS has made this commitment in several preceding whitepapers, blog posts, and forum posts but doesn’t advise when users of its services can expect to see a final, completed SAS 70 Type II attestation. My requests for this information in the AWS forums and on Twitter have gone unanswered.

Update 6/12/2009: See the Jana Technology Services post above in this section, which reports Amazon will have a SAS 70 Type II attestation by July (2009).

The whitepaper purports to cover the following topics:

  • Certifications and Accreditations
  • Secure Design Principles
  • Physical Security
  • Backups
  • Network Security
  • AWS Security
    • Amazon Elastic Compute Cloud (Amazon EC2) Security
    • Amazon Simple Storage Service (Amazon S3) Security
    • Amazon SimpleDB Security
    • Amazon Simple Queue Service (Amazon SQS) Security
    • Amazon CloudFront Security
    • Amazon Elastic MapReduce Security

I agree with Reuven Cohen’s summary on Twitter: No Transparency, trust us, we're smarter then [you]. [But i]ts better then nothing.

•• cloud-standards.org started the Cloud Standards Wiki on 6/9/2009. The only significant content at present is a Main Page that claims:

The goal of this wiki is to document the activities of the various SDOs working on Cloud standards.

We plan to follow the same process as was used to create the group known as the Standards Development Organization Collaboration on Networked Resources Management (SCRM-WG): SCRM wiki

This includes the development of a Cloud Landscape to overview the various efforts and introduce terms and definitions that allow each standard to be described in common language, and an entry for each standard categorized by organization.

So far, there’s no indication of the identity of the proponents. The Wbumpus who has added items presumably is Winston Bumpus from VMware. According to a recent Tweet, Reuven Cohen (@ruv) might be its “self-appointed czar.”

James Urquhart’s Open Cirrus research cloud gains new members post of 6/9/2009 reports:

HP, Intel, and Yahoo are announcing Monday at the first Open Cirrus Summit that they have signed on three more research organizations to their joint cloud test bed. The new institutions include the Russian Academy of Sciences, South Korea's Electronics and Telecommunications Research Institute, and MIMOS, a strategic research and development organization under the Ministry of Science, Technology and Innovation in Malaysia.

Open Cirrus, described by the companies as "a global, multiple data center, open-source test bed for the advancement of cloud-computing research," was launched in July of 2008, and represented one of the first large-scale systems deployments targeted at teaching and researching large-scale cloud architectures. IBM and Google have teamed up on a similar project.

 
