Sunday, May 23, 2010

Windows Azure and Cloud Computing Posts for 5/23/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single post you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage,” here on 9/29/2009.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in May 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Jeff00Seattle updated his Windows Azure Drives: Part 1: Configure and Mounting at Startup of Web Role Lifecycle article for The Code Project on 5/23/2010:

An approach for providing Windows Azure Drives (a.k.a. XDrive) to any cloud-based web application through RoleEntryPoint callback methods and exposing successful mounting results within an environment variable through a Global.asax callback method derived from the HttpApplication base class.

Introduction

This article presents an approach for providing Windows Azure Drives (a.k.a. XDrive) to any cloud-based web application through RoleEntryPoint callback methods and exposing successful mounting results within an environment variable through a Global.asax callback method derived from the HttpApplication base class. The article demonstrates how to use this approach to mount XDrives before a .NET (C#) cloud-based web application starts running.

The next article will take this approach and apply it to PHP cloud-based web applications.

Jeff continues with the details for creating a “WebRole DLL in C# that would manage all XDrives' mounting before the PHP web application starts.”
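
For orientation, here’s a minimal sketch of the general pattern Jeff describes, assuming the 2010-era Microsoft.WindowsAzure.CloudDrive API. The local-resource name, connection-string setting, blob path and XDRIVE_PATH variable are hypothetical placeholders, not Jeff’s actual code:

    using System;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.ServiceRuntime;
    using Microsoft.WindowsAzure.StorageClient;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Reserve local disk space for the drive's read cache
            // ("DriveCache" is a hypothetical LocalStorage resource name).
            LocalResource cache = RoleEnvironment.GetLocalResource("DriveCache");
            CloudDrive.InitializeCache(cache.RootPath, cache.MaximumSizeInMegabytes);

            // Create (if necessary) and mount a page-blob-backed drive.
            var account = CloudStorageAccount.Parse(
                RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
            CloudDrive drive = account.CreateCloudDrive("drives/data.vhd");
            try { drive.Create(64); }            // size in MB; throws if it already exists
            catch (CloudDriveException) { }
            string drivePath = drive.Mount(cache.MaximumSizeInMegabytes, DriveMountOptions.None);

            // Expose the mount result so the web app (e.g., Global.asax) can pick it up.
            Environment.SetEnvironmentVariable("XDRIVE_PATH", drivePath);
            return base.OnStart();
        }
    }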

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Nick Josevski’s OData, AtomPub and JSON of 5/23/2010 continues his OData series:

Continuing my mini-series looking into OData, I thought I would cover the basic structure of AtomPub and JSON. They are both formats in which OData can deliver the requested resources (a collection of entities, e.g. products or customers).

For the most part there isn’t much difference in terms of data volume returned by AtomPub vs. JSON, though AtomPub, being XML, is slightly more verbose (tags and closing tags, plus namespace references via xmlns). A plus for AtomPub in your OData service is the ability to define the data type via m:type, as you’ll see below; the example is an integer, Edm.Int32. The lack of such features is a plus in a different way for JSON: it’s simpler, and a language such as JavaScript interprets the values of basic types (string, int, bool, array, etc.) natively.

I’m not attempting to promote one over the other, just saying that each can serve a purpose. If you’re after posts that discuss this in a more critical fashion, have a look at this post by Joe Gregorio.

What I do aim to show is that, comparing the two side by side, there’s only a slight difference, and based on what you intend to accomplish when processing said data, the choice of format is up to you. If you’re just re-purposing some data on a web interface, JSON would be a suitable choice. If you’re processing the data within another service first, making use of XDocument (C#.NET) would seem suitable.

NOTE: In the examples that follow, the returned result data is from the NetFlix OData service. I have stripped out some of the xmlns references and shortened/modified the URLs, in particular omitting http://, just so everything fits better (less line wrapping).

So let us compare…

AtomPub
Yes, that stuff that makes up web feeds.

Example from the NetFlix OData feed, accessed via the URL http://odata.netflix.com/Catalog/Titles:

[The AtomPub response appeared here as a screenshot in the original post.]
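
As an illustrative reconstruction only (made-up entity values, trimmed in the way the note above describes), an OData AtomPub entry has roughly this shape; note the m:type attribute called out earlier:

    <feed xmlns="www.w3.org/2005/Atom">
      <title type="text">Titles</title>
      <entry>
        <id>odata.netflix.com/Catalog/Titles('example')</id>
        <title type="text">Example Title</title>
        <content type="application/xml">
          <m:properties>
            <d:Name>Example Title</d:Name>
            <d:ReleaseYear m:type="Edm.Int32">2009</d:ReleaseYear>
          </m:properties>
        </content>
      </entry>
    </feed>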

JSON
Yes, that simple text format used in JavaScript.

Example from the NetFlix OData feed, accessed via the URL http://odata.netflix.com/Catalog/Titles?$format=JSON:

[The JSON response appeared here as a screenshot in the original post.]
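
Again as an illustrative reconstruction rather than the original screenshot: the same made-up entity in OData’s JSON format, where the typed ReleaseYear simply becomes a bare number:

    { "d" : { "results": [
      {
        "Name": "Example Title",
        "ReleaseYear": 2009
      }
    ] } }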

Gregg Duncan wrote Happy Birthday Data.gov. You’ve grown so in the last year… (from 47 to 272,677 datasets) on 5/22/2010:

WhiteHouse.gov - Data.gov: Pretty Advanced for a One-Year-Old

“One year ago, data.gov was born with 47 datasets of government information that was previously unavailable to the public. The thinking behind this was that this data belonged to the American people, and you should not only know this information, but also have the ability to use it. By tapping the collective knowledge of the American people, we could leverage this government asset to deliver more for millions of people.

Today, there are more than 250,000 datasets, hundreds of applications created by third parties, and a global movement to democratize data. To date, the site has received 97.6 million hits, and following the Obama Administration’s lead, governments and institutions of all sizes are unlocking the value of data for their constituents.  San Francisco, New York City, the State of California, the State of Utah, the State of Michigan, and the Commonwealth of Massachusetts have launched data.gov-type sites, as have countries such as Canada, Australia, and the UK as well as the World Bank.

…”

Data.gov

“…

Data.gov is leading the way in democratizing public sector data and driving innovation. The data is being surfaced from many locations making the Government data stores available to researchers to perform their own analysis. Developers are finding good uses for the datasets, providing interesting and useful applications that allow for new views and public analysis. This is a work in progress, but this movement is spreading to cities, states, and other countries. After just one year a community is born around open government data.

Just look at the numbers:

6 Other nations establishing open data
8 States now offering data sites
8 Cities in America with open data
236 New applications from Data.gov datasets
253 Data contacts in Federal Agencies
272,677 Datasets available on Data.gov

…”

Gregg concludes:

If only there were an API for Data.gov (cough… OData/”Dallas” would be very cool here… cough)

Still, there’s a ton of “data” here. Now to turn it into information and, finally, wisdom…

Andy Novick’s Introduction to SQL Azure is a 00:02:18 Webcast posted 5/17/2010:

The first step in working with SQL Azure is to set up an account and provision a server. It only takes a couple of minutes, and when you're done you'll have a server name and a full-fledged DNS path to it as well. We'll have more on this topic soon!

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Microsoft Case Studies posted IT Company [ISHIR Infotech] Attracts New Customers at Minimal Cost with Cloud Computing Solution on 5/19/2010:

Partner Profile

ISHIR Infotech is an outsourced product development company. With extensive experience in the product development lifecycle, ISHIR helps emerging software leaders bring superior products to market.

Business Situation

ISHIR wanted to transform its in-house Vendor Management Solution (VMS) application into a commercially-viable solution, but without changing its business model and without capital expenses.

Solution

The company evaluated several service providers, but chose Windows Azure and Microsoft SQL Azure to quickly migrate its existing application to the cloud.

David Pallman outlines and details 11 of Neudesic’s Windows Azure Best Practices in this 4/22/2010 article that I missed when posted:

At Neudesic we do a great deal of Windows Azure consulting and development. Here are the best practices we've identified from our field experience:

  1. Validate your approach in the cloud early on.
  2. Run a minimum of two server instances for high availability.
  3. SOA best practices generally apply in the cloud.
  4. SOAP is out, REST is in.
  5. Be as stateless as possible.
  6. Co-locate code and data as much as possible.
  7. Take advantage of data center affinity.
  8. Retry calls to Windows Azure services, SQL Azure databases, and your own web services before failing.
  9. Use separation of concerns to isolate cloud/enterprise differences.
  10. Get as current as possible before migrating to Azure.
  11. Migrate applications one tier at a time.

David fleshes out each item in his post.
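
Item 8 is worth a concrete illustration: calls to cloud services fail transiently, so wrap them in retry logic rather than failing on the first exception. Here’s a minimal generic retry helper in C# with exponential back-off (my sketch, not Neudesic’s code; QueryTitles in the usage comment is hypothetical):

    using System;
    using System.Threading;

    static class Retry
    {
        // Retry an operation a few times, backing off exponentially,
        // before letting the exception surface to the caller.
        public static T Execute<T>(Func<T> operation, int maxAttempts)
        {
            for (int attempt = 1; ; attempt++)
            {
                try
                {
                    return operation();
                }
                catch (Exception)
                {
                    if (attempt >= maxAttempts) throw;   // out of retries: fail for real
                    Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt)));
                }
            }
        }
    }

    // Usage, e.g. wrapping a SQL Azure or storage call:
    // var titles = Retry.Execute(() => QueryTitles(connectionString), 3);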

<Return to section navigation list> 

Windows Azure Infrastructure

Alan Irimie offers a brief review of Sriram Krishnan’s Programming Windows Azure: Programming the Microsoft Cloud book in this 5/23/2010 post:

Sriram Krishnan’s book, Programming Windows Azure, is now available on Amazon. Sriram Krishnan is a Program Manager on the Windows Azure team at Microsoft, where he ran the feature teams that built the service management APIs, geo-capabilities and several back-end infrastructure pieces. He is a prolific speaker and has delivered talks at several conferences, including PDC and MIX.

The book’s first half focuses on how to write and host application code on Windows Azure, while the second half explains all of the options you have for storing and accessing data on the platform with high scalability and reliability. Lots of code samples and screenshots are available to help you along the way.

Not everything about Windows Azure is covered in this book; it is impossible for one book to cover it all. It is a must-read, however, for any Windows Azure developer, and one of those must-have books.

<Return to section navigation list> 

Cloud Security and Governance

No significant articles today.

<Return to section navigation list> 

Cloud Computing Events

Waldo reviews Freddy Kristiansen’s session at Directions EMEA 2010: Windows Azure Applications on 5/22/2010:

I've been looking forward to this [Directions EMEA 2010] session of Freddy [Kristiansen]. He has been advertising it during our session on Wednesday and, apparently, people listened, because lots of the people that attended our session were here as well. Of course, it's a hot topic as well .. the cloud .. what the hell is that cloud? And Azure? I don't know if these questions were really answered during the session, but at least everyone has got a pretty nice picture now of what it can be useful for.

I'm not going to blog all the details, because very soon, Freddy will blog all about it on his blog.

He started out with a simple explanation of Windows Azure: it's some kind of "Cloud Services Operating System" and serves as the development, service hosting and service management environment for the Windows Azure platform. Simply said: it's something out there (hosted by Microsoft) that you can use (or abuse) to put your services on .. and of course pay for what you use. If you need a lot of resources, you'll get them automagically (and pay for them, of course); if you don't, you won't. For example, ticket sales for Michael Jackson (he used Bruce Springsteen as an example, but I like Michael Jackson a little bit more :-) ) would have sold out in a matter of minutes. So, in a matter of minutes, you have to be able to sell 40,000 tickets, and after that, you need a lot fewer resources from the server(s). Cloud services are a great way to deal with this, but not only this, of course.

Cloud services are just something out there, hosted, that you can use to make your setup much easier as well. For your Internet applications, you don't have to provide a hosted environment to do your stuff. It's already there, on Microsoft hardware, and they guarantee scalability, security, reliability and an uptime of 100%.

What a lot of people wonder is whether it's good for hosting ERP .. well, Freddy stated pretty clearly that it's definitely not intended for that .. so now you know ;°).
He continued by explaining that there are multiple ways of using the web services .. and you can also use NAV 2009 web services on the Internet .. by using a proxy service, which he explains on his blog. The thing is that he showed us how to connect over the cloud (service bus) with a guestbook application and an iPhone app .. quite nice examples. But again, I'm making it easy on myself and not going into details, because Freddy announced multiple times that he is going to publish everything on his blog .. so you'll be able to find every detail there shortly.

Now, we'll have to think about applications where we can use "the cloud" .. it brings us a lot of opportunities. It's just a matter of how creative we are in developing solutions for it. It's nice to see that Microsoft is bringing services to the cloud as well (like the Dynamics Online Payments thingy ..) so we'll have to get on that bus as well.

Good job, Freddy, and keep the blogging comin' :-).

David Makogon wrote Richmond Code Camp May 2010 Materials: Azure talk on 5/22/2010:

On May 22, I presented “Azure: Taking Advantage of the Platform.” Here’s the slide deck, sample code, and sample PowerShell script from the talk:

Link to SkyDrive

Thanks to everyone who attended, and for all the great questions! Here are a few takeaways from the talk:

  • The Azure portal is http://www.azure.com. Here’s where you’ll be able to administer your account. You’ll also see a link to download the latest SDK.
  • To set up an MSDN Premium account, visit my blog post here for a detailed walkthrough.
  • Download the SDK here. Then grab the Azure PowerShell cmdlets here.
  • To understand the true cost of web and worker roles, visit my blog post here, and the follow-up regarding staging here.
  • The official pricing plan is here. MSDN Premium pricing details are here.
  • The Azure teams have several blogs, as well as voting sites for future features. I compiled a list of the blogs and voting sites here.
  • Remember to configure your service and storage to be co-located in the same data center. This is done by setting affinity when creating your services.
  • While all storage access is REST-based, the Azure SDK has a complete set of classes that insulate you from having to construct properly-formed REST-based calls (see the sketch after this list).
  • We talked about the limited indexing available with table storage (partition key + row key). Don’t let this be a deterrent: Tables are scalable up to 100TB, where SQL Azure is limited to 50GB. Consider using SQL Azure for relational data, and offload content to table storage, creating a hybrid approach that offers both flexible indexing and massive scalability. You can reference partition keys in a relational table, for instance.
  • Clarifying timestamps across different data centers and time zones (a question brought up in Brian Lanham’s Azure Intro talk): Timestamps are stored as UTC.
  • Don’t forget about queue names: they must be all lower-case letters, numbers, or dashes (and must start and end with a letter or number).
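
To make the last two bullets concrete, here’s a minimal sketch using the 2010-era Microsoft.WindowsAzure.StorageClient classes; the account credentials and queue name are hypothetical placeholders:

    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class QueueDemo
    {
        static void Main()
        {
            // The StorageClient classes build the properly-formed REST calls for you.
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=...");

            // Queue names: all lower-case letters, numbers, and dashes,
            // starting and ending with a letter or number.
            CloudQueueClient queueClient = account.CreateCloudQueueClient();
            CloudQueue queue = queueClient.GetQueueReference("orders-2010");
            queue.CreateIfNotExist();
            queue.AddMessage(new CloudQueueMessage("process order 42"));
        }
    }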

If anyone’s interested in a 2-day Azure Deep Dive, I’ll be teaching a free 2-day Azure Bootcamp July 7-8 in Virginia Beach. Register here.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Derrick Harris explains Why Amazon Should Worry About Google App Engine for Business in this 5/23/2010 post to the GigaOm blog:

I wrote last week that the time may be right for Amazon Web Services to launch its own platform-as-a-service (PaaS) offering, if only to preempt any competitive threat from other providers’ increasingly business-friendly PaaS offerings. The time is indeed right, now that Google has introduced to the world App Engine for Business.

That’s because App Engine for Business further advances the value proposition for PaaS. PaaS offerings have been the epitome of cloud computing in terms of automation and abstraction, but they left something to be desired in terms of choice. With solutions like App Engine for Business, however, the idea of choice in PaaS offerings isn’t so laughable. Python or Java. BigTable or SQL. It’s not AWS (not that any PaaS offering really can be), but it’s a big step in the right direction. App Engine for Business is very competitive in terms of pricing and support, too.

Google is often cited as a cloud computing leader, but until now it had yet to deliver a truly legitimate option for computing in the cloud. Mindshare and a legit product make Google dangerous to cloud providers of all stripes, including AWS.

The integration of the Spring Framework in App Engine for Business is important because it means that customers have the option of easily porting Java applications to a variety of alternative cloud environments. Yes, AWS supports Spring, but the point is that Google is now on board with what is fast becoming the de facto Java framework for both internal and external cloud environments.

Meanwhile, in the IaaS market, AWS is busy trying to distinguish itself on the services and capabilities levels now that bare VMs are becoming commodities. Thus, we get what we saw this week, with AWS cutting storage costs for customers who don’t require high durability (a move some suggest was in response to a leak about Google’s storage announcement), and increasing RDS availability with cross-Availability-Zone database architectures. It’s all about differentiation around capabilities, support and services, and every IaaS provider is engaged in this one-upmanship.

If PaaS is destined to become the preferred cloud computing model, and if the IaaS market is becoming a rat race of sorts, why not free cloud revenues from the IaaS shackles and the threat of PaaS invasion? Amazon CTO Werner Vogels will be among several cloud computing executives speaking at Structure 2010 June 23 & 24, so we should get a sense then of what demands are driving future advances for AWS and other cloud providers. For more on Google vs. Amazon and PaaS vs. IaaS, read my entire post here. [Requires GigaOm Pro subscription.]


Derrick also offers a brief How Google Put the Cloud Computing World on Notice post of the same date:

I wrote last week that the time might be right for Amazon Web Services to launch its own platform-as-a-service (PaaS) offering, if only to preempt any competitive threat from other providers’ increasingly business-friendly PaaS offerings. That stance is firmer than ever now that Google has introduced to the world App Engine for Business, which further advances the PaaS value proposition. Subscribe [to GigaOm Pro] to read the rest of this article.


James Urquhart replies to Geva Perry’s post [see below] with a Does cloud computing need LAMP? article of 5/23/2010 for C|Net News’ The Wisdom of Clouds blog:

The LAMP stack is a collection of open-source technologies commonly integrated to create a platform capable of supporting a wide variety of Web applications. LAMP typically consists of Linux, the Apache HTTP Server, MySQL, and the PHP, Python or Perl scripting languages. Famously used at some of the best-known Web businesses (such as Wikipedia), LAMP has seen widespread adoption in corporate and government settings in the last several years.

My cohost on the occasional Overcast podcast, Geva Perry, recently wrote a blog post asking a simple but profound question: who will build the LAMP cloud? Who will create the first platform as a service (PaaS) offering, a complete programming environment that hides the operational challenges of running applications in the open-source stack while providing all the tools and compatibility LAMP offers today?

As I initially read the post, I thought "good question." As Geva notes, there is a huge gap in the existing market--almost a bias towards Java and Ruby that ignores the value of LAMP:

“Salesforce.com and VMware recently unveiled a Java-focused platform-as-a-service offering, VMForce.com. Meanwhile, Microsoft has Azure, a PaaS offering focused on the .Net stack, and startups Heroku and Engine Yard both deliver Ruby-on-Rails cloud platforms. But who's going to offer a PaaS for LAMP?”

Geva goes on to analyze the key players in the market in some depth. For example, Zend Technologies, a business founded by the creators of PHP, may be planning to do so with $9M in funding secured this week. Google, with App Engine, already has a Python offering that goes some of the way toward LAMP, and recently announced MySQL support for later this year, but also seems to be switching its focus to Java.

Geva also runs down analyses of Microsoft Azure, Heroku, and the Ruby community, and Amazon Web services.

After reading Geva's post, however, I was struck by the very first comment he received from Kirill Sheynkman, arguing that the future of LAMP in the cloud may be a moot point:

“Yes, yes, yes. PHP is huge. Yes, yes, yes. MySQL has millions of users. But, the "MP" part of LAMP came into being when we were hosting, not cloud computing. There are alternative application service platforms to PHP and alternatives to MySQL (and SQL in general) that are exciting, vibrant, and seem to have the new developer community's ear. Whether it's Ruby, Groovy, Scala, or Python as a development language or Mongo, Couch, Cassandra as a persistence layer, there are alternatives. MySQL's ownership by Oracle is a minus, not a plus. I feel times are changing and companies looking to put their applications in the cloud have MANY attractive alternatives, both as stacks or as turnkey services [such as] Azure and App Engine.”

I have to say that Kirill's sentiments resonated with me. First of all, the L and A of LAMP are two elements that should be completely hidden from PaaS users, so does a developer even care whether they are used anymore? (Perhaps for callouts to operating system functions, but in all earnestness, why would a cloud provider allow that?)

Second, as he notes, the M and P of LAMP were about handling the vagaries of operating code and data on systems you had to manage yourself. If there are alternatives that hide some significant percentage of the management concerns, and make it easy to get data into and out of the data store, write code to access and manipulate that data, and control how the application meets its service-level agreements, is the "open sourceness" of a programming stack even that important anymore?

This discussion reflects a larger discussion about the future of open source in a world dominated by cloud computing. If you can manipulate code, but not deploy it (because it is the cloud provider's role to deploy platform components), what advantage do you gain over platform components provided to you at a reasonable cost that "just work," but happen to be proprietary?

I'd love to hear your thoughts on the subject. Has cloud computing reduced the relevance of the LAMP stack, and is this indicative of what cloud computing will do to open-source platform projects in general?

Graphics Credit: Fractal Angel

Geva Perry asks Who Will Build the LAMP Cloud? in this 5/22/2010 post to the GigaOm blog:

Zend Technologies, whose founders created the PHP programming language and which touts itself as “the PHP company,” said Monday that it raised an additional $9 million. But while the press release offered little information as to the money’s intended use, it did contain a somewhat cryptic quote from its lead investor and board member, Moshe Mor of Greylock Partners (italics mine):

“Today’s enterprises are looking to agile Web and Cloud-based technologies such as PHP to deliver business value better and faster…We believe that Zend’s leadership position in the PHP space enables the company to drive its solution to deeper adoption across a broad commercial audience in the U.S. and around the globe.”

Since when is PHP a “cloud-based technology” (whatever that means)? I know Mor to be a smart guy, so I can only assume there’s more to his statement than meets the eye — and I believe it has to do with the LAMP stack.

Salesforce.com and VMware recently unveiled a Java-focused platform-as-a-service offering, VMForce.com. Meanwhile, Microsoft has Azure, a PaaS offering focused on the .Net stack, and startups Heroku and Engine Yard both deliver Ruby-on-Rails cloud platforms. But who’s going to offer a PaaS for LAMP?

One candidate is, of course, Zend, the commercial company behind PHP, the biggest P in LAMP. Zend is also the driving force behind the Simple Cloud API, which is intended to simplify integration between PHP applications and cloud services. But for Zend, which has operated under a typical open-source commercialization model by offering services, support and premium commercial licenses for on-premise installations, operating a cloud service is a whole new area of competency that requires an entirely new business model.

Google is another candidate. The search giant already has a PaaS offering, Google App Engine, that supports both Java and Python, another one of the Ps in LAMP. But until recently it’s been accused of being a lightweight offering that creates lock-in by forcing developers to use Google-specific programming models, such as with threading and data structure. In fact, because of this, Google’s platform lacked MySQL support, the M in LAMP. And although Google recently rolled out a version of its App Engine tweaked for the enterprise, including support for MySQL, the focus seems to be on Java, not on LAMP.

Heroku is another possibility, perhaps surprisingly given how much the startup is identified with the Ruby community. As Stacey noted in a post about its recent $10 million investment announcement:

“We don’t think the market is going to end up with a Ruby platform and a Java platform and a PHP platform,” Byron Sebastian, Heroku’s CEO, said to me in an interview. “People want to build enterprise apps, Twitter apps and to do what they want regardless of the language.” Sebastian said he sees the round as a huge validation for the Ruby language as a way to build cloud-based applications, but doesn’t want to tie Heroku too closely to Ruby. “The solution is going to be a cloud app platform, rather than as a specific language as a service,” Sebastian said.

I like Sebastian and the Heroku guys a lot, but my head’s still spinning from that ambivalent statement.

Even Microsoft has committed to supporting PHP and MySQL on its Azure platform, behind which there’s already an open-source project called PHPAzure. But the operating system is still Windows, so the Microsoft initiative does not qualify as a LAMP stack cloud.

Finally, Amazon can never be discounted as a significant player whenever it comes to cloud computing. As Derrick Harris has postulated, there’s a strong possibility that Amazon will come out with a PaaS offering. And if it does, a LAMP stack-focused platform makes a lot of sense, given that it already offers a MySQL database-as-a-service with Amazon RDS.

Then again, there could always be a startup hard at work building the LAMP Cloud. Do you know of anyone else? Would you want a PHP or LAMP platform as a service? Let us know in the comments.

Vivek Kundra’s 35-page State of Public Sector Cloud Computing white paper PDF of 5/20/2010 carries this Executive Summary:

The Obama Administration is changing the way business is done in Washington and bringing a new sense of responsibility to how we manage taxpayer dollars. We are working to bring the spirit of American innovation and the power of technology to improve performance and lower the cost of government operations.

The United States Government is the world’s largest consumer of information technology, spending over $76 billion annually on more than 10,000 different systems. Fragmentation of systems, poor project execution, and the drag of legacy technology in the Federal Government have presented barriers to achieving the productivity and performance gains found when technology is deployed effectively in the private sector.

In September 2009, we announced the Federal Government’s Cloud Computing Initiative. Cloud computing has the potential to greatly reduce waste, increase data center efficiency and utilization rates, and lower operating costs. This report presents an overview of cloud computing across the public sector. It provides the Federal Government’s definition of cloud computing, and includes details on deployment models, service models, and common characteristics of cloud computing.

As we move to the cloud, we must be vigilant in our efforts to ensure that the standards are in place for a cloud computing environment that provides for security of government information, protects the privacy of our citizens, and safeguards our national security interests. This report provides details regarding the National Institute of Standards and Technology’s efforts to facilitate and lead the development of standards for security, interoperability, and portability.

Furthermore, this report details Federal budget guidance issued to agencies to foster the adoption of cloud computing technologies, where relevant, and provides an overview of the Federal Government’s approach to data center consolidation.

This report concludes with 30 illustrative case studies at the Federal, state and local government levels. These case studies reflect the growing movement across the public sector to leverage cloud computing technologies.

<Return to section navigation list> 

Wednesday, September 23, 2009

Windows Azure and Cloud Computing Posts for 9/21/2009+

Windows Azure, Azure Data Services, SQL Azure Database and related cloud computing topics now appear in this weekly series.

Tip: Copy •, •• or ••• to the clipboard, press Ctrl+F and paste it into the search text box to find updated articles.

•• Update 9/23/2009: Rob Gillen’s Azure with Large Data Sets presentation and live demo, Jay Fry’s review of the 451 Group’s “Cloud in Context” Event, CloudSwitch leaves stealth mode, Mary Hayes Weier says subscription-based pricing for Oracle products is “on Safra’s desk,” Linda McGlasson on “The Future of PCI,” Chris Hoff warns about patches to IaaS and PaaS services, Gartner’s Tom Bittman proposes recorded music as A Better Cloud Computing Analogy than water or electricity, two Johns Hopkins cardiologists recommend standardizing EHR/PHR on VistA

• Update 9/22/2009: Zend Simple Cloud API and Zend Cloud, Ruv on OpenCloud APIs, John Treadway on Cloud Computing and Moore’s Law, Lori MacVittie on Cloud Computing versus Cloud Data Centers, Andrea DiMaio and the Government 2.0 HypeCycle, and more.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon.

Read the detailed TOC here (PDF). Download the sample code here. Discuss the book on its WROX P2P Forum.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use these links, first click the post title to display the single post you want to navigate.

Azure Blob, Table and Queue Services

•• Rob Gillen (@argodev) delivered a Windows Azure: Notes from the Field presentation to the Huntsville [AL] New Technology Users group (HUNTUG.org) on 9/14/2009 that demonstrates methods for processing large earth science datasets with Azure tables. See the Live Windows Azure Apps, Tools and Test Harnesses section for details.

Zend Technologies reduces cloud-storage vendor lock-in anxiety with its Simple Cloud API of 9/22/2009 for Windows Azure, Amazon Web Services, Nirvanix and RackSpace storage services. See the Live Windows Azure Apps, Tools and Test Harnesses section for details.

Simon Munro’s Catfax project on CodePlex demonstrates moving SQL data to and from the cloud using SQL CLR, Azure WCF and Azure Storage:

Catfax is a demonstration project which shows how rows can be uploaded to and retrieved from the cloud in a manner that is well integrated with SQL Server using SQL-CLR. The application has a SQL-CLR stored procedure that calls a WCF service hosted on Azure. The Azure web role stores the data in Azure Tables, which can be retrieved later from SQL Server by executing a similar SQL-CLR sproc.

A more detailed description can be found [in this] blog post: http://blogs.conchango.com/simonmunro/archive/2009/07/08/catfax-sql-clr-wcf-and-windows-azure.aspx.

Simon’s project for Azure Tables is similar to George Huey’s for SQL Azure databases (see below), but Simon’s is a two-way street.
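
For readers unfamiliar with the shape of such a bridge, here’s a bare-bones sketch of a SQL-CLR stored procedure calling an Azure-hosted WCF service. The contract and endpoint are hypothetical stand-ins rather than Catfax’s actual code, and because System.ServiceModel isn’t on SQL Server’s approved assembly list, the assembly would typically have to be registered with PERMISSION_SET = UNSAFE:

    using System.ServiceModel;
    using Microsoft.SqlServer.Server;

    [ServiceContract]
    public interface IFaxService                 // hypothetical contract
    {
        [OperationContract]
        void Upload(string partitionKey, string rowKey, string payload);
    }

    public static class Catfax
    {
        [SqlProcedure]
        public static void UploadRow(string partitionKey, string rowKey, string payload)
        {
            // Call the Azure-hosted WCF endpoint from inside SQL Server.
            var factory = new ChannelFactory<IFaxService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://example.cloudapp.net/FaxService.svc"));
            factory.CreateChannel().Upload(partitionKey, rowKey, payload);
            factory.Close();
        }
    }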

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

My original Using the SQL Azure Migration Wizard with the AdventureWorksLT2008 Sample Database post is updated for George Huey’s new SQL Azure Migration Wizard v.0.2.7, which now handles T-SQL scripts for exporting schema and data from local SQL Server 2005+ databases to SQL Azure in the cloud.

<Return to section navigation list> 

.NET Services: Access Control, Service Bus and Workflow

No significant new posts on this topic today.

<Return to section navigation list> 

Live Windows Azure Apps, Tools and Test Harnesses

•• Rob Gillen (@argodev) delivered a Windows Azure: Notes from the Field presentation to the Huntsville [AL] New Technology Users group (HUNTUG.org) on 9/14/2009 that demonstrates methods for processing large earth science datasets with Azure tables. Here’s the session’s description:

Come learn about Microsoft's Azure platform (and cloud computing in general) as we look at an application built to assist in the processing and publishing of large-scale scientific data. We will discuss architecture choices, benchmarking results, issues faced as well as the work-arounds implemented.

Rob is a developer who has focused on Microsoft technologies for over ten years, working in the service provider (hosting) marketplace as well as with federal and corporate customers. Rob specializes in application and service provisioning, identity management and SharePoint, and is currently working on the intersection of traditional HPC and the commercial “cloud”. Rob has spent the last two years working on the applications team at Oak Ridge National Laboratory and is currently working in the Computer Science Research Group at ORNL, studying the role of cloud computing in our portfolio of scientific computing.

You can learn more in slides 19 through 21 about Rob’s processing of a 1.2 GB NetCDF (Network Common Data Form) file of climate data for the 20th century, stored in Azure tables as a flattened view. Rob documents varying methods of loading the tables in slides 22 through 25. Here’s his live Silverlight visualization of one set of data shown in slide 26 (click for full-size image, 700 KB):

[Silverlight visualization screenshot not reproduced in this copy.]

Rob’s blog offers a series of detailed articles posted while he was testing the data processing and visualization techniques described in his HUNTUG presentation and the live demo.

His slide 28 observes that “ATOM is *very* bloated (~9 MB per time period, average of 55 seconds over 9 distinct serial calls)” whereas “JSON is better (average of 18.5 seconds and 1.6 MB).” I’ve been raising this issue periodically since Pablo Castro initially adopted the AtomPub format for ADO.NET Data Services. See Rob’s AtomPub, JSON, Azure and Large Datasets, Part 2 of 8/20/2009 and AtomPub, JSON, Azure, and Large Datasets of 8/14/2009.

Suzanne Fedoruk of the Physicians Wellness Network reported on 9/23/2009 that PWN Announces That Consumers Can Now Store webLAB Test Results In Their Microsoft HealthVault Account:

Each day more consumers are turning to webLAB to save time and money by ordering general wellness lab tests online. Physicians Wellness Network (PWN) today announced that consumers can now store, track and trend their webLAB test results in their Microsoft HealthVault account. HealthVault is an open, Web-based platform designed to empower consumers by putting them in control of their health information.

Informed Consumers Make Better Health Choices

"PWN physicians know that informed consumers make better health choices. Storing webLAB test results in a Microsoft HealthVault account makes this possible," said Brent Blue, M.D., president of PWN. "When people can track and trend important numbers, such as their cholesterol levels, they are armed with information to manage their health choices." …

I’m still waiting for Quest Diagnostics and Walgreens Pharmacy to create their promised links to HealthVault.

•• Two Johns Hopkins Medical Institutions cardiologists recommend adopting the Veterans Administration’s VistA EHR application in their Zakaria and Meyerson: How to Fix Health IT article of 9/17/2009 for the Washington Post:

… Most currently available electronic medical record software is unwieldy and difficult to quickly access, and there is still no vehicle for the timely exchange of critical medical data between providers and facilities. The stimulus bill included $50 billion to promote uniform electronic record standards, but it will be difficult and costly to construct new systems ensuring interoperability of all current hospital software.

A cheaper and more effective solution is to adopt a standard electronic record-keeping system and ask that all health information software interface with it. In fact, a proven system already exists. The software is called the Veterans Health Information Systems and Technology Architecture (VistA), which the Veterans Affairs Department developed. VistA requires minimal support, is absolutely free to anyone who requests it, is much more user-friendly than its counterparts, and many doctors are already familiar with it. … [Wikipedia link added.]

Zend Technologies reduces cloud-storage vendor lock-in anxiety with its Simple Cloud API of 9/22/2009 for Windows Azure, Amazon Web Services, Nirvanix and RackSpace storage services.

My take: Zend's Simple Cloud API is a set of interfaces for RESTful file storage, document storage, and simple queue services with implementations for Amazon Web Services, Windows Azure storage services, Nirvanix Storage Delivery Network and Rackspace Cloud Files. Identical, or at least similar, implementations for major cloud storage providers will reduce IT managers' widely publicized apprehension of cloud vendor lock-in.

Zend will deliver the PHP implementation for the open source Zend Framework as the "Zend Cloud," which follows in the footsteps of other "OpenCloud" APIs, such as those from Sun Microsystems and GoGrid, as well as earlier Rackspace APIs. The TIOBE Programming Community Index for September 2009 reports that PHP is now #3 in programming-language popularity, up from #5 in September 2008, so the Zend Cloud implementation has a large potential audience among developers for Amazon, Nirvanix, and Rackspace storage.

Google is conspicuous by its absence as a Zend contributor. However, that's not surprising because Google offers "Python as a Service" for the Web and doesn't emphasize cloud storage in its marketing materials.

Windows Azure is a .NET Platform as a Service (PaaS) offering but Microsoft (in conjunction with RealDolmen) released CTP3 of the Windows Azure SDK for PHP (PHPAzure) on 9/8/2009 as an open-source "commitment to Interoperability." The relative benefits of PHPAzure, Simple Cloud API and Zend Cloud to IT managers and developers remain to be seen. PHPAzure takes advantage of Azure-specific features, such as transactions on members of the same entity group, whereas the Simple API/Zend adapters offer least-common-denominator features of the four supported services.

[Deployment scenario diagram omitted.]

My conclusion: Windows Azure developers will continue to program in C# and use the sample StorageClient libraries to integrate Azure .NET Web and Worker projects with RESTful Azure storage services. Zend’s initiative might convince the Azure team to formalize StorageClient as an official supplement to its RESTful storage APIs.

Vijay Rajagopalan, Principal Architect, from the Interoperability Technical Strategy team at Microsoft gives an overview of the Simple API for Cloud Application Services and details the initial contribution from Microsoft in this 00:06:42 Channel9 video of 9/22/2009.

Maarten Balliauw describes his Zend Framework: Zend_Service_WindowsAzure Component Proposal in detail on this Zend wiki page:

Zend_Service_WindowsAzure is a component that allows applications to make use of the Windows Azure APIs. Windows Azure is a Microsoft platform which allows users to store unstructured data (think: files) and structured data (think: database) in a cloud service. More on http://www.microsoft.com/Azure.

The current proposal targets all 3 Windows Azure storage services. These services are:

  • Blob Storage
  • Table Storage
  • Queue Service

An example implementation of this can be found on CodePlex: PHP SDK for Windows Azure and in the ZF SVN laboratory.

• Mary Jo Foley adds her insight on the topic with Zend, Microsoft, IBM join forces to simplify cloud-app development for PHP coders on 9/22/2009:

Zend Technologies and a number of its partners — including Microsoft — unveiled on September 22 another cloud-interop initiative. This one is aimed at developers who are writing new cloud-centric apps in PHP.

All the right buzzwords are part of the newly unveiled Simple API for Cloud Application Services. It’s an open-source initiative that currently includes Zend, Microsoft, IBM, Nirvanix, Rackspace and GoGrid as the founding members. (No Google and no Amazon, however.) It’s all about interoperability and community and dialogue.

For developers of new “cloud-native” applications, “this is a write once and run anywhere” opportunity, said Zend CEO Andi Gutmans. …

Maureen O’Gara chimes in with IBM, Microsoft, Others in Lock-Picking Cloud API Push of 9/22/2009:

Half the apps on the Internet are written in PHP. That gives Zend Technologies, the PHP house, a stake in the cloud.

So it’s rounded up cloud merchants Microsoft, IBM, Rackspace, GoGrid and Nirvanix and has gotten them to support its new open source drive to create a so-called Simple API for Cloud Application Services that developers can write to – or, Zend thinks as likely, rewrite to – to get native cloud apps.

These apps in turn promise to break the lock on closed clouds like Amazon’s, making it possible to move applications and their data in and out of clouds, migrating them around virtually all the major nebulae.

The trick will be in creating Simple Cloud API adapters.

Zend cloud strategist Wil Sinclair – that’s right, Wil – says both Amazon and Google were asked to join the initiative.

Google’s widgetry is based on Python, so it’s got an excuse for not joining. Anyway, the in-house Google Data Liberation Front is at least promising to cut the shackles that condemn captive users to remain customers of Google services because their data is held hostage, as it already has with Google App Engine.

See Doug Tidwell explains Cloud computing with PHP, Part 1: Using Amazon S3 with the Zend Framework in the Other Cloud Computing Platforms and Services section.

Eric Nelson’s Using IIS to generate a X509 certificate for use with the Windows Azure Service Management API – step by step of 9/22/2009 is a detailed tutorial:

This is one of a series of posts on my preparations for sessions on Azure and ORMs at Software Architect 2009.

One of the things that has been added to Windows Azure while I have been “elsewhere” is the Service Management API, which the team introduced on the 17th of this month (Sept 2009).

This is a REST-based API which allows:

  • Deployments – Viewing, creating, deleting, swapping, modifying configuration settings, changing instance counts, and updating the deployment.
  • Listing and viewing properties for hosted services, storage accounts and affinity groups

It uses X509 client certificates for authentication. You can upload any valid X509 certificate in .cer format to the Windows Azure developer portal and then use it as a client certificate when making API requests.

But… you need an X509 certificate. If you have the Windows SDK installed, then you can use makecert (details in the original post). An alternative is to use IIS 7. I decided to use IIS to get my X509 certificate, but it turned out a little less obvious than I expected. Hence a step by step is called for. …
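
Once the certificate is uploaded to the portal, calling the Service Management API is plain REST over HTTPS with the certificate attached as a client certificate. Here’s a minimal sketch; the subscription ID, .pfx path and password are placeholders, and the x-ms-version value reflects the API as introduced in late 2009:

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;

    class ListHostedServices
    {
        static void Main()
        {
            string subscriptionId = "00000000-0000-0000-0000-000000000000";
            var request = (HttpWebRequest)WebRequest.Create(
                "https://management.core.windows.net/" + subscriptionId + "/services/hostedservices");

            // The same X509 certificate uploaded to the developer portal,
            // here loaded with its private key and attached as a client certificate.
            request.ClientCertificates.Add(new X509Certificate2("mycert.pfx", "password"));
            request.Headers.Add("x-ms-version", "2009-10-01");

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                Console.WriteLine(reader.ReadToEnd());   // XML list of hosted services
            }
        }
    }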

Jonathan Lindo ruminates about Fixing Bugs in the Cloud in this 9/22/2009 post:

… One of the essential elements of success is getting a solid, scalable application online and running smoothly and securely. But there just hasn’t been a lot of innovation here.

Being able to quickly identify, respond to and resolve issues in a SaaS application is critical, because if one server has a bad day, it’s not one customer that feels pain, it’s hundreds or thousands. And that’s bad. SaaS acts like a big hairy amplifier on any defect or scalability issue that might be lurking in your app.

Technologies like Introscope, Patrol, Vantage, Snort and my software debugging company Replay are starting to address the needs, but our customers are still pioneering and forging the landscape as they increasingly feel the pains of this new software paradigm we find ourselves in. …

Msdevcon will offer six new SQL Azure training courses starting on 9/28/2009 in its Microsoft SQL Azure series:

The above are in addition to the many members of their Azure Services for Developer series.

Sara Forrest writes Bosworth wants you to take charge of your health in her 9/21/2009 post to ComputerWorld:

Adam Bosworth is asking you to take your health into your own hands (or at least into your computer). The former head of Google Health, Bosworth is now working on a new start-up, Keas Inc., which is dedicated to helping consumers take charge of their own health data. His work focuses on making individual health records easily accessible, thus preventing overtreatment and overspending through proper patient education.

While attending the Aspen Health Forum this summer, he took a few minutes to explain the importance of public access to health data.

“Let's talk a little bit about how you got to where you are today. I worked for Citicorp in the distant past, Borland building Quattro, Microsoft for 10 long years building what I now call Lego blocks for adults, BEA Systems for three years, Google, and three of my own start-ups.

I decided about five years ago that I'd spend the next 25 trying to improve health care and help bring it into the 21st century. I went to Google with that in mind and got sidetracked for 18 months running and building what are generally called Google Apps today before getting to work on Google Health. Keas, my current company, is in some way the culmination of everything I've learned in computing, applied to how to improve health care.” …

Adam also is known as the “father of Microsoft Access.”

Howard Anderson reports that healthcare providers are Weighing EHR/PHR Links in this 9/21/2009 post:

Provider organizations have to address several critical issues when launching personal health records projects, one consultant says. Among those issues, he says, is whether to enable patients to access a complete electronic health record and export it to a PHR--a step that John Moore, managing partner of Chilmark Research, Cambridge, Mass., advocates.

Hospitals and clinics also must decide what data elements are most essential to a PHR. Although many agree that medication lists and allergies must be in a PHR, providers are pondering whether to include all lab tests as well as diagnostic images, Moore notes.

Providers also must determine whether to enable patients to add their own notes to data imported from an EHR to a PHR, such as to question a doctor's findings, the consultant says. Plus, they must determine whether those patient notes will then flow into the EHR.

A strong advocate of two-way links between EHRs and PHRs, Moore also says practice management systems should be added to the mix to help enable patients to use a PHR to, for example, schedule an appointment. …

Carl Brooks’ Public sector drags its heels on cloud post of 9/18/2009 cites examples of foot-dragging by public agencies:

As firms experiment with pay-as-you-go computing infrastructures and an ever-broadening constellation of services and technologies, cloud computing is all the rage in the private sector. But the public sector -- a vast technology consumer in the U.S. with different spending habits, requirements and obligations -- is dragging its heels.

Public-sector IT departments, for instance, aren't rewarded for investing in the latest technology and for reducing costs; instead, they're expected to keep systems working far past standard technology lifecycles. …

Reuven Cohen analyzes Public Cloud Infrastructure Capacity Planning in this 9/21/2009 post:

In the run of a day I get a lot of calls from hosting companies and data centers looking to roll out public cloud infrastructures using Enomaly ECP. In these discussions there are a few questions that everyone seems to ask.

- How much is it going to cost?
- What is the minimum resources / capacity required to roll out a public cloud service?

Both questions are very much related. But to get an idea of how much your cloud infrastructure is going to cost, you first need to fully understand what your resource requirements are and how much capacity (minimum resources) will be required to maintain an acceptable level of service and, hopefully, turn a profit.

In a traditional dedicated or shared hosting environment, capacity planning is typically a fairly straightforward endeavor (a high allotment of bandwidth and a fairly static allotment of resources): a single server (or slice of a server) with a static amount of storage and RAM. If you run out of storage, or get too many visitors, well, too bad. It is what it is. Some managed hosting providers offer more complex server deployment options, but generally, rather than one server, you're given a static stack of several; the concept of elasticity is not usually part of the equation.

Is it problems with capacity planning that are holding back adoption of cloud computing by government agencies?

<Return to section navigation list> 

Windows Azure Infrastructure

Krishnan Subramanian asks Does Private SaaS Make Any Sense? and says “Maybe” in this 9/23/2009 post:

Last week, I had a twitter discussion with James Watters of Silicon Angle about the idea of Private SaaS. He is of the strong opinion that Private SaaS is meaningless. Even though I share his opinion on it, I am not religious about having multi-tenancy as the requirement in the definition of SaaS.

The biggest advantage of SaaS is the huge cost savings it offers due to the multi-tenant architecture. However, enterprises are reluctant to embrace SaaS applications due to concerns about reliability, security, privacy, etc. But the other advantages of SaaS, like low resource overhead, centralized control of user applications, and simplified security and patch management, are very attractive to enterprises. In order to capture the enterprise market, some vendors are shifting toward a Private SaaS approach.

Tom Bittman proposes recorded music as A Better Cloud Computing Analogy than water or electricity in this 9/22/2009 post to the Gartner blogs. Radio delivered “music as a service” (MaaS?) but “on-premises” music hasn’t died.

John Treadway explains the relationship between Moore’s Law and the Cloud Inflection in IT Staffing in this 9/21/2009 post:

I was in a meeting last week with Gartner’s Ben Pring, and he made an interesting observation: cloud computing, in the end, is just a result of Moore’s law. The concept is fairly simple and charts a path of increasingly distributed computing from mainframes, to minicomputers, to workstations and PCs (which resulted in client/server), then on to the Internet, mobile computing, and finally to cloud computing. But cloud computing is not an increase in the distribution of computing — it’s actually the reverse. Sure, there are more devices than ever. But since Internet application topologies have replaced client/server, the leveraging of computing horsepower has migrated back to the data center.

The explosion in distributed computing brought on by ever faster processors (coupled by lower prices on CPUs, memory and storage) allowed for the client/server revolution to push workloads onto the client and off of the server.  Today, much of the compute power of edge devices (PCs, laptops and smart phones) is not used for computing, but for presentation.  Raw workload processing is happening on the server to an increasing degree. …

Until the cloud, Moore’s law resulted in a steady increase in demand for skilled systems and network administrators.  At some point, the economies of scale and concentrating effects of cloud computing – particularly in the area of IT operations – will be visible as a measurable decline in the demand for these skills.

John is the newly appointed Director, Cloud Computing Portfolio for Unisys.

Lori MacVittie’s Cloud Computing versus Cloud Data Centers post of 9/21/2009 contends: “Isolation of resources in ‘the cloud’ is moving providers toward hosted data centers:”

Isolation of resources in “the cloud” is moving providers toward hosted data centers and away from shared resource computing. Do we need to go back to the future and re-examine mainframe computing as a better model for isolated applications capable of sharing resources?

James Urquhart in “Enterprise cloud computing coming of age” gives a nice summary of several “private” cloud offerings; that is, isolated and dedicated resources contracted out to enterprises for a fee. James ends his somewhat prosaic discussion of these offerings with a note that this “evolution” is just the beginning of a long process.

But is it really? Is it really an evolution when you appear to be moving back toward what we had before? Because the only technological difference between isolated, dedicated resources in the cloud and an “outsourced data center” appears to be the way in which the resources are provisioned. In the former they’re mostly virtualized and provisioned on demand. In the latter those resources are provisioned manually. But the resources and the isolation are the same. …

The new Tech Hermit reports More Bad News for Microsoft Data Center Program on 9/21/2009:

Following the terrible blow of Debra Chrapaty leaving Microsoft for greener pastures at Cisco, the program received another huge blow with the resignation of Joel Stone, who was responsible for the operations of all North America-based facilities. Moreover, he is taking a prominent position at Global Switch overseeing worldwide data center operations and will be based out of the United Kingdom. …

Many of the mails we have received here at Tech Hermit suggest that these resignations have more to do with a failed, or at least troubled, integration of the various Yahoo executives into the program. As you may know, Dayne Sampson and Kevin Timmons from Yahoo recently joined the Microsoft GFS organization, the latter having responsibility for Data Center Operations previously run by General Manager Michael Manos.

One thing is clear that after the departure of Manos, the only real voice from Microsoft around infrastructure leadership was Chrapaty. With her departure and now key operations leadership as well, we have to ask is Microsoft’s data center program done for?

Rich Miller’s Tech Hermit Blog Returns post of 9/22/2009 reports on the reincarnation of the Tech Hermit brand and the Digital Cave blog, which has offered many insights into Microsoft’s data center operations.

Jake Sorofman reads the crystal ball in DATACENTER.NEXT: Envisioning the Future of IT of 9/21/2009:

These days, there’s a lot of time spent defining cloud computing. If you believe the pundits, its definition remains a mystery—a cryptic riddle waiting to be deciphered.

Personally, I’m not that interested in defining cloud.

What is far more interesting to me is defining the future of IT, which almost certainly embodies aspects of what most people would recognize as cloud computing. Whether the future of IT is cloud itself is a silly tautological question since we haven’t defined cloud in the first place.

What we do know is that IT is facing a fundamental transformation—a transformation forced by technological, economic, and competitive forces. Technologically, enterprises are recognizing that IT has become unthinkably complex. Economically, enterprises are under pressure to slash budgets and do more with less. And competitively, enterprises are recognizing that IT has become core to business and the delays of yesterday’s IT create serious competitive risk. …

Jake Sorofman is Vice President of Marketing, rPath.

Kara Swisher reports Top Microsoft Infrastructure Exec Chrapaty Heads to Cisco in this 9/20/2009 post to D | All Things Digital:

One of Microsoft’s top execs, Debra Chrapaty, who heads its infrastructure business, is leaving the software giant to take a top job at Cisco (CSCO), sources said.

Chrapaty–whose title is corporate VP of Global Foundation Services–is also one of the increasingly few top women tech execs at Microsoft (MSFT), where she has worked for seven years.

The job put her in charge of, as a Microsoft site notes, “strategy and delivery of the foundational platform for Microsoft Live, Cloud and Online Services worldwide including physical infrastructure, security, operational management, global delivery and environmental considerations. Her organization supports over 200 online services and web portals from Microsoft for consumers and businesses.”

James Hamilton’s Here’s Another Innovative Application post of 9/21/2009 begins:

Here’s another innovative application of commodity hardware and innovative software to the high-scale storage problem. MaxiScale focuses on 1) scalable storage, 2) distributed namespace, and 3) commodity hardware.

Today's announcement: http://www.maxiscale.com/news/newsrelease/092109.

They sell software designed to run on commodity servers with direct attached storage. They run N-way redundancy, with a default of 3-way, across storage servers to be able to survive disk and server failures. The storage can be accessed via HTTP or via Linux or Windows (2003 and XP) file system calls. The latter approach requires a kernel-installed device driver and uses a proprietary protocol to communicate back with the filer cluster, but has the advantage of directly supporting local O/S read/write operations.

MaxiScale’s approach sounds similar to that used to provide redundancy for Windows Azure tables and SQL Azure databases.

<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (@Beaker) brings up issues about updating IaaS and PaaS cloud services for the second time in his Redux: Patching the Cloud post of 9/23/2009:

… What happens when the underlying CloudOS gets patched (or needs to be) and blows your applications/VMs sky-high (in the PaaS/IaaS models)?

How does one negotiate the process for determining when and how a patch is deployed?  Where does the cloud operator draw the line?   If the cloud fabric is democratized across constituent enterprise customers, however isolated, how does a cloud provider ensure consistent distributed service?  If an application can be dynamically provisioned anywhere in the fabric, consistency of the platform is critical.

I followed this up with a practical example when Microsoft’s Azure services experienced a hiccup due to this very thing.  We see wholesale changes that can be instantiated on a whim by Cloud providers that could alter service functionality and service availability, such as this one from Google (Published Google Documents to appear in Google search) — have you thought this through? …

Linda McGlasson begins a series on “The Future of PCI” with The Future of PCI: 4 Questions to Answer on 9/22/2009:

It's been an interesting year for the Payment Card Industry Data Security Standard (PCI DSS, or just PCI).

On one hand there were the Heartland Payment Systems (HPY) and Network Solutions data breaches, after which at least one industry analyst declared "It's time to stop pretending that PCI is working."

On the other, there is the State of Nevada, which has passed a new law requiring businesses to comply with PCI when collecting or transmitting payment card information.

In the middle is a debate among payment card companies, banking institutions, merchants, industry groups and even congressional leaders, all questioning the merits of the standard and hinting at the same open question: What is the future of PCI?

PCI stakeholders are gathering this week for the 2009 PCI Security Standards Council Community meeting in Las Vegas, NV. … [PCI link added.]

Linda continues with the four questions.

David Linthicum’s Should Failures Cast Shadows on Cloud Computing? post to InformationWeek’s Intelligent Enterprise blog of 9/21/2009 posits:

The Gmail outage last week left many asking about the viability of cloud computing, at least, according to PC World and other pundits.

"Tuesday's Gmail outage was not only an inconvenience it calls into question -- yet again -- the feasibility of present-day cloud computing. One popular prediction is that future computers won't need huge hard drives because all our applications and personal data (photos, videos, documents and e-mail) will exist on remote servers on the Internet (otherwise known as 'cloud computing')."

Every time Twitter goes out, or, in this case, a major free email system goes down, everyone uses the outage as an opportunity to cast shadows on cloud computing. I'm not sure why. In many cases it's apples versus oranges, such as Twitter versus Amazon EC2. Also, systems go down, cloud and enterprise, so let's get over that as well.

Joseph Goedert reports Baucus Wants Tighter HIPAA Standards in this 9/21/2009 post to the Health Data Management site:

The health care reform plan issued by Senate Finance Committee chair Sen. Max Baucus (D-Mont.) calls for mandated adoption of "operating rules" that would significantly tighten the standards of HIPAA administrative/financial transactions. It also would increase the number of transaction sets.

The "operating rules" referenced in the plan are those developed under the voluntary CORE initiative under way for several years. CORE is the Committee on Operating Rules for Information Exchange within CAQH, a Washington-based payer advocacy group. The initiative seeks to build industry consensus on tightening of the HIPAA standards to facilitate health care financial/administrative transactions and offer more information to providers. …

<Return to section navigation list> 

Cloud Computing Events

Jay Fry processes customer feedback about cloud computing in his Making cloud computing work: customers at 451 Group summit say costs, trust, and people issues are key post of 9/22/2009:

A few weeks back, the 451 Group held a short-but-sweet Infrastructure Computing for the Enterprise (ICE) Summit to discuss "cloud computing in context." Their analysts, some vendors, and some actual customers each gave their own perspective on how the move to cloud computing is going -- and even what's keeping it from going. [Link to ICE added.]

The customers especially (as you might expect) came up with some interesting commentary. I'm always eager to dig into customer feedback on cloud computing successes and roadblocks, and thought some of the tidbits we heard at the event were worth recounting here.

Jay’s topics include:
    • Clouds under the radar
    • Customers: Some hesitate to call it cloud
    • Cloud: It's (still) not for the faint of heart
    • Biggest pain: impact on the people and the organization
    • Need to move beyond just virtualization
    • Can I drive your Mercedes while you're not using it?
    • Are we making progress on cloud computing?
When: 9/3/2009   
Where: Grand Hyatt Hotel, San Francisco, CA, USA 

Brent Stineman’s Twin Cities Cloud Computing – August Meeting Recap post of 9/20/2009 reviews an unscheduled visit by David Chappell to the Twin Cities Cloud Computing User group’s August 2009 meeting:

David’s presentation was divided into two portions. The first and lengthier was a detailed look at what the Windows Azure Platform is. It’s obvious that David has spent a significant amount of time with the Windows Azure product team. Not only does he have a great understanding of the product’s past and present, but it seemed like he knew more than he was letting on about its future. The most important take-away I had from this was understanding the target audience for each of the components of the Windows Azure Platform.

Windows Azure, the application hosting platform, was intended to allow someone to build the next Facebook or Twitter. That’s why its database is a horizontally scalable system that is not based on traditional RDBMS models. This is also why it includes features and a price tag unlike those of contemporary co-location-type hosting packages, which are targeted at simpler hosting needs. On the flip side is SQL Azure, a vertically scaling database that provides full RDBMS support. This component is less concerned with massive scale than with providing a targeted cloud-based database solution.
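To make the horizontal-scaling distinction concrete, here’s a minimal sketch of inserting an entity into Windows Azure table storage from PHP. It assumes the community Windows Azure SDK for PHP (PHPAzure); the class and method names follow that SDK as I understand it, and the storage account, table name, and entity shape are invented for illustration.

<?php
// Hypothetical example using the PHPAzure SDK; account name and key are placeholders.
require_once 'Microsoft/WindowsAzure/Storage/Table.php';

$table = new Microsoft_WindowsAzure_Storage_Table(
    'table.core.windows.net', 'youraccount', 'your-account-key');
$table->createTable('Posts');

// Entities are addressed by a PartitionKey/RowKey pair instead of relational
// keys and joins; Azure spreads partitions across storage nodes, which is
// what makes the store horizontally scalable (and non-relational).
$entity = new Microsoft_WindowsAzure_Storage_DynamicTableEntity(
    'user123',          // PartitionKey: keeps one user's rows together
    (string) time());   // RowKey: must be unique within the partition
$entity->Text = 'Hello, cloud!';
$table->insertEntity('Posts', $entity);

Choosing a good PartitionKey is the real schema-design decision here: queries within one partition stay fast, and spreading partitions across nodes is what buys the Facebook/Twitter-style scale David mentions.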

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

•• Mary Hayes Weier reports Oracle Contemplates Huge Shift: Subscription-Based Pricing in this 9/23/2009 post to InformationWeek’s Plug into the Cloud blog:

Oracle, it seems, is trying to hammer out a strategy to more heavily embrace the most radical faction of the SaaS movement, one that completely upends the traditional software vendor profit model: Subscription-based pricing. If what Oracle said yesterday in a Web event is true, this could be a huge shift for the software giant.

Oracle launched the virtual Web event, around midmarket software announcements, with a live video keynote address featuring some Oracle execs and a presentation about what's new. There it was in the preso: new pricing options will include "subscription-based pricing."

As noted in a story posted earlier today, that means Oracle will offer SaaS beyond the two apps (On Demand CRM and Beehive) it now offers, for all or some of the business applications it sells to midsize companies. The question is how exactly it plans to do that. When I asked Mark Keever, the Oracle VP who heads up midmarket apps, about subscription-based pricing in a follow-up call Tuesday, he didn't have any more details he could share with me right now. But, his group did have permission to say that subscription-based pricing would be available for midsize companies.

Just for laughs, Larry Ellison goes bonkers over cloud computing at the Churchill Club while Ed Zander looks on in this 00:03:13 YouTube video.

•• CloudSwitch claims to be a “fast-growing cloud computing company backed by Matrix Partners, Atlas Venture and Commonwealth Capital Ventures, currently in stealth-mode” in this initial appearance of their Web site and blog on 9/23/2009:

We're building an innovative software appliance that delivers the power of cloud computing seamlessly and securely so enterprises can dramatically reduce cost and improve responsiveness to the business.

With CloudSwitch, enterprises are protected from the complexity, risks and potential lock-in of the cloud, turning cloud resources into a flexible, cost-effective extension of the corporate data center.

We're led by seasoned entrepreneurs from BMC, EMC, Netezza, RSA, SolidWorks, Sun Microsystems and other market-leading companies, and we're building a world-class team with proven expertise in delivering complex enterprise solutions.

•• Ellen Rubin asks Moving to the Cloud: How Hard is it Really? and notes “Today's cloud providers impose architectures that are very different from those of standard enterprise applications” in a 9/23/2009 post to the CloudSwitch blog:

Many IT managers would love to move some of their applications out of the enterprise data center and into the cloud. It's a chance to eliminate a whole litany of costs and headaches: in capital equipment, in power and cooling, in administration and maintenance. Instead, just pay as you go for the computing power you need, and let someone else worry about managing the underlying infrastructure.

But moving from theory into practice is where things get complicated. It's true that a new web application built from scratch for the cloud as a standalone environment can be rolled out quickly and relatively easily. But for existing applications running in a traditional data center and integrating with a set of other systems, tools and processes, it's not nearly so simple.

Doug Tidwell explains Cloud computing with PHP, Part 1: Using Amazon S3 with the Zend Framework in this detailed IBM developerWorks tutorial of 9/22/2009:

Cloud computing promises unlimited disk space for users and applications. In an ideal world, accessing that storage would be as easy as accessing a local hard drive. Unfortunately, the basic APIs of most cloud storage services force programmers to think about protocols and configuration details instead of simply working with their data. This article looks at classes in the Zend Framework that make it easy to use Amazon's S3 cloud storage service as a boundless hard drive.
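For a sense of how thin the Zend layer is, here’s a minimal sketch along the lines of Tidwell’s tutorial, using Zend Framework’s Zend_Service_Amazon_S3 class (available since ZF 1.8). The credentials and bucket name are placeholders, not values from the article.

<?php
// Requires the Zend Framework on the include path; keys below are placeholders.
require_once 'Zend/Service/Amazon/S3.php';

$s3 = new Zend_Service_Amazon_S3('YOUR-AWS-ACCESS-KEY', 'YOUR-AWS-SECRET-KEY');

// Buckets and objects use simple "bucket/object" path addressing.
$s3->createBucket('oakleaf-demo-bucket');
$s3->putObject('oakleaf-demo-bucket/greeting.txt', 'Hello from S3');

// Registering the stream wrapper is what turns S3 into the "boundless
// hard drive" the tutorial describes: plain PHP file functions then work.
$s3->registerStreamWrapper('s3');
echo file_get_contents('s3://oakleaf-demo-bucket/greeting.txt');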

I’m unsure why IBM promotes Amazon Web Services; perhaps it’s because AWS isn’t Microsoft or Google.

Reuven Cohen asks What is an OpenCloud API? in this 9/14/2009 post:

When it comes to defining Cloud Computing I typically take the stance of "I know it when I see it". Although I'm half joking, being able to spot an Internet centric platform or infrastructure is fairly self evident for the most part. But when it comes to an "OpenCloud API" things get a little more difficult.

Lately it seems that everyone is releasing their own "OpenCloud APIs"; companies like GoGrid and Sun Microsystems were among the first to embrace this approach, offering their APIs under friendly, open Creative Commons licenses. The key aspect of most of these CC-licensed APIs is the requirement that attribution be given to the original author or company. Personally, I would argue that a CC license isn't completely open because of this attribution requirement, but at the end of the day it's probably open enough.

Ruv concludes:

This brings us to what exactly is an OpenCloud API?
A Cloud API that is free of restrictions, be it usage, cost or otherwise.

and offers his $0.02 on Zend’s cloud initiative with New Simple Cloud Storage API Launched of 9/22/2009.

Andrea DiMaio reports Open Data and Application Contests: Government 2.0 at the Peak of Inflated Expectations on 9/22/2009:

Government 2.0 is rapidly reaching what we at Gartner call the peak of inflated expectations. This is the highest point in the diagram called the “hype cycle,” which constitutes one of our most famous branded deliverables to our clients and often features in the press.

Almost all technologies and technology-driven phenomena go through this point, at variable speed. A few die before getting there, but many stay there for a while and then head down toward what we call the “trough of disillusionment,” i.e., the lowest point in that diagram, to then climb back (but never as high as at the peak) toward the so-called “plateau of productivity,” where they deliver measurable value.

If one looks at what is going on around government 2.0 these days, there are all the symptoms of a slightly (or probably massively) overhyped phenomenon. Those that were just early pilots one or two years ago are becoming the norm. New ideas and strategies that were developed by a few innovators in government are now being copied pretty much everywhere. …

Anthony Ha’s Dell buying Perot Systems for $3.9B post of 9/21/2009 to the Deals&More blog summarizes the purchase:

Dell announced today that it’s acquiring Perot Systems, the IT services provider founded by former presidential candidate H. Ross Perot, for $3.9 billion.

Perot Systems has more than 1,000 customers, including the Department of Homeland Security and the US military, according to the Associated Press, with health care and government customers accounting for about 73 percent of its revenue. In the last year, the companies say they made a combined $16 billion in enterprise hardware and IT services.

Dell is buying Perot stock for $30 a share, and says it plans to turn Perot into its services unit. The deal should help Dell sell its computers to Perot customers. It’s expected to close in the November-January quarter.

Last year, Dell competitor Hewlett-Packard bought another Perot-founded services company, Electronic Data Systems.

As reported in an earlier OakLeaf post, Perot Systems was Dell’s pre-purchase choice for hosting cloud-based EMR/EHR applications. According to Perot CEO Peter Altabef, Perot Systems is one of the largest services companies serving the health-care sector, from which it derives about 48 percent of its revenue; around 25 percent of revenue comes from government customers.

More commentary on Dell’s acquisition of Perot:

Rich Miller reports Amazon EC2 Adding 50,000 Instances A Day in this 9/21/2009 post:

Amazon doesn’t release a lot of detail about the growth and profitability of its Amazon Web Services (AWS) cloud computing operation. But a recent analysis found that Amazon EC2 launched more than 50,000 new instances in a 24-hour period in just one region. Cloud technologist Guy Rosen analyzed activity on EC2 using Amazon resource IDs, and estimates that the service has launched 8.4 million instances since its debut. …

The new analysis follows up on previous research by Rosen on the number of web sites hosted on EC2 and other leading cloud providers. He noted that the data is a one-day snapshot, and could be skewed by a number of factors, but says the numbers are “impressive, to say the least.”
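Rosen’s approach exploits the apparently sequential structure of EC2 resource IDs. A back-of-the-envelope version of the estimate might look like the sketch below; the IDs are invented, and the assumption that the hex portion of an ID increments by one per launch is a simplification of his actual methodology.

<?php
// Illustrative only: assumes EC2 resource IDs ("i-xxxxxxxx") embed a
// sequential hex counter, per Rosen's observation.
function ec2LaunchesPerDay($firstId, $secondId, $hoursApart)
{
    $first  = hexdec(substr($firstId, 2));  // strip the "i-" prefix
    $second = hexdec(substr($secondId, 2));
    return ($second - $first) / $hoursApart * 24;
}

// Two hypothetical instance IDs captured 24 hours apart in one region:
printf("~%d launches/day\n", ec2LaunchesPerDay('i-0004c000', 'i-00058350', 24));

Run against real IDs gathered a day apart, a figure in the neighborhood of the 50,000-per-day headline number would fall out of exactly this arithmetic.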

Maureen O’Gara reports Citrix Aims To Cripple VMware’s Cloud Designs on 9/12/2009 (missed when posted):

Citrix is going to try to bar VMware from getting its hooks deep in the cloud by developing the open source Xen hypervisor, already used by public clouds like Amazon, into a full-blown, cheaper, non-proprietary Xen Cloud Platform (XCP).

It intends to surround the Xen hypervisor with a complete runtime virtual infrastructure platform that virtualizes storage, server and network resources. It’s supposed to be agnostic about virtual machines and run VMware’s, which currently run only on its own infrastructure.

<Return to section navigation list>