Monday, June 22, 2009

Windows Azure and Cloud Computing Posts for 6/15/2009+

Windows Azure, Azure Data Services, SQL Data Services and related cloud computing topics now appear in this weekly series.

Updated 6/20 – 6/21/2009: Additions
Updated 6/18 – 6/19/2009: Additions and correction of date typos
• Updated 6/16 – 6/17/2009: Microsoft’s Allison Watson on Azure pricing; Robert Le Moine on Taking .NET Development to the Cloud and other additions

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use these links, click the post title to display the single post, then select the section you want to navigate to.

Azure Blob, Table and Queue Services

<Return to section navigation list> 

• Stefan Tilkov discusses REST and Transactions with Arnon Rotem-Gal-Oz’s observations about RETRO: A RESTful Transaction Model in this 6/17/2009 post.

Brent Stineman’s A last word on Azure Queues (Performance) post of 6/14/2009 describes his downloadable Queue performance test app:

Some time ago, someone came by the MSDN Windows Azure forums and asked a question regarding performance of Azure Queues. They didn’t just want to know something simple like call performance, but wanted to know more about throughput, from initial request until final response was received. So over the last month I managed to put together something that lets me create what I think is a fairly solid test sample. The solution involves a web role for initializing the test and monitoring the results, and a multi-threaded worker role that actually performs the test. Multiple worker roles could also have been used, but I wanted to create a sample that anyone in the CTP or using the local development fabric could easily execute. …
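For flavor, here’s a rough sketch of the kind of single-message round trip Brent measures, written against the StorageClient sample library from the CTP SDK. This is my approximation, not Brent’s code, and member names such as QueueStorage.Create and GetMessage shifted between SDK refreshes:

using System;
using System.Diagnostics;
using Microsoft.Samples.ServiceHosting.StorageClient; // CTP SDK sample library

class QueueRoundTrip
{
    static void Main()
    {
        // Account settings come from the service configuration file.
        StorageAccountInfo account =
            StorageAccountInfo.GetDefaultQueueStorageAccountFromConfiguration();
        MessageQueue queue = QueueStorage.Create(account).GetQueue("perftest");
        queue.CreateQueue();

        // Time one enqueue/dequeue round trip.
        Stopwatch timer = Stopwatch.StartNew();
        queue.PutMessage(new Message("ping"));

        Message received = null;
        while (received == null) // poll until the message becomes visible
        {
            received = queue.GetMessage();
        }
        timer.Stop();

        queue.DeleteMessage(received);
        Console.WriteLine("Round trip: {0} ms", timer.ElapsedMilliseconds);
    }
}

A real throughput test would, as Brent’s does, run many of these loops concurrently from a worker role and aggregate the timings.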

Krishnan Subramanian describes Adobe’s new Tables app in his 6/15/2009 Adobe Releases Spreadsheet SaaS Application And Adds Premium Version post:

Let me highlight some of the features of Tables as noted in their announcement:

• All users can add data simultaneously - solving one of the biggest problems with shared worksheets. All data is always up to date for everyone.
• Presence - lets you know who else is working on the table and where they are working.
• Private and common views - allow the team to work together, but see the information that is important to each person. Private views let you see information that is important to you, without disturbing others working on the sheet.
• Filtering - real time, so you can play with the data and adjust your filter as you go, without having to open a dialog box for every change.
• Sorting - quick, simple and always includes all of the data.

The most interesting feature for me is the idea of Private View and Common View. This feature really solves the problem encountered by people collaborating on spreadsheets online. Apart from these features, the functionality is very basic (well, that is the reason this product is still in the labs) and they have promised to add more features in the near future.

SQL Data Services (SDS)

<Return to section navigation list> 

See The Fort Worth .NET User Group’s June 2009 meeting will be Developing Applications Using [SQL] Data Services in the Cloud Computing Events section.

.NET Services: Access Control, Service Bus and Workflow

 <Return to section navigation list>

Vittorio Bertocci’s The Id Element Weekly: Donovan Follette on making the shift from ADFS v1 to Geneva Server post of 5/12/2009 describes the Channel9 video segment of the same name:

In this week’s episode of the ID Element Vittorio interviews Donovan Follette… as the guest!

Donovan is a senior technical evangelist and a host for this very show: he has worked on identity since he joined Microsoft in 2005, and is a well-known expert in the ADFS community. In this episode Vittorio talks with Donovan about the relationship between ADFS and Geneva Server: Donovan explains in detail how to map the old terminology to the new concepts introduced in Geneva, focusing on differences and similarities in the two approaches, and in general equipping today’s ADFS expert with everything he or she needs to hit the ground running with Geneva Server.

•• Matias Woloski describes the Claims-Driven Modifier control’s expressions for ClaimValue, Condition and Mapping, which the designer ordinarily sets for you (see below for more about the control).

Vittorio Bertocci’s Use claims for driving your web UI… without even *seeing* a line of code post of 6/19/2009 describes the new ASP.NET Claims-Driven Modifier server control. Vibro says:

While pretty much everybody can understand (& appreciate) the high-level story about claims, it is not always easy to make it concrete for everybody. The developer who had to deal with code handling multiple credentials, or had to track down where a certain authorization decision happens, sees very clearly where and how claims can make his life easier: UI developers, however, may have found it challenging to bridge the gap between understanding the general story and finding tangible ways in which claims make their work easier. Until now (at least I hope).

We have put together a demo which shows an example of what you could build on top of the Geneva Framework infrastructure to further raise the level of abstraction, to the point that a web developer is empowered to take advantage of the information unlocked by the claims with just a few clicks. This touches on the theme of customization, which somehow gets less attention than authentication and authorization (for obvious reasons) but deserves its place nonetheless. In any case, it’s not rocket science: it is a simple ASP.NET control that can modify the value of properties of other controls on the page, according to the value of the incoming claims. Despite its simplicity, it allows a surprising range of tricks :-) 

The code of the demo is available on code gallery, at http://code.msdn.microsoft.com/ClaimsDrivenControl.
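The control’s actual code is on Code Gallery at the link above; as a hand-rolled illustration of the idea it automates, here’s a page that toggles a control property from an incoming claim, written against the Geneva Framework object model. The page and the GoldDiscountPanel control are hypothetical:

using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.IdentityModel.Claims; // Geneva Framework

public partial class ShippingPage : Page
{
    protected Panel GoldDiscountPanel; // hypothetical panel declared in markup

    protected void Page_Load(object sender, EventArgs e)
    {
        // Geneva Framework surfaces the caller's claims through IClaimsIdentity.
        IClaimsIdentity identity = User.Identity as IClaimsIdentity;
        if (identity == null) return;

        // Drive the UI from the claims: show the discount panel only to
        // callers carrying a "gold" role claim.
        foreach (Claim claim in identity.Claims)
        {
            if (claim.ClaimType == ClaimTypes.Role && claim.Value == "gold")
            {
                GoldDiscountPanel.Visible = true;
            }
        }
    }
}

The Claims-Driven Modifier control moves this logic out of code-behind and into declarative ClaimValue/Condition/Mapping expressions, which is the point of Vibro’s “without even *seeing* a line of code” title.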

• Matias Woloski talks about the authorization process in Identity thoughts #2: Level 2 Authorization.

The authorization decision happens near the application or the service because it knows about the resource (each application has a different domain model).

The following figure shows a very high-level architecture of the components and their interactions:

• Matias Woloski’s Identity thoughts #1: Analogy between a single app and a federated app post of 6/16/2009 offers a table that “shows an analogy of identity concepts between a single application and a federated application.”

The single app has its own identity silo and the federated app relies on an STS (like Geneva Server). I find this analogy useful to explain how things differ from the non-federated non-claim-based world.

Neil Kidd confirms that .NET Workflow Services will be missing from the RTM of Azure Services Platform v1 in his .NET Services workflow is moving to Fx 4‘s workflow engine, but … post of 6/16/2009.

Neil works for Microsoft in the UK as an Architect in the Microsoft Technology Centre.

• Manuel Meyer delivers an Introduction to Windows Azure Live Mesh/Live Framework Part 1 as the first in a series about Live Mesh, Live Services and the Live Framework.

• Vittorio Bertocci is Announcing FabrikamShipping, in-depth semi-realistic sample for Geneva Framework in this 6/16/2009 post:

Do you remember the PDC session in which Kim announced all the new wave of identity products, including Geneva?

During that session I showed a pretty comprehensive demo, where all the products & services worked together for enabling a fairly realistic end-to-end scenario. You have seen demos based on the same scenario at TechEd EU, TechDays and in many presentations from my colleagues in the various subsidiaries; finally, if you came by the Geneva booth at RSA, chances are that you got a detailed walkthrough of it. Since people liked it so much, we thought it would be nice to extract just the main web application from that scenario, and make it available to everyone in the form of an in-depth example. You can find the code in a handy self-installing file on code gallery, at http://code.msdn.microsoft.com/FabrikamShipping (direct link here).

Mary Jo Foley’s Too many .Nets, too little time? gets the word out that the .NET Services team is dropping .NET Workflow Services until .NET 4 releases, as I reported in last week’s post.

Oren Melzer explains Silent Information Card Provisioning with Geneva Server in this 6/15/2009 post:

One obstacle that administrators looking to deploy information cards in an enterprise will inevitably face is getting information cards to their users. Nobody wants to have to send an email to their users saying that in order to access a web service, they’ll need to go to an issuance website and download an information card. Things should just work. With that in mind, the “Geneva” Server and CardSpace teams created Silent Card Provisioning, a feature that uses Group Policy to deploy information cards to domain users automatically.

Leon Welicki’s Sequential and Flowchart modeling styles post of 6/12/2009 begins:

WF 4 ships with an activity palette that consists of many activities – some of these are control flow activities that represent the different modeling styles developers can use to model their business process. Sequence and Flowchart are a couple of modeling styles we ship in WF 4. In this post, we will present these modeling styles, learn what they are, when to use what, and highlight the main differences between them.

Leon Welicki is a Program Manager on Microsoft’s Connected Framework Team.
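To make the contrast concrete, here’s a minimal sketch of the same two-step process modeled both ways, using the System.Activities API as it appears in later .NET 4 builds (Beta names may differ slightly):

using System.Activities;
using System.Activities.Statements;

class ModelingStyles
{
    static void Main()
    {
        // Sequence: strictly linear, top-to-bottom execution.
        Activity sequence = new Sequence
        {
            Activities =
            {
                new WriteLine { Text = "Validate order" },
                new WriteLine { Text = "Ship order" }
            }
        };
        WorkflowInvoker.Invoke(sequence);

        // Flowchart: each node names its successor explicitly, which is
        // what makes loops and "go back a step" transitions easy to model.
        var ship = new FlowStep { Action = new WriteLine { Text = "Ship order" } };
        var validate = new FlowStep
        {
            Action = new WriteLine { Text = "Validate order" },
            Next = ship
        };
        Activity flowchart = new Flowchart
        {
            StartNode = validate,
            Nodes = { validate, ship }
        };
        WorkflowInvoker.Invoke(flowchart);
    }
}

For a straight line like this the Flowchart buys nothing; its value shows up as soon as the process needs branches or backtracking, which is exactly the when-to-use-what question Leon’s post addresses.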

DotNetBlogger posted Introduction to Workflow Tracking in .NET Framework 4.0 Beta1 on 6/11/2009.

By now you must be aware of the significantly enhanced Windows Workflow Foundation (WF) scheduled to be released with .NET Framework 4.0. The road to WF 4.0 and .NET Framework 4.0 Beta1 documentation for WF can give you more details. Being a member of the team responsible for the development of the WF tracking feature, I am excited to discuss the components that constitute this feature. In a nutshell, tracking is a feature to gain visibility into the execution of a workflow. The WF tracking infrastructure instruments a workflow to emit records reflecting key events during the execution. For example, when a workflow instance starts or completes, tracking records are emitted. Tracking can also extract business-relevant data associated with the workflow variables. For example, if the workflow represents an order processing system, the order id can be extracted along with the tracking record.

In general, enabling WF tracking facilitates diagnostics or business analytics over a workflow execution. For people familiar with WF tracking in .NET 3.0, the tracking components are equivalent to the tracking service in WF 3. In WF 4.0 we have improved the performance and simplified the programming model for the WF tracking feature.

The post continues with a high-level view of the tracking components.
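As a rough companion to that description, here’s a minimal custom tracking participant that writes every record the runtime emits to the console, again using the System.Activities.Tracking types from later .NET 4 builds (Beta1 names may differ):

using System;
using System.Activities;
using System.Activities.Statements;
using System.Activities.Tracking;

// Receives every tracking record the workflow runtime emits.
class ConsoleTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        Console.WriteLine("{0} #{1}: {2}",
            record.EventTime, record.RecordNumber, record);
    }
}

class TrackingDemo
{
    static void Main()
    {
        // Register the participant as a workflow extension, then run.
        var invoker = new WorkflowInvoker(new WriteLine { Text = "Hello" });
        invoker.Extensions.Add(new ConsoleTrackingParticipant());
        invoker.Invoke();
    }
}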

Nuno Filipe Godinho’s Interesting Incubation and Innovation Projects from Microsoft in the Cloud spectrum post of 6/15/2009 describes and links to a number of Azure incubation projects, most of which are relatively new.

Live Windows Azure Apps, Tools and Test Harnesses

<Return to section navigation list> 

Bruno Terkaly’s Azure – Rich Client(s) meets Azure Table Data. Smart Grid Sample – Step 01 post of 6/20/2009 starts a series that uses Azure Tables as the data source for ASP.NET and Silverlight rich-client examples:

Azure – Rich Client(s) meets Azure Table Data. Smart Grid Sample – Step 02 explains the components to be used.
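For readers new to the series’ plumbing, an Azure Table entity is essentially a class with three system properties, queried over the ADO.NET Data Services wire protocol. A minimal sketch; the account, table and entity names are hypothetical, and the shared-key request signing that real calls require is omitted:

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Every Azure Table entity carries PartitionKey, RowKey and Timestamp;
// any other public properties become table columns.
[DataServiceKey("PartitionKey", "RowKey")]
public class ContactEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

class TableQueryDemo
{
    static void Main()
    {
        // Azure Tables speak the ADO.NET Data Services (Astoria) protocol,
        // so a DataServiceContext can query them. A real request must also
        // be signed with the storage account's shared key (omitted here).
        var context = new DataServiceContext(
            new Uri("http://myaccount.table.core.windows.net"));
        var friends = context.CreateQuery<ContactEntity>("Contacts")
            .Where(c => c.PartitionKey == "friends")
            .ToList();
        Console.WriteLine("{0} contacts", friends.Count);
    }
}

The CTP StorageClient sample library wraps this pattern in a TableStorageEntity base class and a signing-aware context, which is what Bruno’s series builds on.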

Steve Marx posted an Update to The CIA Pickup Source on 6/19/2009 before leaving for his vacation in Moscow (Russia). Steve says:

The new code is already live at www.theciapickup.com.  Please download the source again to pick up the changes, and keep the feedback coming!

Be sure to watch Steve’s “What is Windows Azure?” (a Hand-Drawn Video) of 6/19/2009.

• Robert Le Moine calls Taking .NET Development to the Cloud “a leap of faith” in this 6/17/2009 post that describes how his employer uses the Azure Services Platform as a virtual laboratory for application development:

Cloud computing platforms, such as Microsoft Azure, offer compelling advantages for building new scalable .NET applications. But can the Cloud be used for developing existing .NET applications? In this article, I'll explain how we've made the leap to Cloud-based development for our internal applications and the lessons we've learned along the way. Specifically, I'll describe our checklist for selecting a Cloud vendor and how we've used the virtualization capabilities of the Cloud to improve our agile development process. I'll also outline the quantifiable benefits we've seen, including saving $100,000 in capital expenditure and reducing our iteration cycle times by 25%.

As the development team lead for Buildingi, a corporate real estate consultancy that specializes in back-office technology solutions to manage large portfolios, I'm responsible for building Web-based applications using Visual Studio, the Microsoft .NET Framework, and Silverlight. Last year we started looking at Cloud Computing to gain the advantages of a scalable, virtualized platform for software. …

There are a few use cases where the Cloud is not recommended for testing (see below). These include tests that require specific x86 hardware (e.g., BIOS driver tests) and some types of performance and stress testing. If an application requires an onsite Web Service behind a firewall, this can usually be accessed using a VPN connection. …

• West Monroe Partners created this Windows Azure and Silverlight Interactive Map (http://tasteofchicago09.cloudapp.net/Map.aspx) for the forthcoming Taste of Chicago festival scheduled for 6/26 – 7/5/2009 in Chicago’s Grant Park, Lakefront and Loop neighborhoods.

The West Monroe Partners Launches New Interactive Map for Taste of Chicago press release of 6/16/2009 provides a lengthy description of the project:

In addition to developing the interactive Taste of Chicago map, West Monroe Partners and Microsoft also partnered to create a hosting solution that could handle the web site's user load, including half a million views. The hosted Microsoft Windows Azure solution provides the equivalent capacity of 25 purchased servers, with no infrastructure investment required by the City of Chicago. [Link added.]

• Dan Griffin’s Cloud Backup application “is a disaster recovery solution that allows you to export a Hyper-V virtual machine, archive it in Azure, and later restore it,” which he posted to CodePlex on 6/16/2009. Dan’s Cloud Backup whitepaper, CloudBackup.pdf, is a fully illustrated guide to using his solution.

• Ben Riga and Girish Raja describe a self-service Dynamics CRM project hosted in Windows Azure (Dynamics CRM Online) in their Dynamics Duo Rides again Channel9 video of 6/15/2009:

In this episode we walk through the demo in some detail.  The Wide World Importers Conference site we use here is the main site for a fictitious conference.  The self-service part of this is entirely hosted on Windows Azure.  As we walk through the registration process the information is retrieved and stored directly in Dynamics CRM Online.  Naturally, as we’ve said in the past, Dynamics CRM is great at managing both contact and transactional information.  We also look at how, by using 3rd party web services, we can compose new capabilities into our system.  In this case we show how to integrate an internet flight booking service into the attendee registration process and then store that complex flight booking information in the Dynamics CRM data store.  Finally we show how to use Silverlight to build a compelling user experience for a self-service portal.  This one is pretty slick.

The project also uses SQL Data Services to store data.

For more background and a video about Dynamics CRM and Azure, see Ben’s Self-Service Dynamics CRM solutions fly on Windows Azure post of 1/7/2009.

See Brent Stineman’s A last word on Azure Queues (Performance) post of 6/14/2009, which describes his downloadable Queue performance test app, in the Azure Blob, Table and Queue Services section.

Azure Infrastructure

<Return to section navigation list> 

••• Reuven Cohen’s Hoff's Cloud Metastructure post of 6/20/2009 discusses @Beaker’s Metastructure concept for defining cloud infrastructure:

Recently, Chris Hoff posted an interesting concept for simply defining the logical parts of a cloud computing stack. Part of his concept is something he is calling the "Metastructure" or "essentially infrastructure is comprised of all the compute, network and storage moving parts that we identify as infrastructure today. The protocols and mechanisms that provide the interface between the infrastructure layer and the applications and information above it".

Actually I quite like the concept and the simplicity he uses to describe it. Hoff's variation is a practical implementation of a meta-abstraction layer that sits nicely between existing hardware and software stacks while enabling a bridge to future, yet-undiscovered/undeveloped technologies. The idea of a Metastructure provides an extensible bridge between the legacy world of hardware-based deployments and the new world of virtualized/unified computing. (You can see why Hoff is working at Cisco; he gets the core concepts of unified computing -- one API to rule them all.)

••• Chris Hoff posted Incomplete Thought – Cloudanatomy: Infrastructure, Metastructure & Infostructure on 6/19/2009:

I wanted to be able to take the work I did in developing a visual model to expose the component Cloud SPI layers into their requisite parts from an IT perspective and make it even easier to understand.

Specifically, my goal was to produce another visual and some terminology that would allow me to take it up a level so I might describe Cloud to someone who has a grasp on familiar IT terminology, but do so in a visual way:

Ron Schmelzer of ZapThink asks Who's Architecting the Cloud? in this 6/19/2009 post:

Will the cloud succumb to the same short-sighted market pressure that doomed the ASP model and still plagues SaaS approaches?

As the hype cycle for cloud computing continues to gather steam, an increasing number of end users are starting to see the silver lining, while others are simply lost in the fog. It is clear that the debate over the definition, business model, and benefits of cloud will continue for some time, but it is also clear that the sluggish economic environment is increasing the appeal of having someone else pay for the robust infrastructure needed to run one’s applications. Yet, all this talk of leveraging cloud capabilities, or perhaps even building one’s own cloud, whether for public or private consumption, introduces thorny problems. How can we make sure that the cloud will bring us closer to the heavenly vision of IT we search for rather than a fog that hides a complex mess? Who will make sure that the cloud vision isn’t just another reinterpretation of the Software-as-a-Service (SaaS), Application Service Provider (ASP), grid and utility computing model that provided some technical answers but didn’t simplify anything for the internal organization? Who is architecting this mess?

Jonathan Feldman discusses whether single-provider clouds are a “single vulnerability” in his Of Cloud 9 and The Importance of Parachutes post of 6/19/2009 for InformationWeek Analytics:

Back when I did a lot of security work, we used to joke around that single sign on should be called "single vulnerability". Maybe single provider cloud models should be called "single point of failure".

Toodledo went down hard last week. I rely massively on Toodledo to organize my massively complicated work and family life. But I wasn't terribly upset because my data lives in more than one place. I wrote a draft of this blog on the Toodledo site, but I could have easily written it on the equipment that houses the synchronized copy of my notes. The site being down was annoying but not, as we say in the support business, without its workaround. …

Multiple data centers replicating apps and data are the obvious workaround for “single vulnerability” syndrome.

Charles Babcock says Cloud Standards Will Emerge From Current Haze in this 6/18/2009 post for InformationWeek’s Cloud Computing Destination:

What standards do you follow if you're interested in getting started in cloud computing? The short answer is, there are few clearly defined standards in what remains a loosely defined area. Nevertheless, the main outline is clear. Follow the leaders and follow the Web.

In an InformationWeek Webcast on The Cloud and Virtualization June 16, I tried to lay out a few of the standards that will dominate cloud computing. One assumption is that cloud computing will adopt the most efficient paradigms found on the Internet, say the massive and uniformly managed server farms of Google and Amazon.

Dana Gardner’s latest DirectBriefing is EDS’s David Gee on the spectrum of cloud and outsourcing options unfolding before ID architect of 6/19/2009:

HP's purchase last year of EDS came just as talk of cloud computing options ramped up. So how does long-time outsourcing pioneer EDS fit into a new cloud ecology?

Is EDS, in fact, a cloud provider? And how will IT departments properly factor their decisions on what to keep on-premises in data centers versus placing assets and workloads on someone else's cloud infrastructure?

•• Yves Goeleven’s Event driven architecture onto the Azure Services Platform article of 6/19/2009 for the Microsoft Benelux Architect Newsletter begins:

In this article, I will guide you through this new environment and point out some of these design challenges that the cloud presents to us. I will also propose an architectural style, and some additional guidance, that can be used to overcome many of these challenges. Furthermore I'll give you an overview of the tools offered by the Azure cloud platform that can be used to implement such a system.

Yves details three event-processing styles: Simple Event Processing, Stream Event Processing, and Complex Event Processing.

•• Lori MacVittie analyzes Lydia Leong’s post (see below) in her Your Cloud is Not a Precious Snowflake (But it Could Be) post of 6/18/2009:

She lists traits common to most cloud providers: premium equipment, VMWare-based, private VLANs, private connectivity, and co-located dedicated gear but doesn’t really get into what really is – or should be – the focus of cloud offerings: services. To be more specific, infrastructure services.

A cloud provider of course wants a solid, reliable infrastructure. That’s why they tend to use the same set of “premium” equipment. But as Lydia points out, differentiation requires services above and beyond simple hosting of applications in somebody else’s backyard.

Lydia Leong distinguishes Job-based vs. request-based computing in this 6/18/2009 post to the Gartner Blog:

Companies are adopting cloud systems infrastructure services in two different ways: job-based “batch processing”, non-interactive computing; and request-based, real-time-response, interactive computing. The two have distinct requirements, but much as in the olden days of time-sharing, they can potentially share the same infrastructure. …

Observation: Most cloud compute services today target request-based computing, and this is the logical evolution of the hosting industry. However, a significant amount of large-enterprise immediate-term adoption is job-based computing.

Dilemma for cloud providers: Optimize infrastructure with low-power low-cost processors for request-based computing? Or try to balance job-based and request-based compute in a way that maximizes efficient use of faster CPUs?

• Krishnan Subramanian’s Nature’s Attack On Amazon And The Instance Vs Fabric Debate post of 6/17/2009 discusses the pros and cons of instance-based and fabric-based clouds:

Last week, a lightning strike left part of Amazon EC2 belonging to a single zone cut off from the real world. I don't want to go into the whether-it-is-an-outage-or-not debate but toward a different kind of debate. Ever since Cloud Computing started gaining traction, we have had a debate in the industry about whether an instance-based setup is better or a fabric-based one. I thought I would revisit this debate in the light of the recent Amazon EC2 "it's not an outage" incident. Let me do a brief recap of the terminologies and, then, see how the debate shapes up in the aftermath of the "Amazon lightning incident". …

• Vivek Kundra answers Cloud Computing: 10 Questions For Federal CIO Vivek Kundra from InformationWeek’s J. Nicholas Hoover. Background:

Federal CIO Vivek Kundra is well known for innovative approaches to government IT. He introduced Google Apps to the city of Washington, D.C. when he was its CTO back in 2007.

He's brought with him to the federal government a philosophy that cloud computing could save money, facilitate faster procurement and deployment of technologies, and allow government agencies to concentrate more on strategic IT projects.

InformationWeek sat down with him at his office last week to discuss his thoughts about cloud computing in government, and what it would take to make cloud technologies easier to adopt in the federal space.

• Jim Metzler and Steve Taylor co-author The hype surrounding cloud computing, the first of two articles for NetworkWorld that compares the hype about cloud computing with that for Asynchronous Transfer Mode (ATM) a few years ago.

• PRNewswire reports that “MSMS Connects With HealthVault to Make Health Management Easier and More Effective” in this Michigan State Medical Society Collaborates With Microsoft to Expand Health Care Technology in Michigan press release of 6/17/2009:

The Michigan State Medical Society (MSMS) today announced a collaboration with Microsoft Corp., Compuware subsidiary Covisint and MedImpact Healthcare Systems, Inc., to be first in the nation to provide statewide connectivity of medical and pharmacy data for Michigan. Patients and physicians who use the medical society's electronic portal, MSMS Connect, will now have access to critical health care data in one location -- Microsoft HealthVault. This new collaboration expands MSMS' nation-leading effort to help implement electronic health care technology statewide. …

When fully implemented into MSMS Connect, the addition of HealthVault will enable patients to store their individual health data, or their whole family's health record, in one location at no cost. Through HealthVault, which is built on a privacy- and security-enhanced foundation, patients will have complete control over their electronic health data and can give permission to their physicians and other health care providers to view it. Patients can access data from their physicians, health plans, and pharmacies, as well as upload information from medical devices that monitor a number of factors including heart rate, blood pressure and blood sugar.

• Ina Fried’s Microsoft to announce Azure business plan next month post of 6/15/2009 quotes from an interview with Microsoft Corporate Vice President Allison Watson:

[T]he company will get concrete about the financial details and say how partners can help sell Azure at Microsoft's Worldwide Partner Conference, which runs July 13-16 in New Orleans.

When Microsoft announced Azure, it said that all of the applications would be run from its data centers. However, Watson said the company is also looking at ways that partners can host cloud-based solutions.

"We've had some interesting conversations," Watson said.

Watson’s comments about enabling partners to host cloud-based solutions bode well for potential on-site (private-cloud) Azure implementations, which would downplay cloud lock-in issues other than a choice of operating system. It’s a foregone conclusion that moving Azure projects to platforms other than Windows Server will be impossible.

Azure watchers have expected details about the Service Level Agreement (SLA) for Azure, but none of the articles about the interview mention SLAs explicitly. However, it’s likely that Azure SLAs and pricing will be interdependent.

Julie Bort chimes in with additional history of Microsoft pricing in her Long-awaited pricing details of Windows Azure expected soon post of 6/15/2009 to NetworkWorld’s Microsoft Subnet.

• Bill Stempf offers comparative cost analyses in The Dollars and Sense of Cloud Computing of 6/16/2009, Part 2 of a series claiming “Cloud computing makes a lot of sense in this economic environment:”

This is about the money. Hate to say it, I really hate to say it, but what is going to make cloud computing take off is the financial - the economic realities of hardware, staff, and power consumption. Each of our money wizards has a perspective on this, and we will take them one at a time.

Here’s a link to Part 1.

• Rob England claims Cloud Computing Outlook [Is] Far From Sunny in this 6/16/2009 contrarian post to ServerWatch:

Cloud computing is the "buzz" concept of the year for 2009. It has its place, especially for high-risk/low-capital applications like startups, small businesses or web sites, but for enterprise computing — and especially for improving existing core applications — I have a more jaundiced view.

As a concept, cloud computing is a pointer to the future, but there is much hype around the present. As James Maguire of Datamation put it recently: "As Cloud computing has emerged as a red hot trend, tech vendors of every stripe have painted the term 'Cloud' on their products, much like food brands all tout that they're 'low fat'."

• ebizQ’s Cloud Computing discussion section seeks answers to Where is the Revenue Stream in Cloud Computing?

This question comes from our cloud computing virtual conference, and asks: Where is the revenue stream in cloud computing? Who controls the money? If you are using services you are not responsible for, how will different providers receive their revenue?

The replies as of 6/15/2009 make an interesting read.

 Marianne Kolbasuk McGee asks Is That A Cloud On Healthcare's Horizon? in her 6/16/2009 post to InformationWeek’s Cloud Computing Destination. Marianne reports:

Cloud models are starting to provide an attractive option for large and influential regional medical centers to get lots of small, local, laggard doctor offices trading in their paper patient files for electronic medical records. Are there clouds in your forecast?

Beth Israel Deaconess Medical Center (BIDMC), together with its Beth Israel Deaconess Physicians Organization (BIDPO), is just one of a handful of large and prestigious health care organizations in the country helping small doctor offices in their region (in this case, the Boston area) to deploy e-medical record systems.

A cloud model allows these doctor offices to use software to manage their practices and patient data, but the servers are located remotely and supported by BIDMC and Concordant, a services provider. BIDMC is covering about 85% of the non-hardware expenses for the practices to deploy the eClinicalWorks software, and the doctor offices pay a monthly subscription fee of between $500 and $600 for support.

A similar cloud plan is also being used by University Health System of Eastern Carolina to get small doctor practices in rural North Carolina using 21st century technology, says CIO Stuart James. "Most providers can't afford to hire IT people to keep these systems running," he says. "This keeps the costs down." …

Greg Ness analyzes Nick Carr's Cloud-Network Disconnect in this 6/15/2009 post that carries “Virtualization and cloud computing are promising to change the way in which IT services are delivered” as its deck.

Nicholas Carr told a recent audience at IDC Directions that "Cloud computing has become the center of investment and innovation." While he is not a technologist, his sometimes shocking insights into the transformation of IT have been prescient, even if he doesn't sweat the details of how complex IT infrastructures can morph into the equivalent of today's public utilities.

To his credit Carr has predicted the rise of the cloud computing press release, multiple cloud conferences and panels and even the SaaS repositioning exercise.  He also foresaw the rise in Amazon and Google cloud announcements, perhaps years ahead of profits and/or material revenue. …

Tom Lounibos claims The Next BIG Cloud Service may be Reliability-as-a-Service in this 6/15/2009 post. Tom writes:

Aggregators …, such as Facebook and Apple, are taking notice of what they are publishing to their sites these days, with a growing concern that their own brand will be affected by poor performance by association. This forces SaaS vendors to look beyond their own cool features and rethink with whom they deploy their applications. Even the leading Managed Service Providers (Rackspace, Terremark, and Savvis) and emerging Cloud Platform Providers (Amazon, IBM, and Force.com) are rushing to deliver new services to assure their customers that they have the most reliable deployment environment for SaaS-based applications. Reliability matters more today than ever!

David Linthicum’s Cloud Computing and SOA Convergence in Your Enterprise: A Step-by-Step Guide, Rough Cuts is a downloadable version of his book that was published May 6, 2009 by Addison-Wesley Professional as part of the Addison-Wesley Information Technology Series. Here’s the description:

This book is the bible for those looking to take advantage of the convergence of SOA and cloud computing, including detailed technical information about the trend, supporting technology and methods, and a step-by-step guide for doing your own self-evaluation, and, finally, reinventing your enterprise to become a connected, efficient money-making machine. This is an idea-shifting book that sets the stage for the way information technology is delivered. This is more than just a book that defines some technology; this book defines a class of technology, as well as approaches and strategies to make things work within your enterprise.

Author David S. Linthicum has written the book in such a way that IT leaders, developers, and architects will find the information extremely useful. Many examples are included to make the information easier to understand, and ongoing support from the book’s Web site is included.  Prerequisites for this book are a basic understanding of Web services and cloud computing, and related development tools and technologies at a high level. However, the non-technical will find this book just as valuable as a means of understanding this revolution and how it affects your enterprise.

You can read the TOC, but nothing else, at no charge.

Mache Creeger describes his Cloud Computing: An Overview survey article for the Association for Computing Machinery (ACM) Queue magazine as a “summary of important cloud-computing issues distilled from ACM CTO Roundtables.” Topics include:

    • What is Cloud Computing?
    • CapEx vs. OpEx Tradeoff
    • Benefits
    • Use Cases
    • Distance Implications between Computation and Data
    • Data Security
    • Advice
    • Unanswered Questions

I don’t usually include survey articles in my cloud posts, but publication by ACM Queue gives this article higher than average clout.

Randy Bias continues his Cloud Futures posts with Cloud Futures Pt. 3: Focused Clouds of 6/15/2009. Randy classifies Focused Clouds into several categories and recommends “Focus, Focus, Focus.”

Links to Randy’s earlier articles in the series appear in the post.

James Hamilton’s PUE and Total Power Usage Efficiency (tPUE) post of 6/14/2009 begins:

I like Power Usage Effectiveness as a coarse measure of infrastructure efficiency. It gives us a way of speaking about the efficiency of the data center power distribution and mechanical equipment without having to qualify the discussion on the basis of the servers and storage used, utilization levels, or other issues not directly related to data center design. But, there are clear problems with the PUE metric. Any single metric that attempts to reduce a complex system to a single number is going to fail to model important details and is going to be easy to game. PUE suffers from some of both; nonetheless, I find it useful.

In what follows, I give an overview of PUE, talk about some of the issues I have with it as currently defined, and then propose some improvements in PUE measurement using a metric called tPUE.
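For reference, the standard Green Grid definition that Hamilton starts from is:

PUE = Total Facility Power / IT Equipment Power

A PUE of 1.5 thus means that for every watt delivered to servers, storage and network gear, another half watt goes to power distribution and cooling overhead.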

Cloud Security and Governance

<Return to section navigation list>

Hartford Financial Services Group, Inc.’s Cyberbuzz page contains links to current articles and podcasts about security and data risk; “data malpractice;” and Web 2.0 defamation lawsuits.

According to NetworkWorld’s Tim Green in his Insurers keep an eye on cloud security threats article of 5/22/2009:

The Hartford has a dedicated insurance offering called CyberChoice that pays off if failure of the IT infrastructure results in liability for loss of personal information, intellectual property and the like. The insurance pays for investigation of the failure and payment of the costs of notifying customers if there is a reportable breach.

Passing the insurance company’s test of whether to insure a business is not easy, says Drew Bartkiewicz, vice president of technology and new media markets for The Hartford. Only a very few corporations – mostly Fortune 500 – even apply for the insurance, and of those who do, two thirds are turned away for coverage because they don’t live up to the requirements.

Chris Hoff wants to See You At Structure09 and Cisco Live! according to his post of 6/18/2009:

I managed to squeak out some additional time at the end of my first docking with the Mothership in San Jose next week such that I can attend Cisco Live!/Networkers the week after.  I’ll be at Live! up to closing on 7/1. …

If you’re going to be there, let’s either organize a tweet-up (@beaker) or a blog-down…

• Eric Chabrow’s NIST Issues Two Reports article of 6/16/2009 for GovInfoSecurity.com provides brief descriptions of both reports.

• David Linthicum “talks about how to use governance to make cloud computing work” in this  Governance and Cloud Computing podcast of 6/17/2009.

• Ellen Messmer reports that Microsoft’s “IT Infrastructure Threat Modeling Guide” offers security advice in her 6/15/2009 Microsoft's threat-modeling guide: Think like an attacker article for NetworkWorld:

Microsoft offers up security advice on how to fend off attacks against corporate IT resources by looking at ways that attackers can undermine an organization in its “IT Infrastructure Threat Modeling Guide” published today.

“Look at it from the perspective of an attacker,” says Russ McRee, senior security analyst for online services at Microsoft, the primary author of the 32-page guide that discusses the fundamentals and tactics of network defense. McRee said the “IT Infrastructure Threat Modeling Guide” is actually the outcome of a lot of thinking about the topic at Microsoft, which itself is using the guide as a reference.

The guide is not about Microsoft products and in fact “needs to be agnostic so it can work for anyone,” says McRee. “An organization has to figure out what their threats are.”

The guide offers ways that IT staff—especially those without formal security training—can analyze their own wired and wireless networks, model them for security purposes, in some cases along the lines of “trust boundaries and levels,” to determine where defenses should be. …

• Craig Balding slams self-serving security audits of PaaS and IaaS vendors in his Stop the Madness! Cloud Onboarding Audits - An Open Question… post of 6/16/2009. Craig writes near the end of his detailed post:

If you’re following along thus far, you’ll also see the possibility for trusted 3rd party auditors to digitally ’sign’ individual policy statements made by cloud providers they have audited. That signature could itself reflect the assurance level you need.  This in turn could help drive the nascent cyberinsurance market for cloud…assuming the auditor is open to counterclaims by the insurer ;-).

Microsoft’s SAS 70 attestations and ISO/IEC 27001:2005 certifications by the British Standards Institution (BSi), as described in Charlie McNerney’s Securing Microsoft’s Cloud Infrastructure post of 5/27/2009, are a step in the right direction.

Joe McKendrick comments on Dana Gardner’s BriefingDirect in his SOA, IT and cloud governance converge into 'total services governance' post of 6/17/2009.

Dana Gardner’s latest BriefingDirect is Hurdles To Cloud Adoption Swirl Around Governance of 6/15/2009:

Our panel of IT analysts discusses the emerging requirements for a new and larger definition of governance. It's more than IT governance, or service-oriented architecture (SOA) governance. The goal is really more about extended enterprise processes, resource consumption, and resource-allocation governance.

In other words, "total services governance." Any meaningful move to cloud-computing adoption, certainly that which aligns and coexists with existing enterprise IT, will need to have such total governance in place. Already, we see a lot of evidence that the IT vendor community and the cloud providers themselves recognize the need for this pending market need and requirement for additional governance.

Kevin Jackson reports on an interchange of Tweets with cloud security expert Chris Hoff (a.k.a. @Beaker) in this Maneuver Warfare in IT: A Cheerleading Pundit post of 6/15/2009. Chris has just taken a high-level job with Cisco.

SuccessFactors announces SuccessFactors Leads Enterprise Cloud Security With Strategic Technology Partnership With WhiteHat Security and Imperva in this 6/15/2009 press release.

Cloud Computing Events

<Return to section navigation list>

UKAzure Net announces The Cumulonimbus Event on 7/29/2009 at Microsoft’s London office:

Our 2nd meeting has been booked!  Our first event was a fantastic success and we hope to emulate this with the next two speakers. 

Richard Godfrey will demonstrate his KoodibooK product and show how it can be scaled using Azure.

Bert Craven will discuss how Azure can work as a technical and commercial proposition for an enterprise such as EasyJet. He will also demonstrate moving a WCF service into the cloud using the .NET Service Bus and Relay Bindings.
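For those unfamiliar with the Relay Bindings, exposing an existing WCF service through the .NET Service Bus mostly amounts to adding a relay endpoint. A minimal sketch, with a hypothetical solution name and the CTP’s credential setup omitted:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus; // .NET Service Bus SDK (2009 CTP)

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

class RelayHost
{
    static void Main()
    {
        // The sb:// endpoint is projected into the .NET Service Bus, so the
        // service is reachable from the internet while it runs on-premises.
        var host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(
            typeof(IEcho),
            new NetTcpRelayBinding(),
            new Uri("sb://mysolution.servicebus.windows.net/echo"));
        // Credential configuration (a TransportClientEndpointBehavior in
        // the CTP) is omitted here.
        host.Open();
        Console.WriteLine("Listening on the Service Bus; press Enter to exit.");
        Console.ReadLine();
        host.Close();
    }
}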

When: 7/29/2009 from 6:00 PM to 9:30 PM GMT 
Where: Microsoft, Cardinal Place 100 Victoria Street, London SW1E 5JL, UK 

• Peter Laudati’s Azure Fire Starter Philly – Saturday June 20th post of 3/16/2009 describes the event and its schedule:

Learn about Windows Azure and Azure services which enable developers to easily create or extend their applications and services. From consumer-targeted applications and social networking web sites to enterprise class applications and services, these services make it easy for you to give your applications and services the most compelling experiences and features.

  • 9:00-10:30 Introduction to Azure
  • 10:45-12:15 Azure Storage
  • 12:15-1:15 Working Lunch - Putting it together - Building a simple Azure Application
  • 1:30-2:00 Azure Services (Service Bus, Workflow Services)
  • 2:15-3:45 Azure Services (Access Control Service)
  • 4:00-5:30 Introduction to Live Services

Register free at Microsoft Events.

When: 6/20/2009 from 9:00 AM to 5:30 PM EDT 
Where: Microsoft - Malvern MPR 1 & 2, Great Valley Corporate Center, 45 Liberty Boulevard Suite 210, Malvern, PA 19355

The CloudCamp site announces CloudCamp San Francisco on 6/24/2009 from 5:30 to 10:00 PM at 835 Market Street, Suite 700, San Francisco, CA (Microsoft’s SFO office.)

Tentative Schedule:
5:30 Registration & Networking
6:00 Intro & Welcome to CloudCamp
6:15 Unpanel
7:00 Prepare for Unconference
7:15 Unconference - Sessions 1
8:00 Unconference - Sessions 2
8:45 Unconference - Sessions 3
9:30 Summary of Sessions
9:45 Networking

When: 6/24/2009 from 5:30 to 10:00 PM PDT 
Where: 835 Market Street, Suite 700, San Francisco, CA (Microsoft’s SFO office.)

The World Bank presents Financial Crisis and Cloud Computing: Delivering More for Less, Demystifying Cloud Computing as Enabler of Government Efficiency and Transformation, a Government Transformation Initiative Workshop from 9:00 AM to 1:00 PM EDT on 6/16/2009 in Washington DC that features a live Webcast:

The workshop will discuss the emergence of cloud computing and the advantages that it offers, particularly in terms of cost savings. The workshop will also highlight various challenges that need to be addressed with a special focus on connectivity, business models, efficiency, reliability, integration, security, privacy and interoperability issues.

The key objective is to clarify the rather misty concept of cloud computing for both World Bank staff and our country clients. There is a lot of confusion around this idea with over 20 definitions offered so far by various parties. The workshop will also clarify the potential role of the World Bank and other development organizations in helping developing countries to realize this opportunity.

This workshop is organized by the Global ICT Department and other partners as part of the Government Transformation Initiative, a collaboration between World Bank and the private sector aimed at supporting government leaders pursuing ICT-enabled public sector transformation.

Register to confirm your participation in the Webcast.

When: 6/16/2009 9:00 AM to 1:00 PM 
Where: Washington, DC (Internet Webcast)

The Fort Worth .NET User Group’s June 2009 meeting will be Developing Applications Using [SQL] Data Services, presented by Rob Vettor, .NET Architect/Senior Solution Developer at Jack Henry and Associates, on 6/16/2009 at Justin Brands. According to this 6/14/2009 post:

In this session, we’ll…

  • Gain a clear understanding of a data service and how the REST protocol plays a key role
  • Explore local, or “on-premises,” data services implemented with the ADO.NET Data Services Framework
  • Explore Cloud-based data services implemented with SQL Data Services
  • Walk through examples with Silverlight and ASP.NET Ajax
  • Show how the ADO.NET Entity Framework provides an underlying foundation for data services 
  • Contrast the difference between SQL Data Services in the cloud and cloud data storage

You’ll walk away with a clear understanding of how this technology works as well as what is available now and in the near future.
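For a flavor of how REST “plays a key role” in the session’s material, an ADO.NET Data Services query is just an addressable URI; the service and entity-set names here are hypothetical:

GET http://myserver/MyData.svc/Customers?$filter=City eq 'London'&$top=5 HTTP/1.1
Accept: application/atom+xml

That request returns the first five London customers as an Atom feed; on the wire, the spaces in the $filter expression are percent-encoded.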

I wonder if Rob has an advance copy of SDS’s first relational CTP.

When: 6/16/2009 
Where: Justin Brands, Ft. Worth, TX

David Linthicum and Ed Horst will conduct a Webinar entitled Managing Business Transactions from the Enterprise to the Cloud and Back on 6/17/2009 at 9:00 AM PDT. AmberPoint is sponsoring the Webinar and Joe McKendrick will moderate it. Details:

In this informative webcast we’ll take you through the basics of implementing SOA systems that leverage cloud computing. We’ll focus on how to manage these systems, taking into account the special requirements posed by transactions flowing from the enterprise to the cloud and back.

SOA and cloud computing expert David Linthicum, author of “Cloud Computing and SOA Convergence in Your Enterprise,” will walk you through the approach of bringing transactional SOA to the clouds, and the best practices in SOA governance. Ed Horst, Vice President of Product Strategy for industry leader AmberPoint, will cover best practices for managing composite applications that leverage cloud computing.

Register here. (Site registration is required)

When: 6/17/2009 9:00 AM PDT 
Where: The Internet

Other Cloud Computing Platforms and Services

<Return to section navigation list> 

••• Greg Ness claims The Intercloud Makes Networks Sexy Again and “Cisco Leads with Vision” in this 6/19/2009 post:

Who knows who created the intercloud term, but it is a major development in articulating the enterprise cloud payoff.  Check out this Cisco blog and intercloud preso.  It is a grand and spectacular vision of where computing needs to go.

Think of the intercloud as an elastic mesh of on demand processing power deployed across multiple data centers.  The payoff is massive scale, efficiency and flexibility.

Just when you thought that Google and Amazon would control the skies, along comes Cisco with a brilliant vision that amplifies the role of the network and offers enterprises a sexy alternative.

Kevin Jackson’s Two Days with AWS Federal post describes two days of training with Amazon Web Services (AWS) Federal:

Today, I start two days of training with Amazon Web Services (AWS) Federal. If that's the first time you've ever heard about an AWS Federal division, you're not alone. Held in downtown Washington, DC, the course was invite-only, and the attendees were IT services firms that had demonstrated a clear track record of success in the Federal market.

He then goes on to list the companies in attendance, explains AWS’s use of the term “70/30 switch,” and describes the first day’s session contents.

Bernard Gordon recounts the Amazon Web Services Start-Up Event at the PlugandPlayTechCenter in Sunnyvale in his The Cloud as Innovation Platform: Early Examples article of 6/18/2009 for the Norwegian branch of the IDG News Service:

… Turning to the Amazon event, four Amazon customers presented and discussed their use of cloud computing (my discussion of the following is from notes and memory, as the slides are not yet available). …

The customers were ShareThis, Pathwork Diagnostics, SmugMug and NetFlix.

David Meyer’s BT moves infrastructure into the cloud post of 6/18/2009 for ZDNet.UK leads with:

BT is about to formally launch a virtualised infrastructure service called BT Virtual Data Centre, which will form the basis of its cloud-computing strategy.

VDC involves the virtualisation of servers, storage, networks and security delivered to customers via an online portal as cloud-based services. On Thursday, BT's Global Services division announced the customer rollout of VDC, which will initially target multinational corporate customers and the public sector.

"VDC is the basis of our cloud-computing offering," Neil Sutton, BT Global Services's product chief, told ZDNet UK on Thursday. "We've begun to deliver communications-as-a-service and hosted services for voice, unified communications and CRM, and we see a roadmap where people want to be able to provision an infrastructure end-to-end. We want to deliver those things as a service in a predictable and flexible manner."

C. Burns and M. West co-wrote IBM’s Cloud Takes Shape, But Offering Still Lacks Necessary Guidance, a Saugatuck Research Alert about IBM’s newly announced “Smarter IT” cloud strategy. (Free site registration required.) The report identifies several “areas where user IT organizations will definitely need guidance and services” from IBM.

• Himanshu Vasishth’s System.Data.OracleClient Update post of 6/15/2009 to the ADO.NET Team Blog announces that the System.Data.OracleClient class will be deprecated in .NET Framework 4.0 in favor of third-party versions:

… We learned that a significantly large portion of customers use our partners’ ADO.NET providers for Oracle, with regularly updated support for Oracle releases and new features. In addition, many of the third-party providers are able to consistently provide the same level of quality and support that customers have come to expect from Microsoft. This is a strong testament to our partners’ support for our technologies and the strength of our partner ecosystem. It is our assessment that even if we made significant investments in ADO.NET OracleClient to bring it to parity with our partner-based providers, customers would not have a compelling reason to switch to ADO.NET OracleClient. …

Himanshu Vasishth is MSFT’s Program Manager, ADO.NET OracleClient (for a while).

Q: Does Oracle’s pending purchase of Sun Microsystems have something to do with the ADO.NET Team’s decision?
A: Probably (see below).

• Dan Woods explains Why Oracle Wants Solaris in this 6/16/2009 article for Forbes magazine. Woods writes:

My guess is that the "Industry in a Box" vision mentioned by Charles Phillips, Oracle's co-president, will actually become the next wave of cloud computing. In a previous column, I recommended that Google get into the appliance business. My guess is Oracle will follow this path with a vengeance. Solaris will power Oracle's cloud offerings, but through appliances, Oracle will bring the cloud to the data center.

Remember that Google, the leading provider of large-scale computing services in the cloud, does so by building its own hardware and software that is integrated and optimized for the task. I believe that Oracle recognizes that there are limits to the amount of enterprise IT that can be put into the cloud. Problems such as security, disaster recovery and moving huge amounts of data are significant barriers to cloud migration. But many of the same economic and operational benefits of the cloud can be achieved through remotely managed appliances that integrate software and hardware in one box. Oracle can run these over the Net using the Smart Services model I wrote about in Mesh Collaboration. The customer gets all the benefits of the cloud without having to move data off premise.

• Lydia Leong discusses differentiation of cloud vendor offerings in her “Enterprise class” cloud post to the Gartner blogs of 6/16/2009:

There seems to be an endless parade of hosting companies eager to explain to me that they have an “enterprise class” cloud offering. (Cloud systems infrastructure services, to be precise; I continue to be careless in my shorthand on this blog, although all of us here at Gartner are trying to get into the habit of using cloud as an adjective attached to more specific terminology.)

If you’re a hosting vendor, get this into your head now: Just because your cloud compute service is differentiated from Amazon’s doesn’t mean that you’re differentiated from any other hoster’s cloud offering. …

Ashlee Vance reports Sun Is Said to Cancel Big Chip Project (the Rock CPU) in her 6/15/2009 article for the NY Times’ Bits column:

Sun has been working on the Rock project for more than five years, hoping to create a chip with many cores that would trounce competing server chips from I.B.M. and Intel. The company has talked about Rock in the loftiest of terms and built it up as a game-changing product. In April 2007, Jonathan Schwartz, the chief executive of Sun, bragged about receiving the first test versions of Rock. …

This marks the second high-end chip in a row that Sun has canceled before its release. These types of products cost billions of dollars to produce, and Sun now has about a 10-year track record of investing in game-changing chips that failed to materialize.

You can bet your children’s college fund that Oracle had something to do with killing Rock.

• David Linthicum says Another Reason to Put Data in the Cloud is Google’s Fusion Tables in his 6/16/2009 post.

Google Labs recently announced Google Fusion Tables, an "experimental system" for fusing data management and collaboration. In other words, it's a means to merge many data sources, including any electronic conversations around data, visualization and data queries. Fusion Tables provide a platform to analyze data along with tools for electronically collaborating about that analysis.

The use cases here are numerous, but the core idea is that users will upload data, and then analyze and visualize the data on Google Maps or mashed up with other APIs, such as the Google Visualization API. Nothing new there, right? Wrong. Fusion Tables also provide for the discussion of data at the row or column level, or even specific data elements... think database and business intelligence meets Google Docs. However, the biggest bang for this new cloud service is the ability to "fuse" multiple sets of data that are logically related and then determine patterns.

This looks to me like the capability that Jon Udell has been seeking for his calendar-curating project for the last several months.

• Stacey Higginbotham’s The GigaOM Interview: Kristof Kloeckner, CTO of IBM Cloud Computing post of 6/15/2009 begins:

IBM’s first true cloud computing products, announced today, consist of workload-specific clouds that can be run by an enterprise on special-purpose IBM gear, Big Blue building that same cloud on its special-purpose gear running inside a firewall, or running the workload on IBM’s hosted cloud. The offering seems like a crippled compromise between the scalability and flexibility that true computing clouds offer and what enterprises seem to be demanding when it comes to controlling their own infrastructure. I spoke today with the chief technology officer of IBM’s cloud computing division, Kristof Kloeckner, to learn more. [Emphasis added.]

Reuven Cohen summarizes the “Big Blue Cloud” in his The Big Blue Cloud, Getting Ready for the Zettabyte Age post of 6/16/2009:

Well, IBM has gone and done it: they've announced a cloud offering yet again. Actually, what's interesting about this go is not that they're getting into the cloud business (again) but that this time they're serious about it. And like it or not, their approach actually does kind of make sense, assuming you're within their target demographic (the large enterprise looking to save a few bucks).

My summary of the "Big Blue Cloud" is as follows: It's not what you can do for the cloud, but what the cloud can do for you. Or simply, it's about the application, duh? …

James Urquhart’s IBM releases new enterprise cloud portfolio of 6/15/2009 is another analysis of IBM’s “Big Blue Cloud.”

Jason Hiner attempts to answer Why Microsoft, Google, and Amazon are racing to run your data center in this 6/15/2009 post to ZDNet’s Behind the Lines blog:

The race for your data center has already begun. Google, Microsoft, and Amazon are the leading players in a global data center build-out that has not been slowed by the current economic recession and that over next decade will change the face of both consumer computing and IT departments.

The reason why these three companies are building out data center capacity around the world at a breakneck pace is that they want to be ready with enough capacity to handle the two big developments that are preparing to transform the technology world:

  1. Cloud computing: Applications and services delivered over the Internet
  2. Utility computing: On-demand server capacity powered by virtualization and delivered over the Internet

With both of these trends, the biggest target is private data centers. Cloud computing wants to run the big commoditized applications (mail, groupware, CRM, etc.) so that an IT department doesn’t have to run them from a private data center. …

Rich Miller’s IBM’s Cloud Gains Definition post of 6/15/2009 to the Data Center Knowledge site begins:

This week IBM is rolling out new products that begin to bring some definition to its cloud computing roadmap. IBM is offering several services enabling public cloud computing. But Big Blue’s sharpest focus is on the private cloud, which presents an opportunity to sell hardware and software rather than monthly subscriptions.

Here’s what IBM is announcing:

Public Cloud: IBM can run your application testbed in its public cloud today, and will soon offer a subscription service to host virtual desktops in its data centers. The IBM Smart Business Test Cloud Services taps into that public cloud, while the upcoming IBM Smart Business Desktop Cloud will establish a beachhead for expected future growth in enterprise desktop virtualization as a service delivery strategy. …

Private Cloud: IBM CloudBurst provides customers with a private cloud in a single 42U rack for about $200,000. Included is a Websphere CloudBurst Appliance that comes pre-loaded with images for quickly deploying application environments based on IBM’s WebSphere software. … 

John Treadway claims that the NYTimes broke IBM’s 6/16/2009 embargo by releasing details today and posted IBM Smart Business #CloudComputing Press Release (DRAFT) on 6/15/2009. John adds more IBM collateral in Tomorrow’s IBM “Smart Business” #CloudComputing Strategy – Today of the same date.

Steve Lohr provides more background on IBM’s offerings in his I.B.M. to Help Clients Fight Cost and Complexity article for the New York Times of 6/14/2009.

According to a Chris Hoff Tweet of the same date, IBM’s private cloud offering will compete with Cisco’s Unified Computing System.

Nicholas Kolakowski reports Salesforce Offers Free Edition of Force.com in this 6/15/2009 eWeek article:

Salesforce.com announced on June 15 the release of the Force.com Free Edition, a stripped-down version of its cloud computing platform for the enterprise. By relying on cloud-based resources, Force.com clients can run Websites and build Web applications without an on-premises infrastructure.

Each client utilizing the free version of Force.com can deploy their newly built Web applications to up to 100 users. In addition, the free edition gives clients access to one Website with up to 250,000 page views per month, 10 custom objects/custom database tables per user, a sandbox development environment, free online training, and a library of sample applications.

eBizQ’s Force.com Sites Expanding Cloud Platform to Deliver Real-Time Web Sites and Web Applications post of 6/15/2009 describes new Force.com sites:

With the addition of Force.com Sites, companies can now use Force.com to build and run applications for their internal business processes as well as public-facing Web sites - entirely on salesforce.com's real-time cloud computing platform.

Salesforce.com’s press release of 6/15/2009 is here.
