
Wednesday, July 14, 2010

Windows Azure and Cloud Computing Posts for 7/14/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry describes how SQL Azure relates to Windows Azure in his SQL Azure and Windows Azure post of 7/14/2010 to the SQL Azure Team blog:

I have been deep diving into SQL Azure by blogging about circular references and connection handling for the last couple of months – which are great topics. However, in an internal meeting with the Windows Azure folks last week, I realized that I hadn’t really talked about what SQL Azure is and how to get started with SQL Azure. So I am going to take a minute to give you my unique perspective on SQL Azure and how it relates to our brethren, Windows Azure.

SQL Azure is a cloud-based relational database service built on SQL Server technologies. That is the simplest sentence that describes SQL Azure. You can find more of a description here.

Windows Azure

SQL Azure is independent from Windows Azure. You don’t need to have a Windows Azure compute instance to use SQL Azure. However, SQL Azure is the best and only place for storing relational data on the Windows Azure Platform. In other words, if you are running Windows Azure you probably will have a SQL Azure server to hold your data. However, you don’t need to run your application within a Windows Azure account just because you have your data stored in SQL Azure. There are a lot of clients and platforms other than Windows Azure that can make use of SQL Azure, including PowerPivot, WinForms applications (via ADO.NET), JavaScript running in the browser (via OData), Microsoft Access, and SQL Server Reporting Services to name a few.
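
As a concrete illustration of that point, here is a minimal sketch of querying a SQL Azure database from an ordinary off-Azure desktop client; the server name, database, table, and credentials are hypothetical, and pyodbc plus a SQL Server ODBC driver are assumed to be installed:

```python
# Minimal sketch: querying a SQL Azure database from a client that has nothing
# to do with Windows Azure. Server, database, table, and credentials below are
# hypothetical placeholders.
import pyodbc

conn_str = (
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:myserver.database.windows.net,1433;"  # hypothetical server
    "DATABASE=MyAppDb;"                               # hypothetical database
    "UID=myuser@myserver;"                            # SQL Azure logins typically take the user@server form
    "PWD=mypassword;"
    "Encrypt=yes;"                                    # connections to SQL Azure are encrypted (SSL)
)

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()
cursor.execute("SELECT TOP 10 CustomerID, CompanyName FROM dbo.Customers")
for customer_id, company_name in cursor.fetchall():
    print(customer_id, company_name)
conn.close()
```

Nothing in the snippet depends on Windows Azure; any ODBC- or ADO.NET-capable client can reach the same database the same way.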

Windows Azure Platform Training Kit

You don’t have to download the Windows Azure Platform Training Kit in order to use SQL Azure. The Platform Training Kit is great for learning SQL Azure and getting started; however, it is not required.

Here are some of the highlights from the June 2010 Windows Azure Platform Training Kit:

Hands on Labs

  • Introduction to SQL Azure for Visual Studio 2008 Developers
  • Introduction to SQL Azure for Visual Studio 2010 Developers
  • Migrating Databases to SQL Azure
  • SQL Azure: Tips and Tricks

Videos

  • What is SQL Azure? (C9 - VIDEO). You can also view this video here without downloading the Training Kit.

Presentations

  • Introduction to SQL Azure (PPTX)
  • Building Applications using SQL Azure (PPTX)
  • Scaling Out with SQL Azure

Demonstrations

  • Preparing your SQL Azure Account
  • Connecting to SQL Azure
  • Managing Logins and Security in SQL Azure
  • Creating Objects in SQL Azure
  • Migrating a Database Schema to SQL Azure
  • Moving Data Into and Out Of SQL Azure using SSIS
  • Building a Simple SQL Azure App
  • Scaling Out SQL Azure with Database Sharding

Windows Azure SDK

For Windows Azure, there is a development fabric that runs on your desktop to simulate Windows Azure; this is a great tool for Windows Azure development. This development fabric is installed when you download and install the Windows Azure SDK. Installing the SDK is not required for SQL Azure. SQL Server 2008 R2 Express edition is the equivalent of SQL Azure on your desktop. There is no SQL Azure development fabric, nor a simulator for the desktop. This is because SQL Azure is extremely similar to SQL Server, so if you want to test your SQL Azure deployment on your desktop you can test it in SQL Server Express first. They are not 100% compatible (details here); however, you can work out most of your Transact-SQL queries, database schema design, and unit testing using SQL Server Express.

I personally use SQL Azure for all my development, right after I design my schemas in SQL Server Express edition. It is just as fast for me as SQL Server Express, as long as my queries are not returning a lot of data, i.e., a large number of rows with lots of varbinary(max) or varchar(max) data types.
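
As a rough sketch of that Express-first workflow (all names below are hypothetical), only the connection string needs to change when you point the same code at SQL Azure. One known compatibility gap, the SQL Azure requirement that every table have a clustered index, is noted in a comment:

```python
# Sketch of the Express-first workflow described above: develop and unit-test
# against local SQL Server Express, then run the same DDL and queries against
# SQL Azure by swapping the connection string. All names are hypothetical.
import pyodbc

LOCAL_EXPRESS = (
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=.\\SQLEXPRESS;DATABASE=MyAppDb;Trusted_Connection=yes;"
)
SQL_AZURE = (
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=tcp:myserver.database.windows.net,1433;DATABASE=MyAppDb;"
    "UID=myuser@myserver;PWD=mypassword;Encrypt=yes;"
)

def create_schema(conn_str):
    conn = pyodbc.connect(conn_str)
    # SQL Azure requires a clustered index (e.g., a clustered primary key) on
    # every table, so a heap that works on Express will fail in the cloud.
    conn.execute(
        "CREATE TABLE dbo.Orders ("
        " OrderID int NOT NULL PRIMARY KEY CLUSTERED,"
        " CustomerID int NOT NULL,"
        " OrderDate datetime NOT NULL)"
    )
    conn.commit()
    conn.close()

create_schema(LOCAL_EXPRESS)   # iterate locally first...
# create_schema(SQL_AZURE)     # ...then apply the same schema to SQL Azure
```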

Downloads

So if you don’t need the Windows Azure Platform Training Kit, nor the Windows Azure SDK in order to use SQL Azure, what should you download and install locally? If you are developing on SQL Azure, I recommend SQL Server Management Studio Express 2008 R2. SQL Server Management Studio provides some of the basics for managing your tables, views, and stored procedures, and also helps you visualize your database. If you are using PowerPivot to connect to a “Dallas” or private SQL Azure database you just need to have PowerPivot installed. The same goes for all the other clients that can connect to SQL Azure; obviously, you have to have them installed.

Purchase

If you are going to store your Windows Azure data in SQL Azure, then there are some good introductory and incentive plans, which can be found here. If you just want to use SQL Azure, then you want to purchase the “Pay as you Go” option from this page. That said, there are always new offers, so things could change after this blog post.

Summary

You can use SQL Azure independently of Windows Azure, and no desktop software is required to use or develop against SQL Azure. However, if you are using Windows Azure, SQL Azure is the best storage for your relational data.

M Dingle reports WPC: GCommerce creates cloud-based inventory system using Microsoft SQL Azure to transform the special order process in a post of 7/13/2010 to the TechNet Blogs:

In D.C. this week at WPC, listen to Steve Smith of GCommerce talk about growing market momentum for its Virtual Inventory Cloud (VIC) solution. Built on the Windows Azure platform, VIC enables hundreds of commercial wholesalers and retailers to improve margins and increase customer loyalty through better visibility into parts availability from suppliers.

Read the full release on GCommerce's website.

Chris Pendleton’s Data Connector: SQL Server 2008 Spatial & Bing Maps post of 7/13/2010 to the Bing Community blog notes that the new Bing Data Connector works with SQL Azure’s spatial data types:

For those of you who read my blog, by now I’m sure you are all too aware of SQL Server 2008’s spatial capabilities. If you’re not up to speed with the spatial data support in SQL Server 2008, I suggest you read up on it via the SQL Server site. Now, with the adoption of spatial into SQL Server 2008 there are a lot of questions around Bing Maps rendering support. Since we are one Microsoft, I’m sure you heard the announcement of SQL Server 2008 R2 Reporting Services natively supporting Bing Maps.

Okay, so let’s complete the cycle. Now, I want more than just Reporting Services – I want access to all those spatial methods natively built into SQL Server 2008. I want to access the geography and geometry spatial data types for rendering onto the Bing Maps Silverlight Control. Enter, the Data Connector. OnTerra has created a simple, open-source way to complete the full cycle of importing data into SQL Server 2008 and rendering it onto Bing Maps as points, lines and polygons. Data Connector is available now on CodePlex, so go get it.

DataConnector for SQL Server 2008 and Bing Maps

Do you know what this means??? It means with only a few configurations (and basically no coding) you can pull all of your wonderful geo-data out of SQL Server 2008 and render it onto Bing Maps Platform! You get to tweak the functions for colors and all that jazz; but, holy smokes this will save you a bunch of time in development. Okay, so using SQL Server 2008 as your database for normal data also gets you free spatial support, and now you have a simple way to visualize and visually analyze all of the information coming out of the database. Did you know that SQL Server 2008 Express (you know, the free desktop version of SQL Server 2008) has spatial support?

Did you know that we launched SQL Server 2008 into Windows Azure calling it SQL Azure? So, if you’re moving to the cloud out of the server farm you have access to all the spatial methods in SQL Server 2008 from the Windows Azure cloud. Any way you want to slice and dice the data, we have you covered – desktop, server, cloud; and, Bing Maps with the Data Connector is now ready to easily bring all the SQL Server 2008 data to life without any other software, webware or middleware needed!
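
To ground the “desktop, server, cloud” claim, here is a minimal sketch of round-tripping a geography point through the spatial types that SQL Server 2008, SQL Server 2008 Express, and SQL Azure share; the table and connection details are hypothetical, and the well-known text returned by STAsText() is the kind of value a map layer such as Bing Maps would consume:

```python
# Sketch: storing and reading back a geography point with the spatial support
# shared by SQL Server 2008, SQL Server 2008 Express, and SQL Azure.
# Table name and connection details are hypothetical placeholders.
import pyodbc

# Works against local Express or a SQL Azure database; see the earlier sketches
# for a cloud connection string.
conn = pyodbc.connect(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=.\\SQLEXPRESS;DATABASE=MyGeoDb;Trusted_Connection=yes;"
)

conn.execute(
    "CREATE TABLE dbo.Stores ("
    " StoreID int NOT NULL PRIMARY KEY CLUSTERED,"
    " Name nvarchar(100) NOT NULL,"
    " Location geography NOT NULL)"
)

# geography::Point(latitude, longitude, SRID) builds a point; 4326 is WGS 84.
conn.execute(
    "INSERT INTO dbo.Stores (StoreID, Name, Location) "
    "VALUES (1, N'Seattle store', geography::Point(47.6097, -122.3331, 4326))"
)
conn.commit()

# STAsText() returns well-known text, e.g. 'POINT (-122.3331 47.6097)',
# which is easy to hand to a map control for rendering.
cursor = conn.cursor()
cursor.execute("SELECT Name, Location.STAsText() FROM dbo.Stores")
for name, wkt in cursor.fetchall():
    print(name, wkt)
conn.close()
```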

Oh, and we have a HUGE Bing Maps booth at the ESRI User Conference going on RIGHT NOW in San Diego. If you’re there, stop by and chat it up with some of the Bing Maps boys. Wish I was there. Also, a reminder that Bing Maps (and OnTerra) are at the Microsoft Worldwide Partner Conference, so if you’re there stop by and talk shop with them too.

Here are some helpful links:

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Joe Panattieri interviewed Jon Roskill re partners marketing BPOS vs. Windows Azure apps in his Microsoft Channel Chief: Potential BPOS Marketplace? post of 7/14/2010 to the MSPMentor blog:

A few minutes ago, I asked Microsoft Channel Chief Jon Roskill the following question: Can small VARs and MSPs really profit from basic BPOS (Business Productivity Online Suite) applications like SharePoint Online and Exchange Online? Or should those small partners begin to focus more on writing their own targeted cloud applications for Windows Azure? Roskill provided these perspectives.

First, let’s set the stage: VARs and MSPs receive a flat monthly fee for reselling BPOS applications like Exchange Online and SharePoint Online. In stark contrast, Windows Azure is an extendable cloud platform. Partners can use Azure to write and deploy customer applications in the Microsoft cloud, which means partners can create business assets that deliver even more revenue on Azure.

But Roskill offered some caution. He noted that business app stores are quite different from consumer marketplaces like the Apple App Store. Yes, developers can write applications for Azure. But he also sees opportunities for partners to profit from BPOS add-ons.

How? It sounds like Microsoft plans to take some steps to expand BPOS as a platform that VARs and MSPs can use for application extensions. For instance, Roskill mentioned potential workflow opportunities for VARs and MSPs down the road.

Roskill certainly didn’t predict that Microsoft would create a BPOS application marketplace. But the clues he shared suggest BPOS will become more than basic hosted applications. It sounds like Microsoft will find a way for VARs and MSPs to truly add value, rather than simply reselling standard hosted applications.

The press conference with Roskill continues now. Back later with more thoughts.

Watch Jon Roskill’s Vision Keynote video of 7/14/2010 here.

PC Magazine defines an MSP (Managed Service Provider) as follows:

(Managed Service Provider) An organization that manages a customer's computer systems and networks, which are either located on the customer's premises or at a third-party datacenter. MSPs offer a variety of service levels, from just notifying the customer if a problem occurs to making all necessary repairs themselves. MSPs may also be a source of hardware and staff for their customers.

Here’s Joe Panattieri’s promised “more thoughts” about Jon Roskill in a subsequent Microsoft Channel Chief Concedes: I Need to Learn About MSPs post of 7/14/2010:

It was a brief but revealing comment. During a press briefing at Microsoft Worldwide Partner Conference 2010 (WPC10) today, new Microsoft Channel Chief Jon Roskill (pictured) described his views on VARs, distributors, integrators and other members of the channel partner ecosystem. Then, in an unsolicited comment, he added: “MSP… it’s a piece [of the channel] I need to learn more about.” Rather than being depressed by Roskill’s comment, I came away impressed. Here’s why.

When Roskill entered the press meeting, he and I spoke on the side for a few moments. Roskill mentioned he had read MSPmentor’s open memo to him. Roskill said he agreed with the memo — which called on Microsoft’s channel team to spend more time with MSPs, among other recommendations.

Roskill is a long-time Microsoft veteran but he’s been channel chief for less than a month. He’s drinking from a fire hose — navigating the existing Microsoft Partner program and new announcements here at the conference.

Sticking to the script, Roskill has been reinforcing CEO Steve Ballmer’s core cloud messages to partners — Get in now or get left behind. During today’s press conference, Roskill described why he’s upbeat about BPOS, Azure and the forthcoming Azure Appliance for partners. (I’ll post a video recap later.)

But Roskill’s actions may wind up speaking louder than his words. During a keynote session on Tuesday, Roskill’s portion of the agenda was hit and miss. On stage under the Tuesday spotlight, he looked like an Olympic skater still getting a feel for the ice.

New Day, New Audience

During the far smaller press conference today (Wednesday), Roskill looked at ease. He even took a few moments to walk the room, shake hands with reporters and introduce himself during brief one-on-one hellos with about 20 members of the media. Some media members may question Microsoft’s channel cloud strategy. But Roskill welcomed the dialog — which is a solid first step as Channel Chief.

No doubt, Roskill still has more work to do. Portions of the media asked — multiple times — why Microsoft won’t let partners manage end-customer billing for BPOS. Each time, Roskill said he’s listening to partner feedback and media inquiries, and Microsoft has adjusted its BPOS strategy from time to time based on that feedback. But ultimately, it sounds like Microsoft thinks it will become too difficult to give each partner so much individual control over billing.

Good Listener?

That said, Roskill is an approachable guy. He’s obviously reading online feedback from the media and from partners. During the 30-minute press conference, he made that 15-second admission about needing to learn more about MSPs.

I respect the fact that he said it, and I look forward to seeing how Microsoft attempts to move forward with MSPs.

Read More About This Topic

Microsoft recently posted a new Why Dynamic Data Center Toolkit? landing page for the Dynamic Data Center Toolkit v2 (DDCTK-H) for MSBs, which was released in June 2010:

Build Managed Services with the Dynamic Data Center Toolkit for Hosting Providers

Your customers are looking for agile infrastructure delivered from secure, highly available data centers so they can quickly respond to rapid business fluctuations. You can address this need by extending your hosting portfolio with high-value, high-margin services, such as managed hosting, on demand virtualized servers, clustering, and network services.

With the Dynamic Data Center Toolkit for Hosting Providers, you can deliver these services, built on Microsoft® Windows Server® 2008 Hyper-V™ and Microsoft System Center. The toolkit contains guidance to help you establish appropriate service level agreements (SLAs) and create portals that your customers can use to directly provision, manage, and monitor their infrastructure.

In-Depth Resources

The Dynamic Data Center Toolkit enables you to build an ongoing relationship with your customers while you scale your business with these resources:

  • Step-by-step instructions and technical best practices to provision and manage a reliable, secure, and scalable data center
  • Customizable marketing material you can use to help your customers take advantage of these new solutions
  • Sample code and demos to use in your deployment

The MSB landing page offers two screencasts and links to other resources for MSBs. The Microsoft Cloud Computing Infrastructure landing page targets large enterprises.


Josh Greenbaum recommends xRM as an Azure app toolset in his How to Make Money in the Cloud: Microsoft, SAP, the Partner Dilemma and The Tools Solution post of 7/13/2010 to the Enterprise Irregulars blog:

It’s cloud’s illusions I recall, I really don’t know clouds at all…..

One of the primary devils in the details with cloud computing will always be found in the chase for margins, and this is becoming abundantly clear for Microsoft’s market-leading partner ecosystem, gathered this week in Washington, D.C., for their Worldwide Partner Conference. Chief cheerleader Steve Ballmer repeated the standard mantra about how great life will be in the cloud, something I tend to agree with. But missing from Ballmer’s talk was the money quote for Microsoft’s partners: “…..and here’s how, once we’ve stripped the implementation and maintenance revenue from your business, you’ll be able to make a decent margin on your cloud business.”

More than the coolness of the technology, it’s the coolness of the possibility for profit margins that will make or break Microsoft Azure and any other vendor’s cloud or cloud offering. This is hardly just Microsoft’s problem; SAP has grappled with this margin problem as it has moved in fits and starts towards this year’s re-launch of Business ByDesign. And it’s top of mind across the growing legions of ISVs and developers looking at what they can do to capitalize on this tectonic shift in the marketplace.

The trick with chasing cloud margins is that the chase dovetails nicely with a concept I have been touting for a while, the value-added SaaS opportunity. What is obvious from watching Ballmer and his colleagues discuss the future according to Azure is that much of the profits to be gained from moving existing applications and services into the cloud will largely go to Microsoft. That’s the first generation SaaS opportunity – save money by flipping existing applications to the cloud and reap the economies of scale inherent in consolidating maintenance, hardware, and support costs. To the owner of the cloud goes the spoils.

This is the same issue, by the way, bedeviling SAP’s By Design: SAP will run its own ByD cloud services, and in doing so, assuming that the problems with the cost-effectiveness of ByD have been fully resolved, sop up the first-order profits inherent in the economies of scale of the cloud. How SAP’s partners will make the healthy margins they need to be in the game with SAP has been, in retrospect, a bigger problem than the technology issues that stymied ByD’s initial release. And, by the way, thinking that value-added partners – the smart, savvy ones SAP wants to have on board selling ByD – will be happy with a volume business won’t cut it. Smart and savvy won’t be interested in volume, IMO.

But there is an answer that will appeal to everyone: build net new apps that, in the words of Bob Muglia, president of Microsoft’s server and tools business, weren’t possible before. This act of creation is where the margins will come from. And the trick for partners of any cloud company is how easy the consummation of this act of creation will be.

The easiest way for partners to create this class of apps is for the vendor to offer the highest level of abstraction possible: from a partner/developer standpoint, this means providing, out of the box and in the most consumable manner possible, a broad palette of business-ready services that form the building blocks for net new apps.

Microsoft is starting to do this, and Muglia showed off Dallas, a data set access service that can find publicly and privately available data sets and serve up the data in an Azure application, as an example of what this means. As Azure matures, bits and pieces of Microsoft Dynamics will be available for use in value-added applications, along with value-added services for procurement, credit card banking, and other business services.

This need for value-added apps then raises two important questions for Microsoft and the rest of the market: question one is what is the basic toolset to be used to get value-added SaaS app development started? And question two: who, as in what kind of developer/partner, is best qualified to build these value-added apps?

I’ll start with the second question first. I have maintained for a while that, when it comes to the Microsoft partner ecosystem, it’s going to be up to the Dynamics partners to build the rich, enterprise-class applications that will help define this value-added cloud opportunity. The main reason for this is that the largest cloud opportunity will lie in vertical, industry-specific applications that require deep enterprise domain knowledge. This is the same class of partner SAP will need for ByD as well.

Certainly there will be opportunities for almost all of the 9 million Microsoft developers out there, especially when it comes to integrating the increasingly complex Microsoft product set – SharePoint, SQL Server, Communications Server, etc. – into the new cloud-based opportunities.

But with more of the plumbing and integration built into Azure, those skills, like the products they are focused on, will become commoditized and begin to wane in importance and value, and therefore limit the ability of these skills to contribute to strong margins.

The skills that will rise like cream to the top will be line of business and industry skills that can serve as the starting point for creating new applications that fill in the still gaping white spaces in enterprise functionality. Those skills are the natural bailiwick of many, but not all, Dynamics partners: Microsoft, like every other channel company, has its share of great partners and not so great partners, depending on the criteria one uses to measure channel partner greatness. Increasingly, in the case of Azure, that greatness will be defined by an ability to create and deliver line of business apps, running in Azure, that meet specific line of business requirements.

The second skillset, really more ancillary than completely orthogonal to the first, involves imagining, and then creating, the apps “that have never been built before.” This may task even the best and brightest of the Dynamics partners, in part because many of these unseen apps will take a network approach to business requirements that isn’t necessarily well-understood in the market. There’s a lot of innovative business thinking that goes into building an app like that, the kind found more often in start-ups than in existing businesses, but either way it will be this class of app that makes Azure really shine.

Back to the toolset question: Microsoft knows no shortage of development tools under Muglia’s bailiwick, but there’s one that comes from his colleague Stephen Elop’s Business Solutions group that is among the best-suited for the job. Muglia somewhat obliquely acknowledged, when I asked him directly, that this tool would one day be part of the Azure toolset, but it’s clear we disagree about how important that toolset is.

The toolset in question is xRM, the extended CRM development environment that Elop’s Dynamics team has been seeding the market with for a number of years. xRM is nothing short of one of the more exciting ways to develop applications, based on a CRM-like model, that can be run on premise or online today, and on Azure next year. While there are many things one can’t do with xRM, and which therefore require some of Muglia’s Visual Studio tools, Microsoft customers today are building amazingly functional apps in a multitude of industries using xRM.

The beauty of xRM is that development can start at a much higher level of abstraction than is possible with a Visual Studio-like environment. This is due to the fact that it offers up the existing services of Dynamics CRM, from security to workflow to data structures, as development building blocks for the xRM developer. One partner at the conference told me that his team can deliver finished apps more than five times faster with xRM than with Visual Studio, which either means he can put in more features or use up less time or money. Either one looks pretty good to me.

The success of xRM is important not just for Microsoft’s channel partners: the ones who work with xRM fully understand its value in domains such as Azure. xRM also defines a model for solving SAP’s channel dilemma for ByD as well. The good news for SAP is that ByD will have an xRM-like development environment by year’s end, one that can theoretically tap into a richer palette of processes via ByD than xRM can via Dynamics CRM.

Of course, xRM has had a double head start: it’s widely used in the market, and there are a few thousand partners who know how to deploy it. And xRM developers will have the opportunity when Dynamics CRM 2011 is released (in 2011, duh) of literally throwing a switch and deploying their apps on premise or in the cloud. Thus far, ByD’s SDK only targets on-demand deployments. And there are precious few ByD partners today, and no one has any serious experience building ByD apps.

What’s interesting for the market is that xRM and its ByD equivalent both represent fast-track innovation options that can head straight to the cloud. This ability to innovate on top of the commodity layer that Azure and the like provide is essential to the success of Microsoft’s and SAP’s cloud strategies. Providing partners and developers with a way to actually make money removes an important barrier to entry, and in the process opens up an important new avenue for delivering innovation to customers. Value-added SaaS apps are the future of innovation, and tools like xRM are the way to deliver them.  Every cloud  has a silver lining: tools like xRM hold the promise of making  that lining solid gold.

The Microsoft Case Study Team dropped Software Developers [Intergrid] Offer Quick Processing of Compute-Heavy Tasks with Cloud Services on 7/8/2011:

InterGrid is an on-demand computing provider that offers cost-effective solutions to complex computing problems. While many customers need support for months at a time, InterGrid saw an opportunity in companies that need only occasional bursts of computing power—for instance, to use computer-aided drafting data to render high-quality, three-dimensional images. In response, it developed its GreenButton solution on the Windows Azure platform, which is hosted through Microsoft data centers. Available to software vendors that serve industries such as manufacturing, GreenButton can be embedded into applications to give users the ability to call on the power of cloud computing as needed. By building GreenButton on Windows Azure, InterGrid can deliver a cost-effective, scalable solution and has the opportunity to reach another 100 million industry users with a reliable, trustworthy platform.

Business Situation

The company wanted to give software developers the opportunity to embed on-demand computing options into applications, and it needed a reliable, trustworthy cloud services provider.

Solution

InterGrid developed its GreenButton solution using the Windows Azure platform, giving software users the option to use cloud computing in an on-demand, pay-as-you-go model.

Benefits

  • Increases scalability
  • Reduces costs for compute-heavy processes
  • Delivers reliable solution

<Return to section navigation list> 

Windows Azure Infrastructure

Mary Jo Foley quotes a Microsoft slide from WPC10 in her Microsoft: 'If we don't cannibalize our existing business, others will' post of 7/14/2010 to the All About Microsoft blog:

That’s from a slide deck from one of many Microsoft presentations this week at the company’s Worldwide Partner Conference, where company officials are working to get the 14,000 attendees onboard with Microsoft’s move to the cloud. It’s a pretty realistic take on why Microsoft and its partners need to move, full steam ahead, to slowly but surely lessen their dependence on on-premises software sales.

Outside of individual sessions, however, Microsoft’s messaging from execs like Chief Operating Officer Kevin Turner, has been primarily high-level and inspirational.

“We are the undisputed leader in commercial cloud services,” Turner claimed during his July 14 morning keynote. “We are rebooting, re-pivoting, and re-transitioning the whole company to bet on cloud services.”

Turner told partners Microsoft’s revamped charter is to provide “a continuous cloud service for every person and every business.” He described that as a 20-year journey, and said it will be one where partners will be able to find new revenue opportunities. …

Read more

Watch Kevin Turner’s Vision Keynote 7/14/2010 video here.

<Return to section navigation list> 

Windows Azure Platform Appliance 

Chris Czarnecki claims Private Clouds Should Not Be Ignored in this 7/14/2010 post to Learning Tree’s Perspectives on Cloud Computing blog:

With the publicity surrounding Cloud Computing, it is easy to form the opinion that Cloud Computing means using Microsoft Azure, Google App Engine or Amazon EC2. These are public clouds that are shared by many organisations. Related to using a public cloud is the question of security. What often gets missed in these discussions is the fact that Cloud Computing very definitely offers a private cloud option.

Leveraging a private cloud can offer an organisation many advantages, not least of which is a better utilisation of existing on-premise IT resources. The number of products available to support private clouds is growing. For instance, Amazon recently launched their Virtual Private Cloud, which provides a secure, seamless bridge between a company’s existing IT infrastructure and the AWS cloud. Eucalyptus provides a cloud infrastructure for private clouds which, in the latest release, includes Windows image hosting, group management, quota management and accounting. This is a truly comprehensive solution for those wishing to run a private cloud.

The latest addition to the private cloud landscape is the launch by Microsoft of the Appliance for building private clouds. Microsoft have developed a private cloud appliance in collaboration with eBay. This product complements Azure by enabling the cloud to be locked down behind firewalls and intrusion detection systems, perfect for handling customer transactions and private data. Interestingly, the private cloud appliance allows Java applications to run as first class citizens alongside .NET. Microsoft is partnering with HP, Fujitsu and Dell, who will adopt the appliance for their own cloud services.

I am really encouraged by these developments in private cloud offerings, as when I am consulting and teaching the Learning Tree Cloud Computing course, concerns about security and data location are offered as barriers to cloud adoption. My argument that private or hybrid clouds offer solutions is being reinforced by the rapidly increasing products in this space and their adoption by organisations such as eBay.

Alex Williams is conducting ReadWriteCloud’s Weekly Poll: Will Private Clouds Prevail Or Do Platforms Represent the Future? in this 7/13/2010 post:

The news from Microsoft this week about its private cloud initiative points to an undeniable trend. The concept of the private cloud is here to stay.

This sticks in the craw of many a cloud computing veteran who makes a clear distinction between an Internet environment and an optimized data center.

Amazon CTO Werner Vogels calls the private variety "false clouds." Vogels maintains that you can't add cloud elements to a data center and suddenly have a cloud of your own.

In systems architecture diagrams, servers are represented by boxes and storage by containers. Above it all, the Internet is represented by a cloud. That's how the cloud got its name. Trying to fit the Internet in a data center would be like trying to fit the universe in a small, little house. It's a quantum impossibility.

Our view is more in line with Vogels. In our report on the future of the cloud, Mike Kirkwood wrote that it is the rich platforms that will come to define the cloud. That makes more sense for us as platforms are part of an Internet environment, connected in many ways by open APIs.

But what do you think? Here's our question:

  • Will Private Clouds Prevail Or Do Platforms Represent the Future?
  • Platforms will prevail as services. They are the true foundation for the organic evolution of the cloud.
  • Private clouds are only an intermediary step before going to a public cloud environment.
  • Yes, private clouds will rule. The enterprise is too concerned about security to adopt public cloud environments.
  • There will be no one cloud environment that prevails. Each has its own value.
  • No, private clouds will serve to optimize data centers but they will not be the cloud of choice for the enterprise.
  • Private clouds will be the foundation for most companies. They will use public and hybrid clouds but the private variety will be most important.
  • Baloney. Private clouds are glorified data centers.
  • Other

Go to the ReadWriteCloud and vote!

The concept of the private cloud is perceived as real, no surprise, by the players represented in the vast ecosystem that is the enterprise. You can hear their big sticks rattling as they push for private clouds.

Virtualization is important for data center efficiency. It helps customers make a transition to the public cloud.

But it is not the Internet. Private clouds still mean that the enterprise is hosting its own applications. And for a new generation, that can make it feel a bit like a college graduate of 30 years ago walking into an enterprise dominated by mainframes and green screens.

That's not necessarily a bad thing. A private cloud is just not a modern-oriented vision. To call it cloud computing is just wrong. It's like saying I am going to have my own, private Internet.

Still, we expect the term to be here for the long run, especially as the marketing amps up and "cloud in a box" services start popping up across the landscape.

The only thing we can hope is that these private clouds allow for data to flow into public clouds.

It is starting to feel like the concept of the cloud is at the point where the tide can go either way. Will we turn the cloud into an isolated environment or will the concepts of the Internet prevail? Those are big questions that will have defining consequences.

<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie describes Exorcising your digital demons in her Out, Damn’d Bot! Out, I Say! post of 7/14/2010 to F5’s DevCentral blog:


Most people are familiar with Shakespeare’s The Tragedy of Macbeth. Of particularly common usage is the famous line uttered repeatedly by Lady Macbeth, “Out, damn’d spot! Out, I say” as she tries to wash imaginary bloodstains from her hands, wracked with the guilt of the many murders of innocent men, women, and children she and her husband have committed.

It might be no surprise to find a similar situation in the datacenter, late at night. With the background of humming servers and cozily blinking lights shedding a soft glow upon the floor, you might hear some of your infosecurity staff roaming the racks and crying out “Out, damn’d bot! Out I say!” as they try to exorcise digital demons from their applications and infrastructure. 

Because once those bots get in, they tend to take up a permanent residence. Getting rid of them is harder than you’d think because like Lady Macbeth’s imaginary bloodstains, they just keep coming back – until you address the source.

A RECURRING NIGHTMARE

One of the ways in which a bot can end up in your datacenter wreaking havoc and driving your infosec and ops teams insane is through web application vulnerabilities. These vulnerabilities, in both the underlying language and server software as well as the web application itself, are generally exploited through XSS (cross-site scripting). While we tend to associate XSS with attempts to corrupt a data source and subsequently use it as a distribution channel for malware, a second use of XSS is to enable the means by which a bot can be loaded onto an internal resource. From there the bot can spread, perform DoS attacks on the local network, be used as a SPAM bot, or join in a larger bot network as part of a DDoS.

The uses of such deposited bots are myriad and always malevolent.

Getting rid of one is easy. It’s keeping it gone that’s the problem. If it entered via a web application it is imperative that the vulnerability be found and patched. And it can’t wait for days, because the attacker that exploited that vulnerability and managed to deploy the bot inside your network is probably coming back. In a few hours. You can remove the one that’s there now, but unless the hole is closed, it’ll be back – sooner rather than later.

According to an ARS Technica report, you may already know this pain as “almost all Fortune 500 companies show Zeus botnet activity”:

Up to 88% of Fortune 500 companies may have been affected by the Zeus trojan, according to research by RSA's FraudAction Anti-Trojan division, part of EMC.

The trojan installs keystroke loggers to steal login credentials to banking, social networking, and e-mail accounts.

This is one of those cases in which when is more important than where the vulnerability is patched initially. Yes, you want to close the hole in the application, but the reality is that it takes far longer to accomplish that than it will take for attackers to redeposit their bot. According to WhiteHat Security’s Fall 2009 Website Security Statistics Report (PDF), a website is 66 percent likely to be vulnerable to an XSS exploit, and it will take an average of 67 days for that vulnerability to be patched.

That’s 67 days in which the ops and infosec guys will be battling with the bot left behind – cleaning it up and waiting for it to show up again. Washing their hands, repeatedly, day and night, in the hopes that the stain will go away.

WHAT CAN YOU DO?

The best answer to keeping staff sane is to employ a security solution that’s just a bit more agile than the patching process. There are several options, after all, that can be implemented nearly immediately to plug the hole and prevent the re-infection of the datacenter while the development team is implementing a more permanent solution.

Virtual patching provides a nearly automated means of plugging vulnerabilities by combining vulnerability assessment services with a web application firewall to virtually patch, in the network, a vulnerability while providing information necessary to development teams to resolve the issue permanently.

Network-side scripting can provide the means by which a vulnerability can be almost immediately addressed manually. This option provides a platform on which vulnerabilities that may be newly discovered and have not yet been identified by vulnerability assessment solutions can be addressed.

A web application firewall can be manually instructed to discover and implement policies to prevent exploitation of vulnerabilities. Such policies generally target specific pages and parameters and allow operations to target protection at specific points of potential exploitation.
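
As a purely conceptual sketch of the stop-gap idea behind these options (not any vendor's product), the snippet below rejects requests that try to exploit a hypothetical vulnerable parameter before they reach the application, buying time while developers fix the code itself:

```python
# Conceptual sketch only: a stop-gap "virtual patch" in front of a web app.
# The vulnerable path and parameter are hypothetical; the same rule could be
# expressed as a WAF policy or a network-side script.
import re
from urllib.parse import parse_qs

# Hypothetical scan finding: the "comment" parameter of /guestbook is
# vulnerable to XSS, so block obvious script injection in that parameter.
BLOCKED_PATH = "/guestbook"
XSS_PATTERN = re.compile(r"<\s*script|javascript:", re.IGNORECASE)

def virtual_patch(app):
    """Wrap a WSGI app so exploit attempts never reach the vulnerable code."""
    def patched_app(environ, start_response):
        if environ.get("PATH_INFO", "") == BLOCKED_PATH:
            params = parse_qs(environ.get("QUERY_STRING", ""))
            for value in params.get("comment", []):
                if XSS_PATTERN.search(value):
                    start_response("403 Forbidden", [("Content-Type", "text/plain")])
                    return [b"Request blocked by virtual patch"]
        return app(environ, start_response)  # everything else passes through unchanged
    return patched_app
```

The point is not the regex (blacklists are easy to evade) but the placement: the rule sits in front of the application, can be deployed in minutes, and can be retired once the code-level fix ships.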


All three options can be used together or individually. All three options can be used as permanent solutions or temporary stop-gap measures. All three options are just that, options, that make it possible to address vulnerabilities immediately rather than forcing the organization to continually battle the results of a discovered vulnerability until developers can address it themselves.

More important than how is when, and as important as when is that the organization have in place a strategy that includes a tactical solution for protection of exploited vulnerabilities between discovery and resolution. It’s not enough to have a strategy that says “we find vulnerability. we fix. ugh.” That’s a prehistoric attitude that is inflexible and inherently dangerous given the rapid evolution of botnets over the past decade and will definitely not be acceptable in the next one. …

Read more

Michael Krigsman posted his Gartner releases cloud computing ‘rights and responsibilities’ analysis to the Enterprise Irregulars blog on 7/14/2010:

Analyst firm Gartner published a set of guidelines intended to ease relationships between cloud vendors and users. As cloud computing becomes more pervasive, the ecosystem (including vendors and analysts) is seeking ways to align expectations among relevant parties.

Gartner specified “six rights and one responsibility of service customers that will help providers and consumers establish and maintain successful business relationships:”

The right to retain ownership, use and control one’s own data – Service consumers should retain ownership of, and the rights to use, their own data.

The right to service-level agreements that address liabilities, remediation and business outcomes – All computing services – including cloud services – suffer slowdowns and failures. However, cloud services providers seldom commit to recovery times, specify the forms of remediation or spell out the procedures they will follow.

The right to notification and choice about changes that affect the service consumers’ business processes – Every service provider will need to take down its systems, interrupt its services or make other changes in order to increase capacity and otherwise ensure that its infrastructure will serve consumers adequately in the long term. Protecting the consumer’s business processes entails providing advanced notification of major upgrades or system changes, and granting the consumer some control over when it makes the switch.

The right to understand the technical limitations or requirements of the service up front – Most service providers do not fully explain their own systems, technical requirements and limitations so that after consumers have committed to a cloud service, they run the risk of not being able to adjust to major changes, at least not without a big investment.

The right to understand the legal requirements of jurisdictions in which the provider operates – If the cloud provider stores or transports the consumer’s data in or through a foreign country, the service consumer becomes subject to laws and regulations it may not know anything about.

The right to know what security processes the provider follows - With cloud computing, security breaches can happen at multiple levels of technology and use. Service consumers must understand the processes a provider uses, so that security at one level (such as the server) does not subvert security at another level (such as the network).

The responsibility to understand and adhere to software license requirements - Providers and consumers must come to an understanding about how the proper use of software licenses will be assured.

Readers interested in this topic should also see enterprise analyst Ray Wang’s Software as a Service (SaaS) Customer’s Bill of Rights. That document describes a set of practices to ensure consumer protections across the entire SaaS lifecycle (see the diagram in the original post).

My take. For cloud computing to achieve sustained success and adoption, the industry must find ways to simplify and align expectations between us

Michael is a well-known expert on why IT projects fail, CEO of Asuret, a Brookline, MA consultancy that uses specialized tools to measure and detect potential vulnerabilities in projects, programs, and initiatives. He’s also a popular and prolific blogger, writing the IT Project Failures blog for ZDNet.

The Government Information Security blog offered A CISO's Guide to Application Security white paper from Fortify on 7/14/2010:

Focusing on security features at both the infrastructure and application level isn't enough. Organizations must also consider flaws in their design and implementation. Hackers looking for security flaws within applications often find them, thereby accessing hardware, operating systems and data. These applications are often packed with Social Security numbers, addresses, personal health information, or other sensitive data.

In fact, according to Gartner, 75% of security breaches are now facilitated by applications. The National Institute of Standards and Technology, or NIST, raises that estimate to 92%. And from 2005 to 2007 alone, the U.S. Air Force says application hacks increased from 2% to 33% of the total number of attempts to break into its systems.

To secure your agency's data, your approach must include an examination of the application's inner workings, and the ability to find the exact lines of code that create security vulnerabilities. It then needs to correct those vulnerabilities at the code level. As a CISO, you understand that application security is important. What steps can you take to avoid a security breach?

Read the CISO's Guide to Application Security to learn:

  • The significant benefits behind application security
  • How to implement a comprehensive prevention strategy against current & future cyberattacks
  • 6 quick steps to securing critical applications

Download Whitepaper

<Return to section navigation list> 

Cloud Computing Events

The Windows Azure Platform Partner Hub posted Day 2 WPC: Cloud Essentials Pack and Cloud Accelerate Program on 7/14/2010:

The Microsoft Partner Network is trying to make it easier for partners to adopt new cloud technologies, and is providing a logo and competency benefits for the partners who are driving successful business by closing deals with Microsoft Online Services, Windows Intune, and the Windows Azure Platform.

Get details on the Cloud Essentials Pack and Cloud Accelerate Program here: www.microsoftcloudpartner.com/

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Liz McMillan asserts “Health information exchange will enable secure web-based access” as a preface to her Verizon Unveils 'Cloud' Solution to Facilitate Sharing of Information post of 7/14/2010:

One of the biggest obstacles to sharing patient information electronically is that health care systems and providers use a wide range of incompatible IT platforms and software to create and store data in various formats. A new service - the Verizon Health Information Exchange - will soon be available via the "cloud" to address this challenge.

The Verizon Health Information Exchange, one of the first services of its kind in the U.S., will consolidate clinical patient data from providers and translate it into a standardized format that can be securely accessed over the Web. Participating exchange providers across communities, states and regions will be able to request patient data via a secure online portal, regardless of the IT systems and specific protocols the providers use. This will enable providers to obtain a more complete view of a patient's health history, no matter where the data is stored.

Having more information at their fingertips will help providers reduce medical errors and duplicative testing, control administrative costs and, ultimately, enhance patient safety and treatment outcomes. With monthly charges based on a provider's patient-record volume, the service is economical because subscribers only pay for what they use.

"By breaking down the digital silos within the U.S. health care delivery system, the Verizon Health Information Exchange will address many of the interoperability barriers that prevent sharing of clinical data between physicians, clinics, hospitals and payers," said Kannan Sreedhar, vice president and global managing director, Verizon Connected Health Care Solutions. "Providing secure access to patient data will enable health care organizations to make a quantum leap forward in the deployment of IT to meet critical business and patient-care issues."

Adoption of health information exchanges is expected to grow as a result of the American Recovery and Reinvestment Act of 2009. As of March, 56 federal grants totaling $548 million have been awarded to states to facilitate and expand the secure electronic movement and use of health information among organizations, using nationally recognized standards.

Strong Security, Comprehensive Service

Because the Verizon Health Information Exchange will be delivered via Verizon's cloud computing platform, health care organizations will be able to use their current IT systems, processes and workflows, without large additional capital expenditures. The service will be ideal for large and small health care providers.

The Verizon Health Information Exchange will use strong identity access management controls to provide security for sensitive patient information. Only authorized users will have access to patient clinical data.

To build out the solution, Verizon will leverage the capabilities of several key technology and service providers - MEDfx, MedVirginia and Oracle - to deliver key features of the service, including a clinical dashboard, record locator service, cross-enterprise patient index and secure clinical messaging.

"The ability to dynamically scale technical resources and pay for those used are key benefits of health information exchange platforms hosted in the cloud," said Lynne A. Dunbrack, program director, IDC Health Insights. "Cloud-based platforms will appeal to small to mid-sized organizations looking to shift technology investment from cap-ex to op-ex and to large regional or statewide initiatives that need to establish connectivity with myriad stakeholders with divergent needs and interoperability requirements."

It appears to me that Verizon is entering into competition with existing state HIEs who are or will be the recipients of the $548 million in ARRA grants. Will states be able to use ARRA funds to outsource HIE operations to Verizon?

Nicole Hemsoth reports being caught off-guard in her A Game-Changing Day for Cloud and HPC post of 7/13/2010 to HPC in the Cloud’s Behind the Cloud blog:

On Monday the news was abuzz with word from the Microsoft Partner Conference—tales of its various partnerships to enhance its Azure cloud offering were proliferating, complete with the requisite surprises and some items that dropped off the radar as soon as I heard them given our HPC focus. Monday night, I went to bed comforted by the notion that when I would wake up on Tuesday, for once I would have a predictable amount of news coming directly from the Redmond camp. I wasn’t sure what partnerships they would declare or how they planned on bringing their private cloud in a box offering to real users, but I knew that I would be writing about them in one way or another.

Little did I know that the Tuesday news I was expect[ing] didn’t surface at all and in its place [appeared] this remarkable announcement from Amazon about its HPC-tailored offering, which they (rather clunkily) dubbed “Cluster Compute Instances” (which is thankfully already being abbreviated CCI).

I almost fell off my chair.

Then I think I started to get a little giddy. Because this means that it’s showtime. Cloud services providers, private and public cloud purveyors—everyone needs to start upping their game from here on out, at least as far as attracting HPC users goes. If I recall, Microsoft at one point noted that HPC was a big part of their market base—a surprise, of course, but it’s clear that this new instance type (that makes it sound so non-newsy, just calling it another new instance type) is set to change things. Exciting stuff, folks.

As you can probably tell from this little stream of consciousness ramble, this release today turned my little world on its head, I must admit. I hate to be too self-referential, but man, I wish I could [go] back now and rewrite some elements of a few articles that have appeared on the site where I directly question the viability of the public cloud for a large number of HPC applications in the near future, including those that invoke the term “MPI”—Actually though, it’s not necessarily me saying these things, it’s a host of interviewees as well. The consensus just seemed to be, Amazon is promising and does work well for some of our applications but once we step beyond that, there are too many performance issues for it to be a viable alternative. Yet. Because almost everyone added that “yet” caveat.

I did not expect to see news like this in the course of this year. I clapped my hands like a little girl when I read the news (which was after I almost fell off my chair). It is exciting because it means big changes in this space from here on out. Everyone will need to step up their game to deliver on a much-given promise of supercomputing for the masses. This means ramped up development from everyone, and what is more exciting than a competitive kick in the behind to get the summer rolling again in high gear on the news-of-progress front?

Microsoft’s news was drowned out today—and I do wonder about the timing of Amazon’s release. Did Amazon really, seriously time this with Redmond’s conference where they’d be breaking big news? Or was the timing of this release a coincidence? This one gets me—and no one has answered my question yet about this from Amazon. What am I thinking though; if I ask a question like that I get a response that is veiled marketing stuff anyway like, “we so firmly believed that the time is now to deliver our product to our customers—just for them. Today. For no other reason than that we’re just super excited.”

We’re going to be talking about this in the coming week. We need to gauge the impact on HPC—both the user and vendor sides, and we also need to get a feel for what possibilities this opens up, especially now that there is a new player on the field who, unlike the boatload of vendors in the space now, already probably has all of our credit card numbers and personal info. How handy.

To review Microsoft’s forthcoming HPC features for Windows Azure, which they announced in mid-May 2010, read Alexander Wolfe’s Microsoft Takes Supercomputing To The Cloud post to the InformationWeek blogs of 5/17/2010:

Buried beneath the bland verbiage announcing Microsoft's Technical Computing Initiative on Monday is some really exciting stuff. As Bill Hilf, Redmond's general manager of technical computing, explained it to me, Microsoft is bringing burst- and cluster-computing capability to its Windows Azure platform. The upshot is that anyone will be able to access HPC in the cloud.

HPC stands for High-Performance Computing. That's the politically correct acronym for what we used to call supercomputing. Microsoft itself has long offered Windows HPC Server as its operating system in support of highly parallel and cluster-computing systems.

The new initiative doesn't focus on Windows HPC Server, per se, which was what I'd been expecting to hear when Microsoft called to corral me for a phone call about the announcement. Instead, it's about enabling users to access compute cycles -- lots of them, as in, HPC-class performance -- via its Azure cloud computing service.

As Microsoft laid it out in an e-mail, there are three specific areas of focus:

    • Cloud: Bringing technical computing power to scientists, engineers and analysts through cloud computing to help ensure processing resources are available whenever they are needed—reliably, consistently and quickly. Supercomputing work may emerge as a “killer app” for the cloud.
    • Easier, consistent parallel programming: Delivering new tools that will help simplify parallel development from the desktop to the cluster to the cloud.
    • Powerful new tools: Developing powerful, easy-to-use technical computing tools that will help speed innovation. This includes working with customers and industry partners on innovative solutions that will bring our technical computing vision to life.

Trust me that this is indeed powerful stuff. As Hilf told me in a brief interview: "We've been doing HPC Server and selling infrastructure and tools into supercomputing, but there's really a much broader opportunity. What we're trying to do is democratize supercomputing, to take a capability that's been available to a fraction of users to the broader scientific computing."

In some sense, what this will do is open up what can be characterized as "supercomputing light" to a very broad group of users. There will be two main classes of customers who take advantage of this HPC-class access. The first will be those who need to augment their available capacity with access to additional, on-demand "burst" compute capacity.

The second group, according to Hilf, "is the broad base of users further down the pyramid. People who will never have a cluster, but may want to have the capability exposed to them in the desktop."

OK, so when you deconstruct this stuff, you have to ask yourself where one draws the line between true HPC and just needing a bunch of additional capacity. If you look at it that way, it's not a stretch to say that perhaps many of the users of this service won't be traditional HPC customers, but rather (as Hilf admitted) users lower down the rung who need a little extra umph.

OTOH, as Hilf put it: "We have a lot of traditional HPC customers who are looking at the cloud as a cost savings."

Which makes perfect sense. Whether this will make such traditional high-end users more likely to postpone the purchase of a new 4P server or cluster in favor of additional cloud capacity is another issue entirely, one which will be interesting to follow in the months to come.

You can read more about Microsoft's Technical Computing Initiative here and here.

Joe Panettieri reported Intel Hybrid Cloud: First Five Partners Confirmed in a 7/13/2010 post to the MSPMentor blog:

There’s been considerable buzz about the Intel Hybrid Cloud in recent days. The project involves an on-premises, pay-as-you-go server that links to various cloud services. During a gathering this evening in Washington, D.C., Intel confirmed at least five of its initial Intel Hybrid Cloud integrated partners. Intel’s efforts arrive as Microsoft confirms its own cloud-enabled version of Windows Small Business Server (SBS), code-named Aurora. Here are the details.

First, a quick background: The Intel Hybrid Cloud is a server designed for managed services providers (MSPs) to deploy on a customer's premises. The MSPs can use a range of software to remotely manage the server. And the server can link out to cloud services. Intel is reaching out to MSPs now for the pilot program.

By the end of this year, Intel expects roughly 30 technology companies to plug into the system. Potential partners include security, storage, remote management and monitoring (RMM), and other types of software companies, according to Christopher Graham, Intel's product marketing engineer for Server CPU Channel Marketing.

Graham and other Intel Hybrid Cloud team members hosted a gathering tonight at the Microsoft Worldwide Partner Conference 2010 (WPC10). It sounds like Graham plans to stay on the road meeting with MSPs that will potentially pilot the system in the next few months.

Getting Started

At least five technology companies are already involved in the project with Intel. They include:

  1. Astaro, an Internet security specialist focused on network, mail and Web security.
  2. Level Platforms, promoter of the Managed Workplace RMM platform. Level Platforms also is preloaded on CharTec HaaS servers and HP storage systems for MSPs.
  3. Lenovo, which launched its first MSP-centric server in April 2010.
  4. SteelEye Technology Inc., a business continuity and high availability specialist.
  5. Vembu, a storage company that has attracted more than 2,000 channel partners, according to “Jay” Jayavasanthan, VP of online storage services.

I guess you can consider Microsoft partner No. 6, since Windows Server or Windows Small Business Server (SBS) can be pre-loaded on the Intel Hybrid Cloud solution.

Which additional vendors will jump on the bandwagon? I certainly expect one or two more RMM providers to get involved. But I’m curious to see if N-able — which runs on Linux — will join the party. Intel has been a strong Linux proponent in multiple markets, but so far it sounds like Intel Hybrid Cloud is a Windows-centric server effort…

Market Shifts… And Microsoft

I’m also curious to see how MSPs and small business customers react to Intel Hybrid Cloud. There are those who believe small businesses will gradually — but completely — abandon on-premises servers as more customers shift to clouds.

And Microsoft itself is developing two new versions of Windows Small Business Server — including SBS Aurora, which includes cloud integration for automated backup. I wonder if Microsoft will connect the dots between SBS Aurora and Windows Intune, a remote PC management platform Microsoft is beta testing now.

MSPmentor will be watching both Microsoft and Intel for updates.

<Return to section navigation list> 

Saturday, March 20, 2010

Windows Azure and Cloud Computing Posts for 3/19/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release and new features announced at MIX10 in March 2010. 

Azure Blob, Table and Queue Services

Stefan Tilkov reports about RFC 5789: PATCH Method for HTTP in this 3/19/2010 post to the InnoQ blog:

After a long, long time, the HTTP PATCH verb has become an official standard: IETF RFC 5789. From the abstract:

  • Several applications extending the Hypertext Transfer Protocol (HTTP) require a feature to do partial resource modification. The existing HTTP PUT method only allows a complete replacement of a document. This proposal adds a new HTTP method, PATCH, to modify an existing HTTP resource.

That's pretty great news, even though it will probably take some time before you can actually gain much of a benefit from it. Until now, there were two options for dealing with resource creation (and update, for that matter):

  1. Use a POST to create a new resource when you want the server to determine the URI of the new resource
  2. Use a PUT to do a full update of a resource (or create if it's not there already)

Sometimes, though, what you're looking for is a partial update. You have a bunch of different choices: You can design overlapping resources so that one of them reflects the part you're interested in, and do a PUT on that; or you can use POST, which is so unrestricted it can essentially mean anything.

With PATCH, you have a standardized protocol-level verb that expresses the intent of a partial update. That's nice, but its success depends on two factors:

  1. The availability of standardized patch formats that can be re-used independently of the application
  2. The support for the verb in terms of infrastructure, specifically intermediaries and programming toolkits

In any case, I will definitely start advocating its use for the purpose it's been intended to support, even if this means going with home-grown patch formats for some time: It's still better than POST, and using some sort of x-http-method-override-style workaround should work nicely if needed.

Kudos to James Snell for investing the time and energy to take this up.
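For readers who want to see what this looks like on the wire, here is a minimal, hedged sketch in Python using the requests library against a hypothetical expense-report resource; the URL, resource shape, and field names are invented for illustration, and the second call mirrors the X-HTTP-Method-Override-style workaround Stefan mentions.

```python
import json
import requests

BASE = "https://api.example.com"  # hypothetical service, not a real endpoint

# Partial update with PATCH: send only the fields that change, using a
# home-grown patch format (a plain JSON fragment) as the post suggests.
patch_body = {"status": "approved"}
resp = requests.patch(
    f"{BASE}/expense-reports/42",
    data=json.dumps(patch_body),
    headers={"Content-Type": "application/json"},
)
print(resp.status_code)

# Fallback for intermediaries or toolkits that still block PATCH:
# tunnel the same request through POST with X-HTTP-Method-Override.
resp = requests.post(
    f"{BASE}/expense-reports/42",
    data=json.dumps(patch_body),
    headers={
        "Content-Type": "application/json",
        "X-HTTP-Method-Override": "PATCH",
    },
)
print(resp.status_code)
```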

Martin Fowler climbs on the REST bandwagon with his Richardson Maturity Model: Steps toward the glory of REST article of 3/18/2010:

A model (developed by Leonard Richardson) that breaks down the principal elements of a REST approach into three steps. These introduce resources, http verbs, and hypermedia controls.

Recently I've been reading drafts of Rest In Practice: a book that a couple of my colleagues have been working on. Their aim is to explain how to use Restful web services to handle many of the integration problems that enterprises face. At the heart of the book is the notion that the web is an existence proof of a massively scalable distributed system that works really well, and we can take ideas from that to build integrated systems more easily.


Figure 1: Steps toward REST

To help explain the specific properties of a web-style system, the authors use a model of restful maturity that was developed by Leonard Richardson and explained at a QCon talk. The model is a nice way to think about using these techniques, so I thought I'd take a stab at my own explanation of it. (The protocol examples here are only illustrative; I didn't feel it was worthwhile to code and test them up, so there may be problems in the detail.) …

Martin continues with his traditional detailed analyses of architectural models.
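To make the three steps a bit more concrete, here is a hedged sketch in Python (using the requests library); the clinic URLs, payloads, and link relations are invented for illustration and only loosely echo the appointment-booking example Fowler walks through, so treat it as a paraphrase of the model rather than his actual protocol examples.

```python
import requests

BASE = "https://clinic.example.com"  # hypothetical appointment service

# Level 0 - one URI, one verb: everything tunneled through POST to a single endpoint.
requests.post(f"{BASE}/appointmentService",
              json={"action": "book", "doctor": "mjones", "slot": "1400"})

# Level 1 - resources: individual URIs per thing, but still driven by POST.
requests.post(f"{BASE}/doctors/mjones/slots/1400", json={"patient": "jsmith"})

# Level 2 - HTTP verbs and status codes carry the semantics.
resp = requests.put(f"{BASE}/slots/1400/appointment", json={"patient": "jsmith"})
print(resp.status_code)  # e.g. 201 Created, or 409 Conflict if the slot is taken

# Level 3 - hypermedia controls: the response tells the client what it can do next.
for link in resp.json().get("links", []):
    print(link["rel"], link["href"])  # e.g. "cancel" -> "/slots/1400/appointment"
```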

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

Mary Jo Foley’s Microsoft to provide customers with more cloud storage post of 3/19/2010 reports:

Microsoft made a couple of somewhat under-the-radar storage announcements this past week.

At the Mix 10 conference, during one of the sessions, the SQL Azure team announced that existing SQL Azure customers will be given access to the SQL Azure 50 GB preview on a request basis. Microsoft isn’t yet sharing availability or pricing details for the 50 GB option, but a spokesperson said they’d share those details “in the coming months” as part of the next SQL Azure service update. (Thanks to OakLeaf Systems’ blogger Roger Jennings for the heads up on this one.)

On the cloud-hosted Exchange front, Microsoft also announced this week that it has increased the size of Exchange Online default mailboxes from 5 GB to 25 GB.

Thanks to Mary Jo for the link.

The WCF Data Services Team’s post of 3/19/2010 links to an explanation of Developing an OData consumer for the Windows Phone 7:

A couple of days back at Mix we announced the CTP release of the OData Client Library for Windows Phone 7 series.

Cool stuff undoubtedly. But how do you use it?

… Phani [RajuYN], a Data Services team member, shows you how on his blog.

Phani’s example uses the NetFlix OData service, but could equally well use an SQL Azure OData service.

David Robinson summarizes SQL Azure announcements at MIX in this 3/19/2010 post:

This was an incredible week here at MIX and I presented a session on Developing Web Applications with SQL Azure. During the session I tried to drive home the point that we value and act upon the feedback you provide to us. We also recognize that we need to be more transparent on what features we are working on and when they will be available.

With that in mind, I was happy to announce the following features / enhancements:

Support for MARS

In SU2 (April) we are adding support for Multiple Active Result Sets (MARS). This is a great feature available in SQL Server that allows you to execute multiple batches in a single connection.

50GB Databases

We heard the feedback and will be offering a new 50 GB size option in SU3 (June). If you would like to become an early adopter of this new size option before SU3 is generally available, send an email to EngageSA@microsoft.com and it will auto-reply with instructions to fill out a survey. Fill the survey out to nominate your application that requires greater than 10 GB of storage. More information can be found at Cihan’s blog.

Support for Spatial Data

One of the biggest requests we received was to support spatial data in SQL Azure and that feature will be available for you in SU3 (June). Within this feature is support for the Geography and Geometry types as well as query support in T-SQL. This is a significant feature and now opens the Windows Azure Platform to support spatial and location aware applications.

SQL Azure Labs

We are launching a new site called SQL Azure Labs. SQL Azure Labs provides a place where you can access incubations and early preview bits for products and enhancements to SQL Azure. The goal is to gather feedback to ensure we are providing the features you want to see in the product. All technologies on this site are for testing and are not ready for production use. Some of these features might not even make it into production – it’s all based upon your feedback. Also please note, since these features are actively being worked on, you should not use them against any production SQL Azure databases.

The first preview on the site is the OData Service for SQL Azure. This enables you to access your SQL Azure Databases as an OData feed by checking a checkbox. It also provides you the ability to secure this feed using the Access Control Services that are provided by Windows Azure Platform AppFabric. You also have the ability to access the feed via Anonymous access should you wish to do so. More details on this can be found at the Data Services Team blog.

Keep those great ideas coming and submit them at http://www.mygreatsqlazureidea.com
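The spatial support David describes should make location queries against SQL Azure look much like they do against on-premises SQL Server 2008. The sketch below (Python with pyodbc) is hypothetical: the Landmarks table, its Location geography column, the server name, and the credentials are all invented, the ODBC driver name may differ on your machine, and the feature itself will not be live until SU3. It is only meant to show the shape of a typical STDistance query.

```python
import pyodbc

# Placeholder connection string: substitute your own SQL Azure server, database,
# login, and whichever SQL Server ODBC driver is installed on your machine.
conn = pyodbc.connect(
    "Driver={SQL Server Native Client 10.0};"
    "Server=tcp:myserver.database.windows.net;"
    "Database=MyDatabase;"
    "Uid=mylogin@myserver;Pwd=secret;Encrypt=yes;"
)

# Find landmarks within 5 km of a point, nearest first.
# geography::Point takes (latitude, longitude, SRID); 4326 is WGS 84.
sql = """
DECLARE @here geography = geography::Point(?, ?, 4326);
SELECT TOP 10 Name, Location.STDistance(@here) AS Meters
FROM   Landmarks
WHERE  Location.STDistance(@here) <= 5000
ORDER BY Meters;
"""
for name, meters in conn.cursor().execute(sql, 47.6097, -122.3331):
    print(f"{name}: {meters:.0f} m")
```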

Steven Forte’s An easy way to set up an OData feed from your SQL Azure database post of 3/19/2010 begins:

Ever since the “new” SQL Azure went to beta, I have craved an automated way to set up an OData (Astoria) Service from my SQL Azure database. My perfect world would have been to have a checkbox next to each table in my database in the developer portal asking to “Restify” this table as a service. It always seemed kind of silly to have to build a web project, create an Entity Framework model of my SQL Azure database, build a WCF Data Services (OData) service on top of that, and then deploy it to a web host or Windows Azure. (This service seems overkill for Windows Azure.) In addition to all of that extra work, in theory it would not be the most efficient solution since I am introducing a new server to the mix.

At Mix this week and also on the OData team blog, there is an announcement as how to do this very easily. You can go to the SQL Azure labs page and then click on the “OData Service for SQL Azure” tab and enter in your SQL Azure credentials and assign your security and you will be able to access your OData service via this method: https://odata.sqlazurelabs.com/OData.svc/v0.1/<serverName>/<databaseName> …

I went in and gave it a try. In about 15 seconds I had a working OData feed: no need to build a new web site, build an EDM, build an OData svc, and deploy; it just made it for me automatically. Saved me a lot of time and the hassle (and cost) of deploying a new web site somewhere. Also, since this is all Azure, I would argue that it is more efficient to run this from Microsoft's servers than mine: fewer hops to the SQL Azure database. (At least that is my theory.) …

To really give this a test drive, I opened up Excel 2010 and used SQL Server PowerPivot. I choose to import from “Data Feeds” and entered in the address for my service. I then imported the Customers, Orders, and Order Details tables and built a simple Pivot Table.


This is a great new feature!


If you are doing any work with Data Services and SQL Azure today, you need to investigate this new feature. Enjoy!
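If you would rather script against such a feed than point PowerPivot at it, the sketch below shows one way to do it in Python with the requests library. The server, database, and entity-set names are placeholders, and it assumes the Labs preview honors standard OData query options and can return JSON when asked via the Accept header; if the service only emits Atom, you would parse the XML payload instead.

```python
import requests

# Placeholders: substitute your own SQL Azure server and database names.
FEED = "https://odata.sqlazurelabs.com/OData.svc/v0.1/myserver/Northwind"

# Ask for the first 10 Customers as JSON (the service may return Atom instead).
resp = requests.get(
    f"{FEED}/Customers",
    params={"$top": "10", "$orderby": "CompanyName"},
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

if "json" in resp.headers.get("Content-Type", ""):
    body = resp.json()
    payload = body.get("d", body)   # WCF Data Services-style wrapping, if present
    rows = payload.get("results", payload) if isinstance(payload, dict) else payload
    for customer in rows:
        print(customer.get("CompanyName"))
else:
    print(resp.text[:500])          # Atom payload; parse with ElementTree if needed
```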

Stephen O’Grady analyzes The Problem with Big Data in this 3/18/2010 post:

He who has the most data wins. Right?

Google certainly believes this. But how many businesses, really, are like Google?

Not too many. Or fewer than would constitute a healthy and profitable market segment, according to what I’m hearing from more and more of the Big Data practitioners.

Part of the problem, clearly, is definitional. What is Big to you might not be Big to me, and vice versa. Worse, the metrics used within the analytics space vary widely. Technologists are used to measuring data in storage: Facebook generates 24-25 terabytes of new data per day, etc. Business Intelligence practitioners, on the other hand, are more likely to talk in the number of rows. Or if they’re very geeky, the number of variables, time series and such.

Big doesn’t always mean big, in other words. But big is increasingly bad, from what we hear. At least from a marketing perspective. …

Stephen continues his analysis, which is peripherally related to the current NoSQL kerfuffle.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Bruce Kyle’s Windows Azure AppFabric Goes Commercial April 9 post of 3/19/2010 provides a summary of the Azure AppFabric’s features and billing practices:

Windows Azure platform AppFabric will be commercially available on a paid and fully SLA-supported basis on April 9.

You can find our pricing for AppFabric here. You can also go to the pricing section of our FAQs for additional information as well as our pricing blog post.

For more information, see Announcing upcoming commercial availability of Windows Azure platform AppFabric.

About Windows Azure AppFabric

The Windows Azure platform AppFabric provides secure connectivity as a service to help developers bridge cloud, on-premises, and hosted deployments. You can use AppFabric Service Bus and AppFabric Access Control to build distributed and federated applications as well as services that work across network and organizational boundaries.

Service Bus

Service Bus helps to provide secure connectivity between loosely-coupled services and applications, enabling them to navigate firewalls or network boundaries and to use a variety of communication patterns.

You can use Service Bus to connect Windows Azure applications and SQL Azure databases with existing applications and databases. It is often used to bridge on and off-premises applications or to create composite applications.

Service Bus lets you expose apps and services through firewalls, NAT gateways, and other problematic network boundaries. So you could connect to an application behind a firewall or one where your customer does not even expose the application as an endpoint. You can use Service Bus to lower barriers to building composite applications by exposing endpoints easily, supporting multiple connection options and publish and subscribe for multicasting.

Service Bus provides a lightweight, developer-friendly programming model that supports standard protocols and extends similar standard bindings for Windows Communication Foundation (WCF) programmers.

It blocks malicious traffic and shields your services from intrusions and denial-of-service attacks.

See Windows Azure platform AppFabric on MSDN.

Access Control

Access Control helps you build federated authorization into your applications and services, without the complicated programming that is normally required to secure applications that extend beyond organizational boundaries. With its support for a simple declarative model of rules and claims, Access Control rules can easily and flexibly be configured to cover a variety of security needs and different identity-management infrastructures.

You can create user accounts that federate a customer's existing identity management system that uses Active Directory service, other directory systems, or any standards-based infrastructure. With Access Control, you exercise complete, customizable control over the level of access that each user and group has within your application. It applies the same level of security and control to Service Bus connections.

Identity is federated using Access Control through rule-based authorization that enables your applications to respond as if the user accounts were managed locally. As a developer, you use a lightweight, developer-friendly programming model based on the Microsoft .NET Framework and Windows Communication Foundation to build the access rules based on your domain knowledge. The flexible standards-based service supports multiple credentials and relying parties.

Bruce continues with links to Resources: Whitepapers, SDK & Toolkits, PDC09 Videos and the AppFabric Labs.
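To make the token/claims flow a little more concrete, here is a rough sketch of how a client might request a token from Access Control over the OAuth WRAP (v0.9) style endpoint the service exposed at the time. The namespace, issuer name, key, and scope below are placeholders, and the endpoint and response format are stated from memory of the period documentation rather than verified against the current service; treat it as an illustration of the pattern, not a drop-in sample.

```python
import urllib.parse
import requests

# Placeholders: your AppFabric service namespace, an issuer configured in ACS,
# that issuer's key, and the "applies to" address of the protected service.
NAMESPACE = "mynamespace"
TOKEN_URL = f"https://{NAMESPACE}.accesscontrol.windows.net/WRAPv0.9/"

form = {
    "wrap_name": "myissuer",
    "wrap_password": "myissuerkey",
    "wrap_scope": "http://myservice.example.com/",
}
resp = requests.post(TOKEN_URL, data=form)
resp.raise_for_status()

# The response body is form-encoded; parse_qsl URL-decodes the token for us.
fields = dict(urllib.parse.parse_qsl(resp.text))
token = fields["wrap_access_token"]

# The token then goes to the protected service (or Service Bus) in an
# Authorization header, where the ACS-issued claims are evaluated by your rules.
headers = {"Authorization": f'WRAP access_token="{token}"'}
print(headers)
```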

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Eugenio Pace’s Windows Azure Guidance – A (simplistic) economic analysis of a-Expense migration post of 3/19/2010 analyzes Microsoft’s billings for Azure in this guidance example, including the first mention I’ve seen of $500/month for a 50 GB database:

A big motivation for considering hosting on Windows Azure is cost. Each month, Microsoft will send Adatum a bill for the Windows Azure resources used. This is a very fast feedback loop on how they are using the infrastructure. Did I say that money is a great motivator yet? (another favorite phrase :-) )

Small digression: I had a great conversation with my colleague Danny Cohen in Tel Aviv a few weeks ago. We talked (among a million other topics :-) ) about Windows Azure pricing and its influence in design. He told me a story about “feedback loops” and their influence in behavior that I liked very much. I’ve been using it pretty frequently. He wrote a very good summary here.

So, what are the things that Adatum would be billed for in the a-Expense application?


At this stage of a-Expense migration, there are 5 things that will generate billing events:

  1. In/Out bandwidth. This is web page traffic generated between user’s browsers and a-Expense web site. ($0.10-$0.15/GB)
  2. Windows Azure Storage. In this case it will just be the Profile data. Later it will also be used for storing the receipt scans. ($0.15/GB)
  3. Transactions. Each interaction with the storage system is also billed. ($0.01/10K transactions)
  4. Compute. This is the time a-Expense web site nodes are up. (Small size role is $0.12/hour) 
  5. SQL Storage. SQL Azure comes in 3 sizes: 1GB, 10GB and 50GB. ($10, $100, $500/month respectively).

Eugenio continues with a detailed cost analysis.
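Using the unit prices Eugenio lists, a back-of-the-envelope monthly estimate is easy to sketch. The workload figures below (two small web role instances, a 1 GB SQL Azure database, modest storage and traffic) are invented purely for illustration; only the unit prices come from the list above.

```python
# Unit prices from the list above (USD).
BANDWIDTH_PER_GB = 0.15          # using the high end of the $0.10-$0.15 range
STORAGE_PER_GB_MONTH = 0.15
PRICE_PER_10K_TRANSACTIONS = 0.01
SMALL_ROLE_PER_HOUR = 0.12
SQL_AZURE_1GB_PER_MONTH = 10.00

# Hypothetical a-Expense-like workload for one month (roughly 730 hours).
hours = 730
instances = 2                    # two small web role instances
bandwidth_gb = 20                # in/out page traffic
storage_gb = 5                   # profile data plus receipt scans
transactions = 2_000_000         # storage transactions

cost = (
    instances * hours * SMALL_ROLE_PER_HOUR
    + bandwidth_gb * BANDWIDTH_PER_GB
    + storage_gb * STORAGE_PER_GB_MONTH
    + (transactions / 10_000) * PRICE_PER_10K_TRANSACTIONS
    + SQL_AZURE_1GB_PER_MONTH
)
print(f"Estimated monthly bill: ${cost:,.2f}")  # about $191 with these assumptions
```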

Dimitri Sotkinov describes how his company’s QuestOnDemand service uses “Windows Azure as its underlying technology” in his IT Management as a Service: Discussion and Demo post of 3/19/2010:

Microsoft’s TechNet EDGE posted a video with a quite detailed discussion of the Systems Management as a Service concept, an example of such a service (Quest OnDemand), how it uses Windows Azure as the underlying technology, the security model behind it, and so on. Obviously a demo is in there as well.

Check out the video here.

Steve Marx plugs the Windows Azure Firestarter Event: April 6th, 2010 in this 3/18/2010 post:

The Windows Azure Firestarter event here in Redmond, WA is coming up in just a few short weeks. You can attend in person or watch the live webcast by registering at http://www.msdnevents.com/firestarter/.

Will you be there? I’ll be presenting a platform overview at the beginning of the day. If you’re going to attend (virtually or in person), drop me a tweet and let me know what you’d like to see covered.

The Windows Azure Team’s Real World Windows Azure: Interview with Sarim Khan, Chief Executive Officer and Co-Founder of sharpcloud post of 3/18/2010 provides additional background to the sharpcloud Case Study reported in Windows Azure and Cloud Computing Posts for 3/18/2010+

As part of the Real World Windows Azure series, we talked to Sarim Khan, CEO and Co-Founder of sharpcloud, about using the Windows Azure platform to run its visual business road-mapping solution. Here's what he had to say:

MSDN: What service does sharpcloud provide?

Khan: sharpcloud helps businesses better communicate their strategic plans. Our service applies highly visual and commonly used social-networking tools to the crucial task of developing long-term business road maps and strategy.

MSDN: What was the biggest challenge sharpcloud faced prior to implementing Windows Azure?

Khan: To support the global companies that we see as our prime market, we knew that we needed a cloud computing solution that would be easy to scale and deploy. We wanted to devote all of our resources to developing a compelling service, not managing server infrastructure. We initially started using Amazon Web Services, but that still required us to focus time on maintaining the Amazon cloud-based servers.

MSDN: Can you describe the solution you built with Windows Azure to help maximize your resources?

Khan: With the sharpcloud service, executives and other users can work within a Web browser to create a framework and real-time dialogue on their road maps. They can define attributes and properties, such as benefit, cost, and risk. They can also add "events" to the road map, such as technologies and trends. Once the road map is populated with events, users can explore it through a three-dimensional view, add information, and explore relationships between events. Because we were already creating an application based on the Microsoft .NET Framework and the Microsoft Silverlight browser plug-in, we only had to rewrite a modest amount of code to ensure that our service and storage mechanism would communicate properly with the Windows Azure application programming interfaces.

Figure 1. The sharpcloud application makes it possible for a virtual team to view events and the relationships among them in a three-dimensional road map and to post comments for other team members to review in real time.

tbtechnet chimes in on the sharpcloud project in his Sharpcloud Leverages Front Runner and Bizspark post of 3/18/2010:

http://www.microsoft.com/casestudies/Case_Study_Detail.aspx?CaseStudyID=4000006685

It was great working with these folk:

sharpcloud took advantage of the Front Runner for Windows Azure Platform. Front Runner is an early adopter program for Microsoft solution partners. It provides technical resources, such as application support for Windows Azure from Microsoft development experts by phone and e-mail, as well as access to Windows Azure technical resources in one, central place. Once an application is certified as compatible with Windows Azure, the program provides additional marketing benefits. These benefits include a promotional toolkit, including a stamp and news release to use on marketing materials, a discount on Ready-to-Go campaign costs, and visibility on a Windows Azure Web site.

sharpcloud also joined the Microsoft BizSpark program for software startup companies. BizSpark unites startups with the resources—including software, support, and visibility—that they need to succeed. The program includes access to Microsoft development tools, platform technologies, and production and hosting licenses.

“The key for us has been the software and support available without licensing cost,” says Khan. “As a small startup, we have to maximize our investment, and these programs have certainly made our development budget go much further than it would have gone otherwise. We’ve also had good support from Microsoft with technologies including Windows 7 and Windows Server 2008 R2. And it’s stimulating to get a window into what other startups are doing with these technologies.”

Thuzi’s Facebook Azure Toolkit v0.9 Beta on CodePlex as of 3/12/2010 carries the following introduction:

Welcome to the Facebook Azure Toolkit. This toolkit was built by Thuzi in collaboration with Microsoft to give the community a good starter kit for getting Facebook apps up and running in Windows Azure. Facebook apps hosted in Azure provide a flexible and scalable cloud computing platform for the smallest and largest of Facebook applications. Whether you have millions of users or just a few thousand, the Facebook Azure Toolkit helps you to build your app correctly so that if your app is virally successful, you won't have to architect it again. You will just need to increase the number of instances you are using, then you are done scaling. :)

DISCLAIMER: This toolkit does not demonstrate how to create a Facebook application. It is assumed that you already know how to do that or are willing to learn how to do that. There are plenty of links below on how to do this. Here is one Link, also listed below, that helps explain how to set up an app, and there are plenty of samples in the Facebook Developers Toolkit.

NOTE: You must use Visual Studio 2010 RC to open the project.

What does this Tool[k]it include?
  • Facebook Developers Toolkit Link
  • Ninject 2.0 for Dependency Injection Link
  • Asp.Net MVC 2 Link
  • Windows Azure Software Development Kit (February 2010) Link
  • AutoMapper Link
  • Azure Toolkit - Simplified library for accessing Message Queues, Table Storage and Sql Server
  • Automated build scripts for one-click deployment from TFS 2010 to Azure …

Thuzi continues with How do I get started?, Getting Setup and Running Locally, and Setting up Deployment in TFS 2010 topics.

<Return to section navigation list> 

Windows Azure Infrastructure

David Linthicum asserts “Once it got past the vendor hype, the Cloud Connect event revealed the three key issues that need to be addressed” in The cloud's three key issues come into focus post of 3/19/2010 for InfoWorld’s Cloud Computing blog:

I'm writing this blog on the way back from Cloud Connect held this week in Santa Clara. It was a good show, all in all, and there was a who's-who in the world of cloud computing. I've really never seen anything like the hype around cloud computing, possibly because you can pretty much "cloudwash" anything, from disk storage to social networking. Thus, traditional software vendors are scrambling to move to the cloud, at least from a messaging perspective, to remain relevant. If I was going to name a theme of the conference, it would be "Ready or not, we're in the cloud." …

But beyond the vendor hype, it was clear at the conference that several issues are emerging, even if the solutions remain unclear:

  • Common definitions …
  • Standards …
  • Security …

David expands on the three issues in his post.

Phil Wainwright asks Is SaaS the same as cloud? in this 3/19/2010 post to the Enterprise Irregulars blog:

From the customer’s perspective, it’s all the same. If it’s provided over the Internet on a pay-for-usage basis, it’s a cloud service. Within the industry, we argue about definitions more than is good for us. Customers look in from the outside and see a much simpler array of choices.

Why is this important? It matters to how we market and support cloud services (of whatever ilk). Yesterday EuroCloud UK (disclosure: of which I’m chair) had a member meeting, hosted at SAP UK headquarters, that covered various aspects of the transition to SaaS for ISVs. From the title, you’d imagine it would have little content of relevance to raw cloud providers at the infrastructure-as-a-service layer. (One of our challenges in the early days of EuroCloud, whose founders are more from the SaaS side of things, is to make sure we bring the infrastructure players on board with us). But in fact, much of the discussion covered topics of equal interest at any level of the as-a-service stack: How to work with partners? How to compensate sales teams? What sort of contract to offer customers? How to reconcile paying for resources on a pay-per-use basis with a per-seat licence fee? What instrumentation and reporting of service levels should the provider’s infrastructure include?

And then came the customer presentation, by Symbian Foundation’s head of IT, Ian McDonald. He was there as a customer of SAP’s Business ByDesign SaaS offering, whose team were hosting the meeting. But it soon became clear that his organization’s voracious consumption of cloud services runs the gamut from high-level applications like ByDesign and Google Apps through to Amazon Web Services, Jungle Disk storage and file sharing (stored on either Amazon or Rackspace), even Skype. Symbian’s developers still build their own website infrastructure using open-source platforms but that too is hosted in the cloud. The imperative for Symbian, as a not-for-profit consortium, is to stay flexible and minimize costs. An important part of that is having the capacity to scale rapidly if needed but without having to pay up-front for that capacity. …

Phil continues his argument with takeaways from other EuroCloud UK sessions.

Lori MacVittie claims “Talking about standards apparently brings out some very strong feelings in a whole lot of people” in her Now is the conference of our discontent … post of 3/19/2010:

From “it’s too early” to “we need standards now” to “meh, standards will evolve where they are necessary”, some of the discussions at CloudConnect this week were tinged with a bit of hostility toward, well, standards in general and the folks trying to define them. In some cases the hostility was directed toward the fact that we don’t have any standards yet.

[William Vambenepe has a post on the subject, having been one of the folks toward whom hostility was directed during one session.]

Lee Badger, Computer Scientist at NIST, during a panel on “The Standards Real Users Need Now” offered a stark reminder that standards take time. He pointed out the 32 months it took to define and agree on consensus regarding the ASCII standard and the more than ten years it took to complete POSIX. Then Lee reminded us that “cloud” is more like POSIX than ASCII. Do we have ten years? Ten years ago we couldn’t imagine that we’d be here with Web 2.0 and Cloud Computing, so should we expect that in ten years we’ll still be worried about cloud computing?

Probably not.

The problem isn’t that people don’t agree standards are a necessary thing, the problem appears to be agreeing on what needs to be standardized and when and, in some cases, who should have input into those standards. There are at least three different constituents interested in standards, and they are all interested in standards for different reasons which of course leads to different views on what should be standardized.

Lori continues with “WHAT are we STANDARDIZING?” and “CLOUDS cannot be BLACK BOXES” topics.

Neil MacKenzie’s Service Runtime in Windows Azure post of 3/18/2010 analyzes Windows Azure’s architecture:

Roles and Instances

Windows Azure implements a Platform as a Service model through the concept of roles. There are two types of role: a web role deployed with IIS; and a worker role which is similar to a Windows service. Azure implements horizontal scaling of a service through the deployment of multiple instances of roles. Each instance of a role is allocated exclusive use of a VM selected from one of several sizes, from a small instance with 1 core to an extra-large instance with 8 cores. Memory and local disk space also increase in going from a small instance to an extra-large instance.

All inbound network traffic to a role passes through a stateless load balancer which uses an unspecified algorithm to distribute inbound calls to the role among instances of the role. Individual instances do not have public IP addresses and are not directly addressable from the Internet. Instances are able to connect to other instances in the service using TCP and HTTP.

Azure provides two deployment slots: staging for testing in a live environment; and production for the production service. There is no real difference between the two slots.

It is important to remember that Azure charges for every deployed instance of every role in both production and staging slots regardless of the status of the instance. This means that it is necessary to delete an instance to avoid being charged for it.

Fault Domains and Upgrade Domains

There are two ways to upgrade an Azure service: in-place upgrade and Virtual IP (VIP) swap. An in-place upgrade replaces the contents of either the production or staging slot with a new Azure application package and configuration file. A VIP swap literally swaps the virtual IP addresses associated with roles in the production and staging slots. Note that it is not possible to do an in-place upgrade where the new application package has a modified Service Definition file. Instead, any existing service in one of the slots must be deleted before the new version is uploaded. A VIP swap does support modifications to the Service Definition file.

The Windows Azure SLA comes into force only when a service uses at least two instances per role.  Azure uses fault domains and upgrade domains to facilitate adherence to the SLA.

When Azure instances are deployed, the Azure fabric spreads them among different fault domains which means they are deployed so that a single hardware failure does not bring down all the instances. For example, multiple instances from one role are not deployed to the same physical server. The Azure fabric completely controls the allocation of instances to fault domains but an Azure service can view the fault domain for each of its instances through the RoleInstance.FaultDomain property.

Similarly, the Azure fabric spreads deployed instances among several upgrade domains. The Azure fabric implements an in-place upgrade by bringing down all the services in a single upgrade domain, upgrading them, and then restarting them before moving on to the next upgrade domain. The number of upgrade domains is configurable through the upgradeDomainCount attribute to the ServiceDefinition root element in the Service Definition file. The default number of upgrade domains is 5 but this number should be scaled with the number of instances. The Azure fabric completely controls the allocation of instances to upgrade domains, modulo the number of upgrade domains, but an Azure service can view the upgrade domain for each of its instances through the RoleInstance.UpdateDomain property. (Shame about the use of upgrade in one place and update in another.) …

Neil continues with explanations of Service Definition and Service Configuration, RoleEntryPoint, Role, RoleEnvironment, RoleInstance, RoleInstanceEndpoint and LocalResource.
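The fault-domain and upgrade-domain behavior is easier to picture with a toy simulation. The Azure fabric's real placement algorithm is unspecified and entirely under its control, so the round-robin assignment below is only an illustration of the modulo behavior Neil describes, not how the fabric actually allocates instances.

```python
UPGRADE_DOMAIN_COUNT = 5   # the default value of upgradeDomainCount
FAULT_DOMAIN_COUNT = 2     # illustrative: enough to survive a single hardware failure

instances = [f"WebRole_IN_{i}" for i in range(8)]

# Toy placement: spread instances across domains by index (round-robin).
placement = {
    name: {"upgrade_domain": i % UPGRADE_DOMAIN_COUNT,
           "fault_domain": i % FAULT_DOMAIN_COUNT}
    for i, name in enumerate(instances)
}

# An in-place upgrade walks one upgrade domain at a time, so only the instances
# in the current domain are offline while the rest keep serving traffic.
for ud in range(UPGRADE_DOMAIN_COUNT):
    down = [n for n, p in placement.items() if p["upgrade_domain"] == ud]
    if down:
        print(f"Upgrade domain {ud}: stopping, upgrading, restarting {down}")
```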

<Return to section navigation list> 

Cloud Security and Governance

Joseph Trigliari’s Cloud computing and payment processing security: Not mutually exclusive after all? post of 3/19/2010 to Pivotal Payments: Canadian Merchant Industry News reports:

With cloud computing becoming a more and more popular and attractive IT model for organizations, one serious concern that has arisen is what the implications are for PCI compliance and overall payment processing security.

Many payment processing industry experts maintain that cloud computing and PCI compliance are mutually exclusive, at least now while there are no set guidelines or requirements governing cloud computing security.

However, PCI expert Walt Conway thinks PCI compliance may be possible in the cloud.
"When you look at the cloud, keep your security expectations realistic," he wrote in an article for StorefrontBacktalk.com. "Don't expect 100 percent security. You don't have 100 percent security anywhere, so don't expect it in the cloud. What you want is the same, hopefully very high, level of security you have now or maybe a little higher."

Some things that organisations moving into the cloud must consider, he writes, are the security of the cloud provider, the new scope of PCI compliance, and the notification and data availability procedures should any client sharing the organisation's cloud experience a breach or subpoena of their data.

The question of PCI compliance in the cloud is only going to become an even larger concern as cloud computing grows in popularity - a recent survey from Mimecast, for example, found that 70 percent of IT decision makers surveyed who already use cloud services plan to increase their cloud investments in the near future.

This is a much more optimistic view of PCI compliance with cloud computing than I’ve read previously.

<Return to section navigation list> 

Cloud Computing Events

Sebastian added the first four slide decks for the San Francisco Cloud Computing Club’s 3/16/2010 meetup colocated with the Cloud Connect Conference in Santa Clara, CA:

    • PaaS seen by Ezra: Ezra's presentation of PaaS at EngineYard
    • Makara: Tobias' presentation of Makara at Cloud Connect
    • Appirio: Appirio's presentation at Cloud Club (at Cloud Connect)
    • Cloud club – Heroku: Oren's excellent presentation on the need for PaaS

Updated 3/20/2010 for Cloud club – Heroku.

Liz MacMillan reports Microsoft’s Bill Zack to Present at Cloud Expo East on 3/19/2010:

You are interested in cloud computing, but where do you start? How are vendors defining Cloud Computing? What do you need to know to figure out which applications make sense in the cloud? And is any of this real today?

In his session at the 5th International Cloud Expo, Bill Zack, an Architect Evangelist with Microsoft, will explore a set of five patterns that you can use for moving to the cloud, together with working samples on Windows Azure, Google AppEngine, and Amazon EC2. He will provide the tools and knowledge to help you more clearly understand moving your organization to the cloud. …

Bill Zack is an Architect Evangelist with Microsoft.

Roger Struckhoff claims “Microsoft Architect Evangelist Bill Zack Will Tell All at Cloud Expo” in his Patterns? In Cloud Computing? post of 3/19/2010:

"You are interested in cloud computing, but where do you start?"

This question was posed by Microsoft Architect Evangelist Bill Zack, who will present a session on the topic of Patterns in Cloud Computing at Cloud Expo.

People often see sheep or little doggies or Freudian/Rorschachian images in physical clouds, but fortunately, the patterns in Cloud Computing are less fanciful and more discrete. That said, there remains a lot of confusion about basic definitions within the nascent Cloud Computing industry, no doubt because Cloud is not a new technology, but rather, a new way of putting things together and delivering computing services. Zack asks, "How are vendors defining Cloud Computing? What do you need to know to figure out which applications make sense in the cloud? And is any of this real today?"

His answer to the last question would be "yes," as his session is "based on real-world customer engagements," he notes. Specifically, "the session explores a set of five patterns that you can use for moving to the cloud, together with working samples on Windows Azure, Google AppEngine, and Amazon EC2. Avoiding the general product pitches, this session provides the tools and knowledge to help you more clearly understand moving your organization to the cloud."

William Vambenepe’s “Freeing SaaS from Cloud”: slides and notes from Cloud Connect keynote post of 3/19/2010 begins:

I got invited to give a short keynote presentation during the Cloud Connect conference this week at the Santa Clara Convention Center (thanks Shlomo and Alistair). Here are the slides (as PPT and PDF). They are visual support for my bad jokes rather than a medium for the actual message. So here is an annotated version.

I used this first slide (a compilation of representations of the 3-layer Cloud stack) to poke some fun at this ubiquitous model of the Cloud architecture. Like all models, it’s neither true nor false. It’s just more or less useful to tackle a given task. While this 3-layer stack can be relevant in the context of discussing economic aspects of Cloud Computing (e.g. Opex vs. Capex in an on-demand world), it is useless and even misleading in the context of some more actionable topics for SaaS: chiefly, how you deliver such services, how you consume them and how you manage them.

In those contexts, you shouldn’t let yourself get too distracted by the “aaS” aspect of SaaS and focus on what it really is.

Which is… a web application (by which I include both HTML access for humans and programmatic access via APIs.). To illustrate this point, I summarized the content of this blog entry. No need to repeat it here. The bottom line is that any distinction between SaaS and POWA (Plain Old Web Applications) is at worst arbitrary and at best concerned with the business relationship between the provider and the consumer rather than  technical aspects of the application. …

William, who’s an Oracle Corp. architect, continues with an interesting analogy involving a slide with a guillotine. I had the pleasure of sitting next to him at the San Francisco Cloud Computing Club meet-up on 3/16/2010 at the Cloud Connect Conference in Santa Clara, CA.

Update 3/20/2010: Sam Johnston asserts in a 3/20/2010 tweet:

Observation: @vambenepe's exclusion of SaaS from the #cloud stack also effectively excludes Google from #cloud: http://is.gd/aQqfM

William replies in this 3/20/2010 tweet:

Not excluding SaaS from #Cloud. Just pointing out that providing/consuming/managing SaaS is 90% app-centric (not Cloud-specific) & 10% Cloud

The Voices for Innovation blog’s DC-Area Microsoft Hosted Codeathon, April 9-11 post of 3/19/2010 announces:

If you or your company is in the Washington, DC, area -- or will be passing through on April 9-11 -- please consider coming to a Microsoft-hosted codeathon at the Microsoft offices in Chevy Chase, MD. Organized in conjunction with the League of Technical Voters, the codeathon will focus on making government documents more accessible and citable. You can learn more about the event and sign up for a project at http://dccodeathon.com.

The codeathon will especially benefit from the participation of developers with skills in the following areas:

1) ASP.NET, HTTP/REST, JavaScript, micro formats
2) Windows Azure (PHP and other OSS technologies on Azure a plus)
3) SharePoint 2010/2007
4) Open source technologies, e.g., PHP, Python, R-on-R

In addition, if you would be interested in getting started on a project before the codeathon, email us at info@voicesforinnovation.org, and we'll put you in touch with the right person at Microsoft. [Emphasis added.]

Clay Ryder casts a jaundiced eye on cloud marketing projections and analyses at the Cloud Connect Conference in his Some Thoughts on the Cloud Connect Conference post of 3/18/2010 to the IT-Analysis blog:

I ventured out to the Cloud Connect conference and expo at the Santa Clara Convention Center this morning. Unlike most trips to industry events where I take part in the transportation cloud, usually of the rail nature, today I was firmly self-reliant with a set of four tires on the parking lot known as CA-237. Part of my reasoning was to get out of the office on a sunny day, but more importantly was my intrigue about this market segment known more or less as Cloud Computing. Readers know that I have a heavy dose of skepticism about the marketing fluff related to Clouds, or the often ill-defined opportunity that purports to be the next greatest thing in IT. With few exceptions, vendors and the industry as a whole have historically done a poor job of defining this market opportunity. Is Cloud a market segment? Is Cloud a delivery model? Is it both? So far, this morning's keynotes have only reinforced my skepticism, which is unfortunate. For all of its technical promise, the continued lack of industry definition clarity remains deeply troubling.

By mid-morning, we had heard from a Deutsche Bank Securities analyst, a market researcher, a couple of systems vendors, a startup pro, and some ancillary folks. The analyst talked about equipment, vague notions of markets, and then hardware sales he projected were related to cloud sales. While his sales projection of $20 billion is non-trivial, it utterly lacked any definition by which to differentiate these sales from plain old enterprise hardware sales. Just what is the uniquely Cloud stuff that accounts for this expenditure? From this presenter, it seemed that Cloud meant nothing more than enterprise IT with virtualization of servers and network switches. This conveniently left out perhaps the fastest growing segment of IT, i.e. storage. This Cloud discussion was underwhelming at best; an example of a 2002 mindset fixated on server virtualization with lip service to linking those servers together. This doesn't sound like a game changer to me and begs the question of why VCs would plow money into such an ill-defined opportunity.

Clay continues giving equally low marks to other presenters. He concludes:

Ultimately, the right answer may be to stop looking for the Cloud market altogether. Perhaps Cloud is really just an intelligent delivery model that addresses the state of the art in IT. Maybe Cloud is a process, not a product. As such, things would make a whole lot more sense than the confusing overlap of jargon and techno-obfuscation that so many undertake in the name of the Cloud. This would be a welcome improvement not only in nomenclature, but perhaps in market clarity, which would then help drive market adoption. Money tends to follow well-defined paths to ROI. Why should Clouds be any different?

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

John Treadway asserts VMware Should Run a Cloud or Stop Charging for the Hypervisor (or both) in this 3/19/2010 post to the CloudBzz blog. It begins:

I had a number of conversations this past week at CloudConnect in Santa Clara regarding the relative offerings of Microsoft and VMware in the cloud market. Microsoft is going the vertically integrated route by offering their own Windows Azure cloud with a variety of interesting and innovative features. VMware, in contrast, is focused on building out their vCloud network of service providers that would use VMware virtualization in their clouds. VMware wants to get by with a little help from their friends.

The problem is that few service providers are really VMware’s friend in the long run.  Sure, some enterprise-oriented providers will provide VMware capabilities to their customers, but it is highly likely that they will quickly offer support for other hypervisors (Xen, Hyper-V, KVM).  The primary reason for this is cost.  VMware charges too much for the hypervisor, making it hard to be price-competitive vs. non-VMware clouds.  You might expect to see service providers move to a tiered pricing model where the incremental cost for VMware might be passed onto the end-customers, which will incentivize migration to the cheaper solutions.  If they want to continue this channel approach but stop enterprises from migrating their apps to Xen, perhaps VMware needs to give away the hypervisor – or at least drop the price to a level that it is easy to absorb and still maintain profitability ($1/month per VM – billed by the hour at $0.0014 per hour plus some modest annual support fee would be ideal).

Think about it… If every enterprise-oriented cloud provider lost their incentive to go to Xen, VMware would win.  Being the default hypervisor for all of these clouds would provide even more incentive for enterprise customers to continue to adopt VMware for internal deployments  (which is where VMware makes all of their money).  Further, if they offered something truly differentiated (no, not vMotion or DRS), then they could charge a premium. …

CloudTweaks explains Why Open Source and Operations Matter in Cloud Computing on 3/19/2010:

Earlier this week, IBM announced a cloud computing program offering development and test services for companies and governments. That doesn’t sound like much, yet on closer inspection it’s a flagstone in the march toward a comprehensive cloud offering at Big Blue. It also demonstrates how operational efficiency is a competitive weapon in our service economy. Let me explain.

As the IT industry shifts from a product base economy to a service-based economy, operational competency is a competitive weapon. Contrast this with the past where companies could rely on closed-APIs, vendor lock in or the reliance on vast resources to build business and keep out the competition. Today, anyone with a good idea can connect to a cloud provider and build a software business over-night –- without massive investment dollars. Instead of forcing people to pay for a CD with your software on it, you deliver a service. In that type of environment where service is king, operational efficiency is crucial. It’s the company with the best execution and operational excellence that prospers. Yes, it’s leveled the playing field, yet ironically the cloud providers themselves are the best examples of operational excellence being the competitive advantage of the 21st century. …

Alex Williams’ The Oracle Effect: Sun's Best and Brightest Move On to New Places post of 3/18/2010 begins:

What is the effect of the Oracle acquisition of Sun Microsystems on cloud computing? Well, there have been quite a few if you look at where Sun's best and brightest have moved on to in the past few months.

Tim Bray is the latest Sun star to move on. You may know Bray as the co-founder of XML. Eve Maler is also a co-founder of XML. She had worked with Bray for many years until her departure from Sun last spring to join PayPal. Eve, as many of you may know, is one of the leaders in developing identity standards and initiatives.

Perhaps the clearest example is evident at Rackspace, where five developers from Sun were recently hired to work on Drizzle, a heavy-duty system for highly scalable applications in the cloud:

“When it's ready, Drizzle will be a modular system that's aware of the infrastructure around it. It does, and will run well in hardware rich multi-core environments with design focused on maximum concurrency and performance. No attempt will be made to support 32-bit systems, obscure data types, language encodings or collations. The full power of C++ will be leveraged, and the system internals will be simple and easy to maintain. The system and its protocol are designed to be both scalable and high performance.” …

<Return to section navigation list>