Monday, April 12, 2010

Windows Azure and Cloud Computing Posts for 4/12/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

  • Azure Blob, Table and Queue Services
  • SQL Azure Database, Codename “Dallas” and OData
  • AppFabric: Access Control and Service Bus
  • Live Windows Azure Apps, APIs, Tools and Test Harnesses
  • Windows Azure Infrastructure
  • Cloud Security and Governance
  • Cloud Computing Events
  • Other Cloud Computing Platforms and Services

To use the above links, first click the post’s title to display the post as a single article, then click the section link you want to navigate to.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 for the January 4, 2010 commercial release. 

Azure Blob, Table and Queue Services

Steve Marx continues his Uploading Windows Azure Blobs From Silverlight series with Part 2: Enabling Cross-Domain Access to Blobs of 4/12/2010:

In this series of blog posts, I’ll show you how to use Silverlight to upload blobs directly to Windows Azure storage. At the end of the series, we’ll have a complete solution that supports uploading multiple files of arbitrary size. You can try out the finished sample at http://slupload.cloudapp.net.

Part 2: Enabling Cross-Domain Access to Blobs

In Part 1: Shared Access Signatures, we saw how to construct a Shared Access Signature that can be used by a client to access blob storage without needing to know the account shared key. For most clients, a Shared Access Signature is all that’s needed to enable access to read and write Windows Azure blobs. However, in the case of Silverlight, there are restrictions on what kind of cross-domain access is allowed. In this post, we’ll see how to enable full access to blob storage through Silverlight.

ClientAccessPolicy.xml

When a Silverlight application makes a cross-domain call (other than those that are allowed by default), it first fetches a file called ClientAccessPolicy.xml from the root of the target server. In our case, our URLs look like http://slupload.blob.core.windows.net/…, so Silverlight will try to access the policy file at http://slupload.blob.core.windows.net/ClientAccessPolicy.xml. We’ll need to make sure that the correct policy file is served from that location. …

Steve continues with the details of serving the correct policy file.
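By way of illustration (a hedged sketch, not Steve's actual code), one common approach is to upload a permissive ClientAccessPolicy.xml into the storage account's special $root container so that it is served from the account root. The snippet below assumes the Windows Azure StorageClient library of that vintage; the account name, key, and policy contents are illustrative placeholders:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class PolicyUploader
{
    // A permissive Silverlight cross-domain policy; tighten the allowed domains for production.
    const string Policy =
@"<?xml version=""1.0"" encoding=""utf-8""?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers=""*"">
        <domain uri=""*"" />
      </allow-from>
      <grant-to>
        <resource path=""/"" include-subpaths=""true"" />
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>";

    static void Main()
    {
        // Placeholder connection string; substitute your own storage account name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=http;AccountName=youraccount;AccountKey=yourkey");
        var client = account.CreateCloudBlobClient();

        // Blobs in the $root container are addressable at the account root, e.g.
        // http://youraccount.blob.core.windows.net/clientaccesspolicy.xml
        var root = client.GetContainerReference("$root");
        root.CreateIfNotExist();
        root.SetPermissions(new BlobContainerPermissions
        {
            PublicAccess = BlobContainerPublicAccessType.Blob
        });

        var blob = root.GetBlobReference("clientaccesspolicy.xml");
        blob.Properties.ContentType = "text/xml";
        blob.UploadText(Policy);
    }
}

Once the blob is in place, Silverlight's policy check against the account root succeeds and cross-domain requests to Shared Access Signature URLs can proceed; the file only needs to be uploaded once per storage account.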

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Naveen Srinivasan’s Using LINQ and Reactive Extensions to overcome limitations in OData query operator post of 4/11/2010 begins:

I was pleased to know that Netflix had an OData API to query. The practical reason obviously was to use the API to query for the movies I want to watch. Like I mentioned in my previous post, I will be using LINQPad 4 for querying purposes, because of its built-in capabilities for OData as well as for Rx.

One thing I discovered after playing around with OData is that not every query operator in LINQ is available in OData. For example, the Netflix API has only 4 operators, which are:

  1. Filter
  2. Skip
  3. Take
  4. Orderby

And also the query returns only 20 rows as the result for each request. So for example, if I have to get 40 rows, on my first request the server would return 20 rows, and in my next request I would have to skip the first 20 and take the next 20 to get all 40 rows. These are some of the limitations.

Here is what I wanted from Netflix: movie listings that have an average rating greater than 3.5, ordered by release year descending and grouped by whether they are available for instant watch. That way I can have one queue for movies that I want to watch online and another that I can request via mail (the ones that are not available for instant watch).  …

Naveen goes on to describe the query required to get what he wanted from Netflix.
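As a rough sketch of the skip-and-take paging he describes (this is not Naveen's actual query, the Title shape below is illustrative rather than the real Netflix schema, and his Reactive Extensions layer is omitted for brevity), a WCF Data Services client can page through the feed 20 rows at a time and do the grouping client-side:

using System;
using System.Collections.Generic;
using System.Data.Services.Client;
using System.Linq;

// Illustrative entity shape only; the real Netflix catalog schema differs in detail.
public class Title
{
    public string Name { get; set; }
    public double AverageRating { get; set; }
    public int ReleaseYear { get; set; }
    public bool AvailableForInstantWatch { get; set; }
}

class Program
{
    static void Main()
    {
        var context = new DataServiceContext(new Uri("http://odata.netflix.com/Catalog/"))
        {
            MergeOption = MergeOption.NoTracking // read-only; no identity tracking needed
        };
        const int pageSize = 20; // the service returns at most 20 rows per request

        var titles = new List<Title>();
        for (int skip = 0; ; skip += pageSize)
        {
            // Where/OrderBy/Skip/Take translate to the $filter/$orderby/$skip/$top options.
            var page = context.CreateQuery<Title>("Titles")
                              .Where(t => t.AverageRating > 3.5)
                              .OrderByDescending(t => t.ReleaseYear)
                              .Skip(skip)
                              .Take(pageSize)
                              .ToList();
            if (page.Count == 0) break;
            titles.AddRange(page);
        }

        // Grouping isn't among the supported query options, so group on the client.
        foreach (var queue in titles.GroupBy(t => t.AvailableForInstantWatch))
        {
            Console.WriteLine(queue.Key ? "Instant watch queue:" : "DVD-by-mail queue:");
            foreach (var t in queue)
                Console.WriteLine("  {0} ({1}) - {2:0.0}", t.Name, t.ReleaseYear, t.AverageRating);
        }
    }
}

In practice you would cap the number of pages fetched; the point is simply that the four supported operators plus a client-side GroupBy are enough to build the two queues he wants.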

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

No significant articles today.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

David Aiken’s Web Queue Task Queue Task post of 4/12/2010 discusses decoupling Web and worker role tasks with queues:

As part of my firestarter talk on building applications from the cloud, I talked about decoupling the tasks worker roles had to perform using Queues.

Here is the basic pattern:

[Diagram: Web → Queue → Task → Queue → Task]

This is a pattern you should follow when building new applications and services for the cloud.

Why should you think this way?

There are several reasons:

  1. De-Normalizing data for Windows Azure table storage. If you are using table storage, you could well be using a worker role to keep the data de-normalized. As an example, in the Bid Now sample application there is a view of the data that is the most viewed items. This table is updated by a worker. When you view an item on the site, a message is placed on a queue which instructs the worker to update the data in the most viewed items table.
  2. Scale out. Some tasks take longer and/or more resources than others. Some tasks need to be done quicker than others. As an example, it doesn’t matter how quickly we update the most viewed items table, but we had better update the table containing the winning bid very quickly. Breaking out tasks into different workers allows you to scale up and down in the right place.
  3. Failover. At some point, your worker will effectively restart (this could be hardware failure, OS patching, etc.). The more work a worker has to do in any one task, the longer it will take to redo it after a failure. It also has a much greater chance of leaving your system in a mid-way state.
  4. Isolation/Layering. If tasks are separated out, you can easily add improvements and deploy fixes into individual instances without any downtime.

This has to be the first rule of building cloud apps, but what about a sample that shows it?

If you look at the Bid Now sample app, you will see that although we have a single worker role – the app is in fact implemented using the strategy above. We use a single worker as I didn’t figure you would want to run 6 worker roles for a sample app!

If you want to walk through an example of this, take a look at the following places in Bid Now:

  • Buy/AuctionDetails.aspx.cs in the project BidNow.Web, line 164-168. If the item is viewed, call service.IncreaseAuctionItemViews, which if you follow the link will take you to
  • AuctionService.cs in the project BidNow.Services, line 266-269. Here we add a message to a queue.
  • ViewItemHandler.cs in the project BidNow.Handlers, starting at line 51, reads the message from the queue and increments it up the list. It performs a delete and an add to do this as the view table is ordered by the partition key (more on this in a later post).

You can grab the latest Bid Now Sample from http://code.msdn.microsoft.com/BidNowSample
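To make the pattern concrete, here is a minimal sketch of the enqueue/dequeue handoff (not the Bid Now code itself; the queue name, connection string, and message format are made up for illustration) using the Windows Azure StorageClient library:

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

static class ViewCounting
{
    // Hypothetical queue name; Bid Now uses its own queue names and message schema.
    const string QueueName = "item-viewed";

    static CloudQueue GetQueue()
    {
        // Placeholder connection string; in a role you would read this from configuration.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=http;AccountName=youraccount;AccountKey=yourkey");
        var queue = account.CreateCloudQueueClient().GetQueueReference(QueueName);
        queue.CreateIfNotExist();
        return queue;
    }

    // Web role side: record the view and return to the user immediately.
    public static void NotifyItemViewed(string itemId)
    {
        GetQueue().AddMessage(new CloudQueueMessage(itemId));
    }

    // Worker role side: drain the queue and keep the de-normalized view table up to date.
    public static void ProcessViews()
    {
        var queue = GetQueue();
        while (true)
        {
            var message = queue.GetMessage(TimeSpan.FromMinutes(1)); // hidden while we work on it
            if (message == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // back off when the queue is empty
                continue;
            }

            UpdateMostViewedTable(message.AsString);
            queue.DeleteMessage(message); // delete only after the work has succeeded
        }
    }

    static void UpdateMostViewedTable(string itemId)
    {
        // Placeholder for the table storage update (the Bid Now handler does a delete
        // plus an add because the view table is ordered by partition key).
    }
}

Because the message is deleted only after processing, a worker that fails mid-task simply lets the message reappear and be retried, which is what makes the failover point above work.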

Lori MacVittie’s Development Performance Metrics Will Eventually Favor Cost per Line of Code post of 4/12/2010 analyzes the ease with which organizations will be able to move applications to the cloud:

It is true right now that for the most part, virtualization changes deployment of applications but not their development. Thus far this remains true, primarily because those with an interest in organizations moving to public cloud computing have reason to make it “easy” and painless, which means no changes to applications.

But eventually there will be changes that are required, if not from cloud providers then from the organization that pays the bills.

One of the most often cited truisms of development is actually more of a lament on the part of systems administrators. The basic premise is that while Moore’s Law holds true, it really doesn’t matter because developers’ software will simply use all available CPU cycles and every bit and byte of memory. Basically, the belief is that developers don’t care about writing efficient code because they don’t have to – they have all the memory and CPU in the world to execute their applications. Virtualization hasn’t changed that at all, as instances are simply sized for what the application needs (which is a lot, generally). It doesn’t work the other way around. Yet.

But it will, eventually, as customers demand – and receive - a true pay-per-use cloud computing model.

The premise of pay-for-what-you-use is a sound one, and it is indeed a compelling reason to move to public cloud computing. Remember that according to IDC analysts at Directions 2010, the primary driver for adopting cloud computing is all about “pay per use” with “monthly payments” also in the top four reasons to adopt cloud. Luckily for developers cloud computing providers for the most part do not bill “per use”, they bill “per virtual machine instance.” …

Lori continues with “COST MATTERS in a TRUE PAY-per-USE MODEL” and “YOU CAN’T CONTROL WHAT you CAN’T MEASURE” topics.

<Return to section navigation list> 

Windows Azure Infrastructure

The Windows Azure Team announced Windows Azure Platform Expands Global Availability to 41 Countries in conjunction with the VS2010 release celebration on 4/12/2010:

Starting today, the Microsoft Windows Azure platform, including Windows Azure, SQL Azure, and Windows Azure platform AppFabric will be generally available in an additional 20 countries, making our flexible cloud services platform available to a global customer base and partner ecosystem across 41 countries. As a number of time zones apply to our customers and partners worldwide, availability of the Windows Azure platform will roll out to all 20 countries successively this week.

On Feb 1, 2010, we announced the general availability of the Windows Azure platform in 21 countries, and starting today our global footprint will grow to include the following 20 additional countries and regions, offered in the following currencies:

  • Australia $ AUD
  • Brazil $ USD
  • Chile $ USD
  • Colombia $ USD
  • Costa Rica $ USD
  • Cyprus € EUR
  • Czech Republic € EUR
  • Greece € EUR
  • Hong Kong $ USD
  • Hungary € EUR
  • Israel $ USD
  • Luxemburg € EUR
  • Malaysia $ USD
  • Mexico $ USD
  • Peru $ USD
  • Philippines $ USD
  • Poland € EUR
  • Puerto Rico $ USD
  • Romania € EUR
  • Trinidad and Tobago $ USD

The post continues with links to the latest customer case studies, offers, a resource guide, and readiness resources.

Brenda Michelson’s Scale-up is Great. Don’t Forget Scale-out post of 4/12/2010 begins:

David Benari contributed a post to MIT CIO Symposium’s CIO Corner blog on Building Sustainable IT ROI:

“When analyzing IT ROI, the ROI-sustainability factor is often overlooked. A frequent scenario involves architecture plans that call for multiple diverse technologies that may each be practical choices on their own, but are virtually incompatible together or require completely different skills/teams to integrate and maintain. To build IT ROI that proves itself beyond the planning stage it is critical that the entire IT-infrastructure, including the human-resources that interact with it, are analyzed as a whole.”

I constantly soapbox on understanding and accounting for the value of IT investments over time, so I enjoyed the entire post.  One point though, the essentialness of planning (architectural planning) is especially pertinent to cloud computing. [Emphasis is mine.]

“Fundamental to sustainable IT-project ROI is the concept of building/buying only what you need for now, but architecting so that you can expand & scale with growth later. Some new technologies really allow IT departments to embrace this; an obvious example is cloud-servers that are both elastic and instantly-reconfigurable. This convenience invites a perception that utilizing additional cheap hardware is better ROI than establishing proper scalable systems architecture.

“Scaling-up” may be a solution that is immediately sufficient, but successful projects eventually outgrow this and need to be able to “scale-out”; this is where projects built without growth plans get into trouble and need to start re-architecting in order to be able to horizontally-distribute sources of bottlenecks.”

While it’s easy, from both motivation (cost) and implementation (VM) perspectives, to just throw stuff into a cloud computing environment, don’t let your business get caught by shortsightedness.  Just because the environment scales doesn’t mean your application will.  Remember, software design discipline is an enduring aspect of cloud computing.

James Hamilton rails against the prescriptive approach the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) took to their building efficiency standard in his Right Problem but Wrong Approach post of 4/12/2010:

… Recently, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) added data centers to their building efficiency standard, ASHRAE Standard 90.1. This standard defines the energy efficiency for most types of buildings in America and is often incorporated into building codes across the country. Unfortunately, as currently worded, this document is using a prescriptive approach. To comply, you must use economizers and other techniques currently in common practice. But, are economizers the best way to achieve the stated goal? What about a system that harvested waste heat and applied it to growing cash crops like tomatoes? What about systems using heat pumps to scavenge low grade heat (see Data Center Waste Heat Reclaimation)? Both these innovations would be precluded by the proposed spec as they don’t use economizers.

Urs Hoelzle, Google’s Infrastructure SVP, recently posted Setting Efficiency Goals for Data Centers where he argues we need goal-based environmental targets that drive innovation rather than prescriptive standards that prevent it. Co-signatories with Urs include:

  • Chris Crosby, Senior Vice President, Digital Realty Trust
  • Hossein Fateh, President and Chief Executive Officer, Dupont Fabros Technology
  • James Hamilton, Vice President and Distinguished Engineer, Amazon
  • Urs Hoelzle, Senior Vice President, Operations and Google Fellow, Google
  • Mike Manos, Vice President, Service Operations, Nokia
  • Kevin Timmons, General Manager, Datacenter Services, Microsoft

I think we’re all excited by the rapid pace of innovation in high scale data centers. We know it’s good for the environment and for customers. And I think we’re all uniformly in agreement with ASHRAE on the intent of 90.1. What’s needed to make it a truly influential and high-quality standard is that it be changed to be performance-based rather than prescriptive. But, otherwise, I think we’re all heading in the same direction.

B&L Research asks Is There Hypocrisy in the Cloud? in this 4/12/2010 post:

Do as I say, not as I do. How many times has a child heard that bit of sophistry from a hypocritical adult? That kind of lip service isn't reserved to duplicitous parents, though, as Robin Harris recently pointed out in his Storage Bits column for ZDNet.

In his piece, Harris was making a case for why private clouds won't be headed for the endangered species list anytime soon. If anyone doubts that premise, all they need to do is look at how cloud providers themselves handle their power issues, he reasoned.

Many of the benefits of cloud computing have an analog in the public power system. Power generation, like data aggregation, is cheapest when it's centralized and distributed through large-scale systems. Yet Google, when it built its dual 85,000 square foot data centers next to a substation that's part of the country's largest and most reliable hydropower systems, chose to surround the facilities with generators. "Why do Google, Amazon and every other cloud service provider invest millions in private power production?" he asked. "They don’t trust public power."

Still, cloud providers would have potential customers entrust a commodity—data—that's as important to those customers as power is to the providers, to a public distribution system called the Internet. "We cannot rely 100 percent on Internet access to our data," Harris wrote. "Given the outages we’ve seen to date, even 99 percent is a stretch."

If Google can't trust a system that's more than 125 years old, built on a technology that's well understood and an industry that's mature, how can it ask businesses to bet their future on a system that's less than 50 years old and is built on an evolving technology in an immature industry?

Kurt Mackie quotes Forrester Research in his Report: Global IT Revival Has Begun article of 4/9/2010 for Redmond Magazine:

A comeback has begun for the global IT tech sector, according to a quarterly economic report released on Friday by Forrester Research.

The U.S. IT market is set to grow by 8.4 percent this year, according to Andrew Bartels, vice president and principal analyst at Forrester. He made that prediction based on an updated study that analyzes IT global economic results in the first quarter of 2010. Previously, he had predicted growth of 6.6 percent for the U.S. IT market in a Forrester report examining fourth-quarter 2009 results.

The report, "US and Global IT Market Outlook: Q1 2010," predicts that a similar IT tech recovery will occur worldwide. Bartels expects to see the global IT market grow by 7.7 percent. He attributed the slightly slower global growth, compared with the U.S. market, to a weaker Euro currency. Previously, he had predicted growth of 8.1 percent for the global IT market based on fourth-quarter 2009 results.

Bartels has been an optimist about an impending tech-sector recovery -- even during the dark days of last year's fourth quarter, when the general economy had clearly hit the skids. The signs that things were getting better were apparent back in January, he explained in a blog post. He saw glimmers of a building tech boom even as far back as October after taking a look at revised IT-sector investment data presented by the U.S. government.

Bartels expects to see gains in 2010 for computer equipment ("PCs, peripherals and storage equipment") and software sales ("operating system software and applications"). Enterprises and small-to-medium businesses will be in the market for communications equipment, he predicts in the new report.

Those IT professionals who provide systems integration services will have to tough it out until software license purchases start to rise, he cautioned.

The IT tech revival in 2010 will show its strongest results in particular sectors of the economy, including "US manufacturers, financial services firms, utilities, and health care," according to Bartels. The report estimates the overall U.S. tech industry to be worth about $741 billion.

<Return to section navigation list> 

Cloud Security and Governance

Chris Hoff (a.k.a. @Beaker) discusses Intel’s Trusted Execution Technology (TXT) in his More On High Assurance (via TPM) Cloud Environments post of 4/11/2010:

Back in September 2009 after presenting at the Intel Virtualization (and Cloud) Security Summit and urging Intel to lead by example by pushing the adoption and use of TPM in virtualization and cloud environments, I blogged a simple question (here) as to the following:

Does anyone know of any Public Cloud Provider (or Private for that matter) that utilizes Intel’s TXT?

Interestingly the replies were few; mostly they were along the lines of “we’re considering it,” “…it’s on our long radar,” or “…we’re unclear if there’s a valid (read: economically viable) use case.”

At this year’s RSA Security Conference, however, EMC/RSA, Intel and VMware made an announcement regarding a PoC of their “Trusted Cloud Infrastructure,” describing efforts to utilize technology across the three vendors’ portfolios to make use of the TPM. …

Beaker continues with an analysis of the EMC/RSA, Intel and VMware announcement of a proof of concept of their “Trusted Cloud Infrastructure” at this year’s RSA Security Conference.

<Return to section navigation list> 

Cloud Computing Events

Bruno Terkaly announces an additional meetup of the San Francisco Bay Area Cloud Computing Developers Group on 4/25/2010 at 6:30 PM PDT at Microsoft San Francisco (in Westfield Mall where Powell meets Market Street):

This Meetup repeats on the 4th Sunday of every month and is focused on hands-on coding for Cloud platforms. The goal is ultimately for members to share best practices and innovate. This meetup is about sharing code and building applications that run in the cloud.

Although I work for Microsoft as a developer evangelist, I welcome all developers from all cloud platforms. Initially, the meetings will be focused on the Windows Azure Platform (Azure). Azure is new and initial meetings will be designed to get developers up and running and working on code, using C#, Visual Basic, PHP, and Java. The developer tooling will be Visual Studio or Eclipse.
Due to security in the building, I need you to email me at bterkaly@microsoft.com. Please provide me:
(1) Subject = Bruno's Azure Training
(2) First and last name
(3) Best Email to reach you at

More meet-ups are scheduled for 4/26, 5/6, 5/23 and 6/27.

Michael Coté will participate in a “Practical Considerations for Managing Your Public/Private Cloud Infrastructure” panel discussion from 3:45 PM to 4:30 PM PDT at Information Week’s Cloud Connect virtual event on 4/20/2010:

Adopting and managing cloud services are an emerging opportunity and challenge in today’s data centers. Computing resources can appear, disappear, and change size on an hourly basis. New dependencies and relationships cause new types of application conflicts. Customers expect more information and access. How does IT manage and deliver service from the brave new world of cloud computing?

Michael Coté is an Industry Analyst for RedMonk.

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Chris Hoff (a.k.a. @Beaker) asks Patching the (Hypervisor) Platform: How Do You Manage Risk? and answers on 4/12/2010:

Me again.

In 2008 I wrote a blog titled “Patching the Cloud” which I followed up with material examples in 2009 in another titled “Redux: Patching the Cloud.”

These blogs focused mainly on virtualization-powered IaaS/PaaS offerings and whilst they targeted “Cloud Computing,” they applied equally to the heavily virtualized enterprise.  To this point I wrote another in 2008 titled “On Patch Tuesdays For Virtualization Platforms.”

The operational impacts of managing change control, vulnerability management and threat mitigation have always intrigued me, especially at scale.

I was reminded this morning of the importance of the question posed above as VMware released a series of security advisories detailing ten vulnerabilities across many products, some of which are remotely exploitable. While security vulnerabilities in hypervisors are not new, it’s unclear to me how many heavily-virtualized enterprises or Cloud providers actually deal with what it means to patch this critical layer of infrastructure.

Once virtualized, we expect/assume that VM’s and the guest OS’s within them should operate with functional equivalence when compared to non-virtualized instances. We have, however, seen that this is not the case. It’s rare, but it happens that OS’s and applications, once virtualized, suffer from issues that cause faults to the underlying virtualization platform itself.

So here’s the $64,000 question – feel free to answer anonymously:

While virtualization is meant to effectively isolate the hardware from the resources atop it, the VMM/Hypervisor itself maintains a delicate position arbitrating this abstraction.  When the VMM/Hypervisor needs patching, how do you regression test the impact across all your VM images (across test/dev, production, etc.)?  More importantly, how are you assessing/measuring compound risk across shared/multi-tenant environments with respect to patching and its impact?

/Hoff

P.S. It occurs to me that after I wrote the blog last night on ‘high assurance (read: TPM-enabled)’ virtualization/cloud environments with respect to change control, the reference images for trust launch environments would be impacted by patches like this. How are we going to scale this from a management perspective?

See Beaker’s More On High Assurance (via TPM) Cloud Environments post as noted in the Cloud Security and Governance section above.

Paul Greenberg analyzes Marc Benioff’s assertion that “We’ve seen the future of enterprise software, and it looks more like Facebook on the iPad than Yahoo on the PC” in his Salesforce ChatterExchange, Chatterboxes, Chatter…chatter post of 4/12/2010:

Marc Benioff, at his Chatter event in NYC yesterday said the following:

“We’ve seen the future of enterprise software, and it looks more like Facebook on the iPad than Yahoo on the PC.”

Now, with my Jetsonesque world-view, I’d love to think that, especially since I use Facebook a fair amount and have an iPad which I now officially love, and there is evidence that companies that are innovators, like salesforce and Apple, are thinking about the enterprise in exactly that way. If you saw the announcement of iPhone 4.0 yesterday, business features such as a unified inbox to handle multiple mail accounts will be included, as will folder management, though I would hardly call these capabilities major enterprise readiness for the iPhone/iPad.

That’s an aside, though, interesting as it may be.  Facebook on the iPad is less important to me today (not in the future) than Chatter in the enterprise.  Even though Chatter is still not released, and as constituted is considered a private beta with 500 customers, it is gaining street buzz – and credibility.

Back when Chatter was announced at Dreamforce 2009, I was concerned about it a bit – seeing its value as a layer of force.com rather than just a standalone app. That’s still my concern. Nothing has changed there. I also was worried about the lack of filters for the one truly unique feature of Chatter – the ability to subscribe to any data object in the system – be it one created directly in salesforce.com or another system’s – say SAP’s supply chain.  The lack of filters, to me, meant that while you could subscribe to what you wanted, you had no way to get rid of the noise that would also be present – for example, if you subscribed to that supply chain feed to track inventory in a dynamic way on specific items you were selling, you’d have to get everything else going on in the supply chain that you don’t want, too. …

<Return to section navigation list> 
