Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.
•• Updated 4/11/2010: More new articles as of 9:00 AM PDT Sunday added.
• Updated 4/10/2010: A few new articles as of 5:00 PM PDT Saturday added.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database, Codename “Dallas” and OData
- AppFabric: Access Control and Service Bus
- Live Windows Azure Apps, APIs, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use the above links, first click the post’s title to display the post as a single article, and then use the links to navigate within it.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:
- Chapter 12: “Managing SQL Azure Accounts and Databases”
- Chapter 13: “Exploiting SQL Azure Database's Relational Features”
HTTP downloads of the two chapters are available from the book's Code Download page; the chapters will be updated in April 2010 to reflect SQL Azure's January 4, 2010 commercial release.
•• The Windows Azure Storage Team posted a very detailed (and lengthy) Using Windows Azure Page Blobs and How to Efficiently Upload and Download Page Blobs tutorial on 4/11/2010:
We introduced Page Blobs at PDC 2009 as a type of blob for Windows Azure Storage. With the introduction of Page Blobs, Windows Azure Storage now supports the following two blob types:
- Block Blobs (introduced PDC 2008) – targeted at streaming workloads.
- Each blob consists of a sequence/list of blocks. The following are properties of Block Blobs:
- Each Block has a unique ID, scoped by the Blob Name
- Blocks can be up to 4MB in size, and the blocks in a Blob do not have to be the same size
- A Block blob can consist of up to 50,000 blocks
- Max block blob size is 200GB
- Commit-based Update Semantics – Modifying a block blob is a two-phase update: first, upload the blocks to add or modify as uncommitted blocks for the blob; then, after they are all uploaded, commit the blocks to add/change/remove via PutBlockList, which atomically creates a new readable version of the blob.
- Range reads can be from any byte offset in the blob.
- Page Blobs (introduced PDC 2009) – targeted at random write workloads.
- Each blob consists of an array/index of pages. The following are properties of Page Blobs:
- Each page is of size 512 bytes, so all writes must be 512 byte aligned, and the blob size must be a multiple of 512 bytes.
- Writes have a starting offset and can write up to 4MB of pages at a time. These are range-based writes that consist of a sequential range of pages.
- Max page blob size is 1TB
- Immediate Update Semantics – As soon as a write request for a sequential set of pages succeeds in the blob service, the write has committed, and success is returned back to the client. The update is immediate, so there is no commit step as there is for block blobs.
- Range reads can be done from any byte offset in the blob.
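The 512-byte alignment rule for Page Blobs can be sketched as a small validation helper. This is an illustrative snippet of mine, not part of any Azure SDK:

```python
PAGE_SIZE = 512              # every page blob write must be 512-byte aligned
MAX_WRITE = 4 * 1024 * 1024  # a single range write can cover at most 4MB

def validate_page_write(offset: int, length: int) -> bool:
    """Return True if (offset, length) is a legal page blob write range:
    both values page-aligned, and the length positive and at most 4MB."""
    return (offset % PAGE_SIZE == 0 and
            length % PAGE_SIZE == 0 and
            0 < length <= MAX_WRITE)

def pad_to_page(data: bytes) -> bytes:
    """Zero-pad a buffer up to the next 512-byte boundary, since the
    blob size must be a multiple of 512 bytes."""
    remainder = len(data) % PAGE_SIZE
    return data if remainder == 0 else data + b"\x00" * (PAGE_SIZE - remainder)
```

A write of 700 bytes, for instance, would have to be padded to 1,024 bytes (two pages) before being sent.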
Unique Characteristics of Page Blobs
We created Page Blobs out of a need to have a cloud storage data abstraction for files that supports:
- Fast range-based reads and writes – need a data abstraction with single update writes, to provide a fast update alternative to the two-phase update of Block Blobs.
- Index-based data structure – need a data abstraction that supports index-based access, in comparison to the list-based approach of block blobs.
- Efficient sparse data structure – since the data object can represent a large sparse index, we wanted an efficient way to manage it and to avoid charging for parts of the index that do not have any data pages stored in them.
Uses for Page Blobs
The following are some of the scenarios Page Blobs are being used for:
- Windows Azure Drives - One of the key scenarios for Page Blobs was to support Windows Azure Drives. Windows Azure Drives allows Windows Azure cloud applications to mount a network attached durable drive, which is actually a Page Blob (see prior post).
- Files with Range-Based Updates – An application can treat a Page Blob as a file, updating just the parts of the file/blob that have changed using ranged writes. In addition, to deal with concurrency, the application can obtain and renew a Blob Lease to maintain an exclusive write lease on the Page Blob for updating.
- Logging - Another use of Page Blobs is custom application logging. For example, when a role instance starts up, it can create a Page Blob of some MaxSize, which is the maximum amount of log space the role wants to use for a day. The role instance then writes its logs using up-to-4MB range-based writes, where a header provides metadata such as the log entry's size and timestamp. When the Page Blob fills up, the application can either treat it as a circular buffer and start writing again from the beginning, or create a new page blob, depending on how it wants to manage the log files (blobs). With this approach, each role instance gets its own Page Blob, so there is only a single writer per page blob. To know where to resume logging after a failover, the application can simply create a new Page Blob when a role restarts, and GC the older Page Blobs after a given number of hours or days. Since empty pages are not charged, it doesn't matter if the page blob is never filled.
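The circular-buffer logging scheme just described reduces to simple page-offset arithmetic. A sketch under my own naming (nothing here comes from the StorageClient library):

```python
PAGE_SIZE = 512

def log_write_offset(current: int, entry_bytes: int, max_size: int):
    """Return (write_at, new_current): the offset at which to write this
    log entry, and the offset for the next one. Entries are rounded up to
    whole pages; if an entry would run past max_size, wrap to offset 0 and
    overwrite the oldest entries, treating the blob as a circular buffer."""
    pages = -(-entry_bytes // PAGE_SIZE)   # ceiling division
    size = pages * PAGE_SIZE
    if current + size > max_size:
        current = 0                        # wrap around
    return current, current + size
```

For example, a 700-byte entry occupies two pages (1,024 bytes), and an entry that would cross MaxSize is written at offset 0 instead.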
The team continues with code examples for the StorageClient library and concludes:
The following are a few areas worth summarizing about Page Blobs:
- When creating a Page Blob you specify the max size, but are only charged for pages with data stored in them.
- When uploading a Page Blob, do not store empty pages.
- When updating pages with zeros, clear them with ClearPages.
- Reading from empty pages will return zeros.
- When downloading a Page Blob, first use GetPageRanges, and only download the page ranges with data in them.
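The "do not store empty pages" advice amounts to scanning the local buffer for non-zero pages before uploading. A hypothetical sketch of that scan (GetPageRanges is the server-side counterpart used when downloading):

```python
PAGE_SIZE = 512

def data_page_ranges(buf: bytes):
    """Return (start, end) byte ranges, aligned to 512-byte pages, that
    contain at least one non-zero byte. All-zero pages are skipped, which
    mirrors how the blob service avoids storing (and charging for) empty
    pages."""
    ranges = []
    start = None
    for off in range(0, len(buf), PAGE_SIZE):
        page_has_data = any(buf[off:off + PAGE_SIZE])
        if page_has_data and start is None:
            start = off                      # open a new data range
        elif not page_has_data and start is not None:
            ranges.append((start, off))      # close the current range
            start = None
    if start is not None:
        ranges.append((start, len(buf)))
    return ranges
```

An uploader would then issue one range-based write per returned range, never touching the zero pages in between.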
If you’re working with Windows Azure storage, don’t miss this post!
Brad Calder announced the New Windows Azure Storage Blog, which contained three posts at the time, in a 4/11/2010 thread of MSDN’s Windows Azure forum. Brad promises “We'll provide information and examples there for Windows Azure Storage about every 2-3 weeks.”
• Steve Marx’s Uploading Windows Azure Blobs From Silverlight – Part 1: Shared Access Signatures post of 4/9/2010 complements the succeeding video episode and begins:
In this series of blog posts, I’ll show you how to use Silverlight to upload blobs directly to Windows Azure storage. At the end of the series, we’ll have a complete solution that supports uploading multiple files of arbitrary size. You can try out the finished sample at http://slupload.cloudapp.net.
Part 1: Shared Access Signatures
Shared Access Signatures are what enable us to use Silverlight to access blob storage directly without compromising the security of our account. In the first part of our series, we’ll cover how Shared Access Signatures are used in http://slupload.cloudapp.net.
Why use Shared Access Signatures?
The typical way to access Windows Azure storage is by constructing a web request, and then signing that web request with the storage account’s shared key. This model works fine when it’s our own trusted code that’s making the call to storage. However, were we to give the shared key to our Silverlight client code, anyone who had access to that Silverlight application would be able to extract the key and use it to take full control of our storage account.
To avoid leaking the storage key, one option is to proxy all requests that come from the client through a web service (which can perform authorization). The web service is then the only code that talks directly to storage. The downside of this approach is that we need to transfer all those bits through a web server. It would be nice to have Silverlight send the data directly to the storage service without leaking our storage credentials.
Shared Access Signatures allow us to separate the code that signs a request from the code that executes it. A Shared Access Signature (SAS) is a set of query string parameters, attached to a URL, that serves as the authorization for a web request. Contained within the SAS is a policy for which operations are allowed and a signature that proves that the creator of the URL is authorized to perform those operations. By handing this SAS off to our Silverlight client, we pass that authorization down to the client. Now Silverlight can access blob storage directly, but only in the limited ways we granted permission.
Steve continues with the details for “Creating a Shared Access Signature.”
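The mechanics Steve describes can be approximated in a few lines: a string describing the granted permissions and validity window is HMAC-SHA256-signed with the storage account key and attached to the URL as query parameters. The string-to-sign layout below is deliberately simplified and does not match the blob service's real format:

```python
import base64
import hashlib
import hmac
import urllib.parse

def make_sas(account_key_b64: str, resource: str, permissions: str,
             start: str, expiry: str) -> str:
    """Build a simplified Shared Access Signature query string.
    account_key_b64 is the base64-encoded storage account key; only the
    holder of that key can produce a valid signature."""
    key = base64.b64decode(account_key_b64)
    # NOTE: simplified string-to-sign; the real service defines its own layout.
    string_to_sign = "\n".join([permissions, start, expiry, resource])
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urllib.parse.urlencode(
        {"st": start, "se": expiry, "sp": permissions, "sig": sig})
```

The client receives only the finished query string; it can exercise exactly the granted permissions but never sees the account key itself.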
Ryan Dunn and Steve Marx deliver another Channel9 episode on 4/9/2010: Cloud Cover Episode 8 - Shared Access Signatures (00:41:50):
In this episode:
- Learn how to create and use Shared Access Signatures (SAS) in Windows Azure blob storage
- Discover how to easily create an SAS yourself.
- Upgrade Domains and Fault Domains
- Eugenio Pace's Windows Azure Guidance
- Handling Error 40552 in SQL Azure (via Cihan Biyikoglu)
- Upcoming Support for .NET 4 in Windows Azure
Cloud Cover is off for the next two weeks, so we made this episode a bit longer than normal.
•• Alessandro Bozzon gets OData’s openness partly correct in his Opening data on the Web: The Open Data Protocol post of 4/11/2010 to the Search Computing Blog:
OData (which seems to be mainly supported by Microsoft) is a Web protocol for querying and updating data. OData can be used to give access to a variety of sources, such as relational databases, file systems, content management systems and traditional Web sites. …
Several applications, such as SharePoint, WebSphere, SQL Azure, Azure Table Storage an[d] SQL Server, can already expose data as oData Services. [Emphasis added.] …
… While at first glance it may look like oData puts itself in competition with other sets of standards, such as RDF, OWL and SPARQL, oData seems more like a complementary, possibly lightweight solution. The only issue, so far, is that there are limited options for producing oData services, which are, de-facto, confined to the world of .Net, thus posing a severe limitation to the adoption of the protocol. [Emphasis added.] …
Alessandro’s assertion that oData services “are, de-facto, confined to the world of .Net” is negated by IBM’s delivery of OData services via WebSphere. Others are welcome to create AtomPub-compliant OData services on other platforms.
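Indeed, because OData is just AtomPub plus a set of query-string conventions, a client or service on any platform needs nothing beyond HTTP. For example, composing an OData query URL takes only string handling; the service root below is invented for illustration:

```python
import urllib.parse

def odata_query(service_root: str, entity_set: str, **options) -> str:
    """Compose an OData query URL from system query options such as
    filter='Price gt 10', top=5, or orderby='Name'. Option names are
    given the '$' prefix that the OData conventions require."""
    params = {"$" + name: str(value) for name, value in options.items()}
    return "{}/{}?{}".format(service_root.rstrip("/"), entity_set,
                             urllib.parse.urlencode(params))
```

Issuing a GET against such a URL returns an Atom feed that any XML-capable stack can parse, which is why the protocol is not inherently tied to .NET.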
•• Lew Moorman recommends Cassandra and Drizzle as cloud databases for a new LAMP stack in his Open the Clouds With Portable Stacks post of 4/10/2010 to GigaOm:
Currently, moving from one cloud to another is easy, and having multiple clouds to choose from gives customers the ability to utilize a range of features and service models to meet their varying needs. But proprietary next-generation databases, by locking customer code to specific clouds, remove the benefits of market choice, such as customized service experiences, competitive pricing and — most importantly — increased adoption.
To ensure continued advancement of the cloud, the industry needs to turn its support to an open cloud by using database technologies such as Cassandra and Drizzle, which are portable to any public or private cloud.
Finding the New LAMP Stack for the Cloud
Many suggest that standards are the key to encouraging broader adoption of cloud computing. I disagree; I think the key is openness. What’s the difference? In the standards approach, a cloud would look and work just like any other. Open clouds, on the other hand, could come in many different flavors, but they would share one essential feature: all of the services they’d offer could run outside of them.
Such is the case with Drizzle, the fork of MySQL built for the big data needs of the cloud era, as well as the open-source Cassandra project, a next-generation database of the NoSQL variety and the engine powering the massive data needs of Twitter and Digg. These database technologies are the future of the webscale business — the next generation of the LAMP stack that helped drive down the cost of creating new startups in the first phase of the web. …
Lew Moorman is the president of Rackspace’s cloud division and the company’s chief strategy officer.
•• Jon Udell’s The “it just works” kind of efficiency post of 4/9/2010 explains:
I’m editing an interview with John Hancock, who leads the PowerPivot charge and championed its support of OData. During our conversation, I told him this story about how pleased I was to discover that OData “just works” with PubSubHubbub. His response made me smile, and I had to stop and transcribe it:
“Any two teams can invent a really efficient way to exchange data. But every time you do that, every time you create a custom protocol, you block yourself off from the effect you just described. If you can get every team — and this is something we went for a long time telling people around the company — look, REST and Atom aren’t the most efficient things you can possibly imagine. We could take some of your existing APIs and our engine and wire them together. But we’d be going around and doing that forever, with every single pair of things we wanted to wire up.
“So if we take a step back and look at what is the right way to do this, what’s the right way to exchange data between applications, and bet on a standard thing that’s out there already, namely Atom, other things will come along that we haven’t imagined. Dallas is a good example of that. It developed independently of PowerPivot. It was quite late in the game before we finally connected up and started working with it, but we had a prototype in an afternoon. It was so simple, just because we had taken the right bets.”
There are, of course, many kinds of efficiency. Standards like Atom aren’t most efficient in all ways. But they are definitely the most efficient in the “it just works” way.
Listen to Jon Udell’s 00:28:03 John Hancock - PowerPivot, OData, and the Democrat[ization of Business Intelligence] interview, an IT Conversations podcast:
PowerPivot, a new add-in for Excel, can absorb and analyze vast quantities of data. And it can ingest that data from sources that support the Atom-based OData protocol. John Hancock, who led the charge to create PowerPivot, tells host Jon Udell how it works, why it supports OData, and what this will mean not only for corporate business intelligence but also for the analysis of open public data.
• Selcin Turkaslan announces the availability of the Microsoft SQL Azure Survival Guide on the TechNet Wiki:
Do you want to learn what SQL Azure is and what it provides? Here are quick links to key Microsoft documentation resources and other technical information you need to get started. What's missing? Please add. It is the wiki way!
What is SQL Azure?
Microsoft SQL Azure Database is a cloud-based relational database service that is built on SQL Server technologies and runs in Microsoft data centers on hardware that is owned, hosted, and maintained by Microsoft. By using SQL Azure, you can easily provision and deploy relational database solutions to the cloud, and take advantage of a distributed data center.
Visit these sites for introductory information, technical overviews, and resources.
Selcin continues with “Documentation,” “SQL Azure Code Examples,” “Videos,” “Accounts and Pricing,” “Connect” and “Other Resources” topics.
Nick Hill’s Using Microsoft Dallas as a data source for a Drupal Module post of 4/9/2010 to the MCS UK Solution Development Team blog begins:
As I get more and more familiar with the Azure platform, one thing that strikes me more than anything else is that Azure is not an all-or-nothing thing. You can use the Azure platform as a component of your overall solution, or you can take advantage of individual services provided by the platform. It is not a case of choosing your technology or vendor and then designing the solution - you can design your solution and choose your platform.
I previously wrote a simple application using open source technology and deployed it wholly within the Azure platform. You can read about it here. This got me thinking about the other end of the spectrum: writing and hosting an application on an open source platform, but making use of Azure for one small component.
The challenge I set myself was to write a Drupal module (I have been wanting to do this for a while now), but to use Microsoft Dallas as the data source.
Drupal is an open source content management system (CMS) written in PHP. It is used as a back-end system for many different types of websites, ranging from small personal blogs to large corporate and political sites. It is also used for knowledge management and business collaboration.
Microsoft® Codename "Dallas" is a new service allowing developers and information workers to easily discover, purchase, and manage premium data subscriptions in the Windows Azure platform. Dallas is an information marketplace that brings data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources together into a single location, under a unified provisioning and billing framework. Additionally, Dallas APIs allow developers and information workers to consume this premium content with virtually any platform, application or business workflow.
The module is very simple:
- connects to a predefined Microsoft Codename 'Dallas' service on Windows Azure;
- retrieves UNESCO data relating to UK Government expenditure on Education;
- renders the data as a Drupal block;
- allows the block to be positioned on the portal using the administrative interface.
There are a number of parameters that can be set on the Dallas request to filter the returned results (such as for paging, or to bring back data for a specific country) or to request a specific format for the returned results (such as Atom).
To keep things simple, I accepted the defaults and hard-coded UK in the request. …
Nick continues with the details for writing the required Drupal module.
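Since Dallas returns Atom-formatted results by default, the consuming side of such a module boils down to an HTTP GET plus feed parsing. Here is a sketch of the parsing step in Python rather than Drupal's PHP; the sample feed and its titles are invented, not the real UNESCO schema:

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def entry_titles(atom_xml: str):
    """Return the title of every entry in an Atom feed document, the
    kind of extraction a Dallas consumer performs before rendering."""
    root = ET.fromstring(atom_xml)
    return [entry.findtext(ATOM + "title") for entry in root.findall(ATOM + "entry")]

# Invented sample payload standing in for a Dallas Atom response.
sample = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Education expenditure</title>
  <entry><title>UK 2007</title></entry>
  <entry><title>UK 2008</title></entry>
</feed>"""
```

A Drupal block would do the equivalent extraction in PHP and hand the values to its theming layer.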
You might find Nick’s earlier Ruby on Rails on Windows Azure with SQL Azure post of 2/26/2010 of interest:
I was recently talking to a customer about the possibility of moving a web site from Linux to Windows Azure. The hosting costs of the application are not excessive, and the customer is happy with the service received. Nevertheless they were very interested in exploring the hosting costs and potential future benefits of the Windows Azure platform.
The web site was developed using Ruby on Rails®, an open-source web framework with a reputation for "programmer happiness and sustainable productivity". The site also makes use of MySQL and memcached.
Most of the necessary jigsaw pieces are already in place:
- Tushar Shanbhag talked about running MySQL on Windows Azure at the PDC last October.
- Simon Davies blogged about getting up and running with Ruby on Rails on Windows Azure, and released a Sample Cloud Project.
- Dominic Green recently discussed memcached on this blog.
We calculated that we could make savings in the hosting costs simply by moving to Azure. However, it soon became apparent that if we could replace MySQL with SQL Azure, then this would provide a number of additional benefits:
- Further reduce hosting costs. MySQL would require a worker role, hosted on its own dedicated node. This would cost around $1,200/year, whereas hosting the 400MB database on SQL Azure would cost $9.99 per month.
- Reduced complexity and management. The MySQL accelerator shows how to deploy in a worker role, and includes sample code for managing and backing up the database. However this is all custom code which would increase the total cost of ownership of the solution. Using SQL Azure would greatly simplify the architecture.
- Business Intelligence. The application currently has a custom Business Intelligence module, developed using Flash. Moving to SQL Azure would allow the customer to take advantage of roadmap features to provide clients with a much more sophisticated Business Intelligence module, while again reducing total cost of ownership.
The only missing piece in this jigsaw is the connection of a Ruby on Rails application to SQL Azure. …
Ruby on Rails already works with SQL Server 2000, 2005 and 2008, while SQL Azure is a subset of SQL Server 2008 functionality. The adapter is available on GitHub. I decided to test this out at the weekend. Luckily I found this old article. After a certain amount of ‘fiddling and hacking’, I managed to get it working:
Nick continues with the details for creating the Rails/SQL Azure project.
Nick Kuster explains how to display the NetFlix movie catalog with a .NET MVC2 project in his OData and the NetFlix Catalog API post of 4/9/2010:
The Netflix OData Catalog API was announced during the second keynote of Mix10 earlier this year. This announcement also meant that the live preview of the service was now publicly available. This means that all of the movie information built into the Netflix site (movies, actors, genres, release dates, etc.) is now available through an easy to consume service.
I have a very, very unorganized movie collection, and have been trying to write a system to keep track of what I have and where I have it stored. Whenever I tried to write it, I always had to stop myself because I could not get the right amount of information about the movies. I even went as far as scraping the information box on movie pages from Wikipedia. This was promising, but I stopped working on the system after a few days.
Now, armed with the NetFlix Catalog API, I can finally get the right amount of information into my system to make it fun and useful. This is a quick demo of how to get information from the NetFlix Catalog Service. …
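Retrieving a page of the catalog then reduces to $skip/$top queries against the service root announced at MIX10. A hedged sketch of the URL arithmetic (the paging scheme and defaults are my own choices, not Nick's code):

```python
def catalog_page_url(page: int, page_size: int = 20) -> str:
    """Build the URL for one page of the Netflix Titles entity set,
    ordered by name, using OData's $skip/$top paging options."""
    root = "http://odata.netflix.com/Catalog"
    return ("{0}/Titles?$orderby=Name&$skip={1}&$top={2}"
            .format(root, page * page_size, page_size))
```

In an MVC2 project the same query would typically be expressed as a LINQ expression over a generated service reference, with WCF Data Services translating it into a URL of this shape.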
Mohammad Mosa (a.k.a., Moses of Egypt) uses Mapping Conceptual Model Function to Complex Type in Entity Framework 4.0 to expose KiGG data as a read-only OData source in this 4/9/2010 post:
You might have heard of KiGG, the open source project that is currently live at http://dotnetshoutout.com. I wanted to expose part of KiGG's data as a read-only OData Service. But I figured that exposing the raw KiGG schema might not be useful, so I had to choose between the two options that came to mind:
1. Build views on the physical store (the database), create a new entity data model for those views, and use the new data model context for DataServiceContext.
2. Use the Entity Framework 4.0 conceptual model function feature with complex types to simulate views. It's like building views on the conceptual model itself rather than on the store model.
I picked the 2nd option. The downloadable sample is available at the end of this post.
I needed to expose “most viewed” stories from the data model in a simple way. However, the query seemed to be complex when using LINQ over OData or through a URL.
The query will require retrieving stories from Stories entity set and count each story view in StoryViews entity set.
To expose this through an OData service, I thought it would be better to make it as service operation “MostViewed” that will return an IQueryable of a certain complex type.
Mohammad continues with the details of his implementation.
(KiGG is a Web 2.0 style social news web application developed in Microsoft supported technologies.)
For some time I have been meaning to move my 15-year-old web site to offsite hosting. Currently it is hosted on a dedicated server running Windows 2003 and SQL Server 2005. One of the issues is that over the years I haven't really put much time into maintaining the technology. I did upgrade from classic ASP to ASP.NET and did redo some of the pages in ASP.NET MVC. Lately, as the site has slowed, I have done some performance optimization in SQL Server, like query optimization and index building. However, it is still running SQL Server 2005. When I heard about SQL Azure at the PDC two years ago, I knew this is where I wanted to go. However, I was waiting for R2 of SQL Server 2008 to make migrating a little easier. In summary, there is always an excuse. So today I thought I would get started and see how far I could get before getting stuck.
“The first thing I learned is that you need to update your database level.”
SQL Azure runs on Level 100. I was running my database on level 70 (yes, I started it 15 years ago on SQL Server 2000). So for a smooth transition I want to update the database level to work out any query issues. So will you. Upgrade your level to the highest your SQL Server version will support and work out any issues before making the migration to SQL Azure. It will help things go smoothly.
Good news is that I am not stuck yet.
Jason Milgram explains Moving from Client/Server to Client/Cloud with Linxter’s cloud messaging platform in this 4/8/2010:
So what do you do if you developed a client/server application for the SMB market, and you want to grow your customer base and increase revenue? This was the case with a recent customer of ours. Our suggestion was to convert their traditional software sales model (fixed revenue) into a Software-as-a-Service model (recurring revenue). The transformation required Linxter and Microsoft SQL Azure to update their client/server architecture to a client/cloud architecture.
By moving to a client/cloud architecture the company was able to replace their high, upfront license cost with a small, recurring monthly fee per client license, and eliminate the need for a server admin. The company’s client/server product had required customers to pay an upfront license fee, which included client licenses as well as a Microsoft SQL Server license. This upfront cost and need for a server admin seemed to be a barrier to many SMBs who did not want the responsibility of running their own database server — which would include making it highly available via clustering, managing tape backups, and adding more hardware as their storage requirements grew.
Removing upfront cost and admin requirements allowed the company to expand their customer base and increase revenue. Not only was the revenue increase due to customer expansion, but also a higher return per customer. With the new recurring revenue model, the cumulative revenue received from a single customer after twenty four months of use would surpass the initial upfront license of the old model.
So, how were Linxter and SQL Azure used in this client/server to client/cloud transformation?
SQL Azure provided a highly available, scalable, multi-tenant database service hosted by Microsoft in the cloud. The databases could be quickly provisioned and deployed, and no one had to install, set up, patch or manage any software. Plus, high availability and fault tolerance are built-in, and no physical administration is ever required.
Linxter was the key to enabling this software + services approach: keeping the power and richness of their already-developed desktop app and connecting it with the power of a cloud relational database. Linxter's cloud messaging platform, which combines web services with a series of traditional features such as local transactional queues and file chunking, enabled them to quickly and efficiently utilize the cloud database in a secure and reliable manner with no specialized coding. …
Ron Jacobs announces Today on http://endpoint.tv – AppFabric Dashboard Overview on 4/9/2010:
AppFabric has this great new Dashboard that gives you insight into what is happening with your services and workflows. In this video, Senior Programming Writer Michael McKeown shows you what the Dashboard can do for you.
For more on the AppFabric Dashboard, see the following articles on MSDN.
We have more great episodes available at http://endpoint.tv, so keep watching.
• Eugenio Pace’s Windows Azure Guidance - Background processing I post of 4/9/2010 discusses the background Scan service of the a-Expense guidance project:
If you recall from my original article on a-Expense, there were 2 background processes:
- Scans Service: a-Expense allows its users to upload scanned documents (e.g., receipts) that are associated with a particular expense. The images are stored in a file share, with a naming convention that uniquely identifies the image and the expense.
- Integration Service: this service runs periodically and generates files that are interfaces to an external system. The generated files contain data about expenses that are ready for reimbursement.
Let’s analyze #1 in this article.
The scan service performs 2 tasks:
- It compresses uploaded images so they use less space.
- It generates a thumbnail of the original scan that is displayed on the UI. Thumbnails are lighter weight and are displayed faster. Users can always browse the original if they want to.
When moving this to Windows Azure, we had to make some adjustments to the original implementation (which was based on files and a Windows Service).
Eugenio continues with a recommendation to change storage and implement the Scan service with a worker role.
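On Windows Azure that typically means a queue-driven worker role: the web role enqueues a message per uploaded scan, and the worker compresses the image and produces the thumbnail. A platform-neutral sketch of that loop, with an in-memory queue and a dict standing in for Azure queue and blob storage, and with both image operations simulated:

```python
import queue

def process_scans(scan_queue: "queue.Queue", store: dict):
    """Drain the queue; for each scanned image, store a 'compressed'
    copy and a 'thumbnail'. Real code would call image libraries and the
    Azure queue/blob APIs; here both steps are simulated so the control
    flow of the worker role is the only thing on display."""
    while True:
        try:
            name, image_bytes = scan_queue.get_nowait()
        except queue.Empty:
            break                                  # nothing left to process
        store[name + ".z"] = image_bytes[::2]      # stand-in for compression
        store[name + ".thumb"] = image_bytes[:16]  # stand-in for a thumbnail
```

A real worker role would run this loop forever with a sleep between polls, rather than exiting when the queue is momentarily empty.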
Jeff Douglas reports in a series of tweets that MESH01’s current Standup Paddleboard Graphic Competition (required site registration), which is open from 4/9/2010 through 4/19/2010 and offers a $1,000 prize, runs on Windows Azure:
Jeff is a Senior Technical Consultant at Appirio specializing in cloud-based applications. If you wondered why I was interested in surfboard (stand-up paddleboard) designs, see my No More U.S. Custom Surfboards? Squidoo Lens about the effects of Clark Foam's closing and my few years in the foam-core surfboard business.
Maarten Balliauw posted PhpAzureExtensions to CodePlex’s Windows Azure - PHP contributions project on 4/9/2010:
The PhpAzureExtensions currently support the following modules:
The Windows Azure Team posted Real World Windows Azure: Interview with Patricio Jutard, Chief Technology Officer at Three Melons on 4/9/2010:
As part of the Real World Windows Azure series, we talked to Patricio Jutard, CTO at Three Melons, about using the Windows Azure platform to deliver Bola, an online soccer game available on social networks such as Facebook and Orkut. Here's what he had to say:
MSDN: Tell us about Three Melons and the services you provide.
Jutard: Three Melons is a game studio; we love games. We develop our games for the Web and for smartphones, and all of our games have a social component.
MSDN: What was the biggest challenge Three Melons faced prior to implementing Windows Azure?
Jutard: We were developing an online soccer game, Bola. As a Facebook application, the game has the potential for a large user base, one that could rapidly gain viral popularity. The ability to handle the processing power and storage required for an application with potentially millions of users was essential for us. At the same time, we needed to be able to scale up and scale down quickly and cost-effectively as demand dictated.
MSDN: Why did you decide to use Windows Azure for the Bola game?
Jutard: We already make extensive use of the Microsoft .NET Framework and Microsoft Visual Studio in our development environment, so we were able to easily and quickly develop on the Windows Azure platform. Also, Windows Azure offers us multiple options for storage: Microsoft SQL Azure for relational storage and Windows Azure storage services for Table, Blob, and Queue storage.
Maarten Balliauw explains Using Windows Azure Drive in PHP (or Ruby) in this 4/8/2010 post:
At the JumpIn Camp in Zürich this week, we are trying to get some of the more popular PHP applications running on Windows Azure. As you may know, Windows Azure has different storage options like blobs, tables, queues and drives. There’s the Windows Azure SDK for PHP for most of this, except for drives. Which is normal: drives are at the operating system level and have nothing to do with the REST calls that are used for the other storage types. By the way: I did a post on using Windows Azure Drive (or “XDrive”) a while ago if you want more info.
Unfortunately, .NET code is currently the only way to create and mount these virtual hard drives from Windows Azure. But luckily, IIS7 has this integrated pipeline model which Windows Azure is also using. Among other things, this means that services provided by managed modules (written in .NET) can now be applied to all requests to the server, not just ones handled by ASP.NET! In even other words: you can have some .NET code running in the same request pipeline as the FastCGI process running PHP (or Ruby). Which made me think: it should be possible to create and mount a Windows Azure Drive in a .NET HTTP module and pass the drive letter of this thing to PHP through a server variable. And here’s how...
Note: I’ll start with the implementation part first, the usage part comes after that. If you don’t care about the implementation, scroll down...
Download source code and binaries at http://phpazurecontrib.codeplex.com. …
Maarten continues with detailed “Building the Windows Azure Drive HTTP module” and “Configuring and using Windows Azure Drive” topics.
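Maarten's pattern, a pipeline component that mounts the drive once and hands its letter to the application through a server variable, can be sketched language-neutrally. The real implementation is a .NET HTTP module calling the Windows Azure SDK's drive-mounting API; the WSGI-style Python below only illustrates the request-pipeline idea, and the variable name `AZURE_DRIVE_LETTER` is hypothetical.

```python
def mount_azure_drive():
    # Stand-in for the .NET CloudDrive mount call; returns the drive letter
    # Windows assigned to the mounted VHD. "X:" is an invented value.
    return "X:"

def azure_drive_middleware(app):
    """Runs ahead of the application (FastCGI/PHP in Maarten's setup) and
    exposes the mounted drive's letter through a server variable."""
    drive_letter = mount_azure_drive()  # mount once, reuse for all requests
    def wrapped(environ, start_response):
        environ["AZURE_DRIVE_LETTER"] = drive_letter
        return app(environ, start_response)
    return wrapped

def php_like_app(environ, start_response):
    # The application reads the variable, much as PHP would via $_SERVER.
    body = "Drive mounted at %s" % environ["AZURE_DRIVE_LETTER"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body.encode("utf-8")]
```

Because the module runs in the same integrated pipeline as the FastCGI handler, the PHP (or Ruby) code never needs to know that .NET did the mounting.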
Bruce Guptill and Robert McNeill coauthored Saugatuck Research’s Public-sector SaaS Adoption: Slow and Steady? Or Just Slow? Research Alert of 4/8/2010 (site registration required):
New Saugatuck research and recent interaction with public-sector IT executives, government agency IT buyers and managers, consulting firms, and related entities indicate that the pace of widespread SaaS adoption within most public-sector organizations will be slow when compared to the broader business adoption of SaaS.
Data from Saugatuck’s March 2010 global SaaS survey (see Note 1) indicate a stark 2014 “tipping point” when it comes to new software deployments. Figure 1 presents survey data regarding how user IT and business executives expect to see their purchases of new software shift from on-premise to Cloud-based (i.e., SaaS) through 2014. …
Tony Bailey published PHP on Windows Azure Quickstart Guide - Creating and Deploying PHP Projects on Windows Azure as an open source document on 4/6/2010:
The purpose of this Quickstart Guide is to show PHP developers how to work with Windows Azure. It shows how to develop and test PHP code on a local development machine, and then how to deploy that code to Windows Azure. This material is intended for developers who are already using PHP with an IDE such as Eclipse, so it doesn't cover PHP syntax or details of working with Eclipse, but experienced PHP programmers will see that it is easy to set up Eclipse to work with Windows Azure to support web-based PHP applications.
Download Quickstart Guide here
•• Bruce Johnson’s The Benefits of Windows Azure post of 3/1/2010, which I missed when published, gives a balanced, third-party SOA architect/developer’s view of where Azure and cloud computing offer the greatest ROI:
The age of cloud computing is fast approaching. Or at least that's what the numerous vendors of cloud computing would have you believe. The challenge that you (and all developers) face is to determine just what cloud computing is and how you should take advantage of it. Not to mention whether you even should take advantage of it.
While there is little agreement on exactly what constitutes 'cloud computing', there is a consensus that the technology is a paradigm shift for developers. And like pretty much every paradigm shift there is going to be some hype involved. People will recommend moving immediately to the technology en masse. People will suggest that cloud computing has the ability to solve all that is wrong with your Web site. Not surprisingly, neither of these statements is true.
And, as with many other paradigm shifts, the reality is less impactful and slower to arrive than the hype would have you believe. So before you start down this supposedly obvious ‘path to the future of computing’, it's important to have a good sense of what the gains will be. Let's consider some of the benefits that cloud computing offers. …
Bruce Johnson is a Principal Consultant with ObjectSharp and a 30-year veteran of the computer industry.
Guy Rosen’s State of the Cloud – April 2010 post of 4/9/2010 reports “Overall cloud growth this month is at 3.9% (=58% compounded annual growth rate [CAGR])”:
Welcome to update #10 in the regular State of the Cloud series. This month we’ll continue to examine how many of the world’s top websites are using cloud providers.
Many of the smaller providers have had a weak month, some even showing up less this month in the sample than they did previously. Only Amazon and Rackspace continue to plough ahead with Amazon gaining 6% and Rackspace 3%.
Overall cloud growth this month is at 3.9% (=58% CAGR).
I assume that Windows Azure and Google App Engine are missing from the preceding graphs because they aren’t commonly used to power Web sites. I’ve asked Guy if that’s the case and will update if and when he replies. Update 4/9/2010 1:00 PM PDT from a reply comment by Bret:
They are left out for being PaaS (Platform as a Service) rather than IaaS (Infrastructure as a Service). Guy's methodology is outlined in the first post in the series at: http://www.jackofallclouds.com/2009/07/top-sites-on-amazon-ec2-july-2009/ and August of 2009 was the first time he listed out multiple providers: http://www.jackofallclouds.com/2009/08/state-of-the-cloud-august-2009/.
David Linthicum asserts “Traditional on-premise vendors argue against the use of multitenant architectures, which is a self-serving diversion” in The silly debate over multitenancy post of 4/9/2010 to InfoWorld’s Cloud Computing blog:
Alok Misra hits on good issues around the dispute, or should I say silly dispute, about multitenancy: "There's a debate in the software industry over whether multitenancy is a prerequisite for cloud computing."
Let's get this straight right now. Cloud computing is about sharing resources, and you can't share resources without multitenancy. Even if you have virtualization, I don't consider that alone to be cloud computing. Some multitenancy has to exist within the architecture, allowing resources to be apportioned efficiently. That point of view strikes me, and most of those in the cloud computing space, as logical, but it seems to be lost on those attempting quick migrations to "the cloud" without putting in the work. …
Alok agrees: "I sit firmly in the multitenancy camp. A multitenant architecture is when customers share an app in the cloud, while a single-tenant cloud app is similar, if not identical, to the old hosted model. But compare two subscription-based cloud apps side by side -- with the only difference being that one is multitenant and the other is single-tenant -- and the multitenant option will lower a customer's costs and offer significantly more value over time."
Why is there even a debate? The traditional on-premise vendors, as Alok points out, are moving to the cloud. These vendors are finding that building multitenant architectures is a much bigger nut to crack than they thought. Indeed, I've built three of those architectures in my career, and I can confirm that they are not at all simple to design, build, and operate. …
Alok Misra’s Why Multitenancy Matters in the Cloud InformationWeek::Analytics whitepaper’s abstract reads as follows:
There's a debate in the software industry over whether multitenancy is a prerequisite for cloud computing. Those considering using cloud apps might question if they should care about this debate. But they should care, and here's why: Multitenancy is the most direct path to spending less and getting more from a cloud application.
I sit firmly in the multitenancy camp. A multitenant architecture is when customers share an app in the cloud, while a single-tenant cloud app is similar, if not identical, to the old hosted model. But compare two subscription-based cloud apps side by side--with the only difference being that one is multitenant and the other is single tenant--and the multitenant option will lower a customer's costs and offer significantly more value over time. In fact, the higher the degree of multitenancy (meaning the more a cloud provider's infrastructure and resources are shared), the lower the costs for customers.
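Misra's cost claim comes down to amortization: the more tenants share one provider stack, the thinner each customer's slice of the fixed infrastructure cost. The toy model below illustrates that arithmetic only; the dollar figures and tenant counts are invented for illustration, not drawn from any vendor's pricing.

```python
def per_customer_cost(fixed_infra_cost, variable_cost_per_tenant, tenants):
    # Fixed cost (hardware, ops, upgrades) is split across all tenants;
    # each tenant also carries its own variable cost.
    return fixed_infra_cost / tenants + variable_cost_per_tenant

# Hypothetical numbers: $10,000/month of shared infrastructure, $50/month
# of per-tenant variable cost.
single_tenant = per_customer_cost(10_000, 50, tenants=1)    # old hosted model
multi_tenant = per_customer_cost(10_000, 50, tenants=500)   # shared cloud app

print(single_tenant)  # 10050.0
print(multi_tenant)   # 70.0
```

The gap widens as the degree of sharing rises, which is Misra's point that "the higher the degree of multitenancy ... the lower the costs for customers."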
Capacitas, Ltd.’s A Challenge to the Cloud Computing Business Model post of 4/9/2010 describes the issue of open-ended transaction and data egress/ingress charges:
So everyone’s talking about Cloud now then – it’s the next big thing, apparently. So what is Cloud? It’s a paradigm where much of the low-value back office processing (e.g. SQL, email) and storage is outsourced as a managed service to another company that has lots of high-availability computing power and storage capacity spare. Example providers are Microsoft, Google, Amazon, etc. This leaves the customer able to develop and own their own business logic independently of the commodity computing that sits behind the scenes.
However, there is, as I see it, one key problem for many adopters of this paradigm, depending on the supplier chosen. The example I’m thinking of is transaction-based pricing. Services that use this pricing model (e.g. Microsoft’s SQL Azure service) are charged by the number of transactions in the billing period, whilst storage is charged for the number of gigabytes stored as well as for transactions (see the Microsoft Azure pricing page here).
Whilst these fees are low, they will soon add up for a large enterprise. However, the key issue is that they are open-ended; that is, there is no usage cap. Therefore, if you build an Azure-based web application and the world beats a path to your door (but doesn’t buy much), then you will be left with a considerable bill.
As an example, on the 7th of July 2005 the BBC News website served 5.5TB of data, and the transfer peak was 11Gb/s. This works out to a cost of $825 for data transfers and $1,000 for hit-related transactions (Note: this ignores any web server caching or graphics, etc.). Now obviously $1,825 is a relatively insignificant amount to pay to serve this content to a news-hungry web public. However, the key issue here is that it’s an exceptional, and therefore unbudgeted, peak. While a private sector company can work this way, it is anathema to a public sector or not-for-profit organisation; these guys prefer to plan their budgets meticulously, and often years in advance.
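The BBC figures are easy to check back-of-the-envelope, assuming the early-2010 Azure rates of $0.15 per GB of outbound data transfer and $0.01 per 10,000 storage transactions. The one-billion-hit count below is inferred so as to reproduce the post's $1,000 transaction charge; it is not a BBC statistic.

```python
EGRESS_PER_GB = 0.15          # USD per GB out (North America/Europe rate)
PER_10K_TRANSACTIONS = 0.01   # USD per 10,000 storage transactions

def egress_cost(gigabytes):
    return gigabytes * EGRESS_PER_GB

def transaction_cost(transactions):
    return transactions / 10_000 * PER_10K_TRANSACTIONS

data_served_gb = 5.5 * 1000        # 5.5 TB, using decimal TB -> GB
hits = 1_000_000_000               # inferred from the quoted $1,000

print(egress_cost(data_served_gb))   # 825.0
print(transaction_cost(hits))        # 1000.0
```

The totals match the post's $825 + $1,000; the uncapped, usage-driven shape of the formula is exactly what makes an exceptional traffic peak an unbudgetable line item.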
@Capacitas continues with a similar analysis for the public sector.
Edward A. Pisacreta’s A Checklist for Cloud Computing Deals post of 4/9/2010 to the Law Technology News site begins with this preamble:
Cloud computing has become a technology buzzword. Its definition is elusive, but a working definition could be: A service offered by vendors with large computer server networks to provide infrastructure such as processing capacity, storage for electronic data and records, software as a service or provision of services such as e-mail (see, Open Cloud Manifesto).
The idea, as e-commerce and tech-savvy counsel may know, is to use a multilayered network of servers and computers to provide computing and hosting power when needed -- sort of a front-end and back-office architecture with a backup system, without much of the in-house worries that go with investments in IT infrastructure.
Cloud computing can help e-commerce ventures in a variety of ways, including by allowing expansion of services and support during business peaks, such as holidays, or other seasonal or special shopping times. For expansion to cloud computing where formal contracts, or regulatory, fiduciary or other obligations are involved, e-commerce counsel will be called on to ensure all arrangements are proper and beneficial. More on that below.
THE CRUX OF CLOUD COMPUTING
According to the Open Cloud Manifesto, a consortium that promotes standards for and openness to cloud computing, the practice -- by no means new, but recently rising in prominence and use -- has several components, including:
- The ability of the customer to scale up or down as business needs require;
- Avoidance of overinvestment in infrastructure and unavailability of in-house resources when such resources are needed;
- Reduced start-up costs; and
- Reduced reliance on in-house computer resources.
The National Institute of Standards and Technology (NIST) reports that in cloud computing, the cloud's shared pool of resources "can be rapidly provisioned and released with minimal management effort or service provider interaction" (see, Peter Mell and Tim Grance, "The NIST Definition of Cloud Computing, Version 15").
This article sets forth a number of the questions, and answers, that the parties will need to address and settle in a cloud computing arrangement. …
Edward A. Pisacreta is a partner in the New York Office of Holland & Knight LLP and a member of E-Commerce Law and Strategy's Board of Editors.
Patrick Thibodeau asserts “Lack of standards about data handling and security practices garner more attention as the cloud computing industry expands” in his Frustrations with cloud computing mount post of 4/9/2010 from the SaaSCon conference to Computerworld and InfoWorld’s Cloud Computing blog:
Cloud computing users are shifting their focus from what the cloud offers to what it lacks. What it offers is clear, such as the ability to rapidly scale and provision, but the list of what it is missing seems to be growing by the day.
Cloud computing lacks standards about data handling and security practices, and even whether a vendor has an obligation to tell users whether their data is in the U.S. or not. And the industry is only beginning to sort out these issues through groups, such as the year-old Cloud Security Alliance. …
The cloud computing industry has some of the characteristics of a Wild West boomtown. But the local saloon's name is Frustration. That's the one word that seems to be popping up more and more in discussions, particularly at the SaaScon 2010 conference here this week.
This frustration about the lack of standards grows as cloud-based services take root in enterprises. Take Orbitz, the large travel company with multiple businesses that offer an increasingly broad range of services, such as scheduling golf tee times, and booking concerts and cruises.
As with many firms that have turned to cloud-based services, Orbitz is both a provider and a user of cloud-based software-as-a-service (SaaS) offerings. Ed Bellis, chief information security officer at Orbitz, credits SaaS services, in particular, with enabling the company's growth and allowing it to concentrate on its core competencies.
But in providing SaaS services, Orbitz must address a range of due diligence requirements among customers that are "all across the board," and can vary widely to include on-site audits and data center inspections, he said.
A potential solution is a security data standard being developed by the Cloud Security Alliance that would expose data in a common format and give customers an understanding of exactly "what our security posture is today," said Bellis.
If an agreement can be reached on such a standard "it would be heaven," said Bellis, and would "cut out a third of our internal work on due diligence." But he doesn't know when or if that standard will be reached because of the work it will take to get a large number of users and providers to agree on it. …
K. Scott Morrison’s All Things Considered About Cloud Computing Risks and Challenges post of 4/8/2010 announces a new podcast:
Last month during the RSA show, I met with Rob Westervelt from ITKnowledgeExchange in the Starbucks across from Moscone Center. Rob recorded our discussion about the challenges of security in the cloud and turned this into a podcast. I’m quite pleased with the results. You can pick up a little Miles Davis in the background, the odd note of an espresso being drawn. Alison thinks that I sound very NPR. Having been raised on CBC Radio, I take this as a great compliment.
Jabulani Leffall doesn’t mention the potential conflict of interest involved when members of the Information Systems Audit and Control Association survey themselves (and possibly others) about cloud security risks in a U.S. IT Pros Concerned About Cloud Security article of 4/7/2010 for Redmond Magazine:
Nearly half of U.S. IT professionals canvassed in a survey released today believe that the operational and security risks of cloud computing outweigh its benefits.
That finding comes from the first annual "IT Risk/Reward Barometer" report by ISACA, or Information Systems Audit and Control Association. The ISACA is a trade group consisting of enterprise IT administrators and IT audit specialists.
Despite the hype and enthusiasm surrounding cloud computing, many working in the enterprise space are still wary of adopting the technology, according to the March ISACA survey, which tapped into the opinions of more than 1,800 IT pros.
About 45 percent of respondents said that the security risks of a cloud scenario, at least in the short term, exceed the operational benefits. Only 17 percent were bullish on cloud computing. The remaining 38 percent indicated that they thought the risks were appropriately balanced.
But the report dug deeper. Only 10 percent of respondent organizations plan to use cloud computing for mission-critical IT services. Moreover, one-fourth of the respondents, or 26 percent, do not plan to use the cloud at all. …
ISACA members probably rely on security paranoia to advance their careers.
David Aiken’s New Bid Now Sample on Code Gallery post of 4/8/2010 describes a Windows Azure FireStarter demo:
I’ve just posted the latest version of Bid Now on code gallery at http://code.msdn.microsoft.com/BidNowSample! On code gallery you will find the code, as well as some guidance on how to get Bid Now running on your machine, as well as how you can deploy it to the cloud.
A few days back I presented at the Windows Azure fire starter event here in Redmond. (More on that in a few days when the videos are posted.) This is one of the demos I showed during my talk.
The demo is built on Windows Azure, and uses Windows Azure table and blob storage for data. There are some great things to look at in the code, such as the decoupling of functionality through the use of Windows Azure Queues. It is also a great example of how you can build complex data applications using Windows Azure table storage. Think NoSQL here.
If you look at the homepage you can see there are several “views” of the data. As an example the boots below are shown in both the “Bids ending soon!” and “Hottest” sections. This data is pulled from different Windows Azure tables as all the data has been de-normalized.
Over the next few days I’ll be posting more details on how we built this app, some of the do’s and don’ts as well as how you can use it for your own demos/projects.
THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS
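The denormalization David describes, writing the same auction entity into every table that renders it so each homepage view is a cheap single-table read, can be sketched with plain Python dicts standing in for Azure tables. The table names (`EndingSoon`, `Hottest`) and item payload are hypothetical; the real app uses the Windows Azure Table storage API.

```python
# Dicts standing in for two denormalized Windows Azure tables.
tables = {"EndingSoon": {}, "Hottest": {}}

def publish_auction(item_id, item, views):
    # Write a full copy of the entity into every "table" that displays it,
    # so rendering a view never requires a join across tables.
    for view in views:
        tables[view][item_id] = dict(item)

publish_auction(
    "auction-42",
    {"title": "Boots", "current_bid": 12.50},
    views=["EndingSoon", "Hottest"],
)

# The same item now appears in both views, like the boots shown in both
# the "Bids ending soon!" and "Hottest" sections of the homepage.
print("auction-42" in tables["EndingSoon"])  # True
print("auction-42" in tables["Hottest"])     # True
```

The trade-off is classic NoSQL: writes fan out to multiple copies so that reads stay fast and join-free.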
Oracle Corp. presents an Addressing Attestation and SoD Head-On With Identity & Access Governance Webinar on 5/5/2010 at 10:00 AM PDT:
Take control of your IT and Compliance processes. Attest with confidence, address Segregation of Duties (SoD) upfront and answer the auditors before they ask! Join our experts on this complimentary webcast to ask questions and learn:
• What to expect from Identity & Access Governance Solutions today
• Real-world implementation scenarios, best practices and results
• Oracle’s approach to Identity & Access Governance
Presenters:
• Kevin Kampman, Senior Analyst, Burton Group Identity and Privacy Strategies
• Stuart Lincoln, Vice-President IT P&L Client Services, BNP Paribas North America
• Neil Gandhi, Principal Product Manager, Oracle Identity Management
Lori MacVittie asserts “When co-location meets cloud computing the result is control, consistency, agility, and operational cost savings” in her The Other Hybrid Cloud Architecture post of 4/9/2010:
Generally speaking, when the term “hybrid” is used as an adjective to describe a cloud computing model, it refers to the combination of a local data center with a distinct set of off-premise cloud computing resources. But there’s another way to look at “hybrid” cloud computing models that is certainly as relevant and perhaps makes more sense for adopters of cloud computing for whom there simply is not enough choice and control over infrastructure solutions today.
Cloud computing providers have generally arisen from two markets: pure cloud, i.e. new start-ups who see a business opportunity in providing resources on-demand, and service-providers, i.e. hosting providers who see a business opportunity in expanding their offerings into the resource on-demand space. Terremark is one of the latter, and its ability to draw on its experience in traditional hosting models while combining that with an on-demand compute resource model has led to an interesting hybrid model that can combine the two, providing for continued ROI on existing customer investments in infrastructure while leveraging the more efficient, on-demand resource model for application delivery and scalability.
There are several reasons why a customer would want to continue using infrastructure “X” in a cloud computing environment. IaaS (Infrastructure as a Service) is today primarily about compute resources, not infrastructure resources. This means that applications which have come to rely on specific infrastructure services – monitoring, security, advanced load balancing algorithms, network-side scripting – are not so easily dumped into a cloud computing environment because the infrastructure required to deliver that application the way it needs to be delivered is not available.
This leaves organizations with few options: rewrite the application such that it does not require infrastructure services (not always possible for all infrastructure services) or don’t move the application to the cloud.
In the case of Terremark (and I’m sure others – feel free to chime in) there’s a third option: co-locate the infrastructure and move the application to the cloud. This allows organizations to take advantage of the on-demand compute resources that cloud offers while not compromising the integrity or reliability of the application achieved through integration with infrastructure services.