Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.
Update 4/1/2010: There will be no Windows Azure and Cloud Computing Posts for 4/1/2010+
• Indicates 3/31/2010 items added on 4/1/2010.
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:
- Azure Blob, Table and Queue Services
- SQL Azure Database (SADB)
- AppFabric: Access Control and Service Bus
- Live Windows Azure Apps, APIs, Tools and Test Harnesses
- Windows Azure Infrastructure
- Cloud Security and Governance
- Cloud Computing Events
- Other Cloud Computing Platforms and Services
To use the links above, first click the post’s title to display the post as a single article you can navigate within.
Discuss the book on its WROX P2P Forum.
See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.
Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.
You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:
- Chapter 12: “Managing SQL Azure Accounts and Databases”
- Chapter 13: “Exploiting SQL Azure Database's Relational Features”
HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release in April 2010.
Microsoft Research’s Extreme Computing Group (XCG) AzureScope: Benchmarking and Guidance for Windows Azure project’s Best Practices for Developing on Windows Azure page recommends:
1. Azure tables only index on the partition key and row key.
Queries against these fields will run quickly, but on large tables querying in other ways may be quite slow.
Sample Code: Querying on Azure Table
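To make the indexing point concrete, here is a minimal Python sketch (mine, not AzureScope’s sample code) that builds the OData $filter string the Table Service REST API accepts. A filter that pins both PartitionKey and RowKey is a point lookup against the only index Azure Tables have; pinning neither forces a scan.

```python
def table_query_filter(partition_key, row_key=None):
    """Build an OData $filter string for the Azure Table Service REST API.

    Pinning both PartitionKey and RowKey yields a fast point lookup;
    pinning only the partition scans rows within that partition, and
    filtering on any other property scans the whole table.
    """
    parts = [f"PartitionKey eq '{partition_key}'"]
    if row_key is not None:
        parts.append(f"RowKey eq '{row_key}'")  # point lookup: fastest path
    return " and ".join(parts)

# Fast: indexed point lookup on both keys
fast = table_query_filter("movies-2010", "the-matrix")
# Slower: only the partition is pinned; rows are scanned within it
slower = table_query_filter("movies-2010")
print(fast)
```

The entity and key names above are hypothetical; the $filter grammar is the piece that matters.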
2. Batch multiple small work items into a single queue message.
Queue and read/delete operations are designed for fault tolerance and thus operate with access times similar to blob and XTable operations. Decompose per-node work into chunks with run times longer than 1 second, preferably in the range of 2 to 90 minutes. For many small items, consider batching multiple requests into a single queue entry. Remember that queue entries are limited to 8 KB in size. It may be necessary to store further task information in another data store, such as XTable.
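The batching idea above can be sketched in a few lines of Python (an illustrative sketch, not AzureScope’s code): pack small work items greedily into JSON messages that stay under the 8 KB queue-entry cap.

```python
import json

MAX_MESSAGE_BYTES = 8 * 1024  # Azure queue entries are limited to 8 KB


def batch_work_items(items):
    """Greedily pack small work items into queue messages under the 8 KB cap.

    Returns a list of JSON strings, each intended to become one queue message.
    Items too large to batch would need a pointer into another store
    (e.g. a table or blob), per the recommendation above.
    """
    batches, current = [], []
    for item in items:
        candidate = current + [item]
        if len(json.dumps(candidate).encode("utf-8")) > MAX_MESSAGE_BYTES and current:
            batches.append(json.dumps(current))  # flush the full message
            current = [item]
        else:
            current = candidate
    if current:
        batches.append(json.dumps(current))
    return batches
```

A worker would then dequeue one message and loop over the decoded items, amortizing the per-message queue cost across many small tasks.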
3. Follow the best practices for tuning Azure Tables to get the best performance from them.
Performance of Azure tables can be greatly affected by how they are configured. See this article at MSDN for more details. [Update 4/1/2010: This article (thread) covers a one-year time span. The updated performance tips, dated 3/2/2010, are at the end of the thread.]
4. If performance is the main concern, design a blob storage hierarchy that spreads accesses to the blobs into as many storage partitions as possible.
Note: Designing for the manageability of security policy mechanisms leads one instead to use the fewest possible storage partitions.
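Since blob storage partitions on the full blob name, one common way to spread accesses (sketched below in Python; the naming scheme is my illustration, not a prescribed API) is to prefix each blob name with a short hash bucket, so hot blobs land in different partitions.

```python
import hashlib


def partitioned_blob_name(blob_name, buckets=64):
    """Prefix a blob name with a short hash bucket.

    Blob storage partitions are keyed on the full blob name, so scattering
    names across hash-derived prefixes spreads load across partitions.
    The function is deterministic, so readers can recompute the full name.
    """
    digest = hashlib.md5(blob_name.encode("utf-8")).hexdigest()[:4]
    bucket = int(digest, 16) % buckets
    return f"{bucket:02d}/{blob_name}"
```

The trade-off the note above describes applies directly: a single flat prefix is easier to secure and manage, while hashed prefixes maximize throughput.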
5. Make use of Azure Blob storage for quick data access to worker nodes.
Reading blob data from worker nodes normally comes at throughput rates similar to disk reads (30-40 MB/sec). When copying blob data to a local worker role through asynchronous I/O, we have found that you get the most benefit using a memory cache of around 10MB. Also be sure to distribute blob reads across workers, as requesting too many reads at a time from a single node can cause access contention.
Related Test Cases: Blob Read Throughput
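The ~10 MB cache finding above amounts to copying in fixed-size chunks rather than byte-at-a-time or whole-object reads. A minimal, library-free Python sketch (real worker-role code would issue asynchronous reads against the blob endpoint):

```python
import io

CACHE_BYTES = 10 * 1024 * 1024  # ~10 MB buffer, the sweet spot reported above


def copy_blob_stream(src, dst, chunk_size=CACHE_BYTES):
    """Copy a readable stream to a writable one in ~10 MB chunks.

    Returns the number of bytes copied. In a worker role, `src` would be
    the blob's response stream and `dst` a local file.
    """
    copied = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        copied += len(chunk)
    return copied
```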
6. Replicate data between tables rather than doing a ‘join’ type operation.
In the client this is often preferable as it minimizes calls to the Table Service.
7. Select the right type of blob to use for your application.
Windows Azure provides two different types of blob storage. Block blobs are a series of data blocks of potentially varying size, which are assembled sequentially. Block blobs are optimized for streaming data back to a worker or client. Page blobs represent a range of data that is loaded into chunks of data that fit into pages aligned to 512-byte boundaries. Page blobs are optimized for random read/write scenarios and allow one to write to a range of bytes within the blob. Page blobs are also used as the back end store for Azure Drives.
Storage type and optimal use scenario:
- Block Blob: Data which is streamed back to a worker or client.
- Page Blob: Data accessed frequently by random reads and writes.
- Azure Drive: Data that needs to interact with legacy applications.
8. Use batched updates in XTable.
This may require you to keep track of multiple TableContexts (one per partition) and keep count of entity updates within each TableContext.
9. Use snapshots of Azure data when you only need read access.
Snapshots provide a read-only copy of your Azure blobs and can make concurrent access from many compute nodes much more efficient by eliminating contention for the data among requestors. An added benefit is that snapshots do not count against your storage costs.
10. When creating temporary Azure Tables, create tables with a new name instead of deleting existing tables and reusing the name.
Azure table cleanup can take some time between the request for deleting the table and when the resources for that table are finally cleaned up. During that wait time the name of the table will be unavailable and you will be blocked from creating the table again.
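A simple way to sidestep that deletion lag is to mint a fresh, valid table name each time. The sketch below is illustrative Python; the validity rule encoded in it (alphanumeric, starting with a letter, 3 to 63 characters) matches the table-naming constraints.

```python
import re
import uuid

TABLE_NAME_RE = re.compile(r"[A-Za-z][A-Za-z0-9]{2,62}")


def temp_table_name(prefix="Temp"):
    """Generate a fresh Azure-table-valid name instead of reusing one.

    Appending a UUID ensures the name never collides with a table that is
    still being garbage-collected after deletion.
    """
    name = f"{prefix}{uuid.uuid4().hex}"[:63]
    if not TABLE_NAME_RE.fullmatch(name):
        raise ValueError(f"prefix {prefix!r} yields an invalid table name")
    return name
```

Old temporary tables can then be deleted lazily in the background, since nothing ever waits on their names becoming available again.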
11. When writing to Blobs try to reuse equivalent blocks already stored in the cloud.
When writing a chunk of data to a blob, if you know that the data is already stored in the cloud it is much more efficient to copy that data from its cloud source than to upload it from a client. This case can frequently occur when copying data to a different location in the cloud or when modifying only a portion of the data inside the blob.
• Eugenio Pace recommends Windows Azure Guidance – Replacing the data tier with Azure Table Storage in this 3/31/2010 post:
This new release focuses primarily on replacing the data tier with Azure Table Storage. To make things more interesting, we changed the data model in a-Expense so it now requires two different related entities: the expense report “header” (or master) and the details (or line items) associated with it.
The implementation of this on SQL Azure is trivial and nothing out of the ordinary. We’ve been doing this kind of thing for years. However when using Azure Table Storage this small modification triggers a number of questions:
- Will each of these entities correspond to its own “table”?
- Given that Azure Tables don’t really require a “fixed schema”, should we just store everything in a single table?
#1 is the more intuitive solution if you come from a “relational” mindset. However, the biggest consideration here is that Azure Table Storage doesn’t support transactions across different tables.
Eugenio continues with a discussion of issues that arise when moving from SQL Azure to Windows Azure tables and how to overcome the problems.
Sachin Prakash Sancheti’s Table Storage Vs. SQL Azure post of 3/31/2010 to the Infosys Microsoft Alliance and Solutions blog begins:
When making application design decisions on Azure, most of us get into this discussion at some point: “Table Storage or SQL Azure?”
I thought of listing the pros and cons of both storage options. The following comparison might help drive the decision.
List the expectations for the desired storage from the application’s point of view, and compare those requirements against the following table.
Sachin continues with a detailed table.
Microsoft Research’s Extreme Computing Group (XCG) AzureScope: Benchmarking and Guidance for Windows Azure project’s Best Practices for Developing on Windows Azure page recommends:
1. Do not worry too much about planning for worst case performance behavior in SQL Azure.
In testing performance we found that performance over time was consistent, although there are rare occasions (less than 1% occurrence) where performance degraded significantly. Therefore, we have seen no reason to provision assuming a worst-case behavior in SQL Azure that is significantly worse than average-case behavior.
2. Expect a single Azure-based client accessing SQL Azure to take about twice as long as a single local-enterprise client accessing SQL Server within the enterprise (commodity hardware).
If speed is a chief concern and the number of concurrent clients is expected to be small, a local deployment (with local clients) will deliver better performance. The opaque (and potentially changing) nature of the cloud prevents us from determining exactly why there was a 2X slowdown.
3. Keep in mind the concurrency limits in SQL Azure.
We found that the throughput of a single large database instance (10 GB maximum size) peaks at 30 concurrent clients running continuous transactions, tested using an OLTP-style TPC-E database benchmark. (The corresponding peak was seen at 6 concurrent clients in our LAN testing.) At that point we experienced a 50% transaction failure rate, and the average completion time for transactions that did complete was 65% longer than that of a single client. For 8 concurrent clients we found a more reasonable 15% increase in transaction completion time.
Stephen Walther explains Using jQuery and OData to Insert a Database Record on 3/30/2010:
In my previous blog entry, I explored two ways of inserting a database record using jQuery. We added a new Movie to the Movie database table by using a generic handler and by using a WCF service. In this blog entry, I want to take a brief look at how you can insert a database record using OData. …
When using the .NET framework, you can easily expose database data through the OData protocol by creating a WCF Data Service. In this blog entry, I will create a WCF Data Service that exposes the Movie database table. …
Marcelo Lopez Ruiz’s Sorting OData feeds by their title post of 3/30/2010 begins:
Let’s say you’re browsing the Netflix OData catalog, and you’d like to sort the feed entries by their title. You type http://odata.netflix.com/Catalog/Genres?$orderby=title into your browser, but you get this error message back.
What went wrong? If you look at the payload for all genres, you’ll see the title there. But that’s the title ATOM element; if you want to refer to a property of genres, you need to use the property name from the model. …
So if you run http://odata.netflix.com/Catalog/Genres?$orderby=Name, the response is successful.
Of course you don't have to look at data to figure this out. If you look at the metadata for the service, exposed at http://odata.netflix.com/Catalog/$metadata, you'll see that the mapping is explicitly defined. …
• Pop Catalin started an Implement OData API for StackOverflow Meta StackOverflow question thread on 3/25/2010 with several comments and six answers. See the Scott Hanselman explains Creating an OData API for StackOverflow including XML and JSON in 30 minutes post in Windows Azure and Cloud Computing Posts for 3/29/2010+.
• Nileesha Bojjawar demonstrates using Azure SQL with Spring Hibernate in this 3/25/2010 post to the EMC Consulting blogs:
I tried to set up an application to use SQL Azure as the database for a Spring based Web application.
PS: Remember to check that your network has access to the SQL Azure server on port 1433. Some corporate networks block this completely or may allow it only to servers in one particular region (e.g., North Europe), in which case you should create the SQL Azure server in that region.
First, I had to install SQL Server locally to create a dev database. I wanted to check how the SQL Azure Sync Framework works.
For this example, I have created a simple table in my local SQL Server Express called “UserTransaction” with just id and name, id being the primary key.
Nileesha continues with an illustrated tutorial and tips for handling login errors.
The Windows Azure AppFabric Team announces AppFabric LABS Now Available:
AppFabric Labs is a place for us to preview cool new AppFabric technologies for interested customers. We're very excited about the features we put up on Labs, so we want to get your feedback on them as soon as possible. Labs is a developer preview, not a full commercial release, so there is no formal support or SLA in place.
To see what features are currently available in AppFabric Labs, please review the release notes for our current Labs SDK.
From the release notes file:
The AppFabric Labs environment will be used to showcase some early bits and get feedback from the community. Usage of this environment will not be billed.
In this release of the LABS environment, we’re shipping two features:
- Silverlight support: we’ve added the ability for Silverlight clients to make cross-domain calls to the Service Bus and Access Control Services.
- Multicast with Message Buffers: we’ve added the ability for Message Buffers to attach to a multicast group. A message sent to the multicast group is delivered to every Message Buffer that is attached to it.
To get started with Labs:
- Go to https://portal.appfabriclabs.com/,
- Sign up using your Live ID,
- Create your Labs project and namespace, and
- Download Labs samples from here to learn more about these new features.
To provide feedback and to learn what the community is building using the Labs technology, please visit the Windows Azure platform AppFabric forum and connect with the AppFabric team.
Running AppFabric V1 Samples Against the Labs Environment
To run the AppFabric V1 SDK samples against the Labs environment, you'll have to rename the ServiceBus.config.txt file found at the AppFabric Labs SDK page to ServiceBus.config and place it in your .NET Framework CONFIG directory. The CONFIG directory is located at:
C:\Windows\Microsoft.NET\Framework\v2.0.50727\CONFIG on x86 systems, and
C:\Windows\Microsoft.NET\Framework64\v2.0.50727\CONFIG on x64 systems.
The AppFabric V1 SDKs Windows Azure sample will not work against the Labs environment. This is because to run V1 samples against Labs you need to place the ServiceBus.config file in your .NET Framework CONFIG directory and Windows Azure doesn't allow that.
When uploading CrossDomain.XML to the Service Bus root, please leave out the <!DOCTYPE> schema declaration.
Silverlight clients convert every Service Bus operation error to a generic HTTP NotFound error.
Microsoft Research’s Extreme Computing Group (XCG) AzureScope: Benchmarking and Guidance for Windows Azure project’s Best Practices for Developing on Windows Azure page recommends:
1. Increasing the size of the VM will also increase network throughput.
Our tests show that a ‘large’ VM has roughly twice the network throughput of a ‘small’ VM. Large VMs are also very good for computation that utilizes multiple processing cores.
2. Use node-to-node communication to save on message latency costs.
Early versions of Azure required that all communication between nodes pass through one of the data storage mechanisms provided by Azure. This was an extremely robust form of communication, but as a result it was not fast enough for the expectations of many application developers. Azure now provides direct network communication between nodes, which allows much faster message passing but does so at the cost of reliability. If a node fails, in-flight requests are not durable and will be lost, possibly requiring the computation to restart.
Related Test Cases: Ping Pong
Testing & Development
1. When debugging an Azure application, ‘heartbeats’ can provide valuable information.
Using a controller worker role to track a heartbeat signal from other worker roles can provide useful tracking information when testing a distributed application. In general it is useful to keep the heartbeat reporting back to the controller at an interval of 10 seconds or less.
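A heartbeat sender along the lines described above can be sketched in a few lines of Python (an illustrative sketch; in a real deployment `report` would write a status row or queue message that the controller role polls):

```python
import threading


def start_heartbeat(report, interval=10.0):
    """Start a daemon thread that calls report() every `interval` seconds.

    The guidance above suggests an interval of 10 seconds or less.
    Returns a threading.Event; set it to stop the heartbeat.
    """
    stop = threading.Event()

    def beat():
        # Event.wait returns False on timeout, True once stop is set,
        # so this loop fires report() once per interval until stopped.
        while not stop.wait(interval):
            report()

    threading.Thread(target=beat, daemon=True).start()
    return stop
```

The controller side would then flag any worker whose last heartbeat is older than a couple of intervals.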
2. Include retry logic in applications in all instances where you are attempting to access data from SQL Azure or Azure Storage.
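A generic retry wrapper of the kind recommended above might look like this (my sketch, not a prescribed Azure API): exponential backoff with jitter around any zero-argument operation that can fail transiently.

```python
import random
import time


def with_retries(op, attempts=5, base_delay=0.5):
    """Call op(), retrying transient failures with exponential backoff.

    Delay grows as base_delay * 2**attempt, randomized to avoid many
    clients retrying in lockstep. The final failure is re-raised.
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```

In practice you would catch only the exception types your storage or SQL client raises for transient faults, rather than bare Exception.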
3. Remember that Azure Queues limit dequeued items to a two-hour processing window.
All work on an item removed from the queue must be completed within the two-hour window. If not, the item will be returned to the queue and your application may enter an infinite processing loop.
4. Use multiple worker nodes to add work items to the task queue.
Queue write throughput is approximately 20 items per second for one worker. Most applications creating a large number of jobs will create work items much faster than this. Distributing the placement of the items on the queue will speed this process up.
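Distributing enqueues across writers, as suggested above, can be sketched with a thread pool (illustrative Python; enqueueing is I/O-bound, so threads parallelize it well despite the GIL):

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_enqueue(items, enqueue, workers=8):
    """Fan enqueue calls out over several concurrent writers.

    A single writer tops out at roughly 20 enqueues/sec against the queue
    service, so N workers raise aggregate throughput toward N * 20/sec.
    `enqueue` is any callable taking one item (e.g. a queue client's put).
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # list() forces completion and surfaces any worker exceptions.
        list(pool.map(enqueue, items))
```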
• The Windows Azure Team continues its case-study interview series with Real World Windows Azure: Interview with Andy Lapin, Director of Enterprise Architecture at Kelley Blue Book of 4/1/2010:
As part of the Real World Windows Azure series, we talked to Andy Lapin, Director of Enterprise Architecture at Kelley Blue Book about using the Windows Azure platform to create the company's premier Web site and the benefits that Windows Azure provides. Find out what he had to say:
MSDN: What services does Kelley Blue Book provide?
[Kelley Blue Book]: Kelley Blue Book is an automotive information and value exchange service that gives consumers, automotive dealers, government, and companies in the finance and insurance industries information they need to facilitate vehicle transactions.
MSDN: What was the biggest challenge your company faced prior to implementing Windows Azure?
[Kelley Blue Book]: Our Web site, www.kbb.com, which is built on the Microsoft .NET Framework, has more than 14 million visitors each month. To support traffic, we have two physical data centers where we rent server space. We can handle peak traffic, but we were also paying for underutilized computing resources during non-peak periods. Plus, we need the ability to scale up quickly. For example, when "Cash for Clunkers" was introduced, it was time-consuming for us to manage the physical servers.
MSDN: Describe the solution you built with Windows Azure to help you scale up quickly and cost-effectively?
[Kelley Blue Book]: Because our developers are familiar with .NET-based code, they were able to essentially migrate our site to the Windows Azure platform. We moved all 27 of our Web servers and nine instances of Microsoft SQL Server data management software to the cloud. We're also using Windows Azure Blob Storage and Windows Azure Tables to ensure data persistence. …
Read the full story (case study) here.
• Sarim Khan’s Windows Azure Helps Startup sharpcloud Expand its Business post of 3/31/2010 to The Official Microsoft Blog begins:
Most businesses today use static spreadsheets and presentations that don’t allow for effective collaboration during planning processes. Yet in their personal lives people communicate and collaborate constantly, using dynamic social-networking tools such as Facebook.
Collaboration and real-time communication is critical to business success in today’s dynamic environment. That’s why I co-founded sharpcloud, a startup based in Guildford, England: To apply the principles of social networking to long-term business planning.
Our service combines a highly visual look and feel with common social-networking components to develop, define, and communicate long-term business road maps and strategy. sharpcloud is the first service of its kind to include visualization and social-networking as central ideas in automating strategic planning and road-mapping.
For many reasons it makes sense for sharpcloud to be a cloud-computing application. In particular, a cloud application helps us support the global companies that are our prime target market. The cloud also offers us the ability to quickly and easily scale and deploy to meet the needs of our customers. Finally, the cloud allows us to think and focus on improving our service, without losing development time and effort in managing server infrastructure.
We initially started sharpcloud using Amazon Web Services, but that required us to focus time on maintaining the Amazon cloud-based servers. So to get the results we wanted, we switched to a solution built with Windows Azure. Windows Azure allows us to develop a cost-effective solution and go to market quickly because Microsoft manages the underlying infrastructure for us. We already were creating an application based on the Microsoft .NET Framework and the Microsoft Silverlight browser plug-in, so we only had to rewrite a small amount of code to ensure that our service and storage would communicate properly with the Azure API. …
Sarim is sharpcloud’s CEO and CTO. Read the full story here.
For the short term (until we sell one) there are three cars in my household: a manual transmission, an automatic, and a CVT (continuously variable transmission). This makes me uniquely qualified to write about Cloud Computing.
That’s because Cloud Computing is yet another area in which the manual/automatic transmission analogy can be put to good use. We can even stretch it to a 4-layer analogy (now that’s elasticity):
Manual transmission
That’s traditional IT. Scaling up or down is done manually, by a skilled operator. It’s usually not rocket science, but it takes practice to do it well. At least if you want it to be reliable, smooth and mostly unnoticed by the passengers.
Manumatic transmission (a.k.a. Tiptronic)
The driver still decides when to shift up or down, but only gives the command; the actual process of shifting is automated. This is how many Cloud-hosted applications work: the scale-up/down action is automated but still contingent on being triggered by an administrator. This is what most IaaS-deployed apps should probably aspire to at this point in time, despite the glossy brochures about everything being entirely automated.
Automatic transmission
That’s when the scale-up/down process is not just automated in its execution but also triggered automatically, based on some metrics (e.g. load, response time) and some policies. The scenario described in the aforementioned glossy brochures.
Continuously variable transmission
That’s when the notion of discrete gears goes away. You don’t think in terms of what gear you’re in but how much torque you want. On the IT side, you’re in PaaS territory. You don’t measure the number of servers, but rather a continuously variable application capacity metric. At least in theory (many PaaS implementations betray the underlying work, e.g. via a spike in application response time when the app is not-so-transparently deployed to a new node).
OK, that’s the analogy. There are many more of the same kind. Would you like to hear how hybrid Cloud deployments (private+public) are like hybrid cars (gas+electric)? How virtualization is like carpooling (including how you can also be inconvenienced by the BO of a co-hosted VM)? Do you want to know why painting flames on the side of your servers doesn’t make them go faster?
Driving and IT management have a lot in common, including bringing out the foul-mouth in us when things go wrong.
Kaleidoscope’s EntLib for Windows Azure post of 3/29/2010 reports:
Enterprise Library, popularly known as EntLib, is a collection of Application Blocks targeted at managing oft-needed, redundant tasks in enterprise development, such as Logging, Caching, Validation and Cryptography.
EntLib currently exposes 9 application blocks:
- Caching Application Block
- Cryptography Application Block
- Data Access Application Block
- Exception Handling Application Block
- Logging Application Block
- Policy Injection Application Block
- Security Application Block
- Validation Application Block
- Unity Dependency Injection and Interception Mechanism
Now that the honeymoon period of PoCs and tryouts is over and Azure has started to mainstream, and more precisely started to go “Enterprise,” Azure developers have been demanding EntLib for Azure.
The demands seem to have finally been heard, and the powers that be have bestowed us with the current beta release, EntLib 5.0, which supports Windows Azure.
The application blocks tailored for Azure are:
- Data Access Application Block (Think SQL Azure)
- Exception Handling Application Block (Windows Azure Diagnostics)
- Logging Application Block (Windows Azure Diagnostics)
- Validation Application Block
- Unity Dependency Injection Mechanism
The EntLib 5.0 beta is now available for download.
… Archetype is a rich internet application (RIA) development and design shop. They provide both products and development/design services using Silverlight and Flash with .NET and SQL Server back-ends.
Archetype saw the trend around video and media related solutions moving to the cloud and took the initiative to build out their end-to-end solution. They had a couple of projects where they had to be able to handle large scale very quickly. In general they had trouble scaling up quickly and then back down once the project was over. Windows Azure gave them that capability in both their services and product businesses.
One of their products is a media content management system called Archetype Media Platform (AMP) that allows enterprises to control all their media assets. Danny and Luigi spent some time here showing off some of the video content management and editing capabilities of the AMP solution running on Windows Azure.
Towards the end of the video Luigi shares some of his experiences porting to Windows Azure (first cut working in less than a week). The solution architecture use web roles for the front end and web services as well as worker roles for various activities in the background (e.g. encoding or analyzing media). SQL Azure is used for content metadata and Windows Azure blob storage for the video files.
Steven Nagy’s Windows Azure Development Deep Dive: Working With Configuration post of 3/14/2010 to The Code Project begins:
… One of the things you have to consider in any application is configuration. In Windows and Web Forms we have *.config files to help configure our application prior to start. They are a useful place to store things like provider configuration, IoC container configuration, connection strings, service endpoints, etc. Let’s face it: we use configuration files a lot.
In this article I will discuss the different types of configuration available to you in Windows Azure, how they can be leveraged in your application, and how configuration items can be changed at runtime without causing your application roles to restart.
The Problems With Configuration in the Cloud
In Windows Azure applications, configuration can work exactly the same as in standard .NET applications. If you have a web role, then you have a web.config. And if you have a worker role, you get an app.config. This allows you to provide configuration information to your role when it starts.
But what about configuration values you want to change after your app is deployed and running? It certainly is a lot harder to get in and change a few angle brackets in your web.config after it is deployed to production in the cloud. Do you really want to have to upload a whole new version of the app package with the new web.config file in it?
Or what about being able to change configuration aspects of all your running instances in one go, and not having to stop them from running to do so? Why should a configuration change necessitate a restart, such as is needed with web.config and app.config files?
In Windows Azure we have a new method of configuring our roles that gives us flexibility and consistency in our applications. …
Microsoft Research’s Extreme Computing Group (XCG) hosts a live AzureScope: Benchmarking and Guidance for Windows Azure project. From the Home page:
Welcome to XCG’s AzureScope site.
The purpose of this site is to present the results of regularly running benchmarks on the Windows Azure platform. These benchmarks are intended to assist you in architecting and deploying your research applications and services on Azure. Included in the benchmark suite are tests of data throughput rates, response times, and capacity. Each benchmark is run against a variety of test cases designed to reflect common use scenarios for Azure development.
A series of micro-benchmarks measure the basic operations in Azure such as reading and writing to the various data stores the platform provides and sending communications from one node to another. In addition to these, a set of macro-benchmarks are also provided that illustrate performance for common end-to-end application patterns. We provide the source code for each, which you can use as examples for your development efforts.
Each benchmark test case can be configured to test a wide variety of parameters, such as varying data blob sizes or the number of nodes running a particular job. The benchmarks are run regularly across our various data centers and the results constantly updated here. Further details on what select benchmarks measure and how they are run can be found by following the links to the individual benchmark test case. In addition, for each benchmark we provide our analysis of what the results mean and how this could impact your development against the Windows Azure Platform. If you have feedback or wish to see additional benchmarks included in this suite, please send email to firstname.lastname@example.org
Please Note: AzureScope is in the final stages of preparation for public viewing and the results should appear here soon.
Learn more about Microsoft Research’s Azure Research Engagement.
• tbtechnet asks and answers How Can Companies that Provide Hosted Services Leverage Windows Azure? in this 3/31/2010 post to the Windows Azure Platform, Web Hosting and Web Services blog:
When hosters partner with the Windows Azure Platform, they have access to three primary elements that allow developers to speed their time to market and enhance scalability.
- Windows Azure operating system as an online service.
- Microsoft SQL Azure as a fully relational cloud database solution.
- Windows Azure platform AppFabric as a way to connect cloud services and on-premises applications.
Hosters can develop against, integrate with, and exploit the flexibility and scalability of Windows Azure Platform cloud hosting solutions while running and maintaining applications within their own data center or network infrastructure.
He continues with brief expansions of these topics:
- Hosters Can Deploy New Offerings Quickly
- Hosters Can Build Tools
- Hosters Can Package Platform-based Services
• Robert MacNeill reports Cloud Computing Tsunami On The Horizon: Service Providers Strategize On Concerns in this three-page 3/31/2010 Saugatuck Research Report (site registration required):
Moving to cloud computing and SaaS will not be smooth sailing for IT services providers. Consider the following: by year-end 2012, 75 percent or more of SMBs, large enterprises and public-sector organizations will be using one or more Cloud IT instances to enable and support ongoing business operations, and by year-end 2014, 40 percent or more of NEW enterprise IT investment spend will be Cloud-based (see Strategic Research Report SSR-706, Lead, Follow, and Get Out of the Way: A Working Model for Cloud IT through 2014, published 25 Feb. 2010).
Through multiple inquiries, briefings and consulting projects that Saugatuck has engaged in over the past six months, this Research Alert highlights three of ten key cloud computing concerns and trends that are forcing service providers to rethink and strategize around for the future – a market segment that has the potential to contract in size as the customer opportunity shifts and evolves. …
• Vinnie Mirchandani’s Private (Cloud) Phantasies post of 3/31/2010 to the Enterprise Irregulars blog debunks the economic benefits of private clouds:
I am hearing plenty of conversations around private clouds. The basic theme is “we will virtualize our processing and storage and get many of the benefits of public clouds”. And, of course, “we will have none of the security and service level issues with public clouds.”
Incumbent application vendors encourage that thinking as a way of deflecting attention from their own bloated piece of the budget. Outsourcers see that as a way to sell VM services. Life goes on.
But here is what clients are signing up for with private clouds:
Few tax or energy or other scale efficiencies
In my upcoming book, Mike Manos, who helped Microsoft build out its Azure cloud data centers, says:
“Thirty-five to 70 individual factors, like proximity to power and local talent pool, are considered (as location factors) for most centers. But three factors typically dominate: energy, telecommunications, and taxes. In the past, cost of the physical location was a primary consideration in data center design.”
So the locations that [A]mazon, Google, Yahoo! and other cloud vendors have chosen for their data centers reflect aggressive tax and telecommunication negotiations. Their global networks of data centers also allow them to do what Mike calls “follow the moon” – periodically scouring locations for the cheapest places to run energy-intensive computing.
Additionally, these new data centers have massive machine-to-man efficiency ratios – 3,000 to 5,000 servers for every data center staff member.
Few clients will get any of these efficiencies from their own data center or even from their “private cloud” outsourcing or hosting provider.
Little AM leverage
The last significant productivity gain in application management most companies saw came through offshore labor arbitrage. Companies have gradually seen that dwindle with wage inflation, younger offshore staff and the related turnover. And the irony is that even in massive campuses in Bangalore and elsewhere, there is no real management “scale”. If you walk into one of those buildings, you see fortified floors that cordon off each customer’s outsourced team. There is very little sharing of staff or tasks across clients. Compared to cloud AM models, that is hugely “unshared” and inefficient.
Persistence of broken application software maintenance and BPO models…
Murray Gordon announces VM Support in Windows Azure in this 3/29/2010 post to Microsoft’s ISV Community blog:
Many of my customers and partners have raised questions around Windows Azure and the pilot program that Microsoft announced with Amazon Web Services.
While Windows Azure is not participating in this pilot, Microsoft is actively monitoring the feedback and will leverage the results of this pilot to shape its future offerings.
Microsoft has committed to enabling customers to purchase Windows Azure through a combination of existing and new licensing agreements. Additionally, Microsoft will ensure that it supports the centralized, seamless and consistent purchasing experience that existing multi-year commercial customers enjoy. Microsoft will have more specific details on this process later in the year.
Below you will find the FAQ for questions related to the pilot program. If you have additional questions, please feel free to comment on this post; I can answer those individually.
1. Will Windows Azure offer VM support?
Yes, Microsoft will add Virtual Machine functionality to Windows Azure to expand the set of existing applications that can run on it. This Virtual Machine deployment functionality will enable developers to run a wide range of Windows applications in Windows Azure while taking full advantage of the built-in automated service management.
2. What is the pricing for this proposed VM functionality in Windows Azure?
We are not announcing pricing for the proposed Windows Azure VM functionality right now. However, this pricing will be consistent with our current Windows Azure pricing model.
3. How does this proposed VM functionality in Windows Azure differ from Amazon hosting Windows Server VMs?
While Windows Azure is a cloud service that uses (and charges via) computation resources that are analogous to physical computers, it differs in important ways from platforms such as AWS that offer VMs on demand. With a purely VM-based platform, the situation is much like hosting: You bear full responsibility for configuring and managing the VMs and the software they contain. With the proposed VM functionality in Windows Azure, while developers have the flexibility to customize the Windows Azure VM and incorporate it in service models, the platform itself takes care of everything else.
4. When will Windows Azure offer VM capability/support?
We are still engaged in planning and prioritization of additional functionality in Windows Azure based on customer feedback. As we announced at PDC, we will enable customers to migrate existing Windows Server applications through the managed virtual machine (VM) functionality in 2010.
5. Will Windows Azure enable similar Windows Server-license mobility in the future?
The Windows Azure team regularly evaluates new licensing models that could better serve customer needs. We look forward to customer and partner feedback on the Windows Server License Mobility pilot. We will take this feedback into consideration as we structure future licensing models.
6. When will Windows Azure be available in the Enterprise customer programs like the Enterprise Agreement & Select?
In the future, Microsoft will provide the ability for Windows Azure licensing agreements to be integrated into Enterprise customer programs such as Enterprise Agreement and Select. We will provide specifics about the licensing model and pricing details in calendar year 2010.
7. When will Windows Azure platform volume licensing pricing details be available?
We will provide volume licensing details in calendar year 2010. We don’t have specifics to share at this time.
Murray continues with a copy of the original Amazon email.
Mary Jo Foley covers the same topic in her Microsoft: Yes, customers, we will have VM support for Windows Azure post of 3/31/2010.
• Kathleen Lau describes Cloud security's seven deadly sins in this 3/31/2010 article for Computerworld Canada:
A security expert warns organizations making a foray into cloud computing may know familiar terms like multi-tenancy and virtualization, but that doesn't mean they understand everything about putting applications in the cloud.
In the world of cloud computing, those technologies are thrown together to create a new class of applications with their own unique set of governance rules, said Jim Reavis, executive director with the Cloud Security Alliance (CSA).
"This is a new epoch in computing," said Reavis. Even if it all sounds familiar, digging a little deeper will uncover a whole set of new risks.
Organizations will often adopt cloud computing at a much faster rate than security professionals are comfortable with, said Reavis. A pragmatic approach is necessary. "Take a risk-based approach to understanding the real risks and the mitigating practices we can leverage to securely adopt the cloud," he said.
CSA, in collaboration with Palo Alto, Calif.-based Hewlett-Packard Co., listed what they called the seven deadly sins of cloud security. The research is based on input from security experts across 29 enterprises, technology providers and consulting firms.
Following are abbreviated versions of the seven deadly sins:
- Data Loss/Leakage …
- Shared Technology Vulnerabilities …
- Malicious Insiders …
- Account, Service and Traffic Hijacking …
- Insecure Application Programming Interfaces …
- Abuse and Nefarious Use of Cloud Computing …
- Unknown Risk Profile …
Archie Reed, chief technology officer for cloud security with Palo Alto, Calif.-based Hewlett-Packard, is careful to note that the list of seven deadly sins in cloud security is not all-encompassing, but high-level. "It should guide your approach, not define it," said Reed. …
• Jonathan Siegel’s User Ignorance Causes Cloud Security Leak; Accounts, Passwords Revealed post of 3/31/2010 reports:
At 1:00 a.m. on Sunday morning I was doing routine maintenance on my personal Amazon Web Services account and instead found myself looking at something I had no right to be seeing: A database with 800,000 user accounts to the e-card site CardMaster.com. Along with that were the database passwords and back end of a major U.S. Public Broadcasting Service news show website (Gwen Ifill's Washington Week), including daily updates from panelists on the stories they cover.
I wish I wasn't the person to find this. I founded one of Amazon's earliest dashboards. My consultancy is on Amazon's European Customer Advisory Board. But this highlights a significant issue in the cloud today: There is a whole new user profile acting as developer and administrator. We are becoming empowered with amazing tools - and being given enough rope to really hang ourselves.
Jonathan continues with more details of his discovery and its ramifications.
Doug Barney reports in his Third-Party Report: Lieberman Software article for the Redmond Report newsletter and post to Barney’s Blog of 3/31/2010:
Lieberman Software, headed by super smart Phil Lieberman, has long been in the Windows admin market. Now Phil is eying the cloud with Enterprise Random Password Manager, which now brings its identity management features to cloud providers.
According to Lieberman (the company and the man), IT interest in the cloud is growing, but so are fears that data will be stolen or spied upon.
We at Redmond are working a cloud security story, so a recent e-mail exchange with Phil was timed perfectly. Here's the gist of Phil's thoughts:
"The entire nature of how insecure the cloud is, and how cloud vendors are not taking ownership of or providing services for cloud security, is a big story that the cloud vendors don't want exposed. Any auditor that allows critical information to reside on these cloud platforms without being able to fully audit the access and security is simply not doing their job. Or the auditor tells the client that cloud adoption is a mistake and the client moves forward anyway; some companies have better management and direction than others.
Unfortunately, the auditors may find their client companies jumping into the pool (cloud cesspool), committing their companies 100 percent between audit cycles, then having to give these companies the bad news that their 'findings' show they did something really risky and stupid just to save a few bucks.
Very few companies are doing their due diligence about cloud security. The cloud vendors are telling us they have no interest in implementing security until customers demand it. It is going to get ugly."
Doug is Editor in Chief, Redmond Magazine. See Phil Lieberman’s The Cloud Challenge: Security essay of 3/29/2010 from Windows Azure and Cloud Computing Posts for 3/29/2010+.
• ebizQ will present Cloud QCamp [on] April 7, 2010:
ebizQ has been covering Cloud Computing for the last three years. Last June, ebizQ organized a Cloud QCamp virtual conference, where leading industry experts and practitioners explored the role of service-oriented architecture (SOA) and business process management (BPM) in supporting cloud-computing initiatives. This April, ebizQ will help enterprises cut through the hype and focus on issues surrounding cloud computing, covering Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). This year's QCamp will also focus on the development of Private Clouds in Enterprises.
Some of the topics and issues we will be focusing on are:
- Virtualization and Cloud Computing interconnection
- Platform lock-in issues with Cloud providers
- Application development for the Cloud
- Migration of existing applications to the Cloud
- The economics of Cloud Computing
- Best practices in moving services, processes and data to the Cloud
Cloud QCamp will be an informative and educational program for senior IT Architects, Development Managers, Data Center Managers and CIOs to take advantage of private and public cloud technologies within and external to their enterprises. …
Speakers include Joe McKendrick, David Linthicum, and Phil Wainewright. Register to attend here.
• SGEntrepreneurs announces MSDN Presents Windows Azure Platform on 13 Apr at Microsoft Auditorium, Level 21, NTUC Centre, One Marina Boulevard, Microsoft Singapore:
9.00am: Registration + Breakfast
10.00am: The Windows Azure Platform: A Perspective
11.30am: The Windows Azure Platform: A Technical Drill Down
2.00pm: Visual Studio 2010: A Perspective
3.00pm: A Partner’s View on the Microsoft Cloud: Accenture and Avanade
4.15pm: A Partner’s View on the Microsoft Cloud: NCS
We’re about to start the process for the next Magic Quadrant for Cloud Infrastructure Services and Web Hosting, along with the Critical Capabilities for Cloud Infrastructure Services (titles tentative and very much subject to change). Our hope is to publish in late July. These documents are typically a multi-month ordeal of vendor cat-herding; the evaluations themselves tend to be pretty quick, but getting all the briefings scheduled, references called, and paperwork done tends to eat up an inordinate amount of time. (This time, I’ve begged one of our admin assistants for help.)
What’s the difference? The MQ positions vendors in an overall broad market. CC, on the other hand, rates individual vendor products on how well they meet the requirements for a set of defined use cases. You get use-case by use-case ratings, which means that this year we’ll be doing things like “how well do these specific self-managed cloud offerings support a particular type of test-and-development environment need”. The MQ tends to favor vendors who do a broad set of things well; a CC rating, on the other hand, is essentially a narrow, specific evaluation based on specific requirements, and a product’s current ability to meet those needs (and therefore tends to favor vendors that have great product features).
Also, we’ve decided the CC note is going to be strictly focused on self-managed cloud — Amazon EC2 and its competitors, Terremark Enterprise Cloud and its competitors, and so on. This is a fairly pure features-and-functionality thing, in other words.
Anyone thinking about participation should check out my past posts on Magic Quadrants.
CloudTweaks reports in its IBM to provide start-ups cloud-computing technology post of 3/31/2010 that Big Blue is stepping up to compete with Microsoft’s BizSpark on the cloud-computing front:
Information technology and consulting multinational IBM Wednesday unveiled a global entrepreneur initiative to provide start-up companies industry-specific cloud-computing technologies.
‘The initiative will provide start-ups free access to cloud-computing technology to capture emerging business opportunities in fast-growing industries such as energy and utilities, health care, telecom and government,’ IBM Venture Capital group managing director Claudia Fan Munce told reporters here.
Next-generation entrepreneurs will also have access to IBM’s research community, sales, marketing and technical skills under the new programme.
Explaining why the company was opening its resources to start-ups, Munce said businesses the world over were applying new technologies to address industry-specific needs and start-ups were looking for new ways to capitalise on the new trend.
‘We invest over $6 billion per year in research, with about 3,000 people in eight labs across the world. With 4,914 new patents in 2009, we bring innovative technologies to market,’ Munce said.
With its smarter planet strategy and years of investments in research, IBM is skilled in building product and services offerings for businesses based on new ideas. Its industry frameworks are software platforms targeted to industry specific market opportunities such as smarter water, smarter buildings and smarter health care. …
Mike Kirkwood’s Rulers of the Cloud: A Multi-Tenant Semantic Cloud is Forming & EMC Knows that Data Matters post of 3/31/2010 begins:
EMC is a large company focused on high-performance storage for enterprises. Its offerings are closely aligned with the idea of extending infrastructure from virtualization to private cloud infrastructure. The company wants to help make IT data-provisioning services as easy as Amazon and as secure as Fort Knox.
To get a handle on where enterprise data storage meets the web, we looked for inspiration from architects of the web and Internet, including web pioneer Sir Tim Berners-Lee and Vint Cerf. We take a look at EMC as positioned the closest, physically, to the core assets of the enterprise.
In this report, we also spoke with Ted Newman, CTO of the Cloud Infrastructure Group of EMC Consulting (part of EMC Global Services), to find out what is really happening in the enterprise sales and delivery engines.
We mashed his thoughts up with some big-thinkers in the core of computing to get perspective on the company's future as a map to enterprise information assets. …