Thursday, December 16, 2010

Windows Azure and Cloud Computing Posts for 12/15/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Update 12/16/2010 7:30 AM PST for an important announcement about new Azure PDC 2010 features in the Windows Azure Infrastructure section.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters are also available for HTTP download at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Michael Stiefel answers “Windows Azure provides two storage mechanisms: SQL Azure and Azure Storage tables. Which one should you use?” in his  Can Relational Databases Scale? post of 12/14/2010:

SQL Azure is basically SQL Server in the cloud. To get meaningful results from a query, you need a consistent set of data.

Transactions allow for data to be inserted according to the ACID principle: all related information is changed together. The longer the database lock manager keeps locks, the higher the likelihood that two transactions will modify the same data. As transactions wait for locks to clear, they will either be slower to complete or will time out and must be abandoned or retried. Data availability decreases.

Content distribution networks enable read-only data to be delivered quickly to overcome the speed of light boundary. They are useless for modifiable data. The laws of physics drive a set of diminishing economic returns on bandwidth. You can only move so much data so fast.

Jim Gray pointed out years ago that computational power gets cheaper faster than network bandwidth. It makes more economic sense to compute where the data is rather than moving it to a computing center. Data is often naturally distributed. Is connectivity to that data always possible? Some people believe that connectivity will always be available. Cell phone connectivity problems, data center outages, equipment upgrades, and last-mile problems indicate that is never going to happen.

Computing in multiple places leads to increased latency. Latency means longer lock retention. Increased lock retention means decreased availability.

Most people think of scaling in terms of large numbers of users: Amazon, Facebook, or Google. Scalability problems also arise from the geographic distribution of users, the transmission of large quantities of data, or any bottleneck that lengthens the time of a database transaction.

The economics of distributed computing argue in favor of many small machines, rather than one large machine. Google does not handle its search system with one large machine, but many commodity processors. If you have one large database, scaling up to a new machine can cost hours or days.

The CAP Theorem

Eric Brewer’s CAP Theorem summarizes the discussion. Given the constraints of consistency, availability, and partition tolerance, you can have only two of the three. We are comfortable with the world of a single database or database cluster with minimal latency, where we have consistency and availability.

Partitioning Data

If we are forced to partition our data, should we give up on availability or consistency? Let us first look at the best way to partition, and then ask whether we want consistency or availability.

If economics, the laws of physics, and current technology limits argue in favor of partitioning, what is the best way to partition? Distributed objects, whether via DCOM, CORBA, or RMI, failed for many reasons. The RPC model increases latencies that inhibit scalability. You cannot ignore the existence of the network. Distributed transactions fail as well because, once you get beyond a local network, the latencies of two-phase commit impede scalability. Two better alternatives exist: a key/value store such as Azure storage services, or partitioning data across relational databases without distributed transactions.

Azure storage services allow multiple partitions of tables with entities. Only CRUD operations exist: no foreign-key relations, no joins, no constraints, and no schemas. Consistency must be handled programmatically. This model works well with tens or hundreds of commodity processors, and can achieve massive scalability. Alternatively, one can partition SQL Azure horizontally or vertically. With horizontal partitioning we divide table rows across databases; with vertical partitioning we divide table columns across databases. Within each database you have transactional consistency, but there are no transactions across databases.
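
To make the key/value model concrete, here is a minimal sketch, assuming the StorageClient library that ships with the Windows Azure SDK and an invented Orders table: an entity is addressed purely by partition and row keys, and any consistency between partitions is left to the application, as Michael notes.

```csharp
// Minimal sketch only: illustrative table and entity names, development storage account.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// An entity is addressed by PartitionKey + RowKey; there are no joins, foreign keys,
// constraints, or server-side schemas, so cross-entity consistency is the application's job.
public class OrderEntity : TableServiceEntity
{
    public OrderEntity() { }                       // required for serialization
    public OrderEntity(string region, string orderId)
    {
        PartitionKey = region;                     // e.g., partition by subsidiary or region
        RowKey = orderId;
    }
    public double Total { get; set; }
}

public class OrderStore
{
    public static void AddSampleOrder()
    {
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudTableClient tables = account.CreateCloudTableClient();
        tables.CreateTableIfNotExist("Orders");

        TableServiceContext context = tables.GetDataServiceContext();
        context.AddObject("Orders", new OrderEntity("Europe", "1001") { Total = 42.50 });
        context.SaveChangesWithRetries();          // atomic batches never span partitions or tables
    }
}
```

Queries that need data from more than one partition have to fan out and reconcile results in code, which is the programmatic consistency referred to above.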

Horizontal partitioning works especially well when the data divides naturally: company subsidiaries that are geographically separate, historical analysis, or different functional areas such as user feedback and active orders. Vertical partitioning works well when updates and queries use different pieces of data. In all these cases we have to deal with data that might be stale or inconsistent.

Consistency or Availability?

Ask a simple question: What is the cost of an apology? The number of available books in Amazon is a cached value, not guaranteed to be correct. If Amazon ran a distributed transaction over all your shopping cart orders, the book inventory system, and the shipping system, they could never build a massively scalable front-end user interface. Transactions would be dependent on user interactions that could range from 5 seconds to hours, assuming the shopping cart is not abandoned. It is impractical to keep database locks that long. Since most of the time you get your book, availability is a better choice than consistency.

Airline reservation systems are similar. A database used for read-only flight shopping is updated periodically. Another database is for reservations. Occasionally, you cannot get the price or flight you wanted. Using one database to achieve consistency would make searching for fares or making reservations take forever.

Both cases have an ultimate source of truth: the inventory database, or the reservations database. Businesses have to be prepared to apologize anyway. Checks bounce, the last book in the inventory turns out to be defective, or the vendor drops the last crystal vase. We often have to make records and reality consistent.
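
A sketch of what "preparing to apologize" can look like in code, using an invented fulfillment service rather than anything from Michael's post: the order is accepted against a cached count, and reconciliation happens later, without long-lived locks.

```csharp
using System;

// Illustrative only: the checkout path trusts a cached (possibly stale) inventory count;
// fulfillment reconciles against the source of truth and compensates when reality disagrees.
class FulfillmentService
{
    private int _unitsOnHand;                                  // the ultimate source of truth

    public FulfillmentService(int unitsOnHand) { _unitsOnHand = unitsOnHand; }

    public void Fulfill(string customer, int quantity)
    {
        if (_unitsOnHand >= quantity)
        {
            _unitsOnHand -= quantity;
            Console.WriteLine("Shipped {0} unit(s) to {1}.", quantity, customer);
        }
        else
        {
            // The "apology": refund, coupon, or back-order, instead of holding locks at checkout.
            Console.WriteLine("Apology sent to {0}: the item is out of stock.", customer);
        }
    }

    static void Main()
    {
        var service = new FulfillmentService(1);
        service.Fulfill("alice", 1);   // ships
        service.Fulfill("bob", 1);     // apologizes; the cached count shown at checkout was stale
    }
}
```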

Software State is not the State of the World

We have fostered a myth that the state of the software always has to be identical to the state of the world. This often makes software applications difficult to use, or impossible to write. Deciding what it is worth to get it absolutely right is a business decision. As Amazon and the airlines illustrate, the cost of lost business and inconvenience sometimes outweighs the occasional problems of inconsistent data. You must then design for eventual consistency.

Summary

Scalability is based on the constraints of your application, the volume of data transmitted, or the number and geographic distribution of your users.

Need absolute consistency? Use the relational model. Need high availability? Use Azure tables, or the partitioned relational model. Availability is a subjective measure. You might partition and still get consistency. If the nature of your world changes, however, it is not easy to shift from the relational model to a partitioned model.

Michael is a principal of Reliable Software, Inc. and a consultant on software architecture and development, and the alignment of information technology with business goals.


<Return to section navigation list> 

SQL Azure Database and Reporting

Michael Heydt (@mikeheydt) of SunGard Consulting Services wrote a long-awaited, detailed Sharding with SQL Azure whitepaper for Microsoft, which the SQL Azure Team updated on 12/14/2010. From the summary:

Database sharding is a technique for horizontally partitioning data across multiple physical servers to provide application scale-out. SQL Azure is a cloud database service from Microsoft that provides database functionality as a utility service, offering many benefits including rapid provisioning, cost-effective scalability, high availability and reduced management overhead.

SQL Azure combined with database sharding techniques provides for virtually unlimited scalability of data for an application. This paper provides an overview of sharding with SQL Azure, covering challenges that would be experienced today, as well as how these can be addressed with features to be provided in the upcoming releases of SQL Azure.

Following are the white paper’s two illustrations (Figures 1 and 3 are identical):

Figure 1: SQL Azure Federations

Figure 2: A Multi-master sharding archetype

and the Summary:

This guidance has discussed the means of performing sharding with SQL Azure as it is today, along with guidance on building an ADO.NET based sharding library that implements recommended practices for sharding with SQL Azure.

Also presented is information showing how sharding features are going to be built directly into SQL Azure, and how the design of the ADO.NET based sharding library will provide a head start on sharding with SQL Azure today and provide a migration path when those features are available in SQL Azure.

In order to be brief and concise, the guidance has remained prescriptive, and certain concepts such as rebalancing, schema management, and adding and removing shards have been given only a cursory mention.  Further detail will be provided in future guidance on these and all other topics mentioned.

To keep up to date on new sharding guidance with SQL Azure and the new federation features, go to our blog at: http://blogs.msdn.com/sqlazure/

Mike’s 42 Spikes blog includes several earlier tutorials, such as  Installing AppFabric Cache and Talking to it with C# of 5/10/2010, oData Enabling a SQL Azure Database of 4/4/2010, and WCF RIA Services and POCO - Is there any Value? of 1/26/2010.


James Vastbinder summarized the preceding whitepaper in his Whitepaper Released: Sharding with SQL Azure article of 12/14/2010 for InfoQ:

Yesterday Microsoft released a new whitepaper, written by Michael Heydt and Michael Thomassy, providing guidance on sharding with SQL Azure.  Because SQL Azure currently has a limit of 50 GB per instance, one must employ this technique of horizontal partitioning to scale to larger sizes and achieve application scale-out.  The intent of the whitepaper is to deliver guidance on how to architect an application that requires elasticity and fluidity of resources at the data layer over time.

The whitepaper provides:

  • basic concepts in horizontal partitioning / sharding
  • an overview of patterns and best practices
  • challenges which may present themselves
  • high-level design of an ADO.NET sharding library
  • an introduction to SQL Azure Federations

While horizontal partitioning splits one or more tables by row, it is usually within the same database instance.  The advantage achieved is reduced index size which, in theory, provides faster retrieval rates for data.  In contrast, sharding tackles the same problem by splitting the table across multiple instances of the database which would typically reside on separate hardware requiring some form of notification and replication to provide synchronization between the tables.

In the Microsoft sharding pattern, a “sharding key,” which is the primary key of one of the data entities, is used to map data to specific shards.  Related data entities are clustered into a set based upon the shared sharding key, and this unit is referred to as an atomic unit.  All records in an atomic unit are stored in the same shard.  Additionally, rebalancing shards should be an offline process, because keys must be redistributed as the physical infrastructure is modified.
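
As a rough sketch of the idea (not the whitepaper's actual library; the connection strings and key type are assumptions), an application-level shard map can hash the sharding key of an atomic unit onto a fixed list of SQL Azure connection strings, so every record sharing that key lands on the same shard:

```csharp
// Illustrative sketch only.
using System;
using System.Data.SqlClient;

public class ShardMap
{
    private readonly string[] _shardConnectionStrings;

    public ShardMap(params string[] shardConnectionStrings)
    {
        _shardConnectionStrings = shardConnectionStrings;
    }

    // Every entity in the atomic unit carries the same sharding key,
    // so the whole unit resolves to the same database.
    public SqlConnection OpenConnectionFor(int shardingKey)
    {
        int index = Math.Abs(shardingKey) % _shardConnectionStrings.Length;
        var connection = new SqlConnection(_shardConnectionStrings[index]);
        connection.Open();
        return connection;
    }
}
```

Note that with a fixed modulo like this, adding a shard means redistributing keys, which is one reason rebalancing is treated as an offline process; a production design would more likely keep an explicit key-to-shard lookup table so shards can be added without rehashing every atomic unit.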

Microsoft will release SQL Azure Federations, which will support sharding at the database level, in 2011.  For now, all sharding capabilities must be implemented at the application level using ADO.NET.  This is in contrast to current “NoSQL” alternatives such as MongoDB, CouchDB, and SimpleDB, which already support sharding.

You might also find the “James Vastbinder published a Concept Map of SQL Azure on 12/14/2010” article in Windows Azure and Cloud Computing Posts for 12/14/2010+ of interest.


Corey Fowler (@SyntaxC4) posted Migrating Large Databases from On-Premise to SQL Azure on 12/14/2010:

Recently, I was working on a project that required a site migration from a shared hosting server to Windows Azure. This application has been up and running for some time and had acquired quite a substantial database.

During the course of the project I ran across a few roadblocks which I wasn’t expecting, given the experience gained in my previous blog entries: Migrate a database using the SQL Azure Data Sync Tool and Scripting a database for SQL Azure (issues explained in the previous link were resolved with the launch of SQL Server 2008 R2). Hopefully the following tricks will help you along your data migration.

Using Import/Export in SSMS to Migrate to SQL Azure

In addition to the SQL Azure Data Sync Tool, it is possible to use the existing Import/Export Wizard in SQL Server Management Studio to migrate data to SQL Azure. There are a number of things to keep in mind while using the Import/Export Tool:

SQL Server Native Client to .NET Data Provider for SqlServer

SQL Azure doesn’t fall under the typical SQL Server Native Client 10.0 product SKU, which means that you’ll have to use the .NET Data Provider for SqlServer to migrate your data. The configuration screen for the provider is very intuitive, but there are two key settings that should be changed from their default values: Asynchronous Processing (set to true) and Connection Timeout (increased to 1500).


Without changing the timeout value, the data migration would error out after creating the first few sets of rows. Making this an asynchronous process was beneficial when exporting multiple tables at a time.
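
The same two settings can also be expressed as a plain ADO.NET connection string, which is a convenient way to sanity-check them outside the wizard; the server and credential values below are placeholders, not Corey's:

```csharp
// Sketch: build the SQL Azure connection string the wizard's .NET provider would use.
using System.Data.SqlClient;

class AzureConnectionSettings
{
    static string Build()
    {
        var builder = new SqlConnectionStringBuilder
        {
            DataSource = "yourserver.database.windows.net",   // placeholder server name
            InitialCatalog = "TargetDatabase",                // placeholder database
            UserID = "user@yourserver",                       // SQL Azure expects user@server
            Password = "********",
            AsynchronousProcessing = true,                    // "Asynchronous Processing=true"
            ConnectTimeout = 1500,                            // raised from the 15-second default
            Encrypt = true
        };
        return builder.ConnectionString;
    }
}
```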

Work-around for SSIS Type: (Type unknown …) Error

There is a chance when you go to run the migration that you will encounter an error as described in Wayne Berry’s [@WayneBerry] blog post entitled “SSIS Error to SQL Azure with varbinary(max)” on the SQL Azure Blog.

As Wayne explains in his post, there are a number of XML files which contain data mapping information used by the Import/Export Wizard in order to map the data from the source database to the proper data type in the destination database.

[According to Wayne: With the SQL Server 2008 R2 release the MSSQLToSSIS10.xml file has a definition for varbinarymax, the dtm:DataTypeName element. However, the SqlClientToSSIS.xml file doesn’t contain a definition for varbinarymax. This leaves SSIS unable to map this data type when moving data between these two providers.]

Database Seeded Identity Insert Issue

I’m not sure why this happened, but when using the Import/Export Wizard, even with Identity Insert on, the ID [Identity] column was not getting the correct values. To get around this I used ROW_NUMBER to generate new identities and rebuilt the foreign key tables, as sketched below.
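
As a rough illustration of that workaround (the table and column names are hypothetical, not Corey's schema), ROW_NUMBER can assign new sequential IDs while the old ones are kept alongside so the foreign-key tables can be remapped afterwards:

```csharp
// Sketch only: NewId is a plain int column here, not an IDENTITY column,
// so the ROW_NUMBER() values can be inserted directly.
using System.Data.SqlClient;

class IdentityRebuild
{
    static void Run(string sqlAzureConnectionString)
    {
        const string sql = @"
            INSERT INTO dbo.CustomersNew (NewId, OldId, Name)
            SELECT ROW_NUMBER() OVER (ORDER BY OldId), OldId, Name
            FROM dbo.CustomersStaging;";

        using (var connection = new SqlConnection(sqlAzureConnectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            command.ExecuteNonQuery();   // foreign-key tables are then updated via the OldId -> NewId map
        }
    }
}
```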

There is a lot of chatter on the forums and in other blog posts saying that BCP with the –E switch is the most effective way to do exact copying (with identity columns).

For more information:

Cost Effective Approach

A good thing to keep in mind while preparing your database for migration is that transaction as well as data transfer costs are applied to queries to (and from) SQL Azure. With this in mind, it is best to set up a scenario where you test your data migration first, to ensure the actual migration can be performed in as few attempts as possible.

I’m surprised Corey didn’t use the SQL Azure Migration Wizard, as described in my Using the SQL Azure Migration Wizard v3.3.3 with the AdventureWorksLT2008R2 Sample Database post, for his migration project.


<Return to section navigation list> 

Marketplace, DataMarket and OData

Paraleap Technologies added their AzureWatch Auto-Scaling Service for Azure Applications application to the Windows Azure Marketplace on 12/15/2010:



Tom Mertens announced on 12/15/2010 that OData will be one of the topics at Web Camp Belgium (24 Jan) w/ Scott Hanselman & James Senior:

Microsoft Web Camps are free events that allow you to learn and build on the Microsoft Web Platform. And these Web Camps are coming to Belgium!

The Belgian Web Camp event on the 24th of January 2011 is a full-day event where you will hear from Microsoft experts on the latest components of the platform, including ASP.NET MVC 3, jQuery, HTML5, OData and WebMatrix. Scott Hanselman will be doing a two-hour keynote together with James Senior, which is followed by three sessions delivered by Gill Cleeren and Katrien De Graeve.

Two options for registering for the event:

  1. Register to attend in-person
  2. Register for the keynote live stream and see how you could have free breakfast delivered to your company: 

Location: Business Faculty, St. Lendriksborre 6 / Font Saint Landry 6, 1120 Brussel (Neder-over-Heembeek), Belgium

Timing: Monday 24 January 2011, 8:30 to 17:00


8:30 – 9:00    Welcome and registration

9:00 – 11:00   Opening Keynote by Scott Hanselman and James Senior on ASP.NET MVC 3 and WebMatrix

11:00 – 11:30  Coffee Break

11:30 – 12:30  HTML5: How about today? (Katrien De Graeve)

What is HTML5? With more and more browsers supporting HTML5, ECMAScript 5 and other web standards, developers now have a strong web platform they can use to create a new class of web application that is more powerful and interactive than ever before. What's in HTML5 that lets us take our sites to the next level? Expect demos and code!

12:30 – 13:30  Lunch

13:30 – 14:45  Come in as jQuery zero, go out as jQuery hero (Gill Cleeren)

jQuery is the web developers’ new favorite. This lightweight JavaScript library has developers writing JavaScript code again, and loving it! What previously needed 20 lines of code can now be done in just 3 lines. Who wouldn’t be enthusiastic? Microsoft showed its love for the library by fully integrating it in Visual Studio. I dare to ask: should you stay behind? In this session, we’ll take a look at jQuery and we’ll teach you what you need to know to get you on your way. More specifically, we’ll look at selectors, attributes, working with WCF, jQuery UI, and much more. You could easily walk out of this session wearing a sticker: “I love jQuery”!

14:45 – 15:15  Coffee Break

15:15 – 16:30  Oh, look at that data: using oData to expose your data over the web (Gill Cleeren)

While applications, sites, tools all generate tons of useful data, it is sometimes hard to access that data from your own application. To increase the shared value of data, Microsoft has introduced the Open Data protocol. Using Open Data, we can expose any data source as a web-friendly data feed.

In this session, we'll start by looking at oData, to make sure that everyone is on board with all the concepts. We'll see how it adds value for the developer and the end user for many of Microsoft's products and services. We'll then look at how we can build our own oData services using WCF Data Services, from working with the basic concepts to more advanced features such as query interceptors and service operations.

Come and learn about information and entity services that are stunning in their simplicity!

16:30 – 17:30  Closing drink


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

The Windows Azure App Fabric Labs Team announced Windows Azure AppFabric CTP December release now available on 12/14/2010:

Today we deployed an incremental update to the Access Control Service in the Labs environment. It's available here: http://portal.appfabriclabs.com/. Keep in mind that there is no SLA around this release, but accounts and usage of the service are free while it is in the labs environment.

This release builds on the prior October release of the Access Control Service, and has a few changes:

  • Improved error messages by adding sub-codes and more detailed descriptions.
  • Adding primary/secondary flag to the certificate to allow an administrator to control the lifecycle.
  • Added support for importing the Relying Party from the Federation Metadata.
  • Updated the Management Portal to address usability improvements and support for the new features.
  • Support for custom error handling when signing in to a Relying Party application.

We've also added some more documentation and updated the samples on our CodePlex project: http://acs.codeplex.com.

Like always, we encourage you to check it out and let the team know what you think.


Vittorio Bertocci (@vibronet) reported More ACS Goodness coming your way in a 12/15/2010 post to his personal blog:

…just in time for the holidays!

The ACS guys just rolled out an improvements-studded update to their service. Some of those improvements include:

  • Improved error messages by adding sub-codes and more detailed descriptions.
  • Adding primary/secondary flag to the certificate to allow an administrator to control the lifecycle.
  • Added support for importing the Relying Party from the Federation Metadata.
  • Updated the Management Portal to address usability improvements and support for the new features.
  • Support for custom error handling when signing in to a Relying Party application.
  • more documentation and updated samples on the CodePlex project: http://acs.codeplex.com.

You can read everything about it in the announcements post. Have fun!

<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Wade Wegner (@wadewegner) announced release of the Windows Azure Platform Training Kit, December 2010 Update on 12/15/2010:

I am happy to announce that we have released the December 2010 update to the Windows Azure Platform Training Kit.

The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform including: Windows Azure, SQL Azure and the Windows Azure AppFabric.

The December update provides new and updated hands-on labs, demo scripts, and presentations for the Windows Azure November 2010 enhancements and the Windows Azure Tools for Microsoft Visual Studio 1.3. These new hands-on labs demonstrate how to use new Windows Azure features such as Virtual Machine Role, Elevated Privileges, Full IIS, and more. This release also includes hands-on labs that were updated in late October and November 2010 to demonstrate some of the new Windows Azure AppFabric services that were announced at the Professional Developers Conference (http://microsoftpdc.com) including the Windows Azure AppFabric Access Control Service, Caching Service, and the Service Bus.


Some of the specific changes with the December update of the training kit include:

  • [Updated] All demos were updated to the Azure SDK 1.3
  • [New demo script] Deploying and Managing SQL Azure Databases with Visual Studio 2010 Data-tier Applications
  • [New presentation] Identity and Access Control in the Cloud
  • [New presentation] Introduction to SQL Azure Reporting
  • [New presentation] Advanced SQL Azure
  • [New presentation] Windows Azure Marketplace DataMarket
  • [New presentation] Managing, Debugging, and Monitoring Windows Azure
  • [New presentation] Building Low Latency Web Applications
  • [New presentation] Windows Azure AppFabric Service Bus
  • [New presentation] Windows Azure Connect
  • [New presentation] Moving Applications to the Cloud with VM Role

Give it a try today!

In addition to the training kit, we have updated the Windows Azure Platform Training Courses on MSDN.

The Windows Azure Platform Training Course includes a comprehensive set of hands-on labs and videos that are designed to help you quickly learn how to use Windows Azure, SQL Azure, and the Windows Azure AppFabric. This release provides new and updated hands-on labs for the Windows Azure November 2010 release and the Windows Azure Tools for Microsoft Visual Studio 1.3. Some of these new hands-on labs demonstrate how to use new Windows Azure features such as Virtual Machine Role, Elevated Privileges, Full IIS, and more. This release also includes hands-on labs that were updated in late October 2010 to demonstrate some of the new Windows Azure AppFabric services that were announced at the Professional Developers Conference (http://microsoftpdc.com) including the Windows Azure AppFabric Access Control Service, Caching Service, and the Service Bus.



The Windows Azure Team announced the Windows Azure Architect Training Program on 12/15/2010. From the About page:

The Windows Azure Architect program is a self-paced high-level Microsoft training program, developed in and managed from Sweden. The purpose of the program is to help developers and developing organizations make the best possible use and take maximum advantage of the Windows Azure Platform, both in terms of architecting applications for that platform and of implementing them.

The program has been designed and developed with global consumption in mind. All the content is written in English, e-presentations and e-demonstrations are performed in English, the example application is owned by a company which resides in Hartford in Connecticut USA and has a globally oriented business.

Microsoft has worked together with long-time architectural partner Sundblad & Sundblad to develop and manage this program.


The Home page adds:

After having purchased a license to the program, you can immediately start studying the modules available. When you have completed all of the required lab exercises, you can take the program’s test to get a diploma in recognition of your newly developed skills with Windows Azure technologies. The diploma is signed by Microsoft managers.

The fee for the training program is US$ 1,490 excluding sales tax, with an Early Bird Offer: US$ 1,192 excluding sales tax.


Nick Eaton reported Microsoft's Dynamics sales pitch: Not just on the cloud in a 12/15/2010 post to the Seattle Post-Intelligencer’s Microsoft blog:

When Microsoft releases the next version of its customer relationship management product in mid-January, a cloud-based version of Dynamics CRM 2011 will finally be available as a competitor to Salesforce.com. But Microsoft's sales pitch will highlight, in part, that Dynamics CRM is not just on the cloud.

In fact, Dynamics CRM 2011 and Dynamics CRM Online 2011 will be nearly identical, just hosted in different places, said Michael Park, corporate vice president of sales, marketing and operations for Microsoft Business Solutions.

"The beauty is it's the same product," he said during an interview Monday.

That gives potential customers the flexibility to choose whether they want a CRM product hosted on-premises or at data centers, Park said. If a customer then decides to switch from on-premises to the cloud -- or vice versa -- it's a relatively easy process.

Small and medium-sized businesses care more about pricing than they do about cloud or on-premises hosting, Park said. Most large enterprises Microsoft has talked to care most about being able to switch to cloud hosting later on.

And some organizations -- particularly government agencies whose rules prohibit some employees from using cloud services for work -- could choose a "hybrid model" that allows IT to manage both on-premises and cloud instances of Dynamics from one place, Park said.

"When you take a step back and take a look at the whole cloud landscape," he said, "we think we have a really good option."

Dynamics CRM Online has been in beta testing since September, but Microsoft has recently started an aggressive marketing campaign. In October, it announced promotional pricing for Dynamics CRM Online of $34 per user per month, down from $44.

On Dec. 6, Microsoft wrote an "open letter" to existing Salesforce (and Oracle) customers, encouraging them to switch and offering $200 off per seat for migration expenses.

"Looks like Microsoft is getting more aggressive in this area," IDC analyst Al Hilwa said last week. "This is a standard guerrilla marketing tactic which will certainly get attention, if not customers. Good to see that this part of Microsoft is awake and watching their main competitor hard."

Then, outside Salesforce's Dreamforce expo in San Francisco last week, Microsoft had representatives zipping around on Segways that featured an ad for Dynamics.

(Salesforce CEO Marc Benioff cleverly twisted that stunt in his favor when he invited the same actor from those Microsoft ads on stage during a Dreamforce keynote, apologizing to the fake customer and convincing him to switch back to Salesforce. The Wall Street Journal reported that the man is a friend of Benioff's.)

Microsoft will continue to focus on Salesforce customers, Park said, including a pitch that Dynamics CRM Online's integration with Windows Azure is superior to whatever results from Salesforce's recent acquisition of cloud platform Heroku. Salesforce shook the tech industry a decade ago when it launched the CRM platform only online.

"I think Salesforce has, in the past 10 years, done a good job being out there," Park said, "but they've gone relatively unchallenged in the cloud."


Wes Yanaga reported Microsoft Dynamics CRM 2011 Release Candidate (RC) Available in a 12/15/2010 post to the US ISV Evangelism blog:

The Microsoft Dynamics CRM 2011 Release Candidate (RC) is now available for download in the Microsoft Download Center.  As with the Microsoft Dynamics CRM 2011 Beta, the Release Candidate is available for anyone to download, and will be available until the RTM release scheduled for Q1, 2011.


Download Link:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=c3f82c6f-c123-4e80-b9b2-ee422a16b91d&displaylang=en

For More Information Please visit: The Microsoft Dynamics CRM Blog

For Technical and Marketing Support Join Microsoft Platform Ready:

Microsoft Platform Ready brings together a variety of useful assets including training, support, and marketing. Current members of the Microsoft Partner Network, startups participating in Microsoft BizSpark, and established software companies with no prior association to Microsoft are welcome to participate in Microsoft Platform Ready.

For more information about Microsoft Platform Ready, visit the site or contact mprcore@microsoft.com


Klint Finley reported London Underground API Returns Thanks to the Cloud to the ReadWriteCloud blog on 12/14/2010:

Transport for London (TfL) has relaunched its API using Microsoft Azure. Last June, TfL released an API that enabled developers to build tools such as a live London Underground train tracker. Unfortunately, the API was a victim of its own success: the service had to be shut down less than a month later due to the strain the enormous number of API calls were placing on TfL's servers. The Guardian reports that the API is back, and the live train tracking with it.

From Microsoft's announcement:

The latest data feed to be added to the Area and onto Windows Azure is called 'Trackernet' - an innovative new realtime display of the status of the London Underground 'Tube' network. Trackernet is able to display the locations of trains, their destinations, signal aspects and the status of individual trains at any given time. This works by taking four data feeds from TfL's servers and making it available to developers and the public via the Windows Azure platform.

Microsoft and TfL have worked in partnership to create this scalable and strategic platform. The platform is resilient enough to handle several million requests per day and will enable TfL to make other feeds available in the future via the same mechanism, at a sustainable cost.

Services available through the API include:

  • Live traffic disruptions
  • Realtime road message signs
  • Barclays Cycle Hire docking station locations
  • Timetable of planned weekend Tube works
  • Station locations (for Tube, DLR and London Overground)
  • River Thames pier locations
  • Findaride (licensed private hire operators)
  • Oyster Ticket Stop locations

Developers interested in working with the TfL API can find out more here.
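
Pulling one of the feeds is just an HTTP GET against the Azure-hosted endpoint; the sketch below uses a placeholder URL (the real feed addresses are published in TfL's developer resources), so treat it purely as an illustration:

```csharp
using System;
using System.Net;
using System.Xml.Linq;

class TrackernetSample
{
    static void Main()
    {
        // Placeholder address: substitute an actual feed URL from TfL's developer documentation.
        const string feedUrl = "http://example.org/trackernet/predictions";

        using (var client = new WebClient())
        {
            string payload = client.DownloadString(feedUrl);   // feeds are returned as XML
            XDocument document = XDocument.Parse(payload);
            Console.WriteLine(document.Root.Name);             // inspect the root element
        }
    }
}
```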


Maarten Balliauw posted a Tutorial - Using the Windows Azure SDK for PHP on 12/14/2010 to DZone’s PHP Zone blog:

This tutorial focuses on obtaining the Windows Azure SDK and a Windows Azure storage account. In all tutorials, the Windows Azure development storage will be used for working with Windows Azure storage services. Development storage is a simulation environment that runs on your local computer and can be used for development purposes. However, it is also possible to work through the tutorials using a production Windows Azure storage account. Both alternatives are described in the next two topics.

Installing Windows Azure development storage

The Windows Azure SDK provides the Windows Azure development storage and additional features which make it easier to build & test applications. The SDK requires a version of Windows 7, Windows Vista Service Pack 1 or greater, or Windows Server 2008 with a version of SQL Server Express or SQL Server installed.

The download for the Windows Azure SDK can be found at http://msdn.microsoft.com/en-us/windowsazure/cc974146.aspx.

After installation, the development storage service can be started through the Start button, All programs, Windows Azure SDK, Development Storage.


Figure 1: Development Storage

The storage service endpoints are listed in the development storage UI; by default, development storage listens at http://127.0.0.1:10000 (Blob), http://127.0.0.1:10001 (Queue) and http://127.0.0.1:10002 (Table).

Access to the storage endpoints is granted based on an account name that you choose and a generated account key. For the development storage environment, the following account is the default one:

Account name: devstoreaccount1
Account key: Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==

Creating a production Windows Azure storage account

If you choose not to work with development storage, or want to deploy your application to the Windows Azure production environment, the following steps need to be taken:

1. You need a Windows Live ID

2. You need a Windows Azure subscription. The Introductory Special offers plenty of free storage for development purposes.

3. After activating your subscription, navigate to the Windows Azure Developer Portal, click on your Project Name, then on New Service and select Storage Account:


Figure 2: Creating a storage account

4. Provide a Service Label and Description for your storage account and click Next.


Figure 3: Storage account service label

5. After this, a globally unique name for your storage account has to be decided. Also, a Region for your storage service has to be decided: in which datacenter should all data be stored? After specifying the DNS name and Region, click Create.


Figure 4: Specifying DNS name and region

6. The storage account will be created and added to your Windows Azure subscription. All details such as storage endpoints, account name and account key are listed on the details page.


Figure 5: Storage account details

A Windows Azure storage account is structured as follows: every account includes blob storage, table storage and the queue service, each with specific HTTP(S) endpoints of the form http://&lt;account&gt;.blob.core.windows.net, http://&lt;account&gt;.table.core.windows.net and http://&lt;account&gt;.queue.core.windows.net, as listed on the storage account details page.

The account details for the current storage account are the following:

Account name: mystorageplayground
Account key: Kd9goKFw/QtP0AFIxxUQ08q9ntgTA5TcUcw5cgE3eOUOBDhBIiE991Q4AK/5PmQiWlYzAlWRf1uqVUnq7/FD4Q==

Obtaining & installing the Windows Azure SDK for PHP

The Windows Azure SDK for PHP focuses on providing a means of interacting with the storage and diagnostic components of Windows Azure by providing an abstraction of the REST operations that are exposed by Windows Azure storage in the form of an easy-to-use PHP class library.

Manual installation

The Windows Azure SDK for PHP can be downloaded from http://phpazure.codeplex.com:

1. Navigate to http://phpazure.codeplex.com and click the Downloads tab.

2. Under Recommended Download, download the file that is listed.

3. Extract the downloaded archive to your hard disk. The unzipped folder structure looks like the following:


Figure 6: Folder structure of extracted Windows Azure SDK for PHP

You may now use the Windows Azure SDK for PHP in your applications.

Installation via PEAR

If you are used to working with PEAR (PHP Extension and Application Repository, more information via http://pear.php.net),  you can also install the Windows Azure SDK for PHP through a PEAR channel. The following steps can be taken to discover the Windows Azure for PHP PEAR channel and install the latest Windows Azure SDK for PHP package:

1. Open a PHP console and ensure that PEAR is installed.

2. Register the Windows Azure SDK for PHP PEAR channel by issuing the command pear channel-discover pear.pearplex.net


Figure 7: Registering the Windows Azure for SDK PEAR channel

3. After successful channel registration, install the Windows Azure SDK for PHP by issuing the command pear install pearplex/PHPAzure


Figure 8: Installing the Windows Azure SDK for PHP PEAR package

4. The Windows Azure SDK for PHP is now installed at <path to PHP>/PEAR/PHPAzure

Additional Resources


<Return to section navigation list> 

Visual Studio LightSwitch

Michael Crump recommends checking the minRuntimeVersion after creating a new Silverlight Application if you’re creating Visual Studio LightSwitch apps in his Part 3 of 4 : Tips/Tricks for Silverlight Developers post of 3/13/2010:

Tip/Trick #13:

What is it? Always check the minRuntimeVersion after creating a new Silverlight Application.

Why do I care? Whenever you create a new Silverlight application and host it inside of an ASP.NET website, you will notice Visual Studio generates some code for you as shown below. The minRuntimeVersion value is set by the SDK installed on your system. Be careful if you are playing with betas like “LightSwitch,” because you will have a higher version of the SDK installed. So when you create a new Silverlight 4 project and deploy it to your customers, they will get a prompt telling them they need to upgrade Silverlight. They also will not be able to upgrade to your version because it’s not released to the public yet.

How do I do it: Open up the .aspx or .html file Visual Studio generated and look for the line below. Make sure it matches whatever version you are actually targeting.

[Screenshot: the generated object tag, showing the minRuntimeVersion parameter value to verify]


<Return to section navigation list> 

Windows Azure Infrastructure

Alves Arlindo reported New Windows Azure Platform Features Available Today in a 12/15/2010 post to his Microsoft Belgium blog:

Building out an infrastructure that supports your web service or application can be expensive, complicated and time consuming. Whether you need to forecast the highest possible demand, build out the network to support your peak times, get the right servers in place at the right time, or manage and maintain the systems, these actions require time and money.

The Windows Azure platform is a flexible cloud computing platform that lets you focus on solving business problems and addressing customer needs instead of building the infrastructure to run your business on. Furthermore, with the platform there is no need to invest up front in expensive infrastructure at all. Pay only for what you use, scale up when you need capacity and pull it back when you don’t; all this power is provided by the Windows Azure platform at your fingertips.

During PDC 2010 we announced much new functionality to become available at the end of this calendar year. Some of these new functionalities are available as of today:

  • Full Administrative Access
  • Full IIS Access
  • Remote Desktop
  • Windows Azure Connect
  • VM Role

Reading about cloud computing is one thing; experimenting and trying it out is a completely different thing. As such, Microsoft provides different ways for you to explore these new features while making cloud computing, and Windows Azure in particular, more accessible to you and your business. All this and much more can be done in three easy steps.

Setup a Free Account

You will need an account and subscription to access the Windows Azure Portal allowing you to deploy your applications. Microsoft offers two choices for having a free subscription:

  • Windows Azure Introductory Special: This is a new offer specially made for you. It is limited to one per customer and includes a base amount of Windows Azure platform services, free of charge and with no monthly commitment.
    1. Navigate to the Microsoft Online Services Customer Portal.
    2. Select the country you live in and press continue.
    3. Right click on the sign in link to sign in the portal.
    4. Click on the View Service Details link under the Windows Azure Platform section.
    5. Locate the Windows Azure Platform Introductory Special offer and click on buy.
    6. Provide a name for the subscription.
    7. Check the Rate Plan check box below and click next
    8. Enter the Billing information and click next
    9. Check the Agreement box and click purchase.
  • Windows Azure Platform MSDN Premium: MSDN subscribers can activate the Windows Azure platform benefits included with their subscription as follows:
    1. Sign in to the Microsoft Online Services Customer Portal.
    2. Click on the Subscriptions tab and find the subscription called “Windows Azure Platform MSDN Premium”.
    3. Under the Actions section, make sure one of the options is “Opt out of auto renew”.  This ensures your benefits will extend automatically.  If you see “Opt in to auto renew” instead, select it and click Go to ensure your benefits continue for another 8 months.
    4. After your first 8 months of benefits have elapsed (you can check your start date by hovering over the “More…” link under “Windows Azure Platform MSDN Premium” on this same page), you will need to come back to this page and choose “Opt out of auto renew” so that your account will close at the end of the 16-month introductory benefit period.  If you keep this account active after 16 months, all usage will be charged at the normal “consumption” rates.

Note: You can have both offers active at the same time providing even more free access to the Windows Azure Platform and related new functionalities.

Download the Required Tools

The following tools are required to access the new features on the Windows Azure platform: the Windows Azure SDK and the Windows Azure Tools for Microsoft Visual Studio 1.3.

Use and Experience the New Features

As part of the release of the new features, new detailed walkthroughs are being made available in learning how to use these new features:

  • Introduction to Windows Azure: In this walkthrough, you explore the basic elements of a Windows Azure service by creating a simple application that demonstrates many features of the Windows Azure platform, including web and worker roles, blob storage, table storage, and queues.
  • Deploying Applications in Windows Azure: In this walkthrough, you learn how to deploy your first application in Windows Azure by showing the steps required for provisioning the required components in the Windows Azure Developer Portal, uploading the service package, and configuring the service.
  • Virtual Machine Role: Windows Azure Virtual Machine Roles allow you to run a customized instance of Windows Server 2008 R2 in Windows Azure, making it easier to move applications to the cloud. In this walkthrough, you explore Virtual Machine roles and you learn how to create custom OS images that you deploy to Windows Azure.

It’s strange that the first (and AFAIK, only) comprehensive announcement of these new features is from Microsoft Belgium.


Chris Hoff (@Beaker) posted Incomplete Thought: Why We Have The iPhone and AT&T To Thank For Cloud… on 12/15/2010:

image I’m not sure this makes any sense whatsoever, but that’s why it’s labeled “incomplete thought,” isn’t it? ;)

A few weeks ago I was delivering my Cloudinomicon talk at the Cloud Security Alliance Congress in Orlando and as I was describing the cyclical nature of computing paradigms and the Security Hamster Sine Wave of Pain, it dawned on me — out loud — that we have Apple’s iPhone and its U.S. carrier, AT&T, to thank for the success of Cloud Computing.

My friends from AT&T perked up when I said that.  Then I explained…

So let me set this up. It will require some blog article ping-pong in order to reference earlier scribbling on the topic, but here’s the very rough process:

  1. I’ve pointed out that there are two fundamental perspectives when describing Cloud and Cloud Computing: the operational provider’s view and the experiential consumer’s view.  To the provider, the IT-centric, empirical and clinical nuances are what matters. To the consumer, anything that connects to the Internet via any computing platform using any app that interacts with any sort of information is also cloud.  There’s probably a business/market view, but I’ll keep things simple for purpose of illustration.  I wrote about this here:  Cloud/Cloud Computing Definitions – Why they Do(n’t) Matter…
    -
  2. As we look at the adoption of cloud computing, the consumption model ultimately becomes more interesting than how the service is delivered (as it commoditizes.) My presentation “The Future of Cloud” focused on the fact that the mobile computing platforms (phones, iPads, netbooks, thin(ner) clients, etc) are really the next frontier.  I pointed out that we have the simultaneous mass re-centralization of applications and data in massive cloud data centers (however distributed ethereally they may be) and the massive distribution of the same applications and data across increasingly more intelligent, capable and storage-enabled mobile computing devices.  I wrote about this here: Slides from My Cloud Security Alliance Keynote: The Cloud Magic 8 Ball (Future Of Cloud)
    -
  3. The iPhone isn’t really that remarkable a piece of technology in and of itself, in fact it capitalizes on and cannibalizes many innovations and technologies that came before it.  However, as I mentioned in my post “Cloud Maturity: Just Like the iPhone, There’s An App For That…The thing I love about my iPhone is that it’s not a piece of technology I think about but rather, it’s the way I interact with it to get what I want done.  It has its quirks, but it works…for millions of people.  Add in iTunes, the community of music/video/application artists/developers and the ecosystem that surrounds it, and voila…Cloud.”
    -
  4. At each and every compute paradigm shift, we’ve seen the value of the network waffle between “intelligent” platform and simple transport depending upon where we were with the intersection of speeds/feeds and ubiquity/availability of access (the collision of Moore’s and Metcalfe’s laws?)  In many cases, we’ve had to rely on workarounds that have hindered the adoption of really valuable and worthwhile technologies and operational models because the “network” didn’t deliver.

I think we’re bumping up against point #4 today.  So here’s where I find this interesting.  If we see the move to the consumerized view of accessing resources from mobile platforms to resources located both on-phone and in-cloud, you’ll notice that even in densely-populated high-technology urban settings, we have poor coverage, slow transit and congested, high-latency, low-speed access — wired and wireless for that matter.

This is a problem. In fact it’s such a problem that if we look backward to about 4 years ago when “cloud computing” and the iPhone became entries in the lexicon of popular culture, this issue completely changed the entire application deployment model and use case of the iPhone as a mobile platform.  Huh?

Do you remember when the iPhone first came out? It was a reasonably capable compute platform with a decent amount of storage. Its network connectivity, however, sucked.

Pair that with the fact that, per Steve Jobs, there were emphatically not going to be native applications on the iPhone, for many reasons including security.  Every application was basically just a hyperlink to a web application located elsewhere.  The phone was nothing more than a web browser that delivered applications running elsewhere (for the most part, especially when things like Flash weren’t present). Today we’d call that “The Cloud.”

Interestingly, at this point, the value of the iPhone as an application platform was diminished since it was not highly differentiated from any other smartphone that had a web browser.

Time went by and connectivity was still so awful and unreliable that Apple reversed direction to drive value and revenue in the platform, engaged a developer community, created the App Store and provided for a hybrid model — apps both on-platform and off — in order to deal with this lack of ubiquitous connectivity.  Operating systems, protocols and applications were invented/deployed in order to deal with the synchronization of on- and off-line application and information usage because we don’t have pervasive high-speed connectivity in the form of cellular or wifi such that we otherwise wouldn’t care.

So this gets back to what I meant when I said we have AT&T to thank for Cloud.  If you can imagine that we *did* have amazingly reliable and ubiquitous connectivity from devices like our iPhones — those consumerized access points to our apps and data — perhaps the demand for and/or use patterns of cloud computing would be wildly different from where they are today. Perhaps they wouldn’t, but if you think back to each of those huge compute paradigm shifts — mainframe, mini, micro, P.C., Web 1.0, Web 2.0 — the “network” in terms of reliability, ubiquity and speed has always played a central role in adoption of technology and operational models.

Same as it ever was.

So, thanks AT&T — you may have inadvertently accelerated the back-end of cloud in order to otherwise compensate, leverage and improve the front-end of cloud (and vice versa.)  Now, can you do something about the fact that I have no signal at my house, please?


Andrew Binstock asserted “Even though Microsoft comes to the party later than Amazon and Google, it has a key advantage …” in his Cloud homes still need work post of 12/15/2010 to SDTimes on the Web:

I’m surely not the first person to point out the sudden interest in the cloud these days. If I were to believe everything I read, I’d be forced to conclude that our universe has just been popped into a computing space that consists of two endpoints: the cloud for the back end and mobile data for the front end. Desktops? Laptops? Local servers? Pshaww, passé!

This might indeed be how the world will shake out in a few years, but for the time being, we have to solve what’s here and now. The first point to make, then, is that much of the interest in clouds is directed toward internal clouds. IT departments are simply not going to ship their source code or data to some collection of resources hosted by the latest cloud startup. There is a very natural security aversion to doing this. What host, beyond possibly Salesforce, can establish its credentials in security and uptime at a level sufficient to convince IT organizations? So far, few companies have.

Security and reliability, however, are only part of the issue. An important additional concern is the difficulty in defining a significant benefit for IT organizations to host apps or data in a cloud outside the firewall. IT managers who grok the benefit of virtualization and clouds are happier running those platforms on so-called “private clouds,” that is, clouds within the firewall.
The argument in favor of the undifferentiated cloud that is frequently trotted out is the savings that are realized on hardware (the capex) and on management costs (the opex). The hole in these benefits is that hardware is inherently inexpensive today, and the management cost benefits are hard to capture. Certainly, provisioning and decommissioning machines is easier, but new management policies and skills must be learned.

For example, a frequent problem is the profusion of VM snapshots and templates. These are large files that are expensive to store and move around. They also offer comparatively little metadata to guide administration. Emergency management is no trivial matter either. If a cloud system goes down, determining which hardware item has actually failed and what its effects on other jobs will be is not easy.

From page 3:

Even though Microsoft comes to the party later than Amazon and Google, it has a key advantage. As a vendor of operating systems and databases, it can customize Windows and SQL Server to run on Azure and provide a transparent experience for businesses. If this comes to pass, I expect public-cloud adoption will begin to accelerate quickly and the “paradigm shift” everyone is predicting will truly be underway.

Read more: Pages 2, 3

Andrew is the principal analyst at Pacific Data Works.


Jay Fry (@jayfry3) posted Beyond Jimmy Buffett, sumos, and virtualization? Cloud computing hits #1 at Gartner Data Center Conference to his Data Center Dialog blog on 12/14/2010:

I spent last week along with thousands of other data center enthusiasts at Gartner’s 2010 Data Center Conference and was genuinely surprised by the level of interest in cloud computing on both sides of the podium. As keynoter and newly minted cloud computing expert Dave Barry would say, I’m not making this up.

This was my sixth time at the show (really), and I’ve come to use the show as a benchmark for the types of conversations that are going on at very large enterprises around their infrastructure and operations issues. And as slow-to-move as you might think that part of the market might be, there are some interesting insights to be gained comparing changes over the years – everything from the advice and positions of the Gartner analysts, to the hallway conversations among the end users, to really important things, like the themes of the hospitality suites.

So, first and foremost, the answer is yes, APC did invite the Jimmy Buffett cover band back again, in case you were wondering. And someone decided sumo wrestling in big, overstuffed suits was a good idea.
Now, if you were actually looking for something a little more related to IT operations, read on:

Cooling was hot this year…and cloud hits #1

It wasn’t any big surprise what was at the top of people’s lists this year. The in-room polling at the opening keynotes placed data center space, power, and cooling at the top of the list of biggest data center challenges (23%). The interesting news was that developing a private/public cloud strategy came in second (16%).
This interest in cloud computing was repeated in the Gartner survey of CIOs’ top technology priorities. Cloud computing was #1. It made the biggest jump of all topics since their ’09 survey, by-passing virtualization, on its way to head up the list. But don’t think virtualization wasn’t important: it followed right behind at #2. Gartner’s Dave Cappuccio made sure the audience was thinking big on the virtualization topic, saying that it wasn’t just about virtualizing servers or storage now. It’s about “the virtualization of everything. Virtualization is a continuing process, not a one-time project.”

Bernard Golden, CEO of Hyperstratus and CIO.com blogger (check out his 2011 predictions here), wondered on Twitter if cloud leapfrogging virtualization didn’t actually put the cart before the horse. I’m not sure if CIOs know whether that’s true or not. But they do know that they need to deal with both of these topics, and they need to deal with them now.

Putting the concerns of 2008 & 2009 in the rear-view mirror

This immediacy for cloud computing is a shift from the previous years, I think. A lot of 2008 was about the recession’s impact, and even 2009 featured sessions on how the recession was driving short-term thinking in IT. If you want to do a little comparison yourself, take a look at a few of my entries about this same show from years past (spanning the entire history of my Data Center Dialog blog to date, in fact). Some highlights: Tom Bittman’s 2008 keynote (he said the future looks a lot like a private cloud), 2008’s 10 disruptive data center technologies, 2008’s guide to building a real-time infrastructure, and the impact of metrics on making choices in the cloud from last year.

The Stack Wars are here

Back to today (or at least, last week), though. Gartner’s Joe Baylock told the crowd in Vegas that this was the year that the Stack Wars ignited. With announcements from Oracle, VCE, IBM, and others, it’s hard to argue.
The key issue in his mind was whether these stack wars will help or inhibit innovation over the next 5 years. Maybe it moves the innovation to another layer. On the other hand, it’s hard for me to see how customers will allow stacks to rule the day. At CA Technologies, we continue to hear that customers expect to have diverse environments (that, of course, need managing and securing, cloud-related and otherwise). Baylock’s advice: “Avoid inadvertently backing into any vendor’s integrated stack.” Go in with your eyes open.

Learning about – and from – cloud management

Cloud management was front and center. Enterprises need to know, said Cameron Haight, that management is the biggest challenge for private cloud efforts. Haight called out the Big Four IT management software vendors (BMC, CA Technologies, HP software, and IBM Tivoli) as being slow to respond to virtualization, but he said they are embracing the needs around cloud management much faster. 2010 has been filled with evidence of that from my employer – and the others on this list, too.

There’s an additional twist to that story, however. In-room polling at several sessions pointed to interest from enterprises in turning to public cloud vendors themselves as their primary cloud management provider. Part of this is likely to be an interest in finding “one throat to choke.” Haight and Donna Scott also noted several times that there’s a lot to be learned from the big cloud service providers and their IT operations expertise (something worthy of a separate blog, I think). Keep in mind, however, that most enterprise operations look very different from (and much more diverse than) the big cloud providers’ operations.

In a similar result, most session attendees also said they’d choose their virtualization vendors to manage their private cloud. Tom Bittman, in reviewing the poll in his keynote, noted that “the traditional management and automation vendors that we have relied on for decades are not close to the top of the list.” But, Bittman said, “they have a lot to offer. I think VMware’s capabilities [in private cloud management] are overrated, especially where heterogeneity is involved.”

To be fair: Bittman made these remarks because VMware topped the audience polling on this question. So, it’s a matter of what’s important in a customer’s eyes, I think. In a couple of sessions, this homogeneous v. heterogeneous environment discussion became an important way for customers to evaluate what they need for management. Will they feel comfortable with only what each stack vendor will provide?

9 private cloud vendors to watch

Bittman also highlighted 9 vendors that he thought were worthy of note for customers looking to build out private clouds. The list included BMC, Citrix, IBM, CA Technologies, Eucalyptus, Microsoft, Cisco, HP, and VMware.
He predicted very healthy competition in the public cloud space (and even public cloud failures, as Rich Miller noted at Data Center Knowledge) and similarly aggressive competition for the delivery of private clouds. He believed there would even be fierce competition in organizations where VMware owns the hypervisor layer. [Emphasis added.]

As for tidbits about CA Technologies, you won’t be surprised to learn that I scribbled down a comment or two: “They’ve made a pretty significant and new effort to acquire companies to provide strategic solutions such as 3Tera and Cassatt,” said Bittman. “Not a vendor to be ignored.” Based on the in-room polling, though, we still have some convincing to do with customers.
Maybe we should figure out what it’ll take to get the Jimmy Buffett guy to play in our hospitality suite next year? I suppose that and another year of helping customers with their cloud computing efforts would certainly help.

In the meantime, it’s worth noting that TechTarget’s SearchDataCenter folks also did a good job with their run-down on the conference, if you’re interested in additional color. A few of them might even be able to tell you a bit about the sumo wrestling, if you ask nicely.
And I’m not making that up either.

For more from Gartner, read Gartner Predicts 2011: IT Opens Up to New Demands and New Outcomes of 12/3/2010 by Daryl C. Plummer and Brian Gammage.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private Clouds

Larry Tabb wrote Companies that Ignore Cloud Computing Could Be Left for Dead on 11/23/2010 for the Wall Street and Tech blog (missed when posted):

I wrote a few months ago in Wall Street & Technology about the dissolution of the corporate data center brought about by the undermining of data center economics resulting from colocation and proximity hosting. As one major tenant leaves the data center, the "rent" lost from that tenant is shifted to the remaining tenants, increasing their costs and reducing their profitability. Higher allocations force more tenants to leave, until processing costs force everyone out of the data center. It's the "last guy at the bar picks up the tab" phenomenon. I believed this erosion would take place over the next 10 years or so.

Well, it may be faster.

I was at a recent cloud computing event, which was hosted by Microsoft and Savvis, along with a number of firms' senior tech folks, and, to my surprise, the discussion of cloud services was very advanced. One executive noted that his firm thought of the cloud in four distinct classes: internal, external proprietary, external gated communities and external public. While the firm was not currently working on an external public strategy, it was very far along on the other three. [Emphasis added.]

A second executive discussed the increased flexibility afforded by the cloud that allowed his firm to provision resources virtually on the fly. This enables the IT organization not only to deploy cloud-based applications quickly, but also to grab on-demand resources that would be difficult to run full-time because of budget constraints, he explained.

And a third attendee talked about the difficulties of procuring hardware. He said getting even a plain-vanilla server took a minimum of two to three months and that any capital expenditures (even a blade) needed to be signed off on at ridiculously high organizational levels.

The flexibility and agility challenges rang true. Who in a corporate development environment has not been frustrated by the pace of acquiring, developing, testing and deploying business-critical solutions? Then a fourth executive mused about the economics of using internal versus external cloud resources.

Noting that his firm has an internal cloud, the senior technology leader explained that the costs of using the internal cloud were three times higher than those of external services. Further, they could be provisioned in only month-long increments, while external clouds can be provisioned by the hour.

An Easy Calculation

Think about it this way. If a single, quantitatively intensive Monte Carlo simulation prices out to $1,000 for six hours of external cloud services, it would cost $3,000 to run on an internal cloud for the same six hours. But, because of the month-long commitment required for internal resource allocation, instead of this job costing $1,000 externally, it actually would cost $360k because of the 30-day, 24-hour-a-day minimum commitment.

For $1k, I could put it on my card. For $360k, … well, I am not so sure. Talk about misaligned incentives.
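As a sanity check on that arithmetic, here is a minimal sketch in Python. The $1,000 price for six hours of external capacity, the 3x internal premium, and the 30-day, 24-hour-a-day minimum commitment are the figures quoted above; everything else is just arithmetic.

# Back-of-the-envelope comparison of external vs. internal cloud cost
# for the six-hour Monte Carlo job described above.
external_job_cost = 1000.0                      # $1,000 buys 6 hours of external capacity
job_hours = 6
external_hourly = external_job_cost / job_hours # ~$166.67 per hour

internal_premium = 3                            # internal cloud quoted at 3x the external rate
internal_hourly = external_hourly * internal_premium  # $500 per hour

minimum_commitment_hours = 30 * 24              # month-long, 24x7 minimum allocation = 720 hours
internal_job_cost = internal_hourly * minimum_commitment_hours

print(f"External: ${external_job_cost:,.0f}")   # External: $1,000
print(f"Internal: ${internal_job_cost:,.0f}")   # Internal: $360,000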

On one hand, the firm wants to ensure that its data and compute infrastructure are safe. On the other hand, if the barriers the firm puts in place to keep its infrastructure safe make the business uneconomical and inflexible, something needs to change, and I don't think it's the business need.

If the pace of technology development continues, and firms increasingly leverage colocation and proximity centers for their trading applications, and these colocation centers become de facto cloud support centers, we may see the demise of the corporate data center much sooner than I originally estimated. Whether it is 10 years or five, though, if we do not rethink how we manage our processing centers, resource utilization and corporate computing procurement, deployment and allocation processes, the firms with less-flexible and more-costly infrastructures surely will suffer at the hands of the quick and agile.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted It’s not just that attacks are distributed, but that attacks are also diverse in nature – up and down the stack, at the same time as a preface to her What We Learned from Anonymous: DDoS is now 3DoS post of 12/15/2010:

If Anonymous has taught us anything, it’s that the future of information security is in fending off attacks across the breadth and depth of the network stack – and the data center architecture – at the same time. Traditionally DDoS attacks are so-named because the clients are distributed; that is, they take advantage of appearing to come from a variety of locations as a means to evade detection and easy prevention.

It’s about the massive scale of a single type of attack as launched by a single attacker (individual or group). But the WikiLeaks attacks have not just been distributed in the sense that they are a concerted effort by distributed attackers to take out sites; they have been distributed in the sense that they span the network stack from layer 2 through layer 7. It’s not just a DDoS, it’s a DDDoS: a Diverse Distributed Denial of Service. A 3DoS. 

The result is a flash-crowd style flood of attacks that overwhelm not only the site, but its infrastructure. Firewalls have become overwhelmed, ISPs have been flooded, services have been disrupted. All because the attacks are not a single concerted effort to disrupt service but a dizzying array of SYN floods, TCP connection flooding, pings of death, excessive HTTP headers and SlowLoris, and good old fashioned HTTP GET flooding. These attacks are happening simultaneously, directed at the same service, and they’re doing a good job in many cases of achieving their goal: service outages. That’s because defeating an attack is infinitely easier than detecting it in the first place, particularly at the application layer. It’s nearly impossible for traditional security measures to detect because many of the attacks perpetrated at the application layer appear to be completely legitimate requests; there’s just a lot more of them.

Traditional solutions aren’t working. Blocking the attack at the ISP has proven to be too slow and unable to defend against the (new) distributed nature of these attacks. Firewalls have buckled under pressure, unable to handle the load, and been yanked out of the line of fire, being replaced with other technology more capable of fending off attacks across the entire network stack. And now it’s been reported that a JavaScript DDoS is in the works, aimed again at MasterCard.

TOP-to-BOTTOM SECURITY

What this means is organizations need to be thinking of security as spanning all attack vectors at the same time. It is imperative that organizations protect critical applications against both traditional attack vectors as well as those at the application layer disguised as legitimate requests. Organizations need to evaluate their security posture and ensure that every infrastructure component through which a request flows can handle the load in the event of a massive “3DoS”. It’s not enough to ensure that there’s capacity in the application infrastructure if an upstream network component may buckle under the load.

And it’s costly to leverage virtualization and cloud computing as a means to automatically scale to keep ahead of the attack. The price of uptime may, for some organizations, become overwhelming in the face of a targeted 3DoS as more and more capacity is provisioned to (hopefully) handle the load.

Organizations should look for strategic points of control within their architectures and leverage capabilities to detect and prevent attacks from compromising availability. In every data center architecture there are aggregation points. These are points (one or more components) through which all traffic is forced to flow, for one reason or another.

For example, the most obvious strategic point of control within a data center is at its perimeter – the router and firewalls that control inbound access to resources and in some cases control outbound access as well. All data flows through this strategic point of control and because it’s at the perimeter of the data center it makes sense to implement broad resource access policies at this point. Similarly, strategic points of control occur internal to the data center at several “tiers” within the architecture. For web applications, that strategic point of control is generally the application delivery controller (load balancer for you old skool architects). Its position in the network affords it the ability to intercept, inspect, and manipulate if necessary the communications occurring between clients and web applications.

These points of control are strategic precisely because their topological location within the architecture provides the visibility necessary to recognize an attack at all layers of the network stack and apply the proper security policies to protect resources downstream that are not imbued with a holistic view of application usage and therefore cannot accurately determine what is legitimate versus what is an attack.

AS ATTACK TECHNIQUES EVOLVE, SO TOO MUST SECURITY STRATEGIES

Old notions of “security” have to evolve, especially as the application itself becomes an attack vector. The traditional method of deploying individual point solutions, each addressing a specific type or class of attack, is failing to mitigate these highly distributed and diverse attacks. It’s not just the cost and complexity of such a chained security architecture (though that is certainly a negative) it’s that these solutions are either application or network focused, but almost never both. Such solutions cannot individually recognize and thus address 3DoS attacks because they lack the context necessary to “see” an attack that spans the entire network stack – from top to bottom.

An integrated, unified application delivery platform has the context and the visibility across not only the entire network stack but all users – legitimate or miscreant – accessing the applications it delivers. Such a platform is capable of detecting and subsequently addressing a 3DoS in a much more successful manner than traditional solutions. And it’s much less complex and costly to deploy and manage than its predecessors.

The success of Anonymous in leveraging 3DoS attacks to disrupt services across a variety of high-profile sites will only serve to encourage others to leverage similar methods to launch attacks on other targets. As the new year approaches, it’s a good time to make an organizational resolution to re-evaluate the data center’s security strategy and, if necessary, take action to redress architectures incapable of mitigating such an attack.


<Return to section navigation list> 

Cloud Computing Events

Corey Fowler (@SyntaxC4) posted Post #AzureFest Follow-up Videos on 12/14/2010:

On Saturday, I presented how to register for Windows Azure, install and configure the Azure Tools & SDK, and deploy an application to Windows Azure. This presentation was part of a joint effort between Microsoft Canada and ObjectSharp called AzureFest.

Microsoft Canada has made a great offering to Canadian User Groups. For each member that deploys an application to Windows Azure and submits a Screenshot of their Application, the User Group is awarded $25.

To make the process easier for your members, Barry Gervin and I have created some easy-to-follow, step-by-step videos of the content.

Registering for the Windows Azure Introductory Offer

In this video, Barry Gervin and Cory Fowler walk through the Introductory Offer Registration Wizard [Credit Card Required].

Signing up for the Windows Azure Introductory Offer

Deploying Nerd Dinner via Windows Azure Portal

To demo how simple it is to upload a Cloud Service Package using the Windows Azure Platform Portal, Barry and I have compiled and bundled a demo Cloud Service Package and Cloud Service Configuration file (Windows Azure Deployment Bundle Download) and have explained how to deploy this package. Don’t forget to send your email off to cdnazure @ Microsoft [dot] com with the name of your user group, and the screenshot.

Deploy Nerd Dinner Cloud Service Package via Windows Azure Portal

Removing your Deployment

The Windows Azure Introductory Special only offers 25 (small) compute hours per month, so you’ll want to remove your deployment when you’re done to avoid being charged.

Removing your Deployment from Windows Azure Portal

Barry and I will be creating 3 more videos to complete a series with more content from AzureFest. Be sure to check the ObjectSharp Events Page for Upcoming Windows Azure Training Sessions.


Ike Ellis will present a Top 5 Things to Know About SQL Azure Webinar on 2/5/2011:

The cloud is a brave new world for database developers and DBAs. With SQL Azure, disks don’t matter, but bandwidth does. Paying attention to long running queries is now more important than ever. Graceful error handling is no longer optional.

Come learn 5 things about SQL Azure development that are different from on-premises SQL Server. We’ll walk you through performance tuning, report authoring, and backup and restore strategies in this fast-paced, demo-filled, action-packed webcast.

Ike is the Lead SQL Instructor and SQL Course Author for DevelopMentor.


See Tom Mertens announced on 12/15/2010 that OData will be one of the topics at Web Camp Belgium (24 Jan) w/ Scott Hanselman & James Senior in the Marketplace DataMarket and OData section above.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) announced VM Import - Bring Your VMware Images to The Cloud on 12/15/2010:

If you have invested in virtualization to meet IT security, compliance, or configuration management requirements and are now looking at the cloud as the next step toward the future, I've got some good news for you.

VM Import lets you bring existing VMware images (VMDK files) to Amazon EC2. You can import "system disks" containing bootable operating system images as well as data disks that are not meant to be booted.

This new feature opens the door to a number of migration and disaster recovery scenarios. For example, you could use VM Import to migrate from your on-premises data center to Amazon EC2.

You can start importing 32 and 64 bit Windows Server 2008 SP2 images right now (we support the Standard, Enterprise, and Datacenter editions). We are working to add support for other versions of Windows including Windows Server 2003 and Windows Server 2008 R2. We are also working on support for several Linux distributions including CentOS, RHEL, and SUSE. You can even import images into the Amazon Virtual Private Cloud (VPC).

The import process can be initiated using the VM Import APIs or the command line tools. You'll want to spend some time preparing the image before you upload it. For example, you need to make sure that you've enabled remote desktop access and disabled any anti-virus or intrusion detection systems that are installed (you can enable them again after you are up and running in the cloud). Other image-based security rules should also be double-checked for applicability.

The ec2-import-instance command is used to start the import process for a system disk. You specify the name of the disk image along with the desired Amazon EC2 instance type and parameters (security group, availability zone, VPC, and so forth) and the name of an Amazon S3 bucket. The command will provide you with a task ID for use in the subsequent steps of the import process.

The ec2-upload-disk-image command uploads the disk image associated with the given task ID. You'll get upload statistics as the bits make the journey into the cloud. The command will break the upload into multiple parts for efficiency and will automatically retry any failed uploads.

The next step in the import process takes place within the cloud; the time it takes will depend on the size of the uploaded image. You can use the ec2-describe-conversion-tasks command to monitor the progress of this step.

When the upload and subsequent conversion is complete you will have a lovely, gift-wrapped EBS-backed EC2 instance in the "stopped" state. You can then use the ec2-delete-disk-image command to clean up.

The ec2-import-volume command is used to import a data disk, in conjunction with ec2-upload-disk-image. The result of this upload process is an Amazon EBS volume that can be attached to any running EC2 instance in the same Availability Zone.
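Here's a minimal sketch of how that command sequence might be scripted end to end. Only the command names come from the post above; the flag names, file names, and output parsing below are illustrative placeholders, not taken from the AWS documentation.

# Minimal sketch of the VM Import workflow described above, driven from Python.
# The EC2 API tool names (ec2-import-instance, ec2-upload-disk-image,
# ec2-describe-conversion-tasks, ec2-delete-disk-image) come from the post;
# every flag and argument below is an illustrative placeholder.
import subprocess

def run(cmd, *args):
    """Shell out to one of the EC2 API tools and return its stdout."""
    result = subprocess.run([cmd, *args], capture_output=True, text=True, check=True)
    return result.stdout

# 1. Start the import of a bootable system disk; the tool reports a task ID.
out = run("ec2-import-instance", "win2008-sp2.vmdk",
          "-f", "VMDK", "-t", "m1.large", "-b", "my-import-bucket")
task_id = out.split()[1]  # placeholder parsing; the real output format may differ

# 2. Upload the disk image bits for that task. Per the post, the tool splits the
#    upload into parts and retries failed parts automatically.
run("ec2-upload-disk-image", "win2008-sp2.vmdk", "--task", task_id)

# 3. Poll the in-cloud conversion until it completes.
print(run("ec2-describe-conversion-tasks", task_id))

# 4. Once the stopped, EBS-backed instance appears, clean up the uploaded image.
run("ec2-delete-disk-image", "--task", task_id)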

There's no charge for the conversion process. Upload bandwidth, S3 storage, EBS storage, and Amazon EC2 time (to run the imported image) are all charged at the usual rates. When you import and run a Windows server you will pay the standard AWS prices for Windows instances.

As is often the case with AWS, we have a long roadmap for this feature. For example, we plan to add support for additional operating systems and virtualization formats along with a plugin for VMware's vSphere console (if you would like to help us test the plugin prior to release, please let us know at ec2-vm-import-plugin-preview@amazon.com). We'll use your feedback to help us to shape and prioritize our roadmap, so keep those cards and letters coming.


Alex Williams commented about VMware on Amazon Web Services or How the Cloud Becomes a Data Fabric in this 12/15/2010 post to the ReadWriteCloud blog:

The news today from Amazon Web Services (AWS) and VMware reinforces how the cloud is far less about data centers and public clouds than one extended network that allows for data to flow without distinction between the two.

According to a post on the AWS blog, VMware images can now be imported into AWS. That means a data center can essentially be imported into AWS.

The cloud is becoming far more than AWS or a data center built on VMware technology. It's now an infrastructure that supports a data fabric more than anything else. That data is like dust, blanketing the entire network, seeping everywhere.

The virtualized data network is changing the definition of cloud computing. The network is flattening. That allows for data to pass from on-premise systems to cloud environments. CPUs can be moved to where they are needed, based on the network load.

According to Amazon's Jeff Barr:

"VM Import lets you bring existing VMware images (VMDK files) to Amazon EC2. You can import "system disks" containing bootable operating system images as well as data disks that are not meant to be booted.

This new feature opens the door to a number of migration and disaster recovery scenarios. For example, you could use VM Import to migrate from your on-premises data center to Amazon EC2."

But there are some questions that come of this. For instance, when is Amazon going to allow the full export of data?

All Barr would say on Twitter is that AWS will listen to its customers.

In the meantime, here's what the new service means.

You can start importing 32 and 64 bit Windows Server 2008 SP2 images right now. Amazon supports the Standard, Enterprise, and Datacenter editions.

Barr says AWS is working to add support for other versions of Windows including Windows Server 2003 and Windows Server 2008 R2. It will support several Linux distributions including CentOS, RHEL, and SUSE. Images can be imported into the Amazon Virtual Private Cloud (VPC).

This is just another example that shows how a company can move out of the so-called private cloud, a term that is becoming less and less relevant.

Instead, what we see is a world where the cloud extends beyond data centers into services like AWS.

This will be a dominating trend in the next year. The data center will be fully connected to the cloud.

The significant issue to emerge will center more around storage and networking. But we will save that topic for another day.


Chris Hoff (@Beaker) posted CloudSwitch: Traitor To the [Public Cloud] Cause… on 12/15/2010:

Ellen Rubin and John Considine from CloudSwitch chuckled when I muttered this toward them in some sort of channeled pantomime of what an evaluation of their offering might bring from public-only cloud apologists.

After all, simply taking an application and moving it to a cloud doesn’t make it a “cloud application.”  Further, to fully leverage the automation, scale, provisioning and orchestration of “true” cloud platforms, one must re-write one’s applications and deal with the associated repercussions of doing so.  Things like security, networking, compliance, operations, etc.  Right?

Well…

CloudSwitch’s solutions – which defy this fundamental rearchitecture requirement – enable enterprises to securely encapsulate and transport enterprise datacenter-hosted VM-based applications as-is and run them atop public cloud provider environments such as Amazon Web Services and Terremark in a rather well designed, security-conscious manner.

The reality is that their customer base — large enterprises in many very demanding verticals — seek to divine strategic technologies that allow them to pick and choose what, how and when to decide to “cloudify” their environments.  In short, CloudSwitch TODAY offers these customers a way to leverage the goodness of public utility in cloud without the need to fundamentally rearchitect the applications and accompanying infrastructure stacks — assuming they are already virtualized.  CloudSwitch seeks to do a lot more as they mature the product.

I went deep on current product capabilities and then John and I spent a couple of hours going off the reservation discussing what the platform plans are — both roadmapped and otherwise.

It was fascinating.

The secure isolation and network connectivity models touch on overlay capabilities from third parties, hypervisor/cloud stack providers like VMware (vCloud Director) as well as offers from folks like Amazon and their VPC, but CloudSwitch provides a way to solve many of the frustrating and sometimes show-stopping elements of application migration to cloud.  The preservation of bridged/routed networking connectivity back to the enterprise LAN is well thought out.

This is really an audit and compliance-friendly solution…pair a certified cloud provider (like AWS, as an example) up with app stacks in VMs that the customer is responsible for getting certified (see the security/compliance=shared responsibility post) and you’ve got something sweeter’n YooHoo…

It really exemplifies the notion of what people think of when they envision Hybrid Cloud today.  “Native” cloud apps written specifically for cloud environments, “transported” cloud apps leveraging CloudSwitch, and on-premises enterprise datacenters all interconnected.  Sweet.  More than just networking…

For the sake of not treading on FrieNDA elements that weaved their way in and out of our conversation, I’m not at liberty to discuss many of the things that really make this a powerful platform both now and in future releases.  If you want more technical detail on how it works, call ‘em up, visit their website or check out Krishnan’s post.

Let me just say that the product today is impressive — it has some features from a security, compliance, reporting and auditing perspective I think can be further improved, but if you are an enterprise looking for a way to today make graceful use of public cloud computing in a secure manner, I’d definitely take a look at CloudSwitch.


Oracle Corp. dropped the other shoe when it asserted “Industry's Complete, Open Standards-Based Office Productivity Suites for Desktop, Web and Mobile Users” in an Oracle Announces Oracle Cloud Office and Open Office 3.3 Completed post by Elizabeth White of 12/15/2010:

"Oracle Cloud Office and Oracle Open Office 3.3 deliver complete, open and cost-effective office productivity suites that are designed and optimized for our customers' needs," said Michael Bemmer, vice president of Oracle Office. "Customers now have the flexibility to support users across a wide variety of devices and platforms, whether via desktop, private or public cloud. With Oracle Office, enterprises can reduce costs while helping to increase productivity and speed innovation."

Oracle on Wednesday introduced Oracle Cloud Office and Open Office 3.3, two complete, open standards-based office productivity suites for the desktop, web and mobile devices -- helping users significantly improve productivity, reduce costs and achieve greater innovation across the enterprise. Based on the Open Document Format (ODF) and open web standards, Oracle Office enables users to share files on any system as it is compatible with both legacy Microsoft Office documents and modern web 2.0 publishing.

Leveraging Oracle Office products, users gain personal productivity, web 2.0 collaboration and enterprise-integrated document tools on Windows, Mac, Linux, Web browsers and smartphones such as the iPhone at an unrivaled price. The Oracle Office APIs and open standards-based approach provide IT users with flexibility, lower short- and long-term costs and freedom from vendor lock-in -- enabling organizations to build a complete Open Standard Office Stack.

Open and Integrated for Office Productivity Anywhere

Oracle’s press release is here.


<Return to section navigation list> 
