Friday, June 11, 2010

Windows Azure and Cloud Computing Posts for 6/6/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Update 6/11/2010: This is the first in a series of posts covering TechEd North America 2010 sessions, as well as content from the usual sources.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

No significant articles today.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Russell Sinclair’s Access 2010 and SQL Azure post describes Microsoft’s new no-charge introductory offer for SQL Azure, Windows Azure, and the Windows Azure AppFabric, and shows you how to connect from Access 2010 to SQL Azure databases via ODBC:

Microsoft SQL Azure is a new cloud-based relational database built on SQL Server that provides highly available, scalable, multi-tenant database services hosted by Microsoft in the cloud. It eliminates the need to install, set up, and manage SQL Server, freeing IT to handle day-to-day operations rather than having to manage and maintain an on-premises server. It also enables access to the database from any location with an internet connection, providing the ability to create connected applications that are accessible from anywhere.

Access 2010 supports connections to SQL Azure over ODBC, opening up opportunities for Access users and technology providers to create rich experiences using cloud computing. For Information Workers, this gives an easy way to connect directly from Access to cloud-based relational databases, enabling ease of use and providing flexibility to IT. Access users now have more choices to integrate rich client-server applications that can connect directly to both on-premises and cloud databases, creating unique and agile solutions.

Getting Started with SQL Azure

In order to use SQL Azure, you need to create an account and choose a plan that suits your needs. For a limited time, Microsoft is providing an introductory special that is free of charge and includes access to a single 1GB SQL Azure database, Windows Azure and AppFabric services. This is a great way to try out the features available before committing to a monthly plan.

SQL Azure requires that you specify the IP addresses that have access to the database. You can set the valid IP addresses in the Firewall Settings area of the administration tools provided on the SQL Azure site. Adding your current IP is easy, as the dialog will show you this information. Check with your network administrator if you want to open a range of addresses that corresponds with the IPs from which your users might connect.
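
If you prefer to script this step, SQL Azure also exposes firewall management through system stored procedures on the master database. The sketch below is not from Russell’s post: the server name, login, password, and IP range are placeholders, the pyodbc package and SQL Server Native Client 10.0 driver are assumed, and sp_set_firewall_rule should be verified against the current SQL Azure documentation. Note that you can only run it from an IP address the firewall already allows.

```python
# Hedged sketch: add a SQL Azure firewall rule programmatically rather than
# through the portal. All names and addresses below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={SQL Server Native Client 10.0};"
    "Server=tcp:abcserver.database.windows.net;"
    "Database=master;"                      # firewall rules are managed in master
    "Uid=myuser@abcserver;Pwd=<password>;"
    "Encrypt=yes;",
    autocommit=True,                        # run the procedure outside an explicit transaction
)

# Open a range of addresses for your users; substitute the range your network
# administrator gives you (203.0.113.x is a documentation-only range).
conn.execute(
    "EXEC sp_set_firewall_rule N'OfficeRange', '203.0.113.1', '203.0.113.254'"
)
conn.close()
```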

The SQL Azure administration site also allows you to create your database and users, but you will need to use SSMS or another SQL management tool in order to create your tables and other database objects.

Access supports connecting to SQL Azure over ODBC (linked tables and pass-through queries), but you need to use the “SQL Server Native Client 10.0” driver available as part of the SQL Server 2008 R2 release. You should install SQL Server 2008 R2 Management Studio (SSMS) as well since it provides updated functionality for working with SQL Server in a mode that is compatible with SQL Azure. The easiest way to install the ODBC drivers and SSMS is to download SQL Server 2008 Express. You can choose not to install the database engine but you may find it useful as you will need access to a SQL Server instance in order to migrate your database from Access (see details below). A redistributable version of the updated ODBC driver is available as part of the SQL Server 2008 R2 Feature Pack.

Connecting from Access 2010

Once you have completed the necessary installations and configuration, you can connect to your SQL Azure database using a standard ODBC linked table. ODBC links to SQL Azure have a couple of requirements of which you need to be aware. The following points detail the settings you should use when creating an ODBC DSN to a SQL Azure database.

  • Use the SQL Server Native Client 10.0 driver, as updated in SQL Server 2008 R2
  • Use SQL authentication when connecting. SQL Azure does not support NT authentication.
  • The Login ID should be in the format “user@server”, where server is the server name specified in the SQL Azure administration site without the “.database.windows.net” suffix. For example, if your assigned server is “abcserver.database.windows.net” and your Login ID is “myuser”, use “myuser@abcserver”.
  • Set the default database to the database you want to use for this connection. This ensures that the connection goes to the right database, rather than the master database, if the login has access to multiple databases on the account.
  • Check the option to use strong encryption for data. SQL Azure will automatically enforce this option, but you might want to set it explicitly. (A sample connection string that pulls these settings together follows this list.)
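
To make the settings above concrete, here is a minimal sketch of the equivalent DSN-less ODBC connection string, shown via Python’s pyodbc; the server, database, login, and password values are placeholders, and the same key/value pairs apply if you build a machine or file DSN instead.

```python
# Minimal sketch, assuming pyodbc and the SQL Server Native Client 10.0 driver
# from the SQL Server 2008 R2 release; replace the placeholder values.
import pyodbc

conn = pyodbc.connect(
    "Driver={SQL Server Native Client 10.0};"      # the R2-updated native client
    "Server=tcp:abcserver.database.windows.net;"   # your assigned SQL Azure server
    "Database=MyDatabase;"                         # default database (not master)
    "Uid=myuser@abcserver;"                        # SQL authentication, user@server form
    "Pwd=<your password>;"
    "Encrypt=yes;"                                 # strong encryption for data
)
print(conn.execute("SELECT @@VERSION").fetchone()[0])
conn.close()
```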

With the ODBC DSN successfully configured, you can now link from Access to any tables or views in the SQL Azure database as you would to any SQL Server database, or create pass-through queries to execute custom SQL on the server.

Migrating to SQL Azure

When migrating your database to SQL Server 2008 or SQL Azure, we recommend that you use the SQL Server Migration Assistant (SSMA) for Access, available from the SQL Server Migration webpage, instead of using the Upsizing Wizard included with Access. This tool is similar to the Upsizing Wizard but is tailored to the version of SQL Server you are targeting. SSMA will not use deprecated features in the migration and may choose to use some of the new features such as new data types included with SQL Server 2008.

In the summer of 2010, the SQL Server team will release an updated version of SSMA 2008 that will support migration of Access databases directly to SQL Azure. Until that time, migrating your database is a simple, three-step process:

  • Migrate your database to a non-Azure SQL Server (preferably 2008) database
  • Use SQL Server 2008 R2 Management Studio to generate a SQL Azure compatible database creation script from your database
  • Run the scripts created in SSMS on the SQL Azure database

The Upsizing Wizard included with Access is a general-purpose tool to help migrate to SQL Server and does not target a specific version of the server. Some of the features the Upsizing Wizard uses have been deprecated in SQL Server 2008 but are retained to maintain backward compatibility with previous versions. Although SQL Server 2008 does not prevent you from using many deprecated features, SQL Azure blocks them outright.

Once your database is migrated to SQL Server, you can generate scripts to recreate your database and the data it contains in SQL Azure. Open SSMS, connect to your local server, right-click the database in question and choose Tasks – Generate Scripts to open the Generate and Publish Scripts wizard (note, do not select the menu option “Script Database As…” as this does not provide the same functionality). Step through the wizard, selecting options appropriate to your migration, until you reach the Set Scripting Options screen. On this screen, click the Advanced button to show the Advanced Scripting Options screen. There are three important settings in this dialog:

1. Script for the database engine type: SQL Azure Database. This setting is essential when scripting your database to SQL Azure. It specifies that the script should only use SQL Azure-compatible scripting, avoiding unsupported or deprecated SQL Server features and language elements.

2. Types of data to script: Schema and data. This setting specifies that the script will include the information necessary to recreate all of your data on SQL Azure. This saves you from having to migrate the data manually. You can set this to “Schema only” if the data is not required.

3. Script Triggers: True. If you want your triggers to be migrated, set this option to True; it defaults to False the first time you run the tool.

Once your script is created, open Management Studio, connect to the SQL Azure database and run the script against that database to complete the job.
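
If you would rather run the generated script from code than from SSMS, the hedged sketch below shows one way to do it: split the file on the GO separators the wizard emits (GO is a tool-side batch marker, not T-SQL the server understands) and execute each batch over ODBC. The file name, encoding, and connection values are placeholders.

```python
# Hedged sketch: execute an SSMS-generated, SQL Azure-compatible script batch
# by batch over ODBC. Assumes pyodbc and the SQL Server Native Client 10.0
# driver; adjust the encoding to match how you saved the script (SSMS's
# "Unicode" format is UTF-16).
import re
import pyodbc

with open("MyDatabase_for_azure.sql", encoding="utf-16") as f:
    script = f.read()

# Naive split on GO separators, which SSMS writes alone on their own lines.
batches = [b.strip() for b in re.split(r"(?im)^\s*GO\s*$", script) if b.strip()]

conn = pyodbc.connect(
    "Driver={SQL Server Native Client 10.0};"
    "Server=tcp:abcserver.database.windows.net;"
    "Database=MyDatabase;Uid=myuser@abcserver;Pwd=<password>;Encrypt=yes;",
    autocommit=True,
)
for batch in batches:
    conn.execute(batch)   # schema first, then data, in the order the wizard wrote them
conn.close()
```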

You’re now ready to use the full functionality of the SQL Azure database from Access.

Unsupported Features

Unfortunately, there are a few Access features we cannot support on SQL Azure due to various restrictions or issues:

  • The Upsizing Wizard does not support directly upsizing to SQL Azure. As stated above, you need to migrate to an intermediary SQL Server database first and we suggest you use SSMA to do so
  • ADPs are not supported as they rely on features that are not supported in SQL Azure (including ADO 2.x connections)
  • ODBC connections are not supported in Access versions prior to Access 2010
  • Exporting tables to SQL Azure using the export functionality built in to Access (export to ODBC or DoCmd.TransferDatabase) will fail if the tables contain any date/time fields – use SSMA

In addition to these restrictions, you should read through the SQL Azure help to determine what features of SQL Server 2008 are and are not supported. SQL Azure explicitly blocks use of features that are considered deprecated in SQL Server 2008. There are also many features that are not supported in the current release, such as extended properties and cross-database queries.

Exciting Features

We feel that SQL Azure and Access 2010 are opening some great new scenarios for Access users. We can’t wait to see what solutions you come up with.

Shayne Burgess announced a live TechEd 2010 OData Service in this 6/6/2010 post:

TechEd North America 2010 is fast approaching (it starts Monday). This year we have added an OData service to the TechEd site, just like we did for MIX10 earlier this year. The service exposes the sessions, speakers and other associated information for the conference and is a great way to learn OData. Check out the API page on the TechEd site here for more information on the service. If you are attending TechEd, make sure you stop by the DMG booth (in the DAT section) and show the folks from DMG your OData App.

Head over to http://www.odata.org/consumers to see the list of applications and libraries that can consume the OData feed. To demonstrate what you can do with the service, I have some screen shots below of browsing the feed using the OData Explorer and finding a list of all sessions about OData – there are a bunch (make sure you run the explorer in OOB mode when browsing the service).

[Screenshot: the OData Explorer browsing the TechEd 2010 feed for sessions about OData]
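
If you prefer code to the OData Explorer, here is a minimal, hedged sketch of querying the feed over plain HTTP. The service root URL and the Sessions/Title names are placeholders (take the real ones from the API page Shayne mentions), and the Python requests package is assumed.

```python
# Hedged sketch: query an OData v2-style feed for sessions whose title
# mentions "OData". SERVICE_ROOT and the entity-set/property names are
# placeholders; check the TechEd API page for the real ones.
import requests

SERVICE_ROOT = "https://example.com/TechEd2010.svc"   # placeholder service root

resp = requests.get(
    SERVICE_ROOT + "/Sessions",
    params={"$filter": "substringof('OData', Title) eq true"},
    headers={"Accept": "application/json"},   # older WCF Data Services ignore $format
)
resp.raise_for_status()

payload = resp.json()["d"]   # verbose-JSON envelope; shape varies by OData version
results = payload.get("results", payload) if isinstance(payload, dict) else payload
for session in results:
    print(session.get("Title"))
```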

Liam Cavanaugh’s Introducing Data Sync Service for SQL Azure post of 6/7/2010 announces open registration for this new service:

I am extremely happy to announce the public preview of our new Data Sync Service for SQL Azure. For those of you who have been following this blog, you might recall our Project “Huron”. In this project we talked about our vision of creating a Data Hub in the cloud to provide users with the ability to seamlessly share and collaborate on data regardless of the location and regardless of network connectivity. Back in November ’09 we introduced SQL Azure Data Sync as the first part of this vision which was to take SQL Server data and extend that to the cloud and then take that data and share it with other SQL Server databases.

Now with the Data Sync Service we are extending the "Huron" vision. With no code, you can configure your SQL Azure database to be synchronized with one or more SQL Azure databases in any of our Windows Azure data centers. Doing so gives you the ability to extend that data to the location closest to your end users. All the while, our scheduled synchronization service moves changes back and forth between these databases, ensuring that changes get propagated to each of the databases in those data centers. In conjunction with SQL Azure Data Sync, this data can also be synchronized back on-premises.

I have a TechEd session this week where I will be demonstrating all of this as well as how we will be extending the capabilities of the sync framework for creating offline applications, specifically allowing Silverlight, Windows Phone 7 and even non-MSFT platforms to be used for the clients. 

If you are interested in giving the Data Sync Service a try please go to http://sqlazurelabs.com and click on “SQL Azure Data Sync Service” and send us your registration code.  Starting early next week we will start adding registered users to the service.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Vittorio Bertocci (Vibro) reported Updated identity labs in the new Windows Azure Platform Training Kit in this 6/7/2010 post:

Among the flurry of announcements today, I am sure you didn’t miss David’s announcement of the new release of the Windows Azure platform Training Kit!

What you may have missed is the new identity content, straight from the identity+Windows Azure labs in the March 2010 and April 2010 releases of the Identity Training Kit:

  • Using WIF for securing WCF services hosted in Windows Azure
  • Update to the ACS intro to use the visual management tool (as opposed to command line based instructions)
  • ALL THE LABS on VS2010 :-)

So, if you missed those from the Identity Training Kit… enjoy them in the Windows Azure Platform Training Kit ;-)

The Windows Azure Platform AppFabric Team released Updated IP addresses for AppFabric Data Centers on 6/7/2010:

Today (6/7/2010) the Windows Azure platform AppFabric updated the IP ranges on which the AppFabric nodes are hosted. If your firewall restricts outbound traffic, you will need to perform the additional step of opening your outbound TCP ports and IP addresses for these nodes. Please see the 1/28/2010 “Additional Data Centers for Windows Azure platform AppFabric” post, which was updated today to include the new set of IP addresses.

Cliff Simpkins announced Windows Server AppFabric now Generally Available in this 6/7/2010 post:

We are thrilled to announce the final availability of Windows Server AppFabric!

As announced today at TechEd North America, Windows Server AppFabric is available for download to Windows Server 2008 (and Windows Server 2008 R2) Standard Edition and Enterprise Edition customers. 

Additional information on Windows Server AppFabric can be found at the following locations:

Thank you to everyone who participated in the beta program and provided invaluable feedback – we couldn’t have shipped it without your input.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Eugenio Pace explains Windows Azure Architecture Guide – Part 2 – Tenant & Public site (and some notes on geo-location) in this 6/8/2010 post:

As you might recall from my introduction post, Tailspin has essentially 3 sites: the ISV (#1), the tenants (#2) and the public (#3):

[Diagram: TailSpin Surveys’ three sites — the ISV (#1), tenants (#2), and public (#3)]

The usage patterns of these are all very different: #1 would probably have 10s of users, #2 (hopefully for TailSpin) will have 1,000s of users (maybe 10,000s), but #3 can potentially have 100,000s or even 1,000,000s of users. Moreover, #3 might have burst demands, as surveys are most probably rather short lived.

In Tailspin Surveys the Public site will be hosted separately from the tenants site, precisely to provide the flexibility of managing each in a different way.

A few observations:

  • If we use the same Hosted Service but different web roles, then each one is differentiated by a port. For example: http://tailspin.cloudapp.net:80 and http://tailspin.cloudapp.net:81 for each web role.
  • In Tailspin’s case, the tenant site is under SSL but the public site is not, so this differentiation is “automatic”, as HTTPS uses port 443 and HTTP port 80. However, this might not be true for all applications.
  • If you need a different name altogether, you might want to consider using a different “hosted service”.
  • Tailspin is a “geo aware” service. This means that tenants can choose to subscribe to a geo-specific Tailspin instance. For example, a US company will probably choose a US-based service, European companies will subscribe to the Europe-based service, etc. However, there might be a situation where, say, a US-based company targets a specific survey to a different location. This is a subtle but important difference. Keeping the sites that host tenant pages and surveys separate might help, but it doesn’t automatically solve the synching issue:

[Diagram: a tenant in one geography targeting a survey at users in another]

Depending on how the red line in the diagram is implemented, there might be little benefit in European users going to a European datacenter. The best results would probably be achieved if everything that needs to happen “online” is kept in the same place. Here’s an example of what you’d probably want to avoid: having the repository classes in the European site call the US-based storage to render the survey. You’d probably want to push the definitions of a survey to the European datacenter (that happens once) and then collect all surveys there. Then, when you are done (or using a worker process), you would synch the results back to the US. More or less like this:

[Diagram: numbered steps for pushing survey definitions to the European datacenter and synching results back to the US]

Steps 2 and 4 would happen only once. Step 3 is all local, minimizing response time for those users.
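
To make the pattern concrete, here is a tiny, hedged sketch of the “collect locally, sync home with a worker” idea. Everything in it (the in-memory stores, collect_response, sync_batch_home) is a hypothetical stand-in for Tailspin’s storage and worker-role code, not part of any real Windows Azure API.

```python
# Minimal sketch of the pattern above: step 3 stores responses in the
# datacenter closest to the user; step 4 is a worker job that later drains
# the local store and pushes batches back to the home (US) datacenter.
local_responses = []   # stand-in for table/queue storage in the European datacenter
home_responses = []    # stand-in for the master store in the US datacenter

def collect_response(survey_id, answers):
    """Step 3: record the response locally, keeping the user-facing call fast."""
    local_responses.append({"survey": survey_id, "answers": answers})

def sync_batch_home(batch_size=100):
    """Step 4: push accumulated responses home in batches (a worker-role task)."""
    while local_responses:
        batch = local_responses[:batch_size]
        del local_responses[:batch_size]
        home_responses.extend(batch)   # in practice, a cross-datacenter call

collect_response("survey-42", {"q1": "yes"})
collect_response("survey-42", {"q2": "no"})
sync_batch_home()
print(len(home_responses), "responses synced home")
```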

David Chou describes the Windows Azure Platform Training Kit – June Update in this 6/7/2010 post:

I’m pleased to announce the June update for the Windows Azure Platform training kit.

You can download it from here.

The training kit has everything you need to get started and then dig deep on the Windows Azure platform, including Windows Azure, SQL Azure, “Dallas”, Identity and the Windows Azure Platform AppFabric.

We have some minor updates to the training presentations and some new & updated hands on labs since the December release.

If you have used this content as a base for your own PLEASE take a look at the updated decks. I still see content from POST-PDC 09 that is now technically wrong.

Here is what is new in the kit:

  • Introduction to Windows Azure – VS2010 version
  • Intro To SQL Azure – VS2010 version
  • Intro Service Bus – VS2010 version
  • Intro To Dallas – VS2010 version
  • Intro Access Control Service – VS2010 version
  • Web Services and Identity in the Cloud
  • Exploring Windows Azure Storage
    • VS2010 version
    • + new Exercise: “Working with Drives”
  • Windows Azure Deployment
    • VS2010 version
    • + new Exercise: “Securing Windows Azure with SSL”
  • Minor fixes to presentations – mainly timelines, pricing, new features etc.

We are currently working on updating the presentations and demos in the training kit. These will form the basis of a new 3 day course which digs much deeper into the platform – watch out for this update during July.

If you have any ideas on how we should improve the kit, please drop me an email. My email address is my firstname.lastname at microsoft.com. …

Jim Nakashima explains Using IntelliTrace to debug Windows Azure Cloud Services in this 6/7/2010 post:

One of the cool new features of the June 2010 Windows Azure Tools + SDK is the integration of IntelliTrace to allow you to debug issues that occur in the cloud.

IntelliTrace support requires .NET 4 and Visual Studio 2010 Ultimate, and the cloud service has to be deployed with IntelliTrace enabled. If you are using a 32-bit OS, you need this patch/QFE.

To enable IntelliTrace, right click on the cloud service project and select “Publish”.

At the bottom of our publish dialog, click to select “Enable IntelliTrace for .NET 4 roles”.

You can also configure the IntelliTrace settings for the cloud (these are separate from the settings in Tools | Options, which are used for the debug (F5) scenario that we currently do not support with Cloud Services/Development Fabric).

A couple of notes about IntelliTrace settings. 

We default to high mode which is different from the F5 IntelliTrace settings in Visual Studio.  The reason is that F5 IntelliTrace includes both debugger and IntelliTrace data while in the cloud, you are only able to get back IntelliTrace data.

Additionally, we exclude Microsoft.WindowsAzure.StorageClient.dll, as we found that the slowdown caused by IntelliTrace instrumentation resulted in timeouts to storage. You may find you will want to remove the storage client assembly from the exclusion list.

To reset the IntelliTrace settings back to the default, you can delete “collectionplan.xml” from %AppData%\Microsoft\VisualStudio\10.0\Cloud Tools.

Click “OK” to package up everything you need to IntelliTrace the web and worker host processes in the cloud and start the deployment process.

Note: There is a current limitation that child processes cannot be IntelliTrace debugged. …

Jim continues his guided tour of IntelliTrace for Windows Azure.

Sumit Mehrotra illustrates Using the Portal: Auto-upgrade mode to manage the Windows Azure guest OS for your service in this 6/7/2010 post:

1. New Deployment: For a new deployment, auto-upgrade mode is chosen by default on the portal.

Sumit continues with the same process for an existing deployment.

Gunther Lenz offers a First Look – Windows Azure Toolkit for Facebook in this post and 00:14:52 video of 6/6/2010:

In this quick overview, Gunther Lenz, ISV Architect Evangelist with Microsoft, shows how to set up a Facebook developer account and configure the Windows Azure toolkit for Facebook to run the samples provided in the Codeplex download. The toolkit significantly streamlines development of Facebook (and other) applications that connect to Windows Azure Table Storage as well as SQL and incorporate best practices for scalable cloud solutions.

Also check out the CloudPoll reference application, built on the Windows Azure Toolkit for Facebook and free for any Facebook user to use.

<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie asserts “The right form-factor in the right location at the right-time will maximize the benefits associated with cloud computing and virtualization” in her Data Center Feng Shui post of 6/8/2010 to F5’s DevCentral blog:

Feng Shui, simply defined, is the art of knowing where to place things to maximize benefits. There are many styles of Feng Shui but the goal of all forms is to create the most beneficial environment in which one can live, work, play, etc… based on the individual’s goals.

“Historically, feng shui was widely used to orient buildings—often spiritually significant structures such as tombs, but also dwellings and other structures—in an auspicious manner. Depending on the particular style of feng shui being used, an auspicious site could be determined by reference to local features such as bodies of water, stars, or a compass. Feng shui was suppressed in China during the cultural revolution in the 1960s, but has since seen an increase in popularity, particularly in the United States.”

-- Feng Shui, Wikipedia

In the US, at least, Feng Shui has gained popularity primarily as it relates to interior design – the art of placing your furniture in the right places based on relationship to water, stars, and compass directions. Applying the art of Feng Shui to your data center architecture is not nearly as difficult as it may sound because essentially you’re doing the same thing: determining the best location (on or off-premise? virtual or physical? VNA or hardware?) for each network, application delivery network, and security component in the data center based on a set of organizational (business and operational) needs or goals. The underlying theory of Feng Shui is that location matters, and it is certainly true in the data center that location and form-factor matter to the harmony of the data center. The architectural decisions regarding a hybrid cloud computing infrastructure (a mix of virtual network appliances, hardware, and software) have an impact on many facets of operational and business goals.

TO BE or NOT TO BE a VIRTUAL NETWORK APPLIANCE

Some pundits have put forth the notion that there’s no real reason for a hardware-based solution because, after all, they’re just general-purpose servers “under the hood”. This kind of statement is hyperbole and fails to recognize the difference between “hardware” and “appliance” and between “general purpose compute” and “purpose-specific compute.”

There are certainly data center components across all four data center infrastructure tiers – security, network, storage, and application delivery – that can be (and perhaps should be) virtualized. There are also data center components in those tiers that should not be virtualized. Then there are the components that could go either way, depending on the specific organizational and operational goals, needs, and budget.

In fact, there are times when the same component may in fact be deployed in both form factors simultaneously.

It isn’t as simple a decision as it initially sounds. Some folks would have us believe that virtualized components are exactly the same as their hardware counterparts and therefore the choice is obviously to go with the virtual solution because, well, it’s trendy. Like the iPad.

But while the functionality offered by a virtualized version of a hardware component may be equivalent, there remains differences in capability (speeds and feeds), costs, and management of the solution across varying form-factors.

For example, pricing of virtualized network appliances is lower than their hardware counterparts – for the core solution, that is. The virtualized network appliance still carries along the licensing costs of the virtualization platform and the compute resources required to deploy the solution. Virtualization isn’t free, after all, there are costs incurred in licensing, configuration, deployment, and the storage and transfer of images before, during, and after execution. There may be additional changes (dependencies) required of the underlying infrastructure to support virtualized solutions in terms of logging and auditing and compliance with data retention policies. There may be changes to network management systems required to support new protocols or methods of management and monitoring. There may be budgetary issues and obstacles in obtaining the right mix of OPEX and CAPEX to acquire, deploy, and maintain the solution.

There’s a lot more that goes into (or should go into) the decision to go virtual than just whether or not it’s available in that form-factor.

Lori continues with a “EACH COMPONENT and SITUATION is a NEW DECISION” topic.

David Linthicum claimed “All of Microsoft's recent cloud announcements leave me underwhelmed” and asked In 5 years, will Microsoft be relevant in the cloud? in his 6/8/2010 post to InfoWorld’s Cloud Computing blog:

As reported by Network World's John Cox and Jon Brodkin, at Microsoft's annual TechEd conference this week in New Orleans, "industry watchers are looking for Microsoft to sustain the buzz around Windows Phone 7, as well as share details about its cloud computing strategy in general and Azure cloud services in particular." Also, Microsoft is opening a cloud computing center in Taiwan: "Microsoft will also work with two Taiwanese companies to develop a new generation of servers designed for cloud computing."

Or, as Microsoft CEO Steve Ballmer put it when talking about cloud computing: "We're all in." Considering all of Microsoft's recent cloud activity, that is clearly the case. But is it enough to capture the huge cloud computing wave?

The trouble is that Microsoft has always been a follower when it comes to new, emerging, and hype-driven technology, though it has typically leaped into first place by leveraging its dominance on the desktop and penetration of the Global 2000 businesses. That was clearly the case when the Web took off in the early 1990s; Microsoft turned the ship and captured most of the browser and server software market.

But today is a different story. With Apple, Amazon.com, and Google out there with well-funded and innovative solutions, Microsoft is finding its dominance on the desktop may not be enough to launch it into a cloud computing leadership position unless drastic measures are taken.

I've defended Microsoft in the past and its ability to capture the emerging cloud computing productivity application space, and I still believe Google won't be able to displace Microsoft from that market, at least for now. The larger question is in five years, will Microsoft be in the top five cloud computing companies? I'm not sure it will, given its current trajectory.

Lydia Leong asserts “Just because it’s in the cloud doesn’t make it magic. And it can be very, very dangerous to assume that it is” in her The cloud is not magic post of 6/8/2010:

I recently talked to an enterprise client who has a group of developers who decided to go out, develop, and run their application on Amazon EC2. Great. It’s working well, it’s inexpensive, and they’re happy. So Central IT is figuring out what to do next.

I asked curiously, “Who is managing the servers?”

The client said, well, Amazon, of course!

Except Amazon doesn’t manage guest operating systems and applications.

It turns out that these developers believed in the magical cloud — an environment where everything was somehow mysteriously being taken care of by Amazon, so they had no need to do the usual maintenance tasks, including worrying about security — and had convinced IT Operations of this, too.

Imagine running Windows. Installed as-is, and never updated since then. Without anti-virus, or any other security measures, other than Amazon’s default firewall (which luckily defaults to largely closed).

Plus, they also assumed that auto-scaling was going to make their app magically scale. It’s not designed to automagically scale horizontally. Somebody is going to be an unhappy camper.

Cautionary tale for IT shops: Make sure you know what the cloud is and isn’t getting you.

Cautionary tale for cloud providers: What you’re actually providing may bear no resemblance to what your customer thinks you’re providing.

Eric Nelson reported SQL Azure Roadmap gets a little clearer: announcements from Tech Ed in this (abbreviated) summary of 6/8/2010:

On Monday at Tech▪Ed 2010 we announced new stuff (I like new stuff) that “showcases our continued commitment to deliver value, flexibility and control of data through data cloud services to our customers”.

Ok, that does sound like marketing speak (and it is) but the good news is there is some meat behind it. We have some decent new features coming and we also have some clarity on when we will be able to get our hands on those features.

SQL Azure Business Edition Extends to 50 GB – June 28th
  • SQL Azure Business Edition database is now extending from 10GB to 50GB
  • The new 50GB database size will be available worldwide starting June 28th
SQL Azure Business Edition Subscription Offer – August 1st
Public Preview of the Data Sync Service  - CTP now
  • Data Sync Service for SQL Azure allows for more flexible control over data by deciding which data components should be distributed across multiple datacenters in different geographic locations, based on your internal policies and business needs. 
  • Available as a community technology preview after registering at http://www.sqlazurelabs.com
SQL Server Web Manager for SQL Azure - CTP this Summer
  • SQL Server Web Manager (SSWM) is a lightweight and easy to use database management tool for SQL Azure databases, to be offered this summer.
Access 10 Support for SQL Azure – available now
  • Yey – at last! Microsoft Office 2010 will natively support data connectivity to SQL Azure – we can now start developing those “departmental apps” with the confidence of a highly available SQL store provisioned in seconds.
  • NB: I don’t believe we will support any previous versions of Access talking to SQL Azure.
The Pre-announced Spatial Data Support to Become Live – Live now*

Related Links

Brian Loesgen announced Our new book SOA with .NET and Windows Azure is now available! on 6/8/2010:

After a long effort, working with Thomas Erl and a very talented team of individuals, I finally got to hold a copy of the latest book I co-authored (and carry it around like a proud new father might) at TechEd in New Orleans.

It’s real, and it’s here now! It is in-stock at Amazon.

If you’re here at TechEd, John deVadoss, Christoph Schittko and myself (we are all part of the author team) will be doing our session tomorrow (Wednesday) at 8:00am. The session is “ASI202 - Real-World SOA with Microsoft .NET and Windows Azure”, so you can see the parallels with the book title. It is an architectural-level presentation that will focus on pain points, patterns and solutions. We will be in room 287.

We will also be doing a book signing tomorrow at the Addison-Wesley/Sams Publishing booth in the expo hall (booth #1942 and 1944) at 2:45 on Wednesday. The way things are set up this year, if you want to get a book signed, you need to go buy it beforehand at the bookstore (on the second floor) and then come and see us. We’ll be there for 45 minutes or so.

SOA with .Net

http://www.soabooks.com/net

Dom Green describes Azure dev Portal, OS Settings in this 6/7/2010 post:

Microsoft has made a small update to the Windows Azure Dev Portal, adding the ability to configure the operating system that runs on your nodes and how it is updated.

As you can see in the image below, the new magic “OS Settings…” button is now available.

[Screenshot: the new “OS Settings…” button on the Windows Azure portal]

Clicking on the “OS Settings…” button takes you to the following screen to configure your operating system options and how it is updated:

[Screenshot: the OS Settings configuration screen]

Hopefully, more updates will start rolling out to the dev portal over the coming months.

Michael Coté asks Whatever Happened to the cheap cloud? in this 6/7/2010 post:

“I’m kind of reminded of that Dilbert cartoon where he’s complaining about the size of his mailbox. And he hands someone a quarter and says, “There, double the size of my mailbox.” That’s the challenge that a lot of these services present—the [cloud] services arguably are different, and a lot of them are the same, but the perception is that they’re all easy to get and very cheap, and why the heck does it take our internal IT function so much longer and so much more money to achieve the same perceived results?”

Joseph Tobolski, partner at Accenture Technology Labs

Remember how the cloud was going to be cheap? Not so fast, many vendors are now saying. (And if it allows HP to get rid of 9,000 people, “not so fast” seems a good strategy for the existing meat-cloud – I bet the Morlocks are buying monkey-wrenches in bulk to throw into the gears of the cloud) It’s about enabling business and services, not saving you money. I’ve noticed that subtle and disturbing trend of late from what I call “elder companies” (incumbent, big money making tech vendors).

Rather than enter an IT infrastructure pricing war, they’re focusing on the business growth of using cloud computing. On the face of it, there’s nothing wrong with that. Except that CIOs and IT departments the world over are pressured to spend less. In fact, expensive IT becomes a boat-anchor itself, just like any calcified brown-field enterprise IT your business models are shackled to.

In reality, “business/IT alignment” was largely about “why the hell am I spending so much for 50 meg inbox quotas and Intranet search that doesn’t work?” Sure, if you’re a company lucky enough to actually have IT that can be part of making money off new and tweaked business processes, your “alignment” can mean something other than cutting costs. But, for many organizations, IT is the last place you want to go to grow your organization’s revenues.

The early hype around cloud computing promised a big fat remedy to all of that:

  • It’s dirt cheap! You can stick it on a credit card ferchristsake.
  • It lets you rapidly deploy new applications.
  • It lets you completely skip over the IT (and their Abandon All Hope Ye Who Enter Here gloom) department and DIY projects.

These are all fatuous points that my fellow cloud gas-bags and I could rotate over for weeks without so much as advancing the state of IT one micron. As I recall, Service Oriented Architectures were going to scramble IT’s collective egg while it was still inside its shell as well.

There are definitely benefits aside from cost savings to using cloud computing technologies – we spend just about every episode of the IT Management & Cloud podcast yammering about them. Still, if you’re in the market for anything cloud related, make sure saving money is part of the feature list, not just more IT to pay out the nose for.

It’s doubtful that anyone would call Windows Azure “cheap.”

James Urquhart announced the availability of The 'Cloud Computing Bill of Rights': 2010 edition in this 6/7/2010 post:

[Preamble]
The Cloud Computing Bill of Rights, 2010 edition

In the course of technical history, there exist few critical innovations that forever change the way technical economies operate; forever changing the expectations that customers and vendors have of each other, and the architectures on which both rely for commerce. We, the parties entering into a new era driven by one such innovation--that of network based services, platforms, and applications, known at the writing of this document as "cloud computing"--do hereby avow the following (mostly) inalienable rights:

Article I: Customers Own Their Data

  1. No vendor (or supplier of service to a vendor) shall, in the course of its relationship with any customer, claim ownership of any data uploaded, created, generated, modified, hosted, or in any other way associated with the customer's intellectual property, engineering effort or media creativity. This also includes account-configuration data, customer-generated tags and categories, usage and traffic metrics, and any other form of analytics or metadata collection.

    Customer data is understood to include all data directly maintained by the customer, as well as that of the customer's own customers. It is also understood to include all code and data related to configuring and operating software directly developed by the customer, except for data expressly owned by the underlying infrastructure or platform provided by the vendor.

  2. Vendors shall always provide, at a minimum, API level access to all customer data as described above. This API level access will allow the customer to write software which, when executed against the API, allows access to any customer-maintained data, either in bulk or record-by-record as needed. As standards and protocols are defined that allow for bulk or real-time movement of data between cloud vendors, each vendor will endeavor to implement such technologies, and will not attempt to stall such implementation in an attempt to lock in its customers.

  3. Customers own their data, which in turn means they own responsibility for the data's security and adherence to privacy laws and agreements. As with monitoring and data access APIs, vendors will endeavor to provide customers with the tools and services they need to meet their own customers' expectations. However, customers are responsible for determining a vendor's relevancy to specific requirements, and to provide backstops, auditing, and even indemnification as required by agreements with their own customers.

    Ultimately, however, governments are responsible for the regulatory environments that define the limits of security and privacy laws. As governments can choose any legal requirement that works within the constraints of their own constitutions or doctrines, customers must be aware of what may or may not happen to their data in the jurisdictions in which data resides, is processed or is referenced. As constitutions vary from country to country, it may not even be required for governments to inform customers what specific actions are taken with or against their data. That laws exist that could put their data in jeopardy, however, is the minimum that governments convey to the market.

    Customers (and their customers) must leverage the legislative mechanisms of any jurisdiction of concern to change those parameters.

    In order for enough trust to be built into the online cloud economy, however, governments should endeavor to build a legal framework that respects corporate and individual privacy, and overall data security. While national security is important, governments must be careful not to create an atmosphere in which the customers and vendors of the cloud distrust their ability to securely conduct business within the jurisdiction, either directly or indirectly.

  4. Because regulatory effects weigh so heavily on data usage, security and privacy, and location is key to determining which regulations and laws are in effect, vendors shall, at a minimum, inform customers specifically where their data is housed. A better option would be to provide mechanisms by which users can choose where their data will be stored. Either way, vendors should also endeavor to work with customers to assure that their systems designs do not conflict with known legal or regulatory obstacles. As noted earlier, however, ultimate responsibility for data security and legality remains the responsibility of the customer.

    These rights are assumed to apply to primary, backup, and archived data instances.

Article II: Vendors and Customers Jointly Own System Service Levels

  1. Vendors own, and shall do everything in their power to meet service level targets committed to with any given customer. All required effort and expense necessary to meet those explicit service levels will be spent freely and without additional expense to the customer. While the specific legally binding contracts or business agreements will spell out these requirements, it is noted here that these service level agreements are entered into expressly to protect both the customer's and vendor's business interests, and all decisions by the vendor will take both parties equally into account.

    Where no explicit service level agreement exists with a customer, the vendor will endeavor to meet any expressed service level targets provided in marketing literature or the like. At no time will it be acceptable for a vendor to declare a high level of service at a base price, only to later indicate that this level of service is only available at a higher premium price.

    It is perfectly acceptable, however, for vendors to expressly sell a higher level of service at a higher price, as long as they make that clear at all points where a customer may evaluate or purchase the service.

  2. Ultimately, though, customers own their service level commitments to their own internal or external customers, and customers understand that it is their responsibility to take into account possible failures by each vendor that they do business with.

    Customers relying on a single vendor to meet their own service level commitments enter into an implicit agreement to tie their own service level commitments to the vendor's, and to live and die by the vendor's own infrastructure reliability. Those customers who take their own commitments seriously will seek to build or obtain independent monitoring, failure recovery, and disaster recovery systems.

    If a vendor recommends specific architectures or procedures in order to achieve specific service levels, it will be the customer's responsibility to adhere to them. Failure to do so means liability for service failures rests with the customer, not the vendor.

  3. Where customer/vendor system integration is necessary, the vendors must offer options for monitoring the viability of that integration at as many architectural levels as required to allow customers to meet their own service level commitments. Where standards exist for such monitoring, the vendor will implement those standards in a timely and complete fashion. The vendor should not underestimate the importance of this monitoring to the customer's own business commitments.

    A bare minimum form of this monitoring would be timely and thorough communication of events such as security breaches, service outages or degradations, and pricing changes.

  4. Under no circumstances will vendors terminate customer accounts for political statements, inappropriate language, statements or content related to sexuality or other taboo subjects, religious commentary, or statements critical of the vendor's service, with exceptions for specific laws, e.g. hate speech, where they apply.

Article III: Vendors Own Their Interfaces
  1. Vendors are under no obligation to provide "open" or "standard" interfaces, other than as described above for data access and monitoring. APIs for modifying user experience, frameworks for building extensions, or even complete applications for the vendor platform, or other such technologies can be developed however the vendor sees fit. If a vendor chooses to require developers to write applications in a custom programming language with esoteric data storage algorithms and heavily piracy protected execution systems, so be it.

    If it seems that this completely abdicates the customer's power in the business relationship, this is not so. As the "cloud" is a marketplace of technology infrastructures, platforms, and applications, customers exercise their power by choosing where to spend their hard-earned money. A decision to select a platform vendor that locks you into proprietary programming libraries or execution containers, for instance, is a choice to support such programming lock-in. On the other hand, insistence on portable virtual machine formats or programming frameworks will drive the market towards a true commodity compute capacity model.

    The key reason for giving vendors such power is to maximize innovation. By restricting how technology gets developed or released, the market risks restricting the ways in which technologists can innovate. History shows that eventually the "open" market catches up to most innovations (or bypasses them altogether), and the pace at which this happens is greatly accelerated by open-source software. Nonetheless, forcing innovation through open source or any other single method runs the risk of weakening capitalist entrepreneurial risk taking.

  2. The customer, however, has the right to use any method legally possible to extend, replicate, leverage or better any given vendor technology. If a vendor provides a proprietary API for virtual machine management in their cloud, customers (aka "the community" in this case) have every right to experiment with "home grown" implementations of alternative technologies using that same API. This is also true for replicating cloud platform functionality, or even complete applications--though, again, the right only extends to legal means.

    This provision does NOT protect customers from illegal use of patented or trademarked technologies, but vendors and third parties should be aware that demanding royalties for de facto standard technologies will only result in other, more open technologies taking the de facto role. With the advent of several open-source projects to decouple client code from proprietary interfaces, it is unlikely to be tremendously expensive for customers to replace cloud interfaces if forced to do so.

    Possibly the best thing cloud vendors can do to extend their community, and encourage innovation on their platform from community members is to open their platform as much as possible. By making themselves the "reference platform" for their respective market space, an open vendor creates a petri dish of sorts for cultivating differentiating features and successes on their platform. Protective proprietary vendors are on their own.

These three articles serve as the baseline for customer, vendor and, as necessary, government relationships in the new network-based computing marketplace. No claim is made that this document is complete, or final. These articles may be changed or extended at any time, and additional articles can be declared, whether in response to new technologies or business models, or simply to reflect the business reality of the marketplace. It is also a community document, and others are encouraged to bend and shape it in their own venues.

InformationWeek::Analytics offers Michael Biddick’s The Who and How of Private Clouds Cloud Computing Brief on 6/7/2010:

Companies are showing much more interest in private clouds than public clouds. IT pros may be getting sick of cloud computing terminology, but make no mistake: the private cloud is a new and powerful data center strategy. What IT leaders are sorting out today is just how much of the concept they want to embrace.

More than half of the 504 business technology professionals we surveyed say they're either using private clouds (28%) or planning to do so (30%). However, cloud computing isn't a single, shiny toy a company buys. Instead, it's a new approach to delivering IT services. It requires certain key technologies, but more broadly, it focuses on standards and process.

Download

<Return to section navigation list> 

Cloud Security and Governance

Don Nelson offers “The Notion of ‘On Demand’ as Opposed to Hours, Days and Weeks in the Cloud” in his Cloud Management and Security: Being Ready post to Cisco’s SP360: Service Provider blog of 6/7/2010:

Well, it seems like only yesterday that I participated in TM Forum Management World 2010 in Nice, France, during the week of May 17, 2010. Specifically, I took part in a wonderful session, Opportunities, Business Models & Requirements for Cloud Providers.

Having just returned from Paris, France, where I presented at the Cloud Telco 2010 Conference, I could not help but pick up on the common themes of cloud management and security, and I will state that management and security are absolute table stakes when developing a sustainable cloud-based service architecture blueprint. Another theme for discussion was the role of network virtualization (not a new concept, by the way) in enabling the network infrastructure, e.g. multi-tenancy. Architecturally, considerations such as proximity of the network to the data center infrastructure itself, so as to mitigate latency and ensure the required quality of service when moving virtual machines, were also common topics for discussion at these venues. Oh yes, it is no surprise that with the launch of Cisco’s CRS-3 we have integrated the Network Positioning System (NPS), which implements L3-L7 best-path selection for content.

However, in focusing on cloud management there is the notion of “on demand” as opposed to hours, days and weeks. Therefore, autonomic flow-through provisioning may become the norm for cloud management and will consequently have implications for OSS systems as they move to a cloud management architecture, as I highlighted in my presentation at the Cloud Telco 2010 Conference last week:

[Slide: cloud management]

In the context of cloud security, there are a few fundamental questions and considerations that may serve as inputs to developing a cloud security blueprint (by no means a comprehensive list), such as:

  1. Where is my data?
  2. Geographical location of data
  3. Who is accessing it on the physical and virtual servers?
  4. Is it segregated from others?
  5. Can I recover it?
  6. What is the threat vector for cloud services?
  7. How do I identify the weakest link in cloud services security chain?
  8. Would centralization of data bring more security?
  9. Federated trust and identity issues
  10. Who would manage risk for my business assets?
  11. And can I comply with regulatory requirements?

Finally, next week the ITU-T will kick off its first Focus Group meeting on cloud computing, June 14-16, in Geneva, Switzerland.

I do look forward to hearing your thoughts on cloud management and cloud security.

Don Nelson is the moderator and Program Manager for the CiscoSP360 Blog. He is a member of the Service Provider marketing team at Cisco Systems, Inc.

<Return to section navigation list> 

Cloud Computing Events

The Windows Azure Team summarized Windows Azure News at TechEd 2010 in this 6/7/2010 post:

Want to know what's being announced about Windows Azure at TechEd 2010 this week in New Orleans? Here's the rundown: the Windows Azure team will discuss the release of the June 2010 version of the Windows Azure Tools + SDK, the production launch (including pricing) of the Windows Azure Content Delivery Network (CDN) and enablement of the OS auto-upgrade feature.  The SQL Azure team is also making several announcements so be sure to read about them on their blog here.

New Windows Azure Tools + SDK

The June 2010 release of the Windows Azure Tools + SDK will include:

  • Full support for Visual Studio 2010 RTM
  • .NET 4 support to provide developers with the flexibility to build services targeting either the .NET 3.5 or .NET 4 framework
  • Cloud storage explorer to make it easier for developers to build compelling services by displaying a read-only view of Windows Azure tables and blob containers through the Visual Studio Server Explorer
  • Integrated deployment, which will enable developers to deploy services directly from Visual Studio by selecting 'Publish' from the Solution Explorer
  • Service monitoring to help developers track and manage the state of their services through the 'compute' node in Server Explorer
  • IntelliTrace support for services running in the cloud to simplify the process of debugging services in the cloud through the Visual Studio 2010 Ultimate IntelliTrace feature. For more information see this post.

To download the new Tools + SDK, please click here.

Production Launch of the Windows Azure CDN

As previously announced on this blog, usage of the Windows Azure CDN for all billing periods that begin after June 30, 2010 will be charged using the following three billing meters and rates (a worked cost sketch follows the list):

  • $0.15 per GB for data transfers from European and North American locations
  • $0.20 per GB for data transfers from other locations
  • $0.01 per 10,000 transactions
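
As a quick sanity check of what these meters mean in practice, here is a small worked example; the traffic volumes are invented, and the rates are simply the ones listed above.

```python
# Worked example (not from the announcement): estimate a monthly Windows Azure
# CDN bill from the three meters above. The traffic figures are made up.
RATE_NA_EU_PER_GB = 0.15   # $/GB served from North American and European locations
RATE_OTHER_PER_GB = 0.20   # $/GB served from other locations
RATE_PER_10K_TX   = 0.01   # $ per 10,000 transactions

def cdn_monthly_cost(gb_na_eu, gb_other, transactions):
    return (gb_na_eu * RATE_NA_EU_PER_GB
            + gb_other * RATE_OTHER_PER_GB
            + transactions / 10_000 * RATE_PER_10K_TX)

# Example: 500 GB served from NA/EU, 100 GB from elsewhere, 2 million requests.
print(f"${cdn_monthly_cost(500, 100, 2_000_000):,.2f}")   # -> $97.00
```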

With 19 locations globally (United States, Europe, Asia, Australia and South America), the Windows Azure CDN offers developers a global solution for delivering high-bandwidth content, enhancing end user performance and reliability by placing copies of data, at various points in a network, so that they are distributed closer to the user. Content types supported by the Windows Azure CDN include web objects (e.g. JPG, CSS, and JavaScript), downloadable objects (media files, software, documents) and other components for Internet delivery. The Windows Azure CDN supports HTTP delivery of public content stored in Windows Azure storage.

All usage for billing periods beginning prior to July 1, 2010 will not be charged. To help you determine which pricing plan best suits your needs, please review the comparison table, which includes this information.

To learn more about the Windows Azure CDN and how to get started, please be sure to read our previous blog post or visit the FAQ section on WindowsAzure.com.

OS Auto-upgrade Feature

The OS auto-upgrade feature provides developers the flexibility to have the Guest OS for their service deployment automatically upgraded to the latest available release. Developers will continue to retain the ability to manually upgrade to a specific release of the Guest OS as well. Developers can enable the OS auto-upgrade feature either through the Service Management API or through the developer portal. The list of OS releases can be found here.

If you want to stay on top of all that's happening with Windows Azure at TechEd this week, follow WindowsAzure on Twitter; to stay up-to-date on all things TechEd, follow @TechEd_NA on Twitter.

The Microsoft News Center claims “At Tech•Ed 2010, Bob Muglia speaks to the potential of cloud computing for enterprises, outlines product updates and provides guidance for customers” in its Microsoft and Its Partners Help Customers Reach the Cloud on Their Own Terms press release of 6/7/2010:

In an opening-day keynote speech at Microsoft Corp.’s Tech•Ed 2010 North America conference, Bob Muglia, president of the company’s Server and Tools Division, outlined the benefits and possibilities of cloud computing and how Microsoft can help customers harness the next generation of IT. Muglia noted that customers have different needs, with some desiring to extend efficiencies and productivity through existing investments in datacenters and applications while others look to cloud computing as a new model to enhance IT and the way they do business.

Bob Muglia on Microsoft Helping Customers Move to the Cloud

“Our job, simply put, is to deliver what customers need to take advantage of cloud computing on their own terms,” Muglia said. “Some vendors would have you believe that you must move everything to the cloud now and there is only one way to achieve cloud computing; don’t be misled and lose sight of the value of all the investments you have already made to enable the full promise of cloud computing.

“Microsoft’s strategy is to deliver software, services and tools that enable customers to realize the benefits of a cloud-based model with the reliability and security of on-premises software,” Muglia continued. “Microsoft is unique in that no other solution vendor has the same level of experience and expertise in software and services. We are providing the most comprehensive set of choices available to customers.” …

Cloud Connect and InformationWeek announced that an on-demand version of their "The Anatomy of the Cloud: A Look Ahead" Virtual Event is available: 

“The Anatomy of the Cloud: A Look Ahead” Virtual Event, which was presented live on April 20, 2010, … is now available on-demand until April 20, 2011. Login anytime to access a wealth of informative content and valuable webcast presentation replays.

Click here to view the on-demand content. (Site registration required without virtual event registration.)

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Ammalgam reported Google testing cloud printing internally in this 6/8/2010 post:

Google is testing its Google Cloud Print service internally and hopes to make it available in the coming months after testing it with printer makers, the company said June 7. HP announced its support for Google Cloud Print at an event in New York City, where the printer maker unveiled a suite of cloud-aware printers for consumers and businesses. These printers will work with Google Cloud Print out of the box.

The company introduced Cloud Print in April as a service that lets any application print to any printer from any computing device using Google’s cloud computing infrastructure.

Google envisions the solution as an alternative to the on-premises print solutions that use myriad drivers to execute print tasks, though the company plans to support those legacy printers as well through a proxy running on Microsoft Windows, Apple Mac and Linux machines.

Cloud Print is also the de facto print medium for Google’s Chrome Operating System, a Web-based operating system that boots up computers in a fraction of the time it takes to start most of today’s machines. Netbooks running Chrome OS are coming from Acer and others later this year.

“Development is progressing quickly, and we are now testing the service internally at Google,” said Google Cloud Print Product Manager Mike Jazayeri.

“Those testing it are particularly excited about being able to print from their phones to any printer in the company. We hope to launch the service in the coming months.” …

Ingram Micro offers links to sessions in its Cloud Summit Recap on 6/7/2010:

Ingram Micro has been a solid technology partner for hundreds of thousands of VARs over the years, and the cloud will be no different. VARs should be poised to take advantage of Ingram Micro’s thriving managed services business and rapidly emerging cloud computing strategy as this market accelerates in 2010 and 2011.

  • Welcome to Ingram Micro’s Inaugural Cloud Summit
  • Cloud Partner Ecosystems
  • Next Generation Security Platform
  • Cloud to Managed Services
  • Building a Business in the Cloud
  • Hosting Is the New IT
  • Make the Cloud a Seamless Extension of Your Data Center
  • Four Ways to Extend the Cloud
  • The Cloud Introduces an Alternate Distribution Method
  • The Cloud Market: The Current Dynamics
  • Getting to "More than Backup" Using Cloud Technologies Effectively
  • The Emerging Cloud Channel Opportunity
  • HP Is in the Cloud
  • 10 Insights on the Future of Cloud Computing
  • We’re All in on the Cloud [Microsoft, of course]
  • As I mentioned in an earlier post, everyone [even distributors] wants to get into the cloud computing act.

<Return to section navigation list> 
