Friday, October 15, 2010

Windows Azure and Cloud Computing Posts for 10/11/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

• Updated 10/15/2010 for 10/14/2010 and some earlier articles, marked •

Note: This post is updated daily or more frequently, depending on the availability of new articles, in the sections that follow.

To use the section links, first click the post’s title to display the single post containing the article you want to navigate to.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

The two chapters also are available as HTTP downloads at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.


Azure Blob, Drive, Table and Queue Services

• Neil MacKenzie (@mknz) posted Autoscaling in Windows Azure to his new Convective blog on 10/12/2010:

The purpose of this post is to provide links to various posts about the performance of Azure and, in particular, Azure Storage.

Elasticity of Cloud Services

The great promise of cloud computing is the closer matching of compute resources to compute needs. This leads to significant cost savings and allows the creation of novel services that previously would have been cost prohibitive.

The driver of all this is the elasticity of cloud services – the ability to scale up and scale down services as needed. Much of the emphasis is on the ability to scale up – but the ability to scale down is a no less significant factor in reducing costs. Elasticity is a better descriptor of this than scalability, since the latter traditionally refers to scaling up, not scaling down. However, the term auto-elasticity has never taken off, so autoscaling it is.

Workload Patterns for the Cloud

In a PDC09 presentation, Dianne O’Brien described four workload patterns that were optimal for the cloud:

  • On and off
  • Growing fast
  • Unpredictable bursting
  • Predictable bursting

An on and off workload is one used periodically or occasionally. An example is a batch process performed once a month.

A growing fast workload is one trending up with time. A more pessimistic name for this is grow fast or fail fast. An example is a rapidly growing website.

An unpredictable bursting workload is one where a steady load is occasionally and unpredictably subject to sudden spikes. An example is a magazine website where interest can spike if an article suddenly becomes popular.

A predictable bursting workload is one that varies periodically but predictably. An example is a business-focused service where the usage drops off outside work hours.

Other than the on and off workload, these workloads can all be managed automatically by having the service detect how busy it is and adjust the compute resources dedicated to it. Although an on and off workload can also be managed automatically, it would be difficult for the service to autoscale were it completely off.

Autoscaling

The basic idea of autoscaling is to use performance parameters of the service to indicate the amount of compute resources to devote to the service. If the instances of a web role are constantly CPU bound serving up pages then additional instances could be added. Another example is when the Azure Queue used to drive a worker role is growing faster than the worker role instances are able to handle the messages in the queue. The performance parameters used to drive autoscaling depend on the performance requirements of the Azure service.
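
To make this concrete, here is a minimal sketch, assuming the StorageClient library, of the kind of probe a worker might run against its input queue; the thresholds and the eight-instance cap are invented for illustration:

// Illustrative sketch: derive a target instance count from queue depth.
// Thresholds and the instance cap are invented for this example.
using Microsoft.WindowsAzure.StorageClient;

public static class ScalingProbe
{
    // current = the number of instances currently running.
    public static int SuggestInstanceCount(CloudQueue queue, int current)
    {
        int depth = queue.RetrieveApproximateMessageCount();

        if (depth > 1000 && current < 8) return current + 1; // falling behind
        if (depth < 50 && current > 1) return current - 1;   // over-provisioned
        return current;                                      // steady state
    }
}

A role could evaluate such a probe periodically and, when the suggestion differs from the current instance count, invoke the Service Management API discussed below.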

There are practical limitations on autoscaling in Azure. It takes about 10 minutes to start an additional instance of a running service and 1-2 minutes to tear an instance down. Moreover, Azure instances are charged by the hour. Consequently, it does not make sense to autoscale an Azure service at timescales much less than an hour.

Another limitation is that it is not possible to scale down to 0 instances of a role. This could be useful in a scenario where a role is needed only occasionally.

Windows Azure Service Management REST API

The Windows Azure Service Management REST API supports the programmatic management of Azure Services. It can be used to develop applications to manage Azure services and Azure Storage accounts. It can also be invoked inside a service to manage the service itself and implement autoscaling.

The number of instances for a role is specified in the Instances element of the Service Configuration file. This number can be changed manually using the Azure Services portal. Alternatively, the Azure Service Management API can be used to modify the Service Configuration file and, specifically, the number of running instances. This is how autoscaling of an Azure role is implemented.
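
For reference, here is roughly what the relevant fragment of a Service Configuration file looks like (the role name and count are illustrative):

<ServiceConfiguration serviceName="MyService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WorkerRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>

And a compressed, hedged C# sketch of the Change Deployment Configuration operation; treat the header values and payload as an outline to be checked against the Service Management REST API documentation:

// Hedged outline of the Change Deployment Configuration call. The caller
// supplies the identifiers, the new configuration XML, and a management
// certificate that is already registered with the subscription.
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

static void ChangeConfiguration(string subscriptionId, string serviceName,
    string deploymentName, string newConfigurationXml,
    X509Certificate2 managementCertificate)
{
    var request = (HttpWebRequest)WebRequest.Create(
        "https://management.core.windows.net/" + subscriptionId +
        "/services/hostedservices/" + serviceName +
        "/deployments/" + deploymentName + "/?comp=config");
    request.Method = "POST";
    request.ContentType = "application/xml";
    request.Headers.Add("x-ms-version", "2009-10-01");
    request.ClientCertificates.Add(managementCertificate);

    string body =
        "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
        "<Configuration>" +
        Convert.ToBase64String(Encoding.UTF8.GetBytes(newConfigurationXml)) +
        "</Configuration></ChangeConfiguration>";

    byte[] bytes = Encoding.UTF8.GetBytes(body);
    using (Stream stream = request.GetRequestStream())
        stream.Write(bytes, 0, bytes.Length);

    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // 202 Accepted: the operation is asynchronous; poll Get Operation
        // Status with the returned x-ms-request-id header for completion.
    }
}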

The Azure Service Management API works only in the cloud, so it is not possible to test autoscaling in the development fabric. It is possible to test manual scaling in the development fabric using csrun. However, this is limited to increasing the number of instances of a service started without debugging; csrun cannot be used to reduce the number of instances.

Examples of Autoscaling

The good folk at Lokad have published their lokad-cloud project, which supports autoscaling.

Grzegorz Gogolowicz and Trent Swanson have uploaded their Azurescale project to the MSDN Code Gallery. They also provide extensive documentation of autoscaling and their project.

In his new Forecast: Cloudy column in MSDN Magazine, Joseph Fultz describes autoscaling and provides source code for the example he uses.

Steven Nagy has an article on autoscaling in Windows Azure Platform: Articles from the Trenches Volume 1.

I described the Azure Service Management API in an earlier post in which I walked through the process of using it to modify the number of running instances.


• Eric Brewer (@eric_brewer) tweeted on 10/8/2010 (missed when posted):

I really need to write an updated CAP [Consistency, Availability, Partition tolerance] theorem paper. But until then, this is pretty good: http://bit.ly/btkdJ5 (from @coda)

Coda Hale (@coda) published a detailed analysis of Eric Brewer’s CAP theorem as it applies to Partition tolerance on 10/7/2010:

I've seen a number of distributed databases recently describe themselves as being “CA” — that is, providing both consistency and availability while not providing partition-tolerance. To me, this indicates that the developers of these systems do not understand the CAP theorem and its implications.

A Quick Refresher

In 2000, Dr. Eric Brewer gave a keynote at the Proceedings of the Annual ACM Symposium on Principles of Distributed Computing [1] in which he laid out his famous CAP Theorem: a shared-data system can have at most two of the three following properties: Consistency, Availability, and tolerance to network Partitions. In 2002, Gilbert and Lynch [2] converted “Brewer’s conjecture” into a more formal definition with an informal proof. As far as I can tell, it’s been misunderstood ever since.

So let’s be clear on the terms we’re using.

On Consistency

From Gilbert and Lynch [2]:

Atomic, or linearizable, consistency is the condition expected by most web services today. Under this consistency guarantee, there must exist a total order on all operations such that each operation looks as if it were completed at a single instant. This is equivalent to requiring requests of the distributed shared memory to act as if they were executing on a single node, responding to operations one at a time.

Most people seem to understand this, but it bears repetition: a system is consistent if an update is applied to all relevant nodes at the same logical time. Among other things, this means that standard database replication is not strongly consistent. As anyone whose read replicas have drifted from the master knows, special logic must be introduced to handle replication lag.

That said, consistency which is both instantaneous and global is impossible. The universe simply does not permit it. So the goal here is to push the time resolutions at which the consistency breaks down to a point where we no longer notice it. Just don’t try to act outside your own light cone…

On Availability

Again from Gilbert and Lynch [2]:

For a distributed system to be continuously available, every request received by a non-failing node in the system must result in a response. That is, any algorithm used by the service must eventually terminate … [When] qualified by the need for partition tolerance, this can be seen as a strong definition of availability: even when severe network failures occur, every request must terminate.

Despite the notion of “100% uptime as much as possible,” there are limits to availability. If you have a single piece of data on five nodes and all five nodes die, that data is gone and any request which required it in order to be processed cannot be handled.

(N.B.: A 500 The Bees They're In My Eyes response does not count as an actual response any more than a network timeout does. A response contains the results of the requested work.)

On Partition Tolerance

Once more, Gilbert and Lynch [2]:

In order to model partition tolerance, the network will be allowed to lose arbitrarily many messages sent from one node to another. When a network is partitioned, all messages sent from nodes in one component of the partition to nodes in another component are lost. (And any pattern of message loss can be modeled as a temporary partition separating the communicating nodes at the exact instant the message is lost.)

This seems to be the part that most people misunderstand.

Some systems cannot be partitioned. Single-node systems (e.g., a monolithic Oracle server with no replication) are incapable of experiencing a network partition. But practically speaking these are rare; add remote clients to the monolithic Oracle server and you get a distributed system which can experience a network partition (e.g., the Oracle server becomes unavailable).

Network partitions aren’t limited to dropped packets: a crashed server can be thought of as a network partition. The failed node is effectively the only member of its partition component, and thus all messages to it are “lost” (i.e., they are not processed by the node due to its failure). Handling a crashed machine counts as partition-tolerance. (N.B.: A node which has gone offline is actually the easiest sort of failure to deal with—you’re assured that the dead node is not giving incorrect responses to another component of your system.)

For a distributed (i.e., multi-node) system to not require partition-tolerance it would have to run on a network which is guaranteed to never drop messages (or even deliver them late) and whose nodes are guaranteed to never die. You and I do not work with these types of systems because they don’t exist.

Coda continues with a lengthy argument supporting the above point. …

References (i.e., Things You Should Read)
  1. Brewer. Towards robust distributed systems. Proceedings of the Annual ACM Symposium on Principles of Distributed Computing (2000), vol. 19, pp. 7-10.

  2. Gilbert and Lynch. Brewer’s conjecture and the feasibility of consistent, available, partition-tolerant web services. ACM SIGACT News (2002), vol. 33 (2), pp. 51-59.

  3. DeCandia et al. Dynamo: Amazon’s highly available key-value store. SOSP ’07: Proceedings of the Twenty-First ACM SIGOPS Symposium on Operating Systems Principles (2007).

  4. Fox and Brewer. Harvest, yield, and scalable tolerant systems. Hot Topics in Operating Systems, 1999: Proceedings of the Seventh Workshop on (1999), pp. 174-178.

  5. Brewer. Lessons from giant-scale services. Internet Computing, IEEE (2001), vol. 5 (4), pp. 46-55.

    (As a sad postscript: most of the theoretical papers I've referenced are about a decade old and all of them are freely available online.)

Coda is a software engineer and a cyclist who lives in Emeryville, CA, and works for Yammer, an enterprise social network, as their infrastructure architect. Read some things he believes in here.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

• Scott Klein posted an OData - WP7 Update on 10/14/2010:

After I posted my last blog post [SQL Azure, OData, and Windows Phone 7 of 9/29/2010], Ralph Squillace responded with a link that everyone doing OData/WP7 development should be aware of. Two days before I posted my OData/WP7 post, Microsoft's Mike Flasko posted this regarding the changes that are coming in the RTM version of the Windows Phone 7 library.

The current WP7 phone platform is missing some core types, types that the data service client LINQ provider requires to function properly. The CTP that is currently available (and which my example was based on) has LINQ support, but it relies on what Mike calls "shaky workarounds".

To address this, they are removing the LINQ translator from the data service client. There are pros and cons to this. The downside is that you will no longer be able to use LINQ-based queries; instead you will create your queries as WCF Data Services URIs. The good news is that because it is based on the WCF Data Services client, many of the features you know and love (such as serialization, binding, and change tracking) will be available to you.

This also means that I will be re-doing my blog post.
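
For readers wondering what URI-based querying looks like, here is a hypothetical sketch against the WP7 OData client; the service address and the Customer type are invented for illustration:

// Hypothetical sketch: querying an OData feed by URI instead of LINQ.
// The service URL and Customer type are illustrative only.
using System;
using System.Data.Services.Client;
using System.Linq;

var context = new DataServiceContext(
    new Uri("https://example.cloudapp.net/Northwind.svc"));

context.BeginExecute<Customer>(
    new Uri("/Customers?$filter=Country eq 'USA'&$top=10", UriKind.Relative),
    asyncResult =>
    {
        var customers = context.EndExecute<Customer>(asyncResult).ToList();
        // Marshal back to the UI thread before updating bound collections.
    },
    null);

The string query options ($filter, $top, and so on) take the place of what the LINQ provider would have generated.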


Steve Yi described SQL Azure to SQL Azure Synchronization in this 10/14/2010 post to the SQL Azure Team blog:

We have blogged in the past about SQL Azure to SQL Azure synchronization using Data Sync Service for SQL Azure (Introduction to Data Sync Service for SQL Azure). However, what if Data Sync Service doesn’t do exactly what you want?

It stands to reason that the Service will be adding more features as it matures; however, if you need a custom synchronization, you can easily program your own using the Sync Framework V2.1 Software Development Kit and a Windows Azure worker role. This newly published wiki article shows how to synchronize between two SQL Azure databases using the Sync Framework SDK and a Windows Azure worker role.
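
As an illustration of what the worker role's core loop might contain, here is a minimal Sync Framework 2.1 sketch; the connection strings are placeholders and it assumes the sync scope ("MyScope", an invented name) has already been provisioned on both databases:

// Minimal sketch: synchronize two SQL Azure databases with Sync Framework 2.1.
// Connection strings and the scope name are placeholders; scope provisioning
// (SqlSyncScopeProvisioning) must already have been done on both ends.
using Microsoft.Synchronization;
using Microsoft.Synchronization.Data.SqlServer;
using System.Data.SqlClient;

var hubConn = new SqlConnection(hubConnectionString);
var memberConn = new SqlConnection(memberConnectionString);

var orchestrator = new SyncOrchestrator
{
    LocalProvider = new SqlSyncProvider("MyScope", memberConn),
    RemoteProvider = new SqlSyncProvider("MyScope", hubConn),
    Direction = SyncDirectionOrder.UploadAndDownload
};

SyncOperationStatistics stats = orchestrator.Synchronize();
// stats.UploadChangesTotal and stats.DownloadChangesTotal are useful
// for basic logging from the worker role.

In a worker role you would typically run this inside the Run() loop on a timer, with retry handling around transient SQL Azure connection errors.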


Steve Yi explained Porting the MVC Music Store to SQL Azure in a 10/13/2010 post:

The MVC Music Store is an outstanding tutorial application built on ASP.NET MVC 2. It's a lightweight sample store which sells albums online, demonstrating ASP.NET MVC 2's productivity features and data access via Entity Framework 4. You can find it in CodePlex here. As a practice exercise we are going to port it from SQL Server to SQL Azure to show you how easy it is to get an existing application running on SQL Azure and Windows Azure.

The process for moving the database is fairly straightforward: you need to create the database, schema, and data, then modify the connection string to point to SQL Azure. To move the application to Windows Azure you’ll need to create a Windows Azure Cloud project and attach the MVC Music Store project. A few tweaks to the MVC Music Store project and it is up and running.

Getting the Schema and Data on SQL Azure

The MVC Music Store that you can download from CodePlex runs on SQL Server; the download includes SQL Server .mdf and .ldf files to attach to your on-premises SQL Server to quickly create the Music Store database. However, the CodePlex project also contains Transact-SQL creation scripts for creating the database schema and data: one for SQL Azure and one for SQL Server. Here are the steps to get the SQL Azure database, schema, and data created.

  1. Download the MVC Music Store from CodePlex using the .zip file linked from here. The Transact-SQL file can be found here; download it and copy it onto your hard drive.
  2. Log in to the SQL Azure Portal.
  3. Create a SQL Azure project, or use an existing project to create a database named MvcMusicStore.
  4. Using SQL Server Management Studio, open SQLAzure-MvcMusicStore-Create.sql and connect to your newly created database. If you’ve never done this before, there’s a blog post on how to do this: SQL Server Management Studio to SQL Azure. The server name and your administrative login and password can be found on the SQL Azure Portal.
  5. Execute the script from within SQL Server Management Studio against the new database on SQL Azure; this will create the schema and all the data needed. (A command-line alternative appears after this list.)
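
If you prefer a command line to Management Studio, the same script can also be run with sqlcmd; the server name and credentials below are placeholders:

sqlcmd -S tcp:yourserver.database.windows.net -d MvcMusicStore -U YourLogin@yourserver -P YourPassword -i SQLAzure-MvcMusicStore-Create.sql
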
Creating a Cloud Project

The next step is to take the MVC Music Store project, which is designed to run locally on Internet Information Services (IIS), and enable it for Windows Azure. There is a detailed explanation of how to do this with a generic project here. However, let’s walk through the steps for the MVC Music Store:

  1. The first thing that we want to do is create a Windows Azure Cloud Service project using Visual Studio 2010. It is much easier to add an existing MVC project to a Cloud project than to try to add the cloud aspects to the MVC project.

    [screenshot]

    I am naming my cloud service MVCMusicStoreCloudService.

  2. In the next step, I add no roles to the cloud service. I will be adding the MVC Music Store later as a web role.

    [screenshot]

  3. Now that I am finished with the wizard, my Solution Explorer looks like this:

    [screenshot]

  4. Going back to the .zip file that I downloaded earlier, I am going to copy the MvcMusicStore project into the directory structure I just created, as a sibling to the MVCMusicStoreCloudService, like this:

    [screenshot]

  5. Going back to Visual Studio 2010, I right click on the solution in the Solution Explorer and add the MvcMusicStore project to my solution.
  6. To add the MvcMusicStore project as a web role, I right click on the Roles in the MVCMusicStoreCloudService project and choose Add then Web Role Project in solution.

    [screenshot]

  7. Choose the MvcMusicStore, press Add, and your Solution Explorer should now look like this:

    [screenshot]

  8. Now that the MvcMusicStore has been added as a project, you need to right-click on it and choose Unload Project. This will allow us to modify the .csproj file.
  9. Right-click on the unloaded MvcMusicStore and choose Edit MvcMusicStore.csproj.

    [screenshot]

  10. Modify the .csproj file to add a Web Role Type (see the snippet after this list).

    [screenshot]

  11. Save the MvcMusicStore.csproj and reload the project file into the solution.
  12. The next thing to do is mark the System.Web.Mvc assembly in the MvcMusicStore project as Copy Local = True. You can do this by right-clicking on the assembly, choosing Properties, and changing the Copy Local setting.

    [screenshot]

  13. One final thing to do is, in the MVCMusicStore project, add a reference to the Microsoft.WindowsAzure.ServiceRuntime assembly.

    [screenshot]
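
As referenced in step 10, my understanding is that the edit shown in the screenshot amounts to adding a RoleType element to the project file, roughly as below; this is my reading of that step, so verify it against your version of the Windows Azure tools:

<!-- Added to the first PropertyGroup of MvcMusicStore.csproj (step 10).
     The RoleType element marks the project as a web role; confirm the
     exact element against your tools version. -->
<PropertyGroup>
  <!-- existing properties unchanged -->
  <RoleType>Web</RoleType>
</PropertyGroup>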

Changing the Connection String

In order to access SQL Azure we need to change the connection string in the web.config of the MvcMusicStore project. The web.config that is downloaded points to a local database in App_Data. The easiest way to change the connection string is to return to the SQL Azure Portal and copy the connection string from the portal.

Mine looks like this:

Server=tcp:XXXXXXXX.database.windows.net;Database=MvcMusicStore;User ID=XXXXXX@XXXXXX;Password=myPassword;Trusted_Connection=False;Encrypt=True;

The downloaded connection string appears like this:

metadata=res://*/Models.StoreDB.csdl|res://*/Models.StoreDB.ssdl|res://*/Models.StoreDB.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MvcMusicStore.mdf;Integrated Security=True;User Instance=True;MultipleActiveResultSets=True&quot;

We need to change the connection string to point to SQL Azure, rather than SQL Express. Note the replaced provider connection string portion:

metadata=res://*/Models.StoreDB.csdl|res://*/Models.StoreDB.ssdl|res://*/Models.StoreDB.msl;provider=System.Data.SqlClient;provider connection string=&quot;Server=tcp:XXXXXXXX.database.windows.net;Database=MvcMusicStore;User ID=XXXXXX@XXXXXX;Password=myPassword;Trusted_Connection=False;Encrypt=True;&quot;

Save it and you are done with all the changes you need to connect to SQL Azure. You can now delete MvcMusicStore.mdf from the App_Data directory; this will prevent it from being packaged up and deployed to Windows Azure. The fewer files you have, the faster the upload to Windows Azure.

Just a Few More Tweaks

The default membership and roles providers that are used in MVC Music Store aren’t completely available in the Windows Azure Platform. For simplicity of the blog post, we are going to remove this functionality from the MVC Music Store. However, if you are interested in keeping it, you can use the implementations included in the SDK samples that use cloud storage. See this post for more information on how to set those up.

To remove them, I am going to delete these lines from the web.config:

<add name="ApplicationServices" connectionString="data source=.\SQLEXPRESS;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" />

And these lines:

<authentication mode="Forms">
  <forms loginUrl="~/Account/LogOn" timeout="2880" />
</authentication>
<membership>
  <providers>
    <clear />
    <add name="AspNetSqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="ApplicationServices" enablePasswordRetrieval="false" enablePasswordReset="true" requiresQuestionAndAnswer="false" requiresUniqueEmail="false" maxInvalidPasswordAttempts="5" minRequiredPasswordLength="6" minRequiredNonalphanumericCharacters="0" passwordAttemptWindow="10" applicationName="/" />
  </providers>
</membership>
<profile>
  <providers>
    <clear />
    <add name="AspNetSqlProfileProvider" type="System.Web.Profile.SqlProfileProvider" connectionStringName="ApplicationServices" applicationName="/" />
  </providers>
</profile>
<roleManager enabled="true">
  <providers>
    <clear />
    <add connectionStringName="ApplicationServices" applicationName="/"
      name="AspNetSqlRoleProvider" type="System.Web.Security.SqlRoleProvider" />
    <add applicationName="/" name="AspNetWindowsTokenRoleProvider"
      type="System.Web.Security.WindowsTokenRoleProvider" />
  </providers>
</roleManager>

Since we aren’t implementing membership and roles, we also need to remove the AccountController and Account Views from the project files. Finally, the admin link in the Site.Master needs to be removed; it looks like this:

<li><a href="/StoreManager/">Admin</a></li>

You can also remove the ASPNETDB.MDF database from App_Data by deleting it, again slimming the project file.

Ready to Deploy

Those are all the changes you need to deploy the MVC Music Store to Windows Azure and SQL Azure. Test it in the Windows Azure DevFabric and then deploy away to Windows Azure.


Steve Yi continues his roll with a SQL Server to SQL Azure Synchronization post of 10/13/2010:

Another wiki article has been published in the wiki section of TechNet, entitled “SQL Server to SQL Azure Synchronization”. This article shows how to use the 2.1 version of the Sync Framework to write a console application that synchronizes SQL Azure with on-premises SQL Servers. The Sync Framework takes care of all the messy details for you, leaving you just tying the pieces together in very few lines of code. The article builds on Liam’s blog post and video.

Read: SQL Server to SQL Azure Synchronization


See My Windows Azure and SQL Azure Synergism Emphasized at Windows Phone 7 Developer Launch in Mt. View post of 10/12/2010 about the use of OData in Falafel Software’s EventBoard Windows Phone 7 application in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section.


Raghu Ram announced the SQL Connectivity Annual Survey on 10/12/2010:

Last year, in October 2009, we introduced the process of interacting on a regular basis with developers and users in the form of surveys. During the last 12 months, we completed surveys that focused on the broad SQL Connectivity components, including ODBC, ADO.NET, JDBC and PHP. These surveys provide us with the ability to validate some of the requests we have received from developers, users and partners such as you, as well as ideas that we have gathered internally as part of our development process. You have seen our roadmap for SQL Server evolve based on the feedback that we have received.

We view your organization as a key stakeholder in the process we use to identify areas for future investment. The feedback you provide is valuable; each response will be read and treated with the utmost confidence.

The survey can be found at the link below and will be available until October 25, 2010, 5:00 PM PST.

http://www.zoomerang.com/Survey/WEB22BAW29Z8P2


Patrick Wood posted Download the Free SQL Azure™ Cloud Demonstration App for Microsoft Access® on 10/11/2010:

I have just finished a simple free demonstration application, the SQL Azure Cloud App 1.0. I have not had time to test it yet, but people are asking for it, so I am making it available. You can learn more and download the file here.

It is just a simple database to demonstrate how Microsoft Access can get data from SQL Azure. I have not had time to test it on other machines yet, so please let me know how it works for you. It is most important that you read the ReadMeFirst.txt and Release Notes.txt files before using the application. It was made for Access 2010, but you may be able to use Access 2007. If you can use it with Access 2007, please let me know.

SQL Azure provides access to your data anywhere your computer can connect to the internet, 365 days a year, around the clock. With no startup costs for equipment and affordable management plans, SQL Azure provides new possibilities for companies and organizations of any size.

More Free Downloads:

Happy computing,
Patrick (Pat) Wood
Gaining Access


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Alex Williams describes Vittorio Bertocci’s Fabrikam Shipping sample project in a Better Show Than Tell - A Simulated SaaS for Windows Azure post to the ReadWriteCloud of 10/13/2010:

You might as well show by doing. Telling a story is great, but to get users it's critical that people can try the service themselves. Right?

That's the idea behind Fabrikam Shipping SaaS, a lab of sorts used by Microsoft to demonstrate Windows Azure. The service is designed to show how customers can move from traditional, on-premises software to a full subscription model.

Fabrikam Shipping shows how customers can be moved to a SaaS platform. The site teaches people how to create customized application instances and handle multi-tenancy. Customers can learn to authenticate and authorize users and organizations. Other capabilities include how to scale the subscription service, integration with PayPal, and full-isolation and multi-tenant application deployment styles.

Users are first brought to the Fabrikam shipping page to pick the service they wish to use. It shows a few things. You can set features to each level of service. Tracking is available. Federated identity services are integrated with the enterprise package.

[screenshot: Fabrikam Shipping start page]

The enterprise edition provides explanations about its requirements. It offers single sign-on with the subscriber's network, which requires the tenant to have their own identity provider. In particular, Microsoft recommends its own federated identity service: Active Directory Federation Services (AD FS) 2.0.

The information page further explains:

"Fabrikam Shipping spawns exclusive Windows Azure hosted services and storage accounts, and createds dedicated SQL Azure databases: this provides the maximum level of isolation between tenants, while minimizing the changes that Fabrikam had to do on the application in order to move to a subscription model."

The wizard then takes the user through four steps: company data, single sign-on, access policies, and confirmation.

According to Microsoft, Fabrikam Shipping SaaS lets you create a service with all the features a SaaS offering would provide if you built one yourself. This may be no different than using a service that has a sandbox or test environment. What sets this demo apart is its polish and simplicity.

Services such as Fabrikam Shipping help break down the complexities of what it takes to use a cloud service.

Really, there is no better way to understand what a service provides. It's how we learn about its capabilities. Telling a story is one thing. Showing what you can do is something entirely different.


The Windows Azure AppFabric Team reported Windows Azure AppFabric Labs October release – announcement and scheduled maintenance on 10/12/2010:

The next update to the Windows Azure AppFabric LABS environment is scheduled for October 26, 2010 (Tuesday). Users will have NO access to the AppFabric LABS portal and services during the scheduled maintenance down time.

When:

  • START: October 26, 2010, 8 am PST
  • END:  October 28, 2010, 11 am PST

Impact Alert:

The AppFabric LABS environment (Service Bus, Access Control and portal) will be unavailable during this period. Additional impacts are described below.

Action Required:

Existing accounts and Service Namespaces will be available after the services are deployed.

You will have an additional -mgmt entry as follows:

Your ns.sb.AppFabricLabs.com will remain ns.sb.AppFabricLabs.com and an additional entry and corresponding relying party will be created as ns-mgmt.sb.AppFabricLabs.com

However, ACS Identity Providers, Rule Groups, Relying Parties, Service Keys, and Service Identities will NOT be persisted and restored after the maintenance. As the service will not support the bulk upload of rules from the previous version, the user will be responsible for both backing up and restoring any rules they care to reuse after the Windows Azure AppFabric LABS October Release.

Thank you for working in LABS and giving us valuable feedback.  Once the update becomes available, we'll post the details via this blog. 

Stay tuned for the upcoming LABS release!


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Tim Anderson (@timanderson) asserts that “It is all too early to call” in his Which mobile platforms will fail? post of 10/15/2010:

Gartner’s Nick Jones addressed this question in a [“Dead platforms running”] blog post yesterday. He refers to the “rule of three”, which conjectures that no more than three large vendors can succeed in a mature market. If this applies in mobile, then we will see no more than three survivors, after failures and consolidation, from the following group plus any I’ve missed. I have shown platforms that have common ownership and are already slated to be replaced in strikeout format.

  • Apple iOS
  • Google Android
  • Samsung Bada
  • Maemo MeeGo
  • RIM BlackBerry OS BlackBerry Tablet OS (QNX)
  • HP/Palm WebOS
  • Symbian
  • Windows Mobile Windows Phone 7 and successors

Jones says that success requires differentiation, critical mass, and a large handset manufacturer. I am not sure that the last two are really distinct. It is easy to fall into the tautology trap: to be successful a platform needs to be successful. Quite so; but what we are after is the magic ingredient(s) that make it so.

Drawing up a list like this is hard, since some operating systems are more distinct than others. Android, Bada, MeeGo and WebOS are all Linux-based; iOS is also a Unix-like OS. Windows Mobile and Windows Phone 7 are both based on Windows CE.

While it seems obvious that not all the above will prosper, I am not sure that the rule of three applies. I agree that it is unlikely that mobile app vendors will want to support and build 8 or more versions of each app in order to cover the whole market; but this problem does not apply to web apps, and cross-platform frameworks and runtimes can solve the problem to some extent – things like Adobe AIR for mobile, PhoneGap and Appcelerator. Further, there will probably always be mobile devices on which few if any apps are installed, where the user will not care about the OS or application store.

Still, pick your winners. Gartner is betting on iOS and Android, predicting decline for RIM and Symbian, and projecting a small 3.9% share for Microsoft by 2014.

I am sure there will be surprises. The question of mobile OS market share should not be seen in isolation, but as part of a bigger picture in which cloud+device dominates computing. Microsoft has an opportunity here, because in theory it can offer smooth migration to existing Microsoft-platform businesses, taking advantage of their investment – or lock-in – to Active Directory, Exchange, Office and .NET. In the cloud that makes Microsoft BPOS and Azure attractive, while a mobile device with great support for Exchange and SharePoint, for example, is attractive to businesses that already use these platforms. [Emphasis added.]

The cloud will be a big influence at the consumer end too. There is talk of a Facebook phone which could disrupt the market; but I wonder if we will see the existing Facebook and Microsoft partnership strengthen once people realise that Windows Phone 7 has, from what I have seen, the best Facebook integration out there.

So there are two reasons why Gartner may have under-rated Microsoft’s prospects. Equally, you can argue that Microsoft is too late into this market, with Android perfectly positioned to occupy the same position with respect to Apple that worked so well for Microsoft on the desktop.

It is all too early to call. The best advice is to build in the cloud and plan for change when it comes to devices.


Dan Simmons “previews the new Microsoft Windows Phone 7 operating system for smartphones and finds a completely different user experience from the tech giant's past efforts” in this very positive 00:05:10 BBC Click video review:

[video: BBC Click review of Windows Phone 7]

Some excerpts:

  • This operating system has been built from the ground upwards looking at the way people use phones as well as the way they say they use phones.
  • Microsoft hasn’t used two clicks on this phone to get around the place when they could have used one, and the result is something very light and airy as an interface -- easy to use and enjoyable.
  • [Windows Phone 7] is something that will give iOS from Apple and Google’s Android operating systems a run for [their] money.

The post notes that “Samsung, HTC, and LG will all be offering handsets using Windows Phone 7 from the launch in most of Western Europe, including the UK, on 21 October.”


• My (@rogerjenn) Solving Dependency Problems in Drop 5 of p&p’s Windows Phone 7 Developer Guide post of 10/14/2010 explains issues with the many libraries required by the sample TailSpin and TailSpin.PhoneOnly solutions:

patterns & practices posted Drop 5 (release candidate) of the Windows Phone 7 Developer Guide to CodePlex on 10/12/2010. The guide describes a WP7 client app with a Windows Azure backend:

This new guide from patterns & practices will help you design and build applications that target the new Windows Phone 7 platform.
The key themes for these projects are:

  1. A Windows Phone 7 client application
  2. A Windows Azure backend for the system

The TailSpin and TailSpin.PhoneOnly solutions have a substantial number of dependencies. The guide includes a CheckDependencies.exe tool to test for dependency fulfillment and install missing libraries:

[screenshot: dependency checker]

I suggest you read my Problems with DependencyChecker in Drop 5 post of 10/13/2010 (updated 10/14/2010) to minimize issues when running the DependencyChecker.exe app for the first time. It currently recommends:

1. Docs and readme should include advice to users to set the PowerShell execution policy to "unrestricted" (Set-ExecutionPolicy Unrestricted from an elevated PowerShell prompt). If the policy is not set to unrestricted, almost all dependency checks will fail even with the required apps and libraries installed.

2. The current Funq v1.0 doesn't include Funq.Silverlight.dll. You must download the Funq Beta (Funq.0.1.226.1-src.zip) to get Funq.Silverlight.dll. Do you plan to test and upgrade the dependency to v1.0's Funq.dll?

3. Advise users that a local instance of SQL Server 200? [Express] or later is required for the sample database.

4. The readme files for Funq.Silverlight.dll and the two Silverlight3UnitTestingFramework DLLs should indicate that these files must be unblocked. Otherwise initial builds fail with errors.

5. Users should be advised to start the Development Fabric and Development Storage before running projects that use Windows Azure storage.


• Bruce Guptil asserted Microsoft Windows Phone 7 Effort Emphasizes Mobility's Role in Emergent Cloud and Hybrid IT in this 10/14/2010 Research Alert for Saugatuck Technology (site registration required):

What Is Happening? This week, Microsoft released details of its Windows Phone 7 operating system. The IT trade media, including blogs and analyst reviews, have been full of evaluations of the OS, and predictions regarding Microsoft's role in smartphones and consumer-oriented, mobile telecom as a result.

In Saugatuck's view, the announcement was significant for Microsoft and mobile telephony markets, but more as an indicator of emergent and future Cloud IT market direction and strategy, and Microsoft's influence in both.

Bruce continues with the usual Why Is It Happening and Market Impact sections. The report is especially interesting in the light of Microsoft’s intention to not incorporate SQL Server CE in future Windows Phone 7 versions (see below).


• David Ramel reported Yup, No SQL Server CE in Windows Phone 7 in his 10/12/2010 Data Driver column for Redmond Developer News:

Yesterday's Windows Phone 7 launch extravaganza renewed the months-long debate among developers about database options--specifically, whether the new mobile OS should come with persistent local storage such as built-in SQL Server Compact Edition.

Microsoft's answer, of course, is go to the cloud. And if you don't want to do that, you can opt for a local storage alternative such as XML files, isolated storage or third-party embedded solutions such as Perst.

Besides the "The cloud is the answer. What's the question?" mentality in Redmond, many (even Microsoft people) have pointed out that Win Phone 7 targets consumers more than enterprises, so there is less need of any SQL Server.

Still, considering all the integration with other Microsoft technologies such as Xbox, Office and so on, it seems strange there's no stated intention to provide SQL Server in the future, like they're doing with copy-and-paste functionality. Surely Microsoft isn't going to ignore the enterprise market, and developers in the enterprise market have clearly stated their preference. Check out these comments from the debate mentioned above on an MSDN forum:

  • No database in WM7 Phones? This is ridiculous!
  • With this I may stick into iPhone development.
  • Without real database support, a phone is just a portal device.  It will have very little value in itself, and will be easily replaced with the next shiny new portal device with a flashy UI.
  • This is indeed a step backward from WM 6.5. Why not implement Sql CE Compact? You cannot write real business application without database support and cannot presume that we live in an always connected world. Developers need database support ... Are you really considering not to implement it?
  • This doesn't make any sense, surely it can't be a huge undertaking to incorporate SQL Server Compact with its managed wrapper into Windows Phone 7?
  • We just started our project (2 months ago) to move to WM ... THE one deciding factor was SqlCe ... with that gone we will definitely regroup and probably move to Droid or iPhone before we throw any more resources after WM with no stated intentions!
  • I concur with everyone else...developers NEED database support on Windows Phone 7!

It's pretty obvious what mobile developers want. Is Microsoft listening?

What do you think? We'd love to hear your arguments, pro and con. Has anybody seen definitive indications that SQL Server CE is coming to Windows Phone 7? Comment here or drop me a line.

I agree with the comments above. Microsoft is unlikely to be as successful in the consumer phone segment as they could be in the SMB and enterprise markets, assuming the firm adds enterprise features to v.next and later. Third-party embedded database vendors already have smelled the bait.


Ryan Dunn (@dunnry) described Sticky HTTP Session Routing in Windows Azure in this 10/14/2010 post:

I have been meaning to update my blog for some time with the routing solution I presented in Cloud Cover 24. One failed laptop hard drive later and, well, I lost my original post and didn't get around to replacing it until now. Anyhow, enough of my sad story.

One of the many great benefits of using Windows Azure is that networking is taken care of for you.  You get connected to a load balancer for all your input endpoints and we automatically round-robin the requests to all your instances that are declared to be part of the role.  Visually, it is pretty easy to see:

[diagram: the load balancer round-robins requests across the role instances]

However, this introduces a slight issue common to all webfarm scenarios: state doesn't work very well. If your service likes to rely on the fact that it will communicate with exactly the same machine as last time, you are in trouble. In fact, this is why you will always hear an overarching theme of designing to be stateless in the cloud (or any scale-out architecture).

There are inevitably scenarios where sticky sessions (routing when possible to the same instance) will benefit a given architecture.  We talked a lot about this on the show.  In order to produce sticky sessions in Windows Azure, you need a mechanism to route to a given instance.  For the show, I talked about how to do this with a simple socket listener.

[diagram: a router sits between the load balancer and the web roles]

We introduce a router between the Load Balancer and the Web Roles in this scenario.  The router will peek (1) the incoming headers to determine via cookies or other logic where to route the request.  It then forwards the traffic over sockets to the appropriate server (2).  For all return responses, the router will optionally peek that stream as well to determine whether to inject the cookie (3).  Finally, if necessary, the router will inject a cookie that states where the user should be routed to in subsequent requests (4).

I should note that during the show we talked about how this cookie injection could work, but the demo did not use it.  For this post, I went ahead and updated the code to actually perform the cookie injection.  This allows me to have a stateless router as well.  In the demo on the show, I had a router that simply used a dictionary lookup of SessionIDs to Endpoints.  However, we pointed out that this was problematic without externalizing that state somewhere.  A simple solution is to have the client deliver to the router via a cookie where it was last connected.  This way the state is carried on each request and is no longer the router's responsibility.
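
To make the mechanics concrete, here is a minimal sketch of the routing decision only (steps 1 and 4), not the full byte-pump between sockets; the cookie name and format are my assumptions, not necessarily what the Code Gallery sample uses:

// Illustrative sketch of sticky routing: peek the HTTP headers, look for
// a routing cookie, and pick a backend. Cookie format is an assumption.
using System;
using System.Collections.Generic;
using System.Net;
using System.Text;

public class StickyRouter
{
    private readonly IList<IPEndPoint> backends;
    private int nextIndex; // round-robin fallback for first-time clients

    public StickyRouter(IList<IPEndPoint> backends)
    {
        this.backends = backends;
    }

    // headerBytes: what was peeked off the accepted socket, up to the
    // blank line that ends the HTTP headers.
    public IPEndPoint ChooseBackend(byte[] headerBytes, out bool injectCookie)
    {
        string headers = Encoding.ASCII.GetString(headerBytes);

        // Returning client: a cookie such as "routeTo=10.1.2.3:20000"
        // tells us which instance served it last time.
        const string marker = "routeTo=";
        int at = headers.IndexOf(marker, StringComparison.OrdinalIgnoreCase);
        if (at >= 0)
        {
            int start = at + marker.Length;
            int end = headers.IndexOfAny(new[] { ';', '\r' }, start);
            if (end < 0) end = headers.Length;
            string[] parts = headers.Substring(start, end - start).Split(':');
            injectCookie = false;
            return new IPEndPoint(IPAddress.Parse(parts[0]), int.Parse(parts[1]));
        }

        // First visit: round-robin, and ask the caller to add
        // "Set-Cookie: routeTo=<chosen endpoint>" to the response (step 4).
        injectCookie = true;
        return backends[nextIndex++ % backends.Count];
    }
}

The full sample additionally forwards the request bytes to the chosen backend and relays the response, injecting the Set-Cookie header on the way back.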

You can download the code for this from Code Gallery.  Please bear in mind that this was only lightly tested and I make no guarantees about production worthiness.


My Windows Azure and SQL Azure Synergism Emphasized at Windows Phone 7 Developer Launch in Mt. View post of 10/12/2010 described two Windows Phone 7 ISV’s use of Windows Azure compute and SQL Azure database with their new Windows Phone 7 apps:

Today’s post is late because I attended Day 1 of the Windows Phone 7 Developer Launch at the Microsoft Silicon Valley Convention Center in Mt. View, CA.

MobilePay USA (@MobilePayUSA) demonstrated a free application for paying bills with your Windows Phone 7 (or iPhone). Here’s how it works:

  1. When inside a store accepting MobilePay, GPS technology enables the store to be displayed on your phone.
  2. At the checkout stand, tell the cashier you are paying by phone.
  3. On your phone, tap the "PAY STORE" button, enter your pin, the payment amount due and tap the "PAY NOW" button.
  4. Now, in just seconds, a payment confirmation will appear on your phone and the merchant’s terminal.

Randy Smith announced that his “team [was] waiting to announce Windows 7 Phone platform at Microsoft headquarters in Silicon Valley! http://ow.ly/2Sk4L” in a 10/12/2010 tweet. You can watch demos by the same pair that presented in Mt. View. Here’s a link to the video of the TechCrunch Disrupt presentation, with commentary by Sean Parker, James Slavet, Greg Tseng and Victoria Ransom, that won the Attendee's Startup Alley favorite award. IntoMobile also offers a MobilePay USA story and shakeycam video in their MobilePay USA wants your phone to be your wallet [update] of 9/29/2010.


According to MobilePay, a small Windows Azure compute instance can support up to 10,000 transactions/second. Both the iPhone and WP7 versions use Windows Azure for data processing. The demo team said creating the iPhone app took two weeks but they finished the WP7 version in two days.

Lino Tadros and John Waters of Falafel Software described their free EventBoard conferenceware product that’s available for iPhone and Android devices and has been submitted to the new Windows Phone Marketplace.


The pair’s session covered a “range of real world development stories and deep dive into using Visual Studio 2010 and Expression Blend to build Windows Phone Applications with Windows Azure Cloud Services and Push Notifications.”

Falafel developed the initial version of EventBoard for the Silicon Valley Code Camp 2010 to enable attendees to view and manage information about sessions, tracks, rooms, and speakers with the goal of enriching attendees’ conference experience. Here’s the OData metadata for the initial OData source:

[screenshot: OData $metadata document]

And most of the first of the SessionsOverviews:

[screenshot: SessionsOverviews feed]

The sessions were simulcast at MSDN Simulcast:  Windows Phone 7 Developer Launch Event, Oct. 12, but there’s no indication (so far) if a video archive of Day 1 will be provided.


Vijay Rajagopalan announced Windows Azure Tools for Eclipse for PHP: new update, new tutorial in a 10/12/2010 post to the Interoperability @ Microsoft blog:

Things are moving pretty fast!

A few weeks back we announced a series of new and updated Tools/SDKs for PHP developers targeting Windows Azure, which included the Windows Azure Tools for Eclipse/PHP, a comprehensive set of tools that use the Eclipse development environment to create, test and deploy modern cloud applications for the Windows Azure Platform.

Today we’re releasing the October 2010 Community Technology Preview (CTP). This update is based on your feedback and includes many new features, as well as enhancing the workflow of features for version 2, which should be released by November. Here’s a quick rundown of the features we’re introducing:


  • One-click deployment of PHP Applications from Eclipse directly to Windows Azure
  • Support for Windows Azure Diagnostics
  • Integration of Open Source AppFabric SDK for PHP Developers for connecting on-premise PHP applications to cloud applications.
  • Support for multiple Web Roles and Worker Roles for large PHP applications
  • Support of Windows Azure Drive to enable ease of migration of legacy PHP applications.

To learn more, take a look at Brian Swan’s complete “Get Started” tutorial called Using the Windows Azure Tools For Eclipse with PHP, in which Brian shows how to get the most out of Windows Azure.

The Windows Azure Tools for Eclipse/PHP can be downloaded from here: http://www.windowsazure4e.org/download/, and will, of course, work with the auto-upgrade functionality in Eclipse.
As always, do give us your feedback at http://sourceforge.net/tracker/?group_id=266877&atid=1135912

Vijay Rajagopalan, Principal Architect


Matias Woloski promoted FabrikamShipping: SaaS Application built on top of Windows Azure in this 10/11/2010 post:

Back in 2007 I had the pleasure of working with people like Eugenio Pace and Gianpaolo Carraro, exploring the unexplored territories of “software delivered like electricity”. We did LitwareHR together, the first software as a service sample application released by Microsoft… At that time I was looking for the topic of my thesis and Eugenio encouraged me to write about Software as a Service. The term cloud computing would not appear in Wikipedia until November 2007…

Fast forward to October 2010: Microsoft is in a much better position to give guidance about building these kinds of applications, I think because there are better tools (i.e., the whole Azure offering) and more demand for this kind of app model.

In this context, last week Vittorio Bertocci released “FabrikamShipping in SaaS Sauce”. I’m really happy to have been able to participate and help in the development of this solution with the rest of the Southworks team (iaco, pc, lito, nacho, nico and the QA team), and Vittorio, as usual, does a great job explaining what this is about in the intro video.

Since there is a lot to cover and there are things that the community will benefit from, I will write some posts about the app covering different topics from the dev point of view:

  • A reusable pattern for building subscription based apps on the Windows Azure platform
  • General-purpose onboarding UI engine
  • Full isolation and multi-tenant application deployment styles
  • Integration with PayPal Adaptive Payment APIs for one-time and preapproved continuous payments
  • How to run complex processes in Worker Roles, including compensation logic and handling external events
  • Message-activated task execution
  • Handling notifications
  • Automated provisioning
  • Email notifications
  • Dynamic Source Code customization and creation of Windows Azure packages via CSPACK from a worker role
  • Creation of SQL Azure databases from a worker role
  • Self-service authorization settings
  • Using the Access Control Service (ACS) Management API for automating access and relationship handling
  • A fully functional user onboarding system from multiple web identity providers, including account activation via automate mails
  • Multi-tenant MVC application authentication based on Windows Identity Foundation (WIF)
  • Securing OData services with ACS and full WIF integration
  • …and much more

Patrick Butler Monterde posted a link to an explanation of Sending E-Mail from Windows Azure Part 1 of 2 in a 10/11/2010 post:

Many custom-developed applications need to send email, whether it is a part of the registration process, a way of notifying users when important events occur or something else.  If you're a .NET developer you've probably used classes of the System.Web.Mail namespace to accomplish this.  However these classes require access to an SMTP e-mail server to send messages and Windows Azure does not currently provide such a capability.  However all is not lost. 

This two-part series describes some patterns for enabling emailing capabilities for applications deployed to the Windows Azure platform.

  1. Using a custom on-premise Email Forwarder Service: This pattern, described in this post, utilizes an on-premise email server to send emails on behalf of the application running on Windows Azure. This is accomplished by creating a custom service using a distributed asynchronous model that uses Windows Azure storage queues and blobs to deliver emails generated in Windows Azure to an on-premise email server.

Link: http://blogs.msdn.com/b/windowsazure/archive/2010/10/08/adoption-program-insights-sending-emails-from-windows-azure-part-1-of-2.aspx

Part 2 of 2 will include:

  1. Using Email Server's Web Services APIs:  This pattern, which will be described in Part 2, uses the web services API provided by Microsoft Exchange to send email directly from Windows Azure. This pattern can be applied to other messaging products that provide a similar web services interface.
  2. Using a third party SMTP Service: This pattern, described in Steve Marx's blog post EmailTheInternet.com: Sending and Receiving Email in Windows Azure, utilizes a 3rd party Email service like SendGrid or AuthSMTP to relay emails. The solution described in this post goes one step further and also shows how to receive email from a Windows Azure application by listening for SMTP traffic on port 25.

The first article and a link to the sample source code appear below.


The Windows Azure Team posted Adoption Program Insights: Sending Emails from Windows Azure (Part 1 of 2) on 10/8/2010:

imageThe Adoption Program Insights series describes experiences of Microsoft Services consultants involved in the Windows Azure Technical Adoption Program assisting customers deploy solutions on the Windows Azure platform. This post is by Patrick Butler Monterde and Tom Hollander.

Many custom-developed applications need to send email, whether it is a part of the registration process, a way of notifying users when important events occur or something else.  If you're a .NET developer you've probably used classes of the System.Web.Mail namespace to accomplish this.  However these classes require access to an SMTP e-mail server to send messages and Windows Azure does not currently provide such a capability.  However all is not lost.  This two-part series describes some patterns for enabling emailing capabilities for applications deployed to the Windows Azure platform.

  1. Using a custom on-premise Email Forwarder Service: This pattern, described in this post, utilizes an on-premise email server to send emails on behalf of the application running on Windows Azure. This is accomplished by creating a custom service using a distributed asynchronous model that uses Windows Azure storage queues and blobs to deliver emails generated in Windows Azure to an on-premise email server.
  2. Using Email Server's Web Services APIs:  This pattern, that will be described in Part 2, uses the web services API provided by Microsoft Exchange to send email directly from Windows Azure. This pattern can be applied to other messaging products that provide a similar web services interface.
  3. Using a third party SMTP Service: This pattern, described in Steve Marx's blog post EmailTheInternet.com: Sending and Receiving Email in Windows Azure  utilizes a 3rd party Email service like SendGrid or AuthSMTP to relay emails. The solution described in this post goes one step further and also shows how to receive email from a Windows Azure application by listening for SMTP traffic on port 25.

Pattern 1:  Using a Custom On-premise Email Forwarder Service

This pattern utilizes your existing on premise email server to send email on behalf of a Windows Azure application. This is accomplished by creating a custom on-premise Email Forwarder Service that uses Windows Azure Storage queues and blobs to deliver emails generated in Windows Azure to an on-premise email server.  The pattern is divided into two main sections: 

  1. Preparing and sending email work items: This is the implementation of a Windows Azure Web/Worker Role that generates the email. It serializes the email object and creates an email work item in Windows Azure Storage.
  2. Receiving and sending email work items: This is the implementation of an Email Forwarder Service which retrieves the email work items from Windows Azure storage, deserializes the email object and sends it to the email server.

For the distribution of emails from Windows Azure to the on premise email servers, we will define a concept of a "work item". A work item is a logical container composed of:

  1. One Queue Item: The queue item stores the reference (URI) of the blob where the email message is stored. It can also hold up to 8 KB of metadata you may need.
  2. One Blob Item: The blob item contains the serialized email object. Because blobs can be up to 1 TB in size, the email object can hold multiple large attachments.
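Here is that minimal sketch, using the Windows Azure SDK's StorageClient library. The container and queue names are hypothetical, and the email is assumed to have already been serialized to a byte array; the downloadable sample's Entity project handles that serialization step.

```csharp
// Minimal sketch (hypothetical names) of creating an email work item:
// one blob holding the serialized email, one queue message holding the blob URI.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class EmailWorkItems
{
    public static void Enqueue(CloudStorageAccount account, byte[] serializedEmail)
    {
        CloudBlobClient blobClient = account.CreateCloudBlobClient();
        CloudBlobContainer container = blobClient.GetContainerReference("emailworkitems");
        container.CreateIfNotExist();

        // Blob item: the serialized email object (may include large attachments).
        CloudBlob blob = container.GetBlobReference(Guid.NewGuid().ToString("N"));
        blob.UploadByteArray(serializedEmail);

        // Queue item: the blob's URI (the queue message can carry up to 8 KB).
        CloudQueueClient queueClient = account.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference("emailworkitems");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(blob.Uri.ToString()));
    }
}
```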

The following diagram shows the pattern's workflow:

This is what happens when an application hosted in Windows Azure needs to send an email message:

  1. A worker/web role generates an email message. This email message is in the form of a System.Net.Mail.MailMessage instance. This mail object could include any number of attachments.
  2. The email object is serialized and stored in a blob. The blob's URI is then added to a queue item. The combination of the queue item and the blob becomes the email work item. You can make use of both the queue item's and the blob item's metadata to store additional information.
  3. On premise, an Email Forwarder Service constantly monitors the queue for emails. Queue items can be retrieved at a rate of up to 32 items at a time. The Email Forwarder Service first retrieves the queue item, then extracts the blob URI and retrieves the serialized email.
  4. Once the email is deserialized, the Email Forwarder Service uses the on-premise email server information to send it (see the sketch after this list). After delivering the email, it removes the work item from the queue and from blob storage.
  5. The on-premise email server receives the emails. Because it is an on-premise application, the authentication and authorization should be straightforward.
  6. The email server sends the email to the appropriate user.
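A minimal sketch of the forwarder loop described in steps 3 through 6 appears below, again with hypothetical names; the DeserializeEmail helper stands in for the sample's Entity library, and the 10-second sleep is an arbitrary polling interval.

```csharp
// Minimal sketch (hypothetical names) of the on-premise Email Forwarder loop.
using System;
using System.Net.Mail;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class EmailForwarder
{
    public static void Run(CloudStorageAccount account, SmtpClient smtp)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("emailworkitems");
        CloudBlobClient blobClient = account.CreateCloudBlobClient();

        while (true)
        {
            // Up to 32 queue messages can be fetched per call.
            foreach (CloudQueueMessage msg in queue.GetMessages(32))
            {
                CloudBlob blob = blobClient.GetBlobReference(msg.AsString);
                byte[] payload = blob.DownloadByteArray();

                MailMessage email = DeserializeEmail(payload); // hypothetical helper
                smtp.Send(email);

                // Remove the work item only after a successful send.
                blob.Delete();
                queue.DeleteMessage(msg);
            }
            Thread.Sleep(TimeSpan.FromSeconds(10)); // polling causes the delay noted below
        }
    }

    static MailMessage DeserializeEmail(byte[] payload)
    {
        // Placeholder: the sample's Entity library performs (de)serialization,
        // since MailMessage itself is not directly serializable.
        throw new NotImplementedException();
    }
}
```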

To better illustrate the pattern, a sample implementation of the Email Forwarder Service and a Windows Azure application that uses it can be downloaded below as a .zip file.  The code sample contains the following projects:

  • Email Forwarder Service: Implementation of the on-premise Email Forwarder Service. For simple demonstration purposes it is implemented as a Windows Forms application; however, for a real-world deployment you would implement it as a Windows service. To test the sample service, edit the app.config file to include the details of your on-premise SMTP server.
  • Entity: Class Library that contains the email message serialization capabilities and the operations to add and remove email work items from Windows Azure storage. Both the Email Forwarder Service and the Web/Worker roles use this project.
  • Email Generator Web Role: Implementation of a simple Web role that can send email. The role provides a web user interface that lets you enter details of the email to be sent.
  • Email Generator Worker Role: Implementation of a simple worker role that can send email. The role generates and sends email messages every 10 seconds using details found in the role's app.config file.

Architectural Considerations

It is important to understand the architectural implications for any solution. Some of the considerations for the custom Email Forwarder Service include:

Cost: Data storage in blobs and queues and the data flow to the on-premise service add cost to the overall solution. The cost impact will vary with the email volume of the individual solution and must be taken into consideration before implementing this pattern. Compression may be desirable to reduce the size of serialized email objects. To minimize bandwidth costs, the Windows Azure Storage account used for the blobs and queues should be located in the same affinity group as the web/worker roles sending the emails.

Performance: There are two main observations regarding performance:

  1. Serialized email objects that contain large attachments may have a performance impact, since they need to be serialized, moved to storage, and then retrieved and deserialized by the Email Forwarder Service.
  2. Due to the asynchronous nature of this pattern, the Email Forwarder Service checks the Windows Azure storage queues periodically for work items. This introduces a marginal delay in sending the emails to the email server. The polling interval must be chosen carefully, per individual needs.

Management: This service should be monitored. We recommend adding logging and monitoring capabilities to the implementation.

Reliability: Proper retry mechanisms (including exponential back-off) should be implemented in the Email Forwarder Service to handle any connection failures.
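As an illustration of that recommendation, here is a minimal exponential back-off sketch; the attempt count and base delay are arbitrary placeholders, not values from the sample.

```csharp
// Minimal exponential back-off sketch (placeholder attempt count and delays).
using System;
using System.Net.Mail;
using System.Threading;

static void SendWithRetry(SmtpClient smtp, MailMessage email, int maxAttempts)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            smtp.Send(email);
            return;
        }
        catch (SmtpException)
        {
            if (attempt == maxAttempts) throw; // give up after the last attempt
            // Wait 1s, 2s, 4s, ... between attempts.
            Thread.Sleep(TimeSpan.FromSeconds(Math.Pow(2, attempt - 1)));
        }
    }
}
```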

Attachment: AzureEmailForwarder-1.zip

Part 2 will be posted here later this week.


<Return to section navigation list> 

Visual Studio LightSwitch

Beth Massi (@BethMassi) reported LightSwitch Resources & Links from SVCC in a 10/13/2010 post:

imageThanks to those folks who came out to Silicon Valley Code Camp this weekend. Peter Kellner and crew put on another awesome event! I can’t believe we had over 2000 people there. Also thanks to Peter for using LightSwitch to quickly build some administration screens for the SVCC website.

Here are some resources from the two LightSwitch talks I did on Saturday that you should explore.

Get Started
  • Download the Beta and get started with LightSwitch by going to the LightSwitch Developer Center http://msdn.com/lightswitch. All the following resources are easily accessible from the Dev Center and it’s my job to keep cranking content out on that site so check back often!
Learn
Discuss
More Community

Thanks again to all that came out to Silicon Valley code camp this weekend!


Beth Massi (@BethMassi) posted New How Do I Video Released Today on LightSwitch Access Control on 10/12/2010:

image If you would rather watch videos than read articles, we just released a new How Do I video on the LightSwitch Developer Center today on setting up permissions to control user access in LightSwitch.

#11 - How Do I: Set up Security to Control User Access to Parts of a Visual Studio LightSwitch Application?

This video complements the article I wrote in my last post: Implementing Security in a LightSwitch Application.

In this video I walk you through how to set up security to control user access to parts of a LightSwitch application by specifying sets of permissions on Screens and Entities and checking those permissions in code. You will see how to test these permissions during debugging as well as how to set up users and roles in the system once deployed.
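For readers who prefer code to video, here is a minimal sketch of the pattern being demonstrated; the entity and permission names are hypothetical, and the method stubs are the ones LightSwitch generates when you add access-control code to an entity.

```csharp
// Minimal sketch (hypothetical entity and permission names) of checking
// LightSwitch permissions in server-side access-control methods.
public partial class ApplicationDataService
{
    partial void Customers_CanRead(ref bool result)
    {
        // Only users granted the ViewCustomers permission may read Customer entities.
        result = this.Application.User.HasPermission(Permissions.ViewCustomers);
    }

    partial void Customers_CanDelete(ref bool result)
    {
        result = this.Application.User.HasPermission(Permissions.DeleteCustomers);
    }
}
```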

We also updated the Learn page of the Dev Center to better fit all the videos we’ve released as well as feature content from the library on the right-rail.  Be sure to check out all the videos on this page.

The next one will be on how to deploy 2- and 3-tier LightSwitch applications.

Enjoy!


<Return to section navigation list> 

Windows Azure Infrastructure

• TechNet suggested on 10/12/2010 that you Register for Beta Exam 71-583, Pro: Designing and Developing Windows Azure Applications which becomes available on 10/28/2010:

You are invited to take beta exam 71-583, Pro: Designing and Developing Windows Azure Applications.

If you pass the beta exam, the exam credit will be added to your transcript and you will not need to take the exam in its released form. The 71-xxx identifier is used for registering for beta versions of MCP exams; when the exam is released in its final form, the 70-xxx identifier is used for registration.

By participating in beta exams, you have the opportunity to provide the Microsoft Certification program with feedback about exam content, which is integral to development of exams in their released version. We depend on the contributions of experienced IT professionals and developers as we continually improve exam content and maintain the value of Microsoft certifications.

Exam 71-583, Pro: Designing and Developing Windows Azure Applications counts as credit towards the following certification(s).

MCPD: Windows Azure Developer 4


Availability

  • Registration begins: October 11, 2010
  • Beta exam period runs: October 28, 2010 – November 17, 2010

Receiving this invitation does not guarantee you a seat in the beta; we recommend that you register immediately. Beta exams have limited availability and are operated on a first-come, first-served basis. Once all beta slots are filled, no additional seats will be offered.

Testing is held at Prometric testing centers worldwide, although this exam may not be available in all countries (see Regional Restrictions). All testing centers will have the capability to offer this exam in its live version.

Regional Restrictions: India, Pakistan, China


Registration Information

You must register at least 24 hours prior to taking the exam.
Please use the following promotional code when registering for the exam: AZPRO


Test Information and Support


Frequently Asked Questions

For Microsoft Certified Professional (MCP) help and information, you may log in to the MCP Web site at http://www.microsoft.com/learning/mcp/ or contact your Regional Service Center: http://www.microsoft.com/learning/support/worldsites.asp.


Aashish Dhamdhere (@dhamdhere) provided a sneak peek at A Windows #Azure poster that we will start sharing beginning #PDC on 10/14/2010:

image

I’m a collector of vintage posters (World War I, II and the Spanish Civil War) and am underwhelmed with both the slogan and the artwork.


John Taschek attributes The death spiral of the WIMPs of our Time to cloud computing in his 10/14/2010 post that quotes IDC’s Frank Gens:

Back in 1989, give or take a year or so, there were two competing DOS-based graphical operating systems, and neither one of them was Windows. They were WIMPs, which was the common name for what was eventually called a GUI and is now not called anything by anyone except an ordinary, somewhat painful part of life on the Internet.

The first DOS WIMP (windows, icons, menus, pointing device) was GEM, from Digital Research. There were earlier CP/M versions too. Don't go ballistic on me. The second was GEOworks (aka GEOS), a superior solution to me back then, but it had almost no applications, while GEM had the all-important Ventur[a] Publisher.

Maybe they all saw it coming, but only one DOS-based WIMP actually won that war. Soon after it was released, it amassed a massive following, was pre-installed perhaps coercively on every PC, and created a gigantic industry dedicated to bug fixing and virus checking. I think there was even a cottage compression industry that took off because it was so bloated even then.

We all loved it. We were that myopic. It was Windows. The biggest WIMP of all time.

The early WIMPs were battling for computing supremacy. One of them was sued by Apple and lost because it was pretty much a direct copy of the Mac interface. They never recovered, although the company is still kicking around. The other was so demolished by the competition that it became the user interface of choice for AOL, a bizarre but practical decision back then, since it supported booting off the billions of CDs that are now buried half a mile deep in America's offline landfills.

There is a parallel battle happening now. A battle that had been over before it began. It’s hardware and software versus stuff that looks like hardware and software. Meanwhile, cloud computing has already won.

CloudBlog interviewed Frank Gens, the Chief Analyst for IDC, the largest analyst firm on the planet. It is larger than Gartner, larger than Forrester, and Frank runs research. What is Gens focused on? For the last three years, it’s been all cloud.

In this interview, Gens points out that cloud has become so much a part of our vernacular, so fast, that the very things that were once selling points have been superseded by new points that are more pragmatic to CIOs:

  • Time to benefit
  • Speed of deployment
  • The scale of deployments
  • The ability to attract developers (the best developers are developing on the cloud)
  • Mobile

Gens points out that mobile – which he views as an endpoint of the user experience – is becoming a cloud enabler at the same time that cloud is boosting the deployments of mobile. That's at about the seven-minute mark.

Gens articulately deconstructs any notion that the old model is a thriving entity. It is the WIMP of our time. With IT spending somewhat flat, cloud spending according to IDC is skyrocketing. And while spend is just one factor in determining momentum, it is clear. Cloud is the future, but it is also the present.


Simon Ellis posted Choosing your cloud – AppEngine, Azure or EC2… to the CloudTweaks blog on 10/14/2010:

The pointy haired boss at your company will inevitably want you to look at cloud. But what most bosses don’t realize is that it’s not a simple “forklift operation” of moving existing code to a new platform. Choosing the right cloud can be a challenge with factors such as cost, platform selection, language availability, scalability and automation coming into play. Below is a quick primer to help you choose between the leading cloud providers:

Google AppEngine

Google is a great choice for startups and in many ways they are building a cloud that can run the next Facebook or Twitter on their dedicated platform. Their business model highlights this. They offer 500 MB of cloud storage and up to 5 million page views per month for free. They also geared AppEngine towards the hacker’s choice of Java and Python, run code on an abstracted platform layer (real developers don’t want to touch the underlying OS) and provide automated scaling controls.

Historically Google has excelled at targeting their products at individuals and SMBs, and it seems they are heading in a similar direction with AppEngine. If you are a small company with smart developers then you will likely want the low-cost, hacker-oriented and highly-automated (whew!) solution offered by AppEngine. If you’re an Enterprise then you will more than likely be concerned about lock-in, skills availability (.NET is the most readily available dev platform in large enterprises today) and lack of fine grained control.

Microsoft Azure

Microsoft is offering a similar product to Google AppEngine, obviously geared towards the Microsoft community of Visual Studio developers. Web applications that currently run on the .NET/IIS7/SQL Server stack can be easily migrated to the Azure cloud, which consists solely of a cluster of virtualized Windows Server 2008 servers. Developers familiar with the Microsoft development stack should have no problem moving to the cloud. As with AppEngine, the Microsoft solution offers scalability automation and abstraction from the underlying platform.

While Microsoft Azure can be tweaked to run non-Microsoft technologies (such as PHP), it is still very much oriented towards the Microsoft technology stack.  Google AppEngine and Microsoft Azure may not end up as direct competitors — they can likely carve out a market that splits users into “hacker startups and SMBs” for Google AppEngine and “enterprise users” for Microsoft Azure.

Amazon EC2

Amazon is the granddaddy of cloud computing, offering a mature and stable cloud solution that has been active for almost 5 years. The Amazon solution differs greatly from those provided by Google and Microsoft. They sell on-demand virtual slices of a computing infrastructure (Infrastructure as a Service), rather than an underlying development platform (Platform as a Service). That makes Amazon EC2 very similar to running an application within your own datacenter. You can use any operating system and development language that you like, with full console access to configure and manage your box.

The Amazon EC2 cloud is the easiest to get started with and probably the closest to how your company currently runs its environment. There’s very little lock-in, as Amazon is only selling you computing time on a box and some peripheral technologies to aid in scalability and monitoring of the environment.


Simon is the owner of LabSlice, a new startup that allows companies to distribute Virtual Demos using the cloud.


David Pallman posted Cloud Computing Assessments, Part 2: Envisioning Benefits & Risks on 10/13/2010:

image In Part 1 of this series we discussed the ideal rhythm for exploring cloud computing responsibly and the critical role a cloud computing assessment plays. An assessment allows you to get specific about what the cloud can mean to your company. Here in Part 2 we will consider assessment activities for envisioning, in which you take a look at where the cloud could take you.

Envisioning: Finding the Cloud’s Synergies with your Business

Cloud computing has so many value propositions that it’s almost problematic! When you hear general cloud messaging you’re exposed to many potential benefits that include cost reduction, faster time to market, and simplified IT among others. Different organizations care about different parts of the value proposition. In an assessment, we want to find out which benefits have strong synergy with your company—and focus on them.

A good exercise to evaluate where the value proposition resonates is to gather business and technical decision makers, provide some education on benefits, and have a discussion to see where there is interest. Your people may be enthusiastic about some of these benefits but neutral or even negative about others. Here are some of the benefits to consider:

Elasticity

cca_2_01

In the cloud you can change your footprint anytime, quickly and easily. Think of the cloud as a big rubber band.

No Commitment

cca_2_02

In the cloud you have easy entry and easy exit. You can stay in the cloud as long as you wish, but you can walk away any time, with no financial or legal commitments beyond your current month’s bill.

Reduced Cost

cca_2_03

In the cloud you are likely to see reduced costs, in some cases extremely reduced costs. These reduced costs derive from the use of shared resources, the economy of scale in the cloud, and your ability to only use and pay for resources as long as you need them.

Consumption-based Pricing

cca_2_04

In the cloud you only pay for what you use, and you only use what you need.

Extra Capacity

cca_2_05

In the cloud you can expand capacity whenever you need to, even if a surge in demand is sudden and unexpected. You have the comfort of knowing extra capacity is there for you, but you only pay for it when you actually need it.

Faster Time to Market

cca_2_06

In the cloud you can deploy new and updated applications very quickly. On Windows Azure, for example, you can deploy an application in 20 minutes or less.

Self-Service IT

cca_2_07

In the cloud some IT tasks become so simple anyone can do them. Company IT cultures differ on this, but for some companies the ability to let more individuals and departments directly control their own deployments and level of scale is attractive.

SLA

cca_2_08

In the cloud you have a Service Level Agreement that boils down to 3 9's (99.9%), or roughly 8.8 hours of unavailability in a year. For some companies and applications that's an improvement over their current SLA; for others it may be a downgrade.

Simplify IT

cca_2_09

In the cloud certain IT tasks become very simple, just a click or two in a web portal. This includes provisioning, software deployment, and upgrades.

Management

cca_2_10

In the cloud you have automated management working on your behalf. In the case of Windows Azure, patches are applied to your servers automatically; server health is monitored; and your availability and data integrity are protected through managed redundancy.

Convert CapEx to OpEx

cca_2_11

In the cloud you do away with a lot of capital expenditures such as buying server hardware. This is replaced with operating expenditures, your pay-as-you-go monthly bill. For many companies this means a healthier balance sheet, but not all companies and managers see this as positive. Some people have easier access to capital budget than operating budget.

New Capabilities

cca_2_12

In the cloud you have new capabilities. For example, Windows Azure provides new capabilities for business-to-business communication and federated security. These new capabilities can allow you to innovate and realize a competitive edge. The cloud also enables some new business models such as Software-as-a-Service that you may have interest in.

It can be useful to have separate envisioning meetings with business and technical people; you’ll likely find different audiences have different interests and concerns. For example, a CIO could be gung-ho about the cloud while the IT department below them is apprehensive about the cloud.

Risks & Concerns

A benefits discussion must be complemented with a risk discussion. Anything new like cloud computing will naturally lead to concerns, real or imagined. Each concern needs to be mitigated to the stakeholders' satisfaction. Examples of concerns frequently raised are security, performance, availability, disaster recovery, vendor lock-in, in-house capability, and runaway billing.

Security

Security comes up in nearly all cloud discussions. Sometimes there will be a specific risk in mind, but often the concern is just a general expression of "I'm concerned about security in the cloud". The first step toward feeling good about security in the cloud is understanding how good it already is: cloud providers invest a massive amount in security. Next, start getting specific about areas of concern: only then can remedies be designed.

For specific security concerns, the consulting firm performing your assessment should have a knowledge base of commonly-raised concerns—such as data falling into the wrong hands—and standard mitigations for them. In addition, there should be a defined approach for threat modeling risks and planning defenses.

Security in the cloud is best viewed as a partnership between you and the cloud provider. There are certain things the cloud environment will do to protect you, and there are complementary things you can do yourself. An example of something you can do is encrypting all of the data you transmit and store. An assessment should capture your concerns and record the plan for dealing with them.
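To make that example concrete, here is a minimal sketch of the "encrypt what you store" half, using only the .NET base class library; the key and IV handling is a placeholder, since key management is the genuinely hard part and belongs outside the cloud.

```csharp
// Minimal sketch: AES-encrypt a payload before it is written to cloud storage.
// Key/IV handling here is a placeholder; real key management is the hard part.
using System.IO;
using System.Security.Cryptography;

public static byte[] EncryptForCloud(byte[] plaintext, byte[] key, byte[] iv)
{
    using (var aes = new AesManaged { Key = key, IV = iv })
    using (var ms = new MemoryStream())
    {
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
        {
            cs.Write(plaintext, 0, plaintext.Length);
        } // disposing the CryptoStream flushes the final padded block
        return ms.ToArray();
    }
}
```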

Performance & Availability

Since the cloud is a different environment from your enterprise, you can’t assume the dynamics are the same. You may find performance to be stellar, about the same, or disappointing depending on what you’re used to. An assessment should consider the performance requirements of applications and plan to validate them in a proof-of-concept.

Availability is more straightforward to predict because there is a published SLA, but the Internet path between the cloud computing data center and your users is outside the cloud provider’s control. If your users are in an area with poor or unreliable Internet service, availability expectations should be revised accordingly.

Vendor Lock-in

Some organizations have a fear of vendor lock-in: if you move something to the cloud, are you stuck there? There’s an interesting discussion to be had here. On the one hand, it’s perfectly possible to write applications that can run on-premise or in the cloud, preserving your ability to move back and forth. On the other hand, if you take advantage of new, only-in-the-cloud features such as Windows Azure AppFabric, you’ll lose some portability (but it may be worth doing so for the benefits). An assessment is an occasion to weigh these tensions and pick a lane.

Disaster Recovery

Cloud providers have many mechanisms to protect your data, such as redundancy, but much of this is automatic and neither visible to nor controllable by you. You may require a level above this where you can, for example, make time-stamped snapshots of your data and restore them on demand. An assessment should map out your DR requirements, including RTO and RPO (recovery time and recovery point objectives), and determine how you and the cloud platform will collaborate to meet them.

In-house Capability & Process

If you are going to adopt cloud computing your developers and IT department will need the appropriate skills. An assessment should include an analysis of where people skills are today and where they need to be for cloud computing adoption. It’s not only skills that need updating but process as well: the cloud will surely impact your development and deployment processes. Your cloud computing plans should budget for this training and process refinement.

Billing Concerns

Some find the “just like electricity” metering aspect of the cloud unnerving: what if your billing runs out of control? An assessment should identify procedures for measuring billing and monitoring applications proactively, identifying disturbing trends early so they can be investigated before large charges accrue. In the case of Windows Azure, for example, billing can be inspected daily and it’s not necessary to wait till the end of the month to learn how charges are trending.

Trust

By and large, trust is at the root of most cloud computing concerns. Trust is something that needs to be earned, and in cloud computing it can and should be earned in degrees. If you’ve had a good experience with a proof-of-concept in the cloud, that will bolster your confidence to put something in production in the cloud. Your assessment should produce a roadmap that promotes measured, increasing use of the cloud with validation that expectations were met at every step.

Alignment

Since the purpose of a cloud computing assessment is to find the fit for your organization, it’s very important to understand what is already going on in the company. Any cloud computing plans should align with this backdrop.

Business Alignment

Your company’s business plan likely has significant events on the calendar, for example launch of a new product line or service. Annual planning and budgeting are another example. The flow of business initiatives may suggest that certain cloud opportunities make sense sooner or later on the timeline.

IT Alignment

Your IT department is also likely to have events on the calendar that should be taken into consideration. Is a server refresh cycle scheduled? Consider that using the cloud might allow you to avoid or reduce buying that hardware. Are there plans to overhaul the data center? A cloud strategy might allow you to drastically alter the size and cost of your data center, using the cloud for overflow at peak times.

Envisioning Provides Business Context

Much of a cloud computing assessment will involve identifying and analyzing specific opportunities (applications), but this initial envisioning activity is important. It gives you the business context for your technical decisions. In envisioning you capture both the areas of traction and disconnect between cloud computing and your organization. This information will help you in forming your cloud computing strategy and it will color the suitability scoring of potential cloud opportunities. Timing of cloud initiatives should take business and IT initiatives into account.

In subsequent installments we’ll look at more activities that are performed in a cloud computing assessment. If you’d like to see how we do it at Neudesic, visit http://cloud-assessment.com/.


Buck Woody asked What Role for IT Infrastructure in a Cloud Environment? in a 10/14/2010 post:

image Most companies I deal with are seriously considering a cloud strategy. When you evaluate the benefits of a platform as a service, it makes sense to do that. But some of the folks I talk to are those who buy, build, configure and administer systems like file servers, mail servers, and database servers. They often ask - "If business people buy the capacity with a PO, and if architects design the system, and if developers write the code, what do I do?" It's a fair question.

First, I'm not aware of a cloud vendor, including Microsoft, that expects a medium to large company to replace its entire infrastructure with the cloud. The latency for large file transfers, printing, government privacy requirements, all kinds of factors prevent that. So the first answer to that question is that your job isn't going away. For the foreseeable future, there will be a physical IT infrastructure in place that will still need care and feeding.

The second answer is that as you educate yourself on each new technology, including cloud, you should help your company understand where it does and does not fit. It's not a one-size-fits-all. There are places where you can switch an application to the cloud, places where you need to keep the application local, and still others where the cloud can be your “burst” strategy or part of your High-Availability and Disaster Recovery plan.

You can also have the cloud do things you don’t want to do. In some cases you have “shadow IT” departments that need or want to create their own small applications, which are not part of the core IT process at your organization. You could help them set up an account, and use simple programs like LightSwitch to write their own applications which are paid for by that department, and maintained by Microsoft, leaving you free to fight the more important battles for your larger applications.

On that note, some shops consider the cloud to be simply hosting a Virtual Machine somewhere else, but this isn’t really “cloud”, in my mind. It’s just “somebody else’s systems”, and just moves the problem around. In fact, that strategy can indeed lead to erosion in an IT Infrastructure. But a platform as a service play (see my post of “Which Cloud are you talking about”) allows you to focus on balancing your use of the cloud as a strategic resource.

The point is that we need to start thinking more strategically about our role in the organization. We can’t just provide a single level of support for architectures that are invisible to the business. I invite you to learn what options you have, and then help your organization apply them intelligently. Don’t dismiss the cloud because it doesn’t have feature X or Y, or has latency, or isn’t HIPAA or S[a]rbox compliant. Look for places where it fits, and go from there.

On that topic, I’m reading a really good book right now called ”Applied Architecture Patterns on the Microsoft Platform”. It’s exactly what the title says – a rundown of the various Microsoft platforms like SQL Server, Exchange and Windows – and Azure – and covers where each of these can be used. I really like it – you should definitely check it out. Even working at Microsoft I’m not always sure where to use each platform – this book helps me understand where to do that. I’d love to see versions covering open-source platforms, Oracle, and others with this same information. It’s invaluable for the architect.


David Linthicum listed 3 Things that are Killing SOA in the Cloud in a 10/12/2010 post to ebizQ’s Where SOA Meets Cloud blog:

image You have to take the good with the bad, and while SOA is doing great things in the world of cloud computing, there are some downers that I'm noticing as well. Let's discuss my top 3.

First, SOA vendors that don't understand their own role in the cloud are killing SOA in the cloud. Time and time again I'm running up against older SOA players that have no idea how their technology works and plays in the world of cloud computing. Thus, they undersell or oversell their products with very little understanding of context. Understand what you do, and the role that your product plays in the clouds. You'll sell more, trust me.

Second, cloud computing consultants that don't understand SOA, but say they do, are killing SOA in the cloud. Many cloud computing consultants who came from the infrastructure side of things have figured out that it's good to have SOA in client presentations and proposals, but have no clear understanding of how SOA actually functions when defining and building a cloud computing solution. SOA is not just about turning anything and everything into a service and calling it a SOA. It's about defining an architecture that becomes many solutions that align with the business, not just a single instance of a solution.

Finally, the hype-eating monsters are killing SOA in the cloud. This is the guy who says "cloud computing" so many times in meetings that if it were a drinking game, the entire conference room would be blitzed 10 minutes into the discussion. This guy speeds to the cloud avoiding any sort of pesky architectural planning, and thus SOA is tossed out the window quickly, although he's selling it as a SOA project internally. The end result is the development of more stovepipes, less agility, and a major cloud computing hangover once this guy has had his cloud computing way with most of the major IT systems.


Buck Woody questioned Which Cloud Are You Talking About? on 10/11/2010:

image In the early days of computing, IT delivered computing using huge mainframes, accepting input from users, crunching the numbers and returning results to a very thin client - just a green-screen terminal. PC's came into the mix, allowing users a local experience with graphical interfaces and even other larger PC's as "servers".

And now comes the "cloud". It seems everything with a network card, or anything that uses a network card, is "cloud" or "cloud enabled". But what is the "real" definition of the "cloud"? In effect, it's just outsourced or virtualized IT.

There are really three levels of this definition. The first is "Infrastructure as a Service" or IaaS. This means you just rent a server or storage on someone else's network. While this is interesting, you still have to figure out scale, management and so on. You're only relieved of the hardware costs and maintenance. It's interesting to IT because it buffers them against hardware costs and upgrades, at least directly. In fact, you're probably doing this yourself already, by using Hyper-V or VMWare in your datacenters.

The next level is "Software as a Service", or SaaS. You can think of things like Hotmail or Expedia in this way. You log on to an application over the web, use it, and log off. Nothing to install, no upgrades, no management, no maintenance. This is also an interesting "cloud" offering, although the limitations are the same as the mainframe days - you can't easily make changes to the software, and in some cases, you just get what you get. Great for consumers, not as interesting to IT. In fact, for us IT folks, this is what we need to provide, not necessarily something we use.

Finally there is "Platform as a Service" or PaaS. In this model you have a platform to write on, which relieves you of the hardware issues of IaaS; you can write code that delivers software as in SaaS; and you have a way to create, test and deliver both. You're insulated against hardware and even platform upgrades, and you're given a way to scale the application without caring where it really lives. This is the road that Microsoft takes with Windows and SQL Azure.

Of course it isn't all rainbows and candy - there are things to think about, like security, uptime and more. There's a formal process for determining the applications that fit - which I'll write about later.


Aashish Dhamdhere (@Dhamdhere) posted Announcing the 'Powered by Windows Azure' logo program to the Windows Azure log on 9/21/2010 (missed when posted):

image An important part of my charter on the Windows Azure marketing team is to help customers, partners and developers be successful on the platform. Over the last few months, I've heard from many partners and customers that they would like to identify and showcase the fact that an application is running on Windows Azure. To help meet this need, I'm excited to announce a new 'Powered by Windows Azure' logo program that developers can use to promote their Windows Azure applications.

image

The program is designed for applications that have been verified as running on Windows Azure and meet the requirements outlined here. If you're interested in learning more, send an email to PBWA@microsoft.com.

Aashish is Senior Product Manager, Windows Azure

<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

David Linthicum asserted “Public cloud vendors are trying to make the private cloud look bad; here's where you need to ignore their claims” in adding Exposed: 3 more bogus myths about the private cloud to InfoWorld’s Cloud Computing blog on 10/14/2010:

image Earlier this week, I described three bogus myths about the private cloud perpetuated by private-cloud vendors. But public-cloud vendors are doing their best to create FUD about their competitors, spreading bogus myths about private clouds -- which is causing confusion for IT.

So that you don't get fooled and waste countless dollars and hours pursuing a silly public cloud strategy rather than a sensible private-cloud strategy, here are the top three myths about the private cloud promulgated by the public cloud vendors. (For the record, both public clouds and private clouds are necessary. I'm in favor of using each where it makes sense.)

Public cloud myth 1: Private clouds are not clouds at all
I get it: Private clouds are not consumed from an outside provider using massive amounts of IT resources that you don't own. Thus, it's difficult to say that private clouds provide the same elasticity and value as public clouds. Right? Wrong. There is no reason you can't build an IT architecture within the enterprise that takes advantage of the same design patterns and, thus, the value attributes of public clouds -- including elasticity.

Public cloud myth 2: Private clouds are just virtualized servers
The ability to be a cloud means you're doing much more than simple virtualization. This includes supporting a true multitenant architecture, use-based accounting, deep management, and auto-provisioning. These days, private clouds provide most of the features and functions of public clouds. Indeed, all public clouds but Amazon Web Services provide a private-cloud instance of their software.

Public cloud myth 3: Public clouds are always cheaper than private clouds
Although it's true that the ability to use applications, development platforms, and infrastructure from a pay-as-you-go subscription service seems cheaper than buying, installing, and supporting your own hardware and software, that's not always the case. You have to consider each offering, looking at all aspects of the cost over at least a three-year horizon. In many instances, public clouds are less expensive, but it's never "always cheaper." You would be surprised by how many times it's not.


David Linthicum claimed “With the huge marketing spend in the private cloud computing space, the misinformation from vendors is deafening” in a preface to his Exposed: 3 bogus myths about the private cloud post to InfoWorld’s Cloud Computing blog:

The amount of hogwash out there in the cloud computing space is beginning to backfire. I've been talking to many people who just got off the phone with their large enterprise software provider, now selling private clouds, which told them half a dozen things that just are not true.

image I'm not pushing back on private clouds, mind you, but I am concerned about the FUD approach to selling it. Private clouds are fit for certain problem domains, and public clouds are a fit for others. However, you need to approach the decision with correct information.

So that you don't get fooled and waste countless dollars and hours pursuing a silly private cloud strategy egged on by self-interested vendors, here are the top three vendor-created myths that need to be dispelled.

Private cloud myth 1: It's illegal to use a public cloud in your industry, so you need a private cloud
Although there are compliance issues around how personal medical information and financial information are stored and secured, these rules do not extend to all enterprise data. Moreover, in many cases the public cloud providers understand how to adhere to these rules and regulations -- for example, by allowing the cloud user to specify where data will reside, as well as the required audit and security mechanisms.

Private cloud myth 2: Hybrid clouds will provide you with a way to move to a private cloud, and then easily to the public cloud
You won't be able to follow that path without a lot of work, money, and risk. The fact of the matter is that when you localize applications and data in public and/or private clouds, it's no easy task to move them between private and public clouds. If you follow this vendor strategy, you'll have to port twice if a public cloud is your final destination -- and there are no common or standard ways of doing this yet.

Private cloud myth 3: Public clouds are not reliable
Although there have been some well-publicized outages, typically regional, public clouds are by and large more reliable than most enterprise IT infrastructure. The cloud providers focus more on redundancy and do a better job of monitoring their service, so they have better uptime than private systems.


<Return to section navigation list> 

Cloud Security and Governance

Microsoft’s Trustworthy Computing Group made their Privacy in the Cloud Computing Era white paper of 11/2009 available in PDF and XPS format from the Download Center on 10/11/2010.

From the TOC:

    • Cloud Computing and Privacy
    • The Evolution of Cloud Computing
    • Privacy Questions in Cloud Computing
    • Consumer-Oriented Cloud Computing Today
    • Cloud Computing for Governments and Businesses
    • Legal and Regulatory Challenges
    • Conclusion

From the introduction (Cloud Computing and Privacy):

A new generation of technology is transforming the world of computing. Internet-based data storage and services—also known as “cloud computing”—are rapidly emerging to complement the traditional model of software running and data being stored on desktop PCs and servers. In simple terms, cloud computing is a way to enhance computing experiences by enabling users to access software applications and data that are stored at off-site datacenters rather than on the user’s own device or PC or at an organization’s on-site datacenter.

E-mail, instant messaging, business software, and Web content management are among the many applications that may be offered via a cloud environment. Many of these applications have been offered remotely over the Internet for a number of years, which means that cloud computing might not feel markedly different from the current Web for most users. (Technical readers will rightly cite a number of distinct attributes—including scalability, flexibility, and resource pooling—as key differentiators of the cloud. These types of technical attributes will not be addressed here because they are outside the scope of this document.)

Cloud computing does raise a number of important policy questions concerning how people, organizations, and governments handle information and interactions in this environment. However, with regard to most data privacy questions as well as the perspective of typical users, cloud computing reflects the evolution of the Internet computing experiences we have long enjoyed, rather than a revolution.

Microsoft recognizes that privacy protections are essential to building the customer trust needed for cloud computing and the Internet to reach their full potential. Customers also expect their data and applications stored in the cloud to remain private and secure. While the challenges of providing security and privacy are evolving along with the cloud, the underlying principles haven’t changed—and Microsoft remains committed to those principles. We work to build secure systems and datacenters that help us protect individuals’ privacy, and we adhere to clear, responsible privacy policies in our business practices—from software development through service delivery, operation, and support.

Enterprise customers typically approach cloud computing with a predefined data management strategy, and they use that strategy as a foundation to assess whether a given service offering meets their specific needs. As a result, privacy protections might vary in different business contexts. This is not new or unique to the cloud environment. Ultimately, we expect the technology industry, consumers, and governments to agree on baseline privacy practices that span industries and countries. As that consensus view evolves, Microsoft will remain an active voice in the discussion—drawing on our extensive experience and our commitment to helping create a safer, more secure Internet that enables free expression and commerce.

Thanks to Brent Stineman (@BrentCodeMonkey) for the heads-up.


D’Arcy Lussier posits Government Security Law in Cloud Computing is a Consideration, not … in a 10/14/2010 post:

Jennifer Kavur recently published an article on IT World Canada’s website stating “Don’t use the Patriot Act as an excuse.” Let me sum up the article:

You shouldn’t worry about it because Canada has their own anti-terrorism act that is close to the Patriot Act, so really Canadians are under similar scrutiny.

Data isn’t guaranteed to travel only in Canada and could cross over wires to the US, putting it under US laws (i.e. Patriot Act). So it could go there anyway.

Canadian and US authorities share information all the time, so we should be ok with storing our information in the US too.

BULLSHIT!

All of those are weak arguments against the underlying issue: we are a separate, sovereign country whose personal data should not be accessible by agencies of other governments. If I register with a US-based site, that's my choice to make that data available. But if a government agency or a company holds information of a highly personal nature (bank or medical records, for instance), it had better be stored within Canada, or I should be notified and given the opportunity to decline the storage of my data.

We live in a connected world of websites, Twitter, and Facebook where we can forget that there are real borders and real laws that are still in effect. Cloud computing makes it easy to forget this. "It's all stored in the cloud, in the nether, in this airy-fairy place that data lives, spread out over geographies." Unfortunately, there's a reality of jurisdiction that can't be denied.

I do agree with the article's point that Canada needs a cloud strategy to enable us to reap the technical benefits of cloud computing. But concern about personal privacy is not an excuse… in fact, it should be a mandatory consideration.


No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

Elizabeth White reported “Dave Cutler to discuss Microsoft Online Services Productivity Suite and CRM Online” as a lead-in to her Microsoft Cloud Technologies at Cloud Expo Silicon Valley post of 10/14/2010:

image

How do you bring the integrated Microsoft infrastructure stack story and optimize for collaboration and business intelligence?

In his session at the 7th International Cloud Expo, Dave Cutler, Practice Director at Slalom Consulting, will describe innovations with the Business Productivity Online Suite and CRM Online through examples of customer scenarios.

Dave Cutler [pictured at right] is Practice Director at Slalom Consulting. He is responsible for managing Slalom's national practice for solutions focused on enterprise messaging, unified communications, and collaboration with a particular focus on cloud-based services for these offerings.


Vittorio Bertocci (@vibronet) announced that he will be Speaking @… PayPal Innovate X 2010 on 10/27/2010 at San Francisco’s Moscone West:

One of the parts of FabrikamShipping SaaS I’m most proud of is the integration with PayPal’s Adaptive Payments API. We’ve been collaborating directly with PayPal for a few weeks, and specifically with Praveen Alavilli and Shah Rasesh, on finding the best way to take advantage of the payment APIs in the FabrikamShipping scenario. Working with those guys has been great, and I can definitely say (no matter how cliché that sounds) that we barely scratched the surface of what can be done with Windows Azure and the adaptive payment APIs.

In fact, I am very honored to say that I’ve been invited to present FabrikamShipping SaaS and its use of the PayPal platform at PayPal Innovate X 2010!

Innovate2010 Registration banner

We have a breakout titled “Building Apps that Scale Using Microsoft and PayPal's Developer Platform and Tools”, which we will deliver on Wednesday in the 3:00-3:45 slot. I am going to share the session with James Senior, who will present on WebMatrix and PayPal integration; that will pretty much cover the two ends of the gamut of our developer platform offering.

Session Title: Building Apps that Scale Using Microsoft and PayPal's Developer Platform and Tools

Microsoft and PayPal are working together to make it easier for developers to get to market faster with apps that scale, using the latest web and cloud platforms and tools. In this demo-tastic session we'll first take a tour of WebMatrix which is perfect for developers who are cranking out websites from scratch or building off open source apps instead. No matter which approach you choose, you'll learn how you can get an app with tight PayPal integration created, customized and deployed to a low-cost host in a matter of minutes so you can bill your client and move onto the next project. We'll also demonstrate how you can easily add payment capabilities to multi-tenant solutions hosted in Windows Azure. Thanks to the flexibility of the PayPal Adaptive Payment API, .NET developers can easily integrate features ranging from one-time payments to full-blown billing relationships with recurring, pre-approved payments.

It’s great to have the chance to participate in Innovate X; my main regret is that, with the dates of the conference so close to PDC, I won’t be able to hang around the whole time as I would have liked. However, I should be at our booth Tuesday afternoon and Wednesday morning. Come over and I’ll give you the grand tour of FabrikamShipping SaaS! In fact, I’ll tell you what: Praveen and Shah gave me a special discount code for you, LETSINNOVATE, which you can use to get $100 off when registering for your conference pass. Sounds great, doesn’t it :)

image

Our slot squarely overlaps with the identity session from my good friends Eve Maler and Ashish Jain; I’ll have to try grabbing them the night before and see if I can get a private preview…

See you there!


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr reported AWS Elastic Load Balancing: Support for SSL Termination on 10/14/2010:

image You can now create a highly scalable, load-balanced web site using multiple Amazon EC2 instances, and you can easily arrange for the entire HTTPS encryption and decryption process (generally known as SSL termination) to be handled by an Elastic Load Balancer. Your users can benefit from encrypted communication with very little operational overhead or administrative complexity.

image Until now, you had to handle the termination process within each EC2 instance. This added to the load on the instance and also required you to install an X.509 certificate on each instance. With this new release, you can simply upload the certificates to your AWS account and we'll take care of getting them distributed to the load balancers.

  1. Create or purchase a certificate.
  2. Upload the certificate to your AWS account using the AWS Management Console or the iam-servercertupload command, then retrieve the ID of the uploaded certificate.
  3. Create a new load balancer which includes an HTTPS listener, and supply the certificate ID from the previous step.
  4. Configure a health check and associate EC2 instances with the load balancer as usual.
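Here is a rough sketch of steps 3 and 4 using the AWS SDK for .NET; treat the certificate ARN, availability zone, instance ID, and exact property names as placeholders and approximations rather than a definitive implementation.

```csharp
// Rough sketch (placeholder IDs and approximate property names) of creating
// an SSL-terminating load balancer with the AWS SDK for .NET.
using System.Collections.Generic;
using Amazon.ElasticLoadBalancing;
using Amazon.ElasticLoadBalancing.Model;

public static class ElbSslExample
{
    public static void CreateSslLoadBalancer(string accessKey, string secretKey)
    {
        var elb = new AmazonElasticLoadBalancingClient(accessKey, secretKey);

        // Step 3: an HTTPS listener on port 443; the load balancer terminates SSL
        // and forwards decrypted traffic to port 80 on the instances.
        elb.CreateLoadBalancer(new CreateLoadBalancerRequest
        {
            LoadBalancerName = "my-ssl-lb",
            AvailabilityZones = new List<string> { "us-east-1a" },
            Listeners = new List<Listener>
            {
                new Listener
                {
                    Protocol = "HTTPS",
                    LoadBalancerPort = 443,
                    InstancePort = 80,
                    SSLCertificateId = "arn:aws:iam::123456789012:server-certificate/my-cert"
                }
            }
        });

        // Step 4: register an EC2 instance; the health check would be configured
        // similarly via ConfigureHealthCheck.
        elb.RegisterInstancesWithLoadBalancer(new RegisterInstancesWithLoadBalancerRequest
        {
            LoadBalancerName = "my-ssl-lb",
            Instances = new List<Instance> { new Instance { InstanceId = "i-12345678" } }
        });
    }
}
```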

As you may know, you can use the AWS Management Console to create and manage your Elastic Load Balancers. As you can see from the second and third screen shots below, you can now select one of your existing SSL certificates or upload a new one when you create a new Elastic Load Balancer:

A lot of our users have asked for this feature and we are happy to be able to meet their needs.


<Return to section navigation list> 

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, SADB, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Cloud Computing, Visual Studio LightSwitch, LightSwitch, Amazon Web Services, AWS, Eric Brewer, CAP Theorem
